[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 461 - Failure!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/461/
Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:33141/solr/myAlias

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:33141/solr/myAlias
at 
__randomizedtesting.SeedInfo.seed([FD45FE61BE71B99A:CCD5728D1A2746B3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:660)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:460)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testSliceRouting(CategoryRoutedAliasUpdateProcessorTest.java:367)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Comment Edited] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825692#comment-16825692
 ] 

Noble Paul edited comment on SOLR-13320 at 4/25/19 3:55 AM:


That just sounds too complex. We will have a tough time explaining it to people


was (Author: noble.paul):
That just sounds very cool complex. We will have a tough time explaining it to 
people

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them if 
> an option  {{ignoreDuplicates=true}} is specified



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825692#comment-16825692
 ] 

Noble Paul commented on SOLR-13320:
---

That just sounds very cool complex. We will have a tough time explaining it to 
people







[jira] [Commented] (SOLR-13418) ZkStateReader.PropsWatcher synchronizes on a string value & doesn't track zk version

2019-04-24 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825689#comment-16825689
 ] 

Gus Heck commented on SOLR-13418:
-

[~tomasflobbe], there's a patch in SOLR-13420 that demonstrates the use case 
and improves the synchronization somewhat. I haven't been able to convince 
myself that there is any way in which collection properties are likely to be 
updated frequently, and mostly 
org.apache.solr.common.cloud.ZkStateReader#createClusterStateWatchersAndUpdate()
 is called when new objects like ZK or overseer update threads are created (or 
if the overseer hits an error and calls 
forciblyRefreshAllClusterStateSlow()...).

If you think we need to do something like keeping a map of lock objects keyed 
by collection to further reduce synchronization, or you have another cleaner 
idea, we can re-open this.
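The "map of lock objects keyed by collection" idea mentioned above can be sketched in a few lines. This is an illustrative pattern, not Solr's actual code; the class and method names are made up for the example:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of per-collection lock objects: each collection name
// maps to its own dedicated monitor, so threads working on different
// collections never contend, and nothing synchronizes on the String itself.
public class CollectionLocks {
    private static final ConcurrentHashMap<String, Object> LOCKS = new ConcurrentHashMap<>();

    // computeIfAbsent guarantees exactly one lock object per key,
    // even when called concurrently from multiple threads.
    public static Object lockFor(String collection) {
        return LOCKS.computeIfAbsent(collection, k -> new Object());
    }

    public static void main(String[] args) {
        Object a1 = lockFor("collectionA");
        Object a2 = lockFor("collectionA");
        Object b  = lockFor("collectionB");
        assert a1 == a2 : "same collection must share one lock";
        assert a1 != b  : "different collections get distinct locks";
        System.out.println("ok");
    }
}
```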

> ZkStateReader.PropsWatcher synchronizes on a string value & doesn't track zk 
> version
> 
>
> Key: SOLR-13418
> URL: https://issues.apache.org/jira/browse/SOLR-13418
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.0, master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> While contemplating adding better caching to collection properties to avoid 
> repeated calls to ZK from code that wishes to consult collection properties, 
> I noticed that the existing PropsWatcher class is synchronizing on a string 
> value for the name of a collection. Synchronizing on strings is bad practice, 
> given that you never know if the string might have been interned, and 
> therefore someone else might also be synchronizing on the same object without 
> your knowledge, creating contention or even deadlocks. Also this code doesn't 
> seem to be doing anything to check ZK version information, so it seems 
> possible that out of order processing by threads could wind up caching out of 
> date data. 
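The hazard described above is easy to demonstrate. Because equal String literals are interned to a single object by the JVM, two pieces of code that look completely unrelated can end up sharing one monitor (the names below are illustrative only):

```java
// Minimal demonstration of why synchronizing on a String is risky:
// equal compile-time String constants are interned to one object, so
// synchronized(lockA) and synchronized(lockB) take the SAME monitor
// even though the two fields live in "unrelated" code.
public class StringLockHazard {
    static final String lockA = "myCollection";        // e.g. one component
    static final String lockB = "my" + "Collection";   // constant-folded elsewhere

    public static boolean sameObject() {
        return lockA == lockB; // true: both refer to the interned literal
    }

    public static void main(String[] args) {
        assert sameObject() : "interned literals share identity";
        System.out.println("shared monitor: " + sameObject());
    }
}
```

A dedicated `private final Object lock = new Object()` avoids the problem entirely, since its identity cannot be shared by accident.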






[jira] [Commented] (SOLR-13420) Allow Routed Aliases to use Collection Properties instead of core properties

2019-04-24 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825686#comment-16825686
 ] 

Gus Heck commented on SOLR-13420:
-

Patch moves the functionality for identifying the routed alias associated with 
a request to CollectionProperties. Caching and more conservative, double-checked 
synchronization were added to ZkStateReader for Collection properties 
such that if any code either registers a collection properties watcher for a 
collection or observes the collection properties via getProperties, the 
properties for that collection will then be cached and updated via watches 
thereafter. This avoids spamming zookeeper now that each call to 
RoutedAliasUpdateProcessor.wrap() checks CollectionProperties.
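The "fetch once under double-checked locking, then serve from cache" pattern described above can be sketched as follows. This is an illustrative model, not the actual ZkStateReader code; `fetchFromZk` stands in for the real ZooKeeper read plus watch registration:

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of double-checked caching of per-collection properties: the first
// reader pays for the remote fetch (and would register a watch); later
// readers hit the cache without taking the lock at all.
public class PropsCache {
    private final ConcurrentHashMap<String, Map<String, String>> cache = new ConcurrentHashMap<>();
    private final Object lock = new Object();
    int fetches = 0; // exposed only so the demo can count remote reads

    // Stands in for the real ZooKeeper read + watch registration.
    private Map<String, String> fetchFromZk(String collection) {
        fetches++;
        return Collections.singletonMap("routedAliasName", "myAlias");
    }

    public Map<String, String> getProperties(String collection) {
        Map<String, String> props = cache.get(collection);   // first check, lock-free
        if (props == null) {
            synchronized (lock) {
                props = cache.get(collection);               // second check, under lock
                if (props == null) {
                    props = fetchFromZk(collection);         // fetch + (real code) set watch
                    cache.put(collection, props);
                }
            }
        }
        return props;
    }

    public static void main(String[] args) {
        PropsCache c = new PropsCache();
        c.getProperties("col1");
        c.getProperties("col1");
        assert c.fetches == 1 : "second read must come from the cache";
        System.out.println("fetches=" + c.fetches);
    }
}
```

In the real implementation the watch, rather than the reader, would refresh the cached entry when the properties change.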

> Allow Routed Aliases to use Collection Properties instead of core properties
> 
>
> Key: SOLR-13420
> URL: https://issues.apache.org/jira/browse/SOLR-13420
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13420.patch
>
>
> The current routed alias code is relying on a core property named 
> routedAliasName to detect when the Routed Alias wrapper URP should be applied 
> to Distributed Update Request Processor. 
> {code:java}
> #Written by CorePropertiesLocator
> #Sun Mar 03 06:21:14 UTC 2019
> routedAliasName=testalias21
> numShards=2
> collection.configName=_default
> ... etc...
> {code}
> Core properties are not changeable after the core is created, and they are 
> written to the file system for every core. To support a unit test for 
> SOLR-13419 I need to create some legacy formatted collection names, and 
> arrange them into a TRA, but this is impossible due to the fact that I can't 
> change the core property from the test. There's a TODO dating back to the 
> original TRA implementation in the routed alias code to switch to collection 
> properties instead, so this ticket will address that TODO to support the test 
> required for SOLR-13419.
> Back compatibility with legacy core based TRA's and CRA's will of course be 
> maintained. I also expect that this will facilitate some more nimble handling 
> of routed aliases with future auto-scaling capabilities such as possibly 
> detaching and archiving collections to cheaper, slower machines rather than 
> deleting them. (presently such a collection would still attempt to use the 
> routed alias if it received an update even if it were no longer in the list 
> of collections for the alias)






[jira] [Updated] (SOLR-13420) Allow Routed Aliases to use Collection Properties instead of core properties

2019-04-24 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-13420:

Attachment: SOLR-13420.patch







[jira] [Commented] (SOLR-9769) solr stop on a service already stopped should return exit code 0

2019-04-24 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825680#comment-16825680
 ] 

Shawn Heisey commented on SOLR-9769:


I see your point, and offer the following:

If Solr is already stopped, and you try to stop it again, that is actually an 
error condition.  The script cannot complete the requested action ... so one 
way of interpreting that is as an error ... though some would say that since the 
service is in fact stopped, it's successful.  I think it should be reported as 
an error.

Perhaps what should happen here is the exit code should be 1 if Solr is already 
stopped, and 2 or higher if there's something that could be classified as more 
of a "real" problem.

As a workaround until we decide exactly what to do about this error report, you 
should investigate whether the "do something" part of your script can be done 
while Solr is running, and use "/etc/init.d/solr restart" instead after it's 
done.  Because most unix and unix-like platforms allow you to delete files that 
are currently held open, there's a good chance that whatever you want to do can 
be done while Solr is running.  I cannot guarantee this, of course.  If we find 
that the restart action doesn't work when the service is already stopped, I 
think that qualifies as a bug.


> solr stop on a service already stopped should return exit code 0
> 
>
> Key: SOLR-9769
> URL: https://issues.apache.org/jira/browse/SOLR-9769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.3
>Reporter: Jiří Pejchal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> According to the LSB specification
> https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic.html#INISCRPTACT
>  running stop on a service already stopped or not running should be 
> considered successful and return code should be 0 (zero).
> Solr currently returns exit code 1:
> {code}
> $ /etc/init.d/solr stop; echo $?
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 4277 to stop gracefully.
> 0
> $ /etc/init.d/solr stop; echo $?
> No process found for Solr node running on port 8983
> 1
> {code}
> {code:title="bin/solr"}
> if [ "$SOLR_PID" != "" ]; then
> stop_solr "$SOLR_SERVER_DIR" "$SOLR_PORT" "$STOP_KEY" "$SOLR_PID"
>   else
> if [ "$SCRIPT_CMD" == "stop" ]; then
>   echo -e "No process found for Solr node running on port $SOLR_PORT"
>   exit 1
> fi
>   fi
> {code}






[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825667#comment-16825667
 ] 

Gus Heck commented on SOLR-13320:
-

It would be an error if you sent version=-1 as suggested by Shalin. So the 
haltBatchOnError=false plus the existing functionality with version=-1 covers 
your case, right?







[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825658#comment-16825658
 ] 

Noble Paul commented on SOLR-13320:
---

well, it's not an error in the strictest sense. 

* Basically what we want is to ignore a document if it already exists, and
* the response should include the ids of the discarded docs
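The semantics being asked for can be modeled in a few lines. This is plain Java illustrating the desired behavior, not Solr's update-processor code; the method and names are invented for the sketch:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of "ignore duplicates but report them": documents whose id already
// exists are skipped, and their ids are collected for the response.
public class IgnoreDuplicates {
    public static List<String> addAll(Set<String> existingIds, List<String> batch) {
        List<String> discarded = new ArrayList<>();
        for (String id : batch) {
            if (!existingIds.add(id)) {   // Set.add() returns false when id is already present
                discarded.add(id);        // dropped, but remembered for the response
            }
        }
        return discarded;
    }

    public static void main(String[] args) {
        Set<String> index = new HashSet<>(List.of("doc1"));
        List<String> dropped = addAll(index, List.of("doc1", "doc2", "doc2"));
        assert dropped.equals(List.of("doc1", "doc2")) : dropped.toString();
        System.out.println("discarded=" + dropped);
    }
}
```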







[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-24 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825652#comment-16825652
 ] 

Joel Bernstein commented on SOLR-13414:
---

Yep, make sure the old jar is moved out of the class path entirely.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)

[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825641#comment-16825641
 ] 

Lucene/Solr QA commented on SOLR-13081:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 42s{color} 
| {color:red} core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
14s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.SystemCollectionCompatTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966954/SOLR-13081.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 48dc020 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/385/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/385/testReport/ |
| modules | C: solr/core solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/385/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as cloud collection is configured with route.field property, In-Place 
> Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only id and version fields and doesn't 
> verify configured route.field.
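The eligibility check the issue describes can be modeled abstractly. This is an illustrative sketch of the logic, not the actual AtomicUpdateDocumentMerger source; the method and parameter names are invented:

```java
import java.util.Set;

// Model of the bug: when deciding whether an update can be applied in place,
// id and _version_ are skipped (they must accompany every update), but the
// configured route.field, which must also be sent for routing, was not,
// so its presence wrongly disqualified the update.
public class InPlaceCheck {
    public static boolean isInPlaceEligible(Set<String> docFields,
                                            Set<String> dvUpdatableFields,
                                            String routeField) {
        for (String f : docFields) {
            if (f.equals("id") || f.equals("_version_")) continue;    // always skipped
            if (routeField != null && f.equals(routeField)) continue; // the fix: skip route.field too
            if (!dvUpdatableFields.contains(f)) return false;         // other stored fields disqualify
        }
        return true;
    }

    public static void main(String[] args) {
        // With the route field skipped, the update stays in-place eligible.
        assert isInPlaceEligible(Set.of("id", "_version_", "region", "price"),
                                 Set.of("price"), "region");
        // Without the fix (routeField unknown), "region" disqualifies it.
        assert !isInPlaceEligible(Set.of("id", "_version_", "region", "price"),
                                  Set.of("price"), null);
        System.out.println("ok");
    }
}
```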






[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825630#comment-16825630
 ] 

Gus Heck commented on SOLR-13320:
-

Maybe this could be broadened a bit? An option to continue with a batch even if 
one document has an error. A return response enumerating failed docs and their 
associated messages would also make sense. That would be a generally useful 
feature I think. Call it haltBatchOnError... defaults to true.







[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.2) - Build # 7908 - Failure!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7908/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 15439 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp\junit4-J1-20190424_234143_91615299961085881586109.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps\java_pid14188.hprof
 ...
   [junit4] Heap dump file created [507457451 bytes in 8.323 secs]
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp\junit4-J1-20190424_234143_916719330351008639850.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] at 
java.base/sun.nio.cs.StreamEncoder.write(StreamEncoder.java:133)
   [junit4] at 
java.base/java.io.OutputStreamWriter.write(OutputStreamWriter.java:229)
   [junit4] at java.base/java.io.Writer.write(Writer.java:249)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.string(JsonWriter.java:547)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.writeDeferredName(JsonWriter.java:398)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.gson.stream.JsonWriter.value(JsonWriter.java:413)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.AbstractEvent.writeBinaryProperty(AbstractEvent.java:36)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.AppendStdErrEvent.serialize(AppendStdErrEvent.java:30)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:129)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer$2.run(Serializer.java:124)
   [junit4] at java.base/java.security.AccessController.doPrivileged(Native 
Method)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:124)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:98)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:498)
   [junit4] at 
java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
   [junit4] at 
java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
   [junit4] at java.base/java.io.PrintStream.flush(PrintStream.java:417)
   [junit4] at 
org.apache.lucene.util.TestRuleLimitSysouts$DelegateStream.flush(TestRuleLimitSysouts.java:189)
   [junit4] at java.base/java.io.PrintStream.write(PrintStream.java:561)
   [junit4] at 
org.apache.logging.log4j.core.util.CloseShieldOutputStream.write(CloseShieldOutputStream.java:53)
   [junit4] at 
org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:261)
   [junit4] at 
org.apache.logging.log4j.core.appender.OutputStreamManager.flushBuffer(OutputStreamManager.java:293)
   [junit4] at 
org.apache.logging.log4j.core.appender.OutputStreamManager.flush(OutputStreamManager.java:302)
   [junit4] at 
org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:199)
   [junit4] at 
org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)
   [junit4] at 
org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)
   [junit4] at 
org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
   [junit4] at 
org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
   [junit4] at 
org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
   [junit4] at 
org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
   [junit4] at 
org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:464)
   [junit4] at 
org.apache.logging.log4j.core.async.AsyncLoggerConfig.callAppenders(AsyncLoggerConfig.java:127)
   [junit4] <<< JVM J1: EOF 

[...truncated 2 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
C:\Users\jenkins\tools\java\64bit\jdk-11.0.2\bin\java.exe 
-XX:-UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps
 -ea -esa --illegal-access=deny -Dtests.prefix=tests 

[jira] [Commented] (SOLR-9769) solr stop on a service already stopped should return exit code 0

2019-04-24 Thread Jiri Pejchal (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825609#comment-16825609
 ] 

Jiri Pejchal commented on SOLR-9769:


Let's say you have some kind of restart script that exits on the first error. 
The script stops Solr, does something, then starts Solr again. When Solr is not 
already running, the stop step fails with a non-zero exit code, so the script 
aborts and Solr is never started again.
{code}
#!/bin/bash
set -e
/etc/init.d/solr stop
# do something
/etc/init.d/solr start
{code}

Calling {{stop}} on an already-stopped Solr service should return a success 
status of zero, instead of the error status of one that is currently returned. 
Calling {{restart}} already returns zero even when Solr is not running.
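The LSB-compliant behavior requested here can be sketched as follows. This is a hypothetical `stop_solr_lsb` helper, not the actual `bin/solr` code (the real change would go in the `else` branch quoted in the issue description below): when no process is found, the script still reports it but exits with status 0, so `set -e` restart scripts survive.

```shell
#!/bin/bash
# Hypothetical sketch of an LSB-compliant stop branch: per LSB 4.0,
# stopping a service that is already stopped must be treated as success,
# i.e. exit status 0.
stop_solr_lsb() {
  local solr_pid="$1" solr_port="$2"
  if [ -n "$solr_pid" ]; then
    echo "Sending stop command to Solr running on port $solr_port ..."
    # ... real stop logic would go here ...
    return 0
  fi
  # Already stopped: report it, but still succeed.
  echo "No process found for Solr node running on port $solr_port"
  return 0
}

stop_solr_lsb "" 8983   # prints the notice and returns 0
```

With this behavior, the restart script above proceeds past the stop step whether or not Solr was running.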

> solr stop on a service already stopped should return exit code 0
> 
>
> Key: SOLR-9769
> URL: https://issues.apache.org/jira/browse/SOLR-9769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.3
>Reporter: Jiří Pejchal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> According to the LSB specification
> https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic.html#INISCRPTACT
>  running stop on a service already stopped or not running should be 
> considered successful and return code should be 0 (zero).
> Solr currently returns exit code 1:
> {code}
> $ /etc/init.d/solr stop; echo $?
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 4277 to stop gracefully.
> 0
> $ /etc/init.d/solr stop; echo $?
> No process found for Solr node running on port 8983
> 1
> {code}
> {code:title="bin/solr"}
> if [ "$SOLR_PID" != "" ]; then
> stop_solr "$SOLR_SERVER_DIR" "$SOLR_PORT" "$STOP_KEY" "$SOLR_PID"
>   else
> if [ "$SCRIPT_CMD" == "stop" ]; then
>   echo -e "No process found for Solr node running on port $SOLR_PORT"
>   exit 1
> fi
>   fi
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 23978 - Still Unstable!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23978/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:343)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.TestCloudSearcherWarming.tearDown(TestCloudSearcherWarming.java:78)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-24 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825580#comment-16825580
 ] 

Shawn Heisey commented on SOLR-13414:
-

bq. and old core was renamed

I don't think you can just rename the old jar.  It needs to be completely 
removed from WEB-INF/lib or Jetty/Java will probably still load it and use it.  
Moving it rather than deleting it would be a good idea, so it can be restored 
later.

Hopefully this note is actually helpful and not a wild goose chase.
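The swap described above can be sketched in shell. This is a minimal illustration, not Solr tooling; the `move_old_jar` helper and the directory layout are hypothetical, and the point is only that the old jar must leave WEB-INF/lib entirely (moved to a backup location, not renamed in place).

```shell
#!/bin/bash
# Hypothetical sketch: move any old solr-core jar out of Jetty's WEB-INF/lib
# into a backup directory so it cannot be loaded, while keeping it restorable.
move_old_jar() {
  local lib_dir="$1" backup_dir="$2"
  mkdir -p "$backup_dir"
  for jar in "$lib_dir"/solr-core-*.jar; do
    [ -e "$jar" ] || continue   # glob matched nothing; skip
    mv "$jar" "$backup_dir"/
  done
}

# Example against a throwaway directory layout:
demo=$(mktemp -d)
mkdir -p "$demo/WEB-INF/lib"
touch "$demo/WEB-INF/lib/solr-core-7.3.0.jar"
move_old_jar "$demo/WEB-INF/lib" "$demo/backup"
ls "$demo/backup"    # the old jar now lives only in the backup dir
rm -rf "$demo"
```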

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-24 Thread David Barnett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825563#comment-16825563
 ] 

David Barnett commented on SOLR-13414:
--

Hi Joel, any other thoughts on this?

We really would like to understand the issue.

Thanks very much





[JENKINS] Lucene-Solr-Tests-8.x - Build # 165 - Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/165/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi

Error Message:
expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([9903D1E166615DE9:CE2A2A54BD93BFF2]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi(AutoScalingHandlerTest.java:822)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13264 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13081:

Attachment: (was: SOLR-13081.patch)

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> In-Place Updates are no longer applied. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.






[jira] [Updated] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13081:

Attachment: SOLR-13081.patch




[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1317 - Failure

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1317/

No tests ran.

Build Log:
[...truncated 23468 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2526 links (2067 relative) to 3355 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5113 - Unstable!

2019-04-24 Thread Erick Erickson
Thought I’d finally gotten this one. I have the log and will dig sometime.

> On Apr 24, 2019, at 11:55 AM, Policeman Jenkins Server  
> wrote:
> 
> DocValuesNotIndexedTest





Re: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+12) - Build # 23977 - Unstable!

2019-04-24 Thread Erick Erickson
Oh lovely. And just when I said “We haven’t seen one of these for a while.” 
Siiigh.


> On Apr 24, 2019, at 12:54 PM, Policeman Jenkins Server  
> wrote:
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23977/
> Java: 64bit/jdk-13-ea+12 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
> 
> 1 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.index.TestSlowCompositeReaderWrapper
> 
> Error Message:
> The test or suite printed 558082 bytes to stdout and stderr, even though the 
> limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
> 
> Stack Trace:
> java.lang.AssertionError: The test or suite printed 558082 bytes to stdout 
> and stderr, even though the limit was set to 8192 bytes. Increase the limit 
> with @Limit, ignore it completely with @SuppressSysoutChecks or run with 
> -Dtests.verbose=true
>   at __randomizedtesting.SeedInfo.seed([B4E975FCA4530CEE]:0)
>   at 
> org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:282)
>   at 
> com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>   at java.base/java.lang.Thread.run(Thread.java:835)
> 
> 
> 
> 
> Build Log:
> [...truncated 2005 lines...]
>   [junit4] JVM J1: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190424_183124_3188054815914452805536.syserr
>   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J1: EOF 
> 
> [...truncated 3 lines...]
>   [junit4] JVM J2: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190424_183124_31616747199810433147944.syserr
>   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J2: EOF 
> 
> [...truncated 5 lines...]
>   [junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190424_183124_3164879239611108324.syserr
>   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J0: EOF 
> 
> [...truncated 301 lines...]
>   [junit4] JVM J0: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190424_184251_5495209865308560311108.syserr
>   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J0: EOF 
> 
> [...truncated 3 lines...]
>   [junit4] JVM J1: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190424_184251_54911207766674926898785.syserr
>   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J1: EOF 
> 
> [...truncated 3 lines...]
>   [junit4] JVM J2: stderr was not empty, see: 
> /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190424_184251_54911857548735274141503.syserr
>   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
>   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
>   [junit4] <<< JVM J2: EOF 
> 
> [...truncated 1075 lines...]
>   [junit4] JVM J0: stderr was not 

[jira] [Updated] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13081:

Attachment: (was: SOLR-13081.patch)

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as cloud collection is configured with route.field property, In-Place 
> Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only id and version fields and doesn't 
> verify configured route.field.
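
A minimal sketch of the idea behind the fix (class and method names here are my own illustration, not the actual AtomicUpdateDocumentMerger patch): the set of fields an in-place/atomic update must always carry over has to include the configured route.field, not just id and _version_, so the partial document still routes to the shard holding the original.

```java
// Hypothetical sketch, not Solr's real merger: compute the fields a
// partial update document must retain for correct shard routing.
import java.util.HashSet;
import java.util.Set;

public class RouteFieldAwareMerge {
    // Fields always copied into the partial document; without the
    // route field, the update may be sent to the wrong shard.
    public static Set<String> requiredFields(String routeField) {
        Set<String> required = new HashSet<>();
        required.add("id");
        required.add("_version_");
        if (routeField != null) { // the gist of the fix: keep route.field too
            required.add(routeField);
        }
        return required;
    }

    public static void main(String[] args) {
        System.out.println(requiredFields("shard_key"));
    }
}
```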



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13081:

Attachment: SOLR-13081.patch

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as cloud collection is configured with route.field property, In-Place 
> Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only id and version fields and doesn't 
> verify configured route.field.






[jira] [Updated] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-13081:

Attachment: SOLR-13081.patch

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as cloud collection is configured with route.field property, In-Place 
> Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only id and version fields and doesn't 
> verify configured route.field.






[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-04-24 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825521#comment-16825521
 ] 

Jan Høydahl commented on SOLR-12584:


Cool. Have you tested? Perhaps a first step could be to document this in the 
ref-guide for 8.x.

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the solr metrics api for monitoring the state of a Solr cluster. Currently 
> this can not be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled. The exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics api endpoints and would be a useful addition for Solr users 
> running a secure Solr instance.   
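
A hedged sketch of what the missing piece amounts to, assuming the exporter attaches a standard Basic Authorization header to its requests against the metrics API (the class name and credentials below are placeholders, not the exporter's actual code; only the JDK is used):

```java
// Illustration only: build the "Authorization: Basic ..." header value
// the exporter would need to send on each metrics request.
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static String value(String user, String password) {
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // e.g. conn.setRequestProperty("Authorization", value("solr", "pw"));
        System.out.println(value("solr", "SolrRocks"));
    }
}
```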






[GitHub] [lucene-solr] janhoy commented on issue #653: SOLR-13425: Wrong color in horizontal definition list

2019-04-24 Thread GitBox
janhoy commented on issue #653: SOLR-13425: Wrong color in horizontal 
definition list
URL: https://github.com/apache/lucene-solr/pull/653#issuecomment-486430486
 
 
   I'll merge to branch_8x, but since refGuide for 8.0 is not yet released, can 
I merge to branch_8_0 as well?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (SOLR-13425) Wrong color in horizontal definition list

2019-04-24 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13425:
---
Fix Version/s: 8.0

> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 8.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See 
> [https://lucene.apache.org/solr/guide/7_7/monitoring-solr-with-prometheus-and-grafana.html#configuration-tags-and-elements]
> The {{[horizontal]}} definition list ends up in a html table with keys in a 
> {{foo}} tag. The text here is white on white 
> background, since it inherits from the {{table th code}} rule in 
> {{customstyles.css}}
> A possible fix is to set black bold in ref-guide.css, see PR.
> [~ctargett]






[jira] [Updated] (SOLR-13425) Wrong color in horizontal definition list

2019-04-24 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13425:
---
Fix Version/s: 8.1

> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See 
> [https://lucene.apache.org/solr/guide/7_7/monitoring-solr-with-prometheus-and-grafana.html#configuration-tags-and-elements]
> The {{[horizontal]}} definition list ends up in a html table with keys in a 
> {{foo}} tag. The text here is white on white 
> background, since it inherits from the {{table th code}} rule in 
> {{customstyles.css}}
> A possible fix is to set black bold in ref-guide.css, see PR.
> [~ctargett]






[jira] [Commented] (SOLR-13333) terms.ttf=true doesn't work when distrib=false and terms.list is not specified

2019-04-24 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825512#comment-16825512
 ] 

Mikhail Khludnev commented on SOLR-13333:
-

+1

> terms.ttf=true doesn't work when distrib=false and terms.list is not specified
> --
>
> Key: SOLR-13333
> URL: https://issues.apache.org/jira/browse/SOLR-13333
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13333.patch
>
>
> In SOLR-10349, support to return total term frequency was added. This works 
> fine in distributed mode or when terms.list is specified but doesn't work 
> with non-distributed mode.






[jira] [Assigned] (SOLR-13425) Wrong color in horizontal definition list

2019-04-24 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-13425:
--

Assignee: Jan Høydahl

> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See 
> [https://lucene.apache.org/solr/guide/7_7/monitoring-solr-with-prometheus-and-grafana.html#configuration-tags-and-elements]
> The {{[horizontal]}} definition list ends up in a html table with keys in a 
> {{foo}} tag. The text here is white on white 
> background, since it inherits from the {{table th code}} rule in 
> {{customstyles.css}}
> A possible fix is to set black bold in ref-guide.css, see PR.
> [~ctargett]






[jira] [Assigned] (SOLR-13333) terms.ttf=true doesn't work when distrib=false and terms.list is not specified

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-13333:
---

Assignee: Mikhail Khludnev

> terms.ttf=true doesn't work when distrib=false and terms.list is not specified
> --
>
> Key: SOLR-13333
> URL: https://issues.apache.org/jira/browse/SOLR-13333
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13333.patch
>
>
> In SOLR-10349, support to return total term frequency was added. This works 
> fine in distributed mode or when terms.list is specified but doesn't work 
> with non-distributed mode.






[GitHub] [lucene-solr] janhoy commented on issue #653: SOLR-13425: Wrong color in horizontal definition list

2019-04-24 Thread GitBox
janhoy commented on issue #653: SOLR-13425: Wrong color in horizontal 
definition list
URL: https://github.com/apache/lucene-solr/pull/653#issuecomment-486428848
 
 
   Thanks. I updated the PR with your solution which is better. Tried to build 
the HTML and it looks good. Will merge soon.





[JENKINS] Lucene-Solr-Tests-master - Build # 3307 - Failure

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3307/

All tests passed

Build Log:
[...truncated 5108 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/backward-codecs/test/temp/junit4-J1-20190424_210926_8336844328500337657446.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f33a7e0726c, pid=25443, tid=25492
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (11.0.1+13) (build 
11.0.1+13-LTS)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (11.0.1+13-LTS, mixed 
mode, tiered, compressed oops, g1 gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xd4026c][thread 29016 also had an error][thread 
29015 also had an error]
   [junit4] 
   [junit4]   PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone 
.part.39]+0x47c
   [junit4] #
   [junit4] # Core dump will be written. Default location: Core dumps may be 
processed with "/usr/share/apport/apport %p %s %c %d %P" (or dumping to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/backward-codecs/test/J1/core.25443)
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/backward-codecs/test/J1/hs_err_pid25443.log
   [junit4] 
   [junit4] [timeout occurred during error reporting in step ""] after 30 s.
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk-11.0.1/bin/java -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/heapdumps
 -ea -esa --illegal-access=deny -Dtests.prefix=tests 
-Dtests.seed=B5D607E49C50248A -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/backward-codecs/test/J1
 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/backward-codecs/test/temp
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dfile.encoding=UTF-8 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Commented] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2019-04-24 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825507#comment-16825507
 ] 

Mikhail Khludnev commented on SOLR-7530:


ok. Let's introduce an optional parameter {{terms.format=9.0}}, which will 
always be set at >= 9.0.

> Wrong JSON response using Terms Component with distrib=true
> ---
>
> Key: SOLR-7530
> URL: https://issues.apache.org/jira/browse/SOLR-7530
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers, SearchComponents - other, SolrCloud
>Affects Versions: 4.9
>Reporter: Raúl Grande
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: master (9.0)
>
>
> When using TermsComponent in SolrCloud there are differences in the JSON 
> response if parameter distrib is true or false. If distrib=true JSON is not 
> well formed (please note at the [ ] marks)
> JSON Response when distrib=false. Correct response:
> {"responseHeader":{ 
>   "status":0, 
>   "QTime":3
> }, 
> "terms":{ 
> "FileType":
> [ 
>   "EMAIL",20060, 
>   "PDF",7051, 
>   "IMAGE",5108, 
>   "OFFICE",4912, 
>   "TXT",4405, 
>   "OFFICE_EXCEL",4122, 
>   "OFFICE_WORD",2468
>   ]
> } } 
> JSON Response when distrib=true. Incorrect response:
> { 
> "responseHeader":{
>   "status":0, 
>   "QTime":94
> }, 
> "terms":{ 
> "FileType":{ 
>   "EMAIL":31923, 
>   "PDF":11545, 
>   "IMAGE":9807, 
>   "OFFICE_EXCEL":8195, 
>   "OFFICE":5147, 
>   "OFFICE_WORD":4820, 
>   "TIFF":1156, 
>   "XML":851, 
>   "HTML":821, 
>   "RTF":303
>   } 
> } } 
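
The two response shapes quoted above can be illustrated with a small sketch (my own illustration, not the TermsComponent code; the boolean switch stands in for whatever a future {{terms.format}} parameter would control): non-distributed mode flattens each term and its count into one array, while the distributed path returns a map.

```java
// Illustration only: the two shapes the terms section can take.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TermsShape {
    public static Object render(Map<String, Integer> counts, boolean flatArray) {
        if (!flatArray) {
            return counts; // distrib=true today: {"EMAIL":31923, "PDF":11545, ...}
        }
        List<Object> flat = new ArrayList<>(); // distrib=false: ["EMAIL",20060, ...]
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            flat.add(e.getKey());
            flat.add(e.getValue());
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("EMAIL", 20060);
        counts.put("PDF", 7051);
        System.out.println(render(counts, true));
        System.out.println(render(counts, false));
    }
}
```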






[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-24 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825485#comment-16825485
 ] 

Noble Paul commented on SOLR-13320:
---

[~shalinmangar] I guess we are good to go, right?

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them if 
> an option  {{ignoreDuplicates=true}} is specified
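
The proposed semantics can be sketched in a few lines (a hedged illustration of the feature request, not the final Solr API; the class and method names are my own): with {{ignoreDuplicates=true}}, an incoming document whose id already exists is dropped rather than overwriting the stored document.

```java
// Illustration only: drop-on-duplicate indexing semantics.
import java.util.HashMap;
import java.util.Map;

public class IgnoreDuplicatesIndex {
    private final Map<String, String> docs = new HashMap<>();

    /** Returns true if the doc was indexed, false if dropped as a duplicate. */
    public boolean add(String id, String body, boolean ignoreDuplicates) {
        if (ignoreDuplicates && docs.containsKey(id)) {
            return false; // duplicate: keep the existing doc untouched
        }
        docs.put(id, body);
        return true;
    }

    public static void main(String[] args) {
        IgnoreDuplicatesIndex idx = new IgnoreDuplicatesIndex();
        System.out.println(idx.add("1", "first", true));  // indexed
        System.out.println(idx.add("1", "second", true)); // dropped
    }
}
```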






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+12) - Build # 23977 - Unstable!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23977/
Java: 64bit/jdk-13-ea+12 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.index.TestSlowCompositeReaderWrapper

Error Message:
The test or suite printed 558082 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 558082 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([B4E975FCA4530CEE]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:282)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 2005 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190424_183124_3188054815914452805536.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190424_183124_31616747199810433147944.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 5 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190424_183124_3164879239611108324.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 301 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190424_184251_5495209865308560311108.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190424_184251_54911207766674926898785.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190424_184251_54911857548735274141503.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1075 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190424_184452_67810164798974096382111.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5113 - Unstable!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5113/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSL: 6 rsp: 
{responseHeader={zkConnected=true,status=0,QTime=23,params={q=*:*,group.sort=id 
asc,_stateVer_=dv_coll:4,group.limit=100,rows=100,wt=javabin,version=2,group.field=intGSL,group=true}},grouped={intGSL={matches=59,groups=[{groupValue=731670119,doclist={numFound=6,start=0,maxScore=1.0,docs=[SolrDocument{id=0,
 intGSL=731670119, longGSL=8651878754908271199, doubleGSL=10006.32436319339, 
floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 ACDT 215147305, 
stringGSL=base_string_295309__00010009, boolGSL=true, 
sortableGSL=base_string_653116__00010001, _version_=1631719282112987136, 
_root_=0}, SolrDocument{id=1, intGSL=731670119, longGSL=8651878754908271199, 
doubleGSL=10006.32436319339, floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 
ACDT 215147305, stringGSL=base_string_295309__00010009, boolGSL=false, 
sortableGSL=base_string_653116__00010001, _version_=1631719282112987136, 
_root_=1}, SolrDocument{id=2, intGSL=731670119, longGSL=8651878754908271199, 
doubleGSL=10006.32436319339, floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 
ACDT 215147305, stringGSL=base_string_295309__00010009, boolGSL=true, 
sortableGSL=base_string_653116__00010001, _version_=1631719282114035712, 
_root_=2}, SolrDocument{id=3, intGSL=731670119, longGSL=8651878754908271199, 
doubleGSL=10006.32436319339, floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 
ACDT 215147305, stringGSL=base_string_295309__00010009, boolGSL=false, 
sortableGSL=base_string_653116__00010001, _version_=1631719282119278592, 
_root_=3}, SolrDocument{id=4, intGSL=731670119, longGSL=8651878754908271199, 
doubleGSL=10006.32436319339, floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 
ACDT 215147305, stringGSL=base_string_295309__00010009, boolGSL=true, 
sortableGSL=base_string_653116__00010001, _version_=1631719282122424320, 
_root_=4}, SolrDocument{id=5, intGSL=731670119, longGSL=8651878754908271199, 
doubleGSL=10006.32436319339, floatGSL=10001.458, dateGSL=Sun Oct 18 11:00:41 
ACDT 215147305, stringGSL=base_string_295309__00010009, boolGSL=false, 
sortableGSL=base_string_653116__00010001, _version_=1631719282125570048, 
_root_=5}]}}, 
{groupValue=null,doclist={numFound=18,start=0,maxScore=1.0,docs=[SolrDocument{id=1,
 _version_=1631719282112987136, _root_=1}, SolrDocument{id=10005, 
_version_=1631719282122424321, _root_=10005}, SolrDocument{id=10010, 
_version_=1631719281418829828, _root_=10010}, SolrDocument{id=10015, 
_version_=1631719282127667205, _root_=10015}, SolrDocument{id=10020, 
_version_=1631719282127667206, _root_=10020}, SolrDocument{id=10025, 
_version_=1631719281418829830, _root_=10025}, SolrDocument{id=10030, 
_version_=1631719282125570055, _root_=10030}, SolrDocument{id=10035, 
_version_=1631719281418829834, _root_=10035}, SolrDocument{id=10040, 
_version_=1631719282125570058, _root_=10040}, SolrDocument{id=10045, 
_version_=1631719281418829836, _root_=10045}, SolrDocument{id=18, 
intGSF=2002032454, longGSF=4327527930903182714, doubleGSF=30018.914077687292, 
floatGSF=30014.146, dateGSF=Tue May 23 03:12:54 ACST 53652997, 
stringGSF=base_string_132955__00030012, boolGSF=true, 
sortableGSF=base_string_504936__00030007, _version_=1631719281418829829, 
_root_=18}, SolrDocument{id=26, intGSF=2002042455, longGSF=4327527930903192719, 
doubleGSF=40018.91407768729, floatGSF=40019.145, dateGSF=Tue May 23 03:13:04 
ACST 53652997, stringGSF=base_string_132955__00040019, boolGSF=true, 
sortableGSF=base_string_504936__00040010, _version_=1631719281418829831, 
_root_=26}, SolrDocument{id=30, intGSF=2002052458, longGSF=4327527930903202723, 
doubleGSF=50023.91407768729, floatGSF=50026.145, dateGSF=Tue May 23 03:13:14 
ACST 53652997, stringGSF=base_string_132955__00050027, boolGSF=true, 
sortableGSF=base_string_504936__00050017, _version_=1631719281418829832, 
_root_=30}, SolrDocument{id=31, intGSF=2002052458, longGSF=4327527930903202723, 
doubleGSF=50023.91407768729, floatGSF=50026.145, dateGSF=Tue May 23 03:13:14 
ACST 53652997, stringGSF=base_string_132955__00050027, boolGSF=false, 
sortableGSF=base_string_504936__00050017, _version_=1631719281418829833, 
_root_=31}, SolrDocument{id=39, intGSF=2002062465, longGSF=4327527930903212726, 
doubleGSF=60023.91407768729, floatGSF=60026.145, dateGSF=Tue May 23 03:13:24 
ACST 53652997, stringGSF=base_string_132955__00060029, boolGSF=false, 
sortableGSF=base_string_504936__00060021, _version_=1631719281418829835, 
_root_=39}, SolrDocument{id=6, intGSF=2002012444, longGSF=4327527930903162712, 
doubleGSF=10005.914077687292, floatGSF=10005.146, dateGSF=Tue May 23 03:12:34 
ACST 53652997, stringGSF=base_string_132955__00010004, boolGSF=true, 
sortableGSF=base_string_504936__00010002, _version_=1631719281418829825, 

[jira] [Commented] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Ram Venkat (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825432#comment-16825432
 ] 

Ram Venkat commented on LUCENE-8776:


Michael G and Michael M:

We do not use Lucene's standard query parser. We have made significant 
enhancements to Surround Parser (several thousand lines of code), and that is 
our primary parser. I can see how I will pass the PositionLengthAttribute and 
change the adjacency distance in the query. But I have to implement that in 
SurroundParser and I can do that sometime in the future and contribute it. 

So, back to my original point: This check broke existing applications with 
valid use cases, without providing a workaround. A simple way to bypass the 
check would be sufficient for our purposes. I believe that we should make that 
change. 

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting is: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> being thrown without any comments on why this check is needed. As I explained 
> above, startOffset going backwards is perfectly valid, to deal with word 
> splitting and span operations on these specialized use cases. On the other 
> hand, it is not clear what value is added by this check and which highlighter 
> code is affected by offsets going backwards. This same check is done at 
> BaseTokenStreamTestCase:245. 
> I see others talk about how this check found bugs in WordDelimiter etc. but 
> it also prevents legitimate use cases. Can this check be removed?  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2019-04-24 Thread jefferyyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825415#comment-16825415
 ] 

jefferyyuan commented on SOLR-12833:


[~ab] [~markrmil...@gmail.com]

I cleaned the code and added the test cases, please check the pr: 
[https://github.com/apache/lucene-solr/pull/641/files]
 * All the doXXX methods assume they already own the lock (either the 
intrinsic monitor or the lock object) and unlock it in their finally block.
 * The caller invokes vinfo.lockForUpdate() before and vinfo.unlockForUpdate() in 
the finally block.
 * So it's clear who owns the lock and who should release it: symmetric : )
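A minimal sketch of this symmetric locking discipline: the caller acquires and releases, while the doXXX-style method only asserts ownership. The names (LockDiscipline, doLocalAdd) are illustrative, not the actual Solr code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDiscipline {
    static final ReentrantLock LOCK = new ReentrantLock();

    // Caller owns the lock: acquires before, releases in finally (symmetric).
    static void processUpdate(Runnable doUpdate) {
        LOCK.lock();               // analogous to vinfo.lockForUpdate()
        try {
            doUpdate.run();        // the doXXX method runs while the lock is held
        } finally {
            LOCK.unlock();         // analogous to vinfo.unlockForUpdate()
        }
    }

    // A doXXX-style method: it asserts, rather than acquires, the lock.
    static void doLocalAdd() {
        if (!LOCK.isHeldByCurrentThread()) {
            throw new IllegalStateException("caller must hold the bucket lock");
        }
        // ... perform the update under the lock ...
    }

    public static void main(String[] args) {
        processUpdate(LockDiscipline::doLocalAdd);
        // After processUpdate returns, the lock has been released by the caller.
        System.out.println("lock held after update: " + LOCK.isHeldByCurrentThread());
    }
}
```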

> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, 8.0
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 7.7, 8.0
>
> Attachments: SOLR-12833-noint.patch, SOLR-12833.patch, 
> SOLR-12833.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a synchronized block that blocks other update requests whose IDs fall 
> in the same hash bucket. The update waits forever until it gets the lock at 
> the synchronized block, and this can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which would make the request 
> time out and fail.
> The client may retry the same request multiple times over several minutes, 
> which would make things worse.
> The server side receives all the update requests, but all except one can do 
> nothing and have to wait there. This wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads are blocked at the synchronized lock 
> while only a few updates make progress. Each thread takes 3+ MB of memory, 
> which causes OOM.
> Also, if the update can't get the lock in the expected time range, it's 
> better to fail fast.
>  
> We can have one configuration in solrconfig.xml: 
> updateHandler/versionLock/timeInMill, so users can specify how long they want 
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today: wait forever 
> until it gets the lock.
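The proposed timeout semantics could look roughly like the following. This is a hypothetical sketch built on java.util.concurrent, not Solr's actual VersionBucket code; the -1 convention mirrors the default described above:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedVersionLock {
    final ReentrantLock lock = new ReentrantLock();

    /**
     * Tries to acquire the lock, failing fast after timeoutMillis.
     * A value of -1 preserves the legacy behavior: wait forever.
     */
    boolean lockForUpdate(long timeoutMillis) {
        if (timeoutMillis < 0) {
            lock.lock();                 // legacy behavior: block indefinitely
            return true;
        }
        try {
            return lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;                // treat interruption as a failed acquire
        }
    }

    void unlockForUpdate() {
        lock.unlock();
    }

    public static void main(String[] args) {
        TimedVersionLock vlock = new TimedVersionLock();
        if (vlock.lockForUpdate(100)) {  // e.g. updateHandler/versionLock/timeInMill = 100
            try {
                System.out.println("lock acquired, applying update");
            } finally {
                vlock.unlockForUpdate();
            }
        } else {
            System.out.println("timed out, failing fast");
        }
    }
}
```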






[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2019-04-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825396#comment-16825396
 ] 

ASF subversion and git services commented on SOLR-13268:


Commit f08ddbc713b8fa528307c6c1c48e2522e7c220f8 in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f08ddbc ]

SOLR-13268: Clean up any test failures resulting from defaulting to async 
logging

(cherry picked from commit 48dc020ddaf0b0911012b4d9b77d859b2af3d3ae)


> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Blocker
> Attachments: SOLR-13268-flushing.patch, SOLR-13268.patch, 
> SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, I see a couple of failures on JDK13 only. I'll collect a "starter set" 
> here; these are likely systemic, so once the root cause is found/fixed, 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8






[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2019-04-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825387#comment-16825387
 ] 

ASF subversion and git services commented on SOLR-13268:


Commit 48dc020ddaf0b0911012b4d9b77d859b2af3d3ae in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=48dc020 ]

SOLR-13268: Clean up any test failures resulting from defaulting to async 
logging


> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Blocker
> Attachments: SOLR-13268-flushing.patch, SOLR-13268.patch, 
> SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, I see a couple of failures on JDK13 only. I'll collect a "starter set" 
> here; these are likely systemic, so once the root cause is found/fixed, 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8






[jira] [Commented] (SOLR-12188) Inconsistent behavior with CREATE collection API

2019-04-24 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825374#comment-16825374
 ] 

Lucene/Solr QA commented on SOLR-12188:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:black}{color} | {color:black} {color} | {color:black}  0m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12188 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917712/SOLR-12188.patch |
| Optional Tests |  validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 33c9456 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| modules | C: solr/webapp U: solr/webapp |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/384/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Inconsistent behavior with CREATE collection API
> 
>
> Key: SOLR-12188
> URL: https://issues.apache.org/jira/browse/SOLR-12188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, config-api
>Affects Versions: 7.4
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12188.patch
>
>
> If collection.configName is not specified during collection creation, then 
> the _default configSet is used to create a mutable configSet (with the suffix 
> AUTOCREATED)
> * In the Admin UI, it is mandatory to specify a configSet. This behavior is 
> inconsistent with the CREATE collection API (where it is not mandatory)
> * Both in the Admin UI and the CREATE API, when _default is specified as the 
> configSet, no mutable configSet is created. So, changes in one collection 
> would be reflected in the others






[JENKINS] Lucene-Solr-repro - Build # 3206 - Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3206/

[...truncated 40 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-8.x/163/consoleText

[repro] Revision: 0cfd85baef7f6f6fb997330b9a14471d66a62889

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=358215E74FC6D351 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-SG 
-Dtests.timezone=America/Cordoba -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AuditLoggerIntegrationTest 
-Dtests.method=testSynchronous -Dtests.seed=358215E74FC6D351 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sv-SE 
-Dtests.timezone=America/Marigot -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
33c94562a630eacad12ab0a94a2a6b3d683f5417
[repro] git fetch
[repro] git checkout 0cfd85baef7f6f6fb997330b9a14471d66a62889

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro]   AuditLoggerIntegrationTest
[repro] ant compile-test

[...truncated 3576 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestSimTriggerIntegration|*.AuditLoggerIntegrationTest" 
-Dtests.showOutput=onerror  -Dtests.seed=358215E74FC6D351 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=en-SG -Dtests.timezone=America/Cordoba 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 1203 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro]   1/5 failed: org.apache.solr.security.AuditLoggerIntegrationTest
[repro] git checkout 33c94562a630eacad12ab0a94a2a6b3d683f5417

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Commented] (SOLR-12291) Async prematurely reports completed status that causes severe shard loss

2019-04-24 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825366#comment-16825366
 ] 

Lucene/Solr QA commented on SOLR-12291:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  3m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  3m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  3m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 44s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.security.AuditLoggerIntegrationTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966845/SOLR-12291.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 33c9456 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/383/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/383/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/383/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Async prematurely reports completed status that causes severe shard loss
> 
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-12291.patch, 
> SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action , almost as if it had missed 
> tracking that particular async request.






Re: Using util.Optional instead of a raw null?

2019-04-24 Thread Gus Heck
I'm not so keen on using Optional for parameters (-0). It's really for
return values. I tend to agree with this SO answer:
https://stackoverflow.com/a/31923105 YMMV
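A small illustration of the convention being discussed: Optional as a return type makes absence explicit and chainable, while an Optional parameter just shifts the wrapping burden onto every caller. The lookup example below is hypothetical, not code from Solr:

```java
import java.util.Map;
import java.util.Optional;

public class OptionalStyle {
    static final Map<String, String> CONFIG = Map.of("wt", "javabin");

    // Common preference: Optional as a return type signals "may be absent".
    static Optional<String> lookup(String key) {
        return Optional.ofNullable(CONFIG.get(key));
    }

    // Often discouraged: an Optional parameter forces callers to wrap values,
    // and the method still has to unwrap; an overload is usually cleaner.
    static String render(Optional<String> format) {
        return format.orElse("xml");
    }

    public static void main(String[] args) {
        System.out.println(lookup("wt").orElse("xml"));    // javabin
        System.out.println(lookup("rows").orElse("10"));   // 10
        System.out.println(render(Optional.empty()));      // xml
    }
}
```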

On Fri, Apr 19, 2019 at 12:43 AM Tomás Fernández Löbbe <
tomasflo...@gmail.com> wrote:

> In general, I'm +1. I think we may want to be careful in the cases where
> too many objects would be created, like when iterating docs/values, etc.
> That specific case you link to would be a good candidate in my mind.
>
> On Wed, Apr 10, 2019 at 10:20 AM Diego Ceccarelli (BLOOMBERG/ LONDON) <
> dceccarel...@bloomberg.net> wrote:
>
>> Hi *,
>> I have a general question about using Optional instead of a raw null:
>> I have noticed that some functions in Solr are dealing with input
>> parameters that might be null, these parameters might be wrapped into
>> Optional - to avoid forgetting that they might be nulls and also to make
>> clear that they are.. optional.
>>
>> For example in marshalOrUnmarshalSortValue
>> https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/ShardResultTransformerUtils.java#L37
>>
>> both originalSortValue and schemaField are optional, and we might declare
>> them Optional.
>> any opinion?
>>
>> Cheers,
>> Diego
>>
>

-- 
http://www.the111shift.com


[jira] [Commented] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825332#comment-16825332
 ] 

Michael McCandless commented on LUCENE-8776:


I think your use case can be properly handled as a token graph, without offsets 
going backwards, if you set proper {{PositionLengthAttribute}} for each token; 
indeed it's for exactly cases like this that we added 
{{PositionLengthAttribute}}.

Give your {{light-emitting-diode}} token {{PositionLengthAttribute=3}} so that 
the consumer of the tokens knows it spans over the three separate tokens 
({{light}}, {{emitting}} and {{diode}}).

To get correct behavior you must do this analysis at query time, and Lucene's 
query parsers will properly interpret the resulting graph and query the index 
correctly.  Unfortunately, you cannot properly index a token graph: Lucene 
discards the {{PositionLengthAttribute}} which is why if you really want to 
index a token graph you should insert a {{FlattenGraphFilter}} at the end of 
your chain.  This still discards information (loses the graph-ness) but tries 
to do so while minimizing how queries are broken.
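As a rough stdlib-only illustration of the graph being described (this models position increments and lengths as plain data rather than using Lucene's actual PositionLengthAttribute API): with posLength=3 on the compound token, 'organic' connects to 'light-emitting-diode', which connects to 'glows', without any offsets going backwards.

```java
import java.util.List;

public class TokenGraphSketch {
    // posInc: positions advanced from the previous token; posLen: positions spanned.
    record Token(String term, int posInc, int posLen) {}

    static final List<Token> GRAPH = List.of(
        new Token("organic", 1, 1),
        new Token("light-emitting-diode", 1, 3),  // spans light + emitting + diode
        new Token("light", 0, 1),                 // same start position as the compound
        new Token("emitting", 1, 1),
        new Token("diode", 1, 1),
        new Token("glows", 1, 1));

    // "Adjacent" in the graph: token b starts at the position where token a ends.
    static boolean adjacent(int startA, int posLenA, int startB) {
        return startA + posLenA == startB;
    }

    public static void main(String[] args) {
        int pos = -1;
        int organicStart = 0, ledStart = 0, glowsStart = 0;
        for (Token t : GRAPH) {
            pos += t.posInc();
            switch (t.term()) {
                case "organic" -> organicStart = pos;
                case "light-emitting-diode" -> ledStart = pos;
                case "glows" -> glowsStart = pos;
                default -> { }
            }
        }
        // organic -> light-emitting-diode -> glows forms a connected path:
        System.out.println(adjacent(organicStart, 1, ledStart));  // true
        System.out.println(adjacent(ledStart, 3, glowsStart));    // true
    }
}
```

A query-time graph-aware parser can walk these arcs directly, which is why the analysis needs to happen at query time: the index itself keeps only positions, not position lengths.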

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting is: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> being thrown without any comments on why this check is needed. As I explained 
> above, startOffset going backwards is perfectly valid, to deal with word 
> splitting and span operations on these specialized use cases. On the other 
> hand, it is not clear what value is added by this check and which highlighter 
> code is affected by offsets going backwards. This same check is done at 
> BaseTokenStreamTestCase:245. 
> I see others talk about how this check found bugs in WordDelimiter etc. but 
> it also prevents legitimate use cases. Can this check be removed?  






[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2019-04-24 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825324#comment-16825324
 ] 

Erick Erickson commented on SOLR-13268:
---

We haven't seen any of these errors in a while, however I did find that the 
solrj log4j2.xml is still using synchronous logging, apparently I missed it 
when I made async the default. So I'm going to push that change shortly and 
continue to monitor.

And [~caomanhdat2] I noticed you added jetty logging to the test log4j2.xml 
files. Does it need to be synchronous? I've changed it to async in the push I'm 
about to do, but I can change it back.
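For reference, routing a specific logger (such as jetty's) through Log4j2's async machinery in a log4j2.xml typically uses the AsyncLogger/AsyncRoot elements. This is a generic, hedged fragment assuming an appender named STDOUT; it is not the actual test configuration being discussed:

```xml
<!-- Hypothetical log4j2.xml fragment: deliver events for these loggers via
     Log4j2's async background thread instead of the calling thread. -->
<Loggers>
  <AsyncLogger name="org.eclipse.jetty" level="INFO"/>
  <AsyncRoot level="INFO">
    <AppenderRef ref="STDOUT"/>
  </AsyncRoot>
</Loggers>
```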

> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Blocker
> Attachments: SOLR-13268-flushing.patch, SOLR-13268.patch, 
> SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, I see a couple of failures on JDK13 only. I'll collect a "starter set" 
> here; these are likely systemic, so once the root cause is found/fixed, 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8






[JENKINS] Lucene-Solr-repro-Java11 - Build # 23 - Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro-Java11/23/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1830/consoleText

[repro] Revision: 80d3ac8709c6d93c4e4634dc7c10ef667a029cb1

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=D6D3BA8A786F9AE5 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=saq-KE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
33c94562a630eacad12ab0a94a2a6b3d683f5417
[repro] git fetch
[repro] git checkout 80d3ac8709c6d93c4e4634dc7c10ef667a029cb1

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro] ant compile-test

[...truncated 3309 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.HdfsAutoAddReplicasIntegrationTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=D6D3BA8A786F9AE5 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=saq-KE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 4789 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro] git checkout 33c94562a630eacad12ab0a94a2a6b3d683f5417

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-MAVEN] Lucene-Solr-Maven-master #2545: POMs out of sync

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2545/

No tests ran.

Build Log:
[...truncated 18070 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:673: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:408:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:1709:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:581:
 Error deploying artifact 'org.apache.lucene:lucene-core:jar': Error deploying 
artifact: Error transferring file

Total time: 8 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Updated] (SOLR-13426) Solr graph queries - should error when run in multiple shard collections?

2019-04-24 Thread Nicholas DiPiazza (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas DiPiazza updated SOLR-13426:
-
Description: 
I noticed that Solr will allow you to run a graph query even when the documented 
single-node/single-shard limitation is not met.

see: 
https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-Limitations.1
 

bq. Limitations
bq. The graph parser only works in single node Solr installations, or with 
SolrCloud collections that use exactly 1 shard.
bq. 

This produces no error, yet the query results are incorrect, which leads you to 
think everything is fine until you discover the issue later on.

Is it possible to throw an error to force people to meet these limitations?
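A guard of the kind being requested might look roughly like the following sketch. This is illustrative only: `validate_graph_query` and `shard_count` are hypothetical names, not Solr's actual internals.

```python
# Hypothetical sketch of the guard this issue asks for: reject graph
# queries against collections with more than one shard instead of
# returning silently incorrect results. Names are illustrative only.

def validate_graph_query(query, shard_count):
    """Raise if a {!graph} query targets a multi-shard collection."""
    if "{!graph" in query and shard_count > 1:
        raise ValueError(
            "graph parser requires exactly 1 shard; collection has %d"
            % shard_count)

validate_graph_query("{!graph from=parent to=id}id:1", shard_count=1)  # ok
try:
    validate_graph_query("{!graph from=parent to=id}id:1", shard_count=4)
except ValueError as e:
    print("rejected:", e)
```

The point of failing fast is that the user learns about the limitation at query time rather than from silently wrong results.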


  was:
I noticed that Solr will allow you to run a graph query against a collection 
that has multiple shards. 

This will result in no error and incorrect query results.

Is it possible to throw an error to force people to use shards=1 for graph 
query parser?

Will prevent someone from accidentally using graph query parser in a situation 
where it will return really misleading results. 


> Solr graph queries - should error when run in multiple shard collections?
> -
>
> Key: SOLR-13426
> URL: https://issues.apache.org/jira/browse/SOLR-13426
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.7.1
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> I noticed that Solr will allow you to run a graph query even when the 
> documented single-node/single-shard limitation is not met.
> see: 
> https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-Limitations.1
>  
> bq. Limitations
> bq. The graph parser only works in single node Solr installations, or with 
> SolrCloud collections that use exactly 1 shard.
> bq. 
> This produces no error, yet the query results are incorrect, which leads you 
> to think everything is fine until you discover the issue later on.
> Is it possible to throw an error to force people to meet these limitations?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (SOLR-13426) Solr graph queries - should error when run in multiple shard collections?

2019-04-24 Thread Nicholas DiPiazza (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas DiPiazza updated SOLR-13426:
-
Description: 
I noticed that Solr will allow you to run a graph query even when the documented 
single-node/single-shard limitation is not met.

see: 
https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-Limitations.1
 

bq. Limitations - The graph parser only works in single node Solr 
installations, or with SolrCloud collections that use exactly 1 shard.

This produces no error, yet the query results are incorrect, which leads you to 
think everything is fine until you discover the issue later on.

Is it possible to throw an error to force people to meet these limitations?


  was:
I noticed that Solr will allow you to run a graph query even when the documented 
single-node/single-shard limitation is not met.

see: 
https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-Limitations.1
 

bq. Limitations
bq. The graph parser only works in single node Solr installations, or with 
SolrCloud collections that use exactly 1 shard.
bq. 

This produces no error, yet the query results are incorrect, which leads you to 
think everything is fine until you discover the issue later on.

Is it possible to throw an error to force people to meet these limitations?



> Solr graph queries - should error when run in multiple shard collections?
> -
>
> Key: SOLR-13426
> URL: https://issues.apache.org/jira/browse/SOLR-13426
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.7.1
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> I noticed that Solr will allow you to run a graph query even when the 
> documented single-node/single-shard limitation is not met.
> see: 
> https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-Limitations.1
>  
> bq. Limitations - The graph parser only works in single node Solr 
> installations, or with SolrCloud collections that use exactly 1 shard.
> This produces no error, yet the query results are incorrect, which leads you 
> to think everything is fine until you discover the issue later on.
> Is it possible to throw an error to force people to meet these limitations?






[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 82 - Failure

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/82/

No tests ran.

Build Log:
[...truncated 23882 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2526 links (2067 relative) to 3355 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.1.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

[...truncated 142 lines...]

[jira] [Commented] (SOLR-13412) Make the Lucene Luke module available from a Solr distribution

2019-04-24 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825287#comment-16825287
 ] 

Erick Erickson commented on SOLR-13412:
---

Well, I spent some time adding a new "luke" command to bin/solr and getting 
SolrCLI to catch it, with two optional parameters: one to specify a core to 
auto-start with and one to specify solr_home. There are two things I need to 
do to make progress:

1> a way to invoke this from a distribution. So far, we don't have a target to 
bundle Luke up into a jar that can be added to the "ant package" Solr build 
target.

2> a way to pass an argument to the main method and have it auto-open the index 
indicated. Passing the argument(s) is trivial, I just haven't taken a dive into 
the code to figure out how to bypass the dialog box if the -core param is 
present and to set the CWD to solr_home if that parameter is present.

I didn't go very far with either of those before I ran out of time, and I'll be 
on vacation most of next week so I'm not sure when I'll get back to it. 
Progress so far attached in a second.
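As a rough illustration of the option handling described above: this is a purely hypothetical sketch, not the actual SolrCLI code, and the flag names (-core, -solr-home) are assumptions based on the description.

```python
# Hypothetical sketch of argument handling for a new "bin/solr luke"
# command: an optional -core to auto-open an index and an optional
# -solr-home to set the working directory. Flag names are assumptions;
# this is not Solr's actual SolrCLI code.
import argparse

def parse_luke_args(argv):
    parser = argparse.ArgumentParser(prog="bin/solr luke")
    parser.add_argument("-core",
                        help="core whose index Luke should auto-open")
    parser.add_argument("-solr-home", dest="solr_home",
                        help="directory to use as solr_home")
    return parser.parse_args(argv)

opts = parse_luke_args(["-core", "techproducts"])
print(opts.core)       # techproducts
print(opts.solr_home)  # None
```

If -core is present the dialog box could be bypassed, and if -solr-home is present the working directory could be set before opening the index, matching the two behaviors described above.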

> Make the Lucene Luke module available from a Solr distribution
> --
>
> Key: SOLR-13412
> URL: https://issues.apache.org/jira/browse/SOLR-13412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> Now that [~Tomoko Uchida] has put in a great effort to bring Luke into the 
> project, I think it would be good to be able to access it from a Solr distro.
> I want to go to the right place under the Solr install directory and start 
> Luke up to examine the local indexes. 
> This ticket is explicitly _not_ about accessing it from the admin UI; Luke is 
> a stand-alone app that must be invoked on the node that has a Lucene index on 
> the local filesystem.
> We need to:
>  * have it included in Solr when running "ant package".
>  * add some bits to the ref guide on how to invoke it:
>  ** where to invoke it from
>  ** mention anything that has to be installed
>  ** any other "gotchas" someone just installing Solr should be aware of
>  * Ant should not be necessary.
> I'll assign this to myself to keep track of, but would not be offended in the 
> least if someone with more knowledge of "ant package" and the like wanted to 
> take it over ;)
> If we can do it at all.






[jira] [Updated] (SOLR-13426) Solr graph queries - should error when run in multiple shard collections?

2019-04-24 Thread Nicholas DiPiazza (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas DiPiazza updated SOLR-13426:
-
Description: 
I noticed that Solr will allow you to run a graph query against a collection 
that has multiple shards. 

This will result in no error and incorrect query results.

Is it possible to throw an error to force people to use shards=1 for graph 
query parser?

This will prevent someone from accidentally using the graph query parser in a 
situation where it returns really misleading results.

  was:
I noticed that Solr will allow you to run a graph query against a collection 
that has multiple shards. 

This will result in no error and incorrect query results.

Is it possible to throw an error to force people to use shards=1 for graph 
query parser?


> Solr graph queries - should error when run in multiple shard collections?
> -
>
> Key: SOLR-13426
> URL: https://issues.apache.org/jira/browse/SOLR-13426
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.7.1
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> I noticed that Solr will allow you to run a graph query against a collection 
> that has multiple shards. 
> This will result in no error and incorrect query results.
> Is it possible to throw an error to force people to use shards=1 for graph 
> query parser?
> This will prevent someone from accidentally using the graph query parser in a 
> situation where it returns really misleading results.






[jira] [Updated] (SOLR-13426) Solr graph queries - should error when run in multiple shard collections?

2019-04-24 Thread Nicholas DiPiazza (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas DiPiazza updated SOLR-13426:
-
Summary: Solr graph queries - should error when run in multiple shard 
collections?  (was: Solr graph queries - errors when run in multiple shard 
collections?)

> Solr graph queries - should error when run in multiple shard collections?
> -
>
> Key: SOLR-13426
> URL: https://issues.apache.org/jira/browse/SOLR-13426
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.7.1
>Reporter: Nicholas DiPiazza
>Priority: Major
>
> I noticed that Solr will allow you to run a graph query against a collection 
> that has multiple shards. 
> This will result in no error and incorrect query results.
> Is it possible to throw an error to force people to use shards=1 for graph 
> query parser?






[jira] [Created] (SOLR-13426) Solr graph queries - errors when run in multiple shard collections?

2019-04-24 Thread Nicholas DiPiazza (JIRA)
Nicholas DiPiazza created SOLR-13426:


 Summary: Solr graph queries - errors when run in multiple shard 
collections?
 Key: SOLR-13426
 URL: https://issues.apache.org/jira/browse/SOLR-13426
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Affects Versions: 7.7.1
Reporter: Nicholas DiPiazza


I noticed that Solr will allow you to run a graph query against a collection 
that has multiple shards. 

This will result in no error and incorrect query results.

Is it possible to throw an error to force people to use shards=1 for graph 
query parser?






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2019-04-24 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825260#comment-16825260
 ] 

Hrishikesh Gadre commented on SOLR-12514:
-

Ok great. Thanks [~krisden]

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 6.6.6, 7.7
>
> Attachments: SOLR-12514.patch, SOLR-12514.patch, Screen Shot 
> 2018-06-24 at 9.36.45 PM.png, demo.sh, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
> SolrCloud setup consists of 2 nodes (solr-7.3.1):
> {code:javascript}
> live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
> ]
> {code}
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
> {code:javascript}
> "authorization":{
>   "class":"solr.RuleBasedAuthorizationPlugin",
>   "permissions":[
> { "name":"read", "collection":"collection-rf-2", 
> "role":"collection-rf-2", "index":1},
> { "name":"read", "collection":"collection-rf-1", 
> "role":"collection-rf-1", "index":2},
> { "name":"read", "role":"*", "index":3},
> ...
>   "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]},
> ...
> {code}
>  
> Basically, it's set up so that 'collection-rf-1-user' user can only access 
> 'collection-rf-1' collection and 'collection-rf-2-user' user can only access 
> 'collection-rf-2' collection.
> Also note that 'collection-rf-1' collection replica is only on 
> 'localhost:8983_solr' node, whereas 'collection-rf-2' collection replica is 
> on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
> {code:javascript}
>  {
>"responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>"q":"*:*"}},
>"response":{"numFound":0,"start":0,"docs":[]
>  }}
> {code}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
>  
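The init()/authorize() flow quoted above can be sketched as follows. This is a minimal illustration of the described control flow only, not Solr's actual HttpSolrCall code; the function and parameter names are assumptions.

```python
# Minimal sketch of the flow described in SOLR-12514 (illustrative only,
# not Solr's actual HttpSolrCall): when the request must be forwarded to
# a remote node, the handler returns early and the authorization check
# is silently skipped.

def handle_request(has_local_replica, authorized):
    if not has_local_replica:
        return "FORWARD"   # early return: authorize() is never reached
    if not authorized:
        return "403"
    return "PROCESS"

print(handle_request(has_local_replica=True, authorized=False))   # 403
print(handle_request(has_local_replica=False, authorized=False))  # FORWARD
```

This matches the reported behavior: the node holding the replica returns 403, while the node without one forwards the unauthorized request and it succeeds.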






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2019-04-24 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825259#comment-16825259
 ] 

Kevin Risden commented on SOLR-12514:
-

From the notification email: CVE-2018-11802

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 6.6.6, 7.7
>
> Attachments: SOLR-12514.patch, SOLR-12514.patch, Screen Shot 
> 2018-06-24 at 9.36.45 PM.png, demo.sh, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
> SolrCloud setup consists of 2 nodes (solr-7.3.1):
> {code:javascript}
> live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
> ]
> {code}
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
> {code:javascript}
> "authorization":{
>   "class":"solr.RuleBasedAuthorizationPlugin",
>   "permissions":[
> { "name":"read", "collection":"collection-rf-2", 
> "role":"collection-rf-2", "index":1},
> { "name":"read", "collection":"collection-rf-1", 
> "role":"collection-rf-1", "index":2},
> { "name":"read", "role":"*", "index":3},
> ...
>   "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]},
> ...
> {code}
>  
> Basically, it's set up so that 'collection-rf-1-user' user can only access 
> 'collection-rf-1' collection and 'collection-rf-2-user' user can only access 
> 'collection-rf-2' collection.
> Also note that 'collection-rf-1' collection replica is only on 
> 'localhost:8983_solr' node, whereas 'collection-rf-2' collection replica is 
> on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
> {code:javascript}
>  {
>"responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>"q":"*:*"}},
>"response":{"numFound":0,"start":0,"docs":[]
>  }}
> {code}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
>  






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2019-04-24 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825249#comment-16825249
 ] 

Hrishikesh Gadre commented on SOLR-12514:
-

Is there a CVE associated with this issue? I don't see one.

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 6.6.6, 7.7
>
> Attachments: SOLR-12514.patch, SOLR-12514.patch, Screen Shot 
> 2018-06-24 at 9.36.45 PM.png, demo.sh, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
> SolrCloud setup consists of 2 nodes (solr-7.3.1):
> {code:javascript}
> live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
> ]
> {code}
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
> {code:javascript}
> "authorization":{
>   "class":"solr.RuleBasedAuthorizationPlugin",
>   "permissions":[
> { "name":"read", "collection":"collection-rf-2", 
> "role":"collection-rf-2", "index":1},
> { "name":"read", "collection":"collection-rf-1", 
> "role":"collection-rf-1", "index":2},
> { "name":"read", "role":"*", "index":3},
> ...
>   "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]},
> ...
> {code}
>  
> Basically, it's set up so that 'collection-rf-1-user' user can only access 
> 'collection-rf-1' collection and 'collection-rf-2-user' user can only access 
> 'collection-rf-2' collection.
> Also note that 'collection-rf-1' collection replica is only on 
> 'localhost:8983_solr' node, whereas 'collection-rf-2' collection replica is 
> on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
> {code:html}
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> {code}
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
> {code:javascript}
>  {
>"responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>"q":"*:*"}},
>"response":{"numFound":0,"start":0,"docs":[]
>  }}
> {code}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
> {code:html}
> Error 403 Unauthorized request, Response code: 403
> HTTP ERROR 403
> Problem accessing /solr/collection-rf-2/select. Reason:
>     Unauthorized request, Response code: 403
> {code}
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
> {code:html}
> Error 403 Unauthorized request, Response code: 403
> HTTP ERROR 403
> Problem accessing /solr/collection-rf-2/select. Reason:
>     Unauthorized request, Response code: 403
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-13081:
---

Assignee: Mikhail Khludnev

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> In-Place Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.
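In other words, the router field needs to join the uniqueKey and version fields in the set of fields an in-place update must never touch. A hedged sketch of that eligibility check (illustrative Python; the real logic is Java in AtomicUpdateDocumentMerger, and the field names here are hypothetical):

```python
# Illustrative in-place-update eligibility check. Field names are
# stand-ins, not the actual AtomicUpdateDocumentMerger implementation.
UNIQUE_KEY = "id"
VERSION_FIELD = "_version_"

def in_place_candidates(update_fields, route_field=None):
    """Return the fields an update may modify in place: everything
    except the uniqueKey, the version field, and -- per this fix --
    the configured route.field."""
    protected = {UNIQUE_KEY, VERSION_FIELD}
    if route_field is not None:
        protected.add(route_field)  # the check missing before this patch
    return [f for f in update_fields if f not in protected]

# With route.field=shard_s configured, 'shard_s' is no longer a candidate:
print(in_place_candidates(["popularity_i", "shard_s"], route_field="shard_s"))
# -> ['popularity_i']
```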



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-24 Thread juan camilo rodriguez duran (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825225#comment-16825225
 ] 

juan camilo rodriguez duran commented on LUCENE-8753:
-

[~rcmuir], as [~jpountz] said, the last benchmark does not show the benefits of 
UniformSplit because most of the query time is spent processing the postings. 
Just as a recap: UniformSplit shines for its simplicity and extensibility, with 
the added benefits of lower memory consumption and faster segment merges.

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (the attached PDF explains the technique visually in more detail)
>  The principle is to split the list of terms into blocks and use an FST to 
> access the blocks, not as a prefix trie but with a seek-floor pattern. 
> For block selection, there is a target average block size (in number of 
> terms) with an allowed delta variation (10%); candidate boundary terms within 
> that window are compared to select the one with the minimal distinguishing 
> prefix.
>  There are also several optimizations inside the block to make it more 
> compact and speed up the loading/scanning.
> The performance obtained is interesting with the luceneutil benchmark, 
> comparing UniformSplit with BlockTree. Find it in the first comment and also 
> attached for better formatting.
> Although the precise percentages vary between runs, three main points stand out:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is so 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15%, and segment writing time 
> is reduced by 20%. So this PostingsFormat scales to large numbers of 
> documents, just as BlockTree does.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to test with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity, and we 
> have already exercised this PostingsFormat's extensibility to create a 
> different flavor for our own use case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley
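The block-selection heuristic described above — a target block size with a ±10% window, cutting at the term with the minimal distinguishing prefix — can be sketched roughly as follows (illustrative Python, not the actual Lucene code; parameter names are assumptions):

```python
def distinguishing_prefix_len(prev, term):
    """Length of the shortest prefix of `term` that distinguishes it
    from the preceding term (the prefix a block would be keyed on)."""
    i = 0
    while i < min(len(prev), len(term)) and prev[i] == term[i]:
        i += 1
    return i + 1

def split_into_blocks(terms, target=32, delta=0.1):
    """Cut the sorted term list into blocks of roughly `target` terms,
    picking, within the allowed size window, the boundary term whose
    distinguishing prefix is shortest."""
    blocks, start = [], 0
    lo, hi = int(target * (1 - delta)), int(target * (1 + delta))
    while start < len(terms):
        if start + hi >= len(terms):
            blocks.append(terms[start:])  # last block takes the remainder
            break
        # candidate boundaries inside the allowed size window
        candidates = range(start + lo, start + hi + 1)
        cut = min(candidates,
                  key=lambda i: distinguishing_prefix_len(terms[i - 1], terms[i]))
        blocks.append(terms[start:cut])
        start = cut
    return blocks

terms = sorted(f"term{i:04d}" for i in range(100))
print([len(b) for b in split_into_blocks(terms, target=30)])
# -> [30, 30, 30, 10]
```

The actual format then applies the intra-block compaction optimizations mentioned above, which this sketch omits.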



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-04-24 Thread Stefan Billet (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825223#comment-16825223
 ] 

Stefan Billet edited comment on SOLR-12584 at 4/24/19 2:33 PM:
---

Hello everyone. Actually, it is possible to export metrics from a SolrCloud 
secured by Basic Auth, SSL and ZooKeeper ACLs without any change to the 
Exporter.
 The security configuration can be injected using environment variables. The 
exporter's main script _solr-exporter_ uses two external environment variables:
 * $JAVA_OPTS allows adding extra JVM options
 * $CLASSPATH_PREFIX allows adding extra libraries

Suppose you have a file basicauth.properties with the Solr Basic-Auth 
credentials:

{{httpBasicAuthUser=myUser}}
 {{httpBasicAuthPassword=myPassword}}

Then you can start the Exporter as follows.
 # export 
JAVA_OPTS="-Djavax.net.ssl.trustStore=truststore.jks 
-Djavax.net.ssl.trustStorePassword=truststorePassword 
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory
 -Dsolr.httpclient.config=basicauth.properties 
-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 -DzkDigestUsername=readonly-user 
-DzkDigestPassword=zkUserPassword"
 # export 
CLASSPATH_PREFIX="../../server/solr-webapp/webapp/WEB-INF/lib/commons-codec-1.11.jar"
   (The Exporter needs Commons-Codec for SSL/BasicAuth, but doesn't bring it)
 # ./bin/solr-exporter -p 9854 -z zk1:2181,zk2:2181,zk3:2181 -f 
./conf/solr-exporter-config.xml -n 16



> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the Solr metrics API for monitoring the state of a Solr cluster. Currently 
> it cannot be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled. The exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics API endpoints; this would be a useful addition for Solr 
> users running a secure Solr instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-12188) Inconsistent behavior with CREATE collection API

2019-04-24 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825214#comment-16825214
 ] 

Erick Erickson commented on SOLR-12188:
---

The default configset is trappy enough, since it defaults to schemaless, that I 
kind of like that it is in people's faces. I agree it's inconsistent, but I'm 
happy to live with that. I wouldn't veto the change, but wanted to mention it.

> Inconsistent behavior with CREATE collection API
> 
>
> Key: SOLR-12188
> URL: https://issues.apache.org/jira/browse/SOLR-12188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, config-api
>Affects Versions: 7.4
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12188.patch
>
>
> If collection.configName is not specified during collection creation, the 
> _default configSet is used to create a mutable configSet (with the suffix 
> AUTOCREATED)
> * In the Admin UI, it is mandatory to specify a configSet. This behavior is 
> inconsistent with the CREATE collection API (where it is not mandatory)
> * In both the Admin UI and the CREATE API, when _default is specified as the 
> configSet, no mutable configSet is created. So, changes in one collection 
> would be reflected in others



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825202#comment-16825202
 ] 

Michael Gibney edited comment on LUCENE-8776 at 4/24/19 2:20 PM:
-

Ram, it's good that this solution worked for you, but taking a step back, your 
solution seems like a workaround for LUCENE-7398 and LUCENE-4312. Workarounds 
aren't inherently _bad_ of course, but when they depend on ambiguity of (or 
lack of enforcement of) contracts, backward compatibility can't be guaranteed 
(to paraphrase what I take Robert and Adrien to be saying).

Of course, one person's "patch" is another person's "workaround", but I'd be 
curious to know whether any of the ["LUCENE-7398/*" 
branches|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7_6] 
might help for your use case. (There's a high-level description in [this 
comment on the LUCENE-7398 
issue|https://issues.apache.org/jira/browse/LUCENE-7398?focusedCommentId=16630529#comment-16630529]).
 Particularly relevant to this discussion: the patch supports recording token 
positionLength in the index, and enforces index ordering by startPosition and 
endPosition (compatible with ordering specified for the Spans API).



> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting is: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> being thrown without any comments on why this check is needed. As I explained 
> above, startOffset going backwards is perfectly valid, to deal with word 
> splitting and span operations on these specialized use cases. On the other 
> hand, it is not clear what value is added by this check and which highlighter 
> code is affected by offsets going backwards. This same check is done at 
> BaseTokenStreamTestCase:245. 
> I see others talk about how this check found bugs in WordDelimiter etc. but 
> it also prevents legitimate use cases. Can this check be removed?  
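The invariant being debated can be sketched as follows: a token stream where a later position reuses an earlier startOffset (as in the light-emitting-diode example above) trips the check (illustrative Python; the real check lives in Java's DefaultIndexingChain):

```python
# Tokens as (position, start_offset, end_offset, text); the second
# 'light-emitting-diode' sits at position 3 but reuses start offset 8.
tokens = [
    (0, 0,  7,  "organic"),
    (1, 8,  28, "light-emitting-diode"),
    (1, 8,  13, "light"),
    (2, 14, 22, "emitting"),
    (3, 8,  28, "light-emitting-diode"),  # startOffset goes backwards here
    (3, 23, 28, "diode"),
    (4, 29, 34, "glows"),
]

def check_offsets(tokens):
    """Mimics the 'offsets must not go backwards' invariant: each
    token's startOffset must be >= the previous token's startOffset."""
    last = -1
    for pos, start, end, text in tokens:
        if start < last:
            raise ValueError(f"startOffset {start} < {last} at {text!r}")
        last = start
    return True

try:
    check_offsets(tokens)
except ValueError as e:
    print(e)  # the indexing chain rejects this stream
```

This is exactly the tension in the discussion: the stream is meaningful for span queries, yet it violates the monotonic-offset contract the indexing chain enforces.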



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] ctargett commented on issue #653: SOLR-13425: Wrong color in horizontal definition list

2019-04-24 Thread GitBox
ctargett commented on issue #653: SOLR-13425: Wrong color in horizontal 
definition list
URL: https://github.com/apache/lucene-solr/pull/653#issuecomment-486262130
 
 
   Good catch, thanks.
   
   This is caused by the customized HTML templates I introduced in SOLR-12746 
(7.6). Where the 1st column of that style of horizontal list used to be in a 
`td` tag, now it's in a `th` tag.
   
   An alternate change that would ensure that style of list always looks 
exactly the way it did before would be to modify the template at 
https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/_templates/_hdlist.html.slim#L9
 to just change:
   
   ```th.hdlist1 class=('strong' if option? 'strong')```
   
   to
   
   ```td.hdlist1 class=('strong' if option? 'strong')```
   
   This would prevent us from possibly finding other things we have to fix in 
the CSS later on. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825192#comment-16825192
 ] 

Munendra S N commented on SOLR-13081:
-

[~osavrasov] [~ichattopadhyaya] [~mkhludnev]
changes LGTM

> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> In-Place Updates are not applied anymore. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12127:

Attachment: SOLR-12127.patch

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch, SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- dynamic field name elided in the original report -->
>     <field name="..." update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="..." update="set">1521472499</field>
>   </doc>
> </add>
> {code}
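For comparison, the same two operations expressed as JSON atomic updates (a sketch; {{price_i}} is a hypothetical field name, since the report elides the actual dynamic field):

```python
import json

# JSON atomic-update payloads equivalent to the XML update queries above.
# 'price_i' is a hypothetical dynamic field; the report elides the real name.
remove_field = [{"id": "372335", "price_i": {"set": None}}]        # should remove the field
set_field    = [{"id": "372335", "price_i": {"set": 1521472499}}]  # works per the report

print(json.dumps(remove_field))
# -> [{"id": "372335", "price_i": {"set": null}}]
```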



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12127:

Attachment: (was: SOLR-12127.patch.1)

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch, SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- dynamic field name elided in the original report -->
>     <field name="..." update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="..." update="set">1521472499</field>
>   </doc>
> </add>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825186#comment-16825186
 ] 

Munendra S N commented on SOLR-12127:
-

[^SOLR-12127.patch]
[~ichattopadhyaya]
Rebased the patch to master

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch, SOLR-12127.patch, 
> SOLR-12127.patch.1
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- dynamic field name elided in the original report -->
>     <field name="..." update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="..." update="set">1521472499</field>
>   </doc>
> </add>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12127:

Attachment: SOLR-12127.patch.1

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch, SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- dynamic field name elided in the original report -->
>     <field name="..." update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <field name="..." update="set">1521472499</field>
>   </doc>
> </add>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-24 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825177#comment-16825177
 ] 

Lucene/Solr QA commented on SOLR-13081:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 47m 
27s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
4s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966847/SOLR-13081.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 33c9456 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/382/testReport/ |
| modules | C: solr/core solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/382/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> In-Place Updates are no longer applied. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.
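The described gap can be reduced to a small predicate. A rough Python sketch of the fixed eligibility check (field names here are illustrative, not taken from the patch): a field may only be rewritten in place if it is not the uniqueKey, not the version field, and, the missing part, not the configured route.field.

```python
def updatable_in_place(field, route_field=None,
                       uniquekey="id", version_field="_version_"):
    """Sketch of the corrected check in AtomicUpdateDocumentMerger:
    exclude the route.field in addition to uniqueKey and version."""
    excluded = {uniquekey, version_field}
    if route_field is not None:
        excluded.add(route_field)  # the check missing from the report
    return field not in excluded
```

With `route_field=None` this reduces to the old behaviour, which is why collections without a route.field were unaffected.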



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8777) Inconsistent behavior in JapaneseTokenizer search mode

2019-04-24 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825156#comment-16825156
 ] 

Tomoko Uchida commented on LUCENE-8777:
---

This should be easily fixed by just adding the first column of the CSV to the 
segmentations when constructing {{UserDictionary}}. And of course the tests 
should be fixed as well.

[~cm]: Could you give me your thoughts / comments on the current Tokenizer 
behaviour (and my proposal here)?
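A rough Python sketch of the proposal (function name hypothetical): when a user-dictionary rule is parsed, keep the first CSV column (the long surface form) alongside the short segments, so search mode can emit both, as it already does for compounds found in the system dictionary.

```python
def parse_user_dict_rule(line):
    """Parse one user-dictionary rule of the form
    surface,space-separated segments,readings,part-of-speech
    and return the tokens search mode should be able to emit."""
    surface, segments, readings, pos = line.split(",")
    shorts = segments.split(" ")
    # Proposed fix: include the long token (first CSV column) too.
    return [surface] + shorts

tokens = parse_user_dict_rule(
    "関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞")
```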

> Inconsistent behavior in JapaneseTokenizer search mode
> --
>
> Key: LUCENE-8777
> URL: https://issues.apache.org/jira/browse/LUCENE-8777
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Priority: Minor
>
> A user reported inconsistent behaviour in JapaneseTokenizer's search mode to 
> me.
> Without a user dictionary, JapaneseTokenizer (mode=search) outputs the long 
> token and all of the short (custom-segmented) tokens.
> e.g.:
> 関西国際空港 => 関西 / 関西国際空港 / 国際 / 空港
> With a user dictionary, JapaneseTokenizer (mode=search) outputs all the short 
> tokens but not the long token.
> e.g.:
> {code}
> $ cat config/userdict.txt 
> 関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞
> {code}
> 関西国際空港 => 関西 / 国際 / 空港
>  
> This behaviour is confusing for users and should be fixed. I am not sure 
> which behaviour is correct, but from my perspective the first one (without a 
> user dictionary) is preferable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] diegoceccarelli commented on a change in pull request #162: SOLR-8776: Support RankQuery in grouping

2019-04-24 Thread GitBox
diegoceccarelli commented on a change in pull request #162: SOLR-8776: Support 
RankQuery in grouping
URL: https://github.com/apache/lucene-solr/pull/162#discussion_r278121455
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java
 ##
 @@ -1270,7 +1270,7 @@ private void 
doProcessGroupedDistributedSearchFirstPhase(ResponseBuilder rb, Que
   final int topNGroups;
   Query query = cmd.getQuery();
   if (query instanceof AbstractReRankQuery){
-topNGroups = cmd.getOffset() + 
((AbstractReRankQuery)query).getReRankDocs();
+topNGroups = Math.max(((AbstractReRankQuery)query).getReRankDocs(), 
cmd.getOffset() + cmd.getLen());
 
 Review comment:
   You were right, I added a unit test and it is failing :/ I'm working on it
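The change under review boils down to a one-line formula. A sketch of the old and new computations (names simplified from the patch):

```python
def top_n_groups_old(offset, rerank_docs):
    # original: always offset + reRankDocs
    return offset + rerank_docs

def top_n_groups_new(offset, length, rerank_docs):
    # patched: collect at least enough groups to fill the requested
    # page, even when offset + len exceeds the re-rank window
    return max(rerank_docs, offset + length)
```

For example, with reRankDocs=10, offset=0, len=20, the old formula collects only 10 groups and cannot fill the requested page; the new one collects 20.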


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825137#comment-16825137
 ] 

Ishan Chattopadhyaya commented on SOLR-12127:
-

[~munendrasn], can you please update for master?

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- placeholder name; the original dynamic-field name was lost in the mail archive -->
>     <field name="dyn_field_i" update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- placeholder name; the original dynamic-field name was lost in the mail archive -->
>     <field name="dyn_field_i" update="set">1521472499</field>
>   </doc>
> </add>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12127) Using atomic updates to remove docValues type dynamic field does not work

2019-04-24 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-12127:
---

Assignee: Ishan Chattopadhyaya

> Using atomic updates to remove docValues type dynamic field does not work
> -
>
> Key: SOLR-12127
> URL: https://issues.apache.org/jira/browse/SOLR-12127
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6.2, 7.2
>Reporter: Oliver Kuldmäe
>Assignee: Ishan Chattopadhyaya
>Priority: Critical
> Attachments: SOLR-12127.patch, SOLR-12127.patch
>
>
> I have defined a dynamic field which is stored=false, indexed=false and 
> docValues=true. Attempting to set this field's value to null via atomic 
> update does not remove the field from the document. However, the document's 
> version is updated. Using atomic updates to set a value for the field does 
> work. Tested on 6.6.2 and 7.2.1. 
> An example of a non-working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- placeholder name; the original dynamic-field name was lost in the mail archive -->
>     <field name="dyn_field_i" update="set" null="true" />
>   </doc>
> </add>
> {code}
>  
> An example of a working update query:
> {code:java}
> <add>
>   <doc>
>     <field name="id">372335</field>
>     <!-- placeholder name; the original dynamic-field name was lost in the mail archive -->
>     <field name="dyn_field_i" update="set">1521472499</field>
>   </doc>
> </add>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Ram Venkat (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825116#comment-16825116
 ] 

Ram Venkat edited comment on LUCENE-8776 at 4/24/19 1:00 PM:
-

Adrien - That will not work as searching for "organic adjacent to light" would 
highlight the entire word "light-emitting-diode" instead of just "light". And 
only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors, and a certain 
part of the code being order n-squared may or may not affect the net 
performance, due to many other factors. In our case, it does not. That is the 
whole point I want to make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 


was (Author: venkat11):
Adrien - That will not work as searching for "organic adjacent to lighting" 
would highlight the entire word "light-emitting-diode" instead of just "light". 
And only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors, and a certain 
part of the code being order n-squared may or may not affect the net 
performance, due to many other factors. In our case, it does not. That is the 
whole point I want to make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting are: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in 
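> The layout in the table above can be modelled as (term, position, startOffset,
> endOffset) tuples. Offsets below are computed from the sample sentence; the
> second long token is where startOffset moves backwards relative to the
> preceding token:
> {code:java}
```python
text = "Organic light-emitting-diode glows"
tokens = [
    # (term, position, startOffset, endOffset)
    ("organic",              0,  0,  7),
    ("light",                1,  8, 13),
    ("light-emitting-diode", 1,  8, 28),
    ("emitting",             2, 14, 22),
    ("diode",                3, 23, 28),
    ("light-emitting-diode", 3,  8, 28),  # startOffset < previous token's
    ("glows",                4, 29, 34),
]
starts = [start for _, _, start, _ in tokens]
```
> {code}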

[jira] [Comment Edited] (LUCENE-4056) Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary

2019-04-24 Thread Kazuaki Hiraga (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825122#comment-16825122
 ] 

Kazuaki Hiraga edited comment on LUCENE-4056 at 4/24/19 1:00 PM:
-

I agree with [~Tomoko Uchida], and I believe that UniDic is more suitable for 
Japanese full-text information retrieval, since the dictionary is well 
maintained by researchers at a Japanese government-funded institute and it 
applies stricter rules than the IPA dictionary, which is intended to produce 
consistent tokenization results. 


was (Author: h.kazuaki):
I agree with [~Tomoko Uchida], and I believe that UniDic is more suitable for 
Japanese full-text information retrieval, since the dictionary is well 
maintained by researchers at a Japanese government-funded institute and applies 
stricter rules than the IPA dictionary, which is intended to produce consistent 
tokenization results. 

> Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary
> 
>
> Key: LUCENE-4056
> URL: https://issues.apache.org/jira/browse/LUCENE-4056
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 3.6
> Environment: Solr 3.6
> UniDic 1.3.12 for MeCab (unidic-mecab1312src.tar.gz)
>Reporter: Kazuaki Hiraga
>Priority: Major
>
> I tried to build a UniDic dictionary to use along with Kuromoji on Solr 3.6. 
> I think UniDic is a better dictionary than the IPA dictionary, so Kuromoji for 
> Lucene/Solr should support the UniDic dictionary as standalone Kuromoji does.
> The following is my procedure:
> I modified build.xml under the lucene/contrib/analyzers/kuromoji directory and 
> ran 'ant build-dict', and got the error below.
> build-dict:
>  [java] dictionary builder
>  [java] 
>  [java] dictionary format: UNIDIC
>  [java] input directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/build/contrib/analyzers/kuromoji/unidic-mecab1312src
>  [java] output directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/contrib/analyzers/kuromoji/src/resources
>  [java] input encoding: utf-8
>  [java] normalize entries: false
>  [java] 
>  [java] building tokeninfo dict...
>  [java]   parse...
>  [java]   sort...
>  [java] Exception in thread "main" java.lang.AssertionError
>  [java]   encode...
>  [java]   at 
> org.apache.lucene.analysis.ja.util.BinaryDictionaryWriter.put(BinaryDictionaryWriter.java:113)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.buildDictionary(TokenInfoDictionaryBuilder.java:141)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.build(TokenInfoDictionaryBuilder.java:76)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.build(DictionaryBuilder.java:37)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.main(DictionaryBuilder.java:82)
> And the diff of build.xml:
> ===
> --- build.xml (revision 1338023)
> +++ build.xml (working copy)
> @@ -28,19 +28,31 @@
>
>  
>
> +  
>  
>
> -  
> +
> +  
> +  
> +  
> +   value="/home/kazu/Work/src/nlp/unidic/_archive"/>
> +
>
> +  
> +  
> +  
> +
>
>
>  
> @@ -58,7 +70,8 @@
>  
>
>
> - 
> + 
> +  tofile="${build.dir}/${dict.src.file}"/>
>   
>   
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4056) Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary

2019-04-24 Thread Kazuaki Hiraga (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825122#comment-16825122
 ] 

Kazuaki Hiraga commented on LUCENE-4056:


I agree with [~Tomoko Uchida], and I believe that UniDic is more suitable for 
Japanese full-text information retrieval, since the dictionary is well 
maintained by researchers at a Japanese government-funded institute and applies 
stricter rules than the IPA dictionary, which is intended to produce consistent 
tokenization results. 

> Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary
> 
>
> Key: LUCENE-4056
> URL: https://issues.apache.org/jira/browse/LUCENE-4056
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 3.6
> Environment: Solr 3.6
> UniDic 1.3.12 for MeCab (unidic-mecab1312src.tar.gz)
>Reporter: Kazuaki Hiraga
>Priority: Major
>
> I tried to build a UniDic dictionary to use along with Kuromoji on Solr 3.6. 
> I think UniDic is a better dictionary than the IPA dictionary, so Kuromoji for 
> Lucene/Solr should support the UniDic dictionary as standalone Kuromoji does.
> The following is my procedure:
> I modified build.xml under the lucene/contrib/analyzers/kuromoji directory and 
> ran 'ant build-dict', and got the error below.
> build-dict:
>  [java] dictionary builder
>  [java] 
>  [java] dictionary format: UNIDIC
>  [java] input directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/build/contrib/analyzers/kuromoji/unidic-mecab1312src
>  [java] output directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/contrib/analyzers/kuromoji/src/resources
>  [java] input encoding: utf-8
>  [java] normalize entries: false
>  [java] 
>  [java] building tokeninfo dict...
>  [java]   parse...
>  [java]   sort...
>  [java] Exception in thread "main" java.lang.AssertionError
>  [java]   encode...
>  [java]   at 
> org.apache.lucene.analysis.ja.util.BinaryDictionaryWriter.put(BinaryDictionaryWriter.java:113)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.buildDictionary(TokenInfoDictionaryBuilder.java:141)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.build(TokenInfoDictionaryBuilder.java:76)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.build(DictionaryBuilder.java:37)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.main(DictionaryBuilder.java:82)
> And the diff of build.xml:
> ===
> --- build.xml (revision 1338023)
> +++ build.xml (working copy)
> @@ -28,19 +28,31 @@
>
>  
>
> +  
>  
>
> -  
> +
> +  
> +  
> +  
> +   value="/home/kazu/Work/src/nlp/unidic/_archive"/>
> +
>
> +  
> +  
> +  
> +
>
>
>  
> @@ -58,7 +70,8 @@
>  
>
>
> - 
> + 
> +  tofile="${build.dir}/${dict.src.file}"/>
>   
>   
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12188) Inconsistent behavior with CREATE collection API

2019-04-24 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825120#comment-16825120
 ] 

Ishan Chattopadhyaya commented on SOLR-12188:
-

I think we should move the "configSet" box to the advanced section. A regular 
(non-advanced) user doesn't need to specify a configSet.

> Inconsistent behavior with CREATE collection API
> 
>
> Key: SOLR-12188
> URL: https://issues.apache.org/jira/browse/SOLR-12188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, config-api
>Affects Versions: 7.4
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12188.patch
>
>
> If collection.configName is not specified during CREATE collection, then the 
> _default configSet is used to create a mutable configSet (with the suffix 
> AUTOCREATED).
> * In the Admin UI, it is mandatory to specify a configSet. This behavior is 
> inconsistent with the CREATE collection API (where it is not mandatory).
> * Both in the Admin UI and the CREATE API, when _default is specified as the 
> configSet, no mutable configSet is created, so changes in one collection would 
> be reflected in others.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Ram Venkat (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825116#comment-16825116
 ] 

Ram Venkat edited comment on LUCENE-8776 at 4/24/19 12:53 PM:
--

Adrien - That will not work as searching for "organic adjacent to lighting" 
would highlight the entire word "light-emitting-diode" instead of just "light". 
And only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors, and a certain 
part of the code being order n-squared may or may not affect the net 
performance, due to many other factors. In our case, it does not. That is the 
whole point I want to make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 


was (Author: venkat11):
Adrien - That will not work as searching for "organic adjacent to lighting" 
would highlight the entire word "light-emitting-diode" instead of just "light". 
And only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors and a certain 
part of code being {{O(n^2)}} may or may not affect the net performance, due to 
many other factors. In our case, it does not. That is all the point I want to 
make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting are: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 

[jira] [Comment Edited] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Ram Venkat (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825116#comment-16825116
 ] 

Ram Venkat edited comment on LUCENE-8776 at 4/24/19 12:51 PM:
--

Adrien - That will not work as searching for "organic adjacent to lighting" 
would highlight the entire word "light-emitting-diode" instead of just "light". 
And only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors and a certain 
part of code being {{O(n^2)}} may or may not affect the net performance, due to 
many other factors. In our case, it does not. That is all the point I want to 
make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 


was (Author: venkat11):
Adrien - That will not work as searching for "organic adjacent to lighting" 
would highlight the entire word "light-emitting-diode" instead of just "light". 
And only light or diode gets highlighted when light-emitting-diode is given the 
same offset as light or diode (when you search for light-emitting-diode). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature for 
a while; allowing offsets to go backwards has been a feature in Lucene for a 
long time. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. It's just a figure of 
speech to say that the net performance depends on many factors and a certain 
part of code being {{O(n^2)}} may or may not affect the net performance, due to 
many other factors. In our case, it does not. That is all the point I want to 
make. 

Removing a long existing feature in Lucene because (a) it affects a newer 
feature (postings) which is used by some people or (b) might cause a noticeable 
performance degradation in some cases, is not a great argument. We are 
dependent on this feature. We have no alternatives at this point. And, I have 
proof that it does not affect performance in a noticeable way, with extensive 
testing in our environment/data etc. Plus, I am guessing that we are not the 
only one in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and standard highlighter) or remove 
it. 

 

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, as 'light' adjacent to 
> 'emitting' and 'light' at a distance of two words from 'diode' need to match 
> this word. So, the order of words after splitting are: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> 

[jira] [Commented] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Ram Venkat (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825116#comment-16825116
 ] 

Ram Venkat commented on LUCENE-8776:


Adrien - That will not work: searching for 'organic' adjacent to 'light' 
would highlight the entire word "light-emitting-diode" instead of just 
"light". And only 'light' or 'diode' gets highlighted when 
'light-emitting-diode' is given the same offset as 'light' or 'diode' (when 
you search for 'light-emitting-diode'). 

Robert,

We are not writing any new "bad" algorithm. We have been using this feature 
for a while; allowing offsets to go backwards has been a long-standing 
feature in Lucene. This check and exception broke that feature. 

And, no, I am not asking anyone to buy more hardware. That was a figure of 
speech: net performance depends on many factors, and a particular piece of 
code being {{O(n^2)}} may or may not affect overall performance, due to 
those other factors. In our case, it does not. That is the only point I want 
to make. 

Removing a long-existing feature in Lucene because (a) it affects a newer 
feature (postings) that is used by some people, or (b) it might cause 
noticeable performance degradation in some cases, is not a great argument. 
We depend on this feature and have no alternative at this point. And I have 
evidence, from extensive testing in our environment and on our data, that it 
does not noticeably affect performance. I also suspect we are not the only 
ones in the world using this feature.  

For these reasons, we should either move this check and exception to other 
parts of Lucene (without affecting indexing and the standard highlighter) or 
remove it. 

 

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, since 'light' adjacent 
> to 'emitting', and 'light' at a distance of two words from 'diode', need to 
> match this word. So the order of words after splitting is: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> being thrown without any comments on why this check is needed. As I explained 
> above, startOffset going backwards is perfectly valid, to deal with word 
> splitting and span operations on these specialized use cases. On the other 
> hand, it is not clear what value is added by this check and which highlighter 
> code is affected by offsets going backwards. This same check is done at 
> BaseTokenStreamTestCase:245. 
> I see others talk about how this check found bugs in WordDelimiter etc. but 
> it also prevents legitimate use cases. Can this check be removed?  
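The invariant the reporter is hitting can be modeled with a small sketch (Python here for brevity; the actual check is Java code in DefaultIndexingChain, and the token tuples below are a hand-built illustration of the issue's table, not real analyzer output):

```python
# Sketch (not Lucene's actual code): model the "offsets must not go
# backwards" invariant and show how the dual-position
# 'light-emitting-diode' token stream from the issue violates it.

def check_offsets(tokens):
    """tokens: list of (term, position, start_offset, end_offset).
    Returns the first offending term, or None if start offsets are
    non-decreasing in token-stream order (the indexing invariant)."""
    last_start = -1
    for term, pos, start, end in tokens:
        if start < last_start:
            return term  # Lucene throws IllegalArgumentException here
        last_start = start
    return None

# "Organic light-emitting-diode glows", with the compound token emitted
# at positions 1 and 3 as described above:
tokens = [
    ("organic", 0, 0, 7),
    ("light", 1, 8, 13),
    ("light-emitting-diode", 1, 8, 28),
    ("emitting", 2, 14, 22),
    ("diode", 3, 23, 28),
    ("light-emitting-diode", 3, 8, 28),  # start offset goes back to 8
    ("glows", 4, 29, 34),
]

print(check_offsets(tokens))  # -> light-emitting-diode
```

The second copy of the compound token is the one rejected: its start offset (8) is smaller than the previous token's (23), even though its position (3) is perfectly valid.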



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 3204 - Still Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3204/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/80/consoleText

[repro] Revision: 0cfd85baef7f6f6fb997330b9a14471d66a62889

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=595F1B1D2487CE1A -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-SG -Dtests.timezone=Pacific/Tahiti -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
33c94562a630eacad12ab0a94a2a6b3d683f5417
[repro] git fetch
[repro] git checkout 0cfd85baef7f6f6fb997330b9a14471d66a62889

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro] ant compile-test

[...truncated 3576 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.HdfsAutoAddReplicasIntegrationTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=595F1B1D2487CE1A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-SG -Dtests.timezone=Pacific/Tahiti -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2489 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro] git checkout 33c94562a630eacad12ab0a94a2a6b3d683f5417

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.2) - Build # 23975 - Failure!

2019-04-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23975/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2001 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20190424_123124_3277701471056332218569.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 8 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190424_123124_32710403547086812918310.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20190424_123124_3276882641266229789520.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 301 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190424_124120_1821069212121651646809.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190424_124120_18212833489301824236459.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190424_124120_18212290171293524388660.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1075 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190424_124255_5255102073826704784101.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190424_124255_52510059149642011624110.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190424_124255_52510406230899538590370.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 241 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20190424_124535_0654194809971511091409.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 

[JENKINS] Lucene-Solr-Tests-8.x - Build # 163 - Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/163/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration

Error Message:
trigger did not fire event after await()ing an excessive amount of time

Stack Trace:
java.lang.AssertionError: trigger did not fire event after await()ing an 
excessive amount of time
at 
__randomizedtesting.SeedInfo.seed([358215E74FC6D351:2D389DEB41F31EBE]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testNodeMarkersRegistration(TestSimTriggerIntegration.java:1001)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.security.AuditLoggerIntegrationTest.testSynchronous

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 

[jira] [Commented] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2019-04-24 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825043#comment-16825043
 ] 

Andrzej Bialecki  commented on SOLR-12833:
--

Nice! I like this refactoring.

Minor issue: I think it's confusing that the new {{doVersionAdd}} method leaves 
the bucket lock in a different state than the other {{do*}} methods. I think 
that they should all call unlock in their {{finally}} section (or they should 
all leave the bucket locked on return, but this creates a hidden side-effect).
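The convention being suggested, every {{do*}} method releasing the bucket lock in its own {{finally}} block so callers never inherit a locked bucket, can be sketched as follows (Python; the class and method names are hypothetical stand-ins for Solr's version bucket code):

```python
import threading

# Sketch of the locking convention discussed above: each do* method
# acquires the bucket lock and releases it in finally, so the lock state
# on return is the same for every method (no hidden side effect).
# Names are hypothetical, not Solr's actual API.

class VersionBucket:
    def __init__(self):
        self.lock = threading.Lock()

def do_version_add(bucket, doc, log):
    bucket.lock.acquire()
    try:
        log.append(("add", doc))  # stand-in for the real versioned add
        return True
    finally:
        bucket.lock.release()  # symmetric with the other do* methods

bucket, log = VersionBucket(), []
assert do_version_add(bucket, "doc1", log)
assert not bucket.lock.locked()  # caller sees an unlocked bucket
```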

> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, 8.0
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 7.7, 8.0
>
> Attachments: SOLR-12833-noint.patch, SOLR-12833.patch, 
> SOLR-12833.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There is a synchronized block that blocks other update requests whose IDs 
> fall in the same hash bucket. An update waits forever until it acquires the 
> lock at the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example, updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which makes the request 
> time out and fail.
> The client may then retry the same request multiple times over several 
> minutes, which makes things worse.
> The server receives all the update requests, but all except one can do 
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen a case where 2000+ threads were blocked on the synchronized 
> lock while only a few updates made progress. Each thread takes 3+ MB of 
> memory, which causes OOM.
> Also, if an update can't get the lock within the expected time range, it's 
> better to fail fast.
>  
> We can add one configuration option in solrconfig.xml, 
> updateHandler/versionLock/timeInMill, so users can specify how long they 
> are willing to wait for the version bucket lock.
> The default value can be -1, so the behavior stays the same: wait forever 
> until the lock is acquired.
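The proposed timed-out lock is essentially a bounded acquire. A minimal sketch of the semantics (Python's threading.Lock stands in for Solr's version bucket lock; the -1 convention follows the issue text, everything else is illustrative):

```python
import threading

# Sketch of the proposed semantics: try to take the version-bucket lock
# within a timeout and fail fast instead of queueing forever. Only the
# -1 = "wait forever" convention comes from the issue; the rest is a
# stand-in for the real Solr code.

def with_version_lock(lock, timeout_ms, update):
    if timeout_ms < 0:
        acquired = lock.acquire()  # -1: block forever (old behavior)
    else:
        acquired = lock.acquire(timeout=timeout_ms / 1000.0)
    if not acquired:
        raise TimeoutError("could not acquire version bucket lock")
    try:
        return update()
    finally:
        lock.release()

lock = threading.Lock()
assert with_version_lock(lock, 100, lambda: "ok") == "ok"

lock.acquire()  # simulate a long-running update holding the bucket
try:
    with_version_lock(lock, 50, lambda: "never runs")
except TimeoutError:
    print("failed fast")  # -> failed fast
finally:
    lock.release()
```

In Java this would map naturally onto `java.util.concurrent.locks.ReentrantLock.tryLock(timeout, unit)` instead of a synchronized block.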



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 84 - Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/84/

4 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:46501: ADDREPLICA failed to create replica

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46501: ADDREPLICA failed to create replica
at 
__randomizedtesting.SeedInfo.seed([5CE358846E666460:D4B7675EC09A0998]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testANewCollectionInOneInstanceWithManualShardAssignement(BasicDistributedZkTest.java:861)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:421)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Resolved] (SOLR-13423) Upgrade RRD4j to version 3.5

2019-04-24 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-13423.
--
Resolution: Fixed

> Upgrade RRD4j to version 3.5
> 
>
> Key: SOLR-13423
> URL: https://issues.apache.org/jira/browse/SOLR-13423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> Solr currently uses RRD4j 3.2, which is not compatible with Java 9+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13423) Upgrade RRD4j to version 3.5

2019-04-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825029#comment-16825029
 ] 

ASF subversion and git services commented on SOLR-13423:


Commit 60af5dfcfc1e5eda19db0b9e89059e987bb31e46 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=60af5df ]

SOLR-13423: Upgrade RRD4j to version 3.5.


> Upgrade RRD4j to version 3.5
> 
>
> Key: SOLR-13423
> URL: https://issues.apache.org/jira/browse/SOLR-13423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> Solr currently uses RRD4j 3.2, which is not compatible with Java 9+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8753) New PostingFormat - UniformSplit

2019-04-24 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825028#comment-16825028
 ] 

Robert Muir commented on LUCENE-8753:
-

Why are we looking at committing this when the most recent benchmark is iffy: 
most searches are the same or slower?

> New PostingFormat - UniformSplit
> 
>
> Key: LUCENE-8753
> URL: https://issues.apache.org/jira/browse/LUCENE-8753
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Attachments: Uniform Split Technique.pdf, luceneutil.benchmark.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a proposal to add a new PostingsFormat called "UniformSplit" with 4 
> objectives:
>  - Clear design and simple code.
>  - Easily extensible, for both the logic and the index format.
>  - Light memory usage with a very compact FST.
>  - Focus on efficient TermQuery, PhraseQuery and PrefixQuery performance.
> (The attached PDF explains the technique visually in more detail.)
>  The principle is to split the list of terms into blocks and use an FST to 
> access the blocks, not as a prefix trie but with a seek-floor pattern. 
> Blocks are selected against a target average block size (in number of 
> terms), with an allowed variation (10%); within that window, candidate 
> split terms are compared and the one with the minimal distinguishing prefix 
> is chosen.
>  There are also several optimizations inside each block to make it more 
> compact and to speed up loading/scanning.
> The luceneutil benchmark results comparing UniformSplit with BlockTree are 
> interesting. Find them in the first comment, and also attached for better 
> formatting.
> Although the precise percentages vary between runs, there are three main 
> points:
>  - TermQuery and PhraseQuery are improved.
>  - PrefixQuery and WildcardQuery are ok.
>  - Fuzzy queries are clearly less performant, because BlockTree is highly 
> optimized for them.
> Compared to BlockTree, FST size is reduced by 15% and segment writing time 
> by 20%, so this PostingsFormat scales to large numbers of docs, like 
> BlockTree.
> This initial version passes all Lucene tests. Use “ant test 
> -Dtests.codec=UniformSplitTesting” to run them with this PostingsFormat.
> Subjectively, we think we have fulfilled our goal of code simplicity. And we 
> have already exercised this PostingsFormat extensibility to create a 
> different flavor for our own use-case.
> Contributors: Juan Camilo Rodriguez Duran, Bruno Roustant, David Smiley
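The block-selection idea described above can be illustrated with a toy sketch (Python; the parameters, helper names, and term list are illustrative, not UniformSplit's actual code):

```python
# Toy illustration of the block-selection idea: walk a sorted term list,
# and near the target block size pick the split term whose distinguishing
# prefix (vs. its predecessor) is shortest, so the FST keys stay small.
# Parameters and names are illustrative, not UniformSplit's actual code.

def prefix_len(a, b):
    """Length of the minimal prefix of b that distinguishes it from a."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i + 1

def split_blocks(terms, target=4, delta=1):
    blocks, start = [], 0
    while start < len(terms):
        if len(terms) - start <= target + delta:
            blocks.append(terms[start:])  # tail fits in one block
            break
        # candidate split points within target +/- delta of block start
        lo, hi = start + target - delta, start + target + delta + 1
        best = min(range(lo, hi),
                   key=lambda i: prefix_len(terms[i - 1], terms[i]))
        blocks.append(terms[start:best])
        start = best
    return blocks

terms = sorted(["aa", "ab", "abc", "ba", "bb", "bc", "ca", "cb", "cc", "da"])
blocks = split_blocks(terms)
print([b[0] for b in blocks])  # first term of each block keys the FST lookup
```

A seek-floor on the FST over these first terms then locates the single block that can contain a queried term, which is the access pattern the description contrasts with a prefix trie.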



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8776) Start offset going backwards has a legitimate purpose

2019-04-24 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825025#comment-16825025
 ] 

Robert Muir commented on LUCENE-8776:
-

{quote}
If performance gets worse for large documents, isn't it better to just log a 
warning, rather than completely remove that feature? Net performance depends on 
other factors like hardware, right?
{quote}

No, as computer scientists we don't write bad algorithms and tell people to buy 
more hardware. 
And for a library, logging anything, especially logging a warning rather 
than enforcing the contract, is the wrong thing to do.

{quote}
At this point, we are forced to remove this check and recompile the source. 
Instead, can we move this check to where postings are used?
{quote}

no.

> Start offset going backwards has a legitimate purpose
> -
>
> Key: LUCENE-8776
> URL: https://issues.apache.org/jira/browse/LUCENE-8776
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.6
>Reporter: Ram Venkat
>Priority: Major
>
> Here is the use case where startOffset can go backwards:
> Say there is a line "Organic light-emitting-diode glows", and I want to run 
> span queries and highlight them properly. 
> During index time, light-emitting-diode is split into three words, which 
> allows me to search for 'light', 'emitting' and 'diode' individually. The 
> three words occupy adjacent positions in the index, since 'light' adjacent 
> to 'emitting', and 'light' at a distance of two words from 'diode', need to 
> match this word. So the order of words after splitting is: Organic, light, 
> emitting, diode, glows. 
> But, I also want to search for 'organic' being adjacent to 
> 'light-emitting-diode' or 'light-emitting-diode' being adjacent to 'glows'. 
> The way I solved this was to also generate 'light-emitting-diode' at two 
> positions: (a) In the same position as 'light' and (b) in the same position 
> as 'glows', like below:
> ||organic||light||emitting||diode||glows||
> | |light-emitting-diode| |light-emitting-diode| |
> |0|1|2|3|4|
> The positions of the two 'light-emitting-diode' are 1 and 3, but the offsets 
> are obviously the same. This works beautifully in Lucene 5.x in both 
> searching and highlighting with span queries. 
> But when I try this in Lucene 7.6, it hits the condition "Offsets must not go 
> backwards" at DefaultIndexingChain:818. This IllegalArgumentException is 
> thrown without any comment on why the check is needed. As I explained above, 
> startOffset going backwards is perfectly valid for dealing with word 
> splitting and span operations in these specialized use cases. On the other 
> hand, it is not clear what value is added by this check, or which highlighter 
> code is affected by offsets going backwards. The same check is done at 
> BaseTokenStreamTestCase:245. 
> I have seen others mention that this check found bugs in WordDelimiterFilter 
> etc., but it also prevents legitimate use cases. Can this check be removed?  
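
The failing condition can be reproduced in miniature. The following is a sketch
(in Python, not Lucene's actual Java) of the invariant that DefaultIndexingChain
enforces; the token terms, positions and offsets follow the table above, and the
Token/index_check names are hypothetical:

```python
# A minimal simulation (NOT Lucene's actual code) of the invariant enforced in
# DefaultIndexingChain: within one field, startOffset must never decrease from
# one emitted token to the next.

from dataclasses import dataclass

@dataclass
class Token:
    term: str
    pos_inc: int   # position increment (0 = same position as previous token)
    start: int     # start character offset
    end: int       # end character offset

def index_check(tokens):
    """Raise ValueError the way IndexWriter raises IllegalArgumentException."""
    last_start = -1
    pos = -1
    for t in tokens:
        pos += t.pos_inc
        if t.start < last_start:
            raise ValueError(
                f"startOffset must not go backwards: startOffset={t.start} "
                f"is < lastStartOffset={last_start}")
        last_start = t.start
    return pos  # final position, just to show the stream was fully consumed

# "Organic light-emitting-diode glows", with the whole compound also emitted
# at position 1 (same as 'light') and again at position 3 (same as 'diode'):
stream = [
    Token("organic",              1,  0,  7),
    Token("light",                1,  8, 13),
    Token("light-emitting-diode", 0,  8, 28),  # ok: startOffset unchanged
    Token("emitting",             1, 14, 22),
    Token("diode",                1, 23, 28),
    Token("light-emitting-diode", 0,  8, 28),  # startOffset 8 < 23 -> rejected
    Token("glows",                1, 29, 34),
]

try:
    index_check(stream)
except ValueError as e:
    print(e)  # this is the condition the issue is reporting
```

Dropping the second compound token makes the stream pass the check, which is
exactly the trade-off the issue is arguing about.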



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12167) Child documents are ignored if unknown atomic operation specified in parent doc

2019-04-24 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-12167:

Fix Version/s: master (9.0)
   8.1

> Child documents are ignored if unknown atomic operation specified in parent 
> doc
> ---
>
> Key: SOLR-12167
> URL: https://issues.apache.org/jira/browse/SOLR-12167
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-12167.patch
>
>
> On trying to add this nested document,
> {code:java}
> {uniqueId : book6, type_s:book, title_t : "The Way of Kings", author_s : 
> "Brandon Sanderson",
>   cat_s:fantasy, pubyear_i:2010, publisher_s:Tor, parent_unbxd:true,
>   _childDocuments_ : [
> { uniqueId: book6_c1, type_s:review, 
> review_dt:"2015-01-03T14:30:00Z",parentId : book6,
>   stars_i:5, author_s:rahul,
>   comment_t:"A great start to what looks like an epic series!"
> }
> ,
> { uniqueId: book6_c2, type_s:review, 
> review_dt:"2014-03-15T12:00:00Z",parentId : book6,
>   stars_i:3, author_s:arpan,
>   comment_t:"This book was too long."
> }
>   ],labelinfo:{label_image:"",hotdeal_type:"",apply_hotdeal:""}
>  }
> {code}
> Only the parent document is getting indexed (without the labelinfo field) and 
> the child documents are being ignored.
> On checking the code,
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateDocumentMerger.java#L94
>  
> I realized that since *labelinfo* is a Map, Solr attempts an atomic update, 
> and since label_image, hotdeal_type and apply_hotdeal are invalid operations, 
> the field is ignored. Unfortunately, the child documents are also not getting 
> indexed.
> h4. Problem with current behavior:
> * The field is silently ignored when its value is a map, instead of failing 
> the document update (when present in the parent)
> * In the above case, the child documents are also getting ignored
> * If a field value is a Map in a child document but not in the parent, the 
> nested document is indexed properly
> {code:java}
> {uniqueId : book6, type_s:book, title_t : "The Way of Kings", author_s : 
> "Brandon Sanderson",
>   cat_s:fantasy, pubyear_i:2010, publisher_s:Tor, parent_unbxd:true,
>   _childDocuments_ : [
> { uniqueId: book6_c1, type_s:review, 
> review_dt:"2015-01-03T14:30:00Z",parentId : book6,
>   stars_i:5, author_s:rahul,
>   comment_t:"A great start to what looks like an epic series!"
> ,labelinfo:{label_image:"","hotdeal_type":"","apply_hotdeal":""}
> }
> ,
> { uniqueId: book6_c2, type_s:review, 
> review_dt:"2014-03-15T12:00:00Z",parentId : book6,
>   stars_i:3, author_s:arpan,
>   comment_t:"This book was too long."
> }
>   ]
>  }
> {code}
> Here, the nested document is indexed, and the labelinfo field value is 
> indexed in book6_c1 as a string (using Map.toString()).
> h4. Probable solution
> * If an unknown operation is specified in an update document then, instead of 
> ignoring the field and its value, fail the document update (fail-fast 
> approach), so that the user knows something is wrong with the document. This 
> would also solve the case where the parent doc is indexed but the child 
> documents are ignored.
> * Currently, a child document's field value that is a Map still gets indexed; 
> instead, the update should fail.
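
The proposed fail-fast behaviour can be sketched as follows. This is a
hypothetical illustration in Python, not AtomicUpdateDocumentMerger's actual
code; the operation names follow the Solr Reference Guide's atomic-update
operations, and the function name is invented:

```python
# Hypothetical sketch of the fail-fast validation proposed above. A Map-valued
# field is a valid atomic update only if every key is a known operation;
# anything else aborts the whole update instead of being silently dropped.

KNOWN_ATOMIC_OPS = {"set", "add", "remove", "removeregex", "inc"}

def validate_update_doc(doc):
    """Raise ValueError for any Map-valued field with unknown operations."""
    for field, value in doc.items():
        if field == "_childDocuments_":
            for child in value:           # recurse into nested documents
                validate_update_doc(child)
        elif isinstance(value, dict):
            unknown = set(value) - KNOWN_ATOMIC_OPS
            if unknown:
                raise ValueError(
                    f"Unknown operation(s) {sorted(unknown)} for field "
                    f"'{field}'; rejecting the whole update (fail fast)")

doc = {
    "uniqueId": "book6", "type_s": "book",
    "labelinfo": {"label_image": "", "hotdeal_type": "", "apply_hotdeal": ""},
    "_childDocuments_": [{"uniqueId": "book6_c1", "stars_i": 5}],
}

try:
    validate_update_doc(doc)
except ValueError as e:
    print(e)  # rejected up front, so child documents are never half-indexed
```

Because validation happens before any indexing, the parent and the child
documents either all go in or none do, which is the behaviour the issue asks
for.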






[jira] [Commented] (SOLR-12167) Child documents are ignored if unknown atomic operation specified in parent doc

2019-04-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825021#comment-16825021
 ] 

ASF subversion and git services commented on SOLR-12167:


Commit a2d499e32a471f5d22f9101125543e163c0db293 in lucene-solr's branch 
refs/heads/branch_8x from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a2d499e ]

SOLR-12167: Throw an exception, instead of just a warning, upon unknown atomic 
update


> Child documents are ignored if unknown atomic operation specified in parent 
> doc
> ---
>
> Key: SOLR-12167
> URL: https://issues.apache.org/jira/browse/SOLR-12167
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12167.patch
>






[jira] [Commented] (SOLR-12167) Child documents are ignored if unknown atomic operation specified in parent doc

2019-04-24 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825020#comment-16825020
 ] 

ASF subversion and git services commented on SOLR-12167:


Commit 33c94562a630eacad12ab0a94a2a6b3d683f5417 in lucene-solr's branch 
refs/heads/master from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=33c9456 ]

SOLR-12167: Throw an exception, instead of just a warning, upon unknown atomic 
update


> Child documents are ignored if unknown atomic operation specified in parent 
> doc
> ---
>
> Key: SOLR-12167
> URL: https://issues.apache.org/jira/browse/SOLR-12167
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Munendra S N
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-12167.patch
>






[jira] [Commented] (LUCENE-4056) Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary

2019-04-24 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825017#comment-16825017
 ] 

Tomoko Uchida commented on LUCENE-4056:
---

Hi,

As far as licensing goes, UniDic is now distributed under GPL, LGPL, and BSD 
3-Clause licenses. To my knowledge, the last one is compatible with ALv2.

Please see: [https://unidic.ninjal.ac.jp/download] and 
[https://unidic.ninjal.ac.jp/copying/BSD]

Personally, I am interested in using UniDic from Kuromoji, because the 
dictionary is still maintained by researchers and is more suitable for search 
purposes than the current search mode based on mecab-ipadic.

If there is a possibility of moving this issue forward, I'd like to help with it.

 

> Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary
> 
>
> Key: LUCENE-4056
> URL: https://issues.apache.org/jira/browse/LUCENE-4056
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 3.6
> Environment: Solr 3.6
> UniDic 1.3.12 for MeCab (unidic-mecab1312src.tar.gz)
>Reporter: Kazuaki Hiraga
>Priority: Major
>
> I tried to build a UniDic dictionary to use along with Kuromoji on Solr 
> 3.6. I think UniDic is a better dictionary than the IPA dictionary, so 
> Kuromoji for Lucene/Solr should support the UniDic dictionary as standalone 
> Kuromoji does.
> The following is my procedure:
> I modified build.xml under the lucene/contrib/analyzers/kuromoji directory 
> and ran 'ant build-dict', and got the error below.
> build-dict:
>  [java] dictionary builder
>  [java] 
>  [java] dictionary format: UNIDIC
>  [java] input directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/build/contrib/analyzers/kuromoji/unidic-mecab1312src
>  [java] output directory: 
> /home/kazu/Work/src/solr/brunch_3_6/lucene/contrib/analyzers/kuromoji/src/resources
>  [java] input encoding: utf-8
>  [java] normalize entries: false
>  [java] 
>  [java] building tokeninfo dict...
>  [java]   parse...
>  [java]   sort...
>  [java] Exception in thread "main" java.lang.AssertionError
>  [java]   encode...
>  [java]   at 
> org.apache.lucene.analysis.ja.util.BinaryDictionaryWriter.put(BinaryDictionaryWriter.java:113)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.buildDictionary(TokenInfoDictionaryBuilder.java:141)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.build(TokenInfoDictionaryBuilder.java:76)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.build(DictionaryBuilder.java:37)
>  [java]   at 
> org.apache.lucene.analysis.ja.util.DictionaryBuilder.main(DictionaryBuilder.java:82)
> And the diff of build.xml:
> ===
> --- build.xml (revision 1338023)
> +++ build.xml (working copy)
> @@ -28,19 +28,31 @@
> [the XML elements of this hunk were stripped by the mail archive; the 
> recoverable fragments show added property definitions, including one with 
> value="/home/kazu/Work/src/nlp/unidic/_archive"]
> @@ -58,7 +70,8 @@
> [XML stripped here as well; the recoverable fragment shows a copy with 
> tofile="${build.dir}/${dict.src.file}"]
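
Since the archive stripped the XML from the diff, the shape of such a
modification can only be sketched. The property names below are hypothetical,
reconstructed from the surviving fragments and from the builder's reported
settings (format UNIDIC, utf-8 input, the unidic-mecab1312src directory); this
is not the author's actual build.xml:

```xml
<!-- Hypothetical sketch only: the original diff's XML was lost in transit.
     Property names are illustrative, not the author's actual build.xml. -->
<property name="dict.format" value="UNIDIC"/>
<property name="dict.encoding" value="utf-8"/>
<property name="dict.src.dir" value="unidic-mecab1312src"/>
<property name="dict.src.file" value="unidic-mecab1312src.tar.gz"/>
<property name="dict.archive.dir" value="/home/kazu/Work/src/nlp/unidic/_archive"/>

<target name="download-dict">
  <!-- copy the locally archived UniDic source instead of downloading IPADIC -->
  <copy file="${dict.archive.dir}/${dict.src.file}"
        tofile="${build.dir}/${dict.src.file}"/>
</target>
```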






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1830 - Still Unstable

2019-04-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1830/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 Timeout waiting to see state for 
collection=testSimple2 
:DocCollection(testSimple2//collections/testSimple2/state.json/25)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"http://127.0.0.1:33379/solr",   
"node_name":"127.0.0.1:33379_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"down"}, "core_node5":{  
 
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"http://127.0.0.1:40551/solr",   
"node_name":"127.0.0.1:40551_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"http://127.0.0.1:33379/solr",   
"node_name":"127.0.0.1:33379_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"down"}, "core_node8":{  
 
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"http://127.0.0.1:40551/solr",   
"node_name":"127.0.0.1:40551_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node8/data/tlog",
   "core":"testSimple2_shard2_replica_n6",   
"shared_storage":"true",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"2",   "autoAddReplicas":"true",   "nrtReplicas":"2",   
"tlogReplicas":"0"} Live Nodes: [127.0.0.1:38787_solr, 127.0.0.1:40551_solr] 
Last available state: 
DocCollection(testSimple2//collections/testSimple2/state.json/25)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node3/data/", 
  "base_url":"http://127.0.0.1:33379/solr",   
"node_name":"127.0.0.1:33379_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node3/data/tlog",
   "core":"testSimple2_shard1_replica_n1",   
"shared_storage":"true",   "state":"down"}, "core_node5":{  
 
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node5/data/", 
  "base_url":"http://127.0.0.1:40551/solr",   
"node_name":"127.0.0.1:40551_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node5/data/tlog",
   "core":"testSimple2_shard1_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node7/data/", 
  "base_url":"http://127.0.0.1:33379/solr",   
"node_name":"127.0.0.1:33379_solr",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node7/data/tlog",
   "core":"testSimple2_shard2_replica_n4",   
"shared_storage":"true",   "state":"down"}, "core_node8":{  
 
"dataDir":"hdfs://localhost:44317/solr_hdfs_home/testSimple2/core_node8/data/", 
  "base_url":"http://127.0.0.1:40551/solr",   
"node_name":"127.0.0.1:40551_solr",   "type":"NRT",   
"force_set_state":"false",   
