[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1176 - Failure!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1176/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 67913 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /var/tmp/ecj1177668767
 [ecj-lint] Compiling 1061 source files to /var/tmp/ecj1177668767
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/core/CoreContainer.java
 (at line 937)
 [ecj-lint] core = new SolrCore(dcore, coreConfig);
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'core' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 233)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 120)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 10. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 144)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 11. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/handler/ClassifyStream.java
 (at line 88)
 [ecj-lint] SolrCore solrCore = (SolrCore) solrCoreObj;
 [ecj-lint]  
 [ecj-lint] Resource leak: 'solrCore' is 

[jira] [Commented] (SOLR-10235) fix last TestJdbcDataSource / mock issue with java9

2017-03-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898878#comment-15898878
 ] 

Uwe Schindler commented on SOLR-10235:
--

Hi [~hossman],
thanks for converting it, but there are 2 issues: {{getPropertyInfo(String url, 
Properties info)}} and {{jdbcCompliant()}} fail to delegate to the wrapped 
driver; they call themselves, leading to a stack overflow. It looks like they 
are not called at the moment, but if the test gets extended it may overflow.

Otherwise, I'd do the mock setup in the constructor of the wrapper. With the 
current code, if {{DriverManager}} actually created a new instance of the 
wrapper from the class name, that instance would not have the recorded 
behaviour. To do this, remove "static" from the inner class; 
{{getMockitoDriver()}} is then also obsolete.

Finally, add {{@Override}} annotations to the delegating methods.
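A minimal sketch of the delegation pattern being discussed (names are hypothetical, and a plain stub stands in for the Mockito mock): the delegate is created in the constructor, so even an instance that DriverManager creates by class name carries the stubbed behaviour, and every method forwards to the delegate rather than to itself.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

// Hypothetical wrapper: a real, named Driver class delegating every
// java.sql.Driver method to an inner delegate built in the constructor.
class DelegatingDriver implements Driver {
  private final Driver delegate;

  DelegatingDriver() {
    // In the real test this would be a Mockito mock; a plain stub here.
    this.delegate = new Driver() {
      @Override public Connection connect(String url, Properties info) { return null; }
      @Override public boolean acceptsURL(String url) { return url.startsWith("jdbc:fake:"); }
      @Override public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
      }
      @Override public int getMajorVersion() { return 1; }
      @Override public int getMinorVersion() { return 0; }
      @Override public boolean jdbcCompliant() { return false; }
      @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        throw new SQLFeatureNotSupportedException();
      }
    };
  }

  // Every method must call delegate.xxx(...), never this.xxx(...) --
  // a self-call here would recurse until StackOverflowError.
  @Override public Connection connect(String url, Properties info) throws SQLException {
    return delegate.connect(url, info);
  }
  @Override public boolean acceptsURL(String url) throws SQLException {
    return delegate.acceptsURL(url);
  }
  @Override public DriverPropertyInfo[] getPropertyInfo(String url, Properties info)
      throws SQLException {
    return delegate.getPropertyInfo(url, info);
  }
  @Override public int getMajorVersion() { return delegate.getMajorVersion(); }
  @Override public int getMinorVersion() { return delegate.getMinorVersion(); }
  @Override public boolean jdbcCompliant() { return delegate.jdbcCompliant(); }
  @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException {
    return delegate.getParentLogger();
  }
}
```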

> fix last TestJdbcDataSource / mock issue with java9
> ---
>
> Key: SOLR-10235
> URL: https://issues.apache.org/jira/browse/SOLR-10235
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>  Labels: java9
> Attachments: SOLR-10235.patch
>
>
> The way TestJdbcDataSource was converted to use Mockito in SOLR-9966 still 
> left one outstanding test that was incompatible with Java9: 
> {{testRetrieveFromDriverManager()}} 
> The way this test worked with mock classes was also sketchy, but under java9 
> (even with Mockito) the attempt at using class names to resolve things in the 
> standard SQL DriverManager isn't viable.
> It seems like any easy fix is to create _real_ class (with a real/fixed 
> classname) that acts as a wrapper around a mockito "Driver" instance just for 
> the purposes of checking that the DriverManaer related code is working 
> properly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: pylucene-6.4.1 Makefile

2017-03-06 Thread Andi Vajda

> On Mar 7, 2017, at 00:11, Omar Ali  wrote:
> 
> Hello,
> 
> I noticed that, in the Makefile of pylucene 6.4.1, the environment
> variables section for Mac OS X 10.12 is uncommented, unlike in the
> Makefile of the previous release. This is only true for the Makefile
> packaged in pylucene-6.4.1-src.tar.gz. The Makefile on svn
>  still has
> this section commented out. Was this change intentional? Can you re-upload
> pylucene-6.4.1-src.tar.gz with the Mac OS X section commented out again?
> This is breaking the workflow of specifying the desired environment
> variables on the command line without editing the Makefile.

Indeed, that is a bug, it should not have been committed that way. Apologies.
To be fixed for the next release.

Andi..

> 
> Thanks,
> Omar


pylucene-6.4.1 Makefile

2017-03-06 Thread Omar Ali
Hello,

I noticed that, in the Makefile of pylucene 6.4.1, the environment
variables section for Mac OS X 10.12 is uncommented, unlike in the
Makefile of the previous release. This is only true for the Makefile
packaged in pylucene-6.4.1-src.tar.gz. The Makefile on svn
 still has
this section commented out. Was this change intentional? Can you re-upload
pylucene-6.4.1-src.tar.gz with the Mac OS X section commented out again?
This is breaking the workflow of specifying the desired environment
variables on the command line without editing the Makefile.

Thanks,
Omar
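For reference, the intended shape of that section is a fully commented-out block (variable names follow the usual pylucene Makefile conventions; the values and version numbers here are illustrative, not the actual 6.4.1 content):

```make
# Mac OS X 10.12 (example values; leave commented so command-line
# definitions are not overridden)
#PREFIX_PYTHON=/usr
#ANT=ant
#PYTHON=$(PREFIX_PYTHON)/bin/python
#JCC=$(PYTHON) -m jcc --shared
#NUM_FILES=8
```

With the section commented out, an invocation such as `make PYTHON=... JCC="... -m jcc --shared"` can supply the environment variables on the command line without editing the Makefile, which is the workflow the uncommented section breaks.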


[jira] [Commented] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-06 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898867#comment-15898867
 ] 

Amrit Sarkar commented on SOLR-10209:
-

Need advice on the following:

We were solving two problems here:
1. Indefinite retries of the API calls when the server goes down without 
completing the request.
2. Don't say the connection is lost if the API is taking more than 10 sec.

(2) is done and good to go; I am working on an elegant progress bar so that it 
can accommodate more than one call at a time.
For (1), we are heading toward a bigger problem: earlier only the original API 
call was replicated, but now the REQUESTSTATUS api is attached to it as well, 
so two APIs are filling the network call list.

There is no way to fix it other than changing the base js file, i.e. app.js. 
This means we would change how the API calls are made on other pages, e.g. 
cloud, core, mbeans etc. I intend not to change the base js file, and 
suggestions will be deeply appreciated on this.
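The submit-then-poll flow under discussion can be sketched as a bounded poll loop (plain Java; the iterator is a stand-in for successive REQUESTSTATUS responses, and all names are illustrative, not the admin UI's actual code):

```java
import java.util.Arrays;
import java.util.Iterator;

// Sketch of the async-submit + REQUESTSTATUS pattern: poll a bounded number
// of times, stop on a final state, and bail out instead of retrying forever.
class AsyncRequestPoller {
  // statusChecks stands in for successive REQUESTSTATUS responses.
  static String pollUntilDone(Iterator<String> statusChecks, int maxChecks) {
    for (int i = 0; i < maxChecks && statusChecks.hasNext(); i++) {
      String state = statusChecks.next();           // one REQUESTSTATUS call
      if (state.equals("completed") || state.equals("failed")) {
        return state;                               // final state: stop polling
      }
      // "submitted"/"running": keep the progress indicator spinning
    }
    return "timed-out";                             // bail out; no endless retries
  }

  public static void main(String[] args) {
    Iterator<String> states =
        Arrays.asList("submitted", "running", "completed").iterator();
    System.out.println(pollUntilDone(states, 10));  // prints "completed"
  }
}
```

Bounding the number of checks is what prevents the indefinite-retry behaviour of problem (1): once the budget is spent, the caller reports an error instead of re-queuing more network calls.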

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussion on multiple jiras for requests for Collections apis 
> from UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> submit the async request
> if (the initial call failed or there was no status to be found) {
>     report an error and suggest the user check their system before
>     resubmitting the request. Bail out in this case: no retries, no
>     attempt to drive on.
> } else {
>     put up a progress indicator while periodically checking the status;
>     continue spinning until we can report the final status.
> }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedbacks on this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 742 - Failure!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/742/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:51671/solr/awhollynewcollection_0: 
Expected mime type application/octet-stream but got text/html.

Error 510
HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:6},code=510}
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:51671/solr/awhollynewcollection_0: Expected 
mime type application/octet-stream but got text/html. 


Error 510 


HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:

{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:6},code=510}
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([F2B7F038BFA688D4:BAC2848CB995A741]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1361)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1112)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:522)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 

[jira] [Commented] (SOLR-10039) LatLonPointSpatialField

2017-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898782#comment-15898782
 ] 

David Smiley commented on SOLR-10039:
-

Yes; thanks Alexandre.

I'll commit this patch within a couple days or sooner if I get a +1.

> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...
> This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9601) DIH: Radicially simplify Tika example to only show relevant configuration

2017-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898778#comment-15898778
 ] 

David Smiley commented on SOLR-9601:


+1 -- and to all the /example configs for that matter; same principle. Keep the 
relevant parts that are to be exercised; no kitchen sinks that need to be 
maintained.

> DIH: Radicially simplify Tika example to only show relevant configuration
> -
>
> Key: SOLR-9601
> URL: https://issues.apache.org/jira/browse/SOLR-9601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler, contrib - Solr Cell (Tika 
> extraction)
>Affects Versions: 6.x, master (7.0)
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>  Labels: examples, usability
>
> Solr DIH examples are legacy examples to show how DIH works. However, they 
> include full configurations that may obscure the teaching points. This is no 
> longer needed, as we have 3 full-blown examples in the configsets. 
> Specifically for Tika, the field type definitions were at some point 
> simplified to have fewer support files in the configuration directory. This, 
> however, means that we now have field definitions that have the same names 
> as in other examples, but different definitions. 
> Importantly, Tika does not use most (any?) of those modified definitions. 
> They are there just for completeness. Similarly, the solrconfig.xml includes 
> the extract handler even though we are demonstrating a different path of 
> using Tika. Somebody grepping through the config files may get confused 
> about which configuration aspects contribute to which behaviour.
> I am planning to significantly simplify the configuration and schema of the 
> Tika example to **only** show the DIH Tika extraction path. It will end up 
> as a very short and focused example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7727) Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility

2017-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898771#comment-15898771
 ] 

David Smiley commented on LUCENE-7727:
--

Thanks for the tips Uwe.  I forgot to mention I _did_ run {{ant 
ivy-bootstrap}}.  So the problem was that my {{~/.ant/lib/}} had _both_ 
ivy-2.2.0.jar and ivy-2.3.0.jar somehow for who knows how long, and this is 
biting me now.  Perhaps Java 9 might better alert users to classpaths 
containing jars with classes in the same package?  Something to look forward to 
if so.

> Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility
> 
>
> Key: LUCENE-7727
> URL: https://issues.apache.org/jira/browse/LUCENE-7727
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7727.patch, LUCENE-7727.patch
>
>
> The documentation tasks use a library called "pegdown" to convert Markdown to 
> HTML. Unfortunately, the developer of pegdown EOLed it and points the users 
> to a faster replacement: flexmark-java 
> (https://github.com/vsch/flexmark-java).
> This would not be important for us, if pegdown would work with Java 9, but it 
> is also affected by the usual "setAccessible into private Java APIs" issue 
> (see my talk at FOSDEM: 
> https://fosdem.org/2017/schedule/event/jigsaw_challenges/).
> The migration should not be too hard; it's just a bit of Groovy code rewriting 
> and dependency changes.
> This is the pegdown problem:
> {noformat}
> Caused by: java.lang.RuntimeException: Could not determine whether class 
> 'org.pegdown.Parser$$parboiled' has already been loaded
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:213)
> at 
> org.parboiled.transform.ParserTransformer.transformParser(ParserTransformer.java:35)
> at org.parboiled.Parboiled.createParser(Parboiled.java:54)
> ... 50 more
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.findLoadedClass(java.lang.String) accessible: module 
> java.base does not "opens java.lang" to unnamed module @551b6736
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:335)
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:278)
> at 
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:196)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:190)
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:206)
> ... 52 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 302 - Still Unstable

2017-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/302/

2 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap

Error Message:
Document mismatch on target after sync expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([B003351AF67433AA:67D41A6D422BABED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap(CdcrBootstrapTest.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 

[jira] [Reopened] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore

2017-03-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-9836:
---

> Add more graceful recovery steps when failing to create SolrCore
> 
>
> Key: SOLR-9836
> URL: https://issues.apache.org/jira/browse/SOLR-9836
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mike Drob
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, 
> SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch, SOLR-9836.patch
>
>
> I have seen several cases where there is a zero-length segments_n file. We 
> haven't identified the root cause of these issues (possibly a poorly timed 
> crash during replication?) but if there is another node available then Solr 
> should be able to recover from this situation. Currently, we log and give up 
> on loading that core, leaving the user to manually intervene.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3877 - Still Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3877/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([9671C894EC03AC8D:1E25F74E42FFC175]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:159)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:865)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:620)
at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (SOLR-9986) Implement DatePointField

2017-03-06 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-9986.

Resolution: Fixed

> Implement DatePointField
> 
>
> Key: SOLR-9986
> URL: https://issues.apache.org/jira/browse/SOLR-9986
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
> Attachments: SOLR-9986.patch, SOLR-9986.patch
>
>
> Followup task of SOLR-8396



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9986) Implement DatePointField

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898726#comment-15898726
 ] 

ASF subversion and git services commented on SOLR-9986:
---

Commit 4c2ed22b3721b7d6a86e5809821ca88f9af833ad in lucene-solr's branch 
refs/heads/branch_6x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c2ed22 ]

SOLR-9986: Implement DatePointField


> Implement DatePointField
> 
>
> Key: SOLR-9986
> URL: https://issues.apache.org/jira/browse/SOLR-9986
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
> Attachments: SOLR-9986.patch, SOLR-9986.patch
>
>
> Followup task of SOLR-8396






[jira] [Resolved] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-10205.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Fix For: 6.5, master (7.0)
>
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.
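The single-reserved-block race described above can be sketched as follows. This is a hypothetical simplification (the class below is not the actual BlockCache code): the semaphore's one permit stands in for the single reserved block, and the permit is never released, modelling a full map where blocks are only freed on eviction.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the real BlockCache: a cache with a single reserved
// block. When the backing map is full, only one store() can claim the free
// block; any concurrent store() fails and the data is simply not cached.
class SingleReservedBlockCache {
    private final Semaphore freeBlocks = new Semaphore(1); // one reserved block
    private final AtomicInteger storeFailures = new AtomicInteger();

    boolean store(byte[] data) {
        if (!freeBlocks.tryAcquire()) {  // no free block -> caching is skipped
            storeFailures.incrementAndGet();
            return false;
        }
        // ... copy data into the claimed block, insert pointer into the map ...
        // the real cache only frees the block when the map evicts the entry,
        // so we deliberately never release the permit here
        return true;
    }

    int failures() { return storeFailures.get(); }
}

public class BlockCacheRaceDemo {
    public static void main(String[] args) throws Exception {
        SingleReservedBlockCache cache = new SingleReservedBlockCache();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> cache.store(new byte[128]));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // With a single reserved block, exactly one of the two stores must fail.
        System.out.println("failed stores: " + cache.failures());
    }
}
```

With one permit and no release, exactly one of the two concurrent stores fails, which is the behaviour the committed fix mitigates by reserving multiple blocks.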






[jira] [Commented] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898670#comment-15898670
 ] 

ASF subversion and git services commented on SOLR-10205:


Commit f2da342c47f8588996c7a68433a4e11131e46ee2 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f2da342 ]

SOLR-10205: BlockCache - use 4 reserved blocks, don't use executor in caffeine, 
call cleanUp


> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.






[jira] [Updated] (SOLR-10237) Poly-Fields should error if subfield has docValues=true

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10237:
-
Attachment: SOLR-10237.patch

> Poly-Fields should error if subfield has docValues=true
> ---
>
> Key: SOLR-10237
> URL: https://issues.apache.org/jira/browse/SOLR-10237
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10237.patch
>
>
> DocValues aren’t really supported in poly-fields at this point, but they 
> don’t complain if the schema definition of the subfield has docValues=true. 
> This leaves the index in an inconsistent state, since the SchemaField has 
> docValues=true but there are no DV for the field.
> Since this breaks compatibility, I think we should just emit a warning unless 
> the subfield is an instance of {{PointType}}. With 
> {{\[Int/Long/Float/Double/Date\]PointType}} subfields, this is particularly 
> important, since they use {{IndexOrDocValuesQuery}}, which would return 
> incorrect results.
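As an illustration of the inconsistency described above, a schema fragment along these lines (all field and type names here are hypothetical) is currently accepted without any complaint:

```xml
<!-- Hypothetical fragment: the subfield type declares docValues="true",
     but the poly-field never actually writes doc values for it. -->
<fieldType name="coord_subfield" class="solr.DoublePointField" docValues="true"/>
<dynamicField name="*_coordinate" type="coord_subfield" indexed="true" stored="false"/>
<fieldType name="point2d" class="solr.PointType" dimension="2" subFieldSuffix="_coordinate"/>
<field name="location" type="point2d" indexed="true" stored="true"/>
```

The SchemaField for `*_coordinate` then reports docValues=true while the index holds no doc values for it, which is exactly the mismatch described above.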






[jira] [Commented] (SOLR-9986) Implement DatePointField

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898658#comment-15898658
 ] 

ASF subversion and git services commented on SOLR-9986:
---

Commit 3131ec2d99401c1fd1fc33a00343a59a78ab6445 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3131ec2 ]

SOLR-9986: Implement DatePointField


> Implement DatePointField
> 
>
> Key: SOLR-9986
> URL: https://issues.apache.org/jira/browse/SOLR-9986
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
> Attachments: SOLR-9986.patch, SOLR-9986.patch
>
>
> Followup task of SOLR-8396






[jira] [Commented] (SOLR-10039) LatLonPointSpatialField

2017-03-06 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898655#comment-15898655
 ] 

Alexandre Rafalovitch commented on SOLR-10039:
--

I think we can just remove that field and related definitions from all the 
legacy examples. I don't see anything relying on them by definition. 

And yes, I think at least the DIH examples should be stripped to the absolute 
minimum. I was going to do that for the Tika example in SOLR-9601. Is that the 
kind of thing you were thinking about?

> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...
> This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project
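For context, using the proposed field type in a schema might look like the following sketch (field names and attribute values are assumptions for illustration, not taken from the patch):

```xml
<!-- Hypothetical schema usage of the proposed field type -->
<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>
<field name="home_location" type="location" indexed="true" stored="true"/>
```

Per the description above, searches would then go through the BKD points index while sorting/relevancy by distance would use the doc values side.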






[jira] [Commented] (SOLR-10238) Remove LatLonType in 7.0; replaced by LatLonPointSpatialField

2017-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898652#comment-15898652
 ] 

Tomás Fernández Löbbe commented on SOLR-10238:
--

I think it would be too soon. AFAIK we try to keep index back-compatibility 
for one full major version (same as Lucene), so I think we should mark it as 
deprecated in 6.x and 7.x (maybe also remove it from all example schemas) and 
remove it in 8.

> Remove LatLonType in 7.0; replaced by LatLonPointSpatialField
> -
>
> Key: SOLR-10238
> URL: https://issues.apache.org/jira/browse/SOLR-10238
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> LatLonPointSpatialField is about to land in SOLR-10039.  This field is 
> superior to LatLonType.  In 7.0, let's remove LatLonType and mark it 
> deprecated in 6.x?  Or must this wait yet another release cycle?
> FYI RPT fields still have life in them due to their ability to index 
> non-point shapes, handle custom (user-coded) shapes, and heatmaps, and are 
> not limited to a lat-lon coordinate space.






[jira] [Commented] (SOLR-10238) Remove LatLonType in 7.0; replaced by LatLonPointSpatialField

2017-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898617#comment-15898617
 ] 

David Smiley commented on SOLR-10238:
-

FYI [~tomasflobbe] since you've been working on numerics -> Points in Solr.

> Remove LatLonType in 7.0; replaced by LatLonPointSpatialField
> -
>
> Key: SOLR-10238
> URL: https://issues.apache.org/jira/browse/SOLR-10238
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0)
>
>
> LatLonPointSpatialField is about to land in SOLR-10039.  This field is 
> superior to LatLonType.  In 7.0, let's remove LatLonType and mark it 
> deprecated in 6.x?  Or must this wait yet another release cycle?
> FYI RPT fields still have life in them due to their ability to index 
> non-point shapes, handle custom (user-coded) shapes, and heatmaps, and are 
> not limited to a lat-lon coordinate space.






[jira] [Created] (SOLR-10238) Remove LatLonType in 7.0; replaced by LatLonPointSpatialField

2017-03-06 Thread David Smiley (JIRA)
David Smiley created SOLR-10238:
---

 Summary: Remove LatLonType in 7.0; replaced by 
LatLonPointSpatialField
 Key: SOLR-10238
 URL: https://issues.apache.org/jira/browse/SOLR-10238
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: master (7.0)


LatLonPointSpatialField is about to land in SOLR-10039.  This field is superior 
to LatLonType.  In 7.0, let's remove LatLonType and mark it deprecated in 6.x?  
Or must this wait yet another release cycle?

FYI RPT fields still have life in them due to their ability to index non-point 
shapes, handle custom (user-coded) shapes, and heatmaps, and are not limited 
to a lat-lon coordinate space.






[jira] [Updated] (SOLR-10039) LatLonPointSpatialField

2017-03-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10039:

Attachment: SOLR_10039_LatLonPointSpatialField.patch

Last patch; I think it's ready now.  This _replaces_ {{solr.LatLonType}} with 
{{solr.LatLonPointSpatialField}} in the solr/server/solr/configsets/ schemas.  

However it doesn't modify solr/example/ schemas... I wish those schemas were 
stripped down so that they were easier to maintain (separate issue).  Thoughts 
[~arafalov]?

> LatLonPointSpatialField
> ---
>
> Key: SOLR-10039
> URL: https://issues.apache.org/jira/browse/SOLR-10039
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Attachments: SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch, 
> SOLR_10039_LatLonPointSpatialField.patch
>
>
> The fastest and most efficient spatial field for point data in Lucene/Solr is 
> {{LatLonPoint}} in Lucene's sandbox module.  I'll include 
> {{LatLonDocValuesField}} with this even though it's a separate class.  
> LatLonPoint is based on the Points API, using a BKD index.  It's multi-valued 
> capable.  LatLonDocValuesField is based on sorted numeric DocValues, and thus 
> is also multi-valued capable (a big deal as the existing Solr ones either 
> aren't or do poorly at it).  Note that this feature is limited to a 
> latitude/longitude spherical world model.  And furthermore the precision is 
> at about a centimeter -- less precise than the other spatial fields but 
> nonetheless plenty good for most applications.  Last but not least, this 
> capability natively supports polygons, albeit those that don't wrap the 
> dateline or a pole.
> I propose a {{LatLonPointSpatialField}} which uses this.  Patch & details 
> forthcoming...
> This development was funded by the Harvard Center for Geographic Analysis as 
> part of the HHypermap project






[jira] [Created] (SOLR-10237) Poly-Fields should error if subfield has docValues=true

2017-03-06 Thread JIRA
Tomás Fernández Löbbe created SOLR-10237:


 Summary: Poly-Fields should error if subfield has docValues=true
 Key: SOLR-10237
 URL: https://issues.apache.org/jira/browse/SOLR-10237
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Priority: Minor


DocValues aren’t really supported in poly-fields at this point, but they don’t 
complain if the schema definition of the subfield has docValues=true. This 
leaves the index in an inconsistent state, since the SchemaField has 
docValues=true but there are no DV for the field.
Since this breaks compatibility, I think we should just emit a warning unless 
the subfield is an instance of {{PointType}}. With 
{{\[Int/Long/Float/Double/Date\]PointType}} subfields, this is particularly 
important, since they use {{IndexOrDocValuesQuery}}, which would return 
incorrect results.






[jira] [Updated] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10236:
-
Priority: Minor  (was: Major)

> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0)






[jira] [Updated] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10236:
-
Attachment: SOLR-10236.patch

> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0)






[jira] [Updated] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10236:
-
Affects Version/s: master (7.0)

> Remove FieldType.getNumericType() from master
> -
>
> Key: SOLR-10236
> URL: https://issues.apache.org/jira/browse/SOLR-10236
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Tomás Fernández Löbbe
> Attachments: SOLR-10236.patch
>
>
> {{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
> SOLR-10011, and it was deprecated (replaced by {{NumberType 
> getNumberType()}}). We can remove it from master (7.0)






[jira] [Created] (SOLR-10236) Remove FieldType.getNumericType() from master

2017-03-06 Thread JIRA
Tomás Fernández Löbbe created SOLR-10236:


 Summary: Remove FieldType.getNumericType() from master
 Key: SOLR-10236
 URL: https://issues.apache.org/jira/browse/SOLR-10236
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe


{{LegacyNumericType FieldType.getNumericType()}} is no longer used since 
SOLR-10011, and it was deprecated (replaced by {{NumberType getNumberType()}}). 
We can remove it from master (7.0)






[jira] [Commented] (SOLR-10233) Add support for different replica types in Solr

2017-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898538#comment-15898538
 ] 

Tomás Fernández Löbbe commented on SOLR-10233:
--

bq. let's make it {{realtimeReplicas=X&appendReplicas=Y&passiveReplicas=Z}}
SGTM. I'll change the names

> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-10233.patch
>
>
> For the majority of the cases, current SolrCloud's  distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or write to the transaction log. Just replicates from 
> _realtime_ or _append_ replicas. Passive replicas can’t become shard leaders 
> (i.e., if there are only passive replicas in the collection at some point, 
> updates will fail the same as if there is no leader, queries continue to work), 
> so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtime=X&append=Y&passive=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=” will translate to “realtime=“ for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection could contain 
> its own different rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. _passive_ replicas only, or similar rules)
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _append_ or _realtime_ 
> replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr",
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   "leader":"true",
>   **"type": "realtime"**},
> "core_node10":{
>   "core":"gettingstarted_shard1_replica2",
>   "base_url":"http://127.0.0.1:7574/solr",
>   "node_name":"127.0.0.1:7574_solr",
>   "state":"active",
>   **"type": "passive"**}},
>   }},
> "shard2":{
>   ...
> {code}
> h2. Back compatibility
> We should be 
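The SolrJ routing preference described in the issue could be sketched like this. `Replica` here is a stand-in record, not SolrJ's actual API; only the replica type names ("realtime", "append", "passive") come from the proposal above:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of client-side routing: search requests prefer "passive"
// replicas when any exist; real-time-get (RTG) requests must avoid them, since
// passive replicas have no transaction log.
public class ReplicaRouting {
    record Replica(String core, String type) {}

    static List<Replica> candidatesForSearch(List<Replica> replicas) {
        List<Replica> passive = replicas.stream()
                .filter(r -> r.type().equals("passive"))
                .collect(Collectors.toList());
        return passive.isEmpty() ? replicas : passive; // fall back when no passive replicas
    }

    static List<Replica> candidatesForRtg(List<Replica> replicas) {
        // passive replicas can't serve RTG, so only realtime/append qualify
        return replicas.stream()
                .filter(r -> !r.type().equals("passive"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Replica> shard = List.of(
                new Replica("core_node5", "realtime"),
                new Replica("core_node8", "append"),
                new Replica("core_node10", "passive"));
        System.out.println("search -> " + candidatesForSearch(shard).get(0).core());
        System.out.println("rtg candidates -> " + candidatesForRtg(shard).size());
    }
}
```

This mirrors the proposed behaviour: searches land on the passive replica, while RTG is restricted to the two replicas that keep a transaction log.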

[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-03-06 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898531#comment-15898531
 ] 

Cao Manh Dat commented on SOLR-8045:


[~noble.paul] There is something left in the blob handler which I am not sure 
how to handle (should it change to "/v2" or not?). The tests still pass even 
if we do not touch it.

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same






[jira] [Updated] (SOLR-10235) fix last TestJdbcDataSource / mock issue with java9

2017-03-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10235:

Attachment: SOLR-10235.patch

Attaching a patch showing what i had in mind.

seems to work fine with both java8 & java9.

[~thetaphi] & [~caomanhdat]: do you guys see any problems with this type of 
approach?

> fix last TestJdbcDataSource / mock issue with java9
> ---
>
> Key: SOLR-10235
> URL: https://issues.apache.org/jira/browse/SOLR-10235
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>  Labels: java9
> Attachments: SOLR-10235.patch
>
>
> The way TestJdbcDataSource was converted to use Mockito in SOLR-9966 still 
> left one outstanding test that was incompatible with Java9: 
> {{testRetrieveFromDriverManager()}} 
> The way this test worked with mock classes was also sketchy, but under java9 
> (even with Mockito) the attempt at using class names to resolve things in the 
> standard SQL DriverManager isn't viable.
> It seems like an easy fix is to create a _real_ class (with a real/fixed 
> classname) that acts as a wrapper around a mockito "Driver" instance just for 
> the purposes of checking that the DriverManager-related code is working 
> properly.






[jira] [Created] (SOLR-10235) fix last TestJdbcDataSource / mock issue with java9

2017-03-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10235:
---

 Summary: fix last TestJdbcDataSource / mock issue with java9
 Key: SOLR-10235
 URL: https://issues.apache.org/jira/browse/SOLR-10235
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


The way TestJdbcDataSource was converted to use Mockito in SOLR-9966 still left 
one outstanding test that was incompatible with Java9: 
{{testRetrieveFromDriverManager()}} 

The way this test worked with mock classes was also sketchy, but under java9 
(even with Mockito) the attempt at using class names to resolve things in the 
standard SQL DriverManager isn't viable.

It seems like an easy fix is to create a _real_ class (with a real/fixed 
classname) that acts as a wrapper around a mockito "Driver" instance just for 
the purposes of checking that the DriverManager related code is working properly.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6436 - Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6436/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.TestInPlaceUpdatesStandalone.testUpdatingDocValues

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E5327FD412D6687F:3342A273798B1629]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.update.TestInPlaceUpdatesStandalone.testUpdatingDocValues(TestInPlaceUpdatesStandalone.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 13084 lines...]
   [junit4] Suite: org.apache.solr.update.TestInPlaceUpdatesStandalone
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-8906) Make transient core cache pluggable.

2017-03-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898488#comment-15898488
 ] 

Noble Paul commented on SOLR-8906:
--

lazy cores itself is a vestige of the old master-slave model where cores were 
not expected to be up. 

So, this is an X-Y problem. Let's ask the question: why do we want to unload a 
core?

We just need to ensure that the resources held by a core are kept minimal. The 
expensive resources are file handles & caches (there could be others, but we can 
ignore them for a while). So, if we manage to free up these resources for an 
unused core we can pretty much achieve our objective. 

> Make transient core cache pluggable.
> 
>
> Key: SOLR-8906
> URL: https://issues.apache.org/jira/browse/SOLR-8906
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> The current Lazy Core stuff is pretty deeply intertwined in CoreContainer. 
> Adding and removing active cores is based on a simple LRU mechanism, but 
> keeping the right cores in the right internal structures involves a lot of 
> attention to locking various objects to update internal structures. This 
> makes it difficult/dangerous to use any other caching algorithms.
> Any single age-out algorithm will have non-optimal access patterns, so making 
> this pluggable would allow better algorithms to be substituted in those cases.
> If we ever extend transient cores to SolrCloud, we need to have load/unload 
> decisions that are cloud-aware rather than entirely local, so in that sense 
> this would lay some groundwork if we ever want to go there.
> So I'm going to try to hack together a PoC. Any ideas on the most sensible 
> pattern for this gratefully received.
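As a rough illustration of what "pluggable" could mean here (hypothetical interface and names, not a proposed API), the current LRU behavior can be expressed behind a small interface so other eviction policies could drop in:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

/** Hypothetical pluggable cache interface for transient cores. */
interface TransientCoreCache<K, V> {
  V get(K name);
  void put(K name, V core);
}

/** LRU policy via an access-ordered LinkedHashMap; mirrors the current age-out behavior. */
class LruCoreCache<K, V> implements TransientCoreCache<K, V> {
  private final Map<K, V> map;

  LruCoreCache(int maxCores, Consumer<V> onEvict) {
    map = new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > maxCores) {
          onEvict.accept(eldest.getValue());  // e.g. close the evicted core
          return true;
        }
        return false;
      }
    };
  }

  public synchronized V get(K name) { return map.get(name); }
  public synchronized void put(K name, V core) { map.put(name, core); }
}

public class CoreCacheDemo {
  public static void main(String[] args) {
    StringBuilder closed = new StringBuilder();
    TransientCoreCache<String, String> cache = new LruCoreCache<>(2, closed::append);
    cache.put("core1", "c1");
    cache.put("core2", "c2");
    cache.get("core1");        // touch core1 so core2 becomes the eldest entry
    cache.put("core3", "c3");  // evicts core2, the least recently used
    if (!closed.toString().equals("c2")) throw new AssertionError(closed);
    System.out.println("evicted: " + closed);
  }
}
```

A different policy (cost-aware, cloud-aware, etc.) would just be another `TransientCoreCache` implementation.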



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10205:

Attachment: (was: SOLR-10205.patch)

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.
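A toy model (not Solr's actual BlockCache code; all names invented) of the failure mode described above: store() can only claim a block from a fixed free pool, and blocks return to the pool only when the map actually evicts, so stores fail while eviction lags:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

public class ToyBlockCache {
  private final ArrayDeque<Integer> freeBlocks = new ArrayDeque<>();
  private final Map<String, Integer> map = new HashMap<>();

  ToyBlockCache(int blocks) {
    for (int i = 0; i < blocks; i++) freeBlocks.push(i);
  }

  /** Returns false when no free block is available -- a "store failure". */
  synchronized boolean store(String key) {
    Integer block = freeBlocks.poll();
    if (block == null) return false;  // nothing to allocate from; caller just skips caching
    map.put(key, block);
    return true;
  }

  /** A block only returns to the free pool when the map evicts its entry. */
  synchronized void evict(String key) {
    Integer block = map.remove(key);
    if (block != null) freeBlocks.push(block);
  }

  public static void main(String[] args) {
    ToyBlockCache cache = new ToyBlockCache(2);
    boolean a = cache.store("a"), b = cache.store("b");
    boolean c = cache.store("c");  // fails: both blocks in use, nothing evicted yet
    if (!a || !b || c) throw new AssertionError();
    cache.evict("a");              // eviction finally frees a block...
    if (!cache.store("c")) throw new AssertionError();  // ...and the store succeeds
    System.out.println("store() failed until eviction freed a block");
  }
}
```

In the real cache the eviction is amortized by the concurrent map, which widens the window in which stores fail.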



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10205:

Attachment: SOLR-10205.patch

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10205:

Attachment: SOLR-10205.patch

Final patch - I plan on committing shortly.

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch, SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-03-06 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898465#comment-15898465
 ] 

Cao Manh Dat commented on SOLR-9835:


[~tomasflobbe] That sounds great. I will create a branch for this ticket and 
run some jenkins tests for the patch before committing.

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is called state machine: 
> replicas start in the same initial state and, for each input, the input is 
> distributed across replicas so all replicas end up with the same next state. 
> But this type of replication has some drawbacks
> - The commit (which is costly) has to run on all replicas
> - Slow recovery, because if a replica misses more than N updates during its 
> down time, the replica has to download the entire index from its leader.
> So we create another replication mode for SolrCloud called state 
> transfer, which acts like master/slave replication. Basically
> - The leader distributes the update to other replicas, but only the leader 
> applies the update to the IW; other replicas just store the update in the 
> UpdateLog (acting like replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs the commits and 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak Consistency for normal queries: clusters can serve stale data. This 
> happens when the leader finishes a commit and a slave is fetching the latest 
> segment. This period can be at most {{pollInterval + time to fetch latest segment}}.
> - Consistency for RTG: if we *do not use DBQs*, replicas will be consistent 
> with the master just like the original SolrCloud mode
> - Weak Availability: just like the original SolrCloud mode. If a leader goes 
> down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE=newCollection=2=1=1
> {code}
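A toy model of the "state transfer" mode sketched in the description (invented names, not the patch): the leader indexes and commits, replicas only log updates and periodically pull the segments they are missing:

```java
import java.util.ArrayList;
import java.util.List;

class Leader {
  final List<String> segments = new ArrayList<>();
  final List<String> tlog = new ArrayList<>();
  void update(String doc) { tlog.add(doc); }
  void commit() {  // only the leader pays the commit cost
    segments.add("seg(" + String.join(",", tlog) + ")");
    tlog.clear();
  }
}

class Replica {
  final List<String> segments = new ArrayList<>();
  final List<String> tlog = new ArrayList<>();
  void update(String doc) { tlog.add(doc); }  // store to UpdateLog only; no local indexing
  void poll(Leader leader) {                   // fetch only the missing segments
    for (int i = segments.size(); i < leader.segments.size(); i++)
      segments.add(leader.segments.get(i));
  }
}

public class StateTransferDemo {
  public static void main(String[] args) {
    Leader leader = new Leader();
    Replica replica = new Replica();
    leader.update("doc1");
    replica.update("doc1");      // leader distributes the update to the replica's tlog
    leader.commit();
    // until the next poll the replica serves stale data (the weak-consistency window)...
    if (!replica.segments.isEmpty()) throw new AssertionError();
    replica.poll(leader);        // ...then it catches up segment-by-segment
    if (!replica.segments.equals(leader.segments)) throw new AssertionError();
    System.out.println("replica caught up: " + replica.segments);
  }
}
```

The staleness window in this model is exactly the gap between `commit()` and the next `poll()`, matching the {{pollInterval + time to fetch latest segment}} bound above.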



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-03-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898463#comment-15898463
 ] 

Noble Paul commented on SOLR-8045:
--

[~caomanhdat] what is left to be done on this?

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7712) SimpleQueryString should support auto fuzziness

2017-03-06 Thread Lee Hinman (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee Hinman updated LUCENE-7712:
---
Attachment: LUCENE-7712.patch

Attached a small patch that adds auto-fuzziness and updates the tests to check 
it.

> SimpleQueryString should support auto fuzziness
> --
>
> Key: LUCENE-7712
> URL: https://issues.apache.org/jira/browse/LUCENE-7712
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: David Pilato
> Attachments: LUCENE-7712.patch
>
>
> Apparently the simpleQueryString query does not support auto fuzziness as the 
> query string does.
> So {{foo:bar~1}} works for both simple query string and query string queries.
> But {{foo:bar~}} works for the query string query but not for the simple query 
> string query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8045) Deploy Solr in ROOT (/) path

2017-03-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-8045:


Assignee: Cao Manh Dat  (was: Noble Paul)

> Deploy Solr in ROOT (/) path 
> -
>
> Key: SOLR-8045
> URL: https://issues.apache.org/jira/browse/SOLR-8045
> Project: Solr
>  Issue Type: Wish
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 6.0
>
> Attachments: SOLR-8045.patch, SOLR-8045.patch, SOLR-8045.patch, 
> SOLR-8045.patch, SOLR-8045.patch
>
>
> This does not mean that the path to access Solr will be changed. All paths 
> will remain as is and would behave exactly the same



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10233) Add support for different replica types in Solr

2017-03-06 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898451#comment-15898451
 ] 

Noble Paul commented on SOLR-10233:
---

the parameters are not explicit when you create the collection 
{{realtime=X=Y=Z}}

let's make it {{realtimeReplicas=X=Y=Z}}

> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-10233.patch
>
>
> For the majority of the cases, current SolrCloud's  distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or write to the transaction log. Just replicates from 
> _realtime_ or _append_ replicas. Passive replicas can’t become shard leaders 
> (i.e., if there are only passive replicas in the collection at some point, 
> updates will fail the same as if there are no leaders, while queries continue to 
> work), so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtime=X=Y=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=” will translate to “realtime=“ for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection could contain 
> its own different rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. *passive* replicas only, or similar rules)
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _append_ or _realtime_ 
> replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr",
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   "leader":"true",
>   **"type": "realtime"**},
> "core_node10":{
>   "core":"gettingstarted_shard1_replica2",
>   "base_url":"http://127.0.0.1:7574/solr",
>   "node_name":"127.0.0.1:7574_solr",
>   "state":"active",
>   **"type": "passive"**}},
>   }},
> "shard2":{
>   ...
> {code}
> h2. Back 

[jira] [Created] (SOLR-10234) "Too many open files" in distrib tests due to fixed HandleLimitFS (regardless of num nodes in test)

2017-03-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10234:
---

 Summary: "Too many open files" in distrib tests due to fixed 
HandleLimitFS (regardless of num nodes in test)
 Key: SOLR-10234
 URL: https://issues.apache.org/jira/browse/SOLR-10234
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


I just got a failure from BasicDistributedZkTest on master 
(acb185b2dc7522e6a4fa55d54e82910736668f8d) that caught my attention -- the 
reported failure was "Remote error message: Exception writing document id 57 to 
the index; possible analysis error.", but digging into the logs the root cause 
was "Too many open files" coming from the mock
{{HandleLimitFS}} class we have...

{noformat}

   [junit4]   2> 495598 ERROR (qtp155652658-4405) [] 
o.a.s.h.RequestHandlerBase java.nio.file.FileSystemException: 
/home/jenkins/lucene-solr/solr/build/solr-core/test/J1/temp/solr.cloud.BasicDistributedZkTest_8D04773C07230D3B-001/index-NIOFSDirectory-002/_o_Memory_0.mdvm:
 Too many open files
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:160)
   [junit4]   2>at 
java.base/java.nio.file.Files.newOutputStream(Files.java:218)
   [junit4]   2>at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
   [junit4]   2>at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
   [junit4]   2>at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
   [junit4]   2>at 
org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:665)
...
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=BasicDistributedZkTest -Dtests.method=test 
-Dtests.seed=8D04773C07230D3B -Dtests.slow=true -Dtests.locale=en-ER 
-Dtests.timezone=Europe/Volgograd -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR259s J1 | BasicDistributedZkTest.test <<<
{noformat}

...what concerns me in particular about this is that it's coming from a 
distributed test, involving multiple "nodes" (all using the same 
randomized similarity) writing to the same "file://" filesystem in the same 
JVM -- but {{TestRuleTemporaryFilesCleanup}} seems to be initializing the 
filesystem with a fixed {{MAX_OPEN_FILES = 2048}}

So perhaps all (distributed/cloud) Solr tests should use 
{{SuppressFileSystems}} to ensure we don't get false failures like this?

Or perhaps we should enhance the way we use {{HandleLimitFS}} in our test 
scaffolding so that we can give each solr node its own mock filesystem? (with 
its own MAX_OPEN_FILES limit?)
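A minimal sketch of the per-node idea (hypothetical names, not the actual {{HandleLimitFS}} code): give each simulated node its own handle budget instead of one shared limit, so one node's usage can't exhaust another's:

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Toy per-node open-file budget, analogous to a per-node MAX_OPEN_FILES. */
class HandleBudget {
  private final int max;
  private final AtomicInteger open = new AtomicInteger();

  HandleBudget(int max) { this.max = max; }

  void onOpen() {  // called on every open, like HandleLimitFS.onOpen
    if (open.incrementAndGet() > max) {
      open.decrementAndGet();
      throw new IllegalStateException("Too many open files (limit " + max + ")");
    }
  }

  void onClose() { open.decrementAndGet(); }
}

public class PerNodeLimitDemo {
  public static void main(String[] args) {
    // one budget per simulated node, instead of one fixed limit shared by all nodes
    HandleBudget node1 = new HandleBudget(2), node2 = new HandleBudget(2);
    node1.onOpen();
    node1.onOpen();   // node1 is now at its own limit
    node2.onOpen();   // node2 is unaffected by node1's usage
    boolean failed = false;
    try {
      node1.onOpen();
    } catch (IllegalStateException expected) {
      failed = true;
    }
    if (!failed) throw new AssertionError("node1 should have hit its limit");
    System.out.println("node1 hit its own limit; node2 still has headroom");
  }
}
```

With a single shared budget, the same total usage across many nodes trips the limit even though no individual node is leaking handles.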



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10232:
-

Assignee: (was: Joel Bernstein)

> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is probably a fairly straightforward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on the text of a resume.
> * predict the age of an author based on the text of a document.
> * predict the number of clicks based on the text of an article.
> * predict the number of sales based on a product description.
> * predict the number of prior art references listed based on the text of a patent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10232:
-

Assignee: Joel Bernstein

> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> This is probably a fairly straightforward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on the text of a resume.
> * predict the age of an author based on the text of a document.
> * predict the number of clicks based on the text of an article.
> * predict the number of sales based on a product description.
> * predict the number of prior art references listed based on the text of a patent.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8919) Add SELECT COUNT(DISTINCT COL) queries to the SQL Interface

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-8919:


Assignee: Joel Bernstein

> Add SELECT COUNT(DISTINCT COL) queries to the SQL Interface
> ---
>
> Key: SOLR-8919
> URL: https://issues.apache.org/jira/browse/SOLR-8919
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> While analyzing the Enron emails for SOLR-, I was wishing that 
> COUNT(DISTINCT) was implemented. This ticket is to implement it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8919) Add SELECT COUNT(DISTINCT COL) queries to the SQL Interface

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898430#comment-15898430
 ] 

Joel Bernstein commented on SOLR-8919:
--

I haven't yet looked at the query plan that Calcite is generating for this, 
but I suspect it may not work properly with all the functionality that is being 
pushed down.

I'd like to push down this functionality anyway, because we have some nice 
tools for doing this in Solr. 



> Add SELECT COUNT(DISTINCT COL) queries to the SQL Interface
> ---
>
> Key: SOLR-8919
> URL: https://issues.apache.org/jira/browse/SOLR-8919
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
> Fix For: 6.2, master (7.0)
>
>
> While analyzing the Enron emails for SOLR-, I was wishing that 
> COUNT(DISTINCT) was implemented. This ticket is to implement it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9835) Create another replication mode for SolrCloud

2017-03-06 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898427#comment-15898427
 ] 

Tomás Fernández Löbbe commented on SOLR-9835:
-

[~caomanhdat], I created SOLR-10233 with some related work

> Create another replication mode for SolrCloud
> -
>
> Key: SOLR-9835
> URL: https://issues.apache.org/jira/browse/SOLR-9835
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch, 
> SOLR-9835.patch, SOLR-9835.patch, SOLR-9835.patch
>
>
> The current replication mechanism of SolrCloud is a state machine: replicas 
> start in the same initial state, and each input is distributed across 
> replicas so that all replicas end up in the same next state. But this type 
> of replication has some drawbacks:
> - The commit (which is costly) has to run on all replicas.
> - Slow recovery: if a replica misses more than N updates while it is down, 
> it has to download the entire index from its leader.
> So we propose another replication mode for SolrCloud, called state transfer, 
> which acts like master/slave replication. Basically:
> - The leader distributes each update to the other replicas, but only the 
> leader applies the update to the IndexWriter; the other replicas just store 
> the update in the UpdateLog (as in replication).
> - Replicas frequently poll the latest segments from the leader.
> Pros:
> - Lightweight indexing, because only the leader runs commits and applies 
> updates.
> - Very fast recovery: replicas just have to download the missing segments.
> From a CAP point of view, this ticket tries to promise end users a 
> distributed system with:
> - Partition tolerance
> - Weak consistency for normal queries: clusters can serve stale data. This 
> happens between the leader finishing a commit and a slave fetching the 
> latest segments, a period of at most {{pollInterval + time to fetch latest 
> segment}}.
> - Consistency for RTG: if we *do not use DBQs*, replicas will be consistent 
> with the leader, just like the original SolrCloud mode.
> - Weak availability: just like the original SolrCloud mode, if a leader goes 
> down, clients must wait until a new leader is elected.
> To use this new replication mode, a new collection must be created with an 
> additional parameter {{liveReplicas=1}}:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1&liveReplicas=1
> {code}






[jira] [Updated] (SOLR-10233) Add support for different replica types in Solr

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10233:
-
Attachment: SOLR-10233.patch

Here is an initial patch that adds the Type enum to Replica and some handling 
of passive replicas. It relies on code from SOLR-9835 (an older patch; I'll 
update that next). It is also full of nocommits.

> Add support for different replica types in Solr
> ---
>
> Key: SOLR-10233
> URL: https://issues.apache.org/jira/browse/SOLR-10233
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-10233.patch
>
>
> For the majority of the cases, current SolrCloud's  distributed indexing is 
> great. There is a subset of use cases for which the legacy Master/Slave 
> replication may fit better:
> * Don’t require NRT
> * LIR can become an issue, prefer availability of reads vs consistency or NRT
> * High number of searches (requiring many search nodes)
> SOLR-9835 is adding replicas that don’t do indexing, just update their 
> transaction log. This Jira is to extend that idea and provide the following 
> replica types:
> * *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
> of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
> can become a leader. This is the only type supported in SolrCloud at this 
> time and will be the default.
> * *Append:* Writes to transaction log, but not to index, uses replication. 
> Any _append_ replica can become leader (by first applying all local 
> transaction log elements). If a replica is of type _append_ but is also the 
> leader, it will behave as a _realtime_. This is exactly what SOLR-9835 is 
> proposing (non-live replicas)
> * *Passive:* Doesn’t index or write to the transaction log. Just replicates 
> from _realtime_ or _append_ replicas. Passive replicas can’t become shard 
> leaders (i.e., if at some point there are only passive replicas in the 
> collection, updates will fail the same as if there were no leader, while 
> queries continue to work), so they don’t even participate in elections.
> When the leader replica of the shard receives an update, it will distribute 
> it to all _realtime_ and _append_ replicas, the same as it does today. It 
> won't distribute to _passive_ replicas.
> By using a combination of _append_ and _passive_ replicas, one can achieve an 
> equivalent of the legacy Master/Slave architecture in SolrCloud mode with 
> most of its benefits, including high availability of writes. 
> h2. API (v1 style)
> {{/admin/collections?action=CREATE…&*realtime=X&append=Y&passive=Z*}}
> {{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}
> * “replicationFactor=” will translate to “realtime=“ for back compatibility
> * if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
> passives)
> h2. Placement Strategies
> By using replica placement rules, one should be able to dedicate nodes to 
> search-only and write-only workloads. For example:
> {code}
> shard:*,replica:*,type:passive,fleet:slaves
> {code}
> where “type” is a new condition supported by the rule engine, and 
> “fleet:slaves” is a regular tag. Note that rules are only applied when the 
> replicas are created, so a later change in tags won't affect existing 
> replicas. Also, rules are per collection, so each collection can contain 
> its own rules.
> Note that on the server side Solr also needs to know how to distribute the 
> shard requests (maybe ShardHandler?) if we want to hit only a subset of 
> replicas (i.e. *passive* replicas only, or similar rules)
> h2. SolrJ
> SolrCloud client could be smart to prefer _passive_ replicas for search 
> requests when available (and if configured to do so). _Passive_ replicas 
> can’t respond RTG requests, so those should go to _append_ or _realtime_ 
> replicas. 
> h2. Cluster/Collection state
> {code}
> {"gettingstarted":{
>   "replicationFactor":"1",
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"2",
>   "autoAddReplicas":"false",
>   "shards":{
> "shard1":{
>   "range":"8000-",
>   "state":"active",
>   "replicas":{
> "core_node5":{
>   "core":"gettingstarted_shard1_replica1",
>   "base_url":"http://127.0.0.1:8983/solr",
>   "node_name":"127.0.0.1:8983_solr",
>   "state":"active",
>   "leader":"true",
>   **"type": "realtime"**},
> "core_node10":{
>   "core":"gettingstarted_shard1_replica2",
>   "base_url":"http://127.0.0.1:7574/solr",
>   "node_name":"127.0.0.1:7574_solr",
>   "state":"active",
>  

[jira] [Commented] (LUCENE-7727) Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility

2017-03-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898366#comment-15898366
 ] 

Uwe Schindler commented on LUCENE-7727:
---

See http://ant.apache.org/ivy/history/2.3.0/use/postresolvetask.html:

{quote}
*Child elements*
(Since 2.3)

These child elements are defining an inlined ivy.xml's dependencies elements. 
(...)
{quote}

For flexmark we actually require multiple dependencies, and in that version 
this can only be handled with a single cachepath. An alternative would be to 
have an ivy.xml file, but that's complicated for common-build.xml.

So the only way is to update to Ivy 2.3, as has been documented for years.

> Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility
> 
>
> Key: LUCENE-7727
> URL: https://issues.apache.org/jira/browse/LUCENE-7727
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7727.patch, LUCENE-7727.patch
>
>
> The documentation tasks use a library called "pegdown" to convert Markdown to 
> HTML. Unfortunately, the developer of pegdown EOLed it and points the users 
> to a faster replacement: flexmark-java 
> (https://github.com/vsch/flexmark-java).
> This would not be important for us if pegdown worked with Java 9, but it is 
> also affected by the usual "setAccessible into private Java APIs" issue 
> (see my talk at FOSDEM: 
> https://fosdem.org/2017/schedule/event/jigsaw_challenges/).
> The migration should not be too hard; it's just a bit of Groovy code 
> rewriting and dependency changes.
> This is the pegdown problem:
> {noformat}
> Caused by: java.lang.RuntimeException: Could not determine whether class 
> 'org.pegdown.Parser$$parboiled' has already been loaded
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:213)
> at 
> org.parboiled.transform.ParserTransformer.transformParser(ParserTransformer.java:35)
> at org.parboiled.Parboiled.createParser(Parboiled.java:54)
> ... 50 more
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.findLoadedClass(java.lang.String) accessible: module 
> java.base does not "opens java.lang" to unnamed module @551b6736
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:335)
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:278)
> at 
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:196)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:190)
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:206)
> ... 52 more
> {noformat}






[jira] [Commented] (LUCENE-7727) Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility

2017-03-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898354#comment-15898354
 ] 

Uwe Schindler commented on LUCENE-7727:
---

Outdated Ivy? We require 2.3 minimum. Maybe clean up your ~/.ant/lib folder and 
run "ant ivy-bootstrap"?

> Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility
> 
>
> Key: LUCENE-7727
> URL: https://issues.apache.org/jira/browse/LUCENE-7727
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7727.patch, LUCENE-7727.patch
>
>
> The documentation tasks use a library called "pegdown" to convert Markdown to 
> HTML. Unfortunately, the developer of pegdown EOLed it and points the users 
> to a faster replacement: flexmark-java 
> (https://github.com/vsch/flexmark-java).
> This would not be important for us if pegdown worked with Java 9, but it is 
> also affected by the usual "setAccessible into private Java APIs" issue 
> (see my talk at FOSDEM: 
> https://fosdem.org/2017/schedule/event/jigsaw_challenges/).
> The migration should not be too hard; it's just a bit of Groovy code 
> rewriting and dependency changes.
> This is the pegdown problem:
> {noformat}
> Caused by: java.lang.RuntimeException: Could not determine whether class 
> 'org.pegdown.Parser$$parboiled' has already been loaded
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:213)
> at 
> org.parboiled.transform.ParserTransformer.transformParser(ParserTransformer.java:35)
> at org.parboiled.Parboiled.createParser(Parboiled.java:54)
> ... 50 more
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.findLoadedClass(java.lang.String) accessible: module 
> java.base does not "opens java.lang" to unnamed module @551b6736
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:335)
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:278)
> at 
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:196)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:190)
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:206)
> ... 52 more
> {noformat}






[jira] [Updated] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10232:
--
Description: 
This is probably a fairly straightforward extension of our Logistic Regression 
on text capabilities. The main idea is to predict numeric outcomes based on 
text. Examples:

* predict salary based on the text of a resume.
* predict the age of an author based on the text of a document.
* predict the number of clicks based on the text of an article.
* predict the number of sales based on a product description.
* predict the number of prior art references listed based on the text of a 
patent.
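To make the "predict a number from text" idea concrete, here is a minimal ordinary-least-squares fit for a single numeric feature (e.g. some document-level statistic derived from text). This is an illustrative sketch only; an actual implementation would extract many features from term vectors:

```java
// Illustrative one-feature ordinary least squares; not the Solr implementation.
public class SimpleLinReg {
    // Returns {slope, intercept} minimizing the sum of squared errors.
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return new double[] {slope, intercept};
    }

    public static void main(String[] args) {
        // y = 2x + 1 exactly, so the fit recovers slope 2 and intercept 1
        double[] coef = fit(new double[] {1, 2, 3}, new double[] {3, 5, 7});
        System.out.println(coef[0] + " " + coef[1]); // 2.0 1.0
    }
}
```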



  was:
This is probably a fairly straight forward extension of our Logistic Regression 
on text capabilities. The main idea is to predict numeric outcomes based on 
text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.
* predict number of sales based on product description.




> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is probably a fairly straight forward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on a text in a resume.
> * predict age of author based on text of document.
> * predict number of clicks based on text in an article.
> * predict number of sales based on product description.
> * predict the number of prior art listed based on the text of the patent.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 708 - Still Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/708/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
'sanitycheck' results against client: 
org.apache.solr.client.solrj.impl.HttpSolrClient@3c6d11ea (not leader) wrong 
[docid] for SolrDocument{id=25, 
id_field_copy_that_does_not_support_in_place_update_s=25, title_s=title25, 
id_i=25, inplace_updatable_float=101.0, _version_=1561158095162310656, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0, [docid]=274} expected:<247> but 
was:<274>

Stack Trace:
java.lang.AssertionError: 'sanitycheck' results against client: 
org.apache.solr.client.solrj.impl.HttpSolrClient@3c6d11ea (not leader) wrong 
[docid] for SolrDocument{id=25, 
id_field_copy_that_does_not_support_in_place_update_s=25, title_s=title25, 
id_i=25, inplace_updatable_float=101.0, _version_=1561158095162310656, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0, [docid]=274} expected:<247> but 
was:<274>
at 
__randomizedtesting.SeedInfo.seed([B0D418BB606CC73A:38802761CE90AAC2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.assertDocIdsAndValuesInResults(TestInPlaceUpdatesDistrib.java:442)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.assertDocIdsAndValuesAgainstAllClients(TestInPlaceUpdatesDistrib.java:413)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:321)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-9017) Implement PreparedStatementImpl parameterization

2017-03-06 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898286#comment-15898286
 ] 

Kevin Risden commented on SOLR-9017:


The Spark-specific error message is:

{code}
java.lang.UnsupportedOperationException
at 
org.apache.solr.client.solrj.io.sql.ConnectionImpl.prepareStatement(ConnectionImpl.java:217)
{code}

This correlates to:
{code}
  @Override
  public PreparedStatement prepareStatement(String sql, int resultSetType,
      int resultSetConcurrency) throws SQLException {
    throw new UnsupportedOperationException();
  }
{code}
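One naive way to implement parameterization is client-side literal substitution before the SQL is sent. The sketch below is hypothetical and is not Solr's PreparedStatementImpl; it deliberately ignores the harder case of '?' characters appearing inside quoted SQL literals:

```java
// Naive sketch of '?' parameter substitution for a JDBC-style prepared
// statement; not Solr's PreparedStatementImpl. It does not handle '?'
// occurring inside quoted SQL literals.
public class ParamSubstituter {
    public static String substitute(String sql, Object... params) {
        StringBuilder out = new StringBuilder();
        int p = 0;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '?' && p < params.length) {
                Object v = params[p++];
                if (v instanceof Number) {
                    out.append(v); // numbers are emitted unquoted
                } else {
                    // quote strings and escape embedded single quotes
                    out.append('\'')
                       .append(v.toString().replace("'", "''"))
                       .append('\'');
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(substitute("SELECT a FROM t WHERE b = ? AND c = ?", "x", 5));
        // SELECT a FROM t WHERE b = 'x' AND c = 5
    }
}
```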

> Implement PreparedStatementImpl parameterization
> 
>
> Key: SOLR-9017
> URL: https://issues.apache.org/jira/browse/SOLR-9017
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Assignee: Kevin Risden
> Attachments: SOLR-9017.patch
>
>
> SOLR-8809 implemented prepared statements to avoid a NPE when clients were 
> connecting. The next step is to flesh out the rest of the class and implement 
> parameterization. 






[jira] [Updated] (SOLR-10233) Add support for different replica types in Solr

2017-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10233:
-
Description: 
For the majority of the cases, current SolrCloud's  distributed indexing is 
great. There is a subset of use cases for which the legacy Master/Slave 
replication may fit better:

* Don’t require NRT
* LIR can become an issue, prefer availability of reads vs consistency or NRT
* High number of searches (requiring many search nodes)

SOLR-9835 is adding replicas that don’t do indexing, just update their 
transaction log. This Jira is to extend that idea and provide the following 
replica types:

* *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
can become a leader. This is the only type supported in SolrCloud at this time 
and will be the default.
* *Append:* Writes to transaction log, but not to index, uses replication. Any 
_append_ replica can become leader (by first applying all local transaction log 
elements). If a replica is of type _append_ but is also the leader, it will 
behave as a _realtime_. This is exactly what SOLR-9835 is proposing (non-live 
replicas)
* *Passive:* Doesn’t index or write to the transaction log. Just replicates 
from _realtime_ or _append_ replicas. Passive replicas can’t become shard 
leaders (i.e., if at some point there are only passive replicas in the 
collection, updates will fail the same as if there were no leader, while 
queries continue to work), so they don’t even participate in elections.

When the leader replica of the shard receives an update, it will distribute it 
to all _realtime_ and _append_ replicas, the same as it does today. It won't 
distribute to _passive_ replicas.

By using a combination of _append_ and _passive_ replicas, one can achieve an 
equivalent of the legacy Master/Slave architecture in SolrCloud mode with most 
of its benefits, including high availability of writes. 

h2. API (v1 style)
{{/admin/collections?action=CREATE…&*realtime=X&append=Y&passive=Z*}}
{{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}

* “replicationFactor=” will translate to “realtime=“ for back compatibility
* if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
passives)

h2. Placement Strategies

By using replica placement rules, one should be able to dedicate nodes to 
search-only and write-only workloads. For example:
{code}
shard:*,replica:*,type:passive,fleet:slaves
{code}
where “type” is a new condition supported by the rule engine, and 
“fleet:slaves” is a regular tag. Note that rules are only applied when the 
replicas are created, so a later change in tags won't affect existing replicas. 
Also, rules are per collection, so each collection can contain its own rules.
Note that on the server side Solr also needs to know how to distribute the 
shard requests (maybe ShardHandler?) if we want to hit only a subset of 
replicas (i.e. *passive* replicas only, or similar rules)
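The rule string above is essentially a comma-separated list of condition:value pairs. A minimal, hypothetical parser showing that shape (the real rule engine supports richer syntax such as operators and wildcard semantics):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical parser for a placement-rule string such as
// "shard:*,replica:*,type:passive,fleet:slaves"; the actual rule engine
// supports richer syntax, so this only illustrates the structure.
public class RuleParser {
    public static Map<String, String> parse(String rule) {
        Map<String, String> conditions = new LinkedHashMap<>();
        for (String part : rule.split(",")) {
            String[] kv = part.split(":", 2); // split once; value may contain ':'
            conditions.put(kv[0], kv[1]);
        }
        return conditions;
    }

    public static void main(String[] args) {
        System.out.println(parse("shard:*,replica:*,type:passive,fleet:slaves"));
        // {shard=*, replica=*, type=passive, fleet=slaves}
    }
}
```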

h2. SolrJ
SolrCloud client could be smart to prefer _passive_ replicas for search 
requests when available (and if configured to do so). _Passive_ replicas can’t 
respond RTG requests, so those should go to _append_ or _realtime_ replicas. 

h2. Cluster/Collection state

{code}
{"gettingstarted":{
  "replicationFactor":"1",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node5":{
  "core":"gettingstarted_shard1_replica1",
  "base_url":"http://127.0.0.1:8983/solr",
  "node_name":"127.0.0.1:8983_solr",
  "state":"active",
  "leader":"true",
  **"type": "realtime"**},
"core_node10":{
  "core":"gettingstarted_shard1_replica2",
  "base_url":"http://127.0.0.1:7574/solr",
  "node_name":"127.0.0.1:7574_solr",
  "state":"active",
  **"type": "passive"**}},
  }},
"shard2":{
  ...
{code}

h2. Back compatibility
We should be able to support back compatibility by assuming replicas without a 
“type” property are _realtime_ replicas. 
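The back-compat rule described above can be sketched as a one-line default (the property name "type" is an assumption based on this proposal, not final API):

```java
import java.util.Map;

// Sketch of the proposed back-compat rule: a replica whose state carries no
// "type" property is treated as a realtime replica. The property name is an
// assumption from this proposal, not a finalized API.
public class ReplicaType {
    public static String typeOf(Map<String, String> replicaProps) {
        return replicaProps.getOrDefault("type", "realtime");
    }

    public static void main(String[] args) {
        System.out.println(typeOf(Map.of()));                  // realtime
        System.out.println(typeOf(Map.of("type", "passive"))); // passive
    }
}
```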

h2. Failure Scenarios for passive replicas

h3. Replica-Leader partition
In SolrCloud today, in this scenario the replica would be placed in LIR. With 
_passive_ replicas, a replica may not be able to replicate for some time (and 
fall behind the index), but queries can still be served. Once the connection 
is re-established, replication will continue.

h3. Replica ZooKeeper partition
_Passive_ replica will leave the cluster. “Smart clients” and other replicas 
(e.g. for distributed search) won’t find it and won’t query on it. Direct 
search requests to the replica may still succeed. 

h3. Passive replica dies (or is unreachable)
Replica won’t 

[jira] [Created] (SOLR-10233) Add support for different replica types in Solr

2017-03-06 Thread JIRA
Tomás Fernández Löbbe created SOLR-10233:


 Summary: Add support for different replica types in Solr
 Key: SOLR-10233
 URL: https://issues.apache.org/jira/browse/SOLR-10233
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe


For the majority of the cases, current SolrCloud's  distributed indexing is 
great. There is a subset of use cases for which the legacy Master/Slave 
replication may fit better:

* Don’t require NRT
* LIR can become an issue, prefer availability of reads vs consistency or NRT
* High number of searches (requiring many search nodes)

SOLR-9835 is adding replicas that don’t do indexing, just update their 
transaction log. This Jira is to extend that idea and provide the following 
replica types:

* *Realtime:* Writes updates to transaction log and indexes locally. Replicas 
of type “realtime” support NRT (soft commits) and RTG. Any _realtime_ replica 
can become a leader. This is the only type supported in SolrCloud at this time 
and will be the default.
* *Append:* Writes to transaction log, but not to index, uses replication. Any 
_append_ replica can become leader (by first applying all local transaction log 
elements). If a replica is of type _append_ but is also the leader, it will 
behave as a _realtime_. This is exactly what SOLR-9835 is proposing (non-live 
replicas)
* *Passive:* Doesn’t index or write to the transaction log. Just replicates 
from _realtime_ or _append_ replicas. Passive replicas can’t become shard 
leaders (i.e., if at some point there are only passive replicas in the 
collection, updates will fail the same as if there were no leader, while 
queries continue to work), so they don’t even participate in elections.

When the leader replica of the shard receives an update, it will distribute it 
to all _realtime_ and _append_ replicas, the same as it does today. It won't 
distribute to _passive_ replicas.

By using a combination of _append_ and _passive_ replicas, one can achieve an 
equivalent of the legacy Master/Slave architecture in SolrCloud mode with most 
of its benefits, including high availability of writes. 

h2. API (v1 style)
{{/admin/collections?action=CREATE…&*realtime=X&append=Y&passive=Z*}}
{{/admin/collections?action=ADDREPLICA…&*type=\[realtime/append/passive\]*}}

* “replicationFactor=” will translate to “realtime=“ for back compatibility
* if _passive_ > 0, _append_ or _realtime_ need to be >= 1 (can’t be all 
passives)

h2. Placement Strategies

By using replica placement rules, one should be able to dedicate nodes to 
search-only and write-only workloads. For example:
{code}
shard:*,replica:*,type:passive,fleet:slaves
{code}
where “type” is a new condition supported by the rule engine, and 
“fleet:slaves” is a regular tag. Note that rules are only applied when the 
replicas are created, so a later change in tags won't affect existing replicas. 
Also, rules are per collection, so each collection can contain its own rules.
Note that on the server side Solr also needs to know how to distribute the 
shard requests (maybe ShardHandler?) if we want to hit only a subset of 
replicas (i.e. *passive* replicas only, or similar rules)

h2. SolrJ
SolrCloud client could be smart to prefer _passive_ replicas for search 
requests when available (and if configured to do so). _Passive_ replicas can’t 
respond RTG requests, so those should go to _append_ or _realtime_ replicas. 

h2. Cluster/Collection state

{code}
{"gettingstarted":{
  "replicationFactor":"1",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node5":{
  "core":"gettingstarted_shard1_replica1",
  "base_url":"http://127.0.0.1:8983/solr",
  "node_name":"127.0.0.1:8983_solr",
  "state":"active",
  "leader":"true",
  **"type": "realtime"**},
"core_node10":{
  "core":"gettingstarted_shard1_replica2",
  "base_url":"http://127.0.0.1:7574/solr",
  "node_name":"127.0.0.1:7574_solr",
  "state":"active",
  **"type": "passive"**}},
  }},
"shard2":{
  ...
{code}

h2. Back compatibility
We should be able to support back compatibility by assuming replicas without a 
“type” property are _realtime_ replicas. 

h2. Failure Scenarios for passive replicas

h3. Replica-Leader partition
In SolrCloud today, in this scenario the replica would be placed in LIR. With 
_passive_ replicas, a replica may not be able to replicate for some time (and 
fall behind the index), but queries can still be served. Once the connection 
is re-established, replication will continue.

h3. 

[jira] [Commented] (LUCENE-7716) Reduce specialization in TopFieldCollector

2017-03-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898141#comment-15898141
 ] 

Hoss Man commented on LUCENE-7716:
--

NOTE: this was also committed to branch_6x, but there was a typo, so gitbot 
didn't auto-update the issue...

http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/55ddb5f2

> Reduce specialization in TopFieldCollector
> --
>
> Key: LUCENE-7716
> URL: https://issues.apache.org/jira/browse/LUCENE-7716
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7716.patch
>
>
> TopFieldCollector optimizes the single-comparator case. I think we could 
> replace this specialization with a MultiLeafFieldComparator wrapper, 
> similarly to how MultiCollector works. This would have the benefit of 
> replacing code duplication of non-trivial logic with a simple wrapper that 
> delegates calls to its sub comparators.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-8185) Add operations support to streaming metrics

2017-03-06 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-8185.
-
Resolution: Won't Fix

This work is superseded by SOLR-9916.

> Add operations support to streaming metrics
> ---
>
> Key: SOLR-8185
> URL: https://issues.apache.org/jira/browse/SOLR-8185
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8185.patch
>
>
> Adds support for operations on stream metrics.
> With this feature one can modify tuple values before applying to the computed 
> metric. There are a lot of use-cases I can see with this - I'll describe one 
> here.
> Imagine you have a RollupStream which is computing the average over some 
> field but you cannot be sure that all documents have a value for that field, 
> ie the value is null. When the value is null you want to treat it as a 0. 
> With this feature you can accomplish that like this
> {code}
> rollup(
>   search(collection1, q=*:*, fl=\"a_s,a_i,a_f\", sort=\"a_s asc\"),
>   over=\"a_s\",
>   avg(a_i, replace(null, withValue=0)),
>   count(*),
> )
> {code}
> The operations are applied to the tuple for each metric in the stream which 
> means you perform different operations on different metrics without being 
> impacted by operations on other metrics. 
> Adding to our previous example, imagine you want to also get the min of a 
> field but do not consider null values.
> {code}
> rollup(
>   search(collection1, q=*:*, fl=\"a_s,a_i,a_f\", sort=\"a_s asc\"),
>   over=\"a_s\",
>   avg(a_i, replace(null, withValue=0)),
>   min(a_i),
>   count(*),
> )
> {code}
> Also, the tuple is not modified for streams that might wrap this one. Ie, the 
> only thing that sees the applied operation is that particular metric. If you 
> want to apply operations for wrapping streams you can still achieve that with 
> the SelectStream (SOLR-7669).
> One feature I'm investigating but this patch DOES NOT add is the ability to 
> assign names to the resulting metric value. For example, to allow for 
> something like this
> {code}
> rollup(
>   search(collection1, q=*:*, fl=\"a_s,a_i,a_f\", sort=\"a_s asc\"),
>   over=\"a_s\",
>   avg(a_i, replace(null, withValue=0), as="avg_a_i_null_as_0"),
>   avg(a_i),
>   count(*, as="totalCount"),
> )
> {code}
> Right now that isn't possible because the identifier for each metric would be 
> the same "avg_a_i" and as such both couldn't be returned. It's relatively 
> easy to add but I have to investigate its impact on the SQL and FacetStream 
> areas.
> Depends on SOLR-7669 (SelectStream)
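The per-metric operation idea in the quoted description can be sketched outside Solr as two small transforms. This is an illustrative sketch only, not the SolrJ streaming API: `avgWithReplace` mirrors {{avg(a_i, replace(null, withValue=0))}} and `minIgnoringNull` mirrors a min that skips nulls; the class and method names are hypothetical.

```java
import java.util.List;
import java.util.Objects;
import java.util.function.UnaryOperator;

// Hypothetical sketch of per-metric operations (not the SolrJ API):
// each metric applies its own operation to the tuple value, so one
// metric can replace nulls with 0 while another ignores them entirely.
public class MetricOps {
    // Average, applying an operation (e.g. null -> 0) to each value first.
    static double avgWithReplace(List<Integer> values, UnaryOperator<Integer> op) {
        double sum = 0;
        for (Integer v : values) sum += op.apply(v);
        return sum / values.size();
    }

    // Min over the non-null values only; nulls are simply skipped.
    static int minIgnoringNull(List<Integer> values) {
        return values.stream()
                .filter(Objects::nonNull)
                .mapToInt(Integer::intValue)
                .min()
                .orElseThrow();
    }
}
```

For the values {{2, null, 4}}, the null-as-0 average is 2.0 while the null-skipping min is 2, showing how the two metrics see different effective inputs from the same tuples.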






[jira] [Commented] (SOLR-8185) Add operations support to streaming metrics

2017-03-06 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898133#comment-15898133
 ] 

Dennis Gove commented on SOLR-8185:
---

I agree - Evaluators take care of this feature.

> Add operations support to streaming metrics
> ---
>
> Key: SOLR-8185
> URL: https://issues.apache.org/jira/browse/SOLR-8185
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8185.patch
>
>






[jira] [Updated] (SOLR-10228) XLSXWriter can fail on some JVMs if no fonts are available due to JVM/OS packaging of fonts - causes errors in TestXLSXResponseWriter

2017-03-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10228:

 Labels:   (was: Java9)
Description: 
I found this while trying to set up some automated testing against JDK9, but it 
can also affect users of java8/java7 depending on how their JDK/JRE is packaged.

Some JVM packagers (in particular debian JVM "*-headless" packages) do not 
install any fonts along with the JDK/JRE, nor do these packages depend on any 
other packages providing {{fontconfig}} support for the JVM to pick up 
dynamically.

This can cause problems when using 
XLSXWriter -- notably in the form of errors that look like this...

{noformat}
   [junit4]> Throwable #1: java.lang.InternalError: 
java.lang.reflect.InvocationTargetException
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([C8331E32DDBEC2E6:3E224C5FC7B09A3D]:0)
   [junit4]>at 
java.desktop/sun.font.FontManagerFactory$1.run(FontManagerFactory.java:86)
   [junit4]>at 
java.base/java.security.AccessController.doPrivileged(Native Method)
   [junit4]>at 
java.desktop/sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)
   [junit4]>at java.desktop/java.awt.Font.getFont2D(Font.java:495)
   [junit4]>at 
java.desktop/java.awt.Font.canDisplayUpTo(Font.java:2244)
   [junit4]>at 
java.desktop/java.awt.font.TextLayout.singleFont(TextLayout.java:469)
   [junit4]>at 
java.desktop/java.awt.font.TextLayout.<init>(TextLayout.java:530)
   [junit4]>at 
org.apache.poi.ss.util.SheetUtil.getDefaultCharWidth(SheetUtil.java:254)
   [junit4]>at 
org.apache.poi.xssf.streaming.AutoSizeColumnTracker.<init>(AutoSizeColumnTracker.java:117)
   [junit4]>at 
org.apache.poi.xssf.streaming.SXSSFSheet.<init>(SXSSFSheet.java:77)
   [junit4]>at 
org.apache.poi.xssf.streaming.SXSSFWorkbook.createAndRegisterSXSSFSheet(SXSSFWorkbook.java:653)
   [junit4]>at 
org.apache.poi.xssf.streaming.SXSSFWorkbook.createSheet(SXSSFWorkbook.java:646)
   [junit4]>at 
org.apache.solr.handler.extraction.XLSXWriter$SerialWriteWorkbook.<init>(XLSXResponseWriter.java:112)
   [junit4]>at 
org.apache.solr.handler.extraction.XLSXWriter.<init>(XLSXResponseWriter.java:165)
   [junit4]>at 
org.apache.solr.handler.extraction.XLSXResponseWriter.write(XLSXResponseWriter.java:66)
   [junit4]>at 
org.apache.solr.handler.extraction.TestXLSXResponseWriter.getWSResultForQuery(TestXLSXResponseWriter.java:237)
   [junit4]>at 
org.apache.solr.handler.extraction.TestXLSXResponseWriter.getWSResultForQuery(TestXLSXResponseWriter.java:232)
   [junit4]>at 
org.apache.solr.handler.extraction.TestXLSXResponseWriter.testPseudoFields(TestXLSXResponseWriter.java:211)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:547)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)
   [junit4]> Caused by: java.lang.reflect.InvocationTargetException
   [junit4]>at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   [junit4]>at 
java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:473)
   [junit4]>at 
java.desktop/sun.font.FontManagerFactory$1.run(FontManagerFactory.java:84)
   [junit4]>... 55 more
   [junit4]> Caused by: java.lang.NullPointerException
   [junit4]>at 
java.desktop/sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1288)
   [junit4]>at 
java.desktop/sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:225)
   [junit4]>at 
java.desktop/sun.awt.FontConfiguration.init(FontConfiguration.java:107)
   [junit4]>at 
java.desktop/sun.awt.X11FontManager.createFontConfiguration(X11FontManager.java:765)
   [junit4]>at 
java.desktop/sun.font.SunFontManager$2.run(SunFontManager.java:440)
   [junit4]>at 
java.base/java.security.AccessController.doPrivileged(Native Method)
   [junit4]>at 
java.desktop/sun.font.SunFontManager.<init>(SunFontManager.java:385)
   [junit4]>at 

[jira] [Resolved] (SOLR-10228) XLSXWriter can fail on some JVMs if no fonts are available due to JVM/OS packaging of fonts - causes errors in TestXLSXResponseWriter

2017-03-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-10228.
-
Resolution: Not A Problem

> XLSXWriter can fail on some JVMs if no fonts are available due to JVM/OS 
> packaging of fonts - causes errors in TestXLSXResponseWriter 
> --
>
> Key: SOLR-10228
> URL: https://issues.apache.org/jira/browse/SOLR-10228
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Solr Cell (Tika extraction)
> Environment: debian openjdk-9-jdk-headless b158
>Reporter: Hoss Man
>
> I found this while trying to set up some automated testing against JDK9, but 
> it can also affect users of java8/java7 depending on how their JDK/JRE is 
> packaged.
> Some JVM packagers (in particular debian JVM "*-headless" packages) do not 
> install any fonts along with the JDK/JRE, nor do these packages depend on any 
> other packages providing {{fontconfig}} support for the JVM to pick up 
> dynamically.
> This can cause problems when using XLSXWriter -- notably in the form of the 
> java.lang.InternalError stack trace quoted in the issue description above.

[jira] [Updated] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10232:
--
Description: 
This is probably a fairly straight forward extension of our Logistic Regression 
on text capabilities. The main idea is to predict numeric outcomes based on 
text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.
* predict number of sales based on product description.



  was:
This is a probably a fairly straight forward extension of our Logistic 
Regression on text capabilities. The main idea is to predict numeric outcomes 
based on text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.
* predict number of sales based on product description.




> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is probably a fairly straight forward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on a text in a resume.
> * predict age of author based on text of document.
> * predict number of clicks based on text in an article.
> * predict number of sales based on product description.






[jira] [Updated] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10232:
--
Description: 
This is a probably a fairly straight forward extension of our Logistic 
Regression on text capabilities. The main idea is to predict numeric outcomes 
based on text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.
* predict number of sales based on product description.



  was:
This is a probably a fairly straight forward extension of our Logistic 
Regression on text capabilities. The main idea is to predict numeric outcomes 
based on text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.




> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is a probably a fairly straight forward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on a text in a resume.
> * predict age of author based on text of document.
> * predict number of clicks based on text in an article.
> * predict number of sales based on product description.






[jira] [Updated] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10232:
--
Description: 
This is a probably a fairly straight forward extension of our Logistic 
Regression on text capabilities. The main idea is to predict numeric outcomes 
based on text. Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.



  was:
This is a probably a fairly simple extension of our Logistic Regression on text 
capabilities. The main idea is to predict numeric outcomes based on text. 
Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.




> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is a probably a fairly straight forward extension of our Logistic 
> Regression on text capabilities. The main idea is to predict numeric outcomes 
> based on text. Examples:
> * predict salary based on a text in a resume.
> * predict age of author based on text of document.
> * predict number of clicks based on text in an article.






[jira] [Updated] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10232:
--
Description: 
This is a probably a fairly simple extension of our Logistic Regression on text 
capabilities. The main idea is to predict numeric outcomes based on text. 
Examples:

* predict salary based on a text in a resume.
* predict age of author based on text of document.
* predict number of clicks based on text in an article.



  was:
This is a probably a fairly simple extension of our Logistic Regression on text 
capabilities. The main idea is to predict numeric outcomes based on text. For 
example predict salary based on a text in a resume.




> Linear Regression on Text
> -
>
> Key: SOLR-10232
> URL: https://issues.apache.org/jira/browse/SOLR-10232
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This is a probably a fairly simple extension of our Logistic Regression on 
> text capabilities. The main idea is to predict numeric outcomes based on 
> text. Examples:
> * predict salary based on a text in a resume.
> * predict age of author based on text of document.
> * predict number of clicks based on text in an article.






[jira] [Created] (SOLR-10232) Linear Regression on Text

2017-03-06 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10232:
-

 Summary: Linear Regression on Text
 Key: SOLR-10232
 URL: https://issues.apache.org/jira/browse/SOLR-10232
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This is a probably a fairly simple extension of our Logistic Regression on text 
capabilities. The main idea is to predict numeric outcomes based on text. For 
example predict salary based on a text in a resume.








[jira] [Comment Edited] (SOLR-8185) Add operations support to streaming metrics

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897995#comment-15897995
 ] 

Joel Bernstein edited comment on SOLR-8185 at 3/6/17 8:17 PM:
--

I think this ticket has been superseded by the stream Evaluators work. Shall we 
close this ticket out?


was (Author: joel.bernstein):
I think this ticket has been superseded by the stream Evaluators work? Shall we 
close this ticket out?

> Add operations support to streaming metrics
> ---
>
> Key: SOLR-8185
> URL: https://issues.apache.org/jira/browse/SOLR-8185
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8185.patch
>
>






[jira] [Commented] (SOLR-8185) Add operations support to streaming metrics

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897995#comment-15897995
 ] 

Joel Bernstein commented on SOLR-8185:
--

I think this ticket has been superseded by the stream Evaluators work? Shall we 
close this ticket out?

> Add operations support to streaming metrics
> ---
>
> Key: SOLR-8185
> URL: https://issues.apache.org/jira/browse/SOLR-8185
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Minor
> Attachments: SOLR-8185.patch
>
>






[jira] [Assigned] (SOLR-10200) Streaming Expressions should use the shards parameter if present

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-10200:
-

Assignee: Joel Bernstein

> Streaming Expressions should use the shards parameter if present
> -
>
> Key: SOLR-10200
> URL: https://issues.apache.org/jira/browse/SOLR-10200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> Currently Streaming Expressions select shards using an internal ZooKeeper 
> client. This ticket will allow stream sources to accept a *shards* parameter 
> so that non-SolrCloud deployments can set the shards manually.
> The shards parameters will be added as http parameters in the following 
> format:
> collectionA.shards=url1,url1,...&collectionB.shards=url1,url2...
> The /stream handler will then add the shards to the StreamContext so all 
> stream sources can check to see if their collection has the shards set 
> manually.
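The per-collection parameter convention described in the quoted ticket can be sketched as follows. This is a hypothetical illustration, not Solr's /stream handler code: the class and method names (`ShardsParams`, `parseShardsParams`) are invented, and http parameters are modeled as a plain string map.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch (not Solr's actual /stream handler): collect
// per-collection "shards" http parameters (collectionA.shards=url1,url2,...)
// into a map that stream sources could later consult to see whether the
// shards for their collection were set manually.
public class ShardsParams {
    public static Map<String, List<String>> parseShardsParams(Map<String, String> httpParams) {
        Map<String, List<String>> shardsByCollection = new HashMap<>();
        for (Map.Entry<String, String> e : httpParams.entrySet()) {
            String key = e.getKey();
            if (key.endsWith(".shards")) {
                // "collectionA.shards" -> collection name "collectionA"
                String collection = key.substring(0, key.length() - ".shards".length());
                shardsByCollection.put(collection, Arrays.asList(e.getValue().split(",")));
            }
        }
        return shardsByCollection;
    }
}
```

A request carrying {{collectionA.shards=url1,url2}} would yield a map entry from "collectionA" to the two urls, while unrelated parameters (q, fl, ...) are ignored.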






[jira] [Updated] (SOLR-10208) Adjust scoring formula for the scoreNodes function

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10208:
--
Fix Version/s: 6.5

> Adjust scoring formula for the scoreNodes function
> --
>
> Key: SOLR-10208
> URL: https://issues.apache.org/jira/browse/SOLR-10208
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: 6.5
>
> Attachments: SOLR-10208.patch
>
>
> While working on SOLR-10156 I experimented with different scoring formulas 
> for scoring terms. I found that the scoring formula used by the scoreNodes 
> function overweights the raw term counts. Through experimentation I found a 
> formula that does not overweight the raw term counts and provides a much 
> better significance score. This ticket applies the new formula.
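The patch's actual formula is not shown in this thread, so as a purely hypothetical illustration of what "overweighting raw term counts" means, compare a score that multiplies by the raw count against one that damps it (both formulas are invented for illustration, neither is the one in SOLR-10208.patch):

```python
import math

def raw_count_score(term_count, doc_freq, num_docs):
    # Weights the raw term count directly, so very frequent terms dominate.
    return term_count * math.log(num_docs / (doc_freq + 1))

def damped_score(term_count, doc_freq, num_docs):
    # Damping the raw count with log1p reduces its influence on the score.
    return math.log1p(term_count) * math.log(num_docs / (doc_freq + 1))

# A term seen 100x more often scores 100x higher raw, but only a few
# times higher once damped.
print(raw_count_score(1000, 10, 10_000) / raw_count_score(10, 10, 10_000))
print(damped_score(1000, 10, 10_000) / damped_score(10, 10, 10_000))
```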






[jira] [Resolved] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-8593.
--
Resolution: Resolved

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Resolved] (SOLR-10208) Adjust scoring formula for the scoreNodes function

2017-03-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-10208.
---
Resolution: Resolved

> Adjust scoring formula for the scoreNodes function
> --
>
> Key: SOLR-10208
> URL: https://issues.apache.org/jira/browse/SOLR-10208
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10208.patch
>
>
> While working on SOLR-10156 I experimented with different scoring formulas 
> for scoring terms. I found that the scoring formula used by the scoreNodes 
> function overweights the raw term counts. Through experimentation I found a 
> formula that does not overweight the raw term counts and provides a much 
> better significance score. This ticket applies the new formula.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897978#comment-15897978
 ] 

Joel Bernstein edited comment on SOLR-8593 at 3/6/17 8:12 PM:
--

Ok, I added the assumption for the Turkish locale. I'm planning on resolving 
this ticket. If other issues come up with the Apache Calcite integration 
we can open up a new issue.


was (Author: joel.bernstein):
Ok, I added the assumption for the Turkish locale. I'm planning on resolving 
this ticket. If other issue come up we can up a new issue.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897978#comment-15897978
 ] 

Joel Bernstein edited comment on SOLR-8593 at 3/6/17 8:12 PM:
--

Ok, I added the assumption for the Turkish locale. I'm planning on resolving 
this ticket. If other issues come up with the Apache Calcite integration we can 
open up a new issue.


was (Author: joel.bernstein):
Ok, I added the assumption for the Turkish locale. I'm planning on resolving 
this ticket. If other issues come up with this the Apache Calcite integration 
we can open up a new issue.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897978#comment-15897978
 ] 

Joel Bernstein commented on SOLR-8593:
--

Ok, I added the assumption for the Turkish locale. I'm planning on resolving 
this ticket. If other issues come up we can open up a new issue.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-10036) Revise jackson-core version from 2.5.4 to latest

2017-03-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897977#comment-15897977
 ] 

Steve Loughran commented on SOLR-10036:
---

There's now a fully shaded AWS JAR, jackson included. See HADOOP-14040.

At 50 MB it's big, and has been causing some problems simply due to classloader 
overhead and distribution, hence we aren't (currently) backporting in Hadoop. 
But it stops your choice of jackson being dictated by the S3 integration.

> Revise jackson-core version from 2.5.4 to latest
> 
>
> Key: SOLR-10036
> URL: https://issues.apache.org/jira/browse/SOLR-10036
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shashank Pedamallu
> Attachments: SOLR-10036.patch
>
>
> The current jackson-core dependency in Solr is not compatible with Amazon AWS 
> S3 dependency. AWS S3 requires jackson-core-2.6.6 while Solr uses 
> jackson-core-dependency-2.5.4. This is blocking the usage of latest updates 
> from S3.
> It would be greatly helpful if someone could revise the jackson-core jar in 
> Solr to the latest version. This is a ShowStopper for our Public company.
> Details of my Setup:
> Solr Version: 6.3
> AWS SDK version: 1.11.76






[jira] [Commented] (SOLR-10046) Create UninvertDocValuesMergePolicy

2017-03-06 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897964#comment-15897964
 ] 

Keith Laban commented on SOLR-10046:


Thanks Christine, I missed this last comment. I merged your pull request.

> Create UninvertDocValuesMergePolicy
> ---
>
> Key: SOLR-10046
> URL: https://issues.apache.org/jira/browse/SOLR-10046
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>Assignee: Christine Poerschke
>
> Create a merge policy that can detect schema changes and use 
> UninvertingReader to uninvert fields and write docvalues into merged segments 
> when a field has docvalues enabled.
> The current behavior is to write null values in the merged segment which can 
> lead to data integrity problems when sorting or faceting pending a full 
> reindex. 
> With this patch it would still be recommended to reindex when adding 
> docvalues for performance reasons, as it is not guaranteed all segments will be 
> merged with docvalues turned on.






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 741 - Still Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/741/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.TestRandomDVFaceting.testRandomFaceting

Error Message:
org.apache.solr.search.SyntaxError: Cannot parse 'id:(OECO NWUG THVO JLIJ FHFW 
YMUJ LUCN RPFP HNPE GWDA RSPU XLZW LGDM LYHI ROQB SCUQ EUAB LOOI IEUN UVAG RWGP 
AMXA HGXJ TBGK VJAH AEZX DSMH VTNE WUIF ORBR UWXL TFTW LEGV CBGL HMBW OXVJ MDYG 
QOQZ ZNRZ KORD ELTX RUVV TPXO GVCM IABH ETQN ZHHR ZIWQ TJGH QNGO WEZA ATME UUAI 
HPVU OPSH ZVDG BXXH ADEX KTOP UAPC RKIU NQMY QRKD XLDX WEKH ASNP WQLG VUHF UMWG 
HONQ UDHS PUZM RODV KQKV ZSIO MNHS JGLK MRPL NDHU XTGD IZSM AMRQ INDI NEKT TZDH 
RVPD MOCN AECM BKNO RDXD DFLN HIFZ CMSN SHMZ BBVZ AMRF ENKO ZRXG OBWU MNWO ZYKL 
XKRM IAOD CUGJ MXIU RETH BAGR RKIK GBQT EBOV GEOZ OOFU QABG BBLT TJZA HLME YYXU 
FGIM PWGP BPYQ ECQW YAIT VDYV TQMC KARI KDEK SIIX SGPG SAHM NKXD LZYE MMPG LYYV 
INDM JVYT DRYG MEIZ WVVP LHWO UIAE LNUX UXIA SUEK UJKL EFHA JXUO FMIV ITSH KVYK 
NHBL JEIH NBIG EANH FYKR WXAF PVEC HIEG QYKY ZCAI SVOM DQBY VLOZ DHZA KZSN HVAV 
LLXL YKVZ RUSB SBDH NEWB TNYX CQIE SYOR REWR EQUW BRHD DDQY TESD BGZB DWJK XJGJ 
SYXX ZLRS OHVV UHEL IIBE HDCP GZAJ AEAX WHKW VXZE TNBO DEQL LYCF XWTH TMWZ IRAC 
PDMA MLUX BYTW HMWT BFYJ HYQG EBUI SWCL BBQG YEZD LLLM HZVJ SSSL PIWK BZOI CHNY 
QIMD IZWB PVXT RDKK ACUO NWMF YUII IKOR VWDW HEIU YQAL GYJU ZKVH YOWI RRRY TCAI 
VGEU SCKE TAKT AHJY YTID LMUO XGLV MPIS TGPE XWSJ NLTS APLC HSGG AHDE JUSG UZWA 
BERW TSFC LZLE FPGA FZFJ AQJH PJTS JNZH KWHJ WTFN YYXX UFHZ VXQH YWXW FNMW ODVB 
QVWW WLZZ TUIR RDVV CLIG BSUL DJYA QJFT RSKG EPPE YJBJ HVCY WNFR NIAH UZTH DPMW 
SAJJ SJML ZDTH EQOJ JAZL OOEX EZFS RVWQ SWWL IALD DQMI YTUW JTHB OSGS FWHP BSRI 
PEWT MDNY LDJT PVBE EXGP SEWR YNLL GSXG MWKW RFVF DVLX JDGX OBUP VQLK UETL XVNP 
CZNG LQHJ HZLG SVDJ BRCJ PCKF ERNY KIAJ EDRG MALX EGFB KVZY CIQS FGSZ WXGZ RGHL 
GXQP XHNJ LWNK RWQN YHOQ TYZR YXXR LXIP VRCB PWNC KGHY JPZN WDEK BAPL ILSV PVYL 
VFYZ BOUH PZNY ZLQA BAQO UGKY XCGM IBFP IPFD DLNK JMII GKWT TDZC ZMMK EFTZ YETX 
QLHH AKSG BNCJ TFSF XPQO BHAM LYAA ANPO BSKH ZFUF HMQV NRWZ ZAHE LERS SUZX SGMM 
LZTG OQPU DINE AYRQ VLHG HOKO XLYH WGHH JAAH WDLT DTSI WHKN IBJR FYPB LYQO NNVY 
HQHF IQHB ITOH FOGV MXXT ZFHS NPEY FOZL YUJJ ASTR CRQF IDPM FHEY SFLR AJIT BZCH 
LSUY FOEI PJML ZZUW YURB YACC HSRI QJBQ UYMK DSPA YWZO VAVC RBHW QZLW FCGA OMPO 
AGBA SAGC SVBM PRCF SOFE XXCG NIWD VNBD JNMO LETG CMPN QXRB DTGG KLVE GVED MOHI 
VPPA DKUO BLWJ ZKVI UGIR WHFA FBQY YSCI LIOC FJJA USEX LFKZ SXEA GADJ JCTT ICAK 
RCFE YZNL ZTGR OOEA ORSP KXOD KSRC ICER JCVM BFKJ QITK HCZP OIQE IZYE BYPQ OUXC 
USDX DOJT XHTN TDHV BGFQ HJYP XMCY KMWH IHNH YJSR WQQV COJV GLZF JSIJ OGWX GCWE 
LWHJ ARIU LXRT HBKG YVOC VUSU RZWA LNVZ WIIC RROS KKTS RSYI ANYK XIHP WQSO CSYO 
SMOU ODIU AHTG DJJX VXMD DDOQ ELAG PCAK FRAM EHIF WYCN WZBX AHTV UXEC ZCMY IGAH 
LDIS HGLF HQHA IVUQ ALVW DTLK IBEA BUMK PWBD KHBW ARXM GYDI TAFI LQOY RIYK JAON 
YHPN YIBP VZVF ASRZ MVRU BCSX BWAD JFKL VIHM NIVT TQZM HVPR PTSP QJEZ RLZE QZII 
KRKN YPPM XAJU JTPL QGHA CGKD JGJE GGTN EZXC WMTP HZDB QKQL UUGW XQFQ ZUYP XGPC 
EIIZ MDLY WYBC GQPX NEUJ RDNA LFTK QACN OVME GBDR GQSZ LWFP IEVB QDVC LRGB RSKZ 
PWBV GVLU RTPX VYGX PKTG HZJI SSKK ONRB YUIH YBVK XFWW EHCN WARU OJUL NEJR FMFR 
VWIY ZDWZ ODNN JRYJ QLGB UPFW ZSYA NCZR HGOE HLSX BYGF MAGA BCSE NTTL STPJ YJMZ 
XVVR AMGM UNRY LGUQ QFJP QGEB ZOLE SNUD EGRU CHIY DITA ITXH CNNX LWZK NKKS VKMY 
JUEC NVUR GIRG MXWP SIVC VYKG SKVI ZTVW MNDJ KQDH IAIY CFMZ QITV QEJJ OSDS IYJT 
RJWV WVUU HBVP NNYW ZJGL HKBG KGEU JIIC QVIN VGKI SYVA DZDX SBJF AMAL UNJU FBFC 
ZKVK FGVE OHUB JOLO YCFJ LWDA AGFF FZJB YKGQ PSFK UDRX JUKJ ZACQ EFCI HAKL JEVQ 
PNFD RZEU LUJB WSRU XTYS BWCW VJIG RUHU QMEL GLTI HJQD HQZP AROB OQGW FSKI KXOF 
IMCH HFOL BZWR FKWS NJGI IPSP WKVA YKZR BDFH EEOH NFEH KXIB SETY LQTI CQEU ZLEK 
CGGD NZKX LVFT DXVH CIOZ JRSL JKRI DJSP FMAF UYNX QVFX FYPH OEUU XUUL DLSN ZHUG 
JUNT NNDH UUEZ YXOC GKVL ELVS XBSB BKUB KPWU CDIZ XWNS TTLV KJQO EAHF MECD RGAB 
YCCW AIYX BLHB DNTR TYWR DAIO GWNZ QXRG WYLS XRDG KBPR WIAB ZTRO BORN GVTD QYOK 
GCQP HXGG OFAO DACD HXXA CTGB CMBV FHHL JQDT XWQO ZSEX DLFP SLDM YKCW NUYR PZSQ 
HWBW YGSK KCXV UHYG VDBY RAPF TPIK QMEV ZNWR SMMV LZSC SUQV VFUK MNEA GLNI QQBS 
MESA OTOL NQMO OBKW QSAE VKRY KIKQ WOZP BXCD WMIS GRYY VGFW ZJRQ AAWB ZVED MLTN 
KUJP KRKH RNMF GRDI MRPA BHKO MLIK TPFP NYIO UMOK VURQ TIJB DJFO XSDD TEVE BYTO 
JBIA RLLC HQVQ DOGI FYCH KLWP NSUL OEPV IOUR KQRC FXHX TCFH NIMJ LAFP QEUQ COAY 
HNEX LQPC ANKX BOKZ AUHM UCWZ FDXD MTZQ UZAF NCZD ROFE ZCAH BEHI ZWSZ UQUD VDYE 
RTZJ QOII GVPT SXXE PMRQ TKZN XHZC ZYGO XLNW SVUK DPJK OSZH MJAG SROU FETI YTGM 
BWEN IABR TSWJ YGZY IHCV NLPX EUNG OIVC LLVI HXYP EXTU OWPP WQIB RRKW KNIR CNMI 
WKEG DSYR NPXZ AROL RFBM JDKK SHWZ TNZV DWEH ZTSB YXDQ AIJJ KVGX XQYJ WAXZ GJBT 
BNAO LJKC AVPJ NGTV UWNA XEQU HHRX WOOH XDSH JEQX WIRP LDTA LPLK JPIG NOPJ ZAPI 
UYBN IEKD LVLW XZYX URXK UXGN VOKU PWQS VWDY BGGW JZWQ GUQW ZHEN 

[JENKINS] Lucene-Solr-6.4-Linux (64bit/jdk1.8.0_121) - Build # 145 - Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/145/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:39228","node_name":"127.0.0.1:39228_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"80000000-7fffffff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:44840",   
"core":"c8n_1x3_lf_shard1_replica2",   "node_name":"127.0.0.1:44840_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:45475",   "node_name":"127.0.0.1:45475_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:39228",   "node_name":"127.0.0.1:39228_",  
 "state":"active",   "leader":"true"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:39228","node_name":"127.0.0.1:39228_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"80000000-7fffffff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:44840",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:44840_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:45475",
  "node_name":"127.0.0.1:45475_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:39228",
  "node_name":"127.0.0.1:39228_",
  "state":"active",
  "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([AFDCF4D88650EC2C:2788CB0228AC81D4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 

[jira] [Commented] (SOLR-10226) JMX metric avgTimePerRequest broken

2017-03-06 Thread Bojan Smid (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897818#comment-15897818
 ] 

Bojan Smid commented on SOLR-10226:
---

I tested the patch quickly, metric totalTime is now there, but there is one 
small problem - it is expressed in ns. To be backward compatible it should be 
in ms.

> JMX metric avgTimePerRequest broken
> ---
>
> Key: SOLR-10226
> URL: https://issues.apache.org/jira/browse/SOLR-10226
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
>Reporter: Bojan Smid
>Assignee: Andrzej Bialecki 
> Attachments: SOLR-10226.patch
>
>
> JMX Metric avgTimePerRequest (of 
> org.apache.solr.handler.component.SearchHandler) doesn't appear to behave 
> correctly anymore. It was a cumulative value in pre-6.4 versions. Since 
> totalTime metric was removed (which was a base for monitoring calculations), 
> avgTimePerRequest seems like possible alternative to calculate "time spent in 
> requests since last measurement", but it behaves strangely after 6.4.
> I did a simple test on gettingstarted collection (just unpacked the Solr 
> 6.4.1 version and started it with "bin/solr start -e cloud -noprompt"). The 
> query I used was:
> http://localhost:8983/solr/gettingstarted/select?indent=on&q=*:*&wt=json
> I run it 30 times in a row (with approx 1 sec between executions).
> At the same time I was looking (with jconsole) at bean 
> solr/gettingstarted_shard2_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
> Here is how metric was changing over time (first number is "requests" metric, 
> second number is "avgTimePerRequest"):
> 10   6.6033
> 12   5.9557
> 13   0.9015   ---> 13th req would need negative duration if this was 
> cumulative
> 15   6.7315
> 16   7.4873
> 17   0.8458   ---> same case with 17th request
> 23   6.1076
> At the same time bean 
> solr/gettingstarted_shard1_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
> also showed strange values:
> 6    5.13482
> 8    10.5694
> 9    0.504
> 10   0.344
> 12   8.8121
> 18   3.3531
> CC [~ab]
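The monitoring calculation described above relies on avgTimePerRequest being cumulative; under that assumption, time spent between two samples can be recovered as follows (an illustrative Python sketch, not Solr code):

```python
def interval_time(requests_prev, avg_prev, requests_now, avg_now):
    """Time spent in requests between two samples, assuming
    avgTimePerRequest is cumulative (total time / total requests).

    Illustrative only; not Solr code.
    """
    return requests_now * avg_now - requests_prev * avg_prev

# Under the cumulative assumption the delta can never be negative, yet
# the 6.4.1 samples quoted above produce one -- the reported symptom.
print(interval_time(12, 5.9557, 13, 0.9015))
```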






[jira] [Commented] (SOLR-9986) Implement DatePointField

2017-03-06 Thread Tomás Fernández Löbbe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897757#comment-15897757
 ] 

Tomás Fernández Löbbe commented on SOLR-9986:
-

LGTM. +1 to commit

> Implement DatePointField
> 
>
> Key: SOLR-9986
> URL: https://issues.apache.org/jira/browse/SOLR-9986
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Cao Manh Dat
> Attachments: SOLR-9986.patch, SOLR-9986.patch
>
>
> Followup task of SOLR-8396






[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-03-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897762#comment-15897762
 ] 

Erick Erickson commented on LUCENE-7705:


Patch looks good. I'm going to hang back on committing this until we figure out 
SOLR-10229 (control schema proliferation). The additional schema you put in 
here is about the only way currently to test Solr schemas, so that's perfectly 
appropriate. I'd just like to use this as a test case for what it would take to 
move constructing schemas to inside the tests rather than have each new case 
like this require another schema that we then have to maintain.

But if SOLR-10229 takes very long I'll just commit this one and we can work out 
the rest later.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, 
> LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Resolved] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-03-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10186.
---
Resolution: Duplicate

This is really LUCENE-7705; see that JIRA for status.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-10186.patch, SOLR-10186.patch, SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the 
> CharTokenizer? In order to change this limit it requires that people 
> copy/paste the incrementToken into some new class since incrementToken is 
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?






[jira] [Comment Edited] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897721#comment-15897721
 ] 

Yonik Seeley edited comment on SOLR-10205 at 3/6/17 5:48 PM:
-

Here are the results (attached) of testing with different numbers of reserved 
blocks (up to 4) and different numbers of calls to cleanUp when the map size 
exceeds the number of blocks minus reserved.  Tests were done on systems with 16 
and 32 logical (hyper-threaded) cores.

The speedups compared to trunk range from 11% to 68% for these artificial 
random tests.

Based on the results, I think the right balance is going with reserved blocks = 
4 and a single call to cleanUp in the outer loop of 
BlockCache.findEmptyLocation().


was (Author: ysee...@gmail.com):
Here's the results of testing with different numbers of reserved blocks (up to 
4) and different number of calls to cleanUp when the map size exceeds the 
number of blocks - reserved.

The speedups compared to trunk range from 11% to 68% for these artificial 
random tests.

Based on the results, I think the right balance is going with reserved blocks = 
4 and a single call to cleanUp in the outer loop of 
BlockCache.findEmptyLocation()

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.
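The failure mode described above can be modeled with a toy cache in which store() fails when no free block remains beyond the reserve, and an explicit clean-up pass reclaims blocks the map has already evicted (a hypothetical Python sketch; all names and structure are invented, not the real BlockCache):

```python
import threading

class ToyBlockCache:
    """Toy model of a block cache where store() can fail.

    Illustration only; not the real BlockCache. Blocks freed by map
    eviction become usable only after clean_up(), mimicking a map that
    amortizes eviction cost; keeping a few blocks 'reserved' plus one
    clean_up pass before giving up reduces store failures.
    """

    def __init__(self, num_blocks, reserved=4):
        self.lock = threading.Lock()
        self.free = set(range(num_blocks))   # free block ids
        self.reserved = reserved             # blocks kept in reserve
        self.entries = {}                    # key -> block id (the "map")
        self.pending_evict = []              # evicted keys, blocks not yet freed

    def clean_up(self):
        # Reclaim blocks for entries the map has already evicted.
        while self.pending_evict:
            self.free.add(self.entries.pop(self.pending_evict.pop()))

    def store(self, key):
        with self.lock:
            if len(self.free) <= self.reserved:
                self.clean_up()          # one explicit clean-up pass, then retry
            if len(self.free) <= self.reserved:
                return False             # cache-store failure
            self.entries[key] = self.free.pop()
            return True
```

With 6 blocks and 2 reserved, four stores succeed, a fifth fails, and it succeeds again once an eviction is reclaimed by the clean-up pass.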






[jira] [Updated] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-03-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10205:

Attachment: cache_performance_test.txt

Here are the results of testing with different numbers of reserved blocks (up to 
4) and different numbers of calls to cleanUp when the map size exceeds the 
number of blocks minus reserved.

The speedups compared to trunk range from 11% to 68% for these artificial 
random tests.

Based on the results, I think the right balance is going with reserved blocks = 
4 and a single call to cleanUp in the outer loop of 
BlockCache.findEmptyLocation().

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
> Attachments: cache_performance_test.txt, SOLR-10205.patch, 
> SOLR-10205.patch
>
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.
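To make the concurrency failure concrete, here is a toy model (illustrative only; the class and method names are invented, and BlockCache's real code differs): free blocks behave like semaphore permits, so with a single reserved block a second in-flight store() finds nothing free and fails until an eviction releases the block.

```java
import java.util.concurrent.Semaphore;

// Toy model of the failure mode: free blocks as semaphore permits.
// With reservedBlocks = 1, any second in-flight store() fails until
// the underlying map evicts an entry and frees its block.
public class ReservedBlockModel {
    private final Semaphore freeBlocks;

    ReservedBlockModel(int reservedBlocks) {
        this.freeBlocks = new Semaphore(reservedBlocks);
    }

    // Models a store(): fail fast when no block is free.
    boolean store() {
        return freeBlocks.tryAcquire();
    }

    // Models the map evicting an entry, which frees its block.
    void evict() {
        freeBlocks.release();
    }

    public static void main(String[] args) {
        ReservedBlockModel cache = new ReservedBlockModel(1);
        System.out.println(cache.store()); // true: the single block is taken
        System.out.println(cache.store()); // false: a concurrent store fails
        cache.evict();                     // eviction frees the block...
        System.out.println(cache.store()); // ...so store succeeds again
    }
}
```

Raising the permit count to 4 in this model is the analogue of reserving 4 blocks: up to four stores can be in flight before one fails.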






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3876 - Unstable!

2017-03-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3876/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter

Error Message:
Collection not found: routeFieldColl

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: routeFieldColl
at 
__randomizedtesting.SeedInfo.seed([7951314821B74D80:D167AF95BED6A6DA]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1379)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1072)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter(CustomCollectionTest.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897712#comment-15897712
 ] 

ASF subversion and git services commented on SOLR-8593:
---

Commit 4b1a16361d85710289e2905c1a796dba6ac858ec in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4b1a163 ]

SOLR-8593: in TestSQLHandler assume not run with Turkish locale


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (SOLR-10226) JMX metric avgTimePerRequest broken

2017-03-06 Thread Bojan Smid (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897711#comment-15897711
 ] 

Bojan Smid commented on SOLR-10226:
---

Thanks for looking into this and patching it so quickly :).

From what I see, "totalTime" was removed in SOLR-8785. Having it back solves 
my problem (actually, any monitoring solution would need such a cumulative 
total time). Re avgTimePerRequest: I agree with what you suggest; a decayed 
value makes much more sense (a non-decayed one would only be useful as a hack 
to get to totalTime).
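For reference, the delta calculation a monitoring tool would perform with a cumulative counter looks like this (a minimal sketch with hypothetical names, not Solr's actual API):

```java
// Sketch: recovering "time spent in requests since last measurement"
// from a cumulative totalTime counter plus the request counter.
public class IntervalAverage {
    // Average time per request between two monitoring samples.
    static double avgBetweenSamples(long requests0, double totalTime0,
                                    long requests1, double totalTime1) {
        long deltaRequests = requests1 - requests0;
        if (deltaRequests == 0) return 0.0; // no traffic in the interval
        return (totalTime1 - totalTime0) / deltaRequests;
    }

    public static void main(String[] args) {
        // e.g. first sample: 10 requests, 66 ms total; next: 12 requests, 78 ms
        System.out.println(avgBetweenSamples(10, 66.0, 12, 78.0)); // 6.0
    }
}
```

This only works if totalTime never decays; a decayed value cannot be differenced this way, which is why the cumulative counter matters for monitoring.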



> JMX metric avgTimePerRequest broken
> ---
>
> Key: SOLR-10226
> URL: https://issues.apache.org/jira/browse/SOLR-10226
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
>Reporter: Bojan Smid
>Assignee: Andrzej Bialecki 
> Attachments: SOLR-10226.patch
>
>
> JMX Metric avgTimePerRequest (of 
> org.apache.solr.handler.component.SearchHandler) doesn't appear to behave 
> correctly anymore. It was a cumulative value in pre-6.4 versions. Since the 
> totalTime metric was removed (which was the basis for monitoring calculations), 
> avgTimePerRequest seems like a possible alternative for calculating "time spent 
> in requests since last measurement", but it behaves strangely after 6.4.
> I did a simple test on the gettingstarted collection (just unpacked Solr 
> 6.4.1 and started it with "bin/solr start -e cloud -noprompt"). The 
> query I used was:
> http://localhost:8983/solr/gettingstarted/select?indent=on&q=*:*&wt=json
> I ran it 30 times in a row (with approx. 1 sec between executions).
> At the same time I was looking (with jconsole) at bean 
> solr/gettingstarted_shard2_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
> Here is how the metric changed over time (the first number is the "requests" 
> metric, the second "avgTimePerRequest"):
> 10   6.6033
> 12   5.9557
> 13   0.9015   ---> the 13th request would need a negative duration if this 
> were cumulative
> 15   6.7315
> 16   7.4873
> 17   0.8458   ---> same case with the 17th request
> 23   6.1076
> At the same time bean 
> solr/gettingstarted_shard1_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
>   also showed strange values:
> 6    5.13482
> 8    10.5694
> 9    0.504
> 10  0.344
> 12  8.8121
> 18  3.3531
> CC [~ab]






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-03-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897697#comment-15897697
 ] 

ASF subversion and git services commented on SOLR-8593:
---

Commit 6df17c8cfe72d229140fb644d067a50cd7a2b455 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6df17c8 ]

SOLR-8593: in TestSQLHandler assume not run with Turkish locale


> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested, cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Commented] (LUCENE-7727) Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility

2017-03-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897693#comment-15897693
 ] 

David Smiley commented on LUCENE-7727:
--

Uwe, when I try to run {{ant precommit}}, I now see this error:
{noformat}
resolve-markdown:

BUILD FAILED
/SmileyDev/Search/lucene-solr/lucene/common-build.xml:2415: ivy:cachepath 
doesn't support the nested "dependency" element.
{noformat}
I'm wondering if you know what the cause of that may be.  This is happening on 
one of my machines consistently, but not at all on another.

> Replace EOL'ed pegdown by flexmark-java for Java 9 compatibility
> 
>
> Key: LUCENE-7727
> URL: https://issues.apache.org/jira/browse/LUCENE-7727
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>  Labels: Java9
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7727.patch, LUCENE-7727.patch
>
>
> The documentation tasks use a library called "pegdown" to convert Markdown to 
> HTML. Unfortunately, the developer of pegdown EOLed it and points the users 
> to a faster replacement: flexmark-java 
> (https://github.com/vsch/flexmark-java).
> This would not matter for us if pegdown worked with Java 9, but it 
> is also affected by the usual "setAccessible into private Java APIs" issue 
> (see my talk at FOSDEM: 
> https://fosdem.org/2017/schedule/event/jigsaw_challenges/).
> The migration should not be too hard; it's just a bit of Groovy code rewriting 
> and dependency changes.
> This is the pegdown problem:
> {noformat}
> Caused by: java.lang.RuntimeException: Could not determine whether class 
> 'org.pegdown.Parser$$parboiled' has already been loaded
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:213)
> at 
> org.parboiled.transform.ParserTransformer.transformParser(ParserTransformer.java:35)
> at org.parboiled.Parboiled.createParser(Parboiled.java:54)
> ... 50 more
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.findLoadedClass(java.lang.String) accessible: module 
> java.base does not "opens java.lang" to unnamed module @551b6736
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:335)
> at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:278)
> at 
> java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:196)
> at java.base/java.lang.reflect.Method.setAccessible(Method.java:190)
> at org.parboiled.transform.AsmUtils.findLoadedClass(AsmUtils.java:206)
> ... 52 more
> {noformat}






[jira] [Commented] (LUCENE-7722) Remove BoostedQuery

2017-03-06 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897692#comment-15897692
 ] 

Alan Woodward commented on LUCENE-7722:
---

Looking closer at BoostingQuery, I think the same effect could be had by using 
a BooleanQuery and wrapping the 'suppressing' subquery with a negative-valued 
BoostQuery?  In addition, BoostingQuery has no tests that actually run the 
query...

On reader-dependent DoubleValuesSource implementations, I think we need to add 
something like a rewrite() function to make the dependency explicit.  Otherwise 
you could have odd interactions with things like the QueryCache.

> Remove BoostedQuery
> ---
>
> Key: LUCENE-7722
> URL: https://issues.apache.org/jira/browse/LUCENE-7722
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> We already  have FunctionScoreQuery, which is more flexible than BoostedQuery 
> as it can combine scores in arbitrary ways and only requests scores on the 
> underlying scorer if they are needed. So let's remove BoostedQuery?






[jira] [Updated] (SOLR-10226) JMX metric avgTimePerRequest broken

2017-03-06 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-10226:
-
Attachment: SOLR-10226.patch

Patch adding back non-decayed "totalTime".

> JMX metric avgTimePerRequest broken
> ---
>
> Key: SOLR-10226
> URL: https://issues.apache.org/jira/browse/SOLR-10226
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
>Reporter: Bojan Smid
>Assignee: Andrzej Bialecki 
> Attachments: SOLR-10226.patch
>
>
> JMX Metric avgTimePerRequest (of 
> org.apache.solr.handler.component.SearchHandler) doesn't appear to behave 
> correctly anymore. It was a cumulative value in pre-6.4 versions. Since the 
> totalTime metric was removed (which was the basis for monitoring calculations), 
> avgTimePerRequest seems like a possible alternative for calculating "time spent 
> in requests since last measurement", but it behaves strangely after 6.4.
> I did a simple test on the gettingstarted collection (just unpacked Solr 
> 6.4.1 and started it with "bin/solr start -e cloud -noprompt"). The 
> query I used was:
> http://localhost:8983/solr/gettingstarted/select?indent=on&q=*:*&wt=json
> I ran it 30 times in a row (with approx. 1 sec between executions).
> At the same time I was looking (with jconsole) at bean 
> solr/gettingstarted_shard2_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
> Here is how the metric changed over time (the first number is the "requests" 
> metric, the second "avgTimePerRequest"):
> 10   6.6033
> 12   5.9557
> 13   0.9015   ---> the 13th request would need a negative duration if this 
> were cumulative
> 15   6.7315
> 16   7.4873
> 17   0.8458   ---> same case with the 17th request
> 23   6.1076
> At the same time bean 
> solr/gettingstarted_shard1_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
>   also showed strange values:
> 6    5.13482
> 8    10.5694
> 9    0.504
> 10  0.344
> 12  8.8121
> 18  3.3531
> CC [~ab]






[jira] [Commented] (SOLR-10226) JMX metric avgTimePerRequest broken

2017-03-06 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897649#comment-15897649
 ] 

Andrzej Bialecki  commented on SOLR-10226:
--

You're right - the {{Timer}} implementation that was used in Solr 6.3 and 
earlier internally used {{Histogram}}, which did not apply decaying to the 
total accumulated value. When we upgraded this class to a newer version of 
Codahale Metrics, the new underlying implementation of {{Histogram}} does apply 
decaying to this value...

Anyway, we have to add back a simple counter to track the total value as 
"totalTime", which somehow disappeared for no good reason. From that you will 
be able again to calculate the non-decaying average time.

The question is what to do with avgTimePerRequest. In my opinion, moving 
forward we should keep the decaying avgTimePerRequest because it more correctly 
represents the recent state of the system as opposed to the cumulative 
non-decayed value, which doesn't really reflect anything in particular (there 
could have been extended periods of idle time followed by recent high activity, 
and the value would still be low even though the recent load was high).

CC [~otis].
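To illustrate the difference, here is a toy comparison (illustrative only; the Codahale {{Histogram}} actually uses an exponentially decaying reservoir, not this simple recurrence): after a long idle period of fast requests followed by recent slow ones, the cumulative mean stays low while a decayed mean tracks the recent load.

```java
// Toy comparison of a cumulative mean vs. an exponentially decayed mean.
public class DecayedVsCumulative {
    static double cumulativeMean(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }

    // Simple exponential moving average: newer samples carry more weight.
    static double decayedMean(double[] samples, double alpha) {
        double avg = samples[0];
        for (int i = 1; i < samples.length; i++) {
            avg = alpha * samples[i] + (1 - alpha) * avg;
        }
        return avg;
    }

    public static void main(String[] args) {
        // eight fast requests (1 ms each), then three recent slow ones
        double[] times = {1, 1, 1, 1, 1, 1, 1, 1, 20, 25, 30};
        System.out.println(cumulativeMean(times));   // ~7.5: hides the recent load
        System.out.println(decayedMean(times, 0.5)); // 23.875: reflects it
    }
}
```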

> JMX metric avgTimePerRequest broken
> ---
>
> Key: SOLR-10226
> URL: https://issues.apache.org/jira/browse/SOLR-10226
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 6.4.1
>Reporter: Bojan Smid
>Assignee: Andrzej Bialecki 
>
> JMX Metric avgTimePerRequest (of 
> org.apache.solr.handler.component.SearchHandler) doesn't appear to behave 
> correctly anymore. It was a cumulative value in pre-6.4 versions. Since the 
> totalTime metric was removed (which was the basis for monitoring calculations), 
> avgTimePerRequest seems like a possible alternative for calculating "time spent 
> in requests since last measurement", but it behaves strangely after 6.4.
> I did a simple test on the gettingstarted collection (just unpacked Solr 
> 6.4.1 and started it with "bin/solr start -e cloud -noprompt"). The 
> query I used was:
> http://localhost:8983/solr/gettingstarted/select?indent=on&q=*:*&wt=json
> I ran it 30 times in a row (with approx. 1 sec between executions).
> At the same time I was looking (with jconsole) at bean 
> solr/gettingstarted_shard2_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
> Here is how the metric changed over time (the first number is the "requests" 
> metric, the second "avgTimePerRequest"):
> 10   6.6033
> 12   5.9557
> 13   0.9015   ---> the 13th request would need a negative duration if this 
> were cumulative
> 15   6.7315
> 16   7.4873
> 17   0.8458   ---> same case with the 17th request
> 23   6.1076
> At the same time bean 
> solr/gettingstarted_shard1_replica2:type=/select,id=org.apache.solr.handler.component.SearchHandler
>   also showed strange values:
> 6    5.13482
> 8    10.5694
> 9    0.504
> 10  0.344
> 12  8.8121
> 18  3.3531
> CC [~ab]






[jira] [Commented] (SOLR-10079) TestInPlaceUpdatesDistrib failure

2017-03-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897650#comment-15897650
 ] 

Steve Rowe commented on SOLR-10079:
---

A branch_6x failure from Policeman Jenkins 
[https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2955/] (expired history 
- this is the email notification):

{noformat}
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
Earlier: [12867, 12867, 12867], now: [12867, 12866, 12867]

Stack Trace:
java.lang.AssertionError: Earlier: [12867, 12867, 12867], now: [12867, 12866, 
12867]
at 
__randomizedtesting.SeedInfo.seed([D5E9E36D7ABF3F16:5DBDDCB7D44352EE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.ensureRtgWorksWithPartialUpdatesTest(TestInPlaceUpdatesDistrib.java:501)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:141)
{noformat}


> TestInPlaceUpdatesDistrib failure
> -
>
> Key: SOLR-10079
> URL: https://issues.apache.org/jira/browse/SOLR-10079
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-10079.patch, stdout
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18881/], 
> reproduces for me:
> {noformat}
> Checking out Revision d8d61ff61d1d798f5e3853ef66bc485d0d403f18 
> (refs/remotes/origin/master)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test 
> -Dtests.seed=E1BB56269B8215B0 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sr-Latn-RS -Dtests.timezone=America/Grand_Turk 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 77.7s J2 | TestInPlaceUpdatesDistrib.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: Earlier: [79, 79, 
> 79], now: [78, 78, 78]
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E1BB56269B8215B0:69EF69FC357E7848]:0)
>[junit4]>  at 
> org.apache.solr.update.TestInPlaceUpdatesDistrib.ensureRtgWorksWithPartialUpdatesTest(TestInPlaceUpdatesDistrib.java:425)
>[junit4]>  at 
> org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:142)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:543)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
>[junit4]>  at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {id_i=PostingsFormat(name=LuceneFixedGap), title_s=FSTOrd50, 
> id=PostingsFormat(name=Asserting), 
> id_field_copy_that_does_not_support_in_place_update_s=FSTOrd50}, 
> docValues:{inplace_updatable_float=DocValuesFormat(name=Asserting), 
> id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), 
> title_s=DocValuesFormat(name=Lucene70), id=DocValuesFormat(name=Lucene70), 
> id_field_copy_that_does_not_support_in_place_update_s=DocValuesFormat(name=Lucene70),
>  inplace_updatable_int_with_default=DocValuesFormat(name=Asserting), 
> inplace_updatable_int=DocValuesFormat(name=Direct), 
> inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, 
> maxPointsInLeafNode=1342, maxMBSortInHeap=6.368734895089348, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=sr-Latn-RS, 
> timezone=America/Grand_Turk
>[junit4]   2> NOTE: Linux 4.4.0-53-generic i386/Oracle Corporation 9-ea 
> (32-bit)/cpus=12,threads=1,free=107734480,total=518979584
> {noformat}






[jira] [Commented] (SOLR-10079) TestInPlaceUpdatesDistrib failure

2017-03-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897633#comment-15897633
 ] 

Steve Rowe commented on SOLR-10079:
---

Collecting together non-'sanitycheck' failures from my Jenkins - none of these 
reproduce for me:

This one was on branch_6x.  The Jenkins history has expired, so I only have the 
email message for it:
{noformat}
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
This doc was supposed to have been deleted, but was: SolrDocument{id=1, 
inplace_updatable_float=1.0, _version_=1560704758310764544, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}

Stack Trace:
java.lang.AssertionError: This doc was supposed to have been deleted, but was: 
SolrDocument{id=1, inplace_updatable_float=1.0, _version_=1560704758310764544, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}
at 
__randomizedtesting.SeedInfo.seed([8D42E06510E9275F:516DFBFBE154AA7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.delayedReorderingFetchesMissingUpdateFromLeaderTest(TestInPlaceUpdatesDistrib.java:896)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:142)
[...]
{noformat}

Two more branch_6x failures:

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test 
-Dtests.seed=5425406C4847D9FE -Dtests.slow=true -Dtests.locale=vi 
-Dtests.timezone=Africa/Malabo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
  [junit4] FAILURE 28.9s J5  | TestInPlaceUpdatesDistrib.test <<<
  [junit4]> Throwable #1: java.lang.AssertionError
  [junit4]> at 
__randomizedtesting.SeedInfo.seed([5425406C4847D9FE:DC717FB6E6BBB406]:0)
  [junit4]> at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.testDBQUsingUpdatedFieldFromDroppedUpdate(TestInPlaceUpdatesDistrib.java:1144)
  [junit4]> at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:138)
  [junit4]> at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
  [junit4]> at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
[...]
  [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
sim=RandomSimilarity(queryNorm=true,coord=crazy): {}, 
locale=vi,timezone=Africa/Malabo
  [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64bit)/cpus=16,threads=1,free=284892968,total=528482304
{noformat}

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test 
-Dtests.seed=D774EC9D22D63EE4 -Dtests.slow=true -Dtests.locale=sr-ME 
-Dtests.timezone=America/Argentina/Cordoba -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
  [junit4] FAILURE 41.7s J7  | TestInPlaceUpdatesDistrib.test <<<
  [junit4]> Throwable #1: java.lang.AssertionError
  [junit4]> at 
__randomizedtesting.SeedInfo.seed([D774EC9D22D63EE4:5F20D3478C2A531C]:0)
  [junit4]> at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.testDBQUsingUpdatedFieldFromDroppedUpdate(TestInPlaceUpdatesDistrib.java:1144)
  [junit4]> at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:138)
  [junit4]> at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
  [junit4]> at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
[...]
  [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
{title_s=PostingsFormat(name=Memory doPackFST= true), id=FST50, 
id_field_copy_that_does_not_support_in_place_update_s=BlockTreeOrds(blocksize=128)},
 docValues:{inplace_updatable_float=DocValuesFormat(name=Lucene54), 
id_i=DocValuesFormat(name=Asserting), _version_=DocValuesFormat(name=Memory), 
id=DocValuesFormat(name=Lucene54), 
inplace_updatable_int_with_default=DocValuesFormat(name=Lucene54), 
inplace_updatable_float_with_default=DocValuesFormat(name=Asserting)}, 
maxPointsInLeafNode=1061, maxMBSortInHeap=7.879225467035905, 
sim=RandomSimilarity(queryNorm=true,coord=no): {}, locale=sr-ME, 
timezone=America/Argentina/Cordoba
  [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=241914112,total=395313152
{noformat}

A master failure (again, history is gone, so this is the email notification):


Re: [VOTE] Release Lucene/Solr 6.4.2 RC1

2017-03-06 Thread Ishan Chattopadhyaya
Thanks everyone for testing. The vote has passed, and I shall work on
releasing the artefacts today.

On Sat, Mar 4, 2017 at 8:25 PM, Yonik Seeley  wrote:

> +1
>
> -Yonik
>
>
> On Wed, Mar 1, 2017 at 3:42 PM, Ishan Chattopadhyaya 
> wrote:
> > Please vote for release candidate 1 for Lucene/Solr 6.4.2
> >
> > The artifacts can be downloaded from:
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.4.2-RC1-
> rev34a975ca3d4bd7fa121340e5bcbf165929e0542f
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.4.2-RC1-
> rev34a975ca3d4bd7fa121340e5bcbf165929e0542f
> >
> > Here's my +1
> > SUCCESS! [0:52:41.429385]
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-5111) Change SpellCheckComponent default analyzer when queryAnalyzerFieldType is not defined

2017-03-06 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-5111:

Component/s: spellchecker

> Change SpellCheckComponent default analyzer when queryAnalyzerFieldType is 
> not defined
> --
>
> Key: SOLR-5111
> URL: https://issues.apache.org/jira/browse/SOLR-5111
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Varun Thacker
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-5111.patch, SOLR-5111.patch
>
>
> In the collection1 example, the SpellCheckComponent uses the query analyzer 
> of "text_general" FieldType. If "queryAnalyzerFieldType" is removed from the 
> configuration a WhitespaceAnalyzer is used by default.
> I suggest we could change the default to SimpleAnalyzer so that "foo" and 
> "Foo" give the same results, and log that the analyzer is missing.
> Also, are there more places in solrconfig that have dependencies on the 
> schema like this?
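The behavioral difference behind the proposal can be sketched in plain Java. This is an illustration only, not Lucene's implementation: Lucene's WhitespaceAnalyzer tokenizes on whitespace and preserves case, while SimpleAnalyzer tokenizes on non-letters and lowercases, which is why "foo" and "Foo" would then produce the same tokens. The class and method names below are made up for the sketch.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Illustrative sketch (not Lucene code): approximates what the two
// analyzers do to input text so the case-sensitivity difference is visible.
public class AnalyzerSketch {
    // WhitespaceAnalyzer-like: split on whitespace, keep case as-is.
    static List<String> whitespaceTokens(String text) {
        return Arrays.asList(text.split("\\s+"));
    }

    // SimpleAnalyzer-like: split on any non-letter run, lowercase tokens.
    static List<String> simpleTokens(String text) {
        return Arrays.stream(text.split("[^\\p{L}]+"))
                .filter(t -> !t.isEmpty())
                .map(t -> t.toLowerCase(Locale.ROOT))
                .collect(java.util.stream.Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(whitespaceTokens("Foo bar")); // [Foo, bar]
        System.out.println(simpleTokens("Foo bar"));     // [foo, bar]
    }
}
```

With the whitespace behavior, a spellcheck lookup for "Foo" and "foo" sees two different tokens; with the simple behavior both normalize to "foo".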



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.4 - Build # 23 - Still Unstable

2017-03-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/23/

8 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([1580E339465FAA17]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([1580E339465FAA17]:0)


FAILED:  org.apache.solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest.test

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([2B1D0FE136F8BB26:A349303B9804D6DE]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:187)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:850)
at 
org.apache.solr.cloud.TlogReplayBufferedWhileIndexingTest.test(TlogReplayBufferedWhileIndexingTest.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
[...]

[jira] [Commented] (LUCENE-7695) Unknown query type SynonymQuery in ComplexPhraseQueryParser

2017-03-06 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897433#comment-15897433
 ] 

Markus Jelsma commented on LUCENE-7695:
---

Hello [~mkhludnev], your patch works nicely!

> Unknown query type SynonymQuery in ComplexPhraseQueryParser
> ---
>
> Key: LUCENE-7695
> URL: https://issues.apache.org/jira/browse/LUCENE-7695
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 6.4
>Reporter: Markus Jelsma
> Fix For: master (7.0), 6.5, 6.4.2
>
> Attachments: LUCENE-7695.patch, LUCENE-7695.patch, LUCENE-7695.patch, 
> LUCENE-7695.patch, LUCENE-7695.patch
>
>
> We sometimes receive this exception using ComplexPhraseQueryParser via Solr 
> 6.4.0. Some terms do fine, others don't.
> This query:
> {code}
> {!complexphrase}owmskern_title:"vergunning" 
> {code}
> returns results just fine. The next one:
> {code}
> {!complexphrase}owmskern_title:"vergunningen~"
> {code}
> Gives results as well! But this one:
> {code}
> {!complexphrase}owmskern_title:"vergunningen"
> {code}
> Returns the following exception:
> {code}
> IllegalArgumentException: Unknown query type 
> "org.apache.lucene.search.SynonymQuery" found in phrase query string 
> "algemene plaatselijke verordening"
> at 
> org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:313)
> at 
> org.apache.lucene.search.BooleanQuery.rewrite(BooleanQuery.java:265)
> at 
> org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:684)
> at 
> org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:734)
> at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
> at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:241)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1919)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1636)
> at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:611)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:533)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
> {code}
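The contents of the LUCENE-7695 patch are not shown here, but one common way a phrase-query rewrite can cope with a SynonymQuery (instead of throwing "Unknown query type") is to expand the synonym group into a disjunction of its terms. The sketch below is hypothetical and models a synonym query simply as its list of terms; it is not the actual patch.

```java
import java.util.List;

// Hypothetical sketch, not the LUCENE-7695 patch: expand a synonym group
// into an OR-disjunction rather than rejecting it during rewrite.
public class SynonymRewriteSketch {
    // Model a SynonymQuery as the list of terms it matches,
    // and rewrite it to an equivalent Boolean disjunction string.
    static String rewriteSynonyms(List<String> synonymTerms) {
        return "(" + String.join(" OR ", synonymTerms) + ")";
    }

    public static void main(String[] args) {
        System.out.println(rewriteSynonyms(List.of("vergunning", "vergunningen")));
        // (vergunning OR vergunningen)
    }
}
```

The point of the expansion is that a disjunction of TermQuery-like clauses is a query shape the phrase rewrite already knows how to handle.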






[jira] [Commented] (SOLR-9858) Collect aggregated metrics from nodes and shard leaders in overseer

2017-03-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15897427#comment-15897427
 ] 

Shalin Shekhar Mangar commented on SOLR-9858:
-

I tried the patch again, Andrzej, but I found another bug: instead of the core 
name, the core node name is now used for the core-specific registry, e.g. I see 
{{solr.core.gettingstarted.shard1.core_node1}} in the output of /admin/metrics.

I have a few more requests:
# In this patch, the leader metrics are exposed at e.g. 
{{solr.core.gettingstarted.shard1.leader}} and 
{{solr.core.gettingstarted.shard2.leader}} -- there is no reason to have "core" 
in this name. Can we rename it to e.g. 
{{solr.collection.gettingstarted.shard1.leader}}?
# The aggregated cluster-level metrics are exposed at {{solr.overseer}}. Can we 
rename it to {{solr.cluster}} to make its purpose explicit? Another reason is 
that when we add overseer-specific metrics in future, such as the ones from the 
Overseer Status collection API, they will have to be shoehorned into the same 
registry, and it will be difficult for users to distinguish what is aggregated 
from the cluster from what is overseer-specific.
# Can we disable the aggregation by default? Users should be able to 
enable/disable it via the CLUSTERPROP API for the cluster-level metrics and via 
the modify-collection API for the leader-level metrics. If you want, we can 
have just one switch for both and separate them out later.
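The registry renames proposed above amount to a naming convention, which can be sketched as follows. This is a hypothetical illustration of the proposed names only, not Solr's actual registry code; the class and method names are made up.

```java
// Hypothetical sketch of the proposed metric-registry naming (not Solr
// code): leader metrics move from a "solr.core..." prefix to a
// "solr.collection..." prefix, and cluster-wide aggregates would live
// under "solr.cluster" instead of "solr.overseer".
public class RegistryNames {
    // Proposed leader-metrics registry: solr.collection.<collection>.<shard>.leader
    static String leaderRegistry(String collection, String shard) {
        return "solr.collection." + collection + "." + shard + ".leader";
    }

    // Proposed home for aggregated cluster-level metrics.
    static String clusterRegistry() {
        return "solr.cluster";
    }

    public static void main(String[] args) {
        System.out.println(leaderRegistry("gettingstarted", "shard1"));
        // solr.collection.gettingstarted.shard1.leader
    }
}
```

Keeping "collection" rather than "core" in the leader registry name reflects that leader metrics are a property of the shard within a collection, not of any one core.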

> Collect aggregated metrics from nodes and shard leaders in overseer
> ---
>
> Key: SOLR-9858
> URL: https://issues.apache.org/jira/browse/SOLR-9858
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (7.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Attachments: SOLR-9858.patch, SOLR-9858.patch
>
>
> Overseer can collect metrics from Solr nodes and shard leaders in order to 
> have a notion of the indexing / query / replication / system load on each 
> node, shard and its replicas. This information then can be used for cluster 
> (auto)scaling.





