[jira] [Comment Edited] (LUCENE-8712) Polygon2D does not detect crossings in some cases

2019-03-04 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781690#comment-16781690
 ] 

Ignacio Vera edited comment on LUCENE-8712 at 3/5/19 7:55 AM:
--

-I had a go at this. I think the issue is in the method 
GeoUtils#lineRelateLine. Currently it returns {{Relation.CELL_INSIDE_QUERY}} 
if either of the segments terminates on the other.- 

-I think the logic should only return that if the first segment terminates on 
the other; otherwise it should return {{Relation.CELL_CROSSES_QUERY}}.-

 

I took a closer look and it seems this approach will still mishandle some 
situations.


was (Author: ivera):
I had a go at this. I think the issue is in the method 
GeoUtils#lineRelateLine. Currently it returns {{Relation.CELL_INSIDE_QUERY}} 
if either of the segments terminates on the other. 

I think the logic should only return that if the first segment terminates on 
the other; otherwise it should return {{Relation.CELL_CROSSES_QUERY}}.

> Polygon2D does not detect crossings in some cases
> -
>
> Key: LUCENE-8712
> URL: https://issues.apache.org/jira/browse/LUCENE-8712
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8712.patch
>
>
> Polygon2D does not detect a crossing if the triangle crosses through points of 
> the polygon and none of the triangle's points are inside it. For example:
>  
> {code:java}
> public void testLineCrossingPolygonPoints() {
>   Polygon p = new Polygon(new double[] {0, -1, 0, 1, 0}, new double[] {-1, 0, 
> 1, 0, -1});
>   Polygon2D polygon2D = Polygon2D.create(p);
>   PointValues.Relation rel = 
> polygon2D.relateTriangle(GeoEncodingUtils.decodeLongitude(GeoEncodingUtils.encodeLongitude(-1.5)),
>   GeoEncodingUtils.decodeLatitude(GeoEncodingUtils.encodeLatitude(0)),
>   GeoEncodingUtils.decodeLongitude(GeoEncodingUtils.encodeLongitude(1.5)),
>   GeoEncodingUtils.decodeLatitude(GeoEncodingUtils.encodeLatitude(0)),
>   
> GeoEncodingUtils.decodeLongitude(GeoEncodingUtils.encodeLongitude(-1.5)),
>   GeoEncodingUtils.decodeLatitude(GeoEncodingUtils.encodeLatitude(0)));
>   assertEquals(PointValues.Relation.CELL_CROSSES_QUERY, rel);
> }{code}
> [~nknize] you might want to look at this as I am not sure what to do.
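The rule proposed in the comment above (report "inside" only when the first segment terminates on the second, and treat any other contact, including passing exactly through an endpoint, as a crossing) can be illustrated with plain orientation tests. This is a hedged, self-contained sketch with hypothetical names; it is not the actual GeoUtils#lineRelateLine code and assumes exact arithmetic:

```java
// Hedged sketch (hypothetical names, not Lucene's GeoUtils): relate segment
// a->b ("first") to segment c->d ("second").
public class SegmentRelateSketch {

    enum Relation { DISJOINT, CROSSES, FIRST_ENDS_ON_SECOND }

    // Twice the signed area of triangle (a, b, c): 0 means collinear.
    static double orient(double ax, double ay, double bx, double by,
                         double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // True when point p lies on segment a->b (collinear and inside its box).
    static boolean onSegment(double px, double py, double ax, double ay,
                             double bx, double by) {
        return orient(ax, ay, bx, by, px, py) == 0
                && Math.min(ax, bx) <= px && px <= Math.max(ax, bx)
                && Math.min(ay, by) <= py && py <= Math.max(ay, by);
    }

    static Relation relate(double ax, double ay, double bx, double by,
                           double cx, double cy, double dx, double dy) {
        // Only the FIRST segment terminating on the second yields "inside".
        if (onSegment(ax, ay, cx, cy, dx, dy)
                || onSegment(bx, by, cx, cy, dx, dy)) {
            return Relation.FIRST_ENDS_ON_SECOND;
        }
        double o1 = orient(ax, ay, bx, by, cx, cy);
        double o2 = orient(ax, ay, bx, by, dx, dy);
        double o3 = orient(cx, cy, dx, dy, ax, ay);
        double o4 = orient(cx, cy, dx, dy, bx, by);
        // Proper crossing, or the second segment ending on / passing through
        // the first: both count as crossings under the proposed rule.
        if ((o1 > 0) != (o2 > 0) && (o3 > 0) != (o4 > 0)) {
            return Relation.CROSSES;
        }
        if (onSegment(cx, cy, ax, ay, bx, by)
                || onSegment(dx, dy, ax, ay, bx, by)) {
            return Relation.CROSSES;
        }
        return Relation.DISJOINT;
    }

    public static void main(String[] args) {
        // The failing test's horizontal line (-1.5,0)->(1.5,0) against the
        // polygon edge (-1,0)->(0,1): it passes through vertex (-1,0).
        System.out.println(relate(-1.5, 0, 1.5, 0, -1, 0, 0, 1)); // CROSSES
    }
}
```

For the configuration from testLineCrossingPolygonPoints, this sketch reports CROSSES, which is the relation the issue says Polygon2D fails to detect.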



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] apoorvprecisely closed pull request #588: feat(facet): support interval faceting for json facets

2019-03-04 Thread GitBox
apoorvprecisely closed pull request #588: feat(facet): support interval 
faceting for json facets
URL: https://github.com/apache/lucene-solr/pull/588
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[JENKINS] Lucene-Solr-NightlyTests-8.0 - Build # 12 - Failure

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.0/12/

1 tests failed.
FAILED:  
org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([1723FA2E9E9E8C99:2491D2EA9329562E]:0)
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57)
at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168)
at 
org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154)
at 
org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
at 
org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:126)
at 
org.apache.lucene.codecs.simpletext.SimpleTextUtil.writeNewline(SimpleTextUtil.java:53)
at 
org.apache.lucene.codecs.simpletext.SimpleTextFieldsWriter.newline(SimpleTextFieldsWriter.java:199)
at 
org.apache.lucene.codecs.simpletext.SimpleTextFieldsWriter.write(SimpleTextFieldsWriter.java:146)
at 
org.apache.lucene.codecs.simpletext.SimpleTextFieldsWriter.write(SimpleTextFieldsWriter.java:61)
at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:244)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:139)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4459)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4054)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2155)
at 
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3455)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3407)
at 
org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:220)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:192)
at 
org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit(TestDocTermOrdsUninvertLimit.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)




Build Log:
[...truncated 13761 lines...]
   [junit4] Suite: org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestDocTermOrdsUninvertLimit -Dtests.method=testTriggerUnInvertLimit 
-Dtests.seed=1723FA2E9E9E8C99 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=sk -Dtests.timezone=Europe/Amsterdam -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   300s J2 | 
TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([1723FA2E9E9E8C99:2491D2EA9329562E]:0)
   [junit4]>at 
org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84)
   [junit4]>at 
org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57)
   [junit4]>at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168)
   [junit4]>at 
org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154)
   [junit4]>at 
org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
   [junit4]>at 
org.apache.lucene.store.MockIndexOutputWrapper.writeByte(MockIndexOutputWrapper.java:126)
   [junit4]>at 

[jira] [Commented] (SOLR-13271) Implement a read-only mode for a collection

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784176#comment-16784176
 ] 

Shalin Shekhar Mangar commented on SOLR-13271:
--

I don't think using indexEnabled is correct here. A full index replication, at a 
time when the disk does not have enough space to download the full index, sets 
indexEnabled=false. Before the changes in DistributedURP, any updates coming 
from the leader at such a time would have been buffered, but now they will fail 
with a forbidden error.

> Implement a read-only mode for a collection
> ---
>
> Key: SOLR-13271
> URL: https://issues.apache.org/jira/browse/SOLR-13271
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.x, master (9.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13271.patch, SOLR-13271.patch
>
>
> Spin-off from SOLR-11127. In some scenarios it's useful to be able to block 
> any index updates to a collection, while still being able to search its 
> contents.
> Currently the scope of this issue is SolrCloud, ie. standalone Solr will not 
> be supported.






Re: [jira] [Resolved] (PYLUCENE-46) __dir__ module parameter

2019-03-04 Thread Andi Vajda
You're welcome !

Andi..

> On Mar 4, 2019, at 22:29, Petrus Hyvönen  wrote:
> 
> Thanks Andi..
> 
>> On Mon, Mar 4, 2019 at 11:44 PM Andi Vajda (JIRA)  wrote:
>> 
>> 
>> [
>> https://issues.apache.org/jira/browse/PYLUCENE-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>> ]
>> 
>> Andi Vajda resolved PYLUCENE-46.
>> 
>>Resolution: Fixed
>> 
>>> __dir__ module paramter
>>> 
>>> 
>>>Key: PYLUCENE-46
>>>URL: https://issues.apache.org/jira/browse/PYLUCENE-46
>>>Project: PyLucene
>>> Issue Type: Bug
>>>Environment: Windows, Python3.7, JCC 3.4
>>>   Reporter: Petrus Hyvönen
>>>   Priority: Minor
>>> 
>>> Hi,
>>> Since Python 3.7 the __dir__ module attribute is part of the API to
>> return the values that shall be presented from the "dir" python command.
>>> [https://www.python.org/dev/peps/pep-0562/]
>>> [https://docs.python.org/3/reference/datamodel.html#object.__dir__]
>>> The top level module of wrapped libraries uses this variable name for the
>> path to the module location, which confuses some IDE's. "TypeError: 'str'
>> object is not callable"
>>> The best would be if this module __dir__() returned the names of the top
>> level wrapped classes, but renaming the variable should solve the IDE
>> problem.
>>> 
>> 
>> 
>> 
>> --
>> This message was sent by Atlassian JIRA
>> (v7.6.3#76005)
>> 
> 
> 
> -- 
> _
> Petrus Hyvönen, Uppsala, Sweden
> Mobile Phone/SMS:+46 73 803 19 00
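The PEP 562 behavior discussed in this thread can be demonstrated with a small, self-contained Python sketch (Python 3.7+; the module and class names here are hypothetical, not from JCC): when a module's __dir__ is a plain string such as a path, dir() raises the TypeError quoted in the issue, and when it is a callable it supplies the names dir() returns.

```python
import types

# Stand-in for a wrapped top-level module (names are hypothetical).
mod = types.ModuleType("wrapped_lib")
mod.SearchEngine = type("SearchEngine", (), {})  # a "wrapped class"
mod._jcc_internal = "hidden helper"

# Problem case: __dir__ holding a plain string (e.g. a filesystem path).
mod.__dir__ = "/path/to/wrapped_lib"
try:
    dir(mod)
except TypeError as e:
    print(e)  # 'str' object is not callable

# PEP 562 fix: make __dir__ a callable returning the public names.
mod.__dir__ = lambda: ["SearchEngine"]
print(dir(mod))  # ['SearchEngine']
```

This is why renaming the path variable fixes the IDE problem, and why a __dir__() that returns the wrapped class names would be the nicer behavior.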



[jira] [Updated] (SOLR-13272) Interval facet support for JSON faceting

2019-03-04 Thread Apoorv Bhawsar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Bhawsar updated SOLR-13272:
--
Description: 
Interval faceting is supported in the classical facet component but not in 
JSON facet requests.
 In cases of block joins and aggregations, this would be helpful.

Assuming request format -
{code:java}
json.facet={pubyear:{type : interval,field : 
pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
{code}
 
 PR https://github.com/apache/lucene-solr/pull/597

  was:
Interval facet is supported in classical facet component but has no support in 
json facet requests.
 In cases of block join and aggregations, this would be helpful

Assuming request format -
{code:java}
json.facet={pubyear:{type : interval,field : 
pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
{code}
 
 PR 
[https://github.com/apache/lucene-solr/pull/597|https://github.com/apache/lucene-solr/pull/593]


> Interval facet support for JSON faceting
> 
>
> Key: SOLR-13272
> URL: https://issues.apache.org/jira/browse/SOLR-13272
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Apoorv Bhawsar
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Interval faceting is supported in the classical facet component but not 
> in JSON facet requests.
>  In cases of block joins and aggregations, this would be helpful.
> Assuming request format -
> {code:java}
> json.facet={pubyear:{type : interval,field : 
> pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
> {code}
>  
>  PR https://github.com/apache/lucene-solr/pull/597






Re: [jira] [Resolved] (PYLUCENE-46) __dir__ module parameter

2019-03-04 Thread Petrus Hyvönen
Thanks Andi..

On Mon, Mar 4, 2019 at 11:44 PM Andi Vajda (JIRA)  wrote:

>
>  [
> https://issues.apache.org/jira/browse/PYLUCENE-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> ]
>
> Andi Vajda resolved PYLUCENE-46.
> 
> Resolution: Fixed
>
> > __dir__ module paramter
> > 
> >
> > Key: PYLUCENE-46
> > URL: https://issues.apache.org/jira/browse/PYLUCENE-46
> > Project: PyLucene
> >  Issue Type: Bug
> > Environment: Windows, Python3.7, JCC 3.4
> >Reporter: Petrus Hyvönen
> >Priority: Minor
> >
> > Hi,
> > Since Python 3.7 the __dir__ module attribute is part of the API to
> return the values that shall be presented from the "dir" python command.
> > [https://www.python.org/dev/peps/pep-0562/]
> > [https://docs.python.org/3/reference/datamodel.html#object.__dir__]
> > The top level module of wrapped libraries uses this variable name for the
> path to the module location, which confuses some IDE's. "TypeError: 'str'
> object is not callable"
> > The best would be if this module __dir__() returned the names of the top
> level wrapped classes, but renaming the variable should solve the IDE
> problem.
> >
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>


-- 
_
Petrus Hyvönen, Uppsala, Sweden
Mobile Phone/SMS:+46 73 803 19 00


[jira] [Updated] (SOLR-13211) Fix the position of color legend in Cloud UI.

2019-03-04 Thread Junya Usui (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junya Usui updated SOLR-13211:
--
Fix Version/s: (was: 7.3.1)

> Fix the position of color legend in Cloud UI.
> -
>
> Key: SOLR-13211
> URL: https://issues.apache.org/jira/browse/SOLR-13211
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Junya Usui
>Priority: Major
> Attachments: SOLR-13211.patch, fix_legend_position.pdf
>
>
> This patch contains two display enhancements which make the legend easier to 
> read, especially when the number of Solr nodes is larger than 40.
>  # In the Cloud -> Graph page,
> it is difficult to read the server names and the legend since they overlap. 
> (Page.1)
>  #  In the Cloud -> Graph (Radial) page,
> the horizontal distance between the graph and the legend is too large.
> (Page.2)
> These issues have existed for a long time. 
> Ref: 
> https://issues.apache.org/jira/browse/SOLR-3915?focusedCommentId=13472876=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13472876
> This patch provides a solution by modifying only cloud.css. The legend 
> is moved outside the graph so that it stays at the bottom-left 
> corner without overlapping. (Page.3-4)






[jira] [Commented] (SOLR-11558) It would be nice if the Graph section of the Cloud tab in the Admin UI could give some more information about the replicas of a collection

2019-03-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784118#comment-16784118
 ] 

Tomás Fernández Löbbe commented on SOLR-11558:
--

bq. The same could be done to display some extra information of the shard (like 
active/inactive, routing range) and the collection (autoAddReplicas, 
maxShardsPerNode, configset, etc)
This last part is not done yet. I'm fine with closing this Jira and opening a 
new one for the last bits, or with keeping this one open; either works for me.

> It would be nice if the Graph section of the Cloud tab in the Admin UI could 
> give some more information about the replicas of a collection
> --
>
> Key: SOLR-11558
> URL: https://issues.apache.org/jira/browse/SOLR-11558
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
>Assignee: Erick Erickson
>Priority: Minor
>
> Right now it lists the nodes where they are hosted, the state, and whether or 
> not they are the leader. I usually find the need to see more, like the replica 
> and core names and the replica type, and I find myself moving between this 
> view and the “tree” view. 
> I thought about two options:
> # A mouse over action that lists the additional information (after some time 
> of holding the mouse pointer on top of the replica)
> # Modify the click action to display this information (right now the click 
> sends you to the admin UI of that particular replica)
> The same could be done to display some extra information of the shard (like 
> active/inactive, routing range) and the collection (autoAddReplicas, 
> maxShardsPerNode, configset, etc)






[jira] [Updated] (SOLR-13272) Interval facet support for JSON faceting

2019-03-04 Thread Apoorv Bhawsar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Bhawsar updated SOLR-13272:
--
Description: 
Interval faceting is supported in the classical facet component but not in 
JSON facet requests.
 In cases of block joins and aggregations, this would be helpful.

Assuming request format -
{code:java}
json.facet={pubyear:{type : interval,field : 
pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
{code}
 
 PR 
[https://github.com/apache/lucene-solr/pull/597|https://github.com/apache/lucene-solr/pull/593]

  was:
Interval facet is supported in classical facet component but has no support in 
json facet requests.
 In cases of block join and aggregations, this would be helpful

Assuming request format -
{code:java}
json.facet={pubyear:{type : range,field : 
pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
{code}
 
 PR 
[https://github.com/apache/lucene-solr/pull/597|https://github.com/apache/lucene-solr/pull/593]


> Interval facet support for JSON faceting
> 
>
> Key: SOLR-13272
> URL: https://issues.apache.org/jira/browse/SOLR-13272
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Apoorv Bhawsar
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Interval faceting is supported in the classical facet component but not 
> in JSON facet requests.
>  In cases of block joins and aggregations, this would be helpful.
> Assuming request format -
> {code:java}
> json.facet={pubyear:{type : interval,field : 
> pubyear_i,intervals:[{key:"2000-2200",value:"[2000,2200]"}]}}
> {code}
>  
>  PR 
> [https://github.com/apache/lucene-solr/pull/597|https://github.com/apache/lucene-solr/pull/593]






[JENKINS] Lucene-Solr-repro - Build # 2968 - Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2968/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/302/consoleText

[repro] Revision: 76aae1cc1b6c69e7adbd65d39e8e9d0db2ace7f6

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=9ED1EE114564C00F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=uk 
-Dtests.timezone=Universal -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7bfe7b265a4091048707e782657f622e937b6e70
[repro] git fetch
[repro] git checkout 76aae1cc1b6c69e7adbd65d39e8e9d0db2ace7f6

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SolrRrdBackendFactoryTest
[repro] ant compile-test

[...truncated 3583 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.SolrRrdBackendFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=9ED1EE114564C00F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=uk -Dtests.timezone=Universal 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 88 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro] git checkout 7bfe7b265a4091048707e782657f622e937b6e70

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-Tests-7.x - Build # 1260 - Still Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1260/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 5 object(s) that were not released!!! [InternalHttpClient, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:267)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1202) 
 at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:696) 
 at org.apache.solr.core.SolrCore.(SolrCore.java:1000)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:770)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:359)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:738)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:967)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:508)  
at org.apache.solr.core.SolrCore.(SolrCore.java:959)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:874)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1178)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:690)  
at 

[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784061#comment-16784061
 ] 

ASF subversion and git services commented on SOLR-13234:


Commit 2726c8ce8e80f4b51498410eb212fb8d8066ca5e in lucene-solr's branch 
refs/heads/branch_7_7 from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2726c8c ]

SOLR-13234: Prometheus Metric Exporter not threadsafe.

This changes the prometheus exporter to collect metrics from Solr on a fixed 
interval controlled by this tool and prevents concurrent collections. This 
change also improves performance slightly by using the cluster state instead of 
sending multiple HTTP requests to each node to lookup all the cores.

This closes #571.

(cherry picked from commit 1f9c767aac76ac1618ccaffce42524e109335fe8)


> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives a HTTP request 
> from Prometheus. Prometheus sends this request, on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, this results in concurrent metric 
> collection occurring in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn’t appear to be thread safe; for instance, you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable. 
>  2. Collect metrics from Solr on an interval which is controlled by the Solr 
> Exporter and cache the metric samples to return during Prometheus scraping. 
> Metric collection can be expensive, for example executing arbitrary Solr 
> searches, so it's not ideal to allow concurrent metric collection on an 
> interval which is not defined by the Solr Exporter.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to lookup all the cores.
> I'm currently finishing up these changes which I'll submit as a PR.
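The approach in the commit above (collect on a fixed interval controlled by the exporter, answer scrapes from a cache) can be sketched in a few lines of Java. This is a hedged illustration; the class, method, and metric names are hypothetical and this is not the actual SolrCollector code:

```java
// Hedged sketch with hypothetical names, not the actual exporter code:
// metrics are collected on a fixed interval owned by the exporter, and
// Prometheus scrapes are answered from an immutable cached snapshot, so a
// slow collection can never overlap with (or be triggered by) a scrape.
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class CachedCollectorSketch {

    private final AtomicReference<List<String>> snapshot =
            new AtomicReference<>(List.of());
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Stand-in for the expensive work (HTTP calls, arbitrary searches, ...).
    List<String> collectFromSolr() {
        return List.of("solr_metrics_example 1.0");
    }

    void refreshOnce() {
        snapshot.set(collectFromSolr());  // publish an immutable snapshot
    }

    // A single-threaded scheduler guarantees collections never overlap.
    void start(long periodSeconds) {
        scheduler.scheduleAtFixedRate(this::refreshOnce, 0, periodSeconds,
                TimeUnit.SECONDS);
    }

    // Called on every Prometheus scrape: cheap, thread-safe, never collects.
    List<String> scrape() {
        return snapshot.get();
    }
}
```

The key design point is that the scrape path only reads a reference, so a Prometheus scrape interval shorter than the collection time can no longer cause concurrent collections.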






[jira] [Resolved] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-13234.
--
Resolution: Fixed

The fix will be released with Solr 8.1.0 and 7.7.2 releases.

Thanks Danyal!

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives an HTTP request 
> from Prometheus. Prometheus sends this request on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, concurrent metric collection occurs in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn't appear to be thread-safe; for instance, you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable. 
>  2. Collect metrics from Solr on an interval which is controlled by the Solr 
> Exporter, and cache the metric samples to return during Prometheus scraping. 
> Metric collection can be expensive (for example, executing arbitrary Solr 
> searches), so it's not ideal to allow concurrent metric collection on an 
> interval that is not defined by the Solr Exporter.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to look up all the cores.
> I'm currently finishing up these changes, which I'll submit as a PR.
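
The unsafe map access described above can be illustrated with `HashMap`'s fail-fast behavior. This is a deliberately single-threaded stand-in for the overlapping-scrape race (names are invented); under real concurrency the behavior is undefined and can silently corrupt the map instead of throwing:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Mutating a HashMap while it is being iterated (as happens when a second
// scrape starts collecting before the first finishes) trips the fail-fast
// iterator check.
public class MapRaceDemo {

    // Returns true if the fail-fast iterator detected the modification.
    static boolean demonstrateRace() {
        Map<String, Double> metrics = new HashMap<>();
        metrics.put("requests", 1.0);
        metrics.put("errors", 0.0);
        try {
            for (String key : metrics.keySet()) {
                metrics.put("latency", 5.0); // second "collection" mutates mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("detected: " + demonstrateRace());
    }
}
```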






[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784058#comment-16784058
 ] 

ASF subversion and git services commented on SOLR-13234:


Commit 9f9d65d6ec9b54bf903702620c41acc75b481809 in lucene-solr's branch 
refs/heads/branch_7x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9f9d65d ]

SOLR-13234: Adding CHANGES.txt entry under 7.8.0 section





[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784057#comment-16784057
 ] 

ASF subversion and git services commented on SOLR-13234:


Commit e1eeafb5dc077976646b06f4cba4d77534963fa9 in lucene-solr's branch 
refs/heads/branch_7x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e1eeafb ]

SOLR-13234: Prometheus Metric Exporter not threadsafe.

This changes the prometheus exporter to collect metrics from Solr on a fixed 
interval controlled by this tool and prevents concurrent collections. This 
change also improves performance slightly by using the cluster state instead of 
sending multiple HTTP requests to each node to lookup all the cores.

This closes #571.

(cherry picked from commit 1f9c767aac76ac1618ccaffce42524e109335fe8)





[jira] [Commented] (SOLR-13284) NPE on passing Invalid response writer as request parameter

2019-03-04 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784042#comment-16784042
 ] 

Munendra S N commented on SOLR-13284:
-

[^SOLR-13284.patch]
Reuploading the patch to rerun the tests.
Based on this 
[comment|https://issues.apache.org/jira/browse/SOLR-13268?focusedCommentId=16782736=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16782736],
 the failure is not related to the patch. Even without the patch, the failure is reproducible.

> NPE on passing Invalid response writer as request parameter
> ---
>
> Key: SOLR-13284
> URL: https://issues.apache.org/jira/browse/SOLR-13284
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: SOLR-13284.patch, SOLR-13284.patch
>
>
> The V1 (old) API falls back to the default response writer when a non-existent 
> response writer is specified in the request, whereas the V2 API fails with an 
> NPE with the stack trace below:
> {noformat}
> {trace=java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:776)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:502)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
>   at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
>   at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
>   at 
> org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
>   at java.lang.Thread.run(Thread.java:745)
> ,code=500}
> {noformat}
> h5. Possible Solutions :
>  * V2 API should fall back to default response writer like V1 API
>  * V2 API should fail with proper error message and error code
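
A minimal sketch of the first option. The registry and writer names below are placeholders, not Solr's actual `QueryResponseWriter` plumbing: the idea is simply to resolve the `wt` parameter null-safely and fall back to the default instead of letting a null writer reach `writeResponse`:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical response-writer registry illustrating the V1-style fallback
// that the V2 code path lacks.
public class WriterLookup {
    private final Map<String, String> writers = new HashMap<>();
    private final String defaultType;

    public WriterLookup(String defaultType) {
        this.defaultType = defaultType;
        writers.put("json", "JSONResponseWriter");
        writers.put("xml", "XMLResponseWriter");
    }

    // Never returns null: an unknown (or absent) wt resolves to the default
    // writer, so no NullPointerException can occur downstream.
    public String resolve(String wt) {
        String writer = wt == null ? null : writers.get(wt);
        return writer != null ? writer : writers.get(defaultType);
    }
}
```

For example, `resolve("nonexistent")` would return the default writer rather than null, matching the V1 behavior described above.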

[jira] [Updated] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13234:
-
Attachment: SOLR-13234-branch_7x.patch




[jira] [Updated] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13234:
-
Attachment: (was: SOLR-13234-branch_7x.patch)




[jira] [Updated] (SOLR-13284) NPE on passing Invalid response writer as request parameter

2019-03-04 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13284:

Attachment: SOLR-13284.patch




[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 1014 - Unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/1014/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestNRTOpen

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001\init-core-data-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001

at __randomizedtesting.SeedInfo.seed([83ECD4C20377A6F5]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13286 lines...]
   [junit4] Suite: org.apache.solr.core.TestNRTOpen
   [junit4]   2> 961112 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.SolrTestCaseJ4 
SecureRandom sanity checks: test.solr.allowed.securerandom=null & 
java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.core.TestNRTOpen_83ECD4C20377A6F5-001\init-core-data-001
   [junit4]   2> 961114 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.SolrTestCaseJ4 
Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 961115 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.SolrTestCaseJ4 
Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 961116 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.SolrTestCaseJ4 
initCore
   [junit4]   2> 961117 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/core/src/test-files/solr/collection1/lib,
 
/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 961171 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 7.8.0
   [junit4]   2> 961181 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.s.IndexSchema 
[null] Schema name=minimal
   [junit4]   2> 961188 WARN  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.s.IndexSchema 
no uniqueKey specified in schema.
   [junit4]   2> 961188 INFO  
(SUITE-TestNRTOpen-seed#[83ECD4C20377A6F5]-worker) [] o.a.s.s.IndexSchema 
Loaded schema minimal/1.1 with uniqueid field null
   [junit4]   2> 961627 INFO  

[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784016#comment-16784016
 ] 

Shalin Shekhar Mangar commented on SOLR-13234:
--

Here is a patch that applies on branch_7x. I'll commit after running precommit 
and tests.




[JENKINS] Lucene-Solr-repro - Build # 2966 - Still Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2966/

[...truncated 47 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1259/consoleText

[repro] Revision: 76aae1cc1b6c69e7adbd65d39e8e9d0db2ace7f6

[repro] Repro line:  ant test  -Dtestcase=LeaderTragicEventTest 
-Dtests.method=test -Dtests.seed=726D1E6B0EC13FA5 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-ES -Dtests.timezone=Africa/Lusaka 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7bfe7b265a4091048707e782657f622e937b6e70
[repro] git fetch
[repro] git checkout 76aae1cc1b6c69e7adbd65d39e8e9d0db2ace7f6

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   LeaderTragicEventTest
[repro] ant compile-test

[...truncated 3583 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.LeaderTragicEventTest" -Dtests.showOutput=onerror  
-Dtests.seed=726D1E6B0EC13FA5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-ES -Dtests.timezone=Africa/Lusaka -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1042 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.LeaderTragicEventTest
[repro] git checkout 7bfe7b265a4091048707e782657f622e937b6e70

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 36 - Still Failing

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/36/

No tests ran.

Build Log:
[...truncated 23464 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2493 links (2037 relative) to 3314 anchors in 250 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.1.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml


[jira] [Updated] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2019-03-04 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-13234:
-
Attachment: SOLR-13234-branch_7x.patch

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 8.x, master (9.0)
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives an HTTP request 
> from Prometheus. Prometheus sends this request on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, concurrent metric collection occurs in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn't appear to be thread-safe; for instance, you can get 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter process becomes nondeterministic; we've 
> observed NPEs and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable. 
>  2. Collect metrics from Solr on an interval controlled by the Solr Exporter 
> and cache the metric samples to return during Prometheus scraping. Metric 
> collection can be expensive (for example, executing arbitrary Solr searches), 
> so it's not ideal to allow concurrent metric collection, or collection on an 
> interval that the Solr Exporter does not define.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to look up all the cores.
> I'm currently finishing up these changes, which I'll submit as a PR.
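The caching approach in point 2 can be sketched as a single-threaded scheduled collector whose latest sample snapshot is handed out on scrape. This is a minimal illustration, not the actual patch; the class name `CachingCollector`, the method `collectFromSolr`, and the string sample format are all hypothetical.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class CachingCollector {
    // Latest snapshot of metric samples; scrapes only ever read this reference.
    private final AtomicReference<List<String>> cached =
            new AtomicReference<>(Collections.emptyList());
    // Single-threaded scheduler: at most one collection runs at a time.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "metrics-collector");
                t.setDaemon(true);
                return t;
            });

    public void start(long intervalSeconds) {
        // Collection happens on the exporter's own interval, never on scrape.
        scheduler.scheduleAtFixedRate(
                () -> cached.set(collectFromSolr()), 0, intervalSeconds, TimeUnit.SECONDS);
    }

    // A scrape returns the cached samples immediately, however slow collection is.
    public List<String> scrape() {
        return cached.get();
    }

    // Placeholder for the real (potentially expensive) metric collection.
    protected List<String> collectFromSolr() {
        return List.of("solr_metrics_up 1.0");
    }
}
```

Because scrapes never trigger collection, a Prometheus scrape interval shorter than the collection time can no longer cause concurrent collections.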






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 23745 - Still Unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23745/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedGrouping.test

Error Message:
Error from server at http://127.0.0.1:37311/collection1: Error from server at 
null: java.lang.NullPointerException  at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
  at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.base/java.lang.Thread.run(Thread.java:835) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:37311/collection1: Error from server at null: 
java.lang.NullPointerException
at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 

[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-11) - Build # 67 - Unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/67/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ZkShardTermsTest.testParticipationOfReplicas

Error Message:
Timeout occurred while waiting response from server at: 
http://127.0.0.1:62826/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: http://127.0.0.1:62826/solr
at 
__randomizedtesting.SeedInfo.seed([F52E458C43F1812:BABBA0CEDF6E1FAF]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:660)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.ZkShardTermsTest.testParticipationOfReplicas(ZkShardTermsTest.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-13211) Fix the position of color legend in Cloud UI.

2019-03-04 Thread Junya Usui (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junya Usui updated SOLR-13211:
--
Fix Version/s: 7.3.1

> Fix the position of color legend in Cloud UI.
> -
>
> Key: SOLR-13211
> URL: https://issues.apache.org/jira/browse/SOLR-13211
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Junya Usui
>Priority: Major
> Fix For: 7.3.1
>
> Attachments: SOLR-13211.patch, fix_legend_position.pdf
>
>
> This patch contains two display enhancements that make the legend easier to 
> read, especially when the number of Solr nodes is larger than 40.
>  # On the Cloud -> Graph page,
> it is difficult to read the server names and the legend since they overlap. 
> (Page 1)
>  #  On the Cloud -> Graph (Radial) page,
> the horizontal distance between the graph and the legend is too large.
> (Page 2)
> These issues have been around for a long time. 
> Ref: 
> https://issues.apache.org/jira/browse/SOLR-3915?focusedCommentId=13472876=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13472876
> This patch provides a solution by modifying only cloud.css. The legend is 
> moved outside the graph so that it stays in the bottom-left corner without 
> overlapping. (Pages 3-4)






[jira] [Updated] (SOLR-13287) Allow zplot to visualize probability distributions in Apache Zeppelin

2019-03-04 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13287:
--
Attachment: Screen Shot 2019-03-04 at 7.52.21 PM.png

> Allow zplot to visualize probability distributions in Apache Zeppelin
> -
>
> Key: SOLR-13287
> URL: https://issues.apache.org/jira/browse/SOLR-13287
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13287.patch, SOLR-13287.patch, Screen Shot 
> 2019-03-03 at 2.21.10 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-03 at 2.48.32 PM.png, Screen Shot 2019-03-04 at 7.47.57 
> PM.png, Screen Shot 2019-03-04 at 7.52.21 PM.png
>
>
> The *zplot* Stream Evaluator doesn't currently know how to plot the 
> probability distribution functions in the Math Expressions library. This 
> ticket will add this capability to zplot so it can plot probability 
> distributions and Monte Carlo Simulations in Apache Zeppelin.
> Syntax:
> {code:java}
> zplot(dist=poissonDistribution(100)){code}
> The attached screenshots show how distributions are visualized in Apache 
> Zeppelin.






[jira] [Updated] (SOLR-13287) Allow zplot to visualize probability distributions in Apache Zeppelin

2019-03-04 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13287:
--
Attachment: Screen Shot 2019-03-04 at 7.47.57 PM.png

> Allow zplot to visualize probability distributions in Apache Zeppelin
> -
>
> Key: SOLR-13287
> URL: https://issues.apache.org/jira/browse/SOLR-13287
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13287.patch, SOLR-13287.patch, Screen Shot 
> 2019-03-03 at 2.21.10 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-03 at 2.48.32 PM.png, Screen Shot 2019-03-04 at 7.47.57 
> PM.png
>
>
> The *zplot* Stream Evaluator doesn't currently know how to plot the 
> probability distribution functions in the Math Expressions library. This 
> ticket will add this capability to zplot so it can plot probability 
> distributions and Monte Carlo Simulations in Apache Zeppelin.
> Syntax:
> {code:java}
> zplot(dist=poissonDistribution(100)){code}
> The attached screenshots show how distributions are visualized in Apache 
> Zeppelin.






[jira] [Updated] (SOLR-13287) Allow zplot to visualize probability distributions in Apache Zeppelin

2019-03-04 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13287:
--
Attachment: SOLR-13287.patch

> Allow zplot to visualize probability distributions in Apache Zeppelin
> -
>
> Key: SOLR-13287
> URL: https://issues.apache.org/jira/browse/SOLR-13287
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13287.patch, SOLR-13287.patch, Screen Shot 
> 2019-03-03 at 2.21.10 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-03 at 2.48.32 PM.png
>
>
> The *zplot* Stream Evaluator doesn't currently know how to plot the 
> probability distribution functions in the Math Expressions library. This 
> ticket will add this capability to zplot so it can plot probability 
> distributions and Monte Carlo Simulations in Apache Zeppelin.
> Syntax:
> {code:java}
> zplot(dist=poissonDistribution(100)){code}
> The attached screenshots show how distributions are visualized in Apache 
> Zeppelin.






[jira] [Commented] (LUCENE-6968) LSH Filter

2019-03-04 Thread Mayya Sharipova (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783921#comment-16783921
 ] 

Mayya Sharipova commented on LUCENE-6968:
-

[~andyhind] Thanks very much for your answer; it made things much clearer. I 
still have a couple of additional questions, if you don't mind:

1) With its default settings, the filter produces 512 tokens per document, 
each 16 bytes in size, i.e. approximately 8KB in total. Isn't 8KB too large a 
signature for a document?

2) How should `min_hash` tokens be combined into a query for similarity 
search? Do you have any examples? Is this a work in progress?

Thanks again!

> LSH Filter
> --
>
> Key: LUCENE-6968
> URL: https://issues.apache.org/jira/browse/LUCENE-6968
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Cao Manh Dat
>Assignee: Tommaso Teofili
>Priority: Major
> Fix For: 6.2, 7.0
>
> Attachments: LUCENE-6968.4.patch, LUCENE-6968.5.patch, 
> LUCENE-6968.6.patch, LUCENE-6968.patch, LUCENE-6968.patch, LUCENE-6968.patch
>
>
> I'm planning to implement LSH, which supports queries like this:
> {quote}
> Find similar documents that have a similarity score of 0.8 or higher with a 
> given document. The similarity measure can be cosine, Jaccard, Euclidean, etc.
> {quote}
> For example, given the following corpus:
> {quote}
> 1. Solr is an open source search engine based on Lucene
> 2. Solr is an open source enterprise search engine based on Lucene
> 3. Solr is a popular open source enterprise search engine based on Lucene
> 4. Apache Lucene is a high-performance, full-featured text search engine 
> library written entirely in Java
> {quote}
> we want to find documents that have a Jaccard score of at least 0.6 with this 
> doc:
> {quote}
> Solr is an open source search engine
> {quote}
> It will return only docs 1, 2 and 3 (MoreLikeThis would also return doc 4).
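The Jaccard measure mentioned above is the size of the token-set intersection over the size of the union. A minimal sketch (plain set arithmetic, not the LSH/MinHash approximation the patch implements; `JaccardExample` and its whitespace tokenizer are hypothetical) reproduces the corpus example:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class JaccardExample {
    // Jaccard similarity between two token sets: |A ∩ B| / |A ∪ B|.
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    // Naive whitespace tokenizer, lowercased, duplicates collapsed into a set.
    static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\s+")));
    }

    public static void main(String[] args) {
        String query = "Solr is an open source search engine";
        String doc1 = "Solr is an open source search engine based on Lucene";
        // 7 shared tokens over a union of 10 tokens.
        System.out.println(jaccard(tokens(query), tokens(doc1))); // prints 0.7
    }
}
```

Doc 1 scores 0.7 against the query, above the 0.6 threshold, which is why it is returned while a loosely related document like doc 4 would fall below it.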






[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-12-ea+shipilev-fastdebug) - Build # 229 - Failure!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/229/
Java: 64bit/jdk-12-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 2605 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190304_230517_4554013051912970705191.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  SuppressErrorAt=/split_if.cpp:322
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error 
(/home/buildbot/worker/jdk12u-linux/build/src/hotspot/share/opto/split_if.cpp:322),
 pid=13846, tid=13916
   [junit4] #  assert(prior_n->is_Region()) failed: must be a post-dominating 
merge point
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (12.0) (fastdebug build 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 
12-testing+0-builds.shipilev.net-openjdk-jdk12-b109-20190215-jdk-1229, mixed 
mode, tiered, serial gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x17da240]  PhaseIdealLoop::spinup(Node*, Node*, 
Node*, Node*, Node*, small_cache*) [clone .part.43]+0x330
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/J2/hs_err_pid13846.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/J2/replay_pid13846.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 13916
   [junit4] Dumping core ...
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190304_230517_45510639770585220518859.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp 
-- output truncated
   [junit4] <<< JVM J2: EOF 

[...truncated 717 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-12-ea+shipilev-fastdebug/bin/java 
-XX:-UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=C76941A83895A5AC 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.1.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=8.1.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-8.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/J2
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/analysis/common/test/temp
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

Re: Call for help: moving from ant build to gradle

2019-03-04 Thread Gézapeti
I'd be happy to help with the Gradle migration.
I could not find a JIRA issue that covers it, only LUCENE-5755, which was
closed a long time ago.
Where can I join the discussion about this?

Thanks for the pointers,
gp


On Thu, Feb 7, 2019 at 8:23 PM Vladimir Kroz 
wrote:

> +1 for moving to gradle. I'm happy to help.
>
> On Wed, Dec 19, 2018 at 8:25 AM Mark Miller  wrote:
>
>> +1. Gradle is the alpha and the omega of build systems. I will help.
>>
>> - Mark
>>
>> On Sun, Nov 4, 2018 at 1:13 PM Đạt Cao Mạnh 
>> wrote:
>>
>>> Hi guys,
>>>
>>> Recently, I had a chance to work on modifying several of our project's
>>> build.xml files. To be honest, that was a painful experience, especially
>>> the number of steps needed to add a new module. We have reached the
>>> limits of Ant, and moving to Gradle seems a good option since it is
>>> widely used in many projects. There are several benefits to the move
>>> that I would like to mention:
>>> * Gradle's caching of task results makes running tasks much faster;
>>> e.g., rerunning the forbiddenApi check in Gradle takes only 5 seconds
>>> (compared to more than a minute with Ant).
>>> * Adding modules is much easier now.
>>> * Adding dependencies is a pleasure now, since we don't have to run ant
>>> clean-idea and ant idea all over again.
>>> * Natively supported by different IDEs.
>>>
>>> On my very boring long flight from Montreal back to Vietnam, I tried to
>>> convert the Lucene/Solr Ant build to Gradle, and I finally achieved
>>> something: I was able to import the project and run tests natively from
>>> IntelliJ IDEA (branch jira/gradle).
>>>
>>> I'm converting ant precommit for Lucene to Gradle. But there is a lot
>>> that still needs to be done, and my limited understanding of our Ant
>>> build and of Gradle may make the work take a long time to finish.
>>>
>>> Therefore, I really need help from the community to finish the work, so
>>> that we can move to a totally new, modern, powerful build tool.
>>>
>>> Thanks!
>>>
>>>
>>
>> --
>> - Mark
>>
>> http://about.me/markrmiller
>>
>
>
> --
> Best regards,
>
> Vladimir Kroz
> www.linkedin.com/in/vkroz
> Phone: (707) 515-9195
>


[jira] [Resolved] (PYLUCENE-46) __dir__ module parameter

2019-03-04 Thread Andi Vajda (JIRA)


 [ 
https://issues.apache.org/jira/browse/PYLUCENE-46?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andi Vajda resolved PYLUCENE-46.

Resolution: Fixed

> __dir__ module parameter 
> 
>
> Key: PYLUCENE-46
> URL: https://issues.apache.org/jira/browse/PYLUCENE-46
> Project: PyLucene
>  Issue Type: Bug
> Environment: Windows, Python3.7, JCC 3.4
>Reporter: Petrus Hyvönen
>Priority: Minor
>
> Hi,
> Since Python 3.7, the __dir__ module attribute is part of the API used to
> return the names presented by the "dir" Python built-in.
> [https://www.python.org/dev/peps/pep-0562/]
> [https://docs.python.org/3/reference/datamodel.html#object.__dir__]
> The top-level module of wrapped libraries uses this variable name for the
> path to the module location, which confuses some IDEs ("TypeError: 'str'
> object is not callable").
> Ideally this module-level __dir__() would return the names of the top-level
> wrapped classes, but simply renaming the variable should solve the IDE
> problem.
>  
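The clash described in the issue is easy to reproduce. A minimal sketch, assuming Python 3.7+; the module path and the class names ("Document", "IndexWriter") are made up for illustration, not taken from JCC's actual output:

```python
import types

# Simulate JCC's generated top-level module: it stored the module's
# filesystem path in a plain string variable named __dir__.
bad = types.ModuleType("bad")
bad.__dir__ = "/path/to/wrapped/module"  # hypothetical path

# Since Python 3.7 (PEP 562), dir(module) looks up __dir__ in the
# module's namespace and *calls* it -- a plain string blows up.
try:
    dir(bad)
except TypeError as e:
    print(e)  # 'str' object is not callable

# The behaviour the reporter suggests as ideal: a callable __dir__
# returning the names of the wrapped classes to expose.
good = types.ModuleType("good")
good.__dir__ = lambda: ["Document", "IndexWriter"]
print(dir(good))  # ['Document', 'IndexWriter']
```

This is why merely renaming the variable (as was eventually done) is enough to stop the TypeError, even without implementing a callable.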



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PYLUCENE-46) __dir__ module parameter

2019-03-04 Thread Andi Vajda (JIRA)


[ 
https://issues.apache.org/jira/browse/PYLUCENE-46?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783852#comment-16783852
 ] 

Andi Vajda commented on PYLUCENE-46:


Fixed in rev 1854800 (renamed __dir__ to __module_dir__).

> __dir__ module parameter 
> 
>
> Key: PYLUCENE-46
> URL: https://issues.apache.org/jira/browse/PYLUCENE-46
> Project: PyLucene
>  Issue Type: Bug
> Environment: Windows, Python3.7, JCC 3.4
>Reporter: Petrus Hyvönen
>Priority: Minor
>
> Hi,
> Since Python 3.7, the __dir__ module attribute is part of the API used to
> return the names presented by the "dir" Python built-in.
> [https://www.python.org/dev/peps/pep-0562/]
> [https://docs.python.org/3/reference/datamodel.html#object.__dir__]
> The top-level module of wrapped libraries uses this variable name for the
> path to the module location, which confuses some IDEs ("TypeError: 'str'
> object is not callable").
> Ideally this module-level __dir__() would return the names of the top-level
> wrapped classes, but simply renaming the variable should solve the IDE
> problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-13-ea+8) - Build # 3605 - Unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3605/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([B07075C9709929D5:87EB81D74855F471]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:132)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:317)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:335)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1783 - Still Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1783/

3 tests failed.
FAILED:  org.apache.solr.TestDistributedGrouping.test

Error Message:
Error from server at https://127.0.0.1:42473/_imo/collection1: Error from 
server at null: java.lang.NullPointerException  at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
  at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:411)
  at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:305)  
at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)  
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)  at 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.lang.Thread.run(Thread.java:748) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42473/_imo/collection1: Error from server at 
null: java.lang.NullPointerException
at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
at 

[jira] [Assigned] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-13294:
-

Assignee: Joel Bernstein

> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Joel Bernstein
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts @ beasting on Linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts.
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> 

[JENKINS] Lucene-Solr-repro - Build # 2965 - Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/2965/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.0/11/consoleText

[repro] Revision: 0376bc0052a53480ecb2edea7dfe58298bda5deb

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=LeaderTragicEventTest 
-Dtests.method=test -Dtests.seed=C2C29C52E000157 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-TN -Dtests.timezone=America/Belem -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=HdfsUnloadDistributedZkTest 
-Dtests.method=test -Dtests.seed=C2C29C52E000157 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=vi-VN -Dtests.timezone=Antarctica/Rothera -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7bfe7b265a4091048707e782657f622e937b6e70
[repro] git fetch
[repro] git checkout 0376bc0052a53480ecb2edea7dfe58298bda5deb

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsUnloadDistributedZkTest
[repro]   LeaderTragicEventTest
[repro] ant compile-test

[...truncated 3572 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.HdfsUnloadDistributedZkTest|*.LeaderTragicEventTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.seed=C2C29C52E000157 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.0/test-data/enwiki.random.lines.txt
 -Dtests.locale=vi-VN -Dtests.timezone=Antarctica/Rothera -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 19835 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.LeaderTragicEventTest
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
[repro] git checkout 7bfe7b265a4091048707e782657f622e937b6e70

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Comment Edited] (LUCENE-8716) Test logging can bleed from one suite to another, cause failures due to sysout limits

2019-03-04 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783823#comment-16783823
 ] 

Erick Erickson edited comment on LUCENE-8716 at 3/4/19 10:05 PM:
-

[~hossman] This worries me since it might be related to switching to async 
logging by default, including in the tests. It at least sounds like the same 
neighborhood as SOLR-12055 and SOLR-13268, especially since TestStressReorder 
is a Solr test.

Tests with async logging are having weird stuff bubble up out of the cracks, 
lmax.disruptor for instance. Kevin and I have some leads. One of the solutions 
is to subclass all the tests in Solr from SolrTestCaseJ4 rather than 
LuceneTestCase, to ensure that the proper logging shutdown happens. I'm 
experimenting with that now, but don't feel very good about it. I want to see 
if it's possible, then discuss.

If the logging output is _always_ from TestStressReorder, we could put the 
shutdown for async logging specifically in that class as a test; I can help 
with that. I stress that this is only to see if this is the underlying 
problem, not a robust solution.

Saying "well, our test framework doesn't like async logging, therefore we 
shouldn't do it" smells. AFAIK, this is a test-only problem, not a problem 
actually running Solr.

OTOH, changing about 150 test classes to derive from SolrTestCaseJ4 rather than 
LuceneTestCase smells too.

OTOOH, playing whack-a-mole with individual tests (or perhaps combinations of 
tests) smells too.

OTOOOH, saying "async logging should work, but we can't make our tests play 
nice with it, therefore use at your own risk" smells too.

[~krisden] WDYT about whether the async logging might be part of this?

All this assuming this failure is related to the async logging...


was (Author: erickerickson):
[~hossman] This worries me since it might be related to switching to async 
logging by default, including in the tests. It at least sounds like the same 
neighborhood as SOLR-12055 and SOLR-13268, especially since TestStressReorder 
is a Solr test.

Tests with async logging are having weird stuff bubble up out of the cracks, 
lmax.disruptor for instance. Kevin and I have some leads. One of the solutions 
is to subclass all the tests in Solr from SolrTestCaseJ4 rather than 
LuceneTestCase, to ensure that the proper logging shutdown happens. I'm 
experimenting with that now, but don't feel very good about it. I want to see 
if it's possible, then discuss.

If the logging output is _always_ from TestStressReorder, we could put the 
shutdown for async logging specifically in that class as a test; I can help 
with that. I stress that this is only to see if this is the underlying 
problem, not a robust solution.

Saying "well, our test framework doesn't like async logging, therefore we 
shouldn't do it" smells. AFAIK, this is a test-only problem, not a problem 
actually running Solr.

OTOH, changing about 150 test classes to derive from SolrTestCaseJ4 rather than 
LuceneTestCase smells too.

OTOOH, playing whack-a-mole with individual tests (or perhaps combinations of 
tests) smells too.

OTOOOH, saying "async logging should work, but we can't make our tests play 
nice with it, therefore use at your own risk" smells too.

[~krisden] WDYT about whether the async logging might be part of this?


> Test logging can bleed from one suite to another, cause failures due to 
> sysout limits
> -
>
> Key: LUCENE-8716
> URL: https://issues.apache.org/jira/browse/LUCENE-8716
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Hoss Man
>Priority: Major
> Attachments: thetaphi_Lucene-Solr-master-Linux_23743.log.txt
>
>
> In solr land, {{HLLUtilTest}} is an incredibly tiny, simple test that tests 
> a utility method w/o using any other solr features or doing any logging; as 
> such it extends {{LuceneTestCase}} directly and doesn't use any of the 
> typical solr test framework/plumbing or {{@SuppressSysoutChecks}}.
> On a recent jenkins build, {{HLLUtilTest}} failed due to too much sysout 
> output -- all of which seems to have come from the previous test run on that 
> JVM -- {{TestStressReorder}} -- suggesting that somehow the sysout from one 
> test suite can bleed over into the next suite?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8716) Test logging can bleed from one suite to another, cause failures due to sysout limits

2019-03-04 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783823#comment-16783823
 ] 

Erick Erickson commented on LUCENE-8716:


[~hossman] This worries me since it might be related to switching to async 
logging by default, including in the tests. It at least sounds like the same 
neighborhood as SOLR-12055 and SOLR-13268, especially since TestStressReorder 
is a Solr test.

Tests with async logging are having weird stuff bubble up out of the cracks, 
lmax.disruptor for instance. Kevin and I have some leads. One of the solutions 
is to subclass all the tests in Solr from SolrTestCaseJ4 rather than 
LuceneTestCase, to ensure that the proper logging shutdown happens. I'm 
experimenting with that now, but don't feel very good about it. I want to see 
if it's possible, then discuss.

If the logging output is _always_ from TestStressReorder, we could put the 
shutdown for async logging specifically in that class as a test; I can help 
with that. I stress that this is only to see if this is the underlying 
problem, not a robust solution.

Saying "well, our test framework doesn't like async logging, therefore we 
shouldn't do it" smells. AFAIK, this is a test-only problem, not a problem 
actually running Solr.

OTOH, changing about 150 test classes to derive from SolrTestCaseJ4 rather than 
LuceneTestCase smells too.

OTOOH, playing whack-a-mole with individual tests (or perhaps combinations of 
tests) smells too.

OTOOOH, saying "async logging should work, but we can't make our tests play 
nice with it, therefore use at your own risk" smells too.

[~krisden] WDYT about whether the async logging might be part of this?


> Test logging can bleed from one suite to another, cause failures due to 
> sysout limits
> -
>
> Key: LUCENE-8716
> URL: https://issues.apache.org/jira/browse/LUCENE-8716
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Hoss Man
>Priority: Major
> Attachments: thetaphi_Lucene-Solr-master-Linux_23743.log.txt
>
>
> In solr land, {{HLLUtilTest}} is an incredibly tiny, simple test that tests 
> a utility method w/o using any other solr features or doing any logging; as 
> such it extends {{LuceneTestCase}} directly and doesn't use any of the 
> typical solr test framework/plumbing or {{@SuppressSysoutChecks}}.
> On a recent jenkins build, {{HLLUtilTest}} failed due to too much sysout 
> output -- all of which seems to have come from the previous test run on that 
> JVM -- {{TestStressReorder}} -- suggesting that somehow the sysout from one 
> test suite can bleed over into the next suite?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3201 - Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3201/

3 tests failed.
FAILED:  org.apache.solr.TestTolerantSearch.testGetTopIdsPhaseError

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([7449FF36ADEE9C52:4A07EF400747DA83]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
org.apache.solr.TestTolerantSearch.testGetTopIdsPhaseError(TestTolerantSearch.java:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7449FF36ADEE9C52]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.OverseerTest

Error 

[jira] [Commented] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783784#comment-16783784
 ] 

ASF subversion and git services commented on SOLR-13294:


Commit c4807a64b156c5946aae8ebd6d8c8e80e428a266 in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c4807a6 ]

SOLR-13294: refactor test to include more logging to help diagnose some windows 
jenkins failures


> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts @ beasting on linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts..
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> 

[jira] [Created] (LUCENE-8716) Test logging can bleed from one suite to another, cause failures due to sysout limits

2019-03-04 Thread Hoss Man (JIRA)
Hoss Man created LUCENE-8716:


 Summary: Test logging can bleed from one suite to another, cause 
failures due to sysout limits
 Key: LUCENE-8716
 URL: https://issues.apache.org/jira/browse/LUCENE-8716
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man


In Solr land, {{HLLUtilTest}} is an incredibly tiny, simple test that exercises a 
utility method without using any other Solr features or doing any logging - as such 
it extends {{LuceneTestCase}} directly, and doesn't use any of the typical Solr 
test framework/plumbing or {{@SuppressSysoutChecks}}

on a recent jenkins build, {{HLLUtilTest}} failed due to too much sysout output -- 
all of which seems to have come from the previous test run on that JVM -- 
{{TestStressReorder}} -- suggesting that somehow the sysout from one test suite 
can bleed over into the next suite?
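
The suspected failure mode above - a suite being billed for output it never wrote - is easy to reproduce if a runner captures System.out into a shared buffer that is never reset between suites. The following is a minimal, self-contained sketch of that bug, with invented names ({{SysoutBleedSketch}}, {{runSuite}}); it is NOT the actual randomizedtesting implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Illustrative only: a toy "runner" that redirects System.out into one
// shared buffer and bills each suite for whatever is in the buffer when
// the suite finishes. Because the buffer is never reset between suites,
// a tiny, silent suite can be charged for the previous suite's output.
public class SysoutBleedSketch {
    private static final ByteArrayOutputStream CAPTURE = new ByteArrayOutputStream();

    /** Runs one suite body and returns the bytes "charged" to it. */
    static int runSuite(Runnable suiteBody) {
        PrintStream original = System.out;
        System.setOut(new PrintStream(CAPTURE, true));
        try {
            suiteBody.run();
        } finally {
            System.setOut(original);  // restore stdout, but CAPTURE keeps its contents
        }
        return CAPTURE.size();  // bug: includes earlier suites' output too
    }

    public static void main(String[] args) {
        int noisy = runSuite(() -> System.out.println("lots of test logging"));
        int quiet = runSuite(() -> { /* prints nothing */ });
        // The quiet suite is charged with the noisy suite's bytes anyway.
        System.out.println("noisy=" + noisy + " quiet=" + quiet);
    }
}
```

In the sketch the quiet suite's charge equals the noisy suite's, which is exactly the shape of the {{HLLUtilTest}} / {{TestStressReorder}} failure described above.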



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Query about solr volunteers to mentor: GSoC 19

2019-03-04 Thread David Smiley
BTW another topic is the migration of Solr's admin UI to a more modern
Angular JS -- or something like that -- I haven't been following that very
closely.  I'm definitely not the right mentor for that but perhaps someone
here could mentor if you choose to pick that up.

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley


On Mon, Mar 4, 2019 at 4:16 PM David Smiley 
wrote:

> Hello Nazerke,
>
> Thanks for your interest and proactively reaching out to us!
>
> I am interested in being a mentor provided the topic interests me.  Topics
> of interest to me:
> * spatial
>ex: any open issue, such as:
> https://issues.apache.org/jira/browse/SOLR-4242
> * highlighting
>ex: any open issue, esp. relating to the UnifiedHighlighter
> * test infrastructure utilities
> * benchmarking automation
> * the build: migrate from Ant to Gradle
> * refactorings related to technical debt
>
> And perhaps others might interest me if you propose something specific. I
> know you commented on SOLR-10329 but I'd rather not mentor for that.
>
> Depending on the scope of the issue(s), there might be multiple concrete
> things to work on, ideally in the same subject area.
>
> What do you think?
>
> ~ David
>
> On Mon, Mar 4, 2019 at 11:35 AM Nazerke Seidan
>  wrote:
>
>> Hi All,
>>
>> I am a final-year CS BSc student interested in participating in GSoC'19 by
>> contributing to the Apache Solr project. I was wondering if there are any
>> volunteers from the Solr community to mentor a GSoC'19 project. I would like
>> to discuss potential topics.
>>
>>
>> Many thanks,
>>
>> Nazerke
>>
> --
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


[jira] [Commented] (SOLR-7414) CSVResponseWriter returns empty field when fl alias is combined with '*' selector

2019-03-04 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783789#comment-16783789
 ] 

David Smiley commented on SOLR-7414:


I have this issue on my TODO list: to investigate why there is specifically an 
issue with CSV that is not also present with our other formats.  Shouldn't the 
code path involved be essentially the same for every format?  It keeps slipping 
from my priorities, so I'll simply ask everyone following this.

> CSVResponseWriter returns empty field when fl alias is combined with '*' 
> selector
> -
>
> Key: SOLR-7414
> URL: https://issues.apache.org/jira/browse/SOLR-7414
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers
>Reporter: Michael Lawrence
>Priority: Major
> Attachments: SOLR-7414-old.patch, SOLR-7414.patch, SOLR-7414.patch, 
> SOLR-7414.patch, SOLR-7414.patch, SOLR-7414.patch
>
>
> Attempting to retrieve all fields while renaming one, e.g., "inStock" to 
> "stocked" (URL below), results in CSV output that has a column for "inStock" 
> (should be "stocked"), and the column has no values. 
> steps to reproduce using 5.1...
> {noformat}
> $ bin/solr -e techproducts
> ...
> $ curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary 
> '[{ "id" : "aaa", "bar_i" : 7, "inStock" : true }, { "id" : "bbb", "bar_i" : 
> 7, "inStock" : false }, { "id" : "ccc", "bar_i" : 7, "inStock" : true }]'
> {"responseHeader":{"status":0,"QTime":730}}
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7=id,stocked:inStock=csv'
> id,stocked
> aaa,true
> bbb,false
> ccc,true
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7=*,stocked:inStock=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> $ curl 
> 'http://localhost:8983/solr/techproducts/query?q=bar_i:7=stocked:inStock,*=csv'
> bar_i,id,_version_,inStock
> 7,aaa,1498719888088236032,
> 7,bbb,1498719888090333184,
> 7,ccc,1498719888090333185,
> {noformat}
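
The expected behavior - an fl alias applied even when combined with the '*' selector - can be stated as a small header-computation rule. A hypothetical sketch of that rule ({{FlAliasSketch}}, {{expectedHeader}}, and the field lists are invented for illustration; this is NOT Solr's CSVResponseWriter code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the header a CSV response *should* have for a given fl
// parameter: "alias:field" entries rename a field, and '*' expands to
// all stored fields with any applicable renames still applied.
public class FlAliasSketch {
    static List<String> expectedHeader(String fl, List<String> storedFields) {
        Map<String, String> aliases = new LinkedHashMap<>();  // field -> alias
        boolean wildcard = false;
        for (String part : fl.split(",")) {
            if (part.equals("*")) {
                wildcard = true;
            } else if (part.contains(":")) {
                String[] kv = part.split(":", 2);  // "stocked:inStock"
                aliases.put(kv[1], kv[0]);
            } else {
                aliases.put(part, part);  // plain field, no rename
            }
        }
        List<String> header = new ArrayList<>();
        if (wildcard) {
            // '*' expands to every stored field, renamed if aliased --
            // the behavior this report says the CSV writer fails to apply.
            for (String f : storedFields) {
                header.add(aliases.getOrDefault(f, f));
            }
        } else {
            aliases.forEach((field, alias) -> header.add(alias));
        }
        return header;
    }
}
```

Under this rule, fl {{*,stocked:inStock}} against the fields above should yield the header {{bar_i,id,_version_,stocked}}, with the inStock values preserved under the "stocked" column - rather than the empty "inStock" column shown in the reproduction.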



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783786#comment-16783786
 ] 

ASF subversion and git services commented on SOLR-13294:


Commit 30f7562eb43785a8117988227a249677d4c96af6 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=30f7562 ]

SOLR-13294: refactor test to include more logging to help diagnose some windows 
jenkins failures


> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts @ beasting on linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts..
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> 

[jira] [Commented] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783785#comment-16783785
 ] 

ASF subversion and git services commented on SOLR-13294:


Commit 4f2581804830893a0bac0b00d5a2c3773f376f95 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4f25818 ]

SOLR-13294: refactor test to include more logging to help diagnose some windows 
jenkins failures


> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts @ beasting on linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts..
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> 

[jira] [Commented] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783787#comment-16783787
 ] 

ASF subversion and git services commented on SOLR-13294:


Commit 7bfe7b265a4091048707e782657f622e937b6e70 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7bfe7b2 ]

SOLR-13294: refactor test to include more logging to help diagnose some windows 
jenkins failures


> TestSQLHandler failures on windows jenkins machines
> ---
>
> Key: SOLR-13294
> URL: https://issues.apache.org/jira/browse/SOLR-13294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> _Windows_ jenkins builds frequently - but _not always_ - fail on 
> {{TestSQLHandler}} @ L236
> In cases where a windows jenkins build finds a failing seed for 
> {{TestSQLHandler}}, and the same jenkins build attempts to reproduce using 
> that seed, it reliably encounters a *different* failure earlier in the test 
> (related to docValues being missing from a sort field).
> These seeds do not fail for me when attempted on a Linux machine, and my own 
> attempts @ beasting on linux haven't turned up any similar failures.
> Here's an example from jenkins - the exact same pattern has occurred in other 
> windows jenkins builds on other branches at the exact same asserts..
> [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
> {noformat}
> Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
> ...
> Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
> (refs/remotes/origin/branch_8_0)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.lang.AssertionError
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> ...
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
> -Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
> -Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
>[junit4]> Throwable #1: java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute 
> sqlQuery 'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, 
> field_l_p from collection1 where (text='()' OR text='') AND 
> text='' order by field_i desc' against JDBC connection 
> 'jdbc:calcitesolr:'.
>[junit4]> Error while executing SQL "select id, field_i, str_s, 
> field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
> (text='()' OR text='') AND text='' order by field_i desc": 
> java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: --> 
> http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
>  must have DocValues to use this feature.
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
>[junit4]>at 
> org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
>[junit4]>at 
> org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
>[junit4]>at 
> 

Re: Query about solr volunteers to mentor: GSoC 19

2019-03-04 Thread David Smiley
Hello Nazerke,

Thanks for your interest and proactively reaching out to us!

I am interested in being a mentor provided the topic interests me.  Topics
of interest to me:
* spatial
   ex: any open issue, such as:
https://issues.apache.org/jira/browse/SOLR-4242
* highlighting
   ex: any open issue, esp. relating to the UnifiedHighlighter
* test infrastructure utilities
* benchmarking automation
* the build: migrate from Ant to Gradle
* refactorings related to technical debt

And perhaps others might interest me if you propose something specific. I
know you commented on SOLR-10329 but I'd rather not mentor for that.

Depending on the scope of the issue(s), there might be multiple concrete
things to work on, ideally in the same subject area.

What do you think?

~ David

On Mon, Mar 4, 2019 at 11:35 AM Nazerke Seidan
 wrote:

> Hi All,
>
> I am a final-year CS BSc student interested in participating in GSoC'19 by
> contributing to the Apache Solr project. I was wondering if there are any
> volunteers from the Solr community to mentor a GSoC'19 project. I would like
> to discuss potential topics.
>
>
> Many thanks,
>
> Nazerke
>
-- 
Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Created] (SOLR-13294) TestSQLHandler failures on windows jenkins machines

2019-03-04 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13294:
---

 Summary: TestSQLHandler failures on windows jenkins machines
 Key: SOLR-13294
 URL: https://issues.apache.org/jira/browse/SOLR-13294
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


_Windows_ jenkins builds frequently - but _not always_ - fail on 
{{TestSQLHandler}} @ L236

In cases where a windows jenkins build finds a failing seed for 
{{TestSQLHandler}}, and the same jenkins build attempts to reproduce using that 
seed, it reliably encounters a *different* failure earlier in the test (related 
to docValues being missing from a sort field).

These seeds do not fail for me when attempted on a Linux machine, and my own 
attempts @ beasting on linux haven't turned up any similar failures.

Here's an example from jenkins - the exact same pattern has occurred in other 
windows jenkins builds on other branches at the exact same asserts..

[https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.0-Windows/57/]
{noformat}
Using Java: 32bit/jdk1.8.0_172 -server -XX:+UseConcMarkSweepGC
...
Checking out Revision 0376bc0052a53480ecb2edea7dfe58298bda5deb 
(refs/remotes/origin/branch_8_0)
...
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
-Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
-Dtests.locale=id -Dtests.timezone=BST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 23.3s J0 | TestSQLHandler.doTest <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:236)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
   [junit4]>at java.lang.Thread.run(Thread.java:748)

...

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
-Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   20.8s J0 | TestSQLHandler.doTest <<<
   [junit4]> Throwable #1: java.io.IOException: --> 
http://127.0.0.1:61309/collection1_shard2_replica_n1:Failed to execute sqlQuery 
'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, field_l_p from 
collection1 where (text='()' OR text='') AND text='' order by 
field_i desc' against JDBC connection 'jdbc:calcitesolr:'.
   [junit4]> Error while executing SQL "select id, field_i, str_s, 
field_i_p, field_f_p, field_d_p, field_l_p from collection1 where 
(text='()' OR text='') AND text='' order by field_i desc": 
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: --> 
http://127.0.0.1:61309/collection1_shard2_replica_n1/:id{type=string,properties=indexed,stored,sortMissingLast,uninvertible}
 must have DocValues to use this feature.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([EEE2628F22F5C82A:49A6DA2B4F4EDB93]:0)
   [junit4]>at 
org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:215)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.getTuples(TestSQLHandler.java:2617)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:145)
   [junit4]>at 
org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:93)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
   [junit4]>at java.lang.Thread.run(Thread.java:748)

...

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSQLHandler 
-Dtests.method=doTest -Dtests.seed=EEE2628F22F5C82A -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=id -Dtests.timezone=BST 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   20.6s J1 | TestSQLHandler.doTest <<<
   [junit4]> Throwable #1: java.io.IOException: --> 
http://127.0.0.1:61322/collection1_shard1_replica_n1:Failed to execute sqlQuery 
'select id, field_i, str_s, field_i_p, field_f_p, field_d_p, field_l_p from 
collection1 where (text='()' OR 

[jira] [Commented] (SOLR-9882) exceeding timeAllowed causes ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2019-03-04 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783770#comment-16783770
 ] 

Mikhail Khludnev commented on SOLR-9882:


CloudExitableDirectoryTests seems to have passed, but it looks like {{ant clean test 
-Dtestcase=TestTolerantSearch -Dtests.method=testGetTopIdsPhaseError 
-Dtests.seed=7449FF36ADEE9C52 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=pl-PL -Dtests.timezone=America/Atikokan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8}} is broken: 
https://builds.apache.org/view/L/view/Lucene/job/Lucene-Solr-Tests-master/3201/console


> exceeding timeAllowed causes ClassCastException: BasicResultContext cannot be 
> cast to SolrDocumentList
> --
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882-solr-7.6.0-backport.txt, 
> SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, SOLR-9882.patch, 
> SOLR-9882.patch
>
>
> After talking with [~yo...@apache.org] on the mailing list, I am opening this 
> Jira ticket.
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
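The {{ClassCastException}} above happens because, when {{timeAllowed}} is exceeded, the response object is a result-context wrapper rather than the document list the handler expects. A minimal, self-contained sketch of the defensive pattern — {{ResultContextLike}} is a stand-in interface for illustration, not Solr's real {{ResultContext}} API:

```java
import java.util.List;

public class ResponseGuard {
    // Stand-in for a wrapper type that can still yield documents.
    interface ResultContextLike { List<String> toDocList(); }

    // Guard the cast: accept either the expected list or the wrapper,
    // instead of blindly casting and throwing ClassCastException.
    static List<String> extractDocs(Object responseValue) {
        if (responseValue instanceof List) {
            @SuppressWarnings("unchecked")
            List<String> docs = (List<String>) responseValue;
            return docs;
        }
        if (responseValue instanceof ResultContextLike) {
            // Partial results (e.g. timeAllowed exceeded): unwrap them.
            return ((ResultContextLike) responseValue).toDocList();
        }
        throw new IllegalStateException("Unexpected response type: " + responseValue.getClass());
    }

    public static void main(String[] args) {
        List<String> full = List.of("doc1", "doc2");
        ResultContextLike partial = () -> List.of("doc1");
        if (extractDocs(full).size() != 2) throw new AssertionError();
        if (extractDocs(partial).size() != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The attached patches presumably apply this idea at the real call site in {{SearchHandler}}.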



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23744 - Still unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23744/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.TestTolerantSearch.testGetTopIdsPhaseError

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([FEA3A44755991F08:C0EDB431FF3059D9]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
org.apache.solr.TestTolerantSearch.testGetTopIdsPhaseError(TestTolerantSearch.java:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.TestTolerantSearch.testGetTopIdsPhaseError

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([FEA3A44755991F08:C0EDB431FF3059D9]:0)
at 

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 59 - Failure!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/59/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 15530 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/build/solr-core/test/temp/junit4-J1-20190304_181408_0847568707618334953016.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/heapdumps/java_pid13135.hprof
 ...
   [junit4] Heap dump file created [491661374 bytes in 5.009 secs]
   [junit4] <<< JVM J1: EOF 

[...truncated 8934 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/build.xml:633: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/build.xml:585: Some of 
the tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid13135.hprof

Total time: 105 minutes 14 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/export/home/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] [lucene-solr] ctargett commented on issue #575: SOLR-13235: Split Collections API Ref Guide page

2019-03-04 Thread GitBox
ctargett commented on issue #575: SOLR-13235: Split Collections API Ref Guide 
page
URL: https://github.com/apache/lucene-solr/pull/575#issuecomment-469383469
 
 
   > The only classification I thought twice about was your choice in putting 
`REBALANCELEADERS` in the collection-mgmt page, instead of the cluster-mgmt 
page. I might've put it on the cluster-mgmt page, since it affects all 
collections on the cluster. (It's a bit like `BALANCESHARDUNIQUE` that way) But 
I see the argument the other way too.
   
   Interesting. I put it with the collection management commands because it 
seemed to only work on a single collection (it has a required `collection` 
param to define the collection name), and the example just shows it working on 
a single collection.
   
   Do you know more about how it works on all collections?
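For reference, REBALANCELEADERS is invoked against a single named collection; a hedged example request (host, port, and collection name are placeholders, and the optional parameters are per the Collections API docs):

```
http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=mycollection&maxAtOnce=5&maxWaitSeconds=60
```

It rebalances preferred leaders within that one collection, which supports placing it on the collection-management page.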


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783685#comment-16783685
 ] 

ASF subversion and git services commented on SOLR-13285:


Commit 7771d7bb844fdc7a3e6132a3d5b141c379a811e4 in lucene-solr's branch 
refs/heads/master from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7771d7b ]

SOLR-13285: Updates with enum fields and javabin cause ClassCastException


> ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during 
> replication
> -
>
> Key: SOLR-13285
> URL: https://issues.apache.org/jira/browse/SOLR-13285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java), SolrCloud, SolrJ
>Affects Versions: 7.7, 7.7.1, 8.1
> Environment: centos 7
> solrcloud 7.7.1, 8.1.0
>Reporter: Karl Stoney
>Assignee: Noble Paul
>Priority: Major
>  Labels: newbie, replication
> Attachments: SOLR-13285.patch, SOLR-13285.patch
>
>
> Since upgrading to 7.7 (also tried 7.7.1, and 8.1.0) from 6.6.4, we're seeing 
> the following errors in the SolrCloud elected master for a given collection 
> when updates are written.  This was after a full reindex of data (fresh 
> build).
> {code:java}
> request: 
> http://solr-1.search-solr.preprod.k8.atcloud.io:80/solr/at-uk_shard1_replica_n2/update?update.distrib=FROMLEADER=http%3A%2F%2Fsolr-2.search-solr.preprod.k8.atcloud.io%3A80%2Fsolr%2Fat-uk_shard1_replica_n1%2F=javabin=2
> Remote error message: org.apache.solr.common.util.ByteArrayUtf8CharSequence 
> cannot be cast to java.lang.String
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:385)
>  ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - 
> ishan - 2019-02-23 02:39:09]
> at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:183)
>  ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - 
> ishan - 2019-02-23 02:39:09]
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>  ~[metrics-core-3.2.6.jar:3.2.6]
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>  ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - 
> ishan - 2019-02-23 02:39:09]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_191]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> {code}
> Following this through to the replica, you'll see:
> {code:java}
> 08:35:22.060 [qtp1540374340-20] ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:java.lang.ClassCastException: 
> org.apache.solr.common.util.ByteArrayUtf8CharSequence cannot be cast to 
> java.lang.String
> at 
> org.apache.solr.common.util.JavaBinCodec.readEnumFieldValue(JavaBinCodec.java:813)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:339)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
> at 
> org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:640)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:337)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
> at 
> org.apache.solr.common.util.JavaBinCodec.readMapEntry(JavaBinCodec.java:819)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:341)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:295)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:333)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235)
> at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:298)
> at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:278)
> at 
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:191)
> at 
> 

[jira] [Commented] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication

2019-03-04 Thread Karl Stoney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783683#comment-16783683
 ] 

Karl Stoney commented on SOLR-13285:


 [^SOLR-13285.patch] For what it's worth, here is a patch for the 7x branch.
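The failure mode here is that javabin can hand back a {{CharSequence}} implementation ({{ByteArrayUtf8CharSequence}} in Solr) where the enum-field reader expects a {{String}}, so the direct cast blows up. A self-contained sketch of the class of fix — converting via {{toString()}} rather than casting; the names below are illustrative, not Solr's actual patch:

```java
public class EnumValueRead {
    // Accept any CharSequence and normalize to String, instead of
    // casting (which fails for non-String CharSequence implementations).
    static String asString(Object decoded) {
        if (decoded instanceof CharSequence) return decoded.toString();
        throw new ClassCastException(
            decoded.getClass().getName() + " cannot be cast to java.lang.String");
    }

    public static void main(String[] args) {
        // StringBuilder stands in for ByteArrayUtf8CharSequence: a CharSequence
        // that is not a String, so a direct (String) cast would throw.
        CharSequence utf8Backed = new StringBuilder("HIGH");
        if (!"HIGH".equals(asString(utf8Backed))) throw new AssertionError();
        System.out.println("ok");
    }
}
```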

> ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during 
> replication
> -
>
> Key: SOLR-13285
> URL: https://issues.apache.org/jira/browse/SOLR-13285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java), SolrCloud, SolrJ
>Affects Versions: 7.7, 7.7.1, 8.1
> Environment: centos 7
> solrcloud 7.7.1, 8.1.0
>Reporter: Karl Stoney
>Assignee: Noble Paul
>Priority: Major
>  Labels: newbie, replication
> Attachments: SOLR-13285.patch, SOLR-13285.patch
>
>

[jira] [Updated] (SOLR-13285) ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during replication

2019-03-04 Thread Karl Stoney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Stoney updated SOLR-13285:
---
Attachment: SOLR-13285.patch

> ByteArrayUtf8CharSequence cannot be cast to java.lang.String exception during 
> replication
> -
>
> Key: SOLR-13285
> URL: https://issues.apache.org/jira/browse/SOLR-13285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java), SolrCloud, SolrJ
>Affects Versions: 7.7, 7.7.1, 8.1
> Environment: centos 7
> solrcloud 7.7.1, 8.1.0
>Reporter: Karl Stoney
>Assignee: Noble Paul
>Priority: Major
>  Labels: newbie, replication
> Attachments: SOLR-13285.patch, SOLR-13285.patch
>
>

[jira] [Comment Edited] (SOLR-13259) Ref Guide: Add explicit docs on when to reindex after field/schema changes

2019-03-04 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783625#comment-16783625
 ] 

Cassandra Targett edited comment on SOLR-13259 at 3/4/19 7:05 PM:
--

Thanks to everyone who reviewed and gave feedback; I've incorporated just 
about all of it in some form. We can iterate on the page going forward.

Anyone who wants to refer to the page before 8.0 Ref Guide is out can use the 
page from the branch_8x build: 
https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-8.x/javadoc/reindexing.html


was (Author: ctargett):
Thanks everyone who reviewed and gave feedback, I've incorporated just about 
all of it somehow. We can iterate on the page going forward.

> Ref Guide: Add explicit docs on when to reindex after field/schema changes
> --
>
> Key: SOLR-13259
> URL: https://issues.apache.org/jira/browse/SOLR-13259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many changes to field definitions, field types, or other things defined in 
> the schema require documents to be reindexed, but some can be OK if the 
> consequences of not reindexing are acceptable, and still other changes do not 
> require a reindex at all.
> It would be nice if the Ref Guide had some definitive information about these 
> types of changes to assist users with planning changes to the schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12993) Split the state.json into 2. a small frequently modified data + a large unmodified data

2019-03-04 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783664#comment-16783664
 ] 

Andrzej Bialecki  commented on SOLR-12993:
--

bq. Would this be worth the overhead of changing existing code?
Very little code interacts directly with these files; in most places this 
state is accessed via ClusterState.
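The split described in the proposal quoted below amounts to projecting the frequently-changing keys of each replica out of state.json into a small status.json. A self-contained sketch of that projection, with assumed key names matching the quoted example:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class StateSplit {
    // Keys that change often and would move to status.json under the proposal.
    static final Set<String> VOLATILE = Set.of("state", "leader");

    // Extract only the volatile keys of a replica's state map.
    static Map<String, Object> volatilePart(Map<String, Object> replica) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : replica.entrySet()) {
            if (VOLATILE.contains(e.getKey())) out.put(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> replica = new LinkedHashMap<>();
        replica.put("core", "gettingstarted_shard1_replica_n1");
        replica.put("base_url", "http://10.0.0.80:8983/solr");
        replica.put("node_name", "10.0.0.80:8983_solr");
        replica.put("state", "active");
        replica.put("leader", "true");
        Map<String, Object> status = volatilePart(replica);
        if (status.size() != 2 || !"active".equals(status.get("state")))
            throw new AssertionError("unexpected split: " + status);
        System.out.println("status.json fragment: " + status);
    }
}
```

The static remainder (core name, base_url, node_name, type) would stay in the rarely-rewritten file, so only the small projection is rewritten and re-read on each state change.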

> Split the state.json into 2. a small frequently modified data + a large 
> unmodified data
> ---
>
> Key: SOLR-12993
> URL: https://issues.apache.org/jira/browse/SOLR-12993
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> This is just a proposal to minimize the ZK load and improve the scalability 
> of very large clusters.
> Every time a small state change occurs for a collection/replica, the 
> following file needs to be updated and read n times (where n = the number of 
> replicas for this collection). The proposal is to split the main file in two.
> {code}
> {"gettingstarted":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"-1",
> "autoAddReplicas":"false",
> "nrtReplicas":"2",
> "tlogReplicas":"0",
> "shards":{
>   "shard1":{
> "range":"8000-",
>   
> "replicas":{
>   "core_node3":{
> "core":"gettingstarted_shard1_replica_n1",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
> "state":"active",
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"},
>   "core_node5":{
> "core":"gettingstarted_shard1_replica_n2",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>  
> "type":"NRT",
> "force_set_state":"false"}}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{
>   "core_node7":{
> "core":"gettingstarted_shard2_replica_n4",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>
> "type":"NRT",
> "force_set_state":"false"},
>   "core_node8":{
> "core":"gettingstarted_shard2_replica_n6",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
>  
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"}}
> {code}
> another file {{status.json}} which is frequently updated and small.
> {code}
> {
> "shard1": {
>   "state": "ACTIVE",
>   "core_node3": {"state": "active", "leader" : true},
>   "core_node5": {"state": "active"}
> },
> "shard2": {
>   "state": "active",
>   "core_node7": {"state": "active"},
>   "core_node8": {"state": "active", "leader" : true}}
>   }
> {code}
> Here the file is roughly one tenth the size of the other. This leads to a 
> dramatic reduction in the amount of data written to and read from ZK.
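The mechanics of the proposed split can be sketched as an overlay: readers load the large, mostly-static file once and apply the small, frequently-updated status on top of it. The sketch below is hypothetical (plain maps, not Solr's actual ZkStateReader API); it only illustrates the merge direction, with status.json winning for volatile keys.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Solr's actual API: reconstruct the full replica
// state by overlaying the small status map (from status.json) onto the
// large, mostly-static replica description (from the main state file).
public class StateOverlay {
    static Map<String, String> merge(Map<String, String> staticProps,
                                     Map<String, String> statusProps) {
        Map<String, String> merged = new HashMap<>(staticProps);
        merged.putAll(statusProps); // status.json wins for volatile keys
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> staticProps = new HashMap<>();
        staticProps.put("core", "gettingstarted_shard1_replica_n1");
        staticProps.put("type", "NRT");
        staticProps.put("state", "down"); // stale value in the big file
        Map<String, String> status = new HashMap<>();
        status.put("state", "active");    // fresh value from status.json
        Map<String, String> merged = merge(staticProps, status);
        System.out.println(merged.get("state") + " " + merged.get("type"));
    }
}
```

Only the small map has to be rewritten and watched in ZK on each state change; the big file is read once per reconnect.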



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-03-04 Thread Karl Stoney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Stoney updated SOLR-13293:
---
Description: 
Hi, 
Testing out branch_8x, we're randomly seeing the following errors on a simple 
3-node cluster. It doesn't appear to affect replication (the cluster remains 
green).

They arrive in bulk, literally thousands at a time.

There were no network issues at the time.

{code:java}
16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
s:shard1] ERROR 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
consuming and closing http response stream.
java.nio.channels.AsynchronousCloseException: null
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root
- 2019-03-04 16:30:04]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
16:30:04]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.6.jar:3.2.6]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
{code}


  was:
Hi, 
Testing out branch_8x, we're randomly seeing the following errors on a simple 3 
node cluster.

They come in (mass, literally 1000s at a time) bulk.

There we no network issues at the time.

{code:java}
16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
s:shard1] ERROR 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
consuming and closing http response stream.
java.nio.channels.AsynchronousCloseException: null
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root
- 2019-03-04 16:30:04]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
16:30:04]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.6.jar:3.2.6]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
{code}



> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
> consuming and closing http response stream.
> -
>
> Key: SOLR-13293
> URL: https://issues.apache.org/jira/browse/SOLR-13293
> 

[jira] [Created] (SOLR-13293) org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error consuming and closing http response stream.

2019-03-04 Thread Karl Stoney (JIRA)
Karl Stoney created SOLR-13293:
--

 Summary: 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
consuming and closing http response stream.
 Key: SOLR-13293
 URL: https://issues.apache.org/jira/browse/SOLR-13293
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 8x
Reporter: Karl Stoney


Hi, 
Testing out branch_8x, we're randomly seeing the following errors on a simple 
3-node cluster.

They arrive in bulk, literally thousands at a time.

There were no network issues at the time.

{code:java}
16:53:01.492 [updateExecutor-4-thread-34-processing-x:at-uk_shard1_replica_n1 
r:core_node3 null n:solr-2.search-solr.preprod.k8.atcloud.io:80_solr c:at-uk 
s:shard1] ERROR 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient - Error 
consuming and closing http response stream.
java.nio.channels.AsynchronousCloseException: null
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
at 
org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287)
 ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root
- 2019-03-04 16:30:04]
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04
16:30:04]
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
 ~[metrics-core-3.2.6.jar:3.2.6]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[solr-solrj-8.1.0-SNAPSHOT.jar:8.1.0-SNAPSHOT 
b14748e61fd147ea572f6545265b883fa69ed27f - root - 2019-03-04 16:30:04]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_191]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13259) Ref Guide: Add explicit docs on when to reindex after field/schema changes

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783622#comment-16783622
 ] 

ASF subversion and git services commented on SOLR-13259:


Commit 876fcb7f7b56b71020c5ed05a122bcfe766a10c0 in lucene-solr's branch 
refs/heads/master from Cassandra Targett
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=876fcb7 ]

SOLR-13259: Add new section on Reindexing in Solr (#594)

Add new reindexing.adoc page; standardize on "reindex" vs "re-index"

> Ref Guide: Add explicit docs on when to reindex after field/schema changes
> --
>
> Key: SOLR-13259
> URL: https://issues.apache.org/jira/browse/SOLR-13259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many changes to field definitions, field types, or other things defined in 
> the schema require documents to be reindexed, but some can be OK if the 
> consequences of not reindexing are acceptable, and still other changes do not 
> require a reindex at all.
> It would be nice if the Ref Guide had some definitive information about these 
> types of changes to assist users with planning changes to the schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13259) Ref Guide: Add explicit docs on when to reindex after field/schema changes

2019-03-04 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-13259.
--
Resolution: Fixed

Thanks to everyone who reviewed and gave feedback; I've incorporated just about 
all of it. We can iterate on the page going forward.

> Ref Guide: Add explicit docs on when to reindex after field/schema changes
> --
>
> Key: SOLR-13259
> URL: https://issues.apache.org/jira/browse/SOLR-13259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many changes to field definitions, field types, or other things defined in 
> the schema require documents to be reindexed, but some can be OK if the 
> consequences of not reindexing are acceptable, and still other changes do not 
> require a reindex at all.
> It would be nice if the Ref Guide had some definitive information about these 
> types of changes to assist users with planning changes to the schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-04 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13292:
-
Attachment: colstatus.json

> Provide extended per-segment status of a collection
> ---
>
> Key: SOLR-13292
> URL: https://issues.apache.org/jira/browse/SOLR-13292
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0), 8x
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13292.patch, adminSegments.json, colstatus.json
>
>
> When changing a collection configuration or schema there may be non-obvious 
> conflicts between existing data and the new configuration or the newly 
> declared schema. A similar situation arises when upgrading Solr to a new 
> version while keeping the existing data.
> Currently the {{SegmentsInfoRequestHandler}} provides insufficient 
> information to detect such conflicts. Also, there's no collection-wide 
> command to gather such status from all shard leaders.
> This issue proposes extending the {{/admin/segments}} handler to provide more 
> low-level Lucene details about the segments, including potential conflicts 
> between existing segments' data and the current declared schema. It also adds 
> a new COLSTATUS collection command to report an aggregated status from all 
> shards, and optionally for all collections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Vector based store and ANN

2019-03-04 Thread Pedram Rezaei
Merging the threads and pasting all the replies into here and responding to 
them below:

Thank you all for your detailed and thoughtful contributions.

Here at Bing, we used to use a coarse approximate nearest neighbor approach 
(something similar to LSH hashing) on the inverted index, followed by a 
finer-grained final rescoring step. However, we have seen a visible impact on 
relevance from ANN, even on smaller indexes with 20M records. We also found that 
LSH recall varies on the datasets we care about, so we adopted a KD-tree & RNG 
approach, which has more stable recall. The algorithm is open sourced here. We 
have also seen success with HNSW and FAISS.

The links provided by Doug and J. attempt to add vectors to the existing index. 
These solutions are typically inefficient on medium-to-large indexes when used 
for online querying, as they tend to behave like a linear search. The author of 
EsAknn has alluded to this on its GitHub page:

“If you need to quickly run KNN on an extremely large corpus in an offline job, 
use one of the libraries from 
Ann-Benchmarks. If you need KNN in 
an online setting with support for horizontally-scalable searching and indexing 
new vectors in near-real-time, consider EsAknn (especially if you already use 
Elasticsearch).”

Using a vector-based index tuned for ANN search, with the ability to hook in 
other index formats and algorithms as René requested below, we can provide a 
solution that can, for example, index hundreds of millions of images and serve 
fast queries over them. We use the algorithms and indexes referenced above for 
image and text search; the user can choose the most relevant one, or combine 
several of them before the final scoring.
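For context on why online querying degrades to "a linear search" without ANN: the exact baseline scans every vector in the corpus per query. The sketch below is illustrative only, not code from any of the systems mentioned; it is the O(n) brute-force cosine-similarity scan that structures like LSH, KD-tree & RNG, or HNSW exist to avoid.

```java
// Illustrative baseline only: exact (brute-force) nearest-neighbour search
// by cosine similarity. Cost is O(n * d) per query, which is what makes
// linear-scan approaches impractical for online search over large corpora.
public class BruteForceKnn {
    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Returns the index of the corpus vector most similar to the query.
    static int nearest(float[][] corpus, float[] query) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < corpus.length; i++) {
            double s = cosine(corpus[i], query);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        float[][] corpus = { {1f, 0f}, {0f, 1f}, {0.9f, 0.1f} };
        System.out.println(nearest(corpus, new float[] {1f, 0.05f}));
    }
}
```

ANN indexes trade a little recall for sub-linear query time over the same data, which is the trade-off discussed in this thread.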

I would love to hear your thoughts on this and see if the community is open to 
a proposal by Bing on contributing some of its tech to Lucene. We will run the 
design and the development incrementally with the full input from the community.

Thanks,

Pedram

From: Doug Turnbull 
Sent: Saturday, March 2, 2019 3:50 PM
To: dev@lucene.apache.org
Cc: Pedram Rezaei ; Radhakrishnan Srikanth (SRIKANTH) 
; Arun Sacheti ; Kun Wu 
; Junhua Wang ; Jason Li 

Subject: Re: Vector based store and ANN

I'll add that Elasticsearch has vector scoring (though not filtering/matching) 
coming into mainline, by Mayya Sharipova:

https://github.com/elastic/elasticsearch/pull/33022

It uses doc values to do some reranking using standard similarities. It's a 
start, hopefully something that can be built upon

Hoping Mayya can be at Haystack... vector filtering/similarities/use cases 
could even be its own breakout/collaboration session

From: René Kriegler 
Sent: Saturday, March 2, 2019 3:23 PM
To: J. Delgado 
Cc: dev@lucene.apache.org; Radhakrishnan Srikanth (SRIKANTH) 
; Arun Sacheti ; Kun Wu 
; Junhua Wang ; Jason Li 

Subject: Re: Vector based store and ANN

Thanks for the links, Joaquin!

Yet another thought related to an implementation at Lucene level: I wonder how 
much sense it makes to try to implement a one-approach-fits-all solution for 
vector-based retrieval. We have different expectations of a solution, depending 
on aspects such as vector dimensionality, domain (text vs. image recognition 
vs. …) and retrieval quality priorities (recall vs precision). I think that was 
also reflected in the Slack discussion. I think it would be very helpful to 
have real-life vector datasets (labelled for specific retrieval tasks), so that 
we could benchmark solutions for retrieval speed and quality metrics. For 
example, we could easily create synthetic vector datasets for KNN search (which 
is still a good starting point!) - but using random vectors probably doesn’t 
reflect the distribution we would normally face in an image search or when 
searching by word embeddings.

Best,
René

On 2 Mar 2019, at 22:06, J. Delgado <joaquin.delg...@gmail.com> wrote:

Apparently, there is already an implementation along the lines discussed here:

https://blog.insightdatascience.com/elastik-nearest-neighbors-4b1f6821bd62

[jira] [Commented] (SOLR-13259) Ref Guide: Add explicit docs on when to reindex after field/schema changes

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783624#comment-16783624
 ] 

ASF subversion and git services commented on SOLR-13259:


Commit 8d92a542bb64c5e52eb5727e0413b7deb1d0c212 in lucene-solr's branch 
refs/heads/branch_8_0 from Cassandra Targett
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8d92a54 ]

SOLR-13259: Add new section on Reindexing in Solr (#594)

Add new reindexing.adoc page; standardize on "reindex" vs "re-index"

> Ref Guide: Add explicit docs on when to reindex after field/schema changes
> --
>
> Key: SOLR-13259
> URL: https://issues.apache.org/jira/browse/SOLR-13259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many changes to field definitions, field types, or other things defined in 
> the schema require documents to be reindexed, but some can be OK if the 
> consequences of not reindexing are acceptable, and still other changes do not 
> require a reindex at all.
> It would be nice if the Ref Guide had some definitive information about these 
> types of changes to assist users with planning changes to the schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13259) Ref Guide: Add explicit docs on when to reindex after field/schema changes

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783623#comment-16783623
 ] 

ASF subversion and git services commented on SOLR-13259:


Commit 68adeab46a08fdc66c6d613e5761413f16b45c0e in lucene-solr's branch 
refs/heads/branch_8x from Cassandra Targett
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=68adeab ]

SOLR-13259: Add new section on Reindexing in Solr (#594)

Add new reindexing.adoc page; standardize on "reindex" vs "re-index"

> Ref Guide: Add explicit docs on when to reindex after field/schema changes
> --
>
> Key: SOLR-13259
> URL: https://issues.apache.org/jira/browse/SOLR-13259
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 8.0, master (9.0)
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Many changes to field definitions, field types, or other things defined in 
> the schema require documents to be reindexed, but some can be OK if the 
> consequences of not reindexing are acceptable, and still other changes do not 
> require a reindex at all.
> It would be nice if the Ref Guide had some definitive information about these 
> types of changes to assist users with planning changes to the schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] ctargett merged pull request #594: SOLR-13259: Add new section on Reindexing in Solr

2019-03-04 Thread GitBox
ctargett merged pull request #594: SOLR-13259: Add new section on Reindexing in 
Solr
URL: https://github.com/apache/lucene-solr/pull/594
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-04 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13292:
-
Attachment: adminSegments.json

> Provide extended per-segment status of a collection
> ---
>
> Key: SOLR-13292
> URL: https://issues.apache.org/jira/browse/SOLR-13292
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0), 8x
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13292.patch, adminSegments.json, colstatus.json
>
>
> When changing a collection configuration or schema there may be non-obvious 
> conflicts between existing data and the new configuration or the newly 
> declared schema. A similar situation arises when upgrading Solr to a new 
> version while keeping the existing data.
> Currently the {{SegmentsInfoRequestHandler}} provides insufficient 
> information to detect such conflicts. Also, there's no collection-wide 
> command to gather such status from all shard leaders.
> This issue proposes extending the {{/admin/segments}} handler to provide more 
> low-level Lucene details about the segments, including potential conflicts 
> between existing segments' data and the current declared schema. It also adds 
> a new COLSTATUS collection command to report an aggregated status from all 
> shards, and optionally for all collections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-04 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783616#comment-16783616
 ] 

Andrzej Bialecki  commented on SOLR-13292:
--

This patch adds a new COLSTATUS command and extended support in 
{{SegmentsInfoRequestHandler}} for reporting low-level details of Lucene 
segments and their compliance with the current schema.

Example requests:
{code:java}
http://localhost:8983/solr/gettingstarted/admin/segments?coreInfo=true=true

http://localhost:8983/solr/admin/collections?action=COLSTATUS=true=true
{code}
Responses are attached.
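For anyone composing the request by hand, a tiny URL-building helper is sketched below. Note the archived example URLs above lost their parameter names in transit (the `=true=true` fragments), so the `collection` and `segments` parameter names used here are assumptions, not confirmed by this thread.

```java
// Hypothetical helper, not part of SolrJ. The "collection" and "segments"
// parameter names are assumptions; check the Collections API reference for
// the actual COLSTATUS parameters.
public class ColStatusUrl {
    static String colStatusUrl(String baseUrl, String collection, boolean segments) {
        return baseUrl + "/admin/collections?action=COLSTATUS"
                + "&collection=" + collection
                + "&segments=" + segments;
    }

    public static void main(String[] args) {
        System.out.println(
            colStatusUrl("http://localhost:8983/solr", "gettingstarted", true));
    }
}
```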

> Provide extended per-segment status of a collection
> ---
>
> Key: SOLR-13292
> URL: https://issues.apache.org/jira/browse/SOLR-13292
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0), 8x
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13292.patch
>
>
> When changing a collection configuration or schema there may be non-obvious 
> conflicts between existing data and the new configuration or the newly 
> declared schema. A similar situation arises when upgrading Solr to a new 
> version while keeping the existing data.
> Currently the {{SegmentsInfoRequestHandler}} provides insufficient 
> information to detect such conflicts. Also, there's no collection-wide 
> command to gather such status from all shard leaders.
> This issue proposes extending the {{/admin/segments}} handler to provide more 
> low-level Lucene details about the segments, including potential conflicts 
> between existing segments' data and the current declared schema. It also adds 
> a new COLSTATUS collection command to report an aggregated status from all 
> shards, and optionally for all collections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-04 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13292:
-
Attachment: SOLR-13292.patch

> Provide extended per-segment status of a collection
> ---
>
> Key: SOLR-13292
> URL: https://issues.apache.org/jira/browse/SOLR-13292
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0), 8x
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-13292.patch
>
>
> When changing a collection configuration or schema there may be non-obvious 
> conflicts between existing data and the new configuration or the newly 
> declared schema. A similar situation arises when upgrading Solr to a new 
> version while keeping the existing data.
> Currently the {{SegmentsInfoRequestHandler}} provides insufficient 
> information to detect such conflicts. Also, there's no collection-wide 
> command to gather such status from all shard leaders.
> This issue proposes extending the {{/admin/segments}} handler to provide more 
> low-level Lucene details about the segments, including potential conflicts 
> between existing segments' data and the current declared schema. It also adds 
> a new COLSTATUS collection command to report an aggregated status from all 
> shards, and optionally for all collections.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+8) - Build # 228 - Unstable!

2019-03-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/228/
Java: 64bit/jdk-13-ea+8 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedGrouping.test

Error Message:
Error from server at http://127.0.0.1:44837/collection1: Error from server at 
null: java.lang.NullPointerException  at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
  at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
  at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
  at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:305)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) 
 at java.base/java.lang.Thread.run(Thread.java:835) 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:44837/collection1: Error from server at null: 
java.lang.NullPointerException
at 
org.apache.solr.handler.component.ResponseBuilder.setResult(ResponseBuilder.java:466)
at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1369)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:362)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2565)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 

[jira] [Created] (SOLR-13292) Provide extended per-segment status of a collection

2019-03-04 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-13292:


 Summary: Provide extended per-segment status of a collection
 Key: SOLR-13292
 URL: https://issues.apache.org/jira/browse/SOLR-13292
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0), 8x
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


When changing a collection configuration or schema there may be non-obvious 
conflicts between existing data and the new configuration or the newly declared 
schema. A similar situation arises when upgrading Solr to a new version while 
keeping the existing data.

Currently the {{SegmentsInfoRequestHandler}} provides insufficient information 
to detect such conflicts. Also, there's no collection-wide command to gather 
such status from all shard leaders.

This issue proposes extending the {{/admin/segments}} handler to provide more 
low-level Lucene details about the segments, including potential conflicts 
between existing segments' data and the current declared schema. It also adds a 
new COLSTATUS collection command to report an aggregated status from all 
shards, and optionally for all collections.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



BadApples

2019-03-04 Thread Erick Erickson
It’s getting much shorter. I’m not going to annotate or clear tests this week.

Processing file (History bit 3): HOSS-2019-03-04.csv
Processing file (History bit 2): HOSS-2019-02-25.csv
Processing file (History bit 1): HOSS-2019-02-18.csv
Processing file (History bit 0): HOSS-2019-02-12.csv


**Annotated tests that didn't fail in the last 4 weeks.

  **Tests removed from the next two lists because they were specified in 
'doNotEnable' in the properties file
 MoveReplicaHDFSTest.testNormalFailedMove

  **Annotations will be removed from the following tests because they haven't 
failed in the last 4 rollups.

  **Methods: 5
   DeleteReplicaTest.deleteLiveReplicaTest
   ForceLeaderTest.testReplicasInLowerTerms
   LeaderTragicEventTest.testOtherReplicasAreNotActive
   TestCollectionStateWatchers.testCanWaitForNonexistantCollection
   TestIndexWriterOnVMError.testCheckpoint


Failures in Hoss' reports for the last 4 rollups.

There were 279 unannotated tests that failed in Hoss' rollups. Ordered by the 
date I downloaded the rollup file, newest->oldest. See above for the dates the 
files were collected.
These tests were NOT BadApple'd or AwaitsFix'd
All tests that failed 4 weeks running will be BadApple'd unless there are 
objections

Failures in the last 4 reports..
   Report   Pct runsfails   test
 0123  24.0  114 14  HdfsUnloadDistributedZkTest.test
 0123   1.3  951  6  TestDynamicLoading.testDynamicLoading
 0123   5.3  980 53  TestSQLHandler.doTest
 0123   0.5  954  4  UnloadDistributedZkTest.test
 Will BadApple all tests above this line except ones listed at the 
top**

Full output:

DO NOT ENABLE LIST:
MoveReplicaHDFSTest.testFailedMove
MoveReplicaHDFSTest.testNormalFailedMove
TestControlledRealTimeReopenThread.testCRTReopen
TestICUNormalizer2CharFilter.testRandomStrings
TestICUTokenizerCJK
TestImpersonationWithHadoopAuth.testForwarding
TestLTRReRankingPipeline.testDifferentTopN
TestRandomChains


DO NOT ANNOTATE LIST
CdcrBidirectionalTest.testBiDir
IndexSizeTriggerTest.testMergeIntegration
IndexSizeTriggerTest.testMixedBounds
IndexSizeTriggerTest.testSplitIntegration
IndexSizeTriggerTest.testTrigger
InfixSuggestersTest.testShutdownDuringBuild
ShardSplitTest.test
ShardSplitTest.testSplitMixedReplicaTypes
ShardSplitTest.testSplitWithChaosMonkey
TestLatLonShapeQueries.testRandomBig
TestRandomChains.testRandomChainsWithLargeStrings
TestTriggerIntegration.testSearchRate

Processing file (History bit 3): HOSS-2019-03-04.csv
Processing file (History bit 2): HOSS-2019-02-25.csv
Processing file (History bit 1): HOSS-2019-02-18.csv
Processing file (History bit 0): HOSS-2019-02-12.csv


**Annotated tests that didn't fail in the last 4 weeks.

  **Tests removed from the next two lists because they were specified in 
'doNotEnable' in the properties file
 MoveReplicaHDFSTest.testNormalFailedMove

  **Annotations will be removed from the following tests because they haven't 
failed in the last 4 rollups.

  **Methods: 5
   DeleteReplicaTest.deleteLiveReplicaTest
   ForceLeaderTest.testReplicasInLowerTerms
   LeaderTragicEventTest.testOtherReplicasAreNotActive
   TestCollectionStateWatchers.testCanWaitForNonexistantCollection
   TestIndexWriterOnVMError.testCheckpoint


Failures in Hoss' reports for the last 4 rollups.

There were 279 unannotated tests that failed in Hoss' rollups. Ordered by the 
date I downloaded the rollup file, newest->oldest. See above for the dates the 
files were collected.
These tests were NOT BadApple'd or AwaitsFix'd
All tests that failed 4 weeks running will be BadApple'd unless there are 
objections

Failures in the last 4 reports..
   Report   Pct runsfails   test
 0123  24.0  114 14  HdfsUnloadDistributedZkTest.test
 0123   1.3  951  6  TestDynamicLoading.testDynamicLoading
 0123   5.3  980 53  TestSQLHandler.doTest
 0123   0.5  954  4  UnloadDistributedZkTest.test
 Will BadApple all tests above this line except ones listed at the 
top**



   012    2.3  310  5  BasicAuthIntegrationTest.testBasicAuth
   012    4.2  739 21  ChaosMonkeySafeLeaderWithPullReplicasTest.test
   012    3.0  730 21  TestCloudSchemaless.test
   012    1.4  668  6  TestPKIAuthenticationPlugin.test
   012    2.4  755 19  TestSimLargeCluster.testSearchRate
   012    0.4  790  9  TestSimTriggerIntegration.testEventQueue
   012    5.7  790 38  TestSimTriggerIntegration.testSearchRate
   012    0.5  690  4  TestStressCloudBlindAtomicUpdates.test_dv
   01 3   0.5  691  6  NodeMarkersRegistrationTest.testNodeMarkersRegistration
   01 3   0.9  685 10  ShardRoutingTest.test

[jira] [Commented] (SOLR-13291) Failed to create collection due to lock held by this virtual machine

2019-03-04 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783593#comment-16783593
 ] 

Erick Erickson commented on SOLR-13291:
---

Possibly related to SOLR-13021, although 13021 is in the tests...

> Failed to create collection due to lock held by this virtual machine
> 
>
> Key: SOLR-13291
> URL: https://issues.apache.org/jira/browse/SOLR-13291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5, 7.7
> Environment: * Solr 7.7.1 (also reproduced on 7.5)
>  * running on Ubuntu 18.04 (also reproduced on AWS instances using Amazon 
> Linux)
>  * Using OpenJDK 11.0.1 as distributed by AdoptOpenJDK
>  * setting up a solr example cloud using `solr start -e cloud` and accepting 
> all default values (2 cluster nodes)
>Reporter: Joachim Sauer
>Priority: Major
> Attachments: tortureSolr.sh
>
>
> We have a weird workload that at times involves deletion and re-creation 
> of collections with the same name in a short period of time (don't ask why).
>  
> When running in a SolrCloud cluster this will occasionally leave a random 
> core lying around and locked even though the Collection deletion was reported 
> to have finished successfully.
>  
> This results in an error the next time a collection with that name is 
> created.
>  
> The attached shell script is consistently able to reproduce the error states 
> within a small number of iterations against the 7.7.1 binary distribution 
> running the default cloud example (`solr start -e cloud`, accept all default 
> values).
>  
> Log entries that seemed relevant to me are:
> At the time when the collection is deleted:
> {code}
> 2019-03-04 16:56:44.037 WARN  (Thread-24) [c:myCollection s:shard2 
> r:core_node4 x:myCollection_shard2_replica_n2] o.a.s.c.ZkController listener 
> throws error
> org.apache.solr.common.SolrException: Unable to reload core 
> [myCollection_shard2_replica_n2]
> at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1463) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at 
> org.apache.solr.core.SolrCore.lambda$getConfListener$20(SolrCore.java:3041) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at 
> org.apache.solr.cloud.ZkController.lambda$fireEventListeners$21(ZkController.java:2803)
>  [solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: org.apache.solr.common.SolrException
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1048) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> ... 3 more
> Caused by: java.lang.NullPointerException
> at 
> org.apache.solr.metrics.SolrMetricManager.loadShardReporters(SolrMetricManager.java:1160)
>  ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at 
> org.apache.solr.metrics.SolrCoreMetricManager.loadReporters(SolrCoreMetricManager.java:92)
>  ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:920) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
> ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
> - 2019-02-23 02:39:07]
> {code}
>  
> Later, when trying to re-create the collection:
>  
> {code}
> 2019-03-04 16:56:51.982 ERROR 
> (OverseerThreadFactory-9-thread-5-processing-n:127.0.1.1:8983_solr) [   ] 
> o.a.s.c.a.c.OverseerCollectionMessageHandler Error from shard: 
> http://127.0.1.1:8983/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 
> 'myCollection_shard2_replica_n2': Unable to create core 
> 
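The "Lock held by this virtual machine" message corresponds to the index's 
write.lock already being held inside the same JVM. The underlying JDK behavior 
can be illustrated with plain java.nio file locks (a minimal sketch, not 
Solr's or Lucene's actual lock-factory code): a second lock attempt on the 
same file from the same JVM is rejected with OverlappingFileLockException.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class WriteLockDemo {
    // Returns true if a second lock attempt on the same file, made from the
    // same JVM through a different channel, is rejected.
    static boolean lockHeldByThisJvm() throws IOException {
        Path lockFile = Files.createTempFile("write", ".lock");
        try (FileChannel c1 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileChannel c2 = FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock lock = c1.tryLock();   // first acquisition succeeds
            try {
                c2.tryLock();               // same JVM, same file: rejected
                return false;
            } catch (OverlappingFileLockException e) {
                return true;                // "lock held by this virtual machine"
            } finally {
                lock.release();
            }
        } finally {
            Files.deleteIfExists(lockFile);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(lockHeldByThisJvm());
    }
}
```

If a deleted collection leaves a core behind with its lock still held, any
subsequent CREATE in the same JVM hits this condition until the lock is
released, which matches the error reported above.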

[jira] [Commented] (SOLR-13288) Async logging max length should only apply to the message

2019-03-04 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783587#comment-16783587
 ] 

Erick Erickson commented on SOLR-13288:
---

[~varunthacker]

Is this theoretical or practical? 10K characters is enough to capture the 
hits/status/QTime for the vast majority of cases. We've seen some very long 
queries where the 10K limit would truncate the tail, and I suppose that if 
some of the update options that dump 1,000 IDs were turned on, those would be 
lost too.

Hmmm, one other thing that just came to mind: What about stack traces?

> Async logging max length should only apply to the message
> -
>
> Key: SOLR-13288
> URL: https://issues.apache.org/jira/browse/SOLR-13288
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> After SOLR-12753 messages are limited to 10240 chars. +1 for having a limit, 
> we even hit this issue internally recently.
>  
> Sample log line 
> {code:java}
> 2019-03-03 19:04:51.293 INFO  (qtp776700275-57) [c:gettingstarted s:shard2 
> r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.S.Request 
> [gettingstarted_shard2_replica_n4]  webapp=/solr path=/select 
> params={q=*:*&_=1551639889792} hits=0 status=0 QTime=206 } {code}
> The way it's implemented currently though it picks the first 10240 chars from 
> the start. So let's say it was reduced to 10 the log line will look like
> {code:java}
> 2019-03-03{code}
>  If we wrap the {{maxLen}} around the message part then we ensure some parts 
> are always captured. So with this pattern 
> {code:java}
> %d{-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} 
> %X{core}] %c{1.} %maxLen{%m %notEmpty{=>%ex{short}}}{10}} %n{code}
>  the message will now look like 
> {code:java}
> 2019-03-03 19:07:24.901 INFO  (qtp776700275-57) [c:gettingstarted s:shard2 
> r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.S.Request 
> [gettingst} {code}
> This is still not perfect, as ideally we'd want to capture the 
> hits/status/QTime part even if the message gets shortened. I'm not sure 
> whether the log4j2 PatternLayout syntax supports that. 
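The truncation behavior described above can be illustrated with a small 
sketch (plain Java string handling standing in for log4j2's %maxLen 
converter, not log4j2 itself; the field values are taken from the sample log 
line):

```java
public class MaxLenDemo {
    // Cut a string down to at most maxLen characters, the way log4j2's
    // %maxLen converter truncates its input.
    static String maxLen(String s, int maxLen) {
        return s.length() <= maxLen ? s : s.substring(0, maxLen);
    }

    public static void main(String[] args) {
        String prefix  = "2019-03-03 19:04:51.293 INFO  (qtp776700275-57) ";
        String message = "webapp=/solr path=/select hits=0 status=0 QTime=206";

        // Limit applied to the whole formatted line: only the date survives.
        System.out.println(maxLen(prefix + message, 10));   // 2019-03-03
        // Limit applied to the message alone: the prefix fields survive.
        System.out.println(prefix + maxLen(message, 10));
    }
}
```

With a limit of 10 the whole-line variant reduces to "2019-03-03", exactly as 
in the issue description, while wrapping only the message keeps the 
timestamp, level, thread, and MDC fields intact.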






[GitHub] [lucene-solr] asfgit merged pull request #590: SOLR-13152

2019-03-04 Thread GitBox
asfgit merged pull request #590: SOLR-13152
URL: https://github.com/apache/lucene-solr/pull/590
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783572#comment-16783572
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit b8b51389ae7737b3594e3409138513cdd4817d1c in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b8b5138 ]

SOLR-12923: harden testEventQueue by replacing the arbitrary sleep call with a 
countdown latch

(cherry picked from commit 7f7357696f9efe63147bacc3e1ed3d800d389d28)
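The hardening pattern named in the commit message, a countdown latch in place 
of an arbitrary sleep, can be sketched generically (an illustration of the 
technique, not the actual testEventQueue code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    // Wait for a background event deterministically instead of sleeping a
    // fixed amount and hoping the event has happened by then.
    static boolean runOnce() throws InterruptedException {
        CountDownLatch eventFired = new CountDownLatch(1);

        Thread trigger = new Thread(() -> {
            // ... the work the test is waiting on ...
            eventFired.countDown();   // signal completion explicitly
        });
        trigger.start();

        // A generous upper bound: slow machines just take longer; they
        // don't produce a false failure the way a short fixed sleep would.
        boolean fired = eventFired.await(30, TimeUnit.SECONDS);
        trigger.join();
        return fired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce());
    }
}
```

The await() returns as soon as the latch reaches zero, so the test only pays 
the full timeout when something is genuinely wrong.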


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783570#comment-16783570
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 44dff11eca6b6969445bd0d84a66e1a5d5bae9d9 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=44dff11 ]

SOLR-12923: harden testEventQueue by replacing the arbitrary sleep call with a 
countdown latch

(cherry picked from commit 7f7357696f9efe63147bacc3e1ed3d800d389d28)


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783571#comment-16783571
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 666e83d84a5f33af30eb5599567598a3b97f1e73 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=666e83d ]

SOLR-12923: increase all await() times in TriggerIntegrationTest

This means that 'real' failures (which should be rare and hopefully 
reproducible) will be 'slow', but the trade-off will be fewer hard-to-reproduce 
'false failures' due to thread contention on slow or heavily loaded (i.e. 
jenkins) machines

(cherry picked from commit 235b15acfc97a97cdf03ce73939bc5daf052b6cf)


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783567#comment-16783567
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit e83d6f14812ec21f5b41dc7b3405df9fa6df86dd in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e83d6f1 ]

SOLR-12923: increase all await() times in TriggerIntegrationTest

This means that 'real' failures (which should be rare and hopefully 
reproducible) will be 'slow', but the trade-off will be fewer hard-to-reproduce 
'false failures' due to thread contention on slow or heavily loaded (i.e. 
jenkins) machines

(cherry picked from commit 235b15acfc97a97cdf03ce73939bc5daf052b6cf)


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783569#comment-16783569
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit b3330b0a11741996d6dd2ab0513bf96bd77e4377 in lucene-solr's branch 
refs/heads/branch_8_0 from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b3330b0 ]

SOLR-12923: increase all await() times in TriggerIntegrationTest

This means that 'real' failures (which should be rare and hopefully 
reproducible) will be 'slow', but the trade-off will be fewer hard-to-reproduce 
'false failures' due to thread contention on slow or heavily loaded (i.e. 
jenkins) machines

(cherry picked from commit 235b15acfc97a97cdf03ce73939bc5daf052b6cf)


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783568#comment-16783568
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 26b498d0a9d25626a15e25b0cf97c8339114263a in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=26b498d ]

SOLR-12923: harden testEventQueue by replacing the arbitrary sleep call with a 
countdown latch

(cherry picked from commit 7f7357696f9efe63147bacc3e1ed3d800d389d28)


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this; 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.






[jira] [Commented] (SOLR-13288) Async logging max length should only apply to the message

2019-03-04 Thread John Gallagher (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783563#comment-16783563
 ] 

John Gallagher commented on SOLR-13288:
---

It would be nice to put hits/status/QTime in MDC if possible, just like 
collection/shard/replica/core. That way the standard PatternLayout can be used 
to truncate only the message portion, and to rearrange those fields to the 
front as desired by a customer.

> Async logging max length should only apply to the message
> -
>
> Key: SOLR-13288
> URL: https://issues.apache.org/jira/browse/SOLR-13288
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> After SOLR-12753 messages are limited to 10240 chars. +1 for having a limit, 
> we even hit this issue internally recently.
>  
> Sample log line 
> {code:java}
> 2019-03-03 19:04:51.293 INFO  (qtp776700275-57) [c:gettingstarted s:shard2 
> r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.S.Request 
> [gettingstarted_shard2_replica_n4]  webapp=/solr path=/select 
> params={q=*:*&_=1551639889792} hits=0 status=0 QTime=206 } {code}
> The way it's implemented currently though it picks the first 10240 chars from 
> the start. So let's say it was reduced to 10 the log line will look like
> {code:java}
> 2019-03-03{code}
>  If we wrap the {{maxLen}} around the message part then we ensure some parts 
> are always captured. So with this pattern 
> {code:java}
> %d{-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} 
> %X{core}] %c{1.} %maxLen{%m %notEmpty{=>%ex{short}}}{10}} %n{code}
>  the message will now look like 
> {code:java}
> 2019-03-03 19:07:24.901 INFO  (qtp776700275-57) [c:gettingstarted s:shard2 
> r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.S.Request 
> [gettingst} {code}
> This is still not perfect, as ideally we'd want to capture the 
> hits/status/QTime part even if the message gets shortened. I'm not sure 
> whether the log4j2 PatternLayout syntax supports that. 






[jira] [Updated] (SOLR-13291) Failed to create collection due to lock held by this virtual machine

2019-03-04 Thread Joachim Sauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joachim Sauer updated SOLR-13291:
-
Description: 
We have a weird workload that at times involves deletion and re-creation 
of collections with the same name in a short period of time (don't ask why).

 

When running in a SolrCloud cluster this will occasionally leave a random core 
lying around and locked even though the Collection deletion was reported to 
have finished successfully.

 

This results in an error the next time a collection with that name is 
created.

 

The attached shell script is consistently able to reproduce the error states 
within a small number of iterations against the 7.7.1 binary distribution 
running the default cloud example (`solr start -e cloud`, accept all default 
values).

 

Log entries that seemed relevant to me are:

At the time when the collection is deleted:

{code}
2019-03-04 16:56:44.037 WARN  (Thread-24) [c:myCollection s:shard2 r:core_node4 
x:myCollection_shard2_replica_n2] o.a.s.c.ZkController listener throws error
org.apache.solr.common.SolrException: Unable to reload core 
[myCollection_shard2_replica_n2]
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1463) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at 
org.apache.solr.core.SolrCore.lambda$getConfListener$20(SolrCore.java:3041) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at 
org.apache.solr.cloud.ZkController.lambda$fireEventListeners$21(ZkController.java:2803)
 [solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.apache.solr.common.SolrException
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1048) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
... 3 more
Caused by: java.lang.NullPointerException
at 
org.apache.solr.metrics.SolrMetricManager.loadShardReporters(SolrMetricManager.java:1160)
 ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at 
org.apache.solr.metrics.SolrCoreMetricManager.loadReporters(SolrCoreMetricManager.java:92)
 ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:920) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]
{code}

 

Later, when trying to re-create the collection:

 

{code}
2019-03-04 16:56:51.982 ERROR 
(OverseerThreadFactory-9-thread-5-processing-n:127.0.1.1:8983_solr) [   ] 
o.a.s.c.a.c.OverseerCollectionMessageHandler Error from shard: 
http://127.0.1.1:8983/solr
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.1.1:8983/solr: Error CREATEing SolrCore 
'myCollection_shard2_replica_n2': Unable to create core 
[myCollection_shard2_replica_n2
] Caused by: Lock held by this virtual machine: 
/home/joachim/workspaces/devtools/solr-7.7.1/example/cloud/node1/solr/myCollection_shard2_replica_n2/data/index/write.lock
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
 ~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan 
- 2019-02-23 02:39:09]
at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) 
~[solr-solrj-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:09]
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:173)
 

[jira] [Created] (SOLR-13291) Failed to create collection due to

2019-03-04 Thread Joachim Sauer (JIRA)
Joachim Sauer created SOLR-13291:


 Summary: Failed to create collection due to 
 Key: SOLR-13291
 URL: https://issues.apache.org/jira/browse/SOLR-13291
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 7.7, 7.5
 Environment: * Solr 7.7.1 (also reproduced on 7.5)
 * running on Ubuntu 18.04 (also reproduced on AWS instances using Amazon Linux)
 * Using OpenJDK 11.0.1 as distributed by AdoptOpenJDK
 * setting up a solr example cloud using `solr start -e cloud` and accepting 
all default values (2 cluster nodes)
Reporter: Joachim Sauer
 Attachments: tortureSolr.sh

We have a weird workload that at times involves deletion and re-creation 
of collections with the same name in a short period of time (don't ask why).

 

When running in a SolrCloud cluster this will occasionally leave a random core 
lying around and locked even though the Collection deletion was reported to 
have finished successfully.

 

This results in an error the next time a collection of that given name should 
be created.

 

The attached shell script is consistently able to reproduce the error states 
within a small number of iterations against the 7.7.1 binary distribution 
running the default cloud example (`solr start -e cloud`, accept all default 
values).

 

Log entries that seemed relevant to me are:

At the time when the collection is deleted:

{{2019-03-04 16:56:44.037 ERROR (Thread-24) [c:myCollection s:shard2 
r:core_node4 x:myCollection_shard2_replica_n2] o.a.s.c.SolrCore Error while 
closing}}
{{java.lang.NullPointerException: null}}
{{ at org.apache.solr.core.SolrCore.close(SolrCore.java:1639) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1040) 
[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 
org.apache.solr.core.SolrCore.lambda$getConfListener$20(SolrCore.java:3041) 
[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 
org.apache.solr.cloud.ZkController.lambda$fireEventListeners$21(ZkController.java:2803)
 [solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at java.lang.Thread.run(Thread.java:834) [?:?]}}
{{2019-03-04 16:56:44.037 WARN (Thread-24) [c:myCollection s:shard2 
r:core_node4 x:myCollection_shard2_replica_n2] o.a.s.c.ZkController listener 
throws error}}
{{org.apache.solr.common.SolrException: Unable to reload core 
[myCollection_shard2_replica_n2]}}
{{ at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1463) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 
org.apache.solr.core.SolrCore.lambda$getConfListener$20(SolrCore.java:3041) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 
org.apache.solr.cloud.ZkController.lambda$fireEventListeners$21(ZkController.java:2803)
 [solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at java.lang.Thread.run(Thread.java:834) [?:?]}}
{{Caused by: org.apache.solr.common.SolrException}}
{{ at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1048) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.SolrCore.reload(SolrCore.java:666) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1439) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ ... 3 more}}
{{Caused by: java.lang.NullPointerException}}
{{ at 
org.apache.solr.metrics.SolrMetricManager.loadShardReporters(SolrMetricManager.java:1160)
 ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 
org.apache.solr.metrics.SolrCoreMetricManager.loadReporters(SolrCoreMetricManager.java:92)
 ~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at org.apache.solr.core.SolrCore.<init>(SolrCore.java:920) 
~[solr-core-7.7.1.jar:7.7.1 5bf96d32f88eb8a2f5e775339885cd6ba84a3b58 - ishan - 
2019-02-23 02:39:07]}}
{{ at 

Re: Welcome Ignacio Vera to the PMC

2019-03-04 Thread Erick Erickson
Welcome!

> On Mar 4, 2019, at 7:45 AM, Đạt Cao Mạnh  wrote:
> 
> Congrats Ignacio!
> 
> On Mon, Mar 4, 2019 at 2:09 PM jim ferenczi  wrote:
> Welcome and congrats Ignacio!
> 
> Le lun. 4 mars 2019 à 15:03, David Smiley  a écrit :
> Welcome Ignacio!
> 
> On Mon, Mar 4, 2019 at 7:53 AM Jason Gerlowski  wrote:
> Congrats Ignacio!
> 
> On Mon, Mar 4, 2019 at 7:17 AM Martin Gainty  wrote:
> >
> > ¡Bienvenidos Ignacio!
> >
> > 
> > From: Dawid Weiss 
> > Sent: Monday, March 4, 2019 6:45 AM
> > To: dev@lucene.apache.org
> > Subject: Re: Welcome Ignacio Vera to the PMC
> >
> > Welcome, Ignacio!
> >
> > On Mon, Mar 4, 2019 at 10:09 AM Adrien Grand  wrote:
> > >
> > > I am pleased to announce that Ignacio Vera has accepted the PMC's
> > > invitation to join.
> > >
> > > Welcome Ignacio!
> > >
> > > --
> > > Adrien
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > For additional commands, e-mail: dev-h...@lucene.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> -- 
> Lucene/Solr Search Committer (PMC), Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com
> 
> 
> -- 
> Best regards,
> Cao Mạnh Đạt
> D.O.B : 31-07-1991
> Cell: (+84) 946.328.329
> E-mail: caomanhdat...@gmail.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783551#comment-16783551
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 235b15acfc97a97cdf03ce73939bc5daf052b6cf in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=235b15a ]

SOLR-12923: increase all await() times in TriggerIntegrationTest

This means that 'real' failures (which should be rare and hopefully 
reproducible) will be 'slow', but the trade off will be less hard to reproduce 
'false failures' due to thread contention on slow or heavily loaded (ie: 
jenkins) machines


> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way to flaky and need special attention.

2019-03-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783552#comment-16783552
 ] 

ASF subversion and git services commented on SOLR-12923:


Commit 7f7357696f9efe63147bacc3e1ed3d800d389d28 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7f73576 ]

SOLR-12923: harden testEventQueue by replacing the arbitrary sleep call with a 
countdown latch

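As a rough illustration of that hardening pattern (not the actual TriggerIntegrationTest code): replacing an arbitrary sleep with a CountDownLatch lets the test continue the moment the event is processed, while a generous await timeout still bounds genuine failures. The class name, method, and timings below are invented for the sketch.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchVsSleep {
    // Wait for a background "event" deterministically instead of sleeping
    // a fixed time and hoping the event has fired by then.
    public static boolean waitForEvent(long workMillis) throws InterruptedException {
        CountDownLatch eventProcessed = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(workMillis); // stands in for the trigger doing its work
            } catch (InterruptedException ignored) {
            }
            eventProcessed.countDown();   // signal completion explicitly
        });
        worker.start();
        // Generous upper bound: a real failure still times out eventually,
        // but a slow or loaded machine no longer produces a false failure.
        return eventProcessed.await(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitForEvent(50) ? "event processed" : "timed out");
    }
}
```

The await() call returns as soon as the latch reaches zero, so the test is both faster on fast machines and more tolerant on slow ones.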

> The new AutoScaling tests are way to flaky and need special attention.
> --
>
> Key: SOLR-12923
> URL: https://issues.apache.org/jira/browse/SOLR-12923
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Priority: Major
>
> I've already done some work here (not posted yet). We need to address this, 
> these tests are too new to fail so often and easily.
> I want to add beasting to precommit (LUCENE-8545) to help prevent tests that 
> fail so easily from being committed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 35 - Unstable

2019-03-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/35/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.update.TransactionLogTest:  
   1) Thread[id=13, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at sun.misc.Unsafe.park(Native Method)   
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
 at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.update.TransactionLogTest: 
   1) Thread[id=13, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([5F71B1EE0AAE3681]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.update.TransactionLogTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=13, 
name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest] at sun.misc.Unsafe.park(Native Method)   
  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
 at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
 at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
 at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=13, name=Log4j2-TF-1-AsyncLoggerConfig-1, state=TIMED_WAITING, 
group=TGRP-TransactionLogTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
com.lmax.disruptor.TimeoutBlockingWaitStrategy.waitFor(TimeoutBlockingWaitStrategy.java:38)
at 
com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:56)
at 
com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:159)
at 
com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:125)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([5F71B1EE0AAE3681]:0)




Build Log:
[...truncated 13365 lines...]
   [junit4] Suite: org.apache.solr.update.TransactionLogTest
   [junit4]   2> Mar 04, 2019 2:51:32 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked 
thread(s).
   [junit4]   2> Mar 04, 2019 2:51:52 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> SEVERE: 1 thread leaked from SUITE scope at 
org.apache.solr.update.TransactionLogTest: 
   [junit4]   2>1) Thread[id=13, name=Log4j2-TF-1-AsyncLoggerConfig-1, 
state=TIMED_WAITING, group=TGRP-TransactionLogTest]
   [junit4]   2> at sun.misc.Unsafe.park(Native Method)
   [junit4]   2> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
   [junit4]   2> at 

[jira] [Commented] (SOLR-7229) Allow DIH to handle attachments as separate documents

2019-03-04 Thread Nazerke Seidan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783547#comment-16783547
 ] 

Nazerke Seidan commented on SOLR-7229:
--

Hi Tim,

I was wondering whether this project is still open. I would like to 
participate in GSoC'19 by contributing to the Solr community. 

> Allow DIH to handle attachments as separate documents
> -
>
> Key: SOLR-7229
> URL: https://issues.apache.org/jira/browse/SOLR-7229
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>  Labels: gsoc2017
>
> With Tika 1.7's RecursiveParserWrapper, it is possible to maintain metadata 
> of individual attachments/embedded documents.  Tika's default handling was to 
> maintain the metadata of the container document and concatenate the contents 
> of all embedded files.  With SOLR-7189, we added the legacy behavior.
> It might be handy, for example, to be able to send an MSG file through DIH 
> and treat the container email as well each attachment as separate (child?) 
> documents, or send a zip of jpeg files and correctly index the geo locations 
> for each image file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10329) Rebuild Solr examples

2019-03-04 Thread Nazerke Seidan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783533#comment-16783533
 ] 

Nazerke Seidan commented on SOLR-10329:
---

Hi Alexandre,

I was wondering whether this project is still open. I would like to 
participate in GSoC'19 by contributing to the Solr community. 

 

Many thanks!

> Rebuild Solr examples
> -
>
> Key: SOLR-10329
> URL: https://issues.apache.org/jira/browse/SOLR-10329
> Project: Solr
>  Issue Type: Wish
>  Components: examples
>Reporter: Alexandre Rafalovitch
>Priority: Major
>  Labels: gsoc2017
>
> Apache Solr ships with a number of examples. They evolved from a kitchen sink 
> example and are rather large. When new Solr features are added, they are 
> often shoehorned into the most appropriate example and sometimes are not 
> represented at all. 
> Often, for new users, it is hard to tell what part of example is relevant, 
> what part is default and what part is demonstrating something completely 
> different.
> It would take significant (and very appreciated) effort to review all the 
> examples and rebuild them to provide clean way to showcase best practices 
> around base and most recent features.
> Specific issues are around kitchen sink vs. minimal examples, a better approach 
> to "schemaless" mode, and creating examples and datasets that allow users to create 
> both "hello world" and more-advanced tutorials.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Query about solr volunteers to mentor: GSoC 19

2019-03-04 Thread Nazerke Seidan
Hi All,

I am a final-year CS BSc student interested in participating in GSoC'19 by
contributing to the Apache Solr project. I was wondering if there are any
volunteers from the Solr community to mentor a GSoC'19 project. I would like to
discuss potential topics.


Many thanks,

Nazerke


[GitHub] [lucene-solr] ctargett commented on a change in pull request #594: SOLR-13259: Add new section on Reindexing in Solr

2019-03-04 Thread GitBox
ctargett commented on a change in pull request #594: SOLR-13259: Add new 
section on Reindexing in Solr
URL: https://github.com/apache/lucene-solr/pull/594#discussion_r262136271
 
 

 ##
 File path: solr/solr-ref-guide/src/reindexing.adoc
 ##
 @@ -0,0 +1,185 @@
+= Reindexing
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+There are several types of changes to Solr configuration that require you to 
reindex your data.
+
+These changes include editing properties of fields or field types; adding 
fields, field types, or copy field rules;
+upgrading Solr; and some system configuration properties.
+
+It's important to be aware that many changes require reindexing, because there 
are times when not reindexing
+can have negative consequences for Solr as a system, or for the ability of 
your users to find what they are looking for.
+
+There is no process in Solr for programmatically reindexing data. When we say 
"reindex", we mean, literally,
+"index it again". However you got the data into the index the first time, you 
will run that process again.
+It is strongly recommended that Solr users index their data in a repeatable, 
consistent way, so that the process can be
+easily repeated when the need for reindexing arises.
+
+Reindexing is recommended during major upgrades, so in addition to covering 
what types of configuration changes should trigger a reindex, this section will 
also cover strategies for reindexing.
+
+== Changes that Require Reindex
+
+=== Schema Changes
+
+All changes to a collection's schema require reindexing. This is because many 
of the available options are only
+applied during the indexing process. Solr simply has no way to implement the 
desired change without reindexing
+the data.
+
+To understand the general reason why reindexing is ever required, it's helpful 
to understand the relationship between
+Solr's schema and the underlying Lucene index. Lucene does not use a schema, 
it is a Solr-only concept. When you delete
+a field from Solr's schema, it does not modify Lucene's index in any way. When 
you add a field to Solr's schema, the
+field does not exist in Lucene's index until a document that contains the 
field is indexed.
+
+This means that there are many types of schema changes that cannot be 
reflected in the index simply by modifying
+Solr's schema. This is different from most database models where schemas are 
used. With regard to indexing, Solr's
+schema acts like a rulebook for indexing documents by telling Lucene how to 
interpret the data being sent. Once the
+documents are in Lucene, Solr's schema has no control over the underlying data 
structure.
+
+In addition to the types of schema changes described in the following 
sections, changing the schema `version` property
+is equivalent to changing field type properties. This type of change is 
usually only made during or because of a major upgrade.
+
+==== Adding or Deleting Fields
+
+If you add or delete a field from Solr's schema, it's strongly recommended to 
reindex.
+
+When you add a field, you generally do so with the intent to use the field in 
some way.
+Since documents were indexed before the field was added, the index will not 
hold any references to the field for earlier documents.
+If you want to use the new field for faceting, for example, the new field 
facet will not include any documents that were not indexed with the new field.
+
+There is a slightly different situation when deleting a field.
+In this case, since simply removing the field from the schema doesn't change 
anything about the index, the field will still be in the index until the 
documents are reindexed.
+In fact, Lucene may keep a reference to a deleted field _forever_ (see also 
https://issues.apache.org/jira/browse/LUCENE-1761[LUCENE-1761]).
+This may only be an issue for your environment if you try to add a field that 
has the same name as a deleted field,
+but it can also be an issue for dynamic field rules that are later removed.
+
+==== Changing Field and Field Type Field Properties
+
+Solr has two ways of defining field properties.
+
+The first is to define properties on a field type. These properties are then 
applied to all fields of that type unless they are 

[GitHub] [lucene-solr] ctargett commented on a change in pull request #594: SOLR-13259: Add new section on Reindexing in Solr

2019-03-04 Thread GitBox
ctargett commented on a change in pull request #594: SOLR-13259: Add new 
section on Reindexing in Solr
URL: https://github.com/apache/lucene-solr/pull/594#discussion_r262136087
 
 

 ##
 File path: solr/solr-ref-guide/src/reindexing.adoc
 ##
 @@ -0,0 +1,185 @@
+= Reindexing
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+There are several types of changes to Solr configuration that require you to 
reindex your data.
+
+These changes include editing properties of fields or field types; adding 
fields, field types, or copy field rules;
+upgrading Solr; and some system configuration properties.
+
+It's important to be aware that many changes require reindexing, because there 
are times when not reindexing
+can have negative consequences for Solr as a system, or for the ability of 
your users to find what they are looking for.
+
+There is no process in Solr for programmatically reindexing data. When we say 
"reindex", we mean, literally,
+"index it again". However you got the data into the index the first time, you 
will run that process again.
+It is strongly recommended that Solr users index their data in a repeatable, 
consistent way, so that the process can be
+easily repeated when the need for reindexing arises.
+
+Reindexing is recommended during major upgrades, so in addition to covering 
what types of configuration changes should trigger a reindex, this section will 
also cover strategies for reindexing.
+
+== Changes that Require Reindex
+
+=== Schema Changes
+
+All changes to a collection's schema require reindexing. This is because many 
of the available options are only
+applied during the indexing process. Solr simply has no way to implement the 
desired change without reindexing
+the data.
+
+To understand the general reason why reindexing is ever required, it's helpful 
to understand the relationship between
+Solr's schema and the underlying Lucene index. Lucene does not use a schema, 
it is a Solr-only concept. When you delete
+a field from Solr's schema, it does not modify Lucene's index in any way. When 
you add a field to Solr's schema, the
+field does not exist in Lucene's index until a document that contains the 
field is indexed.
+
+This means that there are many types of schema changes that cannot be 
reflected in the index simply by modifying
+Solr's schema. This is different from most database models where schemas are 
used. With regard to indexing, Solr's
+schema acts like a rulebook for indexing documents by telling Lucene how to 
interpret the data being sent. Once the
+documents are in Lucene, Solr's schema has no control over the underlying data 
structure.
+
+In addition to the types of schema changes described in the following 
sections, changing the schema `version` property
+is equivalent to changing field type properties. This type of change is 
usually only made during or because of a major upgrade.
+
+ Adding or Deleting Fields
+
+If you add or delete a field from Solr's schema, it's strongly recommended to 
reindex.
+
+When you add a field, you generally do so with the intent to use the field in 
some way.
+Since documents were indexed before the field was added, the index will not 
hold any references to the field for earlier documents.
+If you want to use the new field for faceting, for example, the new field 
facet will not include any documents that were not indexed with the new field.
+
+There is a slightly different situation when deleting a field.
+In this case, since simply removing the field from the schema doesn't change 
anything about the index, the field will still be in the index until the 
documents are reindexed.
+In fact, Lucene may keep a reference to a deleted field _forever_ (see also 
https://issues.apache.org/jira/browse/LUCENE-1761[LUCENE-1761]).
+This may only be an issue for your environment if you try to add a field that 
has the same name as a deleted field,
+but it can also be an issue for dynamic field rules that are later removed.
+
+ Changing Field and Field Type Field Properties
+
+Solr has two ways of defining field properties.
+
+The first is to define properties on a field type. These properties are then 
applied to all fields of that type unless they are 

[GitHub] [lucene-solr] ctargett commented on a change in pull request #594: SOLR-13259: Add new section on Reindexing in Solr

2019-03-04 Thread GitBox
ctargett commented on a change in pull request #594: SOLR-13259: Add new 
section on Reindexing in Solr
URL: https://github.com/apache/lucene-solr/pull/594#discussion_r262133903
 
 

 ##
 File path: solr/solr-ref-guide/src/reindexing.adoc
 ##
 @@ -0,0 +1,185 @@
+= Reindexing
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+There are several types of changes to Solr configuration that require you to 
reindex your data.
+
+These changes include editing properties of fields or field types; adding 
fields, field types, or copy field rules;
+upgrading Solr; and some system configuration properties.
+
+It's important to be aware that many changes require reindexing, because there 
are times when not reindexing
+can have negative consequences for Solr as a system, or for the ability of 
your users to find what they are looking for.
+
+There is no process in Solr for programmatically reindexing data. When we say 
"reindex", we mean, literally,
+"index it again". However you got the data into the index the first time, you 
will run that process again.
+It is strongly recommended that Solr users index their data in a repeatable, 
consistent way, so that the process can be
+easily repeated when the need for reindexing arises.
+
+Reindexing is recommended during major upgrades, so in addition to covering 
what types of configuration changes should trigger a reindex, this section will 
also cover strategies for reindexing.
+
+== Changes that Require Reindex
+
+=== Schema Changes
+
+All changes to a collection's schema require reindexing. This is because many 
of the available options are only
+applied during the indexing process. Solr simply has no way to implement the 
desired change without reindexing
+the data.
+
+To understand the general reason why reindexing is ever required, it's helpful 
to understand the relationship between
+Solr's schema and the underlying Lucene index. Lucene does not use a schema, 
it is a Solr-only concept. When you delete
+a field from Solr's schema, it does not modify Lucene's index in any way. When 
you add a field to Solr's schema, the
+field does not exist in Lucene's index until a document that contains the 
field is indexed.
+
+This means that there are many types of schema changes that cannot be 
reflected in the index simply by modifying
+Solr's schema. This is different from most database models where schemas are 
used. With regard to indexing, Solr's
+schema acts like a rulebook for indexing documents by telling Lucene how to 
interpret the data being sent. Once the
+documents are in Lucene, Solr's schema has no control over the underlying data 
structure.
+
+In addition to the types of schema changes described in the following 
sections, changing the schema `version` property
+is equivalent to changing field type properties. This type of change is 
usually only made during or because of a major upgrade.
+
+ Adding or Deleting Fields
+
+If you add or delete a field from Solr's schema, it's strongly recommended to 
reindex.
+
+When you add a field, you generally do so with the intent to use the field in 
some way.
+Since documents were indexed before the field was added, the index will not 
hold any references to the field for earlier documents.
+If you want to use the new field for faceting, for example, the new field 
facet will not include any documents that were not indexed with the new field.
+
+There is a slightly different situation when deleting a field.
+In this case, since simply removing the field from the schema doesn't change 
anything about the index, the field will still be in the index until the 
documents are reindexed.
+In fact, Lucene may keep a reference to a deleted field _forever_ (see also 
https://issues.apache.org/jira/browse/LUCENE-1761[LUCENE-1761]).
+This may only be an issue for your environment if you try to add a field that 
has the same name as a deleted field,
+but it can also be an issue for dynamic field rules that are later removed.
+
+ Changing Field and Field Type Field Properties
+
+Solr has two ways of defining field properties.
+
+The first is to define properties on a field type. These properties are then 
applied to all fields of that type unless they are 

[GitHub] [lucene-solr] ctargett commented on a change in pull request #594: SOLR-13259: Add new section on Reindexing in Solr

2019-03-04 Thread GitBox
ctargett commented on a change in pull request #594: SOLR-13259: Add new 
section on Reindexing in Solr
URL: https://github.com/apache/lucene-solr/pull/594#discussion_r262133836
 
 

 ##
 File path: solr/solr-ref-guide/src/reindexing.adoc
 ##
 @@ -0,0 +1,185 @@
+= Reindexing
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+There are several types of changes to Solr configuration that require you to reindex your data.
+
+These changes include editing properties of fields or field types; adding fields, field types, or copy field rules; upgrading Solr; and changing some system configuration properties.
+
+It's important to be aware that many changes require reindexing, because not reindexing can have negative consequences for Solr as a system, or for the ability of your users to find what they are looking for.
+
+There is no process in Solr for programmatically reindexing data. When we say "reindex", we mean, literally, "index it again": however you got the data into the index the first time, you will run that process again. It is strongly recommended that Solr users index their data in a repeatable, consistent way, so that the process can be easily repeated whenever the need for reindexing arises.
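As an illustrative sketch of "index it again": one common pattern is to clear the collection with a delete-by-query and then re-run the original ingestion job. The delete payload below is standard Solr update JSON; the collection name and endpoint in the note are assumptions for illustration only.

```json
{ "delete": { "query": "*:*" } }
```

POSTed to something like `/solr/<collection>/update?commit=true`, this removes every document; the original indexing process is then simply run again against the emptied (or freshly created) collection.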
+
+Reindexing is recommended during major upgrades, so in addition to covering what types of configuration changes should trigger a reindex, this section will also cover strategies for reindexing.
+
+== Changes that Require Reindex
+
+=== Schema Changes
+
+All changes to a collection's schema require reindexing. This is because many of the available options are only applied during the indexing process. Solr simply has no way to implement the desired change without reindexing the data.
+
+To understand why reindexing is required, it's helpful to understand the relationship between Solr's schema and the underlying Lucene index. Lucene does not use a schema; a schema is a Solr-only concept. When you delete a field from Solr's schema, Lucene's index is not modified in any way. When you add a field to Solr's schema, the field does not exist in Lucene's index until a document that contains the field is indexed.
+
+This means that there are many types of schema changes that cannot be reflected in the index simply by modifying Solr's schema. This is different from most database models where schemas are used. With regard to indexing, Solr's schema acts like a rulebook for indexing documents, telling Lucene how to interpret the data being sent. Once the documents are in Lucene, Solr's schema has no control over the underlying data structure.
+
+In addition to the types of schema changes described in the following sections, changing the schema `version` property is equivalent to changing field type properties. This type of change is usually only made during or because of a major upgrade.
+
+==== Adding or Deleting Fields
+
+If you add a field to or delete a field from Solr's schema, reindexing is strongly recommended.
+
+When you add a field, you generally do so with the intent to use the field in some way.
+Since documents were indexed before the field was added, the index will not hold any references to the field for those earlier documents.
+If you want to use the new field for faceting, for example, a facet on the new field will not include any documents that were indexed before the field was added.
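To illustrate, a terms facet on the newly added field (the field and bucket names here are hypothetical) counts only documents that carry a value for it; documents indexed before the field existed simply have nothing to bucket and silently fall out of the counts:

```json
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "category_s"
    }
  }
}
```

This is a standard JSON Facet API request body, sent to `/solr/<collection>/select`; until the older documents are reindexed with `category_s`, the facet counts will understate the collection.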
+
+There is a slightly different situation when deleting a field.
+In this case, since simply removing the field from the schema doesn't change anything about the index, the field will still be in the index until the documents are reindexed.
+In fact, Lucene may keep a reference to a deleted field _forever_ (see also https://issues.apache.org/jira/browse/LUCENE-1761[LUCENE-1761]).
+This may only be an issue for your environment if you try to add a field that has the same name as a deleted field, but it can also be an issue for dynamic field rules that are later removed.
+
+==== Changing Field and Field Type Properties
+
+Solr has two ways of defining field properties.
+
+The first is to define properties on a field type. These properties are then applied to all fields of that type unless they are 
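As an illustrative sketch (the type and field names are invented), properties set on a `<fieldType>` act as defaults for every field of that type, while an individual `<field>` declaration can override them:

```xml
<!-- properties defined once on the type -->
<fieldType name="string_dv" class="solr.StrField" docValues="true" stored="true"/>

<!-- inherits docValues="true" and stored="true" from the type -->
<field name="author" type="string_dv"/>

<!-- overrides the type's default for this one field -->
<field name="internal_note" type="string_dv" stored="false"/>
```

Because these properties steer how values are written into the Lucene index, changing either the type-level defaults or a per-field override generally requires reindexing.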

[jira] [Updated] (SOLR-12884) Admin UI, admin/luke and *Point fields

2019-03-04 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12884:
-
Component/s: Admin UI

> Admin UI, admin/luke and *Point fields
> --
>
> Key: SOLR-12884
> URL: https://issues.apache.org/jira/browse/SOLR-12884
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 8.0
>Reporter: Erick Erickson
>Priority: Major
>
> One of the conference attendees noted that you go to the schema browser and 
> click on, say, a pint field, then click "load term info", nothing is shown.
> admin/luke similarly doesn't show much of interest; here's the response for a
> pint vs. a tint field:
> "popularity":{ "type":"pint", "schema":"I-SD-OF--"},
> "popularityt":{ "type":"tint", "schema":"I-S--OF--",
>                        "index":"-TS--", "docs":15},
>  
> What, if anything, should we do in these two cases? Since the points-based 
> numerics don't have terms like Trie* fields, I don't think we _can_ show much 
> more, so the above makes sense; it's just jarring to end users and looks like 
> a bug.
> WDYT about putting in some useful information, though? Say, for the Admin UI 
> for points-based fields, "terms cannot be shown for points-based fields" or some such?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


