[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-19 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369787#comment-16369787
 ] 

Shawn Heisey commented on SOLR-11078:
-

bq. Also, when trie-fields were first deprecated in favor of point-fields, what 
was the thought process at that time?

Solr has merely been reacting to realities forced on it by changes in Lucene.

Points were introduced to Lucene in the 6.x timeframe, I think relatively early 
in the series.  Shortly afterwards, Lucene deprecated the legacy numeric code 
used by virtually all Lucene-based software up through early 6.x, including 
Solr.  That legacy numeric code is completely gone from Lucene 7.0 and later.

Solr was a little bit slow to add support for point field types.  It didn't 
happen until late in the 6.x series.

In 7.0, Solr incorporated the legacy numeric code from Lucene necessary for 
Trie fields to function, because without it, Solr 7.0 would not have been able 
to read most existing 6.x indexes.  This is a temporary band-aid, which is 
expected to be removed in 8.0.  I'm really hoping that we can restore the 
performance of numeric field searching in some way before 8.0 comes out.  I do 
not know if the issues with Points can be fixed without reducing the 
performance of the things Points *are* good at.


> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and have noticed that 
> performance has dropped since Solr 6.4.2. We have two indices per server, 
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collection or other 
> underlying changes, or whether other high-transaction users have noticed 
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!






[jira] [Commented] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369785#comment-16369785
 ] 

Varun Thacker commented on SOLR-12006:
--

Hi Steve,

 

I think this commit introduced it back, but at some point it was changed again? 
I didn't track down the exact commit that removed it or changed the fieldType to 
add multiValued=true, but I don't think it matters since we should add this back 
either way.

I'll commit it shortly.
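
For illustration, a minimal sketch of the distinction being restored; 
{{text_general}} is assumed here as the field type, and the shipped configset may 
use a different analyzer.

{code:xml}
<!-- single-valued text -->
<dynamicField name="*_t" type="text_general" indexed="true" stored="true"/>

<!-- multi-valued text -->
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
{code}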

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch
>
>
> Solr used to have a '*_t' dynamic field which was single-valued and a '*_txt' 
> field for multi-valued text.
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition.
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master there is no '*_t' dynamic field anymore.
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the same 
> option for a text field.






[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369780#comment-16369780
 ] 

Mikhail Khludnev edited comment on SOLR-9510 at 2/20/18 7:38 AM:
-

[~werder], thanks for your attention 
bq. Not sure that I fully understand how "expand parents docset" part will work 
(it will just execute parent BJQ again, but without excluded child clause, 
right?) 
I'm sure you do. Exactly right. I'd say what's changed here is propagating 
{{domain.excludeTags}} through {{\{!parent}}}.

bq. , but have a theoretical question
which JIRA is not a proper place for, you know.

bq. Assume someone will implement "global" feature
Right, that idea has been floating around as well. As said above, here we reset 
the top query/result docset, which could also be expressed via
{code}
..{type:query, 
   global:true, 
   q:..
{code}
or   
{code}
 {
   domain: {
   ...
   global:true, 
}
  }
{code}
however, after that you need to repeat the top filters and query excluding the 
child clause, join them to children, and filter the children again, oh my... So, 
introducing {{global}} might be considered ... hold on, what if we exclude the 
top query as well via an explicit {{excludeTag}}?


was (Author: mkhludnev):
[~werder], thanks for your attention 
bq. Not sure that I fully understand how "expand parents docset" part will work 
(it will just execute parent BJQ again, but without excluded child clause, 
right?) 
I'm sure you do. Exactly right. I'd say what's changed here is propagating 
{{domain.excludeTags}} through {{\{!parent}}}.

bq. , but have a theoretical question
which JIRA is not a proper place for, you know.

bq. Assume someone will implement "global" feature
Right, that idea has been floating around as well. As said above, here we reset 
the top query/result docset, which could also be expressed via
{code}
..{type:query, 
   global:true, 
   q:..
{code}
or   
{code}
 {
   domain: {
   ...
   global:true, 
}
  }
{code}
however, after that you need to repeat the top filters and query excluding the 
child clause, join them to children, and filter the children again, oh my... So, 
introducing {{global}} might be considered ... hold on, what if we exclude the 
top query as well via an explicit {{excludeTag}}?

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose it might be deferred until some initial level of maturity is reached. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>  

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369780#comment-16369780
 ] 

Mikhail Khludnev commented on SOLR-9510:


[~werder], thanks for your attention 
bq. Not sure that I fully understand how "expand parents docset" part will work 
(it will just execute parent BJQ again, but without excluded child clause, 
right?) 
I'm sure you do. Exactly right. I'd say what's changed here is propagating 
{{domain.excludeTags}} through {{\{!parent}}}.

bq. , but have a theoretical question
which JIRA is not a proper place for, you know.

bq. Assume someone will implement "global" feature
Right, that idea has been floating around as well. As said above, here we reset 
the top query/result docset, which could also be expressed via
{code}
..{type:query, 
   global:true, 
   q:..
{code}
or   
{code}
 {
   domain: {
   ...
   global:true, 
}
  }
{code}
however, after that you need to repeat the top filters and query excluding the 
child clause, join them to children, and filter the children again, oh my... So, 
introducing {{global}} might be considered ... hold on, what if we exclude the 
top query as well via an explicit {{excludeTag}}?

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desirable to optimize with bitset processing; however, I 
> suppose it might be deferred until some initial level of maturity is reached. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down

2018-02-19 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11712:

Attachment: SOLR-11712.patch

> Streaming throws IndexOutOfBoundsException against an alias when a shard is 
> down
> 
>
> Key: SOLR-11712
> URL: https://issues.apache.org/jira/browse/SOLR-11712
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, 
> SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch, 
> SOLR-11712.patch, SOLR-11712.patch
>
>
> I have an alias against multiple collections. If any one of the shards of the 
> underlying collections is down, then the stream handler throws an 
> IndexOutOfBoundsException
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: 
> Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}}
> {code}
> From the Solr logs:
> {code}
> 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 
> r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at 
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414)
> at 
> org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
> at 
> org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.ec

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7181 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7181/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

6 tests failed.
FAILED:  
org.apache.lucene.analysis.icu.segmentation.TestICUTokenizer.testRandomHugeStrings

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:584)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:475)
at 
org.apache.lucene.analysis.icu.segmentation.TestICUTokenizer.testRandomHugeStrings(TestICUTokenizer.java:327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([129D7A496961ED16:DB2838E760062BE3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue(TriggerIntegrationTest.java:723)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMetho

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21496 - Still Failing!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21496/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

No tests ran.

Build Log:
[...truncated 12162 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.ComputePlanActionTest
   [junit4]   2> 74098 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.ComputePlanActionTest_6E81E6D1177700DE-001/init-core-data-001
   [junit4]   2> 74099 WARN  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 74099 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 74101 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 74102 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.ComputePlanActionTest_6E81E6D1177700DE-001/tempDir-001
   [junit4]   2> 74102 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 74102 INFO  (Thread-198) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 74102 INFO  (Thread-198) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 74106 ERROR (Thread-198) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 74202 INFO  
(SUITE-ComputePlanActionTest-seed#[6E81E6D1177700DE]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:40449
   [junit4]   2> 74205 INFO  (zkConnectionManagerCallback-187-thread-1) [] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 74210 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T23:27:37+02:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 74243 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 74243 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 74243 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.session Scavenging every 66ms
   [junit4]   2> 74247 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@fae5ce6{/solr,null,AVAILABLE}
   [junit4]   2> 74249 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@582fd689{SSL,[ssl, 
http/1.1]}{127.0.0.1:44063}
   [junit4]   2> 74249 INFO  (jetty-launcher-184-thread-1) [] 
o.e.j.s.Server Started @76120ms
   [junit4]   2> 74249 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=44063}
   [junit4]   2> 74250 ERROR (jetty-launcher-184-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 74250 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 74250 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 74250 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 74250 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2018-02-20T05:56:40.041Z
   [junit4]   2> 74263 INFO  (zkConnectionManagerCallback-189-thread-1) [] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 74263 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 74270 INFO  (jetty-launcher-184-thread-1) [] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:40449/solr
   [junit4]   2> 74271 INFO  (zkConnectionManagerCallback-193-thread-1) [] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 74275 INFO  
(zkConnectionManagerCallback-195-thread-1-processing-n:127.0.0.1:44063_solr) 
[n:127.0.0.1:44063_solr] o.a.s

[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2018-02-19 Thread Sachin Goyal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369768#comment-16369768
 ] 

Sachin Goyal commented on SOLR-11078:
-

Do we know why the point fields are less performant when it comes to simple 
field:value queries? Following 
[this thread|https://lucene.apache.org/solr/guide/7_0/spatial-search.html#spatial-search], 
it seems that the point-fields use some sort of 
[KD-trees|https://en.wikipedia.org/wiki/K-D-B-tree] while trie-fields use tries. 
So at a theoretical level, why do the point fields not perform well on simple 
field:value queries while doing great on range queries? (I did try to read some 
material on KD-trees but could not gather much, so I am hoping someone more 
knowledgeable than me can share this part and save me from reading more docs and 
code myself.)
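
To make the distinction concrete, these are the two query shapes being compared; 
the field name is hypothetical and assumed to be a point-typed numeric field.

{code}
# exact field:value lookup - the case reported as slower on point fields
q=price_pi:42

# range query - the case where the point fields' BKD-tree structure shines
q=price_pi:[10 TO 100]
{code}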

 

Also, when trie-fields were first deprecated in favor of point-fields, what was 
the thought process at that time? I am just curious about the initial reasoning 
behind point-fields, so any JIRA link etc. would be good to link with this issue. 
It would also be good to link any performance tests done at that time and perhaps 
revisit them.

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
>Priority: Major
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and have noticed that 
> performance has dropped since Solr 6.4.2. We have two indices per server, 
> "searchsuggestions" and "tradesearch", and both show a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metric collection or other 
> underlying changes, or whether other high-transaction users have noticed 
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!






[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369586#comment-16369586
 ] 

Mikhail Khludnev edited comment on SOLR-9510 at 2/20/18 5:43 AM:
-

slightly refreshed [^SOLR_9510.patch].
 * no changes in the Lucene codebase
 * it turns out -a little bit- scary, however the {{query}} facet is now 
redundant. Anyway, I can't see how to make it shorter:
{code:java}
q={!parent filters=$child.fq which=type_s:book v=$childquery}&
childquery=comment_t:*&
child.fq={!tag=author}author_s:dan&
child.fq={!tag=stars}stars_i:4&
json.facet={  
   comments_for_author:{  
  domain:{  
 excludeTags:author,  // 1. rejoin child filters 
and query, expand parents docset, apply parent filters (I suppose) 
 blockChildren:"type_s:book",// 2. join to expanded children 
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}" 
// 3. filter them again 
  },
  type:terms,
  field:author_s,
  facet:{  
 in_books:"unique(_root_)"
  }
   },
   comments_for_stars:{  
  domain:{  
 excludeTags:stars,
 blockChildren:"type_s:book",
 filter:"{!filters params=$child.fq  excludeTags=stars v=$childquery}"
  },
  type:terms,
  field:stars_i,
  facet:{  
 in_books:"unique(_root_)"
  }
   }
}
{code}

 * *TODO* {{BJQParserFiltersTest}} should be collapsed into {{BJQParserTest}}
 * *TODO* edge case: a single child query is excluded.
 * *TODO* assert parent-scope filter exclusion alongside the child-level ones.
 Are there any concerns? I think it may go in this week.


was (Author: mkhludnev):
slightly refreshed [^SOLR_9510.patch]. 
* no changes in the Lucene codebase
* it turns out -a little bit- scary, however the {{query}} facet is redundant. 
Anyway, I can't see how to make it shorter:
{code}
q={!parent filters=$child.fq which=type_s:book v=$childquery}&
childquery=comment_t:*&
child.fq={!tag=author}author_s:dan&
child.fq={!tag=stars}stars_i:4&
json.facet={  
   comments_for_author:{  
  domain:{  
 excludeTags:author,  // 1. rejoin child filters 
and query, expand parents docset, apply parent filters (I suppose) 
 blockChildren:"type_s:book",// 2. join to expanded children 
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}" 
// 3. filter them again 
  },
  type:terms,
  field:author_s,
  facet:{  
 in_books:"unique(_root_)"
  }
   },
   comments_for_stars:{  
  domain:{  
 excludeTags:stars,
 blockChildren:"type_s:book",
 filter:"{!filters params=$child.fq  excludeTags=stars v=$childquery}"
  },
  type:terms,
  field:stars_i,
  facet:{  
 in_books:"unique(_root_)"
  }
   }
}
{code}
* TODO {{BJQParserFiltersTest}} should be collapsed into {{BJQParserTest}}
* TODO edge case: a single child query is excluded. 
Are there any concerns? I think it may go in this week. 

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than supporting a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via pa

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+43) - Build # 1392 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1392/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=6600, 
name=zkCallback-1336-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=6577, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[F0F259F6996282E]-EventThread, 
state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2075)
 at 
java.base@10/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)3) 
Thread[id=6576, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[F0F259F6996282E]-SendThread(127.0.0.1:45779),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)4) 
Thread[id=6677, name=zkCallback-1336-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=6708, name=zkCallback-1336-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@10/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@10/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1060)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10/java.lang.Thread.run(Thread.java:844)6) 
Thread[id=6575, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10/java.lang.Thread.run(Thread.java:844)7) 
Thread[id=6578, name=zkConnectionManagerCallback-1337-thread-1, state=WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.

[jira] [Comment Edited] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369726#comment-16369726
 ] 

Minoru Osuka edited comment on SOLR-11795 at 2/20/18 5:17 AM:
--

Hi,
 Thank you for the quick reply.

The reasons the test code uses these configs are as follows:
 - The test indexes the exampledocs data to check whether facet counts can be 
exposed as metrics. Any data could be used, but I used exampledocs.
(https://issues.apache.org/jira/browse/SOLR-11795?focusedCommentId=16304229&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16304229)
 - The test uses managed_schema because it allows indexing without worrying 
about the data format.
 - Since the test code is using managed_schema, there are several 
updateProcessors to add unknown fields to the schema. (By "UpdateRequestProcessors", 
did you mean these updateProcessors?)

Does that answer your question?


was (Author: minoru):
Hi,
Thank you for the quick reply.

The reasons the test code uses these configs are as follows:
- The test indexes the exampledocs data to check whether facet counts can be 
exposed as metrics. Any data could be used, but I used exampledocs.
- The test uses managed_schema because it allows indexing without worrying about 
the data format.
- Since the test code is using managed_schema, there are several 
updateProcessors to add unknown fields to the schema. (By "UpdateRequestProcessors", 
did you mean these updateProcessors?)

Does that answer your question?

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369726#comment-16369726
 ] 

Minoru Osuka commented on SOLR-11795:
-

Hi,
Thank you for the quick reply.

The reasons the test code uses these configs are as follows:
- The test indexes the exampledocs data to check whether facet counts can be 
exposed as metrics. Any data could be used, but I used exampledocs.
- The test uses managed_schema because it allows indexing without worrying about 
the data format.
- Since the test code is using managed_schema, there are several 
updateProcessors to add unknown fields to the schema. (By "UpdateRequestProcessors", 
did you mean these updateProcessors?)

Does that answer your question?

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[JENKINS] Lucene-Solr-repro - Build # 67 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/67/

[...truncated 34 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/428/consoleText

[repro] Revision: aac11fc1209e7afeee49b112630ec421000e9195

[repro] Repro line:  ant test  -Dtestcase=TestUtilizeNode -Dtests.method=test 
-Dtests.seed=1B33F7F58E4B819B -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=be-BY -Dtests.timezone=Arctic/Longyearbyen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testMetricTrigger -Dtests.seed=1B33F7F58E4B819B 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-YE 
-Dtests.timezone=America/Boise -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
dfc0fe86e465f552e7afad2013e077e2962e5bfc
[repro] git checkout aac11fc1209e7afeee49b112630ec421000e9195

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TriggerIntegrationTest
[repro]   TestUtilizeNode
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TriggerIntegrationTest|*.TestUtilizeNode" 
-Dtests.showOutput=onerror -Dtests.seed=1B33F7F58E4B819B -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ar-YE -Dtests.timezone=America/Boise 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 16492 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestUtilizeNode
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
[repro] git checkout dfc0fe86e465f552e7afad2013e077e2962e5bfc

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-11066) Implement a scheduled trigger

2018-02-19 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369720#comment-16369720
 ] 

Shalin Shekhar Mangar commented on SOLR-11066:
--

This patch adds support for Ignored events in ScheduledTrigger. In cases where 
the grace time has elapsed and the event is to be skipped, the listener is fired 
with stage=Ignored so that it is still possible to perform some useful action on it.
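
As a rough sketch of how this could be consumed, the listener below follows the 
existing autoscaling set-listener syntax; the scheduled-trigger properties shown 
({{startTime}}, {{every}}) are illustrative guesses for this new trigger, not its 
final API.

{code}
{
  "set-trigger": {
    "name":  "scheduled_maintenance",
    "event": "scheduled",
    "//note": "startTime/every are hypothetical property names for this new trigger",
    "startTime": "NOW",
    "every": "+1DAY",
    "enabled": true
  }
}
{
  "set-listener": {
    "name":    "scheduled_listener",
    "trigger": "scheduled_maintenance",
    "stage":   ["IGNORED", "SUCCEEDED", "FAILED"],
    "class":   "solr.SystemLogListener"
  }
}
{code}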

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch, 
> SOLR-11066.patch, SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval, say every 1 hour, or every 
> 24 hours starting at midnight, etc.






[jira] [Updated] (SOLR-11066) Implement a scheduled trigger

2018-02-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11066:
-
Attachment: SOLR-11066.patch

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch, 
> SOLR-11066.patch, SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval, say every 1 hour, or every 
> 24 hours starting at midnight, etc.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 452 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/452/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 8 in http://127.0.0.1:63485/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 8 in http://127.0.0.1:63485/solr
at 
__randomizedtesting.SeedInfo.seed([B34AFF72C3CCB2C9:72BA86DEEE9C786E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:885)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:599)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13973 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestTlogReplica
   [junit4]   2> 3402430 INFO  
(SUITE-Tes

[jira] [Updated] (SOLR-11066) Implement a scheduled trigger

2018-02-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11066:
-
Attachment: (was: SOLR-11066.patch)

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch, 
> SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval, say every 1 hour or every 
> 24 hours starting at midnight, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2351 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2351/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([F712CBEA9A6F698B:3EA789449308AF7E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue(TriggerIntegrationTest.java:723)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12281 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.TriggerIntegrationTest_F712CBEA9A6F698B-001/init-core-data-001
   [ju

[jira] [Updated] (SOLR-11066) Implement a scheduled trigger

2018-02-19 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11066:
-
Attachment: SOLR-11066.patch

> Implement a scheduled trigger
> -
>
> Key: SOLR-11066
> URL: https://issues.apache.org/jira/browse/SOLR-11066
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11066.patch, SOLR-11066.patch, SOLR-11066.patch, 
> SOLR-11066.patch, SOLR-11066.patch
>
>
> Implement a trigger that runs on a fixed interval, say every 1 hour or every 
> 24 hours starting at midnight, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 66 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/66/

[...truncated 33 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2350/consoleText

[repro] Revision: a2fdbc93534263299f055a9e344c49bf29aebdf5

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testMetricTrigger -Dtests.seed=F296178E4FBA8CC2 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=nl-BE 
-Dtests.timezone=Europe/Guernsey -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=F296178E4FBA8CC2 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=sr -Dtests.timezone=Australia/Canberra 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
dfc0fe86e465f552e7afad2013e077e2962e5bfc
[repro] git checkout a2fdbc93534263299f055a9e344c49bf29aebdf5

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TriggerIntegrationTest
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TriggerIntegrationTest|*.HdfsAutoAddReplicasIntegrationTest" 
-Dtests.showOutput=onerror -Dtests.seed=F296178E4FBA8CC2 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=nl-BE -Dtests.timezone=Europe/Guernsey 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 28425 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest

[repro] Re-testing 100% failures at the tip of master
[repro] git checkout master

[...truncated 3 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TriggerIntegrationTest
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TriggerIntegrationTest" -Dtests.showOutput=onerror 
-Dtests.seed=F296178E4FBA8CC2 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=nl-BE -Dtests.timezone=Europe/Guernsey -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 10884 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
[repro] git checkout dfc0fe86e465f552e7afad2013e077e2962e5bfc

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8126) Spatial prefix tree based on S2 geometry

2018-02-19 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369716#comment-16369716
 ] 

David Smiley commented on LUCENE-8126:
--

Thanks for investing the time in the illustrations [~ivera].  The diagram of 
the 3 prefix trees is very illustrative.  Usually when I think of people 
indexing "squares" I assume the square is aligned to lines of longitude and 
latitude... but this is not true for the so-called "squares" in your use-case? 
Regardless of that, people index all kinds of shapes, e.g. circles and 
polygons, and they will look different at different latitudes.  I didn't know 
that it affects the cell count this much -- thanks for enlightening me.  I knew 
it _could_ in what I thought were some extreme cases, but your diagram seems to 
show it's typical.  Hmm.  _I wonder if similar results could be achieved by 
internally using the web-mercator projection_?  Of course some scheme is needed 
to handle the polar caps, which that projection doesn't even cover, but 
whatever.  The web-mercator projection increases the overall size of the shape 
equally in latitude and longitude, and thus would probably yield roughly 
similar numbers of cells at all latitudes; wouldn't it?

RE index size -- you probably had difficulty benchmarking the differences 
because you used Lucene defaults.  Switch to a doc-count-based index writer 
flush (instead of a memory-based one), and use SerialMergeScheduler to get 
predictable segments, albeit with slower throughput than you would normally 
accept in production.  This stuff can have a big impact on benchmark results, 
not just for index size but sometimes also for query benchmarks, depending on 
how "lucky" a given run got if a big merge happened to yield far fewer 
segments.
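
A minimal sketch of the kind of IndexWriterConfig being suggested here for 
reproducible index-size benchmarks: flush by document count instead of by RAM, 
and merge serially so segment geometry stays deterministic. The analyzer, the 
flush threshold, and the LogDocMergePolicy choice are illustrative assumptions, 
not part of the suggestion above.

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogDocMergePolicy;
import org.apache.lucene.index.SerialMergeScheduler;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class BenchmarkWriterFactory {
  public static IndexWriter open(String indexPath) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setMaxBufferedDocs(10_000);                               // flush every N docs (assumed threshold)...
    iwc.setRAMBufferSizeMB(IndexWriterConfig.DISABLE_AUTO_FLUSH); // ...rather than by RAM usage
    iwc.setMergeScheduler(new SerialMergeScheduler());            // single-threaded, predictable merging
    iwc.setMergePolicy(new LogDocMergePolicy());                  // merge decisions based on doc counts only
    Directory dir = FSDirectory.open(Paths.get(indexPath));
    return new IndexWriter(dir, iwc);
  }
}
{code}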

I'm having difficulty finding the benchmark; can you provide a link to the GH 
file?

At first I was unsure how S2 might improve point query performance, but after 
some thought I figure the cell-count discussion for indexed shapes applies just 
as well to the cells a query shape might have to traverse.  Again, I wonder 
whether a web-mercator projection would get similar improvements?

Another nice thing about a web-mercator-based underlying coordinate system is 
that the index-time heatmap feature would produce a grid of numbers that are 
nice squares to display in a web-mercator map client-side.  Today they tend to 
be horizontal rectangles that get flatter as you go toward the poles.  It's not 
just about a visual preference for squares; it's also about trying to ensure 
that any secondary processing of the raw heatmap data doesn't unintentionally 
skew/misrepresent the data by assuming a uniform grid when it isn't actually 
uniform.  Sorry to get a little side-tracked, but it's related.

> Spatial prefix tree based on S2 geometry
> 
>
> Key: LUCENE-8126
> URL: https://issues.apache.org/jira/browse/LUCENE-8126
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Attachments: SPT-cell.pdf, SPT-query.jpeg
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Hi [~dsmiley],
> I have been working on a prefix tree based on Google S2 geometry 
> (https://s2geometry.io/) to be used mainly with Geo3d shapes, with very 
> promising results, in particular for complex shapes (e.g. polygons). Using 
> this pixelization scheme reduces the size of the index, improves the 
> performance of the queries and reduces the loading time for non-point shapes. 
> If you are ok with this contribution, and before providing any code, I would 
> like to understand what the correct/preferred approach is:
> 1) Add a new dependency on the S2 library 
> (https://mvnrepository.com/artifact/io.sgr/s2-geometry-library-java). It has 
> an Apache 2.0 license so it should be ok.
> 2) Create a utility class with all methods necessary to navigate the S2 tree 
> and create shapes from S2 cells (basically port what we need from the library 
> into Lucene).
> What do you think?
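
For readers unfamiliar with S2's cell addressing, a minimal sketch using the 
io.sgr artifact linked above (it keeps Google's com.google.common.geometry 
package; the coordinates and level here are arbitrary illustrative 
assumptions, not taken from the issue):

{code:java}
import com.google.common.geometry.S2CellId;
import com.google.common.geometry.S2LatLng;

public class S2CellExample {
  public static void main(String[] args) {
    // An arbitrary example point.
    S2LatLng point = S2LatLng.fromDegrees(40.4168, -3.7038);

    // The leaf cell containing the point, and an ancestor at a coarser level.
    // An S2-based prefix tree would index per-level cell tokens like these.
    S2CellId leaf = S2CellId.fromLatLng(point);
    S2CellId coarse = leaf.parent(10);

    System.out.println("leaf cell token:     " + leaf.toToken());
    System.out.println("level-10 cell token: " + coarse.toToken());
  }
}
{code}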



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10679) Solr CDCR cannot be configured to use Aliases for replication

2018-02-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16089550#comment-16089550
 ] 

Amrit Sarkar edited comment on SOLR-10679 at 2/20/18 4:40 AM:
--

[~WebHomer],

Internally, {{CdcrReplicator}} forwards the request to the target collection in 
the target cluster via a conventional UpdateRequest, which supports aliases. An 
alias should definitely work for the {{target}} collection in CDCR; I need to 
confirm the same for {{source}}. Check the logs once: look for {{Forwarded X 
updates to target collection Y}} on the source and {{/update}} requests on the 
target.


was (Author: sarkaramr...@gmail.com):
[~WebHomer],

Internally {{CdcrReplicator}} forwards the request to target collection in 
target cluster via conventional UpdateRequest, which supports alas. Alias 
should definitely work for {{target}} collections in CDCR, need to confirm the 
same for {{source}}. Check in the logs once, {{Forwarded X updates to target 
collection Y}} in source and {{/update}} in target.
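
A minimal SolrJ sketch of the point made in this comment: the forwarded update 
is an ordinary UpdateRequest, so the collection it addresses can just as well 
be an alias. The base URL, alias name, and document below are placeholders, 
not taken from the issue.

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class AliasUpdateSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "example-1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      // "target_alias" is a placeholder; Solr resolves the alias server-side,
      // just as it would for the updates CdcrReplicator forwards.
      req.process(client, "target_alias");
      client.commit("target_alias");
    }
  }
}
{code}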

> Solr CDCR cannot be configured to use Aliases for replication
> -
>
> Key: SOLR-10679
> URL: https://issues.apache.org/jira/browse/SOLR-10679
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.2
>Reporter: Webster Homer
>Priority: Major
>
> My company uses Solr aliases to limit the configuration changes that we need 
> to support.
> The CDCR configuration seems to accept an alias for either the source or 
> target collections, and no errors show up in the log, but no data is 
> replicated if the source or target is an alias and not an actual collection.
> I see that aliases are not even mentioned in the CDCR documentation. It seems 
> to me this should either work or throw an error.
> It should be documented one way or the other.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 152 - Still Failing

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/152/

No tests ran.

Build Log:
[...truncated 28782 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (27.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.7 MB in 0.03 sec (1213.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 73.9 MB in 0.06 sec (1180.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 84.4 MB in 0.07 sec (1175.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (60.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker] 54.1 MB in 0.20 sec (273.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker] 151.6 MB in 0.55 sec (273.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker] 152.7 MB in 0.59 sec (260.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] *** [WARN] *** Your open file limi

[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369692#comment-16369692
 ] 

Koji Sekiguchi commented on SOLR-11795:
---

I can still see several UpdateRequestProcessors in the test solrconfig.xml. Are 
they necessary? And I'm sorry if I'm wrong, but do you need the 
test-files/exampledocs/*.xml files?

As for schema settings, all existing Solr contribs use schema.xml, not 
managed-schema. Why don't you follow them?

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21495 - Still Failing!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21495/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 12285 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
   [junit4]   2> 98135 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.TriggerIntegrationTest_AC9A42B9D9A7C190-001/init-core-data-001
   [junit4]   2> 98136 WARN  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 98136 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 98136 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 98137 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.TriggerIntegrationTest_AC9A42B9D9A7C190-001/tempDir-001
   [junit4]   2> 98138 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 98138 INFO  (Thread-194) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 98138 INFO  (Thread-194) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 98148 ERROR (Thread-194) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 98238 INFO  
(SUITE-TriggerIntegrationTest-seed#[AC9A42B9D9A7C190]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:39985
   [junit4]   2> 98240 INFO  (zkConnectionManagerCallback-266-thread-1) [] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 98243 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T23:27:37+02:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 98243 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T23:27:37+02:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 98243 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 98243 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 98243 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.session Scavenging every 60ms
   [junit4]   2> 98244 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@756462df{/solr,null,AVAILABLE}
   [junit4]   2> 98244 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 98244 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 98244 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.session Scavenging every 66ms
   [junit4]   2> 98244 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@60d3903c{/solr,null,AVAILABLE}
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@3933ce2c{SSL,[ssl, 
http/1.1]}{127.0.0.1:38995}
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.AbstractConnector Started ServerConnector@5755fe48{SSL,[ssl, 
http/1.1]}{127.0.0.1:43795}
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-1) [] 
o.e.j.s.Server Started @100074ms
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-2) [] 
o.e.j.s.Server Started @100074ms
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=43795}
   [junit4]   2> 98245 INFO  (jetty-launcher-263-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=38995}
   [junit4]   2> 98246 ERROR (jetty-launcher-263-thread-2) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 98246 ERROR (jetty-launcher-263-thread-1) [] 
o.a.s.u.StartupLoggi

[jira] [Commented] (SOLR-12005) Solr should have the option of logging all jars loaded

2018-02-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369690#comment-16369690
 ] 

Erick Erickson commented on SOLR-12005:
---

Dunno. If you specify the -v option when starting Solr, does that provide the 
information you want? If so, for those few times you really want to see each 
and every jar, this seems simpler.

> Solr should have the option of logging all jars loaded
> --
>
> Key: SOLR-12005
> URL: https://issues.apache.org/jira/browse/SOLR-12005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shawn Heisey
>Priority: Major
>
> Solr used to explicitly log the filename of every jar it loaded.  It seems 
> that the effort to reduce the verbosity of the logs has changed this; now it 
> just logs the *count* of jars loaded and the paths where they were loaded 
> from.  Here's a log line where Solr is reading from ${solr.solr.home}/lib:
> {code}
> 2018-02-01 17:43:20.043 INFO  (main) [   ] o.a.s.c.SolrResourceLoader [null] 
> Added 8 libs to classloader, from paths: [/index/solr6/data/lib]
> {code}
> When trying to help somebody with classloader issues, it's more difficult to 
> help when the list of jars loaded isn't in the log.
> I would like the more verbose logging to be enabled by default, but I 
> understand that many people would not want that, so I propose this:
>  * Enable verbose logging for ${solr.solr.home}/lib by default.
>  * Disable verbose logging for each core by default.  Allow solrconfig.xml to 
> enable it.
>  * Optionally allow solr.xml to configure verbose logging at the global level.
>  ** This setting would affect both global and per-core jar loading. Each 
> solrconfig.xml could override.
> Rationale: The contents of ${solr.solr.home}/lib are loaded precisely once, 
> and this location doesn't even exist unless a user creates it.  An 
> out-of-the-box config would not have verbose logs from jar loading.
> The solr home lib location is my preferred way of loading custom jars, 
> because they get loaded only once, no matter how many cores you have.  Jars 
> added to this location would add lines to the log, but it would not be logged 
> for every core.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-02-19 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598.patch

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results for this query as I am bound by the 
> limitation of the export handler, which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one could ask why it can't 
> be any reasonable integer n beyond which we know performance degrades; even 
> then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 

[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-02-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369685#comment-16369685
 ] 

Amrit Sarkar commented on SOLR-11598:
-

Thank you [~varunthacker] for pinpointing that. I have improved the patch per 
the recommendation.
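
For context, a SolrJ sketch of the request shape that trips the current 
ceiling in {{ExportWriter}}: five sort fields where only four are allowed. The 
core URL and field names (f1..f5) are hypothetical, and every field used in 
fl/sort on /export must have docValues enabled.

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ExportSortLimitSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRequestHandler("/export");             // route the request to the export handler
      q.setFields("f1", "f2", "f3", "f4", "f5");
      q.addSort("f1", SolrQuery.ORDER.asc);
      q.addSort("f2", SolrQuery.ORDER.asc);
      q.addSort("f3", SolrQuery.ORDER.asc);
      q.addSort("f4", SolrQuery.ORDER.asc);
      q.addSort("f5", SolrQuery.ORDER.asc);       // the fifth sort is what ExportWriter rejects today
      client.query(q);                            // fails with "A max of 4 sorts can be specified"
    }
  }
}
{code}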

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results for this query as I am bound by the 
> limitation of the export handler, which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one could ask why it can't 
> be any reasonable integer n beyond which we know performance degrades; even 
> then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCo

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 153 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/153/

4 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest.test

Error Message:
Error from server at https://127.0.0.1:34830/yw/hb: ADDREPLICA failed to create 
replica

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34830/yw/hb: ADDREPLICA failed to create 
replica
at 
__randomizedtesting.SeedInfo.seed([CC7CCC46CDC7D8C1:4428F39C633BB539]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:425)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
   

[jira] [Comment Edited] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369681#comment-16369681
 ] 

Minoru Osuka edited comment on SOLR-11795 at 2/20/18 3:30 AM:
--

I replaced luceneMatchVersion with "${tests.luceneMatchVersion:LATEST}"
and made solrconfig.xml more compact.
Please see SOLR-11795-7.patch.


was (Author: minoru):
I replaced luceneMatchVersion to "${tests.luceneMatchVersion: LATEST}".
 And made solrconfig.xml compact.
 Please see SOLR-11795-7.patch.

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369681#comment-16369681
 ] 

Minoru Osuka commented on SOLR-11795:
-

I replaced luceneMatchVersion to "$ \{tests.luceneMatchVersion: LATEST}".
And made solrconfig.xml compact.
Please see SOLR-11795-7.patch.

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369681#comment-16369681
 ] 

Minoru Osuka edited comment on SOLR-11795 at 2/20/18 3:29 AM:
--

I replaced luceneMatchVersion to "${tests.luceneMatchVersion: LATEST}".
 And made solrconfig.xml compact.
 Please see SOLR-11795-7.patch.


was (Author: minoru):
I replaced luceneMatchVersion to "$ \{tests.luceneMatchVersion: LATEST}".
And made solrconfig.xml compact.
Please see SOLR-11795-7.patch.

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Minoru Osuka updated SOLR-11795:

Attachment: SOLR-11795-7.patch

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, 
> SOLR-11795-7.patch, SOLR-11795.patch, solr-dashboard.png, 
> solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1688 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1688/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.clustering.DistributedClusteringComponentTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.clustering.DistributedClusteringComponentTest: 1) 
Thread[id=37, name=qtp1278777971-37, state=TIMED_WAITING, 
group=TGRP-DistributedClusteringComponentTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.clustering.DistributedClusteringComponentTest: 
   1) Thread[id=37, name=qtp1278777971-37, state=TIMED_WAITING, 
group=TGRP-DistributedClusteringComponentTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([8AEF5CD4F3FBC58]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.clustering.DistributedClusteringComponentTest

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=37, 
name=qtp1278777971-37, state=TIMED_WAITING, 
group=TGRP-DistributedClusteringComponentTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=37, name=qtp1278777971-37, state=TIMED_WAITING, 
group=TGRP-DistributedClusteringComponentTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([8AEF5CD4F3FBC58]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by ComputePlanAction should 
not be null SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, 
null], BEFORE_ACTION=[compute_plan, null]}
at 
__randomizedtesting.SeedInfo.seed([4324EF2CAA18D31F:73E40EAE226A3243]:0)
at org.junit.Ass

[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Koji Sekiguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369645#comment-16369645
 ] 

Koji Sekiguchi commented on SOLR-11795:
---

Thank you for updating the patch. I can see a hard-coded luceneMatchVersion in 
the patch:

{code}
<luceneMatchVersion>7.1.0</luceneMatchVersion>
{code}

You can rephrase it like this:

{code}
<luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
{code}

And I think your solrconfig.xml for tests is still too fat... Please consult 
solr/contrib/langid as an example of how to make the test config more compact.
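
For illustration only (this sketch is not taken from the patch; the factory 
classes and the single handler below are assumptions about what a stripped-down 
test config might need), a compact test solrconfig.xml along the lines of the 
langid contrib could look roughly like this:

{code:xml}
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <!-- pick up the version under test instead of hard-coding a release -->
  <luceneMatchVersion>${tests.luceneMatchVersion:LATEST}</luceneMatchVersion>
  <dataDir>${solr.data.dir:}</dataDir>
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.RAMDirectoryFactory}"/>
  <schemaFactory class="ClassicIndexSchemaFactory"/>
  <!-- only the handlers the tests actually exercise -->
  <requestHandler name="/select" class="solr.SearchHandler"/>
</config>
{code}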

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, SOLR-11795.patch, 
> solr-dashboard.png, solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369636#comment-16369636
 ] 

Andrey Kudryavtsev edited comment on SOLR-9510 at 2/20/18 1:42 AM:
---

Not sure that I fully understand how the "expand parents docset" part will work 
(it will just execute the parent BJQ again, but without the excluded child 
clause, right?), but I have a theoretical question. 

Assume someone will implement "global" feature for JSON API ([like you 
know|https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-global-aggregation.html])
 to calculate facet on whole index and will calculate same 
{{comments_for_author}} facet like this:
{code:java}
...
comments_for_author:{  
 global:{},
  domain:{  
 blockChildren:"type_s:book",   
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"
  }
...{code}
 

Would it be faster than "expand parents docset"? What is your gut feeling about 
it? 


was (Author: werder):
Not sure that I fully understand how the "expand parents docset" part will work 
(it will just execute the parent BJQ again, but without the excluded child 
clause, right?), but I have a theoretical question. 

Assume someone will implement "global" feature for JSON API ([like you 
know|http://example.com/]) to calculate facet on whole index and will calculate 
same {{comments_for_author}} facet like this:
{code:java}
...
comments_for_author:{  
 global:{},
  domain:{  
 blockChildren:"type_s:book",   
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"
  }
...{code}
 

Would it be faster than "expand parents docset"? What is your gut feeling about 
it? 

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,

[jira] [Comment Edited] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369636#comment-16369636
 ] 

Andrey Kudryavtsev edited comment on SOLR-9510 at 2/20/18 1:41 AM:
---

Not sure that I fully understand how the "expand parents docset" part will work 
(it will just execute the parent BJQ again, but without the excluded child 
clause, right?), but I have a theoretical question. 

Assume someone will implement "global" feature for JSON API ([like you 
know|http://example.com/]) to calculate facet on whole index and will calculate 
same {{comments_for_author}} facet like this:
{code:java}
...
comments_for_author:{  
 global:{},
  domain:{  
 blockChildren:"type_s:book",   
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"
  }
...{code}
 

Would it be faster than "expand parents docset"? What is your gut feeling about 
it? 


was (Author: werder):
Not sure that I fully understand how the "expand parents docset" part will work 
(it will just execute the parent BJQ again, but without the excluded child 
clause, right?), but I have a theoretical question. 

Assume someone will implement "global" feature for JSON API ([like you 
know|http://example.com]https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-global-aggregation.html)
 to calculate facet on whole index and will calculate same 
{{comments_for_author}} facet like this:
{code:java}
...
comments_for_author:{  
 global:{},
  domain:{  
 blockChildren:"type_s:book",   
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"
  }
...{code}
 

Would it be faster than "expand parents docset"? What is your gut feeling about 
it? 

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>

[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369636#comment-16369636
 ] 

Andrey Kudryavtsev commented on SOLR-9510:
--

Not sure that I fully understand how the "expand parents docset" part will work 
(it will just execute the parent BJQ again, but without the excluded child 
clause, right?), but I have a theoretical question. 

Assume someone will implement "global" feature for JSON API ([like you 
know|http://example.com]https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-global-aggregation.html)
 to calculate facet on whole index and will calculate same 
{{comments_for_author}} facet like this:
{code:java}
...
comments_for_author:{  
 global:{},
  domain:{  
 blockChildren:"type_s:book",   
 filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"
  }
...{code}
 

Would it be faster than "expand parents docset"? What is your gut feeling about 
it? 

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth considering JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1391 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1391/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestExternalValueFeatures

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.ltr.feature.TestExternalValueFeatures: 1) Thread[id=111, 
name=qtp1766728254-111, state=TIMED_WAITING, 
group=TGRP-TestExternalValueFeatures] at 
java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.ltr.feature.TestExternalValueFeatures: 
   1) Thread[id=111, name=qtp1766728254-111, state=TIMED_WAITING, 
group=TGRP-TestExternalValueFeatures]
at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([E7B72E4C529707D0]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.ltr.feature.TestExternalValueFeatures

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=111, name=qtp1766728254-111, state=TIMED_WAITING, 
group=TGRP-TestExternalValueFeatures] at 
java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=111, name=qtp1766728254-111, state=TIMED_WAITING, 
group=TGRP-TestExternalValueFeatures]
at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([E7B72E4C529707D0]:0)




Build Log:
[...truncated 22274 lines...]
   [junit4] Suite: org.apache.solr.ltr.feature.TestExternalValueFeatures
   [junit4]   2> 7300 INFO  
(SUITE-TestExternalValueFeatures-seed#[E7B72E4C529707D0]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file

[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369634#comment-16369634
 ] 

Steve Rowe commented on LUCENE-8106:


[~thetaphi]: I've run into trouble getting the script ^^ to work.  First, the 
local log isn't in the same place as it is on my Jenkins 
({{workspace/../builds/$BUILD_NUMBER/log}}), so I switched to fetching the log 
via HTTPS and storing it in a temp file.  That seems to work.  But now there 
are problems dealing with {{lucene.build.properties}}:

From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21494/consoleText]:

{noformat}
+ set -x
+ mktemp
+ TMPFILE=/tmp/tmp.GedGCMtTog
+ trap rm -f /tmp/tmp.GedGCMtTog EXIT
+ curl -o /tmp/tmp.GedGCMtTog 
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21494/consoleText
[...]
+ grep --quiet reproduce with /tmp/tmp.GedGCMtTog
+ mv lucene/build lucene/build.orig
+ mv solr/build solr/build.orig
+ grep ^[[:space:]]*python32\.exe[[:space:]]*= 
/home/jenkins/lucene.build.properties
+ cut -d= -f2
+ PYTHON32_EXE=
+ grep ^[[:space:]]*git\.exe[[:space:]]*= /home/jenkins/lucene.build.properties
+ cut -d= -f2
+ GIT_EXE=
+ 
PATH=:/home/jenkins/tools/java/64bit/jdk-10-ea+43/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
 -u dev-tools/scripts/reproduceJenkinsFailures.py --no-fetch 
file:///tmp/tmp.GedGCMtTog
/tmp/jenkins9073460340081845275.sh: 17: /tmp/jenkins9073460340081845275.sh: -u: 
not found
+ rm -f /tmp/tmp.GedGCMtTog
{noformat}

From ^^, it looks to me like {{/home/jenkins/lucene.build.properties}} either 
doesn't exist, or doesn't have entries for {{git.exe}} and {{python32.exe}}.  I 
apparently no longer have login access to the VMs (I tried).
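
For reference, the two entries that the build step greps for would look 
something like the following; the values shown are hypothetical placeholders 
(per the build step above, {{python32.exe}} should point at a Python 3 
interpreter and {{git.exe}} at a directory that gets prepended to PATH), not 
what actually is, or should be, on the Jenkins VMs:

{noformat}
# hypothetical ~/lucene.build.properties entries -- actual values depend on the VM
python32.exe=/usr/bin/python3
git.exe=/usr/local/git/bin
{noformat}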

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-02-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368370#comment-16368370
 ] 

Steve Rowe edited comment on LUCENE-8106 at 2/20/18 1:22 AM:
-

{quote}
bq. I'd include the repro stuff into the main build job somehow!
I'll try making it a build step (instead of a post-build action).
{quote}
 
I added the following as an "Execute shell" build step to the 
{{Lucene-Solr-master-Linux}} project - we'll see how it goes:

{noformat}
set -x # Log commands

TMPFILE=`mktemp`
trap "rm -f $TMPFILE" EXIT   # Delete the temp file on SIGEXIT

curl -o $TMPFILE https://jenkins.thetaphi.de/job/$JOB_NAME/$BUILD_NUMBER/consoleText

if grep --quiet 'reproduce with' $TMPFILE ; then

# Preserve original build output
mv lucene/build lucene/build.orig
mv solr/build solr/build.orig

PYTHON32_EXE=`grep "^[[:space:]]*python32\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
GIT_EXE=`grep "^[[:space:]]*git\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
PATH=$GIT_EXE:$PATH $PYTHON32_EXE -u dev-tools/scripts/reproduceJenkinsFailures.py --no-fetch file://$TMPFILE

# Preserve repro build output
mv lucene/build lucene/build.repro
mv solr/build solr/build.repro

# Restore original build output
mv lucene/build.orig lucene/build
mv solr/build.orig solr/build
fi
{noformat}


was (Author: steve_rowe):
{quote}
bq. I'd include the repro stuff into the main build job somehow!
I'll try making it a build step (instead of a post-build action).
{quote}
 
I added the following as an "Execute shell" build step to the 
{{Lucene-Solr-master-Linux}} project - we'll see how it goes:

{noformat}
set -x # Log commands

if grep --quiet 'reproduce with' ../builds/$BUILD_NUMBER/log ; then

# Preserve original build output
mv lucene/build lucene/build.orig
mv solr/build solr/build.orig

PYTHON32_EXE=`grep "^[[:space:]]*python32\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
GIT_EXE=`grep "^[[:space:]]*git\.exe[[:space:]]*=" ~/lucene.build.properties | cut -d'=' -f2`
cd ..
PARENT_DIR=`pwd`
cd workspace
PATH=$GIT_EXE:$PATH $PYTHON32_EXE -u dev-tools/scripts/reproduceJenkinsFailures.py --no-fetch file://$PARENT_DIR/builds/$BUILD_NUMBER/log

# Preserve repro build output
mv lucene/build lucene/build.repro
mv solr/build solr/build.repro

# Restore original build output
mv lucene/build.orig lucene/build
mv solr/build.orig solr/build
fi
{noformat}

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369627#comment-16369627
 ] 

Steve Rowe commented on SOLR-12006:
---

+1.

FYI, here's the commit that caused the change: 
https://github.com/apache/lucene-solr/commit/e2456776dde249813401bde93382131874731f53#diff-ae19449bd83f0c133277f97cbe6a8e9f


> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text 
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition: 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master now there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and a multi-valued dynamic field for 
> ints, longs, booleans, floats, dates, and strings. We should provide the same 
> option for a text field.
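
To illustrate the request (this is only a sketch, not the attached patch; the 
{{text_general}} field type is assumed from the default configset), the two 
dynamic fields would be declared along these lines in the managed schema:

{code:xml}
<!-- hypothetical sketch: single-valued text, as in the old 4.x example schema -->
<dynamicField name="*_t"   type="text_general" indexed="true" stored="true"/>
<!-- multi-valued text, as the default configset already provides -->
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
{code}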



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 462 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/462/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:
Error from server at http://127.0.0.1:61498/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:61498/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([4FBB6ACF56817E88:C7EF5515F87D1370]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:86)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.

[jira] [Updated] (SOLR-11960) Add collection level properties

2018-02-19 Thread Peter Rusko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Rusko updated SOLR-11960:
---
Attachment: SOLR-11960.patch

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.
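
As a purely hypothetical sketch of the idea (the znode name and the property 
keys below are illustrative placeholders, not necessarily what the attached 
patch uses), the per-collection properties could live in a single watched znode 
like this:

{noformat}
# one properties znode per collection, watched only by nodes hosting that collection
/collections/<collection_name>/collectionprops.json
  {"foo":"bar", "other.prop":"42"}
{noformat}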



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11960) Add collection level properties

2018-02-19 Thread Peter Rusko (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369607#comment-16369607
 ] 

Peter Rusko commented on SOLR-11960:


Thanks for the review, here's the updated patch: [^SOLR-11960.patch]

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 65 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/65/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/957/consoleText

[repro] Revision: a2fdbc93534263299f055a9e344c49bf29aebdf5

[repro] Repro line:  ant test  -Dtestcase=ReplaceNodeNoTargetTest 
-Dtests.method=test -Dtests.seed=3CCA5E48195B9FF -Dtests.multiplier=2 
-Dtests.locale=ar-SD -Dtests.timezone=CST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testEventQueue -Dtests.seed=3CCA5E48195B9FF -Dtests.multiplier=2 
-Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=Greenwich 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be7a811f61135890554a2a165a8ef6765b0a4310
[repro] git checkout a2fdbc93534263299f055a9e344c49bf29aebdf5

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ReplaceNodeNoTargetTest
[repro]   TriggerIntegrationTest
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ReplaceNodeNoTargetTest|*.TriggerIntegrationTest" 
-Dtests.showOutput=onerror -Dtests.seed=3CCA5E48195B9FF -Dtests.multiplier=2 
-Dtests.locale=ar-SD -Dtests.timezone=CST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 15847 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.ReplaceNodeNoTargetTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
[repro] git checkout be7a811f61135890554a2a165a8ef6765b0a4310

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21494 - Failure!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21494/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 13196 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest
   [junit4]   2> 1219171 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.AutoAddReplicasIntegrationTest_D30E9B462C651DD9-001/init-core-data-001
   [junit4]   2> 1219172 WARN  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=16 numCloses=16
   [junit4]   2> 1219172 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1219172 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 1219173 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.autoscaling.AutoAddReplicasIntegrationTest_D30E9B462C651DD9-001/tempDir-001
   [junit4]   2> 1219173 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1219174 INFO  (Thread-4741) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1219174 INFO  (Thread-4741) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1219175 ERROR (Thread-4741) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1219275 INFO  
(SUITE-AutoAddReplicasIntegrationTest-seed#[D30E9B462C651DD9]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:37337
   [junit4]   2> 1219284 INFO  (zkConnectionManagerCallback-2417-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1219287 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T22:27:37+01:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1219288 INFO  (jetty-launcher-2414-thread-2) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T22:27:37+01:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1219292 INFO  (jetty-launcher-2414-thread-3) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T22:27:37+01:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1219318 INFO  (jetty-launcher-2414-thread-3) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1219318 INFO  (jetty-launcher-2414-thread-3) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1219318 INFO  (jetty-launcher-2414-thread-3) [] 
o.e.j.s.session Scavenging every 60ms
   [junit4]   2> 1219327 INFO  (jetty-launcher-2414-thread-3) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@69444d25{/solr,null,AVAILABLE}
   [junit4]   2> 1219327 INFO  (jetty-launcher-2414-thread-2) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1219327 INFO  (jetty-launcher-2414-thread-2) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1219327 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1219328 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1219328 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.session Scavenging every 60ms
   [junit4]   2> 1219328 INFO  (jetty-launcher-2414-thread-2) [] 
o.e.j.s.session Scavenging every 66ms
   [junit4]   2> 1219328 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@3ae3f7f1{/solr,null,AVAILABLE}
   [junit4]   2> 1219328 INFO  (jetty-launcher-2414-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6820f57c{/solr,null,AVAILABLE}
   [junit4]   2> 1219332 INFO  (jetty-launcher-2414-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@42385178{SSL,[ssl, 
http/1.1]}{127.0.0.1:33655}
   [junit4]   2> 1219332 INFO  (jetty-launch

[JENKINS] Lucene-Solr-Tests-7.x - Build # 428 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/428/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:44151_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:44151_solr
at 
__randomizedtesting.SeedInfo.seed([1B33F7F58E4B819B:9367C82F20B7EC63]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1B33F7F58E4B819B:A13FC07AD1A357D4]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert

[JENKINS] Lucene-Solr-repro - Build # 64 - Still unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/64/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/151/consoleText

[repro] Revision: c92458339830f0a1fa88891fa137b16f0c8b0849

[repro] Repro line:  ant test  -Dtestcase=AutoscalingHistoryHandlerTest 
-Dtests.method=testHistory -Dtests.seed=98AE70F40B569CC8 -Dtests.multiplier=2 
-Dtests.locale=lt-LT -Dtests.timezone=Etc/GMT+10 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestHdfsCloudBackupRestore 
-Dtests.method=test -Dtests.seed=98AE70F40B569CC8 -Dtests.multiplier=2 
-Dtests.locale=ca -Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestDistribIDF 
-Dtests.method=testMultiCollectionQuery -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=hu-HU 
-Dtests.timezone=America/Argentina/San_Juan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=sr-Latn -Dtests.timezone=Asia/Vientiane 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testSearchRate -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=sr-Latn -Dtests.timezone=Asia/Vientiane 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=es-CL -Dtests.timezone=Europe/Mariehamn 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.seed=98AE70F40B569CC8 -Dtests.multiplier=2 -Dtests.locale=es-CL 
-Dtests.timezone=Europe/Mariehamn -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=is-IS -Dtests.timezone=Pacific/Galapagos 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testEventFromRestoredState -Dtests.seed=98AE70F40B569CC8 
-Dtests.multiplier=2 -Dtests.locale=is-IS -Dtests.timezone=Pacific/Galapagos 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be7a811f61135890554a2a165a8ef6765b0a4310
[repro] git checkout c92458339830f0a1fa88891fa137b16f0c8b0849

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TriggerIntegrationTest
[repro]   AutoscalingHistoryHandlerTest
[repro]   TestDistribIDF
[repro]   TestLargeCluster
[repro]   TestHdfsCloudBackupRestore
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=30 
-Dtests.class="*.TriggerIntegrationTest|*.AutoscalingHistoryHandlerTest|*.TestDistribIDF|*.TestLargeCluster|*.TestHdfsCloudBackupRestore|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror -Dtests.seed=98AE70F40B569CC8 -Dtests.multiplier=2 
-Dtests.locale=sr-Latn -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 19615 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore
[repro]   0/5 failed: org.apache.solr.search.stats.TestDistribIDF
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   1/5 failed: 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest
[repro] git checkout be7a811f61135890554a2a165a8ef6765b0a4310

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4450 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4450/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([D25326B18C77B3D0:BFAF824C363F4CD7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:244)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:58925_solr


[JENKINS] Lucene-Solr-repro - Build # 63 - Failure

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/63/

[...truncated 40 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/427/consoleText

[repro] Revision: 539225effa70103845402f60242f1c381d4cd4b7

[repro] Repro line:  ant test  -Dtestcase=ComputePlanActionTest 
-Dtests.method=testNodeWithMultipleReplicasLost -Dtests.seed=568E45F9882B878E 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=hi-IN 
-Dtests.timezone=Indian/Mahe -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=AutoscalingHistoryHandlerTest 
-Dtests.method=testHistory -Dtests.seed=568E45F9882B878E -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=he-IL -Dtests.timezone=ROK 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be7a811f61135890554a2a165a8ef6765b0a4310
[repro] git checkout 539225effa70103845402f60242f1c381d4cd4b7

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ComputePlanActionTest
[repro]   AutoscalingHistoryHandlerTest
[repro] ant compile-test

[...truncated 1523 lines...]
[repro] Setting last failure code to 256

[repro] Traceback (most recent call last):

[...truncated 3 lines...]
raise RuntimeError("ERROR: Compile failed in %s/ with code %d.  See above." 
% (module, code))
RuntimeError: ERROR: Compile failed in solr/core/ with code 256.  See above.

[repro] git checkout be7a811f61135890554a2a165a8ef6765b0a4310
Previous HEAD position was 539225e... Avoid thread contention in LRUQueryCache 
test
HEAD is now at be7a811... SOLR-11964: SOLR-11874: Docs for ulimits (system) and 
queryAnalyzerFieldType (spellcheck)
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-repro - Build # 62 - Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/62/

[...truncated 41 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2349/consoleText

[repro] Revision: a2fdbc93534263299f055a9e344c49bf29aebdf5

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPISolrJTest 
-Dtests.method=testCreateWithDefaultConfigSet -Dtests.seed=94CC9EF206D4B874 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=id -Dtests.timezone=Zulu 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=ReplaceNodeNoTargetTest 
-Dtests.method=test -Dtests.seed=94CC9EF206D4B874 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=th-TH -Dtests.timezone=Europe/Sofia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be7a811f61135890554a2a165a8ef6765b0a4310
[repro] git checkout a2fdbc93534263299f055a9e344c49bf29aebdf5

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ReplaceNodeNoTargetTest
[repro]   CollectionsAPISolrJTest
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ReplaceNodeNoTargetTest|*.CollectionsAPISolrJTest" 
-Dtests.showOutput=onerror -Dtests.seed=94CC9EF206D4B874 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=th-TH -Dtests.timezone=Europe/Sofia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 2604 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.CollectionsAPISolrJTest
[repro]   2/5 failed: org.apache.solr.cloud.ReplaceNodeNoTargetTest
[repro] git checkout be7a811f61135890554a2a165a8ef6765b0a4310

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[jira] [Updated] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9510:
---
Attachment: SOLR_9510.patch

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth to consider JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it's might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[jira] [Updated] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9510:
---
Attachment: (was: SOLR_9510.patch)

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth to consider JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it's might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[jira] [Commented] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369586#comment-16369586
 ] 

Mikhail Khludnev commented on SOLR-9510:


Slightly refreshed [^SOLR_9510.patch].
* no changes in the Lucene codebase
* it turns out -a little bit- scary; however, the {{query}} facet is redundant. Anyway, I can't see how to make it shorter:
{code}
q={!parent filters=$child.fq which=type_s:book v=$childquery}&
childquery=comment_t:*&
child.fq={!tag=author}author_s:dan&
child.fq={!tag=stars}stars_i:4&
json.facet={  
   comments_for_author:{  
  domain:{  
      excludeTags:author,  // 1. rejoin child filters and query, expand parents docset, apply parent filters (I suppose)
      blockChildren:"type_s:book",  // 2. join to expanded children
      filter:"{!filters params=$child.fq excludeTags=author v=$childquery}"  // 3. filter them again
  },
  type:terms,
  field:author_s,
  facet:{  
 in_books:"unique(_root_)"
  }
   },
   comments_for_stars:{  
  domain:{  
 excludeTags:stars,
 blockChildren:"type_s:book",
 filter:"{!filters params=$child.fq  excludeTags=stars v=$childquery}"
  },
  type:terms,
  field:stars_i,
  facet:{  
 in_books:"unique(_root_)"
  }
   }
}
{code}
* TODO: {{BJQParserFiltersTest}} should be collapsed into {{BJQParserTest}}
* TODO: edge case where a single child query is excluded.
Are there any concerns? I think it may go in this week.

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth to consider JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it's might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?





[JENKINS] Lucene-Solr-Tests-master - Build # 2350 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2350/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:44787_solr, 
127.0.0.1:55613_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/17)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost:42608/data/testSimple1/core_node3/data/",   
"base_url":"https://127.0.0.1:44787/solr";,   
"node_name":"127.0.0.1:44787_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node3/data/tlog",   
"core":"testSimple1_shard1_replica_n1",   "shared_storage":"true",  
 "state":"active"}, "core_node5":{   
"dataDir":"hdfs://localhost:42608/data/testSimple1/core_node5/data/",   
"base_url":"https://127.0.0.1:44787/solr";,   
"node_name":"127.0.0.1:44787_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node5/data/tlog",   
"core":"testSimple1_shard1_replica_n2",   "shared_storage":"true",  
 "state":"active",   "leader":"true"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost:42608/data/testSimple1/core_node7/data/",   
"base_url":"https://127.0.0.1:55381/solr";,   
"node_name":"127.0.0.1:55381_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node7/data/tlog",   
"core":"testSimple1_shard2_replica_n4",   "shared_storage":"true",  
 "state":"down"}, "core_node8":{   
"dataDir":"hdfs://localhost:42608/data/testSimple1/core_node8/data/",   
"base_url":"https://127.0.0.1:44787/solr";,   
"node_name":"127.0.0.1:44787_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node8/data/tlog",   
"core":"testSimple1_shard2_replica_n6",   "shared_storage":"true",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:44787_solr, 127.0.0.1:55613_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/17)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node3":{
  "dataDir":"hdfs://localhost:42608/data/testSimple1/core_node3/data/",
  "base_url":"https://127.0.0.1:44787/solr";,
  "node_name":"127.0.0.1:44787_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node3/data/tlog",
  "core":"testSimple1_shard1_replica_n1",
  "shared_storage":"true",
  "state":"active"},
"core_node5":{
  "dataDir":"hdfs://localhost:42608/data/testSimple1/core_node5/data/",
  "base_url":"https://127.0.0.1:44787/solr";,
  "node_name":"127.0.0.1:44787_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node5/data/tlog",
  "core":"testSimple1_shard1_replica_n2",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node7":{
  "dataDir":"hdfs://localhost:42608/data/testSimple1/core_node7/data/",
  "base_url":"https://127.0.0.1:55381/solr";,
  "node_name":"127.0.0.1:55381_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node7/data/tlog",
  "core":"testSimple1_shard2_replica_n4",
  "shared_storage":"true",
  "state":"down"},
"core_node8":{
  "dataDir":"hdfs://localhost:42608/data/testSimple1/core_node8/data/",
  "base_url":"https://127.0.0.1:44787/solr";,
  "node_name":"127.0.0.1:44787_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost:42608/data/testSimple1/core_node8/data/tlog",
  "core":"testSimple1_shard2_replica_n6",
  "shared_storage":"true",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([F296178E4FBA8CC2:CA25337068495813]:0)
at 

[jira] [Updated] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-19 Thread Joanita Dsouza (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joanita Dsouza updated LUCENE-8179:
---
Description: 
Hi,

We use the StandardTokenizer to tokenize text. The StandardTokenizer
tokenizes 'systems' correctly, but it fails to tokenize 'system'. A small
program to demonstrate this is attached.

Is this a known issue? Is there a way to fix it? I have tried a few different
text examples with different stop words, and only this word seems to show this
issue.

  was:
Hi,

We use the Standard tokenizer to find stop words from text using a predefined 
list of stop words.This list contains 'system' as one of the words. While 
tokenizing a text. The Standard Tokenizer tokenizes 'systems' correctly, but it 
fails to tokenize 'system' Attached a small program to demo this.

Is this a known issue.Is there a way to fix it? I have tried a few different 
text examples with different stop words and only this word seems to show this 
issue.


> StandardTokenizer doesn't tokenize the word "system" but it works for the 
> plural "systems"
> --
>
> Key: LUCENE-8179
> URL: https://issues.apache.org/jira/browse/LUCENE-8179
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 4.10.4
>Reporter: Joanita Dsouza
>Priority: Major
> Attachments: TokenizerBug.java
>
>
> Hi,
> We use the Standard tokenizer to tokenize text. The Standard Tokenizer 
> tokenizes 'systems' correctly, but it fails to tokenize 'system' Attached a 
> small program to demo this.
> Is this a known issue.Is there a way to fix it? I have tried a few different 
> text examples with different stop words and only this word seems to show this 
> issue.






[jira] [Created] (LUCENE-8179) StandardTokenizer doesn't tokenize the word "system" but it works for the plural "systems"

2018-02-19 Thread Joanita Dsouza (JIRA)
Joanita Dsouza created LUCENE-8179:
--

 Summary: StandardTokenizer doesn't tokenize the word "system" but 
it works for the plural "systems"
 Key: LUCENE-8179
 URL: https://issues.apache.org/jira/browse/LUCENE-8179
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 4.10.4
Reporter: Joanita Dsouza
 Attachments: TokenizerBug.java

Hi,

We use the StandardTokenizer to find stop words in text using a predefined
list of stop words. This list contains 'system' as one of the words. While
tokenizing text, the StandardTokenizer tokenizes 'systems' correctly, but it
fails to tokenize 'system'. A small program to demonstrate this is attached.

Is this a known issue? Is there a way to fix it? I have tried a few different
text examples with different stop words, and only this word seems to show this
issue.
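
A minimal sketch of the kind of check such a program might perform is below. This is not the attached TokenizerBug.java; it targets a recent Lucene API (where StandardTokenizer is constructed without a Reader, unlike 4.10.4) and only prints what the tokenizer itself emits, so a missing 'system' token can be narrowed down to either the tokenizer or a later filter such as a stop filter:

{code}
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardTokenizerCheck {
  public static void main(String[] args) throws Exception {
    for (String text : new String[] {"system", "systems"}) {
      // Create a bare StandardTokenizer and register the attribute exposing the term text.
      try (StandardTokenizer tok = new StandardTokenizer()) {
        CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
        tok.setReader(new StringReader(text));
        tok.reset();
        System.out.print("tokens for '" + text + "':");
        while (tok.incrementToken()) {
          // Print every token emitted by the tokenizer itself, before any filters run.
          System.out.print(" " + term.toString());
        }
        tok.end();
        System.out.println();
      }
    }
  }
}
{code}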






[jira] [Updated] (SOLR-9510) child level facet exclusions

2018-02-19 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9510:
---
Attachment: SOLR_9510.patch

> child level facet exclusions
> 
>
> Key: SOLR-9510
> URL: https://issues.apache.org/jira/browse/SOLR-9510
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, query parsers
>Reporter: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR_9510.patch, SOLR_9510.patch, SOLR_9510.patch, 
> SOLR_9510.patch, SOLR_9510.patch
>
>
> h2. Challenge
> * Since SOLR-5743 achieved block join child level facets with counts roll-up 
> to parents, there is a demand for filter exclusions. 
> h2. Context
> * Then, it's worth to consider JSON Facets as an engine for this 
> functionality rather than support a separate component. 
> * During a discussion in SOLR-8998 [a solution for block join with child 
> level 
> exclusion|https://issues.apache.org/jira/browse/SOLR-8998?focusedCommentId=15487095&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15487095]
>  has been found.  
>
> h2. Proposal
> It's proposed to provide a bit of syntax sugar to make it user friendly, 
> believe it or not.
> h2. List of improvements
> * introducing a local parameter {{filters}} for {{\{!parent}} query parser 
> referring to _multiple_ filters queries via parameter name: {{\{!parent 
> filters=$child.fq ..}..&child.fq=color:Red&child.fq=size:XL}} 
> these _filters_ are intersected with a child query supplied as a subordinate 
> clause.
> * introducing {{\{!filters params=$child.fq excludeTags=color 
> v=$subq}&subq=text:word&child.fq={!tag=color}color:Red&child.fq=size:XL}} it 
> intersects a subordinate clause (here it's {{subq}} param, and the trick is 
> to refer to the same query from {{\{!parent}}}), with multiple filters 
> supplied via parameter name {{params=$child.fq}}, it also supports 
> {{excludeTags}}.
> h2. Notes
> Regarding the latter parser, the alternative approach might be to move into 
> {{domain:\{..}}} instruction of json facet. From the implementation 
> perspective, it's desired to optimize with bitset processing, however I 
> suppose it's might be deferred until some initial level of maturity. 
> h2. Example
> {code}
> q={!parent which=type_s:book filters=$child.fq 
> v=$childquery}&childquery=comment_t:good&child.fq={!tag=author}author_s:yonik&child.fq={!tag=stars}stars_i:(5
>  3)&wt=json&indent=on&json.facet={
> comments_for_author:{
>   type:query,
>   q:"{!filters params=$child.fq excludeTags=author v=$childquery}",
>   "//note":"author filter is excluded",
>   domain:{
>  blockChildren:"type_s:book",
>  "//":"applying filters here might be more promising"
>}, facet:{
>authors:{
>   type:terms,
>   field:author_s,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>}
> } ,
> comments_for_stars:{
>   type:query,
>  q:"{!filters params=$child.fq excludeTags=stars v=$childquery}",
>   "//note":"stars_i filter is excluded",
>   domain:{
>  blockChildren:"type_s:book"
>}, facet:{
>stars:{
>   type:terms,
>   field:stars_i,
>   facet: {
>   in_books: "unique(_root_)"
> }
> }
>   }
> }
> }
> {code} 
> Votes? Opinions?






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1390 - Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1390/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FC3D3DCF6F88309D:74690215C1745D65]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12399 lines...]
   [junit4] Suite: org.apache.solr.cloud.ReplaceNodeNoTargetTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ReplaceNodeNoTargetTest_FC3D3DCF6F88309D-001/init-core-data-001
   [junit4]   2> 497655 WARN  
(SUITE-ReplaceNod

[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 464 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/464/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_EAA246E844AF8D01-001\4.5.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_EAA246E844AF8D01-001\4.5.0-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_EAA246E844AF8D01-001\4.5.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_EAA246E844AF8D01-001\4.5.0-nocfs-001

at __randomizedtesting.SeedInfo.seed([EAA246E844AF8D01]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.OverallAnalyticsTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1\data\version-2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1\data\version-2

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1\data:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1\data

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J1\temp\solr.analytics.OverallAnalyticsTest_80B52200FCDC9C7E-001\tempDir-001\zookeeper\server1\data\version-2\log.1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\s

[jira] [Resolved] (SOLR-11874) Add ulimit recommendations to the "Taking Solr to Production" section in the ref guide

2018-02-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11874.
---
   Resolution: Fixed
Fix Version/s: 7.3

Apparently putting two SOLR JIRA numbers into a single commit message doesn't update the JIRAs. SHAs:

master SHA: be7a811f61135890554a2a165a8ef6765b0a4310
7x SHA: aac11fc1209e7afeee49b112630ec421000e9195

> Add ulimit recommendations to the "Taking Solr to Production" section in the 
> ref guide
> --
>
> Key: SOLR-11874
> URL: https://issues.apache.org/jira/browse/SOLR-11874
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.3
>
>
> Just noticed that we never mention appropriate ulimits in the ref guide 
> except for one spot when talking about cfs files.
> Anyone who wants to pick this up feel free. Otherwise I'll get to this 
> probably over the weekend.






[jira] [Resolved] (SOLR-11964) We never mention queryAnalyzerFieldType in the parameters for spellcheck component in the reference guide

2018-02-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11964.
---
   Resolution: Fixed
Fix Version/s: 7.3

> We never mention queryAnalyzerFieldType in the parameters for spellcheck 
> component in the reference guide
> -
>
> Key: SOLR-11964
> URL: https://issues.apache.org/jira/browse/SOLR-11964
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11964.patch, SOLR-11964.patch
>
>







[jira] [Commented] (SOLR-11964) We never mention queryAnalyzerFieldType in the parameters for spellcheck component in the reference guide

2018-02-19 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369526#comment-16369526
 ] 

Erick Erickson commented on SOLR-11964:
---

Apparently putting two SOLR JIRA numbers into a single commit message doesn't update the JIRAs. SHAs:

master SHA: be7a811f61135890554a2a165a8ef6765b0a4310
7x SHA: aac11fc1209e7afeee49b112630ec421000e9195

> We never mention queryAnalyzerFieldType in the parameters for spellcheck 
> component in the reference guide
> -
>
> Key: SOLR-11964
> URL: https://issues.apache.org/jira/browse/SOLR-11964
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-11964.patch, SOLR-11964.patch
>
>







[jira] [Commented] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369527#comment-16369527
 ] 

Varun Thacker commented on SOLR-12006:
--

This should be fine from a back-compat point of view as well, since people using
"*_txt" fields will still use them as multi-valued.

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text 
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition . 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master now there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and multi-valued dynamic field for 
> ints, longs, boolean, float, date , string . We should provide the same 
> option for a text field






[jira] [Updated] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12006:
-
Attachment: SOLR-12006.patch

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12006.patch
>
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text 
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition . 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master now there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and multi-valued dynamic field for 
> ints, longs, boolean, float, date , string . We should provide the same 
> option for a text field






[jira] [Updated] (SOLR-11964) We never mention queryAnalyzerFieldType in the parameters for spellcheck component in the reference guide

2018-02-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11964:
--
Attachment: SOLR-11964.patch

> We never mention queryAnalyzerFieldType in the parameters for spellcheck 
> component in the reference guide
> -
>
> Key: SOLR-11964
> URL: https://issues.apache.org/jira/browse/SOLR-11964
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-11964.patch, SOLR-11964.patch
>
>







[jira] [Updated] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12006:
-
Description: 
Solr used to have a '*_t' dynamic field which was single-valued and a "*_txt"
field for multi-valued text.

Solr 4.x:
[https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]

Somewhere in Solr 5.x both became the same definition:
[https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]

In master there is no "*_t" dynamic field anymore.

We have a single-valued dynamic field and a multi-valued dynamic field for ints,
longs, booleans, floats, dates, and strings. We should provide the same option for a
text field.

  was:
Solr used to have a '*_t' dynamic field which was single valued and a "*_txt" 
field for multi-valued text 

 

Solr 4.x : 
[https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]

 

 

Somewhere in Solr 5.x both became the same definition . 
[https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]

 

In master now there is no "_t" dynamic field anymore. 

 

We have a single-valued dynamic field and multi-valued dynamic field for ints, 
longs, boolean, float, date , string . We should provide the same option for a 
text field


> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Solr used to have a '_t' dynamic field which was single valued and a "_txt" 
> field for multi-valued text 
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition . 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master now there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and multi-valued dynamic field for 
> ints, longs, boolean, float, date , string . We should provide the same 
> option for a text field






[jira] [Created] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12006:


 Summary: Add back '*_t' dynamic field for single valued text fields
 Key: SOLR-12006
 URL: https://issues.apache.org/jira/browse/SOLR-12006
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Solr used to have a '*_t' dynamic field which was single-valued and a "*_txt"
field for multi-valued text.

Solr 4.x:
[https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]

Somewhere in Solr 5.x both became the same definition:
[https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]

In master there is no "*_t" dynamic field anymore.

We have a single-valued dynamic field and a multi-valued dynamic field for ints,
longs, booleans, floats, dates, and strings. We should provide the same option for a
text field.
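
For illustration, a minimal sketch of what the restored definitions might look like in the default configset's managed-schema is shown below. The {{text_general}} field type and the exact attributes are assumptions made for the example, not taken from the attached patch:

{code}
<!-- hypothetical sketch: single-valued text dynamic field alongside the existing multi-valued one -->
<dynamicField name="*_t"   type="text_general" indexed="true" stored="true"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
{code}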






[jira] [Assigned] (SOLR-12006) Add back '*_t' dynamic field for single valued text fields

2018-02-19 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-12006:


Assignee: Varun Thacker

> Add back '*_t' dynamic field for single valued text fields
> --
>
> Key: SOLR-12006
> URL: https://issues.apache.org/jira/browse/SOLR-12006
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Solr used to have a '*_t' dynamic field which was single valued and a "*_txt" 
> field for multi-valued text 
>  
> Solr 4.x : 
> [https://github.com/apache/lucene-solr/blob/branch_4x/solr/example/example-schemaless/solr/collection1/conf/schema.xml#L129]
>  
>  
> Somewhere in Solr 5.x both became the same definition . 
> [https://github.com/apache/lucene-solr/blob/branch_5_4/solr/server/solr/configsets/data_driven_schema_configs/conf/managed-schema#L138]
>  
> In master now there is no "_t" dynamic field anymore. 
>  
> We have a single-valued dynamic field and multi-valued dynamic field for 
> ints, longs, boolean, float, date , string . We should provide the same 
> option for a text field






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 957 - Still Failing

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/957/

No tests ran.

Build Log:
[...truncated 28740 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (10.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.03 sec (1136.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.9 MB in 0.14 sec (528.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 84.4 MB in 0.08 sec (1086.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6249 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6249 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6249 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6249 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (25.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.98 sec (53.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.6 MB in 0.59 sec (257.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.6 MB in 1.04 sec (146.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 151 - Still Failing

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/151/

No tests ran.

Build Log:
[...truncated 28782 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.05 sec (4.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.7 MB in 0.64 sec (50.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 73.9 MB in 0.71 sec (103.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 84.4 MB in 1.68 sec (50.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.04 sec (6.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker] 54.1 MB in 2.07 sec (26.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker] 151.7 MB in 6.38 sec (23.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker] 152.7 MB in 5.40 sec (28.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] *** [WARN] *** Your open file limit is curre

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21493 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21493/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([DB6F2C93CB99C23A:61631B1C94711475]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger(TriggerIntegrationTest.java:1585)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedIn

[jira] [Created] (SOLR-12005) Solr should have the option of logging all jars loaded

2018-02-19 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-12005:
---

 Summary: Solr should have the option of logging all jars loaded
 Key: SOLR-12005
 URL: https://issues.apache.org/jira/browse/SOLR-12005
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Shawn Heisey


Solr used to explicitly log the filename of every jar it loaded.  It seems that 
the effort to reduce the verbosity of the logs has changed this; now it just 
logs the *count* of jars loaded and the paths where they were loaded from.  
Here's a log line where Solr is reading from ${solr.solr.home}/lib:

{code}
2018-02-01 17:43:20.043 INFO  (main) [   ] o.a.s.c.SolrResourceLoader [null] 
Added 8 libs to classloader, from paths: [/index/solr6/data/lib]
{code}

When trying to help somebody with classloader issues, it's much harder to 
diagnose the problem when the list of loaded jars isn't in the log.

I would like the more verbose logging to be enabled by default, but I 
understand that many people would not want that, so I propose this:

 * Enable verbose logging for ${solr.solr.home}/lib by default.
 * Disable verbose logging for each core by default.  Allow solrconfig.xml to 
enable it.
 * Optionally allow solr.xml to configure verbose logging at the global level.
 ** This setting would affect both global and per-core jar loading. Each 
solrconfig.xml could override.

Rationale: The contents of ${solr.solr.home}/lib are loaded precisely once, and 
this location doesn't even exist unless a user creates it.  An out-of-the-box 
config would not have verbose logs from jar loading.

The solr home lib location is my preferred way of loading custom jars, because 
they get loaded only once, no matter how many cores you have.  Jars added to 
this location would add lines to the log, but they would not be logged for 
every core.
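
As a rough illustration only (not an actual patch), the per-jar logging could 
look something like the sketch below.  The class and method names here are 
hypothetical; Solr's real jar loading lives in SolrResourceLoader and its 
signatures may differ.

{code}
// Hypothetical sketch: log every jar added to the classloader when verbose
// logging is enabled, while keeping the current one-line summary.
import java.nio.file.Path;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class JarLoadingLogger {
  private static final Logger log = LoggerFactory.getLogger(JarLoadingLogger.class);

  static void logAddedLibs(Path libDir, List<Path> jars, boolean verbose) {
    if (verbose) {
      for (Path jar : jars) {
        // one line per jar, so classloader problems can be diagnosed from the log
        log.info("Adding '{}' to classloader", jar.toAbsolutePath());
      }
    }
    // current behavior: only the count and the source path
    log.info("Added {} libs to classloader, from paths: [{}]", jars.size(), libDir);
  }
}
{code}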







[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7180 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7180/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test

Error Message:
Expected no errors: [{type=ADD,id=10,message=We need to create a new time 
routed collection but for unknown reasons were unable to do so.}]

Stack Trace:
java.lang.AssertionError: Expected no errors: [{type=ADD,id=10,message=We need 
to create a new time routed collection but for unknown reasons were unable to 
do so.}]
at 
__randomizedtesting.SeedInfo.seed([55E3E16CA69351F8:DDB7DEB6086F3C00]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.assertUpdateResponse(TimeRoutedAliasUpdateProcessorTest.java:305)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.addDocsAndCommit(TimeRoutedAliasUpdateProcessorTest.java:263)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.test(TimeRoutedAliasUpdateProcessorTest.java:195)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting

[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2018-02-19 Thread Nikolay Khitrin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369433#comment-16369433
 ] 

Nikolay Khitrin commented on SOLR-8096:
---

Please take a look at the LUCENE-8178 patch; I've got up to a 2-2.5x faceting 
performance boost on a real index (35M docs) by unpacking DocValues in blocks 
and reducing position lookups.

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, 6.0
>Reporter: Yonik Seeley
>Priority: Critical
> Attachments: facetcache.diff, simple_facets.diff
>
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was removed as part of LUCENE-5666, causing 
> severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time  
> Percent of index being faceted:
> ||num_unique_values|| 10% || 50% || 90% ||
> |10  | 351.17% | 1587.08% | 3057.28% |
> |100 | 158.10% | 203.61%  | 1421.93% |
> |1000| 143.78% | 168.01%  | 1325.87% |
> |10000   | 137.98% | 175.31%  | 1233.97% |
> |100000  | 142.98% | 159.42%  | 1252.45% |
> |1000000 | 255.15% | 165.17%  | 1236.75% |
> For example, for a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.
> edit: removed "secret" adverb by request






[jira] [Updated] (LUCENE-8178) Bulk operations for LongValues and Sorted[Set]DocValues

2018-02-19 Thread Nikolay Khitrin (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Khitrin updated LUCENE-8178:

Attachment: LUCENE-8178.patch

> Bulk operations for LongValues and Sorted[Set]DocValues
> ---
>
> Key: LUCENE-8178
> URL: https://issues.apache.org/jira/browse/LUCENE-8178
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.2.1
>Reporter: Nikolay Khitrin
>Priority: Major
> Attachments: LUCENE-8178-for-solr.patch, LUCENE-8178.patch
>
>
> One-by-one DocValues iteration via {{advanceExact}} and 
> {{nextOrd}}/{{ordValue}} is really slow for bulk operations like faceting. 
> Reading and unpacking integers in blocks is substantially faster, but 
> DocValues can currently be queried only one document at a time.
> To apply document-based bulk processing, {{DocIdSetIterator}} matches have to 
> be split into sequential docID runs and remapped to the underlying 
> {{LongValues}} positions.
>  After this transformation, relatively large linear scans can be performed 
> over packed integers.
>  
> To do this, two new interfaces
> 1. {{LongValuesCollector}} ({{collectValue(long index, long value)}}).
>  2. {{OrdStatsCollector}} ({{collectOrd(long ord)}}, {{collectMissing(int 
> count)}}).
> and three new functions are introduced
> 1. {{LongValues.forRange(long begin, long end, LongValuesCollector 
> collector)}}
>  2. {{SortedDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
> collector)}}
>  3. {{SortedSetDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
> collector)}}
> with reference implementations.
> Optimized versions of these functions are provided for:
>  1. {{DirectReader}} for non-32/64 bits per value cases (using 
> {{PackedInts.Decoder}}).
>  2. {{Lucene70DocValuesProducer}} {{getSorted}} and {{getSortedSet}} (both 
> sparse and dense).
>  
> The measured Solr faceting performance boost is up to 2-2.5x on a real index.
>  A patch for Solr {{DocValuesFacets}} is also provided as a separate file.
>  
> Implementation notes:
>  * {{OrdStatsCollector}} does not accept a document id because that would ruin 
> performance for {{SortedSetDocValues}} due to excessive position lookups.
>  * This patch is fully compatible with the Lucene 7.0 DocValues format.






[jira] [Updated] (LUCENE-8178) Bulk operations for LongValues and Sorted[Set]DocValues

2018-02-19 Thread Nikolay Khitrin (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Khitrin updated LUCENE-8178:

Attachment: LUCENE-8178-for-solr.patch

> Bulk operations for LongValues and Sorted[Set]DocValues
> ---
>
> Key: LUCENE-8178
> URL: https://issues.apache.org/jira/browse/LUCENE-8178
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.2.1
>Reporter: Nikolay Khitrin
>Priority: Major
> Attachments: LUCENE-8178-for-solr.patch, LUCENE-8178.patch
>
>
> One-by-one DocValues iteration via {{advanceExact}} and 
> {{nextOrd}}/{{ordValue}} is really slow for bulk operations like faceting. 
> Reading and unpacking integers in blocks is substantially faster, but 
> DocValues can currently be queried only one document at a time.
> To apply document-based bulk processing, {{DocIdSetIterator}} matches have to 
> be split into sequential docID runs and remapped to the underlying 
> {{LongValues}} positions.
>  After this transformation, relatively large linear scans can be performed 
> over packed integers.
>  
> To do this, two new interfaces
> 1. {{LongValuesCollector}} ({{collectValue(long index, long value)}}).
>  2. {{OrdStatsCollector}} ({{collectOrd(long ord)}}, {{collectMissing(int 
> count)}}).
> and three new functions are introduced
> 1. {{LongValues.forRange(long begin, long end, LongValuesCollector 
> collector)}}
>  2. {{SortedDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
> collector)}}
>  3. {{SortedSetDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
> collector)}}
> with reference implementations.
> Optimized versions of these functions are provided for:
>  1. {{DirectReader}} for non-32/64 bits per value cases (using 
> {{PackedInts.Decoder}}).
>  2. {{Lucene70DocValuesProducer}} {{getSorted}} and {{getSortedSet}} (both 
> sparse and dense).
>  
> The measured Solr faceting performance boost is up to 2-2.5x on a real index.
>  A patch for Solr {{DocValuesFacets}} is also provided as a separate file.
>  
> Implementation notes:
>  * {{OrdStatsCollector}} does not accept a document id because that would ruin 
> performance for {{SortedSetDocValues}} due to excessive position lookups.
>  * This patch is fully compatible with the Lucene 7.0 DocValues format.






[jira] [Created] (LUCENE-8178) Bulk operations for LongValues and Sorted[Set]DocValues

2018-02-19 Thread Nikolay Khitrin (JIRA)
Nikolay Khitrin created LUCENE-8178:
---

 Summary: Bulk operations for LongValues and Sorted[Set]DocValues
 Key: LUCENE-8178
 URL: https://issues.apache.org/jira/browse/LUCENE-8178
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.2.1
Reporter: Nikolay Khitrin


One-by-one DocValues iteration via {{advanceExact}} and {{nextOrd}}/{{ordValue}} 
is really slow for bulk operations like faceting. Reading and unpacking integers 
in blocks is substantially faster, but DocValues can currently be queried only 
one document at a time.

To apply document-based bulk processing, {{DocIdSetIterator}} matches have to be 
split into sequential docID runs and remapped to the underlying {{LongValues}} 
positions.
 After this transformation, relatively large linear scans can be performed over 
packed integers.

 

To do this, two new interfaces

1. {{LongValuesCollector}} ({{collectValue(long index, long value)}}).
 2. {{OrdStatsCollector}} ({{collectOrd(long ord)}}, {{collectMissing(int 
count)}}).

and three new functions are introduced

1. {{LongValues.forRange(long begin, long end, LongValuesCollector collector)}}
 2. {{SortedDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
collector)}}
 3. {{SortedSetDocValues.forEach(DocIdSetIterator disi, OrdStatsConsumer 
collector)}}

with reference implementations.

Optimized versions of these functions are provided for:
 1. {{DirectReader}} for non-32/64 bits per value cases (using 
{{PackedInts.Decoder}}).
 2. {{Lucene70DocValuesProducer}} {{getSorted}} and {{getSortedSet}} (both 
sparse and dense).

 

The measured Solr faceting performance boost is up to 2-2.5x on a real index.
 A patch for Solr {{DocValuesFacets}} is also provided as a separate file.

 

Implementation notes:
 * {{OrdStatsCollector}} does not accept a document id because that would ruin 
performance for {{SortedSetDocValues}} due to excessive position lookups.
 * This patch is fully compatible with the Lucene 7.0 DocValues format.
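
As a rough, non-authoritative sketch of the proposed API (using the names from 
the description above, with {{OrdStatsCollector}} where the function signatures 
say {{OrdStatsConsumer}}), the interfaces plus a naive reference loop could look 
like this; it is illustrative only and is not the attached patch:

{code}
// Proposed bulk-iteration interfaces plus a naive reference forEach for
// SortedDocValues. The optimized block-decoding paths described in the issue
// (PackedInts.Decoder, Lucene70DocValuesProducer) are not shown here.
import java.io.IOException;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.search.DocIdSetIterator;

interface LongValuesCollector {
  void collectValue(long index, long value);
}

interface OrdStatsCollector {
  void collectOrd(long ord);
  void collectMissing(int count);
}

final class BulkDocValues {
  private BulkDocValues() {}

  /** Naive reference version: advance doc by doc and feed ordinals to the collector. */
  static void forEach(SortedDocValues values, DocIdSetIterator disi,
                      OrdStatsCollector collector) throws IOException {
    int missing = 0;
    for (int doc = disi.nextDoc();
         doc != DocIdSetIterator.NO_MORE_DOCS;
         doc = disi.nextDoc()) {
      if (values.advanceExact(doc)) {
        collector.collectOrd(values.ordValue());
      } else {
        missing++;
      }
    }
    if (missing > 0) {
      collector.collectMissing(missing);
    }
  }
}
{code}

The optimized versions would replace this per-document loop with block decodes 
over the matching docID runs, which is where the measured speedup comes from.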






[JENKINS] Lucene-Solr-repro - Build # 59 - Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/59/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/956/consoleText

[repro] Revision: b4f8cd7ea6bcbf3974228857ff9d92b545e2c33e

[repro] Repro line:  ant test  -Dtestcase=ComputePlanActionTest 
-Dtests.method=testSelectedCollections -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=es-MX -Dtests.timezone=America/St_Johns 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Indiana/Knox 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testSearchRate -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Indiana/Knox 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TriggerIntegrationTest 
-Dtests.method=testMetricTrigger -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=fr -Dtests.timezone=America/Indiana/Knox 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AutoscalingHistoryHandlerTest 
-Dtests.method=testHistory -Dtests.seed=FEB73303DABB5D6C -Dtests.multiplier=2 
-Dtests.locale=ar-EG -Dtests.timezone=Africa/Algiers -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=DeleteShardTest -Dtests.method=test 
-Dtests.seed=FEB73303DABB5D6C -Dtests.multiplier=2 -Dtests.locale=ar-BH 
-Dtests.timezone=SystemV/HST10 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTolerantUpdateProcessorCloud 
-Dtests.seed=FEB73303DABB5D6C -Dtests.multiplier=2 -Dtests.locale=sv-SE 
-Dtests.timezone=Africa/Brazzaville -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=ar-OM -Dtests.timezone=America/Adak 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeAddedTriggerRestoreState -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=ar-OM -Dtests.timezone=America/Adak 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=ar-OM -Dtests.timezone=America/Adak 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=FEB73303DABB5D6C 
-Dtests.multiplier=2 -Dtests.locale=pt -Dtests.timezone=CET 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TimeRoutedAliasUpdateProcessorTest 
-Dtests.method=test -Dtests.seed=FEB73303DABB5D6C -Dtests.multiplier=2 
-Dtests.locale=fr-CH -Dtests.timezone=Indian/Chagos -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
a2fdbc93534263299f055a9e344c49bf29aebdf5
[repro] git checkout b4f8cd7ea6bcbf3974228857ff9d92b545e2c33e

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AutoscalingHistoryHandlerTest
[repro]   TestTriggerIntegration
[repro]   TestTolerantUpdateProcessorCloud
[repro]   TimeRoutedAliasUpdateProcessorTest
[repro]   DeleteShardTest
[repro]   TriggerIntegrationTest
[repro]   ComputePlanActionTest
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3293 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=40 
-Dtests.class="*.AutoscalingHistoryHandlerTest|*.TestTriggerIntegration|*.TestTolerantUpdateProcessorCloud|*.TimeRoutedAliasUpdateProcessorTest|*.DeleteShardTest|*.TriggerIntegrationTest|*.ComputePlanActionTest|*.TestLargeCluster"
 -Dtests.showOutput=onerror -Dtests.seed=FEB73303DABB5D6C -Dtests.multiplier=2 
-Dtests.locale=ar-EG -Dtests.timezone=Africa/Algiers -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 34559 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DeleteShardTest
[repro]   0/5 failed: org.apache.solr.cloud.TestTolerantUpdateProcessorCloud
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   0/5 failed: 
org.a

[jira] [Updated] (SOLR-11588) Add matrixMult Stream Evaluator to support matrix multiplication

2018-02-19 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11588:
--
Attachment: SOLR-11588.patch

> Add matrixMult Stream Evaluator to support matrix multiplication
> 
>
> Key: SOLR-11588
> URL: https://issues.apache.org/jira/browse/SOLR-11588
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11588.patch, SOLR-11588.patch, SOLR-11588.patch
>
>
> This ticket adds the matrixMult Stream Evaluator to support matrix 
> multiplication. The matrix multiplication implementation is provided by 
> *Apache Commons Math*.
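
For readers unfamiliar with the library, the snippet below only illustrates the 
Commons Math call that such an evaluator would delegate to; it is not the 
attached patch:

{code}
// Illustrative only: multiplying two matrices with Apache Commons Math 3.
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;

public class MatrixMultExample {
  public static void main(String[] args) {
    RealMatrix a = new Array2DRowRealMatrix(new double[][] {{1, 2}, {3, 4}});
    RealMatrix b = new Array2DRowRealMatrix(new double[][] {{5, 6}, {7, 8}});
    RealMatrix product = a.multiply(b); // standard row-by-column matrix product
    System.out.println(product);        // 2x2 result: 19, 22 / 43, 50
  }
}
{code}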






[JENKINS] Lucene-Solr-Tests-7.x - Build # 427 - Still Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/427/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost

Error Message:
The operations computed by ComputePlanAction should not be null 
SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, null], 
BEFORE_ACTION=[compute_plan, null]}

Stack Trace:
java.lang.AssertionError: The operations computed by ComputePlanAction should 
not be null SolrClientNodeStateProvider.DEBUG{AFTER_ACTION=[compute_plan, 
null], BEFORE_ACTION=[compute_plan, null]}
at 
__randomizedtesting.SeedInfo.seed([568E45F9882B878E:664EA47B005966D2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeWithMultipleReplicasLost(ComputePlanActionTest.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at jav

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 451 - Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/451/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:161)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:116)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:113)
  at sun.reflect.GeneratedConstructorAccessor155.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:793)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:855)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1108)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:978)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:863)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1039)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:161)
at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:116)
at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:113)
at sun.reflect.GeneratedConstructorAccessor155.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:793)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:855)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1108)
at org.apache.solr.core.SolrCore.(SolrCore.java:978)
at org.apache.solr.core.SolrCore.(SolrCore.java:863)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1039)
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:640)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([8957D0DA2B7CCF8C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:295)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotse

[jira] [Resolved] (SOLR-12004) Unable to write response, client closed connection or we are shutting down org.eclipse.jetty.io.EofException: Closed

2018-02-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12004.
---
Resolution: Invalid

Please raise this question on the user's list at solr-u...@lucene.apache.org 
(see http://lucene.apache.org/solr/community.html#mailing-lists-irc); there are 
a _lot_ more people watching that list who may be able to help. 

If it's determined that this really is a code issue in Solr and not a 
configuration/usage problem, we can raise a new JIRA or reopen this one.

Best,
Erick

> Unable to write response, client closed connection or we are shutting down 
> org.eclipse.jetty.io.EofException: Closed
> 
>
> Key: SOLR-12004
> URL: https://issues.apache.org/jira/browse/SOLR-12004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, documentation, SolrJ
>Affects Versions: 6.6
> Environment: Kernal version : Linux qa-solr-lx21 
> 4.4.103-92.56-default #1 SMP Wed Dec 27 16:24:31 UTC 2017 (2fd2155) x86_64 
> x86_64 x86_64 GNU/Linux
> Solr version :6.6
> CPU :6 
>Reporter: sidharth aggarwal
>Priority: Blocker
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hello, we are getting the error below while indexing (basically tagging them): 
>  
> o.a.s.s.HttpSolrCall Unable to write response, client closed connection or we 
> are shutting down
> org.eclipse.jetty.io.EofException: Closed
>  at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:620)
>  at 
> org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:55)
>  at 
> org.apache.solr.response.QueryResponseWriterUtil$1.write(QueryResponseWriterUtil.java:54)
>  at java.io.OutputStream.write(OutputStream.java:116)
>  at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>  at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
>  at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
>  at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
>  at org.apache.solr.util.FastWriter.flush(FastWriter.java:140)
>  at org.apache.solr.util.FastWriter.flushBuffer(FastWriter.java:154)
>  at 
> org.apache.solr.response.TextResponseWriter.close(TextResponseWriter.java:93)
>  at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:73)
>  at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>  at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>  at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>  at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>  at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>  at org.eclipse.jetty.server.Server.handle(Server.java:534)
>  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> 

[jira] [Resolved] (SOLR-11961) group.query and sort with function getting error in solrcloud

2018-02-19 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11961.
---
Resolution: Invalid

Well, then please nudge people on the user's list. The JIRA system is reserved 
for known bugs, and this does not (yet) qualify.

bq: if I run the same query on a sudo node it is working, please help me on this

Then it's highly likely this is a permissions issue: you installed Solr as, say, 
root (or at least as a user who has permissions to some directory) and are now 
running Solr as a user who does not have those permissions.

> group.query and sort with function getting error in solrcloud
> -
>
> Key: SOLR-11961
> URL: https://issues.apache.org/jira/browse/SOLR-11961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 6.4, 6.4.2
>Reporter: adeppa
>Priority: Major
> Attachments: Screen Shot 2018-02-13 at 11.41.45 AM.png
>
>
> While querying, the combination of group.query and a sort function is not 
>  working; we are getting the error below. 
>  
> Environment : 
>  Solr 6.4.2 and solr cloud mode with two shards and replication factor 2 
>  AWS  with ubuntu 
> query :/solr/qa/select?fq=((level:*.RL AND 
>  im_field_destination_category:7845 AND im_field_geography:6937 AND 
>  im_field_legacy_category:(7875 OR 12949 OR 7902 OR 12954) AND 
>  im_field_report_research_type:7854) OR (im_field_destination_category:7845 
>  AND im_field_geography:6937 AND im_field_legacy_category:(7875 OR 12949 OR 
>  7902 OR 12954) AND im_field_report_research_type:7855 AND 
>  ${sku}))&group.query= im_field_deliverable_type:(12941)&group.query= 
>  
> im_field_deliverable_type:(12941)&group=true&indent=on&q=*:*&sku=sm_field_sku:(manpq7416
>  
>  OR TTPMUS0005A OR TTPXSI1015US OR TTPMUS0004B OR 
>  TTPDPRUS0215)&sort=product(if(exists(query(\{!v="${sku}"})),1,0),2) 
>  desc&wt=json 
> if I run the same query on a sudo node it is working, please help me on this 
> Error: 
> { 
>    "responseHeader":{ 
>      "zkConnected":true, 
>      "status":500, 
>      "QTime":8, 
>      "params":{ 
>        "q":"*:*", 
>        "indent":"on", 
>        "fq":"((level:*.RL AND im_field_destination_category:7845 AND 
>  im_field_geography:6937 AND im_field_legacy_category:(7875 OR 12949 OR 7902 
>  OR 12954) AND im_field_report_research_type:7854) OR 
>  (im_field_destination_category:7845 AND im_field_geography:6937 AND 
>  im_field_legacy_category:(7875 OR 12949 OR 7902 OR 12954) AND 
>  im_field_report_research_type:7855 AND ${sku}))", 
>        "sort":"product(if(exists(query(\{!v=\"${sku}\"})),1,0),2) desc", 
>        "group.query":[" im_field_deliverable_type:(12941)", 
>          " im_field_deliverable_type:(12941)"], 
>        "sku":"sm_field_sku:(manpq7416 OR TTPMUS0005A OR TTPXSI1015US OR 
>  TTPMUS0004B OR TTPDPRUS0215)", 
>        "wt":"json", 
>        "_":"1518098081571", 
>        "group":"true"}}, 
>    "error":{ 
>      "metadata":[ 
>        "error-class","org.apache.solr.common.SolrException", 
>        
>  
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
>  
>      "msg":"org.apache.solr.client.solrj.SolrServerException: No live 
>  SolrServers available to handle this 
>  request:[[http://172.22.0.231:8983/solr/qa_shard1_replica2], 
>  [http://172.22.0.231:8983/solr/qa_shard2_replica2], 
>  [http://172.22.1.249:8983/solr/qa_shard1_replica3]]", 
>      "trace":"org.apache.solr.common.SolrException: 
>  org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
>  available to handle this 
>  request:[[http://172.22.0.231:8983/solr/qa_shard1_replica2], 
>  [http://172.22.0.231:8983/solr/qa_shard2_replica2], 
>  [http://172.22.1.249:8983/solr/qa_shard1_replica3]]\n\tat 
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:415)\n\tat
>  
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)\n\tat
>  
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2299)\n\tat 
>  org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)\n\tat 
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\n\tat
>  
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)\n\tat
>  
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
>  
> org.eclipse.jetty.security.SecurityHandle

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21492 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21492/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:39733_solr, 
127.0.0.1:44219_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/16)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   "core":"testSimple1_shard1_replica_n1",   
"base_url":"https://127.0.0.1:44219/solr";,   
"node_name":"127.0.0.1:44219_solr",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node12":{   
"core":"testSimple1_shard1_replica_n11",   
"base_url":"https://127.0.0.1:44219/solr";,   
"node_name":"127.0.0.1:44219_solr",   "state":"active",   
"type":"NRT"}}}, "shard2":{   "range":"0-7fff",   
"state":"active",   "replicas":{ "core_node7":{   
"core":"testSimple1_shard2_replica_n4",   
"base_url":"https://127.0.0.1:44219/solr";,   
"node_name":"127.0.0.1:44219_solr",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node10":{   
"core":"testSimple1_shard2_replica_n9",   
"base_url":"https://127.0.0.1:33881/solr";,   
"node_name":"127.0.0.1:33881_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"2",  
 "autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:39733_solr, 127.0.0.1:44219_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/16)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"testSimple1_shard1_replica_n1",
  "base_url":"https://127.0.0.1:44219/solr";,
  "node_name":"127.0.0.1:44219_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node12":{
  "core":"testSimple1_shard1_replica_n11",
  "base_url":"https://127.0.0.1:44219/solr";,
  "node_name":"127.0.0.1:44219_solr",
  "state":"active",
  "type":"NRT"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node7":{
  "core":"testSimple1_shard2_replica_n4",
  "base_url":"https://127.0.0.1:44219/solr";,
  "node_name":"127.0.0.1:44219_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node10":{
  "core":"testSimple1_shard2_replica_n9",
  "base_url":"https://127.0.0.1:33881/solr";,
  "node_name":"127.0.0.1:33881_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([34BD3B4298873B60:C0E1FBCBF74EFB1]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:103)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.Te

[JENKINS] Lucene-Solr-Tests-master - Build # 2349 - Unstable

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2349/

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([94CC9EF206D4B874:DA673FB5596BC287]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet(CollectionsAPISolrJTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([94CC9EF206D4B874:1C98A128A828D58C]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.As

[jira] [Created] (SOLR-12004) Unable to write response, client closed connection or we are shutting down org.eclipse.jetty.io.EofException: Closed

2018-02-19 Thread sidharth aggarwal (JIRA)
sidharth aggarwal created SOLR-12004:


 Summary: Unable to write response, client closed connection or we 
are shutting down org.eclipse.jetty.io.EofException: Closed
 Key: SOLR-12004
 URL: https://issues.apache.org/jira/browse/SOLR-12004
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: config-api, documentation, SolrJ
Affects Versions: 6.6
 Environment: Kernel version: Linux qa-solr-lx21 4.4.103-92.56-default 
#1 SMP Wed Dec 27 16:24:31 UTC 2017 (2fd2155) x86_64 x86_64 x86_64 GNU/Linux



Solr version: 6.6

CPU: 6


Reporter: sidharth aggarwal


Hello, we are getting the below error while indexing (basically tagging them):

 

o.a.s.s.HttpSolrCall Unable to write response, client closed connection or we 
are shutting down
org.eclipse.jetty.io.EofException: Closed
 at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:620)
 at 
org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:55)
 at 
org.apache.solr.response.QueryResponseWriterUtil$1.write(QueryResponseWriterUtil.java:54)
 at java.io.OutputStream.write(OutputStream.java:116)
 at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
 at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
 at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
 at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
 at org.apache.solr.util.FastWriter.flush(FastWriter.java:140)
 at org.apache.solr.util.FastWriter.flushBuffer(FastWriter.java:154)
 at 
org.apache.solr.response.TextResponseWriter.close(TextResponseWriter.java:93)
 at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:73)
 at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
 at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
 at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
 at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
 at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
 at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
 at org.eclipse.jetty.server.Server.handle(Server.java:534)
 at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
 at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
 at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
 at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.lang.Thread.run(Thread.java:748)






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1687 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1687/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([9F16DB95344B09D5:8B5E80C0174CB4CB]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandler: 1) Thread[id=23476, 
name=qtp887737483-23476, state=TIMED_W

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1481 - Still Failing

2018-02-19 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1481/

2 tests failed.
FAILED:  org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([F9067CED3D4B4E18:7E510162AC123298]:0)
at java.util.Arrays.copyOf(Arrays.java:3236)
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:285)
at 
org.apache.lucene.codecs.memory.DirectPostingsFormat$DirectField.(DirectPostingsFormat.java:354)
at 
org.apache.lucene.codecs.memory.DirectPostingsFormat$DirectFields.(DirectPostingsFormat.java:132)
at 
org.apache.lucene.codecs.memory.DirectPostingsFormat.fieldsProducer(DirectPostingsFormat.java:116)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:293)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:373)
at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:112)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:78)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:208)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4586)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4081)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2245)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5096)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1730)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1462)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:190)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:160)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:75)
at 
org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig(TestInetAddressRangeQueries.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)


FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_idx

Error Message:
Some docs had errors -- check logs expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<1>
at 
__randomizedtesting.SeedInfo.seed([246B70166FE67079:B11705DEB33ADD98]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:337)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_idx(TestStressCloudBlindAtomicUpdates.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 461 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/461/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C63F959953740A23:7C33A2160C9CDC6C]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger(TriggerIntegrationTest.java:1585)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<5> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([C63F959953

[jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16369114#comment-16369114
 ] 

Minoru Osuka commented on SOLR-11795:
-

Hi [~koji],

I attached the latest patch (SOLR-11795-6.patch).
 I added the Ref Guide content and images; it also includes test code that checks 
for scrape errors.
 Please check this patch file.
{code:java}
$ git apply /path/to/SOLR-11795-6.patch
{code}
Thanks,

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, SOLR-11795.patch, 
> solr-dashboard.png, solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21491 - Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21491/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:33891_solr, 
127.0.0.1:41121_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/15)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node3":{   
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node3/data/",
   "base_url":"https://127.0.0.1:33891/solr";,   
"node_name":"127.0.0.1:33891_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node3/data/tlog",
   "core":"testSimple1_shard1_replica_n1",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node5":{   
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node5/data/",
   "base_url":"https://127.0.0.1:33891/solr";,   
"node_name":"127.0.0.1:33891_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node5/data/tlog",
   "core":"testSimple1_shard1_replica_n2",   
"shared_storage":"true",   "state":"active"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node7":{   
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node7/data/",
   "base_url":"https://127.0.0.1:33891/solr";,   
"node_name":"127.0.0.1:33891_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node7/data/tlog",
   "core":"testSimple1_shard2_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}, "core_node8":{   
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node8/data/",
   "base_url":"https://127.0.0.1:40001/solr";,   
"node_name":"127.0.0.1:40001_solr",   "type":"NRT",   
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node8/data/tlog",
   "core":"testSimple1_shard2_replica_n6",   
"shared_storage":"true",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:33891_solr, 127.0.0.1:41121_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/15)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node3":{
  
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node3/data/",
  "base_url":"https://127.0.0.1:33891/solr";,
  "node_name":"127.0.0.1:33891_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node3/data/tlog",
  "core":"testSimple1_shard1_replica_n1",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"},
"core_node5":{
  
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node5/data/",
  "base_url":"https://127.0.0.1:33891/solr";,
  "node_name":"127.0.0.1:33891_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node5/data/tlog",
  "core":"testSimple1_shard1_replica_n2",
  "shared_storage":"true",
  "state":"active"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node7":{
  
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node7/data/",
  "base_url":"https://127.0.0.1:33891/solr";,
  "node_name":"127.0.0.1:33891_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node7/data/tlog",
  "core":"testSimple1_shard2_replica_n4",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"},
"core_node8":{
  
"dataDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node8/data/",
  "base_url":"https://127.0.0.1:40001/solr";,
  "node_name":"127.0.0.1:40001_solr",
  "type":"NRT",
  
"ulogDir":"hdfs://localhost.localdomain:37857/data/testSimple1/core_node8/data/tlog",
  "core":"testSimple1_shard2_replica_n6",
  "shared_storage

[jira] [Updated] (SOLR-11795) Add Solr metrics exporter for Prometheus

2018-02-19 Thread Minoru Osuka (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Minoru Osuka updated SOLR-11795:

Attachment: SOLR-11795-6.patch

> Add Solr metrics exporter for Prometheus
> 
>
> Key: SOLR-11795
> URL: https://issues.apache.org/jira/browse/SOLR-11795
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.2
>Reporter: Minoru Osuka
>Assignee: Koji Sekiguchi
>Priority: Minor
> Attachments: SOLR-11795-2.patch, SOLR-11795-3.patch, 
> SOLR-11795-4.patch, SOLR-11795-5.patch, SOLR-11795-6.patch, SOLR-11795.patch, 
> solr-dashboard.png, solr-exporter-diagram.png
>
>
> I'd like to monitor Solr using Prometheus and Grafana.
> I've already created a Solr metrics exporter for Prometheus. I'd like to 
> contribute it to the contrib directory if you don't mind.
> !solr-exporter-diagram.png|thumbnail!
> !solr-dashboard.png|thumbnail!






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4449 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4449/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestConfigSetImmutable

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.core.TestConfigSetImmutable: 1) Thread[id=1614, 
name=qtp96979246-1614, state=TIMED_WAITING, group=TGRP-TestConfigSetImmutable]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestConfigSetImmutable: 
   1) Thread[id=1614, name=qtp96979246-1614, state=TIMED_WAITING, 
group=TGRP-TestConfigSetImmutable]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A97F29924D5715EC]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestConfigSetImmutable

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1614, name=qtp96979246-1614, state=TIMED_WAITING, 
group=TGRP-TestConfigSetImmutable] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1614, name=qtp96979246-1614, state=TIMED_WAITING, 
group=TGRP-TestConfigSetImmutable]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([A97F29924D5715EC]:0)




Build Log:
[...truncated 12143 lines...]
   [junit4] Suite: org.apache.solr.core.TestConfigSetImmutable
   [junit4]   2> Creating dataDir: 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/temp/solr.core.TestConfigSetImmutable_A97F29924D5715EC-001/init-core-data-001
   [junit4]   2> 147889 WARN  
(SUITE-TestConfigSetImmutable-seed#[A97F29924D5715EC]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 147890 INFO  
(SUITE-TestConfigSetImmutable-seed#[A97F29924D5715EC]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 147894 INFO  
(SUITE-TestConfigSetImmutable-seed#[A97F29924D5

Re: lucene-solr:master: Avoid thread contention in LRUQueryCache test

2018-02-19 Thread Adrien Grand
Thanks Alan!

On Mon, Feb 19, 2018 at 10:50,  wrote:

> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master 34d3282ed -> a2fdbc935
>
>
> Avoid thread contention in LRUQueryCache test
>
>
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a2fdbc93
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a2fdbc93
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a2fdbc93
>
> Branch: refs/heads/master
> Commit: a2fdbc93534263299f055a9e344c49bf29aebdf5
> Parents: 34d3282
> Author: Alan Woodward 
> Authored: Mon Feb 19 09:48:32 2018 +
> Committer: Alan Woodward 
> Committed: Mon Feb 19 09:50:12 2018 +
>
> --
>  .../src/test/org/apache/lucene/search/TestLRUQueryCache.java   | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> --
>
>
>
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a2fdbc93/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
> --
> diff --git
> a/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
> b/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
> index f6b1c73..eac8b4e 100644
> --- a/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
> +++ b/lucene/core/src/test/org/apache/lucene/search/TestLRUQueryCache.java
> @@ -1475,7 +1475,11 @@ public class TestLRUQueryCache extends
> LuceneTestCase {
>  w.addDocument(new Document());
>  w.commit();
>  DirectoryReader reader = DirectoryReader.open(w);
> -IndexSearcher searcher = newSearcher(reader);
> +
> +// Don't use newSearcher(), because that will sometimes use an
> ExecutorService, and
> +// we need to be single threaded to ensure that LRUQueryCache doesn't
> skip the cache
> +// due to thread contention
> +IndexSearcher searcher = new AssertingIndexSearcher(random(), reader);
>  searcher.setQueryCachingPolicy(QueryCachingPolicy.ALWAYS_CACHE);
>
>  LRUQueryCache cache = new LRUQueryCache(1, 1, context -> true,
> Float.POSITIVE_INFINITY);
>
>


[jira] [Created] (LUCENE-8177) BlockMaxConjunctionScorer should compute better lower bounds of the required scores

2018-02-19 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8177:


 Summary: BlockMaxConjunctionScorer should compute better lower 
bounds of the required scores
 Key: LUCENE-8177
 URL: https://issues.apache.org/jira/browse/LUCENE-8177
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand


Assuming N scorers, {{BlockMaxConjunctionScorer}} computes a lower bound of the 
sum of scores for scorers 0..i, for any given 0 <= i < N.

For instance say you are searching for "quick AND fox", that a hit needs a 
score of 4 to be competitive and that "quick" contributes at most 3 to the 
score and "fox" 2. This means that for a given hit to be competitive, the sum 
of scores must be at least 4-maxScore(fox)=4-2=2 after having scored "quick" 
and 4 after having scored "fox".
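
To make the arithmetic concrete, here is a minimal sketch (not the actual Lucene code; the array names and literal values are purely illustrative) of deriving those running bounds by walking the clauses backwards:
{code:java}
// Hypothetical values for the "quick AND fox" example above.
float[] maxScores = {3f, 2f};  // "quick" contributes at most 3, "fox" at most 2
float minScore = 4f;           // a hit needs a total score of at least 4

// minRequiredAfter[i] = lower bound on the partial sum after scoring clauses 0..i
double[] minRequiredAfter = new double[maxScores.length];
double bound = minScore;
for (int i = maxScores.length - 1; i >= 0; i--) {
  minRequiredAfter[i] = bound;  // clauses after i can still add at most (minScore - bound)
  bound -= maxScores[i];        // clause i itself may contribute up to maxScores[i]
}
// minRequiredAfter == {2.0, 4.0}: at least 2 after "quick", at least 4 after "fox"
{code}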

Currently we have this in BlockMaxConjunctionScorer:
{code:java}
// Also compute the minimum required scores for a hit to be competitive
// A double that is less than 'score' might still be converted to 'score'
// when casted to a float, so we go to the previous float to avoid this issue
minScores[minScores.length - 1] = minScore > 0 ? Math.nextDown(minScore) : 0;
{code}

We currently use {{Math.nextDown(float)}} to be safe, but we would get a better 
bound by computing the lowest double that is converted to {{minScore}} when 
cast to a float.
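
As a purely illustrative sketch (the helper name and the bit-level binary search below are my own, not part of any patch), one way to find that lowest double exploits the fact that the double-to-float cast is monotonic, so the doubles that round to {{minScore}} form a contiguous range whose raw bit patterns can be binary-searched:
{code:java}
// Hypothetical helper: smallest double d such that (float) d == target.
// Assumes target is finite and > 0 (true for minScore in this context).
static double lowestDoubleCastingTo(float target) {
  // For positive finite doubles, the raw bit patterns order the same way as
  // the values, and the double -> float cast is monotonic, so binary search works.
  long lo = Double.doubleToLongBits((double) Math.nextDown(target)); // casts below target
  long hi = Double.doubleToLongBits((double) target);                // casts exactly to target
  while (lo + 1 < hi) {
    long mid = lo + ((hi - lo) >>> 1);
    if ((float) Double.longBitsToDouble(mid) == target) {
      hi = mid; // mid still rounds to target; the boundary is at or below mid
    } else {
      lo = mid; // mid rounds below target
    }
  }
  return Double.longBitsToDouble(hi);
}
{code}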






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+43) - Build # 1387 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1387/
Java: 64bit/jdk-10-ea+43 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_stored_idx

Error Message:
Some docs had errors -- check logs expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<2>
at 
__randomizedtesting.SeedInfo.seed([134E42DF2363A3C8:32FEF2636C1E6E6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:337)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_stored_idx(TestStressCloudBlindAtomicUpdates.java:236)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 463 - Still Unstable!

2018-02-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/463/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestZkChroot

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_771783E0D20012AB-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_771783E0D20012AB-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_771783E0D20012AB-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestZkChroot_771783E0D20012AB-001\tempDir-001

at __randomizedtesting.SeedInfo.seed([771783E0D20012AB]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue.testDistributedQueueBlocking

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([771783E0D20012AB:32BDF1999652AEDF]:0)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueueBlocking(TestSimDistributedQueue.java:101)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Randomi

[jira] [Updated] (SOLR-11961) group.query and sort with function getting error in solrcloud

2018-02-19 Thread adeppa (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

adeppa updated SOLR-11961:
--
Component/s: SolrCloud

> group.query and sort with function getting error in solrcloud
> -
>
> Key: SOLR-11961
> URL: https://issues.apache.org/jira/browse/SOLR-11961
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 6.4, 6.4.2
>Reporter: adeppa
>Priority: Major
> Attachments: Screen Shot 2018-02-13 at 11.41.45 AM.png
>
>
> While querying with a combination of group.query and a sort function, the 
>  query is not working and I am getting the error below. 
>  
> Environment: 
>  Solr 6.4.2 in SolrCloud mode with two shards and replication factor 2 
>  AWS with Ubuntu 
> query :/solr/qa/select?fq=((level:*.RL AND 
>  im_field_destination_category:7845 AND im_field_geography:6937 AND 
>  im_field_legacy_category:(7875 OR 12949 OR 7902 OR 12954) AND 
>  im_field_report_research_type:7854) OR (im_field_destination_category:7845 
>  AND im_field_geography:6937 AND im_field_legacy_category:(7875 OR 12949 OR 
>  7902 OR 12954) AND im_field_report_research_type:7855 AND 
>  ${sku}))&group.query= im_field_deliverable_type:(12941)&group.query= 
>  
> im_field_deliverable_type:(12941)&group=true&indent=on&q=*:*&sku=sm_field_sku:(manpq7416
>  
>  OR TTPMUS0005A OR TTPXSI1015US OR TTPMUS0004B OR 
>  TTPDPRUS0215)&sort=product(if(exists(query(\{!v="${sku}"})),1,0),2) 
>  desc&wt=json 
> If I run the same query on a standalone (non-cloud) node it works; please help me with this. 
> Error: 
> { 
>    "responseHeader":{ 
>      "zkConnected":true, 
>      "status":500, 
>      "QTime":8, 
>      "params":{ 
>        "q":"*:*", 
>        "indent":"on", 
>        "fq":"((level:*.RL AND im_field_destination_category:7845 AND 
>  im_field_geography:6937 AND im_field_legacy_category:(7875 OR 12949 OR 7902 
>  OR 12954) AND im_field_report_research_type:7854) OR 
>  (im_field_destination_category:7845 AND im_field_geography:6937 AND 
>  im_field_legacy_category:(7875 OR 12949 OR 7902 OR 12954) AND 
>  im_field_report_research_type:7855 AND ${sku}))", 
>        "sort":"product(if(exists(query(\{!v=\"${sku}\"})),1,0),2) desc", 
>        "group.query":[" im_field_deliverable_type:(12941)", 
>          " im_field_deliverable_type:(12941)"], 
>        "sku":"sm_field_sku:(manpq7416 OR TTPMUS0005A OR TTPXSI1015US OR 
>  TTPMUS0004B OR TTPDPRUS0215)", 
>        "wt":"json", 
>        "_":"1518098081571", 
>        "group":"true"}}, 
>    "error":{ 
>      "metadata":[ 
>        "error-class","org.apache.solr.common.SolrException", 
>        
>  
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
>  
>      "msg":"org.apache.solr.client.solrj.SolrServerException: No live 
>  SolrServers available to handle this 
>  request:[[http://172.22.0.231:8983/solr/qa_shard1_replica2], 
>  [http://172.22.0.231:8983/solr/qa_shard2_replica2], 
>  [http://172.22.1.249:8983/solr/qa_shard1_replica3]]", 
>      "trace":"org.apache.solr.common.SolrException: 
>  org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
>  available to handle this 
>  request:[[http://172.22.0.231:8983/solr/qa_shard1_replica2], 
>  [http://172.22.0.231:8983/solr/qa_shard2_replica2], 
>  [http://172.22.1.249:8983/solr/qa_shard1_replica3]]\n\tat 
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:415)\n\tat
>  
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:166)\n\tat
>  
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2299)\n\tat 
>  org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)\n\tat 
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)\n\tat
>  
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)\n\tat
>  
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat
>  
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
>  
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
>  
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
>  
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
>  
> org.eclipse.jet
