[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515411#comment-15515411
 ] 

ASF subversion and git services commented on SOLR-9330:
---

Commit b50b9106f821915ced14a3efc1e09c265d648fa8 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b50b910 ]

SOLR-9330: Fix AlreadyClosedException on admin/mbeans?stats=true


> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> Sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, this happens because of the 
> async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans
> * If a statistics request happens before the old searcher is completely removed 
> from everywhere, this exception can occur.
> What do you think about introducing a new parameter for the reload request that 
> makes it fully synchronous? Basically it would force the reload to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null
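The proposal can be illustrated with plain java.util.concurrent primitives. The sketch below is an analogy, not actual Solr code (the class and method names are hypothetical): the reload submits the register-new/close-old swap to a background thread, and the synchronous variant simply blocks on the returned Future, mirroring a non-null waitSearcher.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative analogy of the proposed synchronous reload, not actual Solr
// code: the reload only returns after the new "searcher" is registered and the
// old one is closed, so a follow-up statistics request can never observe a
// searcher that is mid-swap.
public class SyncReloadSketch {
    static class Searcher {
        volatile boolean closed = false;
        void close() { closed = true; }
    }

    static final ExecutorService pool = Executors.newSingleThreadExecutor();
    static final AtomicReference<Searcher> registered =
            new AtomicReference<>(new Searcher());

    // Analogous to passing a non-null waitSearcher Future to
    // SolrCore#getSearcher: the caller blocks until the swap has completed.
    static void reloadSynchronously() throws Exception {
        Future<?> waitSearcher = pool.submit(() -> {
            Searcher old = registered.getAndSet(new Searcher()); // register new
            old.close();                                         // then close old
        });
        waitSearcher.get(); // waitSearcher != null: block until the swap is done
    }

    public static void main(String[] args) throws Exception {
        reloadSynchronously();
        // After a synchronous reload the registered searcher is always open.
        System.out.println(registered.get().closed); // prints "false"
        pool.shutdown();
    }
}
```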



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Shinichiro Abe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515325#comment-15515325
 ] 

Shinichiro Abe edited comment on SOLR-9542 at 9/23/16 4:26 AM:
---

[~hgadre], I understood, thanks.
IIUC, in SolrJ the jackson library is used by DelegationTokenResponse via 
ObjectMapper.
It would be nice if we could replace jackson with noggit, for instance 
Utils.fromJSON(InputStream is).
It's OK if SolrJ tests depend on jackson or guava, but SolrJ itself should not, 
unless using smile (BTW, jackson-dataformat-smile is missing from the SolrJ 
deps), IMO.


was (Author: shinichiro abe):
[~hgadre], I understood, thanks.
IIUC, in SolrJ the jackson library is used by DelegationTokenResponse via 
ObjectMapper.
It would be nice if we could replace jackson with noggit, for instance 
Utils.fromJSON(InputStream is).
It's OK if SolrJ tests depend on jackson or guava, but SolrJ itself should not, 
IMO.

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support require 
> the Solr server to have the old jackson library added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}

[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Shinichiro Abe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515325#comment-15515325
 ] 

Shinichiro Abe commented on SOLR-9542:
--

[~hgadre], I understood, thanks.
IIUC, in SolrJ the jackson library is used by DelegationTokenResponse via 
ObjectMapper.
It would be nice if we could replace jackson with noggit, for instance 
Utils.fromJSON(InputStream is).
It's OK if SolrJ tests depend on jackson or guava, but SolrJ itself should not, 
IMO.

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support require 
> the Solr server to have the old jackson library added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9537) Support facet scoring with the scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-9537:


Assignee: Joel Bernstein

> Support facet scoring with the scoreNodes expression
> 
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9537.patch
>
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9537) Support facet scoring with the scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9537:
-
Summary: Support facet scoring with the scoreNodes expression  (was: 
Scoring facets with scoreNodes expression)

> Support facet scoring with the scoreNodes expression
> 
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9537.patch
>
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9537:
-
Attachment: SOLR-9537.patch

> Scoring facets with scoreNodes expression
> -
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9537.patch
>
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515282#comment-15515282
 ] 

Joel Bernstein edited comment on SOLR-9537 at 9/23/16 3:39 AM:
---

Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(basket, 
 q="product:product3", 
 buckets="product", 
 bucketSorts="count(*) desc", 
 bucketSizeLimit=100, count(*)))
{code}


was (Author: joel.bernstein):
Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(collection1, 
  q="product_ss:product3", 
  buckets="product_ss", 
  bucketSorts="count(*) desc", 
  bucketSizeLimit=100, count(*)))
{code}

> Scoring facets with scoreNodes expression
> -
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515282#comment-15515282
 ] 

Joel Bernstein edited comment on SOLR-9537 at 9/23/16 3:39 AM:
---

Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(basket, 
 q="product:product3", 
 buckets="product", 
 bucketSorts="count(*) desc", 
 bucketSizeLimit=100, 
 count(*)))
{code}


was (Author: joel.bernstein):
Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(basket, 
 q="product:product3", 
 buckets="product", 
 bucketSorts="count(*) desc", 
 bucketSizeLimit=100, count(*)))
{code}

> Scoring facets with scoreNodes expression
> -
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515282#comment-15515282
 ] 

Joel Bernstein edited comment on SOLR-9537 at 9/23/16 3:38 AM:
---

Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(collection1, 
  q="product_ss:product3", 
  buckets="product_ss", 
  bucketSorts="count(*) desc", 
  bucketSizeLimit=100, count(*)))
{code}


was (Author: joel.bernstein):
Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(collection1, 
  q="product_ss:product3", 
  buckets="product_ss", 
  bucketSorts="count(*) desc", 
  bucketSizeLimit=100, count(*)))
{code}

> Scoring facets with scoreNodes expression
> -
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9537) Scoring facets with scoreNodes expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515282#comment-15515282
 ] 

Joel Bernstein commented on SOLR-9537:
--

Patch for this coming shortly. The syntax is:

{code}
scoreNodes(facet(collection1, 
  q="product_ss:product3", 
  buckets="product_ss", 
  bucketSorts="count(*) desc", 
  bucketSizeLimit=100, count(*)))
{code}

> Scoring facets with scoreNodes expression
> -
>
> Key: SOLR-9537
> URL: https://issues.apache.org/jira/browse/SOLR-9537
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> SOLR-9193 introduced the scoreNodes expression to find the most interesting 
> relationships in a distributed graph.
> With a small adjustment scoreNodes can be made to easily wrap the facet() 
> expression, to find the most interesting facets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1120 - Still Failing

2016-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1120/

270 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.BasicFunctionalityTest

Error Message:
java.lang.NullPointerException

Stack Trace:
java.lang.RuntimeException: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([219E416F25A4A93]:0)
at org.apache.solr.util.TestHarness.createConfig(TestHarness.java:75)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:602)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:426)
at 
org.apache.solr.BasicFunctionalityTest.beforeTests(BasicFunctionalityTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.solr.core.SolrResourceLoader.addToClassLoader(SolrResourceLoader.java:202)
at 
org.apache.solr.core.SolrResourceLoader.<init>(SolrResourceLoader.java:178)
at 
org.apache.solr.core.SolrResourceLoader.<init>(SolrResourceLoader.java:142)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:171)
at org.apache.solr.util.TestHarness.createConfig(TestHarness.java:73)
... 29 more


FAILED:  junit.framework.TestSuite.org.apache.solr.ConvertedLegacyTest

Error Message:
java.lang.NullPointerException

Stack Trace:
java.lang.RuntimeException: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([219E416F25A4A93]:0)
at org.apache.solr.util.TestHarness.createConfig(TestHarness.java:75)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:602)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:426)
at 
org.apache.solr.ConvertedLegacyTest.beforeTests(ConvertedLegacyTest.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 

[jira] [Resolved] (SOLR-9299) Allow Streaming Expressions to use Analyzers

2016-09-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9299.
--
Resolution: Duplicate

This issue is handled in SOLR-9258, so closing it out.

> Allow Streaming Expressions to use Analyzers
> 
>
> Key: SOLR-9299
> URL: https://issues.apache.org/jira/browse/SOLR-9299
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9299.patch
>
>
> As SOLR-9240 is close to completion it will be important for Streaming 
> Expressions to be able to analyze text fields. This ticket will add this 
> capability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514980#comment-15514980
 ] 

Hrishikesh Gadre commented on SOLR-9542:


[~shinichiro abe] BTW, SolrJ does not depend upon the older version of the 
jackson library.

https://github.com/apache/lucene-solr/blob/bede7aefa3b2294e869d7fa543417e160e3518f9/solr/solrj/ivy.xml#L44-L47
https://github.com/apache/lucene-solr/blob/bede7aefa3b2294e869d7fa543417e160e3518f9/solr/core/ivy.xml#L96-L97
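For readers without the linked files at hand, a SolrJ dependency declaration in ivy.xml looks roughly like the fragment below. This is an illustrative sketch in the style of those files, not the exact lines: SolrJ declares the newer com.fasterxml artifacts rather than the old org.codehaus.jackson ones that DelegationTokenAuthenticationHandler needs.

```xml
<!-- Illustrative ivy.xml fragment (not the exact lines from the linked files).
     SolrJ pulls in the modern jackson coordinates; nothing here provides
     org/codehaus/jackson/map/ObjectMapper. -->
<dependency org="com.fasterxml.jackson.core" name="jackson-core"
            rev="${/com.fasterxml.jackson.core/jackson-core}" conf="compile"/>
```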

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support require 
> the Solr server to have the old jackson library added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514946#comment-15514946
 ] 

Hrishikesh Gadre commented on SOLR-9542:


[~shinichiro abe]

bq. Currently jackson and guava are SolrJ dependencies for that plugin. Guava 
is used for only one annotation; it is a large jar and is usually supposed to 
be provided by the client program. If that plugin does not have a strong 
dependency, would you like to make those dependencies provided scope?

I think that guava dependency can be avoided by commenting out the 
VisibleForTesting annotation (since the code comment serves the same purpose as 
the annotation).

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support requires 
> the Solr server to have old jackson added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}
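The NoClassDefFoundError above only surfaces once a token operation reaches the Hadoop authentication handler. As an illustration (plain JDK, not actual Solr code; the class name is the one from the stack trace), the missing pre-2.x Jackson dependency can be probed up front:

```java
// Illustrative probe, not Solr code: check whether the old
// "org.codehaus.jackson" classes are on the classpath before enabling
// delegation token operations, instead of failing later with
// NoClassDefFoundError inside DelegationTokenAuthenticationHandler.
public class OldJacksonProbe {
    static boolean hasOldJackson() {
        try {
            Class.forName("org.codehaus.jackson.map.ObjectMapper");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("old jackson present: " + hasOldJackson());
    }
}
```

On a plain JDK without the old Jackson jar, the probe reports that the class is absent, which is exactly the condition that triggers the error reported here.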




[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1779 - Unstable!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1779/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

507 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CleanupOldIndexTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([39A48CDDC54845E3]:0)


FAILED:  org.apache.solr.cloud.CleanupOldIndexTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([39A48CDDC54845E3]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_FF86C5E88C152511-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_FF86C5E88C152511-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([FF86C5E88C152511]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Shinichiro Abe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514806#comment-15514806
 ] 

Shinichiro Abe commented on SOLR-9542:
--

bq. Adding jackson just for kerberosPlugin feels like an overkill.

So does SolrJ, I think. Currently jackson and guava are SolrJ dependencies for 
that plugin. Guava is used for only one annotation; it is a large jar and is 
usually supposed to be provided by the client program. If that plugin does not 
have a strong dependency, would you like to make those dependencies provided 
scope? Ref CONNECTORS-1338.

> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support requires 
> the Solr server to have old jackson added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}






Re: Progress on Moving Ref Guide

2016-09-22 Thread Shawn Heisey
On 9/22/2016 2:58 PM, Cassandra Targett wrote:
> This has allowed us to make a demo of the entire Ref Guide online, at
> http://home.apache.org/~ctargett/RefGuidePOC/jekyll-full/apache-solr-reference-guide.html.

Looks pretty good overall!

Are you ready for feedback on how the demo operates, or are you focusing
on other things right now?  I don't want to upset your process by
heaping information on you that's useless at the moment.

Thanks,
Shawn





[jira] [Commented] (SOLR-6468) Regression: StopFilterFactory doesn't work properly without enablePositionIncrements="false"

2016-09-22 Thread Roman Chyla (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514785#comment-15514785
 ] 

Roman Chyla commented on SOLR-6468:
---

Ha! :-)
I've found my own comment above; two years later I'm facing this situation 
again, and I had completely forgotten (truth be told, I preferred running the 
old Solr 4.x).

This is how the new Solr sees things:

A 350-MHz GBT Survey of 50 Faint Fermi γ ray Sources for Radio Millisecond 
Pulsars

is indexed as
```
null_1
1   :350|350mhz
2   :mhz|syn::mhz
3   :acr::gbt|gbt|syn::gbt|syn::green bank telescope
4   :survey|syn::survey
null_1
6   :50
```

The 1st and 5th positions are gaps, so the search for "350-MHz GBT Survey of 
50 Faint" will fail, because 'of' is a stopword and the stop filter will 
always increment the position (what is the purpose of a stop filter if it 
leaves gaps?).

Anyway, the solution with CharFilterFactory cannot work for me; I have to do 
this:
 
 1. search for synonyms (they can contain stopwords)
 2. remove stopwords
 3. search for other synonyms (that don't have stopwords)

I'm afraid real life is a little more complex than it seems; but there is a 
logic to your choices, Solr devs, and I'm afraid I can agree with you. People 
who understand the *why* will make it work again as it *should*. Others will 
happily keep using the 'simplified' version.
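The gap behavior described above can be made concrete with a small stand-alone simulation (plain Java, not the Lucene TokenStream API; the token terms are simplified from the example title): a stop filter that preserves position increments carries each removed stopword forward as a larger increment on the next surviving token.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Simulates how a position-increment-preserving stop filter assigns
// positions: each removed stopword bumps the increment of the next
// surviving token, leaving a "gap" that exact phrase queries trip over.
public class StopFilterGapDemo {
    record Token(String term, int position) {}

    static List<Token> filter(List<String> terms, Set<String> stopwords) {
        List<Token> out = new ArrayList<>();
        int pos = -1;         // last emitted position (positions start at 0)
        int pendingInc = 1;   // increment accumulated across stopwords
        for (String t : terms) {
            if (stopwords.contains(t)) {
                pendingInc++; // stopword consumed: widen the gap
            } else {
                pos += pendingInc;
                out.add(new Token(t, pos));
                pendingInc = 1;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "a 350mhz gbt survey of 50" with stopwords {a, of}
        List<Token> tokens = filter(
            Arrays.asList("a", "350mhz", "gbt", "survey", "of", "50"),
            Set.of("a", "of"));
        // Positions come out 1, 2, 3, 5: gaps remain at 0 and 4, so a
        // phrase query spanning the removed "of" cannot match exactly.
        tokens.forEach(t -> System.out.println(t.position() + ":" + t.term()));
    }
}
```

This mirrors the indexed output quoted above, where the 1st and 5th positions are gaps.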

> Regression: StopFilterFactory doesn't work properly without 
> enablePositionIncrements="false"
> 
>
> Key: SOLR-6468
> URL: https://issues.apache.org/jira/browse/SOLR-6468
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8.1, 4.9
>Reporter: Alexander S.
>
> Setup:
> * Schema version is 1.5
> * Field config:
> {code}
> <fieldType name="..." class="..." autoGeneratePhraseQueries="true">
>   <analyzer>
>     <tokenizer class="..." />
>     <filter class="solr.StopFilterFactory" words="..." ignoreCase="true" />
>   </analyzer>
> </fieldType>
> {code}
> * Stop words:
> {code}
> http 
> https 
> ftp 
> www
> {code}
> So very simple. In the index I have:
> * twitter.com/testuser
> All these queries do match:
> * twitter.com/testuser
> * com/testuser
> * testuser
> But none of these does:
> * https://twitter.com/testuser
> * https://www.twitter.com/testuser
> * www.twitter.com/testuser
> Debug output shows:
> "parsedquery_toString": "+(url_words_ngram:\"? twitter com testuser\")"
> But we need:
> "parsedquery_toString": "+(url_words_ngram:\"twitter com testuser\")"
> Complete debug outputs:
> * a valid search: 
> http://pastie.org/pastes/9500661/text?key=rgqj5ivlgsbk1jxsudx9za
> * an invalid search: 
> http://pastie.org/pastes/9500662/text?key=b4zlh2oaxtikd8jvo5xaww
> The complete discussion and explanation of the problem is here: 
> http://lucene.472066.n3.nabble.com/Help-with-StopFilterFactory-td4153839.html
> I didn't find a clear explanation of how we can upgrade Solr; there's no 
> replacement or workaround for this, so this is not just a major change but 
> a major disrespect to all existing Solr users who are using this feature.






[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514624#comment-15514624
 ] 

Jan Høydahl commented on SOLR-9534:
---

Documented options in RefGuide: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=50234737&selectedPageVersions=51&selectedPageVersions=50
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=32604193&selectedPageVersions=25&selectedPageVersions=24

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs
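The proposed -q / -V options amount to remapping the root logger level at startup. A minimal sketch, using java.util.logging as an illustrative stand-in for log4j (the flag names follow the issue text; the class name and mapping are assumptions):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the -q (quiet) / -V (verbose) convenience options:
// map the flag to a root logger level, defaulting to INFO.
public class StartupLogLevel {
    static Level levelFor(String flag) {
        if ("-q".equals(flag)) return Level.WARNING; // quiet: WARN
        if ("-V".equals(flag)) return Level.FINE;    // verbose: DEBUG-ish
        return Level.INFO;                           // default
    }

    public static void main(String[] args) {
        Level level = levelFor(args.length > 0 ? args[0] : "");
        Logger root = Logger.getLogger("");          // root logger
        root.setLevel(level);
        System.out.println("root logger level: " + root.getLevel());
    }
}
```

In log4j the equivalent step would adjust the RootLogger programmatically at startup, as the issue description suggests.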






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514578#comment-15514578
 ] 

Jan Høydahl commented on SOLR-8186:
---

Documented auto console log muting in RefGuide: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=32604193&selectedPageVersions=24&selectedPageVersions=23

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186-robustness.patch, SOLR-8186.patch, 
> SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.
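The proposal reduces to selecting a different log4j config file when running in the foreground. A sketch of that selection (file names are taken from the issue text; the class and how the start script passes the flag are assumptions):

```java
// Sketch: pick the log4j config file based on whether Solr runs in the
// foreground. "log4j-foreground.properties" keeps CONSOLE logging;
// "log4j.properties" would log only to the rotating file.
public class LogConfigChooser {
    static String configFor(boolean foreground) {
        return foreground ? "log4j-foreground.properties" // console + file
                          : "log4j.properties";           // file only
    }

    public static void main(String[] args) {
        boolean fg = args.length > 0 && args[0].equals("-f");
        System.out.println("-Dlog4j.configuration=file:" + configFor(fg));
    }
}
```

The start script would then pass the chosen file via the log4j.configuration system property, so background runs never duplicate log output to a non-rotating console capture.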






[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514552#comment-15514552
 ] 

ASF subversion and git services commented on SOLR-6677:
---

Commit dffbefa153ac6d86c60d09a1f69c1ba770e864ec in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dffbefa ]

SOLR-6677: Fix test failures related to nullpointer when printing core name in 
logs.

(cherry picked from commit bede7ae - which btw had wrong JIRA number..)


> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514550#comment-15514550
 ] 

Jan Høydahl commented on SOLR-6677:
---

Committed to master (but with wrong JIRA id):

Commit bede7aefa3b2294e869d7fa543417e160e3518f9 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bede7ae ]
SOLR-9534: Fix test failures related to nullpointer when printing core name in 
logs.

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Issue Comment Deleted] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9534:
--
Comment: was deleted

(was: Commit bede7aefa3b2294e869d7fa543417e160e3518f9 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bede7ae ]

SOLR-9534: Fix test failures related to nullpointer when printing core name in 
logs.
)

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514532#comment-15514532
 ] 

ASF subversion and git services commented on SOLR-9534:
---

Commit bede7aefa3b2294e869d7fa543417e160e3518f9 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bede7ae ]

SOLR-9534: Fix test failures related to nullpointer when printing core name in 
logs.


> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17877 - Still unstable!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17877/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

504 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_503B3A4CBA685D21-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_503B3A4CBA685D21-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([503B3A4CBA685D21]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J2/temp/solr.schema.TestICUCollationField_503B3A4CBA685D21-001/tempDir-001/collection1/conf/schema.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 

[jira] [Commented] (LUCENE-7457) Default doc values format should optimize for iterator access

2016-09-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514530#comment-15514530
 ] 

Michael McCandless commented on LUCENE-7457:


Thanks [~jpountz], this looks great!  Should we also increase the sparse 
threshold (currently 1%) when writing doc values?  Or we can wait for a 
follow-on issue...

> Default doc values format should optimize for iterator access
> -
>
> Key: LUCENE-7457
> URL: https://issues.apache.org/jira/browse/LUCENE-7457
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7457.patch
>
>
> In LUCENE-7407 we switched doc values consumption from random access API to 
> an iterator API, but nothing was done there to improve the codec.  We should 
> do that here.
> At a bare minimum we should fix the existing very-sparse case to be a true 
> iterator, and not wrapped with the silly legacy wrappers.
> I think we should also increase the threshold (currently 1%?) when we switch 
> from dense to sparse encoding.  This should fix LUCENE-7253, making merging 
> of sparse doc values efficient ("pay for what you use").
> I'm sure there are many other things to explore to let codecs "take 
> advantage" of the fact that they no longer need to offer random access to doc 
> values.
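The dense-vs-sparse decision described above boils down to a density check against a threshold. The sketch below is purely illustrative: none of these class or method names are Lucene APIs, and the 1% figure is just the threshold mentioned in the discussion.

```java
// Hypothetical illustration of the dense-vs-sparse encoding choice discussed
// above. The names here are invented for the sketch; they are not Lucene APIs.
final class DocValuesEncodingChoice {

  // The "currently 1%" threshold mentioned above: encode sparsely when
  // fewer than this fraction of documents carry a value.
  static final double SPARSE_THRESHOLD = 0.01;

  enum Encoding { DENSE, SPARSE }

  // docsWithValue: number of documents that actually have a doc value;
  // maxDoc: total number of documents in the segment.
  static Encoding choose(int docsWithValue, int maxDoc) {
    double density = (double) docsWithValue / maxDoc;
    return density < SPARSE_THRESHOLD ? Encoding.SPARSE : Encoding.DENSE;
  }

  public static void main(String[] args) {
    System.out.println(choose(5, 1000));   // 0.5% density -> SPARSE
    System.out.println(choose(500, 1000)); // 50% density  -> DENSE
  }
}
```

Raising the threshold, as proposed, simply moves the cutoff so that more moderately-sparse fields take the sparse path, which is what would make merging "pay for what you use".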



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5542) Explore making DVConsumer sparse-aware

2016-09-22 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-5542.

Resolution: Duplicate

Dup of LUCENE-7407.  We now pass a {{DocValuesProducer}} to all the 
{{addXYZField}} methods when writing doc values.

> Explore making DVConsumer sparse-aware
> --
>
> Key: LUCENE-5542
> URL: https://issues.apache.org/jira/browse/LUCENE-5542
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Reporter: Shai Erera
>
> Today the DVConsumer API requires the caller to pass a value for every document, 
> where {{null}} means "this doc has no value". The Codec can then choose how 
> to encode the values, i.e. whether it encodes a 0 for a numeric field, or 
> encodes the sparse docs. In practice, from what I see, we choose to encode 
> the 0s.
> I wonder whether adding e.g. an {{Iterable}} to 
> DVConsumer.addXYZField() would make a better API. The caller only 
> passes  pairs and it's up to the Codec to decide how it wants to 
> encode the missing values. Like, if a user's app truly has a sparse NDV, 
> IndexWriter doesn't need to "fill the gaps" artificially. It's the job of the 
> Codec.
> To be clear, I don't propose to change any Codec implementation in this issue 
> (w.r.t. sparse encoding - yes/no), only change the API to reflect that 
> sparseness. I think that if we ever want to encode sparse values, it will 
> be a more convenient API.
> Thoughts? I volunteer to do this work, but want to get others' opinion before 
> I start.
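The API shape proposed above can be sketched as an iterable of (docID, value) pairs, with the codec left to decide how to encode missing documents. This is an invented illustration under that proposal: none of these interface or class names are real Lucene types.

```java
// Hypothetical sketch of the sparse-aware consumer API proposed above: the
// caller passes only (docID, value) pairs, so IndexWriter no longer has to
// "fill the gaps" artificially. Names are invented; these are not Lucene APIs.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

interface SparseNumericConsumer {
  // Only documents that actually carry a value appear in the iterable;
  // encoding (or skipping) the missing docs is the codec's job.
  void addNumericField(String field, Iterable<Map.Entry<Integer, Long>> values);
}

class CollectingConsumer implements SparseNumericConsumer {
  final List<String> seen = new ArrayList<>();

  @Override
  public void addNumericField(String field, Iterable<Map.Entry<Integer, Long>> values) {
    for (Map.Entry<Integer, Long> e : values) {
      seen.add(field + "/" + e.getKey() + "=" + e.getValue());
    }
  }

  public static void main(String[] args) {
    CollectingConsumer c = new CollectingConsumer();
    // A truly sparse field: only docs 3 and 900 have a value out of the
    // whole segment; no zero-filling for the other documents.
    c.addNumericField("price", List.of(Map.entry(3, 10L), Map.entry(900, 25L)));
    System.out.println(c.seen); // [price/3=10, price/900=25]
  }
}
```

The point of the sketch is the contract, not the implementation: the consumer sees only real values, matching the "it's the job of the Codec" argument in the description.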






[jira] [Commented] (SOLR-7826) Permission issues when creating cores with bin/solr

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514495#comment-15514495
 ] 

Jan Høydahl commented on SOLR-7826:
---

Documented the {{-force}} flag and removed the warning box in the ref guide 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=50234737=50=49,
 since the script now warns the user itself :)

> Permission issues when creating cores with bin/solr
> ---
>
> Key: SOLR-7826
> URL: https://issues.apache.org/jira/browse/SOLR-7826
> Project: Solr
>  Issue Type: Improvement
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: newdev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-7826.patch, SOLR-7826.patch
>
>
> Ran into an interesting situation on IRC today.
> Solr has been installed as a service using the shell script 
> install_solr_service.sh ... so it is running as an unprivileged user.
> User is running "bin/solr create" as root.  This causes permission problems, 
> because the script creates the core's instanceDir with root ownership, then 
> when Solr is instructed to actually create the core, it cannot create the 
> dataDir.
> Enhancement idea:  When the install script is used, leave breadcrumbs 
> somewhere so that the "create core" section of the main script can find it 
> and su to the user specified during install.






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 404 - Still Unstable!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/404/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

507 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build/contrib/solr-analysis-extras/test/J0/temp/solr.analysis.TestFoldingMultitermExtrasQuery_5B5F7CD30C211DD4-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build/contrib/solr-analysis-extras/test/J0/temp/solr.analysis.TestFoldingMultitermExtrasQuery_5B5F7CD30C211DD4-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([5B5F7CD30C211DD4]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField

Error Message:
Can't load schema 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.schema.TestICUCollationField_5B5F7CD30C211DD4-001/tempDir-001/collection1/conf/schema.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 

[jira] [Commented] (SOLR-9508) Install script should check existence of tools, and add option to NOT start service

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514479#comment-15514479
 ] 

Jan Høydahl commented on SOLR-9508:
---

Updated the ref guide: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=50856198=48=47

> Install script should check existence of tools, and add option to NOT start 
> service
> ---
>
> Key: SOLR-9508
> URL: https://issues.apache.org/jira/browse/SOLR-9508
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9508.patch
>
>
> The {{install_solr_service.sh}} script should exit cleanly if tools like 
> {{tar}}, {{unzip}}, {{service}} or {{java}} are not available.
> Also, add a new switch {{-n}} to skip starting the service after 
> installation, which will make it easier to script installations which will 
> want to modify {{/etc/default/solr.in.sh}} before starting the service.






[jira] [Commented] (SOLR-9475) Add install script support for CentOS

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514468#comment-15514468
 ] 

Jan Høydahl commented on SOLR-9475:
---

Updated the ref guide: 
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=50856198=47=46

> Add install script support for CentOS
> -
>
> Key: SOLR-9475
> URL: https://issues.apache.org/jira/browse/SOLR-9475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Centos 7
>Reporter: Nitin Surana
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9475.patch, install_solr_service.sh
>
>
> [root@ns521582 tmp]# sudo ./install_solr_service.sh solr-6.2.0.tgz
> id: solr: no such user
> Creating new user: solr
> adduser: group '--disabled-password' does not exist
> Extracting solr-6.2.0.tgz to /opt
> Installing symlink /opt/solr -> /opt/solr-6.2.0 ...
> Installing /etc/init.d/solr script ...
> /etc/default/solr.in.sh already exist. Skipping install ...
> /var/solr/data/solr.xml already exists. Skipping install ...
> /var/solr/log4j.properties already exists. Skipping install ...
> chown: invalid spec: ‘solr:’
> ./install_solr_service.sh: line 322: update-rc.d: command not found
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> Service solr installed.
> Reference - 
> http://stackoverflow.com/questions/39320647/unable-to-create-user-when-installing-solr-6-2-0-on-centos-7






Progress on Moving Ref Guide

2016-09-22 Thread Cassandra Targett
A bit of radio silence on this, but Hoss & I have made some progress
since I first made the proposal last month.

Hoss helped me with scripts to make the conversion process easier and
more automated. The goal is to do this one time as quickly as
possible, so it was worth spending a bit of time on trying to get it
right.

Hoss also worked on a process that will automatically create a
hierarchy of pages for sidebar navigation and as the basis for the PDF
output (more on that below).

This has allowed us to make a demo of the entire Ref Guide online, at
http://home.apache.org/~ctargett/RefGuidePOC/jekyll-full/apache-solr-reference-guide.html.

A PDF version is available at
http://home.apache.org/~ctargett/RefGuidePOC/pdf/SolrRefGuide-all-0.0-DRAFT.pdf
(beware, it's currently 27Mb).

There are a few known issues that we will only be able to resolve
manually, once we actually do the conversion:

- Nested blocks of content in tables (like code blocks or NOTEs, etc.)
break the rows.
- Nested ordered lists (like 1, 2, a, b, 3, 4) get converted as a bad
quasi-ordered list (like 1, 2, 2, 2, 3, 4).

For both of these items, TODOs have been added to the new .adoc format
files so we can manually fix the problems.

Additionally, we are working through some bugs with inter- and
intra-document links. These are problematic, and we need to spend a
little bit more time on them. And we have a few other small issues
remaining, tracked (for now) in the GitHub repo:
https://github.com/ctargett/refguide-asciidoc-poc/issues.

We'd like your help with two things before moving to the next phase:

- Review the full guide online or in PDF and look for conversion
problems we may have overlooked so far (problems that aren't the ones
I just mentioned above).

- In order to automate building the sidebar nav & PDF, every parent
document includes a list of its children. We will need to maintain
this, although we can add some pre-publication scripts to check that all pages
are listed in some parent document. Do you think this is workable?
Have a better idea?

For a couple of examples, see:
--- 
https://raw.githubusercontent.com/ctargett/refguide-asciidoc-poc/master/confluence-export/converted-asciidoc/apache-solr-reference-guide.adoc
--- 
https://raw.githubusercontent.com/ctargett/refguide-asciidoc-poc/master/confluence-export/converted-asciidoc/solrcloud.adoc

Note the "page-children" at the top. That contains the list of
children pages that will be pulled into nav and the PDF.

Depending on your feedback, I'm hopeful the next phase will include
fixing some of the remaining issues, creating a branch to bring the
pages into the project, and figuring out where to host the pages
online.

Thanks,
Cassandra




[jira] [Updated] (SOLR-9499) Streaming Expression Cannot sort on aliased field

2016-09-22 Thread Gus Heck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-9499:
---
Attachment: SOLR-9499.patch

Patch to solve the problem described initially. It allows sorting on the 
original (non-aliased) name of the field, similar to a normal select. The broader 
issue that sort spec parsing is inconsistent with what's normally available for 
select (i.e. functions, etc.) is left for another time.

> Streaming Expression Cannot sort on aliased field 
> --
>
> Key: SOLR-9499
> URL: https://issues.apache.org/jira/browse/SOLR-9499
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2
> Environment: 6.2.0_RC1
>Reporter: Gus Heck
>Priority: Minor
> Attachments: SOLR-9499.patch
>
>
> An expression such as:
> {code}
> search(test, rows=99, q=table:article_p, 
> fl="yi_pw,id,article:article_id", sort="article asc") 
> or
> search(test, rows=99, q=table:article_p, 
> fl="yi_pw,id,article:article_id", sort="article_id asc") 
> {code}
> results in: 
> {code}
> {"result-set":{"docs":[
> {"EXCEPTION":"Fields in the sort spec must be included in the field 
> list:article","EOF":true}]}}
> or
> {"result-set":{"docs":[
> {"EXCEPTION":"Fields in the sort spec must be included in the field 
> list:article_id","EOF":true}]}}
> {code}
> and 
> {code}
> {"result-set":{"docs":[
> {"EXCEPTION":"Fields in the sort spec must be included in the field 
> list:article:article_id","EOF":true}]}}
> {code}
> yields 
> {code}
> {"result-set":{"docs":[
> {"EXCEPTION":"java.util.concurrent.ExecutionException: java.io.IOException: 
> --> http://10.1.3.9:8983/solr/test_shard1_replica1/:sort param could not be 
> parsed as a query, and is not a field that exists in the index: 
> article:article_id","EOF":true,"RESPONSE_TIME":5}]}}
> {code}
> If I sort by id instead, it all works fine. 
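One way to think about the fix is as an alias-resolution step before the "must be included in the field list" check. The sketch below is purely illustrative of that direction; the class and method names are invented and the real change lives in Solr's streaming-expression code (see the attached patch).

```java
// Hypothetical sketch: when fl contains "alias:field" entries, accept a sort
// on either the alias or the original field name by resolving it through the
// fl list before rejecting it. Invented names; not Solr's actual code.
import java.util.List;

class SortAliasResolver {

  // Returns the tuple field name the sort should actually use, or throws the
  // same style of error the issue reports when nothing in fl matches.
  static String resolve(String sortField, List<String> fl) {
    for (String entry : fl) {
      int colon = entry.indexOf(':');
      if (colon < 0) {
        if (entry.equals(sortField)) return entry; // plain field, direct match
      } else {
        String alias = entry.substring(0, colon);
        String original = entry.substring(colon + 1);
        // Tuples carry the alias, so both names resolve to the alias.
        if (alias.equals(sortField) || original.equals(sortField)) return alias;
      }
    }
    throw new IllegalArgumentException(
        "Fields in the sort spec must be included in the field list: " + sortField);
  }

  public static void main(String[] args) {
    List<String> fl = List.of("yi_pw", "id", "article:article_id");
    System.out.println(resolve("article", fl));    // article
    System.out.println(resolve("article_id", fl)); // article
    System.out.println(resolve("id", fl));         // id
  }
}
```

With this mapping, both `sort="article asc"` and `sort="article_id asc"` from the examples above would resolve to the aliased tuple field instead of failing validation.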






Re: [JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 860 - Still unstable!

2016-09-22 Thread Jan Høydahl
Looking at the test failures...
--
Jan Høydahl
Search Solution architect
Cominvent AS
www.cominvent.com
+47 90125809

> On 22 Sep 2016 at 21:52, Policeman Jenkins Server wrote:
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/860/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC
> 
> 506 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery
> 
> Error Message:
> Can't load schema 
> /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_DD0E992BF8552B45-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
>  null
> 
> Stack Trace:
> org.apache.solr.common.SolrException: Can't load schema 
> /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_DD0E992BF8552B45-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
>  null
>   at __randomizedtesting.SeedInfo.seed([DD0E992BF8552B45]:0)
>   at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
>   at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
>   at 
> org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
>   at 
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
>   at 
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
>   at 
> org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
>   at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
>   at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
>   at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
>   at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
>   at 
> org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
>   ... 34 more
> 
> 
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField
> 
> Error Message:
> Can't load schema 
> 

[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514404#comment-15514404
 ] 

Jan Høydahl commented on SOLR-6677:
---

Test failures:
{noformat}
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery
...
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more
{noformat}

Will commit a fix for the NullPointerException.

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> Most of what is printed is neither helpful nor useful. It's just noise.






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1778 - Failure!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1778/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

529 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_160271694761E049-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_160271694761E049-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([160271694761E049]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/contrib/solr-analysis-extras/test/J0/temp/solr.schema.TestICUCollationField_160271694761E049-001/tempDir-001/collection1/conf/schema.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 860 - Still unstable!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/860/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

506 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_DD0E992BF8552B45-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_DD0E992BF8552B45-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([DD0E992BF8552B45]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField

Error Message:
Can't load schema /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/contrib/solr-analysis-extras/test/J0/temp/solr.schema.TestICUCollationField_DD0E992BF8552B45-001/tempDir-001/collection1/conf/schema.xml: null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 

[jira] [Updated] (SOLR-9470) Deadlocked threads in recovery

2016-09-22 Thread Michael Braun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Braun updated SOLR-9470:

Attachment: solr-deadlock-2-r.txt

Replicated again - redacted thread dumps attached for relevant threads. Also 
confirmed we see some of the same lines that were shown in the relevant 
[SOLR-9278] deadlock ticket, where the index files can't be deleted, as shown 
below:

{code}
09-22 17:24:42.317  - we started the process

09-22 17:25:43.716 org.apache.solr.handler.IndexFetcher 
(recoveryExecutor-3-thread-1-processing-n:x.x.x.75:8983_solr 
x:collection_shard1_replica1 s:shard1 c:collection) [s:shard1] IndexFetcher 
unable to cleanup unused lucene index files so we must do a full copy instead 
globalRequestId: 
09-22 17:25:43.716 org.apache.solr.handler.IndexFetcher 
(recoveryExecutor-3-thread-1-processing-n:x.x.x.75:8983_solr 
x:collection_shard1_replica1 s:shard1 c:collection) [s:shard1] IndexFetcher 
slept for 3ms for unused lucene index files 
to be delete-able globalRequestId: 
INFO  09-22 17:25:43.864 org.apache.solr.update.DefaultSolrCoreState 
(recoveryExecutor-3-thread-1-processing-n:x.x.x.75:8983_solr 
x:collection_shard1_replica1 s:shard1 c:collection) [s:shard1] Rollback old 
IndexWriter... core=collection_shard1_replica1
 globalRequestId: 
 {code}

I'm hoping that the patch in SOLR-9278 is valid and would fix the problem?

> Deadlocked threads in recovery
> --
>
> Key: SOLR-9470
> URL: https://issues.apache.org/jira/browse/SOLR-9470
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Michael Braun
> Attachments: solr-deadlock-2-r.txt, solr-deadlock.txt
>
>
> Background: Booted up a cluster and replicas were in recovery. All replicas 
> recovered minus one, and it was hanging on HTTP requests. Issued shutdown and 
> solr would not shut down. Examined with JStack and found a deadlock had 
> occurred. The relevant thread information is attached. Some information has 
> been redacted as well (some custom URPs, IPs) from the stack traces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514268#comment-15514268
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit b957e2ed1f28038e6b0f07dc0f74319d89cb16c2 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b957e2e ]

SOLR-8029: testcases for configset api


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API:
> * {{/v2/<collection>/*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2/<core>/*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on the cluster not pertaining to any collection 
> or core, e.g. security, overseer ops, etc.
> This will be released as part of a major release. Check the link given below 
> for the full specification. Your comments are welcome.
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514242#comment-15514242
 ] 

ASF subversion and git services commented on SOLR-9534:
---

Commit 97bb81db1a84983137e44f0dd753c411c925a2ea in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=97bb81d ]

SOLR-9534: You can now set Solr's log level through environment variable 
SOLR_LOG_LEVEL and -q and -v options to bin/solr

(cherry picked from commit 73c2edd)


> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs
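A minimal sketch of the level mapping the ticket proposes ({{-V}} for verbose, {{-q}} for quiet), using java.util.logging as a JDK-only stand-in for log4j's RootLogger (FINE plays the role of DEBUG and WARNING of WARN); the class and flag-handling are illustrative, not Solr's actual implementation:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietVerbose {
    // Map the proposed bin/solr flags to logger levels; anything else keeps
    // the default INFO level.
    static Level levelFor(String opt) {
        if ("-V".equals(opt)) return Level.FINE;    // verbose ~ DEBUG
        if ("-q".equals(opt)) return Level.WARNING; // quiet ~ WARN
        return Level.INFO;                          // default
    }

    public static void main(String[] args) {
        Logger root = Logger.getLogger("");         // the root logger
        root.setLevel(levelFor(args.length > 0 ? args[0] : ""));
        System.out.println("root level: " + root.getLevel());
    }
}
```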



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread Jan Høydahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9534.
---
Resolution: Fixed

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514236#comment-15514236
 ] 

ASF subversion and git services commented on SOLR-9534:
---

Commit 73c2edddf01dbbd312d9101a9e1e1db1e4c7e770 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73c2edd ]

SOLR-9534: You can now set Solr's log level through environment variable 
SOLR_LOG_LEVEL and -q and -v options to bin/solr


> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> Could be we need to add some more package specific defaults in 
> log4j.properties to get the right mix of logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: field:* queries can be painfully slow if there are many terms.

2016-09-22 Thread Erick Erickson
Thanks Mike. I'm not sure this _should_ be fixed mind you, but thought I'd ask.

On Thu, Sep 22, 2016 at 10:16 AM, Michael McCandless
 wrote:
> You could index the prefix terms (edge ngrams), assuming your queries
> are prefix queries; this way there would typically be far fewer terms
> to visit than all 200 M terms.
>
> Auto-prefix terms also tried to solve this more "automatically", so
> you don't have to mess with edge ngrams, but we reverted it because of
> the added code complexity and lack of real-world use cases, especially
> once we switched numerics from postings to dimensional points.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Thu, Sep 22, 2016 at 1:01 PM, Erick Erickson  
> wrote:
>> In MultiTermConstantScoreWrapper there's this block around line 174 in 6x:
>>
>> do {
>>   docs = termsEnum.postings(docs, PostingsEnum.NONE);
>>   builder.add(docs);
>> } while (termsEnum.next() != null);
>>
>> In the case of lots and lots of terms in a multiValued field this can
>> take quite a bit of time. In my test case I have 100K docs with 200M
>> terms (pathological I understand, but it illustrates the issue). If
>> I'm reading this right it loops through all the terms and, for each
>> term, creates a sub-list of docs for the term and adds the sub-list to
>> the "master list". So a query like 'field:*' takes 20+ seconds.
>>
>> Is there anything we can/should do to short circuit this kind of
>> thing? In this case I got 200M terms by ngramming 3-32 (again, far too
>> many ngrams I understand). It's not clear to me whether it's an easy
>> check to say "stop when all the docs have been added to the master
>> list"
>>
>> I can raise a JIRA if it makes sense.
>>
>> For supporting this particular use-case, we could index a separate
>> field "has_field1_value" but the general case still holds.
>>
>> Erick
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
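The short-circuit Erick asks about ("stop when all the docs have been added to the master list") can be sketched like this, using a plain BitSet and int[] postings as hypothetical stand-ins for Lucene's DocIdSetBuilder and PostingsEnum, not the real MultiTermConstantScoreWrapper code:

```java
import java.util.BitSet;
import java.util.List;

public class ShortCircuitUnion {
    // postingsPerTerm: the doc ids for each matching term; maxDoc: number of
    // docs in the segment. Stop walking terms as soon as every doc is matched,
    // so field:* over millions of terms need not visit them all.
    static BitSet unionUntilFull(List<int[]> postingsPerTerm, int maxDoc) {
        BitSet docs = new BitSet(maxDoc);
        for (int[] postings : postingsPerTerm) {
            for (int doc : postings) {
                docs.set(doc);
            }
            if (docs.cardinality() == maxDoc) {
                break; // every doc is already in the set: remaining terms add nothing
            }
        }
        return docs;
    }
}
```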

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7457) Default doc values format should optimize for iterator access

2016-09-22 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7457:
-
Attachment: LUCENE-7457.patch

Here is a patch implementing what Mike describes above as the bare minimum. I'm 
not sure it is worth spending too much time on this, since we will probably want 
to build a new DV format that better takes advantage of the iterator-style API 
before 7.0 is released?

> Default doc values format should optimize for iterator access
> -
>
> Key: LUCENE-7457
> URL: https://issues.apache.org/jira/browse/LUCENE-7457
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Adrien Grand
>Priority: Blocker
> Fix For: master (7.0)
>
> Attachments: LUCENE-7457.patch
>
>
> In LUCENE-7407 we switched doc values consumption from random access API to 
> an iterator API, but nothing was done there to improve the codec.  We should 
> do that here.
> At a bare minimum we should fix the existing very-sparse case to be a true 
> iterator, and not wrapped with the silly legacy wrappers.
> I think we should also increase the threshold (currently 1%?) when we switch 
> from dense to sparse encoding.  This should fix LUCENE-7253, making merging 
> of sparse doc values efficient ("pay for what you use").
> I'm sure there are many other things to explore to let codecs "take 
> advantage" of the fact that they no longer need to offer random access to doc 
> values.
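The "pay for what you use" idea behind the sparse case can be illustrated with a toy iterator-style structure: values exist only for some documents, and consumers advance through the docs that have one instead of random-accessing every doc. All names here are hypothetical, not Lucene's actual DocValues API:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class SparseNumericValues {
    // Only docs that actually have a value appear here, keyed in doc order.
    private final NavigableMap<Integer, Long> docToValue = new TreeMap<>();

    void set(int docId, long value) {
        docToValue.put(docId, value);
    }

    // Iterate in doc-id order over just the docs with a value, so cost is
    // proportional to the number of values, not to maxDoc.
    Iterable<Integer> docsWithValue() {
        return docToValue.navigableKeySet();
    }

    long value(int docId) {
        return docToValue.get(docId);
    }
}
```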



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9551) Add constructor to JSONWriter which takes wrapperFunction and namedListStyle

2016-09-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-9551:
--
Attachment: SOLR-9551.patch

Hi Jonny, we talked offline. Please find attached an alternative patch:
* the protected JSONWriter.wrapperFunction remains non-final
* no unnecessary JSONResponseWriter.write change
* the JSON_NL_* constants remain in JSONWriter but now have package visibility 
e.g. for use by the newly added JSONWriterTest.testConstantsUnchanged method 
and also for use by the upcoming SOLR-9442 change

> Add constructor to JSONWriter which takes wrapperFunction and namedListStyle
> 
>
> Key: SOLR-9551
> URL: https://issues.apache.org/jira/browse/SOLR-9551
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9551.patch, SOLR-9551.patch
>
>
> Currently JSONWriter's constructor extracts the wrapperFunction and 
> namedListStyle from the request.
> This patch adds a new constructor where these are passed in from 
> JSONResponseWriter. This will allow us to decide in JSONResponseWriter which 
> writer to construct based on the named list style.
> There is precedent here - GeoJSONResponseWriter extracts geofield from the 
> request and passes it to GeoJSONWriter.
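The constructor change described above can be sketched as follows; the class and accessor names are illustrative only, not Solr's actual JSONWriter, but they show the shape of the change: the caller passes wrapperFunction and namedListStyle in, rather than the writer extracting them from the request:

```java
public class JsonWriterSketch {
    protected String wrapperFunction;     // non-final, as the patch notes say
    private final String namedListStyle;

    // The response writer decides which values to pass, so it can also decide
    // which writer subclass to construct based on the named list style.
    public JsonWriterSketch(String wrapperFunction, String namedListStyle) {
        this.wrapperFunction = wrapperFunction;
        this.namedListStyle = namedListStyle;
    }

    public String getWrapperFunction() { return wrapperFunction; }
    public String getNamedListStyle() { return namedListStyle; }
}
```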



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7495) Unexpected docvalues type NUMERIC when grouping by a int facet

2016-09-22 Thread Scott Stults (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514067#comment-15514067
 ] 

Scott Stults commented on SOLR-7495:


[~rcmuir] could you weigh in on the approach of the patch? I'd be happy to 
tweak it or take a completely different angle if that'll help close this issue.

> Unexpected docvalues type NUMERIC when grouping by a int facet
> --
>
> Key: SOLR-7495
> URL: https://issues.apache.org/jira/browse/SOLR-7495
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3
>Reporter: Fabio Batista da Silva
> Attachments: SOLR-7495.patch, SOLR-7495.patch
>
>
> Hey All,
> After upgrading from Solr 4.10 to 5.1 with SolrCloud,
> I'm getting an IllegalStateException when I try to facet on an int field.
> IllegalStateException: unexpected docvalues type NUMERIC for field 'year' 
> (expected=SORTED). Use UninvertingReader or index with docvalues.
> schema.xml
> {code}
> 
> 
> 
> 
> 
> 
>  multiValued="false" required="true"/>
>  multiValued="false" required="true"/>
> 
> 
>  stored="true"/>
> 
> 
> 
>  />
>  sortMissingLast="true"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  precisionStep="0" positionIncrementGap="0"/>
>  positionIncrementGap="0"/>
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  words="stopwords.txt" />
> 
>  maxGramSize="15"/>
> 
> 
> 
>  words="stopwords.txt" />
>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> 
> 
> 
>  class="solr.SpatialRecursivePrefixTreeFieldType" geo="true" 
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 
> id
> name
> 
> 
> {code}
> query :
> {code}
> http://solr.dev:8983/solr/my_collection/select?wt=json=id=index_type:foobar=true=year_make_model=true=true=year
> {code}
> Exception :
> {code}
> null:org.apache.solr.common.SolrException: Exception during facet.field: year
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:627)
> at org.apache.solr.request.SimpleFacets$3.call(SimpleFacets.java:612)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at org.apache.solr.request.SimpleFacets$2.execute(SimpleFacets.java:566)
> at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:637)
> at org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:280)
> at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:106)
> at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:222)
> at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
> at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 

[jira] [Reopened] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove reopened SOLR-8487:
---

Should've been resolved, not closed.

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search()))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
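The three behaviors listed above can be sketched as a small wrapper; the types here are hypothetical (the real Solr TupleStream API differs): forward every tuple from the wrapped stream unchanged, and fire a commit once the EOF tuple is seen:

```java
import java.util.Iterator;
import java.util.List;

public class CommitStreamSketch implements Iterator<String> {
    private final Iterator<String> wrapped;
    private final Runnable commit;   // stands in for a commit on the collection
    private boolean committed = false;

    public CommitStreamSketch(Iterator<String> wrapped, Runnable commit) {
        this.wrapped = wrapped;
        this.commit = commit;
    }

    @Override public boolean hasNext() {
        return wrapped.hasNext();
    }

    @Override public String next() {
        String tuple = wrapped.next();               // forward the tuple as-is
        if ("EOF".equals(tuple) && !committed) {
            commit.run();                            // commit on the EOF tuple
            committed = true;
        }
        return tuple;
    }
}
```

A per-X-tuples variant would just count tuples in next() and fire the commit callback every X calls instead of on EOF.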



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove resolved SOLR-8487.
---
Resolution: Implemented

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search()))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514013#comment-15514013
 ] 

ASF subversion and git services commented on SOLR-9542:
---

Commit 5acbcac274dd3f2096a3a91ee1afd2a1f03f5ed6 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5acbcac ]

SOLR-9542: Kerberos delegation tokens requires Jackson library


> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW, and CANCEL operations in the delegation token support require 
> the Solr server to have the old Jackson library added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514001#comment-15514001
 ] 

ASF subversion and git services commented on SOLR-9542:
---

Commit ec5a53d706173046f2e0048abe2d6376a7e1a375 in lucene-solr's branch 
refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec5a53d ]

SOLR-9542: Kerberos delegation tokens requires Jackson library


> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW, and CANCEL operations in the delegation token support require 
> the Solr server to have the old Jackson library added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 455 - Still Unstable

2016-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/455/

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithSourceCluster

Error Message:
Document mismatch on target after sync expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([C0FD476EADEA7313:19AB16AAAE8E6059]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithSourceCluster(CdcrBootstrapTest.java:249)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10847 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrBootstrapTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513951#comment-15513951
 ] 

Jan Høydahl commented on SOLR-6677:
---

We're now down to 9 lines for {{bin/solr start -f}}.
And 20 lines for {{bin/solr start -c -f}} (down from 67, thanks to SOLR-5563).
For {{bin/solr create -c foo}} we have reduced from 129 to 48.

Feel free to commit other log-level changes across the code base as part of 
this issue or a spinoff issue.

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Comment Edited] (SOLR-9337) Add fetch Streaming Expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513940#comment-15513940
 ] 

Joel Bernstein edited comment on SOLR-9337 at 9/22/16 5:45 PM:
---

fetch works like this:

1) read N tuples into memory
2) use a query to fetch fields for the tuples read in step 1.
3) stream the tuples out
4) repeat steps 1-3 until the underlying stream is EOF

This is essentially a nested loop join against the entire index.

Mainly used when one side of the join is very small and you want to join it 
against the entire index. 

One main use case I have in mind is doing a graph query, fetching text fields 
for the node set that is returned, and then running the text classifier on the 
node set. This would combine graph queries and AI models to provide very 
intelligent recommendations.
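The four numbered steps above can be sketched generically. This is only an illustration of the batching idea, not Solr's actual TupleStream code; `FetchSketch`, `onField`, and the `lookup` function are invented names standing in for the wrapped stream, the join key (e.g. `on="a=j"`), and the per-batch query:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of the batching nested-loop join described above:
// read up to N tuples, issue one lookup for the whole batch, merge the
// fetched fields in, stream the batch out, and repeat until EOF.
class FetchSketch {
    static List<Map<String, Object>> fetch(
            Iterator<Map<String, Object>> stream,   // the wrapped stream
            int batchSize,                          // N tuples per batch
            String onField,                         // join key field
            Function<List<Object>, Map<Object, Map<String, Object>>> lookup) {
        List<Map<String, Object>> out = new ArrayList<>();
        List<Map<String, Object>> batch = new ArrayList<>();
        while (stream.hasNext()) {
            // 1) read up to N tuples into memory
            while (stream.hasNext() && batch.size() < batchSize) {
                batch.add(stream.next());
            }
            // 2) one query fetches the extra fields for the whole batch
            List<Object> keys = new ArrayList<>();
            for (Map<String, Object> t : batch) keys.add(t.get(onField));
            Map<Object, Map<String, Object>> extra = lookup.apply(keys);
            // 3) stream the enriched tuples out
            for (Map<String, Object> t : batch) {
                Map<String, Object> fields = extra.get(t.get(onField));
                if (fields != null) t.putAll(fields);
                out.add(t);
            }
            batch.clear();
        }   // 4) repeat until the underlying stream is EOF
        return out;
    }
}
```

Because step 2 issues one query per batch rather than one per tuple, the number of lookup queries is tuples/N, which is why this works well when the wrapped stream's side of the join is small.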









was (Author: joel.bernstein):
fetch works like this:

1) read N tuples into memory
2) use a query to fetch fields for the tuples read in step 1.
3) stream the tuples out
4) repeat steps 1-3 until the underlying stream is EOF

This is essentially a nested loop join against the index.

Mainly used when one side of the join is very small and you want to join it 
against the entire index. 

One main use case I have in mind is doing a graph query, fetching text fields 
for the node set that is returned, and then running the text classifier on the 
node set. This would combine graph queries and AI models to provide very 
intelligent recommendations.








> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}








[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-09-22 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513940#comment-15513940
 ] 

Joel Bernstein commented on SOLR-9337:
--

fetch works like this:

1) read N tuples into memory
2) Use a query to fetch fields for the tuples read in step 1.
3) stream the tuples out
4) repeat steps 1-3 until the underlying stream is EOF

This is essentially a nested loop join against the index.

Mainly used when one side of the join is very small and you want to join it 
against the entire index. 

One main use case I have in mind is doing a graph query, fetching text fields 
for the node set that is returned, and then running the classifier on the node 
set. This would combine graph queries and AI models to provide very intelligent 
recommendations.








> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17876 - Still Failing!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17876/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC

506 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_13108BD61E710C04-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J1/temp/solr.analysis.TestFoldingMultitermExtrasQuery_13108BD61E710C04-001/tempDir-001/collection1/conf/schema-folding-extra.xml:
 null
at __randomizedtesting.SeedInfo.seed([13108BD61E710C04]:0)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:607)
at org.apache.solr.schema.IndexSchema.(IndexSchema.java:183)
at 
org.apache.solr.schema.ManagedIndexSchema.(ManagedIndexSchema.java:104)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:172)
at 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:45)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at org.apache.solr.util.TestHarness.(TestHarness.java:96)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:605)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:595)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:437)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:541)
... 34 more


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField

Error Message:
Can't load schema 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/contrib/solr-analysis-extras/test/J0/temp/solr.schema.TestICUCollationField_13108BD61E710C04-001/tempDir-001/collection1/conf/schema.xml:
 null

Stack Trace:
org.apache.solr.common.SolrException: Can't load schema 

[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513910#comment-15513910
 ] 

ASF subversion and git services commented on SOLR-6677:
---

Commit 03575003068f568980782e913016b2ac281e1741 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0357500 ]

SOLR-6677: Reduced logging during Solr startup, moved more logs to DEBUG level

(cherry picked from commit f391d57)


> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-09-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513889#comment-15513889
 ] 

Dennis Gove commented on SOLR-9337:
---

How does a fetch differ from an innerJoin? I guess it could if it read in a 
tuple from the source and then looked up its specific fields, but I dunno how 
performant that'd be.

> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}






[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513882#comment-15513882
 ] 

Mike Drob commented on SOLR-9330:
-

Ah, I see where the difference is, yes. In my case, the client process getting 
the statistics is an external monitoring application that fetches them every 15 
seconds and charts them. Since the number of replicas can move, grow, and shrink 
to accommodate usage, solving races like this is a very complicated problem. And 
at the end of the day, I don't care if my monitoring system misses one round of 
statistics; I'm more concerned about scary exceptions in the log that the ops 
team has to deal with.

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely removed 
> from everywhere, the exception can occur. 
> What do you think about introducing a new parameter for the reload request which 
> makes it fully synchronized? Basically it would force it to call {code} 
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null
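The quoted proposal can be modeled with a toy example. This is a generic illustration of the idea, not Solr code: `SyncReloadSketch`, `Searcher`, and `stats()` are invented stand-ins, with the background swap playing the role of the async searcher registration and the `waitSearcher` flag playing the role of the proposed parameter:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the proposal: reload swaps in a new searcher on a background
// thread; a reload flagged "synchronous" returns only after the new searcher
// is registered and the old one is closed, so a stats request that follows
// the reload cannot observe a closed reader.
class SyncReloadSketch {
    static class Searcher { volatile boolean open = true; }

    final AtomicReference<Searcher> current = new AtomicReference<>(new Searcher());
    final ExecutorService pool = Executors.newSingleThreadExecutor();

    Future<?> reload(boolean waitSearcher) {
        Future<?> done = pool.submit(() -> {
            Searcher fresh = new Searcher();
            Searcher old = current.getAndSet(fresh);  // register the new searcher
            old.open = false;                         // then close the old one
        });
        if (waitSearcher) {
            // the proposed parameter: block until the swap has completed
            try { done.get(); } catch (Exception e) { throw new RuntimeException(e); }
        }
        return done;
    }

    // stands in for the mbeans stats request hitting the current searcher
    String stats() {
        Searcher s = current.get();
        if (!s.open) throw new IllegalStateException("this IndexReader is closed");
        return "ok";
    }
}
```

With `waitSearcher` set, `reload` does not return until the swap is done, which is exactly the guarantee the caller of `/admin/mbeans?stats=true` needs.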






[jira] [Commented] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513872#comment-15513872
 ] 

Dennis Gove commented on SOLR-8487:
---

Added a section in the reference guide - 
https://cwiki.apache.org/confluence/display/solr/Streaming+Expressions#StreamingExpressions-commit

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
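Behaviors 2 and 3 from the proposal can be sketched as a wrapping iterator. This is a hypothetical illustration, not the eventual CommitStream implementation; `CommitStreamSketch` and the `commit` callback are invented names, and the `"EOF"` key stands in for however the tuple marks end-of-stream:

```java
import java.util.Iterator;
import java.util.Map;

// Sketch of the proposed behavior: forward every tuple from the wrapped
// stream unchanged, and fire a commit callback when the EOF tuple appears.
class CommitStreamSketch implements Iterator<Map<String, Object>> {
    private final Iterator<Map<String, Object>> inner;
    private final Runnable commit;  // e.g. "commit the target collection"

    CommitStreamSketch(Iterator<Map<String, Object>> inner, Runnable commit) {
        this.inner = inner;
        this.commit = commit;
    }

    @Override public boolean hasNext() { return inner.hasNext(); }

    @Override public Map<String, Object> next() {
        Map<String, Object> tuple = inner.next();
        // point 3 above: commit once, when the underlying stream emits EOF
        if (Boolean.TRUE.equals(tuple.get("EOF"))) {
            commit.run();
        }
        return tuple;  // point 2: all tuples are forwarded unchanged
    }
}
```

The commit-every-X-tuples alternative would just add a counter check next to the EOF check.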






[jira] [Issue Comment Deleted] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9330:

Comment: was deleted

(was: {code}
lst.add("searcherName", name);
lst.add("caching", cachingEnabled);
lst.add("openedAt", openTime);
if (registerTime != null) lst.add("registeredAt", registerTime);
lst.add("warmupTime", warmupTime);
{code}
Why not put these in the cached list as well? The first three are final and 
available before your call to {{snapStatistics}}. The last two are set during 
{{register}} which should only be called once, if I understand this correctly. 
Then the whole method becomes {{return readerStats;}} -- much simpler and 
probably faster too!

{code}
+// core.getInfoRegistry().remove(STATISTICS_KEY, this);
+// decided to comment it, because it might upset users by showing stats, 
w/o "searcher" entry
{code}
I don't think there is any reason to keep this in.

Other than those minor points, the patch looks good to me.

I've had a similar issue when calling {{/replication?command=details}}, but am 
not able to reproduce it in this test, so I think we're fine to handle that 
later.)
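The (deleted) review suggestion above amounts to caching everything that is fixed at construction or at register() time, so the stats call never touches a possibly-closed reader. A rough sketch of that shape, with invented names (`CachedStatsSketch` is not Solr's SolrIndexSearcher):

```java
import java.util.Collections;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the review suggestion: searcherName/caching/openedAt are final
// and known at construction; registeredAt/warmupTime are set exactly once in
// register(); so getStatistics() can be a cheap "return readerStats".
class CachedStatsSketch {
    private final Map<String, Object> readerStats = new LinkedHashMap<>();

    CachedStatsSketch(String name, boolean cachingEnabled, Date openTime) {
        readerStats.put("searcherName", name);
        readerStats.put("caching", cachingEnabled);
        readerStats.put("openedAt", openTime);
    }

    // called once, when the searcher becomes the live one
    void register(Date registerTime, long warmupTime) {
        readerStats.put("registeredAt", registerTime);
        readerStats.put("warmupTime", warmupTime);
    }

    // the whole method becomes "return readerStats" -- no reader access
    Map<String, Object> getStatistics() {
        return Collections.unmodifiableMap(readerStats);
    }
}
```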

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur.
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null
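The waitSearcher idea proposed above can be illustrated with a plain java.util.concurrent sketch. This is a hypothetical stand-in, not Solr's actual SolrCore code: the caller passes a one-element Future array, the reload fills it with the registration task, and blocking on that Future makes the reload effectively synchronous.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the waitSearcher proposal: reload() registers the
// new searcher on a background thread, but hands the caller a Future it can
// block on, so the reload call does not return before registration completes.
class CoreReloadSketch {
    private final ExecutorService searcherExecutor = Executors.newSingleThreadExecutor();
    volatile String registeredSearcher = "old-searcher";

    // Mirrors the shape of SolrCore#getSearcher(..., waitSearcher, ...):
    // if waitSearcher != null, a Future for the registration is returned in slot 0.
    void reload(Future<?>[] waitSearcher) {
        Future<?> registration = searcherExecutor.submit(() -> {
            registeredSearcher = "new-searcher"; // register the new searcher
            // the old searcher would be closed here, only after registration
        });
        if (waitSearcher != null) {
            waitSearcher[0] = registration;
        }
    }

    String reloadAndWait() {
        try {
            Future<?>[] slot = new Future<?>[1];
            reload(slot);
            slot[0].get(); // block until the new searcher is registered
            searcherExecutor.shutdown();
            return registeredSearcher;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

With waitSearcher left null the method returns immediately, which is exactly the window in which a statistics request can observe the half-closed old searcher.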






[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513869#comment-15513869
 ] 

Mike Drob commented on SOLR-9330:
-

{code}
lst.add("searcherName", name);
lst.add("caching", cachingEnabled);
lst.add("openedAt", openTime);
if (registerTime != null) lst.add("registeredAt", registerTime);
lst.add("warmupTime", warmupTime);
{code}
Why not put these in the cached list as well? The first three are final and 
available before your call to {{snapStatistics}}. The last two are set during 
{{register}} which should only be called once, if I understand this correctly. 
Then the whole method becomes {{return readerStats;}} -- much simpler and 
probably faster too!

{code}
+// core.getInfoRegistry().remove(STATISTICS_KEY, this);
+// decided to comment it, because it might upset users by showing stats, 
w/o "searcher" entry
{code}
I don't think there is any reason to keep this in.

Other than those minor points, the patch looks good to me.

I've had a similar issue when calling {{/replication?command=details}}, but am 
not able to reproduce it in this test, so I think we're fine to handle that 
later.
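Mike's suggestion — snapshot the fields that never change after registration and have getStatistics() return the cached list — can be sketched as follows. The class and field names are hypothetical stand-ins, not Solr's actual SolrIndexSearcher code:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of caching searcher statistics that are fixed once
// register() has run, so getStatistics() is a plain field read and can
// never touch a closed IndexReader.
class SearcherStats {
    private final Map<String, Object> readerStats;

    SearcherStats(String name, boolean cachingEnabled, long openTime,
                  Long registerTime, long warmupTime) {
        Map<String, Object> lst = new LinkedHashMap<>();
        lst.put("searcherName", name);      // final, known at construction
        lst.put("caching", cachingEnabled); // final
        lst.put("openedAt", openTime);      // final
        if (registerTime != null) lst.put("registeredAt", registerTime); // set once in register()
        lst.put("warmupTime", warmupTime);  // set once during warm-up
        this.readerStats = Collections.unmodifiableMap(lst);
    }

    // The whole method becomes a single return of the immutable snapshot.
    Map<String, Object> getStatistics() {
        return readerStats;
    }
}
```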

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur.
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null






[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513868#comment-15513868
 ] 

Mike Drob commented on SOLR-9330:
-

{code}
lst.add("searcherName", name);
lst.add("caching", cachingEnabled);
lst.add("openedAt", openTime);
if (registerTime != null) lst.add("registeredAt", registerTime);
lst.add("warmupTime", warmupTime);
{code}
Why not put these in the cached list as well? The first three are final and 
available before your call to {{snapStatistics}}. The last two are set during 
{{register}} which should only be called once, if I understand this correctly. 
Then the whole method becomes {{return readerStats;}} -- much simpler and 
probably faster too!

{code}
+// core.getInfoRegistry().remove(STATISTICS_KEY, this);
+// decided to comment it, because it might upset users by showing stats, 
w/o "searcher" entry
{code}
I don't think there is any reason to keep this in.

Other than those minor points, the patch looks good to me.

I've had a similar issue when calling {{/replication?command=details}}, but am 
not able to reproduce it in this test, so I think we're fine to handle that 
later.

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur.
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null






Re: field:* queries can be painfully slow if there are many terms.

2016-09-22 Thread Michael McCandless
You could index the prefix terms (edge ngrams), assuming your queries
are prefix queries; this way there would typically be far fewer terms
to visit than all 200 M terms.
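Indexing edge n-grams means every prefix of a token becomes its own term, so a prefix query reduces to a single term lookup instead of a scan. A minimal plain-Java sketch of what an edge n-gram filter (such as Lucene's EdgeNGramTokenFilter) emits per token — hypothetical helper, not the Lucene implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the edge n-gram idea: emit every prefix of a token between
// minGram and maxGram characters as a separate indexed term.
class EdgeNgrams {
    static List<String> edgeNgrams(String token, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        for (int len = minGram; len <= Math.min(maxGram, token.length()); len++) {
            grams.add(token.substring(0, len)); // each prefix is its own term
        }
        return grams;
    }
}
```

The trade-off is index size: each indexed token contributes up to maxGram extra terms, in exchange for prefix queries that visit one term instead of millions.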

Auto-prefix terms also tried to solve this more "automatically", so
you don't have to mess with edge ngrams, but we reverted it because of
the added code complexity and lack of real-world use cases, especially
once we switched numerics from postings to dimensional points.
Mike McCandless

http://blog.mikemccandless.com

On Thu, Sep 22, 2016 at 1:01 PM, Erick Erickson  wrote:
> In MultiTermConstantScoreWrapper there's this block around line 174 in 6x:
>
> do {
>   docs = termsEnum.postings(docs, PostingsEnum.NONE);
>   builder.add(docs);
> } while (termsEnum.next() != null);
>
> In the case of lots and lots of terms in a multiValued field this can
> take quite a bit of time. In my test case I have 100K docs with 200M
> terms (pathological I understand, but it illustrates the issue). If
> I'm reading this right it loops through all the terms and, for each
> term, creates a sub-list of docs for the term and adds the sub-list to
> the "master list". So a query like 'field:*' takes 20+ seconds.
>
> Is there anything we can/should do to short circuit this kind of
> thing? In this case I got 200M terms by ngramming 3-32 (again, far too
> many ngrams I understand). It's not clear to me whether it's an easy
> check to say "stop when all the docs have been added to the master
> list"
>
> I can raise a JIRA if it makes sense.
>
> For supporting this particular use-case, we could index a separate
> field "has_field1_value" but the general case still holds.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>




[jira] [Closed] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove closed SOLR-8487.
-
Resolution: Fixed

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
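The behavior described in the three points above — forward every tuple unchanged, commit when the wrapped stream emits its EOF tuple — is a straightforward decorator. A minimal sketch with hypothetical stand-in types (plain strings instead of the Streaming API's Tuple/TupleStream classes):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal sketch of the CommitStream idea: a decorator that forwards tuples
// from a wrapped stream and fires a commit when it sees the EOF tuple.
// Types here are hypothetical stand-ins for the actual Streaming API classes.
class CommitStreamSketch {
    static int commits = 0; // stands in for a commit sent to the collection

    static List<String> drain(Iterator<String> wrapped) {
        List<String> forwarded = new ArrayList<>();
        while (wrapped.hasNext()) {
            String tuple = wrapped.next();
            forwarded.add(tuple);  // 2.) forward all tuples unchanged
            if (tuple.equals("EOF")) {
                commits++;         // 3.) commit when the EOF tuple is emitted
            }
        }
        return forwarded;
    }
}
```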






[jira] [Commented] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513823#comment-15513823
 ] 

ASF subversion and git services commented on SOLR-8487:
---

Commit 6365920a0e9ed3bf0b13b90955cd73535d495f9a in lucene-solr's branch 
refs/heads/master from [~dpgove]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6365920 ]

SOLR-8487: Adds CommitStream to support sending commits to a collection being 
updated


> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).






[jira] [Updated] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8487:
--
Affects Version/s: (was: 6.0)
   6.3

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).






[jira] [Updated] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8487:
--
Fix Version/s: (was: 6.0)
   6.3

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.3
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.3
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).






field:* queries can be painfully slow if there are many terms.

2016-09-22 Thread Erick Erickson
In MultiTermConstantScoreWrapper there's this block around line 174 in 6x:

do {
  docs = termsEnum.postings(docs, PostingsEnum.NONE);
  builder.add(docs);
} while (termsEnum.next() != null);

In the case of lots and lots of terms in a multiValued field this can
take quite a bit of time. In my test case I have 100K docs with 200M
terms (pathological I understand, but it illustrates the issue). If
I'm reading this right it loops through all the terms and, for each
term, creates a sub-list of docs for the term and adds the sub-list to
the "master list". So a query like 'field:*' takes 20+ seconds.

Is there anything we can/should do to short circuit this kind of
thing? In this case I got 200M terms by ngramming 3-32 (again, far too
many ngrams I understand). It's not clear to me whether it's an easy
check to say "stop when all the docs have been added to the master
list"
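That early-exit check is easy to sketch with a bit set (hypothetical simulation, not the actual MultiTermConstantScoreWrapper code): stop iterating terms once the accumulated doc set covers every document, since further terms cannot add anything.

```java
import java.util.BitSet;
import java.util.List;

// Sketch of short-circuiting the term loop: once the accumulated doc set
// covers all maxDoc documents, visiting more terms is pointless.
class ShortCircuitSketch {
    // postingsPerTerm: for each term, the doc ids it appears in.
    // Returns the number of terms actually visited.
    static int collect(List<int[]> postingsPerTerm, int maxDoc, BitSet acc) {
        int visited = 0;
        for (int[] postings : postingsPerTerm) {
            visited++;
            for (int doc : postings) acc.set(doc);
            if (acc.cardinality() == maxDoc) break; // all docs matched: stop early
        }
        return visited;
    }
}
```

The open question for the real code is whether tracking cardinality on the builder is cheap enough to pay on every term for the rare case where it helps.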

I can raise a JIRA if it makes sense.

For supporting this particular use-case, we could index a separate
field "has_field1_value" but the general case still holds.

Erick




[jira] [Commented] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library

2016-09-22 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513799#comment-15513799
 ] 

Hrishikesh Gadre commented on SOLR-9542:


[~ichattopadhyaya] I reviewed the patch and it looks good.

HADOOP-13332 is tracking the work required for upgrading the jackson library in 
Hadoop. Since the work is underway for the Hadoop 3 release, this may be addressed 
in the next few months. (BTW SOLR-9515 is tracking the work required in Solr to 
support Hadoop 3.) But in my opinion we shouldn't hold off for this Hadoop 
enhancement. Instead we should commit this patch to fix the reported issue. Maybe 
we can file another JIRA to revert this change once the Hadoop-side fix is 
available.


> Kerberos delegation tokens requires missing Jackson library
> ---
>
> Key: SOLR-9542
> URL: https://issues.apache.org/jira/browse/SOLR-9542
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-9542.patch
>
>
> GET, RENEW or CANCEL operations for the delegation tokens support requires 
> the Solr server to have old jackson added as a dependency.
> Steps to reproduce the problem:
> 1) Configure Solr to use delegation tokens
> 2) Start Solr
> 3) Use a SolrJ application to get a delegation token.
> The server throws the following:
> {code}
> java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279)
> at 
> org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514)
> at 
> org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123)
> at 
> org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Commented] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-09-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513660#comment-15513660
 ] 

Steve Rowe commented on SOLR-6871:
--

Thanks [~janhoy] - for some reason your branch_6x commit didn't get posted here 
- from the email notification to commits@l.a.o:

{quote}
Repository: lucene-solr
Updated Branches:
 refs/heads/branch_6x 082f8e3f9 -> 9611478c7


SOLR-6871: Fix precommit - accept /solr/downloads.html as valid link

(cherry picked from commit d146354)


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/9611478c
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/9611478c
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/9611478c

Branch: refs/heads/branch_6x
Commit: 9611478c7e249b7c65d3807e2ae672aabaefa50b
Parents: 082f8e3
Author: Jan Høydahl 
Authored: Thu Sep 22 10:52:01 2016 +0200
Committer: Jan Høydahl 
Committed: Thu Sep 22 10:53:21 2016 +0200
{quote}

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at the same time 
> and be confident that it was clear what version of solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a consensus to move to 
> keeping "tutorial" content on the website, where it can be integrated 
> directly with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to solr & the solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of solr come out and changes are made that mandate corresponding changes to the 
> tutorial.  This process _should_ also account for things like having multiple 
> versions of the tutorial live at one time (ie: at some point in the future, 
> we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)






[jira] [Resolved] (SOLR-9549) StreamExpressionTest failures

2016-09-22 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9549.
--
Resolution: Resolved

> StreamExpressionTest failures
> -
>
> Key: SOLR-9549
> URL: https://issues.apache.org/jira/browse/SOLR-9549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>
> Reproduces for me on master:
> ant test  -Dtestcase=StreamExpressionTest 
> -Dtests.method=testBasicTextLogitStream -Dtests.seed=DB749AA9C9E30657 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=os 
> -Dtests.timezone=Asia/Bahrain -Dtests.asserts=true -Dtests.file.encoding=UTF-8






[jira] [Created] (SOLR-9552) Upgrade to Tika 1.14 when available

2016-09-22 Thread Tim Allison (JIRA)
Tim Allison created SOLR-9552:
-

 Summary: Upgrade to Tika 1.14 when available
 Key: SOLR-9552
 URL: https://issues.apache.org/jira/browse/SOLR-9552
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - DataImportHandler
Reporter: Tim Allison
Priority: Minor


 Let's upgrade Solr as soon as 1.14 is available.

P.S. I _think_ we're soon to wrap up work on 1.14.  Any last requests? 






[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-09-22 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513607#comment-15513607
 ] 

Kevin Risden commented on SOLR-8593:


Adding some resources that may be helpful:
* http://www.slideshare.net/HadoopSummit/costbased-query-optimization
* https://medium.com/@mpathirage/query-planning-with-apache-calcite-part-1-fe957b011c36#.ywd9ouxmv

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.






[jira] [Resolved] (SOLR-4936) Cannot run Solr with zookeeper on multiple IPs

2016-09-22 Thread Jan Høydahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-4936.
---
Resolution: Won't Fix

We don't support a cluster of embedded ZK; it is simply not a good idea. Set up 
an external ZK ensemble as described in the Reference Guide.

> Cannot run Solr with zookeeper on multiple IPs
> --
>
> Key: SOLR-4936
> URL: https://issues.apache.org/jira/browse/SOLR-4936
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.2
>Reporter: Grzegorz Sobczyk
>
> This doesn't run solr with ZK:
> {{java -DzkRun=192.168.1.169:9180 
> -DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8180 -jar 
> start.jar}}
> {{java -DzkRun=192.168.1.169:9280 
> -DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8280 -jar 
> start.jar}}
> And this does: 
> {{java -DzkRun=localhost:9180 -DzkHost=localhost:9180,localhost:9280 
> -Djetty.port=8180 -jar start.jar}}
> {{java -DzkRun=localhost:9280 -DzkHost=localhost:9180,localhost:9280 
> -Djetty.port=8280 -jar start.jar}}
> SolrZkServerProps#getMyServerId() assumes that myHost is "localhost" rather 
> than reads it from zkRun property.
> (tested on example)
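The report's diagnosis suggests a straightforward direction. As an illustration only (the parsing below is an assumption, not Solr's actual code; only the method name comes from the report), deriving the host from the zkRun property instead of hard-coding "localhost" would make the multi-IP commands above behave like the localhost ones:

```java
// Illustrative only: a hypothetical host lookup that honors zkRun.
// SolrZkServerProps#getMyServerId is the real method named in the report;
// this parsing is an assumption, not Solr's implementation.
public class ZkRunHostSketch {
    static String myHost(String zkRun) {
        if (zkRun == null || zkRun.isEmpty()) {
            return "localhost"; // reported current behavior: assume localhost
        }
        int colon = zkRun.indexOf(':');
        // zkRun is of the form host:port; keep only the host part
        return colon < 0 ? zkRun : zkRun.substring(0, colon);
    }

    public static void main(String[] args) {
        System.out.println(myHost("192.168.1.169:9180")); // 192.168.1.169
        System.out.println(myHost(null));                 // localhost
    }
}
```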






[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513520#comment-15513520
 ] 

ASF subversion and git services commented on SOLR-6677:
---

Commit f391d57075ca4bbb5608079bec63d9a6a574308f in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f391d570 ]

SOLR-6677: Reduced logging during Solr startup, moved more logs to DEBUG level


> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 585 - Still Failing

2016-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/585/

No tests ran.

Build Log:
[...truncated 40571 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (17.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.9 MB in 0.05 sec (636.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.4 MB in 0.08 sec (766.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.0 MB in 0.13 sec (570.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6036 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6036 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (36.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 39.2 MB in 0.83 sec (47.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 142.0 MB in 1.92 sec (74.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 151.0 MB in 1.81 sec (83.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983...
   [smoker] Started 

[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-22 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513472#comment-15513472
 ] 

Mikhail Khludnev commented on SOLR-9330:


[~mdrob], would you mind reviewing the last patch?  

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390.patch, too_sync.patch
>
>
> It so happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely removed 
> from everywhere, the exception can occur. 
> What do you think about introducing a new parameter for the reload request which 
> makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null
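The proposed waitSearcher contract can be sketched in miniature. This is a toy model only: names and types are simplified assumptions, not the real SolrCore API. The caller passes a one-element Future array; a non-null array means "wait", and slot 0 is filled with a Future that completes only once the new searcher has been registered by the background thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Toy sketch of the proposed synchronous reload semantics (not Solr's API).
public class ReloadSketch {

    // Stand-in for getSearcher(forceNew, returnSearcher, waitSearcher, ...):
    // registration happens on a separate thread, mirroring the async reload.
    static void getSearcher(Future<?>[] waitSearcher) {
        CompletableFuture<String> registered = new CompletableFuture<>();
        new Thread(() -> {
            registered.complete("newSearcher"); // new searcher "registered" in a separate thread
            // ...the old searcher would then be closed, only after registration
        }).start();
        if (waitSearcher != null) {
            waitSearcher[0] = registered; // caller blocks on this for synchronous behavior
        }
    }

    // A reload that only returns once the new searcher exists.
    static String synchronousReload() {
        try {
            Future<?>[] wait = new Future<?>[1];
            getSearcher(wait);              // non-null waitSearcher => fully synchronized
            return (String) wait[0].get();  // blocks until registration completes
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(synchronousReload());
    }
}
```

With this shape, a statistics request issued after the reload response returns could no longer observe a half-closed searcher.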






[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 468 - Still unstable!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/468/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([7FC3F8113C9B8793]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12647 lines...]
   [junit4] Suite: org.apache.solr.util.TestSolrCLIRunExample
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.util.TestSolrCLIRunExample_7FC3F8113C9B8793-001\init-core-data-001
   [junit4]   2> 3118700 INFO  
(SUITE-TestSolrCLIRunExample-seed#[7FC3F8113C9B8793]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 3118702 INFO  
(TEST-TestSolrCLIRunExample.testSchemalessExample-seed#[7FC3F8113C9B8793]) [
] o.a.s.SolrTestCaseJ4 ###Starting testSchemalessExample
   [junit4]   2> 3118704 INFO  
(TEST-TestSolrCLIRunExample.testSchemalessExample-seed#[7FC3F8113C9B8793]) [
] o.a.s.u.TestSolrCLIRunExample Selected port 49385 to start schemaless example 
Solr instance on ...
   [junit4]   2> 3119789 INFO  (Thread-6197) [] o.e.j.s.Server 
jetty-9.3.8.v20160314
   [junit4]   2> 3119791 INFO  (Thread-6197) [] o.e.j.s.h.ContextHandler 
Started o.e.j.s.ServletContextHandler@22dc77c2{/solr,null,AVAILABLE}
   [junit4]   2> 3119793 INFO  (Thread-6197) [] o.e.j.s.ServerConnector 
Started ServerConnector@763645c{HTTP/1.1,[http/1.1]}{127.0.0.1:49385}
   [junit4]   2> 3119793 INFO  (Thread-6197) [] o.e.j.s.Server Started 
@3124408ms
   [junit4]   2> 3119793 INFO  (Thread-6197) [] o.a.s.c.s.e.JettySolrRunner 
Jetty properties: {hostContext=/solr, hostPort=49385}
   [junit4]   2> 3119793 INFO  (Thread-6197) [] o.a.s.s.SolrDispatchFilter 
SolrDispatchFilter.init(): sun.misc.Launcher$AppClassLoader@73d16e93
   [junit4]   2> 3119794 INFO  (Thread-6197) [] o.a.s.c.SolrResourceLoader 
new SolrResourceLoader for directory: 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17875 - Failure!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17875/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 12556 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J0-20160922_135246_183.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid8079.hprof 
...
   [junit4] Heap dump file created [436678063 bytes in 1.375 secs]
   [junit4] <<< JVM J0: EOF 

[...truncated 663 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-solrj/test/temp/junit4-J2-20160922_142658_615.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps/java_pid4238.hprof 
...
   [junit4] Heap dump file created [498678362 bytes in 1.639 secs]
   [junit4] <<< JVM J2: EOF 

[...truncated 10379 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:715: Some of the 
tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid4238.hprof
* java_pid8079.hprof

Total time: 61 minutes 8 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Assigned] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-09-22 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove reassigned SOLR-8487:
-

Assignee: Dennis Gove

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 6.0
>Reporter: Jason Gerlowski
>Assignee: Dennis Gove
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-8487.patch, SOLR-8487.patch
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
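Behaviors (2) and (3) can be sketched with a plain iterator standing in for a TupleStream. The Tuple/EOF machinery is simplified away (iterator exhaustion plays the role of the EOF tuple); this illustrates the proposed semantics, not the eventual implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy sketch: forward every element of the wrapped stream unchanged, and
// fire the commit callback exactly once when the underlying stream ends.
public class CommitStreamSketch implements Iterator<String> {
    private final Iterator<String> inner;
    private final Runnable commit;
    private boolean committed = false;

    public CommitStreamSketch(Iterator<String> inner, Runnable commit) {
        this.inner = inner;
        this.commit = commit;
    }

    @Override public boolean hasNext() {
        if (!inner.hasNext() && !committed) {
            committed = true;
            commit.run(); // underlying stream hit "EOF": commit once
        }
        return inner.hasNext();
    }

    @Override public String next() { return inner.next(); }

    // Drain the stream, returning the forwarded tuples.
    static List<String> drain(Iterator<String> s) {
        List<String> out = new ArrayList<>();
        while (s.hasNext()) out.add(s.next());
        return out;
    }

    public static void main(String[] args) {
        AtomicInteger commits = new AtomicInteger();
        Iterator<String> updates = Arrays.asList("t1", "t2", "t3").iterator();
        List<String> seen = drain(new CommitStreamSketch(updates, commits::incrementAndGet));
        System.out.println(seen + " commits=" + commits.get());
    }
}
```

The commit-every-X-tuples alternative would only change the condition guarding `commit.run()`.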






[jira] [Commented] (LUCENE-7452) improve exception message: child query must only match non-parent docs, but parent docID=180314...

2016-09-22 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513439#comment-15513439
 ] 

Mikhail Khludnev commented on LUCENE-7452:
--

[~arafalov], what do you think about these exception messages?

> improve exception message: child query must only match non-parent docs, but 
> parent docID=180314...
> --
>
> Key: LUCENE-7452
> URL: https://issues.apache.org/jira/browse/LUCENE-7452
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 6.2
>Reporter: Mikhail Khludnev
>Priority: Minor
> Attachments: LUCENE-7452.patch
>
>
> when the parent filter intersects with the child query, the exception exposes internal 
> details: a docnum and a scorer class. I propose that the exception message suggest 
> executing a query intersecting them both. There is an opinion to add this  
> suggestion in addition to the existing details. 
> My main concern is that, when the index is constantly updated, even though SOLR-9582 
> allows searching for a docnum, it would be like catching the wind; also think 
> about the cloud case. But a user advised to execute the query intersection can 
> catch problem documents even if they occur sporadically.  






[jira] [Commented] (SOLR-4936) Cannot run Solr with zookeeper on multiple IPs

2016-09-22 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513389#comment-15513389
 ] 

Cao Manh Dat commented on SOLR-4936:


Ok, so I figured out why the commands could be executed in Solr 4.0. Prior to 
SOLR-4718, we didn't connect to the zk cluster before starting the embedded one. 
SOLR-4718 changed that, so the node must connect to the zk cluster first. 

In my opinion, I don't think a cluster of embedded zk is a good idea. So this 
issue can be marked as won't fix.

> Cannot run Solr with zookeeper on multiple IPs
> --
>
> Key: SOLR-4936
> URL: https://issues.apache.org/jira/browse/SOLR-4936
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.2
>Reporter: Grzegorz Sobczyk
>
> This doesn't run solr with ZK:
> {{java -DzkRun=192.168.1.169:9180 
> -DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8180 -jar 
> start.jar}}
> {{java -DzkRun=192.168.1.169:9280 
> -DzkHost=192.168.1.169:9180,192.168.1.169:9280 -Djetty.port=8280 -jar 
> start.jar}}
> And this does: 
> {{java -DzkRun=localhost:9180 -DzkHost=localhost:9180,localhost:9280 
> -Djetty.port=8180 -jar start.jar}}
> {{java -DzkRun=localhost:9280 -DzkHost=localhost:9180,localhost:9280 
> -Djetty.port=8280 -jar start.jar}}
> SolrZkServerProps#getMyServerId() assumes that myHost is "localhost" rather 
> than reads it from zkRun property.
> (tested on example)






[jira] [Commented] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-22 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513380#comment-15513380
 ] 

Yonik Seeley commented on LUCENE-7453:
--

I don't think changing the name really helps a new user understand what a docid 
actually is, and the safe ways to use one - that's the much harder part.
The fact that it's transient in a sense (but still cacheable for the lifetime 
of a reader), local to a segment (one has to understand segments and the fact 
that they are mostly immutable), the fact that you *can* reuse one on a 
different view of the same segment (deleted docs), etc.

This naming discussion would perhaps have been appropriate during the initial 
naming, but now a rename would inflict guaranteed pain on all existing devs / 
documentation / books / blogs, etc., all to save a few *seconds* of new-user 
confusion out of the *days/weeks* of total confusion necessary to build a mental 
model of how Lucene actually works.  In fact, it may be just as likely to cause 
confusion if the new user is using any out-of-date resources that use the old 
terminology.  It sounds like a poor trade-off to rename now.


> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> solr), while {{docid}} in lucene is currently something local to a segment, and 
> not directly comparable across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment local identifier for a doc. 
> Regardless of what solr wants to do in their api (eg keeping _docid_), I 
> think we should switch the lucene apis and variable names to use docnum.






[jira] [Commented] (LUCENE-7407) Explore switching doc values to an iterator API

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513376#comment-15513376
 ] 

ASF subversion and git services commented on LUCENE-7407:
-

Commit 7377d0ef9ea8fa9e2aa9a3ccb1249703d8d1d813 in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7377d0e ]

LUCENE-7407: fix stale javadocs


> Explore switching doc values to an iterator API
> ---
>
> Key: LUCENE-7407
> URL: https://issues.apache.org/jira/browse/LUCENE-7407
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>  Labels: docValues
> Fix For: master (7.0)
>
> Attachments: LUCENE-7407.patch
>
>
> I think it could be compelling if we restricted doc values to use an
> iterator API at read time, instead of the more general random access
> API we have today:
>   * It would make doc values disk usage more of a "you pay for what
> you actually use", like postings, which is a compelling
> reduction for sparse usage.
>   * I think codecs could compress better and maybe speed up decoding
> of doc values, even in the non-sparse case, since the read-time
> API is more restrictive "forward only" instead of random access.
>   * We could remove {{getDocsWithField}} entirely, since that's
> implicit in the iteration, and the awkward "return 0 if the
> document didn't have this field" would go away.
>   * We can remove the annoying thread locals we must make today in
> {{CodecReader}}, and close the trappy "I accidentally shared a
> single XXXDocValues instance across threads", since an iterator is
> inherently "use once".
>   * We could maybe leverage the numerous optimizations we've done for
> postings over time, since the two problems ("iterate over doc ids
> and store something interesting for each") are very similar.
> This idea has come up many in the past, e.g. LUCENE-7253 is a recent
> example, and very early iterations of doc values started with exactly
> this ;)
> However, it's a truly enormous change, likely 7.0 only.  Or maybe we
> could have the new iterator APIs also ported to 6.x side by side with
> the deprecated existing random-access APIs.
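As a toy illustration of the proposed read-time contract (the names loosely echo Lucene's iterator style, but this is not the actual DocValues API), a sparse forward-only view stores only the documents that have a value, so "docs with field" is implicit in the iteration itself:

```java
// Toy sketch of a forward-only, sparse doc-values view (not Lucene's API).
// Only documents that actually have a value are stored, so cost tracks usage,
// and there is no separate getDocsWithField bit set to consult.
public class SparseDocValuesSketch {
    private final int[] docs;    // sorted docnums that have a value
    private final long[] values; // value for each of those docs
    private int pos = -1;

    public SparseDocValuesSketch(int[] docs, long[] values) {
        this.docs = docs;
        this.values = values;
    }

    // Forward-only: advance to the next doc that has a value, or -1 at the end.
    public int nextDoc() {
        pos++;
        return pos < docs.length ? docs[pos] : -1;
    }

    public long longValue() { return values[pos]; }

    // Consume by iterating: no random access, no "does doc X have a value?" probes.
    public static long sum(SparseDocValuesSketch dv) {
        long total = 0;
        for (int doc = dv.nextDoc(); doc != -1; doc = dv.nextDoc()) {
            total += dv.longValue();
        }
        return total;
    }

    public static void main(String[] args) {
        SparseDocValuesSketch dv =
            new SparseDocValuesSketch(new int[]{2, 5, 9}, new long[]{10, 20, 30});
        System.out.println(sum(dv)); // 60
    }
}
```

The forward-only restriction is what would let codecs compress like postings: consumers promise never to seek backwards within a segment.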






[jira] [Commented] (SOLR-9549) StreamExpressionTest failures

2016-09-22 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513347#comment-15513347
 ] 

Michael McCandless commented on SOLR-9549:
--

Woops, that is indeed right!  Thanks for fixing [~joel.bernstein].

> StreamExpressionTest failures
> -
>
> Key: SOLR-9549
> URL: https://issues.apache.org/jira/browse/SOLR-9549
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>
> Reproduces for me on master:
> ant test  -Dtestcase=StreamExpressionTest 
> -Dtests.method=testBasicTextLogitStream -Dtests.seed=DB749AA9C9E30657 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=os 
> -Dtests.timezone=Asia/Bahrain -Dtests.asserts=true -Dtests.file.encoding=UTF-8






[jira] [Updated] (SOLR-6677) reduce logging during Solr startup

2016-09-22 Thread Jan Høydahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-6677:
--
Attachment: SOLR-6677.patch

Updated patch. I'll commit this and then keep the issue open a bit longer for 
further commits.

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Jan Høydahl
> Attachments: SOLR-6677.patch, SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise






[jira] [Commented] (SOLR-9548) solr.log should start with informative welcome message

2016-09-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513262#comment-15513262
 ] 

Shawn Heisey commented on SOLR-9548:


I much prefer a real timestamp to an ever-increasing measure of runtime.  
Without it, it's extremely difficult to compare an external timestamp to log 
information.

I'm ambivalent when it comes to thread names in the log.  I have not yet seen 
any situation where the thread names in Solr are useful, but perhaps that's a 
failure of imagination on my part.  I haven't tried to connect different 
logging statements together by thread name.  If they can be useful, we should 
leave them in, but if they aren't useful to most people, we should take them 
out, and explain in the docs how to adjust the config for situations where they 
can be useful.


> solr.log should start with informative welcome message
> --
>
> Key: SOLR-9548
> URL: https://issues.apache.org/jira/browse/SOLR-9548
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9548.patch
>
>
> When starting Solr, the first log line should be more informative, such as
> {code}
> Welcome to Apache Solr™ version 7.0.0, running in standalone mode on port 
> 8983 from folder /Users/janhoy/git/lucene-solr/solr
> {code}






[jira] [Resolved] (SOLR-9544) ObjectReleaseTracker can false-fail on late asynchronous closing resources

2016-09-22 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved SOLR-9544.
-
   Resolution: Fixed
 Assignee: Alan Woodward
Fix Version/s: 6.3

> ObjectReleaseTracker can false-fail on late asynchronous closing resources
> --
>
> Key: SOLR-9544
> URL: https://issues.apache.org/jira/browse/SOLR-9544
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9544.patch
>
>
> SolrTestCaseJ4 assumes that, once its embedded CoreContainer has shutdown, it 
> can check the ObjectReleaseTracker and ensure that all cores are closed.  
> However, if the test has kicked off some asynchronous core reloads then this 
> assumption doesn't necessarily hold, particularly on slow machines.
> See http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/466/ for an 
> example failure.






[jira] [Commented] (SOLR-9544) ObjectReleaseTracker can false-fail on late asynchronous closing resources

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513256#comment-15513256
 ] 

ASF subversion and git services commented on SOLR-9544:
---

Commit c55a14e198072c16a834d5b3683c5edaa0c67e5d in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c55a14e ]

SOLR-9544: Give ObjectReleaseTracker more time for async closing objects


> ObjectReleaseTracker can false-fail on late asynchronous closing resources
> --
>
> Key: SOLR-9544
> URL: https://issues.apache.org/jira/browse/SOLR-9544
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9544.patch
>
>
> SolrTestCaseJ4 assumes that, once its embedded CoreContainer has shutdown, it 
> can check the ObjectReleaseTracker and ensure that all cores are closed.  
> However, if the test has kicked off some asynchronous core reloads then this 
> assumption doesn't necessarily hold, particularly on slow machines.
> See http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/466/ for an 
> example failure.






[jira] [Commented] (SOLR-9544) ObjectReleaseTracker can false-fail on late asynchronous closing resources

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513255#comment-15513255
 ] 

ASF subversion and git services commented on SOLR-9544:
---

Commit 36b39a2c415d812d143ebcbc88d90ecd15754cbb in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=36b39a2 ]

SOLR-9544: Give ObjectReleaseTracker more time for async closing objects


> ObjectReleaseTracker can false-fail on late asynchronous closing resources
> --
>
> Key: SOLR-9544
> URL: https://issues.apache.org/jira/browse/SOLR-9544
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Fix For: 6.3
>
> Attachments: SOLR-9544.patch
>
>
> SolrTestCaseJ4 assumes that, once its embedded CoreContainer has shutdown, it 
> can check the ObjectReleaseTracker and ensure that all cores are closed.  
> However, if the test has kicked off some asynchronous core reloads then this 
> assumption doesn't necessarily hold, particularly on slow machines.
> See http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/466/ for an 
> example failure.






[jira] [Updated] (SOLR-9534) Support quiet/verbose bin/solr options for changing log level

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9534:
--
Attachment: SOLR-9534.patch

New patch that takes advantage of the new {{StartupLoggingUtils}} class.

> Support quiet/verbose bin/solr options for changing log level
> -
>
> Key: SOLR-9534
> URL: https://issues.apache.org/jira/browse/SOLR-9534
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9534.patch, SOLR-9534.patch
>
>
> Spinoff from SOLR-6677
> Let's make it much easier to "turn on debug" by supporting a {{bin/solr start 
> -V}} verbose option, and likewise a {{bin/solr start -q}} for quiet operation.
> These would simply be convenience options for changing the RootLogger from 
> level INFO to DEBUG or WARN respectively. This can be done programmatically 
> in log4j at startup. 
> We may need to add some more package-specific defaults in 
> log4j.properties to get the right mix of logs.
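
For reference, changing the root logger level programmatically at startup is straightforward with the log4j 1.2 API. This is a hedged sketch of what the {{-q}}/{{-V}} options could do, not the actual Solr implementation; the class and method names here are illustrative:

{code}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Sketch: apply the requested verbosity at startup (log4j 1.2 API).
// "-q" quiets the root logger to WARN, "-V" raises it to DEBUG;
// otherwise the INFO default from log4j.properties is kept.
public class LogLevelSketch {
  public static void applyStartupLevel(boolean quiet, boolean verbose) {
    if (quiet) {
      Logger.getRootLogger().setLevel(Level.WARN);
    } else if (verbose) {
      Logger.getRootLogger().setLevel(Level.DEBUG);
    }
  }
}
{code}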






[JENKINS] Lucene-Solr-NightlyTests-6.2 - Build # 8 - Failure

2016-09-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.2/8/

9 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([2D15D3BE9A152021:7108343F54359417]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:794)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart(CdcrReplicationDistributedZkTest.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6131 - Still Failing!

2016-09-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6131/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
available to handle this 
request,trace=org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:392)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:226)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:198)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745) ,time=7}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
available to handle this 
request,trace=org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:392)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:226)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:198)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
,time=7}
at 
__randomizedtesting.SeedInfo.seed([758AE553FDF283F3:FDDEDA89530EEE0B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1172)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1113)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:973)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1011)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513194#comment-15513194
 ] 

ASF subversion and git services commented on SOLR-8186:
---

Commit 7498ca9ad67b25e48e2ae182256864b06d82e186 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7498ca9 ]

SOLR-8186: Added robustness to the dynamic log muting logic

(cherry picked from commit eabb05f)


> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186-robustness.patch, SOLR-8186.patch, 
> SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.
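
Concretely, the proposed split could look something like the fragments below. These property and appender names are illustrative of stock log4j 1.2 configuration, not Solr's actual shipped files:

{code}
# log4j.properties -- used for background starts; no CONSOLE appender,
# only a size-rotated file appender, so the disk cannot fill up.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${solr.log.dir}/solr.log
log4j.appender.file.MaxFileSize=4MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.layout=org.apache.log4j.PatternLayout

# log4j-foreground.properties -- same appenders, plus CONSOLE,
# selected by the start script when running in the foreground:
# log4j.rootLogger=INFO, file, CONSOLE
{code}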






[jira] [Resolved] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-8186.
---
Resolution: Fixed

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186-robustness.patch, SOLR-8186.patch, 
> SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513189#comment-15513189
 ] 

Jan Høydahl commented on SOLR-8186:
---

Yes, they will get a warning in {{solr-8983-console.log}} that we were not able 
to tune down logging.
But then again, if they actually switched to Logback or some other backend, 
they are on their own in configuring that framework, and they may not configure 
any console loggers at all; in that case it is not really a problem that we 
were unable to mute log4j ConsoleAppenders.

+1 to choosing one log backend and not officially support anything else.
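
For context, the kind of console-muting logic being discussed could look roughly like this with the log4j 1.2 API. This is a hedged sketch under the assumption that log4j is the active backend; it is not the actual {{StartupLoggingUtils}} code, and the class and method names are hypothetical:

{code}
import java.util.Enumeration;
import org.apache.log4j.Appender;
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;

// Sketch: detach console appenders from the root logger so that
// background starts do not duplicate all logging to the console log.
public class MuteConsoleSketch {
  public static void muteConsole() {
    Logger root = Logger.getRootLogger();
    Enumeration<?> appenders = root.getAllAppenders();
    while (appenders.hasMoreElements()) {
      Object a = appenders.nextElement();
      if (a instanceof ConsoleAppender) {
        root.removeAppender((Appender) a);
      }
    }
  }
}
{code}

If slf4j is bound to some other backend (e.g. Logback), this code never sees any log4j appenders, which is why a guard that checks the actual binding first, with a warning on mismatch, was added for robustness.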

> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186-robustness.patch, SOLR-8186.patch, 
> SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Commented] (SOLR-8186) Solr start scripts -- only log to console when running in foreground

2016-09-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513166#comment-15513166
 ] 

Shawn Heisey commented on SOLR-8186:


bq. Checks that log4j is actually bound by slf4j

Would this mean that somebody who changes their logging jars on purpose gets a 
warning?

Note that I am not actually opposed to assuming slf4j->log4j (and eventually 
log4j2) in what we incorporate into Solr, particularly as we move to making a 
standalone application.  Taking away the user's choice of logging framework 
would allow us to control logging more effectively from the admin UI.


> Solr start scripts -- only log to console when running in foreground
> 
>
> Key: SOLR-8186
> URL: https://issues.apache.org/jira/browse/SOLR-8186
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.3.1
>Reporter: Shawn Heisey
>Assignee: Jan Høydahl
>  Labels: logging
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-8186-robustness.patch, SOLR-8186.patch, 
> SOLR-8186.patch
>
>
> Currently the log4j.properties file logs to the console, and the start 
> scripts capture console output to a logfile that never rotates.  This can 
> fill up the disk, and when the logfile is removed, the user might be alarmed 
> by the way their memory statistics behave -- the "cached" memory might have a 
> sudden and very large drop, making it appear to a novice that the huge 
> logfile was hogging their memory.
> The logfile created by log4j is rotated when it gets big enough, so that 
> logfile is unlikely to fill up the disk.
> I propose that we copy the current log4j.properties file to something like 
> log4j-foreground.properties, remove CONSOLE logging in the log4j.properties 
> file, and have the start script use the alternate config file when running in 
> the foreground.  This way users will see the logging output when running in 
> the foreground, but it will be absent when running normally.






[jira] [Assigned] (SOLR-9551) Add constructor to JSONWriter which takes wrapperFunction and namedListStyle

2016-09-22 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-9551:
-

Assignee: Christine Poerschke

> Add constructor to JSONWriter which takes wrapperFunction and namedListStyle
> 
>
> Key: SOLR-9551
> URL: https://issues.apache.org/jira/browse/SOLR-9551
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-9551.patch
>
>
> Currently JSONWriter's constructor extracts the wrapperFunction and 
> namedListStyle from the request.
> This patch adds a new constructor where these are passed in from 
> JSONResponseWriter. This will allow us to decide in JSONResponseWriter which 
> writer to construct based on the named list style.
> There is precedent here - GeoJSONResponseWriter extracts geofield from the 
> request and passes it to GeoJSONWriter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


