[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562266#comment-15562266
 ] 

Jan Høydahl commented on SOLR-8370:
---

Want to commit this soon, if no objections?

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Attachments: SOLR-8370.patch
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{<similarity/>}} is in use in the schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562270#comment-15562270
 ] 

ASF subversion and git services commented on LUCENE-7484:
-

Commit 58b64c36751b79e5a1d6aedb2eee74bfa2c4016c in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=58b64c3 ]

LUCENE-7484: FastVectorHighlighter failed to highlight SynonymQuery


> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.
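
For readers following the thread, here is a minimal sketch of the affected path (not the committed patch; the field name, sample text, and fragment sizes are illustrative): a SynonymQuery highlighted with the FastVectorHighlighter, which before this fix produced no fragments.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.SynonymQuery;
import org.apache.lucene.search.vectorhighlight.FastVectorHighlighter;
import org.apache.lucene.search.vectorhighlight.FieldQuery;
import org.apache.lucene.store.RAMDirectory;

public class FvhSynonymSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    // The FastVectorHighlighter needs term vectors with positions and offsets.
    FieldType ft = new FieldType(TextField.TYPE_STORED);
    ft.setStoreTermVectors(true);
    ft.setStoreTermVectorPositions(true);
    ft.setStoreTermVectorOffsets(true);

    Document doc = new Document();
    doc.add(new Field("body", "the quick brown fox", ft));
    w.addDocument(doc);
    w.close();

    IndexReader reader = DirectoryReader.open(dir);
    // SynonymQuery treats "quick" and "fast" as the same term for scoring purposes.
    SynonymQuery query = new SynonymQuery(new Term("body", "quick"), new Term("body", "fast"));

    FastVectorHighlighter fvh = new FastVectorHighlighter();
    FieldQuery fieldQuery = fvh.getFieldQuery(query, reader);
    String[] fragments = fvh.getBestFragments(fieldQuery, reader, 0, "body", 100, 1);
    // Before LUCENE-7484 this array was empty; with the fix the match is wrapped in <b> tags.
    for (String fragment : fragments) {
      System.out.println(fragment);
    }
    reader.close();
  }
}
{code}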



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562273#comment-15562273
 ] 

ASF subversion and git services commented on LUCENE-7484:
-

Commit 6f3eb145344520cfa5c3609f637583841211550d in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6f3eb14 ]

LUCENE-7484: FastVectorHighlighter failed to highlight SynonymQuery


> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562279#comment-15562279
 ] 

Ferenczi Jim commented on LUCENE-7484:
--

Thanks [~mikemccand]. That was fast!

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7484.

Resolution: Fixed

Thanks [~jim.ferenczi]!

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9615) NamedList.asMap method does not convert a NamedList nested in a List

2016-10-10 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562315#comment-15562315
 ] 

Alexandre Rafalovitch commented on SOLR-9615:
-

Where specifically do you see this issue? Can you give a Solr configuration 
example and describe the impact of this issue? I would need to recheck it against 
the latest version (Solr 7) to see what is going on.

> NamedList.asMap method does not convert a NamedList nested in a List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outermost NamedList.
> {noformat}
> NamedList
>  - List
>- NamedList
> {noformat}
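
A minimal sketch of the reported shape, assuming SolrJ ({{org.apache.solr.common.util.NamedList}}) on the classpath; the key names and depth argument are illustrative:

{code}
import java.util.Arrays;
import java.util.Map;
import org.apache.solr.common.util.NamedList;

public class NamedListAsMapSketch {
  public static void main(String[] args) {
    // innermost NamedList, wrapped in a plain java.util.List, wrapped in the outer NamedList
    NamedList<Object> inner = new NamedList<>();
    inner.add("leaf", "value");

    NamedList<Object> outer = new NamedList<>();
    outer.add("items", Arrays.asList(inner));

    // Per the report, asMap converts NamedLists attached directly to the outer list,
    // but not ones that are only reachable through a java.util.List.
    Map<String, Object> map = outer.asMap(10);
    System.out.println(map); // the inner entry may still print as a NamedList, not a Map
  }
}
{code}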



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9614) TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem accessing /solr/admin/cores

2016-10-10 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562322#comment-15562322
 ] 

Mikhail Khludnev commented on SOLR-9614:


It seems to be fixed. I'm sorry for keeping silent. Thanks for your help. Have a 
good flight. 

> TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem 
> accessing /solr/admin/cores
> ---
>
> Key: SOLR-9614
> URL: https://issues.apache.org/jira/browse/SOLR-9614
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9614.patch, SOLR-9614.patch
>
>
> * this occurs after SOLR-9608 commit 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6169/
> * but, I can't get it fixed rolling it back locally. 
> * it doesn't yet happen in branch_6x CI 
> So far I have no idea what to do. 
> Problem log
> {quote}
> ] o.a.s.c.TestSolrCloudWithKerberosAlt Enable delegation token: true
> 12922 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CoreContainer Authentication plugin class obtained from system 
> property 'authenticationPlugin': org.apache.solr.security.KerberosPlugin
> 12931 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 12971 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.s.KerberosPlugin Params: {token.valid=30, 
> kerberos.principal=HTTP/127.0.0.1, 
> kerberos.keytab=C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\keytabs,
>  cookie.domain=127.0.0.1, token.validity=36000, type=kerberos, 
> delegation-token.token-kind=solr-dt, cookie.path=/, 
> zk-dt-secret-manager.znodeWorkingPath=solr/security/zkdtsm, 
> signer.secret.provider.zookeeper.path=/token, 
> zk-dt-secret-manager.enable=true, 
> kerberos.name.rules=RULE:[1:$1@$0](.*EXAMPLE.COM)s/@.*//
> RULE:[2:$2@$0](.*EXAMPLE.COM)s/@.*//
> DEFAULT, signer.secret.provider=zookeeper}
> 13123 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.c.f.i.CuratorFrameworkImpl Starting
> 13133 WARN  (jetty-launcher-1-thread-1-SendThread(127.0.0.1:6)) 
> [n:127.0.0.1:64475_solr] o.a.z.ClientCnxn SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf'.
>  Will continue connection to Zookeeper server without SASL authentication, if 
> Zookeeper server allows it.
> 13145 ERROR (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.ConnectionState Authentication failed
> 13153 INFO  (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.f.s.ConnectionStateManager State change: CONNECTED
> 13632 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 18210 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-002\node1\.
> 20158 ERROR 
> (OverseerThreadFactory-6-thread-1-processing-n:127.0.0.1:56132_solr) 
> [n:127.0.0.1:56132_solr] o.a.s.c.OverseerCollectionMessageHandler Error 
> from shard: http://127.0.0.1:56132/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:56132/solr: Expected mime type 
> application/octet-stream but got text/html. 
> Error 401
> HTTP ERROR: 401
> Problem accessing /solr/admin/cores. Reason:
> Authentication required
> Powered by Jetty:// 9.3.8.v20160314
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>   at 
> 

[jira] [Created] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-10-10 Thread Timo Schmidt (JIRA)
Timo Schmidt created SOLR-9616:
--

 Summary: Solr throws exception when expand=true on empty result
 Key: SOLR-9616
 URL: https://issues.apache.org/jira/browse/SOLR-9616
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.2.1
Reporter: Timo Schmidt
Priority: Critical
 Fix For: 6.2.1


When I run a query with expand=true together with field collapsing and the result set is 
empty, an exception is thrown:

solr:8984/solr/core_en/select?={!collapse 
field=pid}=true=10

Produces:

  "error":{
"msg":"Index: 0, Size: 0",
"trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
java.util.ArrayList.get(ArrayList.java:429)\n\tat 
java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
 org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
 java.lang.Thread.run(Thread.java:745)\n",
"code":500}}

Instead I would expect to get an empty result. 

Is this a bug?
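
A hedged SolrJ sketch of the reproduction (port, core name, and the pid field are taken from the report; the query string itself is illustrative):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ExpandOnEmptyResultSketch {
  public static void main(String[] args) throws Exception {
    SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8984/solr/core_en").build();

    SolrQuery q = new SolrQuery("id:does_not_exist");  // a query that matches nothing
    q.addFilterQuery("{!collapse field=pid}");          // field collapsing on pid
    q.set("expand", "true");                            // ask for the expanded groups
    q.setRows(10);

    // On 6.2.1 this reportedly fails with an HTTP 500 (IndexOutOfBoundsException in
    // ExpandComponent) instead of returning an empty result.
    QueryResponse rsp = client.query(q);
    System.out.println(rsp.getResults().getNumFound());
    client.close();
  }
}
{code}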



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562246#comment-15562246
 ] 

Michael McCandless commented on LUCENE-7484:


Thanks [~jim.ferenczi], patch looks good; I'll push shortly.

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7484:
---
Fix Version/s: 7.0
   6.3

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7484:
---
Fix Version/s: (was: 7.0)
   master (7.0)

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Fix For: master (7.0), 6.3
>
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9614) TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem accessing /solr/admin/cores

2016-10-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-9614:

Attachment: SOLR-9614.patch

Here's a patch that should fix the issue.  I'm about to board a flight, so I'm 
not going to be able to commit for another 10 hours - feel free to commit for 
me!

> TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem 
> accessing /solr/admin/cores
> ---
>
> Key: SOLR-9614
> URL: https://issues.apache.org/jira/browse/SOLR-9614
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9614.patch, SOLR-9614.patch
>
>
> * this occurs after SOLR-9608 commit 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6169/
> * but, I can't get it fixed rolling it back locally. 
> * it doesn't yet happen in branch_6x CI 
> So far I have no idea what to do. 
> Problem log
> {quote}
> ] o.a.s.c.TestSolrCloudWithKerberosAlt Enable delegation token: true
> 12922 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CoreContainer Authentication plugin class obtained from system 
> property 'authenticationPlugin': org.apache.solr.security.KerberosPlugin
> 12931 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 12971 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.s.KerberosPlugin Params: {token.valid=30, 
> kerberos.principal=HTTP/127.0.0.1, 
> kerberos.keytab=C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\keytabs,
>  cookie.domain=127.0.0.1, token.validity=36000, type=kerberos, 
> delegation-token.token-kind=solr-dt, cookie.path=/, 
> zk-dt-secret-manager.znodeWorkingPath=solr/security/zkdtsm, 
> signer.secret.provider.zookeeper.path=/token, 
> zk-dt-secret-manager.enable=true, 
> kerberos.name.rules=RULE:[1:$1@$0](.*EXAMPLE.COM)s/@.*//
> RULE:[2:$2@$0](.*EXAMPLE.COM)s/@.*//
> DEFAULT, signer.secret.provider=zookeeper}
> 13123 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.c.f.i.CuratorFrameworkImpl Starting
> 13133 WARN  (jetty-launcher-1-thread-1-SendThread(127.0.0.1:6)) 
> [n:127.0.0.1:64475_solr] o.a.z.ClientCnxn SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf'.
>  Will continue connection to Zookeeper server without SASL authentication, if 
> Zookeeper server allows it.
> 13145 ERROR (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.ConnectionState Authentication failed
> 13153 INFO  (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.f.s.ConnectionStateManager State change: CONNECTED
> 13632 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 18210 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-002\node1\.
> 20158 ERROR 
> (OverseerThreadFactory-6-thread-1-processing-n:127.0.0.1:56132_solr) 
> [n:127.0.0.1:56132_solr] o.a.s.c.OverseerCollectionMessageHandler Error 
> from shard: http://127.0.0.1:56132/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:56132/solr: Expected mime type 
> application/octet-stream but got text/html. 
> Error 401
> HTTP ERROR: 401
> Problem accessing /solr/admin/cores. Reason:
> Authentication required
> Powered by Jetty:// 9.3.8.v20160314
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>   at 
> 

[jira] [Commented] (SOLR-9614) TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem accessing /solr/admin/cores

2016-10-10 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562019#comment-15562019
 ] 

Alan Woodward commented on SOLR-9614:
-

Ah, I see you got there before me!

> TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem 
> accessing /solr/admin/cores
> ---
>
> Key: SOLR-9614
> URL: https://issues.apache.org/jira/browse/SOLR-9614
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9614.patch, SOLR-9614.patch
>
>
> * this occurs after SOLR-9608 commit 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6169/
> * but, I can't get it fixed rolling it back locally. 
> * it doesn't yet happen in branch_6x CI 
> So far I have no idea what to do. 
> Problem log
> {quote}
> ] o.a.s.c.TestSolrCloudWithKerberosAlt Enable delegation token: true
> 12922 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CoreContainer Authentication plugin class obtained from system 
> property 'authenticationPlugin': org.apache.solr.security.KerberosPlugin
> 12931 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 12971 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.s.KerberosPlugin Params: {token.valid=30, 
> kerberos.principal=HTTP/127.0.0.1, 
> kerberos.keytab=C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\keytabs,
>  cookie.domain=127.0.0.1, token.validity=36000, type=kerberos, 
> delegation-token.token-kind=solr-dt, cookie.path=/, 
> zk-dt-secret-manager.znodeWorkingPath=solr/security/zkdtsm, 
> signer.secret.provider.zookeeper.path=/token, 
> zk-dt-secret-manager.enable=true, 
> kerberos.name.rules=RULE:[1:$1@$0](.*EXAMPLE.COM)s/@.*//
> RULE:[2:$2@$0](.*EXAMPLE.COM)s/@.*//
> DEFAULT, signer.secret.provider=zookeeper}
> 13123 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.c.f.i.CuratorFrameworkImpl Starting
> 13133 WARN  (jetty-launcher-1-thread-1-SendThread(127.0.0.1:6)) 
> [n:127.0.0.1:64475_solr] o.a.z.ClientCnxn SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf'.
>  Will continue connection to Zookeeper server without SASL authentication, if 
> Zookeeper server allows it.
> 13145 ERROR (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.ConnectionState Authentication failed
> 13153 INFO  (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.f.s.ConnectionStateManager State change: CONNECTED
> 13632 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 18210 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-002\node1\.
> 20158 ERROR 
> (OverseerThreadFactory-6-thread-1-processing-n:127.0.0.1:56132_solr) 
> [n:127.0.0.1:56132_solr] o.a.s.c.OverseerCollectionMessageHandler Error 
> from shard: http://127.0.0.1:56132/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:56132/solr: Expected mime type 
> application/octet-stream but got text/html. 
> Error 401
> HTTP ERROR: 401
> Problem accessing /solr/admin/cores. Reason:
> Authentication required
> Powered by Jetty:// 9.3.8.v20160314
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>   at 
> org.apache.solr.handler.component.HttpShardHandler.lambda$0(HttpShardHandler.java:195)
> {quote}



--
This message was sent by 

[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-10-10 Thread Timo Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562057#comment-15562057
 ] 

Timo Schmidt commented on SOLR-9616:


The behaviour is the same in 6.1.0

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2.1
>Reporter: Timo Schmidt
>Priority: Critical
> Fix For: 6.2.1
>
>
> When I run a query with expand=true together with field collapsing and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead I would expect to get an empty result. 
> Is this a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 899 - Still Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/899/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.LargeVolumeEmbeddedTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:188)  at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:344)  at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:859)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:428)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:415)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:299)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:211)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:166)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:957)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1112)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:738)
  at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
  at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250) 
 at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)  at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2106)  at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:178)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)  at 
org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)  at 
org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)  at 
org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)  at 
org.apache.solr.client.solrj.LargeVolumeTestBase$DocThread.run(LargeVolumeTestBase.java:109)
  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:188)
at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:344)
at org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:859)
at org.apache.solr.update.UpdateLog.add(UpdateLog.java:428)
at org.apache.solr.update.UpdateLog.add(UpdateLog.java:415)
at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:299)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:211)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:166)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:48)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:957)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1112)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:738)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2106)
at 

[jira] [Commented] (SOLR-9616) Solr throws exception when expand=true on empty result

2016-10-10 Thread Timo Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562127#comment-15562127
 ] 

Timo Schmidt commented on SOLR-9616:


6.0.0 is not affected

> Solr throws exception when expand=true on empty result
> --
>
> Key: SOLR-9616
> URL: https://issues.apache.org/jira/browse/SOLR-9616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2.1
>Reporter: Timo Schmidt
>Priority: Critical
> Fix For: 6.2.1
>
>
> When I run a query with expand=true together with field collapsing and the result set 
> is empty, an exception is thrown:
> solr:8984/solr/core_en/select?={!collapse 
> field=pid}=true=10
> Produces:
>   "error":{
> "msg":"Index: 0, Size: 0",
> "trace":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0\n\tat 
> java.util.ArrayList.rangeCheck(ArrayList.java:653)\n\tat 
> java.util.ArrayList.get(ArrayList.java:429)\n\tat 
> java.util.Collections$UnmodifiableList.get(Collections.java:1309)\n\tat 
> org.apache.solr.handler.component.ExpandComponent.process(ExpandComponent.java:269)\n\tat
>  
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",
> "code":500}}
> Instead I would expect to get an empty result. 
> Is this a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9325:
--
Attachment: SOLR-9325.patch

Fix UTF-8 BOM for solr.cmd

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7485) Better storage for `docsWithField` in Lucene70NormsFormat

2016-10-10 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7485:


 Summary: Better storage for `docsWithField` in Lucene70NormsFormat
 Key: LUCENE-7485
 URL: https://issues.apache.org/jira/browse/LUCENE-7485
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


Currently {{Lucene70NormsFormat}} uses a bit set to store documents that have a 
norm, and counts the one bits using {{Long.bitCount}} in order to know the index of 
the current document in the set of docs that have a norm value.

I think this is fairly good if a field is moderately sparse (somewhere between 
5% and 99%) but it still has some issues like slow advance by large deltas (it 
still needs to visit all words in order to accumulate the number of ones to 
know the index of a document) or when very few bits are set.

I have been working on a disk-based adaptation of {{RoaringDocIdSet}} that 
would still give the ability to know the index of the current document. It 
seems to be only a bit slower than the current implementation on moderately 
sparse fields. However, it also comes with benefits:
 * it is faster in the sparse case when it uses the sparse encoding that uses 
shorts to store doc IDs (when the density is 6% or less)
 * it has faster advance() by large deltas (still linear, but by a factor of 
65536 so that should always be fine in practice since doc IDs are bound to 2B)
 * it uses O(numDocsWithField) storage rather than O(maxDoc), the worst case being 
about 6 bytes per document, which occurs when each range of 65k docs contains exactly 
one document.
 * it is faster if some ranges of documents that share the same 16 upper bits 
are full, this is useful eg. if there is a single document that misses a field 
in the whole index or for use-cases that would store multiple types of 
documents (with different fields) within a single index and would use index 
sorting to put documents of the same type together
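
To make the layout concrete, here is a hedged sketch of the per-block encoding choice described above (not the attached patch; the 4096-doc threshold roughly corresponds to the 6% density mentioned above, and a completely full block is ignored for simplicity):

{code}
import java.io.IOException;
import org.apache.lucene.store.DataOutput;

/** Hedged sketch, not the attached patch: doc IDs are grouped by their upper 16 bits;
 *  each block of 65536 IDs picks an encoding based on how many docs it contains. */
final class RoaringishBlockWriter {
  static void writeBlock(DataOutput out, int[] docsInBlock) throws IOException {
    int count = docsInBlock.length;      // assumed < 65536 for this sketch
    out.writeShort((short) count);
    if (count < 4096) {
      // sparse block: store the low 16 bits of each doc ID as a short
      for (int doc : docsInBlock) {
        out.writeShort((short) (doc & 0xFFFF));
      }
    } else {
      // dense block: store a 65536-bit bit set (8 KB), cheaper than 2 bytes per doc
      long[] bits = new long[65536 / 64];
      for (int doc : docsInBlock) {
        int low = doc & 0xFFFF;
        bits[low >>> 6] |= 1L << (low & 63);
      }
      for (long word : bits) {
        out.writeLong(word);
      }
    }
  }
}
{code}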



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7485) Better storage for `docsWithField` in Lucene70NormsFormat

2016-10-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7485:
-
Attachment: LUCENE-7485.patch

Here is a patch. I am using norms to play with since they have a smaller API, but 
the idea is to use the same thing for doc values eventually.

> Better storage for `docsWithField` in Lucene70NormsFormat
> -
>
> Key: LUCENE-7485
> URL: https://issues.apache.org/jira/browse/LUCENE-7485
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7485.patch
>
>
> Currently {{Lucene70NormsFormat}} uses a bit set to store documents that have 
> a norm, and counts the one bits using {{Long.bitCount}} in order to know the 
> index of the current document in the set of docs that have a norm value.
> I think this is fairly good if a field is moderately sparse (somewhere 
> between 5% and 99%) but it still has some issues like slow advance by large 
> deltas (it still needs to visit all words in order to accumulate the number 
> of ones to know the index of a document) or when very few bits are set.
> I have been working on a disk-based adaptation of {{RoaringDocIdSet}} that 
> would still give the ability to know the index of the current document. It 
> seems to be only a bit slower than the current implementation on moderately 
> sparse fields. However, it also comes with benefits:
>  * it is faster in the sparse case when it uses the sparse encoding that uses 
> shorts to store doc IDs (when the density is 6% or less)
>  * it has faster advance() by large deltas (still linear, but by a factor of 
> 65536 so that should always be fine in practice since doc IDs are bound to 2B)
>  * it uses O(numDocsWithField) storage rather than O(maxDoc), the worst case being 
> about 6 bytes per document, which occurs when each range of 65k docs contains 
> exactly one document.
>  * it is faster if some ranges of documents that share the same 16 upper bits 
> are full, this is useful eg. if there is a single document that misses a 
> field in the whole index or for use-cases that would store multiple types of 
> documents (with different fields) within a single index and would use index 
> sorting to put documents of the same type together



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr Master Nightly Tests 1126 stuck?

2016-10-10 Thread Kevin Risden
Looks like this build is still running?

Kevin Risden

On Fri, Oct 7, 2016 at 1:39 AM, Dawid Weiss  wrote:

> > I wonder why the build didn't timeout at 2 hours?
>
> It won't if the JVM died (and Runtime.halt() didn't stop the process
> for whatever reason).
>
> > How can we kill it?
>
> It'd be good to try to get a stack (if possible), although I doubt it will
> be.
>
> D.
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (LUCENE-7487) Remove unnecessary synchronization from Lucene70NormsProducer

2016-10-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7487:
-
Description: Slice creation is thread-safe so synchronization is not 
necessary.  (was: The slice API is thread-safe so synchronization is not 
necessary.)

> Remove unnecessary synchronization from Lucene70NormsProducer
> -
>
> Key: LUCENE-7487
> URL: https://issues.apache.org/jira/browse/LUCENE-7487
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
>
> Slice creation is thread-safe so synchronization is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562780#comment-15562780
 ] 

Adrien Grand commented on LUCENE-7486:
--

I'm not sure DisjunctionMaxScorer makes a lot of sense with negative scores, 
but I'd be +1 to initializing {{scoreMax}} with {{Float.NEGATIVE_INFINITY}} like 
Uwe suggested, for consistency.
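
For illustration, a hedged sketch of the suggestion (not the actual DisjunctionMaxScorer source; it assumes at least one matching sub-scorer):

{code}
/** Hedged sketch: dismax-style combination where the running maximum starts at
 *  negative infinity, so negative sub-scores are not clipped to zero. */
final class DisMaxScoreSketch {
  static float score(float[] subScores, float tieBreakerMultiplier) {
    float scoreMax = Float.NEGATIVE_INFINITY;  // the reported code effectively starts at 0
    float scoreSum = 0f;
    for (float subScore : subScores) {
      scoreSum += subScore;
      scoreMax = Math.max(scoreMax, subScore);
    }
    // maximum of the disjuncts plus a fraction of the remaining scores
    return scoreMax + (scoreSum - scoreMax) * tieBreakerMultiplier;
  }
}
{code}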

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7487) Remove unnecessary synchronization from Lucene70NormsProducer

2016-10-10 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562882#comment-15562882
 ] 

Michael McCandless commented on LUCENE-7487:


+1

> Remove unnecessary synchronization from Lucene70NormsProducer
> -
>
> Key: LUCENE-7487
> URL: https://issues.apache.org/jira/browse/LUCENE-7487
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7487.patch
>
>
> Slice creation is thread-safe so synchronization is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9103) Restore ability for users to add custom Streaming Expressions

2016-10-10 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562874#comment-15562874
 ] 

Dennis Gove commented on SOLR-9103:
---

testDynamicLoadingCustomStream is not passing because it cannot find 
runtimecode/HelloStream.class. Note that I did add the file 
solr/core/src/test-files/runtimecode/HelloStream.java, but the test does not appear 
to be able to find the compiled .class for it. I know you provided a .class, but I'm 
not sure I'm comfortable adding a .class file to the source tree.

The test does pass if I run it directly in Eclipse, however.

{code}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCustomStream 
-Dtests.method=testDynamicLoadingCustomStream -Dtests.seed=96673E541CBCF992 
-Dtests.slow=true -Dtests.locale=fr-CH -Dtests.timezone=Europe/Sarajevo 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   28.5s | TestCustomStream.testDynamicLoadingCustomStream <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: Cannot find resource 
in classpath or in file-system (relative to CWD): runtimecode/HelloStream.class
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([96673E541CBCF992:E6D23776E646B225]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.getFile(SolrTestCaseJ4.java:1798)
   [junit4]>at 
org.apache.solr.core.TestDynamicLoading.getFileContent(TestDynamicLoading.java:261)
   [junit4]>at 
org.apache.solr.core.TestCustomStream.testDynamicLoadingCustomStream(TestCustomStream.java:73)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{code}

> Restore ability for users to add custom Streaming Expressions
> -
>
> Key: SOLR-9103
> URL: https://issues.apache.org/jira/browse/SOLR-9103
> Project: Solr
>  Issue Type: Improvement
>Reporter: Cao Manh Dat
>Assignee: Joel Bernstein
> Attachments: HelloStream.class, SOLR-9103.PATCH, SOLR-9103.PATCH
>
>
> StreamHandler is an implicit handler. So to make it extensible, we can 
> introduce the below syntax in solrconfig.xml. 
> {code}
> 
> {code}
> This will add a hello function to the streamFactory of the StreamHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563078#comment-15563078
 ] 

Uwe Schindler commented on LUCENE-7486:
---

Of course, the comment was just about your suggestion. We can certainly fix 
this in Lucene; it is not a risk at all.

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563075#comment-15563075
 ] 

Uwe Schindler commented on LUCENE-7486:
---

I can take care of this! It should be a one-line change only, if all tests pass.

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7487) Remove unnecessary synchronization from Lucene70NormsProducer

2016-10-10 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7487:
-
Attachment: LUCENE-7487.patch

Here is a patch that also adds a threaded test to BaseNormsFormatTestCase.
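
For illustration only (this is not the patch itself), a minimal sketch of the shape such a 
threaded read test can take: several threads hammer a reader that is expected to be safe 
for concurrent use without any external locking. The {{Reader}} interface and method names 
below are placeholders, not Lucene APIs.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class ThreadedReadSketch {
  // Placeholder for a per-document value reader (e.g. a norms producer's view).
  interface Reader { long get(int doc); }

  static void assertConcurrentReads(Reader reader, int maxDoc, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicReference<Throwable> failure = new AtomicReference<>();
    for (int t = 0; t < threads; t++) {
      pool.submit(() -> {
        try {
          for (int doc = 0; doc < maxDoc; doc++) {
            reader.get(doc); // must behave correctly without any synchronization
          }
        } catch (Throwable e) {
          failure.compareAndSet(null, e);
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
    if (failure.get() != null) {
      throw new AssertionError("concurrent read failed", failure.get());
    }
  }
}
{code}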

> Remove unnecessary synchronization from Lucene70NormsProducer
> -
>
> Key: LUCENE-7487
> URL: https://issues.apache.org/jira/browse/LUCENE-7487
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7487.patch
>
>
> Slice creation is thread-safe so synchronization is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7487) Remove unnecessary synchronization from Lucene70NormsProducer

2016-10-10 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7487:


 Summary: Remove unnecessary synchronization from 
Lucene70NormsProducer
 Key: LUCENE-7487
 URL: https://issues.apache.org/jira/browse/LUCENE-7487
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


The slice API is thread-safe so synchronization is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler reassigned LUCENE-7486:
-

Assignee: Uwe Schindler

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr Master Nightly Tests 1126 stuck?

2016-10-10 Thread Dawid Weiss
Thanks Uwe, this helps a lot!

There is a resource deadlock here (an interplay of loggers, sysouts
and junit4 stream redirectors and uncaught exception handlers...).
It's really complex, but I'll try to get to the bottom of it.

This completely aside, over 40 THOUSAND threads are hanging inside
jetty's http handlers... there should be a more reasonable limit to
this I guess?!

"qtp1445698227-45502" #45502 prio=5 os_prio=0 tid=0x7f5f5447c000
nid=0x4ec1 waiting for monitor entry [0x7f5f26327000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.log4j.Category.callAppenders(Category.java:204)
- waiting to lock <0xe00a8348> (a org.apache.log4j.spi.RootLogger)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:497)
at org.apache.solr.common.SolrException.log(SolrException.java:159)
at org.apache.solr.servlet.ResponseUtils.getErrorInfo(ResponseUtils.java:65)
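
As a purely illustrative aside on capping that growth (the numbers and the embedded-server
setup are my assumptions, not what the test framework actually configures), Jetty's handler
thread pool can be bounded when the server is constructed:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class BoundedJettySketch {
  public static void main(String[] args) throws Exception {
    // Cap handler threads at 200 so a misbehaving run cannot spawn tens of
    // thousands of them; 8 threads are kept alive when idle.
    QueuedThreadPool pool = new QueuedThreadPool(200, 8);
    Server server = new Server(pool);
    server.start();
    server.join();
  }
}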

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9337:
-
Attachment: SOLR-9337.patch

> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9617) Add Field Type RemoteFileField

2016-10-10 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563051#comment-15563051
 ] 

Keith Laban commented on SOLR-9617:
---

Are you sure that is the right ticket? I don't see the relevance.

> Add Field Type RemoteFileField
> --
>
> Key: SOLR-9617
> URL: https://issues.apache.org/jira/browse/SOLR-9617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>
> RemoteFileField extends from ExternalFileField. The purpose of this field 
> type extension is to download an external file from a remote location (e.g. 
> S3 or artifactory) to a local location to be used as an external file field. 
> URLs are maintained as a ManagedResource and can be PUT as a fieldName -> url 
> mapping. Additionally there is a RequestHandler that will redownload all 
> RemoteFileFields. This request handler also distributes the request to all 
> live nodes in the cluster. The RequestHandler also implements SolrCoreAware 
> and will redownload all files when called (i.e. whenever a core is loaded).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9619) Create Collection screen cuts off labels

2016-10-10 Thread Mike Drob (JIRA)
Mike Drob created SOLR-9619:
---

 Summary: Create Collection screen cuts off labels
 Key: SOLR-9619
 URL: https://issues.apache.org/jira/browse/SOLR-9619
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: UI
Affects Versions: master (7.0)
 Environment: Ubuntu 14.04
Firefox 50.0b5
Reporter: Mike Drob
Priority: Minor


Was running a Solr 7.0 snapshot (commit 5ef60af) and noticed that the create 
collection pop up cuts off some of the argument names. Specifically, the 
{{replicationFactor}} and {{maxShardsPerNode}}.

Would be nice to use a bigger box or line wrap there, maybe. Have not tested 
other versions, but also saw the same behaviour on Chrome 53.0.2785.143 on 
Ubuntu.

Screen shot attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9617) Add Field Type RemoteFileField

2016-10-10 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562955#comment-15562955
 ] 

Mikhail Khludnev commented on SOLR-9617:


I suppose that such functionality is expected to be provided via SOLR-5944

> Add Field Type RemoteFileField
> --
>
> Key: SOLR-9617
> URL: https://issues.apache.org/jira/browse/SOLR-9617
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>
> RemoteFileField extends from ExternalFileField. The purpose of this field 
> type extension is to download an external file from a remote location (e.g. 
> S3 or artifactory) to a local location to be used as an external file field. 
> URLs are maintained as a ManagedResource and can be PUT as a fieldName -> url 
> mapping. Additionally there is a RequestHandler that will redownload all 
> RemoteFileFields. This request handler also distributes the request to all 
> live nodes in the cluster. The RequestHandler also implements SolrCoreAware 
> and will redownload all files when called (i.e. whenever a core is loaded).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr Master Nightly Tests 1126 stuck?

2016-10-10 Thread Dawid Weiss
Well, somebody with root access would have to try to jps the forked
process (pid 6761) and see if he or she can get the stacktrace.

D.

On Mon, Oct 10, 2016 at 4:29 PM, Kevin Risden  wrote:
> Looks like this build is still running?
>
> Kevin Risden
>
> On Fri, Oct 7, 2016 at 1:39 AM, Dawid Weiss  wrote:
>>
>> > I wonder why the build didn't timeout at 2 hours?
>>
>> It won't if the JVM died (and Runtime.halt() didn't stop the process
>> for whatever reason).
>>
>> > How can we kill it?
>>
>> It'd be good to try to get a stack (if possible), although I doubt it will
>> be.
>>
>> D.
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Ivan Provalov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562771#comment-15562771
 ] 

Ivan Provalov commented on LUCENE-7486:
---

Good point, Uwe.  Is there a reason it shouldn't be done in Lucene source?

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7488) Consider tracking modification time of external file fields for faster reloading

2016-10-10 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated LUCENE-7488:
-
Description: 
I have an index of about 4M legal documents that has pagerank boosting 
configured as an external file field. The external file is about 100MB in size 
and has one row per document in the index. Each row indicates the pagerank 
score of a document. When we open new searchers, this file has to get reloaded, 
and it creates a noticeable delay for our users -- takes several seconds to 
reload. 

An idea to fix this came up in [a recent discussion in the Solr mailing 
list|https://www.mail-archive.com/solr-user@lucene.apache.org/msg125521.html]: 
Could the file only be reloaded if it has changed on disk? In other words, when 
new searchers are opened, could they check the modtime of the file, and avoid 
reloading it if the file hasn't changed? 

In our configuration, this would be a big improvement. We only change the 
pagerank file once/week because computing it is intensive and new documents 
don't tend to have a big impact. At the same time, because we're regularly 
adding new documents, we do hundreds of commits per day, all of which have a 
delay as the (largish) external file field is reloaded. 

Is this a reasonable improvement to request? 

  was:
I have an index of about 4M legal documents that has pagerank boosting 
configured as an external file field. The external file is about 100MB in size 
and has one row per document in the index. Each row indicates the pagerank 
score of a document. When we open new searchers, this file has to get reloaded, 
and it creates a noticeable delay for our users -- takes several seconds to 
reload. 

An idea to fix this came up in [a recent 
discussion|https://www.mail-archive.com/solr-user@lucene.apache.org/msg125521.html]:
 Could the file only be reloaded if it has changed on disk? In other words, 
when new searchers are opened, could they check the modtime of the file, and 
avoid reloading it if the file hasn't changed? 

In our configuration, this would be a big improvement. We only change the 
pagerank file once/week because computing it is intensive and new documents 
don't tend to have a big impact. At the same time, because we're regularly 
adding new documents, we do hundreds of commits per day, all of which have a 
delay as the (largish) external file field is reloaded. 

Is this a reasonable improvement to request? 


> Consider tracking modification time of external file fields for faster 
> reloading
> 
>
> Key: LUCENE-7488
> URL: https://issues.apache.org/jira/browse/LUCENE-7488
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 4.10.4
> Environment: Linux
>Reporter: Mike
>
> I have an index of about 4M legal documents that has pagerank boosting 
> configured as an external file field. The external file is about 100MB in 
> size and has one row per document in the index. Each row indicates the 
> pagerank score of a document. When we open new searchers, this file has to 
> get reloaded, and it creates a noticeable delay for our users -- takes 
> several seconds to reload. 
> An idea to fix this came up in [a recent discussion in the Solr mailing 
> list|https://www.mail-archive.com/solr-user@lucene.apache.org/msg125521.html]:
>  Could the file only be reloaded if it has changed on disk? In other words, 
> when new searchers are opened, could they check the modtime of the file, and 
> avoid reloading it if the file hasn't changed? 
> In our configuration, this would be a big improvement. We only change the 
> pagerank file once/week because computing it is intensive and new documents 
> don't tend to have a big impact. At the same time, because we're regularly 
> adding new documents, we do hundreds of commits per day, all of which have a 
> delay as the (largish) external file field is reloaded. 
> Is this a reasonable improvement to request? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Lucene/Solr Master Nightly Tests 1126 stuck?

2016-10-10 Thread Uwe Schindler
Hi,

the trace is in the workspace:
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/ws/

The trace file is 9 Megabytes, but I attached it in bzip2 format.
I am working on killing processes! There are multiple hung processes still 
there.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Uwe Schindler [mailto:u...@thetaphi.de]
> Sent: Monday, October 10, 2016 6:45 PM
> To: dev@lucene.apache.org; Dawid Weiss 
> Subject: Re: Lucene/Solr Master Nightly Tests 1126 stuck?
> 
> I will look into this.
> 
> Am 10. Oktober 2016 18:38:40 MESZ, schrieb Dawid Weiss
> :
> >Well, somebody with root access would have to try to jps the forked
> >process (pid 6761) and see if he or she can get the stacktrace.
> >
> >D.
> >
> >On Mon, Oct 10, 2016 at 4:29 PM, Kevin Risden
> > wrote:
> >> Looks like this build is still running?
> >>
> >> Kevin Risden
> >>
> >> On Fri, Oct 7, 2016 at 1:39 AM, Dawid Weiss 
> >wrote:
> >>>
> >>> > I wonder why the build didn't timeout at 2 hours?
> >>>
> >>> It won't if the JVM died (and Runtime.halt() didn't stop the process
> >>> for whatever reason).
> >>>
> >>> > How can we kill it?
> >>>
> >>> It'd be good to try to get a stack (if possible), although I doubt
> >it will
> >>> be.
> >>>
> >>> D.
> >>>
> >>>
> >-
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >>
> >
> >-
> >To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> --
> Uwe Schindler
> H.-H.-Meier-Allee 63, 28213 Bremen
> http://www.thetaphi.de
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


trace.log.bz2
Description: Binary data

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-10 Thread Dawid Weiss (JIRA)
Dawid Weiss created SOLR-9618:
-

 Summary: Tests hang on a forked process (deadlock inside the 
process)
 Key: SOLR-9618
 URL: https://issues.apache.org/jira/browse/SOLR-9618
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Dawid Weiss
Assignee: Dawid Weiss






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-9618:

Attachment: trace.log.bz2

Here is the stack trace

> Tests hang on a forked process (deadlock inside the process)
> 
>
> Key: SOLR-9618
> URL: https://issues.apache.org/jira/browse/SOLR-9618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: trace.log.bz2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563054#comment-15563054
 ] 

ASF subversion and git services commented on SOLR-9337:
---

Commit ee3f9e1e058ac4205140b909a85d43fdd715ddb7 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ee3f9e1 ]

SOLR-9337: Add fetch Streaming Expression


> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7488) Consider tracking modification time of external file fields for faster reloading

2016-10-10 Thread Mike (JIRA)
Mike created LUCENE-7488:


 Summary: Consider tracking modification time of external file 
fields for faster reloading
 Key: LUCENE-7488
 URL: https://issues.apache.org/jira/browse/LUCENE-7488
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Affects Versions: 4.10.4
 Environment: Linux
Reporter: Mike


I have an index of about 4M legal documents that has pagerank boosting 
configured as an external file field. The external file is about 100MB in size 
and has one row per document in the index. Each row indicates the pagerank 
score of a document. When we open new searchers, this file has to get reloaded, 
and it creates a noticeable delay for our users -- takes several seconds to 
reload. 

An idea to fix this came up in [a recent 
discussion|https://www.mail-archive.com/solr-user@lucene.apache.org/msg125521.html]:
 Could the file only be reloaded if it has changed on disk? In other words, 
when new searchers are opened, could they check the modtime of the file, and 
avoid reloading it if the file hasn't changed? 

In our configuration, this would be a big improvement. We only change the 
pagerank file once/week because computing it is intensive and new documents 
don't tend to have a big impact. At the same time, because we're regularly 
adding new documents, we do hundreds of commits per day, all of which have a 
delay as the (largish) external file field is reloaded. 

Is this a reasonable improvement to request? 
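
For illustration only (this is not Solr's ExternalFileField code), the idea boils down to
caching the parsed values keyed by the file's last-modified time and re-parsing only when
that time changes:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class ModTimeCachedScores {
  private FileTime loadedModTime; // modtime of the data currently cached
  private float[] scores;         // cached per-document scores

  public synchronized float[] get(Path file) throws IOException {
    FileTime onDisk = Files.getLastModifiedTime(file);
    if (loadedModTime == null || !loadedModTime.equals(onDisk)) {
      scores = parse(file);       // the expensive ~100MB parse, only when the file changed
      loadedModTime = onDisk;
    }
    return scores;
  }

  private float[] parse(Path file) throws IOException {
    // Placeholder for the real "one docid=score per line" parse.
    return new float[0];
  }
}
{code}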



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1126 - Still Failing

2016-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1126/

No tests ran.

Build Log:
[...truncated 11625 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp/junit4-J1-20161005_083740_205.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/heapdumps/java_pid6760.hprof
 ...
   [junit4] Heap dump file created [714278651 bytes in 16.125 secs]
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp/junit4-J1-20161005_083740_205.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] <<< JVM J1: EOF 

[...truncated 32 lines...]
   [junit4] Suite: org.apache.solr.cloud.ConcurrentDeleteAndCreateCollectionTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.ConcurrentDeleteAndCreateCollectionTest_BF6C5C74A9CEFC3D-001/init-core-data-001
   [junit4]   2> 1786063 INFO  
(SUITE-ConcurrentDeleteAndCreateCollectionTest-seed#[BF6C5C74A9CEFC3D]-worker) 
[] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1786071 INFO  
(TEST-ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteOverTheSameConfig-seed#[BF6C5C74A9CEFC3D])
 [] o.a.s.SolrTestCaseJ4 ###Starting 
testConcurrentCreateAndDeleteOverTheSameConfig
   [junit4]   2> 1786071 INFO  
(TEST-ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteOverTheSameConfig-seed#[BF6C5C74A9CEFC3D])
 [] o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.ConcurrentDeleteAndCreateCollectionTest_BF6C5C74A9CEFC3D-001/tempDir-001
   [junit4]   2> 1786071 INFO  
(TEST-ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteOverTheSameConfig-seed#[BF6C5C74A9CEFC3D])
 [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1786076 INFO  (Thread-1725) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1786076 INFO  (Thread-1725) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1786179 INFO  
(TEST-ConcurrentDeleteAndCreateCollectionTest.testConcurrentCreateAndDeleteOverTheSameConfig-seed#[BF6C5C74A9CEFC3D])
 [] o.a.s.c.ZkTestServer start zk server on port:58815
   [junit4]   2> 1786239 INFO  (jetty-launcher-799-thread-1) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 1786283 INFO  (jetty-launcher-799-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@d571903{/solr,null,AVAILABLE}
   [junit4]   2> 1786286 INFO  (jetty-launcher-799-thread-1) [] 
o.e.j.s.ServerConnector Started ServerConnector@3d714097{SSL,[ssl, 
http/1.1]}{127.0.0.1:56041}
   [junit4]   2> 1786286 INFO  (jetty-launcher-799-thread-1) [] 
o.e.j.s.Server Started @1797249ms
   [junit4]   2> 1786286 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=56041}
   [junit4]   2> 1786286 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
7.0.0
   [junit4]   2> 1786287 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1786287 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1786287 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2016-10-05T09:07:37.802Z
   [junit4]   2> 1786344 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1786404 INFO  (jetty-launcher-799-thread-1) [] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:58815/solr
   [junit4]   2> 1786528 INFO  (jetty-launcher-799-thread-1) 
[n:127.0.0.1:56041_solr] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:56041_solr
   [junit4]   2> 1786529 INFO  (jetty-launcher-799-thread-1) 
[n:127.0.0.1:56041_solr] o.a.s.c.Overseer Overseer 
(id=96708752676749315-127.0.0.1:56041_solr-n_00) starting
   [junit4]   2> 1786580 INFO  (jetty-launcher-799-thread-1) 
[n:127.0.0.1:56041_solr] o.a.s.c.ZkController 

Re: Lucene/Solr Master Nightly Tests 1126 stuck?

2016-10-10 Thread Uwe Schindler
I will look into this.

Am 10. Oktober 2016 18:38:40 MESZ, schrieb Dawid Weiss :
>Well, somebody with root access would have to try to jps the forked
>process (pid 6761) and see if he or she can get the stacktrace.
>
>D.
>
>On Mon, Oct 10, 2016 at 4:29 PM, Kevin Risden
> wrote:
>> Looks like this build is still running?
>>
>> Kevin Risden
>>
>> On Fri, Oct 7, 2016 at 1:39 AM, Dawid Weiss 
>wrote:
>>>
>>> > I wonder why the build didn't timeout at 2 hours?
>>>
>>> It won't if the JVM died (and Runtime.halt() didn't stop the process
>>> for whatever reason).
>>>
>>> > How can we kill it?
>>>
>>> It'd be good to try to get a stack (if possible), although I doubt
>it will
>>> be.
>>>
>>> D.
>>>
>>>
>-
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
H.-H.-Meier-Allee 63, 28213 Bremen
http://www.thetaphi.de

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9619) Create Collection screen cuts off labels

2016-10-10 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9619:

Attachment: screenshot-1.png

> Create Collection screen cuts off labels
> 
>
> Key: SOLR-9619
> URL: https://issues.apache.org/jira/browse/SOLR-9619
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: master (7.0)
> Environment: Ubuntu 14.04
> Firefox 50.0b5
>Reporter: Mike Drob
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> Was running a Solr 7.0 snapshot (commit 5ef60af) and noticed that the create 
> collection pop up cuts off some of the argument names. Specifically, the 
> {{replicationFactor}} and {{maxShardsPerNode}}.
> Would be nice to use a bigger box or line wrap there, maybe. Have not tested 
> other versions, but also saw the same behaviour on Chrome 53.0.2785.143 on 
> Ubuntu.
> Screen shot attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563199#comment-15563199
 ] 

Uwe Schindler commented on SOLR-9618:
-

BTW, it is interesting that on Solaris this type of hanging thread seems to 
occur much more often. On the Solaris Jenkins node on Policeman, many jobs, 
including ones outside of Solr, hang in a similar way. If it happens again, I 
will catch a stack trace, too.

> Tests hang on a forked process (deadlock inside the process)
> 
>
> Key: SOLR-9618
> URL: https://issues.apache.org/jira/browse/SOLR-9618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: trace.log.bz2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563184#comment-15563184
 ] 

Dawid Weiss commented on SOLR-9618:
---

Thanks Mikhail, but knowing what's stuck on what is one thing and knowing why 
it got there is another (I actually did lock ownership analysis too). Check out 
this stack trace in full:
{code}
"zkCallback-3166-thread-10" #47841 prio=5 os_prio=0 tid=0x7f5f800eb000 
nid=0x5912 waiting for monitor entry [0x7f5e94c1a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.util.logging.StreamHandler.publish(StreamHandler.java:206)
- waiting to lock <0xe0e6e2b8> (a 
java.util.logging.ConsoleHandler)
at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:116)
at java.util.logging.Logger.log(Logger.java:738)
at java.util.logging.Logger.doLog(Logger.java:765)
at java.util.logging.Logger.log(Logger.java:876)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler.uncaughtException(RandomizedRunner.java:524)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1057)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052)
at java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052)
at 
com.carrotsearch.randomizedtesting.RunnerThreadGroup.uncaughtException(RunnerThreadGroup.java:32)
at java.lang.Thread.dispatchUncaughtException(Thread.java:1956)
{code}

it originates from a private method that is invoked by the JVM... 

I see an immediate patch in removing this from RR:
{code}
  Logger.getLogger(RunnerThreadGroup.class.getSimpleName()).log(
  Level.WARNING,
  "Uncaught exception in thread: " + t, e);
{code}

but I'd like to understand how this interaction can be reliably repeated; 
perhaps there is a deeper problem that needs resolving.

> Tests hang on a forked process (deadlock inside the process)
> 
>
> Key: SOLR-9618
> URL: https://issues.apache.org/jira/browse/SOLR-9618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: trace.log.bz2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9609) Change hard-coded keysize from 512 to 1024

2016-10-10 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563395#comment-15563395
 ] 

Hrishikesh Gadre commented on SOLR-9609:


[~erickerickson] Since this is a cluster-wide (rather than a host- or 
server-specific) configuration, I think it should come from security.json rather than 
a system property. This will also allow us to make other parameters (e.g. 
algorithm name) configurable. What do you think?
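
For illustration only, a minimal sketch of parameterizing the key size with a 1024-bit
default; how the value is sourced (security.json vs. a system property) is exactly the open
question here, so that plumbing is deliberately left out:

{code:java}
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class RSAKeyPairSketch {
  private final KeyPairGenerator keyGen;

  // keySize would come from cluster configuration; 1024 or larger avoids the
  // "Strong key gen and multiprime gen require at least 1024-bit keysize" error.
  public RSAKeyPairSketch(int keySize) throws NoSuchAlgorithmException {
    keyGen = KeyPairGenerator.getInstance("RSA");
    keyGen.initialize(keySize);
  }

  public RSAKeyPairSketch() throws NoSuchAlgorithmException {
    this(1024);
  }
}
{code}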

> Change hard-coded keysize from 512 to 1024
> --
>
> Key: SOLR-9609
> URL: https://issues.apache.org/jira/browse/SOLR-9609
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jeremy Martini
> Attachments: SOLR-9609.patch, SOLR-9609.patch, solr.log
>
>
> In order to configure our dataSource without requiring a plaintext password 
> in the configuration file, we extended JdbcDataSource to create our own 
> custom implementation. Our dataSource config now looks something like this:
> {code:xml}
>  url="jdbc:oracle:thin:@db-host-machine:1521:tst1" user="testuser" 
> password="{ENC}{1.1}1ePOfWcbOIU056gKiLTrLw=="/>
> {code}
> We are using the RSA JSAFE Crypto-J libraries for encrypting/decrypting the 
> password. However, this seems to cause an issue when we try to use Solr in a 
> Cloud Configuration (using Zookeeper). The error is "Strong key gen and 
> multiprime gen require at least 1024-bit keysize." Full log attached.
> This seems to be due to the hard-coded value of 512 in the 
> org.apache.solr.util.CryptoKeys$RSAKeyPair class:
> {code:java}
> public RSAKeyPair() {
>   KeyPairGenerator keyGen = null;
>   try {
> keyGen = KeyPairGenerator.getInstance("RSA");
>   } catch (NoSuchAlgorithmException e) {
> throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
>   }
>   keyGen.initialize(512);
> {code}
> I pulled down the Solr code, changed the hard-coded value to 1024, rebuilt 
> it, and now everything seems to work great.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7485) Better storage for `docsWithField` in Lucene70NormsFormat

2016-10-10 Thread Ryan Ernst (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563393#comment-15563393
 ] 

Ryan Ernst commented on LUCENE-7485:


Looks good! A couple minor suggestions:
* I would change {{MAX_ARRAY_LENGTH}} to {{(1 << 12) - 1}} (and adjust 
comparisons accordingly), so that the buffer array is actually created with 
this exact value (otherwise the name is confusing).
* Add explicit tests around the edges of sparse to dense?

> Better storage for `docsWithField` in Lucene70NormsFormat
> -
>
> Key: LUCENE-7485
> URL: https://issues.apache.org/jira/browse/LUCENE-7485
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7485.patch
>
>
> Currently {{Lucene70NormsFormat}} uses a bit set to store documents that have 
> a norm, and counts one bits using {{Long.bitCount}} in order to know the 
> index of the current document in the set of docs that have a norm value.
> I think this is fairly good if a field is moderately sparse (somewhere 
> between 5% and 99%) but it still has some issues like slow advance by large 
> deltas (it still needs to visit all words in order to accumulate the number 
> of ones to know the index of a document) or when very few bits are set.
> I have been working on a disk-based adaptation of {{RoaringDocIdSet}} that 
> would still give the ability to know the index of the current document. It 
> seems to be only a bit slower than the current implementation on moderately 
> sparse fields. However, it also comes with benefits:
>  * it is faster in the sparse case when it uses the sparse encoding that uses 
> shorts to store doc IDs (when the density is 6% or less)
>  * it has faster advance() by large deltas (still linear, but by a factor of 
> 65536 so that should always be fine in practice since doc IDs are bound to 2B)
>  * it uses O(numDocsWithField) storage rather than O(maxDoc), the worst case 
> in 6 bytes per field, which occurs when each range of 65k docs contains 
> exactly one document.
>  * it is faster if some ranges of documents that share the same 16 upper bits 
> are full, this is useful eg. if there is a single document that misses a 
> field in the whole index or for use-cases that would store multiple types of 
> documents (with different fields) within a single index and would use index 
> sorting to put documents of the same type together
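
To make the encoding trade-off described above concrete, here is a self-contained sketch
(not the actual Lucene70 on-disk format) of choosing, per block of 65536 doc IDs, between a
sparse encoding that stores 16-bit doc-ID suffixes and a dense bit set:

{code:java}
import java.util.BitSet;

public class BlockEncodingSketch {
  static final int BLOCK_SIZE = 1 << 16;                      // 65536 docs per block
  static final int SPARSE_THRESHOLD = BLOCK_SIZE * 6 / 100;   // roughly the 6% density cutoff

  // Encode one block of doc IDs that all share the same upper 16 bits.
  static Object encodeBlock(int[] docsInBlock) {
    if (docsInBlock.length <= SPARSE_THRESHOLD) {
      // Sparse: store only the lower 16 bits of each doc ID, 2 bytes per document.
      short[] lower16 = new short[docsInBlock.length];
      for (int i = 0; i < docsInBlock.length; i++) {
        lower16[i] = (short) (docsInBlock[i] & 0xFFFF);
      }
      return lower16;
    } else {
      // Dense: an 8KB bit set for the whole block, regardless of how many bits are set.
      BitSet bits = new BitSet(BLOCK_SIZE);
      for (int doc : docsInBlock) {
        bits.set(doc & 0xFFFF);
      }
      return bits;
    }
  }
}
{code}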



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9618) Tests hang on a forked process (deadlock inside the process)

2016-10-10 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563252#comment-15563252
 ] 

Dawid Weiss commented on SOLR-9618:
---

Sure, please do! It'd be very helpful to get more stack traces from such 
situations. One very dirty solution to the immediate problem of builds hanging 
for days is to use the super-duper JVM option telling it to halt itself after a 
deadline... this could be passed to forked off junit subprocesses... Works on 
OpenJDK JVMs (didn't check 9).

{code}
  product(intx, SelfDestructTimer, 0,   \
  "Will cause VM to terminate after a given time (in minutes) " \
  "(0 means off)")  \
{code}
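
As a usage illustration (the runner jar name and the 180-minute deadline are arbitrary
placeholders, not the actual build configuration), the flag would simply be added to the
forked JVM's command line, e.g. via a launcher like this:

{code:java}
import java.io.IOException;

public class ForkWithDeadline {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Fork a test JVM that halts itself after 180 minutes, so a deadlocked
    // fork cannot keep a build alive for days. "test-runner.jar" is a placeholder.
    Process forked = new ProcessBuilder(
        "java", "-XX:SelfDestructTimer=180", "-jar", "test-runner.jar")
        .inheritIO()
        .start();
    System.exit(forked.waitFor());
  }
}
{code}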

> Tests hang on a forked process (deadlock inside the process)
> 
>
> Key: SOLR-9618
> URL: https://issues.apache.org/jira/browse/SOLR-9618
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: trace.log.bz2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9589) Review and remove Jackson dependency from SolrJ

2016-10-10 Thread Eric Pugh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563377#comment-15563377
 ] 

Eric Pugh commented on SOLR-9589:
-

I updated to the 6.2.2-SNAPSHOT, as I ran into a dropwizard/SolrJ conflict on 
Jackson!

However, both the 6.2.2-SNAPSHOT and 6.3.0-SNAPSHOT pom files still include 
Jackson, so I had to add exclusions as below. Is there a code change needed so that 
the nightly .pom files get updated?

```
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>6.2.2-SNAPSHOT</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

https://repository.apache.org/content/groups/snapshots/org/apache/solr/solr-solrj/6.3.0-SNAPSHOT/solr-solrj-6.3.0-20161002.040738-18.pom

> Review and remove Jackson dependency from SolrJ
> ---
>
> Key: SOLR-9589
> URL: https://issues.apache.org/jira/browse/SOLR-9589
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9589.patch, SOLR-9589.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8370:
--
Attachment: SOLR-8370.patch

Agree. New patch.

* Display only toString, put abbreviated classname in toolTip
   !screenshot-2.png!
* Added toString methods to SweetSpot and TFIDFSimilarity

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Attachments: SOLR-8370.patch, SOLR-8370.patch, screenshot-1.png, 
> screenshot-2.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{}} that is in use in schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Ivan Provalov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563360#comment-15563360
 ] 

Ivan Provalov commented on LUCENE-7486:
---

Thanks, Uwe!

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Ivan Provalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated LUCENE-7486:
--
Comment: was deleted

(was: Thanks, Uwe!)

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Ivan Provalov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563361#comment-15563361
 ] 

Ivan Provalov commented on LUCENE-7486:
---

Thanks, Uwe!

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7474) Improve doc values writers

2016-10-10 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564060#comment-15564060
 ] 

Otis Gospodnetic commented on LUCENE-7474:
--

I was wondering how one could compare Lucene indexing (and searching) 
performance before and after this change.  Is there a way to add a sparse 
dataset for the nightly benchmark and use it for both trunk and 6.x branch, so 
one can see the performance difference of Lucene 6.x with sparse data vs. 
Lucene 7.x with sparse data?

> Improve doc values writers
> --
>
> Key: LUCENE-7474
> URL: https://issues.apache.org/jira/browse/LUCENE-7474
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: master (7.0)
>
> Attachments: LUCENE-7474.patch
>
>
> One of the goals of the new iterator-based API is to better handle sparse 
> data. However, the current doc values writers still use a dense 
> representation, and some of them perform naive linear scans in the nextDoc 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9337.
--
   Resolution: Implemented
Fix Version/s: 6.3

> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: 6.3
>
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563220#comment-15563220
 ] 

ASF subversion and git services commented on SOLR-9337:
---

Commit 5836f4032fac975707c85e260d509ecd06c7f7e1 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5836f40 ]

SOLR-9337: Add fetch Streaming Expression


> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9620) {!join score=.. fromIndex=..} throws "undefined field" for numeric field if from and to schemas are different

2016-10-10 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9620:
--

 Summary: {!join score=.. fromIndex=..} throws "undefined field" 
for numeric field if from and to schemas are different 
 Key: SOLR-9620
 URL: https://issues.apache.org/jira/browse/SOLR-9620
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.3
Reporter: Mikhail Khludnev


numeric "from" field is mistakenly looked in "to" schema. see 
org.apache.solr.search.join.ScoreJoinQParserPlugin.parse()
{code}
   private Query createQuery(final String fromField, final String 
fromQueryStr,
 String fromIndex, final String toField, final 
ScoreMode scoreMode,
 boolean byPassShortCircutCheck) throws 
SyntaxError {
+FieldType.LegacyNumericType fromNumericType = 
req.getSchema().getField(fromField).getType().getNumericType();
+FieldType.LegacyNumericType toNumericType = 
req.getSchema().getField(toField).getType().getNumericType();
{code}
it's in branch_6x only. Users who are faced this are advised to just declare 
"from" field in "to" schema. It should work. Take care.  

one line fix and improvements for TestCrossCoreJoin.java are quite welcome. 

 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8370:
--
Attachment: screenshot-2.png

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Attachments: SOLR-8370.patch, screenshot-1.png, screenshot-2.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{}} that is in use in schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564154#comment-15564154
 ] 

ASF subversion and git services commented on SOLR-9337:
---

Commit ccc10fd5932fa5d830c3ecda86e85b4845bca863 in lucene-solr's branch 
refs/heads/branch_6x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ccc10fd ]

SOLR-9337: Update CHANGES.txt


> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Ivan Provalov (JIRA)
Ivan Provalov created LUCENE-7486:
-

 Summary: DisjunctionMaxScorer Initializes scoreMax to Zero 
Preventing From Using Negative Scores
 Key: LUCENE-7486
 URL: https://issues.apache.org/jira/browse/LUCENE-7486
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 5.5.2
Reporter: Ivan Provalov


We are using a log of probability for scoring, which gives us negative scores.  

DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
preventing us from using negative scores.  Is there a reason it couldn't be 
initialized to something like this:

float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562664#comment-15562664
 ] 

Uwe Schindler commented on LUCENE-7486:
---

It should be Float.NEGATIVE_INFINITY if you would like to do this.
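
For illustration, a self-contained sketch (not the actual DisjunctionMaxScorer code) of why a
running maximum seeded with 0 misbehaves for all-negative scores such as log-probabilities,
and why Float.NEGATIVE_INFINITY is the safe seed:

{code:java}
public class MaxSeedDemo {
  // Mirrors the current behaviour: the running maximum starts at 0.
  static float maxSeededWithZero(float[] scores) {
    float max = 0f;
    for (float s : scores) max = Math.max(max, s);
    return max;
  }

  // Proposed behaviour: the running maximum starts at negative infinity.
  static float maxSeededWithNegativeInfinity(float[] scores) {
    float max = Float.NEGATIVE_INFINITY;
    for (float s : scores) max = Math.max(max, s);
    return max;
  }

  public static void main(String[] args) {
    float[] logProbs = {-2.3f, -0.7f, -5.1f};                    // every clause scores below zero
    System.out.println(maxSeededWithZero(logProbs));             // 0.0  -- wrong, no clause scored 0
    System.out.println(maxSeededWithNegativeInfinity(logProbs)); // -0.7 -- the true maximum
  }
}
{code}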

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1920 - Still Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1920/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:40653

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40653
at 
__randomizedtesting.SeedInfo.seed([AF74C4E383E2F15A:2720FB392D1E9CA2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:604)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1292)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1062)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1004)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1553)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter(CustomCollectionTest.java:332)
at 
org.apache.solr.cloud.CustomCollectionTest.test(CustomCollectionTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-9337) Add fetch Streaming Expression

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564146#comment-15564146
 ] 

ASF subversion and git services commented on SOLR-9337:
---

Commit d69412bc676189600aed8b4cff2aad819526a5e2 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d69412b ]

SOLR-9337: Update CHANGES.txt


> Add fetch Streaming Expression
> --
>
> Key: SOLR-9337
> URL: https://issues.apache.org/jira/browse/SOLR-9337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-9337.patch, SOLR-9337.patch
>
>
> The fetch() Streaming Expression wraps another expression and fetches 
> additional fields for documents in batches. The fetch() expression will 
> stream out the Tuples after the data has been fetched. Fields can be fetched 
> from any SolrCloud collection. 
> Sample syntax:
> {code}
> daemon(
>update(collectionC, batchSize="100"
>   fetch(collectionB, 
> topic(checkpoints, collectionA, q="*:*", fl="a,b,c", 
> rows="50"),
> fl="j,m,z",
> on="a=j")))
>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 18013 - Still Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18013/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Could not find collection:testkerberoscollection

Stack Trace:
java.lang.AssertionError: Could not find collection:testkerberoscollection
at 
__randomizedtesting.SeedInfo.seed([787BBA34AA9BE06F:45A314189275BE1F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:153)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testCollectionCreateSearchDelete(TestSolrCloudWithKerberosAlt.java:206)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics(TestSolrCloudWithKerberosAlt.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)
HYUNCHANG LEE created SOLR-9615:
---

 Summary: NamedList:asMap method is no converted NamedList in List
 Key: SOLR-9615
 URL: https://issues.apache.org/jira/browse/SOLR-9615
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: HYUNCHANG LEE


The NamedList.asMap() method does not convert a NamedList that is nested inside a List.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3596 - Still Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3596/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics

Error Message:
Could not find collection:testkerberoscollection

Stack Trace:
java.lang.AssertionError: Could not find collection:testkerberoscollection
at 
__randomizedtesting.SeedInfo.seed([4E6CEC3CD492BFE4:73B44210EC7CE194]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:153)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testCollectionCreateSearchDelete(TestSolrCloudWithKerberosAlt.java:206)
at 
org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics(TestSolrCloudWithKerberosAlt.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HYUNCHANG LEE updated SOLR-9615:

Description: 
The NamedList.asMap() method does not convert a NamedList that is nested inside a List.

If org.apache.solr.common.util.NamedList.asMap() is used, it cannot convert a List 
that contains a SimpleOrderedMap (NamedList).





  was:
The NamedList.asMap() method does not convert a NamedList that is nested inside a List.




> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: HYUNCHANG LEE
>
> NamedList:asMap method is no converted NamedList in List
> if use org.apache.solr.common.util.NamedList:asMap, can't convert List hava a 
> SimpledOrderdMap(NamedList) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HYUNCHANG LEE updated SOLR-9615:

Affects Version/s: 5.5.1

> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outmost NamedList.
> {noformat}
> NamedList
>  - List
>- NamedList
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2016-10-10 Thread Yun Jie Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561466#comment-15561466
 ] 

Yun Jie Zhou commented on SOLR-9584:


Yes, I think it's a duplicate, but I'm not sure why it was marked as Won't Fix?

Any concern about removing the absolute URL path prefix /solr? Per my testing, it 
just works.

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Priority: Minor
>  Labels: patch
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HYUNCHANG LEE updated SOLR-9615:

Description: 
When a NamedList is organized as follows, the innermost NamedList is not 
converted into a map by calling the asMap() method of the outmost NamedList.

NamedList
 - List
   - NamedList

  was:
The NamedList.asMap() method does not convert a NamedList that is nested inside a List.

If org.apache.solr.common.util.NamedList.asMap() is used, it cannot convert a List 
that contains a SimpleOrderedMap (NamedList).






> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outmost NamedList.
> NamedList
>  - List
>- NamedList



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HYUNCHANG LEE updated SOLR-9615:

Description: 
When a NamedList is organized as follows, the innermost NamedList is not 
converted into a map by calling the asMap() method of the outmost NamedList.

{noformat}
NamedList
 - List
   - NamedList
{noformat}

  was:
When a NamedList is organized as follows, the innermost NamedList is not 
converted into a map by calling the asMap() method of the outmost NamedList.

NamedList
 - List
   - NamedList


> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outmost NamedList.
> {noformat}
> NamedList
>  - List
>- NamedList
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563906#comment-15563906
 ] 

Jan Høydahl commented on SOLR-9325:
---

[~tparker] would you be able to test this in your environment?

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 444 - Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/444/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Captured an uncaught exception in thread: Thread[id=11800, name=Thread-3409, 
state=RUNNABLE, group=TGRP-CdcrBootstrapTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=11800, name=Thread-3409, state=RUNNABLE, 
group=TGRP-CdcrBootstrapTest]
at 
__randomizedtesting.SeedInfo.seed([86785043756F3FD8:523D1B1A92398C23]:0)
Caused by: java.lang.AssertionError: 1
at __randomizedtesting.SeedInfo.seed([86785043756F3FD8]:0)
at 
org.apache.solr.core.CachingDirectoryFactory.close(CachingDirectoryFactory.java:191)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1274)
at 
org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:705)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:945)
at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:777)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrBootstrapTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MockDirectoryWrapper, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:426)  
at org.apache.solr.core.SolrCore.(SolrCore.java:756)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:688)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:779)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:85)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:374)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:365)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:156)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
  at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:302)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:108)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:399)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:518)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
 at java.lang.Thread.run(Thread.java:745)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 172 - Still Failing

2016-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/172/

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:56864/ux_/sa","node_name":"127.0.0.1:56864_ux_%2Fsa","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/35)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:34442/ux_/sa;,   
"core":"c8n_1x3_lf_shard1_replica1",   
"node_name":"127.0.0.1:34442_ux_%2Fsa"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:56864/ux_/sa;,   
"node_name":"127.0.0.1:56864_ux_%2Fsa",   "state":"active",   
"leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:34011/ux_/sa;,   
"node_name":"127.0.0.1:34011_ux_%2Fsa",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:56864/ux_/sa","node_name":"127.0.0.1:56864_ux_%2Fsa","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/35)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:34442/ux_/sa;,
  "core":"c8n_1x3_lf_shard1_replica1",
  "node_name":"127.0.0.1:34442_ux_%2Fsa"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:56864/ux_/sa;,
  "node_name":"127.0.0.1:56864_ux_%2Fsa",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:34011/ux_/sa;,
  "node_name":"127.0.0.1:34011_ux_%2Fsa",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([C6C1B3A24134E6CE:4E958C78EFC88B36]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[jira] [Updated] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7486:
--
Attachment: LUCENE-7486.patch

Simple patch. All Lucene Core tests pass. I am not sure if it's worth adding 
some kind of test for this.

I also checked the code: It should behave the same as before. The score 
returned can never be NEGATIVE_INFINITY by default, because DisjSumScorer must 
at least have one scorer, whose score is always greater than NEGATIVE_INFINITY. There 
is only a difference if one of the subscorers returns a score < 0, which is 
what this issue wants to fix.

The simplest way to test this would be to create a test using a 
ConstantScoreQuery with a negative score and add it to DisjMaxQuery.

I will run all tests later, but for now I posted the patch.

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
> Attachments: LUCENE-7486.patch
>
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7486) DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using Negative Scores

2016-10-10 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-7486:
--
Attachment: LUCENE-7486.patch

New patch with test. Test fails without the NEGATIVE_INFINITY fix. The trick 
was to use a BoostQuery with negative boost.
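
For readers following along, a rough sketch of what such a test could look like. This 
is illustrative only, not the attached patch; the single-document index, field names 
and tie-breaker value are assumptions made up for the example.

{code}
import java.util.Arrays;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.DisjunctionMaxQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

public class NegativeBoostDisMaxSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      doc.add(new TextField("f", "foo bar", Store.NO));
      w.addDocument(doc);
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      // Both clauses match the document and both carry negative boosts, so the
      // disjunction's max score should be negative. Before the fix the scorer
      // reported 0 (the old scoreMax initial value) instead.
      Query clause1 = new BoostQuery(new TermQuery(new Term("f", "foo")), -1f);
      Query clause2 = new BoostQuery(new TermQuery(new Term("f", "bar")), -2f);
      DisjunctionMaxQuery q =
          new DisjunctionMaxQuery(Arrays.asList(clause1, clause2), 0f);
      TopDocs hits = searcher.search(q, 1);
      System.out.println("score = " + hits.scoreDocs[0].score); // expected: < 0 after the fix
    }
  }
}
{code}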

> DisjunctionMaxScorer Initializes scoreMax to Zero Preventing From Using 
> Negative Scores
> ---
>
> Key: LUCENE-7486
> URL: https://issues.apache.org/jira/browse/LUCENE-7486
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5.2
>Reporter: Ivan Provalov
>Assignee: Uwe Schindler
> Attachments: LUCENE-7486.patch, LUCENE-7486.patch
>
>
> We are using a log of probability for scoring, which gives us negative 
> scores.  
> DisjunctionMaxScorer initializes scoreMax in the score(...) function to zero 
> preventing us from using negative scores.  Is there a reason it couldn't be 
> initialized to something like this:
> float scoreMax = Float.MAX_VALUE * -1;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+138) - Build # 1919 - Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1919/
Java: 64bit/jdk-9-ea+138 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestBagOfPostings.test

Error Message:
Captured an uncaught exception in thread: Thread[id=302, name=Thread-226, 
state=RUNNABLE, group=TGRP-TestBagOfPostings]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=302, name=Thread-226, state=RUNNABLE, 
group=TGRP-TestBagOfPostings]
at 
__randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9:24A0B821DFBDB951]:0)
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9]:0)
at 
org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:409)
at 
org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2087)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2051)
at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4953)
at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4991)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4982)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1565)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1307)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
at 
org.apache.lucene.index.TestBagOfPostings$1.run(TestBagOfPostings.java:111)




Build Log:
[...truncated 504 lines...]
   [junit4] Suite: org.apache.lucene.index.TestBagOfPostings
   [junit4]   2> oct. 10, 2016 5:40:55 PM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> AVERTISSEMENT: Uncaught exception in thread: 
Thread[Thread-226,5,TGRP-TestBagOfPostings]
   [junit4]   2> java.lang.AssertionError
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9]:0)
   [junit4]   2>at 
org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:409)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2087)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2051)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4953)
   [junit4]   2>at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4991)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4982)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1565)
   [junit4]   2>at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1307)
   [junit4]   2>at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
   [junit4]   2>at 
org.apache.lucene.index.TestBagOfPostings$1.run(TestBagOfPostings.java:111)
   [junit4]   2> 
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestBagOfPostings 
-Dtests.method=test -Dtests.seed=ACF487FB7141D4A9 -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=fr-BI -Dtests.timezone=America/Porto_Velho 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   5.97s J1 | TestBagOfPostings.test <<<
   [junit4]> Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=302, name=Thread-226, state=RUNNABLE, 
group=TGRP-TestBagOfPostings]
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9:24A0B821DFBDB951]:0)
   [junit4]> Caused by: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([ACF487FB7141D4A9]:0)
   [junit4]>at 
org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:409)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2087)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2051)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4953)
   [junit4]>at 
org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:731)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4991)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4982)
   [junit4]>at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 900 - Still Unstable!

2016-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/900/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.DistributedVersionInfoTest

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:49830 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:49830 within 3 ms
at __randomizedtesting.SeedInfo.seed([2B88D4DCC8D81142]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:98)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:241)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:235)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:177)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:158)
at 
org.apache.solr.cloud.DistributedVersionInfoTest.setupCluster(DistributedVersionInfoTest.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:811)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:49830 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:233)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)
... 32 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.DistributedVersionInfoTest

Error Message:
35 threads leaked from SUITE scope at 
org.apache.solr.cloud.DistributedVersionInfoTest: 1) Thread[id=50059, 
name=org.eclipse.jetty.server.session.HashSessionManager@5de099b2Timer, 
state=TIMED_WAITING, group=TGRP-DistributedVersionInfoTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 

[jira] [Created] (SOLR-9621) Remove several guava, apache commons calls in favor of java 8 alternatives

2016-10-10 Thread Michael Braun (JIRA)
Michael Braun created SOLR-9621:
---

 Summary: Remove several guava, apache commons calls in favor of 
java 8 alternatives
 Key: SOLR-9621
 URL: https://issues.apache.org/jira/browse/SOLR-9621
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Michael Braun
Priority: Trivial


Now that Solr is built against Java 8, we can take advantage of replacing some Guava 
and Apache Commons calls with JDK equivalents. I'd like to start by replacing the 
following (a short illustrative sketch follows the list):

com.google.common.base.Supplier  -> java.util.function.Supplier
com.google.common.base.Predicate -> java.util.function.Predicate
com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
org.apache.commons.codec.Charsets -> java.nio.charset.StandardCharsets
com.google.common.collect.Ordering -> java.util.Comparator

From com.google.common.base.Preconditions.checkNotNull - replace with 
java.util.Objects.requireNonNull
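
As an illustration of how mechanical most of these changes are, a small sketch with 
made-up names (nothing below is taken from the Solr codebase):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Comparator;
import java.util.Objects;
import java.util.function.Supplier;

// Illustrative sketch only; class, field and message names are invented.
class Java8ReplacementSketch {
  private Object core;

  // com.google.common.base.Supplier -> java.util.function.Supplier
  Supplier<String> defaultCollectionName = () -> "collection1";

  // com.google.common.base.Preconditions.checkNotNull -> Objects.requireNonNull
  void setCore(Object core) {
    this.core = Objects.requireNonNull(core, "core must not be null");
  }

  // com.google.common.collect.Ordering -> java.util.Comparator
  Comparator<Long> newestVersionFirst = Comparator.<Long>naturalOrder().reversed();

  // com.google.common.base.Charsets / org.apache.commons.codec.Charsets
  //   -> java.nio.charset.StandardCharsets
  byte[] utf8Bytes(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}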





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9621) Remove several guava, apache commons calls in favor of java 8 alternatives

2016-10-10 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564467#comment-15564467
 ] 

Michael Braun commented on SOLR-9621:
-

I have these done and am running the full test suite on my box before I post the 
patch. I would love to hear other suggestions / things to add to this as well.

> Remove several guava, apache commons calls in favor of java 8 alternatives
> --
>
> Key: SOLR-9621
> URL: https://issues.apache.org/jira/browse/SOLR-9621
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
>
> Now that Solr is against Java 8, we can take advantage of replacing some 
> guava and apache commons calls with JDK standards. I'd like to start by 
> replacing the following:
> com.google.common.base.Supplier  -> java.util.function.Supplier
> com.google.common.base.Predicate -> java.util.function.Predicate
> com.google.common.base.Charsets -> java.nio.charset.StandardCharsets
> org.apache.commons.codec.Charsets -> java.nio.charset.StandardCharsets
> com.google.common.collect.Ordering -> java.util.Comparator
> From com.google.common.base.Preconditions.checkNotNull - replace with 
> java.util.Objects.requireNonNull



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4095) remove deprecations from trunk

2016-10-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-4095:
---
Affects Version/s: (was: 6.0)
   5.0

> remove deprecations from trunk
> --
>
> Key: LUCENE-4095
> URL: https://issues.apache.org/jira/browse/LUCENE-4095
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.0
>
> Attachments: LUCENE-4095.patch
>
>
> We should remove the deprecated code from trunk. 
> This also has benefits to 4x branch, particularly:
> * we should backport fixes to tests to avoid deprecated methods, e.g. 
> IndexReader.open -> DirectoryReader.open. Of course I will add specific 
> deprecated tests testing the back compat. This is all very important to
> ensure easier merging from trunk->4x  for the future.
> * by removing deprecated methods, I found some minor doc bugs, such as 
> javadocs linking to deprecated stuff. I would like to backport these docs 
> fixes as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4095) remove deprecations from trunk

2016-10-10 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-4095:
---
Fix Version/s: (was: 6.0)
   5.0

> remove deprecations from trunk
> --
>
> Key: LUCENE-4095
> URL: https://issues.apache.org/jira/browse/LUCENE-4095
> Project: Lucene - Core
>  Issue Type: Task
>Affects Versions: 5.0
>Reporter: Robert Muir
>Assignee: Robert Muir
> Fix For: 5.0
>
> Attachments: LUCENE-4095.patch
>
>
> We should remove the deprecated code from trunk. 
> This also has benefits to 4x branch, particularly:
> * we should backport fixes to tests to avoid deprecated methods, e.g. 
> IndexReader.open -> DirectoryReader.open. Of course I will add specific 
> deprecated tests testing the back compat. This is all very important to
> ensure easier merging from trunk->4x  for the future.
> * by removing deprecated methods, I found some minor doc bugs, such as 
> javadocs linking to deprecated stuff. I would like to backport these docs 
> fixes as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9615) NamedList:asMap method is no converted NamedList in List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564533#comment-15564533
 ] 

HYUNCHANG LEE commented on SOLR-9615:
-

I used SolrTemplate (spring-data-solr) with Solr 5.5.2, which returned a QueryResponse.

When asMap() is called on a NamedList from that QueryResponse, a nested NamedList is 
not converted properly.
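
For illustration, a minimal sketch of the structure being described; the field names 
and the asMap depth below are made up for the example, not taken from the reporter's 
application:

{code}
import java.util.Arrays;
import java.util.List;
import java.util.Map;

import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;

// Minimal sketch of the reported structure: a NamedList (SimpleOrderedMap)
// that is only reachable through a plain java.util.List.
public class NamedListAsMapSketch {
  public static void main(String[] args) {
    SimpleOrderedMap<Object> inner = new SimpleOrderedMap<>();
    inner.add("field", "value");

    NamedList<Object> outer = new NamedList<>();
    outer.add("docs", Arrays.asList(inner));

    Map<?, ?> converted = outer.asMap(10);
    // Per the report, this still prints SimpleOrderedMap rather than a
    // plain java.util.Map implementation.
    Object element = ((List<?>) converted.get("docs")).get(0);
    System.out.println(element.getClass().getName());
  }
}
{code}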



> NamedList:asMap method is no converted NamedList in List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outmost NamedList.
> {noformat}
> NamedList
>  - List
>- NamedList
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1409 - Unstable

2016-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1409/

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
There are still nodes recoverying - waited for 120 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 120 
seconds
at 
__randomizedtesting.SeedInfo.seed([7E7F568F98BD8F6D:F62B69553641E295]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:181)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (SOLR-9615) NamedList.asMap method does not convert a NamedList nested in a List

2016-10-10 Thread HYUNCHANG LEE (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564533#comment-15564533
 ] 

HYUNCHANG LEE edited comment on SOLR-9615 at 10/11/16 5:31 AM:
---

I used SolrTemplate (spring-data-solr) with Solr 5.5.2 and got a QueryResponse back.

When calling the asMap function on a NamedList from that QueryResponse, the nested 
NamedList is not converted as expected.

library : org.springframework.data:spring-data-solr:2.0.2.RELEASE


was (Author: lhch):
I used SolrTemplate (spring-data-solr) with Solr 5.5.2 and got a QueryResponse back.

When calling the asMap function on a NamedList from that QueryResponse, the nested 
NamedList is not converted as expected.



> NamedList.asMap method does not convert a NamedList nested in a List
> 
>
> Key: SOLR-9615
> URL: https://issues.apache.org/jira/browse/SOLR-9615
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.1
>Reporter: HYUNCHANG LEE
>
> When a NamedList is organized as follows, the innermost NamedList is not 
> converted into a map by calling the asMap() method of the outermost NamedList.
> {noformat}
> NamedList
>   - List
>     - NamedList
> {noformat}
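
A minimal sketch of the structure described above, using SolrJ's {{NamedList}} and its 
{{asMap(int)}} method; the key names here are invented for illustration only:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.common.util.NamedList;

public class NamedListAsMapSketch {
  public static void main(String[] args) {
    // innermost NamedList, wrapped in a List, wrapped in the outer NamedList
    NamedList<Object> inner = new NamedList<>();
    inner.add("field", "value");

    List<Object> middle = new ArrayList<>();
    middle.add(inner);

    NamedList<Object> outer = new NamedList<>();
    outer.add("docs", middle);

    // Expected: the inner NamedList is converted to a Map as well.
    // Reported behaviour: the element inside the List stays a NamedList.
    Map<?, ?> converted = outer.asMap(10);
    Object element = ((List<?>) converted.get("docs")).get(0);
    System.out.println(element.getClass().getName());
  }
}
{code}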



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9619) Create Collection screen cuts off labels

2016-10-10 Thread Alexandre Rafalovitch (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-9619:
---

Assignee: Alexandre Rafalovitch

> Create Collection screen cuts off labels
> 
>
> Key: SOLR-9619
> URL: https://issues.apache.org/jira/browse/SOLR-9619
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: master (7.0)
> Environment: Ubuntu 14.04
> Firefox 50.0b5
>Reporter: Mike Drob
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> Was running a Solr 7.0 snapshot (commit 5ef60af) and noticed that the create 
> collection pop up cuts off some of the argument names. Specifically, the 
> {{replicationFactor}} and {{maxShardsPerNode}}.
> Would be nice to use a bigger box or line wrap there, maybe. Have not tested 
> other versions, but also saw the same behaviour on Chrome 53.0.2785.143 on 
> Ubuntu.
> Screen shot attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-10 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564306#comment-15564306
 ] 

Alexandre Rafalovitch commented on SOLR-8370:
-

Looks great. I like that it also makes people think about what they have.

Just one final clarification - this is the global schema-level similarity, not 
the per-field one, right (I think we already have that)? Asking because, the 
way it looks now, the "id" area is the global information area. 

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Attachments: SOLR-8370.patch, SOLR-8370.patch, screenshot-1.png, 
> screenshot-2.png
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{<similarity>}} is in use in the schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9614) TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem accessing /solr/admin/cores

2016-10-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561771#comment-15561771
 ] 

ASF subversion and git services commented on SOLR-9614:
---

Commit 9fea5129d3eaef7cdc8086271677fc807ca1c020 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9fea512 ]

SOLR-9614: fixing TestSolrCloudWithKerberosAlt

> TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem 
> accessing /solr/admin/cores
> ---
>
> Key: SOLR-9614
> URL: https://issues.apache.org/jira/browse/SOLR-9614
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9614.patch
>
>
> * this occurs after SOLR-9608 commit 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6169/
> * but, I can't get it fixed rolling it back locally. 
> * it doesn't yet happen in branch_6x CI 
> So far I have no idea what to do. 
> Problem log
> {quote}
> ] o.a.s.c.TestSolrCloudWithKerberosAlt Enable delegation token: true
> 12922 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CoreContainer Authentication plugin class obtained from system 
> property 'authenticationPlugin': org.apache.solr.security.KerberosPlugin
> 12931 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 12971 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.s.KerberosPlugin Params: {token.valid=30, 
> kerberos.principal=HTTP/127.0.0.1, 
> kerberos.keytab=C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\keytabs,
>  cookie.domain=127.0.0.1, token.validity=36000, type=kerberos, 
> delegation-token.token-kind=solr-dt, cookie.path=/, 
> zk-dt-secret-manager.znodeWorkingPath=solr/security/zkdtsm, 
> signer.secret.provider.zookeeper.path=/token, 
> zk-dt-secret-manager.enable=true, 
> kerberos.name.rules=RULE:[1:$1@$0](.*EXAMPLE.COM)s/@.*//
> RULE:[2:$2@$0](.*EXAMPLE.COM)s/@.*//
> DEFAULT, signer.secret.provider=zookeeper}
> 13123 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.c.f.i.CuratorFrameworkImpl Starting
> 13133 WARN  (jetty-launcher-1-thread-1-SendThread(127.0.0.1:6)) 
> [n:127.0.0.1:64475_solr] o.a.z.ClientCnxn SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf'.
>  Will continue connection to Zookeeper server without SASL authentication, if 
> Zookeeper server allows it.
> 13145 ERROR (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.ConnectionState Authentication failed
> 13153 INFO  (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.f.s.ConnectionStateManager State change: CONNECTED
> 13632 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 18210 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-002\node1\.
> 20158 ERROR 
> (OverseerThreadFactory-6-thread-1-processing-n:127.0.0.1:56132_solr) 
> [n:127.0.0.1:56132_solr] o.a.s.c.OverseerCollectionMessageHandler Error 
> from shard: http://127.0.0.1:56132/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:56132/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 
> 
> 
> HTTP ERROR: 401
> Problem accessing /solr/admin/cores. Reason:
> Authentication required
> http://eclipse.org/jetty;>Powered by Jetty:// 
> 9.3.8.v20160314
> 
> 
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   at 

[jira] [Commented] (LUCENE-7476) Fix transient failure in JapaneseNumberFilter run from TestFactories

2016-10-10 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561799#comment-15561799
 ] 

Andy Hind commented on LUCENE-7476:
---

I spotted this running org.apache.lucene.analysis.core.TestFactories with 
@Repeat (iterations = 100) from Eclipse.
I just got 9 failures running this again. It is odd that I do not see them in 
the build failures. 

I believe the 9 failures are all the same:

{code}
java.lang.IllegalStateException: incrementToken() called while in wrong state: 
INCREMENT_FALSE
at 
__randomizedtesting.SeedInfo.seed([18C3960FB72D4F07:2AB7AA6A139D55E3]:0)
at org.apache.lucene.analysis.MockTokenizer.fail(MockTokenizer.java:125)
at 
org.apache.lucene.analysis.MockTokenizer.incrementToken(MockTokenizer.java:136)
at 
org.apache.lucene.analysis.ja.JapaneseNumberFilter.incrementToken(JapaneseNumberFilter.java:152)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:716)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:627)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:525)
at 
org.apache.lucene.analysis.core.TestFactories.doTestTokenFilter(TestFactories.java:108)
at 
org.apache.lucene.analysis.core.TestFactories.test(TestFactories.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-8542) Integrate Learning to Rank into Solr

2016-10-10 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561952#comment-15561952
 ] 

Alessandro Benedetti commented on SOLR-8542:


Well done, guys! Impressive!

Just a couple of observations and ideas that may help:

Feature Caching Improvements: 
https://github.com/bloomberg/lucene-solr/issues/172
LambdaMART explain summarization: 
https://github.com/bloomberg/lucene-solr/issues/173

I cannot wait to see the plugin in the official release! :)

> Integrate Learning to Rank into Solr
> 
>
> Key: SOLR-8542
> URL: https://issues.apache.org/jira/browse/SOLR-8542
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joshua Pantony
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8542-branch_5x.patch, SOLR-8542-trunk.patch
>
>
> This is a ticket to integrate learning to rank machine learning models into 
> Solr. Solr Learning to Rank (LTR) provides a way for you to extract features 
> directly inside Solr for use in training a machine learned model. You can 
> then deploy that model to Solr and use it to rerank your top X search 
> results. This concept was previously [presented by the authors at Lucene/Solr 
> Revolution 
> 2015|http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp].
> [Read through the 
> README|https://github.com/bloomberg/lucene-solr/tree/master-ltr-plugin-release/solr/contrib/ltr]
>  for a tutorial on using the plugin, in addition to how to train your own 
> external model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9325) solr.log written to {solrRoot}/server/logs instead of location specified by SOLR_LOGS_DIR

2016-10-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9325:
--
Attachment: SOLR-9325.patch

Updated patch with Windows fixes:

* {{SOLR_LOGS_DIR}} was always overwritten in solr.cmd
* Spaces in the path were not supported
* The check for restricted folders now works
* Added a missing {{set}} in solr.in.cmd

Tested on Windows 10 with and without spaces in SOLR_LOGS_DIR.

Please review and test. Plan to commit in a few days.

> solr.log written to {solrRoot}/server/logs instead of location specified by 
> SOLR_LOGS_DIR
> -
>
> Key: SOLR-9325
> URL: https://issues.apache.org/jira/browse/SOLR-9325
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5.2, 6.0.1
> Environment: 64-bit CentOS 7 with latest patches, JVM 1.8.0.92
>Reporter: Tim Parker
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9325.patch, SOLR-9325.patch
>
>
> (6.1 is probably also affected, but we've been blocked by SOLR-9231)
> solr.log should be written to the directory specified by the SOLR_LOGS_DIR 
> environment variable, but instead it's written to {solrRoot}/server/logs.
> This results in requiring that solr is installed on a writable device, which 
> leads to two problems:
> 1) solr installation can't live on a shared device (single copy shared by two 
> or more VMs)
> 2) solr installation is more difficult to lock down
> Solr should be able to run without error in this test scenario:
> burn the Solr directory tree onto a CD-ROM
> Mount this CD as /solr
> run Solr from there (with appropriate environment variables set, of course)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2585) DirectoryReader.isCurrent might fail to see the segments file during concurrent index changes

2016-10-10 Thread Han-Wen NIenhuys (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561959#comment-15561959
 ] 

Han-Wen NIenhuys commented on LUCENE-2585:
--

I've seen race condition diagnostics for Lucene code that could be related to 
this.

Line numbers are for Lucene 5.5.2, but the problem seems present in master too.

AFAICT, StandardDirectoryReader holds an IndexWriter and decides whether it is 
up to date by checking for pending in-memory updates via the IndexWriter's 
SegmentInfos.version. That read is not synchronized relative to the write 
performed from IndexWriter.updateDocument.


WARNING: ThreadSanitizer: data race (pid=21636)
  Write of size 8 at 0x7f24ad9d3210 by thread T36 (mutexes: write 
M431922409782456832):
#0 org.apache.lucene.index.SegmentInfos.changed()V (SegmentInfos.java:944)  
#1 org.apache.lucene.index.IndexWriter.newSegmentName()Ljava/lang/String; 
(IndexWriter.java:1652)  
#2 
org.apache.lucene.index.DocumentsWriter.ensureInitialized(Lorg/apache/lucene/index/DocumentsWriterPerThreadPool$ThreadState;)V
 (DocumentsWriter.java:391)  
#3 
org.apache.lucene.index.DocumentsWriter.updateDocument(Ljava/lang/Iterable;Lorg/apache/lucene/analysis/Analyzer;Lorg/apache/lucene/index/Term;)Z
 (DocumentsWriter.java:445)  
#4 
org.apache.lucene.index.IndexWriter.updateDocument(Lorg/apache/lucene/index/Term;Ljava/lang/Iterable;)V
 (IndexWriter.java:1477)  
#5 
com.google.gerrit.lucene.AutoCommitWriter.updateDocument(Lorg/apache/lucene/index/Term;Ljava/lang/Iterable;)V
 (AutoCommitWriter.java:100)  
#6 
org.apache.lucene.index.TrackingIndexWriter.updateDocument(Lorg/apache/lucene/index/Term;Ljava/lang/Iterable;)J
 (TrackingIndexWriter.java:55)  
#7 com.google.gerrit.lucene.AbstractLuceneIndex$4.call()Ljava/lang/Long; 
(AbstractLuceneIndex.java:250)  
#8 com.google.gerrit.lucene.AbstractLuceneIndex$4.call()Ljava/lang/Object; 
(AbstractLuceneIndex.java:247)  
#9 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly()V
 (TrustedListenableFutureTask.java:108)  
#10 com.google.common.util.concurrent.InterruptibleTask.run()V 
(InterruptibleTask.java:41)  
#11 com.google.common.util.concurrent.TrustedListenableFutureTask.run()V 
(TrustedListenableFutureTask.java:77)  
#12 
java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
 (ThreadPoolExecutor.java:1142)  
#13 java.util.concurrent.ThreadPoolExecutor$Worker.run()V 
(ThreadPoolExecutor.java:617)  
#14 java.lang.Thread.run()V (Thread.java:745)  
#15 (Generated Stub)  

  Previous read of size 8 at 0x7f24ad9d3210 by thread T29 (mutexes: write 
M1060737507754061632):
#0 
org.apache.lucene.index.IndexWriter.nrtIsCurrent(Lorg/apache/lucene/index/SegmentInfos;)Z
 (IndexWriter.java:4592)  
#1 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(Lorg/apache/lucene/index/IndexCommit;)Lorg/apache/lucene/index/DirectoryReader;
 (StandardDirectoryReader.java:282)  
#2 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(Lorg/apache/lucene/index/IndexCommit;)Lorg/apache/lucene/index/DirectoryReader;
 (StandardDirectoryReader.java:261)  
#3 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged()Lorg/apache/lucene/index/DirectoryReader;
 (StandardDirectoryReader.java:251)  
#4 
org.apache.lucene.index.DirectoryReader.openIfChanged(Lorg/apache/lucene/index/DirectoryReader;)Lorg/apache/lucene/index/DirectoryReader;
 (DirectoryReader.java:137)  
#5 
com.google.gerrit.lucene.WrappableSearcherManager.refreshIfNeeded(Lorg/apache/lucene/search/IndexSearcher;)Lorg/apache/lucene/search/IndexSearcher;
 (WrappableSearcherManager.java:148)  
#6 
com.google.gerrit.lucene.WrappableSearcherManager.refreshIfNeeded(Ljava/lang/Object;)Ljava/lang/Object;
 (WrappableSearcherManager.java:68)  
#7 org.apache.lucene.search.ReferenceManager.doMaybeRefresh()V 
(ReferenceManager.java:176)  
#8 org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking()V 
(ReferenceManager.java:253)  
#9 org.apache.lucene.search.ControlledRealTimeReopenThread.run()V 
(ControlledRealTimeReopenThread.java:245)  
#10 (Generated Stub)  
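
A minimal, self-contained sketch of the suspected pattern (not Lucene code; the names 
are only loose analogies to SegmentInfos.version, SegmentInfos.changed() and 
IndexWriter.nrtIsCurrent()):

{code}
// A plain (non-volatile) version counter bumped by an "indexing" thread and
// read by a "refresh" thread without a common lock; ThreadSanitizer flags
// exactly this kind of unsynchronized read/write pair.
public class VersionRaceSketch {
  private long version = 0; // plain field: reads and writes may race

  void changed() {                    // analogue of SegmentInfos.changed()
    version++;
  }

  boolean isCurrent(long lastSeen) {  // analogue of IndexWriter.nrtIsCurrent(...)
    return version == lastSeen;       // unsynchronized read of the mutable field
  }

  public static void main(String[] args) throws InterruptedException {
    VersionRaceSketch sketch = new VersionRaceSketch();
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) sketch.changed();
    });
    Thread reader = new Thread(() -> {
      boolean current = false;
      for (int i = 0; i < 1_000_000 && !current; i++) current = sketch.isCurrent(1_000_000);
      System.out.println("reader eventually saw the final version: " + current);
    });
    writer.start();
    reader.start();
    writer.join();
    reader.join();
    // Making the field volatile, or guarding both sides with the same lock,
    // would remove the data race reported above.
  }
}
{code}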

> DirectoryReader.isCurrent might fail to see the segments file during 
> concurrent index changes
> -
>
> Key: LUCENE-2585
> URL: https://issues.apache.org/jira/browse/LUCENE-2585
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Sanne Grinovero
> Fix For: 4.9, 6.0
>
>
> I could reproduce the issue several times, but only by running long and 
> stressful benchmarks; the high number of files is likely part of the 
> scenario.
> All tests 

[jira] [Updated] (SOLR-9614) TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem accessing /solr/admin/cores

2016-10-10 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9614:
---
Attachment: SOLR-9614.patch

> TestSolrCloudWithKerberosAlt.testBasics failure HTTP ERROR: 401 Problem 
> accessing /solr/admin/cores
> ---
>
> Key: SOLR-9614
> URL: https://issues.apache.org/jira/browse/SOLR-9614
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-9614.patch
>
>
> * this occurs after SOLR-9608 commit 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6169/
> * but, I can't get it fixed rolling it back locally. 
> * it doesn't yet happen in branch_6x CI 
> So far I have no idea what to do. 
> Problem log
> {quote}
> ] o.a.s.c.TestSolrCloudWithKerberosAlt Enable delegation token: true
> 12922 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CoreContainer Authentication plugin class obtained from system 
> property 'authenticationPlugin': org.apache.solr.security.KerberosPlugin
> 12931 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 12971 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.s.KerberosPlugin Params: {token.valid=30, 
> kerberos.principal=HTTP/127.0.0.1, 
> kerberos.keytab=C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\keytabs,
>  cookie.domain=127.0.0.1, token.validity=36000, type=kerberos, 
> delegation-token.token-kind=solr-dt, cookie.path=/, 
> zk-dt-secret-manager.znodeWorkingPath=solr/security/zkdtsm, 
> signer.secret.provider.zookeeper.path=/token, 
> zk-dt-secret-manager.enable=true, 
> kerberos.name.rules=RULE:[1:$1@$0](.*EXAMPLE.COM)s/@.*//
> RULE:[2:$2@$0](.*EXAMPLE.COM)s/@.*//
> DEFAULT, signer.secret.provider=zookeeper}
> 13123 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.c.f.i.CuratorFrameworkImpl Starting
> 13133 WARN  (jetty-launcher-1-thread-1-SendThread(127.0.0.1:6)) 
> [n:127.0.0.1:64475_solr] o.a.z.ClientCnxn SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> 'C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf'.
>  Will continue connection to Zookeeper server without SASL authentication, if 
> Zookeeper server allows it.
> 13145 ERROR (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.ConnectionState Authentication failed
> 13153 INFO  (jetty-launcher-1-thread-1-EventThread) [n:127.0.0.1:64475_solr   
>  ] o.a.c.f.s.ConnectionStateManager State change: CONNECTED
> 13632 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.s.i.Krb5HttpClientBuilder Setting up SPNego auth with config: 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-001\minikdc\jaas-client.conf
> 18210 INFO  (jetty-launcher-1-thread-1) [n:127.0.0.1:64475_solr] 
> o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
> C:\Users\Mikhail_Khludnev\AppData\Local\Temp\solr.cloud.TestSolrCloudWithKerberosAlt_3F1879202E9D764F-018\tempDir-002\node1\.
> 20158 ERROR 
> (OverseerThreadFactory-6-thread-1-processing-n:127.0.0.1:56132_solr) 
> [n:127.0.0.1:56132_solr] o.a.s.c.OverseerCollectionMessageHandler Error 
> from shard: http://127.0.0.1:56132/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:56132/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 
> 
> 
> HTTP ERROR: 401
> Problem accessing /solr/admin/cores. Reason:
> Authentication required
> http://eclipse.org/jetty;>Powered by Jetty:// 
> 9.3.8.v20160314
> 
> 
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:578)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
>   at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
>   at 
> org.apache.solr.handler.component.HttpShardHandler.lambda$0(HttpShardHandler.java:195)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (LUCENE-7476) Fix transient failure in JapaneseNumberFilter run from TestFactories

2016-10-10 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561858#comment-15561858
 ] 

Andy Hind edited comment on LUCENE-7476 at 10/10/16 10:15 AM:
--

Running the tests 100 times via ant produces no failures. This seems to be an 
Eclipse configuration issue.
{code}
ant test  -Dtestcase=TestFactories -Dtests.method=test  -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}


was (Author: andyhind):
Running the tests 100 times via ant produces no issue. This seems to an eclipse 
configuration issue.
{code}
ant test  -Dtestcase=TestFactories -Dtests.method=test  -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}

> Fix transient failure in JapaneseNumberFilter run from TestFactories
> 
>
> Key: LUCENE-7476
> URL: https://issues.apache.org/jira/browse/LUCENE-7476
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 6.2.1
>Reporter: Andy Hind
>Priority: Trivial
> Attachments: LUCENE-7476.patch
>
>
> Repeatedly running TestFactories shows this test failing ~10% of the time.
> I believe the fix is trivial and related to losing the state of the 
> underlying input stream when testing some analyzer lifecycle flows. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7476) Fix transient failure in JapaneseNumberFilter run from TestFactories

2016-10-10 Thread Andy Hind (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15561858#comment-15561858
 ] 

Andy Hind commented on LUCENE-7476:
---

Running the tests 100 times via ant produces no failures. This seems to be an 
Eclipse configuration issue.
{code}
ant test  -Dtestcase=TestFactories -Dtests.method=test  -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{code}

> Fix transient failure in JapaneseNumberFilter run from TestFactories
> 
>
> Key: LUCENE-7476
> URL: https://issues.apache.org/jira/browse/LUCENE-7476
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 6.2.1
>Reporter: Andy Hind
>Priority: Trivial
> Attachments: LUCENE-7476.patch
>
>
> Repeatedly running TestFactories shows this test failing ~10% of the time.
> I believe the fix is trivial and related to losing the state of the 
> underlying input stream when testing some analyzer lifecycle flows. 
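
For context, a minimal sketch of the TokenStream consumer contract that MockTokenizer 
enforces; calling incrementToken() again after it has returned false, without an 
intervening reset(), is what produces the "wrong state: INCREMENT_FALSE" failure seen 
in this issue. The analyzer and text below are arbitrary:

{code}
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenStreamContractSketch {
  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new StandardAnalyzer();
    try (TokenStream ts = analyzer.tokenStream("field", "some sample text")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                   // required before the first incrementToken()
      while (ts.incrementToken()) { // stop as soon as it returns false ...
        System.out.println(term.toString());
      }
      ts.end();                     // ... and do not call incrementToken() again
    }                               // close() happens via try-with-resources
    analyzer.close();
  }
}
{code}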



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Ferenczi Jim (JIRA)
Ferenczi Jim created LUCENE-7484:


 Summary: FastVectorHighlighter fails to highlight SynonymQuery
 Key: LUCENE-7484
 URL: https://issues.apache.org/jira/browse/LUCENE-7484
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/termvectors
Affects Versions: 6.x, master (7.0)
Reporter: Ferenczi Jim


SynonymQuery is ignored by the FastVectorHighlighter.
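
A rough, self-contained repro sketch of the reported behaviour using the stock 
FastVectorHighlighter and SynonymQuery APIs; the field name and text are invented, and 
per this report the returned fragment is expected to come back null (no highlighting) 
before the fix:

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.SynonymQuery;
import org.apache.lucene.search.vectorhighlight.FastVectorHighlighter;
import org.apache.lucene.search.vectorhighlight.FieldQuery;
import org.apache.lucene.store.RAMDirectory;

public class SynonymQueryFvhSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    // term vectors with positions and offsets are required by the FVH
    FieldType type = new FieldType(TextField.TYPE_STORED);
    type.setStoreTermVectors(true);
    type.setStoreTermVectorPositions(true);
    type.setStoreTermVectorOffsets(true);

    Document doc = new Document();
    doc.add(new Field("content", "the quick brown fox", type));
    writer.addDocument(doc);
    writer.close();

    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      FastVectorHighlighter highlighter = new FastVectorHighlighter();
      // query-time synonyms end up as a SynonymQuery
      SynonymQuery query = new SynonymQuery(new Term("content", "quick"),
                                            new Term("content", "fast"));
      FieldQuery fieldQuery = highlighter.getFieldQuery(query);
      String fragment = highlighter.getBestFragment(fieldQuery, reader, 0, "content", 100);
      System.out.println(fragment); // null per this report, instead of "the <b>quick</b> ..."
    }
  }
}
{code}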



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7484) FastVectorHighlighter fails to highlight SynonymQuery

2016-10-10 Thread Ferenczi Jim (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferenczi Jim updated LUCENE-7484:
-
Attachment: LUCENE-7484.patch

> FastVectorHighlighter fails to highlight SynonymQuery
> -
>
> Key: LUCENE-7484
> URL: https://issues.apache.org/jira/browse/LUCENE-7484
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/termvectors
>Affects Versions: 6.x, master (7.0)
>Reporter: Ferenczi Jim
> Attachments: LUCENE-7484.patch
>
>
> SynonymQuery is ignored by the FastVectorHighlighter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8370) Display Similarity Factory in use in Schema-Browser

2016-10-10 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562352#comment-15562352
 ] 

Alexandre Rafalovitch commented on SOLR-8370:
-

No objections to the UI part.

> Display Similarity Factory in use in Schema-Browser
> ---
>
> Key: SOLR-8370
> URL: https://issues.apache.org/jira/browse/SOLR-8370
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Trivial
>  Labels: newdev
> Attachments: SOLR-8370.patch
>
>
> Perhaps the Admin UI Schema browser should also display which 
> {{<similarity>}} is in use in the schema, like it does per-field.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


