[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 422 - Still Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/422/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'A val' for path 'response/params/x/a' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   
"response":{"znodeVersion":-1}},  from server:  
http://127.0.0.1:54584/solr/collection1_shard1_replica2

Stack Trace:
java.lang.AssertionError: Could not get expected value  'A val' for path 
'response/params/x/a' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{"znodeVersion":-1}},  from server:  
http://127.0.0.1:54584/solr/collection1_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([92A25D3183575BDE:1AF662EB2DAB3626]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:108)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

Re: [Result Query Solr] How to retrieve the content of pdfs

2016-09-20 Thread Dmitry Kan
Hi Alexandre,

Could you add fl=* to your query and check the output? Alternatively, have
a look at your schema file and check which field is likely to hold the
content: text or similar.
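
For example, a minimal SolrJ sketch along these lines would print every stored
field per document, so you can see what the content field is actually called
(the core name "collection1" is an assumption, adjust it to your setup; the
query "ABC" is taken from the quoted output below):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class ShowAllFields {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery query = new SolrQuery("ABC");
      query.setFields("*");                       // same effect as adding fl=* to the URL
      QueryResponse rsp = client.query(query);
      for (SolrDocument doc : rsp.getResults()) {
        System.out.println(doc.getFieldNames());  // look for a content/text-like field here
      }
    }
  }
}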

Dmitry

On Sep 14, 2016, 1:27 AM, "Alexandre Martins" <
alexandremart...@gmail.com> wrote:

> Hi Guys,
>
> I'm trying to use the latest version of Solr, and I have used the post tool
> to upload 28 PDF files; it works fine. However, I don't know how to show
> the content of the files in the resulting JSON. Does anybody know how to
> include this field?
>
> "responseHeader":{ "zkConnected":true, "status":0, "QTime":43, "params":{
> "q":"ABC", "indent":"on", "wt":"json", "_":"1473804420750"}}, "response":
> {"numFound":40,"start":0,"maxScore":9.1066065,"docs":[ { "id":
> "/home/alexandre/desenvolvimento/workspace/solr-6.2.0/pdfs_hack/abc.pdf",
> "date":["2016-09-13T14:44:17Z"], "pdf_pdfversion":[1.5], "xmp_creatortool
> ":["PDFCreator Version 1.7.3"], "stream_content_type":["application/pdf"],
> "access_permission_modify_annotations":[false], "
> access_permission_can_print_degraded":[false], "dc_creator":["abc"], "
> dcterms_created":["2016-09-13T14:44:17Z"], "last_modified":["2016-09-
> 13T14:44:17Z"], "dcterms_modified":["2016-09-13T14:44:17Z"], 
> "dc_format":["application/pdf;
> version=1.5"], "title":["ABC tittle"], "xmpmm_documentid":["uuid:
> 100ccff2-7c1c-11e6--ab7b62fc46ae"], "last_save_date":["2016-09-
> 13T14:44:17Z"], "access_permission_fill_in_form":[false], "meta_save_date
> ":["2016-09-13T14:44:17Z"], "pdf_encrypted":[false], "dc_title":["Tittle
> abc"], "modified":["2016-09-13T14:44:17Z"], "content_type":["application/
> pdf"], "stream_size":[101948], "x_parsed_by":["org.apache.
> tika.parser.DefaultParser", "org.apache.tika.parser.pdf.PDFParser"], "
> creator":["mauricio.tostes"], "meta_author":["mauricio.tostes"], "
> meta_creation_date":["2016-09-13T14:44:17Z"], "created":["Tue Sep 13
> 14:44:17 UTC 2016"], "access_permission_extract_for_accessibility":[false],
> "access_permission_assemble_document":[false], "xmptpg_npages":[3], "
> creation_date":["2016-09-13T14:44:17Z"], "resourcename":["/home/
> alexandre/desenvolvimento/workspace/solr-6.2.0/pdfs_hack/abc.pdf"], "
> access_permission_extract_content":[false], "access_permission_can_print":
> [false], "author":["abc.add"], "producer":["GPL Ghostscript 9.10"], "
> access_permission_can_modify":[false], "_version_":1545395897488113664},
>
> Alexandre Costa Martins
> DATAPREV - IT Analyst
> Software Reuse Researcher
> MSc Federal University of Pernambuco
> RiSE Member - http://www.rise.com.br
> Sun Certified Programmer for Java 5.0 (SCPJ5.0)
>
> MSN: xandecmart...@hotmail.com
> GTalk: alexandremart...@gmail.com
> Skype: xandecmartins
> Mobile: +55 (85) 9626-3631
>


[JENKINS] Lucene-Solr-Tests-master - Build # 1397 - Still Unstable

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1397/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:38220","node_name":"127.0.0.1:38220_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:53640;,   
"core":"c8n_1x3_lf_shard1_replica3",   "node_name":"127.0.0.1:53640_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:54326;,   "node_name":"127.0.0.1:54326_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:38220;,   "node_name":"127.0.0.1:38220_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:38220","node_name":"127.0.0.1:38220_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/33)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:53640;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:53640_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:54326;,
  "node_name":"127.0.0.1:54326_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:38220;,
  "node_name":"127.0.0.1:38220_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([EE206DF7BA461B76:6674522D14BA768E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3554 - Still Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3554/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([4DA54ED0255A6B5:F3A9BAB5C4BD0953]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1329)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11480 lines...]
   [junit4] Suite: 

Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread Sandeep Khanzode
Thanks, David! Perhaps browsing the Solr sources may be a necessity at some 
point in time. :) SRK 

On Wednesday, September 21, 2016 9:08 AM, David Smiley 
 wrote:
 

 So that page referenced describes local-params, and describes the special
"v" local-param.  But first, see a list of all query parsers (which lists
"field"): https://cwiki.apache.org/confluence/display/solr/Other+Parsers
and
https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser for
the "lucene" one.

The "op" param is rather unique... it's not defined by any query parser.  A
trick is done in which a custom field type (DateRangeField in this case) is
able to inspect the local-params, and thus define and use params it needs.
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates "More
DateRangeField Details" mentions "op".  {!lucene df=dateRange
op=Contains}... would also work.  I don't know of any other local-param
used in this way.
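
Putting the pieces together, a minimal SolrJ sketch (the Solr URL and
collection name are assumptions; the "schedule" field and the date range are
taken from the examples further down in this thread) of the Contains query and
of its negated form via the special "v" local-param:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class DateRangeLocalParams {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      // The whole query is the local-params expression, so the range may follow the "}".
      SolrQuery contains = new SolrQuery(
          "{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]");
      // Negated clause: the sub-query text has to go into the special "v" local-param.
      SolrQuery notContains = new SolrQuery(
          "-{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]'}");
      System.out.println(client.query(contains).getResults().getNumFound());
      System.out.println(client.query(notContains).getResults().getNumFound());
    }
  }
}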

On Tue, Sep 20, 2016 at 11:21 PM David Smiley 
wrote:

> Personally I learned this by poring over Solr's source code some time
> ago.  I suppose the only official reference to this stuff is:
>
> https://cwiki.apache.org/confluence/display/solr/Local+Parameters+in+Queries
> But that page doesn't address the implications for when the syntax is a
> clause of a larger query instead of being the whole query (i.e. has "{!"...
> but not at the first char).
>
> On Tue, Sep 20, 2016 at 2:06 PM Sandeep Khanzode
>  wrote:
>
>> Wow. Simply awesome!
>> Where can I read more about this? I am not sure whether I understand what
>> is going on behind the scenes ... like which parser is invoked for !field,
>> how we can know which special local params exist, whether we should
>> prefer edismax over others, when the LuceneQParser is invoked in other
>> conditions, etc.? I would appreciate it if you could suggest some references to
>> catch up.
>> Thanks a lot ...  SRK
>>
>> On Tuesday, September 20, 2016 5:54 PM, David
>> Smiley  wrote:
>>
>>
>>  OH!  Ok the moment the query no longer starts with "{!", the query is
>> parsed by defType (for 'q') and will default to lucene QParser.  So then
>> it
>> appears we have a clause with a NOT operator.  In this parsing mode,
>> embedded "{!" terminates at the "}".  This means you can't put the
>> sub-query text after the "}", you instead need to put it in the special
>> "v"
>> local-param.  e.g.:
>> -{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
>> 2016-08-26T15:00:12Z]'}
>>
>> On Tue, Sep 20, 2016 at 8:15 AM Sandeep Khanzode
>>  wrote:
>>
>> > This is what I get ...
>> > { "responseHeader": { "status": 400, "QTime": 1, "params": { "q":
>> > "-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
>> > 2016-08-26T15:00:12Z]", "indent": "true", "wt": "json", "_":
>> > "1474373612202" } }, "error": { "msg": "Invalid Date in Date Math
>> > String:'[2016-08-26T12:00:12Z'", "code": 400 }}
>> >  SRK
>> >
>> >    On Tuesday, September 20, 2016 5:34 PM, David Smiley <
>> > david.w.smi...@gmail.com> wrote:
>> >
>> >
>> >  It should, I think... what happens? Can you ascertain the nature of the
>> > results?
>> > ~ David
>> >
>> > On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
>> >  wrote:
>> >
>> > > For Solr 6.1.0
>> > > This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
>> > >
>> > > This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
>> > > 2016-08-26T15:00:12Z]
>> > >
>> > >
>> > > Why does this not work?-{!field f=schedule
>> > > op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
>> > >  SRK
>> >
>> > --
>> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> > http://www.solrenterprisesearchserver.com
>> >
>> >
>> >
>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
>>
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


   

Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread David Smiley
So that page referenced describes local-params, and describes the special
"v" local-param.  But first, see a list of all query parsers (which lists
"field"): https://cwiki.apache.org/confluence/display/solr/Other+Parsers
and
https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser for
the "lucene" one.

The "op" param is rather unique... it's not defined by any query parser.  A
trick is done in which a custom field type (DateRangeField in this case) is
able to inspect the local-params, and thus define and use params it needs.
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates "More
DateRangeField Details" mentions "op".  {!lucene df=dateRange
op=Contains}... would also work.  I don't know of any other local-param
used in this way.

On Tue, Sep 20, 2016 at 11:21 PM David Smiley 
wrote:

> Personally I learned this by poring over Solr's source code some time
> ago.  I suppose the only official reference to this stuff is:
>
> https://cwiki.apache.org/confluence/display/solr/Local+Parameters+in+Queries
> But that page doesn't address the implications for when the syntax is a
> clause of a larger query instead of being the whole query (i.e. has "{!"...
> but not at the first char).
>
> On Tue, Sep 20, 2016 at 2:06 PM Sandeep Khanzode
>  wrote:
>
>> Wow. Simply awesome!
>> Where can I read more about this? I am not sure whether I understand what
>> is going on behind the scenes ... like which parser is invoked for !field,
>> how we can know which special local params exist, whether we should
>> prefer edismax over others, when the LuceneQParser is invoked in other
>> conditions, etc.? I would appreciate it if you could suggest some references to
>> catch up.
>> Thanks a lot ...  SRK
>>
>> On Tuesday, September 20, 2016 5:54 PM, David
>> Smiley  wrote:
>>
>>
>>  OH!  Ok the moment the query no longer starts with "{!", the query is
>> parsed by defType (for 'q') and will default to lucene QParser.  So then
>> it
>> appears we have a clause with a NOT operator.  In this parsing mode,
>> embedded "{!" terminates at the "}".  This means you can't put the
>> sub-query text after the "}", you instead need to put it in the special
>> "v"
>> local-param.  e.g.:
>> -{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
>> 2016-08-26T15:00:12Z]'}
>>
>> On Tue, Sep 20, 2016 at 8:15 AM Sandeep Khanzode
>>  wrote:
>>
>> > This is what I get ...
>> > { "responseHeader": { "status": 400, "QTime": 1, "params": { "q":
>> > "-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
>> > 2016-08-26T15:00:12Z]", "indent": "true", "wt": "json", "_":
>> > "1474373612202" } }, "error": { "msg": "Invalid Date in Date Math
>> > String:'[2016-08-26T12:00:12Z'", "code": 400 }}
>> >  SRK
>> >
>> >On Tuesday, September 20, 2016 5:34 PM, David Smiley <
>> > david.w.smi...@gmail.com> wrote:
>> >
>> >
>> >  It should, I think... what happens? Can you ascertain the nature of the
>> > results?
>> > ~ David
>> >
>> > On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
>> >  wrote:
>> >
>> > > For Solr 6.1.0
>> > > This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
>> > >
>> > > This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
>> > > 2016-08-26T15:00:12Z]
>> > >
>> > >
>> > > Why does this not work?-{!field f=schedule
>> > > op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
>> > >  SRK
>> >
>> > --
>> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> > http://www.solrenterprisesearchserver.com
>> >
>> >
>> >
>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
>>
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread David Smiley
Personally I learned this by poring over Solr's source code some time
ago.  I suppose the only official reference to this stuff is:
https://cwiki.apache.org/confluence/display/solr/Local+Parameters+in+Queries
But that page doesn't address the implications for when the syntax is a
clause of a larger query instead of being the whole query (i.e. has "{!"...
but not at the first char).

On Tue, Sep 20, 2016 at 2:06 PM Sandeep Khanzode
 wrote:

> Wow. Simply awesome!
> Where can I read more about this? I am not sure whether I understand what
> is going on behind the scenes ... like which parser is invoked for !field,
> how we can know which special local params exist, whether we should
> prefer edismax over others, when the LuceneQParser is invoked in other
> conditions, etc.? I would appreciate it if you could suggest some references to
> catch up.
> Thanks a lot ...  SRK
>
> On Tuesday, September 20, 2016 5:54 PM, David
> Smiley  wrote:
>
>
>  OH!  Ok the moment the query no longer starts with "{!", the query is
> parsed by defType (for 'q') and will default to lucene QParser.  So then it
> appears we have a clause with a NOT operator.  In this parsing mode,
> embedded "{!" terminates at the "}".  This means you can't put the
> sub-query text after the "}", you instead need to put it in the special "v"
> local-param.  e.g.:
> -{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
> 2016-08-26T15:00:12Z]'}
>
> On Tue, Sep 20, 2016 at 8:15 AM Sandeep Khanzode
>  wrote:
>
> > This is what I get ...
> > { "responseHeader": { "status": 400, "QTime": 1, "params": { "q":
> > "-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> > 2016-08-26T15:00:12Z]", "indent": "true", "wt": "json", "_":
> > "1474373612202" } }, "error": { "msg": "Invalid Date in Date Math
> > String:'[2016-08-26T12:00:12Z'", "code": 400 }}
> >  SRK
> >
> >On Tuesday, September 20, 2016 5:34 PM, David Smiley <
> > david.w.smi...@gmail.com> wrote:
> >
> >
> >  It should, I think... what happens? Can you ascertain the nature of the
> > results?
> > ~ David
> >
> > On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
> >  wrote:
> >
> > > For Solr 6.1.0
> > > This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
> > >
> > > This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> > > 2016-08-26T15:00:12Z]
> > >
> > >
> > > Why does this not work?-{!field f=schedule
> > > op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
> > >  SRK
> >
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> > http://www.solrenterprisesearchserver.com
> >
> >
> >
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
>

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2016-09-20 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9527:
---
Attachment: SOLR-9527.patch

[~steviebee] Thanks for the patch :) I updated your patch with a unit test 
which reproduces the problem (without your patch) and passes with your patch.

The failure can be reproduced with this command,

ant test  -Dtestcase=TestLocalFSCloudBackupRestore -Dtests.method=test 
-Dtests.seed=F465CD94BAC9633C -Dtests.slow=true -Dtests.locale=lt 
-Dtests.timezone=America/Inuvik -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[~varunthacker] Could you please take a look?

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9527.patch, SOLR-9527.patch
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9541) Support configurable authentication mechanism for internode communication

2016-09-20 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-9541:
---
Description: 
SOLR-7849 introduced PKI based authentication mechanism for internode 
communication. The main reason for introducing SOLR-7849 was,

>> Relying on every Authentication plugin to secure the internode communication 
>> is error prone. 

At Cloudera we are using Kerberos protocol for all communication without any 
issues (i.e. between client/server as well as server/server). We should make 
this internode authentication mechanism configurable (with default as PKI based 
mechanism). This will allow users to decide the appropriate authentication 
mechanism based on their security requirements.

  was:
SOLR-7849 introduced PKI based authentication mechanism for internode 
communication. The main reason for this feature (as per SOLR-7849) is,

>> Relying on every Authentication plugin to secure the internode communication 
>> is error prone. 

At Cloudera we are using Kerberos protocol for all communication without any 
issues (i.e. between client/server as well as server/server). We should make 
this internode authentication mechanism configurable (with default as PKI based 
mechanism). This will allow users to decide the appropriate mechanism based on 
their security requirements.


> Support configurable authentication mechanism for internode communication
> -
>
> Key: SOLR-9541
> URL: https://issues.apache.org/jira/browse/SOLR-9541
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.3, 6.0
>Reporter: Hrishikesh Gadre
>
> SOLR-7849 introduced PKI based authentication mechanism for internode 
> communication. The main reason for introducing SOLR-7849 was,
> >> Relying on every Authentication plugin to secure the internode 
> >> communication is error prone. 
> At Cloudera we are using Kerberos protocol for all communication without any 
> issues (i.e. between client/server as well as server/server). We should make 
> this internode authentication mechanism configurable (with default as PKI 
> based mechanism). This will allow users to decide the appropriate 
> authentication mechanism based on their security requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9541) Support configurable authentication mechanism for internode communication

2016-09-20 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-9541:
--

 Summary: Support configurable authentication mechanism for 
internode communication
 Key: SOLR-9541
 URL: https://issues.apache.org/jira/browse/SOLR-9541
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.0, 5.3
Reporter: Hrishikesh Gadre


SOLR-7849 introduced PKI based authentication mechanism for internode 
communication. The main reason for this feature (as per SOLR-7849) is,

>> Relying on every Authentication plugin to secure the internode communication 
>> is error prone. 

At Cloudera we are using Kerberos protocol for all communication without any 
issues (i.e. between client/server as well as server/server). We should make 
this internode authentication mechanism configurable (with default as PKI based 
mechanism). This will allow users to decide the appropriate mechanism based on 
their security requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.2 - Build # 14 - Unstable

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.2/14/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([755CF9353F497A41]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:258)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
at 
__randomizedtesting.SeedInfo.seed([755CF9353F497A41:7E70AEF5A080C6EE]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:871)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:818)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testNumericQuery(SpellCheckComponentTest.java:154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Updated] (SOLR-8495) Schemaless mode cannot index large text fields

2016-09-20 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8495:
---
Attachment: SOLR-8495.patch

Here is the initial patch for this issue. It is based on idea #1 from
[~steve_rowe].

This patch introduces a new {{ParseLongStringFieldUpdateProcessorFactory}} which
does the check
{code}
if (valSize > 32000) {
  return new LongStringField(stringVal);
}
{code}
So we can add a new type mapping to {{AddSchemaFieldsUpdateProcessorFactory}}
{code}
<lst name="typeMapping">
  <str name="valueClass">org.apache.solr.update.processor.LongStringField</str>
  <str name="fieldType">lstring</str>
</lst>
{code}

There are some problems with this approach:
- We must define the chunk size (into which the large string is split) inside
the schema file (for {{ChunkTokenizerFactory}}), not inside solrconfig.
- In the multi-valued case, what should we do when the first value is > 32KB and
the second value is < 32KB? With this patch, the first value is mapped to
LongStringField while the second value is still a String, so
{{AddSchemaFieldsUpdateProcessor#mapValueClassesToFieldType}} will create a
field based on {{defaultFieldType}} (should we modify the method?)

> Schemaless mode cannot index large text fields
> --
>
> Key: SOLR-8495
> URL: https://issues.apache.org/jira/browse/SOLR-8495
> Project: Solr
>  Issue Type: Bug
>  Components: Data-driven Schema, Schema and Analysis
>Affects Versions: 4.10.4, 5.3.1, 5.4
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-medium
> Fix For: 5.5, 6.0
>
> Attachments: SOLR-8495.patch
>
>
> The schemaless mode by default indexes all string fields into an indexed 
> StrField which is limited to 32KB text. Anything larger than that leads to an 
> exception during analysis.
> {code}
> Caused by: java.lang.IllegalArgumentException: Document contains at least one 
> immense term in field="text" (whose UTF8 encoding is longer than the max 
> length 32766)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2016-09-20 Thread Stephen Lewis (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Lewis updated SOLR-9527:

Attachment: SOLR-9527.patch

This patch accomplishes two things on branch branch_6_2:

(a) It fixes the issue of replica distribution for the leaders.
(b) It also adds support for the CreateNodeSet parameter for collection restore.

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
> Attachments: SOLR-9527.patch
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508210#comment-15508210
 ] 

David Smiley commented on LUCENE-7453:
--

bq. To me the difference between docnum and docid is really that docnum is one 
letter longer  Seriously, it doesn't seem to be explaining anything more than 
docid does. It would be more self-explanatory to call it docSegmentIndex, but 
this seems verbose.

I feel the same way -- there is no semantic difference.  

> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> solr), while {{docid}} in lucene is currently something local to a segment, and 
> not comparable directly across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment local identifier for a doc. 
> Regardless of what solr wants to do in their api (eg keeping _docid_), I 
> think we should switch the lucene apis and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9467) Document Transformer to Remove Fields

2016-09-20 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508176#comment-15508176
 ] 

Gus Heck commented on SOLR-9467:


Ok, reading [~hossman]'s cases again:

# Yes, that's nonsense; a quick test locally shows that my code already ignores
it. I'll just add a unit test to document this.
# Good point, my code doesn't handle this case. I think this should be
supported.
# Although I wasn't planning on using it for security-type stuff, I can quickly
imagine that this feature might be used that way. I wouldn't want info to
become exposed because it got aliased. Quick testing in the admin UI for my
local install shows that the current code already achieves this, and it should
be tested and documented.

So if that sounds good, I'll work on providing another patch sometime in the
near future.
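
For reference, a minimal SolrJ sketch of how the proposed transformer would be
requested via the fl parameter (the [fl.rm] name is only the proposal from this
issue, not an existing Solr transformer, and the collection name is an
assumption):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class RemoveFieldDemo {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFields("*", "[fl.rm v=\"title\"]");  // i.e. fl=*,[fl.rm v="title"]
      System.out.println(client.query(q).getResults());
    }
  }
}
{code}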





> Document Transformer to Remove Fields
> -
>
> Key: SOLR-9467
> URL: https://issues.apache.org/jira/browse/SOLR-9467
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2
>Reporter: Gus Heck
> Attachments: SOLR-9467.patch
>
>
> Given that SOLR-3191 has become bogged down and inactive, evidently stuck in 
> low-level details, and since I have wished several times for some way to just 
> get that one big field out of my results to improve transfer times without 
> making a big brittle list of all my other fields, I'd like to propose a 
> DocumentTransformer that accomplishes this.
> It would look something like this:
> {code}fl=*,[fl.rm v="title"]{code} 
> Since removing one field with a known name is probably the most common case, 
> I'd like to start by keeping this simple, and if further features like globs 
> or lists of fields are desired, subsequent Jira tickets can be opened to add 
> them. I'm not attached to specifics here, only looking to keep things simple and 
> solve the key use case. If you don't like fl.rm as a name for the transformer, 
> suggest a better one (for example). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6128 - Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6128/
Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([86AC16513837805E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
replicaCount expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: replicaCount expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([86AC16513837805E:EF8298B96CBEDA6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testNoConfigSetExist(CollectionsAPIDistributedZkTest.java:619)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)

[jira] [Commented] (SOLR-9524) SolrIndexSearcher.getIndexFingerprint uses dubious synchronization

2016-09-20 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508053#comment-15508053
 ] 

Mike Drob commented on SOLR-9524:
-

Thanks for looking. I agree that SOLR-9506 would be a good improvement too.

{quote}
The reason I didn't use computeIfAbsent is computing fingerprint could throw an 
exception. Code to account for exception looks pretty ugly.

It also seems like some changes to the original patch, I dont remember using 
ConcurrentHashmap.
{quote}
The CHM was added in commit 37ae065 by Noble.

I think that the ugliness of the code around exception handling here is a fine 
tradeoff for the correctness issue that we've already seen and the performance 
issue of locking on the whole map. If you have 5 fingerprints to calculate (at 
roughly two seconds each), then the current code will need 10 seconds total, 
since the calculations can only happen serially. Letting computeIfAbsent manage 
the concurrent access for us seems much, much better.
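
For illustration, a minimal sketch of the computeIfAbsent approach argued for
here (the field, key/value types, and helper method are placeholders, not the
actual SolrIndexSearcher code; it only shows how the checked-exception
"ugliness" can be contained):

{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.ConcurrentHashMap;

public class FingerprintCacheSketch {
  // Placeholder cache: in Solr this would hold IndexFingerprint keyed per searcher/maxVersion.
  private final ConcurrentHashMap<Long, Object> cache = new ConcurrentHashMap<>();

  public Object getIndexFingerprint(long maxVersion) throws IOException {
    try {
      // Each key is computed at most once; independent keys do not block each other.
      return cache.computeIfAbsent(maxVersion, v -> {
        try {
          return computeFingerprint(v);          // may throw IOException
        } catch (IOException e) {
          throw new UncheckedIOException(e);     // tunnel the checked exception out of the lambda
        }
      });
    } catch (UncheckedIOException e) {
      throw e.getCause();                        // unwrap for callers that expect IOException
    }
  }

  private Object computeFingerprint(long maxVersion) throws IOException {
    return new Object();                         // stand-in for the real fingerprint calculation
  }
}
{code}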

> SolrIndexSearcher.getIndexFingerprint uses dubious synchronization
> --
>
> Key: SOLR-9524
> URL: https://issues.apache.org/jira/browse/SOLR-9524
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Mike Drob
> Attachments: SOLR-9524.patch
>
>
> In SOLR-9310 we added more code that does some fingerprint caching in 
> SolrIndexSearcher. However, the synchronization looks like it could be made 
> more efficient and may have issues with correctness.
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L2371-L2385
> Some of the issues:
> * Double checked locking needs use of volatile variables to ensure proper 
> memory semantics.
> * sync on a ConcurrentHashMap is usually a code smell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-09-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6871:
-
Fix Version/s: (was: 5.0)

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at the same time 
> and be confident that it was clear what version of solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a consensus to move to 
> keeping "tutorial" content on the website, where it can be integrated 
> directly in with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to solr & the solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of solr come out and changes are made that mandate corresponding changes to the 
> tutorial.  This process _should_ also account for things like having multiple 
> versions of the tutorial live at one time (ie: at some point in the future, 
> we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-09-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-6871:


Assignee: Steve Rowe

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at th same time 
> and be confident that it was clear what version of solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a concensus to move to a 
> keeping "tutorial" content on the website, where it can be integrated 
> directly in with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to solr & the solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of solr come out and cahnges are made that mandate corisponding hanges to the 
> tutorial.  this process _should_ also account for things like having multiple 
> versions of the tutorial live at one time (ie: at some point in the future, 
> we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508026#comment-15508026
 ] 

Mike Drob commented on SOLR-9330:
-

You're right about there still being a race condition in my solution. For some 
reason I wasn't thinking about what happens in that case.

Re: the jmx exception - yeah, it's the same thing. We have a custom jmx handler 
on top of the mbean endpoint that has run into this same issue. I should have 
cleaned up the commit message there.

[~werder], the reload operation isn't the only place where this can throw 
exceptions - it can also happen during core delete or during shutdown. So if we 
add synchronization, we will need to look at {{CoreContainer::shutdown}}, 
{{SolrCores::close}}, etc. I don't have tests written for all of these, but 
it's a nearly identical stack trace, given that reload is essentially unload + 
load.

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null
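
As a rough illustration of the proposal quoted above, here is a minimal sketch 
of a "wait for the new searcher" reload path, assuming only the getSearcher 
signature given in the description; the variable names, the blocking call, and 
the omission of exception handling are illustrative, not the actual patch:
{code}
// Hedged sketch: block until the new searcher is registered.
// 'core' is an open SolrCore; the call matches the signature quoted above.
final java.util.concurrent.Future[] waitSearcher = new java.util.concurrent.Future[1];
core.getSearcher(true, false, waitSearcher, true);
if (waitSearcher[0] != null) {
  waitSearcher[0].get(); // returns only once the new searcher is registered
}
{code}
With returnSearcher=false no searcher reference is handed back, so nothing 
needs to be released afterwards; the call is made here only for the wait.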



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508004#comment-15508004
 ] 

Paul Elschot commented on LUCENE-7453:
--

I'm in favour, but it is going to be hard:
{code}
public abstract class DocIdSet ...
public abstract class DocIdSetIterator ...
{code}
I vaguely recall not really agreeing with these names, but it is probably too 
late now.



> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> solr), while {{docid}} in lucene is currently something local to a segment, and 
> not comparable directly across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment local identifier for a doc. 
> Regardless of what solr wants to do in their api (eg keeping _docid_), I 
> think we should switch the lucene apis and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 454 - Still Unstable

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/454/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([4B7CF4AA61B28BCF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12357 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J0/temp/solr.handler.TestSolrConfigHandlerCloud_4B7CF4AA61B28BCF-001/init-core-data-001
   [junit4]   2> 3050516 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[4B7CF4AA61B28BCF]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 3050516 INFO  
(SUITE-TestSolrConfigHandlerCloud-seed#[4B7CF4AA61B28BCF]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 3050533 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[4B7CF4AA61B28BCF]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3050547 INFO  (Thread-5851) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3050548 INFO  (Thread-5851) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3050646 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[4B7CF4AA61B28BCF]) [] 
o.a.s.c.ZkTestServer start zk server on port:56549
   [junit4]   2> 3050646 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[4B7CF4AA61B28BCF]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 3050662 INFO  
(TEST-TestSolrConfigHandlerCloud.test-seed#[4B7CF4AA61B28BCF]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 3050690 INFO  (zkCallback-2415-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@857c764 name:ZooKeeperConnection 
Watcher:127.0.0.1:56549 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 3050690 INFO  

[jira] [Commented] (LUCENE-7407) Explore switching doc values to an iterator API

2016-09-20 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507892#comment-15507892
 ] 

Otis Gospodnetic commented on LUCENE-7407:
--

bq. there is a lot of fun improvements we can make here, in follow-on issues, 
so that e.g. LUCENE-7253 (merging of sparse doc values fields) is fixed.

So LUCENE-7253 is where the new Codec work for trunk will go?
Did you maybe create the other issues you mentioned?  Asking because I'm 
curious what you have in mind and so I can link+watch.

> Explore switching doc values to an iterator API
> ---
>
> Key: LUCENE-7407
> URL: https://issues.apache.org/jira/browse/LUCENE-7407
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
>  Labels: docValues
> Attachments: LUCENE-7407.patch
>
>
> I think it could be compelling if we restricted doc values to use an
> iterator API at read time, instead of the more general random access
> API we have today:
>   * It would make doc values disk usage more of a "you pay for what
> you actually use", like postings, which is a compelling
> reduction for sparse usage.
>   * I think codecs could compress better and maybe speed up decoding
> of doc values, even in the non-sparse case, since the read-time
> API is more restrictive "forward only" instead of random access.
>   * We could remove {{getDocsWithField}} entirely, since that's
> implicit in the iteration, and the awkward "return 0 if the
> document didn't have this field" would go away.
>   * We can remove the annoying thread locals we must make today in
> {{CodecReader}}, and close the trappy "I accidentally shared a
> single XXXDocValues instance across threads", since an iterator is
> inherently "use once".
>   * We could maybe leverage the numerous optimizations we've done for
> postings over time, since the two problems ("iterate over doc ids
> and store something interesting for each") are very similar.
> This idea has come up many times in the past, e.g. LUCENE-7253 is a recent
> example, and very early iterations of doc values started with exactly
> this ;)
> However, it's a truly enormous change, likely 7.0 only.  Or maybe we
> could have the new iterator APIs also ported to 6.x side by side with
> the deprecated existing random-access APIs.
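
To make the contrast concrete, here is a small sketch (not from the issue) of 
the two read-time styles: the current random-access NumericDocValues in 6.x 
versus a forward-only iterator. The NumericDocValuesIterator interface below is 
a placeholder defined only for illustration, since the final API is exactly 
what this issue is exploring, and the field name is made up:
{code}
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.DocIdSetIterator;

class DocValuesReadStyles {
  // Current 6.x style: random access by docID; docs without the field yield 0.
  static long readCurrent(LeafReader reader, int docID) throws IOException {
    NumericDocValues dv = reader.getNumericDocValues("price"); // field name is made up
    return dv == null ? 0L : dv.get(docID);
  }

  // Placeholder for whatever iterator-style API the issue settles on.
  interface NumericDocValuesIterator {
    int nextDoc() throws IOException; // DocIdSetIterator-style forward iteration
    long longValue() throws IOException;
  }

  // Proposed style: forward only, visiting just the documents that have the field.
  static long sumProposed(NumericDocValuesIterator it) throws IOException {
    long sum = 0;
    for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
      sum += it.longValue();
    }
    return sum;
  }
}
{code}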



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 400 - Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/400/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:62036 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:62036 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([DAA5AF664E81C43:85FE652CCA1471BB]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:98)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:295)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1500)
at 
org.apache.solr.cloud.BasicDistributedZkTest.distribTearDown(BasicDistributedZkTest.java:1147)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:969)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:62036 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:235)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)

[jira] [Commented] (SOLR-9524) SolrIndexSearcher.getIndexFingerprint uses dubious synchronization

2016-09-20 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507812#comment-15507812
 ] 

Pushkar Raste commented on SOLR-9524:
-

I checked logs on one of our biggest collections, with the following stats:
{{num docs: 1591412892}}
{{shards: 12}} spread across two solr instances (each process hosts 6 shards 
and each instance runs on a separate physical box).
{{auto soft commit : 1 sec}} (not sure this matters, as computing the 
fingerprint opens a new NRT searcher anyway)
There is replication too, but I don't think that matters a lot here.

It is typically taking ~2 seconds on average.

This definitely requires caching. 

I am planning to take on 
[SOLR-9506|https://issues.apache.org/jira/browse/SOLR-9506] this week. 

> SolrIndexSearcher.getIndexFingerprint uses dubious synchronization
> --
>
> Key: SOLR-9524
> URL: https://issues.apache.org/jira/browse/SOLR-9524
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Mike Drob
> Attachments: SOLR-9524.patch
>
>
> In SOLR-9310 we added more code that does some fingerprint caching in 
> SolrIndexSearcher. However, the synchronization looks like it could be made 
> more efficient and may have issues with correctness.
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L2371-L2385
> Some of the issues:
> * Double-checked locking needs the use of volatile variables to ensure proper 
> memory semantics.
> * sync on a ConcurrentHashMap is usually a code smell
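
For comparison, a minimal sketch of a per-key cache that avoids both concerns 
by relying on ConcurrentHashMap.computeIfAbsent rather than double-checked 
locking or synchronizing on the map; this is not the SolrIndexSearcher code, 
and the fingerprint type is a placeholder:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class FingerprintCacheSketch {
  private final ConcurrentMap<Long, Object> byMaxVersion = new ConcurrentHashMap<>();

  Object getFingerprint(long maxVersion) {
    // computeIfAbsent is atomic per key: concurrent callers for the same
    // maxVersion compute the value once and all observe the same instance.
    return byMaxVersion.computeIfAbsent(maxVersion, this::computeFingerprint);
  }

  private Object computeFingerprint(long maxVersion) {
    return new Object(); // stand-in for the expensive fingerprint computation
  }
}
{code}
The trade-off is that computeIfAbsent holds its bin lock while the value is 
computed, which may or may not be acceptable for an expensive fingerprint.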



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507786#comment-15507786
 ] 

Uwe Schindler commented on LUCENE-7453:
---

I was the one who suggested that change. Imho the change would only modify 
parameter names and maybe some getters, although those are mostly called doc() 
in iterators. So this would not break anything or require updates in users' 
code - just the naming of parameters and the corresponding javadocs. So the 
risk is low.
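
To make the scope concrete, a tiny illustration (not from any patch) of the 
kind of rename being discussed; only the parameter name and javadoc change, so 
existing call sites compile unchanged:
{code}
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;

class DocnumRenameSketch {
  /** @param docNum segment-local document number (previously named docID) */
  static Document load(IndexReader reader, int docNum) throws IOException {
    return reader.document(docNum); // callers are unaffected by the rename
  }
}
{code}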

> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> solr), while {{docid}} in lucene is currently something local to a segment, and 
> not comparable directly across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment local identifier for a doc. 
> Regardless of what solr wants to do in their api (eg keeping _docid_), I 
> think we should switch the lucene apis and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5379) Query-time multi-word synonym expansion

2016-09-20 Thread Jan Høydahl (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507778#comment-15507778
 ] 

Jan Høydahl commented on SOLR-5379:
---

[~daitken], [~atuljangra] and [~mjsminkey], I'm sorry there were no replies 
to your questions about updating the patch. What it probably means is that 
no one has had the capacity or need to spend time on this. It will probably take 
some effort to lift the patch from 4.x to 6.x, and then get it ready for 
committing either as part of edismax or as a subclass.

bq. What can we do to get this made official??
If you (or your company) can contribute development work yourself, that would 
be best. Else hire someone who can help you and/or just keep nagging here 
until it is done :-)

> Query-time multi-word synonym expansion
> ---
>
> Key: SOLR-5379
> URL: https://issues.apache.org/jira/browse/SOLR-5379
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Tien Nguyen Manh
>  Labels: multi-word, queryparser, synonym
> Fix For: 4.9, 6.0
>
> Attachments: conf-test-files-4_8_1.patch, quoted-4_8_1.patch, 
> quoted.patch, solr-5379-version-4.10.3.patch, synonym-expander-4_8_1.patch, 
> synonym-expander.patch
>
>
> While dealing with synonyms at query time, solr fails to work with multi-word 
> synonyms for several reasons:
> - First, the lucene queryparser tokenizes the user query by space, so it 
> splits a multi-word term into separate terms before feeding them to the 
> synonym filter, and the synonym filter can't recognize the multi-word term 
> to do the expansion
> - Second, if the synonym filter expands into multiple terms that contain a 
> multi-word synonym, the SolrQueryParserBase currently uses MultiPhraseQuery 
> to handle synonyms. But MultiPhraseQuery doesn't work with terms that have a 
> different number of words.
> For the first one, we can quote all multi-word synonyms in the user query 
> so that the lucene queryparser doesn't split them. There is a jira task 
> related to this one: https://issues.apache.org/jira/browse/LUCENE-2605.
> For the second, we can replace MultiPhraseQuery with an appropriate 
> BooleanQuery of SHOULD clauses containing multiple PhraseQuery instances, in 
> case the token stream has a multi-word synonym.
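
As a sketch of the second suggestion, one PhraseQuery per synonym variant 
combined under SHOULD handles variants with different word counts; the field 
name and variants are made-up examples, and analysis details are ignored:
{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;

class SynonymVariantQuerySketch {
  static Query build(String field, String[][] variants) {
    BooleanQuery.Builder bq = new BooleanQuery.Builder();
    for (String[] words : variants) {
      PhraseQuery.Builder pq = new PhraseQuery.Builder();
      for (String word : words) {
        pq.add(new Term(field, word)); // each variant becomes its own phrase
      }
      bq.add(pq.build(), BooleanClause.Occur.SHOULD);
    }
    return bq.build();
  }
}
{code}
For example, build("title", new String[][] {{"tv"}, {"television"}, {"flat", 
"screen", "tv"}}) matches any one of the variants as a phrase.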



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9310) PeerSync fails on a node restart due to IndexFingerPrint mismatch

2016-09-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9310:

Fix Version/s: (was: 6.2.1)

This wasn't backported to 6.2.1, so removing the 6.2.1 fix version.

> PeerSync fails on a node restart due to IndexFingerPrint mismatch
> -
>
> Key: SOLR-9310
> URL: https://issues.apache.org/jira/browse/SOLR-9310
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
> Fix For: 5.5.3, 6.3, trunk
>
> Attachments: PeerSync_3Node_Setup.jpg, PeerSync_Experiment.patch, 
> SOLR-9310.patch, SOLR-9310.patch, SOLR-9310.patch, SOLR-9310.patch, 
> SOLR-9310.patch, SOLR-9310.patch, SOLR-9310_3ReplicaTest.patch, 
> SOLR-9310_5x.patch, SOLR-9310_final.patch
>
>
> I found that Peer Sync fails if a node restarts and documents were indexed 
> while the node was down. The IndexFingerPrint check fails after the recovering 
> node applies updates. 
> This happens only when a node restarts, and not if a node just misses updates 
> for reasons other than being down.
> Please check the attached patch for the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[ANNOUNCE] Apache Solr 6.2.1 released

2016-09-20 Thread Shalin Shekhar Mangar
20 September 2016, Apache Solr™ 6.2.1 available

The Lucene PMC is pleased to announce the release of Apache Solr 6.2.1

Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search and analytics, rich document
parsing, geospatial search, extensive REST APIs as well as parallel SQL.
Solr is enterprise grade, secure and highly scalable, providing fault
tolerant distributed search and indexing, and powers the search and
navigation features of many of the world's largest internet sites.

This release includes 11 bug fixes since the 6.2.0 release. Some of the
major fixes are:

   - SOLR-9490: BoolField always returning false for non-DV fields when
   javabin involved (via solrj, or intra node communication)
   - SOLR-9188: BlockUnknown property makes inter-node communication
   impossible
   - SOLR-9389: HDFS Transaction logs stay open for writes which leaks
   Xceivers
   - SOLR-9438: Shard split can fail to write commit data on shutdown
   leading to data loss

Furthermore, this release includes Apache Lucene 6.2.1 which includes 3 bug
fixes since the 6.2.0 release.

The release is available for immediate download at:

   - http://www.apache.org/dyn/closer.lua/lucene/solr/6.2.1

Please read CHANGES.txt for a detailed list of changes:

   - https://lucene.apache.org/solr/6_2_1/changes/Changes.html

Please report any feedback to the mailing lists
(http://lucene.apache.org/solr/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring network
for distributing releases. It is possible that the mirror you are using may
not have replicated the release yet. If that is the case, please try
another mirror. This also goes for Maven access.


-- 
Regards,
Shalin Shekhar Mangar.


[ANNOUNCE] Apache Lucene 6.2.1 released

2016-09-20 Thread Shalin Shekhar Mangar
20 September 2016, Apache Lucene™ 6.2.1 available

The Lucene PMC is pleased to announce the release of Apache Lucene 6.2.1

Apache Lucene is a high-performance, full-featured text search engine library
written entirely in Java. It is a technology suitable for nearly any
application that requires full-text search, especially cross-platform.

This release contains 3 bug fixes since the 6.2.0 release:

* LUCENE-7417: The standard Highlighter could throw an
IllegalArgumentException when trying to highlight a query containing a
degenerate case of a MultiPhraseQuery with one term.

* LUCENE-7440: Document id skipping (PostingsEnum.advance) could throw an
ArrayIndexOutOfBoundsException exception on large index segments (>1.8B
docs) with large skips.

* LUCENE-7318: Fix backwards compatibility issues around StandardAnalyzer
and its components, introduced with Lucene 6.2.0. The moved classes were
restored in their original packages: LowerCaseFilter and StopFilter, as well
as several utility classes.

The release is available for immediate download at:

   - http://www.apache.org/dyn/closer.lua/lucene/java/6.2.1

Please read CHANGES.txt for a full list of new features and changes:

   - https://lucene.apache.org/core/6_2_1/changes/Changes.html

Please report any feedback to the mailing lists
(http://lucene.apache.org/core/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring network for
distributing releases. It is possible that the mirror you are using may not
have replicated the release yet. If that is the case, please try another
mirror. This also goes for Maven access.

-- 
Regards,
Shalin Shekhar Mangar.


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 856 - Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/856/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:46529/ui_srg/c/c8n_1x3_lf_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live 
SolrServers available to handle this 
request:[http://127.0.0.1:46529/ui_srg/c/c8n_1x3_lf_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([C116BB5412F5B424:4942848EBC09D9DC]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507683#comment-15507683
 ] 

Mikhail Khludnev commented on SOLR-9330:


[~mdrob]
My concern is regarding:
{code}
if (reader.getRefCount() > 0) {
  // and here the core is reloaded in parallel and closes the reader; I think
  // it can be reproduced with a sleep()
  lst.add("indexVersion", reader.getVersion());
}
{code}
Can you also share any observations regarding the 'jmx exception on shutdown', 
or perhaps you have a test? Is it the same case or is it something different?
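
For what it's worth, a small sketch of one way the stats path could tolerate a 
concurrent reload without extra synchronization: pin the reader with 
tryIncRef() before reading and skip the stat if it is already closed. The 
wrapper method and names are illustrative, not a proposed patch:
{code}
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.solr.common.util.NamedList;

class IndexVersionStatSketch {
  static void addIndexVersion(NamedList<Object> lst, DirectoryReader reader) throws IOException {
    // tryIncRef() returns false once the reader has been closed (e.g. by a reload),
    // so we either hold a reference for the duration of the read or skip the stat.
    if (reader.tryIncRef()) {
      try {
        lst.add("indexVersion", reader.getVersion());
      } finally {
        reader.decRef();
      }
    }
  }
}
{code}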

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507657#comment-15507657
 ] 

Andrey Kudryavtsev edited comment on SOLR-9330 at 9/20/16 8:16 PM:
---

If we are talking about the root cause of the race - it looks like things are a 
little bit easier than I thought. There is just another thread executor 
in CoreAdminHandler. If there were an API to avoid it, the reload operation 
would become more synchronous (patch is attached).

I thought it would require doing something like too_sync.patch, but probably 
not this time. 


was (Author: werder):
If we are talking about root cause of the race - it looks like things are a 
little bit more easily than I thought. There is just another  thread executor 
in CoreAdminHandler. If there will be API to avoid it, reload operation will 
become more synchronous (patch is attached).

It thought that it will require to do something like too_sync.patch, but not 
this time probably. 

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-9330:
-
Attachment: (was: SOLR-9390_too_sync.patch)

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-9330:
-
Attachment: too_sync.patch

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507657#comment-15507657
 ] 

Andrey Kudryavtsev edited comment on SOLR-9330 at 9/20/16 8:09 PM:
---

If we are talking about the root cause of the race - it looks like things are a 
little bit easier than I thought. There is just another thread executor 
in CoreAdminHandler. If there were an API to avoid it, the reload operation 
would become more synchronous (patch is attached).

I thought it would require doing something like too_sync.patch, but probably 
not this time. 


was (Author: werder):
If we are talking about root cause of the race - it looks like things are a 
little bit more easily than I thought. There is just another  thread executor 
in CoreAdminHandler. If there will be API to avoid it, reload operation will 
become more synchronous (patch is attached).

It thought that it will require to do something like SOLR-9390toosync.patch, 
but not this time probably. 

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, an exception can occur. 
> What do you think about introducing a new parameter for the reload request 
> that makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507657#comment-15507657
 ] 

Andrey Kudryavtsev edited comment on SOLR-9330 at 9/20/16 8:09 PM:
---

If we are talking about the root cause of the race - it looks like things are a 
little bit easier than I thought. There is just another thread executor 
in CoreAdminHandler. If there were an API to avoid it, the reload operation 
would become more synchronous (patch is attached).

I thought it would require doing something like SOLR-9390toosync.patch, but 
probably not this time. 


was (Author: werder):
If we are talking about root cause of the race - it looks like things are a 
little bit more easily than I thought. There is just another  thread executor 
in CoreAdminHandler. If there will be API to avoid it, reload operation will 
become more synchronous (patch is attached).

It thought that it will require to do something like SOLR-9390_too_sync.patch, 
but not this time probably. 

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390_too_sync.patch
>
>
> We happen to execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, the exception can happen.
> What do you think about introducing a new parameter for the reload request 
> which makes it fully synchronized? Basically it would force a call to 
> {code}SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final 
> Future[] waitSearcher, boolean updateHandlerReopens){code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507657#comment-15507657
 ] 

Andrey Kudryavtsev commented on SOLR-9330:
--

If we are talking about the root cause of the race, it looks like things are a 
little bit simpler than I thought. There is just another thread executor 
in CoreAdminHandler. If there were an API to avoid it, the reload operation would 
become more synchronous (patch is attached).

I thought it would require doing something like SOLR-9390_too_sync.patch, 
but probably not this time. 

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390_too_sync.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, the exception can happen.
> What do you think about introducing a new parameter for the reload request 
> which makes it fully synchronized? Basically it would force a call to 
> {code}SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final 
> Future[] waitSearcher, boolean updateHandlerReopens){code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-9330:
-
Attachment: SOLR-9390_too_sync.patch

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch, SOLR-9390_too_sync.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, the exception can happen.
> What do you think about introducing a new parameter for the reload request 
> which makes it fully synchronized? Basically it would force a call to 
> {code}SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final 
> Future[] waitSearcher, boolean updateHandlerReopens){code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Andrey Kudryavtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated SOLR-9330:
-
Attachment: SOLR-9390.patch

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch, 
> SOLR-9390.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, the exception can happen.
> What do you think about introducing a new parameter for the reload request 
> which makes it fully synchronized? Basically it would force a call to 
> {code}SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final 
> Future[] waitSearcher, boolean updateHandlerReopens){code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9524) SolrIndexSearcher.getIndexFingerprint uses dubious synchronization

2016-09-20 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507625#comment-15507625
 ] 

Mike Drob commented on SOLR-9524:
-

bq. I will look through logs of our collections (fortunately we have plenty of 
those) and see how expensive fingerprint computation is.
[~praste] - did you get a chance to verify this? I tried to check on our 
systems, but the first few deployments I looked at were all on versions before 
index fingerprinting.

> SolrIndexSearcher.getIndexFingerprint uses dubious synchronization
> --
>
> Key: SOLR-9524
> URL: https://issues.apache.org/jira/browse/SOLR-9524
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Mike Drob
> Attachments: SOLR-9524.patch
>
>
> In SOLR-9310 we added more code that does some fingerprint caching in 
> SolrIndexSearcher. However, the synchronization looks like it could be made 
> more efficient and may have issues with correctness.
> https://github.com/apache/lucene-solr/blob/branch_6x/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L2371-L2385
> Some of the issues:
> * Double-checked locking needs volatile variables to ensure proper 
> memory semantics (a minimal sketch follows below).
> * Synchronizing on a ConcurrentHashMap is usually a code smell.
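
A minimal sketch of the safe double-checked locking shape, illustrative only and 
not the SolrIndexSearcher code:

{code}
// The field must be volatile so the unsynchronized first read is safe
// under the Java memory model.
class FingerprintCache {
  private volatile Object fingerprint;

  Object getOrCompute() {
    Object fp = fingerprint;            // first (cheap) check, no lock
    if (fp == null) {
      synchronized (this) {
        fp = fingerprint;               // second check under the lock
        if (fp == null) {
          fp = computeFingerprint();    // expensive work runs at most once
          fingerprint = fp;
        }
      }
    }
    return fp;
  }

  private Object computeFingerprint() {
    return new Object();                // stand-in for the real computation
  }
}
{code}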



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-9330:

Attachment: SOLR-9330.patch

[~mkhludnev] - that looks like a good start to a test. I do not think 10 
seconds is long enough to relegate something to @Nightly runs.

I'm putting up an alternative fix that seems to also make your test pass. I 
think I like my approach better because it may still return partial results 
when able, but I'm also not convinced that there won't be other handlers that 
suffer from similar problems that are still not addressed. Your approach may be 
able to fix everything at once.

Maybe the best solution is to do both things.

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9330.patch, SOLR-9390.patch, SOLR-9390.patch
>
>
> It happened that we executed these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in the same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the MBeans map
> * If a statistics request happens before the old searcher is completely 
> removed from everywhere, the exception can happen.
> What do you think about introducing a new parameter for the reload request 
> which makes it fully synchronized? Basically it would force a call to 
> {code}SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final 
> Future[] waitSearcher, boolean updateHandlerReopens){code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-09-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507593#comment-15507593
 ] 

Steve Rowe commented on SOLR-6871:
--

[~arafalov]:

bq. So, what do we do about this? [...] Is this just a matter of one person 
taking a lead and updating the tutorial as they see fit? Or is there some sort 
of discussion on which features need to be highlighted and which can be omitted.

I think the normal process should work okay - Lucene/Solr's official process is 
commit-then-review, but (as I've just done) pre-commit communication via 
patches should work too.  But maybe you're thinking about the live website?  I 
think we should err on the side of exhausting available review prior to 
publishing there.  (My version of that here was giving a rough deadline at 
which I'll publish if I haven't gotten any review.)

bq. And with links to the Reference Guide, do we link into the live version of 
it (easy and convenient) or do an offline reference to the version-matched 
export?

All of the links are to specific sections in the ref guide, and while I've seen 
internal PDF links work (depending on how the PDF was created and the PDF 
reader being used), I don't think they're dependable enough to allow us to use 
for this purpose.  So I don't think there's a real choice at this point.   
However, once [~ctargett]'s ref guide work is in place 
([https://github.com/ctargett/refguide-asciidoc-poc]), we should be able to 
make version-matched online docs that would be suitable for outbound links from 
the tutorial.

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the Solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at the same time 
> and be confident that it was clear what version of Solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a consensus to move to 
> keeping "tutorial" content on the website, where it can be integrated 
> directly in with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to Solr & the Solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of Solr come out and changes are made that mandate corresponding changes to 
> the tutorial.  This process _should_ also account for things like having 
> multiple versions of the tutorial live at one time (ie: at some point in the 
> future, we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 165 - Still unstable

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/165/

8 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard2
at 
__randomizedtesting.SeedInfo.seed([D9DB264E81824972:27B47EED43A26A63]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:794)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-9540) facet.exists for json.facet

2016-09-20 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9540:
--

 Summary: facet.exists for json.facet
 Key: SOLR-9540
 URL: https://issues.apache.org/jira/browse/SOLR-9540
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: faceting
Reporter: Mikhail Khludnev


It would be great to have the {{facet.exists}} (SOLR-5725) short-circuit optimisation 
in json.facet. Subjects for application (a classic-API sketch of the existing 
short-circuit follows the list):
* field enum faceting
* listed terms faceting (if it's there)
* query facet
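
For reference, a hedged SolrJ sketch of the classic-API short-circuit as it exists 
today (the field name "cat" is hypothetical); the json.facet equivalent proposed 
here does not exist yet:

{code}
import org.apache.solr.client.solrj.SolrQuery;

// Classic faceting API: facet.exists=true (SOLR-5725) caps every count at 1.
class FacetExistsSketch {
  static SolrQuery existsFacet() {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    q.setFacet(true);
    q.addFacetField("cat");            // hypothetical field name
    q.set("facet.method", "enum");
    q.set("facet.exists", "true");     // only existence is needed, no full counts
    return q;
  }
}
{code}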



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507544#comment-15507544
 ] 

Dawid Weiss edited comment on LUCENE-7453 at 9/20/16 7:30 PM:
--

To me the difference between {{docnum}} and {{docid}} is really that {{docnum}} 
is one letter longer :) Seriously, it doesn't seem to be explaining anything 
more than {{docid}} does. It would be more self-explanatory to call it 
{{docSegmentIndex}}, but this seems verbose.

Don't you think adding better documentation (in one place and linking to it) 
would be a better idea than just renaming? Also, the nomenclature here has been 
with us for years. I don't see an obvious benefit of switching to {{docnum}} 
for new users and I see how it may be a confusing change to existing 
Lucene-experienced developers (especially if they have their own code that 
would stick to "docid" in local variables, etc.).


was (Author: dweiss):
To me the difference between {{docnum}} and {{docid}} is really that {{docnum}} 
is one letter longer :) Seriously, it doesn't seem to be explaining anything 
more than {{docid}} does. It would be more self-explanatory to call it 
{{segmentIndex}}, but this seems verbose.

Don't you think adding better documentation (in one place and linking to it) 
would be a better idea than just renaming? Also, the nomenclature here has been 
with us for years. I don't see an obvious benefit of switching to {{docnum}} 
for new users and I see how it may be a confusing change to existing 
Lucene-experienced developers (especially if they have their own code that 
would stick to "docid" in local variables, etc.).

> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> Solr), while {{docid}} in Lucene is currently something local to a segment, and 
> not directly comparable across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment-local identifier for a doc. 
> Regardless of what Solr wants to do in its API (eg keeping _docid_), I 
> think we should switch the Lucene APIs and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507544#comment-15507544
 ] 

Dawid Weiss commented on LUCENE-7453:
-

To me the difference between {{docnum}} and {{docid}} is really that {{docnum}} 
is one letter longer :) Seriously, it doesn't seem to be explaining anything 
more than {{docid}} does. It would be more self-explanatory to call it 
{{segmentIndex}}, but this seems verbose.

Don't you think adding better documentation (in one place and linking to it) 
would be a better idea than just renaming? Also, the nomenclature here has been 
with us for years. I don't see an obvious benefit of switching to {{docnum}} 
for new users and I see how it may be a confusing change to existing 
Lucene-experienced developers (especially if they have their own code that 
would stick to "docid" in local variables, etc.).

> Change naming of variables/apis from docid to docnum
> 
>
> Key: LUCENE-7453
> URL: https://issues.apache.org/jira/browse/LUCENE-7453
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>
> In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
> reasoning for this is most notably that {{docid}} has a connotation about a 
> persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
> Solr), while {{docid}} in Lucene is currently something local to a segment, and 
> not directly comparable across segments.
> When I first started working on Lucene, I had this same confusion. {{docnum}} 
> is a much better name for this transient, segment-local identifier for a doc. 
> Regardless of what Solr wants to do in its API (eg keeping _docid_), I 
> think we should switch the Lucene APIs and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5725) Efficient facets without counts for enum method

2016-09-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-5725.

Resolution: Fixed

> Efficient facets without counts for enum method
> ---
>
> Key: SOLR-5725
> URL: https://issues.apache.org/jira/browse/SOLR-5725
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Alexey Kozhemiakin
>Assignee: Mikhail Khludnev
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-5725-5x.patch, SOLR-5725-master.patch, 
> SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, 
> SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, 
> facet.limit=0=true discrepancy between cloud and non-distr.txt
>
>
> h1. UPD: Specification
> To cap facet counts at 1, specify {{facet.exists=true}}. It can be used with 
> {{facet.method=enum}} or when that parameter is omitted. It can be used only on 
> non-trie fields, i.e. strings. It may speed up facet counting on large indices 
> and/or high-cardinality facet values.  
> h3. Short version:
> This improves performance for facet.method=enum when it's enough to know that 
> the facet count is >0, for example when you dynamically populate 
> filters on a search form. The new method checks whether two bitsets intersect 
> instead of counting the intersection size.
> h3. Long version:
> We have a dataset containing hundreds of millions of records; we facet by 
> dozens of fields with many facet excludes and have a relatively small number 
> of unique values per field, around thousands.
> Before executing a search, users work with an "advanced search" form; our goal is 
> to populate dozens of filters with values which are applicable with the other 
> selected values, so basically this is a use case for facets with mincount=1, 
> but without a need for actual counts.
> Our performance tests showed that facet.method=enum works much better than 
> fc\fcs, probably due to a specific ratio of "docset"\"unique terms count". 
> For example, average query execution time with method fc=1500ms, fcs=2600ms 
> and with enum=280ms. Profiling indicated that the majority of time for enum was 
> spent on intersecting docsets.
> Here's a patch that introduces an extension to facet calculation for 
> method=enum. Basically it uses docSetA.intersects(docSetB) instead of 
> docSetA.intersectionSize(docSetB).
> As a result we were able to reduce our average query time from 280ms to 60ms.
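
A minimal illustration of the intersects-vs-intersectionSize idea, using 
java.util.BitSet rather than Solr's DocSet API (sketch only):

{code}
import java.util.BitSet;

// "Exists" faceting only needs to know whether two sets share a document,
// which can stop at the first common bit instead of counting every match.
class ExistsVsCount {
  static boolean intersects(BitSet a, BitSet b) {
    return a.intersects(b);            // short-circuits on the first common bit
  }

  static int intersectionSize(BitSet a, BitSet b) {
    BitSet copy = (BitSet) a.clone();  // don't mutate the caller's set
    copy.and(b);
    return copy.cardinality();         // full count over the whole intersection
  }
}
{code}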



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9526) data_driven configs defaults to "strings" for unmapped fields, makes most fields containing "textual content" unsearchable, breaks tutorial examples

2016-09-20 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507506#comment-15507506
 ] 

Steve Rowe commented on SOLR-9526:
--

I attached a patch on SOLR-6871 that addresses the per-field search problem - 
see [my comment 
there|https://issues.apache.org/jira/browse/SOLR-6871?focusedCommentId=15507467=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15507467]
 for details.

> data_driven configs defaults to "strings" for unmapped fields, makes most 
> fields containing "textual content" unsearchable, breaks tutorial examples
> 
>
> Key: SOLR-9526
> URL: https://issues.apache.org/jira/browse/SOLR-9526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> James Pritchett pointed out on the solr-user list that this sample query from 
> the quick start tutorial matched no docs (even though the tutorial text says 
> "The above request returns only one document")...
> http://localhost:8983/solr/gettingstarted/select?wt=json&indent=true&q=name:foundation
> The root problem seems to be that the add-unknown-fields-to-the-schema chain 
> in data_driven_schema_configs is configured with...
> {code}
> strings
> {code}
> ...and the "strings" type uses StrField and is not tokenized.
> 
> Original thread: 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201609.mbox/%3ccac-n2zrpsspfnk43agecspchc5b-0ff25xlfnzogyuvyg2d...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 421 - Still Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/421/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:65190/l_jrx/forceleader_test_collection_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:65190/l_jrx/forceleader_test_collection_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([ED19AB03022BD75D:B8E9FC33BA92E3C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741)
at 
org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424)
at 
org.apache.solr.cloud.ForceLeaderTest.assertSendDocFails(ForceLeaderTest.java:315)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-6871) Need a process for updating & maintaining the new quickstart tutorial (and any other tutorials added to the website)

2016-09-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-6871:
-
Attachment: SOLR-6871.patch

Patch updating the quickstart tutorial to 6.2.0 - the patch mentions but 
doesn't include a few screen capture images I modified to bring them up-to-date.

I addressed all [~cpoerschke]'s suggestions above, except for:
bq. The "Indexing JSON" heading could become "Indexing Solr JSON" and its 
paragraph link into the Solr JSON reference guide section as is done for Solr 
XML.
-- I added a link to the ref guide section, but I left the heading as "Indexing 
JSON", since both and custom and Solr JSON are described there.

I addressed the per-field search borkedness mentioned in SOLR-9526 by simply 
upcasing the search term {{name:foundation}} to {{name:Foundation}} - this 
works because strings fields can be successfully queried with exact full field 
values.  I decided to leave in per-field querying, and not mention any caveats, 
since I figure SOLR-9526 will change the situation shortly.  I didn't find any 
other problems.

Finally, the patch also includes a new target {{generate-website-quickstart}} 
in {{solr/build.xml}} that makes a few minor modifications to 
{{quickstart.mdtext}} (see comments in the build file for details) and puts the 
result at {{solr/build/website/quickstart.mdtext}}.  This is a low-impact 
mechanism to allow the content to be shared between the bundled docs and the 
website.

If there are no objections I'll commit in a day or so, and put the modified 
version of the quickstart tutorial up on the website.

> Need a process for updating & maintaining the new quickstart tutorial (and 
> any other tutorials added to the website)
> 
>
> Key: SOLR-6871
> URL: https://issues.apache.org/jira/browse/SOLR-6871
> Project: Solr
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Minor
> Fix For: 5.0
>
> Attachments: SOLR-6871.patch
>
>
> Prior to SOLR-6058 the /solr/tutorial.html link on the website contained only 
> a simple landing page that then linked people to the "versioned" tutorial for 
> the most recent release -- or more specifically: the most recent release*s* 
> (plural) when we were releasing off of multiple branches (ie: links to both 
> the 4.0.0 tutorial, as well as the 3.6.3 tutorial when 4.0 came out)
> The old tutorial content lived alongside the Solr code, and was 
> automatically branched, tagged & released along with Solr.  When committing 
> any changes to Solr code (or post.jar code, or the sample data, or the sample 
> configs, etc..) you could also commit changes to the tutorial at the same time 
> and be confident that it was clear what version of Solr that tutorial went 
> along with.
> As part of SOLR-6058, it seems that there was a consensus to move to 
> keeping "tutorial" content on the website, where it can be integrated 
> directly in with other site content/navigation, and use the same look and 
> feel.
> I have no objection to this in principle -- but as a result of this choice, 
> there are outstanding issues regarding how devs should go about maintaining 
> this doc as changes are made to Solr & the Solr examples used in the tutorial.
> We need a clear process for where/how to edit the tutorial(s) as new versions 
> of Solr come out and changes are made that mandate corresponding changes to 
> the tutorial.  This process _should_ also account for things like having 
> multiple versions of the tutorial live at one time (ie: at some point in the 
> future, we'll certainly need to host the "5.13" tutorial if that's the current 
> "stable" release, but we'll also want to host the tutorial for "6.0-BETA" so 
> that people can try it out)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7453) Change naming of variables/apis from docid to docnum

2016-09-20 Thread Ryan Ernst (JIRA)
Ryan Ernst created LUCENE-7453:
--

 Summary: Change naming of variables/apis from docid to docnum
 Key: LUCENE-7453
 URL: https://issues.apache.org/jira/browse/LUCENE-7453
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Ryan Ernst


In SOLR-9528 a suggestion was made to change {{docid}} to {{docnum}}. The 
reasoning for this is most notably that {{docid}} has a connotation about a 
persistent unique identifier (eg like {{_id}} in elasticsearch or {{id}} in 
Solr), while {{docid}} in Lucene is currently something local to a segment, and not 
directly comparable across segments.

When I first started working on Lucene, I had this same confusion. {{docnum}} 
is a much better name for this transient, segment local identifier for a doc. 
Regardless of what Solr wants to do in its API (eg keeping _docid_), I think 
we should switch the Lucene APIs and variable names to use docnum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-8395) query-time join (with scoring) for single value numeric fields

2016-09-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-8395:
--

Assignee: Mikhail Khludnev

> query-time join (with scoring) for single value numeric fields
> --
>
> Key: SOLR-8395
> URL: https://issues.apache.org/jira/browse/SOLR-8395
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
>  Labels: easytest, features, newbie, starter
> Fix For: 5.5
>
> Attachments: SOLR-8395.patch, SOLR-8395.patch, SOLR-8395.patch, 
> SOLR-8395.patch
>
>
> Since LUCENE-5868 we have an opportunity to improve SOLR-6234 to make it join 
> int and long fields. I suppose it's worth adding a "simple" test to the Solr 
> NoScore suite. 
> * Alongside that, we can set the _multipleValues_ parameter based on the 
> _fromField_ cardinality declared in the schema;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7452) improve exception message: child query must only match non-parent docs, but parent docID=180314...

2016-09-20 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created LUCENE-7452:


 Summary: improve exception message: child query must only match 
non-parent docs, but parent docID=180314...
 Key: LUCENE-7452
 URL: https://issues.apache.org/jira/browse/LUCENE-7452
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 6.2
Reporter: Mikhail Khludnev
Priority: Minor


When the parent filter intersects with the child query, the exception exposes internal 
details: the docnum and the scorer class. I propose that the exception message suggest 
executing a query that intersects them both. There is an opinion to add this 
suggestion in addition to the existing details. 
My main concern against the existing details is that when the index is constantly 
updated, even though SOLR-9582 allows searching for a docnum, it would be like catching 
the wind; also think about the cloud case. But a user advised to execute the query 
intersection can catch the problem documents even if they occur sporadically.  
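
A rough sketch of the suggested diagnostic query, assuming plain Lucene Query objects 
for the parent filter and the child query (illustrative only, not part of a patch):

{code}
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

// Any document matching this conjunction is indexed as a parent *and* matched
// by the child query, which is exactly the overlap the exception complains about.
class ParentChildOverlapSketch {
  static Query overlap(Query parentFilter, Query childQuery) {
    return new BooleanQuery.Builder()
        .add(parentFilter, Occur.MUST)
        .add(childQuery, Occur.MUST)
        .build();
  }
}
{code}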



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6082) Umbrella JIRA for Admin UI and SolrCloud.

2016-09-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507358#comment-15507358
 ] 

Mikhail Khludnev commented on SOLR-6082:


sure! SOLR-9539

> Umbrella JIRA for Admin UI and SolrCloud.
> -
>
> Key: SOLR-6082
> URL: https://issues.apache.org/jira/browse/SOLR-6082
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.9, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> It would be very helpful if the admin UI were more "cloud friendly". This is 
> an umbrella JIRA so we can collect sub-tasks as necessary. I think there 
> might be scattered JIRAs about this, let's link them in as we find them.
> [~steffkes] - I've taken the liberty of assigning it to you since you 
> expressed some interest. Feel free to assign it back if you want...
> Let's imagine that a user has a cluster with _no_ collections assigned and 
> start from there.
> Here's a simple way to set this up. Basically you follow the reference guide 
> tutorial but _don't_ define a collection.
> 1> completely delete the "collection1" directory from example
> 2> cp -r example example2
> 3> in example, execute "java -DzkRun -jar start.jar"
> 4> in example2, execute "java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
> start.jar"
> Now the "cloud link" appears. If you expand the tree view, you see the two 
> live nodes. But, there's nothing in the graph view, no cores are selectable, 
> etc.
> First problem (need to solve before any sub-jiras, so including it here): You 
> have to push a configuration directory to ZK.
> [~thetapi] The _last_ time Stefan and I started allowing files to be written 
> to Solr from the UI it was...unfortunate. I'm assuming that there's something 
> similar here. That is, we shouldn't allow pushing the Solr config _to_ 
> ZooKeeper through the Admin UI, where they'd be distributed to all the solr 
> nodes. Is that true? If this is a security issue, we can keep pushing the 
> config dirs to ZK a manual step for now...
> Once we determine how to get configurations up, we can work on the various 
> sub-jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9539) UI freeze if Zookeeper is down

2016-09-20 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-9539:
--

 Summary: UI freeze if Zookeeper is down
 Key: SOLR-9539
 URL: https://issues.apache.org/jira/browse/SOLR-9539
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: web gui
Affects Versions: 6.2
Reporter: Mikhail Khludnev


# Download Solr 6.2 and Zookeeper 3.4.6 
# Launch Zookeeper {{$zkServer}}
# Launch two Solrs {{$bin\solr start -z localhost:2181 -e cloud}}
# Stop Zookeeper 
# Go to the new Admin UI. It's frozen; it responds with placeholders after a few 
minutes. However, the Solr URLs respond well.
# Launch Zookeeper again; the UI responds smoothly and rapidly.
I briefly checked the Chrome debugger and saw that every ...js response is slow; 
something like a request filter might be checking Zookeeper with a timeout, but I 
don't know.   
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507330#comment-15507330
 ] 

Mikhail Khludnev commented on SOLR-4787:


Right. It seems SOLR-8395 sank. You can estimate the score=none search time: it's 
dominated by the from-side result size. You need to run the from-side query, then build 
a long \{!terms} query from the result keys, and then measure its execution time. 
The sum of these times gives you a rough JoinUtil query time and lets you judge whether 
or not to reindex (a rough sketch follows below).  
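
A rough SolrJ sketch of that estimate, with a hypothetical from-side query, field 
name (id_i) and keys:

{code}
import org.apache.solr.client.solrj.SolrQuery;

// Step 1: run the from-side query and collect its keys.
// Step 2: time an equivalent {!terms} query built from those keys; the sum of the
// two times approximates what a JoinUtil-backed join would cost.
class JoinCostEstimateSketch {
  static SolrQuery fromSide() {
    return new SolrQuery("user:customer1").setFields("id_i").setRows(10000);
  }

  static SolrQuery toSide(String commaSeparatedKeys) {   // e.g. "101,205,307"
    return new SolrQuery("{!terms f=id_i}" + commaSeparatedKeys);
  }
}
{code}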

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will setup a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be included in the results.
> The solrconfig.xml in the main query core must 

Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread Sandeep Khanzode
Wow. Simply awesome!
Where can I read more about this? I am not sure I understand what is going on 
behind the scenes ... like which parser is invoked for !field, how we can know 
which special local params exist, whether we should prefer edismax over others, 
when the LuceneQParser is invoked in other conditions, etc. Would appreciate it 
if you could point me to some references to catch up. 
Thanks a lot ...  SRK 

On Tuesday, September 20, 2016 5:54 PM, David Smiley  wrote:
 

OH!  OK -- the moment the query no longer starts with "{!", the query is
parsed by defType (for 'q'), which defaults to the lucene QParser.  So then it
appears we have a clause with a NOT operator.  In this parsing mode, an
embedded "{!" terminates at the "}".  This means you can't put the
sub-query text after the "}"; you instead need to put it in the special "v"
local-param, e.g.:
-{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
2016-08-26T15:00:12Z]'}
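
For a concrete sketch (this assumes the default lucene defType and adds an
explicit *:* clause, which is a common way to write a purely negative query),
the whole q parameter could then look like:

q=*:* -{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
2016-08-26T15:00:12Z]'}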

On Tue, Sep 20, 2016 at 8:15 AM Sandeep Khanzode
 wrote:

> This is what I get ...
> { "responseHeader": { "status": 400, "QTime": 1, "params": { "q":
> "-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> 2016-08-26T15:00:12Z]", "indent": "true", "wt": "json", "_":
> "1474373612202" } }, "error": { "msg": "Invalid Date in Date Math
> String:'[2016-08-26T12:00:12Z'", "code": 400 }}
>  SRK
>
>    On Tuesday, September 20, 2016 5:34 PM, David Smiley <
> david.w.smi...@gmail.com> wrote:
>
>
>  It should, I think... what happens? Can you ascertain the nature of the
> results?
> ~ David
>
> On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
>  wrote:
>
> > For Solr 6.1.0
> > This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
> >
> > This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> > 2016-08-26T15:00:12Z]
> >
> >
> > Why does this not work? -{!field f=schedule
> > op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
> >  SRK
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
>

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


   

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3553 - Still Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3553/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:63114","node_name":"127.0.0.1:63114_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/32)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:63108;,   
"core":"c8n_1x3_lf_shard1_replica3",   "node_name":"127.0.0.1:63108_"}, 
"core_node2":{   "core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:63124;,   "node_name":"127.0.0.1:63124_",  
 "state":"down"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:63114;,   "node_name":"127.0.0.1:63114_",  
 "state":"active",   "leader":"true",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:63114","node_name":"127.0.0.1:63114_","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/32)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:63108;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:63108_"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:63124;,
  "node_name":"127.0.0.1:63124_",
  "state":"down"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:63114;,
  "node_name":"127.0.0.1:63114_",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3D9888298A26120C:B5CCB7F324DA7FF4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:170)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[jira] [Commented] (SOLR-9528) Make _docid_ (lucene id) a pseudo field

2016-09-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507135#comment-15507135
 ] 

Hoss Man commented on SOLR-9528:


bq. That works for many contexts, except perhaps Alex's original usecase: fetch 
me the document that corresponds to a specific \_docid\_ .

Ah, OK, I see -- I'm glad I asked: that usecase ("search for a document using its 
internal id") was not actually mentioned anywhere in this issue.

I would argue that we should view that as a broader problem (in a distinct 
jira): "make it easier to find documents that have a specific function value" 
(i.e.: add a new qparser / syntactic sugar for the frange behavior where the 
low/high values are the same) ... maybe...
{code}
q={!func eq=123456789}docid()
{code}
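
In other words -- still using the hypothetical docid() value source from the 
sketch above -- the un-sugared equivalent with the existing frange parser would 
be roughly:

{code}
q={!frange l=123456789 u=123456789}docid()
{code}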


> Make _docid_ (lucene id) a pseudo field
> ---
>
> Key: SOLR-9528
> URL: https://issues.apache.org/jira/browse/SOLR-9528
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (7.0)
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Lucene document id is a transitory id that cannot be relied on as it can 
> change on document updates, etc.
> However, there are circumstances where it could be useful to use it in a 
> search. The primary use is debugging, where some error messages provide 
> only the lucene document id as the reference. For example:
> {noformat}
> child query must only match non-parent docs, but parent docID=38200 matched 
> childScorer=class org.apache.lucene.search.DisjunctionSumScorer
> {noformat}
> We already expose the lucene id via the \[docid] transformer and \_docid_ 
> sorting.
> On the email list, [~yo...@apache.org] proposed that _docid_ should be a 
> legitimate pseudo-field, which would make it returnable, usable in function 
> queries, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7451) Hi David,

2016-09-20 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-7451.

Resolution: Invalid

We try to reserve the JIRA lists for actual, identified code problems or 
improvements, not usage questions.

This is almost certainly an error in your code, most likely your analysis chain 
(i.e. the tokenizer + filters you use for this field) is breaking up the input 
on, perhaps, non-characters or the like. And/or the query parser you use to 
search is doing something similar. If you're using a simple tokenization 
process like whitespace this should be just fine. How you deal with periods at 
the ends of sentences is another issue ;)

Thus the suggestion that you use the user's list for the question until there is 
some evidence that this is an actual _code_ problem in Lucene. You'll benefit 
because there are many more people who view/use/answer questions on the user's 
list than via JIRAs, so you'll get answers faster.

When you do post a message on the user's list, please include enough data to 
help diagnose the problem other than "it doesn't work". You might review: 
http://wiki.apache.org/solr/UsingMailingLists

> Hi David, 
> --
>
> Key: LUCENE-7451
> URL: https://issues.apache.org/jira/browse/LUCENE-7451
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Guyenot Jeremy
>
> You just closed my previous ticket, because you thought it was a request for 
> help.
> Instead I'm just trying to understand why, when I'm searching for a sequence 
> of digits (possibly with a decimal separator, like . or ,) using Lucene, it 
> does not work.
> If you agree that digits are a kind of character, this is definitely a bug.
> Thanks for your detailed answer.
> Cheers,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9519) Check sub-facets of empty facet buckets for operations that may expand the domain (like filter exclusions)

2016-09-20 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-9519:
--
Attachment: SOLR-9519.patch

> Check sub-facets of empty facet buckets for operations that may expand the 
> domain (like filter exclusions)
> --
>
> Key: SOLR-9519
> URL: https://issues.apache.org/jira/browse/SOLR-9519
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
> Attachments: SOLR-9519.patch, SOLR-9519.patch
>
>
> http://markmail.org/message/bgplt2qdxc7gqga5
> {quote}
> Background: the JSON Facet API does not execute sub-facets for a facet
> bucket with a 0 count (and the root facet bucket is like any other
> facet bucket).
> This was to help prevent the combinatorial explosion of deeply nested
> sub-facets with useless information.
> This is obviously incorrect though, when a sub-facet does something
> that can expand the domain rather than just restrict it.  Facet
> exclusions are one of these cases.
> For zero facet buckets, we should check if any sub-facets have these
> properties and then recurse if so.
> {quote}
> Aside: not processing empty sets also helped with issues like what junk 
> values to fill in for statistics... min, max, average, std, etc.  JSON 
> doesn't even officially support NaN, so it's nice to be able to leave these 
> junk values out in many circumstances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9519) Check sub-facets of empty facet buckets for operations that may expand the domain (like filter exclusions)

2016-09-20 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507109#comment-15507109
 ] 

Michael Sun commented on SOLR-9519:
---

Here is a new patch with a fix and a test.

> Check sub-facets of empty facet buckets for operations that may expand the 
> domain (like filter exclusions)
> --
>
> Key: SOLR-9519
> URL: https://issues.apache.org/jira/browse/SOLR-9519
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
> Attachments: SOLR-9519.patch, SOLR-9519.patch
>
>
> http://markmail.org/message/bgplt2qdxc7gqga5
> {quote}
> Background: the JSON Facet API does not execute sub-facets for a facet
> bucket with a 0 count (and the root facet bucket is like any other
> facet bucket).
> This was to help prevent the combinatorial explosion of deeply nested
> sub-facets with useless information.
> This is obviously incorrect though, when a sub-facet does something
> that can expand the domain rather than just restrict it.  Facet
> exclusions are one of these cases.
> For zero facet buckets, we should check if any sub-facets have these
> properties and then recurse if so.
> {quote}
> Aside: not processing empty sets also helped with issues like what junk 
> values to fill in for statistics... min, max, average, std, etc.  JSON 
> doesn't even officially support NaN, so it's nice to be able to leave these 
> junk values out in many circumstances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8495) Schemaless mode cannot index large text fields

2016-09-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507076#comment-15507076
 ] 

Hoss Man commented on SOLR-8495:


the root issue here is the same as SOLR-9526: assuming (untokenized) StrField.

I think my suggestion in that issue makes the most sense -- but it doesn't 
address the surface error noted in this issue: an exception when "string" 
values are too big.

So perhaps for that we should just add TruncateFieldUpdateProcessorFactory 
to the data_drive configs with some reasonable upper limit?

{code}
 <processor class="solr.TruncateFieldUpdateProcessorFactory">
   <str name="typeClass">solr.StrField</str>
   <int name="maxLength">1</int>
 </processor>
{code}

bq. Autodetect space-separated text above a (customizable? maybe 256 bytes or 
so by default?) threshold as tokenized text rather than as StrField.

I'm leery of an approach like this, because it would be extremely trappy 
depending on the order docs were indexed: similar to the float/int problems we 
have now, but probably more so, and with more confusion, because it wouldn't 
necessarily be obvious at first glance when/why StrField was chosen vs 
TextField (or even that a different choice was made if the user didn't go look, 
since unlike the int/float issue the _output_ of the stored field would be the 
same "String").

(And you'd only ever get an error if the first doc was a "short" string and 
some other doc was above the 32K lucene limit ... if all the docs were under 
the 32K limit, but above the str/text threshold, you'd never get an error -- 
regardless of the order the docs were indexed in.  But one doc ordering would 
give you searchable text fields, and another doc order would give you StrFields 
that didn't match any search you tried.)



> Schemaless mode cannot index large text fields
> --
>
> Key: SOLR-8495
> URL: https://issues.apache.org/jira/browse/SOLR-8495
> Project: Solr
>  Issue Type: Bug
>  Components: Data-driven Schema, Schema and Analysis
>Affects Versions: 4.10.4, 5.3.1, 5.4
>Reporter: Shalin Shekhar Mangar
>  Labels: difficulty-easy, impact-medium
> Fix For: 5.5, 6.0
>
>
> The schemaless mode by default indexes all string fields into an indexed 
> StrField which is limited to 32KB text. Anything larger than that leads to an 
> exception during analysis.
> {code}
> Caused by: java.lang.IllegalArgumentException: Document contains at least one 
> immense term in field="text" (whose UTF8 encoding is longer than the max 
> length 32766)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-20 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507057#comment-15507057
 ] 

David Smiley commented on SOLR-4787:


The algorithm in JoinUtil is very different from Solr's default join.  IMO 
JoinUtil will be faster for most use-cases, but understand that it definitely 
depends.  I've twice added a QParser to use JoinUtil in my projects.
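
For reference, a rough same-core sketch of what wiring JoinUtil into a QParser 
can look like (the plugin class and local-param names below are made up for 
illustration, and a real cross-core version would additionally have to resolve 
the fromIndex searcher, which is omitted here):

{code}
import java.io.IOException;

import org.apache.lucene.search.Query;
import org.apache.lucene.search.join.JoinUtil;
import org.apache.lucene.search.join.ScoreMode;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import org.apache.solr.search.SyntaxError;

public class JoinUtilQParserPlugin extends QParserPlugin {

  @Override
  public QParser createParser(String qstr, SolrParams localParams,
                              SolrParams params, SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      @Override
      public Query parse() throws SyntaxError {
        String fromField = localParams.get("from"); // made-up local-param names
        String toField = localParams.get("to");
        Query fromQuery = subQuery(qstr, null).getQuery();
        try {
          // Same-core join: run the "from" query against this core's searcher
          // and rewrite it into a query on the "to" field.
          return JoinUtil.createJoinQuery(fromField, true /* multi-valued from */,
              toField, fromQuery, req.getSearcher(), ScoreMode.None);
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
      }
    };
  }
}
{code}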

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will set up a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) for 
> "user:customer1" applying the local fq parameter to filter the results. The 
> lucene filter query will be built using 6 threads. This query will generate a 
> list of values from the "from" field that will be used to filter the main 
> query. Only records from the main query, where the "to" field is present in 
> the "from" list will be included in the results.
> The solrconfig.xml in the main query core must contain the reference to the 
> hjoin.
> <queryParser name="hjoin" class="org.apache.solr.joins.HashSetJoinQParserPlugin"/>
> And the join contrib lib jars 

[jira] [Updated] (LUCENE-7451) Hi David,

2016-09-20 Thread Guyenot Jeremy (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guyenot Jeremy updated LUCENE-7451:
---
Description: 
You just closed my previous ticket, because you thought it was a request for 
help.
Instead I'm just trying to understand why, when I'm searching for a sequence of 
digits (possibly with a decimal separator, like . or ,) using Lucene, it does 
not work.
If you agree that digits are a kind of character, this is definitely a bug.
Thanks for your detailed answer.

Cheers,

> Hi David, 
> --
>
> Key: LUCENE-7451
> URL: https://issues.apache.org/jira/browse/LUCENE-7451
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Guyenot Jeremy
>
> You just closed my previous ticket, because you thought it was a request for 
> help.
> Instead I'm just trying to understand why, when I'm searching for a sequence 
> of digits (possibly with a decimal separator, like . or ,) using Lucene, it 
> does not work.
> If you agree that digits are a kind of character, this is definitely a bug.
> Thanks for your detailed answer.
> Cheers,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7451) Hi David,

2016-09-20 Thread Guyenot Jeremy (JIRA)
Guyenot Jeremy created LUCENE-7451:
--

 Summary: Hi David, 
 Key: LUCENE-7451
 URL: https://issues.apache.org/jira/browse/LUCENE-7451
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Guyenot Jeremy






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4787) Join Contrib

2016-09-20 Thread Vadim Ivanov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506968#comment-15506968
 ] 

Vadim Ivanov commented on SOLR-4787:


Hi, Mikhail
Recently I installed Solr 6.2 and ran several tests.
The score=none parameter is very tricky. As you've mentioned here 
http://blog-archive.griddynamics.com/2015/08/scoring-join-party-in-solr-53.html
"Enabling score and even specifying {!join … score=none}.. picks the different 
algorithm ... which expects "from" field to be string docValues".
So, as my id is NUMERIC (bjoin requires it), I receive the error: "unexpected 
docvalues type NUMERIC for field 'id' (expected one of [SORTED, SORTED_SET]). 
Re-index with correct docvalues type."
Do you think that changing the schema (id type to string docValues) and 
reindexing is worth doing? And might Lucene's JoinUtil with score=none perform 
considerably better than the original Solr algorithm?
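
For reference, the kind of query being discussed (a sketch that reuses the 
collection name from this issue's examples, with hypothetical string-docValues 
join fields) would look like:

fq={!join fromIndex=collection2 from=id_s to=id_s score=none}user:customer1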

> Join Contrib
> 
>
> Key: SOLR-4787
> URL: https://issues.apache.org/jira/browse/SOLR-4787
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 4.2.1
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.0
>
> Attachments: SOLR-4787-deadlock-fix.patch, 
> SOLR-4787-pjoin-long-keys.patch, SOLR-4787-with-testcase-fix.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4787.patch, SOLR-4787.patch, SOLR-4787.patch, 
> SOLR-4797-hjoin-multivaluekeys-nestedJoins.patch, 
> SOLR-4797-hjoin-multivaluekeys-trunk.patch
>
>
> This contrib provides a place where different join implementations can be 
> contributed to Solr. This contrib currently includes 3 join implementations. 
> The initial patch was generated from the Solr 4.3 tag. Because of changes in 
> the FieldCache API this patch will only build with Solr 4.2 or above.
> *HashSetJoinQParserPlugin aka hjoin*
> The hjoin provides a join implementation that filters results in one core 
> based on the results of a search in another core. This is similar in 
> functionality to the JoinQParserPlugin but the implementation differs in a 
> couple of important ways.
> The first way is that the hjoin is designed to work with int and long join 
> keys only. So, in order to use hjoin, int or long join keys must be included 
> in both the to and from core.
> The second difference is that the hjoin builds memory structures that are 
> used to quickly connect the join keys. So, the hjoin will need more memory 
> than the JoinQParserPlugin to perform the join.
> The main advantage of the hjoin is that it can scale to join millions of keys 
> between cores and provide sub-second response time. The hjoin should work 
> well with up to two million results from the fromIndex and tens of millions 
> of results from the main query.
> The hjoin supports the following features:
> 1) Both lucene query and PostFilter implementations. A *"cost"* > 99 will 
> turn on the PostFilter. The PostFilter will typically outperform the Lucene 
> query when the main query results have been narrowed down.
> 2) With the lucene query implementation there is an option to build the 
> filter with threads. This can greatly improve the performance of the query if 
> the main query index is very large. The "threads" parameter turns on 
> threading. For example *threads=6* will use 6 threads to build the filter. 
> This will set up a fixed threadpool with six threads to handle all hjoin 
> requests. Once the threadpool is created the hjoin will always use it to 
> build the filter. Threading does not come into play with the PostFilter.
> 3) The *size* local parameter can be used to set the initial size of the 
> hashset used to perform the join. If this is set above the number of results 
> from the fromIndex then you can avoid hashset resizing, which improves 
> performance.
> 4) Nested filter queries. The local parameter "fq" can be used to nest a 
> filter query within the join. The nested fq will filter the results of the 
> join query. This can point to another join to support nested joins.
> 5) Full caching support for the lucene query implementation. The filterCache 
> and queryResultCache should work properly even with deep nesting of joins. 
> Only the queryResultCache comes into play with the PostFilter implementation 
> because PostFilters are not cacheable in the filterCache.
> The syntax of the hjoin is similar to the JoinQParserPlugin except that the 
> plugin is referenced by the string "hjoin" rather than "join".
> fq=\{!hjoin fromIndex=collection2 from=id_i to=id_i threads=6 
> fq=$qq\}user:customer1&qq=group:5
> The example filter query above will search the fromIndex (collection2) 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 464 - Still Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/464/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:62344/b/c8n_1x3_lf_shard1_replica3]

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live 
SolrServers available to handle this 
request:[http://127.0.0.1:62344/b/c8n_1x3_lf_shard1_replica3]
at 
__randomizedtesting.SeedInfo.seed([62B63AF65123CE48:EAE2052CFFDFA3B0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:592)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:578)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:174)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+136) - Build # 1760 - Unstable!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1760/
Java: 32bit/jdk-9-ea+136 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SyncSliceTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([A6A3D0987F305CB6:2EF7EF42D1CC314E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SyncSliceTest.waitTillAllNodesActive(SyncSliceTest.java:235)
at org.apache.solr.cloud.SyncSliceTest.test(SyncSliceTest.java:164)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Closed] (LUCENE-7450) Research problems on numeric values into text (with. or,)

2016-09-20 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed LUCENE-7450.

Resolution: Invalid

Sorry but this seems like a use of the issue tracker to get help.  Instead try 
the java-user list.  If you have a specific feature or bug to report, then you 
can report it here.

> Research problems on numeric values into text (with. or,)
> -
>
> Key: LUCENE-7450
> URL: https://issues.apache.org/jira/browse/LUCENE-7450
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.2
>Reporter: Guyenot Jeremy
> Attachments: LUCENE.zip
>
>
> Hello,
> We are seeing search problems with numeric values inside text (with . or ,). 
> We are unable to search for 315.86 or 315,86.
> We have tried custom Analyzers without success either.
> I enclose the code used to index and the code used to do the search.
> I do not know if this is a bug on your side or an analysis problem on ours.
> The problem is the same between versions 4.3.1 and 6.2.0.
> Thank you in advance for your quick reply.
> Cordially,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-20 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506675#comment-15506675
 ] 

Noble Paul commented on SOLR-9512:
--

Does it make any sense to have the explicit flag {{directUpdatesToLeaders}}? 
IMHO it should be the only supported behavior. Why would we ever send a request 
to another node when the shard leader is down?

My proposal is as follows.

* The LBHttpSolrClient is aware of down servers. So, if the leader for the 
shard is down we already know it and we can fail fast
* If we have to fail because the shard leader is dead, we should try to 
explicitly read the {{state.json}} for that collection provided the cached 
state is older than a certain threshold (say 1 sec?). This means the client 
may fire a request to ZK every second to refresh the state. For such 
refreshes, check the version first to optimize ZK reads.
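
For illustration only, a rough standalone sketch of that second bullet 
(hypothetical names, not actual SolrJ or ZkStateReader APIs): refresh the 
cached collection state only when the cached copy is older than the threshold, 
and check the znode version before re-reading state.json.

{code}
// Hypothetical sketch of the proposal above; none of these types are real SolrJ classes.
class StaleAwareCollectionStateCache {
  private static final long MAX_STALENESS_MS = 1000; // ~1 sec threshold from the proposal

  private final ZkStateSource zk;        // thin wrapper around ZooKeeper reads
  private volatile CollectionState cached;
  private volatile long cachedAtMs;

  StaleAwareCollectionStateCache(ZkStateSource zk) {
    this.zk = zk;
  }

  synchronized CollectionState get(String collection) {
    long now = System.currentTimeMillis();
    if (cached == null || now - cachedAtMs > MAX_STALENESS_MS) {
      // Check the znode version first; only re-read state.json if it actually changed.
      int liveVersion = zk.stateVersion(collection);
      if (cached == null || liveVersion != cached.version()) {
        cached = zk.readStateJson(collection);
      }
      cachedAtMs = now;
    }
    return cached;
  }

  interface ZkStateSource {
    int stateVersion(String collection);
    CollectionState readStateJson(String collection);
  }

  interface CollectionState {
    int version();
  }
}
{code}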

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-09-20 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506635#comment-15506635
 ] 

Cao Manh Dat commented on SOLR-9258:


+1 The patch looks great.

> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: ModelCache.java, ModelCache.java, SOLR-9258.patch, 
> SOLR-9258.patch
>
>
> This ticket describes a framework for *optimizing*, *storing* and *deploying* 
> AI models within the Streaming Expression framework.
> *Optimizing*
> [~caomanhdat], has contributed SOLR-9252 which provides *Streaming 
> Expressions* for both feature selection and optimization of a logistic 
> regression text classifier. SOLR-9252 also provides a great working example 
> of *optimization* of a machine learning model using an in-place parallel 
> iterative algorithm.
> *Storing*
> Both features and optimized models can be stored in SolrCloud collections 
> using the update expression. Using [~caomanhdat]'s example in SOLR-9252, the 
> pseudo code for storing features would be:
> {code}
> update(featuresCollection, 
>featuresSelection(collection1, 
> id="myFeatures", 
> q="*:*",  
> field="tv_text", 
> outcome="out_i", 
> positiveLabel=1, 
> numTerms=100))
> {code}  
> The id field can be added to the featureSelection expression so that features 
> can be later retrieved from the collection it's stored in.
> *Deploying*
> With the introduction of the topic() expression, SolrCloud can be treated as 
> a distributed message queue. This messaging capability can  be used to deploy 
> models and process data through the models.
> To implement this approach a classify() function can be created that uses a 
> topic() function to return both the model and the data to be classified:
> The pseudo code looks like this:
> {code}
> classify(topic(models, q="modelID", fl="features, weights"),
>  topic(emails, q="*:*", fl="id, body", rows="500", version="3232323"))
> {code}
> In the example above the classify() function uses the topic() function to 
> retrieve the model. Each time there is an update to the model in the index, 
> the topic() expression will automatically read the new model.
> The topic() function is also used to pull in the data set that is being 
> classified. Notice the *version* parameter. This will be added to the topic 
> function to support pulling results from a specific version number (jira 
> ticket to follow).
> With this approach both the model and the data to process through the model 
> are treated as messages in a message queue.
> The daemon function can be used to send the classify function to Solr where 
> it will be run in the background. The pseudo code looks like this:
> {code}
> daemon(...,
>  update(classifiedEmails, 
>  classify(topic(models, q="modelID", fl="features, weights"),
>   topic(emails, q="*:*", fl="id, fl, body", 
> rows="500", version="3232323"
> {code}
> In this scenario the daemon will run the classify function repeatedly in the 
> background. With each run the topic() functions will re-pull the model if the 
> model has been updated. It will also pull a new set of emails to be 
> classified. The classified emails can be stored in another SolrCloud 
> collection using the update() function.
> Using this approach emails can be classified in batches. The daemon can 
> continue to run even after all the emails have been classified. New 
> emails added to the emails collections will then be automatically classified 
> when they enter the index.
> Classification can be done in parallel once SOLR-9240 is completed. This will 
> allow topic() results to be partitioned across worker nodes so they can be 
> processed in parallel. The pseudo code for this is:
> {code}
> parallel(workerCollection, worker="20", ...,
>  daemon(...,
>update(classifiedEmails, 
>classify(topic(models, q="modelID", fl="features, 
> weights", partitionKeys="none"),
> topic(emails, q="*:*", fl="id, fl, body", 
> rows="500", version="3232323", partitionKeys="id")
> {code}
> The code above sends a daemon to 20 workers, which will each classify a 
> partition of records pulled by the topic() function.
> *AI based alerting*
> If the *version* 

[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-20 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506575#comment-15506575
 ] 

Alan Woodward commented on SOLR-9512:
-

OK, have reverted.  For the moment, I'll set directUpdatesToLeaders=false on 
the two regularly failing test cases.

> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506551#comment-15506551
 ] 

ASF subversion and git services commented on SOLR-9512:
---

Commit a4293ce7c4e849b171430a34f36b830a84927a93 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4293ce ]

Revert "SOLR-9512: CloudSolrClient tries other replicas if a cached leader is 
down"

This reverts commit f96017d9e10c665e7ab6b9161f2af08efc491946.


> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9512) CloudSolrClient's cluster state cache can break direct updates to leaders

2016-09-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506552#comment-15506552
 ] 

ASF subversion and git services commented on SOLR-9512:
---

Commit bd3fc7f43ff54a174660b7ad51f031d2104f84b5 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd3fc7f ]

Revert "SOLR-9512: CloudSolrClient tries other replicas if a cached leader is 
down"

This reverts commit 3d130097b7768a8d753476ffe26b83db070c8e20.


> CloudSolrClient's cluster state cache can break direct updates to leaders
> -
>
> Key: SOLR-9512
> URL: https://issues.apache.org/jira/browse/SOLR-9512
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9512.patch
>
>
> This is the root cause of SOLR-9305 and (at least some of) SOLR-9390.  The 
> process goes something like this:
> Documents are added to the cluster via a CloudSolrClient, with 
> directUpdatesToLeadersOnly set to true.  CSC caches its view of the 
> DocCollection.  The leader then goes down, and is reassigned.  Next time 
> documents are added, CSC checks its cache again, and gets the old view of the 
> DocCollection.  It then tries to send the update directly to the old, now 
> down, leader, and we get ConnectionRefused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6677) reduce logging during Solr startup

2016-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506542#comment-15506542
 ] 

Jan Høydahl commented on SOLR-6677:
---

After we cleaned up the INFO level, we should probably look through the DEBUG 
level too, moving some of the less useful stuff down to TRACE; that will make 
the new {{solr start -v}} mode more useful.

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5563) Tone down some SolrCloud logging

2016-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506521#comment-15506521
 ] 

Jan Høydahl commented on SOLR-5563:
---

I've started to look at reducing log noise in general, and this looks like a good 
bunch of improvements for Cloud. [~romseygeek], are you able to update this for 
"master" and commit?

I agree that perhaps the "A cluster state change" message could remain INFO, if 
it does not occur too often?

> Tone down some SolrCloud logging
> 
>
> Key: SOLR-5563
> URL: https://issues.apache.org/jira/browse/SOLR-5563
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: SOLR-5563.patch
>
>
> We output a *lot* of logging data from various bits of SolrCloud, a lot of 
> which is basically implementation detail that isn't particularly helpful.  
> Here's a patch to move a bunch of stuff to #debug level, rather than #info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6677) reduce logging during Solr startup

2016-09-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-6677:
--
Attachment: SOLR-6677.patch

First patch, moving many unhelpful INFO logs to DEBUG level as well as reducing 
Jetty logging.

With this patch, the {{bin/solr start -f}} output shrinks from 31 to 13 lines (11 if 
applied on top of SOLR-8186 and SOLR-9534):
{noformat}
Starting Solr on port 8983 from /Users/janhoy/git/lucene-solr/solr/server

0INFO  (main) [   ] o.e.j.s.Server jetty-9.3.8.v20160314
311  INFO  (main) [   ] o.a.s.c.SolrResourceLoader Using system property 
solr.solr.home: /Users/janhoy/git/lucene-solr/solr/server/solr
311  INFO  (main) [   ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for 
directory: '/Users/janhoy/git/lucene-solr/solr/server/solr'
316  INFO  (main) [   ] o.a.s.c.SolrXmlConfig Loading container configuration 
from /Users/janhoy/git/lucene-solr/solr/server/solr/solr.xml
424  WARN  (main) [   ] o.a.s.c.CoreContainer Couldn't add files from 
/Users/janhoy/git/lucene-solr/solr/server/solr/lib to classpath: 
/Users/janhoy/git/lucene-solr/solr/server/solr/lib
744  INFO  (main) [   ] o.a.s.c.CorePropertiesLocator Found 0 core definitions 
underneath /Users/janhoy/git/lucene-solr/solr/server/solr
749  INFO  (main) [   ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() 
done
761  INFO  (main) [   ] o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@6536e911{/solr,file:///Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp/,AVAILABLE}{/Users/janhoy/git/lucene-solr/solr/server/solr-webapp/webapp}
774  INFO  (main) [   ] o.e.j.s.ServerConnector Started 
ServerConnector@5609159b{HTTP/1.1,[http/1.1]}{0.0.0.0:8983}
775  INFO  (main) [   ] o.e.j.s.Server Started @1310ms
{noformat}

Likewise, the {{bin/solr create -c foo}} logs shrink from 129 lines to 50!

Still work in progress. It would be good to get some confirmation that I'm on the 
right track here.
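
To illustrate the kind of change involved (a rough sketch only, not the attached patch): most of it amounts to demoting chatty startup messages from INFO to DEBUG, e.g. with SLF4J, so they disappear at the default log level. The class and message below are only loosely modeled on the startup output above.
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StartupLoggingSketch {
  private static final Logger log = LoggerFactory.getLogger(StartupLoggingSketch.class);

  void reportCores(int found, String dir) {
    // was: log.info("Found {} core definitions underneath {}", found, dir);
    log.debug("Found {} core definitions underneath {}", found, dir);
  }
}
{code}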

> reduce logging during Solr startup
> --
>
> Key: SOLR-6677
> URL: https://issues.apache.org/jira/browse/SOLR-6677
> Project: Solr
>  Issue Type: Bug
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6677.patch
>
>
> most of what is printed is neither helpful nor useful. It's just noise



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8097) Implement a builder pattern for constructing a Solrj client

2016-09-20 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506475#comment-15506475
 ] 

Jason Gerlowski commented on SOLR-8097:
---

I created SOLR-9535 to address the recent concern brought up here, and attached 
a small patch containing the fix.

> Implement a builder pattern for constructing a Solrj client
> ---
>
> Key: SOLR-8097
> URL: https://issues.apache.org/jira/browse/SOLR-8097
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Hrishikesh Gadre
>Assignee: Anshum Gupta
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch, 
> SOLR-8097.patch, SOLR-8097.patch, SOLR-8097.patch
>
>
> Currently SolrJ clients (e.g. CloudSolrClient) support multiple constructors, 
> as follows:
> public CloudSolrClient(String zkHost) 
> public CloudSolrClient(String zkHost, HttpClient httpClient) 
> public CloudSolrClient(Collection zkHosts, String chroot)
> public CloudSolrClient(Collection zkHosts, String chroot, HttpClient 
> httpClient)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders)
> public CloudSolrClient(String zkHost, boolean updatesToLeaders, HttpClient 
> httpClient)
> This becomes problematic whenever an additional parameter is introduced (since 
> we need to introduce yet more constructors). Instead, it would be helpful to 
> provide a SolrClient builder which can supply default values and support 
> overriding specific parameters. 
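
For illustration, here is a minimal, self-contained sketch of the builder idea described above (hypothetical classes, not the actual SolrJ API): one builder supplies defaults and lets callers override only the parameters they need, instead of adding a constructor per combination.
{code}
class CloudClient {
    final String zkHost;
    final String chroot;
    final boolean updatesToLeaders;

    private CloudClient(String zkHost, String chroot, boolean updatesToLeaders) {
        this.zkHost = zkHost;
        this.chroot = chroot;
        this.updatesToLeaders = updatesToLeaders;
    }

    static class Builder {
        private String zkHost;
        private String chroot = null;             // default: no chroot
        private boolean updatesToLeaders = true;  // default

        Builder withZkHost(String zkHost)       { this.zkHost = zkHost; return this; }
        Builder withChroot(String chroot)       { this.chroot = chroot; return this; }
        Builder sendUpdatesToLeaders(boolean b) { this.updatesToLeaders = b; return this; }

        CloudClient build() {
            return new CloudClient(zkHost, chroot, updatesToLeaders);
        }
    }
}

// usage: CloudClient client = new CloudClient.Builder().withZkHost("zk1:2181").build();
{code}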



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9535) SolrClients should have protected constructors

2016-09-20 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506469#comment-15506469
 ] 

Jason Gerlowski edited comment on SOLR-9535 at 9/20/16 1:10 PM:


Attaching a patch making this change in CloudSolrClient.

After some poking around, I found that HttpSolrClient, LBHttpSolrClient, and 
ConcurrentUpdateSolrClient all already had a "catch-all" ctor marked as 
{{protected}}, so this really turned into a single-line change.

Tests all pass locally.


was (Author: gerlowskija):
Attaching a patch making this change in CloudSolrClient.

After some poking around, HttpSolrClient, LBHttpSolrClient, and 
ConcurrentUpdateSolrClient all had a "catch-all" ctor marked as {{protected}} 
so this really turned into a single-line change.

> SolrClients should have protected constructors
> --
>
> Key: SOLR-9535
> URL: https://issues.apache.org/jira/browse/SOLR-9535
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.x
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 6.x
>
> Attachments: SOLR-9535.patch
>
>
> Recent SolrJ changes (SOLR-8097) resulted in {{SolrClient}} implementations 
> ending up with non-public ctors.  This achieved the purpose at the time, and 
> steered consumers towards using the *Builder types.  However, the change was 
> overly restrictive, as this visibility prevents consumers from extending 
> {{SolrClient}} in any meaningful way.
> This issue involves changing the visibility of the SolrClient "kitchen sink" 
> ctors to {{protected}}, to better support extension.
> (See the recent discussion on SOLR-8097 for more on this topic.)
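
Purely as an illustration of the visibility point (hypothetical types, not SolrJ itself): a protected "kitchen sink" constructor keeps the Builder as the normal entry point while still allowing subclasses.
{code}
class BaseClient {
    protected BaseClient(String baseUrl, int connectTimeoutMillis) {
        // "kitchen sink" constructor: every knob in one place
    }
}

// Only compiles because the constructor above is protected rather than private.
class AuditingClient extends BaseClient {
    AuditingClient(String baseUrl) {
        super(baseUrl, 30_000);
    }
}
{code}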



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9535) SolrClients should have protected constructors

2016-09-20 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-9535:
--
Attachment: SOLR-9535.patch

Attaching a patch making this change in CloudSolrClient.

After some poking around, I found that HttpSolrClient, LBHttpSolrClient, and 
ConcurrentUpdateSolrClient all already had a "catch-all" ctor marked as 
{{protected}}, so this really turned into a single-line change.

> SolrClients should have protected constructors
> --
>
> Key: SOLR-9535
> URL: https://issues.apache.org/jira/browse/SOLR-9535
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.x
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: 6.x
>
> Attachments: SOLR-9535.patch
>
>
> Recent SolrJ changes (SOLR-8097) resulted in {{SolrClient}} implementations 
> ending up with non-public ctors.  This achieved the purpose at the time, and 
> steered consumers towards using the *Builder types.  However, the change was 
> overly restrictive, as this visibility prevents consumers from extending 
> {{SolrClient}} in any meaningful way.
> This issue involves changing the visibility of the SolrClient "kitchen sink" 
> ctors to {{protected}}, to better support extension.
> (See the recent discussion on SOLR-8097 for more on this topic.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9330:
---
Attachment: SOLR-9390.patch

I made a few common-sense/best-effort changes, but nothing besides try\{}catch() 
helps. Perhaps it could be synchronized properly with 
{{solrCore._searcherLock}}, but I suppose that would be too big a change.

I'll try to just cache reader.version, or the whole set of reader stats, since it 
shouldn't change during the searcher's lifetime (will check).
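
A rough sketch of that caching idea (an assumed shape, not the attached patch): capture the reader version while the searcher is being built, so a later statistics call never has to touch a reader that a reload may already have closed.
{code}
import org.apache.lucene.index.DirectoryReader;

class CachedSearcherStats {
    private final long indexVersion;

    CachedSearcherStats(DirectoryReader reader) {
        // Safe here: the searcher still holds the reader open while it is being built.
        this.indexVersion = reader.getVersion();
    }

    long indexVersion() {
        // No ensureOpen() on a possibly-closed reader, so no AlreadyClosedException
        // even if a core reload has swapped searchers in the meantime.
        return indexVersion;
    }
}
{code}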

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9390.patch, SOLR-9390.patch
>
>
> It so happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request arrives before the old searcher is completely removed 
> from everywhere, this exception can happen. 
> What do you think about introducing a new parameter for the reload request which 
> makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-09-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506414#comment-15506414
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 3baf6f8b3002e550400781194c4ef88a248f2ef9 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3baf6f8 ]

SOLR-8029: more cleanup of specs. renamed collections.json 
collections.collection.json


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have evolved organically, and they are sometimes inconsistent with 
> each other or out of sync with the widely followed conventions of the HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread Sandeep Khanzode
This is what I get ... 
{ "responseHeader": { "status": 400, "QTime": 1, "params": { "q": "-{!field 
f=schedule op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]", 
"indent": "true", "wt": "json", "_": "1474373612202" } }, "error": { "msg": 
"Invalid Date in Date Math String:'[2016-08-26T12:00:12Z'", "code": 400 }}
 SRK 

On Tuesday, September 20, 2016 5:34 PM, David Smiley 
 wrote:
 

 It should, I think... what happens? Can you ascertain the nature of the
results?
~ David

On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
 wrote:

> For Solr 6.1.0
> This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
>
> This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> 2016-08-26T15:00:12Z]
>
>
> Why does this not work?-{!field f=schedule
> op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
>  SRK

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


   

Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread David Smiley
OH! OK: the moment the query no longer starts with "{!", the query is
parsed by defType (for 'q'), which defaults to the lucene QParser. So then we
have a clause with a NOT operator. In this parsing mode, an embedded "{!"
terminates at the "}". This means you can't put the sub-query text after the
"}"; instead you need to put it in the special "v" local-param, e.g.:
-{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO
2016-08-26T15:00:12Z]'}
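
Put into a SolrJ call, that looks roughly like the sketch below (the base URL, the collection name "mycoll", and the 6.x builder usage are assumptions on my side):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class NegativeRangeQuery {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // The range lives inside the v local-param, so the {!...} block is self-contained.
      SolrQuery q = new SolrQuery(
          "-{!field f=schedule op=Contains v='[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]'}");
      System.out.println(solr.query("mycoll", q).getResults().getNumFound());
    }
  }
}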

On Tue, Sep 20, 2016 at 8:15 AM Sandeep Khanzode
 wrote:

> This is what I get ...
> { "responseHeader": { "status": 400, "QTime": 1, "params": { "q":
> "-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> 2016-08-26T15:00:12Z]", "indent": "true", "wt": "json", "_":
> "1474373612202" } }, "error": { "msg": "Invalid Date in Date Math
> String:'[2016-08-26T12:00:12Z'", "code": 400 }}
>  SRK
>
> On Tuesday, September 20, 2016 5:34 PM, David Smiley <
> david.w.smi...@gmail.com> wrote:
>
>
>  It should, I think... what happens? Can you ascertain the nature of the
> results?
> ~ David
>
> On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
>  wrote:
>
> > For Solr 6.1.0
> > This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
> >
> > This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> > 2016-08-26T15:00:12Z]
> >
> >
> > Why does this not work?-{!field f=schedule
> > op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
> >  SRK
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
>
>

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS-MAVEN] Lucene-Solr-Maven-6.2 #9: POMs out of sync

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-6.2/9/

No tests ran.

Build Log:
[...truncated 8363 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/build.xml:781: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/build.xml:314: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/lucene/common-build.xml:533:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/lucene/common-build.xml:528:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/lucene/build.xml:479: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.2/lucene/common-build.xml:2512:
 java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 8 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Negative Date Query for Local Params in Solr

2016-09-20 Thread David Smiley
It should, I think... what happens? Can you ascertain the nature of the
results?
~ David

On Tue, Sep 20, 2016 at 5:35 AM Sandeep Khanzode
 wrote:

> For Solr 6.1.0
> This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z
>
> This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO
> 2016-08-26T15:00:12Z]
>
>
> Why does this not work?-{!field f=schedule
> op=Contains}[2016-08-26T12:00:12Z TO 2016-08-26T15:00:12Z]
>  SRK

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1119 - Failure

2016-09-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1119/

11 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=55549, name=collection2, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=55549, name=collection2, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:59186/vlp
at __randomizedtesting.SeedInfo.seed([8EB1B6904EB41305]:0)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:998)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: https://127.0.0.1:59186/vlp
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:619)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:415)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:367)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1289)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1059)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1001)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:988)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:513)
... 11 more


FAILED:  org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test

Error Message:
Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1759 - Failure!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1759/
Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 55545 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:763: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build.xml:138: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build.xml:479: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/common-build.xml:2512: 
java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 69 minutes 12 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 399 - Failure!

2016-09-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/399/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:62734/_a","node_name":"127.0.0.1:62734__a","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/32)={   
"replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:43180/_a;,   
"core":"c8n_1x3_lf_shard1_replica1",   
"node_name":"127.0.0.1:43180__a"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:62734/_a;,   
"node_name":"127.0.0.1:62734__a",   "state":"active",   
"leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:59300/_a;,   
"node_name":"127.0.0.1:59300__a",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:62734/_a","node_name":"127.0.0.1:62734__a","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/32)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:43180/_a;,
  "core":"c8n_1x3_lf_shard1_replica1",
  "node_name":"127.0.0.1:43180__a"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:62734/_a;,
  "node_name":"127.0.0.1:62734__a",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:59300/_a;,
  "node_name":"127.0.0.1:59300__a",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([205D7B2105176C26:A80944FBABEB01DE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

Early Access build 136 for JDK 9 & JDK 9 with Project Jigsaw are available on java.net

2016-09-20 Thread Rory O'Donnell


Hi Uwe & Dawid,

As usual you are ahead of the curve with b136. Other notes of interest are 
below, along with a request for feedback.


Early Access b136 for JDK 9 is available on java.net; a summary of changes is 
listed here.
Early Access b136 (#5506) for JDK 9 with Project Jigsaw is available on 
java.net; a summary of changes is listed here.


There have been a number of fixes to bugs reported by Open Source 
projects since the last availability email:


 * 8165723 - b136 - core-libs JarFile::isMultiRelease() method returns
   false when it should return true
 * 8165116 - b136 - xml redirect function is not allowed even with
   enableExtensionFunctions

NOTE: Build 135 included a fix for JDK-8161016 which *has introduced a 
behavioral change to HttpURLConnection*; more info below:

The behavior of HttpURLConnection when using a ProxySelector has been 
modified in this JDK release. Previously, the HttpURLConnection.connect() 
call would fall back to a DIRECT connection attempt if the configured 
proxy (or proxies) failed to make a connection. This release introduces a 
change whereby no DIRECT connection will be attempted in such a 
scenario. Instead, the HttpURLConnection.connect() method will fail and 
throw the IOException raised by the last proxy tested. This behavior now 
matches the HTTP connections made by popular web browsers, but the change 
will bring compatibility issues for applications that expect a DIRECT 
connection when a proxy server is down or when the wrong proxies are 
provided.
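
For applications that relied on the old fallback, the sketch below shows roughly what now has to be written out by hand (the proxy host and target URL are hypothetical, and this is only one way to react to the new behavior):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.ProxySelector;
import java.net.SocketAddress;
import java.net.URI;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class ProxyFallback {
    public static void main(String[] args) throws IOException {
        // Hypothetical proxy that may be unreachable.
        ProxySelector.setDefault(new ProxySelector() {
            @Override public List<Proxy> select(URI uri) {
                return Collections.singletonList(
                        new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.example.com", 3128)));
            }
            @Override public void connectFailed(URI uri, SocketAddress sa, IOException ioe) { }
        });

        URL url = new URL("https://example.org/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.connect();  // JDK 9 b135+: fails here instead of silently retrying DIRECT
        } catch (IOException proxyDown) {
            // Code that relied on the old implicit fallback must now do it explicitly.
            conn = (HttpURLConnection) url.openConnection(Proxy.NO_PROXY);
            conn.connect();
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}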

*JDK 9 Outreach Survey*

In order to encourage and receive additional feedback from developers 
testing their applications with JDK 9,
the OpenJDK Quality Outreach effort has put together a very brief survey 
about your experiences with JDK 9 so far.


It is available at https://www.surveymonkey.de/r/JDK9EA

We would love to hear feedback from you!


Rgds, Rory

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[jira] [Commented] (SOLR-9508) Install script should check existence of tools, and add option to NOT start service

2016-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506182#comment-15506182
 ] 

Jan Høydahl commented on SOLR-9508:
---

Will commit this shortly if there is lazy consensus.

> Install script should check existence of tools, and add option to NOT start 
> service
> ---
>
> Key: SOLR-9508
> URL: https://issues.apache.org/jira/browse/SOLR-9508
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9508.patch
>
>
> The {{install_solr_service.sh}} script should exit cleanly if tools like 
> {{tar}}, {{unzip}}, {{service}} or {{java}} are not available.
> Also, add a new switch {{-n}} to skip starting the service after 
> installation, which will make it easier to script installations that need to 
> modify {{/etc/default/solr.in.sh}} before starting the service.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9538) Relocate {JSON,Smile}WriterTest and TestBinaryResponseWriter to org.apache.solr.response

2016-09-20 Thread Jonny Marks (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonny Marks updated SOLR-9538:
--
Attachment: SOLR-9538.patch

Attaching patch

> Relocate {JSON,Smile}WriterTest and TestBinaryResponseWriter to 
> org.apache.solr.response
> 
>
> Key: SOLR-9538
> URL: https://issues.apache.org/jira/browse/SOLR-9538
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jonny Marks
>Priority: Minor
> Attachments: SOLR-9538.patch
>
>
> JSONWriterTest, SmileWriterTest and TestBinaryResponseWriter are currently in 
> the org.apache.solr.request package but the classes they test 
> (JSONResponseWriter, SmileWriter, BinaryResponseWriter) are in the 
> org.apache.solr.response package. 
> This patch moves the tests to the response package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Negative Date Query for Local Params in Solr

2016-09-20 Thread Sandeep Khanzode
For Solr 6.1.0
This works .. -{!field f=schedule op=Intersects}2016-08-26T12:00:56Z

This works .. {!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO 
2016-08-26T15:00:12Z]


Why does this not work?-{!field f=schedule op=Contains}[2016-08-26T12:00:12Z TO 
2016-08-26T15:00:12Z]
 SRK

[jira] [Resolved] (SOLR-9475) Add install script support for CentOS

2016-09-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-9475.
---
Resolution: Fixed

Pushed to branch_6x and master.

[~xcoder], please test to see if it fixes your problem. You may fetch the new 
script from 
https://github.com/apache/lucene-solr/blob/branch_6x/solr/bin/install_solr_service.sh

> Add install script support for CentOS
> -
>
> Key: SOLR-9475
> URL: https://issues.apache.org/jira/browse/SOLR-9475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Centos 7
>Reporter: Nitin Surana
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9475.patch, install_solr_service.sh
>
>
> [root@ns521582 tmp]# sudo ./install_solr_service.sh solr-6.2.0.tgz
> id: solr: no such user
> Creating new user: solr
> adduser: group '--disabled-password' does not exist
> Extracting solr-6.2.0.tgz to /opt
> Installing symlink /opt/solr -> /opt/solr-6.2.0 ...
> Installing /etc/init.d/solr script ...
> /etc/default/solr.in.sh already exist. Skipping install ...
> /var/solr/data/solr.xml already exists. Skipping install ...
> /var/solr/log4j.properties already exists. Skipping install ...
> chown: invalid spec: ‘solr:’
> ./install_solr_service.sh: line 322: update-rc.d: command not found
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> Service solr installed.
> Reference - 
> http://stackoverflow.com/questions/39320647/unable-to-create-user-when-installing-solr-6-2-0-on-centos-7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request

2016-09-20 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9330:
---
Attachment: SOLR-9390.patch

Here is a straightforward test reproducing the race; it catches the exception at 
least on every third run on my laptop. It may still be rough, nevertheless. 
I wonder if I need to mark it \@Nightly given that it runs for 10 sec?
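
For reference, a stripped-down sketch of the shape of such a test (not the attached patch; the base URL, core name, and iteration counts are arbitrary): one thread keeps reloading the core while another polls /admin/mbeans with stats=true.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class ReloadStatsRace {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      Future<?> reloader = pool.submit(() -> {
        for (int i = 0; i < 50; i++) {
          CoreAdminRequest.reloadCore("collection1", client);  // keep swapping searchers
        }
        return null;
      });
      SolrQuery stats = new SolrQuery();
      stats.setRequestHandler("/admin/mbeans");
      stats.set("stats", "true");
      Future<?> poller = pool.submit(() -> {
        for (int i = 0; i < 500; i++) {
          client.query("collection1", stats);  // fails if it races a just-closed searcher
        }
        return null;
      });
      reloader.get();
      poller.get();
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}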

> Race condition between core reload and statistics request
> -
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 5.5
>Reporter: Andrey Kudryavtsev
> Attachments: SOLR-9390.patch
>
>
> It so happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - 
> null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
> closed
>   at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
>   at 
> org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
>   at 
> org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of 
> the async nature of the reload request:
> * The new searcher is "registered" in a separate thread
> * The old searcher is closed in that same separate thread, and only after the 
> new one is registered
> * When the old searcher is closing, it removes itself from the map of MBeans 
> * If a statistics request arrives before the old searcher is completely removed 
> from everywhere, this exception can happen. 
> What do you think about introducing a new parameter for the reload request which 
> makes it fully synchronized? Basically it would force it to call {code}  
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] 
> waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher != null



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9475) Add install script support for CentOS

2016-09-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506170#comment-15506170
 ] 

ASF subversion and git services commented on SOLR-9475:
---

Commit 74bf88f8fe50b59e666f9387ca65ec26f821089d in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74bf88f ]

SOLR-9475: Add install script support for CentOS and better distro detection 
under Docker

(cherry picked from commit a1bbc99)


> Add install script support for CentOS
> -
>
> Key: SOLR-9475
> URL: https://issues.apache.org/jira/browse/SOLR-9475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
> Environment: Centos 7
>Reporter: Nitin Surana
>Assignee: Jan Høydahl
> Fix For: 6.3, master (7.0)
>
> Attachments: SOLR-9475.patch, install_solr_service.sh
>
>
> [root@ns521582 tmp]# sudo ./install_solr_service.sh solr-6.2.0.tgz
> id: solr: no such user
> Creating new user: solr
> adduser: group '--disabled-password' does not exist
> Extracting solr-6.2.0.tgz to /opt
> Installing symlink /opt/solr -> /opt/solr-6.2.0 ...
> Installing /etc/init.d/solr script ...
> /etc/default/solr.in.sh already exist. Skipping install ...
> /var/solr/data/solr.xml already exists. Skipping install ...
> /var/solr/log4j.properties already exists. Skipping install ...
> chown: invalid spec: ‘solr:’
> ./install_solr_service.sh: line 322: update-rc.d: command not found
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> id: solr: no such user
> User solr not found! Please create the solr user before running this script.
> Service solr installed.
> Reference - 
> http://stackoverflow.com/questions/39320647/unable-to-create-user-when-installing-solr-6-2-0-on-centos-7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9538) Relocate {JSON,Smile}WriterTest and TestBinaryResponseWriter to org.apache.solr.response

2016-09-20 Thread Jonny Marks (JIRA)
Jonny Marks created SOLR-9538:
-

 Summary: Relocate {JSON,Smile}WriterTest and 
TestBinaryResponseWriter to org.apache.solr.response
 Key: SOLR-9538
 URL: https://issues.apache.org/jira/browse/SOLR-9538
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Jonny Marks
Priority: Minor


JSONWriterTest, SmileWriterTest and TestBinaryResponseWriter are currently in 
the org.apache.solr.request package but the classes they test 
(JSONResponseWriter, SmileWriter, BinaryResponseWriter) are in the 
org.apache.solr.response package. 

This patch moves the tests to the response package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-09-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506132#comment-15506132
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit e0a09f5370ae770f7aeca474970969f70e30b9c8 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0a09f5 ]

SOLR-8029: more cleanup of specs


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have evolved organically, and they are sometimes inconsistent with 
> each other or out of sync with the widely followed conventions of the HTTP 
> protocol. Trying to make incremental changes to modernize them is like 
> applying a band-aid. So we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


