[jira] [Created] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-05-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12392:
--

 Summary: IndexSizeTriggerTest fails too frequently.
 Key: SOLR-12392
 URL: https://issues.apache.org/jira/browse/SOLR-12392
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487408#comment-16487408
 ] 

Mark Miller commented on SOLR-12378:


+1, I'll make that change and commit.

> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set a value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index, 
> because this line throws:
> {code:java}
> throw new SolrException(SERVER_ERROR,
> "Doc exists in index, but has null versionField: "
> + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to add an option that allows 
> existing docs to be missing this field, simply treating those docs as 
> older than anything coming in with the field set.
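The proposed behavior can be sketched as follows. This is a hypothetical illustration only: the flag name supportMissingVersionOnOldDocs and the method shape are assumptions for the sketch, not Solr's actual API.

```java
// Hypothetical sketch of the proposed option; the flag name
// supportMissingVersionOnOldDocs and this method are illustrative only,
// not Solr's actual API.
public class VersionCheckSketch {
    /** Decide whether an incoming doc should replace the indexed one. */
    static boolean newDocWins(Long indexedVersion, long newVersion,
                              boolean supportMissingVersionOnOldDocs) {
        if (indexedVersion == null) {
            if (supportMissingVersionOnOldDocs) {
                // Treat a doc with no versionField as older than any incoming doc.
                return true;
            }
            // Current behavior: fail hard on a missing versionField.
            throw new IllegalStateException(
                "Doc exists in index, but has null versionField");
        }
        return newVersion > indexedVersion;
    }

    public static void main(String[] args) {
        // With the option enabled, a versionless indexed doc always loses.
        System.out.println(newDocWins(null, 5L, true));   // true
        System.out.println(newDocWins(10L, 5L, true));    // false
    }
}
```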






[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Jean Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487404#comment-16487404
 ] 

Jean Silva commented on SOLR-12390:
---

Is there a way I could contribute? I assume this is the repo, correct? 
[https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide]

 

Thanks

> Website search doesn't work for simple searches
> ---
>
> Key: SOLR-12390
> URL: https://issues.apache.org/jira/browse/SOLR-12390
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.6, 7.1
>Reporter: Jean Silva
>Priority: Minor
>
> Simple searches aren't working well on the website.
> I've tested it on the 6_6 and 7_1 versions of the docs.
> Because the purpose of Solr is to enable better search quality, I see this 
> ticket as fairly important.
> Here are some example searches for which I got no results:
> *ngram*
> *analysers* (analy*z*ers works)
> *spellcheck*
> and probably many more.
>  
> While creating this ticket I paid more attention and noticed the search-box 
> placeholder "Page title lookup", but even so I think it is not easily 
> noticed AND not good for us developers trying to quickly find the part of 
> the documentation we want.
> If I could help with something please let me know.
> Thank you






[jira] [Commented] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487387#comment-16487387
 ] 

Uwe Schindler commented on LUCENE-8325:
---

Thanks! Great. :-)

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.4, master (8.0)
>
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> The smartcn analyzer can't handle SURROGATE chars. Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 訏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output:
>  
> {code:java}
> ? 
> ? 
> {code}
>  
>  I have created a *PATCH* to try to fix this; please help review (since 
> *smartcn* only supports *GBK* chars, the patch simply handles a surrogate 
> pair as a *single char*).
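For reference, the surrogate-pair behavior at play can be demonstrated with plain JDK calls (a standalone sketch, independent of Lucene):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // One supplementary code point, encoded as two UTF-16 chars.
        String s = "\uD862\uDE0F";
        System.out.println(s.length());                      // 2 chars
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
        // A tokenizer that walks char-by-char splits the pair and emits
        // two unpaired surrogates, which render as '?' on output.
        System.out.println(Character.isHighSurrogate(s.charAt(0))); // true
        System.out.println(Character.isLowSurrogate(s.charAt(1)));  // true
        // Walking by code point keeps the pair together:
        s.codePoints().forEach(cp ->
            System.out.println(Character.charCount(cp)));    // 2 (one pair)
    }
}
```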






[jira] [Comment Edited] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487342#comment-16487342
 ] 

Noble Paul edited comment on SOLR-12294 at 5/23/18 2:28 PM:


Sorry, you can't specify a chain like that. However, you can specify a 
request parameter {{processor=testUP}} and it will work. Remove the 
{{<updateRequestProcessorChain>}} definition altogether and just specify the 
{{<updateProcessor>}}.


was (Author: noble.paul):
sorry, you can't specify a chain like that , however you can specify a request 
parameter {{processor=testUP}} and it will work

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue with custom code inside the .system collection when 
> starting up a SolrCloud cluster.
> I thought, as stated in the documentation, that custom code is lazy loaded 
> when using the .system collection, because a collection that uses custom 
> code can be initialized before the .system collection is up and running.
> I did all the necessary configuration, and while debugging I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get exceptions when starting the SolrCloud cluster, with 
> the following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors, so it 
> seems this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get() method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the .system 
> collection is not up yet and the above exception is thrown...
> So maybe the core-initialization routine for UpdateProcessors is not 
> implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack Solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with the -c option
>  # Set up the .system collection:
>  2.1 Upload the custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary 
> @<path-to-jar>/update-processor-0.0.1-SNAPSHOT.jar 
> http://<solr-host:port>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add a replica to the .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Set up the test collection:
>  3.1 Upload the test conf to ZK --> ./zkcli.sh -zkhost <zk-host:port> -cmd 
> upconfig -confdir <path-to-conf> -confname test_conf
>  3.2 Create a test1 collection via the Admin UI, with the UP-chain commented 
> out inside solrconfig.xml
>  3.3 Add the blob to the test collection --> curl 
> http://<solr-host:port>/solr/test1/config -H 'Content-type:application/json' 
> -d '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test_conf again --> ./zkcli.sh 
> -zkhost <zk-host:port> -cmd upconfig -confdir <path-to-conf> -confname 
> test_conf
>  3.5 Reload the test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart Solr
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> the core init routine, but it doesn't, due to the above exception.
> Sometimes you are lucky and the test1 collection is initialized after the 
> .system collection, but ~90% of the time this isn't the case...
> Let me know if you need further details,
>  
> Johannes
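The lazy-holder pattern under discussion can be sketched in isolation. This is a generic illustration of the idea, not Solr's actual PluginBag$LazyPluginHolder code: the point is that whoever calls get() during core init triggers the load at that moment, which fails if the backing store (the .system collection) is not yet available.

```java
import java.util.function.Supplier;

/** Generic lazy holder: the expensive load runs only on the first get(). */
public class LazyHolder<T> {
    private final Supplier<T> loader;
    private volatile T instance;

    public LazyHolder(Supplier<T> loader) { this.loader = loader; }

    public T get() {
        // Double-checked locking so the loader runs at most once.
        T t = instance;
        if (t == null) {
            synchronized (this) {
                t = instance;
                if (t == null) {
                    t = loader.get();   // fails if the backing store
                    instance = t;       // isn't reachable at call time
                }
            }
        }
        return t;
    }
}
```

If core initialization calls get() eagerly (as described above for UpdateProcessors), the laziness is defeated: the load attempt simply moves to startup time.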






[jira] [Commented] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487342#comment-16487342
 ] 

Noble Paul commented on SOLR-12294:
---

Sorry, you can't specify a chain like that. However, you can specify a request 
parameter {{processor=testUP}} and it will work.

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue with custom code inside the .system collection when 
> starting up a SolrCloud cluster.
> I thought, as stated in the documentation, that custom code is lazy loaded 
> when using the .system collection, because a collection that uses custom 
> code can be initialized before the .system collection is up and running.
> I did all the necessary configuration, and while debugging I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get exceptions when starting the SolrCloud cluster, with 
> the following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors, so it 
> seems this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get() method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the .system 
> collection is not up yet and the above exception is thrown...
> So maybe the core-initialization routine for UpdateProcessors is not 
> implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack Solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with the -c option
>  # Set up the .system collection:
>  2.1 Upload the custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary 
> @<path-to-jar>/update-processor-0.0.1-SNAPSHOT.jar 
> http://<solr-host:port>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add a replica to the .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Set up the test collection:
>  3.1 Upload the test conf to ZK --> ./zkcli.sh -zkhost <zk-host:port> -cmd 
> upconfig -confdir <path-to-conf> -confname test_conf
>  3.2 Create a test1 collection via the Admin UI, with the UP-chain commented 
> out inside solrconfig.xml
>  3.3 Add the blob to the test collection --> curl 
> http://<solr-host:port>/solr/test1/config -H 'Content-type:application/json' 
> -d '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test_conf again --> ./zkcli.sh 
> -zkhost <zk-host:port> -cmd upconfig -confdir <path-to-conf> -confname 
> test_conf
>  3.5 Reload the test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart Solr
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> the core init routine, but it doesn't, due to the above exception.
> Sometimes you are lucky and the test1 collection is initialized after the 
> .system collection, but ~90% of the time this isn't the case...
> Let me know if you need further details,
>  
> Johannes






[jira] [Assigned] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-05-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-12294:
-

Assignee: Noble Paul

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue with custom code inside the .system collection when 
> starting up a SolrCloud cluster.
> I thought, as stated in the documentation, that custom code is lazy loaded 
> when using the .system collection, because a collection that uses custom 
> code can be initialized before the .system collection is up and running.
> I did all the necessary configuration, and while debugging I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get exceptions when starting the SolrCloud cluster, with 
> the following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors, so it 
> seems this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get() method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the .system 
> collection is not up yet and the above exception is thrown...
> So maybe the core-initialization routine for UpdateProcessors is not 
> implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack Solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with the -c option
>  # Set up the .system collection:
>  2.1 Upload the custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary 
> @<path-to-jar>/update-processor-0.0.1-SNAPSHOT.jar 
> http://<solr-host:port>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add a replica to the .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Set up the test collection:
>  3.1 Upload the test conf to ZK --> ./zkcli.sh -zkhost <zk-host:port> -cmd 
> upconfig -confdir <path-to-conf> -confname test_conf
>  3.2 Create a test1 collection via the Admin UI, with the UP-chain commented 
> out inside solrconfig.xml
>  3.3 Add the blob to the test collection --> curl 
> http://<solr-host:port>/solr/test1/config -H 'Content-type:application/json' 
> -d '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test_conf again --> ./zkcli.sh 
> -zkhost <zk-host:port> -cmd upconfig -confdir <path-to-conf> -confname 
> test_conf
>  3.5 Reload the test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart Solr
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> the core init routine, but it doesn't, due to the above exception.
> Sometimes you are lucky and the test1 collection is initialized after the 
> .system collection, but ~90% of the time this isn't the case...
> Let me know if you need further details,
>  
> Johannes






[jira] [Commented] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487331#comment-16487331
 ] 

ASF subversion and git services commented on LUCENE-8325:
-

Commit bc3926509002056a46efc579e175fe2c14ec1804 in lucene-solr's branch 
refs/heads/branch_7x from [~jimczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc39265 ]

LUCENE-8325: Fixed the smartcn tokenizer to not split UTF-16 surrogate pairs.


> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.4, master (8.0)
>
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> The smartcn analyzer can't handle SURROGATE chars. Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 訏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output:
>  
> {code:java}
> ? 
> ? 
> {code}
>  
>  I have created a *PATCH* to try to fix this; please help review (since 
> *smartcn* only supports *GBK* chars, the patch simply handles a surrogate 
> pair as a *single char*).






[jira] [Resolved] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-23 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-8325.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

I merged in master and backported to 7x.
Thanks [~chengpohi] and [~rcmuir] for reviewing.

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.4, master (8.0)
>
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> The smartcn analyzer can't handle SURROGATE chars. Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 訏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output:
>  
> {code:java}
> ? 
> ? 
> {code}
>  
>  I have created a *PATCH* to try to fix this; please help review (since 
> *smartcn* only supports *GBK* chars, the patch simply handles a surrogate 
> pair as a *single char*).






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12391:
-
Description: On master the "Core Selector" dropdown in the Admin UI is 
missing, both in cloud and non-cloud modes. This selector is present on 
branch_7x (as of today).

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>
> On master the "Core Selector" dropdown in the Admin UI is missing, both in 
> cloud and non-cloud modes. This selector is present on branch_7x (as of 
> today).






[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12391:
-
Environment: (was: On master the "Core Selector" dropdown in the Admin 
UI is missing, both in cloud and non-cloud modes. This selector is present on 
branch_7x (as of today).)

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>







[jira] [Updated] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12391:
-
Attachment: master.png

> Core selector dropdown missing in Admin UI
> --
>
> Key: SOLR-12391
> URL: https://issues.apache.org/jira/browse/SOLR-12391
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
> Environment: On master the "Core Selector" dropdown in the Admin UI 
> is missing, both in cloud and non-cloud modes. This selector is present on 
> branch_7x (as of today).
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: master.png
>
>







[jira] [Updated] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11779:
-
Attachment: (was: master.png)

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the 
> historical data constant (e.g. ~64kB per metric), while at the same time 
> providing out-of-the-box insight into basic system behavior over time. This 
> data could be persisted to the {{.system}} collection as blobs, and it could 
> also be presented in the Admin UI as graphs.
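The constant-size-history idea can be illustrated with a minimal fixed-capacity ring buffer (a standalone sketch of the storage principle only; RRD4j itself additionally consolidates samples into averaged archives at multiple resolutions):

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal fixed-size metric history: old samples are overwritten in place,
 *  so storage stays constant no matter how long the collector runs. */
public class MetricRing {
    private final double[] samples;
    private int next = 0;    // index of the slot to overwrite next
    private long count = 0;  // total samples ever added

    public MetricRing(int capacity) { samples = new double[capacity]; }

    public void add(double value) {
        samples[next] = value;
        next = (next + 1) % samples.length;
        count++;
    }

    /** Retained samples, ordered from oldest to newest. */
    public List<Double> history() {
        List<Double> out = new ArrayList<>();
        int n = (int) Math.min(count, samples.length);
        int start = (count <= samples.length) ? 0 : next;
        for (int i = 0; i < n; i++) {
            out.add(samples[(start + i) % samples.length]);
        }
        return out;
    }
}
```

With capacity 3, adding the samples 1, 2, 3, 4 overwrites the oldest slot, leaving a history of [2.0, 3.0, 4.0] while memory use stays fixed.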






[jira] [Created] (SOLR-12391) Core selector dropdown missing in Admin UI

2018-05-23 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-12391:


 Summary: Core selector dropdown missing in Admin UI
 Key: SOLR-12391
 URL: https://issues.apache.org/jira/browse/SOLR-12391
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: master (8.0)
 Environment: On master the "Core Selector" dropdown in the Admin UI is 
missing, both in cloud and non-cloud modes. This selector is present on 
branch_7x (as of today).
Reporter: Andrzej Bialecki 









[jira] [Updated] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11779:
-
Attachment: master.png

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, master.png, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the 
> historical data constant (e.g. ~64kB per metric), while at the same time 
> providing out-of-the-box insight into basic system behavior over time. This 
> data could be persisted to the {{.system}} collection as blobs, and it could 
> also be presented in the Admin UI as graphs.






[jira] [Closed] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey closed SOLR-12390.
---

> Website search doesn't work for simple searches
> ---
>
> Key: SOLR-12390
> URL: https://issues.apache.org/jira/browse/SOLR-12390
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 6.6, 7.1
>Reporter: Jean Silva
>Priority: Minor
>
> Simple searches aren't working well on the website.
> I've tested it on the 6_6 and 7_1 versions of the docs.
> Because the purpose of Solr is to enable better search quality, I see this 
> ticket as fairly important.
> Here are some example searches for which I got no results:
> *ngram*
> *analysers* (analy*z*ers works)
> *spellcheck*
> and probably many more.
>  
> While creating this ticket I paid more attention and noticed the search-box 
> placeholder "Page title lookup", but even so I think it is not easily 
> noticed AND not good for us developers trying to quickly find the part of 
> the documentation we want.
> If I could help with something please let me know.
> Thank you






[jira] [Resolved] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-12390.
-
Resolution: Duplicate







[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+14) - Build # 7334 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7334/
Java: 64bit/jdk-11-ea+14 -XX:+UseCompressedOops -XX:+UseG1GC

7 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=18335000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=18335000
at 
__randomizedtesting.SeedInfo.seed([24D9FA1A5A464FC8:1CB5893FCE96ED8E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:832)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=3284

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=3284
at 

[jira] [Commented] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487322#comment-16487322
 ] 

ASF subversion and git services commented on LUCENE-8325:
-

Commit 55858d7ba72f857ded79035430855e511a8e319d in lucene-solr's branch 
refs/heads/master from [~jimczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=55858d7 ]

LUCENE-8325: Fixed the smartcn tokenizer to not split UTF-16 surrogate pairs.


> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> The smartcn analyzer can't handle a SURROGATE char. Example:
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer();
> String sentence = "\uD862\uDE0F"; // 訏 a surrogate pair
> TokenStream tokenStream = ca.tokenStream("", sentence);
> CharTermAttribute charTermAttribute =
>     tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while (tokenStream.incrementToken()) {
>   String term = charTermAttribute.toString();
>   System.out.println(term);
> }
> tokenStream.end();
> tokenStream.close();
> {code}
> The above code snippet outputs:
> {code:java}
> ?
> ?
> {code}
> I have created a *PATCH* to try to fix this; please help review. (Since 
> *smartcn* only supports *GBK* chars, the patch just handles the pair as a 
> *single char*.)
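For reference, the underlying issue is that a supplementary character occupies two Java chars. A minimal stand-alone sketch (not the smartcn patch itself) of iterating by code point so a surrogate pair is never split:

```java
// Demonstrates why "\uD862\uDE0F" must be treated as one code point,
// not two chars, using only the standard java.lang.Character API.
public class SurrogateDemo {
    public static void main(String[] args) {
        String sentence = "\uD862\uDE0F";  // one supplementary character
        System.out.println("chars=" + sentence.length());
        System.out.println("codePoints="
            + sentence.codePointCount(0, sentence.length()));
        // Iterate by code point so the pair stays together:
        for (int i = 0; i < sentence.length(); ) {
            int cp = sentence.codePointAt(i);
            System.out.println("U+" + Integer.toHexString(cp).toUpperCase());
            i += Character.charCount(cp);  // advances by 2 for a pair
        }
    }
}
```

A tokenizer that walks the input char by char sees two unmappable halves (hence the two `?` tokens above); walking by code point sees one character.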






[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487320#comment-16487320
 ] 

Cassandra Targett commented on SOLR-12390:
--

This is basically a duplicate of SOLR-10299.







[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487297#comment-16487297
 ] 

Shawn Heisey commented on SOLR-12390:
-

The search box on the online reference guide only searches page titles.  It is 
not a comprehensive keyword search of the entire guide.  We are aware of the 
irony of not having comprehensive search for the documentation of a search 
server.  At this time we don't have a solution, but we very much want to find 
one.

[~ctargett], do we already have a ticket for search functionality in the online 
guide?  If so, this ticket would need to be closed as a duplicate.

If you want to do comprehensive searches on the reference guide, download the 
PDF version.  The PDF version is the official documentation release.








[jira] [Updated] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12390:

Component/s: (was: website)
 documentation







[GitHub] lucene-solr pull request #377: Solr-12361: change _childDocuments to Map

2018-05-23 Thread moshebla
Github user moshebla closed the pull request at:

https://github.com/apache/lucene-solr/pull/377


---




[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Jean Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487275#comment-16487275
 ] 

Jean Silva commented on SOLR-12390:
---

[~elyograg] I was talking about this website: 
[https://lucene.apache.org/solr/guide/7_1/]. Am I creating a ticket for the 
wrong project?

As I selected the "website" component, I expected that to be clear.







[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487243#comment-16487243
 ] 

Shawn Heisey commented on SOLR-12390:
-

This issue reads to me like a support request, not a bug report.  The Solr 
project does not use Jira as a support portal.  The mailing list and IRC 
channel are the correct places for support.

http://lucene.apache.org/solr/community.html#mailing-lists-irc








[jira] [Commented] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487237#comment-16487237
 ] 

Shawn Heisey commented on SOLR-12390:
-

The first thing in the description is:  "Simple searches aren't working well in 
the website."

What is "the website?"  This might be completely clear to you, but I have no 
idea what you're referring to.

Whether a specific search is going to work or not depends on the data in your 
index and how Solr is configured, especially the schema.  Designing a schema 
that achieves the search results you want is one of the most time-consuming 
parts of setting up Solr.








[jira] [Created] (SOLR-12390) Website search doesn't work for simple searches

2018-05-23 Thread Jean Carlos Silva (JIRA)
Jean Carlos Silva created SOLR-12390:


 Summary: Website search doesn't work for simple searches
 Key: SOLR-12390
 URL: https://issues.apache.org/jira/browse/SOLR-12390
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: website
Affects Versions: 7.1, 6.6
Reporter: Jean Carlos Silva


Simple searches aren't working well on the website.

I've tested it on the 6_6 and 7_1 docs versions.

Because the purpose of Solr is to enable better search quality, I see this 
ticket as fairly important.

Here are some example terms for which I got no results:

*ngram*

*analysers* (analy*z*ers works)

*spellcheck*

and probably many more.

While creating this ticket I paid more attention and saw the placeholder 
"Page title lookup", but I think this is neither prominent nor a good way 
for us developers to easily find the documentation we want.

If I can help with something, please let me know.

Thank you






[jira] [Commented] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487208#comment-16487208
 ] 

Shawn Heisey commented on SOLR-12388:
-

Interesting.  It's my understanding that SolrCloud goes read-only when ZK 
quorum is lost, so it would have to be a particularly unusual network partition 
for the described situation to arise.  But as noted by the author of Jepsen, 
unusual network partitions DO happen in the wild.


> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.
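A toy sketch of the proposed behavior change. The names ({{strictZk}}, {{zkConnected}}) and return values here are illustrative only, not the actual patch:

```java
// Hypothetical illustration of SOLR-12388: with a strict mode enabled,
// a coordinating node that has lost its ZooKeeper connection refuses
// the request instead of returning possibly-stale results.
public class StrictZkMode {
    static String handleSearch(boolean zkConnected, boolean strictZk) {
        if (!zkConnected) {
            if (strictZk) {
                // proposed: fail fast rather than risk stale results
                return "503 zkConnected=false";
            }
            // current behavior: answer anyway, flag it in the header
            return "200 zkConnected=false";
        }
        return "200 zkConnected=true";
    }

    public static void main(String[] args) {
        System.out.println(handleSearch(false, false)); // today's default
        System.out.println(handleSearch(false, true));  // proposed strict mode
    }
}
```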






[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487203#comment-16487203
 ] 

Michael Braun commented on SOLR-12378:
--

Just a thought for simplification: instead of creating a new Comparable and 
overriding compareTo, could the lambda syntax be used, such as 
{{userVersions[i] = (Comparable) o -> -1;}}?
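For clarity, the two forms are equivalent: {{Comparable}} has a single abstract method, so it can be written as a lambda. A small stand-alone demo (hypothetical, not the actual URP code) of a Comparable that always reports "older":

```java
// A Comparable whose compareTo always returns -1 sorts before any other
// value, i.e. it acts as an "older than everything" sentinel version.
public class LambdaComparableDemo {
    public static void main(String[] args) {
        // Anonymous-class form:
        Comparable<Object> anon = new Comparable<Object>() {
            @Override public int compareTo(Object o) { return -1; }
        };
        // Equivalent lambda form, as suggested in the comment:
        Comparable<Object> lambda = o -> -1;

        System.out.println(anon.compareTo("x"));    // always -1
        System.out.println(lambda.compareTo("x"));  // always -1
    }
}
```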


> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set a value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index because 
> this line throws:
> {code:java}
> throw new SolrException(SERVER_ERROR,
>     "Doc exists in index, but has null versionField: "
>         + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to add an option that allows 
> existing docs to be missing this field and simply treats those docs as 
> older than anything coming in with that field set.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22085 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22085/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
[Op{action=DELETEREPLICA, hints={COLL_SHARD=[{   
"first":"deleteNode_collection",   "second":"shard1"}], REPLICA=[core_node3]}}, 
Op{action=DELETEREPLICA, hints={COLL_SHARD=[{   
"first":"deleteNode_collection",   "second":"shard1"}], 
REPLICA=[core_node10]}}, Op{action=DELETEREPLICA, hints={COLL_SHARD=[{   
"first":"deleteNode_collection",   "second":"shard1"}], REPLICA=[core_node6]}}, 
Op{action=DELETEREPLICA, hints={COLL_SHARD=[{   
"first":"deleteNode_collection",   "second":"shard1"}], REPLICA=[core_node8]}}, 
Op{action=DELETENODE, hints={SRC_NODE=[127.0.0.1:38839_solr]}}, 
Op{action=DELETENODE, hints={SRC_NODE=[127.0.0.1:36117_solr]}}, 
Op{action=DELETENODE, hints={SRC_NODE=[127.0.0.1:34607_solr]}}, 
Op{action=DELETENODE, hints={SRC_NODE=[127.0.0.1:33689_solr]}}, 
Op{action=DELETENODE, hints={SRC_NODE=[127.0.0.1:46153_solr]}}] expected:<8> 
but was:<9>

Stack Trace:
java.lang.AssertionError: [Op{action=DELETEREPLICA, hints={COLL_SHARD=[{
  "first":"deleteNode_collection",
  "second":"shard1"}], REPLICA=[core_node3]}}, Op{action=DELETEREPLICA, 
hints={COLL_SHARD=[{
  "first":"deleteNode_collection",
  "second":"shard1"}], REPLICA=[core_node10]}}, Op{action=DELETEREPLICA, 
hints={COLL_SHARD=[{
  "first":"deleteNode_collection",
  "second":"shard1"}], REPLICA=[core_node6]}}, Op{action=DELETEREPLICA, 
hints={COLL_SHARD=[{
  "first":"deleteNode_collection",
  "second":"shard1"}], REPLICA=[core_node8]}}, Op{action=DELETENODE, 
hints={SRC_NODE=[127.0.0.1:38839_solr]}}, Op{action=DELETENODE, 
hints={SRC_NODE=[127.0.0.1:36117_solr]}}, Op{action=DELETENODE, 
hints={SRC_NODE=[127.0.0.1:34607_solr]}}, Op{action=DELETENODE, 
hints={SRC_NODE=[127.0.0.1:33689_solr]}}, Op{action=DELETENODE, 
hints={SRC_NODE=[127.0.0.1:46153_solr]}}] expected:<8> but was:<9>
at 
__randomizedtesting.SeedInfo.seed([83AAA44CC0243D71:A1386ACEF7EEB20C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:638)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-23 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487186#comment-16487186
 ] 

Michael Braun commented on SOLR-12378:
--

Given that it's a config option, this makes sense! 







[jira] [Commented] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487182#comment-16487182
 ] 

Andrzej Bialecki  commented on SOLR-12387:
--

Well, then sooner or later you will end up with multiple top-level sections, 
each saying "thisDefaults", "thatDefaults", etc.

> Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, 
> pullReplicas
> -
>
> Key: SOLR-12387
> URL: https://issues.apache.org/jira/browse/SOLR-12387
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12387.patch
>
>
> These will be cluster properties; commands can omit them, and the values 
> will then be picked up from the cluster properties.
>  
> the cluster property names are
>  * {{default.numShards}}
>  * {{default.nrtReplicas}}
>  * {{default.tlogReplicas}}
>  * {{default.pullReplicas}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12389) Support nested properties in cluster props

2018-05-23 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12389:
-

 Summary: Support nested properties in cluster props
 Key: SOLR-12389
 URL: https://issues.apache.org/jira/browse/SOLR-12389
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: cluster props API does not support nested objects.
Reporter: Noble Paul
Assignee: Noble Paul






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-05-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487178#comment-16487178
 ] 

Noble Paul commented on SOLR-12387:
---

How about



{code}
{
  "collectionDefaults" : {
    "numShards": 2,
    "nrtReplicas" : 2
  }
}
{code}
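The fallback such a nested defaults section enables could look roughly like this (hypothetical helper for illustration; the real lookup path inside Solr's collection-creation code may differ):

```python
def effective_value(param, request_params, cluster_props):
    """Resolve a create-collection parameter: an explicit request value
    wins, otherwise fall back to the nested collectionDefaults cluster
    property (illustrative sketch)."""
    if param in request_params:
        return request_params[param]
    return cluster_props.get("collectionDefaults", {}).get(param)
```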



> Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, 
> pullReplicas
> -
>
> Key: SOLR-12387
> URL: https://issues.apache.org/jira/browse/SOLR-12387
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12387.patch
>
>
> These will be cluster properties; commands can omit them, and the values
> will be picked up from the cluster properties.
>  
> The cluster property names are:
>  * {{default.numShards}}
>  * {{default.nrtReplicas}}
>  * {{default.tlogReplicas}}
>  * {{default.pullReplicas}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 642 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/642/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([7A55986607BF26E5:43DB21262840EF1B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query

2018-05-23 Thread Sergiu Gordea (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487164#comment-16487164
 ] 

Sergiu Gordea commented on SOLR-7964:
-

Dear all,

is there a plan to include this patch in an official release? I need it in my 
project.

I need to apply the patch to the 6.6.1 release. Does it mean that I have to 
apply all 3 patch files to the code, or just the last one (17 Jan 2018)?

Thank you in advance for your support.

> suggest.highlight=true does not work when using context filter query
> 
>
> Key: SOLR-7964
> URL: https://issues.apache.org/jira/browse/SOLR-7964
> Project: Solr
>  Issue Type: Improvement
>  Components: Suggester
>Affects Versions: 5.4
>Reporter: Arcadius Ahouansou
>Priority: Minor
>  Labels: suggester
> Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param 
> {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param 
> {{suggest.highlight=true}} has no effect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8326) More Like This Params Refactor

2018-05-23 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487159#comment-16487159
 ] 

Lucene/Solr QA commented on LUCENE-8326:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} LUCENE-8326 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924536/LUCENE-8326.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/13/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> More Like This Params Refactor
> --
>
> Key: LUCENE-8326
> URL: https://issues.apache.org/jira/browse/LUCENE-8326
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8326.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve code readability, test coverage, 
> and maintenance.
> The scope of this Jira issue is to start the More Like This refactor with the 
> More Like This params.
> This Jira will not improve the current More Like This; it just keeps the same 
> functionality with refactored code.
> Other Jira issues will follow, improving overall code readability, test 
> coverage, and maintenance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487158#comment-16487158
 ] 

Robert Muir commented on LUCENE-8325:
-

+1, thank you for fixing this.

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue is from [https://github.com/elastic/elasticsearch/issues/30739]
> smartcn analyzer can't handle SURROGATE char, Example:
>  
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer(); 
> String sentence = "\uD862\uDE0F"; // 訏 a surrogate char 
> TokenStream tokenStream = ca.tokenStream("", sentence); 
> CharTermAttribute charTermAttribute = 
> tokenStream.addAttribute(CharTermAttribute.class); 
> tokenStream.reset(); 
> while (tokenStream.incrementToken()) { 
> String term = charTermAttribute.toString(); 
> System.out.println(term); 
> } 
> {code}
>  
> The above code snippet will output: 
>  
> {code:java}
> ? 
> ? 
> {code}
>  
> I have created a *PATCH* to try to fix this; please help review (since 
> *smartcn* only supports *GBK* chars, the patch simply handles a surrogate 
> pair as a *single char*).
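The surrogate pairing a UTF-16-based tokenizer has to perform can be modelled like this (a Python sketch of the general technique, not the patch itself):

```python
def utf16_to_code_points(units):
    """Group UTF-16 code units into code points, pairing a high
    surrogate (0xD800-0xDBFF) with a following low surrogate
    (0xDC00-0xDFFF), as a Java CharSequence scan must."""
    out, i = [], 0
    while i < len(units):
        u = units[i]
        if 0xD800 <= u <= 0xDBFF and i + 1 < len(units) \
                and 0xDC00 <= units[i + 1] <= 0xDFFF:
            lo = units[i + 1]
            out.append(0x10000 + ((u - 0xD800) << 10) + (lo - 0xDC00))
            i += 2
        else:
            out.append(u)  # BMP char or unpaired surrogate passed through
            i += 1
    return out
```

The "\uD862\uDE0F" example from the report decodes to the single code point U+28A0F; treating each unit as a standalone char is what produced the `?` output above.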



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11985) Allow percentage in replica attribute in policy

2018-05-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-11985:
-

Assignee: Noble Paul

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
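Under one plausible reading of the strict "<" bound, the replica budget such a percentage rule implies can be computed like this (assumed rounding semantics for illustration only; the policy engine's actual rounding may differ):

```python
def max_replicas(limit_pct, replication_factor):
    """Largest replica count n satisfying n < limit_pct% of the
    replication factor (assumed semantics of a "<34%" style rule)."""
    exact = limit_pct * replication_factor / 100
    n = int(exact)
    return n - 1 if n == exact else n
```

With a replication factor of 3, "<34%" then allows at most 1 replica and "<67%" at most 2, matching the thirds in the example; the bounds keep holding if the replication factor later changes.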



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 1959 - Unstable!

2018-05-23 Thread Dawid Weiss
Caused by: java.lang.AssertionError
at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1901)
at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2066)
at java.base/java.util.HashMap.putVal(HashMap.java:638)
at java.base/java.util.HashMap.putIfAbsent(HashMap.java:1062)
at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:300)

Ooops. I looked at LRUQueryCache, but it seems that everything is
accessed under a (common) lock. Could be a JVM or impl. bug.
Interesting.

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 10 - Still unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/10/

23 tests failed.
FAILED:  
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings

Error Message:
some thread(s) failed

Stack Trace:
java.lang.RuntimeException: some thread(s) failed
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:583)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:882)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.lucene.index.TestIndexWriter.testFullyDeletedSegmentsReleaseFiles

Error Message:
[_0.cfe, _0.si, _0.cfs] vs [_0.cfe, _0.si, _0.cfs, write.lock] expected:<3> but 
was:<4>

Stack Trace:
java.lang.AssertionError: [_0.cfe, _0.si, _0.cfs] vs [_0.cfe, _0.si, _0.cfs, 
write.lock] expected:<3> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([DAE679328C16B00B:801DDBEB301D5F4E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.TestIndexWriter.assertFiles(TestIndexWriter.java:3350)
at 

[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-23 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487143#comment-16487143
 ] 

Nhat Nguyen commented on LUCENE-8328:
-

Thanks [~simonw]

> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch, LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.
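The race (a close slipping in between the reader lookup and incRef) and the fix (checking the closed state and incrementing under one lock) can be modelled minimally (a Python sketch, not the Lucene code):

```python
import threading

class RefCountedReader:
    """Minimal model of a ref-counted reader: try_inc_ref checks the
    closed state and increments atomically under a single lock, which
    is the guarantee running getLatestReader under the lock restores."""
    def __init__(self):
        self._refs = 1
        self._lock = threading.Lock()

    def dec_ref(self):
        with self._lock:
            self._refs -= 1
            return self._refs  # 0 means the reader is now closed

    def try_inc_ref(self):
        with self._lock:
            if self._refs == 0:
                # analogue of the ensureOpen() failure in the stack trace
                raise RuntimeError("already closed")
            self._refs += 1
```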



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 65 - Still Unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/65/

4 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost

Error Message:
last state: DocCollection(testNodeLost//clusterstate.json/29)={   
"replicationFactor":"10",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"5",   
"autoAddReplicas":"false",   "nrtReplicas":"10",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard20":{   "replicas":{ 
"core_node191":{   "core":"testNodeLost_shard20_replica_n191",  
 "leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10043_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":0}, 
"core_node200":{   "core":"testNodeLost_shard20_replica_n200",  
 "SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10033_solr",  
 "state":"down",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node199":{   
"core":"testNodeLost_shard20_replica_n199",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10034_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node198":{   
"core":"testNodeLost_shard20_replica_n198",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10078_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node197":{   
"core":"testNodeLost_shard20_replica_n197",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10094_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node196":{   
"core":"testNodeLost_shard20_replica_n196",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10039_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node195":{   
"core":"testNodeLost_shard20_replica_n195",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10036_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node194":{   
"core":"testNodeLost_shard20_replica_n194",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10015_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node193":{   
"core":"testNodeLost_shard20_replica_n193",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10067_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node192":{   
"core":"testNodeLost_shard20_replica_n192",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10019_solr",   
"state":"down",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}},   "range":"7333-7fff",   
"state":"active"}, "shard10":{   "replicas":{ "core_node92":{   
"core":"testNodeLost_shard10_replica_n92",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10019_solr",   
"state":"down",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node91":{   
"core":"testNodeLost_shard10_replica_n91",   "leader":"true",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10043_solr",   
"state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":0}, "core_node94":{   
"core":"testNodeLost_shard10_replica_n94",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10015_solr",

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1886 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1886/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.test

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([7286297B01C86ADB:FAD216A1AF340723]:0)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.getNodeName(JettySolrRunner.java:347)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:423)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestRandomRequestDistribution

Error Message:
10 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestRandomRequestDistribution: 1) Thread[id=15159, 
name=qtp1705720906-15159, state=TIMED_WAITING, 
group=TGRP-TestRandomRequestDistribution] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 

[jira] [Commented] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487071#comment-16487071
 ] 

Steve Rowe commented on SOLR-12388:
---

Attached patch implements the idea.

Rather than introducing a new request param, I've expanded the possible values 
{{shards.tolerant}} can take on to include non-boolean value 
{{requireZkConnected}}, which enables the mode described above.  (Thanks to 
[~hossman] for his offline suggestion to use {{shards.tolerant}} for this 
purpose.) 

In addition to causing requests to fail when the coordinating node can't 
communicate with ZooKeeper, setting {{shards.tolerant}} to 
{{requireZkConnected}} will cause search components to behave the same as when 
{{shards.tolerant}} is set to {{false}} (the default): the request will fail 
rather than causing partial results to be returned.

I've included ref guide docs and a CHANGES entry. Precommit and all Solr tests 
pass. I think this is ready to go.

Feedback is welcome.
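The expanded {{shards.tolerant}} semantics described above can be sketched as follows. This is a minimal illustrative model in Python, not Solr's actual Java implementation; the function name and parameters are hypothetical:

```python
# Hypothetical sketch of the expanded shards.tolerant semantics described
# above. Names are illustrative; Solr's real request handling is in Java.

def handle_request(shards_tolerant, zk_connected, all_shards_responded):
    """Return 'ok' if the request may succeed, else raise to simulate failure.

    shards_tolerant: "true", "false" (the default), or "requireZkConnected".
    """
    if shards_tolerant == "requireZkConnected":
        if not zk_connected:
            # Strict mode: fail rather than risk stale or incorrect results.
            raise RuntimeError("coordinating node cannot reach ZooKeeper")
        # Otherwise behave like shards.tolerant=false: no partial results.
        shards_tolerant = "false"
    if shards_tolerant == "false" and not all_shards_responded:
        raise RuntimeError("partial results are not tolerated")
    return "ok"
```

With {{shards.tolerant=true}}, a lost ZooKeeper connection would still only set the {{zkConnected=false}} response header rather than failing the request.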

> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12388:
--
Attachment: (was: SOLR-12388.patch)

> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.






[jira] [Updated] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12388:
--
Attachment: SOLR-12388.patch

> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch, SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.






[jira] [Commented] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487068#comment-16487068
 ] 

Andrzej Bialecki  commented on SOLR-12387:
--

Cluster properties are stored as a JSON map, so I suggest using a proper nested hierarchy 
instead of prefixes: prefixes can quickly become awkward as we keep 
adding new data there with more complex relationships between elements.

For example:
{code}
{
  "defaults" : {
"collection": {
  "numShards": 2,
  "numNrtReplicas": 2
}
  }
}
{code}
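A sketch of how such nested defaults might be resolved when a create-collection command omits a parameter (request param first, then the cluster default, then a built-in fallback). This is an illustrative Python model, not Solr's actual code:

```python
# Illustrative resolution order for a collection-creation parameter against
# nested cluster-property defaults like the JSON above. Not Solr's real code.

CLUSTER_PROPS = {
    "defaults": {
        "collection": {
            "numShards": 2,
            "numNrtReplicas": 2
        }
    }
}

def resolve(params, name, built_in_default):
    """Explicit request param wins, then the nested cluster default,
    then the hard-coded built-in default."""
    if name in params:
        return params[name]
    collection_defaults = CLUSTER_PROPS.get("defaults", {}).get("collection", {})
    return collection_defaults.get(name, built_in_default)
```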

> Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, 
> pullReplicas
> -
>
> Key: SOLR-12387
> URL: https://issues.apache.org/jira/browse/SOLR-12387
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12387.patch
>
>
> These will be cluster properties and the commands can omit these and the 
> command would pick it up from the cluster properties
>  
> the cluster property names are
>  * {{default.numShards}}
>  * {{default.nrtReplicas}}
>  * {{default.tlogReplicas}}
>  * {{default.pullReplicas}}






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487063#comment-16487063
 ] 

Andrzej Bialecki  commented on SOLR-11779:
--

bq. Would an existing collection suddenly start getting metrics collected for 
it?
Yes. The {{MetricsHistoryHandler}} simply pulls selected metrics from all 
{{solr.core.}} registries on each node.
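The registry selection described above amounts to a simple prefix filter. A minimal sketch (registry names below are made up for the example):

```python
# Illustrative sketch: select every metric registry whose name starts with
# "solr.core.", as MetricsHistoryHandler does for per-core metrics.

def core_registries(registry_names):
    return [n for n in registry_names if n.startswith("solr.core.")]
```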

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.






[jira] [Comment Edited] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487045#comment-16487045
 ] 

Alessandro Benedetti edited comment on LUCENE-8329 at 5/23/18 10:27 AM:


Hi Adrien, 
 I am talking about the one included in dev-tools in the Apache Lucene/Solr 
project:

dev-tools/size-estimator-lucene-solr.xls

I understand it is an old tool, but someone is still using it, so I just 
thought to contribute back these simple bug fixes.

For sure, that xls could be rewritten, but it's out of scope for this simple 
Jira :)

P.S. I attached the patch, but unfortunately it is unreadable: since the file 
is binary, the patch simply replaces it wholesale. This is annoying because I 
made only a minimal fix to the XLS, but being on a Mac I had to export it via 
Numbers, so I can't be sure I haven't introduced any OS compatibility issues.


was (Author: alessandro.benedetti):
Hi Adrien, 
I am talking about the one included in the dev-tools in the Apache Lucene/Solr 
project :

dev-tools/size-estimator-lucene-solr.xls

I understand it is an old tool, but someone is still using it, so I just 
thought to contribute back these simple bug fixes.

For sure, that xls could be rewritten, but It's out of scope for this simple 
Jira :)

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
> Attachments: LUCENE-8329.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The size estimator dev tool ( dev-tools/size-estimator-lucene-solr.xls 
> )currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
>  Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>  






[jira] [Updated] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated LUCENE-8329:
-
Attachment: LUCENE-8329.patch

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
> Attachments: LUCENE-8329.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The size estimator dev tool ( dev-tools/size-estimator-lucene-solr.xls 
> )currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
>  Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>  






[GitHub] lucene-solr pull request #381: [LUCENE-8329] disk size estimator MB bug fixe...

2018-05-23 Thread alessandrobenedetti
GitHub user alessandrobenedetti opened a pull request:

https://github.com/apache/lucene-solr/pull/381

[LUCENE-8329] disk size estimator MB bug fixes

The size estimator dev tool (dev-tools/size-estimator-lucene-solr.xls) 
currently:
 * Wrongly calculates disk size in MB (showing GB)
 * Doesn't specify clearly that the space needed by optimize is FREE space
 * Labels the field "Avg. Document Size (KB)" when it would more correctly be 
"Avg. Document Field Size (KB)"

The scope of this issue is just to fix these small mistakes.
Out of scope is any improvement to the tool (potentially separate Jira 
issues will follow).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SeaseLtd/lucene-solr LUCENE-8329

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/381.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #381


commit bc31f4647213d534c8a3ccabf11869ffce0e53af
Author: Alessandro Benedetti 
Date:   2018-05-23T10:20:11Z

[LUCENE-8329] disk size estimator MB bug fixes




---




[jira] [Updated] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated LUCENE-8329:
-
Description: 
The size estimator dev tool (dev-tools/size-estimator-lucene-solr.xls) 
currently:
 * Wrongly calculates disk size in MB (showing GB)
 * Doesn't specify clearly that the space needed by optimize is FREE space
 * Labels the field "Avg. Document Size (KB)" when it would more correctly be 
"Avg. Document Field Size (KB)"

The scope of this issue is just to fix these small mistakes.
 Out of scope is any improvement to the tool (potentially separate Jira issues 
will follow).

 

  was:
The size estimator dev tool currently :
 * Wrongly calculates disk size in MB ( showing GB)
 * Doesn't specify clearly that the space needed by the optimize is FREE space
 * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
Size (KB)

Scope of this issue is just to fix these small mistakes.
 Out of scope is any improvement to the tool ( potentially separate Jira issues 
will follow)

 


> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
>
> The size estimator dev tool ( dev-tools/size-estimator-lucene-solr.xls 
> )currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
>  Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>  






[jira] [Commented] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487045#comment-16487045
 ] 

Alessandro Benedetti commented on LUCENE-8329:
--

Hi Adrien, 
I am talking about the one included in dev-tools in the Apache Lucene/Solr 
project:

dev-tools/size-estimator-lucene-solr.xls

I understand it is an old tool, but someone is still using it, so I just 
thought to contribute back these simple bug fixes.

For sure, that xls could be rewritten, but it's out of scope for this simple 
Jira :)

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
>
> The size estimator dev tool currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
>  Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>  






[jira] [Updated] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated LUCENE-8329:
-
Environment: (was: The size estimator dev tool currently :
 * Wrongly calculates disk size in MB ( showing GB)
 * Doesn't specify clearly that the space needed by the optimize is FREE space
 * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
Size (KB)


Scope of this issue is just to fix these small mistakes.
Out of scope is any improvement to the tool ( potentially separate Jira issues 
will follow))

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
>







[jira] [Updated] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alessandro Benedetti updated LUCENE-8329:
-
Description: 
The size estimator dev tool currently:
 * Wrongly calculates disk size in MB (showing GB)
 * Doesn't specify clearly that the space needed by optimize is FREE space
 * Labels the field "Avg. Document Size (KB)" when it would more correctly be 
"Avg. Document Field Size (KB)"

The scope of this issue is just to fix these small mistakes.
 Out of scope is any improvement to the tool (potentially separate Jira issues 
will follow).

 

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Minor
>
> The size estimator dev tool currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
>  Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>  






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487039#comment-16487039
 ] 

Andrzej Bialecki  commented on SOLR-11779:
--

Oh, and before we all get carried away, I'd like to again stress the word 
"basic" in the issue title - we don't want to put a full-blown monitoring 
system into Solr.

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487032#comment-16487032
 ] 

Andrzej Bialecki  commented on SOLR-11779:
--

bq. Maybe it doesn't make sense in 7x to have enable=true by default?
The way defaults are set for now, it would only collect aggregated metrics 
history (one history DB per collection, plus one for aggregated nodes and one 
for aggregated JVMs). Considering the small memory impact of each DB (~30kB) 
and the small CPU impact (metrics are polled every 60 sec), I'd say it's benign. 
But I've been wrong before ... ;)

[~janhoy] definitely, the format of the graphs is suitable for just 
copy/pasting the data into an {{}} 
element.

Tracking the history of ephemeral resources such as individual replicas and 
nodes is somewhat complicated due to their relatively shorter life-cycle (I 
know, it may sound weird if you run 3 nodes with 3 collections, but there are 
users running very large clusters that experience high churn). There's a config 
option to collect selected per-node metrics so it's possible to do so (see the 
patch description above). However, there's no mechanism in place yet to 
automatically clean up these DBs when nodes and replicas go permanently away 
(though we could add it as a scheduled maintenance task, there's already a 
predefined trigger for this). There's an API for doing this manually.

The list of metrics that are currently collected is as follows:
* CORE and COLLECTION level metrics
** QUERY./select.requests
** UPDATE./update.requests
** INDEX.sizeInBytes
** numShards (active)
** numReplicas (active)
* NODE level metrics
** CONTAINER.fs.coreRoot.usableSpace
** numNodes
* JVM level metrics
** memory.heap.used
** os.processCpuLoad
** os.systemLoadAverage

Currently one DB is created for each of these groups. However, RRD4j doesn't allow 
adding new datasources once the DB is created, so this list is not configurable 
on the fly (yet - there are ways to work around it that I'm exploring).
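The constant-size property that RRD4j provides - the reason storage stays at roughly ~30kB per DB no matter how long metrics are collected - can be illustrated with a toy fixed-capacity buffer. This is a simplified sketch of the round-robin idea only, not RRD4j's actual consolidation behavior:

```python
# Toy sketch of the round-robin storage idea behind RRD4j: once the buffer
# is full, each new sample overwrites the oldest one, so storage stays
# constant regardless of how long collection runs. (RRD4j additionally
# consolidates samples into multiple resolutions, which is omitted here.)

class RoundRobinHistory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []  # list of (timestamp, value) pairs

    def add(self, timestamp, value):
        self.samples.append((timestamp, value))
        if len(self.samples) > self.capacity:
            self.samples.pop(0)  # drop the oldest sample

    def latest(self, n):
        return self.samples[-n:]
```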

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), but at the same providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.






[jira] [Commented] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487028#comment-16487028
 ] 

Adrien Grand commented on LUCENE-8329:
--

Which tool are you talking about? Is it 
https://github.com/mikemccand/luceneutil/blob/master/src/main/perf/DiskUsage.70.java?

> Size Estimator wrongly calculate Disk space in MB
> -
>
> Key: LUCENE-8329
> URL: https://issues.apache.org/jira/browse/LUCENE-8329
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: -tools
>Affects Versions: 7.3.1
> Environment: The size estimator dev tool currently :
>  * Wrongly calculates disk size in MB ( showing GB)
>  * Doesn't specify clearly that the space needed by the optimize is FREE space
>  * Avg. Document Size (KB) when they are more correctly Avg. Document Field 
> Size (KB)
> Scope of this issue is just to fix these small mistakes.
> Out of scope is any improvement to the tool ( potentially separate Jira 
> issues will follow)
>Reporter: Alessandro Benedetti
>Priority: Minor
>







[jira] [Created] (LUCENE-8329) Size Estimator wrongly calculate Disk space in MB

2018-05-23 Thread Alessandro Benedetti (JIRA)
Alessandro Benedetti created LUCENE-8329:


 Summary: Size Estimator wrongly calculate Disk space in MB
 Key: LUCENE-8329
 URL: https://issues.apache.org/jira/browse/LUCENE-8329
 Project: Lucene - Core
  Issue Type: Bug
  Components: -tools
Affects Versions: 7.3.1
 Environment: The size estimator dev tool currently:
 * Wrongly calculates disk size in MB (showing GB)
 * Doesn't specify clearly that the space needed by optimize is FREE space
 * Labels the field "Avg. Document Size (KB)" when it would more correctly be 
"Avg. Document Field Size (KB)"


The scope of this issue is just to fix these small mistakes.
Out of scope is any improvement to the tool (potentially separate Jira issues 
will follow)
Reporter: Alessandro Benedetti









[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 662 - Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/662/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_466C0C5DA0F27D6-001/init-core-data-001/tlog/tlog.001,
 tlog size: 5417

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_466C0C5DA0F27D6-001/init-core-data-001/tlog/tlog.001,
 tlog size: 5417
at 
__randomizedtesting.SeedInfo.seed([466C0C5DA0F27D6:1428253AA1A11E27]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:200)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Updated] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12388:
--
Attachment: SOLR-12388.patch

> Enable a strict ZooKeeper-connected search request mode, in which search 
> requests will fail when the coordinating node can't communicate with ZooKeeper
> ---
>
> Key: SOLR-12388
> URL: https://issues.apache.org/jira/browse/SOLR-12388
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Minor
> Attachments: SOLR-12388.patch
>
>
> Right now, a Solr node will return the results of a search request even if it 
> cannot communicate with ZooKeeper at the time it receives the request. This 
> may result in stale or incorrect results if there have been major changes to 
> the collection structure that the node has not been informed of via 
> ZooKeeper.  When this happens, as long as all known shards respond, the 
> response will succeed, and a {{zkConnected}} header set to {{false}} is 
> included in the search response.
> There should be an option to instead fail requests under these conditions, to 
> prevent stale or incorrect results.






[jira] [Created] (SOLR-12388) Enable a strict ZooKeeper-connected search request mode, in which search requests will fail when the coordinating node can't communicate with ZooKeeper

2018-05-23 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-12388:
-

 Summary: Enable a strict ZooKeeper-connected search request mode, 
in which search requests will fail when the coordinating node can't communicate 
with ZooKeeper
 Key: SOLR-12388
 URL: https://issues.apache.org/jira/browse/SOLR-12388
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe
Assignee: Steve Rowe


Right now, a Solr node will return the results of a search request even if it 
cannot communicate with ZooKeeper at the time it receives the request. This may 
result in stale or incorrect results if there have been major changes to the 
collection structure that the node has not been informed of via ZooKeeper.  
When this happens, as long as all known shards respond, the response will 
succeed, and a {{zkConnected}} header set to {{false}} is included in the 
search response.

There should be an option to instead fail requests under these conditions, to 
prevent stale or incorrect results.
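Until such a server-side option exists, a client can already detect the condition described above from the {{zkConnected}} response header. The sketch below is a hypothetical client-side helper (the name {{require_zk_connected}} and the proposed request parameter are not part of this issue's patch); it operates on a parsed Solr JSON response:

```python
def require_zk_connected(response: dict) -> dict:
    """Fail a search response when the coordinating node reported it was
    not connected to ZooKeeper via the ``zkConnected`` response header.

    Client-side sketch only; the issue proposes a server-side strict
    mode, whose exact request parameter is not specified here.
    """
    header = response.get("responseHeader", {})
    if header.get("zkConnected") is False:
        raise RuntimeError(
            "Coordinating node lost its ZooKeeper connection; "
            "results may be stale or incomplete")
    return response

# Example responses; the shape mirrors Solr's JSON response header.
ok = {"responseHeader": {"status": 0, "zkConnected": True},
      "response": {"numFound": 3}}
stale = {"responseHeader": {"status": 0, "zkConnected": False},
         "response": {"numFound": 3}}
```

With the strict mode proposed here, the same failure would happen on the server instead, so clients that ignore the header cannot silently consume stale results.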






[jira] [Created] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-05-23 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12387:
-

 Summary: Have cluster-wide defaults for numShards, nrtReplicas, 
tlogReplicas, pullReplicas
 Key: SOLR-12387
 URL: https://issues.apache.org/jira/browse/SOLR-12387
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul


These will be cluster properties; the collection commands can omit these parameters, and the command will pick them up from the cluster properties.

 

The cluster property names are:
 * {{default.numShards}}
 * {{default.nrtReplicas}}
 * {{default.tlogReplicas}}
 * {{default.pullReplicas}}
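A small sketch of what setting these might look like, assuming (this is not yet confirmed by the issue) that the new {{default.*}} names will be accepted by the existing Collections API CLUSTERPROP action. The helper name and base URL are illustrative only:

```python
from urllib.parse import urlencode

def clusterprop_url(base: str, name: str, val) -> str:
    """Build a Collections API CLUSTERPROP request URL.

    Assumes the proposed default.* properties become settable through
    the existing CLUSTERPROP action; the issue does not yet specify
    the mechanism.
    """
    return f"{base}/admin/collections?" + urlencode(
        {"action": "CLUSTERPROP", "name": name, "val": val})

# One request per proposed cluster-wide default:
urls = [clusterprop_url("http://localhost:8983/solr", f"default.{p}", 2)
        for p in ("numShards", "nrtReplicas", "tlogReplicas", "pullReplicas")]
```

Once set, a CREATE command could omit {{numShards}}, {{nrtReplicas}}, etc., and fall back to these values.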






[jira] [Commented] (LUCENE-8273) Add a ConditionalTokenFilter

2018-05-23 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486886#comment-16486886
 ] 

Alan Woodward commented on LUCENE-8273:
---

The Elastic CI has found some reproducing seeds in TestRandomChains that look 
like the following:
{code}
Suite: org.apache.lucene.analysis.core.TestRandomChains
01:47:39[junit4]   2> Exception from random analyzer: 
01:47:39[junit4]   2> charfilters=
01:47:39[junit4]   2>   
org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@36de1051)
01:47:39[junit4]   2>   
org.apache.lucene.analysis.charfilter.MappingCharFilter(org.apache.lucene.analysis.charfilter.NormalizeCharMap@31483c67,
 org.apache.lucene.analysis.fa.PersianCharFilter@51a9d324)
01:47:39[junit4]   2> tokenizer=
01:47:39[junit4]   2>   
org.apache.lucene.analysis.core.UnicodeWhitespaceTokenizer(org.apache.lucene.util.AttributeFactory$1@27232fb3,
 35)
01:47:39[junit4]   2> filters=ConditionalTokenFilter: 
01:47:39[junit4]   2>   
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(OneTimeWrapper@5f621e45
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,
 
org.apache.lucene.analysis.compound.hyphenation.HyphenationTree@40cdd67e)ConditionalTokenFilter:
 
01:47:39[junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(OneTimeWrapper@2de2e47c 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1)ConditionalTokenFilter:
 
01:47:39[junit4]   2>   
org.apache.lucene.analysis.MockRandomLookaheadTokenFilter(java.util.Random@4ced13ac,
 OneTimeWrapper@7d30a80d 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1)
01:47:39[junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestRandomChains -Dtests.method=testRandomChainsWithLargeStrings 
-Dtests.seed=72E157E8E16C0F79 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-US -Dtests.timezone=America/Anguilla -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
01:47:39[junit4] FAILURE 0.57s J0 | 
TestRandomChains.testRandomChainsWithLargeStrings <<<
01:47:39[junit4]> Throwable #1: java.lang.AssertionError
01:47:39[junit4]>   at 
__randomizedtesting.SeedInfo.seed([72E157E8E16C0F79:18BAE8F9B8222F8A]:0)
01:47:39[junit4]>   at 
org.apache.lucene.analysis.LookaheadTokenFilter.peekToken(LookaheadTokenFilter.java:140)
{code}

The root cause is that LookaheadTokenFilter doesn't play well with 
ConditionalTokenFilter when we have stacked tokens:
- CTF works by presenting the underlying TokenStream to its wrapped filter as a 
series of snippets, demarcated by tokens that don't pass the {{shouldFilter()}} 
test.  When a new snippet is started (i.e. when a token that passes 
{{shouldFilter()}} appears after one that doesn't) then {{reset()}} is called 
on the delegate, and when it stops (i.e. when a token that doesn't pass 
{{shouldFilter()}} appears) then {{end()}} is called.
- This means that if we have stacked tokens, with the first not passing 
{{shouldFilter()}} and the second passing it, the wrapped filter can see a 
TokenStream that has an initial position increment of 0
- LookaheadTokenFilter has an explicit assertion that checks we don't have an 
initial posInc of 0

I think this can be fixed by having a posInc adjustment when we're delegating, 
so that the delegated snippet starts with a posInc of 1, but this is then 
adjusted downwards by the CTF before it's emitted.
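The interaction above can be illustrated with a toy model (plain Python, not Lucene's actual TokenStream API; tokens are modelled as hypothetical {{(term, posInc)}} pairs):

```python
def split_snippets(tokens, should_filter):
    """Toy model of how ConditionalTokenFilter presents the stream to its
    delegate: maximal runs of tokens passing should_filter become
    'snippets', with reset()/end() called around each run."""
    snippets, current = [], []
    for tok in tokens:
        if should_filter(tok):
            current.append(tok)
        elif current:
            snippets.append(current)
            current = []
    if current:
        snippets.append(current)
    return snippets

def adjust_snippet(snippet):
    """The proposed fix, sketched: if a snippet starts with posInc 0
    (a stacked token), present it to the delegate with posInc 1, and
    record the offset so it is subtracted again when the token is
    emitted downstream."""
    term, inc = snippet[0]
    offset = 1 if inc == 0 else 0
    presented = [(term, inc + offset)] + list(snippet[1:])
    return presented, offset

# "wi-fi" and its stacked synonym "wifi" share a position (posInc 0);
# only the second token passes shouldFilter, so it starts a snippet
# whose first posInc is 0 -- exactly what trips Lookahead's assertion.
tokens = [("wi-fi", 1), ("wifi", 0)]
snippets = split_snippets(tokens, lambda t: "-" not in t[0])
presented, offset = adjust_snippet(snippets[0])
```

This only models the position-increment bookkeeping, not end-offset or state handling, but it shows why the delegate never needs to see an initial posInc of 0.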

> Add a ConditionalTokenFilter
> 
>
> Key: LUCENE-8273
> URL: https://issues.apache.org/jira/browse/LUCENE-8273
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8273-2.patch, LUCENE-8273-2.patch, 
> LUCENE-8273-part2-rebased.patch, LUCENE-8273-part2-rebased.patch, 
> LUCENE-8273-part2.patch, LUCENE-8273-part2.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch, 
> LUCENE-8273.patch, LUCENE-8273.patch, LUCENE-8273.patch
>
>
> Spinoff of LUCENE-8265.  It would be useful to be able to wrap a TokenFilter 
> in such a way that it could optionally be bypassed based on the current state 
> of the TokenStream.  This could be used to, for example, only apply 
> WordDelimiterFilter to terms that contain hyphens.






[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486866#comment-16486866
 ] 

Jan Høydahl commented on SOLR-11779:


This is cool. Looking forward to the RefGuide part to understand the API. I can 
imagine fetching metrics from here for display in SOLR-8207. As I understand it, 
a call to the history API would be much cheaper than scraping the /admin/metrics 
API of all nodes in real time. Are you planning to track GC history too, e.g. 
the number of minor/major collections or GC CPU percentage?

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing 
> out-of-the-box useful insights into basic system behavior over time. This 
> data could be persisted to the {{.system}} collection as blobs, and it could 
> also be presented in the Admin UI as graphs.
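The round-robin idea behind the constant storage size can be sketched with a bounded buffer per metric (a toy model only; RRD4j additionally consolidates older samples into coarser resolutions, which this sketch does not attempt, and the class and metric names are illustrative):

```python
from collections import deque

class MetricHistory:
    """Toy sketch of round-robin metric storage: each metric keeps at
    most max_samples (timestamp, value) pairs, so storage stays constant
    no matter how long the node runs."""
    def __init__(self, max_samples=240):
        self.max_samples = max_samples
        self.samples = {}

    def record(self, metric, timestamp, value):
        # deque(maxlen=...) silently drops the oldest sample when full.
        buf = self.samples.setdefault(
            metric, deque(maxlen=self.max_samples))
        buf.append((timestamp, value))

    def history(self, metric):
        return list(self.samples.get(metric, ()))

# Five samples into a 3-slot buffer: only the newest three survive.
h = MetricHistory(max_samples=3)
for t in range(5):
    h.record("QUERY./select.requests", t, t * 10)
```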






[JENKINS] Lucene-Solr-Tests-7.x - Build # 620 - Still Unstable

2018-05-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/620/

1 tests failed.
FAILED:  org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate

Error Message:
Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 
AKST 2008)

Stack Trace:
java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
at 
__randomizedtesting.SeedInfo.seed([1578933B6282B037:5F61EB0E192BC782]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 21731 lines...]
   [junit4] Suite: org.apache.solr.handler.extraction.TestExtractionDateUtil
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
-Dtests.seed=1578933B6282B037 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-PE -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.93s J1 | 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 608 - Still Unstable!

2018-05-23 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/608/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

12 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportFqParam

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_D81E05F65B68A099-001\tempDir-004

at 
__randomizedtesting.SeedInfo.seed([D81E05F65B68A099:2955BB05CB28D7AC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:360)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
