Hey,

You probably did, but just double-checking: did you change the settings in the
yaml files before restarting the nodes?
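For reference, the settings I mean live in each production node's elasticsearch.yml. A minimal sketch for the Marvel 1.x agent (the host name is a placeholder for your monitoring cluster's address):

```yaml
# elasticsearch.yml on each production node (Marvel 1.x agent settings)
# "monitoring-host" is a placeholder -- use your own monitoring cluster address.
marvel.agent.exporter.es.hosts: ["monitoring-host:9200"]
# Optional: how often the agent samples and ships stats (default is 10s)
marvel.agent.interval: 10s
```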
There is an easier way to fix this than a full restart: first restart a single
node on production. That will cause the agent to check again for the template.
Verify that the template was added, then delete all .marvel-2014* indices on the
monitoring cluster and let them be recreated based on the template.
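The verify-and-delete steps would look something like this (the host is a placeholder, and the template name may differ in your setup, so double-check the index pattern before deleting anything):

```shell
# 1. After restarting one production node, verify the marvel template
#    now exists on the monitoring cluster:
curl -XGET 'http://monitoring-host:9200/_template/marvel?pretty'

# 2. Only once the template shows up, delete the stale marvel indices so
#    they get recreated with the correct mappings from the template:
curl -XDELETE 'http://monitoring-host:9200/.marvel-2014*'
```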
Boaz 
—
Sent from Mailbox

On Mon, Oct 27, 2014 at 11:25 PM, Ross Simpson <[email protected]> wrote:

> Hi Boaz,
> To install, I ran
> bin/plugin --install elasticsearch/marvel/latest
> on each node in both clusters, then restarted both clusters.
> Since then, I have tried several things, including deleting the indexes 
> from the monitoring cluster and reinstalling the plugin on the monitoring 
> cluster.  I'll try now to delete all the marvel indexes, uninstall, then 
> reinstall marvel into both clusters.  
> I'm a bit stumped otherwise, so I'm all ears for any other suggestions.
> Cheers,
> Ross
> On Tuesday, 28 October 2014 08:30:54 UTC+11, Boaz Leskes wrote:
>>
>> It looks like something is indeed wrong with your marvel index 
>> template which should be there before data is indexed. How did you install 
>> marvel? Did you perhaps delete the data folder of the monitoring cluster 
>> after production was already shipping data?
>>
>> Cheers,
>> Boaz
>>
>> On Monday, October 27, 2014 7:45:34 AM UTC+1, Ross Simpson wrote:
>>>
>>> To troubleshoot a little more, I rebuilt the monitoring cluster to use 
>>> ElasticSearch 1.1.1, which matches the ES version used in the production 
>>> cluster.  No luck.
>>>
>>> On the Overview dashboard, I can see some data (summary, doc count, 
>>> search and indexing rates are all populated [screenshot attached]), but 
>>> both the nodes and indices sections are empty other than the errors 
>>> mentioned in the previous post.  Cluster pulse doesn't show any events at 
>>> all; node stats and index stats do both show data.
>>>
>>> Any further suggestions would be greatly appreciated :)
>>>
>>> Cheers,
>>> Ross
>>>
>>>
>>>
>>> On Monday, 27 October 2014 11:15:42 UTC+11, Ross Simpson wrote:
>>>>
>>>> I've got a brand-new Marvel installation, and am having some frustrating 
>>>> issues with it: on the overview screen, I am constantly getting errors 
>>>> like:
>>>> *Oops!* FacetPhaseExecutionException[Facet [timestamp]: failed to find 
>>>> mapping for node.ip_port.raw]
>>>>
>>>> *Production cluster:*
>>>> * ElasticSearch 1.1.1
>>>> * Marvel 1.2.1
>>>> * Running in vSphere
>>>>
>>>> *Monitoring cluster:*
>>>> * ElasticSearch 1.3.4
>>>> * Marvel 1.2.1
>>>> * Running in AWS
>>>>
>>>> After installing the plugin and bouncing all nodes in both clusters, 
>>>> Marvel seems to be working -- an index has been created in the monitoring 
>>>> cluster (.marvel-2014.10.26), and I see thousands of documents in 
>>>> there.  There are documents with the following types: cluster_state, 
>>>> cluster_stats, index_stats, indices_stats, node_stats.  So, it does 
>>>> seem that data is being shipped from the prod cluster to the monitoring 
>>>> cluster.
>>>>
>>>> I've seen in the user group that other people have had similar issues. 
>>>>  Some of those mention problems with the marvel index template.  I don't 
>>>> seem to have any templates at all in my monitoring cluster:
>>>>
>>>> $ curl -XGET localhost:9200/_template/
>>>> {}
>>>>
>>>> I tried manually adding the default template (as described in 
>>>> http://www.elasticsearch.org/guide/en/marvel/current/#config-marvel-indices),
>>>>  
>>>> but that didn't seem to have any effect.
>>>>
>>>> So far, I've seen just two specific errors in Marvel:
>>>> * FacetPhaseExecutionException[Facet [timestamp]: failed to find mapping 
>>>> for node.ip_port.raw]
>>>> * FacetPhaseExecutionException[Facet [timestamp]: failed to find mapping 
>>>> for index.raw]
>>>>
>>>> I've also looked through the logs on both the production and monitoring 
>>>> clusters, and the only errors are in the monitoring cluster resulting from 
>>>> queries from the Marvel UI, like this:
>>>>
>>>> [2014-10-27 11:08:13,427][DEBUG][action.search.type       ] [ip-10-4-1-
>>>> 187] [.marvel-2014.10.27][1], node[SR_hriFmTCav-8ofbKU-8g], [R], s[
>>>> STARTED]: Failed to execute [org.elasticsearch.action.search.
>>>> SearchRequest@661dc47e]
>>>> org.elasticsearch.search.SearchParseException: [.marvel-2014.10.27][1]: 
>>>> query[ConstantScore(BooleanFilter(+*:* +cache(_type:index_stats) +cache(
>>>> @timestamp:[1414367880000 TO 1414368540000])))],from[-1],size[10]: Parse 
>>>> Failure [Failed to parse source [{"size":10,"query":{"filtered":{"query"
>>>> :{"match_all":{}},"filter":{"bool":{"must":[{"match_all":{}},{"term":{
>>>> "_type":"index_stats"}},{"range":{"@timestamp":{"from":"now-10m/m","to":
>>>> "now/m"}}}]}}}},"facets":{"timestamp":{"terms_stats":{"key_field":
>>>> "index.raw","value_field":"@timestamp","order":"term","size":2000}},
>>>> "primaries.docs.count":{"terms_stats":{"key_field":"index.raw",
>>>> "value_field":"primaries.docs.count","order":"term","size":2000}},
>>>> "primaries.indexing.index_total":{"terms_stats":{"key_field":"index.raw"
>>>> ,"value_field":"primaries.indexing.index_total","order":"term","size":
>>>> 2000}},"total.search.query_total":{"terms_stats":{"key_field":
>>>> "index.raw","value_field":"total.search.query_total","order":"term",
>>>> "size":2000}},"total.merges.total_size_in_bytes":{"terms_stats":{
>>>> "key_field":"index.raw","value_field":"total.merges.total_size_in_bytes"
>>>> ,"order":"term","size":2000}},"total.fielddata.memory_size_in_bytes":{
>>>> "terms_stats":{"key_field":"index.raw","value_field":
>>>> "total.fielddata.memory_size_in_bytes","order":"term","size":2000}}}}]]
>>>>         at org.elasticsearch.search.SearchService.parseSource(
>>>> SearchService.java:660)
>>>>         at org.elasticsearch.search.SearchService.createContext(
>>>> SearchService.java:516)
>>>>         at org.elasticsearch.search.SearchService.createAndPutContext(
>>>> SearchService.java:488)
>>>>         at org.elasticsearch.search.SearchService.executeQueryPhase(
>>>> SearchService.java:257)
>>>>         at org.elasticsearch.search.action.
>>>> SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:
>>>> 206)
>>>>         at org.elasticsearch.search.action.
>>>> SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:
>>>> 203)
>>>>         at org.elasticsearch.search.action.
>>>> SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:
>>>> 517)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>> ThreadPoolExecutor.java:1145)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>> ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:745)
>>>> Caused by: org.elasticsearch.search.facet.FacetPhaseExecutionException: 
>>>> Facet [timestamp]: failed to find mapping for index.raw
>>>>         at org.elasticsearch.search.facet.termsstats.
>>>> TermsStatsFacetParser.parse(TermsStatsFacetParser.java:126)
>>>>         at org.elasticsearch.search.facet.FacetParseElement.parse(
>>>> FacetParseElement.java:93)
>>>>         at org.elasticsearch.search.SearchService.parseSource(
>>>> SearchService.java:644)
>>>> ...
>>>
>>>
> -- 
> You received this message because you are subscribed to a topic in the Google 
> Groups "elasticsearch" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/elasticsearch/ormCr0X7QoI/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/7157c765-2576-4ae3-b83f-3a1753f4d2ec%40googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

