I changed the second Gmetad to "scalable off" and it works!
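For anyone hitting the same issue, here is a minimal sketch of what the second-level gmetad.conf might look like. The hostname `machine1` and the grid name are placeholders from this thread; the `rrd_rootdir` path is the stock default and may differ on your system:

```
# gmetad.conf on machine2 (second-level instance) -- illustrative sketch

# Pull the aggregated XML from the first gmetad's XML port:
data_source "machine1-grid" machine1:8651

# Disable scalability mode so the <GRID> wrapping produced by the
# upstream gmetad is ignored and metrics are written into local RRDs:
scalable off

# Where the RRDs are stored (stock default path shown):
rrd_rootdir "/var/lib/ganglia/rrds"
```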

Thank you!


Sergey

> On Mar 25, 2015, at 1:48 PM, Vladimir Vuksan <vladi...@vuksan.com> wrote:
> 
> I might have misspoken; try scalable off. 
> 
> On March 25, 2015 4:26:55 PM EDT, Sergey <svin...@apple.com> wrote:
> Hi Vladimir,
> 
> I changed to "scalable on". It didn't help.
> What I see is only the common remote grid view:
> 
> CPUs Total: 120
> Hosts up: 16
> Hosts down: 2
> ======
> Current Load Avg (15, 5, 1m):
>   3%, 3%, 3%
> Avg Utilization (last hour):
>   4%
> Localtime:
>   2015-03-25 10:27
> =======
> I can’t see any clusters and hosts inside this grid.
> Using netstat, I can see that the second Gmetad instance on machine2 
> periodically connects to machine1:8651.
> I don't see any connections to machine1:8652.
> 
> The second Gmetad instance uses the same ports, but it's on another machine. 
> Did you mean that this can affect the polling process?
> 
> Any ideas?
> 
> Thanks!
> Sergey
> 
>> On Mar 24, 2015, at 6:51 PM, Vladimir Vuksan <vli...@veus.hr> wrote:
>> 
>> Hi Sergey,
>> 
>> Try setting
>> 
>> scalable on
>> 
>> in gmetad.conf of the second instance. From the stock gmetad.conf
>> 
>> # Scalability mode. If on, we summarize over downstream grids, and respect
>> # authority tags. If off, we take on 2.5.0-era behavior: we do not wrap our
>> # output in <GRID></GRID> tags, we ignore all <GRID> tags we see, and always
>> # assume we are the "authority" on data source feeds. This approach does not
>> # scale to large groups of clusters, but is provided for backwards
>> # compatibility.
>> # default: on
>> # scalable off
>> 
>> I have not used this feature in a long time, so I'm not sure how well it 
>> scales, but it's worth a shot.
>> 
>> Does the second instance have different interactive and XML ports?
>> 
>> Vladimir
>> 
>> 
>> On 03/24/2015 09:24 PM, Sergey wrote:
>>> I have one Gmetad instance collecting metrics from several clusters of 
>>> hosts. The second Gmetad instance then has to poll all data via port 8651 
>>> from the first instance and store everything in local RRDs.
>>> I can get all the data from the second machine via "#>nc machine1 8651", 
>>> but when I check the RRDs, I don't see any clusters, only a Summary_Data 
>>> folder. Why doesn't Gmetad write data into the RRDs?
>>> 
>> 
> 
> 
> -- 
> Vladimir

_______________________________________________
Ganglia-general mailing list
Ganglia-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-general