> On 03/20/2015 10:23 AM, Loris Bennett wrote:
>> Hi,
>>
>> I have the following in my gmetad.conf
>>
>> data_source "Admin_Nodes" 10 admin:8648
>> data_source "Compute_Nodes" 10 admin:8649
>>
>> and when I look at the ports in use, I have
>>
>> $ netstat -plane | egrep 'gmon|gme'
>> tcp        0      0 0.0.0.0:8651                0.0.0.0:*                   
>> LISTEN      493        256095111  62544/gmetad
>> tcp        0      0 0.0.0.0:8652                0.0.0.0:*                   
>> LISTEN      493        256095112  62544/gmetad
>> unix  2      [ ]         DGRAM                    256095117 62544/gmetad
>>
>> Should I expect to see gmetad listening on ports 8648 and 8649 as well?
>>
>> Cheers,
>>
>> Loris
>>
Vladimir Vuksan <vli...@veus.hr> writes:

> No. Gmetad listens to two ports by default
>
> 8651 and 8652
>
> 8648 and 8649 are ports for the gmond which gmetad is polling.
>

OK, I think I have a general problem with my setup.  I have:

- 3 admin nodes, which during normal operation are always up
- 100 compute nodes, any or all of which could be powered down during
  normal operation

Setting up the data source for the admin nodes seems straightforward,
as they are normally all up.  However, how should it be defined for the
compute nodes?  I would like to do something like

  data_source "Compute_Nodes" 10 node*.test.cluster:8649

but this produces the error:

  we failed to resolve data source name node*.test.cluster

I could add one of the admin nodes to the cluster of compute nodes, but
then it would no longer be able to send its own data to the cluster of
admin nodes.

Is there a standard way of dealing with this case?

Cheers,

Loris

-- 
This signature is currently under construction.


_______________________________________________
Ganglia-general mailing list
Ganglia-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-general
