Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-09-05 Thread Rao Dingyuan
Hi folks,

Is there anybody working on this?

In most of our cloud environments, business networks are isolated from the
management network. So we are thinking about running *an agent in the guest
machine that sends metrics to the compute node over a virtual serial port*.
The compute node could then forward that data to ceilometer. That seems like
a general solution for all kinds of network topologies, and it can send
metrics without the guest needing any credentials.
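
For illustration only, a minimal sketch of what the guest-side half of this
could look like, assuming a virtio-serial channel exposed under a made-up
name (org.example.metrics) and psutil for the measurements; the compute-node
side would read the same channel via the QEMU chardev and forward the
samples to ceilometer:

    # Hypothetical guest-side sketch: collect a few basic metrics and write
    # them as JSON lines to a virtio-serial channel; no IP connectivity and
    # no credentials are needed inside the guest.
    import json
    import time

    import psutil

    CHANNEL = "/dev/virtio-ports/org.example.metrics"  # assumed channel name

    def collect():
        return {
            "timestamp": time.time(),
            "cpu_util": psutil.cpu_percent(interval=1),
            "memory_util": psutil.virtual_memory().percent,
        }

    def main(interval=60):
        with open(CHANNEL, "w") as chan:
            while True:
                chan.write(json.dumps(collect()) + "\n")
                chan.flush()
                time.sleep(interval)

    if __name__ == "__main__":
        main()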


BR
Kurt Rao


-Original Message-
From: boden [mailto:bo...@linux.vnet.ibm.com]
Sent: August 1, 2014 20:37
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
potential enhancement

On 8/1/2014 4:37 AM, Eoghan Glynn wrote:


 Heat cfntools is based on SSH, so I assume it requires TCP/IP
 connectivity between the VM and the central agent (or collector). But in
 the cloud, some networks are isolated from the infrastructure-layer
 network for security reasons. Some of our customers even explicitly
 require such protection. Does that mean those isolated VMs cannot be
 monitored by this proposed VM agent?

 Yes, that sounds plausible to me.

My understanding is that this VM agent for ceilometer would need connectivity
to the nova API as well as to the AMQP broker. IMHO the infrastructure
requirements from a network topology POV will differ from provider to provider
and depend on customer reqs / env.
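
To make the AMQP half of that concrete, here is a rough sketch (not the PoC
code) of what publishing one sample onto the bus from inside the guest might
involve, using today's oslo.messaging names; the transport URL, publisher id,
meter fields and event type are all assumptions. Note that the URL embeds
broker credentials, which is exactly the injection problem discussed below:

    # Sketch only: an in-guest agent pushing one sample to the AMQP broker.
    import time

    from oslo_config import cfg
    import oslo_messaging

    # Assumed broker URL -- embedding credentials like this is the problem.
    TRANSPORT_URL = "rabbit://vm-agent:SECRET@amqp.example.net:5672/"

    transport = oslo_messaging.get_notification_transport(cfg.CONF,
                                                          url=TRANSPORT_URL)
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id="vm-agent.instance-0001",
                                       driver="messagingv2",
                                       topics=["notifications"])

    sample = {
        "name": "memory.usage",            # assumed meter name
        "unit": "MB",
        "volume": 512,
        "resource_id": "instance-0001",    # assumed instance id
        "timestamp": time.time(),
    }
    notifier.info({}, "telemetry.sample", sample)  # empty request context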


 Cheers,
 Eoghan

 I really wish we could figure out how it could work for all VMs without
 security issues.

 I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


 Best regards!
 Kurt

 -----Original Message-----
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: August 1, 2014 14:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector
 - potential enhancement



 Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.

 For consumers wanting to leverage ceilometer as a telemetry service 
 atop non-OpenStack Clouds or infrastructure they don't own, some 
 edge cases crop up. Most notably the consumer may not have access to 
 the hypervisor host and therefore cannot leverage the ceilometer 
 compute agent on a per host basis.

 Yes, currently such access to the hypervisor host is required, at least
 in the case of the libvirt-based inspector.

 In such scenarios it's my understanding the main option is to employ 
 the central agent to poll measurements from the monitored resources 
 (VMs, etc.).

 Well, the ceilometer central agent is not generally concerned with 
 polling related *directly* to VMs - rather it handles acquiring
 data from RESTful APIs (glance, neutron, etc.) that are not otherwise
 available in the form of notifications, and also from host-level interfaces 
 such as SNMP.
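
As an aside, that central-agent style of polling has roughly the following
shape: hit a service's REST API with a keystone token and turn the response
into a gauge-style measurement. The endpoint, token handling and meter name
below are assumptions; the real pollsters use the per-service python clients:

    # Rough illustration of central-agent style polling against glance's
    # image API; requires reachability to the service endpoint plus a token.
    import time

    import requests

    GLANCE_ENDPOINT = "http://glance.example.net:9292"  # assumed endpoint
    TOKEN = "REPLACE-WITH-KEYSTONE-TOKEN"

    resp = requests.get(GLANCE_ENDPOINT + "/v2/images",
                        headers={"X-Auth-Token": TOKEN},
                        timeout=10)
    resp.raise_for_status()
    images = resp.json()["images"]

    measurement = {
        "name": "image.count",     # assumed gauge name
        "type": "gauge",
        "volume": len(images),
        "timestamp": time.time(),
    }
    print(measurement)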


Thanks for the additional clarity. Perhaps this proposed local VM agent fills
additional use cases where ceilometer is being used without openstack
proper (e.g. not a full set of openstack-compliant services like neutron,
glance, etc.).

 However this approach requires Cloud APIs (or other mechanisms) 
 which allow the polling impl to obtain the desired measurements (VM 
 memory, CPU, net stats, etc.), and moreover the polling approach has
 its own set of pros / cons from an arch / topology perspective.

 Indeed.

 The other potential option is to set up the ceilometer compute agent
 within the VM and have each VM publish measurements to the collector
 -- a local VM agent / inspector if you will. With respect to this 
 local VM agent approach:
 (a) I haven't seen this documented to date; is there any desire / 
 reqs to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for 
 this approach?

 So in a sense this is similar to the Heat cfn-push-stats utility[1] 
 and seems to suffer from the same fundamental problem, i.e. the need 
 for injection of credentials (user/passwds, keys, whatever) into the 
 VM in order to allow the metric datapoints to be pushed up to the
 infrastructure layer (e.g. onto the AMQP bus, or to a REST API endpoint).

 How would you propose to solve that credentialing issue?


My initial approximation would be to target use cases where end users do not
have direct guest access, or have limited guest access such that their UID /
GID cannot access the conf file. For example, instances which only provide app
access, provisioned using heat SoftwareDeployments
(http://tinyurl.com/qxmh2of), or trove database instances.
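
A tiny sketch of that "users can't read the conf file" assumption: the image
build or heat SoftwareDeployment drops the agent's credentials into a
root-only file (the path below is assumed), so unprivileged UIDs / GIDs in
the guest never see them:

    # Lock the agent config (and the credentials in it) down to root.
    import os

    CONF = "/etc/ceilometer/ceilometer.conf"  # assumed in-guest path
    os.chown(CONF, 0, 0)    # root:root
    os.chmod(CONF, 0o600)   # unreadable to other UIDs/GIDs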

In general, from a security POV, I don't see this approach as much different
from what's done with the trove guest agent (http://tinyurl.com/ohvtmtz).

Longer term, perhaps the credential problem could be mitigated using Barbican,
as suggested here: https://bugs.launchpad.net/nova/+bug/1158328
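
For what the Barbican idea might look like mechanically (using current client
libraries), the agent pulls its broker / API credential from Barbican at
start-up instead of having it sit in the conf file. All names and URLs below
are assumptions, and the agent still needs *some* bootstrap identity to
authenticate to keystone, which is the part that bug leaves open:

    # Sketch: fetch the agent's AMQP password from Barbican at start-up.
    from keystoneauth1 import identity, session
    from barbicanclient import client as barbican_client

    auth = identity.Password(
        auth_url="http://keystone.example.net:5000/v3",  # assumed
        username="vm-agent",
        password="BOOTSTRAP-SECRET",   # the remaining bootstrap problem
        project_name="telemetry",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)
    barbican = barbican_client.Client(session=sess)

    # Secret reference assumed to be provided at provisioning time,
    # e.g. via config drive or metadata.
    SECRET_REF = "http://barbican.example.net:9311/v1/secrets/0000-0000"
    amqp_password = barbican.secrets.get(SECRET_REF).payload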

 Cheers,
 Eoghan

 [1]
 https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats

Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Eoghan Glynn


 Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.
 
 For consumers wanting to leverage ceilometer as a telemetry service atop
 non-OpenStack Clouds or infrastructure they don't own, some edge cases
 crop up. Most notably the consumer may not have access to the hypervisor
 host and therefore cannot leverage the ceilometer compute agent on a per
 host basis.

Yes, currently such access to the hypervisor host is required, at least in
the case of the libvirt-based inspector.
 
 In such scenarios it's my understanding the main option is to employ the
 central agent to poll measurements from the monitored resources (VMs,
 etc.). 

Well, the ceilometer central agent is not generally concerned with
polling related *directly* to VMs - rather it handles acquiring
data from RESTful APIs (glance, neutron, etc.) that are not otherwise
available in the form of notifications, and also from host-level
interfaces such as SNMP.

 However this approach requires Cloud APIs (or other mechanisms)
 which allow the polling impl to obtain the desired measurements (VM
 memory, CPU, net stats, etc.), and moreover the polling approach has its
 own set of pros / cons from an arch / topology perspective.

Indeed.

 The other potential option is to set up the ceilometer compute agent
 within the VM and have each VM publish measurements to the collector --
 a local VM agent / inspector if you will. With respect to this local VM
 agent approach:
 (a) I haven't seen this documented to date; is there any desire / reqs
 to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for this
 approach?

So in a sense this is similar to the Heat cfn-push-stats utility[1]
and seems to suffer from the same fundamental problem, i.e. the need
for injection of credentials (user/passwds, keys, whatever) into the
 VM in order to allow the metric datapoints to be pushed up to the
infrastructure layer (e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?

Cheers,
Eoghan

[1] https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats



Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Rao Dingyuan
Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
between the VM and the central agent (or collector). But in the cloud, some
networks are isolated from the infrastructure-layer network for security
reasons. Some of our customers even explicitly require such protection.
Does that mean those isolated VMs cannot be monitored by this proposed VM
agent?

I really wish we could figure out how it could work for all VMs without
security issues.

I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


Best regards!
Kurt

-----Original Message-----
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: August 1, 2014 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
potential enhancement



 Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.
 
 For consumers wanting to leverage ceilometer as a telemetry service 
 atop non-OpenStack Clouds or infrastructure they don't own, some edge 
 cases crop up. Most notably the consumer may not have access to the 
 hypervisor host and therefore cannot leverage the ceilometer compute 
 agent on a per host basis.

Yes, currently such access to the hypervisor host is required, at least in the
case of the libvirt-based inspector.
 
 In such scenarios it's my understanding the main option is to employ 
 the central agent to poll measurements from the monitored resources 
 (VMs, etc.).

Well, the ceilometer central agent is not generally concerned with
polling related *directly* to VMs - rather it handles acquiring data from
RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
form of notifications, and also from host-level interfaces such as SNMP.

 However this approach requires Cloud APIs (or other mechanisms) which 
 allow the polling impl to obtain the desired measurements (VM memory, 
 CPU, net stats, etc.), and moreover the polling approach has its own
 set of pros / cons from an arch / topology perspective.

Indeed.

 The other potential option is to set up the ceilometer compute agent
 within the VM and have each VM publish measurements to the collector 
 -- a local VM agent / inspector if you will. With respect to this 
 local VM agent approach:
 (a) I haven't seen this documented to date; is there any desire / reqs 
 to support this topology?
 (b) If yes to #a, I whipped up a crude PoC here:
 http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for 
 this approach?

So in a sense this is similar to the Heat cfn-push-stats utility[1] and
seems to suffer from the same fundamental problem, i.e. the need for
injection of credentials (user/passwds, keys, whatever) into the VM in order
to allow the metric datapoints to be pushed up to the infrastructure layer
(e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?

Cheers,
Eoghan

[1]
https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats







Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread Eoghan Glynn


 Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
 between the VM and the central agent (or collector). But in the cloud, some
 networks are isolated from the infrastructure-layer network for security
 reasons. Some of our customers even explicitly require such protection.
 Does that mean those isolated VMs cannot be monitored by this proposed VM
 agent?

Yes, that sounds plausible to me.

Cheers,
Eoghan
 
 I really wish we could figure out how it could work for all VMs without
 security issues.
 
 I'm not familiar with heat-cfntools, so, correct me if I am wrong :)
 
 
 Best regards!
 Kurt
 
 -----Original Message-----
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: August 1, 2014 14:46
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
 potential enhancement
 
 
 
  Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.
  
  For consumers wanting to leverage ceilometer as a telemetry service
  atop non-OpenStack Clouds or infrastructure they don't own, some edge
  cases crop up. Most notably the consumer may not have access to the
  hypervisor host and therefore cannot leverage the ceilometer compute
  agent on a per host basis.
 
 Yes, currently such access to the hypervisor host is required, at least in the
 case of the libvirt-based inspector.
  
  In such scenarios it's my understanding the main option is to employ
  the central agent to poll measurements from the monitored resources
  (VMs, etc.).
 
 Well, the ceilometer central agent is not generally concerned with
 polling related *directly* to VMs - rather it handles acquiring data from
 RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
 form of notifications, and also from host-level interfaces such as SNMP.
 
  However this approach requires Cloud APIs (or other mechanisms) which
  allow the polling impl to obtain the desired measurements (VM memory,
  CPU, net stats, etc.), and moreover the polling approach has its own
  set of pros / cons from an arch / topology perspective.
 
 Indeed.
 
  The other potential option is to set up the ceilometer compute agent
  within the VM and have each VM publish measurements to the collector
  -- a local VM agent / inspector if you will. With respect to this
  local VM agent approach:
  (a) I haven't seen this documented to date; is there any desire / reqs
  to support this topology?
  (b) If yes to #a, I whipped up a crude PoC here:
  http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for
  this approach?
 
 So in a sense this is similar to the Heat cfn-push-stats utility[1] and
 seems to suffer from the same fundamental problem, i.e. the need for
 injection of credentials (user/passwds, keys, whatever) into the VM in order
 to allow the metric datapoints to be pushed up to the infrastructure layer
 (e.g. onto the AMQP bus, or to a REST API endpoint).
 
 How would you propose to solve that credentialing issue?
 
 Cheers,
 Eoghan
 
 [1]
 https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats
 
 
 
 
 
 



Re: [openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-08-01 Thread boden

On 8/1/2014 4:37 AM, Eoghan Glynn wrote:




Heat cfntools is based on SSH, so I assume it requires TCP/IP connectivity
between the VM and the central agent (or collector). But in the cloud, some
networks are isolated from the infrastructure-layer network for security
reasons. Some of our customers even explicitly require such protection.
Does that mean those isolated VMs cannot be monitored by this proposed VM
agent?


Yes, that sounds plausible to me.


My understanding is that this VM agent for ceilometer would need
connectivity to the nova API as well as to the AMQP broker. IMHO the
infrastructure requirements from a network topology POV will differ from
provider to provider and depend on customer reqs / env.




Cheers,
Eoghan


I really wish we could figure out how it could work for all VMs without
security issues.

I'm not familiar with heat-cfntools, so, correct me if I am wrong :)


Best regards!
Kurt

-----Original Message-----
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: August 1, 2014 14:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] Compute agent local VM inspector -
potential enhancement




Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.

For consumers wanting to leverage ceilometer as a telemetry service
atop non-OpenStack Clouds or infrastructure they don't own, some edge
cases crop up. Most notably the consumer may not have access to the
hypervisor host and therefore cannot leverage the ceilometer compute
agent on a per host basis.


Yes, currently such access to the hypervisor host is required, at least in the
case of the libvirt-based inspector.


In such scenarios it's my understanding the main option is to employ
the central agent to poll measurements from the monitored resources
(VMs, etc.).


Well, the ceilometer central agent is not generally concerned with
polling related *directly* to VMs - rather it handles acquiring data from
RESTful APIs (glance, neutron, etc.) that are not otherwise available in the
form of notifications, and also from host-level interfaces such as SNMP.



Thanks for the additional clarity. Perhaps this proposed local VM agent
fills additional use cases where ceilometer is being used without
openstack proper (e.g. not a full set of openstack-compliant services
like neutron, glance, etc.).



However this approach requires Cloud APIs (or other mechanisms) which
allow the polling impl to obtain the desired measurements (VM memory,
CPU, net stats, etc.), and moreover the polling approach has its own
set of pros / cons from an arch / topology perspective.


Indeed.


The other potential option is to set up the ceilometer compute agent
within the VM and have each VM publish measurements to the collector
-- a local VM agent / inspector if you will. With respect to this
local VM agent approach:
(a) I haven't seen this documented to date; is there any desire / reqs
to support this topology?
(b) If yes to #a, I whipped up a crude PoC here:
http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for
this approach?


So in a sense this is similar to the Heat cfn-push-stats utility[1] and
seems to suffer from the same fundamental problem, i.e. the need for
injection of credentials (user/passwds, keys, whatever) into the VM in order
to allow the metric datapoints to be pushed up to the infrastructure layer
(e.g. onto the AMQP bus, or to a REST API endpoint).

How would you propose to solve that credentialing issue?



My initial approximation would be to target use cases where end users do
not have direct guest access, or have limited guest access such that
their UID / GID cannot access the conf file. For example, instances which
only provide app access, provisioned using heat SoftwareDeployments
(http://tinyurl.com/qxmh2of), or trove database instances.


In general, from a security POV, I don't see this approach as much different
from what's done with the trove guest agent (http://tinyurl.com/ohvtmtz).


Longer term, perhaps the credential problem could be mitigated using Barbican,
as suggested here: https://bugs.launchpad.net/nova/+bug/1158328



Cheers,
Eoghan

[1]
https://github.com/openstack/heat-cfntools/blob/master/bin/cfn-push-stats















[openstack-dev] [ceilometer] Compute agent local VM inspector - potential enhancement

2014-07-31 Thread boden

Disclaimer: I'm not fully versed in ceilometer internals, so bear with me.

For consumers wanting to leverage ceilometer as a telemetry service atop 
non-OpenStack Clouds or infrastructure they don't own, some edge cases 
crop up. Most notably the consumer may not have access to the hypervisor 
host and therefore cannot leverage the ceilometer compute agent on a per 
host basis.


In such scenarios it's my understanding the main option is to employ the 
central agent to poll measurements from the monitored resources (VMs, 
etc.). However this approach requires Cloud APIs (or other mechanisms) 
which allow the polling impl to obtain the desired measurements (VM 
memory, CPU, net stats, etc.), and moreover the polling approach has its
own set of pros / cons from an arch / topology perspective.


The other potential option is to set up the ceilometer compute agent
within the VM and have each VM publish measurements to the collector -- 
a local VM agent / inspector if you will. With respect to this local VM 
agent approach:
(a) I haven't seen this documented to date; is there any desire / reqs 
to support this topology?
(b) If yes to #a, I whipped up a crude PoC here: 
http://tinyurl.com/pqjgotv  Are folks willing to consider a BP for this 
approach?


Thank you

