Re: [openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-10 Thread Chris Dent

On Tue, 9 Jun 2015, Luo Gangyi wrote:


Currently, ceilometer loads pollsters by agent namespace. So do you
mean you want to load pollsters one by one by their names (maybe
defined in pipeline.yaml)?

If loading all pollsters at once does not cost much, I think your
change is a bit unnecessary. But if it does cost much, your change is
meaningful.


The goal is to avoid importing code into a process that is not going to
be used. Not because it is slow or uses a lot of memory (in my testing
it is not slow, I'm unclear (thus far) about memory use) but simply
because it is inappropriate: Any single process should only contain code
(and config) that it is actually going to use.
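
As a concrete illustration of loading on demand rather than by
namespace, here is a rough Python sketch. It is not ceilometer code:
the pollster names, the "module:attr" targets, and load_pollsters are
all invented stand-ins (stdlib functions play the pollsters), just to
show that only the configured extensions get imported into the
process.

```python
# Illustrative sketch only -- none of these names exist in ceilometer.
import importlib

# The role entry_points.txt plays today: what *could* be loaded.
AVAILABLE_POLLSTERS = {
    "cpu": "statistics:mean",      # hypothetical "module:attr" targets,
    "memory": "statistics:median", # stdlib functions standing in for
    "disk": "statistics:stdev",    # real pollster classes
}

def load_pollsters(configured_names):
    """Import and return only the pollsters named in configuration."""
    loaded = {}
    for name in configured_names:
        target = AVAILABLE_POLLSTERS[name]  # unknown names raise KeyError
        module_name, attr = target.split(":")
        module = importlib.import_module(module_name)  # import on demand
        loaded[name] = getattr(module, attr)
    return loaded

# A process configured only for cpu never touches the disk machinery.
pollsters = load_pollsters(["cpu"])
```

The point of the sketch is the shape, not the mechanics: what is
loadable stays a static catalogue, while what is loaded is driven by
explicit per-process configuration.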


BTW, I like the idea "Separate polling and publishing/transforming
into separate workers/processes".


Good to know, thank you. This seems to be the growing consensus.
What's not yet clear is how soon we'll be able to make this happen
but at least we know we'll be trying to make progress in the right
direction.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-10 Thread Chris Dent

On Tue, 9 Jun 2015, gordon chung wrote:


i still like the idea of splitting polling and processing tasks.

pros:
- it moves load off poll agents and onto the notification agent
- we essentially get free healthcheck events by doing this

con:
to play devil's advocate, the one downside is that there are now two
levels of filtering (something we also ran into for declarative
meters).


Many of the ideas floating around at the moment raise concerns
about duplication/split-brain of any of:

* activity in the various processes
* configuration information
* data models

We should certainly take care to avoid too much complexity arising
from these sorts of things but I hope we won't let it dissuade us from
decomposing our heavyweight monoliths into smaller pieces that are
more amenable to custom composition.


so now you need to ensure what you're polling matches up with what you
want to build (i.e. you're polling the right things to build the data
you want and you're not polling stuff you don't intend to poll)... this
may or may not be a huge issue but it may be confusing to some.


True, but if we are moving in the direction of making configuration
more explicit and more contained then it will become a little easier
to manage. If we remove guesswork and ambiguity then it becomes
easier to create tools to automate the management (and distribution)
of configuration.
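
For instance, a hypothetical checking tool along these lines (all
meter names are invented for illustration) could diff an explicit
polling definition against what the pipeline sources expect, catching
mismatches before deployment:

```python
# Sketch of the sort of automation explicit configuration enables.
# Nothing here is real ceilometer tooling; names are made up.
polled_meters = {"cpu", "memory.usage"}           # from a polling config
pipeline_source_meters = {"cpu", "disk.read.bytes"}  # from pipeline.yaml

def check_config(polled, wanted):
    """Return (wanted but not polled, polled but never used)."""
    missing = wanted - polled   # data the pipeline expects but won't get
    unused = polled - wanted    # work done that nothing consumes
    return missing, unused

missing, unused = check_config(polled_meters, pipeline_source_meters)
```

With explicit files as the only authority, a check like this is a set
difference; with guesswork spread across entry points and pipeline
definitions, it isn't mechanically possible at all.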

In the absence of additional feedback what I'm getting here is that
some prototyping is worth exploring and we'll evaluate as we go.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-09 Thread gordon chung

 
 A couple things about this seem less than ideal:
 
 * 2 means we load redundant stuff unless we edit entry_points.txt.
 We do not want to encourage this sort of behavior. entry_points is
 not configuration[1]. We should configure elsewhere to declare "I
 care about things X" (including the option of "all things") and
 then load the tools to do so, on demand.
 
 * Two things are happening in the same context in step 5 and that
 seems quite limiting with regard to opportunities for effective
 maintenance and optimization.
 
 My intuition (which often needs to be sanity checked, thus my posting
 here) tells me there are some things we could change:
 
 * Separate polling and publishing/transforming into separate
 workers/processes.
 
 * Extract the definition of sources to be polled from pipeline.yaml
 to its own file and use that to be the authority of which
 extensions are loaded for polling and discovery.

i still like the idea of splitting polling and processing tasks.

pros: 
- it moves load off poll agents and onto the notification agent
- we essentially get free healthcheck events by doing this

con:
to play devil's advocate, the one downside is that there are now two levels of
filtering (something we also ran into for declarative meters).

in notification agents we first define which exchanges we want to listen to, 
and then we also define which event_types we want to build off of. similarly, 
here define what you want to poll, and then in notification agent, you define 
what you want to build.
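
that two-stage selection might be sketched like this (the exchanges,
event types, and messages here are all made up for illustration, not
taken from any real configuration):

```python
# Two-level filtering: stage 1 selects coarse groups (exchanges, or in
# the polling analogy, what gets polled), stage 2 selects within them
# (event_types, or what a sink actually builds from). All data invented.
listened_exchanges = {"nova", "cinder"}
wanted_event_types = {"compute.instance.create.end", "volume.create.end"}

notifications = [
    {"exchange": "nova", "event_type": "compute.instance.create.end"},
    {"exchange": "nova", "event_type": "compute.instance.exists"},  # dropped at stage 2
    {"exchange": "glance", "event_type": "image.upload"},           # dropped at stage 1
]

def two_level_filter(messages):
    stage1 = [m for m in messages if m["exchange"] in listened_exchanges]
    return [m for m in stage1 if m["event_type"] in wanted_event_types]

passed = two_level_filter(notifications)
```

the confusion risk is exactly that a message (or meter) can be dropped
at either level, and the operator has to reason about both to know why.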

so now you need to ensure what you're polling matches up with what you want to
build (i.e. you're polling the right things to build the data you want and you're
not polling stuff you don't intend to poll)... this may or may not be a huge
issue but it may be confusing to some.

that said, i don't see this being any different from users configuring 
notifications in cinder/nova/etc... and matching that configuration to what 
ceilometer is configured to build.

cheers,
gord


Re: [openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-08 Thread Luo Gangyi
Hi, Chris,


Currently, ceilometer loads pollsters by agent namespace. So do you mean you
want to load pollsters one by one by their names (maybe defined in
pipeline.yaml)?


If loading all pollsters at once does not cost much, I think your change is a
bit unnecessary.
But if it does cost much, your change is meaningful.


BTW, I like the idea "Separate polling and publishing/transforming into
separate workers/processes".
--
Luo gangyi <luogan...@cmss.chinamobile.com>

------ Original ------
From: Chris Dent <chd...@redhat.com>
Date: Mon, Jun 8, 2015 09:04 PM
To: openstack-operators <openstack-operat...@lists.openstack.org>; OpenStack-dev <OpenStack-dev@lists.openstack.org>

Subject: [openstack-dev] [ceilometer] polling agent configuration speculation




(Posting to the mailing list rather than writing a spec or making
code because I think it is important to get some input and feedback
before going off on something wild. Below I'm talking about
speculative plans and seeking feedback, not reporting decisions
about the future. Some of this discussion is intentionally naive
about how things are, because that's not really relevant; what's
relevant is how things should be or could be.

tl;dr: I want to make the configuration of the pollsters more explicit
and not conflate and overlap the entry_points.txt and pipeline.yaml
in confusing and inefficient ways.

* entry_points.txt should define what measurements are possible, not
   what measurements are loaded
* something new should define what measurements are loaded and
   polled (and their intervals) (sources in pipeline.yaml speak)
* pipeline.yaml should define transformations and publishers

Would people like something like this?)

The longer version:

Several of the outcomes of the Liberty Design Summit were related to
making changes to the agents which gather or hear measurements and
events. Some of these changes have pending specs:

* Ceilometer Collection Agents Split
   https://review.openstack.org/#/c/186964/

   Splitting the collection agents into their own repo to allow
   use and evolution separate from the rest of Ceilometer.

* Adding Meta-Data Caching Spec
   https://review.openstack.org/#/c/185084/

   Adding metadata caching to the compute agent so the Nova-API is
   less assaulted than it currently is.

* Declarative notification handling
   https://review.openstack.org/#/c/178399/

   Be able to hear and transform a notification to an event without
   having to write code.

Reviewing these and other specs and doing some review of the code
points out that we have an opportunity to make some architectural and
user interface improvements (while still maintaining existing
functionality). For example:

The current ceilometer polling agent has an interesting start-up
process:

1 It determines which namespaces it is operating in ('compute',
   'central', 'ipmi').
2 Using entry_points defined in setup.cfg it initializes all the
   polling extensions and all the discovery extensions (independent
   of sources defined in pipeline.yaml)
3 Every source in pipeline.yaml is given a list of pollsters that
   match the meters defined by the source, creating long running
   tasks to do the polling.
4 Each task does resource discovery and partitioning coordination.
5 Measurements/samples are gathered, transformed, and published
   according to the sink rules in pipeline.yaml

A couple things about this seem less than ideal:

* 2 means we load redundant stuff unless we edit entry_points.txt.
   We do not want to encourage this sort of behavior. entry_points is
   not configuration[1]. We should configure elsewhere to declare "I
   care about things X" (including the option of "all things") and
   then load the tools to do so, on demand.

* Two things are happening in the same context in step 5 and that
   seems quite limiting with regard to opportunities for effective
   maintenance and optimization.

My intuition (which often needs to be sanity checked, thus my posting
here) tells me there are some things we could change:

* Separate polling and publishing/transforming into separate
   workers/processes.

* Extract the definition of sources to be polled from pipeline.yaml
   to its own file and use that to be the authority of which
   extensions are loaded for polling and discovery.

What do people think?

[1] This is really the core of my concern and the main part I want
to see change.
-- 
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent


[openstack-dev] [ceilometer] polling agent configuration speculation

2015-06-08 Thread Chris Dent


(Posting to the mailing list rather than writing a spec or making
code because I think it is important to get some input and feedback
before going off on something wild. Below I'm talking about
speculative plans and seeking feedback, not reporting decisions
about the future. Some of this discussion is intentionally naive
about how things are, because that's not really relevant; what's
relevant is how things should be or could be.

tl;dr: I want to make the configuration of the pollsters more explicit
and not conflate and overlap the entry_points.txt and pipeline.yaml
in confusing and inefficient ways.

* entry_points.txt should define what measurements are possible, not
  what measurements are loaded
* something new should define what measurements are loaded and
  polled (and their intervals) (sources in pipeline.yaml speak)
* pipeline.yaml should define transformations and publishers

Would people like something like this?)

The longer version:

Several of the outcomes of the Liberty Design Summit were related to
making changes to the agents which gather or hear measurements and
events. Some of these changes have pending specs:

* Ceilometer Collection Agents Split
  https://review.openstack.org/#/c/186964/

  Splitting the collection agents into their own repo to allow
  use and evolution separate from the rest of Ceilometer.

* Adding Meta-Data Caching Spec
  https://review.openstack.org/#/c/185084/

  Adding metadata caching to the compute agent so the Nova-API is
  less assaulted than it currently is.

* Declarative notification handling
  https://review.openstack.org/#/c/178399/

  Be able to hear and transform a notification to an event without
  having to write code.

Reviewing these and other specs and doing some review of the code
points out that we have an opportunity to make some architectural and
user interface improvements (while still maintaining existing
functionality). For example:

The current ceilometer polling agent has an interesting start-up
process:

1 It determines which namespaces it is operating in ('compute',
  'central', 'ipmi').
2 Using entry_points defined in setup.cfg it initializes all the
  polling extensions and all the discovery extensions (independent
  of sources defined in pipeline.yaml)
3 Every source in pipeline.yaml is given a list of pollsters that
  match the meters defined by the source, creating long running
  tasks to do the polling.
4 Each task does resource discovery and partitioning coordination.
5 Measurements/samples are gathered, transformed, and published
  according to the sink rules in pipeline.yaml
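
For illustration, the steps above can be condensed into a toy model.
This is plain Python with invented namespace and pipeline data; the
real agent uses stevedore extension managers and oslo config, which
this deliberately glosses over:

```python
# Toy model of the agent's start-up flow -- all names are stand-ins.
NAMESPACES = {
    "compute": ["cpu", "memory"],   # pollsters per namespace (step 2)
    "central": ["image", "network"],
}

PIPELINE_SOURCES = [                # drastically simplified pipeline.yaml
    {"name": "cpu_source", "meters": ["cpu"], "interval": 600},
]

def start_agent(active_namespaces):
    # Step 1: which namespaces this agent serves.
    # Step 2: today, *every* pollster in those namespaces is loaded,
    # whether or not any source references it.
    loaded = [p for ns in active_namespaces for p in NAMESPACES[ns]]
    # Step 3: each source gets the pollsters matching its meters,
    # becoming a long-running polling task.
    tasks = []
    for source in PIPELINE_SOURCES:
        matched = [p for p in loaded if p in source["meters"]]
        if matched:
            tasks.append({"source": source["name"],
                          "pollsters": matched,
                          "interval": source["interval"]})
    return loaded, tasks

loaded, tasks = start_agent(["compute"])
# 'memory' ends up loaded even though no source uses it -- the
# redundancy the next section objects to.
```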

A couple things about this seem less than ideal:

* 2 means we load redundant stuff unless we edit entry_points.txt.
  We do not want to encourage this sort of behavior. entry_points is
  not configuration[1]. We should configure elsewhere to declare "I
  care about things X" (including the option of "all things") and
  then load the tools to do so, on demand.

* Two things are happening in the same context in step 5 and that
  seems quite limiting with regard to opportunities for effective
  maintenance and optimization.

My intuition (which often needs to be sanity checked, thus my posting
here) tells me there are some things we could change:

* Separate polling and publishing/transforming into separate
  workers/processes.

* Extract the definition of sources to be polled from pipeline.yaml
  to its own file and use that to be the authority of which
  extensions are loaded for polling and discovery.
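
One possible shape for that split, purely as a sketch (the file name
"polling.yaml" and its layout are invented here, not an agreed
format), would be a polling-specific file that is the sole authority
for what gets polled, leaving pipeline.yaml with only transformation
and publication:

```yaml
# polling.yaml (hypothetical): the only place that says what is polled,
# and therefore which extensions need loading.
sources:
  - name: cpu_source
    interval: 600
    meters:
      - cpu
      - cpu_util

# pipeline.yaml (reduced): transformation and publication only.
sinks:
  - name: cpu_sink
    transformers:
      - name: rate_of_change
    publishers:
      - notifier://
```

The polling agent would read only the first file; the worker doing
transformation and publishing would read only the second.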

What do people think?

[1] This is really the core of my concern and the main part I want
to see change.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
