Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-20 Thread gordon chung
> a) Am I right that no indicator is there?
> 
> b) Assuming there should be one:
> 
> * Where should it go? Presumably it needs to be an attribute of
> each sample because as agents leave and join the group, where
> samples are published from can change.
> 
> * How should it be named? The never-ending problem.
disclaimer: i'm just riffing and the following might be nonsense.
i don't think we have a formal indicator of where a sample came from. we do 
attach a message signature to each sample which verifies it hasn't been 
tampered with.[1] i could envision that being a way to trace a path (right now 
i'm not sure you can set a unique metering hash key per agent).
that said, i guess it really depends on how you plan on debugging? it might 
be easy to have some sort of logging that includes the agent's id and the 
sample it's publishing.
i guess also, to extend your question about agents leaving/joining: i'd expect 
there is some volatility to the agents, where an agent may or may not exist at 
the point of debugging... just curious what the benefit is of knowing who sent 
it if all the agents are just clones of each other.
[1] 
https://github.com/openstack/ceilometer/blob/a77dd2b5408eb120b7397a6f08bcbeef6e521443/ceilometer/publisher/rpc.py#L119-L124
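riffing a bit further: a rough sketch of how a per-agent signing secret could double as a provenance marker. this is illustrative only, not the actual ceilometer implementation (the real code is at [1]); names and the exact digest scheme here are assumptions.

```python
import hashlib
import hmac


def compute_signature(sample, secret):
    """Sign a sample dict by HMACing its sorted key/value pairs.

    Sketch only -- ceilometer's real signing code lives in
    ceilometer/publisher/rpc.py and may differ in detail.
    """
    digest = hmac.new(secret.encode(), digestmod=hashlib.sha256)
    for name, value in sorted(sample.items()):
        if name != 'message_signature':
            digest.update(('%s=%s|' % (name, value)).encode())
    return digest.hexdigest()


# if each agent used a unique secret, the signature would implicitly
# record which agent published the sample
sample = {'counter_name': 'cpu_util', 'counter_volume': 0.5}
sig_a = compute_signature(sample, 'agent-a-secret')
sig_b = compute_signature(sample, 'agent-b-secret')
assert sig_a != sig_b
```

the catch, as noted above, is whether a deployment can actually configure a distinct metering secret per agent.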
cheers,
gord

___________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] indicating sample provenance

2014-08-21 Thread gordon chung
> b) Assuming there should be one:
>
>    * Where should it go? Presumably it needs to be an attribute of
>      each sample because as agents leave and join the group, where
>      samples are published from can change.

is this just for debugging purposes or auditing? from an audit standpoint, 
whenever an event/meter/whatever is handled within a system, it should be 
captured. so in CADF[1], and i assume any other auditing standard out there, 
when a resource such as a publisher in the pipeline creates the sample, it 
should add a reporter attribute noting that it was the one who created it, and 
that would be captured in the final sample/event.

[1] http://docs.openstack.org/developer/pycadf/
cheers,
gord


Re: [openstack-dev] [all] The future of the integrated release

2014-08-21 Thread gordon chung



> The point I've been making is 
> that by the TC continuing to bless only the Ceilometer project as the 
> OpenStack Way of Metering, I think we do a disservice to our users by 
> picking a winner in a space that is clearly still unsettled.
can we avoid using the word 'blessed'? it's extremely vague and seems 
controversial. from what i know, no one is being told project x's services are 
the be-all and end-all, and based on experience, companies (should) know this. 
i've worked with other alternatives even though i contribute to ceilometer.

> Totally agree with Jay here, I know people who gave up on trying to
> get any official project around deployment because they were told they
> had to do it under the TripleO umbrella

from the pov of a project that seems to be brought up constantly, and maybe 
it's my naivety, i don't really understand the fascination with branding and 
the stigma people have placed on non-'openstack'/stackforge projects. it can't 
be a legal thing because i've gone through that potential mess. also, it's 
just as easy to contribute to 'non-openstack' projects as 'openstack' projects 
(even easier if we're honest).
in my mind, the goal of the programs is to encourage collaboration between 
projects with the same focus (whether they overlap or not). that way, even if 
there are differences in goal/implementation, there's a common space between 
them so users can easily decide. also, hopefully the collaboration will help 
teams realise that certain problems have already been solved and certain parts 
of code can be shared, rather than having projects x, y, and z all working in 
segregated streams, racing as fast as they can to claim supremacy (how you'd 
decide is another mess), only to decide n months/years later to throw away 
(tens/hundreds of) thousands of person-hours of work because we just created 
massive projects that overlap.
suggestion: maybe it's better to drop the branding codenames and just refer to 
everything by its generic feature? i.e. identity, telemetry, orchestration, 
etc.
cheers,
gord


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-21 Thread gordon chung
is it possible that it's not on all the nodes?
seems like it passed here: https://review.openstack.org/#/c/109207/ but another 
patch at roughly the same time failed https://review.openstack.org/#/c/113549/

cheers,
gord


Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread gordon chung
> I would like to realize moving swift_middleware.py from the ceilometer 
> package to 
> the ceilometerclient package. For me it is very difficult to convince users 
> of 
> installing the ceilometer package on Proxy Nodes for just using the swift 
> middleware 
> because of maintenance costs. Operators in users must check security patches 
> for 
> installed packages on Proxy Nodes even if these are not used on the nodes.
this idea sounds so familiar. i feel like i may have tried to move this in the 
past but gave up. i actually prefer having the middleware reside in 
ceilometerclient... it really doesn't make sense for the entire ceilometer 
package to be pulled in for just a middleware, although i feel like that might 
require the oslo.messaging feature as well.
could you create a spec[1] and we can maybe hash out the idea there.
[1] https://github.com/openstack/ceilometer-specs
cheers,
gord



Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread gordon chung
> It may be easier for you, but it certainly isn't inside big companies,
> e.g. HP have pretty broad approvals for contributing to (official)
> openstack projects, where as individual approval may be needed to
> contribute to none-openstack projects.
i was referring to a company bigger than hp... maybe the legal team is nicer 
there. :) couldn't hurt to ask them anyway... plenty of good projects exist 
in the stackforge domain.
cheers,
gord


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-22 Thread gordon chung
> I couldn’t reproduce this issue either. I’ve tried on precise and on a fresh 
> trusty too, everything worked fine…
fun
from the limited error message, it's because the service path isn't found 
(https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L133-L140) 
and this code is returning None... so for some reason scan_services is not 
finding what it needs to find 
(https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L114-L130) 
in most cases, but in the rare case it actually does find a path.
looking at the build trends, there doesn't seem to be a noticeable pattern; 
it's more a crapshoot whether a check passes or not:
https://jenkins01.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins02.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins03.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins04.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins05.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins06.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins07.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
cheers,
gord


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-25 Thread gordon chung
just an update: i had to re-add PYTHONHASHSEED=0 to ceilometer. i didn't find 
the exact root cause of why WSME is affecting our doc gate, but it appears WSME 
is also affected by the new tox and the random hash seed, as it too suffers 
from random failures in its unit tests.
for now, i've added the hash seed back, so we should be unblocked.
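for reference, the relevant tox.ini change looks roughly like this (illustrative fragment; the actual ceilometer tox.ini may differ):

```ini
# tox.ini -- illustrative fragment, not the exact ceilometer configuration
[testenv]
# newer tox randomises PYTHONHASHSEED per run; pinning it to 0 makes
# dict/set iteration order deterministic so the doc and unit test jobs
# stop failing at random
setenv = PYTHONHASHSEED=0
```

this masks the ordering sensitivity rather than fixing it, but it unblocks the gate while the root cause in WSME is tracked down.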

cheers,
gord


Re: [openstack-dev] Treating notifications as a contract

2014-09-03 Thread gordon chung
> > For example: It appears that CADF was designed for this sort of thing and
> > was considered at some point in the past. It would be useful to know
> > more of that story if there are any pointers.
>
> My initial reaction is that CADF has the stank of enterprisey all over
> it rather than "less is more" and "worse is better" but that's a
> completely uninformed and thus unfair opinion.
>
> TBH I don't know enough about CADF, but I know a man who does ;)
> (gordc, I'm looking at you!)

** so i was on vacation when this thread popped up. i'll just throw in a 
disclaimer: i didn't read the initial conversation thread... also, i just read 
what i typed below and ignore the fact it sounds like a sales pitch. **
CADF is definitely a well-defined open standard with contributions from 
multiple companies, so there are a lot of use cases; case in point, the 
daunting 100+ page spec [1].
the purpose of CADF is to be an auditable event model describing cloud events 
(basically what our notifications are in OpenStack). regarding CADF in 
OpenStack[2], pyCADF has now been moved under the Keystone umbrella to handle 
auditing. Keystone thus far has done a great job incorporating pyCADF into its 
notification messages.
while the spec is quite verbose, there is a short intro to CADF events and how 
to define them in the pycadf docs [3]. we also did a talk at the Atlanta summit 
[4] (apologies for my lack of presentation skills). lastly, i know we 
previously had a bunch of slides describing/explaining CADF at a high level; 
i'll let ibmers find a copy to post to slideshare or the like.
> * At the micro level have versioned schema for notifications such that
> one end can declare "I am sending version X of notification
> foo.bar.Y" and the other end can effectively deal.
the event model has a mandatory typeURI field where you could define a version.
> These ideas serve two different purposes: One is to ensure that
> existing notification use cases are satisfied with robustness and
> provide a contract between two endpoints. The other is to allow a
> fecund notification environment that allows and enables many
> participants.
CADF is designed to be extensible, so even if a use case is not specifically 
defined in the spec, the model can be extended to accommodate it. additionally, 
one of the chairs of the CADF spec is also a contributor to pyCADF, so there 
are opportunities to shape the future of CADF (something we did while building 
pyCADF).
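to make the typeURI/versioning and extensibility points concrete, here is a minimal CADF-shaped event as a plain python dict. this is illustrative only: the field names follow the DMTF spec, but this is not the pyCADF API, and the ids/values are made up.

```python
# illustrative CADF-shaped event; field names follow the DMTF CADF spec,
# values and resource ids are hypothetical
event = {
    # the mandatory typeURI carries the model version, so a consumer can
    # dispatch on "i am receiving version X of this event type"
    'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event',
    'eventType': 'activity',
    'action': 'create',
    'outcome': 'success',
    'initiator': {'id': 'openstack:agent-1',
                  'typeURI': 'service/oss/monitoring'},
    'target': {'id': 'openstack:instance-xyz',
               'typeURI': 'compute/machine'},
}

# extensibility: attachments let a deployment carry data the base spec
# doesn't define, without breaking consumers that ignore them
event['attachments'] = [{'name': 'note', 'typeURI': 'mime:text/plain',
                         'content': 'deployment-specific data'}]

# a consumer can recover the model version from the typeURI
version = event['typeURI'].rsplit('/', 2)[-2]
assert version == '1.0'
```

so the "versioned schema per notification" idea maps fairly directly onto the typeURI field, and the "many participants" idea onto attachments/extensions.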
> Another approach would be to hone in on the producer-side that's
> currently the heaviest user of notifications, i.e. nova, and propose
> the strawman to nova-specs
i'd love for OpenStack to converge on a standard (whether CADF or not). 
personal experience tells me it'll be difficult, but i think more and more 
people have realised that just making the 'wild west' even wilder isn't 
helping.
[1] http://www.dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0.pdf
[2] http://www.dmtf.org/standards/cadf
[3] http://docs.openstack.org/developer/pycadf/event_concept.html
[4] https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/an-overview-of-cloud-auditing-support-for-openstack
cheers,
gord


Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-11 Thread gordon chung
> Nejc has been doing a great work and has been very helpful during the
> Juno cycle and his help is very valuable.
>
> I'd like to propose that we add Nejc Saje to the ceilometer-core group.

can we minus because he makes me look bad? /sarcasm
+1 for core.
cheers,
gord


[openstack-dev] oslo.middleware 0.1.0 release

2014-09-15 Thread gordon chung
on behalf of the Oslo team, we're pleased to announce the initial public 
release of oslo.middleware (version 0.1.0). this library contains WSGI 
middleware, previously available under openstack/common/middleware, that 
provides additional functionality to the api pipeline.

the oslo.middleware library is intended to be adopted for Kilo as the 
middleware code in oslo-incubator is deprecated.

please report any issues using the oslo.middleware tracker: 
https://bugs.launchpad.net/oslo.middleware

cheers,
gord


Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-16 Thread gordon chung



> Dina has been doing a great work and has been very helpful during the
> Juno cycle and her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.

+1

cheers,
gord



[openstack-dev] [Ceilometer] meaning of resource_id in a meter

2013-11-20 Thread Gordon Chung
hi,

for reference, this is a continuation of discussion from meeting: 
http://eavesdrop.openstack.org/meetings/ceilometer/2013/ceilometer.2013-11-20-21.00.log.html
 
(see #topic what's a resource?)

came across a question when reviewing 
https://review.openstack.org/#/c/56019... basically, in Samples, the user_id 
and project_id attributes are pretty self-explanatory and map to Keystone 
concepts pretty well, but what are the criteria for setting resource_id? 
maybe the ambiguity is that resource_id in a Sample is not the resource_id 
from Keystone... so what is it? is it just any UUID that is accessible 
from the notification/response, and if it is, is there a better (possibly more 
consistent) alternative?

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com


Re: [openstack-dev] [Ceilometer] meaning of resource_id in a meter

2013-11-21 Thread Gordon Chung
> In all cases, these are free string fields. `user_id' and `project_id'
> map to Keystone _most of the time_,

i'm sort of torn between the two -- which is why i brought it up, i guess. 
i like the flexibility of having resource_id as a free string field, but the 
difference between the resource and project/user fields is that we can query 
directly on Resources. when we get a Resource, we get a list of associated 
Meters, and if we don't set resource_id in a consistent manner, i worry we 
may lose some of the relational information between Meters that groupings 
based on a consistent resource_id can provide.
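to illustrate the grouping concern with toy data (hypothetical samples and ids, not ceilometer code):

```python
from collections import defaultdict

# hypothetical samples: when resource_id is set consistently, grouping by
# it recovers every meter associated with a resource; inconsistent ids
# would silently split one resource's meters across several groups
samples = [
    {'meter': 'cpu_util', 'resource_id': 'instance-1', 'volume': 0.4},
    {'meter': 'disk.read.bytes', 'resource_id': 'instance-1', 'volume': 1024},
    {'meter': 'cpu_util', 'resource_id': 'instance-2', 'volume': 0.9},
]

meters_by_resource = defaultdict(set)
for s in samples:
    meters_by_resource[s['resource_id']].add(s['meter'])

assert meters_by_resource['instance-1'] == {'cpu_util', 'disk.read.bytes'}
```

if one publisher had used, say, the nova instance UUID and another the port UUID for the same VM, that grouping falls apart -- which is the relational information i'm worried about losing.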

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com


Re: [openstack-dev] [Ironic][Ceilometer] get IPMI data for ceilometer

2013-11-26 Thread Gordon Chung
> 2. Not sure if our Ceilometer only accept the signed-message, if it is
> case, how Ironic get the message trust for Ceilometer, and send the valid
> message which can be accepted by Ceilometer Collector?

> I'm not sure it's appropriate for ironic to be sending messages using
> ceilometer's sample format. We receive data from the other projects using
> the more generic notification system, and that seems like the right tool
> to use here, too. Unless the other ceilometer devs disagree?

agreed, depending on your target milestone i'd suggest keeping an eye on 
this bp as well: 
https://blueprints.launchpad.net/oslo/+spec/notification-structured 

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com


Re: [openstack-dev] [ceilometer][qa] Punting ceilometer from whitelist

2013-12-04 Thread Gordon Chung
> As a developer think about the fact that when you log something as
> ERROR, you are expecting a cloud operator to be woken up in the middle
> of the night with an email alert to go fix the cloud immediately. You
> are intentionally ruining someone's weekend to fix this issue - RIGHT
> NOW!

was going to ask what the CRITICAL level is for... good thing i googled 
first: http://docs.python.org/2/howto/logging.html seems like a good 
enough definition for each level.

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com


Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-11 Thread Gordon Chung
> To that end, I would like to nominate Sandy Walsh from Rackspace to
> ceilometer-core. Sandy is one of the original authors of StackTach, and
> spearheaded the original stacktach-ceilometer integration. He has been
> instrumental in many of my codes reviews, and has contributed much of the
> existing event storage and querying code.

+1 in support of Sandy.  the Event work he's led in Ceilometer has been an 
important feature and i think he has some valuable ideas.

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [ceilometer] Discussion of the resource loader support patch

2014-01-21 Thread Gordon Chung
> If the resources defined in the pipeline doesn't match any resource
> file loader, it will be treated as directly passing to the pollsters. E.g.
> resources:
> - fileloader:///foo/bar
> - snmp://2.2.2.2
> The endpoint 'snmp://2.2.2.2' will be passed to the pollsters along
> with those read from the file /foo/bar.

i don't have any particular opposition to the code.

i have two questions; the first is related to validation. if we load a 
list of resources from pipeline.yaml and another 'loader' source, what 
happens when the same resource(s) are listed in both pipeline.yaml and the 
'loader' source? do we just let it continue with duplicates, or do we try 
to filter?

also, another question: what other types of 'loaders' are there 
aside from fileloader? i don't really have any ideas outside of fileloader, 
so it'd be interesting to know of other use-cases.

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ironic][Ceilometer]bp:send-data-to-ceilometer

2014-01-29 Thread Gordon Chung
> Meter Names:
> fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
> voltage, voltage.min, voltage.max, voltage.status
> temperature, temperature.min, temperature.max, temperature.status
>
> 'FAN 1': {
> 'current_value': '4652',
> 'min_value': '4200',
> 'max_value': '4693',
> 'status': 'ok'
> }
> 'FAN 2': {
> 'current_value': '4322',
> 'min_value': '4210',
> 'max_value': '4593',
> 'status': 'ok'
> },
> 'voltage': {
> 'Vcore': {
> 'current_value': '0.81',
> 'min_value': '0.80',
> 'max_value': '0.85',
> 'status': 'ok'
> },
> '3.3VCC': {
> 'current_value': '3.36',
> 'min_value': '3.20',
> 'max_value': '3.56',
> 'status': 'ok'
> },
> ...
> }
> }

are FAN 1, FAN 2, Vcore, etc. variable names, or values that would 
consistently show up? if the former, would it make sense to have the 
meters be similar to fanspeed: where the trait is FAN1, FAN2, etc.? 
if the meter is just fanspeed, what would the volume be? FAN 1's 
current_value?
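to make the question concrete, here's a sketch of one interpretation, where each physical sensor becomes its own resource and its current reading the volume. everything here (meter names, the per-sensor resource_id convention, the flatten_sensors helper) is illustrative, not an existing ironic/ceilometer API:

```python
def flatten_sensors(category, sensors):
    """Turn {'FAN 1': {'current_value': ...}, ...} into per-sensor samples.

    Sketch only: meter naming ('fanspeed', 'fanspeed.min', ...) follows
    the proposal in the thread, with the sensor name as resource_id.
    """
    samples = []
    for sensor_name, readings in sensors.items():
        samples.append({'meter': category, 'resource_id': sensor_name,
                        'volume': float(readings['current_value'])})
        for suffix in ('min', 'max'):
            samples.append({'meter': '%s.%s' % (category, suffix),
                            'resource_id': sensor_name,
                            'volume': float(readings['%s_value' % suffix])})
    return samples


fans = {'FAN 1': {'current_value': '4652', 'min_value': '4200',
                  'max_value': '4693', 'status': 'ok'}}
samples = flatten_sensors('fanspeed', fans)
assert {s['meter'] for s in samples} == \
    {'fanspeed', 'fanspeed.min', 'fanspeed.max'}
assert all(s['resource_id'] == 'FAN 1' for s in samples)
```

under this reading the volume for the plain fanspeed meter is each fan's current_value, and the fan name lives in resource_id rather than in the meter name.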

cheers,

gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ironic][Ceilometer]bp:send-data-to-ceilometer

2014-01-29 Thread Gordon Chung
> Different hardware will expose different number of each of these
> things. In Haomeng's first proposal, all hardware would expose a
> "fanspeed" and a "voltage" category, but with a variable number of
> meters in each category. In the second proposal, it looks like there
> are no categories and hardware exposes a variable number of meters
> whose names adhere to some consistent structure (eg, "FAN ?" and "V???").

cool cool. personally, i would think the first option, splitting it into 
categories, might be the safer route.

gordon chung
openstack, ibm software standards


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-26 Thread Gordon Chung
hi,

just so this issue doesn't get lost again, Mehdi's bp seems like a good 
place to track it: 
https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification

Joe, i agree with you that it's too late for this iteration... maybe it's 
something we should mark as low priority for the J cycle.

adding participants to the bp just so we get eyes on it.

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-26 Thread Gordon Chung
> Ceilometer really needs to stop importing server projects in unit tests.
> By nature this is just going to break all the time.

i believe that was the takeaway from the thread -- it's an old thread and 
i was just doing some back-reading. 

> Cross project interactions need to be tested by something in the gate
> which is cross project gating - like tempest/devstack.

that said, we currently import swift in unit tests as well, to test a swift 
middleware solution which gathers metrics. i've added this as a bug for 
discussion: https://bugs.launchpad.net/ceilometer/+bug/1285388

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-04 Thread Gordon Chung
hi Sampath

tbh, i actually like the pipeline solution proposed in the blueprint... 
that said, there hasn't been any work done on this in Icehouse. 
there was work on adding alarms on notifications 
(https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification) 
but that has been pushed. i'd be interested in discussing adding alarms to 
the pipeline and its pros/cons vs the current implementation.

>  https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
>  Is there any further discussion about [Part 4 - Moving Alarms into the
> Pipelines] in above doc?
is the pipeline alarm design attached to a blueprint? also, is your 
interest purely to see status or were you looking to work on it? ;)

cheers,
gordon chung
openstack, ibm software standards


[openstack-dev] oslo.middleware 0.2.0 released

2014-12-02 Thread gordon chung
The Oslo team is pleased to announce release 0.2.0 of oslo.middleware.

This is primarily a bug-fix release, but does include requirements changes.
 
For more details, please see the git log history below and
 https://launchpad.net/oslo.middleware/kilo/0.2.0
 
Please report issues through launchpad:
 https://bugs.launchpad.net/oslo.middleware
 

 
Changes in openstack/oslo.middleware  0.1.0..0.2.0
 
7baf57a Updated from global requirements
6f88759 Updated from global requirements
f9d0b94 Flesh out the README
edfa12c Imported Translations from Transifex
5fd894b Updated from global requirements
28b8ad2 Add pbr to installation requirements
b49d38c Updated from global requirements
2f53838 Updated from global requirements
afb541d Remove extraneous vim editor configuration comments
c32959f Imported Translations from Transifex
9ccefd8 Support building wheels (PEP-427)
72836d0 Fix coverage testing
7ee3b0f Expose sizelimit option to config generator
7846039 Imported Translations from Transifex
e18de4a Imported Translations from Transifex
7874cf9 Updated from global requirements
d7bdf52 Imported Translations from Transifex
3679023 Remove oslo-incubator fixture
 
  diffstat (except docs and test files):
 
 README.rst |  5 +-
 openstack-common.conf  |  1 -
 .../de/LC_MESSAGES/oslo.middleware-log-error.po| 27 +++
 .../locale/de/LC_MESSAGES/oslo.middleware.po   | 27 +++
 .../en_GB/LC_MESSAGES/oslo.middleware-log-error.po | 27 +++
 .../locale/en_GB/LC_MESSAGES/oslo.middleware.po| 27 +++
 .../fr/LC_MESSAGES/oslo.middleware-log-error.po| 27 +++
 .../locale/fr/LC_MESSAGES/oslo.middleware.po   | 27 +++
 .../locale/oslo.middleware-log-critical.pot| 20 +
 .../locale/oslo.middleware-log-error.pot   | 25 +++
 .../locale/oslo.middleware-log-info.pot| 20 +
 .../locale/oslo.middleware-log-warning.pot | 20 +
 oslo/__init__.py   |  2 -
 .../openstack/common/fixture/__init__.py   |  0
 oslo/middleware/openstack/common/fixture/config.py | 85 --
 oslo/middleware/opts.py| 45 
 oslo/middleware/sizelimit.py   | 30 +---
 requirements.txt   |  5 +-
 setup.cfg  |  9 ++-
 test-requirements.txt  |  9 ++-
 tests/test_sizelimit.py|  5 +-
 21 files changed, 334 insertions(+), 109 deletions(-)

 
  Requirements updates:
 
 diff --git a/requirements.txt b/requirements.txt
 index 414bdf6..275fa4f 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -4,0 +5 @@
 +pbr>=0.6,!=0.7,<1.0
 @@ -6,2 +7,2 @@ Babel>=1.3
 -oslo.config>=1.4.0.0a3
 -oslo.i18n>=0.3.0  # Apache-2.0
 +oslo.config>=1.4.0  # Apache-2.0
 +oslo.i18n>=1.0.0  # Apache-2.0
 diff --git a/test-requirements.txt b/test-requirements.txt
 index 506a33d..c5c0328 100644
 --- a/test-requirements.txt
 +++ b/test-requirements.txt
 @@ -8,4 +8,5 @@ mock>=1.0
 -oslosphinx>=2.2.0.0a2
 -oslotest>=1.1.0.0a2
 -sphinx>=1.1.2,!=1.2.0,<1.3
 -testtools>=0.9.34
 +oslosphinx>=2.2.0  # Apache-2.0
 +oslotest>=1.2.0  # Apache-2.0
 +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 +testtools>=0.9.36,!=1.2.0
 +coverage>=3.6

cheers
gord


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread gordon chung
+1

cheers,
gord


From: morgan.fainb...@gmail.com
Date: Sun, 18 Jan 2015 12:11:02 -0700
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec 
core

Hello all,
I would like to nominate Brad Topol for Keystone Spec core (core reviewer for 
Keystone specifications and API-Specification only: 
https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a 
consistent voice advocating for well defined specifications, use of existing 
standards/technology, and ensuring the UX of all projects under the Keystone 
umbrella continue to improve. Brad brings to the table a significant amount of 
insight to the needs of the many types and sizes of OpenStack deployments, 
especially what real-world customers are demanding when integrating with the 
services. Brad is a core contributor on pycadf (also under the Keystone 
umbrella) and has consistently contributed code and reviews to the Keystone 
projects since the Grizzly release.
Please vote with +1/-1 on adding Brad to as core to the Keystone Spec repo. 
Voting will remain open until Friday Jan 23.
Cheers,Morgan Fainberg



[openstack-dev] pyCADF 0.7.0 released

2015-01-19 Thread gordon chung
pyCADF is the python implementation of the DMTF Cloud Auditing Data Federation 
Working Group (CADF) specification.  pyCADF 0.7.0 has been tagged and should be 
available on PyPI and our mirror shortly.
this release includes:
* deprecation of audit middleware (replaced by audit middleware in 
keystonemiddleware).
* removal of oslo.messaging requirement.
* various requirements and oslo synchronisations.
please report any problems here: https://bugs.launchpad.net/pycadf
cheers,
gord


Re: [openstack-dev] [ceilometer] Monitoring as a Service

2014-05-02 Thread Gordon Chung
> Problem to solve: Ceilometer's purpose is to track and *measure/
> meter* usage information collected from OpenStack components 
> (originally for billing). While Ceilometer is usefull for the cloud 
> operators and infrastructure metering, it is not a *monitoring* 
> solution for the tenants and their services/applications running in 
> the cloud because it does not allow for service/application-level 
> monitoring and it ignores detailed and precise guest system metrics.

Alexandre, good to see the monitoring topic is alive and well. i have a 
few questions and comments...

is the proposed service just a new polling agent that, instead of building 
meters, takes raw polled events, stores them in a database, and can also emit 
'alarms'? a lot of the concepts in the blueprint seem to be in line with 
Ceilometer's design, except with an event/monitoring emphasis (which 
Ceilometer also has).

rather than reinvent the wheel, regarding monitoring, have you taken a 
look at StackTach[1]? it may cover some of the use cases you have. we're 
currently in the process of integrating StackTach's monitoring ability 
into Ceilometer. Ceilometer does have the ability to capture tailored 
events[2] and there are blueprints to expand that functionality[3][4][5] 
(there are more event-related blueprints in Ceilometer). the StackTach 
integration process has been admittedly slow, so help is always welcomed 
there.

whether eventing/monitoring should stay in Ceilometer is another topic, but 
i'd be interested to see if the event functionality in StackTach and 
Ceilometer, as well as the alarming capability in Ceilometer, can cover the 
use cases you have. if the one thing missing is the ability to poll for 
raw events, i would believe that could be incorporated into Ceilometer.

[1] https://github.com/stackforge/stacktach
[2] http://docs.openstack.org/developer/ceilometer/events.html
[3] 
https://blueprints.launchpad.net/ceilometer/+spec/configurable-event-definitions
[4] https://blueprints.launchpad.net/ceilometer/+spec/event-sample-plugins
[5] https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-feature

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate proposal - drop Postgresql configurations in the gate

2014-06-12 Thread Gordon Chung
> If someone can point me to a case where we've actually found this kind
> of bug with tempest / devstack, that would be great. I've just *never*
> seen it. I was the one that did most of the fixing for pg support in
> Nova, and have helped other projects as well, so I'm relatively familiar
> with the kinds of fails we can discover. The ones that Julien pointed
> really aren't likely to be exposed in our current system.
>
> Which is why I think we're mostly just burning cycles on the existing
> approach for no gain.

not sure if this would get caught in mysql strict mode but we caught some 
differences between mysql/postgres in Ceilometer as well. ie. 
https://bugs.launchpad.net/ceilometer/+bug/1256318

personally, if resources weren't constrained i'd prefer both but out of 
curiosity, what was the reasoning for choosing to continue gating against 
mysql only rather than postgres only? is it known that mysql is the 
typical choice for openstack deployments?

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Gordon Chung
> I'm hoping we can just ACK this approach, and get folks to start moving
> patches through the clients to clean this all up.

just an fyi, in pyCADF, we obfuscate tokens similar to how credit cards 
are handled: by capturing a percentage of leading and trailing characters 
and substituting the middle ie. "4724  8478". whatever we decide 
here, i'm all for having a consistent way of masking and minimising tokens 
in OpenStack.
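to make that masking style concrete, here's a rough sketch in python -- the function name and defaults below are mine, not the actual pyCADF api:

```python
def mask_token(token, show=4, placeholder="...."):
    """Keep a few leading/trailing characters and hide the middle.

    Mirrors the credit-card style masking described above; the name
    and signature are illustrative, not the real pyCADF function.
    """
    if len(token) <= show * 2:
        # too short to mask meaningfully; hide it entirely
        return placeholder
    return token[:show] + placeholder + token[-show:]

print(mask_token("4724bdeadbeefcafe8478"))  # -> 4724....8478
```

whatever helper we standardise on, the important property is that the full token never reaches the logs.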

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [ceilometer] nominating Ildikó Váncsa and Nadya Privalova to ceilometer-core

2014-03-10 Thread Gordon Chung
> I'd like to nominate Ildikó Váncsa and Nadya Privalova as ceilometer

+1, thanks for the effort so far.

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-11 Thread Gordon Chung
i've created a bp to discuss whether moving the alarming into the pipeline is 
feasible and can cover all the use cases for alarms. if we can find a 
solution that is a bit leaner than what we have and still provides the same 
functionality coverage, i don't see why we shouldn't try it. it very well may 
be that what we have is the best solution.

https://blueprints.launchpad.net/ceilometer/+spec/alarm-pipelines

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer]Collector's performance

2014-03-11 Thread Gordon Chung
i did notice the collector service was only ever writing one db connection 
at a time. i've opened a bug for that here: 
https://bugs.launchpad.net/ceilometer/+bug/1291054
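just riffing on what parallel writes could look like -- a generic python sketch fanning records out to a thread pool, not the collector's actual code (the real storage driver would need to be safe to call from multiple threads):

```python
from concurrent.futures import ThreadPoolExecutor

def record_sample(db, sample):
    # stand-in for a storage driver write such as record_metering_data;
    # a plain list here just so the sketch is runnable
    db.append(sample)

def dispatch(db, samples, workers=4):
    # fan writes out to a small pool instead of one record at a time
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(record_sample, db, s) for s in samples]
        for f in futures:
            f.result()  # surface any write errors

db = []
dispatch(db, range(100))
print(len(db))  # -> 100
```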

i am curious as to why postgresql passes but not mysql? is postgres 
actually faster or are its default configurations set up better?

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Gordon Chung
hi Matt,

> test_ceilometer_resource_list which just calls ceilometer 
> resource_list from the
> CLI once is taking >=2 min to respond. For example:
> http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-
> postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
> (where it takes > 3min)

thanks for bringing this up... we're tracking this here: 
https://bugs.launchpad.net/ceilometer/+bug/1264434

i've put a patch out that partially fixes the issue. from bad to 
average... but i guess i should make the fix a bit more aggressive to 
bring the performance in line with the 'seconds' expectation.

cheers,
gordon chung
openstack, ibm software standards

Matthew Treinish  wrote on 17/03/2014 02:55:40 PM:

> From: Matthew Treinish 
> To: openstack-dev@lists.openstack.org
> Date: 17/03/2014 02:57 PM
> Subject: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer 
> resource_list CLI command
> 
> Hi everyone,
> 
> So a little while ago we noticed that in all the gate runs one of 
> the ceilometer
> cli tests is consistently in the list of slowest tests. (and often 
> the slowest)
> This was a bit surprising given the nature of the cli tests we expect them to
> execute very quickly.
> 
> test_ceilometer_resource_list which just calls ceilometer 
> resource_list from the
> CLI once is taking >=2 min to respond. For example:
> http://logs.openstack.org/68/80168/3/gate/gate-tempest-dsvm-
> postgres-full/07ab7f5/logs/tempest.txt.gz#_2014-03-17_17_08_25_003
> (where it takes > 3min)
> 
> The cli tests are supposed to be quick read-only sanity checks of the cli
> functionality and really shouldn't ever be on the list of slowest tests for a
> gate run. I think there was possibly a performance regression recently in
> ceilometer because from what I can tell this test used to normally take ~60 sec.
> (which honestly is probably too slow for a cli test too) but it is currently
> much slower than that.
> 
> From logstash it seems there are still some cases when the resource list takes
> as long to execute as it used to, but the majority of runs take a long time:
> http://goo.gl/smJPB9
> 
> In the short term I've pushed out a patch that will remove this test from gate
> runs: https://review.openstack.org/#/c/81036 But, I thought it would be good to
> bring this up on the ML to try and figure out what changed or why this is so
> slow.
> 
> Thanks,
> 
> -Matt Treinish
> 


Re: [openstack-dev] [Ceilometer] [QA] Slow Ceilometer resource_list CLI command

2014-03-17 Thread Gordon Chung
> Yep. At AT&T, we had to disable calls to GET /resources without any
> filters on it. The call would return hundreds of thousands of records,
> all being JSON-ified at the Ceilometer API endpoint, and the result
> would take minutes to return.

so the performance issue with resource-list is somewhat artificial... the 
gathering of resources itself can return in seconds with over a 
million records... the real cost is that the api also returns a list of 
all related meters for each resource. if i disable that, resource-list 
performance is decent (debatable).

>  the main problem with the get_resources() call is the
> underlying database schema for the Sample model is wacky, and forces
> the use of a dependent subquery in the WHERE clause [2] which completely
> kills performance of the query to get resources.

Jay, i've begun the initial steps to improve sql model and would love to 
get your opinion. i've created a bp here: 
https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql (i use 'big 
data' in quotes...)

Regarding the < 2 second requirement: i haven't seen the number of records 
tempest generates but i would expect sub 2 seconds would be a good target. 
that said, as Jay mentioned, as the load increases there's only so 
much performance you can get with hundreds of thousands to millions of records 
using an sql backend... at the very least it's going to fluctuate (how much 
is acceptable i have no clue currently).

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-20 Thread Gordon Chung
Alexei, awesome work.

> Rollbacks are caused not by retry logic but by create_or_update logic:
> We first try to do INSERT in sub-transaction when it fails we rollback 
> this transaction and do update instead.

if you take a look at my patch addressing deadlocks(
https://review.openstack.org/#/c/80461/), i actually added a check to get 
rid of the blind insert logic we had so that should lower the number of 
rollbacks (except for race conditions, which is what the function was 
designed for). i did some minor performance testing as well and will add a 
few notes to the patch where performance can be improved but requires a 
larger schema change.  Jay, please take a look there as well if you have 
time.
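for reference, the check-before-insert pattern looks roughly like this -- a generic sqlite sketch of the idea, not ceilometer's actual code; the IntegrityError path is what's left over for genuine races:

```python
import sqlite3

def get_or_create(conn, name):
    """Look up the row first; only INSERT when it's missing.

    A blind INSERT that rolls back on every duplicate generates the
    rollback churn described above; checking first reserves the error
    path for true races.  Generic sketch, not ceilometer code.
    """
    row = conn.execute(
        "SELECT id FROM resource WHERE name = ?", (name,)).fetchone()
    if row:
        return row[0]
    try:
        cur = conn.execute("INSERT INTO resource (name) VALUES (?)", (name,))
        return cur.lastrowid
    except sqlite3.IntegrityError:
        # another writer won the race; fall back to the lookup
        return conn.execute(
            "SELECT id FROM resource WHERE name = ?", (name,)).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
a = get_or_create(conn, "instance-1")
b = get_or_create(conn, "instance-1")
print(a == b)  # -> True
```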

> Tomorrow we'll do the same tests with PostgreSQL and MongoDB to see if 
> there is any difference.

i look forward to these results, from my quick testing with Mongo, we get 
about 10x the write speed vs mysql.

> >>>>> We required a non mongo backend to graduate ceilometer. So I don't think
> >>>>> it's too much to ask that it actually works.

i don't think sql is the recommended backend for real deployments but that 
said, given the modest load of tempest tests, i would expect our sql 
backend to be able to handle it.

cheers,
gordon chung
openstack, ibm software standards


[openstack-dev] [ceilometer][all] persisting dump tables after migration

2014-03-25 Thread Gordon Chung
in ceilometer we have a bug regarding residual dump tables left after 
migration: https://bugs.launchpad.net/ceilometer/+bug/1259724

basically, in a few prior migrations, when adding missing constraints, a 
dump table was created to back up values which didn't fit into the new 
constraints. i raised the initial bug because i believe there is very 
little value to these tables as i would expect any administrator capturing 
data of some importance would back up their data before any migration to 
begin with.  i noticed in Nova, they also clean up their dump tables but i 
wanted to raise this on the mailing list so that everyone is aware of the 
issue before i add a patch which blows away these dump tables. :)

i'd be interested if anyone actually finds value in having these dump 
tables persist just so we can see if your use case can be handle without 
the tables.

for reference, the dump tables are created in:
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/sqlalchemy/migrate_repo/versions/012_add_missing_foreign_keys.py
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/sqlalchemy/migrate_repo/versions/027_remove_alarm_fk_constraints.py

cheers,
gordon chung
openstack, ibm software standards


[openstack-dev] pyCADF 0.5 released

2014-04-01 Thread Gordon Chung
pyCADF is the python implementation of the DMTF Cloud Auditing Data 
Federation Working Group (CADF) specification.  pyCADF 0.5 has been tagged 
and should be available on PyPI and
our mirror shortly.

this release includes two changes:

* pycadf documentation
* Updated from global requirements

please report any problems here: https://bugs.launchpad.net/pycadf

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] Collector no recheck the db status

2014-04-02 Thread Gordon Chung
> I found that when the collector service is starting, if the db is not yet 
> ready, it will log an error like 'Could not load 'database': 
> could not connect to...' but the service still goes on. Later when 
> the db is ready, there are no mechanisms to check the db status 
> and reconnect it, so the collector service remains unable to record the data.

is this against master? does this patch resolve the issue for you? 
https://review.openstack.org/#/c/83595/


gordon chung
openstack, ibm software standards


[openstack-dev] [ceilometer] PTL candidacy

2014-04-02 Thread Gordon Chung
hi,

i'd like to announce my candidacy for PTL of Ceilometer.

as a little background, i've been a contributor to OpenStack for the past 
year and a half and have been primarily focused on Ceilometer for the past 
two cycles where i've been a core contributor. i contribute regularly to 
the project with code [1] and am one of the top reviewers [2]. i am also a 
developer at IBM where i work on the IBM software standards team, building 
standards such as the Cloud Audit Data Federation Working Group (CADF) 
specification.

there's a great deal to be discussed at the upcoming summit and i'm sure 
Ceilometer's contributors and deployers have a lot of ideas. in addition 
to those, i think some key items to focus on are:

- improving collector performance in Ceilometer. this has been a major 
item in Ceilometer recently as we tried to expand our integration tests in 
tempest and have found there are inefficiencies in how Ceilometer is 
collecting data. Ceilometer should be a lightweight service, capable of 
processing heavy load and we need to ensure it can handle this. i'd like 
to revisit our models (to ensure our models match what operators require) 
and build our backends around this so they are highly tuned to writes and 
reads for these use cases.
- improving polling agents. the compute agent currently creates a heavy 
load on compute-api[3] and the central agent has its own performance and 
HA issues... it's something we should really consider looking at.
- continue to expand tempest testing in ceilometer. the tempest tests have 
been helpful in identifying gaps/performance issues and will help later in 
identifying regression.
- review how we log in Ceilometer. we have a very repetitive framework 
design so our logs can be extremely overwhelming or extremely sparse. 

some other items to track (depending on resource) are:

- enhancing event support. there is a framework for events in Ceilometer 
but there is work to be done to make it a true monitoring tool. we have a 
few blueprints existing in Ceilometer that should help.
- re-evaluating the alarming service. we currently require two services to 
run for alarms, a notifier and an evaluator which also depends on the 
ceilometerclient. there is an interesting bp to move some alarm logic to 
the pipeline and i think we should take a look at it to see if it provides 
a cleaner, leaner solution.[4]. if not, we should work on documenting our 
current implementation.

[1] 
https://review.openstack.org/#/q/owner:chungg+status:merged+project:openstack/ceilometer,n,z
[2] http://russellbryant.net/openstack-stats/ceilometer-reviewers-365.txt
[3] 
http://openstack-in-production.blogspot.ca/2014/03/cern-cloud-architecture-update-for.html
[4] https://blueprints.launchpad.net/ceilometer/+spec/alarm-pipelines

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] Performance tests of ceilometer-collector and ceilometer-api with different backends

2014-04-23 Thread Gordon Chung
> Do you have any links to those blueprints? https://
> blueprints.launchpad.net/ceilometer/juno is pretty sparse.

we'll probably add targets closer to summit (or post summit).

blueprints of interest may be:
https://blueprints.launchpad.net/ceilometer/+spec/big-data-sql
https://blueprints.launchpad.net/ceilometer/+spec/tighten-model
https://blueprints.launchpad.net/ceilometer/+spec/bulk-message-handling

we're still prioritising design sessions but it's safe to say this session 
will be there:
http://summit.openstack.org/cfp/details/163

cheers,
gordon chung
openstack, ibm software standards


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread gordon chung
> mysql> select count(*) from metadata_text;
> +----------+
> | count(*) |
> +----------+
> | 25249913 |
> +----------+
> 1 row in set (3.83 sec)
>
> There were 25M records in one table.  The deletion time is reaching an
> unacceptable level (7 minutes for 4M records) and it was not increasing
> in a linear way.  Maybe DB experts can show me how to optimize this?
we don't do any customisations in the default ceilometer package so i'm sure 
there's a way to optimise... not sure if any devops ppl read this list. 
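one generic pattern that keeps large deletes closer to linear is deleting in bounded batches so no single transaction has to touch millions of rows -- a sqlite sketch of the idea below (on mysql you'd use DELETE ... LIMIT n similarly); this is riffing, not a tested recipe for the schema above:

```python
import sqlite3

def delete_in_batches(conn, table, batch=10000):
    # repeatedly delete a bounded chunk so each transaction stays
    # small; returns the total number of rows deleted
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM %(t)s WHERE rowid IN "
            "(SELECT rowid FROM %(t)s LIMIT %(n)d)" % {"t": table, "n": batch})
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metadata_text (v TEXT)")
conn.executemany("INSERT INTO metadata_text VALUES (?)", [("x",)] * 25000)
deleted = delete_in_batches(conn, "metadata_text")
print(deleted)  # -> 25000
```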
> Another question: does the mongodb backend support events now?
> (I asked this question in IRC, but, just as usual, no response from
> anyone in that community, no matter a silly question or not is it...)
regarding events, are you specifically asking about events 
(http://docs.openstack.org/developer/ceilometer/events.html) in ceilometer or 
using the events term in a generic sense? the table above has no relation to events 
in ceilometer; it's related to samples and the corresponding resource.  we did do 
some remodelling of the sql backend this cycle which should shrink the size of the 
metadata tables.
there's a euro-bias in ceilometer so you'll be more successful reaching people 
on irc during euro work hours... that said, you'll probably get the best response 
by posting to the list or pinging someone on the core team directly.
cheers,
gord


Re: [openstack-dev] [oslo] request_id deprecation strategy question

2014-10-20 Thread gordon chung
> The issue I'm highlighting is that those projects using the code now have
> to update their api-paste.ini files to import from the new location,
> presumably while giving some warning to operators about the impending
> removal of the old code.
This was the issue i ran into when trying to switch projects to oslo.middleware 
where i couldn't get jenkins to pass -- grenade tests successfully did their 
job. we had a discussion on openstack-qa and it was suggested to add an upgrade 
script to grenade to handle the new reference and document the switch. [1]
if there's any issue with this solution, feel free to let us know.
[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2014-10-10.log
 (search for gordc)
cheers,
gord


Re: [openstack-dev] [infra][devstack] CI failed The plugin token_endpoint could not be found

2014-11-14 Thread gordon chung
just an fyi, i had the same issue. i 'pip uninstall'ed all the python-*clients and 
it worked fine... i assume it's something to do with master (as i had it 
configured previously) since devstack seems to pull in the pypi versions.

cheers,
gord


[openstack-dev] [oslo] graduating oslo.middleware

2014-07-22 Thread gordon chung
hi,
following the oslo graduation protocol, could the oslo team review the 
oslo.middleware library[1] i've created and see if there are any issues.
[1] https://github.com/chungg/oslo.middleware
cheers,
gord




Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread gordon chung
> I left a comment on one of the commits, but in general here are my thoughts:
> 1) I would prefer not to do things like switch to oslo.i18n outside of 
> Gerrit.  I realize we don't have a specific existing policy for this, but 
> doing that significant 
> work outside of Gerrit is not desirable IMHO.  It needs to happen either 
> before graduation or after import into Gerrit.
> 2) I definitely don't want to be accepting "enable [hacking check]" changes 
> outside Gerrit.  The github graduation step is _just_ to get the code in 
> shape so it 
> can be imported with the tests passing.  It's perfectly acceptable to me to 
> just ignore any hacking checks during this step and fix them in Gerrit where, 
> again, 
> the changes can be reviewed.
> At a glance I don't see any problems with the changes that have been made, 
> but I haven't looked that closely and I think it brings up some topics for 
> clarification in the graduation process.


i'm ok to revert if there are concerns. i just vaguely remember a reference in 
another oslo lib about waiting for i18n graduation but tbh i didn't actually 
check back to see what the conclusion was.

cheers,
gord


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-24 Thread gordon chung
> Gordon, could you prepare a version of the repository that stops with the 
> export and whatever changes are needed to make the test jobs for the new 
> library run? If removing some of those tests is part of making the suite run, 
> we can talk about that on the list here, but if you can make the job run 
> without that commit we should review it in gerrit after the repository is 
> imported.

from what i recall, the stray tests commit was because running graduate.sh put 
the unit tests under tests/unit/middleware/xyz.py and added a few base test 
files that weren't used for anything. the commit removed the unused base test 
files and kept the test files directly under tests directory.
cheers,
gord


Re: [openstack-dev] [keystone][oslo][ceilometer] Moving PyCADF from the Oslo program to Identity (Keystone)

2014-07-25 Thread gordon chung
> > Before we move ahead, I would like to hear from the other current pycadf and
> > oslo team members, especially Gordon since he is the primary maintainer.
this move makes sense to me. auditing and identity have a strong link and all 
of the pyCADF work done so far has been connected to Keystone in some form so 
it makes sense to have it fall under Keystone's expanded scope.
as a sidebar... glad to have more help on pyCADF.
cheers,
gord


Re: [openstack-dev] [oslo][nova] can't rebuild local tox due to oslo alpha packages

2014-07-30 Thread gordon chung
> I noticed yesterday that trying to rebuild tox in nova fails because it 
> won't pull down the oslo alpha packages (config, messaging, rootwrap).
i ran into this yesterday as well. Doug suggested i update my virtualenv and 
that worked. i went from 1.10.1 to 1.11.x

cheers,
gord


Re: [openstack-dev] [ceilometer][nova] extra Ceilometer Samples of the instance gauge

2014-07-30 Thread gordon chung
> In a normal DevStack install, each Compute instance causes one Ceilometer 
> Sample every 10 minutes.  Except, there is an extra one every hour.  And a 
> lot of extra ones at the start.  What's going on here? 
instance is one meter which is generated through both polling and notifications 
(see *origin* column[1]). when you create/update/delete an instance in Nova, it 
will generate a set of notifications relating to the instance. Ceilometer 
listens to those notifications and generates samples from them.
[1] 
http://docs.openstack.org/developer/ceilometer/measurements.html#compute-nova
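the 10 minute cadence for the polled half comes from the interval configured in pipeline.yaml -- roughly this fragment (field values are illustrative and the publisher scheme varies by release):

```yaml
sources:
    - name: meter_source
      interval: 600        # poll every 10 minutes
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
```

the notification-driven samples bypass this interval entirely, which is why create/update/delete bursts show up on top of the steady polling.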

cheers,
gord


Re: [openstack-dev] [Ceilometer]How to change Keystone properties

2013-06-26 Thread Gordon Chung


> I assumed the configuration would be within a [keystone] section in
> ceilometer.conf.  Have not found much documentation about this.  Am I
> missing something?

close, if you're modifying the ceilometer.conf file, it should be under
[keystone_authtoken] section. you can find full details here:
http://docs.openstack.org/developer/ceilometer/configuration.html#keystone-middleware-authentication


here's what devstack sets up:

[keystone_authtoken]
signing_dir = /var/cache/ceilometer
admin_tenant_name = service
admin_password = pwpwpwpw
admin_user = ceilometer
auth_protocol = http

cheers,
gordon chung

openstack, ibm software standards


[openstack-dev] [Oslo] Bringing audit standards to Openstack

2013-07-04 Thread Gordon Chung


hi Folks,

just wanted to bring everyone's attention to this blueprint we have in
Ceilometer:
https://blueprints.launchpad.net/ceilometer/+spec/support-standard-audit-formats
 (detailed bp:
https://wiki.openstack.org/wiki/Ceilometer/blueprints/support-standard-audit-formats#Provide_support_for_auditing_events_in_standardized_formats
 )

as a little background, there are many projects that use Ceilometer to
track usage information for statistical usage analysis and billing.  these
projects are seeing similar auditing requirements which are missing
currently.  the above blueprint's proposal is to add support for auditing
API access using the Distributed Management Task Force's (DMTF) "Cloud Audit" 
standard (CADF).  you can read further into the spec via the latest public
draft here:
http://dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0a_0.pdf
 but to highlight the standard, it is an open standard developed by
multiple enterprises -- IBM, NetIQ, Microsoft, VMware, and Fujitsu to name
a few.  Also, the model is regulatory compliant (e.g. PCI-DSS, SoX, ISO
27017, etc.) and extensible so it should adapt to a broad range of uses.

initially, we drafted this to be part of Ceilometer but as we've worked
through it, we've noticed it is applicable in multiple projects. during the
course of our discussions with Keystone developers to assure we were
recording the correct data for audit, we found that Keystone itself had a
blueprint to add notifications and log audit data for their APIs:
https://blueprints.launchpad.net/keystone/+spec/notifications.

i thought i'd present this on the mailing list to gather feedback on the
idea of adopting CADF and discuss possibly its inclusion in Oslo so that
all the projects can use the same open standard when capturing events.

cheers,

gordon chung

openstack, ibm software standards
email: chu...@ca.ibm.com
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Bringing audit standards to Openstack

2013-07-08 Thread Gordon Chung


thanks for the feedback folks! you brought up good use case questions and
some gaps in our bp explanation.  i guess the main goal here was to
highlight a standard that could possibly help with how we deal with
notifications currently -- possibly standardizing it a bit more rather than
a grab bag of data.  the audit work will continue to be contained in
Ceilometer and maybe once we have the code in and working, it'll prove
itself to have greater value beyond Ceilometer.

cheers,
gordon chung

openstack, ibm software standards


[openstack-dev] [Ceilometer] what's in scope of Ceilometer

2013-08-28 Thread Gordon Chung
so we're in the process of selling Ceilometer to product teams so that 
they'll adopt it and we'll get more funding :).  one item that comes up 
from product teams is 'what will Ceilometer be able to do and where does 
the product take over and add value?'

the first question is, Ceilometer currently does metering/alarming/maybe a 
few other things... will it go beyond that? specifically: capacity 
planning, optimization, dashboards (i assume this falls under 
horizon/ceilometer plugin work), analytics. 
they're pretty broad items so i would think they would probably end up 
being separate projects?

another question is what metrics will we capture.  some of the product 
teams we have collect metrics on datacenter memory/cpu utilization, 
cluster cpu/memory/vm, and a bunch of other clustered stuff.
i'm a nova-idiot, but is this info possible to retrieve? is the consensus 
that Ceilometer will collect anything and everything the other projects 
allow for?

cheers,
gordon chung

openstack, ibm software standards
email: chungg [at] ca.ibm.com
phone: 905.413.5072


Re: [openstack-dev] [Telemetry] Time to test and import Panko

2016-05-17 Thread gordon chung
what needs to be tested here? that Ceilometer can dispatch to Panko? do 
we need to worry about any redirection like we had with Aodh?



On 17/05/2016 8:29 AM, Julien Danjou wrote:
> Hi fellows,
>
> I'm done creating Panko, our new project born from cutting off the event
> part of Ceilometer. It's at: https://github.com/jd/panko
>
> There are only a few commits as you can see:
>
>https://github.com/jd/panko/commits/master
>
> The code has been created in a 4 steps process:
> 1. Remove code that is not related to events storage and API
> 2. Rename to Panko
> 3. Remove base class for dispatcher
> 4. Rename database event dispatcher to panko
>
> Some testing would be welcome. It should be pretty straightforward, it
> provides `panko-api' that has a /v2/events endpoint and a "panko"
> event dispatcher for ceilometer-collector.
>
> The devstack plugin might need some love to integrate with Ceilometer,
> but I imagine we can do that in a later pass.
>
> I'm gonna create the openstack-infra patch to import the project unless
> someone tells me not to.
>
> Cheers,
>
>
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] Stable check of openstack/ceilometer failed

2016-06-14 Thread gordon chung
i don't know if anyone is looking at this -- i'm not sure where this test is 
even run. i usually let mriedem yell at me but i take it he has bigger things 
on his plate now :)

this seems like a pretty simple fix from the error output[1]. i guess my 
question is: should the correct fix be to cap oslo.utils? based on the error, 
the issue is that the total_seconds method was removed. it was only deprecated 
in Mitaka[2], so i don't think it should've been removed out from under Liberty. 
as this is an easy fix, i'm pretty indifferent if we decide to just fix it 
rather than slow down progress. the original purpose of this method (based on 
the commit message) seems to be related to py2.6 support. i don't know if that 
is still a concern.

[1] http://paste.openstack.org/show/515960/
[2] https://review.openstack.org/#/c/248590/
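for what it's worth, the replacement the deprecation pointed at is the stdlib 
method itself. a minimal sketch of the kind of one-line fix involved 
(illustrative only; the actual alarm-evaluator code is more involved):

```python
from datetime import timedelta

def total_seconds(delta):
    # the removed oslo.utils helper existed for py2.6, which lacked
    # timedelta.total_seconds(); on py2.7+ the stdlib method suffices
    return delta.total_seconds()

delta = timedelta(minutes=2, seconds=30)
print(total_seconds(delta))  # 150.0
```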

cheers,

--
gord


From: Ian Cordasco 
Sent: June 13, 2016 10:01:34 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] 
Stable check of openstack/ceilometer failed

-Original Message-
From: A mailing list for the OpenStack Stable Branch test reports.

Reply: openstack-dev@lists.openstack.org 
Date: June 13, 2016 at 01:13:54
To: openstack-stable-ma...@lists.openstack.org

Subject:  [Openstack-stable-maint] Stable check of openstack/ceilometer failed

> Build failed.
>
> - periodic-ceilometer-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-ceilometer-docs-liberty/204fcec/
> : SUCCESS in 5m 31s
> - periodic-ceilometer-python27-liberty 
> http://logs.openstack.org/periodic-stable/periodic-ceilometer-python27-liberty/00f7474/
> : FAILURE in 6m 20s

Hey ceilometer stable maintainers,

The following tests have been failing in periodic jobs for the last 4 days:

ceilometer.tests.unit.alarm.evaluator.test_base.TestEvaluatorBaseClass

- test_base_time_constraints_by_month
- test_base_time_constraints_complex
- test_base_time_constraints
- test_base_time_constraints_timezone

ceilometer.tests.unit.alarm.evaluator.test_combination.TestEvaluate

- test_no_state_change_outside_time_constraint
- test_state_change_inside_time_constraint

ceilometer.tests.unit.alarm.evaluator.test_gnocchi.TestGnocchiThresholdEvaluate

- test_no_state_change_outside_time_constraint

ceilometer.tests.unit.alarm.evaluator.test_threshold.TestEvaluate

- test_no_state_change_outside_time_constraint
- test_state_change_inside_time_constraint

And this one has been failing every day for almost a week now
(starting on 7 June 2016)

ceilometer.tests.unit.test_messaging.MessagingTests.test_get_transport_optional

Is anyone looking into these?

--
Ian Cordasco



Re: [openstack-dev] [Aodh] Ordering Alarm severity on context

2016-06-14 Thread gordon chung
hi,

i actually told him to raise it here. since our team is scattered pretty 
globally, we use the ML for this when we want a few more eyes on something 
debatable in a patch.

for me, my concern is whether there's a strong desire to have the ordering of 
severity done based on context vs alphabetically, as it is done now. if we 
want ordering to be done by context, the followup would be whether we want to 
support additional severity levels beyond low, moderate, and critical. i feel 
the proposed solution is pretty restrictive in that regard and i'd like to 
discuss a better one (if this is even important).

cheers,

--
gord


From: Mike Carden 
Sent: June 14, 2016 6:14:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Aodh] Ordering Alarm severity on context

Hi Sanjana and welcome to openstack-dev.

Having sent all of us your 'Hitachi1' password, you may want to change that. :)

In general, fishing for code reviews via the openstack-dev mailing list is a 
poor strategy. You may be better served by discovering the preferred 
communication channel(s) for the project you are interested in and making 
yourself known there.

Someone who knows a lot more than I do about aodh may come along to advise you 
of IRC channels and the like.

--
MC






Re: [openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] Stable check of openstack/ceilometer failed

2016-06-14 Thread gordon chung


On 14/06/2016 12:10 PM, Ian Cordasco wrote:
> I wonder why more projects aren't seeing this in stable/liberty. Perhaps, 
> ceilometer stable/liberty isn't using upper-constraints? I think oslo.utils 
> 3.2.0 
> (https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L202)
>  is low enough to avoid this if you're using constraints. (It looks as if the 
> total_seconds removal was first released in 3.12.0 
> https://github.com/openstack/oslo.utils/commit/8f5e65cae3aaf8d0a89d16d8932c266151de44f7)
>
that's strange. i tried to see if it's just Ceilometer: the 
periodic-neutron-python27-liberty job seems to be capped appropriately 
at oslo.utils 3.2.0, so maybe it's a transitive dependency? the 
periodic-nova-python27-liberty job doesn't seem to have any entries 
since March so i can't verify that one.

cheers,

-- 
gord


Re: [openstack-dev] [vitrage][aodh] Notifications about aodh alarm state changes

2016-06-24 Thread gordon chung


On 24/06/2016 9:58 AM, Julien Danjou wrote:
> On Thu, Jun 23 2016, Afek, Ifat (Nokia - IL) wrote:
>
>> I understood that during Aodh-Vitrage design session in Austin, you had a
>> discussion about Vitrage need for notifications about Aodh alarm state 
>> changes.
>> Did you have a chance to think about it since, and to check how this can be
>> done?
>
> IIRC this was already implemented, just not enable by default and
> probably not well documented.
>
> Actually I see we took some note here:
>
>   https://etherpad.openstack.org/p/newton-telemetry-vitrage
>
> Feel free to send patches. :)

who put my name as TODO? *shakes fist*

sorry i haven't had time to look at it but *may* have time later next 
week. feel free to take it if you have time. we can provide guidance.

cheers,

-- 
gord



[openstack-dev] [gnocchi] profiling and benchmarking 2.1.x

2016-06-24 Thread gordon chung
hi,

i realised i didn't post this beyond IRC, so here are some initial 
numbers for some performance/benchmarking i did on Gnocchi.

http://www.slideshare.net/GordonChung/gnocchi-profiling-21x

as a heads-up, the data above is using Ceph with pretty much a 
default configuration. i'm currently doing more tests to see how it 
works if you actually start turning some knobs on Ceph (spoiler: it gets 
better).

cheers,

-- 
gord


Re: [openstack-dev] [Openstack-operators] [gnocchi] profiling and benchmarking 2.1.x

2016-07-04 Thread gordon chung
hi folks,

i didn't get as far as i'd hoped, so i've decided to release what i have 
currently and create another deck for more 'future enhancements' 
benchmarking.

the following deck aims to show how performance changes as you scale out 
Ceph and some configuration options that can help stabilise Gnocchi+Ceph 
performance: http://www.slideshare.net/GordonChung/gnocchi-profiling-v2.

this is by no means a large-scale architecture -- the tests are run 
against tens of thousands of metrics currently -- but it's a start until we 
get more data. i'm hoping to test against a larger dataset going 
forward. i will also be testing some enhancements we've been discussing 
for Gnocchi 3.x.

hope it helps.

cheers,


On 25/06/2016 8:50 AM, Curtis wrote:
> On Fri, Jun 24, 2016 at 2:09 PM, gordon chung  wrote:
>> hi,
>>
>> i realised i didn't post this beyond IRC, so here are some initial
>> numbers for some performance/benchmarking i did on Gnocchi.
>>
>> http://www.slideshare.net/GordonChung/gnocchi-profiling-21x
>>
>> as a headsup, the data above is using Ceph and with pretty much a
>> default configuration. i'm currently doing more tests to see how it
>> works if you actually start turning some knobs on Ceph (spoiler: it gets
>> better).
>
> Thanks Gordon. I will definitely take a look through your slides.
> Looking forward to what test results you get when you start turning
> some knobs.
>
> Thanks,
> Curtis.
>
>>
>> cheers,
>>
>> --
>> gord
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>

-- 
gord


Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-30 Thread gordon chung


On 30/03/2016 8:06 AM, Julien Danjou wrote:
> On Wed, Mar 30 2016, Chris Dent wrote:
>
>> Another option on the meetings would be to do what the cross project
>> meetings do: Only have the meeting if there are agenda items.
>
> That's a good idea, I'd be totally cool with that too. We could send a
> mail indicating there would be meeting with a 1 week prior notice.
>

you don't like watching me talk to myself?

i believe this problem arises mainly at the beginning and end of the cycle, 
where we don't have any issues regarding blueprints because they were either 
just discussed or won't be discussed. it also doesn't help that the meeting 
time is tied to North America when most of our devs are not here.

that said, i do prefer following the cross-project model. when should we 
set the cutoff for meeting items? we shouldn't make the cutoff so late that 
it causes people to sit around waiting to see if there's a meeting or not.

cheers,

-- 
gord



Re: [openstack-dev] [telemetry] Rescheduling IRC meetings

2016-03-31 Thread gordon chung
i.e. the new PTL should checkpoint with subteam leads regularly to review 
spec status or identify missing resources on high-priority items?

as some feedback re: news flash mails, we need a way to promote roadmap 
backlog items better. i'm not sure anyone looks at the Road Map page... 
maybe we need to organise it better with priority and incentive.

On 31/03/2016 7:14 AM, Ildikó Váncsa wrote:
> Hi All,
>
> +1 on the on demand meeting schedule. Maybe we can also have some news flash 
> mails each week to summarize the progress in our sub-modules when we don't have 
> the meeting. Just to keep people up to date.
>
> Will we already skip the today's meeting?
>
> Thanks,
> /Ildikó
>
>> -Original Message-
>> From: Julien Danjou [mailto:jul...@danjou.info]
>> Sent: March 31, 2016 11:04
>> To: liusheng
>> Cc: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [telemetry] Rescheduling IRC meetings
>>
>> On Thu, Mar 31 2016, liusheng wrote:
>>
>>> Another personal suggestion:
>>>
>>> maybe we can have a weekly routine mail thread to present the things
>>> need to be discussed or need to be notified. The mail will also list
>>> the topics posted in meeting agenda and ask to Telemetry folks if  a
>>> online IRC meeting is necessary, if there are a very few topics or low
>>> priority topics, or the topics can be suitably discussed
>>> asynchronously, we can discuss them in the mail thread.
>>>
>>> any thoughts?
>>
>> Yeah I think it's more or less the same idea that was proposed, schedule a 
>> meeting only if needed. I'm going to amend the meeting
>> wiki page with that!
>>
>> --
>> Julien Danjou
>> -- Free Software hacker
>> -- https://julien.danjou.info
>

-- 
gord



[openstack-dev] [election][tc] tc candidacy

2016-03-31 Thread gordon chung
hi folks,

i'd like to announce my candidacy for the OpenStack Technical Committee.

as a quick introduction, i've been a contributor in OpenStack for the 
past few years, focused primarily on various Telemetry and Oslo related 
projects. i was recently the Project Team Liaison for the Telemetry 
project, and currently, i'm an engineer at Huawei Technologies Canada 
where i work with a team of developers that contribute to the OpenStack 
community.

my views on the next steps for OpenStack are not unique. i share the 
common idea that OpenStack needs to refine its mission. this is not to 
dissuade developers from continuing to build upon and extend the 
existing projects, but i think OpenStack should get the core story right 
first before worrying about extended use cases.

also, i believe that the existing projects in OpenStack are too siloed. 
coming from a project that interacts with all other services, it's quite 
apparent that projects are focused solely on their own offerings and 
ignore how things work globally. i believe tighter collaboration between 
projects will help make integration easier and more efficient. similar to 
how services within a project just work together, projects should just 
work together.

in many ways i see the Technical Committee as the Cross Project Liaisons 
some of us are searching for. rather than act as Guardians of "what is 
OpenStack", i think the TC should take a more active role in the 
projects and work together with the PTLs to ensure that the "Cloud" 
story makes sense. PTLs and TC members should work side by side to 
identify gaps in the story rather than the current interaction agreement 
of "you're in. we'll talk if stuff hits the fan".

understandably the "Cloud" story is different to many people so i'll 
employ the strategy i've used previously: listen to others, share the 
decision, share the blame, and claim the success.

thanks for your time.

-- 
gord


Re: [openstack-dev] [telemetry] Newton design summit planning

2016-04-05 Thread gordon chung
seems fine to me. we still don't have a story around events. we just 
sort of store and dump them right now. i guess this could probably fit 
into the ceilometer split session... or maybe everyone is cool with just 
pushing things to elasticsearch and letting users play with that + kibana.

On 05/04/2016 5:58 AM, Julien Danjou wrote:
> Hi folks,
>
> I've started to fill in the blank for our next design summit sessions.
> The schedule is at:
>
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Telemetry%3A
>
> We still have the 2 first work sessions empty, because I ran out of
> idea. I've used the Etherpad we previously announced¹ to build the
> schedule. Let me also know if you think we should shuffle things around.
>
> Any items/ideas we could discussion in this remaining sessions?
> Any project we could/should invite to discuss with us?
>
> Cheers,
>
> ¹  https://etherpad.openstack.org/p/newton-telemetry-summit-planning
>
>
>

-- 
gord



Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread gordon chung


On 08/04/2016 9:32 AM, Sheel Rana Insaan wrote:
> I agree with Thierry Carrez, I think this would help.
>
> Along with this, we need to motivate new joiners to continue with openstack.
> Most of them leave earlier or participate less due to some demotivating
> factors like coding is easy but getting things reviewed is much more
> difficult. :)
>
> In every group we have some extraordinary guys who help new
> contributors join and continue on, but some are just enjoying corehood.
> Maybe we should also make core members on a rotation basis to give equal
> chances to everyone. This will also motivate those who are working for
> openstack in their part time.
>

forced rotations arguably wouldn't be fair to the cores that are still 
active. maybe it's best if projects don't adhere to some unwritten 
self-imposed "one out, one in" core rule. understandably it's more 
difficult to manage a large group of cores, but it does seem strange 
that all the projects have roughly the same core team size even though 
some projects have a contributor base that is significantly larger than 
others.

cheers,

-- 
gord



Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread gordon chung


On 08/04/2016 9:14 AM, Thierry Carrez wrote:
> Eoghan Glynn wrote:
>>> However, the turnout continues to slide, dipping below 20% for
>>> the first time:
>>
>>Election | Electorate (delta %) | Votes | Turnout (delta %)
>>===========================================================
>>Oct '13  | 1106             | 342   | 30.92
>>Apr '14  | 1510   (+36.52)  | 448   | 29.69    (-4.05)
>>Oct '14  | 1893   (+25.35)  | 506   | 26.73    (-9.91)
>>Apr '15  | 2169   (+14.58)  | 548   | 25.27    (-5.48)
>>Oct '15  | 2759   (+27.20)  | 619   | 22.44   (-11.20)
>>Apr '16  | 3284   (+19.03)  | 652   | 19.85   (-11.51)
>>
>>>
>>> This ongoing trend of a decreasing proportion of the electorate
>>> participating in TC elections is a concern.
>
> One way to look at it is that every cycle (mostly due to the habit of
> giving summit passes to recent contributors) we have more and more
> one-patch contributors (more than 600 in Mitaka), and those usually are
> not really interested in voting... So the electorate number is a bit
> inflated, resulting in an apparent drop in turnout.
>
> It would be interesting to run the same analysis but taking only >=3
> patch contributors as "expected voters" and see if the turnout still
> drops as much.
>
> Long term I'd like to remove the summit pass perk (or no longer link it
> to "one commit"). It will likely result in a drop in contributors
> numbers (gasp), but a saner electorate.
>

just for reference: while it only affects a subset of the electorate, if 
you look at the PTL elections, they all had over 40% turnout (even the 
older and larger projects).

it may be because of those with "one commit", but if that were the case, 
you would think the TC turnout would be in line with the PTL elections.

cheers,
-- 
gord



Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-08 Thread gordon chung


On 08/04/2016 1:26 PM, Davanum Srinivas wrote:
> Team,
>
> Steve pointed out to a problem in Stackalytics:
> https://twitter.com/stevebot/status/718185667709267969
>
> It's pretty clear what's happening if you look here:
> https://review.openstack.org/#/q/owner:openstack-infra%2540lists.openstack.org+status:open
>
> Here's the drastic step (i'd like to avoid):
> https://review.openstack.org/#/c/303545/
>

is it actually affecting anything in the community aside from the 
reviews being useless? beyond the 'diversity' tags in governance, 
does anything else use stackalytics?

cheers,

-- 
gord



Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-08 Thread gordon chung


On 08/04/2016 5:23 PM, Davanum Srinivas wrote:
> On Fri, Apr 8, 2016 at 5:10 PM, gordon chung  wrote:
>>
>>
>> is it actually affecting anything in the community aside from the
>> reviews being useless. aside from the 'diversity' tags in governance,
>> does anything else use stackalytics?
>
> Gordon,
>
> I feel that we are missing an opportunity to teaching what we want new
> folks to do! As a group we should all try to spot these patterns and
> make sure everyone's efforts are fruitful.
>
> To that effect, i am capturing stuff here:
> https://davanum.wordpress.com/2016/04/08/new-to-openstack-reviews-start-here/
>
> Thanks,
> Dims
>

i imagine the fact they're gaming the stackalytics system means they're 
aware of what they're doing and no one has called them out on it yet. i'm 
a glass-half-empty individual :)

this has been discussed in some form already in the past[1] and it'll 
probably keep happening. i get the feeling that if you tell someone to stop 
putting useless reviews in one place, they'll do it somewhere else or at 
random frequencies -- you may be endlessly chasing white noise. if it 
only affects lazy managers using stackalytics then it's probably not a 
big deal for now? i can't imagine the current useless reviews are 
swaying the overall stats of a project (aside from top global reviewers).

i like anteaya/your posts, hopefully it helps.

[1] source: i'm too lazy to search list

cheers,

-- 
gord



Re: [openstack-dev] [all][stackalytics] Gaming the Stackalytics stats

2016-04-11 Thread gordon chung


On 11/04/2016 5:10 AM, Thierry Carrez wrote:
> gordon chung wrote:
>> On 08/04/2016 1:26 PM, Davanum Srinivas wrote:
>>> Steve pointed out to a problem in Stackalytics:
>>> https://twitter.com/stevebot/status/718185667709267969
>>>
>>> It's pretty clear what's happening if you look here:
>>> https://review.openstack.org/#/q/owner:openstack-infra%2540lists.openstack.org+status:open
>>>
>>>
>>> Here's the drastic step (i'd like to avoid):
>>> https://review.openstack.org/#/c/303545/
>>>
>>
>> is it actually affecting anything in the community aside from the
>> reviews being useless. aside from the 'diversity' tags in governance,
>> does anything else use stackalytics?
>
> Although I feel like there has been less of that over the last few
> releases, Stackalytics is where the press (and some companies) point to
> to find out who the "#1 OpenStack company" is, or what a particular
> company rank is in the contributions list.
>
> On http://www.openstack.org/software/mitaka/ you can click "Contributor
> stats" which points to: http://stackalytics.com/?release=mitaka -- and
> by default this shows Reviews stats (which are the easiest to game).
>
> Maybe it's time to revert back to "Commits" as the default stat shown on
> Stackalytics ? At least for a while ?
>
> The only protection against metrics being gamed is to change them often
> enough... I'm also a big fan of original retrospective analysis, where
> you look at past data and find interesting metrics (rather than
> predefine a metric for future data and hope nobody will game it).
>

commits is probably a better stat to show participation, but that said, 
there will always be ways to game stats. if a company (or the 
foundation) uses Stackalytics to promote their brand, there will always 
be ways for them to be #1 at something; any stat can be skewed in any 
way you want -- that's the basic idea of marketing.

i still don't think this is a concern until it starts affecting our 
(developers') workflow. if an individual's random +1 vote bothers you, 
call them out on it. if this becomes a swarm of random +1s where it 
starts negatively affecting your review process, then maybe we need more 
aggressive measures.

cheers,

-- 
gord



Re: [openstack-dev] [ceilometer][tempest] disabling 'full' tempest tests for ceilometer changes in CI

2016-04-12 Thread gordon chung
i'd be in favour of dropping the full cases -- never really understood 
why we ran all the tests everywhere. ceilometer/aodh are present at the 
end of the workflow so i don't think we need to be concerned with any of 
the other tests, only the ones explicitly related to ceilometer/aodh.

On 12/04/2016 8:12 AM, Ryota Mibu wrote:
> Hi,
>
>
> Can we disable 'full' tempest tests for ceilometer?
>
>  - gate-tempest-dsvm-ceilometer-mongodb-full
>  - gate-tempest-dsvm-ceilometer-mysql-full
>  - gate-tempest-dsvm-ceilometer-mysql-neutron-full
>  - gate-tempest-dsvm-ceilometer-postgresql-full
>  - gate-tempest-dsvm-ceilometer-es-full
>
> We've merged ceilometer test cases from tempest repo to ceilometer/aodh repo 
> as tempest plugin [1-2].
> So, I suppose we can disable these jobs for ceilometer checks and gates, once 
> we enabled ceilometer tempest tests with migrated codes [3].
>
> But, it will stop other tempest tests not in ceilometer dir of tempest in 
> gate jobs for ceilometer, as current setup is to kick 'full' tempest tests 
> even for ceilometer changes.
> Let me know if there is any problem.
> I might miss the original intention of Jenkins job setup for ceilometer.
>
> [1] https://blueprints.launchpad.net/ceilometer/+spec/tempest-plugin
> [2] https://blueprints.launchpad.net/ceilometer/+spec/tempest-aodh-plugin
> [3] https://review.openstack.org/#/c/303921/
>
>
> Thanks,
> Ryota
>
>

-- 
gord



Re: [openstack-dev] [ceilometer][horizon] metric-list not complete and show metrics in horizon dashboard

2016-04-12 Thread gordon chung


On 12/04/2016 7:47 AM, Safka, JaroslavX wrote:

> *
> And my question is: How is connected the database table meter and the command 
> metric-list?

i assume you mean meter-list. it uses a combination of data from the meter 
table and the resource table[1]. this is because it lists all the meters and 
the resources associated with them. yes, it's complicated (which is why 
we'd recommend Gnocchi or your own special solution).
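to illustrate the shape of that query, here is a toy sqlite sketch of a 
meter-list style join -- purely illustrative, not Ceilometer's actual tables, 
column names, or query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resource (internal_id INTEGER PRIMARY KEY, resource_id TEXT);
    CREATE TABLE meter (id INTEGER PRIMARY KEY, name TEXT, type TEXT, unit TEXT);
    CREATE TABLE sample (meter_id INTEGER, resource_id INTEGER);
""")
conn.execute("INSERT INTO resource VALUES (1, 'instance-0001')")
conn.execute("INSERT INTO meter VALUES (1, 'cpu', 'cumulative', 'ns')")
conn.execute("INSERT INTO sample VALUES (1, 1)")

# a meter-list style listing: every meter joined to the resources
# that emitted samples for it
rows = conn.execute("""
    SELECT m.name, m.type, m.unit, r.resource_id
    FROM sample s
    JOIN meter m ON m.id = s.meter_id
    JOIN resource r ON r.internal_id = s.resource_id
""").fetchall()
print(rows)  # [('cpu', 'cumulative', 'ns', 'instance-0001')]
```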

> Second question how I can propagate the metrics to horizon dashboard 
> "Resource usage"?
> (I'm able only see cpu metric from started worker image)

horizon uses ceilometerclient to grab data so i imagine they are using 
some combination of resource-list, sample-list, meter-list. as Matthias 
mentioned, you probably shouldn't rely on the existing horizon interface 
as the hope is to deprecate the view in horizon since no one is really 
sure what it's designed to show.

>
> Background:
> I'm writing plugin which connects collectd and ceilometer and I want to see 
> the collectd metrics in the horizon or at least in the ceilometer shell.

is this an addition to the work that Emma did[2]? i'm assuming so, given 
your locale/company.

[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/storage/impl_sqlalchemy.py#L539-L549
[2] https://github.com/openstack/collectd-ceilometer-plugin

-- 
gord



[openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-13 Thread gordon chung
hi Nadya,

copy/pasting the full original message with comments inline to clarify some 
points.

i think a lot of the confusion is because we use pipeline.yaml across 
both polling and notification agents when really it only applies to the 
latter. just an fyi, we've had an open work item to create a 
polling.yaml file... it's just blocked on the issue of 'resources'.

> Hello colleagues,
>
> I'd like to discuss one question with you. Perhaps, you remember that
> in Liberty we decided to get rid of transformers on polling agents [1]. I'd
> like to describe several issues we are facing now because of this decision.
> 1. pipeline.yaml inconsistency.
> Ceilometer pipeline consists from the two basic things: source and
> sink. In source, we describe how to get data, in sink - how to deal with
> the data. After the refactoring described in [1], on polling agents we
> apply only "source" definition, on notification agents we apply only "sink"
> one. It causes the problems described in the mailing thread [2]: the "pipe"
> concept is actually broken. To make it work more or less correctly, the
> user should care that from a polling agent he/she doesn't send duplicated
> samples. In the example below, we send "cpu" Sample twice each 600 seconds
> from a compute agents:
>
> sources:
> - name: meter_source
> interval: 600
> meters:
> - "*"
> sinks:
> - meter_sink
> - name: cpu_source
> interval: 60
> meters:
> - "cpu"
> sinks:
> - cpu_sink
> - cpu_delta_sink
>
> If we apply the same configuration on notification agent, each "cpu" Sample
> will be processed by all of the 3 sinks. Please refer to the mailing thread
> [2] for more details.
> As I understood from the specification, the main reason for [1] is
> making the pollster code more readable. That's why I call this change a
> "refactoring". Please correct me if I miss anything here.

i don't know about more readable. it was also to offload work from 
compute nodes and all the stuff cdent mentions.
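as an aside, the duplication described in the quoted pipeline above can be 
reproduced with a tiny simulation of the source/sink matching. this is 
illustrative only -- a sketch of the matching rule, not the actual pipeline 
code:

```python
from fnmatch import fnmatch

# the example pipeline from the quoted mail: a wildcard source plus
# a dedicated cpu source with two sinks
sources = [
    {"name": "meter_source", "meters": ["*"], "sinks": ["meter_sink"]},
    {"name": "cpu_source", "meters": ["cpu"],
     "sinks": ["cpu_sink", "cpu_delta_sink"]},
]

def sinks_for(sample_name):
    # a sample flows into the sinks of every source whose meter
    # patterns match it, so overlapping sources mean duplicate work
    hit = []
    for src in sources:
        if any(fnmatch(sample_name, pat) for pat in src["meters"]):
            hit.extend(src["sinks"])
    return hit

print(sinks_for("cpu"))     # ['meter_sink', 'cpu_sink', 'cpu_delta_sink']
print(sinks_for("memory"))  # ['meter_sink']
```

a single polled "cpu" sample ends up in three sinks, which is the duplication 
being debated.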

>
> 2. Coordination stuff.
> TBH, coordination for notification agents is the most painful thing for
> me because of several reasons:
>
> a. Stateless service has became stateful. Here I'd like to note that tooz
> usage for central agents and alarm-evaluators may be called "optional". If
> you want to have these services scalable, it is recommended to use tooz,
> i.e. install Redis/Zookeeper. But you may have your puppets unchanged and
> everything continue to work with one service (central agent or
> alarm-evaluator) per cloud. If we are talking about notification agent,
> it's not the case. You must change the deployment: eighter rewrite the
> puppets for notification agent deployment (to have only one notification
> agent per cloud)  or make tooz installation with Redis/Zookeeper required.
> One more option: remove transformations completely - that's what we've done
> in our company's product by default.

the polling change is not related to the coordination work in the 
notification agent. the coordination work was to handle HA / multiple 
notification agents; regardless of the polling change, it must exist.

>
> b. RabbitMQ high utilisation. As you know, tooz does only one part of
> coordination for a notification agent. In Ceilometer, we use an IPC queue
> mechanism to be sure that samples for one metric from one
> resource are processed by exactly one notification agent (to make it
> possible to use a local cache). I'd like to remind you that without
> coordination (but with [1] applied) each compute agent polls each instance
> and sends the result as one message to a notification agent. The

this is not entirely accurate. pre-polling change, the polling agents 
published one message per sample. now the polling agents publish one 
message per interval (multiple samples).

> notification agent processes all the samples and sends as many messages to
> the collector as there are sinks defined (2-4, not many). If [1] is not
> applied, one "publishing" round is skipped. But with [1] and coordination
> (it's the most recommended deployment), the amount of publications increases
> dramatically because we publish each Sample as a separate message. Instead
> of 3-5 "publish" calls, we do 1+2*instance_amount_on_compute publishings
> per compute node. And it's by design, i.e. it's not a bug but a feature.

i don't think the maths is right but regardless, IPC is one of the 
standard use cases for message queues. the concept of using queues to 
pass around and distribute work is essentially what it's designed for. 
if rabbit or any message queue service can't provide this function, it 
does worry me.
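as a concrete sketch of the work-distribution pattern being defended here (the queue names and hashing scheme below are illustrative, not ceilometer's actual ones): samples for a given resource always land on the same IPC queue, so the one agent consuming that queue can keep a local cache for order-sensitive transformers:

```python
import hashlib

# hypothetical per-worker IPC queues
IPC_QUEUES = ["ceilometer-pipe-0", "ceilometer-pipe-1", "ceilometer-pipe-2"]

def queue_for(resource_id):
    # stable hash: the same resource always routes to the same queue,
    # so a single consumer sees all samples for that resource in order
    digest = hashlib.md5(resource_id.encode("utf-8")).hexdigest()
    return IPC_QUEUES[int(digest, 16) % len(IPC_QUEUES)]

# every agent publishing samples for one instance picks the same queue
assert queue_for("instance-abc") == queue_for("instance-abc")
```

the cost of this pattern is exactly the extra per-sample publishes described in the quoted text; the benefit is that transformer state never has to be shared between agents.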

>
> c. Samples ordering in the queues. It may be considered a corner case,
> but anyway I'd like to describe it here too. We have a lot of
> order-sensitive transformers (cpu.delta, cpu_util), but we can guarantee
> message ordering only in the "main" polling queue, not in the IPC queues. In
> the picture below (hope it will be displayed) there are 3 agents A1, A2 and
> A3 and 3 time-ordered m

Re: [openstack-dev] [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty(and Mitaka)

2016-04-14 Thread gordon chung


On 14/04/2016 5:28 AM, Nadya Shakhat wrote:
> Hi Gordon,
>
> I'd like to add some clarifications and comments.
>
> this is not entirely accurate pre-polling change, the polling agents
> publish one message per sample. not the polling agents publish one
> message per interval (multiple samples).
>
> Looks like there is some misunderstanding here. In the code, there is a
> "batch_polled_samples" option. You can switch it off and get the result
> you described, but it's True by default. See
> https://github.com/openstack/ceilometer/blob/master/ceilometer/agent/manager.py#L205-L211

right... the polling agents by default publish one message per 
interval, as i said (if you s/not/now/), whereas before it was publishing 
1 message per sample. i don't see why that's a bad thing?
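roughly, the behaviour difference (a sketch of the batching semantics, not the actual manager code):

```python
def publish(samples, batch_polled_samples=True):
    # batch_polled_samples=True (the default): one message per polling
    # interval carrying all samples; False: one message per sample
    if batch_polled_samples:
        return [samples]
    return [[s] for s in samples]

assert len(publish(["cpu", "memory", "disk"])) == 1
assert len(publish(["cpu", "memory", "disk"], batch_polled_samples=False)) == 3
```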

> .
>
> You wrote:
>
> the polling change is not related to coordination work in notification.
> the coordination work was to handle HA / multiple notification agents.
> regardless polling change, this must exist.
>
> and
>
> transformers are already optional. they can be removed from
> pipeline.yaml if not required (and thus coordination can be disabled).
>
>
> So, coordination is needed only to support transformations. Polling
> change does relate to this because it has brought additional
> transformations on notification agent side. I suggest to pay attention
> to the existing use cases. In real life, people use transformers for
> polling-based metrics only. The most important use case for
> transformation is Heat autoscaling. It is usually based on cpu_util. Before
> Liberty, we were able to avoid coordination for the notification agent and
> still support the autoscaling use case. In Liberty we cannot support it without
> Redis. Now "transformers are already optional", that's true. But I think
> it's better to add some restrictions like "we don't support
> transformations for notifications" and have transformers optional on
> polling-agent only instead of introducing such a comprehensive
> coordination.

i'm not sure it's safe to say its only use is cpu_util. that said, 
cpu_util ideally shouldn't be a transform anyways. see the work Avi was 
doing[1].
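for context, cpu_util is derived from the cumulative cpu counter roughly like this (a sketch of the rate-of-change arithmetic, not the transformer implementation):

```python
def cpu_util(prev, cur):
    """prev/cur: (timestamp_s, cumulative_cpu_ns, vcpus) polled samples."""
    (t0, cpu0, _), (t1, cpu1, vcpus) = prev, cur
    elapsed_ns = (t1 - t0) * 1e9
    # fraction of available cpu time actually consumed, as a percentage
    return 100.0 * (cpu1 - cpu0) / (elapsed_ns * vcpus)

# 30s of cpu time over a 60s interval on 1 vcpu -> 50% utilisation
print(cpu_util((0, 0, 1), (60, 30_000_000_000, 1)))  # 50.0
```

needing the previous sample for the same resource is exactly why this transform is stateful, and why it drags coordination in with it.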


>
> IPC is one of the
> standard use cases for message queues. the concept of using queues to
> pass around and distribute work is essentially what it's designed for.
> if rabbit or any message queue service can't provide this function, it
> does worry me.
>
>
> I see your point here, but Ceilometer aims to take care of
> OpenStack, monitoring its state. Now it is known as a "Rabbit killer". We
> cannot ignore that if we want anybody to use Ceilometer.

what is the message load we're seeing here? how is your MQ configured? 
do you have batching? how many agents/queues do you have? i think this 
needs to be reviewed first, to be honest, as there really isn't much to go on.


[1] https://review.openstack.org/#/c/182057/


-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][vitrage] Joint design sesssion in Austin

2016-04-14 Thread gordon chung
cool! either works for me. prefer earlier slot if it's just the one.

On 14/04/2016 5:07 AM, Julien Danjou wrote:
> Hi folks,
>
> Vitrage doesn't have any track/session at the summit, and we Telemetry
> have a bunch of spare ones, so I figured we should use one to meet and
> chat a bit about how our projects can help each others. There should be
> some interesting evolution for Aodh going forward with the usage Vitrage
> is making of it.
>
> We got 2 slots available on Thursday 28th April: 16:10-16:50 or
> 17:00-17:40. Would either of those fit the schedule of everyone interested?
>
> Cheers,
>
>
>

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [searchlight] [ceilometer] [fuel] [freezer] [monasca] Elasticsearch 2.x gate support

2016-04-25 Thread gordon chung


On 24/04/2016 6:53 PM, Julien Danjou wrote:
> On Mon, Apr 18 2016, McLellan, Steven wrote:
>
> Hi Steven,
>
> […]
>
>> * Have you tested with ES 2.x at all?
>> * Do you have plans to move to ES 2.x?
>
> I don't think we have tested that nor anyone working actively on ES.
> Gordon Chung is probably the more knowledgeable on that piece.
>
> Though I would not want Ceilometer to block any other project going to
> be heavily relying on ES. So if nobody steps up to update Ceilometer for
> ES 2, we can also look into deprecating the driver.
>
> I'll let Gordon weight in.
>

i haven't tried 2.x personally. we use pretty standard functionality for 
ElasticSearch driver and at quick glance, it seems like all the apis 
still exist.

i agree with jd, feel free to bump and we'll deal with the effects.

i guess the only question i have is whether ES 2.x is even an option? is 
it packaged by distro or are they still using 1.x?

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from core

2016-05-10 Thread gordon chung
great work Yanis! (i am talking about your ping pong skills). all the 
best in your new adventure.

On 10/05/2016 3:52 AM, Yanis Guenane wrote:
> Hello all,
>
> After the Mitaka summit, my main area of focus at work changed and I
> couldn't find the necessary time to help anymore on the puppet modules.
>
> After a cycle I don't see my situation changing anytime soon, therefore
> I'd like to step down as a core-reviewer.
>
> Reading about the Austin summit it seems like the modules have reached a
> nice level of maturity. This is excellent news.
>
> Thanks to everybody for this adventure, it was an enriching one.
> I wish you the best for the challenges that come next,
>
>
>

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [senlin] [keystone] [ceilometer] [telemetry] Questions about api-ref launchpad bugs

2016-05-10 Thread gordon chung


On 10/05/2016 9:36 AM, Anne Gentle wrote:
>
> It's a small set of files:
> https://github.com/openstack/api-site/tree/master/api-ref/source/telemetry/v2
> How about I ask someone to do the conversion and add it to
> https://github.com/openstack/ceilometer? I have someone in mind who's
> looking for a task. Let me know and I'll get her started.

i wouldn't mind this if it's free help :). although we should probably 
add a disclaimer that things may be dropped as we work on streamlining 
the telemetry workflow. e.g. the Alarming API is only available via Aodh 
as of Mitaka.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread gordon chung
wow. everything you did below is awesome. respect.

not a swift dev so i won't suggest what to do as i'm sure you've put a 
lot more thought into adopting golang than i have. personally, i think 
it's easier to find design flaws in something that is (perceived) slow 
and these design optimisations still benefit you if/when you switch.

On 10/05/2016 6:25 PM, Gregory Haynes wrote:
> On Tue, May 10, 2016, at 11:10 AM, Hayes, Graham wrote:
>> On 10/05/2016 01:01, Gregory Haynes wrote:
>>>
>>> On Mon, May 9, 2016, at 03:54 PM, John Dickinson wrote:
 On 9 May 2016, at 13:16, Gregory Haynes wrote:
>
> This is a bit of an aside but I am sure others are wondering the same
> thing - Is there some info (specs/etherpad/ML thread/etc) that has more
> details on the bottleneck you're running in to? Given that the only
> clients of your service are the public facing DNS servers I am now even
> more surprised that you're hitting a python-inherent bottleneck.

 In Swift's case, the summary is that it's hard[0] to write a network
 service in Python that shuffles data between the network and a block
 device (hard drive) and effectively utilizes all of the hardware
 available. So far, we've done very well by fork()'ing child processes,
 using cooperative concurrency via eventlet, and basic "write more
 efficient code" optimizations. However, when it comes down to it,
 managing all of the async operations across many cores and many drives
 is really hard, and there just isn't a good, efficient interface for
 that in Python.
>>>
>>> This is a pretty big difference from hitting an unsolvable performance
>>> issue in the language and instead is a case of language preference -
>>> which is fine. I don't really want to fall in to the language-comparison
>>> trap, but I think more detailed reasoning for why it is preferable over
>>> python in specific use cases we have hit is good info to include /
>>> discuss in the document you're drafting :). Essentially its a matter of
>>> weighing the costs (which lots of people have hit on so I won't) with
>>> the potential benefits and so unless the benefits are made very clear
>>> (especially if those benefits are technical) its pretty hard to evaluate
>>> IMO.
>>>
>>> There seemed to be an assumption in some of the designate rewrite posts
>>> that there is some language-inherent performance issue causing a
>>> bottleneck. If this does actually exist then that is a good reason for
>>> rewriting in another language and is something that would be very useful
>>> to clearly document as a case where we support this type of thing. I am
>>> highly suspicious that this is the case though, but I am trying hard to
>>> keep an open mind...
>>
>> The way this component works makes it quite difficult to make any major
>> improvement.
>
> OK, I'll bite.
>
> I had a look at the code and there's a *ton* of low hanging fruit. I
> decided to hack in some fixes or emulation of fixes to see whether I
> could get any major improvements. Each test I ran 4 workers using
> SO_REUSEPORT and timed doing 1k axfr's with 4 in parallel at a time and
> recorded 5 timings. I also added these changes on top of one another in
> the order they follow.
>
> Base timings: [9.223, 9.030, 8.942, 8.657, 9.190]
>
> Stop spawning a thread per request - there are a lot of ways to do this
> better, but lets not even mess with that and just literally move the
> thread spawning that happens per request because its a silly idea here:
> [8.579, 8.732, 8.217, 8.522, 8.214] (almost 10% increase).
>
> Stop instantiating oslo config object per request - this should be a no
> brainer, we dont need to parse config inside of a request handler:
> [8.544, 8.191, 8.318, 8.086] (a few more percent).
>
> Now, the slightly less low hanging fruit - there are 3 round trips to
> the database *every request*. This is where the vast majority of request
> time is spent (not in python). I didn't actually implement a full on
> cache (I just hacked around the db queries), but this should be trivial
> to do since designate does know when to invalidate the cache data. Some
> numbers on how much a warm cache will help:
>
> Caching zone: [5.968, 5.942, 5.936, 5.797, 5.911]
>
> Caching records: [3.450, 3.357, 3.364, 3.459, 3.352].
>
> I would also expect real-world usage to be similar in that you should
> only get 1 cache miss per worker per notify, and then all the other
> public DNS servers would be getting cache hits. You could also remove
> the cost of that 1 cache miss by pre-loading data in to the cache.
>
> All said and done, I think that's almost a 3x speed increase with
> minimal effort. So, can we stop saying that this has anything to do with
> Python as a language and has everything to do with the algorithms being
> used?
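(the warm cache being emulated in those numbers could be as simple as this hypothetical sketch — not actual designate code:)

```python
_cache = {}

def get_zone_records(zone_id, fetch_from_db):
    # a miss pays the database round trips once; every later AXFR for
    # the same zone is served from memory
    if zone_id not in _cache:
        _cache[zone_id] = fetch_from_db(zone_id)
    return _cache[zone_id]

def invalidate(zone_id):
    # designate knows when a zone changes, so eviction can be eager
    _cache.pop(zone_id, None)

calls = []
fetch = lambda z: calls.append(z) or ["SOA", "NS"]
records = get_zone_records("example.org.", fetch)
records = get_zone_records("example.org.", fetch)
assert calls == ["example.org."]  # the db was only hit once
```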
>
>>
>> MiniDNS (the component) takes data and sends a zone transfer every time
>> a recordset gets updated. That is a full (AXFR) zone transfer, so every
>> record in

Re: [openstack-dev] [cross-project][infra][keystone] Moving towards a Identity v3-only on Devstack - Next Steps

2016-05-12 Thread gordon chung


On 12/05/2016 1:47 PM, Morgan Fainberg wrote:
>
>
> On Thu, May 12, 2016 at 10:42 AM, Sean Dague wrote:
>
> We just had to revert another v3 "fix" because it wasn't verified to
> work correctly in the gate - https://review.openstack.org/#/c/315631/
>
> While I realize project-config patches are harder to test, you can do so
> with a bogus devstack-gate change that has the same impact in some cases
> (like the case above).
>
> I think the important bit on moving forward is that every patch here
> which might be disruptive has some manual verification about it working
> posted in review by v3 team members before we approve them.
>
> I also think we need to largely stay non voting on the v3 only job until
> we're quite confident that the vast majority of things are flipped over
> (for instance there remains an issue in nova <=> ironic communication
> with v3 last time I looked). That allows us to fix things faster because
> we don't wedge some slice of the projects in a gate failure.
>
>  -Sean
>
> On 05/12/2016 11:08 AM, Raildo Mascena wrote:
>  > Hi folks,
>  >
>  > Although the Identity v2 API is deprecated as of Mitaka [1], some
>  > services haven't implemented proper support to v3 yet. For instance,
>  > we implemented a patch that made DevStack v3 by default that, when
>  > merged, broke a lot of project gates in a few hours [2]. This
>  > happened due to specific services incompatibility issues with
> Keystone
>  > v3 API, such as hardcoded v2 usage, usage of removed
> keystoneclient CLI,
>  > requesting v2 service tokens and the lack of keystoneauth session
> usage.
>  >
>  > To discuss those points, we did a cross-project work
>  > session in the Newton Summit[3]. One point we are working on at this
>  > momment is creating gates to ensure the main OpenStack services
>  > can live without the Keystone v2 API. Those gates setup devstack with
>  > only Identity v3 enabled and run the Tempest suite on this
> environment.
>  >
>  > We already did that for a few services, like Nova, Cinder, Glance,
>  > Neutron, Swift. We are doing the same job for other services such
>  > as Ironic, Magnum, Ceilometer, Heat and Barbican [4].
>  >
>  > In addition, we are creating jobs to run functional tests for the
>  > services on this identity v3-only environment[5]. Also, we have a
> couple
>  > of other fronts that we are doing like removing some hardcoded v2
> usage
>  > [6], implementing keystoneauth sessions support in clients and
> APIs [7].
>  >
>  > Our plan is to keep tackling as many items from the cross-project
>  > session etherpad as we can, so we can achieve more confidence in
> moving
>  > to a DevStack working v3-only, making sure everyone is prepared
> to work
>  > with Keystone v3 API.
>  >
>  > Feedbacks and reviews are very appreciated.
>  >
>  > [1] https://review.openstack.org/#/c/251530/
>  > [2] https://etherpad.openstack.org/p/v3-only-devstack
>  > [3] https://etherpad.openstack.org/p/newton-keystone-v3-devstack
>  > [4]
> 
> https://review.openstack.org/#/q/project:openstack-infra/project-config+branch:master+topic:v3-only-integrated
>  > [5]
> https://review.openstack.org/#/q/topic:v3-only-functionals-tests-gates
>  > [6]
> https://review.openstack.org/#/q/topic:remove-hardcoded-keystone-v2
>  > [7] https://review.openstack.org/#/q/topic:use-ksa
>  >
>  > Cheers,
>  >
>  > Raildo
>  >
>  >
>  >
>
>
This also comes back to the conversation at the summit. We need to
> propose the timeline to turn over for V3 (regardless of
> voting/non-voting today) so that it is possible to set the timeline that
> is expected for everything to get fixed (and where we are
> expecting/planning to stop reverting while focusing on fixing the
> v3-only changes).
>
> I am going to ask the Keystone team to set forth the timeline and commit
> to getting the pieces in order so that we can make v3-only voting rather
> than playing the propose/revert game we're currently doing. A proposed
> timeline and gameplan will only help at this point.
>

can anyone confirm when we deprecated keystonev2? i see a bp[1] related 
to deprecation that was 'implemented' in 2013.

i realise switching to v3 breaks many gates but it'd be good to at some 
point say it's not 'keystonev3 breaking the gate' but rather 'projectx 
is breaking the gate because they are using keystonev2 which was 
deprecated 4 cycles ago'. given the deprecation period allowed already, 
can we say "here's some help, fix/merge this by 
, or your gate will be broken until then"? 
(assuming all the above items by Raildo don't fix everything).

[1] https://blueprints.launchpad.net/keystone/+spec/deprecate-v2-api

cheers,

-- 
gord


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-01-25 Thread gordon chung
you can consider the ceilometer database (and api) an open-ended model 
designed to capture the full fidelity of a datapoint (something useful 
for auditing, post processing). alternatively, gnocchi is a strongly 
typed model which captures only required data.

in the case of ceilometer -> gnocchi, the measurement data ceilometer 
collects is sent to gnocchi and mapped to specific resource types[1]. 
here we define all the resources and the metric mappings available. with 
regards to collectd, i'm just wondering what additional metrics are 
added and possibly any interesting metadata?

[1] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/gnocchi_resources.yaml
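for reference, an entry in that file looks roughly like this (a simplified sketch; the exact attribute names vary by release):

```yaml
resources:
  - resource_type: instance
    metrics:
      - 'cpu'
      - 'cpu_util'
      - 'memory.usage'
    attributes:
      host: resource_metadata.host
      flavor_id: resource_metadata.instance_flavor_id
```

so a collectd meter would need both a metric name listed under some resource type and, ideally, any interesting metadata mapped to attributes.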

On 25/01/2016 9:21 AM, Foley, Emma L wrote:
> I'm not overly familiar with Gnocchi, so I can't answer that off the bat, but 
> I would be looking for answers to the following questions:
> What changes need to be made to gnocchi to accommodate regular data from 
> ceilometer?
> Is there anything additional in Gnocchi's data model that is not part of 
> Ceilometer?
>
> Regards,
> Emma
>
>
> -Original Message-
> From: gord chung [mailto:g...@live.ca]
> Sent: Friday, January 22, 2016 2:41 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [telemetry][ceilometer] New project: 
> collectd-ceilometer-plugin
>
> nice! thanks Emma!
>
> i'm wondering if there's an additional metrics/resources we should add to 
> gnocchi to accommodate the data?
>
> On 22/01/2016 6:11 AM, Foley, Emma L wrote:
>> Hi folks,
>>
>> A new plug-in for making collectd[1] stats available to Ceilometer [2] is 
>> ready for use.
>>
>> The collectd-ceilometer-plugin make system statistics from collectd 
>> available to Ceilometer.
>> These additional statistics make it easier to detect faults and identify 
>> performance bottlenecks (among other uses).
>>
>> Regards,
>> Emma
>>
>> [1] https://collectd.org/
>> [2] http://github.com/openstack/collectd-ceilometer-plugin
>>
>> --
>> Intel Research and Development Ireland Limited Registered in Ireland
>> Registered Office: Collinstown Industrial Park, Leixlip, County
>> Kildare Registered Number: 308263
>>
>>
> --
> gord
>
>
>

-- 
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread gordon chung
> It makes for a crappy user experience. Crappier than the crappy user
> experience that OpenStack API users already have because we have done a
> crappy job shepherding projects in order to make sure there isn't
> overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
> directly at you).
... yes, Ceilometer can easily handle your events and meters and store 
them in either Elasticsearch or Gnocchi for visualisations. you just 
need to create a new definition in our mapping files[1][2]. you will 
definitely want to coordinate the naming of your messages. ie. 
event_type == backup. and event_type == backup..

[1] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/event_definitions.yaml
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/data/meters.yaml
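for illustration, an event definition for such notifications could look like this (the event_type names and payload fields here are hypothetical, not Ekko's actual ones):

```yaml
- event_type:
    - 'backup.create.start'
    - 'backup.create.end'
  traits:
    backup_id:
      fields: payload.backup_id
    volume_id:
      fields: payload.volume_id
```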

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread gordon chung


On 27/01/2016 10:51 AM, Jay Pipes wrote:
> On 01/27/2016 12:53 PM, gordon chung wrote:
>>> It makes for a crappy user experience. Crappier than the crappy user
>>> experience that OpenStack API users already have because we have done a
>>> crappy job shepherding projects in order to make sure there isn't
>>> overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
>>> directly at you).
>> ... yes, Ceilometer can easily handle your events and meters and store
>> them in either Elasticsearch or Gnocchi for visualisations. you just
>> need to create a new definition in our mapping files[1][2]. you will
>> definitely want to coordinate the naming of your messages. ie.
>> event_type == backup. and event_type == 
>> backup..
>
> This isn't at all what I was referring to, actually. I was referring 
> to my belief that we (the API WG, the TC, whatever...) have failed to 
> properly prevent almost complete and total overlap of the Ceilometer 
> [1] and Monasca [2] REST APIs.
>
> They are virtually identical in purpose, but in frustrating 
> slightly-inconsistent ways. and this means that users of the 
> "OpenStack APIs" have absolutely no idea what the "OpenStack Telemetry 
> API" really is.
>
> Both APIs have /alarms as a top-level resource endpoint. One of them 
> refers to the alarm notification with /alarms, while the other refers 
> to the alarm definition with /alarms.
>
> One API has /meters as a top-level resource endpoint. The other uses 
> /metrics to mean the exact same thing.
>
> One API has /samples as a top-level resource endpoint. The other uses 
> /metrics/measurements to mean the exact same thing.
>
> One API returns a list JSON object for list results. The other returns 
> a dict JSON object with a "links" key and an "elements" key.
>
> And the list goes on... all producing a horrible non-unified, 
> overly-complicated and redundant experience for our API users.
>
> Best,
> -jay
>
> [1] http://developer.openstack.org/api-ref-telemetry-v2.html
> [2] 
> https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md
>

... i'm aware, thus the leading dots. as i saw no suggestions in your 
message -- just a statement -- i chose to provide some 'hopefully' 
constructive comments rather than make assumptions about what you were 
hinting at. obviously, no one is able to foresee the existence of a 
project built internally within another company, let alone the api of 
said project, so i'm not sure what the proposed resolution is.

as the scope is different between ekko and freezer (same for monasca and 
telemetry according to governance voting[1]) what is the issue with 
having overlaps? bringing in CADF[2], if you define your taxonomy 
correctly, sharing a common base is fine as long as there exists enough 
granularity in how you define your resources that differentiates a 
'freezer' resource and an 'ekko' resource. that said, if they are truly 
the same, then you should probably be debating why you have two of the 
same thing, rather than debating the api.

[1] https://review.openstack.org/#/c/213183/
[2] 
https://www.dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0.pdf

-- 
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-28 Thread gordon chung


On 28/01/2016 1:03 AM, Steven Dake (stdake) wrote:
> Choice is one of life's fundamental elements that make life interesting.
> Let the Operators choose what they wish based upon technical merits rather
> then "who was first to publish a project".

yes, it's nice when competing entities exist in the same domain... i'm 
not sure it's the same when it's also within the same community. i 
realise this is common practice in big enterprise where often they'll 
make multiple plays on the same thing because a) they built up a lot of 
money in the past and b) they are too big and didn't realise someone 
else was doing it as well. regardless, it's quite obvious having 
multiple plays does split up your best talent and it's debatable whether 
the surviving solution is the best solution or just a solution. it also 
devalues the work each team is doing so workers care less about it but 
that's a psychological argument which i don't have a survey to reference.

regardless if it's creative and original, my question would be why was 
your first step to create something new rather than advocate for change?

i'll go a slight tangent because i feel like this is one of the root of 
the issues: i think we need to devalue what a PTL is. there are many 
companies who seek PTLs and even promote the number of PTLs on staff as 
if they are some infallible deity. this is not what a PTL is and you are 
doing it wrong if that's what you think it is. can we rebrand PTL to 
what it is? team administrator or secretary or liaison (wow, this is 
actually useful now)? maybe this will stop people from trying so hard to 
have a PTL in their title and they will attempt to collaborate first 
rather than duplicate.

On 28/01/2016 6:34 AM, Chris Dent wrote:
> Let's be honest: From my perspective Joe (and others) had it in for
> Ceilometer and was hoping that the validation of Monasca would have some
> impact on improving the Telemetry situation, so the lack of criteria
> that would have prevent Monasca's inclusion is convenient. 

go back to silly-ometer dude :P

cheers,

-- 
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] log processing project

2016-01-28 Thread gordon chung
hi folks,

just querying before i do anything, i was wondering if there's a log 
processing project out there? or is everyone just using ELK stack?

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-01-29 Thread gordon chung


On 28/01/2016 3:37 PM, Jeremy Stanley wrote:
> On 2016-01-28 19:40:20 + (+), Dave Walker wrote:
> [...]
>> However, pip 8 was released around the same time as the tarballs were
>> attempted to be generated.  Most of the projects are OK with this, but
>> ceilometer declares pbr!=0.7,<1.0,>=0.6 and then forces an update via
>> tox.
> [...]
>
> More to the point, the latest pbr matching that requirement (0.11.1)
> declares an unversioned dependency on pip in its requirements.txt,
> so ceilometer 2015.1.2's tox.ini is effectively forcing pip to
> upgrade itself to latest (8.x) release which no longer supports a
> command line option the tox.ini is also configured to add
> (--download-cache), making the sdist unbuildable via tox at that
> tagged point in the ceilometer repository.
trying to understand the situation here. isn't this all managed by 
global-reqs? an incompatible pip and pbr were released so now we can't 
build? were we the only project using downloadcache (i don't recall us 
doing anything unique in our tox file)?

i would prefer a release to be made as there was a performance backport 
made. what is the effort required to push a tarball generated outside of 
jenkins? any drawbacks? do we have numbers on how often stable releases 
are picked up by users?

cheers,

-- 
gord




Re: [openstack-dev] [ceilometer] :Regarding wild card configuration in pipeline.yaml

2016-01-29 Thread gordon chung
I have a meter subscription m1.* for publisher p1 and I need a subset of m1.* 
notifications, for example m1.xyz.*, for publisher p2.
If we add p2 to the already existing sink along with p1, p2 will get other 
notifications along with m1.xyz.* which are not needed for p2.

To avoid this we had the following entry in the pipeline:

sources:
  - name: m1meter
    meters:
      - "m1.*"
      - "!m1.xyz.*"
    sinks:
      - m1sink
  - name: m2meter
    meters:
      - "m1.xyz.*"
    sinks:
      - m2sink
sinks:
  - name: m1sink
    publishers:
      - p1
  - name: m2sink
    publishers:
      - p1
      - p2


you will unfortunately need to list out your required meters explicitly 
(without wildcards).



From the reply mail it seems there is no strict restriction to support this. 
Could you please let me know how we should handle such cases in ceilometer.
If we modify the pipeline module of ceilometer, does it affect any other 
parts of the ceilometer framework?

the code filtering happens here: 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline.py#L275-L287
 and similarly in previous releases.

it will not affect anything other than pipeline filtering, so it should be safe 
to change. you are welcome to push your change to the community. as i mentioned 
previously, i don't think there is a restriction on having either wildcard or 
negative wildcard support in one pipeline; it was just how it was implemented, 
as we did not have a requirement to deal with both (and the added complexity of 
ordering that comes with it)
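
to make the ordering question concrete, here is a rough standalone sketch (not 
ceilometer's actual pipeline code) of what mixed include/exclude meter matching 
could look like. it assumes fnmatch-style wildcards, a '!' prefix marking 
exclusion, and exclusions winning over inclusions, which is exactly the kind of 
precedence decision the real implementation avoided by not supporting the mix:

```python
from fnmatch import fnmatch


def meter_selected(meter, meters):
    # split the configured list into negated ('!'-prefixed) and plain patterns
    excluded = [m[1:] for m in meters if m.startswith('!')]
    included = [m for m in meters if not m.startswith('!')]
    # assumed precedence: any matching exclusion overrides any inclusion
    if any(fnmatch(meter, pattern) for pattern in excluded):
        return False
    return any(fnmatch(meter, pattern) for pattern in included)


# the pipeline from the question: everything under m1.* except m1.xyz.*
print(meter_selected('m1.abc.cpu', ['m1.*', '!m1.xyz.*']))  # True
print(meter_selected('m1.xyz.cpu', ['m1.*', '!m1.xyz.*']))  # False
```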



--
gord


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-01-29 Thread gordon chung


On 29/01/2016 10:48 AM, Foley, Emma L wrote:
>> So, metrics are grouped by the type of resource they use, and each metric 
>> has to be listed.
>> Grouping isn't a problem, but creating an exhaustive list might be,
>> since there are 100+ plugins [1] in collectd which can provide
>> statistics, although not all of these are useful, and some require
>> extensive configuration. The plugins each provide multiple metrics,
>> and each metric can be duplicated for a number of instances, examples: [2].
>>
>> Collectd data is minimal: timestamp and volume, so there's little room to 
>> find interesting meta data.
>> It would be nice to see this support integrated, but it might be very
>> tedious to list all the metric names and group by resource type without any 
>> form of wildcard support. Do the resource definitions support wildcards? Collectd can provide
>> A LOT of metrics.
> One also has to put into balance the upside of going through Ceilometer, as 
> Gnocchi has direct support for statsd:
>
>http://gnocchi.xyz/statsd.html
>
>
>
> Supporting statsd would require some more investigation, as collectd's statsd 
> plugin supports reading stats from the system, but not writing them.
>   Also, what are the usage figures for gnocchi? How many people use 
> it, and how easy is it to convert existing deployments to use gnocchi? I 
> mean, if someone was upgrading, would their data be preserved?
>   How easy is it to consume gnocchi statistics using an external 
> system/application?
>   I'm not against the idea, but it requires a little more 
> consideration.
>
> Regards,
> Emma
>
Gnocchi is intended to solve the use case of timestamp+value type data; 
that's essentially how it stores it. the best way i would describe it 
is: if you use the ceilometer statistics command, you should probably be 
using Gnocchi. if you use ceilometer sample-list, it's arguable whether 
Gnocchi or the legacy Ceilometer db is right. so basically, do you want 
slower, full-fidelity data (ceilometer) or responsive, 
light-weight data (gnocchi)?

Gnocchi implements the concept of archive policies, which basically 
dictate how much or how little is stored. its purpose is to roll up and 
pre-calculate data so less is stored and, as a side effect, it is more 
responsive as it has less clutter to deal with. in theory, you could 
define a granularity that stores everything with no roll-ups so all the 
data is preserved, but even though we only store timestamp+value: the more 
you store, the bigger the size.
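
as a rough illustration of the tradeoff (a standalone sketch, not Gnocchi's 
actual storage code), rolling raw timestamp+value points up to a fixed 
granularity keeps a single aggregated point per window:

```python
from collections import defaultdict


def rollup(points, granularity):
    # points: iterable of (unix_timestamp, value) pairs
    # granularity: window size in seconds, as an archive policy would define
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - (ts % granularity)].append(value)
    # one pre-calculated (window_start, mean) pair per window is all
    # that needs to be stored
    return sorted((start, sum(vs) / len(vs)) for start, vs in buckets.items())


raw = [(0, 1.0), (10, 2.0), (70, 4.0), (80, 6.0)]
print(rollup(raw, 60))  # -> [(0, 1.5), (60, 5.0)]
```

four raw points collapse to two stored points here; a policy with no roll-ups 
would keep all four at full fidelity, at full cost.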

cheers,

-- 
gord




Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-01-29 Thread gordon chung


On 28/01/2016 2:32 AM, Foley, Emma L wrote:
> So, metrics are grouped by the type of resource they use, and each metric has 
> to be listed.
> Grouping isn't a problem, but creating an exhaustive list might be, since 
> there are 100+ plugins [1] in collectd which can provide statistics, although 
> not all of these are useful, and some require extensive configuration. The 
> plugins each provide multiple metrics, and each metric can be duplicated for 
> a number of instances, examples: [2].
>
> Collectd data is minimal: timestamp and volume, so there's little room to 
> find interesting meta data.
> It would be nice to see this support integrated, but it might be very tedious 
> to list all the metric names and group by resource type without any form of
> wildcard support. Do the resource definitions support wildcards? Collectd can provide A LOT of
> metrics.
>
> Regards,
> Emma
>
> [1] https://collectd.org/wiki/index.php/Table_of_Plugins
> [2] https://collectd.org/wiki/index.php/Naming_schema

gnocchi is strongly typed compared to the classical ceilometer db, where 
you can dump anything and everything. we don't support wildcards as is, 
but i believe it's something we can aim to support?

Mehdi is currently in the process of implementing dynamic resources, which 
would give more flexibility on what type of data we can store in Gnocchi. 
i believe from a ceilometer pov, we can add support to allow wildcards 
when adding new metrics.

cheers,

-- 
gord




Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-01-29 Thread gordon chung


On 29/01/2016 1:27 PM, Jeremy Stanley wrote:
> On 2016-01-29 14:14:48 + (+), gordon chung wrote:
>> trying to understand the situation here. isn't this all managed by
>> global-reqs? an incompatible pip and pbr were release so now we
>> can't build? were we the only project using downloadcache (i don't
>> recall us doing anything unique in our tox file)?
> The tox.ini for Ceilometer stable/kilo was adding a downloadcache
> inherited by all its environments which caused tox to add
> --download-cache to all pip install invocations. While deprecated in
> pip 6 (and removed from the tox.ini during the Liberty cycle), this
> worked up until pip 8 dropped that option from its command-line
> parser. Due to unfortunate timing, the last commit on stable/liberty
> was tested with pip 7 and merged, but the 2015.1.2 tag for that
> commit was pushed after pip 8 was released and so tox was no longer
> able to work with the tagged commit.
>
> A number of workarounds were tried, but ultimately the explicit
> addition of -U (upgrade) to pip calls in tox.ini prevented my
> attempts to temporarily pin to earlier versions of pip within the
> calling script.
>
>> i would prefer a release to be made as there was a performance
>> backport made. what is the effort required to push tarball
>> generated outside of jenkins? any drawbacks?
> [...]
>
> The steps followed by tox could be emulated manually, with the
> addition of forcing a pip 7 install, and the result would then be
> copied via scp to tarballs.openstack.org by one of our Infra root
> admins. The drawbacks mostly come down to needing to apply some
> additional scrutiny to the generated tarball before pronouncing it
> viable, and the need to place trust in a manual process slightly
> inconsistent with our usual sdist generation mechanisms.

hmm.. that's unfortunate... anything we need to update so this doesn't 
happen again? or is it just a matter of lessons learned, let's keep an eye 
out next time?

i guess the question is: can users wait (a month?) for the next release? i'm 
willing to poll the operator list (or any list) to gauge demand if 
that's easier on your end. if there's very little interest we can defer 
-- i do have a few patches lined up for the next kilo release window so i 
would expect another release.

cheers,

-- 
gord




Re: [openstack-dev] [stable][ceilometer][all] stable/kilo 2015.1.3 delayed

2016-01-31 Thread gordon chung


On 30/01/2016 7:54 AM, Dave Walker wrote:
> On 29 January 2016 at 20:36, Jeremy Stanley  wrote:
>> On 2016-01-29 19:34:01 + (+), gordon chung wrote:
>>> hmm.. that's unfortunate... anything we need to update so this doesn't
>>> happen again? or just a matter of lesson learned, let's keep an eye out
>>> next time?
>> Well, I backported the downloadcache removal to the stable/kilo
>> branch after discovering this issue, and while that's too late to
>> solve it for 2015.1.3 it will at least no longer prevent a 2015.1.4
>> tarball from being built.
>>
>>> i guess the question is can users wait (a month?) for next release? i'm
>>> willing to poll operator list (or any list) to query for demand if
>>> that's easier on your end? if there's very little interest we can defer
>>> -- i do have a few patches lined up for next kilo release window so i
>>> would expect another release.
>> I'm perfectly okay uploading a tarball I or someone else builds for
>> this, as long as it's acceptable to leadership from stable branch
>> management, Telemetry and the community at large. Our infrastructure
>> exists to make things more consistent and convenient, but it's there
>> to serve us and so we shouldn't be slaves to it.
> Unless anyone else objects, I'd be really happy if you are willing to
> scp a handrolled tarball.
>
> I'm happy to help validate it's pristine-state locally here.
>
> Thanks Jeremy!
>
> --
> Kind Regards,
> Dave Walker
>

this is fine with me. please let me know if there is anything i can do 
to help. thanks to both for all the work so far.

cheers,

-- 
gord




Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread gordon chung
yeah... the revert broke us across all telemetry projects since we had fixed 
our plugins to adapt to v3. i'm very much for adapting to v3 since it's lingered 
around for years. i think given the time elapsed, if it breaks them, tough. the 
only issue i had with the original patch was that it merged on a Friday with no mention.

On 01/02/2016 9:48 PM, Steve Martinelli wrote:

Thanks for bringing this up Sean.

I went ahead and documented all the ways the projects/gates/clients broke in an 
etherpad:
https://etherpad.openstack.org/p/v3-only-devstack

These are all the projects that I know that were affected, if someone knows 
others, please add your findings.

Sean, you can count me in on the volunteering effort to get this straightened 
out.

Steve

Sean Dague wrote on 2016/02/01 12:21:50 PM:

> From: Sean Dague 
> To: openstack-dev 
> 
> Date: 2016/02/01 12:23 PM
> Subject: [openstack-dev] [all] towards a keystone v3 only devstack
>
> On Friday last week I hit the go button on a keystone v3 default patch
> change in devstack. While that made it through tests for all the tightly
> integrated projects, we really should have stacked up some other spot
> tests to see how this was going to impact the rest of the ecosystem.
> Novaclient, shade, osc, and a bunch of other things started faceplanting.
>
> The revert is here - https://review.openstack.org/#/c/274703/ - and will
> move it's way through the gate once the tests complete.
>
> Going forward I think we need a more concrete plan on this transition.
> I'm going to be -2 on any v3 related keystone changes in devstack until
> we do, as it feels like we need to revert one of these patches about
> every month for the last 6.
>
> I don't really care what format the plan takes, ML thread, wiki page,
> spec. But we need one, and an owner (probably on the keystone side) to
> walk us through how this transition goes. This is going to include some
> point in the future where:
>
> 1. devstack configures v3 and v2 always, and devstack issues a warning
> if v2 is enabled
> 2. devstack configures v3 only, v2 can be enabled and v2 enabled is a
> warning
> 3. devstack removes v2 support
>
> The transition to stage 2 and stage 3 requires using Depends-On to stack
> up some wider collection of tests to demonstrate that this works on
> novaclient, heat, shade, osc, and anything that comes forward as being
> broken by this last round. It's fine if we give people hard deadlines
> that they need to get their jobs sorted, but like the removal of
> extras.d, we need to be explicit about it.
>
> So, first off, we need a volunteer to step up to pull together this
> plan. Any volunteers here?
>
>-Sean
>
> --
> Sean Dague
> http://dague.net
>








--
gord


Re: [openstack-dev] [all] towards a keystone v3 only devstack

2016-02-02 Thread gordon chung


On 02/02/2016 8:45 AM, Jordan Pittier wrote:
On Tue, Feb 2, 2016 at 2:09 PM, gordon chung <g...@live.ca> wrote:
yeah... the revert broke us across all telemetry projects since we fixed 
plugins to adapt to v3. i'm very much for adapting to v3 since it's lingered 
around for years. i think given the time lapsed, if it breaks them, tough.

 This is not how our community works.

sure. currently, i would say we work as: it depends who is broken, then 
possibly tough.

again, as i previously stated, this change should have been publicised (if it 
was, apologies, i missed it). i had to fix many separate projects to adapt to 
this, so i've never said it went smoothly or that it was done correctly. i'm 
just mentioning that we as a community have been dragging our feet for years 
(not months), so this is not entirely keystone's fault. i'd much rather we see 
stuff break and light a fire under people, because it's clearly evident no one 
was making an effort on this (myself included), so we should all take some blame.

cheers,

--
gord


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-03 Thread gordon chung


On 03/02/2016 9:16 AM, Foley, Emma L wrote:
>   AFAICT there's no such thing out of the box but it should be fairly 
> straightforward to implement a StatsD writer using the collectd Python plugin.
>   Simon
>
>   [1] https://collectd.org/documentation/manpages/collectd-python.5.shtml
>
> I guess that’ll have to be the plan now: get a prototype in place and have a 
> look at how well it does.
> The first one is always the most difficult, so it should be fairly quick to 
> get this going.
>

nice, do you have resources to look at this? or maybe it's something to add 
to Gnocchi's potential backlog. the existing plugin still seems useful to 
those who want to use custom/proprietary storage.
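
for what it's worth, a minimal sketch of what such a prototype could look like: 
a collectd python plugin registering a write callback that forwards every value 
as a StatsD gauge over UDP. this is untested riffing, not a working plugin; the 
statsd address and the dotted metric naming are my assumptions:

```python
import socket

try:
    import collectd  # only importable inside collectd's python plugin host
except ImportError:
    collectd = None

STATSD_ADDR = ('127.0.0.1', 8125)  # assumption: local statsd/gnocchi-statsd


def statsd_lines(vl):
    # vl is a collectd value-list: plugin/plugin_instance/type/type_instance
    # identify the metric, and values holds the actual readings
    name = '.'.join(p for p in (vl.plugin, vl.plugin_instance,
                                vl.type, vl.type_instance) if p)
    return ['%s:%s|g' % (name, v) for v in vl.values]


def write_cb(vl, data=None):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for line in statsd_lines(vl):
        sock.sendto(line.encode(), STATSD_ADDR)


if collectd is not None:
    collectd.register_write(write_cb)
```

the write callback is invoked once per dispatched value-list, so this would 
fan out every collectd reading as its own gauge line.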

cheers,

-- 
gord



Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-04 Thread gordon chung


On 03/02/2016 10:38 AM, Sam Yaple wrote:
On Wed, Feb 3, 2016 at 2:52 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:
On 2016-02-03 14:32:36 + (+), Sam Yaple wrote:
[...]
> Luckily, digging into it it appears cinder already has all the
> infrastructure in place to handle what we had talked about in a
> separate email thread Duncan. It is very possible Ekko can
> leverage the existing features to do it's backup with no change
> from Cinder.
[...]

If Cinder's backup facilities already do most of
what you want from it and there's only a little bit of development
work required to add the missing feature, why jump to implementing
this feature in a completely separate project instead rather than
improving Cinder's existing solution so that people who have been
using that can benefit directly?

Backing up Cinder was never the initial goal, just a potential feature on the 
roadmap. Nova is the main goal.

i'll extend fungi's question: are the backup frameworks/mechanisms common 
whether it be Nova or Cinder or anything else? or are they unique but only 
grouped together as a service because they back up something? it seems the 
problem is we've imagined the service as tackling a horizontal issue when 
really it is just a vertical story that appears across many silos.

cheers,

--
gord


Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread gordon chung


On 04/02/2016 9:04 AM, Morgan Fainberg wrote:


On Thu, Feb 4, 2016 at 4:51 AM, Doug Hellmann <d...@doughellmann.com> wrote:
Excerpts from Sean Dague's message of 2016-02-04 06:38:26 -0500:
> A few issues have crept up recently with the service catalog, API
> headers, API end points, and even similarly named resources in different
> resources (e.g. backup), that are all circling around a key problem.
> Distributed teams and naming collision.
>
> Every OpenStack project has a unique name by virtue of having a git
> tree. Once they claim 'openstack/foo', foo is theirs in the OpenStack
> universe for all time (or until trademarks say otherwise). Nova in
> OpenStack will always mean one project.
>
> There has also been a desire to replace project names with
> common/generic names, in the service catalog, API headers, and a few
> other places. Nova owns 'compute'. Except... that's only because we all
> know that it does. We don't *actually* have a registry for those values.
>
> So the code names are well regulated, the common names, that we
> encourage use of, are not. Devstack in tree code defines some
> conventions. However with the big tent, things get kind of squirely
> pretty quickly. Congress registering 'policy' as their endpoint type is
> a good example of that -
> https://github.com/openstack/congress/blob/master/devstack/plugin.sh#L147
>
> Naming is hard. And trying to boil down complicated state machines to
> one or two word shiboliths means that inevitably we're going to find
> some words just keep cropping up: policy, flavor, backup, meter. We do
> however need to figure out a way forward.
>
> Lets start with the top level names (resource overlap cascades from there).
>
> What options do we have?
>
> 1) Use the names we already have: nova, glance, swift, etc.
>
> Upside, collision problem is solved. Downside, you need a secret decoder
> ring to figure out what project does what.
>
> 2) Have a registry of "common" names.
>
> Upside, we can safely use common names everywhere and not fear collision
> down the road.
>
> Downside, yet another contention point.
>
> A registry would clearly be under TC administration, though all the
> heavy lifting might be handed over to the API working group. I still
> imagine collision around some areas might be contentious.
>
> 3) Use either, inconsistently, hope for the best. (aka - status quo)
>
> Upside, no long mailing list thread to figure out the answer. Downside,
> it sucks.
>
>
> Are there other options missing? Where are people leaning at this point?
>
> Personally I'm way less partial to any particular answer as long as it's
> not #3.
>
>
> -Sean
>

This feels like something that should be designed with end-users
in mind, and that means making choices about descriptive words
rather than quirky in-jokes.  As much as I hate to think about the
long threads some of the contention is likely to introduce, not to
mention the bikeshedding over the terms themselves, I have become
convinced that our best long-term solution is a term/name registry
(option 2). We already have that pattern in the governance repository
where official projects describe their service type.

To reduce contention, we could agree in advance to support multi-word
names ("block storage" and "object storage", "block backup" and
"file backup", etc.). Decisions about noun-verb vs. verb-noun,
punctuation, etc. can be dealt with by the group that takes on the
task of setting standards.

As I said in the TC meeting, this seems like something the API working
group could do, if they wanted to take on the responsibility. If not,
then we should establish a new group with a mandate from the TC. Since
we have something like a product catalog already in the governance repo,
we can keep the new data there.

Doug

I am a fan of option #2. I also want to point out that os-client-config has 
encoded some of these names as well[1], which is pushing us in the direction of 
#2.  I 100% agree that the end user perspective also leans us towards option #2.

I am very against "hope for the best" options.

i'm inclined to say #2 as well since the code names, based on my experience, 
lead to assumptions of what the project covers/does based on an elevator pitch 
description someone heard one time.

i definitely agree that we shouldn't concern ourselves with non big tent 
projects.

my concern with #2 is, we will just end up going to thesaurus.com and searching 
for alternate words that mean the same general thing and this will be equally 
confusing. with the big tent, we essentially agreed that duplication is 
possible, so no matter how granular we make the scope, i'm not sure there's a 
way for any project to own a domain anymore. it seems this question is better 
answered by first deciding how the TC should handle the big tent?

cheers,

--
gord

[openstack-dev] [telemetry][aodh] announcing Liusheng as new Aodh liaison

2016-02-04 Thread gordon chung
hi,

we've been searching for a lead/liaison/lieutenant for Aodh for some 
time. thankfully, we've had a volunteer.

i'd like to announce Liusheng as the new lead of Aodh, the alarming 
service under Telemetry. he will help me monitor bugs and specs and 
will be another resource for alarming-related items. he will also help 
track some of the features we hope to implement[1].

i'll let him mention some of the target goals but for now, i'd like to 
thank him for volunteering to help improve the community.

[1] https://wiki.openstack.org/wiki/Telemetry/RoadMap#Aodh_.28alarming.29

cheers,
-- 
gord


Re: [openstack-dev] [all] the trouble with names

2016-02-05 Thread gordon chung


On 05/02/2016 1:22 PM, Ryan Brown wrote:
> For example, I think "containers" will be one of those words that
> everyone wants to use (buzzbuzzbuzzbuzz). Having at least a way for
> projects to say "hm, someone else wants this" would be nice.

too late, magnum[1] beat you to it. i'm not sure what the other docker 
projects are using. plenty of alternatives though[2]?

#2 is the lesser of the evils, but not by much. if we do choose it, we 
need a formalised taxonomy, CADF is one example (see line 2656[3]), and 
we need to be specific. it doesn't really solve the issue of projects 
that do the same thing, but i believe that issue is not part of the 
discussion.

regarding some of the collisions so far, to a certain extent i believe it's 
because the projects are really features masquerading as discrete 
services. [disclaimer: the following might be ignorant, apologies] for 
something like backups, i'm not sure why it isn't part of an existing 
service's api (compute|blockstorage/.../backup) and why it needs to be 
its own endpoint.

[1] https://github.com/openstack/magnum/blob/master/devstack/lib/magnum#L111
[2] http://www.thesaurus.com/browse/container?s=t
[3] 
https://www.dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0.pdf

cheers,

-- 
gord



Re: [openstack-dev] [all] Proposal for having a service type registry and curated OpenStack REST API

2016-02-08 Thread gordon chung


On 08/02/2016 7:13 AM, Sean Dague wrote:
> On 02/07/2016 08:30 AM, Jay Pipes wrote:
>> On 02/04/2016 06:38 AM, Sean Dague wrote:
>>> What options do we have?
>> 
>>> 2) Have a registry of "common" names.
>>>
>>> Upside, we can safely use common names everywhere and not fear collision
>>> down the road.
>>>
>>> Downside, yet another contention point.
>>>
>>> A registry would clearly be under TC administration, though all the
>>> heavy lifting might be handed over to the API working group. I still
>>> imagine collision around some areas might be contentious.
>>
>> The above is my choice. I'd also like to point out that I'm only talking
>> about the *service* projects here -- i.e. the things that expose a REST
>> API.
>>
>> I don't care about a naming registry for non-service projects because
>> they do not expose a public user-facing API that needs to be curated and
>> protected.
>>
>> I would further suggest using the openstack/governance repo's
>> projects.yaml file for this registry. This is already under the TC's
>> administration and the API WG could be asked to work closely with the TC
>> to make recommendations on naming for all type:service projects in the
>> file. We should add a service:$type tag to the projects.yaml file and
>> that would serve as the registry for REST API services.
>>
>> We would need to institute this system by first tackling the current
>> areas of REST API functional overlap:
>>
>> * Ceilometer and Monasca are both type:service projects that are both
>> performing telemetry functionality in their REST APIs. The API WG should
>> work with both communities to come up with a 6-12 month plan for
>> creating a *single* OpenStack Telemetry REST API that both communities
>> would be able to implement separately as they see fit.
>
> 1) how do you imagine this happening?
>
> 2) is there buy in from both communities?

i'd be interested to see how much collaboration continues/exists after 
two overlapping projects are approved as 'openstack projects'. not sure 
how much collaboration happens since duplicating efforts partially 
implies "we don't want/need to collaborate".

>
> 3) 2 implementations of 1 API that is actually semantically the same is
> super hard. Doing so in the IETF typically takes many years.

++ and possibly leads to poor models for both implementations? or 
rewrites of backend(s).

>
> I feel like we spent a bunch of time a couple years ago putting projects
> detailed improvement plans from the TC, and it really didn't go all that
> well. The outside-in approach without community buy in mostly just gets
> combative and hostile.
>
>> * All APIs that the OpenStack Compute API currently proxies to other
>> service endpoints need to have a formal sunsetting plan. This includes:
>>
>>   - servers/{server_id}/os-interface (port interfaces)
>>   - images/
>>   - images/{image_id}/metadata
>>   - os-assisted-volume-snapshots/
>>   - servers/{server_id}/os-bare-metal-nodes/ (BTW, why is this a
>> sub-resource of /servers again?)
>>   - os-fixed-ips/
>>   - os-floating-ip-dns/
>>   - os-floating-ip-pools/
>>   - os-floating-ips/
>>   - os-floating-ips-bulk/
>>   - os-networks/
>>   - os-security-groups/
>>   - os-security-group-rules/
>>   - os-security-group-default-rules/
>>   - os-tenant-networks/
>>   - os-volumes/
>>   - os-snapshots/
>
> It feels really early to run down a path here on trying to build a
> registry for top level resources when we've yet to get service types down.
>
> Also, I'm not hugely sure why:
>
> GET /compute/flavors
> GET /dataprocessing/flavors
> GET /queues/flavors
>
> Is the worst thing we could be doing. And while I get the idea that in a
> perfect world there would be no overlap, the cost of getting there in
> breaking working software seems... a bit of a bad tradeoff.

agree. to clarify, in the case of backups, is the idea that GET 
compute/../backups and blockstorage/../backups is bad? personally it 
seems to capture purpose pretty well, i.e. /<service>/../<resource>. i think 
we ran into this years back when openstackclient was starting; there's 
only so much verbiage we can select from, e.g. aggregation is used in 
multiple projects.

>
>> * All those services that have overlapping top-level resources must have
>> a plan to either:
>>   - align/consolidate the top-level resource if it makes sense
>>   - rename the top-level resource to be more specific if needed, or
>>   - place the top-level resource as a sub-resource on a top-level
>> resource that is unique in the full OpenStack REST API set of top-level
>> resources
>
> And what happens to all the software out there written to OpenStack? I
> do get the concerns for coherency, at the same time randomly changing
> API interfaces on people is a great way to kick all your users in the
> knees and take their candy.
>
> At the last summit basically *exactly* the opposite was agreed to. You
> don't get to remove an API, ever. Because the moment it's out there, it
> has users.
>
>   -Sean
>

-- 
gord


Re: [openstack-dev] [gate] RFC dropping largeops tests

2016-02-10 Thread gordon chung
makes sense to me. thanks for the concise update and for tracking this.

On 10/02/2016 7:59 AM, Davanum Srinivas wrote:
> +1 from me
>
> On Wed, Feb 10, 2016 at 7:56 AM, Jay Pipes  wrote:
>> On 02/10/2016 07:33 AM, Sean Dague wrote:
>>>
>>> The largeops tests at this point are mostly finding out that some of our
>>> new cloud providers are slow - http://tinyurl.com/j5u4nf5
>>>
>>> This is fundamentally a performance test, with timings having been tuned
>>> to pass 98% of the time on two clouds that were very predictable in
>>> performance. We're now running on 4 clouds, and the variance between
>>> them all, and between every run on each can be as much as a factor of 2.
>>>
>>> We could just bump all the timeouts again, but that's basically the same
>>> thing as dropping them.
>>>
>>> These tests are not instrumented in a way that any real solution can be
>>> addressed in most cases. Tests without a path forward, that are failing
>>> good patches a lot, are very much the kind of thing we should remove
>>> from the system.
>>
>>
>> +1 from me.
>>
>> -jay
>>
>
>
>

-- 
gord



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-10 Thread gordon chung


On 10/02/2016 11:35 AM, Thierry Carrez wrote:
> Chris Dent wrote:
>> [...]
>> Observing this thread and "the trouble with names"[1] one I get
>> concerned that we're trending in the direction of expecting
>> projects/servers/APIs to be done and perfect before they will ever
>> be OpenStack. This, of course, runs entirely contrary to the spirit
>> of open source where people release a solution to their itch and
>> people join with them to make it better.
>>
>> If we start thinking of projects as needing to have "production-grade"
>> implementations and APIs as needing to be stable and correct from
>> the start we're backing ourselves into corners that are very difficult
>> to get out of, distracting ourselves from the questions we ought to be
>> asking, and putting barriers in the way of doing new but necessary
>> stuff and evolving.
>
> I certainly didn't intend to mean that projects need to have a final API
> or perfect implementation before they can join the tent. I meant that
> projects need to have a reference implementation using open source tools
> that has a chance of being used in production one day. Imagine a project
> which uses sqlite in testing but requires Oracle DB to achieve full
> functionality or scaling beyond one user: the sqlite backend would be a
> token open backend for testing purposes but real usage would need you to
> buy into proprietary options. That would certainly be considered "open
> core": a project that pretends to be open but requires proprietary
> technology to be "really used".

apologies if this was asked somewhere else in the thread, but should we 
try to define "production" scale, and can we even? based on the last 
survey, the vast majority of deployments are under 100 nodes[1]. that 
said, a few years ago, one company was dreaming of 100,000 nodes.

i'd imagine the 50 node solution won't satisfy the 1000 node deployment, 
let alone the 10k node one. similarly, the opposite direction will 
probably give an overkill solution. it seems somewhat difficult to 
define anything against the 'production' term unless we scope it 
somehow (e.g. # of node ranges)?
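
the "scope it by node ranges" idea could be sketched as a simple 
bucketing — note the tier names and thresholds below are invented 
purely for illustration, not a proposed standard:

```python
# illustrative only: bucket deployments by node count so that a
# "production-grade" claim is made against an explicit range rather
# than the vague word "production". thresholds are made up.
def scale_tier(node_count: int) -> str:
    if node_count < 100:
        return "small"       # where most survey deployments sit
    if node_count < 1000:
        return "medium"
    if node_count < 10000:
        return "large"
    return "hyperscale"

print(scale_tier(50))      # small
print(scale_tier(100000))  # hyperscale
```

with something like this, a project could say "validated in production 
at the medium tier" instead of an unscoped "production-ready".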

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

-- 
gord


