Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-25 Thread Clint Byrum
Excerpts from Lance Bragstad's message of 2017-02-25 13:07:58 -0600:
> Since both token formats rebuild the authorization context at validation
> time, we can remove some revocation events that are no longer needed. This
> means we won't be storing as many revocation events on role removal from
> domains and projects. Instead we will only rely on the revocation API to
> invalidate tokens for cases like specific token revocation or password
> changes (the new design of validation does role assignment enforcement for
> us automatically). This should reduce the amount of data being replicated
> due to massive amounts of revocation events.
> 

I didn't know that the work to make role removal non-event-based had even
started, much less finished. Cool.

> We do still have some more work to do on this front, but I can dig into it
> and see what's left.
> 

Indeed, the fewer revocation events, the better the Fernet story is
for scalability.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Neutron team social Pictures

2017-02-25 Thread Kevin Benton
Thanks for putting these up!

On Fri, Feb 24, 2017 at 8:39 AM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hpe.com> wrote:

> Hi Folks,
>
> Here is the link to the pictures that I took during the Neutron Social in
> Atlanta.
>
>
>
> https://souminathan.smugmug.com/OpenStack-Neutron-PTG-Atlanta-2017/
>
>
>
> Thanks
>
> Swami
>
>
>
> *From:* Kevin Benton [mailto:ke...@benton.pub]
> *Sent:* Thursday, February 23, 2017 4:09 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron] - Neutron team social in Atlanta
> on Thursday
>
>
>
> Hi everyone,
>
>
>
> We will meet tonight at Mellow Mushroom at 6PM. It's about a 10-15 minute
> walk from the venue.
>
>
>
> Here is the location:
>
>
>
> https://goo.gl/maps/FAnmVEVEUSE2
>
>
>
> On Feb 23, 2017 13:27, "Duarte Cardoso, Igor" <
> igor.duarte.card...@intel.com> wrote:
>
> +1
>
>
>
> Best regards,
>
> Igor.
>
>
>
> *From:* Vasudevan, Swaminathan (PNB Roseville) [mailto:
> swaminathan.vasude...@hpe.com]
> *Sent:* Friday, February 17, 2017 8:30 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron] - Neutron team social in Atlanta
> on Thursday
>
>
>
> Count me in.
>
>
>
> *From:* Kevin Benton [mailto:ke...@benton.pub ]
> *Sent:* Friday, February 17, 2017 11:19 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron] - Neutron team social in Atlanta on
> Thursday
>
>
>
> Hi all,
>
>
>
> I'm organizing a Neutron social event for Thursday evening in Atlanta
> somewhere near the venue for dinner/drinks. If you're interested, please
> reply to this email with a "+1" so I can get a general count for a
> reservation.
>
>
>
> Cheers,
>
> Kevin Benton
>
>


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-02-25 Thread Masayuki Igawa
Hi,

Thank you for bringing this up.

Yeah, I understand your frustration. We already have the document about our
stable interface[1]. It says
--
Stability
Any code that lives in tempest/lib will be treated as a stable interface.
This means that any public interface under the tempest/lib directory is
expected to be a stable interface suitable for public consumption. However,
for any interfaces outside of tempest/lib in the tempest tree (unless
otherwise noted) or any private interfaces the same stability guarantees
don't apply.
--

So basically we can change private interfaces. However, I also suspect that
this document is not well known (or perhaps people just ignore it).
So I'd like to advertise this policy here, and discuss it if any
discussion is needed.

[1] https://docs.openstack.org/developer/tempest/library.html#stability
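
The policy quoted above can be summarized as a simple predicate: only public names under the tempest.lib namespace are stable. The helper below is purely illustrative (tempest itself exposes no such check), but it makes the boundary concrete:

```python
# Hypothetical helper encoding the tempest stability policy: only public
# names under tempest/lib are a stable interface; everything else in the
# tempest tree, and anything underscore-prefixed, carries no guarantee.

def is_stable_interface(module_path: str, name: str) -> bool:
    """Return True only for public names under the tempest.lib namespace."""
    in_lib = module_path == "tempest.lib" or module_path.startswith("tempest.lib.")
    public = not name.startswith("_")
    return in_lib and public
```

By this rule, a plugin calling `_get_network_by_name` on the scenario manager is relying on an interface with no stability guarantee at all.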

On Sat, Feb 25, 2017 at 22:39 Jordan Pittier 
wrote:

> Hi guys,
> So I have a problem with these 2 patches here [1] and here [2]. You
> basically are blocking any attempt of refactoring manager.py. Refactoring
> that file has been our number one priority for two cycles, and so far hardly
> anyone has really stepped up to do the work, except me with these 2 patches.
> Let me remind you that that file is a gigantic mess, and so are our network
> scenarios.
>
> The manager.py file in the scenarios directory has no stable interface,
> and it was never "advertised" so. That some plugins decided to use some
> private methods (such as this _get_network_by_name) is unfortunate but that
> should not block us from moving.
>
> So just to be clear, if we really want to refactor our scenarios (and we
> must in my opinion), things will break for projects that are importing
> Tempest and using it outside of its stable interface. I am not interested
> in being the good Samaritan for the whole OpenStack galaxy, I have enough
> with the 6 core projects and the Tempest stable interface. So guys, if you
> don't want to go forward with [1] and [2], be sure I'll never touch
> those scenarios again. I am not upset, but we have to make clear decisions,
> sometimes difficult ones.
>
> [1] : https://review.openstack.org/#/c/436555/
> [2] : https://review.openstack.org/#/c/438097/
-- 
Masayuki Igawa


Re: [openstack-dev] [tripleo] PTG + Release status

2017-02-25 Thread Pradeep Kilambi


Sent from my iPhone

> On Feb 25, 2017, at 11:14 AM, Emilien Macchi  wrote:
> 
>> On Fri, Feb 24, 2017 at 8:52 PM, Emilien Macchi  wrote:
> >> Before leaving for the week-end, I thought it useful to share some info with the 
> >> team.
>> 
>> First of all, it was awesome to meet people face to face, really. I'm
>> already looking forward to the next time.
>> 
> >> About Ocata release, it seems like we're getting very close to being
> >> ready for RC2, probably next week. I've been discussing with the upgrade
> >> team and we have identified most of the blockers, which will hopefully be
> >> fixed next week.
> 
> I've seen quite a few services without upgrade tasks. We
> need to add them asap before the RC2:
> - redis
> - auditd
> - ceph-mds
> - cinder-backup
> - ec2api
> - etcd
> - horizon: https://review.openstack.org/#/c/437545/
> - all manila services
> - all mistral services
> - memcached
> - neutron-l3-compute-dvr
> - some neutron agents (dpdk, sriov)
> - libvirt
> - all octavia services
> - all ODL services
> - zaqar
> - all contrail services
> - all resources managed by Pacemaker (unless I missed something?)
> - panko-api

Panko is a new service in Ocata, so we don't need this until Pike. I'll 
get it into pike.1.

> 
> Some services were introduced in Ocata so they don't necessarily need
> the upgrade tasks, though it would be great to have them before the end of
> Pike, so we can test them. For the others, they are required for Ocata
> RC2.
> 
> Thanks,
> 
>> Also breaking news: the upgrade job is now testing Nova upgrades from
>> Newton to Ocata and we've got a successful pingtest:
>> http://logs.openstack.org/55/438055/1/check/gate-tripleo-ci-centos-7-multinode-upgrades-nv/353b16a/console.html#_2017-02-25_00_58_53_067521
>> First time we run an actual overcloud upgrade in CI with all our
>> critical services. Big kudos to the team!
>> 
>> Next week, I'll take some time to prepare the Launchpad for Pike
>> cycle. I'll probably reach some of you regarding the blueprints we
>> discussed at PTG.
>> I'll also write a summary on the mailing-list of what happened this week.
>> 
> >> For those who travel this week-end, have a safe trip back home!
>> 
>> See you online next week,
>> --
>> Emilien Macchi
> 
> 
> 
> -- 
> Emilien Macchi
> 


Re: [openstack-dev] [keystone][api] Changing devstack to not set up keystone on :5000 and :35357

2017-02-25 Thread Lance Bragstad
Nice! Thanks for revisiting this, Brant.

Was this a cross-project goal/discussion at the PTG?

On Fri, Feb 24, 2017 at 9:24 AM, Brant Knudson  wrote:

>
> At the PTG there was some discussion about changing services to not listen
> on ports[0]. I'd been working on this for devstack keystone off and on for
> some time[1], so I thought I'd dust off that work again to see what the
> problems were. Turns out things must have been fixed since, after rebasing
> and cleaning up the change, it's now passing our gate tests. I split up the
> change since there was some refactoring to do before making the final
> change of having Apache HTTPd not listen on :5000 and :35357.
>
> This will likely affect devstack plugins using KEYSTONE_SERVICE_PROTOCOL,
> KEYSTONE_SERVICE_HOST, and KEYSTONE_AUTH_PORT instead of
> KEYSTONE_SERVICE_URI[2].
>
> [0] https://etherpad.openstack.org/p/service-catalog-pike
> [1] https://review.openstack.org/#/c/312230/
> [2] https://review.openstack.org/#/c/437760/
>
> --
> - Brant
>
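
The plugin-impact note above can be made concrete with a small sketch. The values below are illustrative devstack-style defaults, not authoritative; the point is that plugins recombining protocol/host/port bake in an assumption that breaks once keystone stops listening on a dedicated port:

```python
# Why plugins should consume KEYSTONE_SERVICE_URI whole rather than
# rebuilding it from KEYSTONE_SERVICE_PROTOCOL/HOST and a port variable.
# Host and path values here are made-up examples.

def uri_from_parts(protocol: str, host: str, port: int) -> str:
    """The fragile pattern: a plugin hard-wiring the port."""
    return f"{protocol}://{host}:{port}"

# Before: keystone on its own port, so recombination happens to work.
legacy_uri = uri_from_parts("http", "203.0.113.5", 5000)

# After: keystone served by Apache on a path; only the full URI is correct,
# and any URI rebuilt from the old parts points at a closed port.
KEYSTONE_SERVICE_URI = "http://203.0.113.5/identity"
```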


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-25 Thread Lance Bragstad
On Sat, Feb 25, 2017 at 12:47 AM, Clint Byrum  wrote:

> Excerpts from joehuang's message of 2017-02-25 04:09:45 +:
> > Hello, Matt,
> >
> > Thank you for your reply. Just as you mentioned, async replication
> should work for slowly changing data. My concern is the impact of
> replication delay, for example (though the chance of it happening is quite low):
> >
> > 1) Add a new user/group/role in RegionOne. Before the new user/group/role
> is replicated to RegionTwo, the new user begins to access RegionTwo
> services; because the data has not arrived yet, the user's request to
> RegionTwo may be rejected because token validation fails in the local Keystone.
> >
>
> I think this is entirely acceptable. You can even check with your
> monitoring system to find out what the current replication lag is to
> each region, and notify the user of how long it may take.
>
> > 2) The token revocation case: if we remove the user's role in RegionOne,
> the token in RegionOne will be invalid immediately, but before the removal
> is replicated to RegionTwo, the user can still use the token to access
> the services in RegionTwo, although only for a very short
> interval.
> >
> > Can someone evaluate whether that security risk is acceptable?
> >
>
> The simple answer is that the window between a revocation event being
> created, and being ubiquitous, is whatever the maximum replication lag
> is between regions. So if you usually have 5 seconds of replication lag,
> it will be 5 seconds. If you have a really write-heavy day, and you
> suddenly have 5 minutes of replication lag, it will be 5 minutes.
>
> The complicated component is that in async replication, reducing
> replication lag is expensive. You don't have many options here. Reducing
> writes on the master is one of them, but that isn't easy! Another is
> filtering out tables on slaves so that you only replicate the tables
> that you will be reading. But if there are lots of replication events,
> that doesn't help.
>
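The point Clint makes above reduces to a simple model: the window during which a revoked token is still honored in a remote region equals that region's replication lag. A tiny sketch (region names and lag figures are invented for illustration):

```python
# Model of the revocation-visibility window under async replication:
# a revocation created on the master may remain unseen in a region for
# as long as that region's replication lag. All figures are examples.

replication_lag_seconds = {
    "RegionOne": 0,     # the master region sees the event immediately
    "RegionTwo": 5,     # typical lag on a quiet day
    "RegionThree": 300, # lag after a write-heavy burst
}

def revocation_window(region: str) -> int:
    """Seconds a revoked token may still be accepted in this region."""
    return replication_lag_seconds[region]

def worst_case_window() -> int:
    """Upper bound across all regions -- what a security review cares about."""
    return max(replication_lag_seconds.values())
```

So the answer to joehuang's question is not a fixed number: it is whatever the monitoring system reports as the current maximum lag.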

This is a good point and something that was much more prevalent with UUID
tokens. We still write *all* the data from a UUID token to the database,
which includes the user, project, scope, possibly the service catalog,
etc... When validating a UUID token, it would be pulled from the database
and returned to the user. The information in the UUID token wasn't
confirmed at validation time. For example, if you authenticated for a UUID
token scoped to a project with the `admin` role, the role and project
information persisted in the database would reflect that. If your `admin`
role assignment was removed from the project and you validated the token,
the token reference in the database would still contain `admin` scope on
the project. At the time the approach to fixing this was to create a
revocation event that would match specific attributes of that token (i.e.
the `admin` role on that specific project). As a result, the token
validation process would pull the token from the backend, then pass it to
the revocation API and ask if the token was revoked based on any
pre-existing revocation events.
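
The attribute-matching step Lance describes can be sketched roughly as follows. The event and token shapes are heavily simplified (real keystone revocation events carry many more fields), so treat this as an illustration of the matching logic only:

```python
# Rough sketch of revocation-event matching for UUID-style tokens: a token
# is considered revoked if some stored event's attributes all match it.
# Field names here are simplified stand-ins for keystone's real schema.

def is_revoked(token: dict, events: list[dict]) -> bool:
    """True if every attribute of at least one event matches the token."""
    return any(
        all(token.get(attr) == value for attr, value in event.items())
        for event in events
    )
```

Note that every role removal added another event to scan on every validation, which is exactly the data-volume problem the rebuilt-context approach avoids.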

The fernet approach to solving this was fundamentally different because we
didn't have a token reference to pull from the backend that represented the
authorization context at authentication time (which we did have with UUID).
Instead, what we can do at validation time is decrypt the token and ask the
assignment API for role assignments given a user and project [0] and raise
a 401 if that user has no roles on the project [1]. So, by rebuilding the
authorization context at validation time, we no longer need to rely on
revocation events to enforce role revocation (but we do need them to
enforce revocation for other things with fernet). The tradeoff is that
performance degrades if you're using fernet without caching because we have
to rebuild all of that information, instead of just returning a reference
from the database. This led to us making significant improvements to our
caching implementation in keystone so that we can improve token validation
time overall, especially for fernet. As of last release, UUID tokens are
validated in exactly the same way as fernet tokens. Our team also made some
improvements listing and comparing token references in the revocation API
[2] [3] (thanks to Richard, Clint, and Ron for driving a lot of that work!).
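
The validation-time rebuild described above can be sketched as a few lines. The names here (the assignments table, `Unauthorized`) are hypothetical stand-ins, not keystone's actual internals; the point is that role removal takes effect immediately because the roles are looked up fresh at validation time:

```python
# Sketch of rebuilding the authorization context at validation time.
# ASSIGNMENTS stands in for keystone's assignment API; all names and
# data are illustrative.

ASSIGNMENTS = {
    # (user_id, project_id) -> set of role names currently assigned
    ("alice", "proj-1"): {"admin"},
    ("bob", "proj-1"): set(),  # bob's roles were removed
}

class Unauthorized(Exception):
    """Stand-in for an HTTP 401 response."""

def validate_token(user_id: str, project_id: str) -> dict:
    """Ask the assignment API for roles instead of trusting stored token data.

    If the user no longer holds any role on the project, validation fails
    on the spot -- no revocation event is needed for role removal.
    """
    roles = ASSIGNMENTS.get((user_id, project_id), set())
    if not roles:
        raise Unauthorized("user has no roles on project")
    return {"user": user_id, "project": project_id, "roles": sorted(roles)}
```

The trade-off mentioned above falls out of this shape: the lookup happens on every validation, which is why caching matters so much for fernet performance.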

Since both token formats rebuild the authorization context at validation
time, we can remove some revocation events that are no longer needed. This
means we won't be storing as many revocation events on role removal from
domains and projects. Instead we will only rely on the revocation API to
invalidate tokens for cases like specific token revocation or password
changes (the new design of validation does role assignment enforcement for
us automatically). This should reduce the amount of data being replicated
due to massive amounts of revocation events.

We do still have some more work to do on this front, but I can dig into it
and see what's left.

Re: [openstack-dev] [monasca] cassandra support in Monasca

2017-02-25 Thread Shinya Kawabata
Hi Pradeep

Basic Cassandra support is already implemented,
but there are some performance problems,
and these performance problems are difficult to fix.
So we are planning to switch to another DB and deprecate Cassandra.
KairosDB is one of the candidates, but it is not implemented yet.
KairosDB is built on Cassandra, so you might hit the same problems we had.

Regards
Shinya Kawabata

> -Original Message-
> From: Pradeep Singh [mailto:ps4openst...@gmail.com]
> Sent: Thursday, February 23, 2017 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: Kawabata Shinya ;
> santhosh.fernan...@gmail.com
> Subject: [openstack-dev][monasca] cassandra support in Monasca
> 
> Hello,
> 
> Could you please share the status of Cassandra support in Monasca?
> Is there any plan to support KairosDB?
> If it is not implemented yet, we can take on the KairosDB support work.
> 
> Thanks,
> Pradeep Singh



Re: [openstack-dev] [tripleo] PTG + Release status

2017-02-25 Thread Emilien Macchi
On Fri, Feb 24, 2017 at 8:52 PM, Emilien Macchi  wrote:
> Before leaving for the week-end, I thought it useful to share some info with the 
> team.
>
> First of all, it was awesome to meet people face to face, really. I'm
> already looking forward to the next time.
>
> About Ocata release, it seems like we're getting very close to being
> ready for RC2, probably next week. I've been discussing with the upgrade
> team and we have identified most of the blockers, which will hopefully be
> fixed next week.

I've seen quite a few services without upgrade tasks. We
need to add them asap before the RC2:
- redis
- auditd
- ceph-mds
- cinder-backup
- ec2api
- etcd
- horizon: https://review.openstack.org/#/c/437545/
- all manila services
- all mistral services
- memcached
- neutron-l3-compute-dvr
- some neutron agents (dpdk, sriov)
- libvirt
- all octavia services
- all ODL services
- zaqar
- all contrail services
- all resources managed by Pacemaker (unless I missed something?)
- panko-api

Some services were introduced in Ocata so they don't necessarily need
the upgrade tasks, though it would be great to have them before the end of
Pike, so we can test them. For the others, they are required for Ocata
RC2.

Thanks,

> Also breaking news: the upgrade job is now testing Nova upgrades from
> Newton to Ocata and we've got a successful pingtest:
> http://logs.openstack.org/55/438055/1/check/gate-tripleo-ci-centos-7-multinode-upgrades-nv/353b16a/console.html#_2017-02-25_00_58_53_067521
> First time we run an actual overcloud upgrade in CI with all our
> critical services. Big kudos to the team!
>
> Next week, I'll take some time to prepare the Launchpad for Pike
> cycle. I'll probably reach some of you regarding the blueprints we
> discussed at PTG.
> I'll also write a summary on the mailing-list of what happened this week.
>
> For those who travel this week-end, have a safe trip back home!
>
> See you online next week,
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [ceilometer][gnocchi] How to push compute node metrics to gnocchi?

2017-02-25 Thread Julien Danjou
On Sat, Feb 25 2017, Pradeep Singh wrote:

> I installed devstack and enabled compute node's cpu utilization collection
> in nova.conf, but I am seeing the errors below in the notification agent's log.
>
> 2017-02-25 08:40:15.515 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.frequency is not handled by Gnocchi
> 2017-02-25 08:40:15.530 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.idle.percent is not handled by Gnocchi
> 2017-02-25 08:40:15.545 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.idle.time is not handled by Gnocchi
> 2017-02-25 08:40:15.559 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.iowait.percent is not handled by Gnocchi
> 2017-02-25 08:40:15.574 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.iowait.time is not handled by Gnocchi
> 2017-02-25 08:40:15.589 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.kernel.percent is not handled by Gnocchi
> 2017-02-25 08:40:15.604 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.kernel.time is not handled by Gnocchi
> 2017-02-25 08:40:15.620 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.percent is not handled by Gnocchi
> 2017-02-25 08:40:15.636 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.user.percent is not handled by Gnocchi
> 2017-02-25 08:40:15.653 22662 WARNING ceilometer.dispatcher.gnocchi [-]
> metric compute.node.cpu.user.time is not handled by Gnocchi
>
> Could you please suggest what configuration I am missing?

None. It looks like these metrics are not listed in
ceilometer/dispatcher/data/gnocchi_resources.yaml, so they are not
supported by Gnocchi. That file should be modified for these metrics
to be handled.

Feel free to send a patch.
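
A toy reconstruction of the check that produces the warnings quoted above may help: the dispatcher only handles metrics that appear in gnocchi_resources.yaml, and everything else is warned about and dropped. The metric set below is an invented stand-in for the real file's contents:

```python
# Simplified model of the gnocchi dispatcher's "is this metric handled?"
# check. KNOWN_METRICS stands in for the entries parsed out of
# gnocchi_resources.yaml; the names listed are examples only.

KNOWN_METRICS = {"cpu_util", "memory.usage", "disk.read.bytes"}

def dispatch(metric_name: str) -> str:
    """Mimic the warn-and-skip behaviour for unlisted metrics."""
    if metric_name not in KNOWN_METRICS:
        return f"WARNING metric {metric_name} is not handled by Gnocchi"
    return f"dispatched {metric_name}"
```

Adding the compute.node.cpu.* metrics to the YAML file (the patch Julien suggests) is what moves them from the warning branch to the dispatch branch.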

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [ceilometer][gnocchi] getting extra keys not allowed @ data[flavor name']

2017-02-25 Thread Julien Danjou
On Sat, Feb 25 2017, Pradeep Singh wrote:

> Hello,
>
> I installed the latest devstack, and the ceilometer dispatcher is not able to
> dispatch instance data to gnocchi; in the logs I am seeing:
>
> 'extra keys not allowed @ data[flavor_name']'
>
> When i deleted line 'flavor_name: resource_metadata.(instance_type|(
> flavor.name))'
>  from gnocchi_resources.yaml, it started working.
>
> Is this a bug or am I missing something?

Did you run ceilometer-upgrade to feed Gnocchi with the new resource
types?
It looks like your resource type definitions are outdated.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [nova][cinder] Schedule Instances according to Local disk based Volume?

2017-02-25 Thread Matt Riedemann

On 2/23/2017 9:27 PM, Zhenyu Zheng wrote:

BTW, I think this can be done using new placement service, using the
custom resource provider? correct?



This came up briefly on Friday at the PTG but unfortunately jaypipes and 
cdent had already left. You should talk to them about this for more 
details. It's my understanding that at some point you'd have a resource 
provider for the compute node and a resource provider for the cinder 
volume node (same host in this case), and you'd define a type of 
"distance" attribute between them, which would be 0. Then when 
requesting the instance create, you provide the distance qualifier of 0 
meaning you want the server and volume on the same resource provider (if 
possible). The distance attribute would be generic, but for the local 
block device scenario you want it to work with 0 distance.


I'm not exactly sure how this is modeled, but I think it would be via an 
aggregate such that both the compute and volume resource providers are 
in the same aggregate association and the distance=0 is defined on the 
aggregate. If that's the case, I don't see how it makes the setup for 
the operator much less complicated than it is today where there has to 
be a matching AZ for each compute and volume node (which gets unwieldy 
when you start to have several thousand nodes). It would make the user 
experience simpler though because instead of needing to select one out 
of several thousand available AZs, I just create a server with a block 
device and specify distance=0 as a required constraint.


I'm personally not sure how close we are to functionality like that, it 
seems like that would be a ways out to me, i.e. we have a lot of other 
work to do before we get to that point.


--

Thanks,

Matt Riedemann



[openstack-dev] [QA]Refactoring Scenarios manager.py

2017-02-25 Thread Jordan Pittier
Hi guys,
So I have a problem with these 2 patches here [1] and here [2]. You
basically are blocking any attempt of refactoring manager.py. Refactoring
that file has been our number one priority for two cycles, and so far hardly
anyone has really stepped up to do the work, except me with these 2 patches.
Let me remind you that that file is a gigantic mess, and so are our network
scenarios.

The manager.py file in the scenarios directory has no stable interface, and
it was never "advertised" so. That some plugins decided to use some private
methods (such as this _get_network_by_name) is unfortunate but that should
not block us from moving.

So just to be clear, if we really want to refactor our scenarios (and we
must in my opinion), things will break for projects that are importing
Tempest and using it outside of its stable interface. I am not interested
in being the good Samaritan for the whole OpenStack galaxy, I have enough
with the 6 core projects and the Tempest stable interface. So guys, if you
don't want to go forward with [1] and [2], be sure I'll never touch
those scenarios again. I am not upset, but we have to make clear decisions,
sometimes difficult ones.

[1] : https://review.openstack.org/#/c/436555/
[2] : https://review.openstack.org/#/c/438097/


[openstack-dev] [ceilometer][gnocchi] How to push compute node metrics to gnocchi?

2017-02-25 Thread Pradeep Singh
Hello,

I installed devstack and enabled compute node's cpu utilization collection
in nova.conf, but I am seeing the errors below in the notification agent's log.

2017-02-25 08:40:15.515 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.frequency is not handled by Gnocchi
2017-02-25 08:40:15.530 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.idle.percent is not handled by Gnocchi
2017-02-25 08:40:15.545 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.idle.time is not handled by Gnocchi
2017-02-25 08:40:15.559 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.iowait.percent is not handled by Gnocchi
2017-02-25 08:40:15.574 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.iowait.time is not handled by Gnocchi
2017-02-25 08:40:15.589 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.kernel.percent is not handled by Gnocchi
2017-02-25 08:40:15.604 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.kernel.time is not handled by Gnocchi
2017-02-25 08:40:15.620 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.percent is not handled by Gnocchi
2017-02-25 08:40:15.636 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.user.percent is not handled by Gnocchi
2017-02-25 08:40:15.653 22662 WARNING ceilometer.dispatcher.gnocchi [-]
metric compute.node.cpu.user.time is not handled by Gnocchi

Could you please suggest what configuration I am missing?

Thanks,
Pradeep Singh


[openstack-dev] [ceilometer][gnocchi] getting extra keys not allowed @ data[flavor name']

2017-02-25 Thread Pradeep Singh
Hello,

I installed the latest devstack, and the ceilometer dispatcher is not able to
dispatch instance data to gnocchi; in the logs I am seeing:

'extra keys not allowed @ data[flavor_name']'

When i deleted line 'flavor_name: resource_metadata.(instance_type|(
flavor.name))'
 from gnocchi_resources.yaml, it started working.

Is this a bug or am I missing something?

I have below lines in my localrc file:

#enable plugin gnocchi
enable_plugin gnocchi https://github.com/openstack/gnocchi master
enable_service gnocchi-grafana

#enable plugin ceilometer
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer.git

Thanks,
Pradeep Singh