Re: [openstack-dev] Retirement of openstack/cloud-init repository

2016-07-29 Thread Morgan Fainberg
On Jul 29, 2016 17:13, "Joshua Harlow"  wrote:
>
> Hi all,
>
> I'd like to start the retirement (well, actually it's more of a shift) of
the openstack/cloud-init repository in favor of its new location, which
*finally* removes the old bzr version of itself.
>
> The long story is that the cloud-init folks (myself included) moved the
bzr repository to openstack/cloud-init and cloud-init 2.0 work was started
there while 0.7.x work was still done in bzr.
>
> The 0.7.x branches of openstack/cloud-init then tried to keep up with the
0.7.x work but constantly fell behind, while 2.0 work has somewhat slowed
down (not entirely stalled just yet). So, to help the whole situation, the
canonical folks (mainly scott and friends) have finally moved the old bzr
repository off of bzr: it's now connected into the launchpad git system,
all history has been moved there, and the 2.0 branch from
openstack/cloud-init is also mirrored there. So at this point there isn't
a need to have both git and bzr when one location (and one location that
can please all the folks) now exists.
>
> https://git.launchpad.net/cloud-init
>
> So sometime next week I'm going to start the move of the
openstack/cloud-init (which is outdated) to the attic and direct new people
to the new location (or perhaps we can have infra just point to that in
some kind of repo notes?).
>
> Anyways, TL;DR: git at launchpad for cloud-init, no more bzr and no need
to have an out-of-sync ~sort of~ mirror in openstack, win!
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As I recall we no longer "move" the git repositories. We simply remove the
permissions/ACLs so new reviews aren't added/approved, and often the repo
is emptied with only a readme pointing to the new location.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Retirement of openstack/cloud-init repository

2016-07-29 Thread Joshua Harlow

Hi all,

I'd like to start the retirement (well, actually it's more of a shift) 
of the openstack/cloud-init repository in favor of its new location, 
which *finally* removes the old bzr version of itself.


The long story is that the cloud-init folks (myself included) moved the 
bzr repository to openstack/cloud-init and cloud-init 2.0 work was 
started there while 0.7.x work was still done in bzr.


The 0.7.x branches of openstack/cloud-init then tried to keep up with 
the 0.7.x work but constantly fell behind, while 2.0 work has somewhat 
slowed down (not entirely stalled just yet). So, to help the whole 
situation, the canonical folks (mainly scott and friends) have finally 
moved the old bzr repository off of bzr: it's now connected into the 
launchpad git system, all history has been moved there, and the 2.0 
branch from openstack/cloud-init is also mirrored there. So at this 
point there isn't a need to have both git and bzr when one location 
(and one location that can please all the folks) now exists.


https://git.launchpad.net/cloud-init

So sometime next week I'm going to start the move of the 
openstack/cloud-init (which is outdated) to the attic and direct new 
people to the new location (or perhaps we can have infra just point to 
that in some kind of repo notes?).


Anyways, TL;DR: git at launchpad for cloud-init, no more bzr and no need 
to have an out-of-sync ~sort of~ mirror in openstack, win!


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for proxy API deprecation

2016-07-29 Thread Rochelle Grober
Thank you, Doug.  Yes, if the DefCore guidelines include any of these tests, the 
tests used by DefCore will need to be run beyond the EOL of Newton, since DefCore 
guidelines last longer than the EOL timeframe.  But first we should check which 
tests need to be capped and whether they are part of any DefCore guidelines. 
If yes, a Tempest/DefCore summit session would be good.

Thanks!
--Rocky

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Tuesday, July 26, 2016 10:46 AM
To: openstack-dev
Subject: Re: [openstack-dev] [nova] Next steps for proxy API deprecation

Excerpts from Matt Riedemann's message of 2016-07-26 12:14:03 -0500:
> On 7/26/2016 11:59 AM, Matt Riedemann wrote:
> > Now that the 2.36 microversion change has merged [1], we can work on the
> > python-novaclient changes for this microversion.
> >
> > At the midcycle we agreed [2] to also return a 404 for network APIs,
> > including nova-network (which isn't a proxy), for consistency and
> > further signaling that nova-network is going away.
> >
> > In the client, we agreed to soften the impact for network CLIs by
> > determining whether the latest supported microversion would fail (i.e.,
> > whether we would send >=2.36) and, rather than fail, sending 2.35 instead
> > (if the user didn't specifically specify a different version). However,
> > we'd emit a warning saying this is deprecated and will go away in the
> > first major client release (in Ocata? after nova-network is removed?
> > after Ocata is released?).
> >
> > We should probably just deprecate any CLIs/APIs in python-novaclient
> > today that are part of this server side API change, including network
> > CLIs/APIs in novaclient. The baremetal and image proxies in the client
> > are already deprecated, and the volume proxies were already removed.
> > That leaves the network proxies in the client.
> >
> > From my notes, Dan Smith was going to work on the novaclient changes for
> > 2.36 to not fail and use 2.35 - unless anyone else wants to volunteer to
> > do that work (please speak up).
> >
> > We can probably do the network CLI/API deprecations in the client in
> > parallel to the 2.36 support, but need someone to step up for that. I'll
> > try to get it started this week if no one else does.
> >
> > [1] https://review.openstack.org/#/c/337005/
> > [2] https://etherpad.openstack.org/p/nova-newton-midcycle
> >
> 
> I forgot to mention Tempest. We're probably going to have to put a 
> max_microversion cap in several tests in Tempest to cap at 2.35 (or 
> change those to use Neutron?). There are also going to be some response 
> schema changes, like for quota usage/limits; I'm not sure if anyone is 
> looking at this yet. We could also get it done after feature freeze on 
> 9/2, but I still need to land the get-me-a-network API change, which is 
> microversion 2.37 and has its own Tempest test, although that test 
> relies on Neutron so I might be OK for the most part.
> 

If these tests are being used by DefCore, it would be better to cap
the existing behavior and add new tests to use neutron instead of
changing the existing tests. That will make it easier for DefCore
to handle the transition from the old to new behavior by replacing
the old tests in their list with the new ones.
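For reference, capping is usually just a class attribute on the test
class; a minimal sketch, assuming Tempest's standard compute microversion
machinery (the test class name here is illustrative):

    from tempest.api.compute import base

    class NetworkProxyTestJSON(base.BaseV2ComputeTest):
        # Pin requests to <= 2.35, the last microversion before the 2.36
        # proxy API deprecation, so the existing behavior stays tested.
        max_microversion = '2.35'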

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] weird behavior of neutron create port with extra dhcp option

2016-07-29 Thread Moshe Levi
Hi,
I encountered weird behavior with the neutron port-create command.
I am using neutron master.

When I run this neutron port-create command:

stack@r-dcs88:/opt/devstack$ neutron port-create \
    --device-id=984b4a6d-a66d-4db7-8acc-1113cd1097ef \
    --device-owner=baremetal:none \
    --mac-address 7c:fe:90:29:22:4e \
    --extra-dhcp-opt 'opt_value'='ff:00:00:00:00:00:02:00:00:02:c9:00:7c:fe:90:03:00:29:22:4e','opt_name'='client-id' \
    --admin_state_up=True private

the port is created as expected, see [1].

When I create a port with the following command:

stack@r-dcs88:/opt/devstack$ neutron port-create \
    --device-id=984b4a6d-a66d-4db7-8acc-1113cd1097ef \
    --device-owner=baremetal:none \
    --mac-address 7c:fe:90:29:22:4e \
    --extra-dhcp-opt 'opt_value'='ff:00:00:00:00:00:02:00:00:02:c9:00:7c:fe:90:03:00:29:22:4e','opt_name'='client-id' \
    --vnic_type=baremetal --admin_state_up=True private

the port is created, but the neutron client shows the extra_dhcp_opts
attribute without the options, see [2]. The only difference in the command
is that I added --vnic_type=baremetal to it.
I looked in the neutron database and I can see the extra_dhcp_opts in the table.

I debugged the neutron server, and the problem is with
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L1217
which calls _extend_port_dict_extra_dhcp_opt and clears the
extra_dhcp_opts from the result variable. If I comment out this line
(_apply_dict_extend_functions), I get the correct result. Commenting it
out doesn't seem like the correct fix to me, and I have a hard time
understanding the code.
It seems that the dhcp opt code extends the result in 2 places, here [3]
and here [4]; a simplified sketch of this hook pattern follows the links
below.

Does anyone know the proper way to fix this issue? Help would be much
appreciated.


[1] - http://paste.openstack.org/show/544013/
[2] - http://paste.openstack.org/show/544014/ 
[3] - 
https://github.com/openstack/neutron/blob/master/neutron/db/extradhcpopt_db.py#L37-L53
[4] - 
https://github.com/openstack/neutron/blob/1fd1b505ca0a4dcaa7184df02dbd53ba63355083/neutron/db/extradhcpopt_db.py#L116-L121
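For readers unfamiliar with the mechanism, here is a simplified sketch of
the dict-extend hook pattern in question -- an illustration only, not the
actual neutron code (the real implementation lives in CommonDbMixin and
takes more arguments):

    # Each registered function gets a chance to mutate the result dict,
    # so two registrations for the same resource can overwrite each other.
    _extend_functions = {}

    def register_dict_extend_funcs(resource, funcs):
        _extend_functions.setdefault(resource, []).extend(funcs)

    def _apply_dict_extend_functions(resource, result, db_object):
        for func in _extend_functions.get(resource, []):
            # A later hook can clobber what an earlier one set on 'result'.
            func(result, db_object)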



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Indivisible Resource Providers

2016-07-29 Thread Jay Pipes

On 07/27/2016 10:48 AM, Sam Betts (sambetts) wrote:

While discussing the proposal to add resource_class’ to Ironic nodes for
interacting with the resource provider system in Nova with Jim on IRC, I
voiced my concern about having a resource_class per node. My thoughts
were that we could achieve the behaviour we require by every Ironic node
resource provider having a "baremetal" resource class of which they can
own a maximum of 1. Flavor’s that are required to land on a baremetal
node would then define that they require at least 1 baremetal resource,
along with any other resources they require.  For example:

Resource Provider 1 Resources:
Baremetal: 1
RAM: 256
CPUs: 4

Resource Provider 2 Resources:
Baremetal: 1
RAM: 512
CPUs: 4

Resource Provider 3 Resources:
Baremetal: 0
RAM: 0
CPUs: 0

(Resource Provider 3 has been used, so it has zero resources left)

Given the thought experiment, it seems like this would work great with
one exception: if you define 2 flavors:

Flavor 1 Required Resources:
Baremetal: 1
RAM: 256

Flavor 2 Required Resources:
Baremetal: 1
RAM: 512

Flavor 2 will only schedule onto Resource Provider 2, because it is the
only resource provider that can provide the amount of resources
required. However, Flavor 1 could potentially end up landing on Resource
Provider 2 even though it provides more RAM than is actually required.
The Baremetal resource class would prevent a second node from ever being
scheduled onto that resource provider, so scheduling more nodes doesn't
result in 2 instances on the same node, but it is an inefficient use of
resources.

To combat this inefficient use of resources, I wondered if it was
possible to add a flag to a resource provider to define that it is an
indivisible resource provider, which would prevent flavors that don’t
use up all the resources a provider provides from landing on that provider.


Hi Sam,

As Ed said, this isn't the direction we are going (in fact, it's 
essentially the situation we are trying to get ourselves *out of*). The 
new placement API has a resource provider record for each baremetal 
resource node that Ironic exposes to tenants. Each of those resource 
providers has an inventory record containing a total value of 1 for a 
resource class that identifies the type of baremetal hardware (the 
Ironic node class that is being currently introduced).


There are no inventory records for the VCPU or MEMORY_MB resource 
classes for any resource provider that is an Ironic baremetal resource 
node. The inventory is only a single unit of a dynamic resource class 
that matches the Ironic node class -- thus representing the indivisible 
nature of the baremetal resources.
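As a rough illustration (an assumption about the eventual data model, not
the final placement schema; the resource class name here is hypothetical),
such a node's only inventory record would look something like:

    ironic_node_inventory = {
        'resource_class': 'BAREMETAL_GOLD',  # from the Ironic node class
        'total': 1,        # a single, indivisible unit
        'reserved': 0,
        'min_unit': 1,
        'max_unit': 1,
        'step_size': 1,
        'allocation_ratio': 1.0,
    }
    # With no VCPU or MEMORY_MB inventory on the provider, a flavor can
    # only claim the whole node by requesting 1 unit of BAREMETAL_GOLD.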


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] establishing project-wide goals

2016-07-29 Thread Doug Hellmann
One of the outcomes of the discussion at the leadership training
session earlier this year was the idea that the TC should set some
community-wide goals for accomplishing specific technical tasks to
get the projects synced up and moving in the same direction.

After several drafts via etherpad and input from other TC and SWG
members, I've prepared the change for the governance repo [1] and
am ready to open this discussion up to the broader community. Please
read through the patch carefully, especially the "goals/index.rst"
document which tries to lay out the expectations for what makes a
good goal for this purpose and for how teams are meant to approach
working on these goals.

I've also prepared two patches proposing specific goals for Ocata
[2][3].  I've tried to keep these suggested goals for the first
iteration limited to "finish what we've started" type items, so
they are small and straightforward enough to be completed.
That will let us experiment with the process of managing goals this
time around, and set us up for discussions that may need to happen
at the Ocata summit about implementation.

For future cycles, we can iterate on making the goals "harder", and
collecting suggestions for goals from the community during the forum
discussions that will happen at summits starting in Boston.

Doug

[1] https://review.openstack.org/349068 describe a process for managing 
community-wide goals
[2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
[3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
libraries"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Jay Pipes

On 07/29/2016 04:45 PM, Chris Dent wrote:

On Fri, 29 Jul 2016, Jay Pipes wrote:

On 07/29/2016 02:31 PM, Chris Dent wrote:

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API.
All that matters is the UUID identifiers for aggregates and resource
providers.

So, add a new aggregates table in the placement DB that simply
contains an autoincrementing ID and a uuid column and insert into that
table when the placement API receives a request to associate a
resource provider to an aggregate where the placement DB doesn't have
a record of that UUID yet.


Are you thinking that to mean:

1 Use a different name for the table than 'aggregates' and also make
  it in the API db and be able to use the same code whether the system
  is configured to use a separate placement db or not.


No, such a table already exists in the API database and will continue to 
exist there.


We will want an aggregates table in the placement DB as well. For now, 
all it will store is the UUID identifier of the aggregate in the Nova 
API database.



or

2 Only add the table in the placement DB and conditionally modify
  the SQL

These both have their weaknesses. 1 duplicates some data, 2
complicates the code.

Given "All that matters is the UUID identifiers for aggregates and
resource providers" why not stick uuids in resource_provider_aggregates
(whichever database it is in) and have the same code and same
schema? The current resource_provider_aggregates won't have anything
in it, will it?


Because integer keys are a whole lot faster and more efficient than 
CHAR(36) keys. :)



Or do we need three tables (resource provider, resource provider
aggregates, something with a name close to aggregates) in order to
be able to clam shell? If that's the case I'd prefer option 1.


Well, the clam shell join actually doesn't come into play with this 
aggregates table in the placement DB. The aggregates table in the 
placement DB will do nothing other than look up the 
internal-to-the-placement-DB integer ID of the aggregate given a UUID value.


So, literally, all we need in the placement DB is this:

CREATE TABLE aggregates (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  uuid CHAR(36) NOT NULL,
  UNIQUE INDEX (uuid)
);
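And a minimal sketch of the corresponding lookup-or-insert, assuming
SQLAlchemy core and the table above (function and variable names are
illustrative, not actual nova code):

    from sqlalchemy import select

    def ensure_aggregate(conn, aggregates_tbl, agg_uuid):
        # Return the internal integer id for agg_uuid, inserting a row the
        # first time the placement API sees this aggregate UUID. A real
        # implementation would also handle the duplicate-key race.
        row = conn.execute(
            select([aggregates_tbl.c.id]).where(
                aggregates_tbl.c.uuid == agg_uuid)).fetchone()
        if row:
            return row[0]
        res = conn.execute(aggregates_tbl.insert().values(uuid=agg_uuid))
        return res.inserted_primary_key[0]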

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread Rochelle Grober
Just an FYI that might be the reason for the 14400:

1440 is the number of minutes in a day.  14400 would be the number of tenths 
of a minute in a day, i.e. the number of 6-second chunks (86400 s / 6 s = 14400).

So the number was likely picked to divide files into human-logical, not 
computer-logical, chunks.

--Rocky

-Original Message-
From: gordon chung [mailto:g...@live.ca] 
Sent: Thursday, July 28, 2016 3:05 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [gnocchi] typical length of timeseries data

hi folks,

this is probably something to discuss on the ops list as well eventually, but 
what do you think about shrinking the max size of timeseries chunks from 
14400 to something smaller? i'm curious to understand what the length of 
the typical timeseries is. my main reason for bringing this up is that 
even our default 'high' policy doesn't reach the 14400 limit, so it will at 
most split into two partially filled objects. as we look to make a 
more efficient storage format for v3(?), this seems like it may be an 
opportunity to change the size as well (if necessary).

14400 points roughly equals a 128KB object, which is cool, but maybe we 
should target something smaller? 7200 points aka 64KB? 3600 points aka 
32KB? just for reference, our biggest default series is 10080 points 
(1 min granularity over a week).
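a quick back-of-envelope check of those numbers, assuming the ~9 bytes per
point that "14400 points ~= 128KB" implies:

    BYTES_PER_POINT = 128.0 * 1024 / 14400   # ~9.1 bytes per stored point
    for points in (14400, 10080, 7200, 3600):
        print('%5d points -> ~%dKB' % (points, points * BYTES_PER_POINT / 1024))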

that said, 128KB (at most) might not be that bad from a read/write pov and 
maybe it's ok to keep it at 14400? i know from the test i did earlier that 
the time required to read/write increases linearly (a 7200 point object 
takes roughly half the time of a 14400 point object) [1]. i think the main 
item is we don't want it so small that we're updating multiple objects at a 
time.

[1] http://www.slideshare.net/GordonChung/gnocchi-profiling-v2/25

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Chris Dent

On Fri, 29 Jul 2016, Jay Pipes wrote:

On 07/29/2016 02:31 PM, Chris Dent wrote:

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API. All that 
matters is the UUID identifiers for aggregates and resource providers.


So, add a new aggregates table in the placement DB that simply contains an 
autoincrementing ID and a uuid column and insert into that table when the 
placement API receives a request to associate a resource provider to an 
aggregate where the placement DB doesn't have a record of that UUID yet.


Are you thinking that to mean:

1 Use a different name for the table than 'aggregates' and also make
  it in the API db and be able to use the same code whether the system
  is configured to use a separate placement db or not.

or

2 Only add the table in the placement DB and conditionally modify
  the SQL

These both have their weaknesses. 1 duplicates some data, 2
complicates the code.

Given "All that matters is the UUID identifiers for aggregates and
resource providers" why not stick uuids in resource_provider_aggregates
(whichever database it is in) and have the same code and same
schema? The current resource_provider_aggregates won't have anything
in it, will it?

Or do we need three tables (resource provider, resource provider
aggregates, something with a name close to aggregates) in order to
be able to clam shell? If that's the case I'd prefer option 1.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Joshua Harlow
Woot, ya there are a bunch of nice features, although I will admit after 
doing some plugins there are also a few things that are sort of what I 
would call oddities as well (but meh, nothing is perfect, ha).


Reviews up for a new project for all these (plugins and probably a main 
bot entrypoint/main):


- https://review.openstack.org/#/c/349046/
- https://review.openstack.org/#/c/348998/

I'm thinking I'll just move over 
https://github.com/harlowja/gerritbot2/blob/master/plugins/gerritbot/gerritbot.py 
after that, and we can work through some of the oddities to make that 
plugin map better onto the existing gerritbot (which has its own 
unique/special configuration yaml style), and then we can work through 
various other plugins at the same time (with meetbot being the 
complicated one) and then 


-Josh

David Moreau Simard wrote:

FWIW I can vouch for the quality of Errbot, I've used it on several
occasions and we're currently using it in the RDO community.

A very useful feature that I like is the webserver hook integration.
This allows the bot to essentially expose an endpoint and you can send
things to it.
For example, we have a sensu_event [1] endpoint and we have a sensu
handler that sends events to it, the bot then sends an alert to our
IRC channels.

[1]: 
https://github.com/rdo-infra/rdobot/blob/master/rdobot/plugins/sensu/errbot-sensu.py#L95

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Fri, Jul 29, 2016 at 1:49 AM, Joshua Harlow  wrote:

Hi folks,

I was thinking it might be useful to see what other folks think about
switching (or migrating all the current bots we have in openstack) to be
based on errbot plugins.

Errbot @ http://errbot.io/en/latest/ takes a slightly different approach to
bots and treats each bot 'feature' as a plugin that can be activated and
deactivated with-in the context of the same bot (even doing so
dynamically/at runtime).

It also allows those that use slack (or another backend @
http://errbot.io/en/latest/features.html) to 'seamlessly' use the
same plugins, just switching a tiny amount of config to use a different 'bot
backend'.

I've been experimenting with it more recently and have a gerritbot (sort of
equivalent) @ https://github.com/harlowja/gerritbot2 and also have been
working on a oslobot plugin @ https://review.openstack.org/#/c/343857/ and
during this exploration it has gotten me to think that we could move most of
the functionality of the various bots in openstack (patchbot, openstack -
really meetbot, gerritbot and others?) under the same umbrella (or at least
convert them into plugins that folks can run on IRC, or if they want to run
them on some other backend that's cool too).

The hardest one I can think would be meetbot, although the code @
https://github.com/openstack-infra/meetbot doesn't look impossible (or
really that hard to convert to an errbot plugin).

What do people think?

Any strong preference?

I was also thinking that as a result we could then just have a single
'openstack' bot and also turn on plugins like:

- https://github.com/aherok/errbot_plugins (helps with timezone conversions
that might be useful to have for folks that keep on getting them wrong).
- some stackalytics integration bot?
- something even better???
- some other plugin @ https://github.com/errbotio/errbot/wiki

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Jay Pipes

On 07/29/2016 02:31 PM, Chris Dent wrote:

On Thu, 28 Jul 2016, Jay Pipes wrote:

The decision at the mid-cycle was to add a new
placement_sql_connection configuration option to the nova.conf. The
default value would be None which would mean the code in
nova/objects/resource_provider.py would default to using the API
database setting.


I've been working on this with Roman Podoliaka. We've made some
reasonable progress but I'm hitting a bump in the road that we
may wish to make a decision about sooner than later. I mentioned
this before but forgot to remember it as actually important and it
got lost in the sturm und drang.

When resource providers live in the api database they will be in
there with the aggregates and the resource_provider_aggregates
table, which looks essentially like this

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id INTEGER NOT NULL,
PRIMARY KEY (resource_provider_id, aggregate_id)
);

will make great sense: We can join across this to the aggregates
table to get the aggregates or aggregate uuids that are associated
with a resource provider.

If we use a separate placement db for resource providers there's as
yet no aggregate table to join with across that
resource_provider_aggregates table.

To deal with this do we:

* Give up for now on the separate placement_sql_connection?


No.


* Change resource_provider_aggregates to:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id VARCHAR(36) NOT NULL, # a uuid
PRIMARY KEY (resource_provider_id, aggregate_id)
);


Also no.


  in the migrations and models used by both the api and placement
  dbs?

  This could work because as I recall what we really care about is that
  there is an aggregation of some resource providers with some other
  resource providers, not the details of the Aggregate object.

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.


Yes, this.

The integer ID values aren't relevant outside of the placement API. All 
that matters is the UUID identifiers for aggregates and resource providers.


So, add a new aggregates table in the placement DB that simply contains 
an autoincrementing ID and a uuid column and insert into that table when 
the placement API receives a request to associate a resource provider to 
an aggregate where the placement DB doesn't have a record of that UUID yet.


Best,
-jay


* Hoops I don't want to think about for aggregates in both tables?

* Some other solution I'm not thinking of.

* Actually you're wrong Chris, this isn't an issue because [please
  fill in the blank here].

A few of these seem rather less than great.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] service validation during deployment steps

2016-07-29 Thread Emilien Macchi
On Wed, Jul 27, 2016 at 4:25 AM, Steven Hardy  wrote:
> Hi Emilien,
>
> On Tue, Jul 26, 2016 at 03:59:33PM -0400, Emilien Macchi wrote:
>> I would love to hear some feedback about $topic, thanks.
>
> Sorry for the slow response, we did dicuss this on IRC, but providing that
> feedback and some other comments below:
>
>> On Fri, Jul 15, 2016 at 11:31 AM, Emilien Macchi  wrote:
>> > Hi,
>> >
>> > Some people on the field brought interesting feedback:
>> >
>> > "As a TripleO User, I would like the deployment to stop immediately
>> > after a resource creation failure during a step of the deployment and
>> > be able to easily understand what service or resource failed to be
>> > installed".
>> >
>> > Example:
>> > If during step 4 Puppet tries to deploy Neutron and OVS, but OVS fails
>> > to start for some reason, the deployment should stop at the end of the
>> > step.
>
> I don't think anyone will argue against this use-case, we absolutely want
> to enable a better "fail fast" for deployment problems, as well as better
> surfacing of why it failed.
>
>> > So there are 2 things in this user story:
>> >
>> > 1) Be able to run some service validation within a step deployment.
>> > Note about the implementation: make the validation composable per
>> > service (OVS, nova, etc) and not per role (compute, controller, etc).
>
> +1, now we have composable services we need any validations to be
> associated with the services, not the roles.
>
> That said, it's fairly easy to imagine an interface like
> step_config/config_settings could be used to wire in composable service
> validations on a per-role basis, e.g similar to what we do here, but
> per-step:
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud.yaml#L1144
>
> Similar to what was proposed (but never merged) here:
>
> https://review.openstack.org/#/c/174150/15/puppet/controller-post-puppet.yaml
>
>> > 2) Make this information readable and easy to access and understand
>> > for our users.
>> >
>> > I have a proof-of-concept for 1) and partially 2), with the example of
>> > OVS: https://review.openstack.org/#/c/342202/
>> > This patch will make sure OVS is actually usable at step 4 by running
>> > 'ovs-vsctl show' during the Puppet catalog and if it's working, it
>> > will create a Puppet anchor. This anchor is currently not useful but
>> > could be in future if we want to rely on it for orchestration.
>> > I wrote the service validation in Puppet 2 years ago when doing Spinal
>> > Stack with eNovance:
>> > https://github.com/openstack/puppet-openstacklib/blob/master/manifests/service_validation.pp
>> > I think we could re-use it very easily, it has been proven to work.
>> > Also, the code is within our Puppet profiles, so it's by design
>> > composable and we don't need to make any connection with our current
>> > services with some magic. Validation will reside within Puppet
>> > manifests.
>> > If you look my PoC, this code could even live in puppet-vswitch itself
>> > (we already have this code for puppet-nova, and some others).
>
> I think having the validations inside the puppet implementation is OK, but
> ideally I think we do want it to be part of the puppet modules themselves
> (not part of the puppet-tripleo abstraction layer).
>
> The issue I'd have with putting it in puppet-tripleo is that if we're going
> to do this in a tripleo specific way, it should probably be done via a
> method that's more config tool agnostic.  Otherwise we'll have to recreate
> the same validations for future implementations (I'm thinking specifically
> about containers here, and possibly ansible[1].
>
> So, in summary, I'm +1 on getting this integrated if it can be done with
> little overhead and it's something we can leverage via the puppet modules
> vs puppet-tripleo.
>
>> >
>> > Ok now, what if validation fails?
>> > I'm testing it here: https://review.openstack.org/#/c/342205/
>> > If you look at /var/log/messages, you'll see:
>> >
>> > Error: 
>> > /Stage[main]/Tripleo::Profile::Base::Neutron::Ovs/Openstacklib::Service_validation[openvswitch]/Exec[execute
>> > openvswitch validation]/returns: change from notrun to 0 failed
>> >
>> > So it's pretty clear by looking at logs that openvswitch service
>> > validation failed and something is wrong. You'll also notice in the
>> > logs that the deployment stopped at step 4, since OVS is not considered
>> > to be running.
>> > It's partially addressing 2) because we need to make it more explicit
>> > and readable. Dan Prince had the idea to use
>> > https://github.com/ripienaar/puppet-reportprint to print a nice report
>> > of Puppet catalog result (we haven't tried it yet). We could also use
>> > Operational Tools later to monitor Puppet logs and find Service
>> > validation failures.
>
> This all sounds good, but we do need to think beyond the puppet
> implementation, e.g how will we enable similar validations in a container
> based deployment?
>
> I remember SpinalStack also 

Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread gordon chung


On 29/07/2016 12:20 PM, Julien Danjou wrote:
> On Fri, Jul 29 2016, gordon chung wrote:
>
>> so at first glance, it doesn't really seem to affect performance much
>> whether it's one 'larger' file or many smaller files.
>
> I guess it's because your storage system latency (file?) does not make a
> difference. I imagine that over Swift or Ceph, it might change things a
> bit.
>
> If you add time.sleep(1) in _get_measures(), you'd see a difference. ;)
>

i'm using Ceph. but i should mention i also only have 1 thread enabled 
because python+threading is... yeah.

i'll give it a try again with threads enabled.

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [nova] [neutron] get_all_bw_counters in the Ironic virt driver

2016-07-29 Thread Sean Dague
On 07/29/2016 02:29 PM, Jay Pipes wrote:
> On 07/28/2016 09:02 PM, Devananda van der Veen wrote:
>> On 07/28/2016 05:40 PM, Brad Morgan wrote:
>>> I'd like to solicit some advice about potentially implementing
>>> get_all_bw_counters() in the Ironic virt driver.
>>>
>>> https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L438
>>> Example Implementation:
>>> https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L320
>>>
>>>
>>> I'm ignoring the obvious question about how this data will actually be
>>> collected/fetched as that's probably it's own topic (involving
>>> neutron), but I
>>> have a few questions about the Nova -> Ironic interaction:
>>>
>>> Nova
>>> * Is get_all_bw_counters() going to stick around for the foreseeable
>>> future? If
>>> not, what (if anything) is the replacement?
> 
> I don't think Nova should be in the business of monitoring *any*
> transient metrics at all.
> 
> There are many tools out there -- Nagios, collectd, HEKA, Snap, gnocchi,
> monasca just to name a few -- that can do this work.
> 
> What action is taken if some threshold is reached is entirely
> deployment-dependent and not something that Nova should care about. Nova
> should just expose an API for other services to use to control the guest
> instances under its management, nothing more.

More importantly... *only* the xenapi driver implements this, and it's not
exposed over the API. In reality that part of the virt driver layer
should probably be removed.

Like jay said, there are better tools for collecting this than Nova.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread David Moreau Simard
FWIW I can vouch for the quality of Errbot, I've used it on several
occasions and we're currently using it in the RDO community.

A very useful feature that I like is the webserver hook integration.
This allows the bot to essentially expose an endpoint and you can send
things to it.
For example, we have a sensu_event [1] endpoint and we have a sensu
handler that sends events to it, the bot then sends an alert to our
IRC channels.

[1]: 
https://github.com/rdo-infra/rdobot/blob/master/rdobot/plugins/sensu/errbot-sensu.py#L95
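A minimal sketch of that webserver hook pattern, assuming errbot's
documented BotPlugin/webhook API (the endpoint name and channel here are
illustrative, not the actual rdobot code):

    from errbot import BotPlugin, webhook

    class SensuAlerts(BotPlugin):

        @webhook('/sensu_event')
        def sensu_event(self, payload):
            # errbot's built-in webserver routes POSTs to /sensu_event
            # here; relay the incoming event to an IRC channel.
            self.send(self.build_identifier('#example'),
                      'sensu alert: {0}'.format(payload))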

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Fri, Jul 29, 2016 at 1:49 AM, Joshua Harlow  wrote:
> Hi folks,
>
> I was thinking it might be useful to see what other folks think about
> switching (or migrating all the current bots we have in openstack) to be
> based on errbot plugins.
>
> Errbot @ http://errbot.io/en/latest/ takes a slightly different approach to
> bots and treats each bot 'feature' as a plugin that can be activated and
> deactivated with-in the context of the same bot (even doing so
> dynamically/at runtime).
>
> It also allows those that use slack (or another backend @
> http://errbot.io/en/latest/features.html) to 'seamlessly' use the
> same plugins, just switching a tiny amount of config to use a different 'bot
> backend'.
>
> I've been experimenting with it more recently and have a gerritbot (sort of
> equivalent) @ https://github.com/harlowja/gerritbot2 and also have been
> working on a oslobot plugin @ https://review.openstack.org/#/c/343857/ and
> during this exploration it has gotten me to think that we could move most of
> the functionality of the various bots in openstack (patchbot, openstack -
> really meetbot, gerritbot and others?) under the same umbrella (or at least
> convert them into plugins that folks can run on IRC, or if they want to run
> them on some other backend that's cool too).
>
> The hardest one I can think would be meetbot, although the code @
> https://github.com/openstack-infra/meetbot doesn't look impossible (or
> really that hard to convert to an errbot plugin).
>
> What do people think?
>
> Any strong preference?
>
> I was also thinking that as a result we could then just have a single
> 'openstack' bot and also turn on plugins like:
>
> - https://github.com/aherok/errbot_plugins (helps with timezone conversions
> that might be useful to have for folks that keep on getting them wrong).
> - some stackalytics integration bot?
> - something even better???
> - some other plugin @ https://github.com/errbotio/errbot/wiki
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-29 Thread Chris Dent

On Thu, 28 Jul 2016, Jay Pipes wrote:
The decision at the mid-cycle was to add a new placement_sql_connection 
configuration option to the nova.conf. The default value would be None which 
would mean the code in nova/objects/resource_provider.py would default to 
using the API database setting.


I've been working on this with Roman Podoliaka. We've made some
reasonable progress but I'm hitting a bump in the road that we
may wish to make a decision about sooner than later. I mentioned
this before but forgot to remember it as actually important and it
got lost in the sturm und drang.

When resource providers live in the api database they will be in
there with the aggregates and the resource_provider_aggregates
table, which looks essentially like this

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id INTEGER NOT NULL,
PRIMARY KEY (resource_provider_id, aggregate_id)
);

will make great sense: We can join across this to the aggregates
table to get the aggregates or aggregate uuids that are associated
with a resource provider.

If we use a separate placement db for resource providers there's as
yet no aggregate table to join with across that
resource_provider_aggregates table.

To deal with this do we:

* Give up for now on the separate placement_sql_connection?

* Change resource_provider_aggregates to:

CREATE TABLE resource_provider_aggregates (
resource_provider_id INTEGER NOT NULL,
aggregate_id VARCHAR(36) NOT NULL, # a uuid
PRIMARY KEY (resource_provider_id, aggregate_id)
);

  in the migrations and models used by both the api and placement
  dbs?

  This could work because as I recall what we really care about is that
  there is an aggregation of some resource providers with some other
  resource providers, not the details of the Aggregate object.

* resource_provider_aggregates as it was plus a new small aggregate
  id<->uuid mapping table.

* Hoops I don't want to think about for aggregates in both tables?

* Some other solution I'm not thinking of.

* Actually you're wrong Chris, this isn't an issue because [please
  fill in the blank here].

A few of these seem rather less than great.

--
Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [nova] [neutron] get_all_bw_counters in the Ironic virt driver

2016-07-29 Thread Jay Pipes

On 07/28/2016 09:02 PM, Devananda van der Veen wrote:

On 07/28/2016 05:40 PM, Brad Morgan wrote:

I'd like to solicit some advice about potentially implementing
get_all_bw_counters() in the Ironic virt driver.

https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L438
Example Implementation:
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L320

I'm ignoring the obvious question about how this data will actually be
collected/fetched as that's probably its own topic (involving neutron), but I
have a few questions about the Nova -> Ironic interaction:

Nova
* Is get_all_bw_counters() going to stick around for the foreseeable future? If
not, what (if anything) is the replacement?


I don't think Nova should be in the business of monitoring *any* 
transient metrics at all.


There are many tools out there -- Nagios, collectd, HEKA, Snap, gnocchi, 
monasca just to name a few -- that can do this work.


What action is taken if some threshold is reached is entirely 
deployment-dependent and not something that Nova should care about. Nova 
should just expose an API for other services to use to control the guest 
instances under its management, nothing more.


Best,
-jay

p.s. Related: I don't think Nova should be in the business of 
implementing service group membership functionality either. We should be 
using Zookeeper or another system that was built for that purpose 
instead of continuing to maintain a wonky RDBMS-based home-grown service 
group system.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-07-29 Thread Adrian Otto
s/mentally/centrally/

Autocorrect is not my friend.

On Jul 29, 2016, at 11:26 AM, Adrian Otto 
> wrote:

Yasmin,

One option you have is to use the libvirt-lxc nova virt driver, and use an 
image that has a docker daemon installed on it. That would give you a way to 
place docker containers on a data plane that uses no virtualization, but you 
need to individually manage each instance. Another option is to add Magnum to 
your cloud (with or without a libvirt-lxc nova virt driver) and use Magnum to 
mentally manage each container cluster. We refer to such clusters as bays.

Adrian

On Jul 29, 2016, at 11:01 AM, Yasemin DEMİRAL (BİLGEM BTE) 
> wrote:


nova-docker is a dead project, as I learned on the IRC channel.
I need a hypervisor for nova, and I can't install nova-docker on physical 
openstack systems. In devstack, I could deploy nova-docker.
What can I do? Is the openstack-magnum or openstack-zun project useful for me? I 
don't know.
Do you have any ideas?

Yasemin Demiral
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-07-29 Thread Adrian Otto
Yasmin,

One option you have is to use the libvirt-lxc nova virt driver, and use an 
image that has a docker daemon installed on it. That would give you a way to 
place docker containers on a data plane that uses no virtualization, but you 
need to individually manage each instance. Another option is to add Magnum to 
your cloud (with or without a libvirt-lxc nova virt driver) and use Magnum to 
mentally manage each container cluster. We refer to such clusters as bays.

Adrian

On Jul 29, 2016, at 11:01 AM, Yasemin DEMİRAL (BİLGEM BTE) 
> wrote:


nova-docker is a dead project, as I learned on the IRC channel.
I need a hypervisor for nova, and I can't install nova-docker on physical 
openstack systems. In devstack, I could deploy nova-docker.
What can I do? Is the openstack-magnum or openstack-zun project useful for me? I 
don't know.
Do you have any ideas?

Yasemin Demiral
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Joshua Harlow
Ya, I was looking for how these get downloaded/assembled into an actual 
bot, and from looking at errbot there appear to be a couple of approaches: 
some just do '!repos install https://github.com/YaroslavMolchan/lctv' 
while others I've seen just install them and then set up the config.py 
that errbot uses to ensure they are loaded.


It will probably be a little bit of learning to figure out what is the 
best mechanism here.


Other things I've noticed is that errbot, to work with various backends, 
actually uses an interesting approach to message formatting. All 
messages that you write to self.send() in an errbot plugin are expected to 
be markdown, and then at the individual backend layer there is a 
translation from that markdown to the backend's supported 'syntax' (which 
for slack is a semi-limited version of markdown, and for IRC is, well, 
not any version of markdown, ha).


So you'll see things like the following that do this conversion:

https://github.com/errbotio/errbot/blob/master/errbot/backends/slack.py#L66
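For example, a plugin command just returns markdown and lets each backend
render it (a hedged sketch using errbot's documented plugin API; the class
and command names are illustrative):

    from errbot import BotPlugin, botcmd

    class Demo(BotPlugin):

        @botcmd
        def status(self, msg, args):
            # Returned text is treated as markdown; each backend then
            # renders it in its own syntax (slack mrkdwn, bare IRC, ...).
            return '**All systems** are *go*.'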

-Josh

Doug Hellmann wrote:

Excerpts from Joshua Harlow's message of 2016-07-29 10:35:18 -0700:

I prefer 'one bucket repo for OpenStack community Errbot plug-ins' since
I don't like a bunch of repos (seems like a premature optimization ~at
this time~), but I could see either way on this one.

Jeremy Stanley wrote:

On 2016-07-29 09:41:40 -0700 (-0700), Joshua Harlow wrote:
[...]

What shall we name it???

[...]

Also, one bucket repo for OpenStack community Errbot plug-ins, or
one repo per plug-in with a consistent naming scheme?


I agree. How about "openstack/irc-bot-plugins"? If we need to build an
artifact we can name that openstack-irc-bot-plugins and if we don't then
we can just install directly from the git repo (the docs for errbot talk
about installing from github, so I'm not sure what the "best practice"
is for that).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-07-29 Thread BİLGEM BTE

nova-docker is a dead project, as I learned on the IRC channel. 
I need a hypervisor for nova, and I can't install nova-docker on physical 
openstack systems. In devstack, I could deploy nova-docker. 
What can I do? Is the openstack-magnum or openstack-zun project useful for me? I 
don't know. 
Do you have any ideas? 

Yasemin Demiral 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-29 Thread Hongbin Lu
Hi all,

Thanks for your votes. Based on the feedback, I added Spyros to the Magnum core 
team [1].

[1] https://review.openstack.org/#/admin/groups/473,members

Best regards,
Hongbin

> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: July-25-16 10:58 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Proposing Spyros Trigazis for
> Magnum core reviewer team
> 
> +1 great addition to the team
> 
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Friday, 22 July 2016 at 21:27
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum
> core reviewer team
> 
> Hi all,
> 
> Spyros has consistently contributed to Magnum for a while. In my
> opinion, what differentiate him from others is the significance of his
> contribution, which adds concrete value to the project. For example,
> the operator-oriented install guide he delivered attracts a significant
> number of users to install Magnum, which facilitates the adoption of
> the project. I would like to emphasize that the Magnum team has been
> working hard but struggling to increase the adoption, and Spyros’s
> contribution means a lot in this regards. He also completed several
> essential and challenging tasks, such as adding support for OverlayFS,
> adding Rally job for Magnum, etc. In overall, I am impressed by the
> amount of high-quality patches he submitted. He is also helpful in code
> reviews, and his comments often help us identify pitfalls that are not
> easy to identify. He is also very active in IRC and ML. Based on his
> contribution and expertise, I think he is qualified to be a Magnum core
> reviewer.
> 
> I am happy to propose Spyros to be a core reviewer of Magnum team.
> According to the OpenStack Governance process [1], we require a minimum
> of 4 +1 votes from Magnum core reviewers within a 1 week voting window
> (consider this proposal as a +1 vote from me). A vote of -1 is a veto.
> If we cannot get enough votes or there is a veto vote prior to the end
> of the voting window, Spyros is not able to join the core team and
> needs to wait 30 days to reapply.
> 
> The voting is open until Friday, July 29th.
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> Best regards,
> Hongbin
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-07-29 10:35:18 -0700:
> I prefer 'one bucket repo for OpenStack community Errbot plug-ins' since 
> I don't like a bunch of repos (seems like a premature optimization ~at 
> this time~), but I could see either way on this one.
> 
> Jeremy Stanley wrote:
> > On 2016-07-29 09:41:40 -0700 (-0700), Joshua Harlow wrote:
> > [...]
> >> What shall we name it???
> > [...]
> >
> > Also, one bucket repo for OpenStack community Errbot plug-ins, or
> > one repo per plug-in with a consistent naming scheme?
> 

I agree. How about "openstack/irc-bot-plugins"? If we need to build an
artifact we can name that openstack-irc-bot-plugins and if we don't then
we can just install directly from the git repo (the docs for errbot talk
about installing from github, so I'm not sure what the "best practice"
is for that).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-07-29 Thread Matt Riedemann

On 7/29/2016 10:47 AM, Znoinski, Waldemar wrote:

Hi Matt et al,
Thanks for taking the time to have a chat about it in Nova meeting yesterday.
In relation to your two points below...

1. The tempest-dsvm-ovsdpdk-nfv-networking job in our Intel NFV CI was broken for 
about a day until we tracked the issue down and found that the merge of this [1] 
change had started causing our troubles.
We set Q_USE_PROVIDERNET_FOR_PUBLIC back to False to let the job go green 
again and test what it should be testing - nova/neutron changes - rather than 
giving false negatives because of that devstack change.
We saw a revert [2] of the above change shortly after, as it was also breaking 
the Jenkins neutron linuxbridge tempest job [3].

2. Our aim is to have two things tested when a new change is proposed to 
devstack: NFV and OVS+DPDK. For better clarity we'll run two separate jobs 
instead of having NFV+OVSDPDK together.
Currently we run OVSDPDK+ODL on devstack changes to discover potential issues 
with configuring these two together with each proposed devstack change. We've 
discussed this internally, and we can add (or replace the OVSDPDK+ODL job with) a 
'tempest-dsvm-full-nfv' job (currently running on Nova changes) that does 
devstack + runs the full tempest test suite (1100+ tests) on NFV-enabled flavors. 
It should properly test proposed devstack changes against the NFV features (as 
per the wiki [4]) we have enabled in Openstack.

Let me know if there are other questions, concerns, asks or suggestions.

Thanks
Waldek


[1] https://review.openstack.org/#/c/343072/
[2] https://review.openstack.org/#/c/345820/
[3] https://bugs.launchpad.net/devstack/+bug/1605423
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI


 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Thursday, July 28, 2016 4:14 PM
 >To: openstack-dev@lists.openstack.org
 >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >
 >On 7/21/2016 5:38 AM, Znoinski, Waldemar wrote:
 >> Hi Nova cores et al,
 >>
 >>
 >>
 >> I would like to acquire voting (+/-1 Verified) permission for our
 >> Intel NFV CI.
 >>
 >>
 >>
 >> 1.   It's been running since Q1'2015.
 >>
 >> 2.   Wiki [1].
 >>
 >> 3.   It's using openstack-infra/puppet-openstackci
 >>  with Zuul
 >> 2.1.1 for the last 4 months: zuul, gearman, Jenkins, nodepool, local OpenStack
 >cloud.
 >>
 >> 4.   We have a team of 2 people + me + Nagios looking after it. Its
 >> problems are fixed promptly and rechecks triggered after non-code
 >> related issues. It's being reconciled against ci-watch [2].
 >>
 >> 5.   Reviews [3].
 >>
 >>
 >>
 >> Let me know if there are further questions.
 >>
 >>
 >>
 >> 1.   https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 >>
 >> 2.   http://ci-watch.tintri.com/project?project=nova
 >>
 >> 3.
 >> https://review.openstack.org/#/q/reviewer:%22Intel+NFV-
 >CI+%253Copensta
 >> ck-nfv-ci%2540intel.com%253E%22
 >>
 >>
 >>
 >>
 >>
 >>
 >> *Waldek*
 >>
 >>
 >>
 >> --
 >> Intel Research and Development Ireland Limited Registered in Ireland
 >> Registered Office: Collinstown Industrial Park, Leixlip, County
 >> Kildare Registered Number: 308263
 >>
 >> This e-mail and any attachments may contain confidential material for
 >> the sole use of the intended recipient(s). Any review or distribution
 >> by others is strictly prohibited. If you are not the intended
 >> recipient, please contact the sender and delete all copies.
 >>
 >>
 >>
 >>
 >
 >We talked about this in the nova meeting today. I don't have a great grasp on
 >how the Intel NFV CI has been performing, but making it voting will help with
 >that. Looking at the 7 day results:
 >
 >http://ci-watch.tintri.com/project?project=nova=7+days
 >
 >Everything looks pretty good except for tempest-dsvm-ovsdpdk-nfv-
 >networking but Waldemar pointed out there was a change in devstack that
 >broke the CI for a day or so:
 >
 >https://github.com/openstack-
 >dev/devstack/commit/130a11f8aaf08ea529b6ce60dd9052451cb7bb5c
 >
 >I would like to know a little more about why we don't run the Intel NFV CI on
 >devstack changes to catch stuff like this before it becomes a breaking
 >problem. The team worked around it for now, but it is a concern of mine. I
 >think at least the Xen and PowerKVM CIs also run on devstack changes to
 >avoid problems like this.
 >
 >So please give me some details on running against devstack changes and
 >then I'll ack or nack the request.
 >
 >--
 >
 >Thanks,
 >
 >Matt Riedemann
 >
 >

Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-07-29 Thread Matt Riedemann

On 7/29/2016 12:32 PM, Sean Dague wrote:

On 07/28/2016 05:38 PM, Matt Riedemann wrote:

On 7/28/2016 3:55 PM, Matt Riedemann wrote:

For os-attach-interfaces, we need that to attach/detach interfaces to a
server, so those actions don't go away with 2.36. We can also list and
show interfaces (ports) which is a proxy to neutron, but in this case it
seems a tad bit necessary, else to list ports for a given server you
have to know to list ports via neutron CLI and filter on
device_id=server.uuid.


On second thought, we could drop the proxy APIs to list/show ports for a
given server. python-openstackclient could have a convenience CLI for
listing ports for a server. And the show in os-attach-interfaces takes a
server id but it's not used, so it's basically pointless and should just
be replaced with neutron.

The question is, as these are proxies and the 2.36 microversion was for
proxy API deprecation, can we still do those in 2.36 even though it's
already merged? Or do they need to be 2.37? That seems like the more
accurate thing to do, but then we really have some weird "which is the
REAL proxy API microversion?" logic going on.

I think we could move forward with deprecation in novaclient either way.


We should definitely move forward with novaclient CLI deprecations.

We've said that microversions are idempotent, so fixing one in this case
isn't really what we want to do; it should just be another bump, with
things we apparently missed. I'm not sure it's super important that
there is a REAL proxy API microversion. We got most of it in one go, and
we can catch the stragglers in 2.39 (let's make that the last
merged one before the release so that we can figure out anything else we
missed, and keep 'get me a network' as 2.37).

-Sean



That works for me.

I'm on the fence about deprecating os-virtual-interfaces; it would 
actually work for both nova-network and neutron now, since for neutron 
we're now also creating VirtualInterface records in the nova database.


If it were just nova-network, it would be a no-brainer since that's 
deprecated and going away. But it could be a useful API still if we 
updated it to return VIF tags (added in 2.32). It wouldn't be a proxy to 
Neutron; we could literally pull up the nova.network.api code into 
nova.network.base_api to make that work (but it would need a new 
microversion).


At this point that kind of change would be an Ocata thing, but deciding 
what to do about it depends on how we treat the virtual-interface-list 
CLI in novaclient.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-07-29 Thread Jeremy Stanley
On 2016-07-29 19:12:35 +0200 (+0200), Luigi Toscano wrote:
[...]
> - would it be possible to use the nodepool cloud images
> (qcow2, raw) from the jobs, if they contain lsb_release (and
> possibly other tools), and if so, how?

We don't currently publish them as they lack a simple mechanism for
granting access other than with our baked in keys/accounts, and also
because they're quite large due to pre-caching of all our git repos
and any distro packages our CI jobs are likely to try installing
(around 5GiB in compressed qcow2 format the last time I looked).
-- 
Jeremy Stanley



Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Joshua Harlow
I prefer 'one bucket repo for OpenStack community Errbot plug-ins' since 
I don't like a bunch of repos (seems like a premature optimization ~at 
this time~), but I could see it either way on this one.


Jeremy Stanley wrote:

On 2016-07-29 09:41:40 -0700 (-0700), Joshua Harlow wrote:
[...]

What shall we name it???

[...]

Also, one bucket repo for OpenStack community Errbot plug-ins, or
one repo per plug-in with a consistent naming scheme?




Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-07-29 Thread Sean Dague
On 07/28/2016 05:38 PM, Matt Riedemann wrote:
> On 7/28/2016 3:55 PM, Matt Riedemann wrote:
>> For os-attach-interfaces, we need that to attach/detach interfaces to a
>> server, so those actions don't go away with 2.36. We can also list and
>> show interfaces (ports) which is a proxy to neutron, but in this case it
>> seems a tad bit necessary, else to list ports for a given server you
>> have to know to list ports via neutron CLI and filter on
>> device_id=server.uuid.
> 
> On second thought, we could drop the proxy APIs to list/show ports for a
> given server. python-openstackclient could have a convenience CLI for
> listing ports for a server. And the show in os-attach-interfaces takes a
> server id but it's not used, so it's basically pointless and should just
> be replaced with neutron.
> 
> The question is, as these are proxies and the 2.36 microversion was for
> proxy API deprecation, can we still do those in 2.36 even though it's
> already merged? Or do they need to be 2.37? That seems like the more
> accurate thing to do, but then we really have some weird "which is the
> REAL proxy API microversion?" logic going on.
> 
> I think we could move forward with deprecation in novaclient either way.

We should definitely move forward with novaclient CLI deprecations.

We've said that microversions are idempotent, so fixing one in this case
isn't really what we want to do; it should just be another bump, with
things we apparently missed. I'm not sure it's super important that
there is a REAL proxy API microversion. We got most of it in one go, and
we can catch the stragglers in 2.39 (let's make that the last
merged one before the release so that we can figure out anything else we
missed, and keep 'get me a network' as 2.37).

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [TripleO][DIB] Proposal to move DIB to its own project team

2016-07-29 Thread Gregory Haynes
On Fri, Jul 29, 2016, at 11:55 AM, Ben Nemec wrote:
> As I noted in the meeting yesterday, I think the lack of response from
> TripleO regarding this topic is kind of answer enough.  TripleO has
> moved away from having a heavy dependency on diskimage-builder (it's
> basically used to install some packages and a handful of elements that
> we haven't been able to replace yet), so I don't see a problem with
> moving dib out of TripleO, as long as we still have some TripleO folks
> on the core team and tripleo-ci continues to test all changes against
> it.  We still care about keeping dib working, but the motivation from
> the TripleO side to do feature development in dib is pretty nonexistent
> at this point, so if a new team wants to take that on then I'm good with
> it.
> 
> Note that the diskimage-builder core team has always been separate from
> the tripleo-core team, so ultimately I guess this would just be a
> governance change?
> 

Awesome, that is what I hoped/expected and why I figured this was a
reasonable move to make. It's good to hear some confirmation.

The cores thing is a bit tricky - there is a separate
diskimage-builder-core group, but tripleo-core is a member of
diskimage-builder-core. I think tripleo-core should be removed from
diskimage-builder-core, but there are some folks who are not in
diskimage-builder-core that are in tripleo-core and are active in DIB.
Maybe we can take all tripleo-core folks who have done 2 or more reviews
this past cycle and add them to diskimage-builder-core?

Cheers,
Greg



Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Jeremy Stanley
On 2016-07-29 09:41:40 -0700 (-0700), Joshua Harlow wrote:
[...]
> What shall we name it???
[...]

Also, one bucket repo for OpenStack community Errbot plug-ins, or
one repo per plug-in with a consistent naming scheme?
-- 
Jeremy Stanley



[openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-07-29 Thread Luigi Toscano
Hi all,
the Sahara jobs on the gate run the scenario tests (from sahara-tests) using
the fake plugin, so no real Hadoop/Spark/BigData operations are performed, but
all the other expected operations are executed on the image. In order to do
this we have used this image for a long time:
http://tarballs.openstack.org/heat-test-image/fedora-heat-test-image.qcow2

which was updated early this Friday (July 29th) from Fedora 22 to Fedora 24,
breaking our jobs with a cryptic error, maybe something related to the
repositories:
http://logs.openstack.org/46/335946/12/check/gate-sahara-tests-dsvm-scenario-nova-heat/5eeff52/logs/screen-sahara-eng.txt.gz?level=WARNING

Now we are trying to quickly find another image; the standard Fedora 24 and 
CentOS 7 images have no lsb_release (used in Sahara):
https://review.openstack.org/#/c/348849/
https://review.openstack.org/#/c/348894/

but the Ubuntu 16.04 cloud image seems to contain it, so this change *maybe*
will solve the issue (gates still pending right now):
https://review.openstack.org/#/c/348952/

Nevertheless, it would be nice to not rely on something external, so my 
questions are:

- could someone from the heat side help investigate whether the image is still 
valid?
- would it be possible to use the nodepool cloud images (qcow2, raw) from
the jobs, if they contain lsb_release (and possibly other tools), and if
so, how?

Ciao
-- 
Luigi



Re: [openstack-dev] [TripleO][DIB] Proposal to move DIB to its own project team

2016-07-29 Thread Ben Nemec
As I noted in the meeting yesterday, I think the lack of response from
TripleO regarding this topic is kind of answer enough.  TripleO has
moved away from having a heavy dependency on diskimage-builder (it's
basically used to install some packages and a handful of elements that
we haven't been able to replace yet), so I don't see a problem with
moving dib out of TripleO, as long as we still have some TripleO folks
on the core team and tripleo-ci continues to test all changes against
it.  We still care about keeping dib working, but the motivation from
the TripleO side to do feature development in dib is pretty nonexistent
at this point, so if a new team wants to take that on then I'm good with it.

Note that the diskimage-builder core team has always been separate from
the tripleo-core team, so ultimately I guess this would just be a
governance change?

On 07/21/2016 04:58 PM, Gregory Haynes wrote:
> Hello everyone,
> 
> The subject sort of says it all - I'd like to propose making
> diskimage-builder its own project team.
> 
> When we started diskimage-builder and many of the other TripleO
> components we designed them with the goal in mind of creating tools that
> are useful outside of the TripleO context (in addition to fulfilling our
> immediate needs).  To that effect diskimage-builder has become more of a
> cross-project tool designed and used by several of the OpenStack
> projects and as a result it no longer seems to make sense for
> diskimage-builder to be part of the TripleO project team. Our two core
> groups have diverged to a large extent over the last several cycles
> which has removed much of the value of being part of that project team
> while creating some awkward communication issues. To be clear - I
> believe this is purely a result of the TripleO project team succeeding
> in its goal to improve OpenStack by use of the virtuous cycle and this
> is an ideal result of that goal.
> 
> Is this is something the DIB and TripleO folks agree/disagree with? If
> we all agree then I think this should be a fairly straightforward
> process, otherwise I welcome some discussion on the topic :).
> 
> Cheers,
> Greg
> 




Re: [openstack-dev] [Congress] Congress horizon plugin - congressclient/congress API auth issue - help

2016-07-29 Thread Adam Young

On 07/28/2016 10:05 PM, Tim Hinrichs wrote:


I've never worked on the authentication details, so this may be off 
track, but that error message indicates the failure is happening 
inside Congress's oslo_policy.


Error message shows up here as a Python exception class.
https://github.com/openstack/congress/blob/master/congress/exception.py#L135

That exception class is instantiated only here
https://github.com/openstack/congress/blob/master/congress/common/policy.py#L93 



The code that uses the instantiated exception class (which actually 
does the enforcement):

https://github.com/openstack/congress/blob/7c2f4132b9693e7969e704cb9914963274c2c4a1/congress/api/webservice.py#L373

I don't remember off the top of my head how the default policy.json 
gets created, but I'm sure the admin credentials will work.  You might 
want to ensure you're logged in as the admin with...


$ source openrc admin admin



In most projects, policy is enforced against an oslo-context object.
That should abstract away the differences between V2 and V3 keystone
token formats.


Make sure that the policy is not dying on something specific to one
version or the other. Post the actual rule executed, please.





Tim

On Thu, Jul 28, 2016 at 1:56 PM Aimee Ukasick wrote:


I've gotten a little farther, which leads me to my next question -
does the API support v3 token auth?
Or am I making mistakes in my manual testing?

using the CLI on local devstack
1) did not modify openrc
2) source openrc
3) openstack token issue
4)  openstack congress datasource list --os-auth-type v3token
--os-token ad74073300e244768e08e0d4cd73fbbd --os-auth-url
http://192.168.56.101:5000/v3
--os-project-id da9a9ba573c34c18a037fd04812d81bc   --debug --verbose

When the python-congressclient calls the API, this is the response:
RESP BODY: Policy doesn't allow get_v1 to be performed.
Request returned failure status: 403

Log: http://paste.openstack.org/show/543445/

So then I called the API directly:
curl -X POST -H "Content-Type: application/json" -H
"Cache-Control: no-cache"
-d '{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "demo",
  "domain": { "id": "default" },
  "password": "secret"
}
  }
}
  }
}' "http://192.168.56.101:5000/v3/auth/tokens;

Response:
{
  "token": {
"issued_at": "2016-07-28T20:43:44.258137Z",
"audit_ids": [
  "N6tnfbI5QvyRT4xEB7pGCA"
],
"methods": [
  "password"
],
"expires_at": "2016-07-28T21:43:44.258112Z",
"user": {
  "domain": {
"id": "default",
"name": "Default"
  },
  "id": "f2bf5189bbd7466cbecc1b1315cff3b5",
  "name": "demo"
}
  }
}

Then:
curl -X GET -H "X-Auth-Token: f2bf5189bbd7466cbecc1b1315cff3b5" -H
"Cache-Control: no-cache" "http://192.168.56.101:1789/v1/data-sources;

Response:
{
  "error": {
"message": "The request you have made requires authentication.",
"code": 401,
"title": "Unauthorized"
  }
}
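
Note: with the Keystone v3 API the token ID is returned in the
X-Subject-Token response header, not in the JSON body - the "id" in the
body above is the user ID, so passing it as X-Auth-Token yields a 401.
A minimal sketch of fetching and using the token (assuming the same
devstack endpoints and credentials as above):

    import requests

    auth = {"auth": {"identity": {"methods": ["password"],
                                  "password": {"user": {
                                      "name": "demo",
                                      "domain": {"id": "default"},
                                      "password": "secret"}}}}}
    resp = requests.post("http://192.168.56.101:5000/v3/auth/tokens",
                         json=auth)
    # v3 returns the token ID in a response header, not in the body
    token = resp.headers["X-Subject-Token"]

    r = requests.get("http://192.168.56.101:1789/v1/data-sources",
                     headers={"X-Auth-Token": token})
    print(r.status_code, r.text)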

I'm feeling pretty stupid at the moment, like I've missed
something obvious.
Any ideas?

Thanks!

aimee

On Fri, Jul 22, 2016 at 9:21 PM, Anusha Ramineni wrote:
> Hi Aimee,
>
> Thanks for the investigation.
>
> I remember testing congress client with V3 password based
authentication ,
> which worked fine .. but never tested with token based .
>
> Please go ahead and fix it , if you think there is any issue .
>
>
> On 22-Jul-2016 9:38 PM, "Aimee Ukasick" wrote:
>>
>> All - I made the change to the auth_url that Anusha suggested.
>> Same problem as before " Cannot authorize API client"
>> 2016-07-22 14:13:50.835861 * calling policies_list =
>> client.list_policy()*
>> 2016-07-22 14:13:50.836062 Unable to get policies list: Cannot
>> authorize API client.
>>
>> I used the token from the log output to query the Congress API with
>> the keystone v3 token - no issues.
>> curl -X GET -H "X-Auth-Token: 18ec54ac811b49aa8265c3d535ba0095" -H
>> "Cache-Control: no-cache" "http://192.168.56.103:1789/v1/policies;
>>
>> So I really think the problem is that the python-congressclient
>> doesn't support identity v3.
>> I thought it did, but then I came across this:
>> "support keystone v3 api and session based authentication "
>> https://bugs.launchpad.net/python-congressclient/+bug/1564361
>> This is currently assigned to Anusha.
>> I'd like to start 

Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Joshua Harlow

Doug Hellmann wrote:

Excerpts from Joshua Harlow's message of 2016-07-29 08:47:32 -0700:

Jeremy Stanley wrote:

On 2016-07-28 23:17:52 -0700 (-0700), Morgan Fainberg wrote:

As I recall this has been on a long list of "we want to do it". It
really just comes down to someone putting effort into making it
happen.

Yes, it's come up semi-often (also Joshua mentioned this to me over
IRC earlier in the week where I basically told him the same).
There's been a general consensus that the Infra team would love to
see the IRC bots it manages (gerritbot, meetbot, statusbot) rewritten
in a common framework, preferably a modern and extensible one. The
last time anyone looked into options (which admittedly was probably
at least a year ago), errbot seemed like the leading contender for
our desired language and featureset.

To echo Morgan, we just need (and would really appreciate!) someone
working on the implementation.

I'll see what I can do in my (spare time), but others are also willing
to jump in and learn some errbot ;)

The previously mentioned examples of plugins (repeated here) are IMHO
good things to look at if people are interested in messing around:

- https://github.com/harlowja/gerritbot2
- https://review.openstack.org/#/c/343857/

I guess if people want to jump in and explore, message me as well on IRC
(or email or smoke signals...),

-Josh



I have a couple of commands I'd like to add, too. How about if we
set up a repo for errbot plugins and start working on them in the
review system?  Then when we feel like we've hit a critical mass
of "usefulness" we can start adding an infra-hosted copy of the new
bot to channels, and migrating remaining features from the older
bots over to the new one.



Sounds like a good idea to me :)

What shall we name it??? (I'll be online in IRC in a few, in internal 
standup currently, prob can just discuss there, ha).



Doug



Re: [openstack-dev] [Kolla] [Fuel] [tc] Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off

2016-07-29 Thread John Griffith
On Thu, Jul 28, 2016 at 9:24 AM, Sergey Lukjanov 
wrote:

> Hi folks,
>
> First of all, let me say that it’s a marketing announcement and, as all of
> you know, such announcements aren’t precise from a technical side.
> Personally, I saw this paper for the first time on TechCrunch.
>
> First of all, fuel-ccp-* are a set of OpenStack projects and everyone is
> welcome to participate. All the regular community process(es) for other
> openstack projects apply to fuel-ccp-*. At the moment, in spite of what the
> marketing announcements say, it’s a bunch of people from Mirantis working
> on the repositories. Please think of this as an incubation process to try
> and see what the next incarnation of Fuel would look like in the future.
>
> Regardless of what was written, we aren’t applying to the Big Tent right
> now (as it was initially said explicitly when we were creating repos and
> it’s still valid). The state of the repos is still experimental, but I’d
> like to make things clear and confirm that Mirantis has chosen to use
> containers for infrastructure and OpenStack components and to use
> Kubernetes as the orchestrator of those containers. In the future, the Fuel
> OpenStack installer will use these containerized OpenStack/infrastructure
> component images. There are many questions to be solved and things to be
> done first in Fuel CCP, such as:
>
> * Freeze technologies and approaches, such as repos structure, image
> layers, etc.
> * Cleanup deprecated PoC stuff from the code
> * Implement basic test coverage for all parts of the project
> * Create Release Management approach
> * Consume OpenStack CI to run tests
> * Fully implement 3rd party CI (with end-to-end integration tests only)
> * Make at least initial documentation and ensure that it’s deployable
> using this doc
>
> and so on. In general, I would not expect us to seriously consider applying
> to the Big Tent for another 5-6 months at the earliest.
>
> Regarding the Fuel mission, that is:
>
> To streamline and accelerate the process of deploying, testing and
> maintaining various configurations of OpenStack at scale.
>
> I think that it’s 100% aligned with that we’re doing in Fuel CCP.
>
All the other stuff aside, the above was my take-away from the first or
second message in this thread, so I fail to understand the debate around
this.  The mission statement is simply around deploying; how that deploy
mechanism is implemented (Kubernetes, Ironic, whatever) doesn't really seem
to be an issue here.

The point about APIs that Jay Pipes made was spot on in my opinion as
well.  We're not talking about service or project APIs that the end users
or operators deal with on a daily basis.  Until there's a standard install
API I fail to see the argument against this.

Other questions about the 4 opens etc seem to have been answered, but I
don't have any real insight here.  Personally I'm looking forward to seeing
if somebody can come up with a reliable and relatively easy deployment
tool.  If it means competition then that's great as far as I'm concerned.
I'll use whichever one doesn't make me want to rip my hair out.

​


>
> About the Kolla usage in Fuel CCP, I agree with Kevin, and we may see in
> the future that Fuel CCP will potentially be using Kolla containers; it’ll
> require some time anyway, but it doesn’t mean that we stop considering it.
> And as Kevin correctly noticed, we did it once already with Fuel
> adopting upstream Puppet modules and contributing actively to them.
>
> Thanks.
>
>
> On Thu, Jul 28, 2016 at 7:43 AM, Flavio Percoco  wrote:
>
>> On 28/07/16 04:45 +, Steven Dake (stdake) wrote:
>>
>>>
>>>
>>> On 7/27/16, 2:12 PM, "Jay Pipes"  wrote:
>>>
>>> On 07/27/2016 04:42 PM, Ed Leafe wrote:

> On Jul 27, 2016, at 2:42 PM, Fox, Kevin M  wrote:
>
> Its not an "end user" facing thing, but it is an "operator" facing
>> thing.
>>
>
> Well, the end user for Kolla is an operator, no?
>
> I deploy kolla containers today on non kolla managed systems in
>> production, and rely on that api being consistent.
>>
>> I'm positive I'm not the only operator doing this either. This sounds
>> like a consumable api to me.
>>
>
> I don't think that an API has to be RESTful to be considered an
> interface for which we should avoid duplication.
>

 Application *Programming* Interface. There's nothing that is being
 *programmed* or *called* in Kolla's image definitions.

 What Kolla is/has is not an API. As Stephen said, it's more of an
 Application Binary Interface (ABI). It's not really an ABI, though, in
 the traditional sense of the term that I'm used to.

 It's an agreed set of package bases, installation procedures/directories
 and configuration recipes for OpenStack and infrastructure components.

>>>
>>> Jay,
>>>
>>> From my perspective, this isn't about 

Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread Julien Danjou
On Fri, Jul 29 2016, gordon chung wrote:

> so at first glance, it doesn't really seem to affect performance much 
> whether it's one 'larger' file or many smaller files.

I guess it's because your storage system latency (file?) does not make a
difference. I imagine that over Swift or Ceph, it might change things a
bit.

If you add time.sleep(1) in _get_measures(), you'd see a difference. ;)
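
To illustrate the point, here is a toy model of the benchmark above with a
per-object read latency added (the 0.05s figure and all names are made up
for the sketch): reading many small splits serially costs roughly
n_objects * latency, while parallel reads cost close to a single latency.

    import concurrent.futures as cf
    import time

    LATENCY = 0.05  # hypothetical per-object latency (Swift/Ceph-like)

    def get_measures(split):
        time.sleep(LATENCY)  # simulate one object read
        return split

    def fetch(splits, parallel=False):
        start = time.time()
        if parallel:
            with cf.ThreadPoolExecutor(max_workers=32) as pool:
                list(pool.map(get_measures, splits))
        else:
            list(map(get_measures, splits))
        return time.time() - start

    # serial: ~n * LATENCY; parallel: close to one LATENCY
    print(fetch(range(6)), fetch(range(21)))
    print(fetch(range(6), parallel=True), fetch(range(21), parallel=True))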

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-07-29 08:47:32 -0700:
> Jeremy Stanley wrote:
> > On 2016-07-28 23:17:52 -0700 (-0700), Morgan Fainberg wrote:
> >> As I recall this has been on a long list of "we want to do it". It
> >> really just comes down to someone putting effort into making it
> >> happen.
> >
> > Yes, it's come up semi-often (also Joshua mentioned this to me over
> > IRC earlier in the week where I basically told him the same).
> > There's been a general consensus that the Infra team would love to
> > see the IRC bots it manages (gerritbot, meetbot, statusbot) rewritten
> > in a common framework, preferably a modern and extensible one. The
> > last time anyone looked into options (which admittedly was probably
> > at least a year ago), errbot seemed like the leading contender for
> > our desired language and featureset.
> >
> > To echo Morgan, we just need (and would really appreciate!) someone
> > working on the implementation.
> 
> I'll see what I can do in my (spare time), but others are also willing 
> to jump in and learn some errbot ;)
> 
> The previously mentioned examples of plugins (repeated here) are IMHO 
> good things to look at if people are interested in messing around:
> 
> - https://github.com/harlowja/gerritbot2
> - https://review.openstack.org/#/c/343857/
> 
> I guess if people want to jump in and explore, message me as well on IRC 
> (or email or smoke signals...),
> 
> -Josh
> 

I have a couple of commands I'd like to add, too. How about if we
set up a repo for errbot plugins and start working on them in the
review system?  Then when we feel like we've hit a critical mass
of "usefulness" we can start adding an infra-hosted copy of the new
bot to channels, and migrating remaining features from the older
bots over to the new one.

Doug



Re: [openstack-dev] [Magnum] Microversioning implementation

2016-07-29 Thread Grant, Jaycen V
To be clear, the idea of the decorator and the base code comes from Nova, but 
it is not the Nova implementation. I didn't just copy the code over; that 
wouldn't have worked. What I liked from the Nova implementation was the 
decorator.  Vijendar and I went through several different projects (Ironic, 
Keystone, Nova, etc.) looking for examples of how we wanted to implement 
this. We POC'ed a couple of them to compare.  The decorator method of 
organizing and labeling was the one we liked best from a future development 
point of view. It kept the API code easy to read and made it easy to 
understand which code belonged to each microversion.  I took the structure 
and some of the code from Nova and then changed it to work with Pecan.  
Nova's implementation was mixed into their WSGI code, and we don't need to 
recreate that since we use Pecan in Magnum. So the concern about the 
complexity of the Nova router is not really part of this. Take a look at the 
code review; I feel the code is not too complex, and it provides additional 
error checking that we would otherwise have to add to each method.

Quick summary of how it works (a minimal sketch follows below):
1) The decorators create a dictionary mapping each method name to a list of 
available methods with version information.
2) That dictionary is added to the controller object.
3) When a request comes in and Pecan selects the correct method needed for the 
API call, the controller's __getattribute__ uses the dictionary to pick the 
correct method based on the request version. It then returns the selected 
method and Pecan proceeds as before.
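
For illustration only - a minimal, self-contained sketch of that
decorator/dispatch pattern (hypothetical names, not the actual Magnum code):

    VERSIONED_METHODS = {}  # method name -> [(min_ver, max_ver, func), ...]

    def api_version(min_ver, max_ver=(999, 999)):
        """Register a method implementation for a microversion range."""
        def decorator(func):
            VERSIONED_METHODS.setdefault(func.__name__, []).append(
                (min_ver, max_ver, func))
            return func
        return decorator

    class Controller(object):
        def __init__(self, request_version):
            self.request_version = request_version

        @api_version((1, 1), (1, 3))
        def get_all(self):
            return 'behaviour for 1.1 <= version <= 1.3'

        @api_version((1, 4))  # same name: both stay registered (step 1)
        def get_all(self):  # noqa: F811
            return 'behaviour for version >= 1.4'

        def __getattribute__(self, name):
            # step 3: pick the implementation matching the request version
            versions = VERSIONED_METHODS.get(name)
            if versions:
                req = object.__getattribute__(self, 'request_version')
                for min_ver, max_ver, func in versions:
                    if min_ver <= req <= max_ver:
                        return func.__get__(self)  # bind to the instance
                raise AttributeError(
                    '%s not available in version %s' % (name, req))
            return object.__getattribute__(self, name)

    print(Controller((1, 2)).get_all())  # 1.1 <= version <= 1.3 behaviour
    print(Controller((1, 5)).get_all())  # version >= 1.4 behaviour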

That said, if we really believe that we won't do more than a few microversion 
changes then I'll agree that it might not be critical that we have 
something like this. If that is what is decided, I can resubmit this with just 
the Version class updates and response header fixes. The microversion checking 
can then be done in each method as needed.

Jaycen


-Original Message-
From: taget [mailto:qiaoliy...@gmail.com] 
Sent: Thursday, July 28, 2016 6:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Microversioning implementation

Hongbin & team

Please forget the Semantic Versioning API; that should not be done in the
short term, especially since the API WG has defined lots of microversion API
docs. Sorry for the confusion.

Of course the microversion API is important to OpenStack, and it is
documented well by the API WG [1].
For Magnum, I remember that the implementation idea came from Ironic [2]; it
is simple.

[3], which comes from Nova, is complex because Nova implements the router
itself, but Magnum (like Ironic) uses Pecan. We would need to maintain it in
Magnum as well. Also, when I was doing some testing, I still found some
unexpected exceptions when passing a bad microversion; it should follow the
API WG guidelines.

Lastly, Magnum has very few API entry points now: baymodel.py, bay.py,
certificate.py, magnum_services.py.

Judging from Ironic, it has bumped its version to v1.21 [2] with a
microversion infra like Magnum's current one.

I am okay to go with [3] as long as it follows the API WG [1] and has better
testing; I really don't want to maintain an infra which won't be used
frequently.

[1] https://wiki.openstack.org/wiki/VersionDiscovery
[2] https://github.com/openstack/ironic/blob/master/doc/source/webapi/v1.rst
[3]
https://review.openstack.org/#/c/343060/8/magnum/api/controllers/v1/bay.py


On 2016年07月29日 03:19, Hongbin Lu wrote:
> OK. My replies are inline.
>
>> -Original Message-
>> From: Grant, Jaycen V [mailto:jaycen.v.gr...@intel.com]
>> Sent: July-28-16 2:31 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Magnum] Microversioning implementation
>>
>> I was completely unaware of any discussion of Semantic Versioning.
>> That was brought up by Eli Qiao in the code review, so he might be the
>> one to point us in the right direction for that.
>>
>> Jaycen
>>
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, July 28, 2016 11:10 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [Magnum] Microversioning implementation
>>
>> Added this to the agenda of next team meeting [1].
>>
>> I would like to ask clarification for " the community are discussing to
>> using Semantic Versioning(X.Y.Z) instead of microversion X.Y ". Could
>> anyone provide more information about that?
>>
>> Best regards,
>> Hongbin
>>
>>> -Original Message-
>>> From: Grant, Jaycen V [mailto:jaycen.v.gr...@intel.com]
>>> Sent: July-28-16 10:52 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: [openstack-dev] [Magnum] Microversioning implementation
>>>
>>>
>>> There has been a discussion around micro versioning implementation
>>> going on in the following patch:
>>> https://review.openstack.org/#/c/343060/8 and I was asked to bring it
>>> to the mailing 

Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-07-29 Thread Znoinski, Waldemar
Hi Matt et al, 
Thanks for taking the time to have a chat about it in Nova meeting yesterday.
In relation to your two points below...

1. The tempest-dsvm-ovsdpdk-nfv-networking job in our Intel NFV CI was broken
for about a day until we troubleshot the issue and found that the merge of
this [1] change had started causing our troubles.
We set Q_USE_PROVIDERNET_FOR_PUBLIC back to False to let the job go green
again and test what it should be testing - nova/neutron changes - without
giving false negatives because of that devstack change.
We saw a REVERT [2] of the above change shortly after, as it was breaking
Jenkins neutron's linuxbridge tempest too [3].

2. Our aim is to have two things tested when a new change is proposed to
devstack: NFV and OVS+DPDK. For better clarity we'll run two separate jobs
instead of having NFV+OVSDPDK together.
Currently we run OVSDPDK+ODL on devstack changes to discover potential issues
with configuring these two together with each devstack change proposed. We've
discussed this internally and we can add a 'tempest-dsvm-full-nfv' job (or
replace the OVSDPDK+ODL job with it) - currently running on Nova changes -
that does devstack + runs the full tempest test suite (1100+ tests) on
NFV-enabled flavors. It should properly test proposed devstack changes with
the NFV features (as per wiki [4]) we have enabled in OpenStack.

Let me know if there are other questions, concerns, asks or suggestions.

Thanks
Waldek


[1] https://review.openstack.org/#/c/343072/
[2] https://review.openstack.org/#/c/345820/ 
[3] https://bugs.launchpad.net/devstack/+bug/1605423 
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI 


 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Thursday, July 28, 2016 4:14 PM
 >To: openstack-dev@lists.openstack.org
 >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >
 >On 7/21/2016 5:38 AM, Znoinski, Waldemar wrote:
 >> Hi Nova cores et al,
 >>
 >>
 >>
 >> I would like to acquire voting (+/-1 Verified) permission for our
 >> Intel NFV CI.
 >>
 >>
 >>
 >> 1.   It's been running since Q1'2015.
 >>
 >> 2.   Wiki [1].
 >>
 >> 3.   It's using openstack-infra/puppet-openstackci
 >>  with Zuul
 >> 2.1.1 for the last 4 months: zuul, gearman, Jenkins, nodepool, local OpenStack
 >cloud.
 >>
 >> 4.   We have a team of 2 people + me + Nagios looking after it. Its
 >> problems are fixed promptly and rechecks triggered after non-code
 >> related issues. It's being reconciled against ci-watch [2].
 >>
 >> 5.   Reviews [3].
 >>
 >>
 >>
 >> Let me know if there are further questions.
 >>
 >>
 >>
 >> 1.   https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 >>
 >> 2.   http://ci-watch.tintri.com/project?project=nova
 >>
 >> 3.
 >> https://review.openstack.org/#/q/reviewer:%22Intel+NFV-
 >CI+%253Copensta
 >> ck-nfv-ci%2540intel.com%253E%22
 >>
 >>
 >>
 >>
 >>
 >>
 >> *Waldek*
 >>
 >>
 >>
 >> --
 >> Intel Research and Development Ireland Limited Registered in Ireland
 >> Registered Office: Collinstown Industrial Park, Leixlip, County
 >> Kildare Registered Number: 308263
 >>
 >> This e-mail and any attachments may contain confidential material for
 >> the sole use of the intended recipient(s). Any review or distribution
 >> by others is strictly prohibited. If you are not the intended
 >> recipient, please contact the sender and delete all copies.
 >>
 >>
 >>
 >>
 >
 >We talked about this in the nova meeting today. I don't have a great grasp on
 >how the Intel NFV CI has been performing, but making it voting will help with
 >that. Looking at the 7 day results:
 >
 >http://ci-watch.tintri.com/project?project=nova=7+days
 >
 >Everything looks pretty good except for tempest-dsvm-ovsdpdk-nfv-
 >networking but Waldemar pointed out there was a change in devstack that
 >broke the CI for a day or so:
 >
 >https://github.com/openstack-
 >dev/devstack/commit/130a11f8aaf08ea529b6ce60dd9052451cb7bb5c
 >
 >I would like to know a little more about why we don't run the Intel NFV CI on
 >devstack changes to catch stuff like this before it becomes a breaking
 >problem. The team worked around it for now, but it is a concern of mine. I
 >think at least the Xen and PowerKVM CIs also run on devstack changes to
 >avoid problems like this.
 >
 >So please give me some details on running against devstack changes and
 >then I'll ack or nack the request.
 >
 >--
 >
 >Thanks,
 >
 >Matt Riedemann
 >
 >

Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Joshua Harlow

Jeremy Stanley wrote:

On 2016-07-28 23:17:52 -0700 (-0700), Morgan Fainberg wrote:

As I recall this has been on a long list of "we want to do it". It
really just comes down to someone putting effort into making it
happen.


Yes, it's come up semi-often (also Joshua mentioned this to me over
IRC earlier in the week where I basically told him the same).
There's been a general consensus that the Infra team would love to
see the IRC bots it manages (gerritbot, meetbot, statusbot) rewritten
in a common framework, preferably a modern and extensible one. The
last time anyone looked into options (which admittedly was probably
at least a year ago), errbot seemed like the leading contender for
our desired language and featureset.

To echo Morgan, we just need (and would really appreciate!) someone
working on the implementation.


I'll see what I can do in my (spare time), but others are also willing 
to jump in and learn some errbot ;)


The previously mentioned examples of plugins (repeated here) are IMHO 
good things to look at if people are interested in messing around:


- https://github.com/harlowja/gerritbot2
- https://review.openstack.org/#/c/343857/

I guess if people want to jump in and explore, message me as well on IRC 
(or email or smoke signals...),


-Josh



Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread gordon chung


On 29/07/2016 5:00 AM, Julien Danjou wrote:
> Best way is probably to do some bench… but I think it really depends on
> the use cases here. The interest of having many small splits is that you
> can parallelize the read.
>
> Considering the compression ratio we have, I think we should split in
> smaller files. I'd pick 3600 and give it a try.

i gave this a quick try with a series of ~68k points

with object size of 14400 points (uncompressed), i got:

[gchung@gchung-dev ~(keystone_admin)]$ time gnocchi measures show 
dc51c402-67e6-4b28-aba0-9d46b35b5397 --granularity 60 &> /tmp/blah

real    0m6.398s
user    0m5.003s
sys     0m0.071s

it took ~39.45s to process into 24 different aggregated series and 
created 6 split objects.

with object size of 3600 points (uncompressed), i got:

[gchung@gchung-dev ~(keystone_admin)]$ time gnocchi measures show 
301947fd-97ee-428a-b445-41a67ee62c38 --granularity 60 &> /tmp/blah

real    0m6.495s
user    0m4.970s
sys     0m0.073s

it took ~39.89s to process into 24 different aggregated series and 
created 21 split objects

so at first glance, it doesn't really seem to affect performance much 
whether it's one 'larger' file or many smaller files. that said, with 
the newly proposed v3 serialisation format, a larger file has a greater 
requirement for additional padding, which is not a good thing.

cheers,

-- 
gord


Re: [openstack-dev] [Manila] Service VMs, CI, etc

2016-07-29 Thread Ben Swartzlander

On 07/29/2016 09:25 AM, John Spray wrote:

Hi folks,

We're starting to look at providing NFS on top of CephFS, using NFS
daemons running in Nova instances.  Looking ahead, we can see that
this is likely to run into similar issues in the openstack CI that the
generic driver did.

I got the impression that the main issue with testing the generic
driver was that bleeding edge master versions of Nova/Neutron/Cinder
were in use when running in CI, and other stuff had a habit of
breaking.  Is that roughly correct?


The breakages from using HEAD were mostly related to Tempest. For 
Nova, Neutron, and Cinder, the problems have more to do with running a 
cloud within a cloud and having severely limited resources. Things take 
a long time and sometimes don't happen at all.


If you need a service VM to do real work, you can't create many of them 
and you can expect creation of each one to be quite slow. For the 
generic driver, we attempt to overcome the slowness by parallelizing 
tests, and sharing the VM resources between test groups, but that 
creates its own set of concurrency issues.


Our current approach to these problems is to not use the generic driver 
for most things, and to limit the tests we do run on the generic driver 
to only what's needed to ensure that driver isn't broken. Also, there is 
an effort to shrink the "service image" used by the generic driver so 
it's less resource hungry. Hopefully with those changes we can avoid the 
resource sharing in tempest while still keeping test run times 
within reason.



Assuming versions are the main issue, we're going to need to look at
solutions to that, which could mean either doing some careful pinning
of the versions of Nova/Neutron used by Manila CI in general, or
creating a separate CI setup for CephFS that had that version pinning.
My preference would be to see this done Manila wide, so that the
generic driver could benefit as well.


I don't think pinning versions of the other projects would help much, 
for reason I outlined above.


-Ben


Thoughts?

John



Re: [openstack-dev] [tripleo] tripleo-test-cloud-rh2 local mirror server

2016-07-29 Thread Derek Higgins
On 27 July 2016 at 17:52, Paul Belanger  wrote:
> On Wed, Jul 27, 2016 at 02:54:00PM +0100, Derek Higgins wrote:
>> On 21 July 2016 at 23:04, Paul Belanger  wrote:
>> > Greetings,
>> >
>> > I write today to see how I can remove this server from 
>> > tripleo-test-cloud-rh2. I
>> > have an open patch[1] currently to migrate tripleo-ci to use our AFS 
>> > mirrors for
>> > centos and epel.  However, I'm still struggling to see what else you are 
>> > using
>> > the local mirror for.
>> >
>> > From what I see, there appears to be some puppet modules in the mirror?
>> >
>> > The reason I am doing this work, is to help bring tripleo inline with
>> > openstack-infra tooling.  There shouldn't be the need for a project to 
>> > maintain
>> > its own infrastructure outside of openstack-infra.  If so, I see that as 
>> > some
>> > sort of a failure between the project and openstack-infra.   And with that 
>> > in
>> > mind, I am here to help fix that.
>> >
>> > For the most part, I think we have everything currently in place to 
>> > migrate away
>> > from your locally mirror. I just need some help figuring what else is left 
>> > and
>> > then delete it.
>>
>> Hi Paul,
>> The mirror server hosts 3 sets of data used in CI long with a cron
>> a job aimed at promoting trunk repositories,
>> The first you've already mentioned, there is a list of puppet modules
>> hosted here, we soon hope to move to packaged puppet modules so the
>> need for this will go away.
>>
> Ya, I was looking at an open review to rework this. If we moved these puppet
> modules to tarballs over git repos, I think we could mirror them pretty easily
> into our AFS mirrors.  Them being git repos requires more work because of some
> policies around git repos.

We won't need to do anything here; the patch to move away from git
repos will instead use the RDO packaged puppet modules, so we
won't need anything from infra for this - we just end up using the RDO
repository like we do for all other OpenStack projects.

>
>> The second is a mirror of the centos cloud images; these are updated
>> hourly by the centos-cloud-images cronjob[1]. I guess these could be
>> easily replaced with the AFS server
>>
> So 2 things here.
>
> 1) I've reached out to CentOS asking to enable rsync support on
> http://cloud.centos.org/ if they do that, I can easily enable rsync for it.

Great

>
> 2) What about moving away from the centos diskimage-builder element and switching
> to the centos-minimal element? I have an open review for this, but need help
> actually testing it.  It moves away from using the cloud image, and instead
> uses yumdownloader to prebuild the images.

It's possible, but I think it's out of scope for a general CI thread; it's
more of a tripleo decision, so maybe it needs its own thread to get a
wider audience.

>
>> Then we come to the parts where it will probably be more tricky to
>> move away from our own server
>>
>> o cached images - our nightly periodic jobs run tripleo ci with
>> master/HEAD for all openstack projects (using the most recent rdo
>> trunk repository), if the jobs pass then we upload the overcloud-full
>> and ipa images to the mirror server along with logging what jobs
>> passed, this happens at the end of toci_instack.sh[2], nothing else
>> happens at this point the files are just uploaded nothing starts using
>> them yet.
>>
> I suggest we move this to tarballs.o.o for now; this is what other projects 
> are
> doing.  I believe we are also considering moving this process into AFS too.

Ok, its an option worth looking at if we could make it work.

>
>> o promote script - hourly we then run the promote script[3], this
>> script is whats responsible for the promotion of the master rdo
>> repository that is used by tripleo ci (and devs), it checks to see if
>> images have been updated to the mirror server by the periodic jobs,
>> and if all of the jobs we care about (currently
>> periodic-tripleo-ci-centos-7-ovb-ha
>> periodic-tripleo-ci-centos-7-ovb-nonha[4]) passed then it does 2
>> things
>>   1. updates the current-tripleo link on the mirror server[5]
>>   2. updates the current-tripleo link on the rdo trunk server[6]
>> By doing this we ensure that the current-tripleo link on the rdo
>> trunk server is always pointing to something that has passed tripleo
>> ci jobs, and that tripleo ci is using cached images that were built
>> using this repository (a rough sketch of the promote logic follows below)
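
For illustration, a rough sketch of that promote decision (hypothetical
names, not the actual script): repoint the current-tripleo link only when
every periodic job we care about passed for the candidate repository.

    import os

    JOBS = ('periodic-tripleo-ci-centos-7-ovb-ha',
            'periodic-tripleo-ci-centos-7-ovb-nonha')

    def promote(job_results, candidate_repo, link_path='current-tripleo'):
        # job_results: mapping of job name -> last periodic result
        if not all(job_results.get(job) == 'SUCCESS' for job in JOBS):
            return False
        tmp = link_path + '.new'
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(candidate_repo, tmp)
        os.rename(tmp, link_path)  # atomically replace the old link
        return True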
>>
> Okay, I think we need to dive more into this. It might be possible to make 
> this
> a post job or use mirror-update.openstack.org

A post job might work if it could find the status of all the periodic
jobs. I'm not
familiar with mirror-update.openstack.org - what does it do?

>
>> We've had to run this promote script on the mirror server as the
>> individual jobs run independently and in order to make the promote
>> decision we needed somewhere that is aware of the status of all the
>> jobs
>>
>> Hope this answers your questions,
>> Derek.
>>
>> 

Re: [openstack-dev] [nova] Next steps for proxy API deprecation

2016-07-29 Thread Matt Riedemann

On 7/28/2016 11:20 PM, Ghanshyam Mann wrote:

Yes, I also prefer the approach of capping the tests instead of jobs. But along 
with that we might need to make sure Tempest provides the same test coverage if 
min_microversion is set >2.35 in config.
For example, if we cap the tests (those that call nova-network) with 
max_microversion = 2.35, then we might need to implement/modify those tests to 
start using neutron, which can be run if config's min_microversion is set > 2.35.
There are two types of test cases:
1. Tests that only test nova-network APIs - Example: 
https://github.com/openstack/tempest/tree/master/tempest/api/compute/floating_ips
2. Tests that test other scenarios using nova-network - Example: 
https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_server_rescue.py

The 1st case is all ok to cap and leave skipped for >2.35. But for the 2nd case, 
I feel we should not leave them skipped if config's min_microversion > 2.35, 
which would mean leaving those scenarios untested.
There are 2 options for the 2nd case:
  1. Implement duplicate tests using the neutron APIs - these will be 
duplicate tests, but they are needed to keep testing nova-network until 
Newton EOL.
  2. Or modify those tests to switch from nova-network to neutron - if we do 
not care about nova-network testing even for stable branches where it is not 
deprecated.


I don't like either option, but I'd rather go with #1 for two reasons:

a) Once nova-network is gone these tests would never run, so we lose 
the coverage.


b) nova-network wasn't deprecated in stable branches so I think we need 
to maintain those tests, so 2.1-2.35 are nova-network, 2.36+ are neutron.


The duplication cost probably wouldn't be all that terrible; we could 
probably abstract a lot of the common network setup away for the 
compute API tests, for things like creating a floating IP.
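
For illustration, capping a test at 2.35 in tempest's microversion scheme is
just a class attribute; a minimal sketch (the class and test names here are
made up):

    from tempest.api.compute import base

    class FloatingIPsProxyTest(base.BaseV2ComputeTest):
        # skipped whenever the configured min_microversion is > 2.35
        max_microversion = '2.35'

        def test_list_floating_ips(self):
            # only runs for microversions <= 2.35, where the proxy API
            # (nova-network style) still exists
            ...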





Thanks
gmann





--

Thanks,

Matt Riedemann




Re: [openstack-dev] [TripleO] Proposing Attila Darazs for tripleo-quickstart core​

2016-07-29 Thread John Trowbridge
There were no objections, so I made the change in gerrit.

On 07/26/2016 10:32 AM, John Trowbridge wrote:
> I would like to add Attila to the tripleo-quickstart core reviewers
> group. Much of his work has been on some of the auxiliary roles that
> quickstart makes use of in RDO CI, however his numbers on quickstart
> itself[1] are in line with the other core reviewers.
> 
> I will be out for paternity leave the next 4 weeks, so it will also be
> nice to have 3 core reviewers during that time in case I don't end up
> doing too many reviews.
> 
> If there are no objections I will make the change at the end of the week.
> 
> - trown
> 
> [1] http://stackalytics.com/report/contribution/tripleo-quickstart/90
> 
> 



[openstack-dev] [Manila] Service VMs, CI, etc

2016-07-29 Thread John Spray
Hi folks,

We're starting to look at providing NFS on top of CephFS, using NFS
daemons running in Nova instances.  Looking ahead, we can see that
this is likely to run into similar issues in the openstack CI that the
generic driver did.

I got the impression that the main issue with testing the generic
driver was that bleeding edge master versions of Nova/Neutron/Cinder
were in use when running in CI, and other stuff had a habit of
breaking.  Is that roughly correct?

Assuming versions are the main issue, we're going to need to look at
solutions to that, which could mean either doing some careful pinning
of the versions of Nova/Neutron used by Manila CI in general, or
creating a separate CI setup for CephFS that had that version pinning.
My preference would be to see this done Manila wide, so that the
generic driver could benefit as well.

Thoughts?

John



Re: [openstack-dev] [mistal] Mistral logo ideas?

2016-07-29 Thread Anastasia Kuznetsova
I like the octopus idea.

A creature with lots of legs is associated with a graph for me, thus I would
like to propose one more idea for the mascot:
a spider with a web (something like this). What do you
think?


On Fri, Jul 29, 2016 at 12:21 PM, Elisha, Moshe (Nokia - IL) <
moshe.eli...@nokia.com> wrote:

> Octopus sounds good to me.
> For me it somehow relates to Mistral as well – like concurrent tasks…
>
> From: Renat Akhmerov 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, 19 July 2016 at 07:36
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [mistral] Mistral logo ideas?
>
>
>
> On 18 Jul 2016, at 19:54, Ryan Brady  wrote:
>
> On Mon, Jul 18, 2016 at 12:44 AM, Renat Akhmerov wrote:
>
>> On choosing a mascot for Mistral. Let’s choose one till next Monday.
>>
>> To start this discussion I’d like to propose a couple of ideas:
>>
>>
>>- *Octopus* (kind of symbolic to me). How do you like this beauty?
>>http://nashaplaneta.su/_bl/158/78285238.jpg
>>
>>
>  +1.  Intelligence, dexterity, tool-use - many good qualities to
> associate with.
>
>
> Yep :)
>
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] I want to consult ironic problems

2016-07-29 Thread Jeremy Stanley
On 2016-07-29 11:19:39 +0100 (+0100), Lucas Alvares Gomes wrote:
[...]
> Ironic is developed by a community; if you have problems
> running/developing it, there are a couple of ways to solicit help:
> 
> * Send an email to this mail list (openstack-dev) with your
> question(s) (and add "[Ironic]" to the subject of the email to filter
> the audience).
[...]

Just to correct this slightly, "this mail list (openstack-dev)" is
definitely suitable for "problems [...] developing" but not for
"problems running" Ironic (and other software). That's what the -dev
suffix is meant to imply.

There's a general mailing list, openst...@lists.openstack.org, which
is generally recommended for usage-related questions.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Dawid Deja to core reviewers

2016-07-29 Thread Anastasia Kuznetsova
Renat,

I fully support Dawid's promotion! Here is my +1 for Dawid.

Dawid,

I will be glad to see you in the Mistral core team.

On Fri, Jul 29, 2016 at 2:39 PM, Renat Akhmerov 
wrote:

> Hi,
>
> I’d like to promote Dawid Deja working at Intel (ddeja in IRC) to Mistral
> core reviewers.
>
> The reason why I want to see Dawid in the core team is that he provides
> amazing, very thorough reviews.
> Just by looking at a few of them I was able to conclude that he knows the
> system architecture very well, although he started contributing actively
> not so long ago. He always sees things deeply, can examine a problem from
> different angles, and demonstrates a solid technical background in general.
> He is in the top 5 reviewers now by number of reviews and the only one who
> still doesn’t have core status. He also implemented several very important
> changes during the Newton cycle. Some of them were in progress for more
> than a year (flexible RPC), but Dawid helped to knock them down elegantly.
>
> Besides purely professional skills that I just mentioned I also want to
> say that it’s a great pleasure to work with
> Dawid. He’s a bright cheerful guy and a good team player.
>
> Dawid’s statistics are here:
> http://stackalytics.com/?module=mistral-group&metric=commits&user_id=dawid-deja-0
>
>
> I’m hoping for your support in making this promotion.
>
> Thanks
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][panko] ElasticSearch support is broken

2016-07-29 Thread Julien Danjou
On Fri, Jul 29 2016, Nadya Shakhat wrote:

> Thank you for notifying! I've assigned this bug to me. Will work on that
> today.

As a second effort, making sure it's tested and working in Panko's gate
would be a good idea.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Jeremy Stanley
On 2016-07-28 23:17:52 -0700 (-0700), Morgan Fainberg wrote:
> As I recall this has been on a long list of "we want to do it". It
> really just comes down to someone putting effort into making it
> happen.

Yes, it's come up semi-often (also Joshua mentioned this to me over
IRC earlier in the week where I basically told him the same).
There's been a general consensus that the Infra team would love to
see the IRC bots it manages (gerritbot, meetbot, statusbot) rewritten
in a common framework, preferably a modern and extensible one. The
last time anyone looked into options (which admittedly was probably
at least a year ago), errbot seemed like the leading contender for
our desired language and featureset.

To echo Morgan, we just need (and would really appreciate!) someone
working on the implementation.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-29 Thread Ihar Hrachyshka

Akihiro Motoki  wrote:


2016-07-29 18:34 GMT+09:00 Ihar Hrachyshka :

Cathy Zhang  wrote:


Hi Ihar and all,

Yes, we have been preparing for such a release. We will do one more round
of testing to make sure everything works fine, and then I will submit the
release request.
There is a new patch on "stadium: adopt openstack/releases in subproject
release process" which is not Merged yet.
Shall I follow this
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
to submit the request?
Do you have a good bug example for Neutron sub-project release request?



For the time being, until the patch lands, you may follow any of those
directions.

An example of a release request bug is:
https://bugs.launchpad.net/networking-bagpipe/+bug/1589502


BTW, a functional and tempest patch for networking-sfc has been uploaded
and it might take some time for the team to complete the review. The test
is non-voting. Do you think we should wait until this patch is merged or
release can be done without it?



It would be great to have CI voting, but then, you already lag with the
release for months comparing to release date of Neutron Mitaka, and you
risk getting into Phase II support mode before you even release the first
version. If you don’t envision release blocker bugs in the branch, I would
suggest you release the thing and then follow up with bug fixes for
whatever you catch later on. In a way, it’s better to release a half baked
release than to not release at all. That’s to follow the ‘release often’
mantra, and boost adoption.


I agree with Ihar, but I think there are several points to be checked
before the release.

- The code should be tested against mitaka version of neutron.
  Currently the master branch of networking-sfc is tested against neutron master
  and we haven't tested it against neutron stable/mitaka after Mitaka
was released.


That’s why we suggest* in devref to release at around the same time as neutron:

* http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#stable-branches

"Stable branches for subprojects should be created at the same time when
corresponding neutron stable branches are created. This is to avoid
situations when a postponed cut-off results in a stable branch that
contains some patches that belong to the next release. This would require
reverting patches, and this is something you should avoid."




- networking-sfc branch already contains newton db migration (see
db/migration/alembic_migrations/versions/newton).
  What I am not sure is whether it needs to be a part of mitaka release or not,
  but you need to be careful when cutting stable/mitaka branch.


Good catch. From alembic perspective, it does not matter where scripts are  
located, so it’s probably minor.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-29 Thread Akihiro Motoki
2016-07-29 18:34 GMT+09:00 Ihar Hrachyshka :
> Cathy Zhang  wrote:
>
>> Hi Ihar and all,
>>
>> Yes, we have been preparing for such a release. We will do one more round
>> of testing to make sure everything works fine, and then I will submit the
>> release request.
>> There is a new patch on "stadium: adopt openstack/releases in subproject
>> release process" which is not Merged yet.
>> Shall I follow this
>> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
>> to submit the request?
>> Do you have a good bug example for Neutron sub-project release request?
>
>
> For the time being, until the patch lands, you may follow any of those
> directions.
>
> An example of a release request bug is:
> https://bugs.launchpad.net/networking-bagpipe/+bug/1589502
>
>>
>> BTW, a functional and tempest patch for networking-sfc has been uploaded
>> and it might take some time for the team to complete the review. The test is
>> non-voting. Do you think we should wait until this patch is merged or
>> release can be done without it?
>
>
> It would be great to have CI voting, but then, you already lag with the
> release for months comparing to release date of Neutron Mitaka, and you risk
> getting into Phase II support mode before you even release the first
> version. If you don’t envision release blocker bugs in the branch, I would
> suggest you release the thing and then follow up with bug fixes for whatever
> you catch later on. In a way, it’s better to release a half baked release
> than to not release at all. That’s to follow the ‘release often’ mantra, and
> boost adoption.

I agree with Ihar, but I think there are several points to be checked
before the release.

- The code should be tested against mitaka version of neutron.
  Currently the master branch of networking-sfc is tested against neutron master
  and we haven't tested it against neutron stable/mitaka after Mitaka
was released.

- networking-sfc branch already contains newton db migration (see
db/migration/alembic_migrations/versions/newton).
  What I am not sure is whether it needs to be a part of mitaka release or not,
  but you need to be careful when cutting stable/mitaka branch.

Thanks,
Akihiro


>
>
>>
>> Thanks,
>> Cathy
>>
>> -Original Message-
>> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
>> Sent: Wednesday, July 27, 2016 1:24 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron] SFC stable/mitaka version
>>
>> Tony Breeds  wrote:
>>
>>> On Wed, Jul 06, 2016 at 12:40:48PM +, Gary Kotton wrote:

 Hi,
 Is anyone looking at creating a stable/mitaka version? What if
 someone want to use this for stable/mitaka?
>>>
>>>
>>> If that's a thing you need it's a matter of Armando asking the release
>>> managers to create it.
>>
>>
>> I only suggest Armando is not dragged into it, the release liaison
>> (currently me) should be able to handle the request if it comes from the
>> core team for the subproject.
>>
>> Ihar
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Promoting Dawid Deja to core reviewers

2016-07-29 Thread Renat Akhmerov
Hi,

I’d like to promote Dawid Deja working at Intel (ddeja in IRC) to Mistral core 
reviewers.

The reason why I want to see Dawid in the core team is that he provides
amazing, very thorough reviews.
Just by looking at a few of them I was able to conclude that he knows the
system architecture very well, although he started contributing actively
not so long ago. He always sees things deeply, can examine a problem from
different angles, and demonstrates a solid technical background in general.
He is in the top 5 reviewers now by number of reviews and the only one who
still doesn’t have core status. He also implemented several very important
changes during the Newton cycle. Some of them were in progress for more
than a year (flexible RPC), but Dawid helped to knock them down elegantly.

Besides purely professional skills that I just mentioned I also want to say 
that it’s a great pleasure to work with
Dawid. He’s a bright cheerful guy and a good team player.

Dawid’s statistics are here:
http://stackalytics.com/?module=mistral-group&metric=commits&user_id=dawid-deja-0



I’m hoping for your support in making this promotion.

Thanks

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] I want to consult ironic problems

2016-07-29 Thread Lucas Alvares Gomes
Hi Paul,

> I want to consult about some ironic problems.
> Is there anyone here who works on bare metal management?
>

Ironic is developed by a community; if you have problems
running/developing it, there are a couple of ways to solicit help:

* Send an email to this mail list (openstack-dev) with your
question(s) (and add "[Ironic]" to the subject of the email to filter
the audience).
* Join the #openstack-ironic IRC channel on irc.freenode.net.
* Join our weekly meeting (more information can be found here:
https://wiki.openstack.org/wiki/Meetings/Ironic).

Hope that helps,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][panko] ElasticSearch support is broken

2016-07-29 Thread Nadya Shakhat
Hi Julien,

Thank you for notifying! I've assigned this bug to me. Will work on that
today.

Nadya

On Fri, Jul 29, 2016 at 12:11 PM, Julien Danjou  wrote:

> Hi there,
>
> This is a reminder that ElasticSearch support for event in Ceilometer
> (now Panko) is broken for more than a month¹. If nothing is done by the
> time we need to release RC1, as I don't see the point of releasing
> broken code, I might suggest that we remove this driver altogether.
>
> If you care, I'd suggest to fix it before it's too late.
>
> ¹  https://bugs.launchpad.net/ceilometer/+bug/1596988
>
> --
> Julien Danjou
> -- Free Software hacker
> -- https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] I want to consult ironic problems

2016-07-29 Thread paul schlacter
I want to consult about some ironic problems.
Is there anyone here who works on bare metal management?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [nova] [neutron] get_all_bw_counters in the Ironic virt driver

2016-07-29 Thread Ihar Hrachyshka

Devananda van der Veen  wrote:




On 07/28/2016 05:40 PM, Brad Morgan wrote:

I'd like to solicit some advice about potentially implementing
get_all_bw_counters() in the Ironic virt driver.

https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L438
Example Implementation:
https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L320

I'm ignoring the obvious question about how this data will actually be
collected/fetched as that's probably its own topic (involving neutron),
but I have a few questions about the Nova -> Ironic interaction:

Nova
* Is get_all_bw_counters() going to stick around for the foreseeable
future? If not, what (if anything) is the replacement?
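
(For reference, a rough sketch of the shape of that virt driver hook, based
on the linked driver.py and the xenapi example above; the counter-fetching
helper is entirely hypothetical, since where the data would come from is
exactly the open question in this thread:)

    # Hedged sketch only -- not Ironic's actual driver code.
    def get_all_bw_counters(self, instances):
        """Return bandwidth counters for each interface of each instance."""
        counters = []
        for instance in instances:
            # _fetch_counters_somehow() is a made-up placeholder for
            # whatever source ends up providing the data (BMC or Neutron).
            for vif in self._fetch_counters_somehow(instance):
                counters.append({
                    'uuid': instance.uuid,
                    'mac_address': vif['mac'],
                    'start_period': vif['start'],
                    'bw_in': vif['rx_bytes'],
                    'bw_out': vif['tx_bytes'],
                })
        return counters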

Ironic
* I assume Ironic would be responsible for knowing how to fetch bandwidth
counters for a given instance - correct?


The nova.virt.ironic driver would be responsible for implementing that
method -- but I don't think that it makes sense to fetch that information
from Ironic.


In some cases, it may be possible for the Node's management controller
(eg, the iLO) to collect/measure/expose network traffic counters for each
physical interface on the Node. None of Ironic's in-tree drivers support
gathering this data, afaik; Ironic isn't capturing it, and we don't have
an API to expose it today. If we went this route, it would be a
vendor-specific thing, and not supported by the _*ipmitool class of
drivers. In other words, I don't think we could have a fully open source
production-oriented implementation of this feature.


On the other hand, with the Neutron integration now underway, if one were
using Neutron and OVS or OVN to manage the physical switches, then I would
think that Neutron could expose the bandwidth counters on the VIFs
associated with the Instance // with the user-defined Ports. I believe OVS
supports this, but I don't see anything in the Neutron API that actually
exposes it... (IANANE, so it may very well be there and I just didn't find
it)

I'll defer to Neutron folks here. If the VIF's bandwidth counters can be
fetched from neutron, that would be ideal, as it should work regardless of
the server's management controller.

(I've added [neutron] to the subject line to solicit their input)


The only metering feature I know in neutron is L3 metering, which
measures traffic per router, not per port:

http://docs.openstack.org/admin-guide/networking_adv-features.html#l3-metering

It would take a completely new feature in neutron to expose traffic per  
port. I don’t think there would be a problem of backends not supporting  
this feature, but it would take some API design work.


I don’t know of any plans to expose this information through neutron API.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][db][models]

2016-07-29 Thread Ihar Hrachyshka

Victor  wrote:


Manjeet,

Tony has some issues moving model classes to another location. Given that
some model classes are used by other neutron services, Ihar suggested
using debtcollector to make this transition smooth. Can we include that
solution as part of this movement?


Absolutely. There should be a debtcollector based wrapper to move models  
around, as in:  
https://review.openstack.org/#/c/330870/13/neutron/db/agents_db.py @ line 88


This should be spun out into a separate review, then utilized in all  
refactoring patches.
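
(To make the pattern concrete, here is a minimal sketch of such a wrapper,
modeled on the agents_db.py review linked above; the new module path is an
assumption for illustration:)

    from debtcollector import moves

    from neutron.db.models import agent as agent_models

    # Shim left behind in the old module so existing imports keep working,
    # while emitting a DeprecationWarning pointing at the new location.
    Agent = moves.moved_class(agent_models.Agent, 'Agent', __name__)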




Thanks
Victor Morales



On 7/28/16, 12:19 PM, "Bhatia, Manjeet S"   
wrote:



Ihar Hrachyshka  wrote:

Manjeet S  wrote:


Hello Team,

I have a question regarding centralizing all db models in neutron. As
you all know, Oslo versioned objects work is in progress, and I also
have a ticket open for refactoring db models
(https://bugs.launchpad.net/neutron/+bug/1597913). There are three
ways I can do this: 1) move all models to db/models_v2.py; 2) create a
new dir db/models/ and move whatever models are causing cyclic import
issues to db_models.py under the db/models/ tree, but all in the same
file; 3) move them into different files under the same db/models tree.
I liked the second way better; please let me know which one is better
according to experienced developers, and I’ll do it that way.


I don’t think 2. is the best way forward because it still keeps all
models in a single file with no classification. I prefer we split
models by topic, so option 3.

I took the approach for security groups here:
https://review.openstack.org/#/c/284738/49/neutron/db/models/securitygroup.py



I also prefer this organization (option 3).


Ok thanks will follow 3.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-29 Thread Ihar Hrachyshka

Cathy Zhang  wrote:


Hi Ihar and all,

Yes, we have been preparing for such a release. We will do one more round  
of testing to make sure everything works fine, and then I will submit the  
release request.
There is a new patch on "stadium: adopt openstack/releases in subproject  
release process" which is not Merged yet.
Shall I follow this  
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process  
to submit the request?

Do you have a good bug example for Neutron sub-project release request?


For the time being, until the patch lands, you may follow any of those
directions.


An example of a release request bug is:  
https://bugs.launchpad.net/networking-bagpipe/+bug/1589502




BTW, a functional and tempest patch for networking-sfc has been uploaded  
and it might take some time for the team to complete the review. The test  
is non-voting. Do you think we should wait until this patch is merged or  
release can be done without it?


It would be great to have CI voting, but then, you already lag with the  
release for months comparing to release date of Neutron Mitaka, and you  
risk getting into Phase II support mode before you even release the first  
version. If you don’t envision release blocker bugs in the branch, I would  
suggest you release the thing and then follow up with bug fixes for  
whatever you catch later on. In a way, it’s better to release a half baked  
release than to not release at all. That’s to follow the ‘release often’  
mantra, and boost adoption.




Thanks,
Cathy

-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Wednesday, July 27, 2016 1:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] SFC stable/mitaka version

Tony Breeds  wrote:


On Wed, Jul 06, 2016 at 12:40:48PM +, Gary Kotton wrote:

Hi,
Is anyone looking at creating a stable/mitaka version? What if
someone want to use this for stable/mitaka?


If that's a thing you need it's a matter of Armando asking the release
managers to create it.


I only suggest Armando is not dragged into it, the release liaison  
(currently me) should be able to handle the request if it comes from the  
core team for the subproject.


Ihar




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral logo ideas?

2016-07-29 Thread Elisha, Moshe (Nokia - IL)
Octopus sounds good to me.
For me it somehow relates to Mistral as well – like concurrent tasks…

From: Renat Akhmerov
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 19 July 2016 at 07:36
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [mistral] Mistral logo ideas?



On 18 Jul 2016, at 19:54, Ryan Brady wrote:

On Mon, Jul 18, 2016 at 12:44 AM, Renat Akhmerov wrote:
On choosing a mascot for Mistral. Let’s choose one till next Monday.

To start this discussion I’d like to propose a couple of ideas:


  *   Octopus (kind of symbolic to me). How do you like this beauty? 
http://nashaplaneta.su/_bl/158/78285238.jpg

 +1.  Intelligence, dexterity, tool-use - many good qualities to associate
with.


Yep :)


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][panko] ElasticSearch support is broken

2016-07-29 Thread Julien Danjou
Hi there,

This is a reminder that ElasticSearch support for event in Ceilometer
(now Panko) is broken for more than a month¹. If nothing is done by the
time we need to release RC1, as I don't see the point of releasing
broken code, I might suggest that we remove this driver altogether.

If you care, I'd suggest to fix it before it's too late.

¹  https://bugs.launchpad.net/ceilometer/+bug/1596988

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] typical length of timeseries data

2016-07-29 Thread Julien Danjou
On Thu, Jul 28 2016, gordon chung wrote:

> this is probably something to discuss on ops list as well eventually but 
> what do you think about shrinking the max size of timeseries chunks from 
> 14400 to something smaller? i'm curious to understand what the length of 
> the typical timeseries is. my main reason for bringing this up is that 
> even our default 'high' policy doesn't reach 14400 limit so it at most 
> will only split into two, partially filled objects. as we look to make a 
> more efficient storage format for v3(?) seems like this may be an 
> opportunity to change size as well (if necessary)

1 minute granularity over a year: 525600 points, 37 splits.
Even in that case, which is pretty precise, that's not a lot of splits,
I'd say.

> 14400 points roughly equals a 128KB object, which is cool, but maybe we 
> should target something smaller? 7200 points aka 64KB? 3600 points aka 
> 32KB? just for reference our biggest default series is 10080 points 
> (1min granularity over week).

It's 128 KB if you don't compress, but if you do, it's usually way less.

> that said 128KB (at most) might not be that bad from read/write pov and 
> maybe it's ok to keep it at 14400? i know from the test i did earlier, 
> the time requirement to read/write increases linearly (7200 point object 
> takes roughly half time of 14400 point object)[1]. i think the main item 
> is we don't want it so small that we're updating multiple objects at a 
> time.

The best way is probably to do some benchmarks… but I think it really
depends on the use cases here. The benefit of having many small splits is
that you can parallelize the reads.

Considering the compression ratio we have, I think we should split in
smaller files. I'd pick 3600 and give it a try.
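
(A quick back-of-the-envelope check of the split counts discussed above,
using the example of 1-minute granularity kept for a year:)

    import math

    YEAR_POINTS = 365 * 24 * 60  # 525600 points at 1-minute granularity

    for chunk in (14400, 7200, 3600):
        print(chunk, math.ceil(YEAR_POINTS / chunk))
    # 14400 -> 37 splits, 7200 -> 73, 3600 -> 146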

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate][ansible] Ansible OpenStack modules for Designate available

2016-07-29 Thread Ricardo Carrillo Cruz
Hi there

I'm happy to report that modules to manage Designate zones and recordsets
have landed on Ansible extras devel branch:

https://github.com/ansible/ansible-modules-extras/blob/devel/cloud/openstack/os_zone.py
https://github.com/ansible/ansible-modules-extras/blob/devel/cloud/openstack/os_recordset.py

If you find issues, please reach out to me on IRC (rcarrillocruz) and/or open
an issue on GitHub:

https://github.com/ansible/ansible-modules-extras/issues

Also, if you are interested in having other Designate resources implemented
as Ansible modules, let me know,
as I'm now looking for new streams of work in Ansible land.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FPGA as a dynamic nested resources

2016-07-29 Thread Roman Dobosz
On Thu, 28 Jul 2016 10:50:08 -0400
Jay Pipes  wrote:

> Roman, great thread, thanks for posting! Comment inline :)

Thanks!

> 
> > We can identify 3 levels of FPGA resources, which can be nested one
> > on the other:
> >
> > 1. Whole FPGA. If a discrete FPGA is used, then even today it might be
> >    passed through to the VM.
> >
> > 2. Region in FPGA. Some FPGA models can be divided into regions or
> >    slots. For some models it is also possible to (re)program such a
> >    region individually - in this case there is a possibility to pass an
> >    entire slot to the VM, so that it might be possible to reprogram
> >    that slot and utilize the algorithm within the VM.
> >
> > 3. Accelerator in region/FPGA. If there is an accelerator programmed
> >    in the slot, it is possible that such an accelerator provides us with
> >    Virtual Functions (similar to SR-IOV); then every available VF
> >    can be treated as a resource.
> >
> > 4. It might also be necessary to track every VF individually, although
> >    I didn't assume it will be needed; nevertheless, with nested
> >    resources it should be easy to handle.
> >
> > Correlation between such resources is a bit different from NUMA -
> > while in the NUMA case it is possible either to schedule a VM with
> > some memory specified or to request memory within a NUMA cell, with an
> > FPGA, if a slot is taken or an accelerator is already programmed and
> > used, there is no way to offer the FPGA as a whole to the tenant until
> > all accelerators and slots are free.
> 
> > I've followed Jay's idea about nested resources and, with the
> > blueprint[2] regarding dynamic resources in mind, I've prepared how it
> > fits in.
> >
> 
> >
> > To get the id of a resource provider of type acceleratorX able to allocate 8 VFs:
> >
> >
> > SELECT rp.id
> > FROM resource_providers rp
> > LEFT JOIN allocations al ON al.resource_provider_id = rp.id
> > LEFT JOIN inventories iv ON iv.resource_provider_id = rp.id
> > WHERE al.resource_class_id = 1668
> > AND (iv.total - COALESCE(al.used, 0)) >= 8;
> 
> Right idea, yes, but you would need to INNER JOIN inventories and LEFT 
> JOIN from the winnowed set of inventory records to a grouped projection 
> of allocations. :)
> 
> The SQL would be this:
> 
> SELECT rp.id
> FROM resource_providers rp
> INNER JOIN inventories iv
> ON rp.id = iv.resource_provider_id
> AND iv.resource_class_id = 1688
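> -- grouped usage per provider for this class; the LEFT JOIN keeps
> -- providers that have no allocations yet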
> LEFT JOIN (
>SELECT resource_provider_id, SUM(used) as used
>FROM allocations
>WHERE resource_class_id = 1688
>GROUP BY resource_provider_id
> ) AS al
> ON iv.resource_provider_id = al.resource_provider_id
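> -- capacity check: used is NULL (treated as 0) for never-allocated providers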
> WHERE (iv.total - COALESCE(al.used, 0)) >= 8;

Hm. I'm getting the same results using both queries. Certainly, I can't
see anything obvious here, and for sure I'm no SQL expert :)

> The other SQL queries you listed had a couple errors, but the ideas were 
> mostly sound. I'll include the FPGA use cases when I write up the nested 
> resource providers spec proposal.

Great, thank you!

> The only thing I'd say is that I was envisioning the dynamic resource 
> classes for FPGAs to be the resource context to an already-flashed 
> algorithm, not to the FPGA root device (or a region even). But, who 
> knows, perhaps we can work something out. More discussion on the spec...

For sure, we can start from defining basic case, and expand it if
needed.

-- 
Cheers,
Roman Dobosz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Switch 'all?' openstack bots to errbot plugins?

2016-07-29 Thread Morgan Fainberg
On Jul 28, 2016 22:50, "Joshua Harlow"  wrote:
>
> Hi folks,
>
> I was thinking it might be useful to see what other folks think about
switching (or migrating all the current bots we have in openstack) to be
based on errbot plugins.
>
> Errbot @ http://errbot.io/en/latest/ takes a slightly different approach
to bots and treats each bot 'feature' as a plugin that can be activated and
deactivated with-in the context of the same bot (even doing so
dynamically/at runtime).
>
> It also allows for those that use slack (or other backend @
http://errbot.io/en/latest/features.html) to be able to 'seamlessly' use
the same plugins and just switching a tiny amount config to use a different
'bot backend'.
>
> I've been experimenting with it more recently and have a gerritbot (sort
of equivalent) @ https://github.com/harlowja/gerritbot2 and also have been
working on a oslobot plugin @ https://review.openstack.org/#/c/343857/ and
during this exploration it has gotten me to think that we could move most
of the functionality of the various bots in openstack (patchbot, openstack
- really meetbot, gerritbot and others?) under the same umbrella (or at
least convert them into plugins that folks can run on IRC or, if they want
to run them on some other backend, that's cool too).
>
> The hardest one I can think would be meetbot, although the code @
https://github.com/openstack-infra/meetbot doesn't look impossible (or
really that hard to convert to an errbot plugin).
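
(For a sense of scale, an errbot plugin is just a small Python class; a
minimal sketch, with a made-up class name and command:)

    from errbot import BotPlugin, botcmd

    class OpenStackStatus(BotPlugin):
        """Toy plugin: the name and command are purely illustrative."""

        @botcmd
        def status(self, msg, args):
            """Reply to '!status' with a canned message."""
            return 'All systems nominal.'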
>
> What do people think?
>
> Any strong preference?
>
> I was also thinking that as a result we could then just have a single
'openstack' bot and also turn on plugins like:
>
> - https://github.com/aherok/errbot_plugins (helps with timezone
conversions that might be useful to have for folks that keep on getting
them wrong).
> - some stackalytics integration bot?
> - something even better???
> - some other plugin @ https://github.com/errbotio/errbot/wiki
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As I recall this has been on a long list of "we want to do it". It really
just comes down to someone putting effort into making it happen.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] IRC channel change remind and this meeting agenda

2016-07-29 Thread jason
Hi team,

Just a reminder that we have moved from the #daisycloud channel to
#openstack-meeting. The agenda of the coming meeting is as follows:

#topic roll call
#topic CI build error
#topic WEB UI deployment status update
#topic ironic related problem status update

-- 
Yours,
Jason

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [yaql] Evaluate YAQL expressions in the yaqluator

2016-07-29 Thread Elisha, Moshe (Nokia - IL)
Hi,

I saw that starting with the Newton release, Heat supports the yaql function[1].
I think this will prove to be very powerful and very handy.

I wanted to make sure you are familiar with the yaqluator[2] as it might be 
useful for you.

yaqluator is a free online YAQL evaluator.
* Enter a YAML / JSON and a YAQL expression and evaluate to see the result.
* There is a catalog of commonly used OpenStack API responses to run YAQL 
expressions against.
* It is open-source[3] and any contribution is welcome.

I hope you will find it useful.


[1] http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#yaql
[2] http://yaqluator.com
[3] https://github.com/ALU-CloudBand/yaqluator
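
(If you prefer to evaluate expressions locally, the yaql Python library
offers the same thing; a quick sketch with made-up sample data:)

    import yaql

    engine = yaql.factory.YaqlFactory().create()
    data = {'servers': [{'name': 'a', 'ram': 512},
                        {'name': 'b', 'ram': 2048}]}
    expression = engine('$.servers.where($.ram > 1024).select($.name)')
    print(list(expression.evaluate(data=data)))  # ['b']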


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev