Re: [openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread 李田清
Thanks Matt.






 
 
-- Original --
From:  "Matt Riedemann";
Date:  Fri, Apr 7, 2017 00:18 AM
To:  "openstack-dev"; 

Subject:  Re: [openstack-dev] [nova] nova boot from image created volume

 
On 4/6/2017 10:05 AM, Jay Pipes wrote:
>> jaypipes@serialcoder:~/src/git.openstack.org/openstack/nova/nova$
>> ack-grep --ignore-dir tests --ignore-dir locale "volume_api.create"
>> compute/api.py
>> 2982:snapshot = self.volume_api.create_snapshot_force(
>>
>> api/openstack/compute/volumes.py
>> 185:new_volume = self.volume_api.create(
>> 582:create_func = self.volume_api.create_snapshot_force
>> 584:create_func = self.volume_api.create_snapshot
>>
>> virt/block_device.py
>> 63::returns: The availability_zone value to pass to volume_api.create
>> 487:vol = volume_api.create(context, self.volume_size, '',
>> '',
>> 508:vol = volume_api.create(context, self.volume_size,
>> 530:vol = volume_api.create(context, self.volume_size,
>> vol_name, '',
>
> Best,
> -jay
>
> On 04/06/2017 03:29 AM, 李田清 wrote:
>> Hello,
>> If we use nova boot from image and created volume, i think the
>> nova will
>> use volume/cinder.py:create to create volume. But after insert pdb,
>> i do not find
>> the specific code of line to call the create. Can someone help me to
>> point out the
>> code of the line? Thanks a lot.
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>

For the case you're looking for, the volume is created here:

https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L508

That happens when the compute manager is preparing block devices to 
attach to the instance while building it on the compute host.

After the volume is created and nova gets the volume ID back, we wait 
for it to be available before we can attach it:

https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L512

Which uses this function passed in from the compute manager (yes, it's 
gross tight coupling):

https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/compute/manager.py#L1221

And finally it attaches the volume to the instance via a call to the 
parent class:

https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L516

I hope this helps. It only took me 4 years to memorize the flow. :)
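[Editorial note: putting the links above together, the flow can be sketched
in rough pseudo-Python. Names like volume_api and the wait function mirror
the linked code but are simplified stand-ins, not actual nova source.]

```python
import time

def create_boot_volume(volume_api, context, size_gb, wait_func):
    """Rough stand-in for the block-device driver path in
    nova/virt/block_device.py: create the volume, then wait until
    Cinder reports it available before attaching."""
    vol = volume_api.create(context, size_gb, '', '')   # ~block_device.py L508
    wait_func(context, vol['id'])                       # ~block_device.py L512
    return vol['id']

def await_block_device_map_created(volume_api, context, vol_id,
                                   interval=0.5, retries=60):
    """Stand-in for the compute manager helper (~manager.py L1221) that is
    passed in as wait_func - the 'gross tight coupling' mentioned above."""
    for _ in range(retries):
        vol = volume_api.get(context, vol_id)
        if vol['status'] == 'available':
            return
        if vol['status'] == 'error':
            raise RuntimeError('volume %s failed to build' % vol_id)
        time.sleep(interval)
    raise RuntimeError('timed out waiting for volume %s' % vol_id)
```

After the wait returns, the real code attaches the volume via the parent
class, which this sketch omits.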






-- 

Thanks,

Matt



Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-06 Thread Steve Gordon
- Original Message -
> From: "Monty Taylor" 
> To: openstack-dev@lists.openstack.org
> Sent: Sunday, April 2, 2017 4:16:44 PM
> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
> Provider for Kubernetes
> 
> On 04/02/2017 02:53 PM, Chris Hoge wrote:
> > Now that the provider has a repository in the OpenStack project
> > namespace, we need to move over the existing set of issues and pull
> > requests and create an initial work list for migrating patches and
> > fixing existing issues.
> > 
> > I've started up an etherpad where we can track that work[1]. In the longer
> > run we should migrate over to Launchpad or Storyboard. One question,
> > to help preserve continuity with the K8S community workflow: do we want
> > to investigate ways to allow for issue creation in the OpenStack
> > namespace on GitHub?
> 
> I do not think this is a thing we want to do. While I understand the
> urge, a project needs to live somewhere (in this case we've chosen
> OpenStack) and should behave as projects do in that location. When I
> work on Ansible, I do issues on github. When I deal with tox, I file
> issues on bitbucket. Back when I dealt with Jenkins I filed issues in
> their Jira. I do not think that filing an issue in the issue tracker for
> a project is too onerous of a request to make of someone.
> 
> We have issues turned off in all of our github mirrors, so it's highly
> unlikely someone will accidentally attempt to file an issue like that.
> (it's too bad we can't similarly turn off pull requests, but oh well)

I agree with the above comments w.r.t. tooling, but I think we still need
to determine what I see as the core of Chris's concern: in a world where
we have extracted the cloud provider implementation from Kube (and
externalizing these from Kube has indeed been on the table for some time,
so thanks Dims for taking the initiative), how do we continue to work on
it in the OpenStack community while still maintaining - if not extending -
our level of interop and visibility with the Kubernetes community? I think
the focus of concern here should be less on the tools - as you note, each
community has its own tools and that is unlikely to change - and more on
communication, though it can be difficult to decouple the two (IRC versus
Slack, Zoom, etc.).

Thus far, discussion of open PRs/issues and ongoing work w.r.t. the
provider implementation has primarily been focused on the Kubernetes
OpenStack SIG (the scope of which was recently extended to allow space for
discussion/collaboration between the various OpenStack deployment projects
and folks anchored on the Kubernetes side of things, specifically w.r.t.
Helm). It's not immediately clear to me how we would prefer to maintain
visibility on the Kubernetes side of the fence going forward, because a
natural progression of "this is developed, tested, and served up on
OpenStack infra" would of course also be to move most of these discussions
to IRC.

Thanks,

Steve



Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-04-06 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-04-06 18:17:59 -0400:
> On Wed, Mar 22, 2017 at 11:23 AM, Flavio Percoco  wrote:
> > On 15/03/17 15:40 -0400, Doug Hellmann wrote:
> >>
> >> Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
> >>>
> >>> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
> >>> > Team,
> >>> >
> >>> > So one more thing popped up again on IRC:
> >>> > https://etherpad.openstack.org/p/oslo.config_etcd_backend
> >>> >
> >>> > What do you think? interested in this work?
> >>> >
> >>> > Thanks,
> >>> > Dims
> >>> >
> >>> > PS: Between this thread and the other one about Tooz/DLM and
> >>> > os-lively, we can probably make a good case to add etcd as a base
> >>> > always-on service.
> >>>
> >>> As I mentioned in the other thread, there was specific and strong
> >>> anti-etcd sentiment in Tokyo which is why we decided to use an
> >>> abstraction. I continue to be in favor of us having one known service in
> >>> this space, but I do think that it's important to revisit that decision
> >>> fully and in context of the concerns that were raised when we tried to
> >>> pick one last time.
> >>>
> >>> It's worth noting that there is nothing particularly etcd-ish about
> >>> storing config that couldn't also be done with zk and thus just be an
> >>> additional api call or two added to Tooz with etcd and zk drivers for it.
> >>>
> >>
> >> The fun* thing about working with these libraries is managing the
> >> interdependencies. If we're going to have an abstraction library that
> >> provides configuration options for selecting the backend, like we do in
> >> oslo.db and oslo.messaging, then the configuration library can't use it
> >> or we have a circular dependency.
> >>
> >> Luckily, tooz does not currently use oslo.config. So, oslo.config could
> >> use tooz and we could create an oslo.dlm library with a shallow
> >> interface mapping config options to tooz calls to open connections or
> >> whatever we need from tooz in an application. Then apps could use
> >> oslo.dlm instead of calling into tooz directly and the configuration of
> >> the backend would be hidden from the application developer.
> >
> >
> > Replying here because I like the proposal, I like what Monty said and I also
> > like what Doug said. Most of the issues and concerns have been covered in
> > this
> > thread and I don't have much else to add other than +1.
> 
> The one-million-dollar question now is: what are the next steps?
> It sounds like an oslo spec would be nice to summarize the ideas here
> and talk about design.
> 
> I volunteer to help but I would need someone more familiar than I am with 
> Oslo.
> Please let me know if you're interested to work on it with me
> otherwise I'll chase some of you :-)

I can help from the Oslo side.

Doug
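[Editorial note: as a rough illustration of the layering Doug describes -
the application talks to a thin oslo.dlm-style shim, and the backend
choice lives only in configuration - here is a sketch with tiny stand-ins
in place of oslo.config and tooz. The real entry point would be
tooz.coordination.get_coordinator(url, member_id); everything else here
is invented for illustration.]

```python
class Conf:
    """Stand-in for an oslo.config ConfigOpts object with a [dlm] group."""
    def __init__(self, backend_url):
        self.dlm = type('DlmGroup', (), {'backend_url': backend_url})()

class Coordinator:
    """Stand-in for the coordinator object tooz would hand back."""
    def __init__(self, url, member_id):
        self.url = url
        self.member_id = member_id
        self.started = False
    def start(self):
        self.started = True

def get_coordinator(conf, member_id):
    # The app asks the shim for "a coordinator"; whether that means etcd,
    # zookeeper, etc. is decided entirely by [dlm]/backend_url, so the
    # backend stays hidden from the application developer.
    coord = Coordinator(conf.dlm.backend_url, member_id)
    coord.start()
    return coord

coord = get_coordinator(Conf('etcd3://localhost:2379'), 'my-service')
print(coord.url)  # etcd3://localhost:2379
```

This also shows why the dependency direction matters: the shim depends on
oslo.config, never the other way around, so no circular dependency arises.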

> 
> Thanks for the nice discussions here, I think we've made good progress.
> 
> >> Doug
> >>
> >> * your definition of "fun" may be different than mine
> >
> >
> > Which is probably different than mine :)
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> >
> 



Re: [openstack-dev] [policy][nova][keystone] policy meeting next week

2017-04-06 Thread Lance Bragstad
For those looking for a little background information on RBAC, I've
included a link to the NIST RBAC model [0]. It doesn't drill down to the
nitty-gritty implementation details, but it does give a good idea of RBAC
standardization complete with examples.

If anyone has any additional references they've found useful for RBAC
discussions, feel free to drop them here.


[0] http://csrc.nist.gov/rbac/sandhu-ferraiolo-kuhn-00.pdf
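[Editorial note: for reference, the core idea in that model - permissions
attach to roles, and users get permissions only through role membership -
fits in a few lines. This is illustrative only, not keystone's
implementation; all names are made up.]

```python
class RBAC:
    """Minimal flat RBAC: user -> roles, role -> permissions."""
    def __init__(self):
        self.user_roles = {}   # user -> set of role names
        self.role_perms = {}   # role -> set of permission strings

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def check(self, user, perm):
        # A user holds a permission iff one of their roles grants it.
        return any(perm in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, set()))

rbac = RBAC()
rbac.grant("admin", "identity:create_user")
rbac.assign_role("alice", "admin")
print(rbac.check("alice", "identity:create_user"))  # True
print(rbac.check("bob", "identity:create_user"))    # False
```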

On Wed, Apr 5, 2017 at 4:45 PM, Lance Bragstad  wrote:

> We ended up cancelling today's policy meeting, but policy discussions
> carried on throughout the day in #openstack-keystone [0]. We have several
> specs up for review [1][2][3][4]. Some are nova specs and a couple are
> proposed to keystone. With keystone's spec proposal freeze coming up next
> week [5], this is our last shot to collaborate on new ideas if we want to
> propose any for Pike.
>
> I'm sending out this note to ask folks interested in the policy
> discussions to have a look at the proposed specs. Next week's meeting will
> be focused on having detailed discussions about them.
>
> If you just can't wait to start talking about policy until next week, come
> find me in #openstack-dev. Pre-discussions might help maximize our meeting
> next week.
>
> Thanks!
>
> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-05.log.html#t2017-04-05T16:24:29
> [1] https://review.openstack.org/#/c/427872/
> [2] https://review.openstack.org/#/c/433037/
> [3] https://review.openstack.org/#/c/452198/
> [4] https://review.openstack.org/#/c/453739/
> [5] https://releases.openstack.org/pike/schedule.html#p-keystone-spec-proposal-freeze
>


Re: [openstack-dev] [keystone] rejoining our IRC channel

2017-04-06 Thread Dariusz Śmigiel
2017-04-06 17:16 GMT-05:00 Jeremy Stanley :
> On 2017-04-06 16:04:14 -0500 (-0500), Lance Bragstad wrote:
>> If you chill in #openstack-keystone, we had a little mishap today that
>> resulted in people getting accidentally kicked from the channel. Everything
>> is back to normal and if you haven't already done so, feel free to hop back
>> in.
>
> Sad to say it was all thanks to my butterfingers. Sorry about the
> disruption... :/

Jeremy, that [1] was brutal :)

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-06.log.html#t2017-04-06T20:46:40

> --
> Jeremy Stanley
>

--
Dariusz Smigiel



Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-04-06 Thread Emilien Macchi
On Wed, Mar 22, 2017 at 11:23 AM, Flavio Percoco  wrote:
> On 15/03/17 15:40 -0400, Doug Hellmann wrote:
>>
>> Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
>>>
>>> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
>>> > Team,
>>> >
>>> > So one more thing popped up again on IRC:
>>> > https://etherpad.openstack.org/p/oslo.config_etcd_backend
>>> >
>>> > What do you think? interested in this work?
>>> >
>>> > Thanks,
>>> > Dims
>>> >
>>> > PS: Between this thread and the other one about Tooz/DLM and
>>> > os-lively, we can probably make a good case to add etcd as a base
>>> > always-on service.
>>>
>>> As I mentioned in the other thread, there was specific and strong
>>> anti-etcd sentiment in Tokyo which is why we decided to use an
>>> abstraction. I continue to be in favor of us having one known service in
>>> this space, but I do think that it's important to revisit that decision
>>> fully and in context of the concerns that were raised when we tried to
>>> pick one last time.
>>>
>>> It's worth noting that there is nothing particularly etcd-ish about
>>> storing config that couldn't also be done with zk and thus just be an
>>> additional api call or two added to Tooz with etcd and zk drivers for it.
>>>
>>
>> The fun* thing about working with these libraries is managing the
>> interdependencies. If we're going to have an abstraction library that
>> provides configuration options for selecting the backend, like we do in
>> oslo.db and oslo.messaging, then the configuration library can't use it
>> or we have a circular dependency.
>>
>> Luckily, tooz does not currently use oslo.config. So, oslo.config could
>> use tooz and we could create an oslo.dlm library with a shallow
>> interface mapping config options to tooz calls to open connections or
>> whatever we need from tooz in an application. Then apps could use
>> oslo.dlm instead of calling into tooz directly and the configuration of
>> the backend would be hidden from the application developer.
>
>
> Replying here because I like the proposal, I like what Monty said and I also
> like what Doug said. Most of the issues and concerns have been covered in
> this
> thread and I don't have much else to add other than +1.

The one-million-dollar question now is: what are the next steps?
It sounds like an oslo spec would be nice to summarize the ideas here
and talk about design.

I volunteer to help but I would need someone more familiar than I am with Oslo.
Please let me know if you're interested to work on it with me
otherwise I'll chase some of you :-)

Thanks for the nice discussions here, I think we've made good progress.

>> Doug
>>
>> * your definition of "fun" may be different than mine
>
>
> Which is probably different than mine :)
>
> --
> @flaper87
> Flavio Percoco
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [keystone] rejoining our IRC channel

2017-04-06 Thread Jeremy Stanley
On 2017-04-06 16:04:14 -0500 (-0500), Lance Bragstad wrote:
> If you chill in #openstack-keystone, we had a little mishap today that
> resulted in people getting accidentally kicked from the channel. Everything
> is back to normal and if you haven't already done so, feel free to hop back
> in.

Sad to say it was all thanks to my butterfingers. Sorry about the
disruption... :/
-- 
Jeremy Stanley



Re: [openstack-dev] [tripleo] container jobs are unstable

2017-04-06 Thread Wesley Hayutin
On Thu, Mar 30, 2017 at 10:08 AM, Steven Hardy  wrote:

> On Wed, Mar 29, 2017 at 10:07:24PM -0400, Paul Belanger wrote:
> > On Thu, Mar 30, 2017 at 09:56:59AM +1300, Steve Baker wrote:
> > > On Thu, Mar 30, 2017 at 9:39 AM, Emilien Macchi 
> wrote:
> > >
> > > > On Mon, Mar 27, 2017 at 8:00 AM, Flavio Percoco 
> wrote:
> > > > > On 23/03/17 16:24 +0100, Martin André wrote:
> > > > >>
> > > > >> On Wed, Mar 22, 2017 at 2:20 PM, Dan Prince 
> wrote:
> > > > >>>
> > > > >>> On Wed, 2017-03-22 at 13:35 +0100, Flavio Percoco wrote:
> > > > 
> > > >  On 22/03/17 13:32 +0100, Flavio Percoco wrote:
> > > >  > On 21/03/17 23:15 -0400, Emilien Macchi wrote:
> > > >  > > Hey,
> > > >  > >
> > > >  > > I've noticed that container jobs look pretty unstable
> lately; to
> > > >  > > me,
> > > >  > > it sounds like a timeout:
> > > >  > > http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973
> > > >  >
> > > >  > There are different hypothesis on what is going on here. Some
> > > >  > patches have
> > > >  > landed to improve the write performance on containers by using
> > > >  > hostpath mounts
> > > >  > but we think the real slowness is coming from the images
> download.
> > > >  >
> > > >  > This said, this is still under investigation and the
> containers
> > > >  > squad will
> > > >  > report back as soon as there are new findings.
> > > > 
> > > >  Also, to be more precise, Martin André is looking into this. He
> also
> > > >  fixed the
> > > >  gate in the last 2 weeks.
> > > > >>>
> > > > >>>
> > > > >>> I spoke w/ Martin on IRC. He seems to think this is the cause of
> some
> > > > >>> of the failures:
> > > > >>>
> > > > >>> http://logs.openstack.org/32/446432/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/543bc80/logs/oooq/overcloud-controller-0/var/log/extra/docker/containers/heat_engine/log/heat/heat-engine.log.txt.gz#_2017-03-21_20_26_29_697
> > > > >>>
> > > > >>>
> > > > >>> Looks like Heat isn't able to create Nova instances in the
> overcloud
> > > > >>> due to "Host 'overcloud-novacompute-0' is not mapped to any
> cell'. This
> > > > >>> means our cells initialization code for containers may not be
> quite
> > > > >>> right... or there is a race somewhere.
> > > > >>
> > > > >>
> > > > >> Here are some findings. I've looked at time measures from CI for
> > > > >> https://review.openstack.org/#/c/448533/ which provided the most
> > > > >> recent results:
> > > > >>
> > > > >> * gate-tripleo-ci-centos-7-ovb-ha [1]
> > > > >>undercloud install: 23
> > > > >>overcloud deploy: 72
> > > > >>total time: 125
> > > > >> * gate-tripleo-ci-centos-7-ovb-nonha [2]
> > > > >>undercloud install: 25
> > > > >>overcloud deploy: 48
> > > > >>total time: 122
> > > > >> * gate-tripleo-ci-centos-7-ovb-updates [3]
> > > > >>undercloud install: 24
> > > > >>overcloud deploy: 57
> > > > >>total time: 152
> > > > >> * gate-tripleo-ci-centos-7-ovb-containers-oooq-nv [4]
> > > > >>undercloud install: 28
> > > > >>overcloud deploy: 48
> > > > >>total time: 165 (timeout)
> > > > >>
> > > > >> Looking at the undercloud & overcloud install times, which are the
> > > > >> most time-consuming tasks, the containers job isn't doing that bad compared
> to
> > > > >> other OVB jobs. But looking closer I could see that:
> > > > >> - the containers job pulls docker images from dockerhub, this
> process
> > > > >> takes roughly 18 min.
> > > > >
> > > > >
> > > > > I think we can optimize this a bit by having the script that
> populates
> > > > the
> > > > > local
> > > > > registry in the overcloud job to run in parallel. The docker
> daemon can
> > > > do
> > > > > multiple pulls w/o problems.
> > > > >
> > > > >> - the overcloud validate task takes 10 min more than it should
> because
> > > > >> of the bug Dan mentioned (a fix is in the queue at
> > > > >> https://review.openstack.org/#/c/448575/)
> > > > >
> > > > >
> > > > > +A
> > > > >
> > > > >> - the postci takes a long time with quickstart, 13 min (4 min
> alone
> > > > >> spent on docker log collection) whereas it takes only 3 min when
> using
> > > > >> tripleo.sh
> > > > >
> > > > >
> > > > > mmh, does this have anything to do with ansible being in between?
> Or is
> > > > that
> > > > > time specifically for the part that gets the logs?
> > > > >
> > > > >>
> > > > >> Adding all these numbers, we're at about 40 min of additional
> time for
> > > > >> oooq containers job which is enough to cross the CI job limit.
> > > > >>
> > > > >> There is certainly a lot of room for optimization here and there
> and
> > > > >> I'll explore how we can speed up the containers CI job 
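[Editorial note: the parallel image-pull optimization suggested above
could be sketched roughly as below. The image names and the docker
invocation are illustrative; the real CI script differs.]

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

IMAGES = [
    'tripleoupstream/centos-binary-heat-engine',
    'tripleoupstream/centos-binary-nova-compute',
    # ... rest of the overcloud image list
]

def docker_pull(image):
    # The docker daemon handles several concurrent pulls without problems,
    # so overlapping the downloads can recover much of the ~18 min.
    return subprocess.call(['docker', 'pull', image])

def pull_all(images, pull=docker_pull, workers=4):
    # Fan the pulls out over a small thread pool; each worker just blocks
    # on its docker client process, so threads are sufficient here.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pull, images))
```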

Re: [openstack-dev] [freezer] Freezer Hangout Discussion tomorrow

2017-04-06 Thread Saad Zaher
You can join the call directly tomorrow using this link [0].


[0]
https://plus.google.com/hangouts/_/calendar/ZW5nLnN6YWhlckBnbWFpbC5jb20.u6remok1ujohdvh9hgl3u3h294?authuser=0


On Thu, Apr 6, 2017 at 10:03 PM, Saad Zaher  wrote:

> Hello everyone,
>
> There will be a freezer hangout meeting tomorrow if you want to discuss
> anything about freezer. We're going to review the project, what has been
> done so far, and where we should direct the project.
>
> The hangout will start tomorrow at 2:00 pm GMT till 3:00 pm GMT and you
> can join from here [0].
>
> Hope to get all of you tomorrow in this meeting.
>
>
> [0] https://calendar.google.com/calendar/event?action=TEMPLATE=
> dTZyZW1vazF1am9oZHZoOWhnbDN1M2gyOTQgZW5nLnN6YWhlckBt=
> eng.szaher%40gmail.com
>
> --
> Best Regards,
> Saad!
>



-- 
--
Best Regards,
Saad!


[openstack-dev] [keystone] rejoining our IRC channel

2017-04-06 Thread Lance Bragstad
If you chill in #openstack-keystone, we had a little mishap today that
resulted in people getting accidentally kicked from the channel. Everything
is back to normal and if you haven't already done so, feel free to hop back
in.


Thanks!


[openstack-dev] [freezer] Freezer Hangout Discussion tomorrow

2017-04-06 Thread Saad Zaher
Hello everyone,

There will be a freezer hangout meeting tomorrow if you want to discuss
anything about freezer. We're going to review the project, what has been
done so far, and where we should direct the project.

The hangout will start tomorrow at 2:00 pm GMT till 3:00 pm GMT and you can
join from here [0].

Hope to get all of you tomorrow in this meeting.


[0]
https://calendar.google.com/calendar/event?action=TEMPLATE=dTZyZW1vazF1am9oZHZoOWhnbDN1M2gyOTQgZW5nLnN6YWhlckBt=eng.szaher%40gmail.com


--
Best Regards,
Saad!


Re: [openstack-dev] [tc] version document for project navigator

2017-04-06 Thread Jimmy McArthur

Cool. Thanks Monty!


Monty Taylor 
April 6, 2017 at 3:21 PM
On 04/06/2017 11:58 AM, Jimmy McArthur wrote:

Assuming this format is accepted, do you all have any sense of when this
data will be complete for all projects?


Hopefully "soon" :)

Honestly, it's not terribly difficult data to produce, so once we're 
happy with it and where it goes, crowdsourcing filling it all in 
should go quickly.



Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format
works great. We can actually derive the age of the project from this
information as well by identifying the first release that has API data
for a particular project. I'm indifferent about where it lives, so I'd
defer to you all to determine the best spot.

I really appreciate you all putting this together!

Jimmy


Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made 
elsewhere:


This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any information we
need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to collect
it from projects.

That way there is a clear place to go to to propose fixes to the project
navigator data. Not knowing how to fix that data is a common complaint,
so if we can point people to a git repo (and redirect people from there
to the places where other bits of information happen to live) that would
be great.

Monty Taylor 
April 4, 2017 at 5:47 PM
Hey all,

As per our discussion in today's TC meeting, I have made a document
format for reporting versions to the project navigator. I stuck it in
the releases repo:

  https://review.openstack.org/453361

Because there was already per-release information there, and the
governance repo did not have that structure.

I've included pseudo-code and a human explanation of how to get from a
service's version discovery document to the data in this document, but
also how it can be maintained- which is likely to be easier by hand
than by automation - but who knows, maybe we decide we want to make a
devstack job for each service that runs on tag events that submits a
patch to the releases repo. That sounds like WAY more work than once a
cycle someone adding a few lines of json to a repo - but *shrug*.

Basing it on the version discovery docs shows a few things:

* "As a user, I want to consume an OpenStack Service's Discovery
Document" is a thing people might want to do and want to do
consistently across services.

* We're not that far off from being able to do that today.

* Still, like we are in many places, we're randomly different in a few
minor ways that do not actually matter but make life harder for our
users.

Thoughts and feedback more than welcome!
Monty









Jimmy McArthur 
April 6, 2017 at 11:58 AM
Assuming this format is accepted, do you all have any sense of when 
this data will be complete for all projects?




Re: [openstack-dev] [tc] version document for project navigator

2017-04-06 Thread Monty Taylor

On 04/06/2017 11:58 AM, Jimmy McArthur wrote:

Assuming this format is accepted, do you all have any sense of when this
data will be complete for all projects?


Hopefully "soon" :)

Honestly, it's not terribly difficult data to produce, so once we're 
happy with it and where it goes, crowdsourcing filling it all in should 
go quickly.



Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format
works great. We can actually derive the age of the project from this
information as well by identifying the first release that has API data
for a particular project. I'm indifferent about where it lives, so I'd
defer to you all to determine the best spot.

I really appreciate you all putting this together!

Jimmy


Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made elsewhere:

This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any information we
need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to collect
it from projects.

That way there is a clear place to go to to propose fixes to the project
navigator data. Not knowing how to fix that data is a common complaint,
so if we can point people to a git repo (and redirect people from there
to the places where other bits of information happen to live) that would
be great.

Monty Taylor 
April 4, 2017 at 5:47 PM
Hey all,

As per our discussion in today's TC meeting, I have made a document
format for reporting versions to the project navigator. I stuck it in
the releases repo:

  https://review.openstack.org/453361

Because there was already per-release information there, and the
governance repo did not have that structure.

I've included pseudo-code and a human explanation of how to get from a
service's version discovery document to the data in this document, but
also how it can be maintained- which is likely to be easier by hand
than by automation - but who knows, maybe we decide we want to make a
devstack job for each service that runs on tag events that submits a
patch to the releases repo. That sounds like WAY more work than once a
cycle someone adding a few lines of json to a repo - but *shrug*.

Basing it on the version discovery docs shows a few things:

* "As a user, I want to consume an OpenStack Service's Discovery
Document" is a thing people might want to do and want to do
consistently across services.

* We're not that far off from being able to do that today.

* Still, like we are in many places, we're randomly different in a few
minor ways that do not actually matter but make life harder for our
users.

Thoughts and feedback more than welcome!
Monty

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] container jobs are unstable

2017-04-06 Thread Paul Belanger
On Thu, Mar 30, 2017 at 11:01:08AM -0400, Paul Belanger wrote:
> On Thu, Mar 30, 2017 at 03:08:57PM +0100, Steven Hardy wrote:
> > To be fair, we discussed this on IRC yesterday; everyone agreed an
> > infra-supported docker cache/registry was a great idea, but you said
> > there was no known timeline for it actually getting done.
> > 
> > So while we all want to see that happen, and potentially help out with the
> > effort, we're also trying to mitigate the fact that work isn't done by
> > working around it in our OVB environment.
> > 
> > FWIW I think we absolutely need multinode container jobs, e.g. using infra
> > resources, as that has worked out great for our puppet based CI, but we
> > really need to work out how to optimize the container download speed in
> > that environment before that will work well AFAIK.
> > 
> > You referenced https://review.openstack.org/#/c/447524/ in your other
> > reply, which AFAICS is a spec about publishing to dockerhub, which sounds
> > great, but we have the opposite problem, we need to consume those published
> > images during our CI runs, and currently downloading images takes too long.
> > So we ideally need some sort of local registry/pull-through-cache that
> > speeds up that process.
> > 
> > How can we move forward here, is there anyone on the infra side we can work
> > with to discuss further?
> > 
> Yes, I am currently working with clarkb to address some of these concerns.
> Today we are looking at setting up our cloud mirrors to cache[1] specific
> URLs; for example, we are testing out http://trunk.rdoproject.org. This is
> not a long-term solution for projects, but a short-term one. It will be
> opt-in for now, rather than us setting it up for all jobs. Long term, we
> move rdoproject.org into AFS.
> 
> I have been trying to see if we can do the same for Docker Hub, and
> continue to run it. The main issue, at least for me, is we don't want to
> depend on docker tooling for this. I'd rather not install docker into our
> control plane at this point in time.
> 
> So, all of that to say, it will take some time. I understand it is a high
> priority, but let's solve the current mirroring issues with tripleo first
> (RDO, gems, github), and let's see if the apache cache proxy will work for
> hub.docker.com too.
> 
> [1] https://review.openstack.org/451554

Wanted to follow up on this thread: we managed to get a reverse proxy cache[2]
for https://registry-1.docker.io working. So far I've only tested ubuntu,
fedora, and centos images, but the caching works. Once we land this, any jobs
using docker can take advantage of the mirror.

[2] https://review.openstack.org/#/c/453811



Re: [openstack-dev] [Zun]Use 'uuid' instead of 'id' as object ident in data model

2017-04-06 Thread Mike Bayer



On 04/05/2017 11:02 AM, gordon chung wrote:



On 05/04/17 09:00 AM, Monty Taylor wrote:


Please do NOT use uuid as a primary key in MySQL:

* UUID has 36 characters which makes it bulky.


you can store it as a binary if space is a concern.


this is highly inconvenient from a datadump / MySQL commandline 
perspective.
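To make the size trade-off concrete, here is a small sketch comparing the textual and binary representations of a UUID — roughly CHAR(36) versus BINARY(16) in MySQL terms. This only illustrates the storage sizes, not a recommendation either way:

```python
# A UUID's canonical string form is 36 characters; its raw form is 16 bytes.
# The binary form saves space but, as noted above, is awkward to read in
# data dumps and on the MySQL command line.
import uuid

u = uuid.uuid4()
print(len(str(u)))    # 36 characters, e.g. '1b4e28ba-2fa1-11d2-883f-0016d3cca427'
print(len(u.bytes))   # 16 bytes
print(u.bytes.hex())  # what you'd see in a raw dump: plain hex, no dashes
```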






* InnoDB stores data in PRIMARY KEY order, and all the secondary keys
also contain the PRIMARY KEY. So having a UUID as PRIMARY KEY makes the
indexes bigger, and they may no longer fit into memory.
* Inserts are random and the data is scattered.


you can store an ordered uuid (uuid1) for performance, but that's arguably
not much different from just autoincrement



In cases where data has a large natural key (like a uuid), it is
considered a best practice to use an auto-increment integer as the
primary key and to put a second column in the table to store the uuid,
potentially with a unique index applied to it for consistency.

That way the external identifier for things like gnocchi can still be
the UUID, but the internal id for the database can be an efficient
auto-increment primary key.


very good points. i guess ultimately you should probably just test to the
scale you hope for


there's no advantage to the UUID being the physical primary key of the 
table.  If you don't care about the surrogate integer, just ignore it; 
it gets created for you.   The only argument I can see is that you 
really want to generate rows in Python that refer to the UUID of another 
row and you want that UUID to go straight into a foreign-key constrained 
column, in which case I'd urge you to instead use idiomatic SQLAlchemy 
ORM patterns for data manipulation (e.g. relationships).


The surrogate integer thing is the use case that all database engines 
are very well tested for and while it is not "pure" from Codd's point of 
view, it is definitely the most pragmatic approach from many different 
perspectives.
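A minimal SQLAlchemy sketch of the surrogate-key pattern described above — an auto-increment integer primary key for internal use and foreign keys, plus a uniquely indexed uuid column as the external identifier. The `Container` model name is purely illustrative (not Zun's actual schema), and SQLite stands in for MySQL:

```python
import uuid

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Container(Base):
    """Illustrative resource: integer surrogate PK, uuid for the API."""
    __tablename__ = 'container'

    # Efficient clustered primary key; InnoDB stores rows in this order.
    id = Column(Integer, primary_key=True, autoincrement=True)
    # External identifier exposed by the API; the unique index keeps it
    # consistent without making it the physical primary key.
    uuid = Column(String(36), unique=True, nullable=False,
                  default=lambda: str(uuid.uuid4()))
    name = Column(String(255))


engine = create_engine('sqlite://')  # in-memory stand-in for MySQL
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

c = Container(name='demo')
session.add(c)
session.commit()
print(c.id)    # small integer, used internally and in foreign keys
print(c.uuid)  # 36-char uuid, the API-facing identifier
```

Lookups by the external id then go through the unique index (`session.query(Container).filter_by(uuid=...)`), while joins and foreign keys use the compact integer.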





cheers,





Re: [openstack-dev] [Zun]Use 'uuid' instead of 'id' as object ident in data model

2017-04-06 Thread Mike Bayer



On 04/05/2017 11:00 AM, Monty Taylor wrote:

On 04/05/2017 09:39 AM, Akihiro Motoki wrote:

I noticed this thread by Monty's reply. Sorry for my late reply :(

I think we need to think 'id' separately for API modeling and DB
modeling.

From the API perspective, one of the important things is that 'id' is
not predictable and rarely conflicts. From this perspective, UUID works.

From the DB perspective, the context is different.
Efficiency is another important point, and
the auto-increment approach gives us good efficiency.

In most OpenStack projects, we use 'id' in the database as 'id' in the
API layer.
I am okay with using an incremental integer as 'id' in the DB, but I don't
think it is a good idea to use a predictable 'id' in the API layer.

I don't know how 'id' in the API and DB layers are related in the Zun
implementation, but I believe this is one of the important points.


Yes! Very well said. UUID is an excellent choice for the API - auto-inc is
an excellent choice for the database.


+1

with the primary key datatype, you also imply the datatype of any columns
constrained by foreign key, which themselves usually get indexed too.








2017-04-05 22:00 GMT+09:00 Monty Taylor :

On 02/21/2017 07:28 AM, gordon chung wrote:




On 21/02/17 01:28 AM, Qiming Teng wrote:


in mysql[2].


Can someone remind me of the benefits we get from an integer over a UUID
as primary key? UUID, as its name implies, is meant to be an identifier
for a resource. Why are we generating integer key values?



this ^. use UUID please. you can google why auto increment is probably
not a good idea.

from a selfish pov, as gnocchi captures data on all resources in
openstack, we store everything as a uuid anyways. even if your id
doesn't clash in zun, it has a higher chance of clashing when you
consider all the other resources from other services.

cheers,



sorry - I just caught this.

Please do NOT use uuid as a primary key in MySQL:

* UUID has 36 characters which makes it bulky.
* InnoDB stores data in PRIMARY KEY order, and all the secondary keys
also contain the PRIMARY KEY. So having a UUID as PRIMARY KEY makes the
indexes bigger, and they may no longer fit into memory.
* Inserts are random and the data is scattered.

In cases where data has a large natural key (like a uuid), it is
considered a best practice to use an auto-increment integer as the
primary key and to put a second column in the table to store the uuid,
potentially with a unique index applied to it for consistency.

That way the external identifier for things like gnocchi can still be
the UUID, but the internal id for the database can be an efficient
auto-increment primary key.





[openstack-dev] [architecture] Arch-WG, we hardly knew ye..

2017-04-06 Thread Clint Byrum
I'm going to be blunt. I'm folding the Architecture Working Group
immediately following our meeting today at 2000 UTC. We'll be using the
time to discuss continuity of the base-services proposal, and any other
draw-down necessary. After that our meetings will cease.

I had high hopes for the arch-wg, with so many joining us to discuss
things in Atlanta. But ultimately, we remain a very small group with
very limited resources, and so I don't think it's the best use of our
time to continue chasing the arch-wg.

Thanks everyone for your contributions. See you in the trenches.



[openstack-dev] [all][api] POST /api-wg/news

2017-04-06 Thread Chris Dent


Greetings OpenStack community,

In today's API-WG meeting we renamed the group twice. First to the "microversion support group". 
This is because any conversation that is even tangentially related to microversions becomes rather involved, 
leaking into metaphysics. Today involved some discussion on when a new or experimental service might like to 
turn them on (if they ever plan to do so). The advice here is to implement version discovery and versioning 
from the outset but don't start regularly bumping versions until after there has been an official release. 
There's some discussion of this topic in the interoperability guideline (linked below), which also happens to 
have been renamed (from "compatibility" to "interoperability").

The second group rename was to "Human Programming Interface Working Group" due 
to a discussion on whether to use a 404 or 400 in the case of a bad marker when doing 
pagination on a collection (guideline also linked below). 404 can be interpreted as 
strictly correct, especially from the standpoint of thinking of URIs as relatively 
strongly typed things. 400 may make more sense, however, to humans.
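To make the 400-versus-404 question concrete, here is a hedged sketch of marker-based pagination (the names and shapes are illustrative, not taken from the guideline); the disputed line is the status code chosen for an unknown marker:

```python
ITEMS = ['item-%02d' % i for i in range(10)]  # a stand-in collection

def list_items(marker=None, limit=3):
    """Return (status, body) for a paginated listing of the collection."""
    start = 0
    if marker is not None:
        if marker not in ITEMS:
            # The point of debate: 404 treats the marker as part of the
            # resource's identity (a strongly typed URI); 400 treats it
            # as a malformed request parameter, which reads better to humans.
            return 400, {'badRequest': 'invalid marker: %s' % marker}
        start = ITEMS.index(marker) + 1
    return 200, {'items': ITEMS[start:start + limit]}

print(list_items())                  # first page: items 00..02
print(list_items(marker='item-07'))  # the page after 'item-07'
print(list_items(marker='bogus'))    # the contested error case
```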

Both of these guidelines are subject to some debate, so if you have thoughts 
about them, do everyone a great favor and provide your input.

Thanks to members of the Sahara project for coming to speak with the API-WG, it 
made for some very interesting and fun discussions. We hope we were able to 
provide some useful input.

# Newly Published Guidelines

Nothing new this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week, although the pagination guideline below is close.

# Guidelines Currently Under Review [3]

* Define pagination guidelines
  https://review.openstack.org/#/c/446716/
Stuck on whether to use 400 or 404.

* Create a set of api interoperability guidelines
  https://review.openstack.org/#/c/421846/
This has been renamed to indicate its focus more clearly.

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* Mention max length limit information for tags
  https://review.openstack.org/#/c/447344/

* Recommend the correct HTTP method for tags
  https://review.openstack.org/451536
Very close but needs more eyes.

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/
On hold.

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tc] version document for project navigator

2017-04-06 Thread Jimmy McArthur
Assuming this format is accepted, do you all have any sense of when this 
data will be complete for all projects?



Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format 
works great. We can actually derive the age of the project from this 
information as well by identifying the first release that has API data 
for a particular project. I'm indifferent about where it lives, so I'd 
defer to you all to determine the best spot.


I really appreciate you all putting this together!

Jimmy


Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made elsewhere:

This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any information we
need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to collect
it from projects.

That way there is a clear place to go to to propose fixes to the project
navigator data. Not knowing how to fix that data is a common complaint,
so if we can point people to a git repo (and redirect people from there
to the places where other bits of information happen to live) that would
be great.

Monty Taylor 
April 4, 2017 at 5:47 PM
Hey all,

As per our discussion in today's TC meeting, I have made a document 
format for reporting versions to the project navigator. I stuck it in 
the releases repo:


  https://review.openstack.org/453361

Because there was already per-release information there, and the 
governance repo did not have that structure.


I've included pseudo-code and a human explanation of how to get from a 
service's version discovery document to the data in this document, but 
also how it can be maintained- which is likely to be easier by hand 
than by automation - but who knows, maybe we decide we want to make a 
devstack job for each service that runs on tag events that submits a 
patch to the releases repo. That sounds like WAY more work than once a 
cycle someone adding a few lines of json to a repo - but *shrug*.


Basing it on the version discovery docs shows a few things:

* "As a user, I want to consume an OpenStack Service's Discovery 
Document" is a thing people might want to do and want to do 
consistently across services.


* We're not that far off from being able to do that today.

* Still, like we are in many places, we're randomly different in a few 
minor ways that do not actually matter but make life harder for our 
users.


Thoughts and feedback more than welcome!
Monty
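As a hedged sketch (not the actual pseudo-code from the review) of what consuming a version discovery document can look like, assuming the common shape where a service returns a list of versions each carrying an id and a status — with one of the "randomly different in minor ways" variations handled:

```python
import json

def version_statuses(doc):
    """Map version id -> status from a service's version discovery document."""
    data = json.loads(doc)
    versions = data.get('versions', [])
    # Some services nest the list one level deeper, e.g.
    # {"versions": {"values": [...]}} - one of those minor inconsistencies.
    if isinstance(versions, dict):
        versions = versions.get('values', [])
    return {v['id']: v.get('status', 'UNKNOWN') for v in versions}

sample = json.dumps({'versions': [
    {'id': 'v2.0', 'status': 'SUPPORTED'},
    {'id': 'v2.1', 'status': 'CURRENT'},
]})
print(version_statuses(sample))  # {'v2.0': 'SUPPORTED', 'v2.1': 'CURRENT'}
```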



Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Honza Pokorny
+1

On 2017-04-06 11:53, Martin André wrote:
> Hellooo,
> 
> I'd like to propose we extend Florian Fuchs +2 powers to the
> tripleo-validations project. Florian is already core on tripleo-ui
> (well, tripleo technically, so this means there are no changes to make
> to gerrit groups).
> 
> Florian took over many of the stalled patches in tripleo-validations
> and is now the principal contributor to the project [1]. He has built
> up good expertise over the last few months, and I think it's time he
> officially had the right to approve changes in tripleo-validations.
> 
> Consider this my +1 vote.
> 
> Martin
> 
> [1] 
> http://stackalytics.com/?module=tripleo-validations=patches=pike
> 


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Tomas Sedovic

On 04/06/2017 11:53 AM, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, tripleo technically, so this means there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor to the project [1]. He has built
up good expertise over the last few months, and I think it's time he
officially had the right to approve changes in tripleo-validations.

Consider this my +1 vote.


Yes please! +1 from me.



Martin

[1] 
http://stackalytics.com/?module=tripleo-validations=patches=pike



Re: [openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread Matt Riedemann

On 4/6/2017 10:05 AM, Jay Pipes wrote:

jaypipes@serialcoder:~/src/git.openstack.org/openstack/nova/nova$
ack-grep --ignore-dir tests --ignore-dir locale "volume_api.create"
compute/api.py
2982:snapshot = self.volume_api.create_snapshot_force(

api/openstack/compute/volumes.py
185:new_volume = self.volume_api.create(
582:create_func = self.volume_api.create_snapshot_force
584:create_func = self.volume_api.create_snapshot

virt/block_device.py
63::returns: The availability_zone value to pass to volume_api.create
487:vol = volume_api.create(context, self.volume_size, '',
'',
508:vol = volume_api.create(context, self.volume_size,
530:vol = volume_api.create(context, self.volume_size,
vol_name, '',


Best,
-jay

On 04/06/2017 03:29 AM, 李田清 wrote:

Hello,
If we use nova boot from an image and a created volume, I think nova
will use volume/cinder.py:create to create the volume. But after
inserting pdb, I could not find the specific line of code that calls
create. Can someone help me point out that line? Thanks a lot.




For the case you're looking for, the volume is created here:

https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L508

That happens when the compute manager is preparing block devices to 
attach to the instance while building it on the compute host.


After the volume is created and nova gets the volume ID back, we wait 
for it to be available before we can attach it:


https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L512

Which uses this function passed in from the compute manager (yes, it's 
gross tight coupling):


https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/compute/manager.py#L1221

And finally it attaches the volume to the instance via a call to the 
parent class:


https://github.com/openstack/nova/blob/8d9bf947a4c8654a30e016a5d95d9bec55447506/nova/virt/block_device.py#L516
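The create-then-poll-then-attach sequence Matt walks through can be sketched like this. This is a toy stand-in with hypothetical names — the real logic lives in the nova files linked above, where the wait function is passed in from the compute manager:

```python
import time

class FakeVolumeAPI:
    """Toy stand-in for nova's cinder API wrapper; 'available' after N polls."""
    def __init__(self, ticks_until_ready=2):
        self._ticks = ticks_until_ready
    def create(self, size_gb):
        return {'id': 'vol-1', 'status': 'creating'}
    def get(self, volume_id):
        self._ticks -= 1
        status = 'available' if self._ticks <= 0 else 'creating'
        return {'id': volume_id, 'status': status}

def prepare_boot_volume(volume_api, size_gb, poll_interval=0.01, max_polls=50):
    # 1. Create the volume (the block_device.py create call).
    vol = volume_api.create(size_gb)
    # 2. Wait for it to become available (the wait function from the manager).
    for _ in range(max_polls):
        if volume_api.get(vol['id'])['status'] == 'available':
            break
        time.sleep(poll_interval)
    else:
        raise RuntimeError('volume %s never became available' % vol['id'])
    # 3. The volume can now be attached to the instance being built.
    return vol['id']

print(prepare_boot_volume(FakeVolumeAPI(), 1))  # vol-1
```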

I hope this helps. It only took me 4 years to memorize the flow. :)

--

Thanks,

Matt



[openstack-dev] [fuel] Weekly meeting April 6th is cancelled

2017-04-06 Thread Vladimir Kuklin
Fuelers

Agenda is empty for today, so the meeting is cancelled.

-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com


Re: [openstack-dev] [oslo][requirements][designate][cinder] Reverting eventlet version bump

2017-04-06 Thread Graham Hayes

On 04/04/17 11:11 -0700, Clint Byrum wrote:

Excerpts from Monty Taylor's message of 2017-04-04 12:19:36 -0500:

On 04/04/2017 09:19 AM, Jay S Bryant wrote:
> Monty,
>
> I agree with your approach.  Think we should not break other projects
> with a change like this and the answer thus far has been to just patch
> each project individually to work around the issues introduced by
> eventlet.  So, I support this approach and think Sean would as well.

To follow up with everyone - dims and I spent a bit of time this morning
testing combinations of things, and it seems that master of eventlet
actually also does not fix things - there are some issues that will just
need to be figured out.

The suggested path forward at the moment is to pull designate out of the
requirements-sync process so it can stay pinned at 0.19 while the issues
are sorted out. A patch to bump designate back to 0.20.1 can accompany
the patch to fix it for 0.20.1 - and can be done as people have time,
rather than in a rush.



Has anyone made any attempt to eliminate eventlet from Designate? The
fewer places it's used, the fewer problems OpenStack seems to have, IMO.
But I can see on cursory examination that it has quite a few services,
so perhaps it's just too entrenched?



We looked at this previously, but due to a lack of developer time, we
did not have the resources.

Personally, I would love to see it gone, but the replacement would need
to be written, and maintained long term - which we are not in a
position to do right now.

- Graham




Re: [openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread Jay Pipes

jaypipes@serialcoder:~/src/git.openstack.org/openstack/nova/nova$ ack-grep --ignore-dir 
tests --ignore-dir locale "volume_api.create"
compute/api.py
2982:snapshot = self.volume_api.create_snapshot_force(

api/openstack/compute/volumes.py
185:new_volume = self.volume_api.create(
582:create_func = self.volume_api.create_snapshot_force
584:create_func = self.volume_api.create_snapshot

virt/block_device.py
63::returns: The availability_zone value to pass to volume_api.create
487:vol = volume_api.create(context, self.volume_size, '', '',
508:vol = volume_api.create(context, self.volume_size,
530:vol = volume_api.create(context, self.volume_size, vol_name, '',


Best,
-jay

On 04/06/2017 03:29 AM, 李田清 wrote:

Hello,
If we use nova boot from an image and a created volume, I think nova will
use volume/cinder.py:create to create the volume. But after inserting pdb,
I could not find the specific line of code that calls create. Can someone
help me point out that line? Thanks a lot.




Re: [openstack-dev] [tripleo] Consistent way of integrating services with different backends

2017-04-06 Thread Peng Liu
I think option 1) is the better choice, since:
1. The API and agent talk through RPC, so it is unnecessary to have both
the API and agent services on the same node. Separate composable services
are more appropriate here.
2. The l2gw_plugin.ini file suggests that service_provider can have more
than one line, which means the ODL and OVS agent backends might coexist.

On Thu, Apr 6, 2017 at 9:15 PM, Giulio Fidente  wrote:

> On Thu, 2017-04-06 at 13:07 +0200, Ricardo Noriega De Soto wrote:
> > Hi owls!
> >
> > This is something that I've been discussing in the IRC channel but
> > still I
> > think we should define a consistent way of integrating services which
> > support different backends. In this case, I'm refering to BGPVPN and
> > L2GW
> > Neutron services, but it could be applied to any other type of
> > service.
>
> yes indeed there is a similar issue with the storage services and their
> supported backends
>
> > These two Neutron service plugins support different backends such an
> > agent
> > and a SDN controller (OpenDaylight). Usually the reference
> > architecture
> > will use the agents.
> >
> > My main question is about how to model this into THT properly.
> >
> > It is clear that we have to create two different composable services:
> > one
> > for the API and one for the agent. However, how many environment
> > files we
> > should have and which should be the content?
>
> currently for cinder we use a tht service for each backend; multiple
> backends can be enabled at the same time; having multiple instances of
> the same backend is a bit trickier and requires some yaml editing
>
> > i.e. L2GW project
> >
> > Option 1:
> >
> >- neutron-l2gw-api.yaml enabling the corresponding API composable
> >service.
> >- neutron-l2gw-agent.yaml enabling the corresponding agent
> > composable
> >service.
> >
> > openstack overcloud deploy -e neutron-l2gw-api.yaml -e
> >  neutron-l2gw-agent.yaml (with agent)
> > openstack overcloud deploy -e neutron-l2gw-api.yaml -e
> >  neutron-opendaylight-l3.yaml (with ODL)
> >
> > Option 2:
> >
> >- neutron-l2gw.yaml enabling the API and the agent as a reference
> >architecture scenario.
> >- neutron-l2gw-odl.yaml enabling the API with OpenDaylight as
> >service_provider
> >
> > openstack overcloud deploy -e neutron-l2gw.yaml (with agent)
> > openstack overcloud deploy -e neutron-l2gw-odl.yaml -e
> >  neutron-opendaylight-l3.yaml (with ODL)
> >
> >
> > I'm not really pushing for any option, but I'm just concerned from the
> > user experience point of view. As a user, which way is more friendly or
> > understandable? Where in the documentation is this reflected?
>
> I am not sure there is a single answer; option 2) seems to me more user
> friendly and easier to consume in the UI
>
> Yet when working on the integration of CephMDS and the Manila/CephFS
> backend, we decided to use two different environment files, one to
> enable CephMDS and one to enable the CephFS backend in Manila. This was
> so that operators could deploy CephMDS without Manila, to provide
> CephFS to the overcloud or connect the Manila/CephFS backend to an
> external unmanaged Ceph cluster and use only one or the other
> environment file.
>
> My conclusion: if there aren't reasons to deploy the two services
> independently, I'd probably go with option 2); if there are reasons to
> deploy only one of them, option 1) is the only one which seems to allow
> that while avoiding yaml edits for the users.
>
>



-- 
Peng Liu | Senior Software Engineer

Tel: +86 10 62608046 (direct)
Mobile: +86 13801193245

Red Hat Software (Beijing) Co., Ltd.
9/F, North Tower C,
Raycom Infotech Park,
No.2 Kexueyuan Nanlu, Haidian District,
Beijing, China, POC 100190


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Michele Baldessari
+1

On Thu, Apr 06, 2017 at 11:53:04AM +0200, Martin André wrote:
> Hellooo,
> 
> I'd like to propose we extend Florian Fuchs +2 powers to the
> tripleo-validations project. Florian is already core on tripleo-ui
> (well, tripleo technically so this means there is no changes to make
> to gerrit groups).
> 
> Florian took over many of the stalled patches in tripleo-validations
> and is now the principal contributor in the project [1]. He has built
> a good expertise over the last months and I think it's time he has
> officially the right to approve changes in tripleo-validations.
> 
> Consider this my +1 vote.
> 
> Martin
> 
> [1] 
> http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike
> 

-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D



Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Flavio Percoco

On 06/04/17 12:32 +0300, Sagi Shnaidman wrote:

HI,

I think Rally or Browbeat and other performance oriented solutions won't
serve our needs, because we run TripleO CI on a virtualized environment with
very limited resources. Actually we are pretty close to fully utilizing
these resources when deploying OpenStack, so very little is left for tests.
It's not a problem to run tempest API tests because they are cheap - they
take little time and few resources, but they also give little coverage.
Scenario tests are more interesting and give us more coverage, but they also
take a lot of resources (which we sometimes don't have).

It may be useful to run a "limited edition" of API tests that maximizes
coverage and doesn't duplicate effort, for example just to check that each
service works at a basic level, without covering all of its functionality.
It would take very little time (e.g. 5 tests for each service) and would
give a general picture of deployment success. It would also cover areas that
are not covered by pingtest.


Without much of the history you guys have, I think this makes sense. This will
also increase consistency throughout CI environments in OpenStack. I like the
idea of starting with a small set of tests from tempest and eventually
increasing the number of tests executed if needed.

Flavio


I think it could be an option to develop special scenario tempest tests for
TripleO which would fit our needs.

Thanks


On Wed, Apr 5, 2017 at 11:49 PM, Emilien Macchi  wrote:


Greetings dear owls,

I would like to bring back an old topic: running tempest in the gate.

== Context

Right now, TripleO gate is running something called pingtest to
validate that the OpenStack cloud is working. It's a Heat stack that
deploys a Nova server, some volumes, a glance image, a neutron network
and sometimes a little bit more.
To deploy the pingtest, you obviously need Heat deployed in your overcloud.
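For readers who haven't seen it, such a validation stack boils down to something roughly like the following sketch. This is not the actual tripleo-ci template; the resource names, image and flavor are made up for illustration:

```yaml
# Hypothetical sketch of a pingtest-style validation stack -- NOT the
# real tripleo-ci template. Names and properties are illustrative only.
heat_template_version: 2016-10-14

description: Minimal sanity stack - a network, a volume and a server.

resources:
  test_net:
    type: OS::Neutron::Net

  test_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: test_net}
      cidr: 192.168.2.0/24

  test_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

  test_server:
    type: OS::Nova::Server
    properties:
      image: pingtest_image   # assumed to be uploaded to Glance beforehand
      flavor: m1.tiny
      networks:
        - network: {get_resource: test_net}
```

If the stack reaches CREATE_COMPLETE, the Nova/Neutron/Cinder/Glance control plane is at least minimally functional, which is essentially what pingtest asserts.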

== Problems:

Although pingtest has been very helpful over the last years:
- easy to understand: it's a Heat template, like what an OpenStack user
would write to deploy their apps.
- fast: the stack takes a few minutes to be created and validated

It has some limitations:
- Limitation to what Heat resources support (example: some OpenStack
resources can't be managed from Heat)
- Impossible to run a dynamic workflow (test a live migration for example)

== Solutions

1) Switch pingtest to Tempest run on some specific tests, with feature
parity of what we had with pingtest.
For example, we could imagine to run the scenarios that deploys VM and
boot from volume. It would test the same thing as pingtest (details
can be discussed here).
Each scenario would run more tests depending on the service that they
run (scenario001 is telemetry, so it would run some tempest tests for
Ceilometer, Aodh, Gnocchi, etc).
We should work at making the tempest run as short as possible, and as
close as possible to what we have with the pingtest.

2) Run custom scripts in TripleO CI tooling, called from the pingtest
(heat template), that would run some validations commands (API calls,
etc).
It has been investigated in the past but never implemented AFAIK.

3) ?

I tried to make this text short and go straight to the point, please
bring feedback now. I hope we can make progress on $topic during Pike,
so we can increase our testing coverage and detect deployment issues
sooner.

Thanks,
--
Emilien Macchi






--
Best regards
Sagi Shnaidman






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Emilien Macchi
I tripleo-validate that!

OK the joke was poor.

+1 otherwise :-) Nice contributions from Florian, fully deserved.

On Thu, Apr 6, 2017 at 10:18 AM, Jason E. Rist  wrote:
> On 04/06/2017 07:00 AM, Jiri Tomasek wrote:
>> +1
>>
>> On Thu, Apr 6, 2017 at 12:56 PM, Julie Pichon  wrote:
>>
>> > On 6 April 2017 at 10:53, Martin André  wrote:
>> >> Hellooo,
>> >>
>> >> I'd like to propose we extend Florian Fuchs +2 powers to the
>> >> tripleo-validations project. Florian is already core on tripleo-ui
>> >> (well, tripleo technically so this means there is no changes to make
>> >> to gerrit groups).
>> >>
>> >> Florian took over many of the stalled patches in tripleo-validations
>> >> and is now the principal contributor in the project [1]. He has built
>> >> a good expertise over the last months and I think it's time he has
>> >> officially the right to approve changes in tripleo-validations.
>> >>
>> >> Consider this my +1 vote.
>> >>
>> >> Martin
>> >>
>> >> [1] http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike
>> >>
>> >
>> > +1!
>> >
>> >
>>
>>
>>
>>
> yooge +1
>



-- 
Emilien Macchi



[openstack-dev] [cinder][nova] Cinder-Nova API meeting time slot change

2017-04-06 Thread Ildiko Vancsa
Hi All,

As of __today__ the Cinder-Nova API interactions meeting has a new time slot, 
__1600 UTC__.

The meeting channel is the same: __#openstack-meeting-cp__.

The patch [1] to change the slot officially is still under review with no 
conflicts.

See you soon!

Thanks and Best Regards,
Ildikó

[1] https://review.openstack.org/#/c/453199/ 


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Jason E. Rist
On 04/06/2017 07:00 AM, Jiri Tomasek wrote:
> +1
>
> On Thu, Apr 6, 2017 at 12:56 PM, Julie Pichon  wrote:
>
> > On 6 April 2017 at 10:53, Martin André  wrote:
> >> Hellooo,
> >>
> >> I'd like to propose we extend Florian Fuchs +2 powers to the
> >> tripleo-validations project. Florian is already core on tripleo-ui
> >> (well, tripleo technically so this means there is no changes to make
> >> to gerrit groups).
> >>
> >> Florian took over many of the stalled patches in tripleo-validations
> >> and is now the principal contributor in the project [1]. He has built
> >> a good expertise over the last months and I think it's time he has
> >> officially the right to approve changes in tripleo-validations.
> >>
> >> Consider this my +1 vote.
> >>
> >> Martin
> >>
> >> [1] http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike
> >>
> >
> > +1!
> >
> >
>
>
>
>
yooge +1



Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Assaf Muller
On Wed, Apr 5, 2017 at 4:49 PM, Emilien Macchi  wrote:

> Greetings dear owls,
>
> I would like to bring back an old topic: running tempest in the gate.
>
> == Context
>
> Right now, TripleO gate is running something called pingtest to
> validate that the OpenStack cloud is working. It's a Heat stack that
> deploys a Nova server, some volumes, a glance image, a neutron network
> and sometimes a little bit more.
> To deploy the pingtest, you obviously need Heat deployed in your overcloud.
>
> == Problems:
>
> Although pingtest has been very helpful over the last years:
> - easy to understand: it's a Heat template, like what an OpenStack user
> would write to deploy their apps.
> - fast: the stack takes a few minutes to be created and validated
>
> It has some limitations:
> - Limitation to what Heat resources support (example: some OpenStack
> resources can't be managed from Heat)
> - Impossible to run a dynamic workflow (test a live migration for example)
>

Another limitation which is obvious but I think still worth mentioning is
that Tempest has much better coverage than the pingtest. Security groups,
for example, have been accidentally disabled in every TripleO version as of
late. That lets regressions such as [1] slip in. We wouldn't have that
problem if we switched to Tempest and selected an intelligent subset of
tests to run.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1411690


>
> == Solutions
>
> 1) Switch pingtest to Tempest run on some specific tests, with feature
> parity of what we had with pingtest.
> For example, we could imagine to run the scenarios that deploys VM and
> boot from volume. It would test the same thing as pingtest (details
> can be discussed here).
> Each scenario would run more tests depending on the service that they
> run (scenario001 is telemetry, so it would run some tempest tests for
> Ceilometer, Aodh, Gnocchi, etc).
> We should work at making the tempest run as short as possible, and as
> close as possible to what we have with the pingtest.
>
> 2) Run custom scripts in TripleO CI tooling, called from the pingtest
> (heat template), that would run some validations commands (API calls,
> etc).
> It has been investigated in the past but never implemented AFAIK.
>
> 3) ?
>
> I tried to make this text short and go straight to the point, please
> bring feedback now. I hope we can make progress on $topic during Pike,
> so we can increase our testing coverage and detect deployment issues
> sooner.
>
> Thanks,
> --
> Emilien Macchi
>
>


Re: [openstack-dev] [tripleo] Consistent way of integrating services with different backends

2017-04-06 Thread Giulio Fidente
On Thu, 2017-04-06 at 13:07 +0200, Ricardo Noriega De Soto wrote:
> Hi owls!
> 
> This is something that I've been discussing in the IRC channel but
> still I
> think we should define a consistent way of integrating services which
> support different backends. In this case, I'm referring to BGPVPN and
> L2GW
> Neutron services, but it could be applied to any other type of
> service.

yes indeed, there is a similar issue with the storage services and their
supported backends

> These two Neutron service plugins support different backends such as an
> agent and an SDN controller (OpenDaylight). Usually the reference
> architecture
> will use the agents.
> 
> My main question is about how to model this into THT properly.
> 
> It is clear that we have to create two different composable services: one
> for the API and one for the agent. However, how many environment files
> should we have and what should their content be?

currently for cinder we use a tht service for each backend; multiple
backends can be enabled at the same time; having multiple instances of
the same backend is a bit trickier and requires some yaml editing
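For readers less familiar with THT, "a tht service for each backend" typically means a small environment file per backend that maps the composable service into the resource registry. A hypothetical sketch (the service and file names here are made up; the real cinder backend environments live in tripleo-heat-templates):

```yaml
# Hypothetical per-backend environment file sketch -- names are
# illustrative, not an actual tripleo-heat-templates environment.
resource_registry:
  OS::TripleO::Services::CinderBackendFoo: ../puppet/services/cinder-backend-foo.yaml

parameter_defaults:
  CinderEnableFooBackend: true
```

Enabling several backends is then a matter of passing several such files with `-e`; deploying two instances of the *same* backend is where the yaml editing mentioned above comes in.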

> i.e. L2GW project
> 
> Option 1:
> 
>    - neutron-l2gw-api.yaml enabling the corresponding API composable
>    service.
>    - neutron-l2gw-agent.yaml enabling the corresponding agent
> composable
>    service.
> 
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
>  neutron-l2gw-agent.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
>  neutron-opendaylight-l3.yaml (with ODL)
> 
> Option 2:
> 
>    - neutron-l2gw.yaml enabling the API and the agent as a reference
>    architecture scenario.
>    - neutron-l2gw-odl.yaml enabling the API with OpenDaylight as
>    service_provider
> 
> openstack overcloud deploy -e neutron-l2gw.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-odl.yaml -e
>  neutron-opendaylight-l3.yaml (with ODL)
> 
> 
> I'm not really pushing for any option, but I'm just concerned from the
> user experience point of view. As a user, which way is more friendly or
> understandable? Where in the documentation is this reflected?

I am not sure there is a single answer; option 2) seems to me more user
friendly and easier to consume in the UI

Yet when working on the integration of CephMDS and the Manila/CephFS
backend, we decided to use two different environment files, one to
enable CephMDS and one to enable the CephFS backend in Manila. This was
so that operators could deploy CephMDS without Manila, to provide
CephFS to the overcloud or connect the Manila/CephFS backend to an
external unmanaged Ceph cluster and use only one or the other
environment file.

My conclusion: if there aren't reasons to deploy the two services
independently, I'd probably go with option 2); if there are reasons to
deploy only one of them, option 1) is the only one which seems to allow
that while sparing users any yaml edits.



Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Jiri Tomasek
+1

On Thu, Apr 6, 2017 at 12:56 PM, Julie Pichon  wrote:

> On 6 April 2017 at 10:53, Martin André  wrote:
> > Hellooo,
> >
> > I'd like to propose we extend Florian Fuchs +2 powers to the
> > tripleo-validations project. Florian is already core on tripleo-ui
> > (well, tripleo technically so this means there is no changes to make
> > to gerrit groups).
> >
> > Florian took over many of the stalled patches in tripleo-validations
> > and is now the principal contributor in the project [1]. He has built
> > a good expertise over the last months and I think it's time he has
> > officially the right to approve changes in tripleo-validations.
> >
> > Consider this my +1 vote.
> >
> > Martin
> >
> > [1] http://stackalytics.com/?module=tripleo-validations;
> metric=patches=pike
> >
>
> +1!
>
>


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Arx Cruz
I would say smoke tests, and at least the minimum scenario tests.
Smoke tests take 14 minutes (113 tests) to run, and I can check later how
long the minimum scenario tests take. So it won't take long to run.
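For reference, selecting such a subset is straightforward with the `tempest run` CLI. A hedged sketch (flag availability varies by tempest version, and this obviously requires a deployed cloud and a configured tempest workspace, so it is illustrative rather than directly runnable):

```shell
# Create a tempest workspace (then point etc/tempest.conf at the overcloud).
tempest init cloud-sanity
cd cloud-sanity

# Run only the tests tagged as smoke, with limited parallelism to fit
# the constrained CI resources discussed in this thread.
tempest run --smoke --concurrency 2

# Alternatively, keep a curated, version-controlled list of test regexes
# (one per line) and run just those. The file path here is hypothetical.
tempest run --whitelist-file ~/tripleo-ci-tests.txt
```

Either approach produces subunit output, which is what openstack-health consumes.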

Kind regards,
Arx Cruz

On Thu, Apr 6, 2017 at 2:44 PM, Andrea Frittoli 
wrote:

> I don't really have much context in what the decision is going to be based
> on here,
> so I'll just add some random comments here and there.
>
> On Thu, Apr 6, 2017 at 12:48 PM Arx Cruz  wrote:
>
>> Having tempest running will allow these jobs to appear in
>> openstack-health system as well.
>>
>
> I agree that's a plus. It's also rather easy to produce subunit from
> whatever you are using to run tests, and that's all you need in fact to
> get data into openstack-health without touching the existing
> infrastructure. So in case you decide not to use Tempest,
> openstack-health can still be on the list.
>
>
>>
>> On Thu, Apr 6, 2017 at 1:29 PM, Justin Kilpatrick 
>> wrote:
>>
>> Maybe I'm getting a little off topic with this question, but why was
>> Tempest removed last time?
>>
>> I'm not well versed in the history of this discussion, but from what I
>> understand Tempest in the gate has
>> been an off and on again thing for a while but I've never heard the
>> story of why it got removed.
>>
>> On Thu, Apr 6, 2017 at 7:00 AM, Chris Dent 
>> wrote:
>> > On Thu, 6 Apr 2017, Sagi Shnaidman wrote:
>> >
>> >> It may be useful to run a "limited edition" of API tests that maximize
>> >> coverage and don't duplicate, for example just to check service working
>> >> basically, without covering all its functionality. It will take very
>> >> little
>> >> time (i.e. 5 tests for each service) and will give a general picture of
>> >> deployment success. It will cover fields that are not covered by
>> pingtest
>> >> as well.
>>
>> >
>>
>>
> We have a smoke attribute here and there, but it's not well curated at
> all, so you're probably better off maintaining your own list.
> Since presumably you're more interested in verifying that a deployed
> cloud is functional - as opposed to verifying that specific APIs are
> working properly - you may want to look at scenario tests, where with a
> couple of tests you can already cover a lot of basic stuff, e.g. if you
> can boot a server from a volume with an image from glance, and ssh into
> it, you have proven a lot already about the general health of your cloud.
>
>
>> >
>> > It sounds like using some parts of tempest is perhaps the desired
>> > thing here but in case a "limited edition" test against the APIs to
>> > do what amounts to a smoke test is desired, it might be worthwhile
>> > to investigate using gabbi[1] and its command line gabbi-run[2] tool for
>> > some fairly simple and readable tests that can describe a sequence
>> > of API interactions. There are lots of tools that can do the same
>> > thing, so gabbi may not be the right choice but it's there as an
>> > option.
>> >
>> > The telemetry group had (and may still have) some integration tests
>> > that use gabbi files to integrate ceilometer, heat (starting some
>> > vms), aodh and gnocchi and confirm that the expected flow happened.
>> > Since the earlier raw scripts I think there's been some integration
>> > with tempest, but gabbi files are still used[3].
>> >
>> > If this might be useful and I can help out, please ask.
>> >
>> > [1] http://gabbi.readthedocs.io/
>> > [2] http://gabbi.readthedocs.io/en/latest/runner.html
>> > [3]
>> > https://github.com/openstack/ceilometer/tree/master/
>> ceilometer/tests/integration
>> >
>> > --
>> > Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
>> > freenode: cdent tw: @anticdent
>> > 

Re: [openstack-dev] [tripleo] Consistent way of integrating services with different backends

2017-04-06 Thread Carlos Camacho Gonzalez
I think the question here is whether we want to define environment files as
a whole ("with all the information required to deploy something functional")
or compose them like LEGO pieces (which might not work separately).

I don't have a strong opinion about it, but I'm inclined to use them as
functional blocks, i.e. if you want to deploy the pacemaker profiles you
just have to include puppet-pacemaker.yaml and that's all.

What IMHO would do is:

* -e neutron-l2gw-basic.yaml → Will install the API and the agent.
* -e neutron-l2gw-odl.yaml → Will install the API and ODL.

The API is a common factor here so we can add it into all envs using it.
Also, I think it will be easier from a user POV as they won't have to master
how to compose their environment files to have it running.

What do you think about it?

Cheers,
Carlos.




On Thu, Apr 6, 2017 at 1:07 PM, Ricardo Noriega De Soto 
wrote:
>
> Hi owls!
>
> This is something that I've been discussing in the IRC channel but still
I think we should define a consistent way of integrating services which
support different backends. In this case, I'm referring to BGPVPN and L2GW
Neutron services, but it could be applied to any other type of service.
>
> These two Neutron service plugins support different backends such as an
agent and an SDN controller (OpenDaylight). Usually the reference
architecture will use the agents.
>
> My main question is about how to model this into THT properly.
>
> It is clear that we have to create two different composable services: one
for the API and one for the agent. However, how many environment files
should we have and what should their content be?
>
> i.e. L2GW project
>
> Option 1:
>
> neutron-l2gw-api.yaml enabling the corresponding API composable service.
> neutron-l2gw-agent.yaml enabling the corresponding agent composable
service.
>
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
 neutron-l2gw-agent.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-api.yaml -e
 neutron-opendaylight-l3.yaml (with ODL)
>
> Option 2:
>
> neutron-l2gw.yaml enabling the API and the agent as a reference
architecture scenario.
> neutron-l2gw-odl.yaml enabling the API with OpenDaylight as
service_provider
>
> openstack overcloud deploy -e neutron-l2gw.yaml (with agent)
> openstack overcloud deploy -e neutron-l2gw-odl.yaml -e
 neutron-opendaylight-l3.yaml (with ODL)
>
>
> I'm not really pushing for any option, but I'm just concerned from the user
experience point of view. As a user, which way is more friendly or
understandable? Where in the documentation is this reflected?
>
>
> Some pointers of the L2GW service and agent services:
>
> https://review.openstack.org/#/c/447429/
> https://review.openstack.org/#/c/451175/
>
> Cheers
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
> irc: rnoriega @freenode
>
>
>


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Andrea Frittoli
I don't really have much context on what the decision is going to be based
here, so I'll just add some random comments here and there.

On Thu, Apr 6, 2017 at 12:48 PM Arx Cruz  wrote:

> Having tempest running will allow these jobs to appear in openstack-health
> system as well.
>

I agree that's a plus. It's also rather easy to produce subunit from
whatever you are using to run tests, and that's all you need in fact to get
data into openstack-health without touching the existing infrastructure. So
in case you decide not to use Tempest, openstack-health can still be on the
list.


>
> On Thu, Apr 6, 2017 at 1:29 PM, Justin Kilpatrick 
> wrote:
>
> Maybe I'm getting a little off topic with this question, but why was
> Tempest removed last time?
>
> I'm not well versed in the history of this discussion, but from what I
> understand Tempest in the gate has
> been an off and on again thing for a while but I've never heard the
> story of why it got removed.
>
> On Thu, Apr 6, 2017 at 7:00 AM, Chris Dent  wrote:
> > On Thu, 6 Apr 2017, Sagi Shnaidman wrote:
> >
> >> It may be useful to run a "limited edition" of API tests that maximize
> >> coverage and don't duplicate, for example just to check service working
> >> basically, without covering all its functionality. It will take very
> >> little
> >> time (i.e. 5 tests for each service) and will give a general picture of
> >> deployment success. It will cover fields that are not covered by
> pingtest
> >> as well.
>
> >
>
>
We have a smoke attribute here and there, but it's not well curated at all,
so you're probably better off maintaining your own list.
Since presumably you're more interested in verifying that a deployed cloud
is functional - as opposed to verifying that specific APIs are working
properly - you may want to look at scenario tests, where with a couple of
tests you can already cover a lot of basic stuff, e.g. if you can boot a
server from a volume with an image from glance, and ssh into it, you have
proven a lot already about the general health of your cloud.


> >
> > It sounds like using some parts of tempest is perhaps the desired
> > thing here but in case a "limited edition" test against the APIs to
> > do what amounts to a smoke test is desired, it might be worthwhile
> > to investigate using gabbi[1] and its command line gabbi-run[2] tool for
> > some fairly simple and readable tests that can describe a sequence
> > of API interactions. There are lots of tools that can do the same
> > thing, so gabbi may not be the right choice but it's there as an
> > option.
> >
> > The telemetry group had (and may still have) some integration tests
> > that use gabbi files to integrate ceilometer, heat (starting some
> > vms), aodh and gnocchi and confirm that the expected flow happened.
> > Since the earlier raw scripts I think there's been some integration
> > with tempest, but gabbi files are still used[3].
> >
> > If this might be useful and I can help out, please ask.
> >
> > [1] http://gabbi.readthedocs.io/
> > [2] http://gabbi.readthedocs.io/en/latest/runner.html
> > [3]
> >
> https://github.com/openstack/ceilometer/tree/master/ceilometer/tests/integration
> >
> > --
> > Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
> > freenode: cdent tw: @anticdent
> >
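The gabbi approach Chris describes above would look something like this — a hedged sketch of a gabbi YAML file fed to `gabbi-run` with the target host as argument; the endpoint, status codes and response fields here are illustrative, not taken from any real deployment:

```yaml
# Hypothetical gabbi smoke-check file, run roughly as:
#   gabbi-run http://keystone.example.com:5000 < this-file.yaml
# Paths and JSON paths below are made-up examples.
tests:
  - name: identity root lists API versions
    GET: /
    # Keystone's unversioned root traditionally returns 300 Multiple Choices.
    status: 300
    response_json_paths:
      $.versions.values[0].status: stable

  - name: a bogus path is rejected
    GET: /no-such-endpoint
    status: 404
```

Each test is a single HTTP request with declarative assertions, which keeps such a "limited edition" API suite very readable and cheap to run.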


[openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-06 Thread Monty Taylor

Hey all!

I've started the R Release Name poll and currently am submitting 
everyone's email address to the system. In order to not make our fine 
friends at Carnegie Mellon (the folks who run the CIVS voting service) 
upset, I have a script that submits the emails one at a time with a 
half-second delay between each email. That means, since there are 40k
people to process, it'll take at best ~6 hours for them all to go out.


Which is to say - emails are on their way - but if you haven't gotten 
yours yet, that's fine. I'll send another email when they've all gone 
out, so don't worry about not receiving one until I've sent that mail.


Monty



Re: [openstack-dev] [tripleo] Roadmap for Container CI work

2017-04-06 Thread Jiří Stránský

On 6.4.2017 12:46, Jiří Stránský wrote:

On 4.4.2017 22:01, Emilien Macchi wrote:

After our weekly meeting of today, I found it useful to share and discuss
our roadmap for Container CI jobs in TripleO.
They are ordered by priority from the highest to lowest:

1. Swap ovb-nonha job with ovb-containers, enable introspection on the
container job and shuffle other coverage (e.g ssl) to other jobs
(HA?). It will help us to get coverage for ovb-containers scenario
again, without consuming more rh1 resources and keep existing
coverage.
2. Get multinode coverage of deployments - this should integrate with
the scenarios we already have defined for non-container deployment.
This is super important to cover all overcloud services, like we did
with classic deployments. It should be non voting to start and then
voting once it works. We should find a way to keep the same templates
as we have now, and just include the docker environment. In other
words, find a way to keep using:
https://github.com/openstack/tripleo-heat-templates/blob/master/ci/environments/scenario001-multinode.yaml
so we don't duplicate scenario environments.
3. Implement a container upgrade job, which for Pike will deploy a
baremetal overcloud, then migrate to containers on upgrade. Use
multinode jobs for this task. Start with a non-voting job and move to
the gate once it works. I also suggest using the scenarios framework, so
we keep good coverage.


The first iteration of this job is ready to be reviewed and landed.
Please see the patches here [1].

The latest job execution didn't go all the way to success yet; it failed
during the Ansible upgrade steps [2]. Still, I think the patches are
now far enough along that they would be good to merge anyway, and issues can
be ironed out subsequently, as can making the job actually
Ocata->master rather than master->master (currently it just switches from
non-containers to containers).

[1] https://review.openstack.org/#/q/topic:container-upgrade
[2]
http://logs.openstack.org/84/450784/8/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a1850f7/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz


Sorry, the [2] link was incorrect; this is the right one:

http://logs.openstack.org/84/450784/9/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/23f9190/logs/undercloud/home/jenkins/overcloud_upgrade_console.log.txt.gz




4. After we implement the workflow for minor updates, have a job that
tests container-to-container minor (rolling) updates. This should
ideally add some coverage to ensure no API downtime, and possibly
checks for service restarts (ref. recent bugs about bouncing
services on minor updates).
5. Once Pike is released and Queens starts, let's work on a
container-to-container upgrade job.

Any feedback or questions are highly welcome,

Note: The proposal comes from shardy's notes on
https://etherpad.openstack.org/p/tripleo-container-ci - feel free to
contribute to the etherpad or mailing list.

Thanks,






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Luigi Toscano
On Thursday, 6 April 2017 13:29:32 CEST Justin Kilpatrick wrote:
> Maybe I'm getting a little off topic with this question, but why was
> Tempest removed last time?
> 
> I'm not well versed in the history of this discussion, but from what I
> understand Tempest in the gate has
> been an off and on again thing for a while but I've never heard the
> story of why it got removed.
> 

Also, saying "tempest" only can be a bit confusing.
I guess that what can be easily done here is:
- use tempest (library, CLI) to initialize the test environment (tempest init),
run the tests (tempest run/ostestr), and gather the results
- select a subset of the current tests to be executed, or even write some.
Pingtest itself could be changed into a Tempest plugin...

A possible "resource problem" depends only on the set of tests that are
executed.

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Arx Cruz
Having tempest running will also allow these jobs to appear in the
openstack-health system.

On Thu, Apr 6, 2017 at 1:29 PM, Justin Kilpatrick 
wrote:

> Maybe I'm getting a little off topic with this question, but why was
> Tempest removed last time?
>
> I'm not well versed in the history of this discussion, but from what I
> understand Tempest in the gate has
> been an off and on again thing for a while but I've never heard the
> story of why it got removed.
>
> On Thu, Apr 6, 2017 at 7:00 AM, Chris Dent  wrote:
> > On Thu, 6 Apr 2017, Sagi Shnaidman wrote:
> >
> >> It may be useful to run a "limited edition" of API tests that maximize
> >> coverage and don't duplicate, for example just to check service working
> >> basically, without covering all its functionality. It will take very
> >> little
> >> time (i.e. 5 tests for each service) and will give a general picture of
> >> deployment success. It will cover fields that are not covered by
> pingtest
> >> as well.
> >
> >
> > It sounds like using some parts of tempest is perhaps the desired
> > thing here but in case a "limited edition" test against the APIs to
> > do what amounts to a smoke test is desired, it might be worthwhile
> > to investigate using gabbi[1] and its command line gabbi-run[2] tool for
> > some fairly simple and readable tests that can describe a sequence
> > of API interactions. There are lots of tools that can do the same
> > thing, so gabbi may not be the right choice but it's there as an
> > option.
> >
> > The telemetry group had (and may still have) some integration tests
> > that use gabbi files to integrate ceilometer, heat (starting some
> > vms), aodh and gnocchi and confirm that the expected flow happened.
> > Since the earlier raw scripts I think there's been some integration
> > with tempest, but gabbi files are still used[3].
> >
> > If this might be useful and I can help out, please ask.
> >
> > [1] http://gabbi.readthedocs.io/
> > [2] http://gabbi.readthedocs.io/en/latest/runner.html
> > [3]
> > https://github.com/openstack/ceilometer/tree/master/
> ceilometer/tests/integration
> >
> > --
> > Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
> > freenode: cdent tw: @anticdent
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][intel-nfv-ci-tests][octavia][tap-as-a-service][vmware-nsx][networking-sfc] Refactor of Tempest scenario base classes

2017-04-06 Thread Andrea Frittoli
One final status update on this.
We will proceed now with refactoring the tempest scenario base class.

- 17 patches merged
- 3 patches ready (on master)
- 3 patches ready (on stable branches)
- 3 extra patches for data_utils

Details: https://etherpad.openstack.org/p/tempest-manager-plugins

andrea

On Fri, Mar 24, 2017 at 2:55 PM Andrea Frittoli 
wrote:

> Status update on the scenario manager base class:
>
> - 13 patches merged
> - 6 patches ready (on master)
> - 2 patches ready (on stable branches)
>
> Details: https://etherpad.openstack.org/p/tempest-manager-plugins
>
> I changed the subject to include only the projects that have outstanding
> patches related to this.
>
> thank you
>
> andrea
>
> On Sat, Mar 18, 2017 at 12:04 PM Andrea Frittoli <
> andrea.fritt...@gmail.com> wrote:
>
> Status update.
>
> Thanks to all your work, we are getting close to finishing this work!
>
> - 12 patches merged
> - 7 patches ready, pending approval
> - 2 patches (on stable branches) with a pep8 failure to be fixed
>
> Details: https://etherpad.openstack.org/p/tempest-manager-plugins
>
> There are still a few patches that need a review, any help
> would be much appreciated!
>
> Thank you
>
> andrea
>
> On Wed, Mar 15, 2017 at 3:56 PM Andrea Frittoli 
> wrote:
>
> On Wed, Mar 15, 2017 at 11:38 AM Dmitry Tantsur 
> wrote:
>
> On 02/27/2017 12:34 PM, Andrea Frittoli wrote:
> > Hello folks,
> >
> > TL;DR: if today you import manager.py from tempest.scenario please
> maintain a
> > copy of [0] in tree until further notice.
>
> Hi!
>
> I hope it is pretty obvious, but just to be clear: anything that this
> copied
> file uses should be treated more or less as a stable API by the QA team
> during
> the whole transition period. The last thing we want to happen is for this
> file
> to break all the time because its dependencies (imports, functions,
> classes it
> uses) are not stable.
>
> If it's not the case, please update it, and let us know the git hash to
> use to
> grab the final version of the file.
>
>
> Your code depends on manager.py and its dependencies today,
> and copying that in-tree removes at least one of the dependencies.
>
> The only case where you're in a worse situation is if one of the imports is
> removed / renamed,
> and we'll do our best to avoid that.
>
> My recommendation would be to trim down your copy of manager.py to the
> bare minimum you
> need, which is likely to be much smaller than the whole module.
>
>
> Thanks for understanding!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Justin Kilpatrick
Maybe I'm getting a little off topic with this question, but why was
Tempest removed last time?

I'm not well versed in the history of this discussion, but from what I
understand, Tempest in the gate has been an on-again, off-again thing
for a while, yet I've never heard the story of why it got removed.

On Thu, Apr 6, 2017 at 7:00 AM, Chris Dent  wrote:
> On Thu, 6 Apr 2017, Sagi Shnaidman wrote:
>
>> It may be useful to run a "limited edition" of API tests that maximize
>> coverage and don't duplicate, for example just to check service working
>> basically, without covering all its functionality. It will take very
>> little
>> time (i.e. 5 tests for each service) and will give a general picture of
>> deployment success. It will cover fields that are not covered by pingtest
>> as well.
>
>
> It sounds like using some parts of tempest is perhaps the desired
> thing here but in case a "limited edition" test against the APIs to
> do what amounts to a smoke test is desired, it might be worthwhile
> to investigate using gabbi[1] and its command line gabbi-run[2] tool for
> some fairly simple and readable tests that can describe a sequence
> of API interactions. There are lots of tools that can do the same
> thing, so gabbi may not be the right choice but it's there as an
> option.
>
> The telemetry group had (and may still have) some integration tests
> that use gabbi files to integrate ceilometer, heat (starting some
> vms), aodh and gnocchi and confirm that the expected flow happened.
> Since the earlier raw scripts I think there's been some integration
> with tempest, but gabbi files are still used[3].
>
> If this might be useful and I can help out, please ask.
>
> [1] http://gabbi.readthedocs.io/
> [2] http://gabbi.readthedocs.io/en/latest/runner.html
> [3]
> https://github.com/openstack/ceilometer/tree/master/ceilometer/tests/integration
>
> --
> Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Consistent way of integrating services with different backends

2017-04-06 Thread Ricardo Noriega De Soto
Hi owls!

This is something I've been discussing on the IRC channel, but I still
think we should define a consistent way of integrating services that
support different backends. In this case, I'm referring to the BGPVPN and L2GW
Neutron services, but it could apply to any other type of service.

These two Neutron service plugins support different backends, such as an agent
and an SDN controller (OpenDaylight). Usually the reference architecture
uses the agents.

My main question is about how to model this into THT properly.

It is clear that we have to create two different composable services: one
for the API and one for the agent. However, how many environment files
should we have, and what should their content be?

i.e. L2GW project

Option 1:

   - neutron-l2gw-api.yaml enabling the corresponding API composable
   service.
   - neutron-l2gw-agent.yaml enabling the corresponding agent composable
   service.

openstack overcloud deploy -e neutron-l2gw-api.yaml -e
 neutron-l2gw-agent.yaml (with agent)
openstack overcloud deploy -e neutron-l2gw-api.yaml -e
 neutron-opendaylight-l3.yaml (with ODL)

Option 2:

   - neutron-l2gw.yaml enabling the API and the agent as a reference
   architecture scenario.
   - neutron-l2gw-odl.yaml enabling the API with OpenDaylight as
   service_provider

openstack overcloud deploy -e neutron-l2gw.yaml (with agent)
openstack overcloud deploy -e neutron-l2gw-odl.yaml -e
 neutron-opendaylight-l3.yaml (with ODL)
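
To make option 2 concrete, the agent-backed environment file could be little
more than a resource_registry mapping plus a default service provider. A
rough sketch only; the service names and paths below are illustrative, and
the linked reviews define the real interfaces:

```yaml
# neutron-l2gw.yaml -- reference (agent) scenario, illustrative only
resource_registry:
  OS::TripleO::Services::NeutronL2gwApi: ../puppet/services/neutron-l2gw-api.yaml
  OS::TripleO::Services::NeutronL2gwAgent: ../puppet/services/neutron-l2gw-agent.yaml

# neutron-l2gw-odl.yaml would instead map only the API service and
# override the service provider parameter to point at OpenDaylight.
```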


I'm not really pushing for either option; I'm just concerned about the user
experience. As a user, which way is more friendly or
understandable? Where in the documentation is this reflected?


Some pointers of the L2GW service and agent services:

https://review.openstack.org/#/c/447429/
https://review.openstack.org/#/c/451175/

Cheers

-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Chris Dent

On Thu, 6 Apr 2017, Sagi Shnaidman wrote:


It may be useful to run a "limited edition" of API tests that maximize
coverage and don't duplicate, for example just to check service working
basically, without covering all its functionality. It will take very little
time (i.e. 5 tests for each service) and will give a general picture of
deployment success. It will cover fields that are not covered by pingtest
as well.


It sounds like using some parts of tempest is perhaps the desired
thing here but in case a "limited edition" test against the APIs to
do what amounts to a smoke test is desired, it might be worthwhile
to investigate using gabbi[1] and its command line gabbi-run[2] tool for
some fairly simple and readable tests that can describe a sequence
of API interactions. There are lots of tools that can do the same
thing, so gabbi may not be the right choice but it's there as an
option.

The telemetry group had (and may still have) some integration tests
that use gabbi files to integrate ceilometer, heat (starting some
vms), aodh and gnocchi and confirm that the expected flow happened.
Since the earlier raw scripts I think there's been some integration
with tempest, but gabbi files are still used[3].

If this might be useful and I can help out, please ask.

[1] http://gabbi.readthedocs.io/
[2] http://gabbi.readthedocs.io/en/latest/runner.html
[3] 
https://github.com/openstack/ceilometer/tree/master/ceilometer/tests/integration
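
As a flavor of what such a gabbi file looks like, here is an illustrative
sketch against a Keystone endpoint; the host and response fields are
placeholders, not a tested TripleO suite:

```yaml
# smoke.yaml -- gabbi-run reads this from stdin, e.g.:
#   gabbi-run keystone-host:5000 < smoke.yaml
tests:
  - name: identity api is up
    GET: /v3
    status: 200
    response_json_paths:
      $.version.status: stable

  - name: unauthenticated project listing is rejected
    GET: /v3/projects
    status: 401
```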

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-doc] Docs team meeting

2017-04-06 Thread Alexandra Settle
Hey everyone,

The docs meeting will continue today in #openstack-meeting-alt as scheduled 
(Thursday at 21:00 UTC). For more details, and the agenda, see the meeting 
page: - 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

The meeting will be hosted by Joseph Robinson, as I have some personal business
to attend to this evening.

Specialty team leads – if you are unable to attend the meeting, please send me 
your team reports to include in the doc newsletter.

Thanks,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Julie Pichon
On 6 April 2017 at 10:53, Martin André  wrote:
> Hellooo,
>
> I'd like to propose we extend Florian Fuchs' +2 powers to the
> tripleo-validations project. Florian is already core on tripleo-ui
> (well, on tripleo technically, so there are no changes to make
> to gerrit groups).
>
> Florian took over many of the stalled patches in tripleo-validations
> and is now the principal contributor to the project [1]. He has built
> up good expertise over the last few months, and I think it's time he
> officially had the right to approve changes in tripleo-validations.
>
> Consider this my +1 vote.
>
> Martin
>
> [1] 
> http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike
>

+1!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Jiří Stránský

On 6.4.2017 11:53, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs' +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, on tripleo technically, so there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor to the project [1]. He has built
up good expertise over the last few months, and I think it's time he
officially had the right to approve changes in tripleo-validations.


+1



Consider this my +1 vote.

Martin

[1] 
http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Roadmap for Container CI work

2017-04-06 Thread Jiří Stránský

On 4.4.2017 22:01, Emilien Macchi wrote:

After our weekly meeting today, I found it useful to share and discuss
our roadmap for Container CI jobs in TripleO.
The items are ordered by priority, from highest to lowest:

1. Swap the ovb-nonha job with ovb-containers, enable introspection on the
container job, and shuffle other coverage (e.g. ssl) to other jobs
(HA?). This will help us get coverage for the ovb-containers scenario
again, without consuming more rh1 resources, while keeping existing
coverage.
2. Get multinode coverage of deployments - this should integrate with
the scenarios we already have defined for non-container deployment.
This is super important to cover all overcloud services, like we did
with classic deployments. It should be non-voting to start and then
voting once it works. We should find a way to keep the same templates
as we have now, and just include the docker environment. In other
words, find a way to keep using:
https://github.com/openstack/tripleo-heat-templates/blob/master/ci/environments/scenario001-multinode.yaml
so we don't duplicate scenario environments.
3. Implement a container upgrade job, which for Pike will deploy a
baremetal overcloud, then migrate it to containers on upgrade. Use
multinode jobs for this task. Start with a non-voting job and move it to
the gate once it works. I also suggest using the scenarios framework, so
we keep good coverage.


The first iteration of this job is ready to be reviewed and landed. 
Please see the patches here [1].


The latest job execution didn't go all the way to success yet; it failed
during the Ansible upgrade steps [2]. Still, I think the patches are
now far enough along that they would be good to merge anyway, and issues can
be ironed out subsequently, as can making the job actually
Ocata->master rather than master->master (currently it just switches from
non-containers to containers).


[1] https://review.openstack.org/#/q/topic:container-upgrade
[2] 
http://logs.openstack.org/84/450784/8/experimental/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/a1850f7/logs/undercloud/home/jenkins/overcloud_deploy.log.txt.gz



4. After we implement the workflow for minor updates, have a job that
tests container-to-container minor (rolling) updates. This should
ideally add some coverage to ensure no API downtime, and possibly
checks for service restarts (ref. recent bugs about bouncing
services on minor updates).
5. Once Pike is released and Queens starts, let's work on a
container-to-container upgrade job.

Any feedback or questions are highly welcome,

Note: The proposal comes from shardy's notes on
https://etherpad.openstack.org/p/tripleo-container-ci - feel free to
contribute to the etherpad or mailing list.

Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Boris Pavlovic
Sagi,

I think Rally or Browbeat and other performance oriented solutions won't
> serve our needs, because we run TripleO CI on virtualized environment with
> very limited resources. Actually we are pretty close to full utilizing
> these resources when deploying openstack, so very little is available for
> test.


You can run Rally with any load, including just starting a single smallest VM.


It may be useful to run a "limited edition" of API tests that maximize
> coverage and don't duplicate, for example just to check service working
> basically, without covering all its functionality. It will take very little
> time (i.e. 5 tests for each service) and will give a general picture of
> deployment success. It will cover fields that are not covered by pingtest
> as well.


You can actually pick a few of the scenarios that we have in Rally and cover
most of the functionality.
If you specify exactly what you want to test, I can help with writing a Rally
task for that (it will use as few resources as possible).
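
As an illustration of how small a Rally load can be (NovaServers.boot_and_delete_server
is a real Rally scenario; the flavor and image names are placeholders for
whatever the CI cloud provides):

```yaml
# Rally task sketch: boot and delete a single, smallest-possible server
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "m1.nano"   # placeholder: smallest available flavor
      image:
        name: "cirros"    # placeholder: any small test image
    runner:
      type: "constant"
      times: 1
      concurrency: 1
```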


Best regards,
Boris Pavlovic



On Thu, Apr 6, 2017 at 2:38 AM, Dmitry Tantsur  wrote:

> On 04/05/2017 10:49 PM, Emilien Macchi wrote:
>
>> Greetings dear owls,
>>
>> I would like to bring back an old topic: running tempest in the gate.
>>
>> == Context
>>
>> Right now, TripleO gate is running something called pingtest to
>> validate that the OpenStack cloud is working. It's a Heat stack that
>> deploys a Nova server, some volumes, a glance image, a neutron network
>> and sometimes a little bit more.
>> To deploy the pingtest, you obviously need Heat deployed in your
>> overcloud.
>>
>> == Problems:
>>
>> Although pingtest has been very helpful over the last years:
>> - easy to understand: it's a Heat template, like an OpenStack user
>> would do to deploy their apps.
>> - fast: the stack takes a few minutes to be created and validated
>>
>> It has some limitations:
>> - Limited to what Heat resources support (example: some OpenStack
>> resources can't be managed from Heat)
>> - Impossible to run a dynamic workflow (test a live migration for example)
>>
>> == Solutions
>>
>> 1) Switch pingtest to Tempest run on some specific tests, with feature
>> parity of what we had with pingtest.
>> For example, we could imagine to run the scenarios that deploys VM and
>> boot from volume. It would test the same thing as pingtest (details
>> can be discussed here).
>> Each scenario would run more tests depending on the service that they
>> run (scenario001 is telemetry, so it would run some tempest tests for
>> Ceilometer, Aodh, Gnocchi, etc).
>> We should work at making the tempest run as short as possible, and as
>> close as possible to what we have with a pingtest.
>>
>
> A lot of work is going into Tempest itself and its various plugins, so
> that it becomes a convenient and universal tool to test OpenStack clouds.
> While we're not quite there in terms of convenience, it's hard to match the
> coverage of tempest + plugins. I'd prefer TripleO use (some subset of)
> Tempest test suite(s).
>
>
>> 2) Run custom scripts in TripleO CI tooling, called from the pingtest
>> (heat template), that would run some validations commands (API calls,
>> etc).
>> It has been investigated in the past but never implemented AFAIK.
>>
>> 3) ?
>>
>
> Unless you want to duplicate all the work that goes into the Tempest ecosystem
> now, this is probably not a good idea.
>
>
>> I tried to make this text short and go straight to the point, please
>> bring feedback now. I hope we can make progress on $topic during Pike,
>> so we can increase our testing coverage and detect deployment issues
>> sooner.
>>
>> Thanks,
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-06 Thread Martin André
Hellooo,

I'd like to propose we extend Florian Fuchs' +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, on tripleo technically, so there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor to the project [1]. He has built
up good expertise over the last few months, and I think it's time he
officially had the right to approve changes in tripleo-validations.

Consider this my +1 vote.

Martin

[1] 
http://stackalytics.com/?module=tripleo-validations&metric=patches&release=pike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Dmitry Tantsur

On 04/05/2017 10:49 PM, Emilien Macchi wrote:

Greetings dear owls,

I would like to bring back an old topic: running tempest in the gate.

== Context

Right now, TripleO gate is running something called pingtest to
validate that the OpenStack cloud is working. It's a Heat stack that
deploys a Nova server, some volumes, a glance image, a neutron network
and sometimes a little bit more.
To deploy the pingtest, you obviously need Heat deployed in your overcloud.
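
(For context, a pingtest-style stack is roughly of the following shape; this
is a trimmed, illustrative sketch, not the actual CI template:)

```yaml
heat_template_version: ocata

resources:
  private_net:
    type: OS::Neutron::Net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: private_net}
      cidr: 192.168.2.0/24

  server1:
    type: OS::Nova::Server
    properties:
      image: pingtest_image   # placeholder: small image uploaded by CI
      flavor: m1.tiny
      networks:
        - network: {get_resource: private_net}
```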

== Problems:

Although pingtest has been very helpful over the last years:
- easy to understand: it's a Heat template, like an OpenStack user
would do to deploy their apps.
- fast: the stack takes a few minutes to be created and validated

It has some limitations:
- Limited to what Heat resources support (example: some OpenStack
resources can't be managed from Heat)
- Impossible to run a dynamic workflow (test a live migration for example)

== Solutions

1) Switch pingtest to Tempest run on some specific tests, with feature
parity of what we had with pingtest.
For example, we could imagine to run the scenarios that deploys VM and
boot from volume. It would test the same thing as pingtest (details
can be discussed here).
Each scenario would run more tests depending on the service that they
run (scenario001 is telemetry, so it would run some tempest tests for
Ceilometer, Aodh, Gnocchi, etc).
We should work at making the tempest run as short as possible, and as
close as possible to what we have with a pingtest.


A lot of work is going into Tempest itself and its various plugins, so that it 
becomes a convenient and universal tool to test OpenStack clouds. While we're 
not quite there in terms of convenience, it's hard to match the coverage of 
tempest + plugins. I'd prefer TripleO use (some subset of) Tempest test suite(s).




2) Run custom scripts in TripleO CI tooling, called from the pingtest
(heat template), that would run some validations commands (API calls,
etc).
It has been investigated in the past but never implemented AFAIK.

3) ?


Unless you want to duplicate all the work that goes into the Tempest ecosystem
now, this is probably not a good idea.




I tried to make this text short and go straight to the point, please
bring feedback now. I hope we can make progress on $topic during Pike,
so we can increase our testing coverage and detect deployment issues
sooner.

Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Risk prediction model for OpenStack

2017-04-06 Thread Thierry Carrez
林泽燕 wrote:
> Dear everyone,
> 
> My name is Zoey Lin. I majored in Computer Science at Peking University,
> China, and I'm a Master's degree candidate. Recently I've been researching
> the contribution composition of OpenStack code files, in order to
> predict how many defects a file is likely to have in the later
> development stage of a release.
> 
> I wonder if I could show you my study, including some metrics for the
> prediction model and a visualization tool. I would appreciate it if you
> could share your opinions or give some advice, which would really,
> really help me a lot. Thank you so much for your kindness. :)
> [...]

I'd like to echo what Jeremy said and thank you for your insightful
research. I've been interested in using risk prediction and machine
learning as a part of our review process to increase quality.

Your scientific analysis seems to match what we intuitively know: larger
files will contain more bugs than smaller files, and (beyond a few
outliers), complex files which see lots of contributions will trigger
more issues than simple files that only needed to be written once. So
I'm wondering how much of that feedback can be used to improve the code:
I think we internalize most of that risk assessment already.

One insight which I think we could take from this is that when a smaller
group of people "owns" a set of files, we raise quality (compared to
everyone owning everything). So the more we can split the code along
areas of expertise and smaller review teams, the better. But I think
that is also something we intuitively knew.

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Sagi Shnaidman
HI,

I think Rally, Browbeat, and other performance-oriented solutions won't
serve our needs, because we run TripleO CI on a virtualized environment with
very limited resources. Actually, we are pretty close to fully utilizing
these resources when deploying OpenStack, so very little is left for
tests.
It's not a problem to run tempest API tests because they are cheap: they take
little time and few resources, but they also give little coverage. Scenario
tests are more interesting and give us more coverage, but they also take a
lot of resources (which we sometimes don't have).

It may be useful to run a "limited edition" of API tests that maximizes
coverage without duplication, for example just to check that each service
works basically, without covering all of its functionality. It would take
very little time (e.g. 5 tests per service) and would give a general
picture of deployment success. It would also cover areas that are not
covered by the pingtest.

Another option could be to develop special scenario tempest tests for
TripleO that would fit our needs.
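A "limited edition" like the one suggested above could be captured in a
tempest test whitelist file, for example (a hypothetical selection; the exact
test names would need to be checked against the tempest version in use):

```
# whitelist.txt: smoke-level checks, a handful per service (hypothetical list)
tempest.api.identity.v3.test_tokens
tempest.api.compute.servers.test_create_server
tempest.api.volume.test_volumes_get
tempest.api.image.v2.test_images
tempest.api.network.test_networks
tempest.scenario.test_server_basic_ops
```

Such a file could then be fed to the runner (e.g. `tempest run
--whitelist-file whitelist.txt`, or an equivalent `--regex` selection) so
that only these tests execute.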

Thanks


On Wed, Apr 5, 2017 at 11:49 PM, Emilien Macchi  wrote:

> Greetings dear owls,
>
> I would like to bring back an old topic: running tempest in the gate.
>
> == Context
>
> Right now, TripleO gate is running something called pingtest to
> validate that the OpenStack cloud is working. It's a Heat stack that
> deploys a Nova server, some volumes, a glance image, a neutron network,
> and sometimes a little bit more.
> To deploy the pingtest, you obviously need Heat deployed in your overcloud.
>
> == Problems:
>
> Although the pingtest has been very helpful over the last years:
> - easy to understand: it's a Heat template, like one an OpenStack user
> would write to deploy their apps.
> - fast: the stack takes a few minutes to be created and validated
>
> It has some limitations:
> - Limitation to what Heat resources support (example: some OpenStack
> resources can't be managed from Heat)
> - Impossible to run a dynamic workflow (test a live migration for example)
>
> == Solutions
>
> 1) Switch the pingtest to a Tempest run of some specific tests, with
> feature parity with what we had with the pingtest.
> For example, we could imagine running the scenarios that deploy a VM and
> boot from volume. It would test the same thing as the pingtest (details
> can be discussed here).
> Each scenario would run more tests depending on the services that it
> runs (scenario001 is telemetry, so it would run some tempest tests for
> Ceilometer, Aodh, Gnocchi, etc.).
> We should work at making the tempest run as short as possible, and as
> close as possible to what we have with the pingtest.
>
> 2) Run custom scripts in the TripleO CI tooling, called from the pingtest
> (Heat template), that would run some validation commands (API calls,
> etc).
> It has been investigated in the past but never implemented, AFAIK.
>
> 3) ?
>
> I tried to make this text short and straight to the point; please
> bring feedback now. I hope we can make progress on $topic during Pike,
> so we can increase our testing coverage and detect deployment issues
> sooner.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
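For context, the pingtest discussed in this thread is conceptually just a
small Heat stack; a minimal sketch of such a template might look like this
(a hypothetical resource set, not the actual TripleO pingtest template):

```yaml
heat_template_version: 2015-04-30
description: Minimal pingtest-style validation stack (sketch)

resources:
  test_net:
    type: OS::Neutron::Net

  test_volume:
    type: OS::Cinder::Volume
    properties:
      size: 1

  test_server:
    type: OS::Nova::Server
    properties:
      image: pingtest_image   # assumed to be uploaded to Glance beforehand
      flavor: m1.tiny
      networks:
        - network: { get_resource: test_net }
```

If the stack reaches CREATE_COMPLETE, the basic Nova/Neutron/Cinder/Glance
paths are known to work, which also illustrates the limitation described
above: only what Heat resources can express gets tested.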



-- 
Best regards
Sagi Shnaidman


Re: [openstack-dev] [Openstack] How to get all detail RPC message and detail context in neutron docs?

2017-04-06 Thread Sam
Thank you; using the debug option will also help us get the details of RPC
messages. Good luck~

2017-04-06 14:20 GMT+08:00 김기석 [Kiseok Kim] :

> Sam,
>
>
>
> I have no idea whether or not the document exists. (I want to read it too)
>
>
>
> about the L3 RPC messages (L3_AGENT topic): their client side is
> implemented in l3_router_plugin.[1]
>
> There is an oslo.messaging document.[2] I guess it would help.
>
>
>
> [1] https://github.com/openstack/neutron/blob/master/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L37
>
> [2] https://docs.openstack.org/developer/oslo.messaging/
>
>
>
> good luck!
>
>
>
> *From:* Sam [mailto:batmanu...@gmail.com]
> *Sent:* Thursday, April 06, 2017 10:56 AM
> *To:* 김기석 [Kiseok Kim]
> *Cc:* OpenStack General; OpenStack Development Mailing List (not for
> usage questions)
> *Subject:* Re: [Openstack] How to get all detail RPC message and detail
> context in neutron docs?
>
>
>
> For example, detail of the messages of topics.L3_AGENT
>
>
>
> 2017-04-06 9:38 GMT+08:00 Sam :
>
> Thank you all.
>
>
>
> For 'context', I got it.
>
> For RPCs, is there some document or blog or some debug method to get
> their detailed contents in the neutron L3 agent?
>
>
>
> 2017-04-06 9:33 GMT+08:00 김기석 [Kiseok Kim] :
>
> Hi Sam,
>
>
>
> that 'context' is an oslo_context RequestContext, and neutron uses it with additional attributes.
>
>
>
> oslo.context has a to_dict method,
>
> so you could add a debug log in the 'agent_updated' method like:
>
>
>
>LOG.debug("context in agent_updated: %s", context.to_dict())
>
>
>
> and you can find out the attributes of context in
>
> https://github.com/openstack/neutron-lib/blob/master/neutron_lib/context.py#L83-L92,
>
> https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py#L310-L332
>
>
>
> *From:* Sam [mailto:batmanu...@gmail.com]
> *Sent:* Wednesday, April 05, 2017 7:10 PM
> *To:* OpenStack General; OpenStack Development Mailing List (not for
> usage questions)
> *Subject:* [Openstack] How to get all detail RPC message and detail
> context in neutron docs?
>
>
>
> Hi all,
>
>
>
> I'm working on the neutron L3 agent and some other agents. I found that
> there are lots of RPCs, including RPC calls and notifications, and lots of
> 'context' parameters, as below. But I don't know their detailed contents;
> can I get these from some docs?
>
>
>
> If there are no docs, could I get these using some debug method, like a
> '--debug' option, or using pdb or something?
>
>
>
> RPC: like 'agent_updated' in neutron/neutron/agent/l3/agent.py line 759.
>
>
>
> context: it's a parameter of some functions, like 'def
> router_added_to_agent(self, context, payload):' in
> neutron/neutron/agent/l3/agent.py.
>
>
>
>
>


[openstack-dev] [nova] nova boot from image created volume

2017-04-06 Thread 李田清
Hello,
If we use nova boot from an image with a created volume, I think nova will
use volume/cinder.py:create to create the volume. But after inserting pdb, I
could not find the specific line of code that calls create. Can someone help
me point out that line? Thanks a lot.
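A generic way to answer the question above ("which line calls create?") is
to monkey-patch the method and print the caller's stack instead of guessing
pdb breakpoints. A standard-library-only sketch (VolumeAPI here is a
hypothetical stand-in, not the real Nova or Cinder code):

```python
import traceback

class VolumeAPI:
    """Stand-in for the real volume API class (hypothetical, not Nova code)."""
    def create(self, size, name):
        return {'size': size, 'name': name}

def trace_calls(cls, method_name):
    """Wrap cls.method_name so every call prints the caller's stack."""
    original = getattr(cls, method_name)

    def wrapper(self, *args, **kwargs):
        print("%s.%s called from:" % (cls.__name__, method_name))
        traceback.print_stack(limit=5)  # the last few frames show the caller
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

trace_calls(VolumeAPI, 'create')
vol = VolumeAPI().create(1, 'test-vol')  # prints the calling stack here
```

Applied to the real classes, the printed stack immediately shows which
module and line invoked the method, without stepping through pdb.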


Re: [openstack-dev] [Openstack] How to get all detail RPC message and detail context in neutron docs?

2017-04-06 Thread 김기석 [Kiseok Kim]
Sam,

I have no idea whether or not the document exists. (I want to read it too)

about the L3 RPC messages (L3_AGENT topic): their client side is implemented
in l3_router_plugin.[1]
There is an oslo.messaging document.[2] I guess it would help.

[1] https://github.com/openstack/neutron/blob/master/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L37
[2] https://docs.openstack.org/developer/oslo.messaging/

good luck!

From: Sam [mailto:batmanu...@gmail.com]
Sent: Thursday, April 06, 2017 10:56 AM
To: 김기석 [Kiseok Kim]
Cc: OpenStack General; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [Openstack] How to get all detail RPC message and detail context 
in neutron docs?

For example, detail of the messages of topics.L3_AGENT

2017-04-06 9:38 GMT+08:00 Sam:
Thank you all.

For 'context', I got it.
For RPCs, is there some document or blog or some debug method to get their
detailed contents in the neutron L3 agent?

2017-04-06 9:33 GMT+08:00 김기석 [Kiseok Kim]:
Hi Sam,

that 'context' is an oslo_context RequestContext, and neutron uses it with additional attributes.

oslo.context has a to_dict method,
so you could add a debug log in the 'agent_updated' method like:

   LOG.debug("context in agent_updated: %s", context.to_dict())

and you can find out the attributes of context in
https://github.com/openstack/neutron-lib/blob/master/neutron_lib/context.py#L83-L92,
https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py#L310-L332
.
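To illustrate the pattern, here is a self-contained sketch with a stand-in
context class (FakeRequestContext is hypothetical; the real oslo.context
RequestContext carries more attributes, see the links above):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

class FakeRequestContext:
    """Stand-in for oslo.context's RequestContext (hypothetical subset)."""
    def __init__(self, user_id, project_id, request_id):
        self.user_id = user_id
        self.project_id = project_id
        self.request_id = request_id

    def to_dict(self):
        # oslo.context serializes its attributes to a plain dict like this.
        return {'user_id': self.user_id,
                'project_id': self.project_id,
                'request_id': self.request_id}

def agent_updated(context, payload):
    # The suggested debug line: dump the whole context on entry.
    LOG.debug("context in agent_updated: %s", context.to_dict())

ctx = FakeRequestContext('user-1', 'proj-1', 'req-abc')
agent_updated(ctx, payload={'admin_state_up': True})
```

With debug logging enabled, every RPC entry point patched this way shows the
full serialized context in the agent log.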

From: Sam [mailto:batmanu...@gmail.com]
Sent: Wednesday, April 05, 2017 7:10 PM
To: OpenStack General; OpenStack Development Mailing List (not for usage 
questions)
Subject: [Openstack] How to get all detail RPC message and detail context in 
neutron docs?

Hi all,

I'm working on the neutron L3 agent and some other agents. I found that there
are lots of RPCs, including RPC calls and notifications, and lots of 'context'
parameters, as below. But I don't know their detailed contents; can I get
these from some docs?

If there are no docs, could I get these using some debug method, like a
'--debug' option, or using pdb or something?

RPC: like 'agent_updated' in neutron/neutron/agent/l3/agent.py line 759.

context: it's a parameter of some functions, like 'def
router_added_to_agent(self, context, payload):' in
neutron/neutron/agent/l3/agent.py.




Re: [openstack-dev] [nova] Risk prediction model for OpenStack

2017-04-06 Thread 林泽燕
Hi Matt,
Thank you for your attention. The information you provided is very helpful.

Best Regards


> -Original Message-
> From: "Matt Riedemann" 
> Sent: 2017-04-05 23:33:21 (Wednesday)
> To: openstack-dev@lists.openstack.org
> Cc: 
> Subject: Re: [openstack-dev] [nova] Risk prediction model for OpenStack
> 
> On 4/5/2017 9:00 AM, Jeremy Stanley wrote:
> > On 2017-04-05 14:00:59 +0800 (+0800), 林泽燕 wrote:
> > [...]
> >> I wonder if I could show you my study, including some metrics for
> >> the prediction model and a visualization tool.
> > [...]
> >
> > I want to start out thanking you for your research and interest in
> > OpenStack's development practices. I love that our contribution
> > model enables such scientific analysis, a sometimes less recognized
> > benefit of our community's choice to work entirely in the open. This
> > specific study is also very insightful and well-presented.
> >
> >> In this release, 36 developers left the development of this file
> >> (they made contributions in last release but not this one).
> >> Developers leaving a code file deprive the file of the knowledge
> >> of the decisions they have made.
> > [...]
> >
> > One potentially influential aspect of our development model is that
> > we place a heavy importance on code review. For any patch to make it
> > into a branch under official revision control, it must first be
> > reviewed by multiple experienced, long-standing contributors to that
> > repository. Our hope is that even though some developers may cease
> > contributing new patches to a file, some of them would still be
> > reviewing, guiding and refining changes proposed by newer
> > contributors. It doesn't seem like this behavior was captured in
> > your analysis, or alternatively the fact that your model yielded
> > relatively accurate predictions could imply that our review process
> > has little impact on defects introduced by new commits.
> >
> > If you do at some point wish to try integrating review metrics into
> > your analysis, our code review system has a REST API you can
> > leverage, and much of the data you'd likely be interested in can be
> > queried via anonymous methods such that you wouldn't even need to
> > create an account. Documentation for the interface is available at
> > https://review.openstack.org/Documentation/rest-api.html and we also
> > have documentation of our general developer workflow at
> > https://docs.openstack.org/infra/manual/developers.html as well as
> > some background on our development model at
> > https://docs.openstack.org/project-team-guide/open-development.html
> > if that helps.
> >
> 
> Jeremy pointed out what I was going to mention, which was the lack of 
> input on code reviews. Each major component of Nova, or virt drivers, 
> generally have subteams, or some sort of subject domain expert, that is 
> consulted or at least involved in reviewing code contributions. So while 
> they may not be making the changes themselves to a component, they 
> should be reviewing those changes. For example, with the 
> nova/virt/libvirt/driver.py, danpb was the main core reviewer and 
> maintainer for that code in the past, so while he didn't write 
> everything, he was reviewing a lot of the contributions.
> 
> Some of the files are also skewed a bit, and you might want to take into 
> account logic paths in a module to exclude it. For example, exception.py 
> and the various opts.py modules are outliers. They are basically files 
> that contain constants but not logic code so the chance of those having 
> an actual owner is small, but so should be the risk for bugs. They will 
> also have a high diversity given how common they are.
> 
> I'm not sure I understood the timeline graphs, or the point those are 
> making. We definitely have an ebb and flow of contributions based on the 
> release schedule where feature development and new code is loaded toward 
> the front of the release, and then that is supposed to be cut off toward 
> the 3rd milestone at the end of the release so we can stabilize and 
> focus on bugs.
> 
> In general some of this is common sense. When one person "owns" most of 
> a module in a piece of software they are the expert and therefore bugs 
> due to lack of understanding the bigger picture of that module, or how 
> it fits into the bigger system, should be mitigated. When that person 
> leaves, if others on the team don't have the domain knowledge, there are 
> going to be mistakes. We definitely have parts of the nova codebase that 
> fall into areas that we know are just very touchy and error prone and we 
> avoid changing those if at all possible (block device mappings, quotas, 
> neutronv2.api, nova-network and cells v1 come to mind). This is hard in 
> a big open source project, but is also why we have high standards for 
> core reviewers (those that can approve code contributions) and a 
> ridiculous amount of continuous integration testing.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
>
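The anonymous Gerrit REST queries mentioned above return JSON guarded by an
XSSI-protection prefix that must be stripped before parsing. A small sketch
using a canned sample instead of a live HTTP call (the real endpoint would be
e.g. GET /changes/?q=project:openstack/nova on the review server):

```python
import json

GERRIT_MAGIC_PREFIX = ")]}'"  # Gerrit prepends this XSSI guard to JSON replies

def parse_gerrit_json(raw_text):
    """Strip Gerrit's magic prefix and decode the JSON body."""
    if raw_text.startswith(GERRIT_MAGIC_PREFIX):
        raw_text = raw_text[len(GERRIT_MAGIC_PREFIX):]
    return json.loads(raw_text)

# Canned sample of what GET /changes/?q=project:openstack/nova&n=1 might
# return (fields abbreviated; the exact payload depends on query options):
sample = ")]}'\n" + '[{"_number": 12345, "project": "openstack/nova"}]'
changes = parse_gerrit_json(sample)
print(changes[0]['project'])  # openstack/nova
```

Feeding such parsed change and review records into the model would be one way
to test whether review activity improves the defect predictions.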