Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Tony Breeds
On Wed, Oct 12, 2016 at 10:52:25AM +0200, Kashyap Chamarthy wrote:

> I think what Markus is talking about is the ability to produce
> packages from arbitrary Git commits to be able to test Nova with certain
> new features from libvirt / QEMU which won't be available until a formal
> release is pushed out.

That certainly is the long-term view/aim.
 
> Somewhat analogous to the Fedora virt-preview repository[1]:

Yup, and the plan was always to use this for Fedora images, but no one has
written it yet: at the time we were transitioning from F22 -> F23 and the F23
images in nodepool didn't work.  I'm pretty sure this has been fixed, so we
could probably get this done in Barcelona.

Yours Tony.




Re: [openstack-dev] Event notification descriptors/schemas (? swagger ?)

2016-10-12 Thread Joshua Harlow

Julien Danjou wrote:

On Tue, Oct 11 2016, Joshua Harlow wrote:


Have there been any ideas from folks about splitting those 'event_definitions.yaml'
files into something else (a notifications schema repo?)? I'd be up for helping do
that (nice to have would be an included ability/code-gen(?) to turn those
schemas into code for various languages [starting with the typical ones,
python, go(?), java,...]).

Then we could also hold the emitting projects accountable for their events
being emitted (and the formats and versions they emit), because overall I'd
like to get away from 'the payload format OpenStack services emit could be
described as the Wild West' (found on that events.html site, lol).


I proposed exactly that in 2013 in Hong Kong and it was well received
by Oslo folks. I even started something¹ at some point, but it got so
little traction from projects, and the friction was so high, that I gave up. :)

¹  https://review.openstack.org/#/c/66566/



Ok, time to try again (maybe I'll get a little farther),

Wish me luck, lol

-Josh
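
To make the schema-repo + code-gen idea concrete, here is a minimal sketch in
Python. The schema shape is a simplified stand-in for ceilometer's
event_definitions.yaml (not its real format), and the generator is purely
illustrative:

# Illustrative only: emit a typed Python class from one notification
# schema entry. The schema below is a simplified stand-in, not the
# real event_definitions.yaml format.
import yaml

SCHEMA = """
event_type: compute.instance.create.end
traits:
  instance_id: {type: string}
  memory_mb: {type: int}
"""

def gen_class(schema):
    name = "".join(p.capitalize() for p in schema["event_type"].split("."))
    lines = ["class %s(object):" % name,
             "    def __init__(self, %s):" % ", ".join(schema["traits"])]
    for trait in schema["traits"]:
        lines.append("        self.%s = %s" % (trait, trait))
    return "\n".join(lines)

print(gen_class(yaml.safe_load(SCHEMA)))

The same walk over a schema could just as well fill in go or java templates,
which is the multi-language code generation mentioned above.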



Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Tony Breeds
On Tue, Oct 11, 2016 at 05:53:24PM +0200, Thomas Goirand wrote:
> On 10/11/2016 02:09 PM, Markus Zoeller wrote:
> > * How to create a "*.deb" package out of the source code of
> > libvirt/qemu? (surprisingly enough, I'm still struggling with this)
> 
> What version of libvirt / qemu are you trying to build? libvirt 2.3.0
> was released 6 days ago, and uploaded to Debian unstable 3 days ago. I
> don't think you need to manage the packaging yourself, but maybe just
> backport it to your distribution of choice (Ubuntu?). It is probably the
> same for Qemu (Debian unstable has a fairly recent version). If you see
> any packaging problem, best would be to deal with it with the package
> maintainer (probably best through the Debian BTS).

So there are several phases and we're still near the beginning, but my design was
to borrow the upstream packaging as much as possible and just switch the
tarballs.  Yes, this will require manual intervention, but as you point out, any
packaging problems can be reported to you via the BTS.

Right now the plugin mandates Ubuntu, but that's only because we haven't done
the work to support other distros.

Yours Tony.




Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Tony Breeds
On Tue, Oct 11, 2016 at 02:09:33PM +0200, Markus Zoeller wrote:
> Backstory
> -
> Some time ago, tonyb and I started the work on a gate testing job which
> can install additional packages onto the testing node:

I think it'd be great to talk about this and get a few people interested in
helping up to speed.

We tried to do this during Austin and failed.
> 
> 
> https://github.com/openstack-infra/project-config/blob/b18a8a6cd1ed3865ff46e654e41b4959c342dc15/jenkins/jobs/devstack-gate.yaml#L536
> 
> The installation of the additional packages is done via a
> devstack-plugin called "addition-pkg-repos" (apr):
> 
> https://github.com/openstack/devstack-plugin-additional-pkg-repos/
> 
> The initial trigger for this was the need to test newer versions of
> libvirt and qemu. The plugin itself was supposed to be generic for other
> packages too, which makes it also useful for Cinder and Neutron (and
> probably others too).
> 
> Long story short, I couldn't make any progress there lately due to other
> responsibilities. I still see the need for such a plugin though. Right
> now I help myself with a Vagrant setup:
> 
> 
> https://github.com/markuszoeller/openstack/tree/master/scripts/vagrant/libvirt-qemu-source-U1404-VB
> 
> With that, I could successfully test the "virtlogd" feature in Nova
> which needs (the 5 week old) Qemu 2.7.0:
> 
> https://blueprints.launchpad.net/nova/+spec/libvirt-virtlogd
> 
> 
> Request
> ---
> My question is, if you have interest in this plugin and its
> capabilities, are you at the Summit in Barcelona and do you have time
> for a short working session there? Maybe on Friday during the
> contributors meetup?
> 
> https://etherpad.openstack.org/p/ocata-nova-summit-meetup

I can't make the nova meetup but I'm sure we can find some time.

> Possible action/discussion items
> 
> * How to create a "*.deb" package out of the source code of
> libvirt/qemu? (surprisingly enough, I'm still struggling with this)

Now that trusty is less important this is actually pretty easy.  For the most
part you can grab the Ubuntu package source and just rebuild it.
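
For the record, a rough sketch of that rebuild flow as a small Python driver;
it assumes deb-src entries are enabled in sources.list and that sudo is
available, and the steps are illustrative rather than anything from the
devstack plugin:

# Rough sketch of "grab the Ubuntu package source and rebuild it".
# Assumes deb-src lines in /etc/apt/sources.list and working sudo;
# illustrative, not devstack-plugin-additional-pkg-repos code.
import glob
import subprocess

def run(cmd, cwd=None):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd, cwd=cwd)

run(["sudo", "apt-get", "build-dep", "-y", "qemu"])  # install build deps
run(["apt-get", "source", "qemu"])                   # fetch and unpack source
srcdir = glob.glob("qemu-*/")[0]                     # e.g. qemu-2.7.0/
# Build unsigned binary packages; to test an unreleased version, swap in
# a new upstream tarball (uupdate + dch) before this step.
run(["dpkg-buildpackage", "-us", "-uc", "-b"], cwd=srcdir)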

> * How should we specify the packages and versions to install in the gate
> job?

Yes but not the way we've done it with UCA :(

> * When do we build new packages and increase the version in the gate?

IMO building must be separate from the test phase.  I'd be against building
from tarballs in devstack-plugin-apr.

> * Actually hack some code and push it

Good plan.

I'll try to make some libvirt+qemu packages available for this during the next
week.

Yours Tony.




Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-12 Thread Tony Breeds
On Wed, Oct 12, 2016 at 02:16:42PM +0200, Flavio Percoco wrote:
> Top-posting as I'll try to summarize/re-start/reword/whatever the right word is,
> this thread:
> 
> It seems to me that the problem we're trying to solve here is how we can help
> voters to make more thoughtful choices (note that I'm not saying they currently
> don't. I'm not judging the voters but, as others, I'm pointing fingers at the
> process we have in place). A couple of points have been made around this:
> 
> - We'd like the electorate to be able to ask questions of the candidates
> - The time available doesn't seem to be enough for this
> - The ML is great but it might not be the right format for this, especially with
>   the amount of email going through openstack-dev
> 
> Some (rough) ideas:
> 
> - We could have a common place where we collect the threads that ask questions
>   of the candidates. Ideally, this common place would be kept updated by the
>   members of the community asking these questions. If I start a new thread, I
>   get the link and put it in this "common place". The common place could be a
>   wiki page or anything linkable from the voting URL.
> - Link in the voting URL the place where the information about the questions
>   being asked of the candidates is being aggregated.

That's pretty doable and we could easily trial it.

> - Send the ballots every day, if possible.

I don't think we can send them every day; twice might be doable.  I'd be happy
to talk to Andrew (the CIVS maintainer/owner) about this.

FWIW I think it's very doable to insert a week between the nomination close and
the election open for the discussion.

Yours Tony.




[openstack-dev] [vitrage] about aodh notifier to create a event alarm

2016-10-12 Thread dong . wenjuan
Hi Vitrage folks,

In the aodh notifier plugin, we receive an ACTIVATE_DEDUCED_ALARM_EVENT and
then create an aodh event alarm.
An event alarm carries an `event_rule`, which should include the `event_type`
and `query` that Aodh uses to evaluate the alarm: when it receives a matching
event, it filters with the query to decide whether to fire the alarm. See [1].

But we create the event alarm with only `query` in `event_rule`.
The deduced alarm is created in the `alarm` state, so Aodh will skip
evaluating the already-fired alarm.
The `event_rule` seems unnecessary in the request body, am I right?
Let me know if I missed something. :)

[1] https://github.com/openstack/aodh/blob/master/doc/source/event-alarm.rst
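
For reference, a minimal sketch of such a request made with python-requests;
the endpoint, token, and instance id are placeholders, and the field names
follow the event-alarm doc in [1]:

# Minimal sketch: create an aodh event alarm whose event_rule carries
# both event_type and query. AODH_URL and TOKEN are placeholders.
import json
import requests

AODH_URL = "http://aodh.example.com:8042"  # placeholder endpoint
TOKEN = "..."                              # placeholder keystone token

alarm = {
    "name": "deduced-alarm-example",
    "type": "event",
    "state": "alarm",  # created already fired, as described above
    "event_rule": {
        "event_type": "compute.instance.update",
        "query": [{"field": "traits.instance_id", "op": "eq",
                   "value": "INSTANCE_UUID", "type": "string"}],
    },
}

resp = requests.post(AODH_URL + "/v2/alarms",
                     headers={"X-Auth-Token": TOKEN,
                              "Content-Type": "application/json"},
                     data=json.dumps(alarm))
resp.raise_for_status()
print(resp.json()["alarm_id"])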


BR,
dwj

董文娟   Wenjuan Dong
控制器四部 / 无线产品   Controller Dept Ⅳ. / Wireless Product Operation
 


上海市浦东新区碧波路889号中兴通讯D3
D3, ZTE, No. 889, Bibo Rd.
T: +86 021 85922    M: +86 13661996389
E: dong.wenj...@zte.com.cn
www.ztedevice.com




Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread joehuang
Since most contributors in the Tricircle project are from Asia, I would like to 
know whether it's possible for the project to have a PTG somewhere in Asia before 
the PTG in Atlanta, and then send some representatives from the project to the 
PTG in Atlanta if there are cross-project topics to be discussed with other 
project teams.

(Gathering the team in Asia will make it easier for most of our contributors to 
get the budget from their own organizations; we discussed this topic in 
yesterday's weekly meeting.)

Best Regards
Chaoyi Huang (joehuang)


From: Thierry Carrez [thie...@openstack.org]
Sent: 12 October 2016 17:59
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

Qiming Teng wrote:
> On Tue, Oct 11, 2016 at 10:39:11PM -0500, Michał Jastrzębski wrote:
>> One of the reasons we created the PTG in the first place is that the Summit
>> became big and expensive, and project developers had a harder and harder time
>> attending it due to budget issues.
>
> So a trip to PTG is really cheaper than summit? Is the PTG one
> sponsored by someone?

PTGs happen in more cost-effective locations, airport hubs with cheaper
hotels, which should lower the cost of attending. Yes, I'm pretty sure
traveling to Atlanta downtown for a week will be cheaper than going to
Barcelona, Boston or Sydney downtown for a week.

>> [...]
>> PTG becomes very important for project team, summit arguably will
>> become less important as many of developers will be able to afford
>> only PTGs.
>
> Summit is less (or just NOT) important to developers, emm ... that is
> true if 1) the team knows exactly what users/ops want so they don't even
> bother interacting with them, just focus on getting things done; 2) the
> person who approves your trip request also believes so.
>
>> If we say that "Don't expect Ops at PTG", that means
>> the OpenStack dev community will become even more disconnected from the Ops
>> community.
>
> Wasn't that part of the plan? [...]

Yes, the plan is (amongst other things) to make sure that upstream
developers are available to interact with users (operators, but also app
developers...) during the very week where *all* our community gets
together (the OpenStack Summit). Currently we try to get things done at
the same time, which results in hard choices between listening and
doing. By clearly setting out separate times for each activity, we make
sure we stay focused.

For an ops-focused team like Kolla, I'd argue that participating in
OpenStack Summits (and Ops midcycles, to be honest) is essential. That
doesn't mean that everyone has to go to every single event, but the
Kolla team should definitely self-organize to make sure that enough
Kolla people go to those events to close the feedback loop. The PTG is
not where the feedback loop is closed. It's a venue for the *team* to
get together and build the shared understandings (priorities,
assignment, bootstrap work) necessary for a successful development cycle
(some teams need that more than others, so YMMV).

--
Thierry Carrez (ttx)



[openstack-dev] [gnocchi] host field information flapping for instance resource-type

2016-10-12 Thread Jake Yip
Hi all!

We've been trying to get gnocchi working for us, and have been hitting some
performance bottlenecks. It seems that the 'host' field for the instance
resource-type flaps between host ids and host names. This makes the gnocchi
dispatcher issue an update per resource, which is an HTTP PATCH call. The
large number of HTTP calls causes gnocchi to be unable to keep up with
ceilometer, essentially making it unworkable for us in production.

I've tried removing this update code and can get ~5x the performance. So I
guess the questions are,

1) Why is the host information flapping? Are there different services
generating this data?

2) If gnocchi data is viewable by the user, should we only have the host id
in the resource? (similar to what nova shows to the user)

3) Anyone running gnocchi in prod, can you let us know how it is going?

Any pointers for tracking down this issue would be appreciated! There is a
bug[1] open for it, but I hope this email can reach more people.

FYI, we are running Mitaka ceilometer and gnocchi, but we have patched
ceilometer to use the Newton gnocchi dispatcher to help with performance.

[1] https://bugs.launchpad.net/ceilometer/+bug/1569037
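
As a toy sketch of the failure mode (not the actual dispatcher source): the
dispatcher PATCHes whenever an incoming attribute differs from what gnocchi
already stores, so a host value that alternates between two spellings forces
a PATCH on nearly every sample.

# Toy sketch of the flapping problem; not the real gnocchi dispatcher.
stored = {}  # resource_id -> attributes gnocchi already knows about

def on_sample(resource_id, attrs, patch_resource):
    known = stored.setdefault(resource_id, {})
    delta = {k: v for k, v in attrs.items() if known.get(k) != v}
    if delta:                               # any attribute changed?
        patch_resource(resource_id, delta)  # one HTTP PATCH per update
        known.update(delta)

# One service reports an opaque host id, another a hostname, so the
# value "changes" on every other sample and each change costs a PATCH:
patches = []
for host in ["8cb9a93...", "compute-01", "8cb9a93...", "compute-01"]:
    on_sample("instance-1", {"host": host},
              lambda rid, delta: patches.append((rid, delta)))
print(len(patches))  # 4 PATCH calls for 4 samples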

Jake Yip,
DevOps Engineer,
Core Services, NeCTAR Research Cloud,
The University of Melbourne


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Pradeep Kilambi
On Wednesday, October 12, 2016, Emilien Macchi  wrote:

> On Wed, Oct 12, 2016 at 7:10 AM, Giulio Fidente wrote:
> > hi,
> >
> > we introduced support for the deployment of Ceph in the liberty release
> so
> > that it could optionally be used as backend for one or more of Cinder,
> > Glance, Nova and more recently Gnocchi.
> >
> > We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
> > dedicated ceph-storage nodes so a deployment of OpenStack with Ceph would
> > need at least 1 more additional node to host a Ceph OSD.
> >
> > In our HA scenario the storage backends are configured as follows:
> >
> > Glance -> Swift
> > Nova (ephemeral) -> Local
> > Cinder (persistent) -> LVM (on controllers)
> > Gnocchi -> Swift
> >
> > The downside of the above configuration is that Cinder volumes cannot be
> > replicated across the controller nodes and become unavailable if a
> > controller fails, while production environments generally expect
> persistent
> > storage to be highly available. Cinder volumes instead could even get
> lost
> > completely in case of a permanent failure of a controller.
> >
> > With the Newton release and the composable roles we can now deploy Ceph
> OSDs
> > on the compute nodes, removing the requirement we had for an additional
> node
> > to host a Ceph OSD.
> >
> > I would like to ask for some feedback on the possibility of deploying
> Ceph
> > by default in the HA scenario and use it as backend for Cinder.
> >
> > Also using Swift as backend for Glance and Gnocchi is enough to cover the
> > availability issue for the data, but it also means we're storing that
> data
> > on the controller nodes which might or might not be wanted; I don't see a
> > strong reason for defaulting them to Ceph, but it might make more sense
> when
> > Ceph is available; feedback about this would be appreciated as well.
> >
> > Finally a shared backend (Ceph) for Nova would allow live migrations but
> > probably decrease performance for the guests in general; so I'd be
> against
> > defaulting Nova to Ceph. Feedback?
>
> +1 on making ceph default backend for nova/glance/cinder/gnocchi.
> I think this is the most common use-case we currently have in our
> deployments AFAIK.


+1 from me. Ceph is the recommended backend for gnocchi and this will
help a lot with some recent performance issues we have seen.

- Prad



> Also, I'll continue to work on scenarios jobs (scenario002 and
> scenario003 without Ceph to cover other use cases).
>
> > --
> > Giulio Fidente
> > GPG KEY: 08D733BA
> >
> --
> Emilien Macchi


Re: [openstack-dev] [tripleo] PTG space request

2016-10-12 Thread Eoghan Glynn


> Emilien Macchi wrote:
> > I would like to request for some space dedicated to TripleO project
> > for the first OpenStack PTG.
> > 
> > https://www.openstack.org/ptg/
> > 
> > The event will happen in February 2017 during the next PTG in Atlanta.
> > Any feedback is welcome,
> 
> Just a quick note: as you can imagine we have finite space at the event,
> and the OpenStack Foundation wants to give priority to teams which have
> a diverse affiliation (or which are not tagged "single-vendor").
> Depending on which teams decide to take advantage of the event and which
> don't, we may or may not be able to offer space to single-vendor
> projects -- and TripleO is currently tagged single-vendor.
> 
> The rationale is, the more organizations are involved in a given project
> team, the more value there is to offer common meeting space to that team
> for them to sync on priorities and get stuff done.

One of the professed primary purposes of splitting off the PTG was to enable
cross-project collaboration, so as to avoid horizontally-oriented contributors
needing to attend multiple midcycle meetups.

Denying PTG space to a project that clearly requires an immense amount of
cross-project collaboration seems to run counter to those stated goals.

The need for cross-project collaboration seems to me orthogonal to the
diversity of corporate affiliation within any individual project, given
the potential diversity within the other projects they may need to
collaborate with.

So the criteria applied should concentrate less on individual project
diversity, and much more on the cross-cutting nature of that project's
concerns.

Cheers,
Eoghan

> If more than 90% of
> contributions / reviews / core reviewers come from a single
> organization, there are fewer coordination needs and less value in having
> all those people from a single org travel to a distant place to have
> a team meeting. And as far as recruitment of new team members go (to
> increase that diversity), the OpenStack Summit will be a better venue to
> do that.
> 
> I hope we'll be able to accommodate you, though. And in all cases
> TripleO people are more than welcome to join the event to coordinate
> with other teams. It's just not 100% sure we'll be able to give you a
> dedicated room for multiple days. We should know better in a week or so,
> once we get a good idea of who plans to meet at the event and who doesn't.




[openstack-dev] [TripleO][CI] Isolated Network Testing

2016-10-12 Thread Dan Sneddon
I recently evaluated our needs for testing coverage for TripleO
isolated networking. I wanted to post my thoughts on the matter for
discussion, which will hopefully lead to a shared understanding of what
improvements we need to make. I think we can cover the majority of
end-user requirements by testing the following minimum scenarios:

1. single-nic-vlans (one nic, all VLANs trunked, great for virt and POCs)

2. Provisioning + bond (to test basic bonding functionality)

3. Bonded provisioning (perhaps one bond with all VLANs)

4. Spine and leaf (in the near future)

Within those four scenarios, we should ensure that we are testing both
IPv4 and IPv6, and both traditional Neutron SNAT/Floating IPs and DVR.

The first scenario is well covered. I think scenario 2 is covered by a
review posted by Ben Nemec recently [1].

I would very much like to see us testing scenario 3 with a resilient
bond for the provisioning interface as well. This used to require LACP
and fallback to a single link, but I believe recent changes to the PXE
boot images may allow this over links without special switch
configuration. I'm currently doing testing in my lab, and I hope I can work
with the TripleO CI team to help make this happen upstream.

Spine and leaf (routed networks) support may require specific
configuration of the routing hardware in order to support PXE booting
across router boundaries. Specifically, a DHCP proxy needs to be
configured in order to forward DHCP requests from a remote VLAN to the
Undercloud. If this is not possible in our bare-metal CI environments,
then we may need to develop a method of testing this in OVB.

I'm very interested in finding out whether it may be possible to
have DHCP proxy (or "DHCP helper-address") configured on the router
hardware for CI VLANs. If we can deploy this in bare metal, I think it
will save us a lot of time and effort over recreating a routed
environment in OVB. I believe we could use OpenDaylight or another
OpenFlow controller to simulate routers in virtual environments, or
perhaps use dnsmasq in DHCP proxy mode on the OVB host to forward
requests from the various bridges representing remote VLANs to the
Undercloud br-ctlplane bridge. But it would be a fair amount of work to
put that together.

I don't believe we currently test IPv6 or DVR (please correct me if I'm
mistaken). Do we have plans in the works to add these to any jobs?

Finally, I wonder if we need to test any exotic configurations, such as
OVS+DPDK, OpenDaylight, etc.

OVS+DPDK would require compatible hardware. I'm interested in hearing
feedback about how critical this would be in the grand scheme of
things. It isn't yet clear to me that OVS+DPDK is going to have
widespread adoption, but I do recognize that there are some NFV users
that depend on this technology.

OpenDaylight does not require hardware changes AFAIK, but the drivers
and network interface config differs significantly from ML2+OVS. I'm
helping some ODL developers make changes that will allow deployment via
TripleO, but these changes won't be tested by CI.

Of course, there are elements of OVS+DPDK and ODL that get tested as
part of Neutron CI, but now we are implementing TripleO-based
deployment of these technologies, I wonder if we should endeavor to
test them in CI. I suppose that begs the question, if we are testing
those, then why not Contrail, etc.? I don't know where we draw the
line, but it seems that we might want to at least periodically test
deploying some other Neutron drivers via TripleO.

[1] - https://review.openstack.org/#/c/385562

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread gordon chung


On 12/10/16 05:43 PM, Mike Perez wrote:
> Besides the governance site, there is also the project navigator [1] which is
> a more friendly landing page on projects and their tags. Although it does not
> today capture separate deliverables.
>
> Assuming the README files aren't being manually updated, I don't have a 
> problem
> with this idea.
>
> [1] - https://www.openstack.org/software/project-navigator/

sorry, unrelated to thread, but is that page manually updated? i noticed 
the Telemetry project is still not sync'd in terms of what is in 
projects.yaml[1] and the page. i imagine it was updated recently since 
i'm correctly not PTL anymore.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

cheers,

-- 
gord



[openstack-dev] [new][networking-cisco]

2016-10-12 Thread Sam Betts (sambetts)
The networking-cisco development team is pleased to announce the release of:

networking-cisco 4.0.0

This release is compatible with OpenStack Mitaka and Newton.

Download the package from:

https://pypi.python.org/pypi/networking-cisco


Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread Mike Perez
On 14:50 Oct 12, Flavio Percoco wrote:
> Greetings,
> 
> One of the common complaints about the existing project organization in the big
> tent is that it's difficult to wrap our heads around the many projects there
> are, their current state (in/out the big tent), their tags, etc.
> 
> This information is available on the governance website[0]. Each official
> project team has a page there containing the information related to the
> deliverables managed by that team. Unfortunately, I don't think this page is
> checked often enough and I believe it's not known by everyone.

Besides the governance site, there is also the project navigator [1] which is
a more friendly landing page on projects and their tags. Although it does not
today capture separate deliverables.

Assuming the README files aren't being manually updated, I don't have a problem
with this idea.

[1] - https://www.openstack.org/software/project-navigator/
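
As a sketch of the kind of automation that would keep those READMEs from
being manually updated (the YAML below is a simplified stand-in for
reference/projects.yaml in openstack/governance, whose real schema differs):

# Illustrative sketch: render a README metadata snippet from
# governance data; the YAML is a simplified stand-in.
import yaml

GOVERNANCE = """
nova:
  tags: ["tc:approved-release"]
  deliverables:
    nova: {repos: [openstack/nova]}
"""

def readme_snippet(team, data):
    tags = ", ".join(data.get("tags", [])) or "(none)"
    return "Team: %s\nTags: %s" % (team, tags)

for team, data in yaml.safe_load(GOVERNANCE).items():
    print(readme_snippet(team, data))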

-- 
Mike Perez



Re: [openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik (diltram) as Octavia Core

2016-10-12 Thread Michael Johnson
That is quorum from the cores, welcome Lubosz!

Michael


On Wed, Oct 12, 2016 at 1:26 PM, Doug Wiegley
 wrote:
> +1
>
>> On Oct 10, 2016, at 3:40 PM, Brandon Logan  
>> wrote:
>>
>> +1
>>
>> On Mon, 2016-10-10 at 13:06 -0700, Michael Johnson wrote:
>>> Greetings Octavia and developer mailing list folks,
>>>
>>> I propose that we add Lubosz Kosnik (diltram) as an OpenStack Octavia
>>> core reviewer.
>>>
>>> His contributions [1] are in line with other cores and he has been an
>>> active member of our community.  He regularly attends our weekly
>>> meetings, contributes good code, and provides solid reviews.
>>>
>>> Overall I think Lubosz would make a great addition to the core review
>>> team.
>>>
>>> Current Octavia cores, please respond with +1/-1.
>>>
>>> Michael
>>>
>>> [1] http://stackalytics.com/report/contribution/octavia/90
>>>



[openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik (diltram) as Octavia Core

2016-10-12 Thread Phillip Toohill

+1 good choice!


Re: [openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik (diltram) as Octavia Core

2016-10-12 Thread Doug Wiegley
+1

> On Oct 10, 2016, at 3:40 PM, Brandon Logan  
> wrote:
> 
> +1
> 
> On Mon, 2016-10-10 at 13:06 -0700, Michael Johnson wrote:
>> Greetings Octavia and developer mailing list folks,
>> 
>> I propose that we add Lubosz Kosnik (diltram) as an OpenStack Octavia
>> core reviewer.
>> 
>> His contributions [1] are in line with other cores and he has been an
>> active member of our community.  He regularly attends our weekly
>> meetings, contributes good code, and provides solid reviews.
>> 
>> Overall I think Lubosz would make a great addition to the core review
>> team.
>> 
>> Current Octavia cores, please respond with +1/-1.
>> 
>> Michael
>> 
>> [1] http://stackalytics.com/report/contribution/octavia/90
>> 




[openstack-dev] [nova] Nova New Contributor Update

2016-10-12 Thread Augustina Ragwitz
During the Newton Design Summit the Nova team had a session regarding
new contributor activities
(https://etherpad.openstack.org/p/newton-nova-getting-started). We
agreed that we should update the Mentoring wiki page
(https://wiki.openstack.org/wiki/Nova/Mentoring) with a list of projects
and resources for new folks, and I volunteered to take on the "Mentoring
Czar" role as a result (which I changed to "New Contributor Liaison"). I
moved our "low hanging fruit" etherpad contents over to the Mentoring
wiki and have done my best to keep it current. I also want to thank the
sub-project team members who updated their sections on that wiki page
and I would encourage those who haven't to make sure your contact info
and any newbie-friendly project info is up to date.

During the Newton release cycle, I've spoken with several new
contributors over IRC, via email, and in person. I think we've done a
great job of addressing the need to have someone to provide a first
contact and make suggestions on things to get started on. I'd like to
take this further and continue to follow up with new contributors to
find out what's going well and what they are struggling with. As new
contributors become more established in the community, they may have
some valuable feedback for others looking to get involved that would be
worth capturing. In addition, if new contributors decide not to continue
with Nova, I think it could be valuable to find out why.

Over the past 6 months, I didn't just reach out to new contributors but
I also spoke informally with a few established Nova community members
about their on-boarding experience. The thing I really took away from
these conversations is how helpful it would be for new contributors to
have access to these insights and expectations. In the next few months,
I would like to reach out to established Nova contributors to talk about
their on-boarding experience and what advice they'd give to new
contributors looking to get involved with Nova. Ultimately I'd like to
use these conversations to put together a document for new contributors.
If you would like to participate, please feel free to email me or let me
know on irc (auggy) and I'll add you to my list!

Proposed Nova New Contributor Goals for Ocata:
1) Determine what new contributor/nova onboarding information/feedback
we'd like to collect and how we should collect it
2) Put together a "New Contributor Advice" document based on feedback
from established Nova contributors

I will be at the Barcelona Summit and can add an agenda item to the
Friday Nova Unconference if folks feel it's necessary. I had originally
proposed this as a session but given the amount on Nova's plate for this
super short release, we determined a mailing list post for this would
suffice.

I started a Nova New Contributor brainstorm etherpad for the original
session I'd proposed, feel free to check it out and add any thoughts -
https://etherpad.openstack.org/p/nova-new-contributor-brainstorm

-- 
Augustina Ragwitz
Señora Software Engineer
---
Ask me about contributing to OpenStack Nova!
https://wiki.openstack.org/wiki/Nova/Mentoring

Waiting for your change to get through the gate? Clean up some Nova
bugs!
http://45.55.105.55:8082/bugs-dashboard.html
---
email: aragwitz+n...@pobox.com
irc: auggy



Re: [openstack-dev] [keystone][horizon] keystone/horizon integration session in barcelona

2016-10-12 Thread Rob Cresswell
This sounds great! Thanks for organising, Steve.

Rob

On 11 October 2016 at 22:32, Richard Jones wrote:
Thanks, Steve, this will be a valuable session!

On 12 October 2016 at 08:14, Steve Martinelli wrote:
The keystone team had a spare fishbowl session, and we decided to use it to 
collaborate with the horizon team on a few long-standing issues we've had 
between the two projects.

You can view the session online:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16907

Details here:
Wed 26, 3:55pm-4:35pm
Keystone: Keystone/Horizon integration session (Fishbowl)
https://etherpad.openstack.org/p/ocata-keystone-horizon






Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Shamail


> On Oct 12, 2016, at 4:09 AM, Qiming Teng  wrote:
> 
>> On Tue, Oct 11, 2016 at 10:39:11PM -0500, Michał Jastrzębski wrote:
>> Hello Tom,
>> 
>> I must say I think this is bad news - especially for projects like
>> Kolla - ops centric.
>> One of the reasons we created the PTG in the first place is that the Summit
>> became big and expensive, and project developers had a harder and harder time
>> attending it due to budget issues.
> 
> So a trip to PTG is really cheaper than summit? Is the PTG one
> sponsored by someone?
> 
>> PTG would offer many of these devs
>> the opportunity to talk to their peers and other project developers, and to
>> build the OpenStack dev community.
> 
>> If a project attends the PTG, and most of them
>> plan to (again, Kolla included), that is a trip for the whole project team.
> 
> A big IF here ...
> 
>> If we hold 2 PTGs per year, that's big hit on travel budget (but still
>> smaller than summit).
>> 
>> PTG becomes very important for project team, summit arguably will
>> become less important as many of developers will be able to afford
>> only PTGs.
> 
> Summit is less (or just NOT) important to developers, emm ... that is
> true if 1) the team knows exactly what users/ops want so they don't even
> bother interacting with them, just focus on getting things done; 2) the
> person who approves your trip request also believes so.
> 
>> If we say that "Don't expect Ops at PTG", that means
>> the OpenStack dev community will become even more disconnected from the Ops
>> community.
> 
> Wasn't that part of the plan? Or maybe the Ops will travel four times a
> year, go to the summit twice for (watching) shows and go to the PTGs
> twice to interact with the team that is busy discussing implementation
> details ...
My understanding is there will be two ops meetups (as there are currently) with 
one being aligned with the OpenStack summit and one being during the mid-cycle 
window (although not at the PTG itself).  I think this could still work if a 
few people from the technical leadership of the project attend the summit/Forum 
(the new combined event for discussing user needs/feedback). 

This would mean that the majority of the engineering organization doesn't need 
to be at the summit but a few cores/PTL from a project should still plan to 
attend and then share their findings/discussions with the broader project team. 
 This would also mean that travel doesn't change for a few people per project 
but the majority of the team can plan on attending just the PTGs from a 
budgeting perspective.

This would also help establish a cadence for activities at the ops meetups:
the meetups aligned with the summits could be used to share feedback and
discuss pain points, while the meetups during the mid-cycle window could be
used to identify new priority items that need that feedback and discussion.

> 
>> Let's not forget that OpenStack is ultimately an operators'
>> tool; they need to care for it, and in my opinion having a close
>> relationship with them is extremely important for the good of the project. If
>> we raise the cost of keeping this relationship, that might really hurt
>> OpenStack.
> 
>> Cheers,
>> Michal
> 
> I really hope I was totally wrong.
> 
> Regards,
>  Qiming
> 
> 


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Clint Byrum
Excerpts from Jaesuk Ahn's message of 2016-10-12 15:08:24 +:
> It can be cheap if you are in the US. However, for Asia folks, it is not
> that cheap considering it is all overseas travel. In addition, an all-in-one
> event like the current summit makes it much easier for us to get travel funding
> from the company, since the company only needs to send everyone (tech, ops,
> business, strategy) to one event. Even as ops or developers, doing a
> presentation or a meeting with one or two important companies can be a very
> good excuse to get the travel money.
> 

This is definitely on the list of concerns I heard while the split was
being discussed.

I think the concern is valid, and we'll have to see how it affects
attendance at PTGs and summits.

However, I am not so sure the overseas cost is being accurately
characterized. Of course, the complications are higher with immigration
details, but ultimately hotels around international hub airports are
extremely cheap, and flights tend to be quite a bit less expensive and
more numerous to these locations. You'll find flights from Narita to
LAX for < $500, whereas you'd be hard-pressed to find Narita to Boston
for under $600, and they'll be less convenient, possibly requiring more
hotel days.

Also worth considering is how cheap the space is for the PTG
vs. Summit. Without the need for large expo halls, keynote speakers,
catered lunch and cocktail hours, we can rent a smaller, less impressive
space. That should mean either a cheaper ticket price (if there is one
at all) or more sponsored travel to the PTG. Either one of those should
help alleviate the concerns about travel budget.

> I understand the need for a separate event to make developers stay focused. I
> am just sharing my experience of how difficult it is for us to get funding for
> overseas events more than once per year. In my case, as ops (previously) and
> product manager (now), even though I don't code actively, attending the design
> summit (and interacting with developers) has been very helpful for
> understanding what is really going on in openstack so that I can make the right
> decisions.
> 

Many devs will still come to the summit. The same ones who have been
coming and running lots of fishbowls will be there, presumably running
smaller fishbowls that are _more_ ops focused, more user focused,
and more design focused, because the contributors won't be feeling a
need to be there to get the implementation planned out at that moment;
they know there's a place for doing that.



Re: [openstack-dev] [tripleo] Setting up to 3rd party CI OVB jobs

2016-10-12 Thread James Slagle
On Wed, Oct 12, 2016 at 1:32 PM, Dan Prince  wrote:
> On Fri, 2016-10-07 at 09:03 -0400, Paul Belanger wrote:
>> Greetings,
>>
>> I wanted to propose a work item, that I am happy to spearhead, about
>> setting up
>> a 3rd party CI system for tripleo project. The work I am proposing,
>> wouldn't
>> actually affect anything today about tripleo-ci but provide a
>> working example
>> of how 3rd party CI will work and a potential migration path.
>>
>> This is just one example of how it would work, obviously everything
>> is open for
>> discussions but I think you'll find the plan to be workable.
>> Additionally, this
>> topic would only apply to OVB jobs, existing jobs already running on
>> cloud
>> providers from openstack-infra would not be affected.
>
>
> The plan you describe here sounds reasonable. Testing out a 3rd party
> system in parallel to our existing CI causes no harm and certainly
> allows us to evaluate things and learn from the new setup.
>
> A couple of things I would like to see discussed a bit more (either
> here or in a new thread if deemed unrelated) are how we benefit from
> these changes in making the OVB jobs 3rd party.
>
> There are at least 3 groups who likely care about this along with how
> this benefits them:
>
> -the openstack-infra team:
>
>   * standardization: doesn't have to deal with special case OVB clouds
>
> -the tripleo OVB cloud/CI maintainers:
>
>   * Can manage the 3rd party cloud how they like it. Using images or whatever 
> with less regard for openstack-infra compatibility.
>
> -the tripleo core team:
>
>   * The OVB jobs are mostly the same. The maintenance is potentially
> further diverging from upstream though. So is there any benefit to 3rd
> party for the core team? Unclear to me at this point. The OVB jobs are
> still the same. They aren't running any faster than they are today. The
> maintenance of them might even get harder for some due to the fact that
> we have different base images across our upstream infra multinode jobs
> and what we run via the OVB 3rd party testing.

Benefits I see include:

- moving to 3rd party CI potentially means moving the jobs to a
different cloud that the core team doesn't have to maintain. If we
move tripleo-test-cloud-rh2 to rdoproject.org's nodepool instead of
infra's, I think that is a step in the direction of having the
rdoproject.org team maintain this cloud (eventually as part of RDO
cloud) in the future. That would free up those responsibilities from
the tripleo-core team. There may be some on the tripleo-core team that
still want to participate in maintaining that cloud, and they should
feel free to do so as I don't think we're drawing strong
organizational lines here. But, ultimately I would like to see the
tripleo-core team not be on the hook for maintaining a multi
region/site public cloud(s). That frees up the core team to do
development work, reviews, actually fix real CI failures, etc.

- using custom images would be done so that the jobs do indeed run
faster. So those two of your points contradict a bit: while it would be
more maintenance, we would be taking it on so that the jobs run faster.

- 3rd party CI jobs can actually vote in the check queue whereas
current tripleo-ci ovb jobs cannot vote at all.

>
> 
>
> The tripleo-ci end-to-end test jobs have always fallen into the high
> maintenance category. We've only recently switched to OVB and one of
> the nice things about doing that is we are using something much closer
> to stock openstack vs. our previous CI cloud. Sure there are some OVB
> configuration differences to enable testing of baremetal in the cloud
> but we are using more OpenStack to drive things. So by simply using
> more OpenStack within our CI we should be more closely aligning with
> infra. A move in the right direction anyway.
>
> Going through all this effort I really would like to see all the teams
> gain from the effort. Like, for me the point of having upstream
> tripleo-ci tests is that we catch breakages. Breakages that no other
> upstream projects are catching. And the solution to stopping those
> breakages from happening isn't IMO to move some of the most valuable CI
> tests into 3rd party. That may cover over some of the maintenance rubs
> in the short/mid term perhaps. But I view it as a bit of a retreat in
> where we could be with upstream testing.
>
> So rather than just taking what we have in the OVB jobs today and
> making the same, long running (1.5 hours +) CI job (which catches lots
> of things) could we re-imagine the pipeline a bit in the process so we
> improve this? I guess my concern is we'll go to all the trouble to move
> this and we'll actually negatively impact the speed with which the
> tripleo core team can land code instead of increasing it. I guess what
> I'm asking is: in doing this move, can we raise the bar for TripleO core
> at all too?

The full end to end tests are valuable and we definitely need them.

But, and this may be quite 

Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Devananda van der Veen


On 10/12/2016 05:01 AM, Dmitry Tantsur wrote:
> Hi folks!
> 
> I'd like to propose a plan on how to simultaneously extend the coverage of our
> jobs and reduce their number.
> 
> Currently, we're running one instance per job. This was reasonable when the
> coreos-based IPA image was the default, but now with tinyipa we can run up to 
> 7
> instances (and actually do it in the grenade job). I suggest we use 6 fake bm
> nodes to make a single CI job cover many scenarios.
> 
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool) to 
> be
> more in sync with how 3rd party CI does it. A special configuration option 
> will
> be used to enable multi-instance testing to avoid breaking 3rd party CI 
> systems
> that are not ready for it.
> 
> To ensure coverage, we'll only leave a required number of nodes "available", 
> and
> deploy all instances in parallel.
> 
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
> 
> Each job will cover the following scenarios:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
> 
>  <*> - in the future, when we figure out UEFI testing
>  <**> - we're moving away from defaulting to netboot, hence only one scenario
> 
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial 
> right
> away.
> 
> Any comments, ideas and suggestions are welcome.

Huge +1 on this from me, as well.

I am also in favor of some of the other suggestions on this thread, namely,
moving API testing over to functional tests so that those can be run more
quickly / with less resources / without affecting tempest scenario tests.

I also think we should begin defining additional scenario tests to cover things
we are not covering today (e.g., adopting a running instance), as Vasyl already
pointed out. But I don't think that conflicts or prevents the changes you're
suggesting, Dmitry.
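
As a sketch of how one job could drive all six of Dmitry's scenarios across
six fake bm nodes in parallel (the scenario fields and the deploy() helper
are hypothetical, not real tempest or devstack interfaces):

# Hypothetical sketch of one CI job covering six scenarios in parallel;
# deploy() stands in for "deploy one instance on a fake bm node and
# verify it boots". Not a real tempest/devstack interface.
from concurrent import futures

SCENARIOS = [
    dict(image="partition", boot="local",   table="msdos", firmware="bios"),
    dict(image="partition", boot="local",   table="gpt",   firmware="bios"),
    dict(image="partition", boot="local",   table="gpt",   firmware="uefi"),
    dict(image="partition", boot="netboot", table="msdos", firmware="bios"),
    dict(image="wholedisk", boot="local",   table="msdos", firmware="bios"),
    dict(image="wholedisk", boot="local",   table="gpt",   firmware="uefi"),
]

def deploy(scenario):
    print("deploying %(image)s / %(boot)s / %(table)s / %(firmware)s"
          % scenario)
    return True  # a real job would assert the instance went active

# one fake bm node per scenario, all deployments started in parallel
with futures.ThreadPoolExecutor(max_workers=len(SCENARIOS)) as pool:
    assert all(pool.map(deploy, SCENARIOS))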

-Deva



Re: [openstack-dev] [ovs-discuss] [ovn][neutron] networking-ovn 1.0.0 release (newton) -- port-security

2016-10-12 Thread Russell Bryant
On Tue, Oct 11, 2016 at 2:47 PM, Murali R  wrote:

> Hi,
>
> Please clarify whether port security is required to be enabled with the newton
> release when installing OVN. The install.rst says it must be. In many of my
> use cases I want to disable port security, which is what I currently do with
> devstack. I would like to know if either ovn or neutron will have
> contentions if port security is disabled at the neutron server.
>

You can disable it.  There's no problem with that.  It will just disable
related features: L2 and L3 port security and security groups.

-- 
Russell Bryant


Re: [openstack-dev] [tripleo] Setting up to 3rd party CI OVB jobs

2016-10-12 Thread Paul Belanger
On Tue, Oct 11, 2016 at 05:37:54PM +0100, Derek Higgins wrote:
> On 7 October 2016 at 14:03, Paul Belanger  wrote:
> > Greetings,
> >
> > I wanted to propose a work item, that I am happy to spearhead, about 
> > setting up
> > a 3rd party CI system for tripleo project. The work I am proposing, wouldn't
> > actually affect anything today about tripleo-ci but provide a working 
> > example
> > of how 3rd party CI will work and a potential migration path.
> 
> Great; if we are to transition to 3rd party CI, getting a trial up
> and running first would help minimize downtime when we
> move jobs in the future.
> 
> >
> > This is just one example of how it would work, obviously everything is open 
> > for
> > discussions but I think you'll find the plan to be workable. Additionally, 
> > this
> > topic would only apply to OVB jobs, existing jobs already running on cloud
> > providers from openstack-infra would not be affected.
> >
> > What I am proposing is we move tripleo-test-cloud-rh2 (currently disabled) 
> > from
> > openstack-infra (nodepool) to rdoproject (nodepool).  This give us a cloud 
> > we
> > can use for OVB; we know it works because OVB jobs have run on it before.
> 
> +1, there are some users currently on RH2 using it as a dev
> environment, but if we start small this won't be a problem and those
> users should eventually be moving to a different cloud.
> 
> >
> > There is a few issues we'd first need to work on, specifically since
> > rdoproject.org is currently using SoftwareFactory[1] we'd need to have them
> > adding support for nodepool-builder. This is needed so we can use the 
> > existing
> > DIB elements that openstack-infra does to create centos-7 images (which 
> > tripleo
> > uses today). We have 2 options, wait for SF team to add support for this (I
> > don't know how long that is, but they know of the request) or we manually 
> > setup
> > a external nodepool-builder instance for rdoproject.org, which connects to
> > nodepool.rdoproject.org via gearman (I suggest we do this).
> 
> As a 3rd option, is it possible to just use the centos cloud image
> directly? The majority of the data cached on the DIB built image isn't
> actually used by tripleo-ci?
> 
Yes, this is possible, but it will require somebody to step up to do the work. For
me, the easier path was getting nodepool-builder going.

> >
> > Once that issue is solved, things are a little easier.  It would just be a
> > matter of porting upstream CI configuration to rdoproject.org and validating
> > images, JJB jobs and tests. Cloud credentials would be removed from
> > openstack-infra and added to rdoproject.org.
> >
> > I'd basically need help from rdoproject (eg: dmsimard) with some of the 
> > admin
> > tasks, along with a VM for nodepool-builder. We already have the 3rd party 
> > CI
> > bits setup in rdoproject.org, we are actually running DLRN builds on
> > python-tripleoclient / python-openstackclient upstream patches.
> 
> Sounds good (assuming the RDO community is OK with allowing us to add
> jobs over there).
> 
> >
> > I think the biggest step is getting nodepool-builder working with Software
> > Factory, but once that is done, it should be straightforward work.
> >
> > Now, whether SoftwareFactory is the long-term home for this system is open
> > for debate.  Obviously, rdoproject has the majority of this infrastructure
> > in place, so it makes for a good place to run tripleo-ci OVB jobs.
> > Otherwise, if there
> > are issues, then tripleo would have to stand up their own
> > jenkins/nodepool/zuul
> > infrastructure and maintain it.
> >
> > I'm happy to answer questions,
> > Paul
> >
> > [1] http://softwarefactory-project.io/
> >


Re: [openstack-dev] [QA][Ironic]Removal of Ironic tests from Tempest

2016-10-12 Thread Jim Rollenhagen
On Wed, Oct 12, 2016 at 1:01 PM, Jordan Pittier 
wrote:

> Hi guys,
> As you may know we are pushing projects to use Tempest plugins and we are
> only keeping "core" projects' tests in the Tempest tree.
>
> There have been several attempts to remove Ironic from Tempest, so I guess
> it doesn't come as a surprise (hopefully). I am starting to work on the
> removal now, as it's super early in the dev cycle, now is a good time.
> Expect a patch soon.
>

Awesome, thanks!

Thiago was working on this, but I haven't seen him in some time.

If you want to pick up his patches and finish them, I'm sure it wouldn't
be a problem. :)

https://review.openstack.org/#/c/355591/
https://review.openstack.org/#/c/355586/
https://review.openstack.org/#/c/358116/

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Setting up to 3rd party CI OVB jobs

2016-10-12 Thread Dan Prince
On Fri, 2016-10-07 at 09:03 -0400, Paul Belanger wrote:
> Greetings,
> 
> I wanted to propose a work item, that I am happy to spearhead, about
> setting up a 3rd party CI system for the tripleo project. The work I am
> proposing wouldn't actually affect anything today about tripleo-ci but
> would provide a working example of how 3rd party CI will work and a
> potential migration path.
> 
> This is just one example of how it would work, obviously everything
> is open for
> discussions but I think you'll find the plan to be workable.
> Additionally, this
> topic would only apply to OVB jobs, existing jobs already running on
> cloud
> providers from openstack-infra would not be affected.


The plan you describe here sounds reasonable. Testing out a 3rd party
system in parallel to our existing CI causes no harm and certainly
allows us to evaluate things and learn from the new setup.

A couple of things I would like to see discussed a bit more (either
here or in a new thread if deemed unrelated) are how we benefit from
making the OVB jobs 3rd party.

There are at least 3 groups who likely care about this, along with how
it benefits them:

-the openstack-infra team:

  * standardization: doesn't have to deal with special case OVB clouds

-the tripleo OVB cloud/CI maintainers:

  * Can manage the 3rd party cloud how they like it. Using images or whatever 
with less regard for openstack-infra compatibility.

-the tripleo core team:

  * The OVB jobs are mostly the same, but their maintenance is potentially
diverging further from upstream. So is there any benefit to 3rd
party for the core team? Unclear to me at this point. The OVB jobs are
still the same. They aren't running any faster than they are today. The
maintenance of them might even get harder for some, because we would
have different base images across our upstream infra multinode jobs
and what we run via the OVB 3rd party testing.



The tripleo-ci end-to-end test jobs have always fallen into the high
maintenance category. We've only recently switched to OVB and one of
the nice things about doing that is we are using something much closer
to stock openstack vs. our previous CI cloud. Sure there are some OVB
configuration differences to enable testing of baremetal in the cloud
but we are using more OpenStack to drive things. So by simply using
more OpenStack within our CI we should be more closely aligning with
infra. A move in the right direction anyway.

Going through all this effort I really would like to see all the teams
gain from the effort. Like, for me the point of having upstream
tripleo-ci tests is that we catch breakages. Breakages that no other
upstream projects are catching. And the solution to stopping those
breakages from happening isn't, IMO, to move some of the most valuable CI
tests into 3rd party. That may smooth over some of the maintenance rubs
in the short/mid term, perhaps. But I view it as a bit of a retreat from
where we could be with upstream testing.

So rather than just taking what we have in the OVB jobs today and
recreating the same long-running (1.5+ hour) CI job (which catches lots
of things), could we re-imagine the pipeline a bit in the process so we
improve this? My concern is that we'll go to all the trouble of moving
this and actually decrease, rather than increase, the speed with which
the tripleo core team can land code. I guess what I'm asking is: in
doing this move, can we raise the bar for TripleO core a bit too?

Dan 


> 
> What I am proposing is we move tripleo-test-cloud-rh2 (currently
> disabled) from
> openstack-infra (nodepool) to rdoproject (nodepool).  This give us a
> cloud we
> can use for OVB; we know it works because OVB jobs have run on it
> before.
> 
> There is a few issues we'd first need to work on, specifically since
> rdoproject.org is currently using SoftwareFactory[1] we'd need to
> have them
> add support for nodepool-builder. This is needed so we can use the
> existing
> DIB elements that openstack-infra does to create centos-7 images
> (which tripleo
> uses today). We have 2 options, wait for SF team to add support for
> this (I
> don't know how long that is, but they know of the request) or we
> manually set up
> an external nodepool-builder instance for rdoproject.org, which
> connects to
> nodepool.rdoproject.org via gearman (I suggest we do this).
> 
> Once that issue is solved, things are a little easier.  It would just
> be a
> matter of porting upstream CI configuration to rdoproject.org and
> validating
> images and JJB jobs. Cloud credentials would be removed from
> openstack-infra and added to rdoproject.org.
> 
> I'd basically need help from rdoproject (eg: dmsimard) with some of
> the admin
> tasks, along with a VM for nodepool-builder. We already have the
> 3rdparty CI
> bits setup in rdoproject.org, we are actually running DLRN builds on
> python-tripleoclient / python-openstackclient upstream patches.
> 
> I think the biggest step is 

[openstack-dev] [all] planning the PTG -- lessons from Swift's midcycles

2016-10-12 Thread John Dickinson
The Swift team has been doing midcycles for a while now, and as the new PTG 
gets closer, I want to write down our experience with what has worked for us. I 
hope it is beneficial to other teams, too.

## Logistics of the event

- 2 rooms is ideal, but you can make do with one larger room
- movable tables and chairs
- whiteboards/flip charts
- projector/tv
- host provides lunch and snacks
- host provides one evening meal/event

When someone offers to host a midcycle, this is what I ask them to provide. The 
PTG will be slightly different, but the general idea is the same. There are a few 
important things to note here. First, be flexible about who's talking about 
what and when people are working together. The point of getting together in 
person is to facilitate face-to-face communication, so be sure that the room 
logistics don't get in the way by forcing people into a certain configuration. 
Providing lunch and snacks allows the participants to not break any tech or 
social flow in the middle of the day. It keeps people together and helps 
facilitate communication. And the one evening event is super helpful to let 
people relax, have fun, and do something interesting away from a computer. In 
the past we've done everything from locked-room challenges and bowling to 
brewery tours and a boat ride under the Golden Gate Bridge.

## Only agenda item is "set the agenda"

- dotmocracy
- too much to do for the people we have to work on it
- what's the big stuff we need the right people to be together for?
- schedule one big talk each am and pm

When it comes to the actual flow of the limited time together, there are two 
important things to keep in mind. First, make sure there's time to cover all 
the topics that are of interest to the people in the room. Second, make sure 
the big important stuff gets discussed without requiring someone to be in two 
places at once.

Unfortunately, these two goals are often in conflict. We've solved this in the 
past by starting the midcycle with one and only one agenda item: set the 
agenda. The most successful way we've done this is to ask the room to shout out 
topics to discuss. Every topic gets written down on a piece of paper or on a 
flipboard. When you've got all the topics written down, then give everyone a 
limited number of dot stickers and ask them to vote for what they want to talk 
about by placing one or more dots next to it. The trick is that there are more 
topics to talk about than people in the room, and each person has fewer dots 
than the full schedule of time blocks we have. So, for example, if there are 3 days 
together, that's a total of 6 morning and afternoon blocks of time. Give 
everyone 4 dots, and force them to prioritize. This also has the very real 
visual side effect of emphasizing that we are a team and that no one person can be 
a part of everything going on. After everyone has put their dots on topics, 
sort the topics by number of dots. In our example, we've got 6 blocks of time, 
so choose the top six and schedule them. This ensures that the big stuff can 
get scheduled, the little stuff can fill in the gaps, and people can know when 
to be available for conversations that are important to them.
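
In case it helps to see the mechanics spelled out, here's a rough sketch of
that tally in Python (the topic names and dot counts are made up for
illustration, not from a real midcycle):

    # Dotmocracy: count the dots, schedule the top topics, fill gaps with the rest.
    dots = {
        "container sharding": 9,
        "golang object server": 7,
        "encryption follow-ups": 6,
        "ring placement": 5,
        "symlinks": 4,
        "docs cleanup": 3,
        "py3 port": 2,
    }

    time_blocks = 6  # 3 days x (morning + afternoon)

    # Big rocks first: the top-voted topics get the scheduled blocks...
    scheduled = sorted(dots, key=dots.get, reverse=True)[:time_blocks]
    # ...and everything else fills in the gaps between them.
    gaps = [topic for topic in dots if topic not in scheduled]

    print("scheduled:", scheduled)
    print("fill in the gaps with:", gaps)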

Imagine that you've got a glass mason jar, and you need to fill it up with 
stuff. You've got big rocks, small rocks, sand, and water. If you fill it up 
with water first, the water will spill out when you add anything else. But if 
you add the big things first, then you can fit more in. The big rocks go first, 
then small rocks fill up the spaces, then sand fills up the cracks, then the 
water can seep into the tiny air gaps. It's the same way with prioritizing the 
in-person meetings. Schedule the big stuff, then fill in any gaps with the 
small stuff.

## Social dynamics during the week

- you won't be able to participate in everything. that's ok
- there will be several conversations going on at one time. be considerate and 
flexible
- don't wait for someone to start a conversation. start it yourself. this is 
very important!

There's a lot going on at in-person meetings. It's ok to not participate in 
everything--you won't be able to, so don't even try. In the best case, there 
will be a lot of conversations going on at once, so be considerate and 
flexible. It's important to not sit back and wait to start a conversation--if 
you need to talk about something, grab the right people, a whiteboard, and a 
corner of the room and start talking.

But what do you talk about? Sometimes it's just talking with a whiteboard. 
Sometimes it's reviewing code together. And occasionally, there's even an 
opportunity for some pair programming.

After a topic has been talked about, check it off on the big list of topics 
that you made the first day. This keeps everyone aware of what has been talked 
about and what needs to be talked about. And by the end of your time together, 
it's a great visual reminder of the success of the week.

## Have fun

Overall, have fun. In-person 

Re: [openstack-dev] [networking-sfc][devstack][mitaka]

2016-10-12 Thread Henry Fourie
Navdeep,
  Please post your port-chain, port-pair-group, and port-pair config as a question at 
https://launchpad.net/networking-sfc
  Use these commands to determine the traffic flow, and post the results as well.

  sudo ovs-ofctl -O OpenFlow13 dump-flows br-int

  sudo ovs-ofctl -O OpenFlow13 dump-groups br-int


-Louis

From: Navdeep Uniyal [mailto:navdeep.uni...@neclab.eu]
Sent: Wednesday, October 12, 2016 3:06 AM
To: Cathy Zhang
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc][devstack][mitaka]

Hi Cathy,

Thanks for your reply. I now have the setup done without any errors, with only 
one VM in the chain. I want to move all the ICMP traffic from vm1 to vm3 via vm2. 
My flow classifier looks like:
"neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix 
10.0.0.18/32 --destination-ip-prefix 10.0.0.6/32 --protocol icmp FC1"
But using tcpdump on vm2's ingress port, I could not see any traffic. Please let 
me know how I can debug this and what the possible issue could be.


Best Regards,
Navdeep Uniyal


From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Dienstag, 11. Oktober 2016 19:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc][devstack][mitaka]

Hi Navdeep,

Please see inline.

Cathy

From: Navdeep Uniyal [mailto:navdeep.uni...@neclab.eu]
Sent: Tuesday, October 11, 2016 5:42 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-sfc][devstack][mitaka]

Hi all,

I have been trying out networking-sfc to create a service function chain in 
OpenStack. I could create all the port pairs, port-pair-groups, flow classifier 
and the chain, but I could not see the packets at the desired hops.
I am trying to create a simple SFC with 3 VMs (vm1 to vm3) in the setup. I just 
want to check how it works. In my setup, vm1 is the traffic generator (iperf 
client) and vm3 is the traffic receiver (iperf server). The 2 VMs (vm2 and vm3) 
are in the same network as vm1, and I want to move the iperf traffic from 
vm1->vm2->vm3. In order to achieve this, I have created 2 port pairs, for vm2 
and vm3, and both pairs are in separate port pair groups (PG1 and PG2); I also 
created a flow classifier FC1 and finally a chain with PG1, PG2 and FC1. Now my 
question is: is my setup correct in order to achieve the SFC result as I stated 
above? Do I need to include vm1 in a port pair group?

Cathy> You only need to include VM2 in a port pair group. The traffic source and 
traffic destination do not need to be included in the chain's port pair groups; 
instead, their IP addresses should be included in the flow classifier so that 
the system knows which flow needs to go through the chain. Here is a link to 
the wiki.
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
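
For reference, the end-to-end CLI sequence for this kind of setup looks 
roughly like the sketch below (the port IDs are placeholders for the Neutron 
ports of your VMs; double-check the options against your networking-sfc 
version):

    # Create a port pair from the service VM's (vm2) ingress/egress ports
    neutron port-pair-create --ingress <vm2-port-id> --egress <vm2-port-id> PP1

    # Put the pair in a port pair group (a group can hold several pairs)
    neutron port-pair-group-create --port-pair PP1 PG1

    # Classify the flow between the source and destination VMs
    neutron flow-classifier-create --ethertype IPv4 \
        --source-ip-prefix 10.0.0.18/32 --destination-ip-prefix 10.0.0.6/32 \
        --protocol icmp --logical-source-port <vm1-port-id> FC1

    # Tie the group and the classifier together into a chain
    neutron port-chain-create --port-pair-group PG1 --flow-classifier FC1 PC1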

Cathy




Below is the flow classifier:

+-----------------------------+--------------------------------------+
| Field                       | Value                                |
+-----------------------------+--------------------------------------+
| description                 |                                      |
| destination_ip_prefix       |                                      |
| destination_port_range_max  |                                      |
| destination_port_range_min  |                                      |
| ethertype                   | IPv4                                 |
| id                          | e5000ade-50ad-41ed-a159-b89c4blp97ec |
| l7_parameters               | {}                                   |
| logical_destination_port    |                                      |
| logical_source_port         | 63cdf664-dd67-455c-8345-f01ef58c23e5 |
| name                        | FC1                                  |
| project_id                  | 6b90cd3356144681b44274d4881c5fc7     |
| protocol                    | tcp                                  |
| source_ip_prefix            | 10.0.0.18/32                         |
| source_port_range_max       |                                      |
| source_port_range_min       |                                      |
| tenant_id                   | 6b90cd3310104681b44274d4881c5fc7     |
+-----------------------------+--------------------------------------+



Is there any wiki with an example use case explained, along with a testing scenario?


Best Regards,
Navdeep Uniyal
Email: navdeep.uni...@neclab.eu
-
Software Engineer
NEC Europe Ltd.
NEC Laboratories Europe
Kurfürstenanlage 36, D-69115 Heidelberg,

NEC Europe Ltd | Registered Office: Athene, Odyssey Business Park, West End  
Road, London, 

Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Clint Byrum
Excerpts from Chris Dent's message of 2016-10-12 15:08:48 +0100:
> On Wed, 12 Oct 2016, Clint Byrum wrote:
> 
> > So, while I do want to make sure enough of our architects and designers
> > go to the summit to truly understand user needs, I also think it has
> > been proven ineffective to also throw all of the coders into that mix and
> > expect them to be productive.
> 
> I suspect you didn't mean it to sound like this, and I don't want to
> distract overmuch from the meat of this thread, but at a superficial
> level it appears you are implying that there is a disjunction
> between coders and architects/designers. I was pretty sure that was
> an antiquated notion and if it is being used to drive the decisions
> about what should happen at summit and what should happen at PTG,
> that's a shame.
> 

Oh dear no. I am speaking of the many hats that many of our community
members wear from time to time. Everybody's welcome and valuable, but
the value people will bring to OpenStack, and take back to their orgs,
will vary by circumstance. The split is an attempt to allow those not
in need of direct operator/user interaction to skip the Summit and attend a
more productive event in a more productive time slot.

I'd also like to point out that we should get better at taking user
feedback _on the internet_. The Summit is an expensive event for users
and operators to attend as well. We should not rely on it as _the_ place
to get that feedback. We have bug trackers, ask forums, mailing lists,
and IRC, and all of those are places where anybody who has internet access
can contribute feedback.

> The reason, as I understood it, for having a separate event is not
> because all those users are such a horrible distraction but because
> the sales and marketing efforts are (inherent in themselves and also
> in the way that the corporate patrons want the devs to participate
> in those efforts at the cost of the design summit-ing).
>

Right, but many of the users only come because of the sales and marketing
budgets.

> For most of us, as Dmitry points out, getting to summit even once a
> year is a real challenge. Getting to 4 events, which is what will be
> required to maintain full engagement with both the planning and
> development of a project (not to mention cross project concerns), will
> either be impossible or require a significant change in the financial
> commitment that the patron companies are making. I think we'd all like
> to see that change, but I think we all also know that it's not very likely
> at this time.
> 

The hope is that only a small percentage of people will need to go
to 4, and those are likely the people who've already been going to most
Summits and their project Mid-Cycles. Now we can have a mid-cycle level of
productivity, while also facilitating cross-project discussion and work.

> That leaves us in a bit of a bind. It feels very much like we are
> narrowing the direct feedback pipeline between operators and users. It
> also feels like we are strengthening distinctions between various
> types of contributors, based mostly on the economic power their
> corporate patrons are willing to wield for them. And finally it
> feels like we are making it harder for more people to be engaged in
> cross project work.
> 

On the contrary, the PTG will give cross-project contributors a perfect
place to actually participate in project work. A few of us kind of
figured out how to do this at summits, and I think it involved a lot of
hustle and fast talk, which are not everybody's forte.

> I'm certain that none of these things are the desired outcomes.
> What can we do to clarify the situation and remedy the issues?
>

I'm also certain that not all of these things are actual outcomes. They're
valuable hypotheses though, and we should evaluate the results against
them.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Ironic]Removal of Ironic tests from Tempest

2016-10-12 Thread Jordan Pittier
Hi guys,
As you may know, we are pushing projects to use Tempest plugins and we are
only keeping "core" projects' tests in the Tempest tree.

There have been several attempts to remove Ironic from Tempest, so I guess it
doesn't come as a surprise (hopefully). I am starting to work on the
removal now; as it's super early in the dev cycle, now is a good time.
Expect a patch soon.

Thanks,
Jordan

-- 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Thierry Carrez
Michał Jastrzębski wrote:
> On 12 October 2016 at 08:53, John Davidge  wrote:
>> [...]
>> Also, unless I've missed something, we still don't know the registration
>> fees for the PTG and the new Summit do we? Last I remember, there was talk
>> of a registration fee for the PTG, and then a Summit discount for PTG
>> attendees[1]. Is that still the plan?

The current plan is for the PTG to have a nominal cost (to maximize the
chances that someone registered will actually show up), and then people
physically attending the PTG would get a discount to attend the next summit
(rather than "recent contributors").

Obviously (Cost of PTG + Cost of Summit) will still be greater than Cost
of Summit, but that helps in keeping it reasonable.

>> Surely the PTG will need to be free to attend, otherwise isn't it better
>> for project teams to simply shift our existing mid-cycles to the PTG
>> timeframe with an altered focus to save money? Genuine question.
> 
> So if PTG is meant to just "hang out with your dev team", that's midcycle.

The PTG is also "hang out with other teams". The midcycle setup was
arguably slightly cheaper, but reinforced the upstream silos as a result.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multinode testing with devstack and neutron broken

2016-10-12 Thread Matthew Van Dijk
Not much to add here - I commented in 
review.openstack.org/378063
that they could set the default value of the flag to revert the behaviour.
- Matt

On Oct 11, 2016, at 10:21 PM, Armando M. wrote:



On 11 October 2016 at 18:20, Sean M. Collins wrote:
Armando M. wrote:
> At this point I feel that changing the pool range is even less justified.
> If I had seen bug [4], I would have been against its fix, because you're
> absolutely right that the change is not backward compatible.

https://review.openstack.org/#/c/356026 was written by someone on the Trove 
team to
help them with their CI jobs IIRC.

CC'ing Matthew since he has more context. I went into the Trove channel
and asked them about reverting 356026. It doesn't seem like an option at
this point.

http://eavesdrop.openstack.org/irclogs/%23openstack-trove/%23openstack-trove.2016-09-30.log.html#t2016-09-30T17:53:08

A revert with no follow-up is clearly not a viable option most of the time, 
and we have clearly dug ourselves in too deep now with [1]. Rather than making 
the use of subnet pools conditional as done in [1], IMO we should have made [2] 
conditional, to preserve the existing provisioning behavior and let Trove 
override.

[1] Ic89ceca76afda67da5545111972c3348011f294f
[2] https://review.openstack.org/#/c/356026/




--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Dmitry Tantsur

On 10/12/2016 05:53 PM, Jay Faulkner wrote:



On Oct 12, 2016, at 5:01 AM, Dmitry Tantsur  wrote:

Hi folks!

I'd like to propose a plan on how to simultaneously extend the coverage of our 
jobs and reduce their number.

Currently, we're running one instance per job. This was reasonable when the 
coreos-based IPA image was the default, but now with tinyipa we can run up to 7 
instances (and actually do it in the grenade job). I suggest we use 6 fake bm 
nodes to make a single CI job cover many scenarios.

The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool) to 
be more in sync with how 3rd party CI does it. A special configuration option 
will be used to enable multi-instance testing to avoid breaking 3rd party CI 
systems that are not ready for it.

To ensure coverage, we'll only leave the required number of nodes "available", 
and deploy all instances in parallel.

In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios:
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
* 5. with msdos partition table embedded and BIOS boot
* 6. with GPT partition table embedded and UEFI boot  <*>

<*> - in the future, when we figure out UEFI testing
<**> - we're moving away from defaulting to netboot, hence only one scenario

I suggest creating the jobs for Newton and Ocata, and starting with Xenial 
right away.

Any comments, ideas and suggestions are welcome.
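
For illustration, the devstack side of such a job could look roughly like the
sketch below. The variable names come from the ironic devstack plugin, but the
exact values (and the not-yet-named multi-instance toggle) are assumptions,
not a final job definition:

    [[local|localrc]]
    enable_plugin ironic https://git.openstack.org/openstack/ironic
    # One of the two driver-based job groups proposed above
    IRONIC_DEPLOY_DRIVER=agent_ipmitool
    # Use the prebuilt tinyipa ramdisk so 6 fake bm nodes fit in memory
    IRONIC_RAMDISK_TYPE=tinyipa
    IRONIC_BUILD_DEPLOY_RAMDISK=False
    IRONIC_VM_COUNT=6
    IRONIC_VM_SPECS_RAM=384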



+1 I'm completely on-board with this.

Have you considered mixing in multiple drivers in a single test? Given we can 
set drivers per node, is there a reason (other than maybe just the size/duration 
of job) that we couldn't test both pxe_* and agent_* deploy methodologies at 
the same time?


There is no reason, except for maybe making life easier for 3rd party CI folks. 
If we don't mess with drivers, they might have an easier time doing the same in 
their jobs.




Thanks,
Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Jay Faulkner

> On Oct 12, 2016, at 5:01 AM, Dmitry Tantsur  wrote:
> 
> Hi folks!
> 
> I'd like to propose a plan on how to simultaneously extend the coverage of 
> our jobs and reduce their number.
> 
> Currently, we're running one instance per job. This was reasonable when the 
> coreos-based IPA image was the default, but now with tinyipa we can run up to 
> 7 instances (and actually do it in the grenade job). I suggest we use 6 fake 
> bm nodes to make a single CI job cover many scenarios.
> 
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool) to 
> be more in sync with how 3rd party CI does it. A special configuration option 
> will be used to enable multi-instance testing to avoid breaking 3rd party CI 
> systems that are not ready for it.
> 
> To ensure coverage, we'll only leave the required number of nodes "available", 
> and deploy all instances in parallel.
> 
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
> 
> Each job will cover the following scenarios:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
> 
> <*> - in the future, when we figure out UEFI testing
> <**> - we're moving away from defaulting to netboot, hence only one scenario
> 
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial 
> right away.
> 
> Any comments, ideas and suggestions are welcome.
> 

+1 I'm completely on-board with this. 

Have you considered mixing in multiple drivers in a single test? Given we can 
set drivers per node, is there a reason (other than maybe just the size/duration 
of job) that we couldn't test both pxe_* and agent_* deploy methodologies at 
the same time?

Thanks,
Jay

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Doc] Inclusion of microversion API support in CLI reference

2016-10-12 Thread Andreas Jaeger
On 10/12/2016 05:23 PM, Scott D'Angelo wrote:
> We added this patch to the cinderclient:
> 
> b76f5944130e29ee1bf3095c966a393c489c05e6
> 
> 
> Which basically only shows help for the features available at the
> requested API version. It is by design.

The question that Sean raises is how the user will figure out that they
need to use this API version and then can use the command.

Check:
http://docs.openstack.org/cli-reference/cinder.html

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Doc] Inclusion of microversion API support in CLI reference

2016-10-12 Thread Andreas Jaeger
On 10/12/2016 03:03 PM, Sean McGinnis wrote:
> Just getting this out there to either get educated or to start a
> conversation...
> 
> While going through some of the DocImpact generated bugs for
> python-cinderclient I noticed a few that added new parameters to
> existing CLI commands. As Cinder has now moved to using microversions
> for all API changes, these new parameters are only available at a
> certain microversion level.
> 
> A specific case is here:
> 
> https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v3/shell.py#L1485
> 
> We have two parameters that are marked "start_version='3.1'" that do not
> show up in the generated CLI reference.
> 
> This appears to be due to (or related to) the fact that the command line
> help does not output anything for these. Now before I dig into why that
> is, I know there are others that are already much more knowledgable
> about this area than I am. So my question is, is this by design? Or is
> something missing here that is needed to recognize these params with the
> start_version value so they get printed?
> 
> My expectation as an end user would be that the help information would
> be printed, with something like "(Requires API 3.1 or later)" appended
> to the help text.
> 
> Anyone have any insight on this?

These are always challenges ;) We generate the help using the
doc-tools-update-cli-reference command from openstack-doc-tools and I
don't think we have special code in os_doc_tools/commands.py to handle
this yet.

For some commands, we run the help twice with different parameters - and
we could add the start_version for cinder, for sure. I just hope it does
not remove older commands at the same time ;)


Adding docs mailing list,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG space request

2016-10-12 Thread Emilien Macchi
On Wed, Oct 12, 2016 at 5:28 AM, Thierry Carrez  wrote:
> Emilien Macchi wrote:
>> I would like to request for some space dedicated to TripleO project
>> for the first OpenStack PTG.
>>
>> https://www.openstack.org/ptg/
>>
>> The event will happen in February 2017 during the next PTG in Atlanta.
>> Any feedback is welcome,
>
> Just a quick note: as you can imagine we have finite space at the event,
> and the OpenStack Foundation wants to give priority to teams which have
> a diverse affiliation (or which are not tagged "single-vendor").
> Depending on which teams decide to take advantage of the event and which
> don't, we may or may not be able to offer space to single-vendor
> projects -- and TripleO is currently tagged single-vendor.

I understand that space is limited, and it is true that TripleO
has the single-vendor tag.
Despite this tag, TripleO has a lot of horizontal interactions with
other projects in OpenStack:

- Ironic, used to bootstrap the TripleO overcloud. Ironic folks can
weigh in; we have a bunch of common work there.
- Heat, used to orchestrate the TripleO overcloud deployment (and soon
the undercloud too).
- Puppet OpenStack modules. Although the PuppetOpenStack project might
skip the PTG because we haven't needed it [1] until now, we still have
a lot of collaboration between the two groups.
- OpenStack Infra. TripleO & Infra folks keep increasing the
collaboration on multiple topics (third party CI, multinode jobs,
etc).

[1] http://osdir.com/ml/openstack-dev/2016-10/msg00311.html

I might have missed it, but did we document PTG space allocation
priorities? If not, we might want to make it clear in the
single-vendor tag doc.
I'm concerned that whether or not space is granted to a
project would be a subjective decision.

> The rationale is, the more organizations are involved in a given project
> team, the more value there is to offer common meeting space to that team
> for them to sync on priorities and get stuff done. If more than 90% of
> contributions / reviews / core reviewers come from a single
> organization, there are fewer coordination needs and less value in having
> all those people from a single org travel to a distant place to have
> a team meeting. And as far as recruitment of new team members goes (to
> increase that diversity), the OpenStack Summit will be a better venue to
> do that.
>
> I hope we'll be able to accommodate you, though. And in all cases
> TripleO people are more than welcome to join the event to coordinate
> with other teams. It's just not 100% sure we'll be able to give you a
> dedicated room for multiple days. We should know better in a week or so,
> once we get a good idea of who plans to meet at the event and who doesn't.

Ack. I'll follow what happens. I appreciate your quick feedback; that
will let us find a fallback in case of rejection.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][qa] Unblocking Sahara gate

2016-10-12 Thread Luigi Toscano
Hi all,

the removal of the Sahara tests from Tempest [1] broke the tests now in 
sahara-tests, and basically I underestimated the fall-out of the removal. 
Apart from a minor issue due to the wrong exception [2], the key issue comes 
from the option groups in tempest.conf, which are defined as data-processing 
and data-processing-feature-available, while the code tries to access 
CONF.data_processing. Previously this was handled by some magic mapping [3], 
which can't be used, as it is, by plugins.
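To illustrate the underlying problem, here is a minimal oslo.config sketch 
(plain oslo.config, not Sahara code) of a group whose name contains hyphens:

    from oslo_config import cfg

    CONF = cfg.CONF
    group = cfg.OptGroup(name='data-processing', title='Data processing options')
    CONF.register_group(group)
    CONF.register_opts([cfg.StrOpt('catalog_type', default='data-processing')],
                       group=group)

    # CONF.data_processing raises NoSuchOptError: attribute access looks up
    # the exact registered name, and no 'data_processing' group exists.
    # Looking the group up under its literal, hyphenated name does work:
    print(getattr(CONF, 'data-processing').catalog_type)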
I came up with this solution [4] after discussing with Andrea Frittoli. The 
patches only fail with the automated negative tests, which are going the way of 
the dodo anyway [5].
The alternative fix for sahara would involve patching the configured 
tempest.conf in a few branches, in addition to the fixes to sahara-tests, and 
I'd frankly prefer a more general solution.

My question is: can you please approve [5] and [4], so that Sahara gates can 
be unlocked?


[1] https://review.openstack.org/#/c/380082/
[2] https://review.openstack.org/#/c/385336/
[3] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/config.py?
h=13.0.0#n1239
[4] https://review.openstack.org/#/c/385460/
[5] https://review.openstack.org/#/c/380982/

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [searchlight] Duplicate parameter names

2016-10-12 Thread McLellan, Steven
Hi,

This issue came up during our meeting last week and it was suggested to ask the 
API working group for its opinion, so any comments welcome.

Searchlight was born out of the Glance codebase, and so implemented "offset" 
and "limit" as paging parameters [1]. Where possible we've tried to map closely 
to Elasticsearch's API so that we don't have to translate parts of the query 
body into Elasticsearch's DSL (with the testing and documentation that comes 
along with it). Elasticsearch implements the same paging parameters as 'size' 
and 'from'.

Because at present we require a certain amount of knowledge of Elasticsearch's 
query language in order to use Searchlight, I proposed 
https://review.openstack.org/#/c/381956/ last week to add 'size' and 'from' as 
synonyms for 'limit' and 'offset' (and incidentally, we do plan to add a 
scrolling/cursor-based method to work around the temporal problems with 
offset/limit). The two issues noted by folks were:

* Having two parameters doing the same thing can be confusing (which in general 
I agree with though in this case I feel it's defensible)
* 'size' is ambiguous when it's part of a set of APIs as large as the Openstack 
community

This obviously isn't life-threatening - it's more of a convenience for a number 
of people who've expressed frustration with getting the parameter names wrong - 
but does anyone on the API WG have a strong opinion against doing it?
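
To make that concrete, the change would simply let these two requests mean the
same thing (a sketch, not the exact payloads from the review):

    POST /v1/search
    {"query": {"match_all": {}}, "limit": 10, "offset": 20}

    POST /v1/search
    {"query": {"match_all": {}}, "size": 10, "from": 20}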

Thanks!

Steve

[1] I am aware of the discussions around paging versus 'marker'-based 
scrolling. We plan to add support for a more cursor-like method as well, but 
Elasticsearch has always supported paging.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread Amrith Kumar
I've been interacting recently with several new contributors (to Trove) and I 
can certainly attest to the fact that people appear to browse the source 
repository and read the READMEs.

Flavio makes a good point that it would be good to provide a short summary of 
the project's capabilities in some common place for all projects, and it may 
then be easy to automate the generation of a single webpage for all of the 
projects that one could treat as a landing page.

To Doug's point about tags: I would recommend that the tags also be used to 
indicate in which section of the main landing page the project should land. For 
example, does the project fall under "compute", "storage", or "networking", or 
some such suitable taxonomy of projects that will help to render a meaningful 
top level (landing page) for all projects in an intuitive organization for 
potential contributors and users?
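
For illustration, the "official project" badge could be as simple as an image
include near the top of each README; the URL below is hypothetical, since no
such badge service exists today:

    .. image:: https://governance.openstack.org/badges/trove.svg
        :target: https://governance.openstack.org/reference/projects/index.html
        :alt: OpenStack official project badge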

-amrith 

> -----Original Message-----
> From: Hayes, Graham [mailto:graham.ha...@hpe.com]
> Sent: Wednesday, October 12, 2016 11:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [all][tc] Exposing project team's metadata in
> README files
> 
> On 12/10/2016 16:08, Doug Hellmann wrote:
> > Excerpts from Flavio Percoco's message of 2016-10-12 14:50:03 +0200:
> >> Greetings,
> >>
> >> One of the common complaints about the existing project organization in
> the big
> >> tent is that it's difficult to wrap our heads around the many projects
> there
> >> are, their current state (in/out the big tent), their tags, etc.
> >>
> >> This information is available on the governance website[0]. Each
> official
> >> project team has a page there containing the information related to the
> >> deliverables managed by that team. Unfortunately, I don't think this
> page is
> >> checked often enough and I believe it's not known by everyone.
> >>
> >> In the hope that we can make this information clearer to people
> browsing the
> >> many repos (most likely on github), I'd like to propose that we include
> the
> >> information of each deliverable in the readme file. This information
> would be
> >> rendered along with the rest of the readme (at least on Github, which
> might not
> >> be our main repo but it's the place most humans go to to check our
> projects).
> >>
> >> Rather than duplicating this information, I'd like to find a way to
> just
> >> "include it" in the Readme file. As far as showing the "official" badge
> goes, I
> >> believe it'd be quite simple. We can do it the same way CI tags are
> exposed when
> >> using travis (just include an image). As for the rest of the tags, it
> might
> >> require some extra hacking.
> >>
> >> So, before I start digging more into this, I wanted to get other
> opinions/ideas
> >> on this topic and how we can make this information more evident to the
> rest of
> >> the community (and people not as familiar with our processes as some of
> us are).
> >>
> >> Thanks in advance,
> >> Flavio
> >>
> >> [0] http://governance.openstack.org/reference/projects/index.html
> >>
> >
> > Is your proposal that a tag like release:cycle-with-milestones would
> > result in a badge being added when the README.rst is rendered on
> > github.com? Would that work for git.openstack.org, too?
> >
> > I agree that the governance site is not the best place to put the
> > info to make it discoverable. Do users look first at the source
> > repository, or at some other documentation?
> >
> > Doug
> 
> I like this idea.
> 
> I know when I am looking at software, I look at the source repo
> initially.
> 
> We could do it in the readme, and maybe re-use it in the docs as well?
> 
> I would be willing to dig in and help if needed.
> 
> - Graham
> 
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Doc] Inclusion of microversion API support in CLI reference

2016-10-12 Thread D'Angelo, Scott
We added this patch to the cinderclient:

b76f5944130e29ee1bf3095c966a393c489c05e6


Which basically only shows help for the features available at the requested API 
version. It is by design.


From: Sean McGinnis 
Sent: Wednesday, October 12, 2016 7:03:28 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Doc] Inclusion of microversion API support in CLI 
reference

Just getting this out there to either get educated or to start a
conversation...

While going through some of the DocImpact generated bugs for
python-cinderclient I noticed a few that added new parameters to
existing CLI commands. As Cinder has now moved to using microversions
for all API changes, these new parameters are only available at a
certain microversion level.

A specific case is here:

https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v3/shell.py#L1485

We have two parameters that are marked "start_version='3.1'" that do not
show up in the generated CLI reference.
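
For anyone unfamiliar with the mechanism, such a parameter is declared roughly
like this in the client shell (the command and option names below are
hypothetical, not the actual ones from that link):

    from cinderclient import utils

    # Hypothetical versioned CLI argument: the start_version kwarg hides
    # the option below API microversion 3.1.
    @utils.arg('--filters',
               metavar='<key=value>',
               default=None,
               start_version='3.1',
               help='Filter results by key/value pairs. '
                    '(Requires API 3.1 or later.)')
    def do_example_list(cs, args):
        """Hypothetical command with a microversioned option."""
        pass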

This appears to be due to (or related to) the fact that the command line
help does not output anything for these. Now before I dig into why that
is, I know there are others that are already much more knowledgeable
about this area than I am. So my question is, is this by design? Or is
something missing here that is needed to recognize these params with the
start_version value so they get printed?

My expectation as an end user would be that the help information would
be printed, with something like "(Requires API 3.1 or later)" appended
to the help text.

Anyone have any insight on this?

Thanks!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread John Davidge
Steve,

Why just the one summit? Each cycle has a PTG and a summit, which makes 4 
events per year.

John

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, October 12, 2016 at 3:25 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

Clint,

RE 3 - I do understand the purpose of PTGs. Previously, many engineering orgs 
didn't send many people to midcycles (where a lot of work got done - more so 
than at summit, even). The 3 events are 2 PTGs + 1 summit (although I don't 
know the length of the PTGs).

Regards
-steve


From: Clint Byrum
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, October 12, 2016 at 12:51 AM
To: openstack-dev
Subject: Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

Excerpts from Steven Dake (stdake)'s message of 2016-10-12 06:42:43 +:
Tom,
No flame, just observation about the reality of these changes.
I think we missed this communication on the mailing list or in the FAQs or 
somewhere else.  I think most engineering-focused organizations are looking at 
the PTGs only and not really considering the summit for budget planning.  If 
folks knew the operators were only going to be at the OpenStack Summit, I think 
that may change budget planning for engineering organizations.  Seems like more 
siloing to me, not less.  We need to integrate OpenStack's development with 
Operators as well as the Operators' customers (the cloud consumers the 
Operators deliver to).
Does the foundation not want to co-locate the ops summit at the PTG because the 
ops summit feeds into the OpenStack Summit main ops day(s)?

Agree, on the surface it looks like this adds a brick or two on top of
the existing wall that devs throw things over.

However, I think the reality is those bricks were already there, and
we've all been pretending this isn't what is already happening.

So, while I do want to make sure enough of our architects and designers
go to the summit to truly understand user needs, I also think it has
been proven ineffective to also throw all of the coders into that mix and
expect them to be productive.

I recall many of us huddled in the dev pit and at parties at summits
trying desperately to have deep technical conversations while the
maelstrom was happening around us. And then the few who were fortunate
enough to go to the mid-cycle would get into quiet rooms for a few days,
and _actually_ design the things our users need, but 3 months late,
and basically for the next release.

I don't have any easy solutions for this problem, but the expectation that 
project developers are required at 3 week-long events instead of 2 wasn't 
clearly communicated and should be rectified beyond a post to the openstack-dev 
mailing list where most people filter messages by tags (i.e. your message is 
probably not reaching the correct audience).

Where did you get three?

PTG - write code, design things (replaces mid-cycles)
Summit - listen to users, showcase designs, state plans for next release

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] ERROR: Not Authorized

2016-10-12 Thread courage angeh
You're right, the log files should be there. So I went back to start the
exercise afresh and ran the stack.sh script again, and I realised it didn't
finish because the database password was wrong and there were also connection
problems. After successfully running the script, I can now create
containers perfectly. Thank you for your help.

On Tue, Oct 11, 2016 at 4:08 PM, Hongbin Lu  wrote:

> Courage,
>
> If you follow the instructions [1] to set up the development environment,
> the files should be there. All the logs can be found under
> /opt/stack/logs. Alternatively, you could check the screen output by
> "screen -r stack".
>
> [1] https://github.com/openstack/zun/blob/master/doc/
> source/dev/quickstart.rst#exercising-the-services-using-devstack
>
> Best regards,
> Hongbin
>
> On Tue, Oct 11, 2016 at 12:57 AM, courage angeh 
> wrote:
>
>> All the files you specified, I don't have them. I searched using find
>> from root but got nothing.
>>
>> On Tue, Oct 11, 2016 at 5:13 AM, courage angeh 
>> wrote:
>>
>>> Here is the URL of the error I get when I run zun --debug create --name
>>> test --image cirros --command "ping -c 4 8.8.8.8":
>>> http://paste.openstack.org/show/585268/
>>>
>>> On Tue, Oct 11, 2016 at 3:28 AM, courage angeh 
>>> wrote:
>>>
 Thanks, but I have no zun folder under /etc. I did find a keystone folder,
 but found no log file in it.

 On Tue, Oct 11, 2016 at 3:07 AM, Hongbin Lu 
 wrote:

> Several things to check:
>
> * Run “zun –debug create …” and check the output
>
> * Make sure your Keystone is running on 192.168.8.101
>
> * Check the Keystone log
>
> * Check the zun-api log
>
> * Check the config file under /etc/zun
>
> If you cannot figure it out, feel free to send me the outputs/logs
> above.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* courage angeh [mailto:couragean...@gmail.com]
> *Sent:* October-10-16 9:09 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Zun] ERROR: Not Authorized
>
>
>
> I did source the file before running that command.
>
> These are the environment variables that are set:
>
> OS_REGION_NAME=RegionOne
> OS_PROJECT_NAME=admin
> OS_IDENTITY_API_VERSION=2.0
> OS_PASSWORD=password
> OS_AUTH_URL=http://192.168.8.101:5000/v2.0
> OS_USERNAME=admin
> OS_TENANT_NAME=admin
> OS_VOLUME_API_VERSION=2
> OS_NO_CACHE=1
>
> Yet when I run that command I still get the same error message, and I
> have been talking on the IRC channel but got no reply from anyone.
>
>
>
> On Mon, Oct 10, 2016 at 9:04 PM, Hongbin Lu 
> wrote:
>
> Courage,
>
>
>
> As suggested by Denis in another reply, you might need to source the
> credential before issuing the command. If it doesn’t help, please feel 
> free
> to ping us in the IRC channel (#openstack-zun).
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* courage angeh [mailto:couragean...@gmail.com]
> *Sent:* October-10-16 10:40 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] ERROR: Not Authorized
>
>
>
> I have problems running zun. When I try to run commands like:
>
> zun start test or
> zun create --name test --image cirros --command "ping -c 4 8.8.8.8"
>
> I get the error: ERROR: Not Authorized
>
> Further searching, it seems like I can't connect to 
> http://192.168.8.101:5000/v2.0
>
> Please can someone help me?
>
> Thanks
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Ian Cordasco
 

-----Original Message-----
From: Chris Dent 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: October 12, 2016 at 09:10:45
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] PTG from the Ops Perspective - a few short notes
 
> The reason, as I understood it, for having a separate event is not
> because all those users are such a horrible distraction but because
> the sales and marketing efforts are (inherent in themselves and also
> in the way that the corporate patrons want the devs to participate
> in those efforts at the cost of the design summit-ing).
>  
> For most of us, as Dmitry points out, getting to summit even once a
> year is a real challenge. Getting to 4 events, which is what will be
> required to maintain full engagement with both the planning and
> development of a project (not to mention cross project concerns), will
> either be impossible or require a significant change in the financial
> commitment that the patron companies are making. I think we'd all like
> to see that change, but I think we all also know that it's not very likely
> at this time.

So, are there people who already go to both Summits and both Midcycles for 
projects? They're already attending 4 events. In effect you're now changing it 
from 2 (ostensibly) significantly cheaper and 2 expensive events to 4 
moderately expensive events. While it might not average out perfectly, it 
probably comes close.

That said, the people that already attend all 4 are either in very significant 
roles in that project team (PTL, etc.) or influential enough in their company 
to secure that funding. There are people who will only attend a fraction of 
those 4, though, as things already stand. Still, these projects *are* teams. I think 
they could strategically determine who should go to which event because the 
people who already have significant roles will likely be able to get that 
funding and attend all events. The rest should be able to determine who can 
represent the project at each as funding becomes a topic for each employer.

For Glance and other projects, I have been absent from many events for a 
variety of reasons and have been able to rely on my fellow team members to 
represent the project appropriately and communicate effectively what happened 
(including feedback received). If there isn't that level of trust in a team, I 
wonder if that team is really working effectively together today.

> That leaves us in a bit of a bind. It feels very much like we are
> narrowing the direct feedback pipeline between operators and users. It
> also feels like we are strengthening distinctions between various
> types of contributors, based mostly on the economic power their
> corporate patrons are willing to wield for them. And finally it
> feels like we are making it harder for more people to be engaged in
> cross project work.

Again, I'd argue that teams need to determine the priorities for team members 
at events to partake in different efforts. Maybe teams will prioritize having 
cross-project liaisons who can travel at the PTG. Meanwhile those who are 
driving specific efforts and features may end up at the summit. Presumably the 
team is already communicating and there will be some overlap so having disjoint 
set of team members at the different events will not be harmful. Ostensibly, 
having a cheaper PTG will mean that more developers can get together for those 
cross-project efforts that would never have happened at a mid-cycle (unless 
mid-cycle organizers worked to co-locate them).

> I'm certain that none of these things are the desired outcomes.
> What can we do to clarify the situation and remedy the issues?

I agree that these worst-case scenarios were not the desired outcomes. I think 
these outcomes are also the product of a mindset that "I" need to attend all of 
the events, otherwise "I" won't be as involved as "I" want/need to be. I would 
challenge that mindset and remind everyone who holds it that they need to 
consider the entirety of the team and the opportunities this might open up for 
different team members. This could decentralize some of the responsibilities 
only a few people currently hold, which would only benefit the team. If the split 
of the PTG and Summit acts as a forcing function for that, I'm especially glad 
for this decision.

As a matter of transparency, I've been involved with OpenStack in various ways 
for 2+ years and have only attended 1 midcycle and 1 summit. Both were 
productive, but I have been limited by "real life" most of the time. I think 
some of the teams I work with would find it hard to argue that they suffered 
from my absence or that I've been less effective due to it.

Yes, for those of us who love to travel on someone else's dime this will be 
painful, but I think it will ultimately prove good if teams remain 

Re: [openstack-dev] [release][ansible][fuel][kolla][puppet][tripleo] proposed deadlines for cycle-trailing projects

2016-10-12 Thread Doug Hellmann
Excerpts from Alexey Shtokolov's message of 2016-10-11 00:18:24 +0300:
> Doug,
> 
> We've finally fixed the blockers and tagged RC1 for all our repos.
> We're going to have final RC this Friday (Oct14).
> 
> Could I ask you to create stable/newton branches for repos:
> - openstack/fuel-agent
> - openstack/fuel-astute
> - openstack/fuel-library
> - openstack/fuel-main
> - openstack/fuel-menu
> - openstack/fuel-nailgun-agent
> - openstack/fuel-ostf
> - openstack/fuel-qa
> - openstack/fuel-ui
> - openstack/fuel-virtualbox
> - openstack/fuel-web
> based on the tag 10.0.0rc1
> 
> Or should I do it myself?

The release team will create the branches for you. The scripts we have
to do that will submit some additional patches to update the .gitreview
file, any constraints URLs in tox.ini, and reno (where that applies).

Please just include the request to have the branches created in the
commit message with the RC tag request. If you have some repositories
you want branched and others you do not, please separate those into two
patches to openstack/releases (the tool for creating the branches looks
at all of the deliverable files modified in a patch).
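
For reference, the branch request ends up as a small addition to the
deliverable file in openstack/releases. A rough sketch (field names as I
recall the releases repo schema; the hash value is a placeholder):

    # deliverables/newton/fuel.yaml (excerpt)
    releases:
      - version: 10.0.0rc1
        projects:
          - repo: openstack/fuel-web
            hash: <sha of the rc1 commit>
    branches:
      - name: stable/newton
        location: 10.0.0rc1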

Doug

> 
> Best regards,
> Alexey Shtokolov
> 
> On Fri, Oct 7, 2016 at 10:16 PM, Doug Hellmann 
> wrote:
> 
> > This week we tagged the final releases for projects using the
> > cycle-with-milestones release model. Projects using the cycle-trailing
> > model have two more weeks before their final release tags are due. In
> > the time between now and then, we expect those projects to be preparing
> > and tagging release candidates.
> >
> > Just as with the milestone-based projects, we want to manage the number,
> > frequency, and timing of release candidates for cycle-trailing projects.
> > With that in mind, I would like to propose the following rough timeline
> > (my apologies for not preparing this sooner):
> >
> > 10 Oct -- All cycle-trailing projects tag at least their first RC.
> > 13 Oct -- Soft deadline for cycle-trailing projects to tag a final RC.
> > 18 Oct -- Hard deadline for cycle-trailing projects to tag a final RC.
> > 20 Oct -- Re-tag the final RCs as a final release.
> >
> > Between the first and later release candidates, any translations and
> > bug fixes should be merged.
> >
> > We want to leave a few days between the last release candidate and
> > the final release so that downstream consumers of the projects can
> > report issues against stable artifacts. Given the nature of most
> > of our trailing projects, and the lateness of starting to discuss
> > these deadlines, I don't think we need the same amount of time as
> > we usually set aside for the milestone-based projects. Based on
> > that assumption, I've proposed a 1 week soft goal and a 2 day hard
> > deadline.
> >
> > Let me know what you think,
> > Doug
> >
> > Newton schedule: https://releases.openstack.org/newton/schedule.html
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread Hayes, Graham
On 12/10/2016 16:08, Doug Hellmann wrote:
> Excerpts from Flavio Percoco's message of 2016-10-12 14:50:03 +0200:
>> Greetings,
>>
>> One of the common complaints about the existing project organization in the 
>> big
>> tent is that it's difficult to wrap our heads around the many projects there
>> are, their current state (in/out the big tent), their tags, etc.
>>
>> This information is available on the governance website[0]. Each official
>> project team has a page there containing the information related to the
>> deliverables managed by that team. Unfortunately, I don't think this page is
>> checked often enough and I believe it's not known by everyone.
>>
>> In the hope that we can make this information clearer to people browsing the
>> many repos (most likely on github), I'd like to propose that we include the
>> information of each deliverable in the readme file. This information would be
>> rendered along with the rest of the readme (at least on Github, which might 
>> not
>> be our main repo but it's the place most humans go to check our projects).
>>
>> Rather than duplicating this information, I'd like to find a way to just
>> "include it" in the Readme file. As far as showing the "official" badge 
>> goes, I
>> believe it'd be quite simple. We can do it the same way CI tags are exposed 
>> when
>> using travis (just include an image). As for the rest of the tags, it might
>> require some extra hacking.
>>
>> So, before I start digging more into this, I wanted to get other 
>> opinions/ideas
>> on this topic and how we can make this information more evident to the rest 
>> of
>> the community (and people not as familiar with our processes as some of us 
>> are).
>>
>> Thanks in advance,
>> Flavio
>>
>> [0] http://governance.openstack.org/reference/projects/index.html
>>
>
> Is your proposal that a tag like release:cycle-with-milestones would
> result in a badge being added when the README.rst is rendered on
> github.com? Would that work for git.openstack.org, too?
>
> I agree that the governance site is not the best place to put the
> info to make it discoverable. Do users look first at the source
> repository, or at some other documentation?
>
> Doug

I like this idea.

I know when I am looking at software, I look at the source repo
initially.

We could do it in the readme, and maybe re-use it in the docs as well?
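
Something as small as an image directive near the top of each README.rst
might be enough. A rough sketch (the badge URL is purely illustrative; the
governance site would need to publish something like it):

    .. image:: https://governance.openstack.org/badges/<repo-name>.svg
       :target: https://governance.openstack.org/reference/projects/index.html
       :alt: Official OpenStack project badge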

I would be willing to dig in and help if needed.

- Graham

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Jaesuk Ahn
It can be cheap if you are in the US. However, for Asia folks it is not
that cheap, considering it is always overseas travel. In addition, an
all-in-one event like the current summit makes it much easier for us to get
travel funding from the company, since the company only needs to send
everyone (tech, ops, business, strategy) to one event. Even as an ops person
or a developer, giving a presentation or having a meeting with one or two
important companies can be a very good excuse to get the travel money.

I understand the need for a separate event to help developers stay focused. I
am just sharing my experience here of how difficult it is for us to get
funding for overseas events more than once per year. In my case as ops
(previously) and product manager (now), even though I don't code actively,
attending the design summit (and interacting with developers) has been very
helpful for understanding what is really going on in OpenStack, so that I can
make the right decisions.

Just sharing my opinion.

On Wed, Oct 12, 2016 at 11:42 PM John Davidge 
wrote:

> Steve,
>
> Why just the one summit? Each cycle has a PTG and a summit, which makes 4
> events per year.
>
> John
>
> From: "Steven Dake (stdake)" 
>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, October 12, 2016 at 3:25 PM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] PTG from the Ops Perspective - a few short
> notes
>
> Clint,
>
>
>
> RE 3 - I do understand the purpose of PTGs.  Previously, many engineering orgs
> didn’t send many people to midcycles (where a lot of work got done – more
> so than at summits, even).  The 3 events are 2 PTGs + 1 summit (although I don’t
> know the length of the PTGs).
>
>
>
> Regards
>
> -steve
>
>
>
>
>
> *From: *Clint Byrum 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, October 12, 2016 at 12:51 AM
> *To: *openstack-dev 
> *Subject: *Re: [openstack-dev] PTG from the Ops Perspective - a few short
> notes
>
>
>
> Excerpts from Steven Dake (stdake)'s message of 2016-10-12 06:42:43 +0000:
>
> Tom,
>
> No flame, just observation about the reality of these changes.
>
> I think we missed this communication on the mailing list or in the FAQs or
> somewhere else.  I think most engineering-focused organizations are looking
> at the PTGs only and not really considering the summit for budget
> planning.  If folks knew the operators were only going to be at the
> OpenStack Summit, I think that may change budget planning for engineering
> organizations.  Seems like more siloing to me, not less.  We need to
> integrate OpenStack’s development with Operators as well as the Operator’s
> customers (the cloud consumers the Operators deliver to).
>
> Does the foundation not want to co-locate the ops summit at the PTG
> because the ops summit feeds into the OpenStack Summit main ops day(s)?
>
>
>
> Agree, on the surface it looks like this adds a brick or two on top of
>
> the existing wall that devs throw things over.
>
>
>
> However, I think the reality is those bricks were already there, and
>
> we've all been pretending this isn't what is already happening.
>
>
>
> So, while I do want to make sure enough of our architects and designers
>
> go to the summit to truly understand user needs, I also think it has
>
> been proven ineffective to also throw all of the coders into that mix and
>
> expect them to be productive.
>
>
>
> I recall many of us huddled in the dev pit and at parties at summits
>
> trying desperately to have deep technical conversations while the
>
> maelstrom was happening around us. And then the few who were fortunate
>
> enough to go to the mid-cycle would get into quiet rooms for a few days,
>
> and _actually_ design the things our users need, but 3 months late,
>
> and basically for the next release.
>
>
>
> I don’t have any easy solutions for this problem, but the expectation that
> project developers are required at 3 week-long events instead of 2 wasn’t
> clearly communicated and should be rectified beyond a post to the
> openstack-dev mailing list where most people filter messages by tags (i.e.
> your message is probably not reaching the correct audience).
>
>
>
> Where did you get three?
>
>
>
> PTG - write code, design things (replaces mid-cycles)
>
> Summit - listen to users, showcase designs, state plans for next release
>
>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
>
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

Re: [openstack-dev] [release][ansible][fuel][kolla][puppet][tripleo] proposed deadlines for cycle-trailing projects

2016-10-12 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2016-10-07 20:39:04 +0000:
> Doug,
> 
> We have already tagged rc1 long ago, but ack on rc2 (we are targeting the 12th
> at present) and ack on the 20th for the retag of the final rc.  We expect our
> rc2 to be final.

Right, I should have phrased that as a deadline for anyone who hadn't
already tagged RC1.

> If there are critical bugs that make the release DOA in some way (such as 
> upgrades, reconfigure etc), we will obviously have to tag an rc3 to then 
> retag that as final.  I don’t expect that to happen but it is a possibility.

That's consistent with what we do for the milestone-based projects.

Doug

> 
> Regards
> -steve
> 
> 
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Friday, October 7, 2016 at 12:16 PM
> To: openstack-dev 
> Subject: [openstack-dev] [release][ansible][fuel][kolla][puppet][tripleo] 
> proposed deadlines for cycle-trailing projects
> 
> This week we tagged the final releases for projects using the
> cycle-with-milestones release model. Projects using the cycle-trailing
> model have two more weeks before their final release tags are due. In
> the time between now and then, we expect those projects to be preparing
> and tagging release candidates.
> 
> Just as with the milestone-based projects, we want to manage the number,
> frequency, and timing of release candidates for cycle-trailing projects.
> With that in mind, I would like to propose the following rough timeline
> (my apologies for not preparing this sooner):
> 
> 10 Oct -- All cycle-trailing projects tag at least their first RC.
> 13 Oct -- Soft deadline for cycle-trailing projects to tag a final RC.
> 18 Oct -- Hard deadline for cycle-trailing projects to tag a final RC.
> 20 Oct -- Re-tag the final RCs as a final release.
> 
> Between the first and later release candidates, any translations and
> bug fixes should be merged.
> 
> We want to leave a few days between the last release candidate and
> the final release so that downstream consumers of the projects can
> report issues against stable artifacts. Given the nature of most
> of our trailing projects, and the lateness of starting to discuss
> these deadlines, I don't think we need the same amount of time as
> we usually set aside for the milestone-based projects. Based on
> that assumption, I've proposed a 1 week soft goal and a 2 day hard
> deadline.
> 
> Let me know what you think,
> Doug
> 
> Newton schedule: https://releases.openstack.org/newton/schedule.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2016-10-12 14:50:03 +0200:
> Greetings,
> 
> One of the common complaints about the existing project organization in the big
> tent is that it's difficult to wrap our heads around the many projects there
> are, their current state (in/out the big tent), their tags, etc.
> 
> This information is available on the governance website[0]. Each official
> project team has a page there containing the information related to the
> deliverables managed by that team. Unfortunately, I don't think this page is
> checked often enough and I believe it's not known by everyone.
> 
> In the hope that we can make this information clearer to people browsing the
> many repos (most likely on github), I'd like to propose that we include the
> information of each deliverable in the readme file. This information would be
> rendered along with the rest of the readme (at least on Github, which might 
> not
> be our main repo but it's the place most humans go to check our projects).
> 
> Rather than duplicating this information, I'd like to find a way to just
> "include it" in the Readme file. As far as showing the "official" badge goes, 
> I
> believe it'd be quite simple. We can do it the same way CI tags are exposed 
> when
> using travis (just include an image). As for the rest of the tags, it might
> require some extra hacking.
> 
> So, before I start digging more into this, I wanted to get other 
> opinions/ideas
> on this topic and how we can make this information more evident to the rest of
> the community (and people not as familiar with our processes as some of us 
> are).
> 
> Thanks in advance,
> Flavio
> 
> [0] http://governance.openstack.org/reference/projects/index.html
> 

Is your proposal that a tag like release:cycle-with-milestones would
result in a badge being added when the README.rst is rendered on
github.com? Would that work for git.openstack.org, too?

I agree that the governance site is not the best place to put the
info to make it discoverable. Do users look first at the source
repository, or at some other documentation?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Steven Dake (stdake)
Clint,

RE 3 - I do understand the purpose of PTGs.  Previously, many engineering orgs 
didn’t send many people to midcycles (where a lot of work got done – more so 
than at summits, even).  The 3 events are 2 PTGs + 1 summit (although I don’t 
know the length of the PTGs).

Regards
-steve


From: Clint Byrum 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, October 12, 2016 at 12:51 AM
To: openstack-dev 
Subject: Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

Excerpts from Steven Dake (stdake)'s message of 2016-10-12 06:42:43 +0000:
Tom,
No flame, just observation about the reality of these changes.
I think we missed this communication on the mailing list or in the FAQs or 
somewhere else.  I think most engineering-focused organizations are looking at 
the PTGs only and not really considering the summit for budget planning.  If 
folks knew the operators were only going to be at the OpenStack Summit, I think 
that may change budget planning for engineering organizations.  Seems like more 
siloing to me, not less.  We need to integrate OpenStack’s development with 
Operators as well as the Operator’s customers (the cloud consumers the 
Operators deliver to).
Does the foundation not want to co-locate the ops summit at the PTG because the 
ops summit feeds into the OpenStack Summit main ops day(s)?

Agree, on the surface it looks like this adds a brick or two on top of
the existing wall that devs throw things over.

However, I think the reality is those bricks were already there, and
we've all been pretending this isn't what is already happening.

So, while I do want to make sure enough of our architects and designers
go to the summit to truly understand user needs, I also think it has
been proven ineffective to also throw all of the coders into that mix and
expect them to be productive.

I recall many of us huddled in the dev pit and at parties at summits
trying desperately to have deep technical conversations while the
maelstrom was happening around us. And then the few who were fortunate
enough to go to the mid-cycle would get into quiet rooms for a few days,
and _actually_ design the things our users need, but 3 months late,
and basically for the next release.

I don’t have any easy solutions for this problem, but the expectation that 
project developers are required at 3 week-long events instead of 2 wasn’t 
clearly communicated and should be rectified beyond a post to the openstack-dev 
mailing list where most people filter messages by tags (i.e. your message is 
probably not reaching the correct audience).

Where did you get three?

PTG - write code, design things (replaces mid-cycles)
Summit - listen to users, showcase designs, state plans for next release

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovn][neutron] networking-ovn 1.0.0 release (newton) -- port-security

2016-10-12 Thread Richard Theis
Murali R  wrote on 10/11/2016 01:47:24 PM:

> From: Murali R 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> , discuss 
> Date: 10/11/2016 01:50 PM
> Subject: [openstack-dev] [ovn][neutron] networking-ovn 1.0.0 release
> (newton) -- port-security
> 
> Hi,
> 
> Please clarify if port security is required to be enabled with 
> newton release when installing OVN. The install.rst says it must be.

Disabling port security should work with networking-ovn. This can be
done via the extension at either the port or network level. OVN ACLs shouldn't
be created for ports with port security disabled.
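
For example, with the extension enabled the flag can be flipped from the
client per port or per network (IDs are placeholders):

    neutron port-update <port-uuid> --port-security-enabled=False
    neutron net-update <net-uuid> --port-security-enabled=False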

- Richard

> In many of my use cases I want to disable port security which is how
> I do currently with devstack. I would like to know if either ovn or 
> neutron will have contentions if port-security disabled at neutron 
> server.
> 
> Thanks
> Murali
> 
> -- Forwarded message --
> From: 
> Date: Thu, Oct 6, 2016 at 6:21 AM
> Subject: [openstack-announce] [new][neutron] networking-ovn 1.0.0 
> release (newton)
> To: openstack-annou...@lists.openstack.org
> 
> 
> We are jazzed to announce the release of:
> 
> networking-ovn 1.0.0: OpenStack Neutron integration with OVN
> 
> This release is part of the newton release series.
> 
> With source available at:
> 
> http://git.openstack.org/cgit/openstack/networking-ovn
> 
> With package available at:
> 
> https://pypi.python.org/pypi/networking-ovn
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/networking-ovn
> 
> For more details, please see below.
> 
> 1.0.0
> ^^^^^
> 
> New Features
> 
> * Initial release of the OpenStack Networking service (neutron)
>   integration with Open Virtual Network (OVN), a component of the
>   Open vSwitch (http://openvswitch.org/) project. OVN provides the
>   following features either via native implementation or conventional
>   agents:
> 
>   * Layer-2 (native OVN implementation)
> 
>   * Layer-3 (native OVN implementation or conventional layer-3
> agent) The native OVN implementation supports distributed routing.
> However, it currently lacks support for floating IP addresses,
> NAT, and the metadata proxy.
> 
>   * DHCP (native OVN implementation or conventional DHCP agent) The
> native implementation supports distributed DHCP. However, it
> currently lacks support for IPv6, internal DNS, and metadata
> proxy.
> 
>   * Metadata (conventional metadata agent)
> 
>   * DPDK - Usable with OVS via either the Linux kernel datapath or
> the DPDK datapath.
> 
>   * Trunk driver - Driver to back the neutron's 'trunk' service
> plugin
> 
>   The initial release also supports the following Networking service
>   API extensions:
> 
>   * "agent"
> 
>   * "Address Scopes" *
> 
>   * "Allowed Address Pairs"
> 
>   * "Auto Allocated Topology Services"
> 
>   * "Availability Zone"
> 
>   * "Default Subnetpools"
> 
>   * "DHCP Agent Scheduler" **
> 
>   * "Distributed Virtual Router" *
> 
>   * "DNS Integration" *
> 
>   * "HA Router extension" *
> 
>   * "L3 Agent Scheduler" *
> 
>   * "Network Availability Zone" **
> 
>   * "Network IP Availability"
> 
>   * "Neutron external network"
> 
>   * "Neutron Extra DHCP opts"
> 
>   * "Neutron Extra Route"
> 
>   * "Neutron L3 Configurable external gateway mode" *
> 
>   * "Neutron L3 Router"
> 
>   * "Network MTU"
> 
>   * "Port Binding"
> 
>   * "Port Security"
> 
>   * "Provider Network"
> 
>   * "Quality of Service"
> 
>   * "Quota management support"
> 
>   * "RBAC Policies"
> 
>   * "Resource revision numbers"
> 
>   * "Router Availability Zone" *
> 
>   * "security-group"
> 
>   * "standard-attr-description"
> 
>   * "Subnet Allocation"
> 
>   * "Tag support"
> 
>   * "Time Stamp Fields"
> 
>   (*) Only applicable if using the conventional layer-3 agent.
> 
>   (**) Only applicable if using the conventional DHCP agent.
> 
> Changes in networking-ovn 1.0.0.0rc1..1.0.0
> -------------------------------------------
> 
> 16ab14c Fix for vtep port
> 8721389 Fix test waiting for ovn-northd to start
> cc52860 Update port provisioning block registration
> faaa45e Updated from global requirements
> 1335653 Update .gitreview for stable/newton
> 
> 
> Diffstat (except docs and test files)
> -------------------------------------
> 
> .gitreview|  1 +
> devstack/lib/networking-ovn   |  2 +-
> networking_ovn/common/constants.py|  4 +-
> networking_ovn/ml2/mech_driver.py | 42 +-
> requirements.txt  |  2 +-
> 7 files changed, 119 insertions(+), 32 deletions(-)
> 
> 
> Requirements updates
> 
> 
> diff --git a/requirements.txt b/requirements.txt
> index 2650d84..5fc068d 100644
> --- a/requirements.txt
> +++ b/requirements.txt
> @@ -5 

Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Emilien Macchi
On Wed, Oct 12, 2016 at 7:10 AM, Giulio Fidente  wrote:
> hi,
>
> we introduced support for the deployment of Ceph in the liberty release so
> that it could optionally be used as backend for one or more of Cinder,
> Glance, Nova and more recently Gnocchi.
>
> We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
> dedicated ceph-storage nodes so a deployment of OpenStack with Ceph would
> need at least one additional node to host a Ceph OSD.
>
> In our HA scenario the storage backends are configured as follows:
>
> Glance -> Swift
> Nova (ephemeral) -> Local
> Cinder (persistent) -> LVM (on controllers)
> Gnocchi -> Swift
>
> The downside of the above configuration is that Cinder volumes can not be
> replicated across the controller nodes and become unavailable if a
> controller fails, while production environments generally expect persistent
> storage to be highly available. Cinder volumes instead could even get lost
> completely in case of a permanent failure of a controller.
>
> With the Newton release and the composable roles we can now deploy Ceph OSDs
> on the compute nodes, removing the requirement we had for an additional node
> to host a Ceph OSD.
>
> I would like to ask for some feedback on the possibility of deploying Ceph
> by default in the HA scenario and use it as backend for Cinder.
>
> Also using Swift as backend for Glance and Gnocchi is enough to cover the
> availability issue for the data, but it also means we're storing that data
> on the controller nodes which might or might not be wanted; I don't see a
> strong reason for defaulting them to Ceph, but it might make more sense when
> Ceph is available; feedback about this would be appreciated as well.
>
> Finally a shared backend (Ceph) for Nova would allow live migrations but
> probably decrease performances for the guests in general; so I'd be against
> defaulting Nova to Ceph. Feedback?

+1 on making Ceph the default backend for Nova/Glance/Cinder/Gnocchi.
I think this is the most common use-case we currently have in our
deployments, AFAIK.

Also, I'll continue to work on the scenario jobs (scenario002 and
scenario003, which don't use Ceph, to cover other use cases).

> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Ben Nemec



On 10/12/2016 06:10 AM, Giulio Fidente wrote:

hi,

we introduced support for the deployment of Ceph in the liberty release
so that it could optionally be used as backend for one or more of
Cinder, Glance, Nova and more recently Gnocchi.

We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
dedicated ceph-storage nodes so a deployment of OpenStack with Ceph
would need at least one additional node to host a Ceph OSD.

In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not
be replicated across the controller nodes and become unavailable if a
controller fails, while production environments generally expect
persistent storage to be highly available. Cinder volumes instead could
even get lost completely in case of a permanent failure of a controller.

With the Newton release and the composable roles we can now deploy Ceph
OSDs on the compute nodes, removing the requirement we had for an
additional node to host a Ceph OSD.

I would like to ask for some feedback on the possibility of deploying
Ceph by default in the HA scenario and use it as backend for Cinder.


+1 from me.  It sounds like our current default is inappropriate for an 
HA environment anyway, so if someone is using it they're already broken 
by design.  Hopefully everyone is already setting up Ceph or some other 
shared storage backend in HA so changing the default should be largely a 
non-event.  Obviously we would still need to provide an upgrade path for 
anyone who did deploy the old default (maybe they don't use Cinder and 
don't care if it's HA, for example).




Also using Swift as backend for Glance and Gnocchi is enough to cover
the availability issue for the data, but it also means we're storing
that data on the controller nodes which might or might not be wanted; I
don't see a strong reason for defaulting them to Ceph, but it might make
more sense when Ceph is available; feedback about this would be
appreciated as well.

Finally a shared backend (Ceph) for Nova would allow live migrations but
probably decrease performances for the guests in general; so I'd be
against defaulting Nova to Ceph. Feedback?


Agreed.  It's simple enough for people to set Nova to use Ceph if they 
want, but if people haven't spec'd their compute nodes to handle heavy 
converged Ceph usage I suspect performance would be unacceptable for VMs.
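
For anyone who does want it, opting Nova (or anything else) into Ceph should 
stay a matter of a few parameters in an environment file. A sketch, with 
parameter names as I remember them from tripleo-heat-templates' 
storage-environment.yaml (verify against the release you deploy):

    parameter_defaults:
      CinderEnableRbdBackend: true
      GlanceBackend: rbd
      GnocchiBackend: rbd
      NovaEnableRbdBackend: true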


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Dmitry Tantsur

On 10/12/2016 04:02 PM, Jim Rollenhagen wrote:

On Wed, Oct 12, 2016 at 8:01 AM, Dmitry Tantsur  wrote:

Hi folks!

I'd like to propose a plan on how to simultaneously extend the coverage of
our jobs and reduce their number.

Currently, we're running one instance per job. This was reasonable when the
coreos-based IPA image was the default, but now with tinyipa we can run up
to 7 instances (and actually do it in the grenade job). I suggest we use 6
fake bm nodes to make a single CI job cover many scenarios.

The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
to be more in sync with how 3rd party CI does it. A special configuration
option will be used to enable multi-instance testing to avoid breaking 3rd
party CI systems that are not ready for it.

To ensure coverage, we'll only leave a required number of nodes "available",
and deploy all instances in parallel.

In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios:
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
* 5. with msdos partition table embedded and BIOS boot
* 6. with GPT partition table embedded and UEFI boot  <*>

 <*> - in the future, when we figure out UEFI testing
 <**> - we're moving away from defaulting to netboot, hence only one
scenario

I suggest creating the jobs for Newton and Ocata, and starting with Xenial
right away.


+1, huge fan of this.

One more thing to think about - our API tests create and delete nodes.
If you run
tempest in parallel, Nova may schedule to these nodes. Might be worth breaking
API tests out into a separate job, and making these jobs scenario tests only.


I'd prefer to move API tests to functional testing, but yeah, as a first step we 
can split these jobs.
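
On the multi-instance side, most of the knobs already exist in devstack's 
ironic plugin. A rough local.conf sketch (variable names from lib/ironic as I 
remember them; values illustrative):

    IRONIC_RAMDISK_TYPE=tinyipa
    IRONIC_VM_COUNT=6
    IRONIC_DEPLOY_DRIVER=agent_ipmitool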




// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Michał Jastrzębski
On 12 October 2016 at 08:53, John Davidge  wrote:
> On 10/12/16, 11:18 AM, Dmitry Tantsur wrote:
>
>>On 10/12/2016 11:59 AM, Thierry Carrez wrote:
>>>
>>> PTGs happen in more cost-effective locations, airport hubs with cheaper
>>> hotels, which should lower the cost of attending. Yes, I'm pretty sure
>>> traveling to Atlanta downtown for a week will be cheaper than going to
>>> Barcelona, Boston or Sydney downtown for a week.
>>
>>[...]
>>
>>And while PTG will surely be cheaper than the Summit, the Summit is not
>>going
>>away (for marketing, management, few developer representatives), so the
>>total
>>expense is unlikely to drop IMO.
>
> Yes, I've been unsure about the cost justification for the PTG too.
> Hosting in a less expensive city will probably result in the PTG being
> slightly cheaper to attend than the Summit, but with ops feedback and so
> many other important activities missing, many developers will need to
> attend both the Summit and the PTG to be fully involved.
>
> Cost of PTG < Cost of Summit
>
> But:
>
> (Cost of PTG + Cost of Summit) > Cost of Summit
>
> Also, unless I've missed something, we still don't know the registration
> fees for the PTG and the new Summit do we? Last I remember, there was talk
> of a registration fee for the PTG, and then a Summit discount for PTG
> attendees[1]. Is that still the plan?
>
> Surely the PTG will need to be free to attend, otherwise isn't it better
> for project teams to simply shift our existing mid-cycles to the PTG
> timeframe with an altered focus to save money? Genuine question.

So if the PTG is meant to be just "hang out with your dev team", that's a
midcycle. The problem is:

Cost of PTG >> Cost of midcycle

In Kolla we usually scheduled our midcycle in relatively small towns that are
convenient for the core team's locations. That lowered the cost of midcycle
attendance for our team (and everyone else, because while Atlanta is cheaper
than Boston, Greenville SC is much cheaper than Atlanta ;)).

Even with this, midcycle attendance was much lower than at summits, which
is expected.
I assumed that the PTG is a replacement for the design summit, the ops summit,
and pretty much all the summit things that usually happen in the "second
hotel" :). If that is not the case, I'm afraid the PTG might end up being a
more expensive midcycle, ergo with even lower attendance, ergo pretty much
pointless for projects other than Nova or Neutron, which attract more people
to midcycles.

An alternative would be to openly say that the PTG is a replacement for
midcycles, just coordinated; the summit is still the summit (with its design
part), and the ops summit might be part of the PTG just as well.

>
>>[...]
>>>
>>> Yes, the plan is (amongst other things) to make sure that upstream
>>> developers are available to interact with users (operators, but also app
>>> developers...) during the very week where *all* our community gets
>>> together (the OpenStack Summit). Currently we try to get things done at
>>> the same time, which results in hard choices between listening and
>>> doing. By clearly setting out separate times for each activity, we make
>>> sure we stay focused.
>>
>>Sorry, but to me it's extremely unrealistic to expect a big number of
>>developers
>>on the Summit any more. Sending folks to both events doubles the travel
>>budget,
>>and I know that many companies have struggles with sending people to one
>>event
>>already.
>
> I know a lot of people share this concern. It can already be hard enough
> to justify travel twice a year to an event comprising 100% of the
> community's activities. Splitting that 100% across two separate events of
> ~50% each, twice each per year, is going to make it much harder. I fear
> that some developers will be unable to attend any events at all, as
> neither the PTG or the new Summit will be as important as the combined
> Summit has been until now.
>
> I'm looking forward to hopefully hearing a lot more details about the PTG
> during the summit.
>
> Thanks,
>
> John
>
> [1] https://www.openstack.org/ptg/ptgfaq/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Jim Rollenhagen
On Wed, Oct 12, 2016 at 8:01 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> I'd like to propose a plan on how to simultaneously extend the coverage of
> our jobs and reduce their number.
>
> Currently, we're running one instance per job. This was reasonable when the
> coreos-based IPA image was the default, but now with tinyipa we can run up
> to 7 instances (and actually do it in the grenade job). I suggest we use 6
> fake bm nodes to make a single CI job cover many scenarios.
>
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
> to be more in sync with how 3rd party CI does it. A special configuration
> option will be used to enable multi-instance testing to avoid breaking 3rd
> party CI systems that are not ready for it.
>
> To ensure coverage, we'll only leave a required number of nodes "available",
> and deploy all instances in parallel.
>
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
>
> Each job will cover the following scenarios:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
>
>  <*> - in the future, when we figure out UEFI testing
>  <**> - we're moving away from defaulting to netboot, hence only one
> scenario
>
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial
> right away.

+1, huge fan of this.

One more thing to think about - our API tests create and delete nodes.
If you run
tempest in parallel, Nova may schedule to these nodes. Might be worth breaking
API tests out into a separate job, and making these jobs scenario tests only.
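
Something like this split would do it (regexes illustrative, based on where
the tests live in the ironic tree):

    # scenario-only job
    tox -e all-plugin -- ironic_tempest_plugin.tests.scenario
    # api-only job, kept away from the scenario nodes
    tox -e all-plugin -- ironic_tempest_plugin.tests.api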

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Chris Dent

On Wed, 12 Oct 2016, Clint Byrum wrote:


So, while I do want to make sure enough of our architects and designers
go to the summit to truly understand user needs, I also think it has
been proven ineffective to also throw all of the coders into that mix and
expect them to be productive.


I suspect you didn't mean it to sound like this, and I don't want to
distract overmuch from the meat of this thread, but at a superficial
level it appears you are implying that there is a disjunction
between coders and architects/designers. I was pretty sure that was
an antiquated notion and if it is being used to drive the decisions
about what should happen at summit and what should happen at PTG,
that's a shame.

The reason, as I understood it, for having a separate event is not
because all those users are such a horrible distraction but because
the sales and marketing efforts are (inherent in themselves and also
in the way that the corporate patrons want the devs to participate
in those efforts at the cost of the design summit-ing).

For most of us, as Dmitry points out, getting to summit even once a
year is a real challenge. Getting to 4 events, which is what will be
required to maintain full engagement with both the planning and
development of a project (not to mention cross project concerns), will
either be impossible or require a significant change in the financial
commitment that the patron companies are making. I think we'd all like
to see that change, but I think we all also know that it's not very likely
at this time.

That leaves us in a bit of a bind. It feels very much like we are
narrowing the direct feedback pipeline between operators and users. It
also feels like we are strengthening distinctions between various
types of contributors, based mostly on the economic power their
corporate patrons are willing to wield for them. And finally it
feels like we are making it harder for more people to be engaged in
cross project work.

I'm certain that none of these things are the desired outcomes.
What can we do to clarify the situation and remedy the issues?

--
Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Dmitry Tantsur

On 10/12/2016 03:54 PM, Vasyl Saienko wrote:



On Wed, Oct 12, 2016 at 4:10 PM, Dmitry Tantsur wrote:

On 10/12/2016 03:01 PM, Vasyl Saienko wrote:

Hello Dmitry,

Thanks for raising this question. I think the problem is deeper. There
are a lot
of use-cases that are not covered by our CI like cleaning, adoption 
etc...


This is nice, but here I'm trying to solve a pretty specific problem: we
can't reasonably add more jobs to even cover all supported partitioning
scenarios.


The main problem is that we need to change the ironic configuration to apply a
specific use-case. Unfortunately tempest doesn't allow changing cloud
configuration during a test run.


Recently I've started working on a PoC that should solve this problem [0]. The
main idea is to have the ability to change the ironic configuration during a
single gate job run, and launch the same tempest tests after each
configuration change.

We can't change other components' configuration, as that would require
reinstalling the whole devstack, so launching the flat network and
multitenant network scenarios in a single job is not possible.


For example:

1. Setup devstack with agent_ssh wholedisk ipxe configuration

2. Run tempest tests

3. Update localrc to use agent_ssh localboot image


For this particular example, my approach will be much, much faster, as all
instances will be built in parallel.


On the gates we're using 7 VMs and we never boot all 7 nodes in parallel; not
sure how slow the environment will be in this case.


I think we do boot them in parallel (more or less) in the grenade job.







4. Unstack ironic component only. Not whole devstack.

5. Install/configure ironic component only

6. Run tempest tests

7. Repeat steps 3-6 with other Ironic-only configuration change.


Running steps 4-5 takes about 2-3 minutes.


Below is a non-exhaustive list of configuration choices we could try to
mix-and-match in a single tempest run to have maximal overall code coverage
in a single job:

  *

cleaning enabled / disabled


This is the only valid example; for the other cases you don't need a devstack
update.


There are other use-cases, like portgroups, security groups and boot from
volume, which will require configuration changes.



  *

using pxe_* drivers / agent_* drivers

  *

using netboot / localboot

  * using partitioned / wholedisk images



[0] https://review.openstack.org/#/c/369021/





On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur wrote:

Hi folks!

I'd like to propose a plan on how to simultaneously extend the
coverage of
our jobs and reduce their number.

Currently, we're running one instance per job. This was reasonable
when the
coreos-based IPA image was the default, but now with tinyipa we can
run up
to 7 instances (and actually do it in the grenade job). I suggest we
use 6
fake bm nodes to make a single CI job cover many scenarios.

The jobs will be grouped based on driver (pxe_ipmitool and
agent_ipmitool)
to be more in sync with how 3rd party CI does it. A special
configuration
option will be used to enable multi-instance testing to avoid
breaking 3rd
party CI systems that are not ready for it.

To ensure coverage, we'll only leave a required number of nodes
"available",
and deploy all instances in parallel.

In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios:
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
* 5. with msdos partition table embedded and BIOS boot
* 6. with GPT partition table embedded and UEFI boot  <*>


Am I right that we need to increase the number of tempest tests to match the
number of use-cases we are going to test per driver? That would ensure we use
the right node for each test, because the partition scheme is defined in node
properties and requires the right image to be used.

 <*> - 

Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Vasyl Saienko
On Wed, Oct 12, 2016 at 4:10 PM, Dmitry Tantsur  wrote:

> On 10/12/2016 03:01 PM, Vasyl Saienko wrote:
>
>> Hello Dmitry,
>>
>> Thanks for raising this question. I think the problem is deeper. There
>> are a lot
>> of use-cases that are not covered by our CI like cleaning, adoption etc...
>>
>
> This is nice, but here I'm trying to solve a pretty specific problem: we
> can't reasonably add more jobs to even cover all supported partitioning
> scenarios.
>
>
>> The main problem is that we need to change the ironic configuration to apply
>> a specific use-case. Unfortunately tempest doesn't allow changing cloud
>> configuration during a test run.
>>
>>
>> Recently I've started working on a PoC that should solve this problem [0].
>> The main idea is to have the ability to change the ironic configuration
>> during a single gate job run, and launch the same tempest tests after each
>> configuration change.
>>
>> We can't change other components' configuration, as that would require
>> reinstalling the whole devstack, so launching the flat network and
>> multitenant network scenarios in a single job is not possible.
>>
>>
>> For example:
>>
>> 1. Setup devstack with agent_ssh wholedisk ipxe configuration
>>
>> 2. Run tempest tests
>>
>> 3. Update localrc to use agent_ssh localboot image
>>
>
> For this particular example, my approach will be much, much faster, as all
> instances will be built in parallel.


On the gates we're using 7 VMs and we never boot all 7 nodes in parallel;
not sure how slow the environment will be in this case.




>
>> 4. Unstack ironic component only. Not whole devstack.
>>
>> 5. Install/configure ironic component only
>>
>> 6. Run tempest tests
>>
>> 7. Repeat steps 3-6 with other Ironic-only configuration change.
>>
>>
>> Running steps 4-5 takes about 2-3 minutes.
>>
>>
>> Below is a non-exhaustive list of configuration choices we could try to
>> mix-and-match in a single tempest run to have maximal overall code coverage
>> in a single job:
>>
>>   *
>>
>> cleaning enabled / disabled
>>
>
> This is the only valid example; for the other cases you don't need a devstack
> update.
>

There are other use-cases, like portgroups, security groups and boot from
volume, which will require configuration changes.
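
To make the reconfigure step concrete, the PoC loop is roughly the following
sketch (stop_ironic/start_ironic and iniset are devstack functions; the
cleaning option is just one example):

    for clean in True False; do
        stop_ironic
        iniset $IRONIC_CONF_FILE conductor automated_clean $clean
        start_ironic
        # re-run the same tempest tests against the new configuration
    done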


>
>>   *
>>
>> using pxe_* drivers / agent_* drivers
>>
>>   *
>>
>> using netboot / localboot
>>
>>   * using partitioned / wholedisk images
>>
>>
>>
>> [0] https://review.openstack.org/#/c/369021/
>>
>>
>>
>>
>> On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur wrote:
>>
>> Hi folks!
>>
>> I'd like to propose a plan on how to simultaneously extend the
>> coverage of
>> our jobs and reduce their number.
>>
>> Currently, we're running one instance per job. This was reasonable
>> when the
>> coreos-based IPA image was the default, but now with tinyipa we can
>> run up
>> to 7 instances (and actually do it in the grenade job). I suggest we
>> use 6
>> fake bm nodes to make a single CI job cover many scenarios.
>>
>> The jobs will be grouped based on driver (pxe_ipmitool and
>> agent_ipmitool)
>> to be more in sync with how 3rd party CI does it. A special
>> configuration
>> option will be used to enable multi-instance testing to avoid
>> breaking 3rd
>> party CI systems that are not ready for it.
>>
>> To ensure coverage, we'll only leave a required number of nodes
>> "available",
>> and deploy all instances in parallel.
>>
>> In the end, we'll have these jobs on ironic:
>> gate-tempest-ironic-pxe_ipmitool-tinyipa
>> gate-tempest-ironic-agent_ipmitool-tinyipa
>>
>> Each job will cover the following scenarios:
>> * partition images:
>> ** with local boot:
>> ** 1. msdos partition table and BIOS boot
>> ** 2. GPT partition table and BIOS boot
>> ** 3. GPT partition table and UEFI boot  <*>
>> ** with netboot:
>> ** 4. msdos partition table and BIOS boot <**>
>> * whole disk images:
>> * 5. with msdos partition table embedded and BIOS boot
>> * 6. with GPT partition table embedded and UEFI boot  <*>
>>
>>
Am I right that we need to increase the number of tempest tests to match the
number of use-cases we are going to test per driver? That would ensure we use
the right node for each test, because the partition scheme is defined in node
properties and requires the right image to be used.

>>  <*> - in the future, when we figure out UEFI testing
>>  <**> - we're moving away from defaulting to netboot, hence only one
>> scenario
>>
>> I suggest creating the jobs for Newton and Ocata, and starting with
>> Xenial
>> right away.
>>
>> Any comments, ideas and suggestions are welcome.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread John Davidge
On 10/12/16, 11:18 AM, Dmitry Tantsur wrote:

>On 10/12/2016 11:59 AM, Thierry Carrez wrote:
>>
>> PTGs happen in more cost-effective locations, airport hubs with cheaper
>> hotels, which should lower the cost of attending. Yes, I'm pretty sure
>> traveling to Atlanta downtown for a week will be cheaper than going to
>> Barcelona, Boston or Sydney downtown for a week.
>
>[...]
>
>And while PTG will surely be cheaper than the Summit, the Summit is not
>going
>away (for marketing, management, few developer representatives), so the
>total
>expense is unlikely to drop IMO.

Yes, I've been unsure about the cost justification for the PTG too.
Hosting in a less expensive city will probably result in the PTG being
slightly cheaper to attend than the Summit, but with ops feedback and so
many other important activities missing, many developers will need to
attend both the Summit and the PTG to be fully involved.

Cost of PTG < Cost of Summit

But:

(Cost of PTG + Cost of Summit) > Cost of Summit

Also, unless I've missed something, we still don't know the registration
fees for the PTG and the new Summit do we? Last I remember, there was talk
of a registration fee for the PTG, and then a Summit discount for PTG
attendees[1]. Is that still the plan?

Surely the PTG will need to be free to attend, otherwise isn't it better
for project teams to simply shift our existing mid-cycles to the PTG
timeframe with an altered focus to save money? Genuine question.


>[...]
>>
>> Yes, the plan is (amongst other things) to make sure that upstream
>> developers are available to interact with users (operators, but also app
>> developers...) during the very week where *all* our community gets
>> together (the OpenStack Summit). Currently we try to get things done at
>> the same time, which results in hard choices between listening and
>> doing. By clearly setting out separate times for each activity, we make
>> sure we stay focused.
>
>Sorry, but to me it's extremely unrealistic to expect a big number of
>developers
>on the Summit any more. Sending folks to both events doubles the travel
>budget,
>and I know that many companies have struggles with sending people to one
>event
>already.

I know a lot of people share this concern. It can already be hard enough
to justify travel twice a year to an event comprising 100% of the
community's activities. Splitting that 100% across two separate events of
~50% each, twice each per year, is going to make it much harder. I fear
that some developers will be unable to attend any events at all, as
neither the PTG or the new Summit will be as important as the combined
Summit has been until now.

I'm looking forward to hopefully hearing a lot more details about the PTG
during the summit.

Thanks,

John

[1] https://www.openstack.org/ptg/ptgfaq/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Dmitry Tantsur

On 10/12/2016 03:01 PM, Vasyl Saienko wrote:

Hello Dmitry,

Thanks for raising this question. I think the problem is deeper. There are a lot
of use-cases that are not covered by our CI like cleaning, adoption etc...


This is nice, but here I'm trying to solve a pretty specific problem: we can't 
reasonably add more jobs to even cover all supported partitioning scenarios.




The main problem is that we need to change the ironic configuration to exercise a
specific use-case. Unfortunately, tempest doesn't allow changing the cloud
configuration during a test run.


Recently I've started working on a PoC that should solve this problem [0]. The
main idea is to have the ability to change the ironic configuration during a single
gate job run, and launch the same tempest tests after each configuration change.

We can't change other components' configuration, as that would require reinstalling
the whole devstack, so launching the flat-network and multitenant-network scenarios
is not possible in a single job.


For example:

1. Setup devstack with agent_ssh wholedisk ipxe configuration

2. Run tempest tests

3. Update localrc to use agent_ssh localboot image


For this particular example, my approach will be much, much faster, as all 
instances will be built in parallel.




4. Unstack ironic component only. Not whole devstack.

5. Install/configure ironic component only

6. Run tempest tests

7. Repeat steps 3-6 with other Ironic-only configuration change.


Running steps 4-5 takes about 2-3 minutes.


Below is a non-exhaustive list of configuration choices we could try to
mix-and-match in a single tempest run to get maximal overall code coverage in a
single job:

  *

cleaning enabled / disabled


This is the only valid example; for the other cases you don't need a devstack
update.



  *

using pxe_* drivers / agent_* drivers

  *

using netboot / localboot

  * using partitioned / wholedisk images



[0] https://review.openstack.org/#/c/369021/




On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur wrote:

Hi folks!

I'd like to propose a plan on how to simultaneously extend the coverage of
our jobs and reduce their number.

Currently, we're running one instance per job. This was reasonable when the
coreos-based IPA image was the default, but now with tinyipa we can run up
to 7 instances (and actually do it in the grenade job). I suggest we use 6
fake bm nodes to make a single CI job cover many scenarios.

The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
to be more in sync with how 3rd party CI does it. A special configuration
option will be used to enable multi-instance testing to avoid breaking 3rd
party CI systems that are not ready for it.

To ensure coverage, we'll only leave a required number of nodes "available",
and deploy all instances in parallel.

In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios:
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
* 5. with msdos partition table embedded and BIOS boot
* 6. with GPT partition table embedded and UEFI boot  <*>

 <*> - in the future, when we figure out UEFI testing
 <**> - we're moving away from defaulting to netboot, hence only one 
scenario

I suggest creating the jobs for Newton and Ocata, and starting with Xenial
right away.

Any comments, ideas and suggestions are welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Vasyl Saienko
Hello Dmitry,

Thanks for raising this question. I think the problem is deeper. There are
a lot of use-cases that are not covered by our CI, like cleaning, adoption,
etc.

The main problem is that we need to change the ironic configuration to exercise a
specific use-case. Unfortunately, tempest doesn't allow changing the cloud
configuration during a test run.

Recently I've started working on a PoC that should solve this problem [0].
The main idea is to have the ability to change the ironic configuration during
a single gate job run, and launch the same tempest tests after each
configuration change.

We can't change other components' configuration, as that would require
reinstalling the whole devstack, so launching the flat-network and
multitenant-network scenarios is not possible in a single job.

For example:

1. Setup devstack with agent_ssh wholedisk ipxe configuration

2. Run tempest tests

3. Update localrc to use agent_ssh localboot image

4. Unstack ironic component only. Not whole devstack.

5. Install/configure ironic component only

6. Run tempest tests

7. Repeat steps 3-6 with other Ironic-only configuration change.

Running steps 4-5 takes about 2-3 minutes.

Below is a non-exhaustive list of configuration choices we could try to
mix-and-match in a single tempest run to get maximal overall code coverage
in a single job:

   -

   cleaning enabled / disabled
   -

   using pxe_* drivers / agent_* drivers
   -

   using netboot / localboot
   - using partitioned / wholedisk images



[0] https://review.openstack.org/#/c/369021/




On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur  wrote:

> Hi folks!
>
> I'd like to propose a plan on how to simultaneously extend the coverage of
> our jobs and reduce their number.
>
> Currently, we're running one instance per job. This was reasonable when
> the coreos-based IPA image was the default, but now with tinyipa we can run
> up to 7 instances (and actually do it in the grenade job). I suggest we use
> 6 fake bm nodes to make a single CI job cover many scenarios.
>
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
> to be more in sync with how 3rd party CI does it. A special configuration
> option will be used to enable multi-instance testing to avoid breaking 3rd
> party CI systems that are not ready for it.
>
> To ensure coverage, we'll only leave a required number of nodes
> "available", and deploy all instances in parallel.
>
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
>
> Each job will cover the following scenarios:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
>
>  <*> - in the future, when we figure out UEFI testing
>  <**> - we're moving away from defaulting to netboot, hence only one
> scenario
>
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial
> right away.
>
> Any comments, ideas and suggestions are welcome.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Doc] Inclusion of microversion API support in CLI reference

2016-10-12 Thread Sean McGinnis
Just getting this out there to either get educated or to start a
conversation...

While going through some of the DocImpact generated bugs for
python-cinderclient I noticed a few that added new parameters to
existing CLI commands. As Cinder has now moved to using microversions
for all API changes, these new parameters are only available at a
certain microversion level.

A specific case is here:

https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v3/shell.py#L1485

We have two parameters that are marked "start_version='3.1'" that do not
show up in the generated CLI reference.
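
For readers unfamiliar with the pattern: the markers are keyword arguments on
the argument decorators in the shell code. A hypothetical sketch -- the option
name and help text below are invented for illustration:

    from cinderclient import utils

    @utils.arg('--detail-level',
               metavar='<level>',
               default=None,
               help='Level of detail to include in the listing.',
               start_version='3.1')   # hidden unless API >= 3.1 is negotiated
    def do_example_list(cs, args):
        ...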

This appears to be due to (or related to) the fact that the command line
help does not output anything for these. Now before I dig into why that
is, I know there are others that are already much more knowledgeable
about this area than I am. So my question is, is this by design? Or is
something missing here that is needed to recognize these params with the
start_version value so they get printed?

My expectation as an end user would be that the help information would
be printed, with something like "(Requires API 3.1 or later)" appended
to the help text.

Anyone have any insight on this?

Thanks!

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Giulio Fidente

On 10/12/2016 02:29 PM, Thiago da Silva wrote:



On 10/12/2016 07:10 AM, Giulio Fidente wrote:

hi,

we introduced support for the deployment of Ceph in the liberty
release so that it could optionally be used as backend for one or more
of Cinder, Glance, Nova and more recently Gnocchi.

We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on
dedicated ceph-storage nodes so a deployment of OpenStack with Ceph
would need at least one additional node to host a Ceph OSD.

In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not
be replicated across the controller nodes and become unavailable if a
controller fails, while production environments generally expect
persistent storage to be highly available. Cinder volumes instead
could even get lost completely in case of a permanent failure of a
controller.

With the Newton release and the composable roles we can now deploy
Ceph OSDs on the compute nodes, removing the requirement we had for an
additional node to host a Ceph OSD.

I would like to ask for some feedback on the possibility of deploying
Ceph by default in the HA scenario and use it as backend for Cinder.

Also using Swift as backend for Glance and Gnocchi is enough to cover
the availability issue for the data, but it also means we're storing
that data on the controller nodes which might or might not be wanted;
I don't see a strong reason for defaulting them to Ceph, but it might
make more sense when Ceph is available; feedback about this would be
appreciated as well.

I think it would be important to take into account the recently created
guiding principles [0]:

"While the software that OpenStack produces has well defined and
documented APIs, the primary output of OpenStack is software, not API
definitions. We expect people who say they run “OpenStack” to run the
software produced by and in the community, rather than alternative
implementations of the API."

In the case of Cinder, I think the situation is a bit muddy as LVM is
not openstack software, and my limited understanding is that LVM is used
as a reference implementation, but in the case of Swift, I think RGW
would be considered an 'alternative implementation of the API'.

Thiago


hi Thiago,

sorry if it wasn't clear in my original message, but I did not suggest
replacing Swift with Ceph RGW.


Swift would continue to be deployed by default, not RGW.

The feedback I'm asking for is about storing (or not) the Cinder volumes 
in Ceph for the HA scenario by default, and also store the Glance images 
and Gnocchi metrics in Ceph or rather keep that data in Swift.

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-12 Thread Neil Jerram
On Wed, Oct 5, 2016 at 4:09 PM Sean Dague  wrote:

> On 10/03/2016 12:46 PM, Edward Leafe wrote:
>
> 
>
> > We are fortunate in that all of the candidates are exceptionally
> well-qualified, and those elected have put in excellent service while on
> the TC. But one thing I'm afraid of is that we tend to get into a situation
> where groupthink [0] is very likely. There are many excellent candidates
> running in every election, but it is rare for someone who hasn't been a PTL
> of a large project, and thus very visible, has been selected. Is this
> really the best approach?
>
> >
>
> > I wrote a blog post about implicit bias [1], and in that post used the
> example of blind auditions for musical orchestras radically changing the
> selection results. Before the introduction of blind auditions, men
> overwhelmingly comprised orchestras, but once the people judging the
> auditions had no clue as to whether the musician was male or female, women
> began to be selected much more in proportion to their numbers in the
> audition pools. So I'd like to propose something for the next election:
> have candidates self-nominate as in the past, but instead of writing a big
> candidacy letter, just state their interest in serving. After the
> nominations close, the election officials will assign each candidate a
> non-identifying label, such as a random number, and those officials will be
> the only ones who know which candidate is associated with which number. The
> nomination period can be much, much shorter, and then followed by a week of
> campaigning (the part that's really missing in the current process).
> Candidates will post their thoughts and positions, and respond to questions
> from people, and this is how the voters will know who best represents what
> they want to see in their TC.
>
>
>
> The comparison to orchestra auditions was brought up a couple of cycles
>
> ago as well. But I think it's a bad comparison.
>
>
>
> In the listed example the job being asked of people was performing their
>
> instrument, and it turns out that lots of things not having to do with
>
> performing their instrument were biasing the results. It was possible to
>
> remove the irrelevant parts.
>
>
>
> What is the job being asked of a TC member? To put the best interests of
>
> OpenStack at heart. To be effective in working with a diverse set of
>
> folks in our community to get things done. To find areas of friction and
>
> remove them. To help set an overall direction for the project that the
>
> community accepts and moves forward with.
>
>
>
> Writing a good candidacy email isn't really a good representation of
>
> those abilities. It's the measure of writing a good candidacy email, in
>
> english.
>
>
>
> I hope that when voters vote in the election they are taking the
>
> reputations of the individuals into account. That they look at the work
>
> they did across all of OpenStack, the hundreds / thousands of individual
>
> actions in the community that the person made. How they got to consensus
>
> on items. What efforts they were able to get folks to rally around and
>
> move forward. Where they get stuck, and how they dig out of being stuck.
>
> When they ask for help. When they admit they are out of their element.
>
> How they help new folks in our community. How they work with long timers.
>
>
>
> That aggregate reputation is much more indicative of their
>
> successfulness as a member of the TC to help OpenStack than the
>
> candidate email. It's easy to dismiss it as a popularity contest, but I
>
> don't think it is. This is about evaluating the plausible promise that
>
> individuals put forward. Not just the ideas they have, but how likely
>
> they are to be able to bring them to fruition.
>


I completely agree.  When I vote, it is based on a combined perception of
the candidate email - which often _is_ a useful redux of that person's
approach and views on points of current importance - and all of their
previous contributions and interactions that I am aware of; and that
seems exactly right to me for a TC position.

 Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-10-12 Thread Flavio Percoco

Greetings,

One of the common complaints about the existing project organization in the big
tent is that it's difficult to wrap our heads around the many projects there
are, their current state (in/out the big tent), their tags, etc.

This information is available on the governance website[0]. Each official
project team has a page there containing the information related to the
deliverables managed by that team. Unfortunately, I don't think this page is
checked often enough and I believe it's not known by everyone.

In the hope that we can make this information clearer to people browsing the
many repos (most likely on github), I'd like to propose that we include the
information of each deliverable in the readme file. This information would be
rendered along with the rest of the readme (at least on GitHub, which might not
be our main repo but is the place most humans go to check our projects).

Rather than duplicating this information, I'd like to find a way to just
"include it" in the Readme file. As far as showing the "official" badge goes, I
believe it'd be quite simple. We can do it the same way CI tags are exposed when
using travis (just include an image). As for the rest of the tags, it might
require some extra hacking.
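
To make this concrete, the include could be as small as the following in a
project's README.rst -- the badge URL and path here are hypothetical, wherever
we end up serving rendered badges from:

    .. image:: https://governance.openstack.org/badges/nova.svg
        :target: https://governance.openstack.org/reference/projects/nova.html
        :alt: Official OpenStack project badge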

So, before I start digging more into this, I wanted to get other opinions/ideas
on this topic and how we can make this information more evident to the rest of
the community (and people not as familiar with our processes as some of us are).

Thanks in advance,
Flavio

[0] http://governance.openstack.org/reference/projects/index.html

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [magnum] Subjects to discuss during the summit

2016-10-12 Thread Zane Bitter

On 10/10/16 11:11, Spyros Trigazis wrote:

4. Finally, a thought under investigation is replacing the nodes one
   by one using a different image. e.g. Upgrade from fedora 24 to 25
   with new versions of packages all in a new qcow2 image. How could
   we update the stack for this?


This should work now - you just set an update_policy for rolling 
updates. That allows you to configure a batch size for changes (e.g. 
batch_size: 1 for one-by-one replacement; you can also configure a 
minimum number of nodes to remain in service). Then do a stack update 
with the new image name and Heat will handle the rest.
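
A minimal sketch, assuming the nodes are managed by an OS::Heat::ResourceGroup
(parameter names and counts here are invented; the rolling_update keys are
Heat's):

    heat_template_version: 2016-04-08
    parameters:
      node_image:
        type: string
      node_flavor:
        type: string
    resources:
      cluster:
        type: OS::Heat::ResourceGroup
        update_policy:
          rolling_update:
            max_batch_size: 1    # replace one node at a time
            min_in_service: 5    # nodes that must stay up during the update
            pause_time: 60       # seconds to wait between batches
        properties:
          count: 6
          resource_def:
            type: OS::Nova::Server
            properties:
              image: {get_param: node_image}   # changing this replaces nodes
              flavor: {get_param: node_flavor}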


The trickier part is disabling whatever is running on the server before 
deleting it. You can do this with a SoftwareDeployment with action: 
DELETE inside the scaled unit (if you're scaling out nested stacks). 
Alternatively, you can configure pre-delete hooks on the servers, 
configure a Zaqar queue to send hook notifications to, and use Zaqar 
subscriptions to have the notifications to that queue trigger a Mistral 
workflow.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Thiago da Silva



On 10/12/2016 07:10 AM, Giulio Fidente wrote:

hi,

we introduced support for the deployment of Ceph in the liberty 
release so that it could optionally be used as backend for one or more 
of Cinder, Glance, Nova and more recently Gnocchi.


We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on 
dedicated ceph-storage nodes so a deployment of OpenStack with Ceph 
would need at least one additional node to host a Ceph OSD.


In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not 
be replicated across the controller nodes and become unavailable if a 
controller fails, while production environments generally expect 
persistent storage to be highly available. Cinder volumes instead 
could even get lost completely in case of a permanent failure of a 
controller.


With the Newton release and the composable roles we can now deploy 
Ceph OSDs on the compute nodes, removing the requirement we had for an 
additional node to host a Ceph OSD.


I would like to ask for some feedback on the possibility of deploying 
Ceph by default in the HA scenario and use it as backend for Cinder.


Also using Swift as backend for Glance and Gnocchi is enough to cover 
the availability issue for the data, but it also means we're storing 
that data on the controller nodes which might or might not be wanted; 
I don't see a strong reason for defaulting them to Ceph, but it might 
make more sense when Ceph is available; feedback about this would be 
appreciated as well.
I think it would be important to take into account the recently created 
guiding principles [0]:


"While the software that OpenStack produces has well defined and 
documented APIs, the primary output of OpenStack is software, not API 
definitions. We expect people who say they run “OpenStack” to run the 
software produced by and in the community, rather than alternative 
implementations of the API."


In the case of Cinder, I think the situation is a bit muddy as LVM is 
not openstack software, and my limited understanding is that LVM is used 
as a reference implementation, but in the case of Swift, I think RGW 
would be considered an 'alternative implementation of the API'.


Thiago

[0] - 
https://governance.openstack.org/reference/principles.html#openstack-primarily-produces-software


Finally a shared backend (Ceph) for Nova would allow live migrations 
but would probably decrease performance for the guests in general; so I'd 
be against defaulting Nova to Ceph. Feedback?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-12 Thread Flavio Percoco

Top-posting as I'll try to summarize/re-start/reword/whatever the right word is,
this thread:

It seems to me that the problem we're trying to solve here is how we can help
voters to make more thoughtful choices (note that I'm not saying they currently
don't. I'm not judging the voters but, as others, I'm pointing fingers at the
process we've in place). A couple of points have been made around this:

- We'd like the electorate to be able to ask questions to the candidates
- The time available doesn't seem to be enough for this
- The ML is great but it might not be the right format for this, especially with
 the amount of emails going through openstack-dev

Some (rough) ideas:

- We could have a common place where we collect the threads that ask questions
 to the candidates. Ideally, this thread would be kept updated by the members
 of the community asking these questions. If I start a new thread, I get the
 link and put it in this "common place". The common place could be a wiki page
 or anything linkable from the voting URL.
- Link in the voting URL the place where the information about the questions
 being asked to the candidates are being aggregated.
- Send the ballots every day, if possible.

I don't think the above will solve all the problems that have been mentioned in
this thread. For example, it certainly doesn't solve the problem of there not
being enough time for all the candidates to reply to these questions and/or the
electorate to come up with a list of questions and/or read through the answers.

I'd like to avoid coming up with a prepared set of questions as I believe the
best discussions are triggered by candidacies, momentum and a broader set of minds
working together. I don't really have a strong opinion on having an extra week
in between but I trust Anita's feedback on the burden this will add to the
election's officers. It'd also add more stress to the candidates, fwiw.

The rough ideas above are just small steps that could help organize the
discussions and make them easier for our electorate to find. I hope the list of
issues does summarize the concerns expressed in this thread.

Flavio

On 03/10/16 11:46 -0500, Edward Leafe wrote:

So the period of self-nominations for the Technical Committee seats has ended, 
and the voting has begun. I've been a very close observer of this process for 
several cycles, and I have some ideas I'd like to share. Full disclosure: I am 
a current candidate for the TC, and have been a candidate several times in the 
past, all of which were unsuccessful.

When deciding to run, candidates write a long, thoughtful essay on their 
reasons for wanting to serve on the TC, and those essays are typically the last 
you hear from them until the election. It has been rare for anyone to ask 
follow-up questions, or to challenge the candidates to explain their positions 
more definitively. I have spoken with many people at the Summits, which always 
closely followed the TC election (warning: unscientific samples ahead!), and 
what their selection process mostly boils down to is: they pick the names they 
are most familiar with. Many people don't read those long candidacy posts, and 
nearly all couldn't remember a single point that any of the candidates had put 
forth.

We are fortunate in that all of the candidates are exceptionally 
well-qualified, and those elected have put in excellent service while on the 
TC. But one thing I'm afraid of is that we tend to get into a situation where 
groupthink [0] is very likely. There are many excellent candidates running in 
every election, but it is rare for someone who hasn't been a PTL of a large 
project, and thus very visible, has been selected. Is this really the best 
approach?

I wrote a blog post about implicit bias [1], and in that post used the example 
of blind auditions for musical orchestras radically changing the selection 
results. Before the introduction of blind auditions, men overwhelmingly 
comprised orchestras, but once the people judging the auditions had no clue as 
to whether the musician was male or female, women began to be selected much 
more in proportion to their numbers in the audition pools. So I'd like to 
propose something for the next election: have candidates self-nominate as in 
the past, but instead of writing a big candidacy letter, just state their 
interest in serving. After the nominations close, the election officials will 
assign each candidate a non-identifying label, such as a random number, and 
those officials will be the only ones who know which candidate is associated 
with which number. The nomination period can be much, much shorter, and then 
followed by a week of campaigning (the part that's really missing in the
current process). Candidates will post their thoughts and positions, and respond to 
questions from people, and this is how the voters will know who best represents 
what they want to see in their TC.

The current candidacy essay would now be posted in the campaign 

[openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Dmitry Tantsur

Hi folks!

I'd like to propose a plan on how to simultaneously extend the coverage of our 
jobs and reduce their number.


Currently, we're running one instance per job. This was reasonable when the 
coreos-based IPA image was the default, but now with tinyipa we can run up to 7 
instances (and actually do it in the grenade job). I suggest we use 6 fake bm 
nodes to make a single CI job cover many scenarios.


The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool) to be 
more in sync with how 3rd party CI does it. A special configuration option will 
be used to enable multi-instance testing to avoid breaking 3rd party CI systems 
that are not ready for it.


To ensure coverage, we'll only leave a required number of nodes "available", and 
deploy all instances in parallel.


In the end, we'll have these jobs on ironic:
gate-tempest-ironic-pxe_ipmitool-tinyipa
gate-tempest-ironic-agent_ipmitool-tinyipa

Each job will cover the following scenarios:
* partition images:
** with local boot:
** 1. msdos partition table and BIOS boot
** 2. GPT partition table and BIOS boot
** 3. GPT partition table and UEFI boot  <*>
** with netboot:
** 4. msdos partition table and BIOS boot <**>
* whole disk images:
* 5. with msdos partition table embedded and BIOS boot
* 6. with GPT partition table embedded and UEFI boot  <*>

 <*> - in the future, when we figure out UEFI testing
 <**> - we're moving away from defaulting to netboot, hence only one scenario
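
For reference, roughly what the devstack side would look like (variable names
are from ironic's devstack code; the values, and any extra switch to spread
scenarios across nodes, are illustrative):

    # illustrative local.conf fragment, not a tested configuration
    IRONIC_VM_COUNT=6                    # one fake bare-metal node per scenario
    IRONIC_VM_SPECS_RAM=384              # tinyipa keeps the per-node footprint small
    IRONIC_DEPLOY_DRIVER=agent_ipmitool  # or pxe_ipmitool for the other job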

I suggest creating the jobs for Newton and Ocata, and starting with Xenial right 
away.


Any comments, ideas and suggestions are welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Julien Danjou
On Wed, Oct 12 2016, Giulio Fidente wrote:

> Also using Swift as backend for Glance and Gnocchi is enough to cover the
> availability issue for the data, but it also means we're storing that data on
> the controller nodes which might or might not be wanted; I don't see a strong
> reason for defaulting them to Ceph, but it might make more sense when Ceph is
> available; feedback about this would be appreciated as well.

The Ceph driver of Gnocchi is better than the Swift one, so it'd make
total sense to default to Ceph if Ceph is available.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][gnocchi]gnocchi resource id has been SHA-1, How to find the resources belong to the monitored items?

2016-10-12 Thread Julien Danjou
On Wed, Oct 12 2016, 冯朝阳 wrote:

> Thank you very much for your reply, Julien!
> We are using the Liberty version of OpenStack (gnocchi version 1.3), which has
> no original_resource_id. Can you tell me what I should do?

You should upgrade. :-)
Indeed, this original_resource_id was introduced after 1.3, IIRC.

Though you should be good for searching by resource_id.
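
If you can't upgrade right away, note that the mapping is deterministic, so you
can compute it in the forward direction. A sketch -- the namespace constant is
the one in gnocchi's utils module, but double-check it against your release:

    import uuid

    # namespace gnocchi uses when hashing a non-UUID resource id
    RESOURCE_ID_NAMESPACE = uuid.UUID('0a7a15ff-aa13-4ac2-897c-9bdf30ce175b')

    def gnocchi_resource_id(original_id):
        """Return the id gnocchi stores for a given original resource id."""
        try:
            return uuid.UUID(original_id)   # already a UUID: kept as-is
        except ValueError:
            return uuid.uuid5(RESOURCE_ID_NAMESPACE, original_id)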

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Default the HA scenario to Ceph

2016-10-12 Thread Giulio Fidente

hi,

we introduced support for the deployment of Ceph in the liberty release 
so that it could optionally be used as backend for one or more of 
Cinder, Glance, Nova and more recently Gnocchi.


We used to deploy Ceph MONs on the controller nodes and Ceph OSDs on 
dedicated ceph-storage nodes so a deployment of OpenStack with Ceph 
would need at least one additional node to host a Ceph OSD.


In our HA scenario the storage backends are configured as follows:

Glance -> Swift
Nova (ephemeral) -> Local
Cinder (persistent) -> LVM (on controllers)
Gnocchi -> Swift

The downside of the above configuration is that Cinder volumes can not 
be replicated across the controller nodes and become unavailable if a 
controller fails, while production environments generally expect 
persistent storage to be highly available. Cinder volumes instead could 
even get lost completely in case of a permanent failure of a controller.


With the Newton release and the composable roles we can now deploy Ceph 
OSDs on the compute nodes, removing the requirement we had for an 
additional node to host a Ceph OSD.


I would like to ask for some feedback on the possibility of deploying 
Ceph by default in the HA scenario and use it as backend for Cinder.


Also using Swift as backend for Glance and Gnocchi is enough to cover 
the availability issue for the data, but it also means we're storing 
that data on the controller nodes which might or might not be wanted; I 
don't see a strong reason for defaulting them to Ceph, but it might make 
more sense when Ceph is available; feedback about this would be 
appreciated as well.


Finally, a shared backend (Ceph) for Nova would allow live migrations but 
would probably decrease performance for the guests in general; so I'd be 
against defaulting Nova to Ceph. Feedback?
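
For reference, a sketch of what the default could look like, expressed as Heat
parameters (names as in tripleo-heat-templates' storage environment -- please
verify against the templates; the values reflect this proposal, not today's
defaults):

    parameter_defaults:
      CinderEnableRbdBackend: true      # Cinder volumes on Ceph, replicated
      CinderEnableIscsiBackend: false   # drop the controller-local LVM backend
      NovaEnableRbdBackend: false       # keep Nova ephemeral storage local
      GlanceBackend: swift              # keep Glance images in Swift
      GnocchiBackend: swift             # keep Gnocchi metrics in Swift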

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][gnocchi]gnocchi resource id has been SHA-1, How to find the resources belong to the monitored items?

2016-10-12 Thread 冯朝阳
Thank you very much for your reply, Julien!
We are using the Liberty version of OpenStack (gnocchi version 1.3), which has 
no original_resource_id. Can you tell me what I should do?


 
 
-- Original --
From:  "Julien Danjou";
Date:  Wed, Oct 12, 2016 06:22 PM
To:  "冯朝阳"; 
Cc:  "openstack-dev"; 
Subject:  Re: [openstack-dev] [telemetry][gnocchi]gnocchi resource id has been 
SHA-1,How to find the resources belong to the monitored items?

 
On Wed, Oct 12 2016, 冯朝阳 wrote:

> such as(The following example is used gnocchi version = 1.3 (L version)):
> # gnocchi resource list
>  a23d72a8-6fa3-545a-8b66-e512cac3aea3 | instance_network_interface |
> 12fc4501-8408-43b2-8cb7-6f1e1462d80e | 48d883ba-76d0-4e30-ac6b-ab85d1041c96 |
> 2016-10-12T08:05:55.397722+00:00 | None
>
>
> # neutron port-show a23d72a8-6fa3-545a-8b66-e512cac3aea3
> Unable to find port with name 'a23d72a8-6fa3-545a-8b66-e512cac3aea3'

First, an instance_network_interface is a NIC on a compute node, so it's unrelated
to a Neutron port. I guess it might be connected to one, but that's only the
case if you're actually using Neutron.

Then, you can use gnocchi resource search using the original_resource_id
or instance_id field to find the NIC you're looking for.

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Markus Zoeller
On 11.10.2016 16:09, Kashyap Chamarthy wrote:
> On Tue, Oct 11, 2016 at 02:09:33PM +0200, Markus Zoeller wrote:
> 
> [Snip well-written backrgound detail]
> 
>> Request
>> ---
>> My question is, if you have interest in this plugin and its
>> capabilities, are you at the Summit in Barcelona and do you have time
>> for a short working session there? Maybe on Friday during the
>> contributors meetup?
>>
>> https://etherpad.openstack.org/p/ocata-nova-summit-meetup
> 
> I've made a note to be present there.
> 
>> Possible action/discussion items
>> 
>> * How to create a "*.deb" package out of the source code of
>> libvirt/qemu? (surprisingly enough, I'm still struggling with this)
> 
> Probably check with some of the Debian packagers on the list?
> 
> FWIW, I'm familiar with producing RPM builds for libvirt / QEMU for
> Fedora from their respective Git sources, as I make scratch builds for
> testing often. 
> 
>> * How should we specify the packages and versions to install in the gate
>> job?
>> * When do we build new packages and increase the version in the gate?
>> * Actually hack some code and push it
> 
> Another point for working there
> 
>https://review.openstack.org/#/c/313568/4 -- Plugin to setup
>libvirt/QEMU from tar releases 
> 

Good pointer. I borrowed some of that for the Vagrant setup I mentioned.
It still does a "make && make install", which can take some time.

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Markus Zoeller
On 12.10.2016 10:52, Kashyap Chamarthy wrote:
> On Tue, Oct 11, 2016 at 05:53:24PM +0200, Thomas Goirand wrote:
>> On 10/11/2016 02:09 PM, Markus Zoeller wrote:
>>> * How to create a "*.deb" package out of the source code of
>>> libvirt/qemu? (surprisingly enough, I'm still struggling with this)
>>
>> What version of libvirt / qemu are you trying to build? libvirt 2.3.0
>> was released 6 days ago, and uploaded to Debian unstable 3 days ago. I
>> don't think you need to manage the packaging yourself
> 
> I think what Markus is talking about is the ability to produce
> packages from arbitrary Git commits to be able to test Nova with certain
> new features from libvirt / QEMU which won't be available until a formal
> release is pushed out.
> 
> Somewhat analogous to the Fedora virt-preview-repository[1]:
> 
> "The Virtualization Preview Repository is for people who would like
> to test the very latest virtualization related packages. This
> repository is intended primarily as an aid to testing / early
> experimentation. It is not intended for 'production' deployment."
> 
> 
> [1] https://fedoraproject.org/wiki/Virtualization_Preview_Repository
> 
> [...]
> 

Yes, exactly what Kashyap said. At the Nova Mitaka midcycle we talked
briefly about that and came to the conclusion that having packages saves
a lot of (overall) time in the gate queue, compared to the usual "make
&& make install" every time. I'm new to the packaging topic and don't
know the common tool-chain for such tasks.
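
From what I've gathered so far, the rough shape of that tool-chain would be
something like this, reusing the distro packaging and swapping in a tarball
built from an arbitrary git commit (untested on my side; version strings are
illustrative):

    # on a Debian/Ubuntu box with deb-src entries enabled; needs devscripts
    sudo apt-get build-dep libvirt            # install the build dependencies
    apt-get source libvirt                    # distro packaging -> libvirt-X.Y.Z/
    # build an "upstream" tarball from the git commit under test
    git -C ~/src/libvirt archive --prefix=libvirt-2.3.0+git/ <commit> \
        | gzip > libvirt_2.3.0+git.orig.tar.gz
    cd libvirt-X.Y.Z/
    uupdate ../libvirt_2.3.0+git.orig.tar.gz  # graft debian/ onto the new tarball
    cd ../libvirt-2.3.0+git/
    dpkg-buildpackage -b -us -uc              # unsigned .debs land in the parent dir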

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Presence at PTG Atlanta in February

2016-10-12 Thread Masayuki Igawa
Thank you for clarifying, Thierry! I got it!
I'm looking forward to seeing the registration :)


Best Regards,
-- Masayuki Igawa
On Wed, 12 Oct 2016 at 18:46 Thierry Carrez  wrote:

> Masayuki Igawa wrote:
> > Hi Ken'ichi,
> >
> > Thanks for bringing this up.
> >
> > On Tue, Oct 11, 2016 at 6:50 AM, Ken'ichi Ohmichi 
> wrote:
> >> Hi QA team,
> >>
> >> As you know, the first PTG(Project Teams Gathering) happens at Atlanta
> >> 20th-24th February the next year.
> >> After Barcelona, OpenStack Summit will be separated into two parts
> >> (conferences and design sessions) and PTG is a new format for design
> >> sessions for developers(Please see the detail on [1]).
> >> QA is always important factor of developments, and I am thinking the
> >> presence of QA project is good for the community.
> >
> > Yeah, I agree with you. (I'm not sure I'll be there, though.)
> > Do we(QA team) need to register or something for the PTG?
>
> Current plan is to have registration open shortly after Barcelona.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][gnocchi]gnocchi resource id has been SHA-1, How to find the resources belong to the monitored items?

2016-10-12 Thread Julien Danjou
On Wed, Oct 12 2016, 冯朝阳 wrote:

> such as(The following example is used gnocchi version = 1.3 (L version)):
> # gnocchi resource list
>  a23d72a8-6fa3-545a-8b66-e512cac3aea3 | instance_network_interface |
> 12fc4501-8408-43b2-8cb7-6f1e1462d80e | 48d883ba-76d0-4e30-ac6b-ab85d1041c96 |
> 2016-10-12T08:05:55.397722+00:00 | None
>
>
> # neutron port-show a23d72a8-6fa3-545a-8b66-e512cac3aea3
> Unable to find port with name 'a23d72a8-6fa3-545a-8b66-e512cac3aea3'

First, an instance_network_interface is a NIC on a compute node, so it's unrelated
to a Neutron port. I guess it might be connected to one, but that's only the
case if you're actually using Neutron.

Then, you can use gnocchi resource search using the original_resource_id
or instance_id field to find the NIC you're looking for.

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Dmitry Tantsur

On 10/12/2016 11:59 AM, Thierry Carrez wrote:

Qiming Teng wrote:

On Tue, Oct 11, 2016 at 10:39:11PM -0500, Michał Jastrzębski wrote:

One of reasons we created PTG in the first place is that Summit became
big and expensive, and project developers had harder and harder time
attending it due to budget issues.


So a trip to PTG is really cheaper than summit? Is the PTG one
sponsored by someone?


PTGs happen in more cost-effective locations, airport hubs with cheaper
hotels, which should lower the cost of attending. Yes, I'm pretty sure
traveling to Atlanta downtown for a week will be cheaper than going to
Barcelona, Boston or Sydney downtown for a week.


A friendly reminder that there is a world outside of the US ;) 

Yes, I understand that your statement is true for most of the developers, but 
for EU folks the next year offers four overseas trips. I'm pretty sure most of us 
will only make it to either the PTG or the summit. And it's not only budget: my family 
said a firm NO to me going to both events, for example :)


And while PTG will surely be cheaper than the Summit, the Summit is not going 
away (for marketing, management, few developer representatives), so the total 
expense is unlikely to drop IMO.





[...]
PTG becomes very important for project team, summit arguably will
become less important as many of developers will be able to afford
only PTGs.


Summit is less (or just NOT) important to developers, emm ... that is
true if 1) the team knows exactly what users/ops want so they don't even
bother interact with them, just focus on getting things done; 2) the
person who approves your trip request also believes so.


If we say that "Don't expect Ops at PTG", that means
OpenStack dev community will become even more disconnected from Ops
community.


Wasn't that part of the plan? [...]


Yes, the plan is (amongst other things) to make sure that upstream
developers are available to interact with users (operators, but also app
developers...) during the very week where *all* our community gets
together (the OpenStack Summit). Currently we try to get things done at
the same time, which results in hard choices between listening and
doing. By clearly setting out separate times for each activity, we make
sure we stay focused.


Sorry, but to me it's extremely unrealistic to expect a big number of developers 
at the Summit any more. Sending folks to both events doubles the travel budget, 
and I know that many companies have struggles with sending people to one event 
already.




For an ops-focused team like Kolla, I'd argue that participating in
OpenStack Summits (and Ops midcycles, to be honest) is essential. That
doesn't mean that everyone has to go to every single event, but the
Kolla team should definitely self-organize to make sure that enough
Kolla people go to those events to close the feedback loop. The PTG is
not where the feedback loop is closed. It's a venue for the *team* to
get together and build the shared understandings (priorities,
assignment, bootstrap work) necessary to a successful development cycle
(some teams need that more than others, so YMMV).




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc][devstack][mitaka]

2016-10-12 Thread Navdeep Uniyal
Hi Cathy,

Thanks for your reply. I have the setup done without any errors with only one 
vm in the chain. I want to move all the icmp traffic from vm1 to vm3 via vm2. 
My Flow classifier looks like:
"neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix 
10.0.0.18/32 --destination-ip-prefix 10.0.0.6/32 --protocol icmp FC1"
But using tcpdump on vm2 ingress port, I could not see any traffic. Please let 
me know how I can debug this and what the possible issue could be.


Best Regards,
Navdeep Uniyal


From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Dienstag, 11. Oktober 2016 19:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc][devstack][mitaka]

Hi Navdeep,

Please see inline.

Cathy

From: Navdeep Uniyal [mailto:navdeep.uni...@neclab.eu]
Sent: Tuesday, October 11, 2016 5:42 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-sfc][devstack][mitaka]

Hi all,

I have been trying out networking-sfc to create a service function chain in 
OpenStack. I could create all the port pairs, port-pair-groups, the flow classifier 
and the chain, but I could not see the packets on the desired hops.
I am trying to create a simple sfc with 3 VMs(vm1 to vm3) in the setup. I just 
want to check how it works. In my setup, vm1 is the Traffic generator(iperf) 
and vm3 is the traffic receiver (iperf server). Now, the 2 VMs (vm2 and vm3) are 
in the same network as vm1, and I want to move the iperf traffic from 
vm1->vm2->vm3. In order to achieve this, I have created 2 port pairs for vm2 
and vm3, placed both pairs in separate port pair groups (PG1 and PG2), and 
created a flow classifier FC1 and finally the chain with PG1, PG2 and FC1. Now my 
question is, is my setup correct in order to achieve the sfc result as I stated 
above? Do I need to include the vm1 in the port pair group?

Cathy> You only need to include VM2 in a port pair group. Traffic source and 
traffic destination do not need to be included in the chain's port pair group; 
instead, their IP addresses should be included in the flow classifier so that 
the system knows which flow needs to go through the chain. Here is a link to 
the wiki.
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
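
For example, for a chain that steers traffic through vm2 only, the setup would
look roughly like this (port names and IDs below are illustrative):

    # vm2's two neutron ports form the only port pair in the chain
    neutron port-pair-create --ingress vm2-port1 --egress vm2-port2 PP2
    neutron port-pair-group-create --port-pair PP2 PG1
    # classify vm1 -> vm3 ICMP traffic into the chain
    neutron flow-classifier-create --ethertype IPv4 --protocol icmp \
        --source-ip-prefix 10.0.0.18/32 --destination-ip-prefix 10.0.0.6/32 \
        --logical-source-port <vm1-port-id> FC1
    neutron port-chain-create --port-pair-group PG1 --flow-classifier FC1 PC1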

Cathy




Below is the flow classifier:

+-----------------------------+--------------------------------------+
| Field                       | Value                                |
+-----------------------------+--------------------------------------+
| description                 |                                      |
| destination_ip_prefix       |                                      |
| destination_port_range_max  |                                      |
| destination_port_range_min  |                                      |
| ethertype                   | IPv4                                 |
| id                          | e5000ade-50ad-41ed-a159-b89c4blp97ec |
| l7_parameters               | {}                                   |
| logical_destination_port    |                                      |
| logical_source_port         | 63cdf664-dd67-455c-8345-f01ef58c23e5 |
| name                        | FC1                                  |
| project_id                  | 6b90cd3356144681b44274d4881c5fc7     |
| protocol                    | tcp                                  |
| source_ip_prefix            | 10.0.0.18/32                         |
| source_port_range_max       |                                      |
| source_port_range_min       |                                      |
| tenant_id                   | 6b90cd3310104681b44274d4881c5fc7     |
+-----------------------------+--------------------------------------+



Is there any wiki where an example use case is explained, with a testing scenario?


Best Regards,
Navdeep Uniyal
Email: navdeep.uni...@neclab.eu
-
Software Engineer
NEC Europe Ltd.
NEC Laboratories Europe
Kurfürstenanlage 36, D-69115 Heidelberg,

NEC Europe Ltd | Registered Office: Athene, Odyssey Business Park, West End  
Road, London, HA4 6QE, GB | Registered in England 2832014
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][gnocchi]gnocchi resource id has been SHA-1, How to find the resources belong to the monitored items?

2016-10-12 Thread 冯朝阳
Hello All,


When Ceilometer sends data over to Gnocchi, Gnocchi applies uuid.uuid5 to the 
resource id. How can we find out which virtual machine or monitored item a 
given resource id belongs to? Is there no way to work with the resource id?


such as(The following example is used gnocchi version = 1.3 (L version)):
# gnocchi resource list
 a23d72a8-6fa3-545a-8b66-e512cac3aea3 | instance_network_interface | 
12fc4501-8408-43b2-8cb7-6f1e1462d80e | 48d883ba-76d0-4e30-ac6b-ab85d1041c96 | 
2016-10-12T08:05:55.397722+00:00 | None  


# neutron port-show a23d72a8-6fa3-545a-8b66-e512cac3aea3
Unable to find port with name 'a23d72a8-6fa3-545a-8b66-e512cac3aea3'


How should we find out which instance this instance_network_interface belongs 
to, and which port it corresponds to?


Thank you very much for reading my question, I hope to get your help!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Thierry Carrez
Qiming Teng wrote:
> On Tue, Oct 11, 2016 at 10:39:11PM -0500, Michał Jastrzębski wrote:
>> One of reasons we created PTG in the first place is that Summit became
>> big and expensive, and project developers had harder and harder time
>> attending it due to budget issues.
> 
> So a trip to PTG is really cheaper than summit? Is the PTG one
> sponsored by someone?

PTGs happen in more cost-effective locations, airport hubs with cheaper
hotels, which should lower the cost of attending. Yes, I'm pretty sure
traveling to Atlanta downtown for a week will be cheaper than going to
Barcelona, Boston or Sydney downtown for a week.

>> [...]
>> PTG becomes very important for project team, summit arguably will
>> become less important as many of developers will be able to afford
>> only PTGs.
> 
> Summit is less (or just NOT) important to developers, emm ... that is
> true if 1) the team knows exactly what users/ops want so they don't even
> bother interact with them, just focus on getting things done; 2) the
> person who approves your trip request also believes so.
> 
>> If we say that "Don't expect Ops at PTG", that means
>> OpenStack dev community will become even more disconnected from Ops
>> community.
> 
> Wasn't that part of the plan? [...]

Yes, the plan is (amongst other things) to make sure that upstream
developers are available to interact with users (operators, but also app
developers...) during the very week where *all* our community gets
together (the OpenStack Summit). Currently we try to get things done at
the same time, which results in hard choices between listening and
doing. By clearly setting out separate times for each activity, we make
sure we stay focused.

For an ops-focused team like Kolla, I'd argue that participating in
OpenStack Summits (and Ops midcycles, to be honest) is essential. That
doesn't mean that everyone has to go to every single event, but the
Kolla team should definitely self-organize to make sure that enough
Kolla people go to those events to close the feedback loop. The PTG is
not where the feedback loop is closed. It's a venue for the *team* to
get together and build the shared understandings (priorities,
assignment, bootstrap work) necessary to a successful development cycle
(some teams need that more than others, so YMMV).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Presence at PTG Atlanta in February

2016-10-12 Thread Thierry Carrez
Masayuki Igawa wrote:
> Hi Ken'ichi,
> 
> Thanks for bringing this up.
> 
> On Tue, Oct 11, 2016 at 6:50 AM, Ken'ichi Ohmichi  
> wrote:
>> Hi QA team,
>>
>> As you know, the first PTG(Project Teams Gathering) happens at Atlanta
>> 20th-24th February the next year.
>> After Barcelona, OpenStack Summit will be separated into two parts
>> (conferences and design sessions) and PTG is a new format for design
>> sessions for developers(Please see the detail on [1]).
>> QA is always important factor of developments, and I am thinking the
>> presence of QA project is good for the community.
> 
> Yeah, I agree with you. (I'm not sure I'll be there, though.)
> Do we(QA team) need to register or something for the PTG?

Current plan is to have registration open shortly after Barcelona.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-12 Thread Thierry Carrez
Ed Leafe wrote:
> Why do we need a week to nominate? Open it up a month before the election, 
> and close it a week before. Or open it for two days, and close it a week 
> before. I don’t understand why, other than procrastination, we need such a 
> long period. If you’re serious about serving, throw your hat into the ring.

At every PTL election there are people missing the call, and the noise
on the mailing-list for a full week helps them get the message. I agree
it's slightly less of a problem with the TC election -- although I bet
that some people decide to run after seeing the current mix of proposed
candidates (the "heck, if they can do it, I probably can too" reflex).
Since we use Condorcet, the more candidates the better...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG space request

2016-10-12 Thread Thierry Carrez
Emilien Macchi wrote:
> I would like to request for some space dedicated to TripleO project
> for the first OpenStack PTG.
> 
> https://www.openstack.org/ptg/
> 
> The event will happen in February 2017 during the next PTG in Atlanta.
> Any feedback is welcome,

Just a quick note: as you can imagine we have finite space at the event,
and the OpenStack Foundation wants to give priority to teams which have
a diverse affiliation (or which are not tagged "single-vendor").
Depending on which teams decide to take advantage of the event and which
don't, we may or may not be able to offer space to single-vendor
projects -- and TripleO is currently tagged single-vendor.

The rationale is, the more organizations are involved in a given project
team, the more value there is to offer common meeting space to that team
for them to sync on priorities and get stuff done. If more than 90% of
contributions / reviews / core reviewers come from a single
organization, there is less need for coordination and less value in having
all those people from a single org to travel to a distant place to have
a team meeting. And as far as recruitment of new team members goes (to
increase that diversity), the OpenStack Summit will be a better venue to
do that.

I hope we'll be able to accommodate you, though. And in all cases
TripleO people are more than welcome to join the event to coordinate
with other teams. It's just not 100% sure we'll be able to give you a
dedicated room for multiple days. We should know better in a week or so,
once we get a good idea of who plans to meet at the event and who doesn't.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] devstack-plugin additional-pkg-repos: ocata summit working session?

2016-10-12 Thread Kashyap Chamarthy
On Tue, Oct 11, 2016 at 05:53:24PM +0200, Thomas Goirand wrote:
> On 10/11/2016 02:09 PM, Markus Zoeller wrote:
> > * How to create a "*.deb" package out of the source code of
> > libvirt/qemu? (surprisingly enough, I'm still struggling with this)
> 
> What version of libvirt / qemu are you trying to build? libvirt 2.3.0
> was released 6 days ago, and uploaded to Debian unstable 3 days ago. I
> don't think you need to manage the packaging yourself

I think what Markus is talking about is the ability to produce
packages from arbitrary Git commits to be able to test Nova with certain
new features from libvirt / QEMU which won't be available until a formal
release is pushed out.

Somewhat analogous to the Fedora virt-preview repository[1]:

"The Virtualization Preview Repository is for people who would like
to test the very latest virtualization related packages. This
repository is intended primarily as an aid to testing / early
experimentation. It is not intended for 'production' deployment."


[1] https://fedoraproject.org/wiki/Virtualization_Preview_Repository
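
For illustration, here is a minimal sketch (in Python; the helper name,
paths, and flow are examples only, not Markus's actual tooling) of such
a snapshot build, assuming the distro's existing debian/ packaging can
be reused unchanged and that git, dpkg-buildpackage, and the build
dependencies are installed:

    import os
    import shutil
    import subprocess
    import tempfile

    def build_snapshot_deb(git_url, commit, distro_pkg_dir):
        """Build unsigned .debs from an arbitrary upstream commit by
        borrowing the distro package's existing debian/ directory.
        Hypothetical helper, for illustration only.
        """
        src = tempfile.mkdtemp(prefix='snapshot-src-')
        subprocess.check_call(['git', 'clone', git_url, src])
        subprocess.check_call(['git', 'checkout', commit], cwd=src)
        # Reuse the distro packaging as-is; manual fixups may be needed
        # when upstream moved files around since the packaged release.
        shutil.copytree(os.path.join(distro_pkg_dir, 'debian'),
                        os.path.join(src, 'debian'))
        # -us -uc: skip signing; -b: binary packages only.
        subprocess.check_call(['dpkg-buildpackage', '-us', '-uc', '-b'],
                              cwd=src)

Any packaging breakage found this way would still go back to the
package maintainer, as Thomas suggests.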

[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Event notification descriptors/schemas (? swagger ?)

2016-10-12 Thread Julien Danjou
On Tue, Oct 11 2016, Joshua Harlow wrote:

> Have there been any ideas from folks to split those 'event_definitions.yaml'
> into something else (a notifications schema repo?)? I'd be up for helping do
> that (nice to have would be an included ability/code-gen(?) to turn those
> schemas into code for various languages [starting with the typical ones,
> python, go(?), java, ...]).
>
> Then we could also hold the emitting projects accountable for their events
> being emitted (and the formats and versions they emit), because overall I'd
> like to get away from 'the payload format OpenStack services emit could be
> described as the Wild West' (found on that events.html site, lol).

I proposed exactly that in 2013 in Hong Kong and it was well received
by Oslo folks. I even started something¹ at some point, but it got so
little traction from projects, and the friction was so high, that I gave up. :)

¹  https://review.openstack.org/#/c/66566/
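
To make the idea concrete, here is a minimal sketch of what generated
code from such a schema repo might look like. It is purely hypothetical:
the trait layout mimics an event_definitions.yaml entry, but the
generator itself is an assumption, not an existing tool.

    # Stand-in for one parsed event_definitions.yaml entry.
    SCHEMAS = [{
        'event_type': 'compute.instance.create.end',
        'traits': {
            'instance_id': {'fields': 'payload.instance_id'},
            'state': {'fields': 'payload.state'},
        },
    }]

    def make_event_class(defn):
        """Generate a tiny Python class for one event definition."""
        trait_names = sorted(defn['traits'])

        def __init__(self, **traits):
            # Reject events that don't carry all declared traits.
            missing = set(trait_names) - set(traits)
            if missing:
                raise ValueError('missing traits: %s' % sorted(missing))
            self.__dict__.update(traits)

        cls_name = defn['event_type'].title().replace('.', '')
        return type(str(cls_name), (object,),
                    {'__init__': __init__, 'trait_names': trait_names})

    classes = {d['event_type']: make_event_class(d) for d in SCHEMAS}
    event = classes['compute.instance.create.end'](instance_id='42',
                                                   state='active')

A real generator would also carry a schema version per event type, so
emitters could be held to the formats they publish.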

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Presence at PTG Atlanta in February

2016-10-12 Thread Masayuki Igawa
Hi Ken'ichi,

Thanks for bringing this up.

On Tue, Oct 11, 2016 at 6:50 AM, Ken'ichi Ohmichi  wrote:
> Hi QA team,
>
> As you know, the first PTG (Project Teams Gathering) happens in Atlanta
> 20th-24th February next year.
> After Barcelona, the OpenStack Summit will be separated into two parts
> (conferences and design sessions), and the PTG is a new format for design
> sessions for developers (please see the details at [1]).
> QA is always an important factor in development, and I think the
> presence of the QA project is good for the community.

Yeah, I agree with you. (I'm not sure I'll be there, though.)
Do we (the QA team) need to register or something for the PTG?

Best Regards,
-- Masayuki Igawa


>
> Thanks
> Ken Omichi
>
> ---
> [1]: http://www.openstack.org/ptg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Cross Project workshops in Barcelona including "Re-inventing the TC"

2016-10-12 Thread Thierry Carrez
Davanum Srinivas wrote:
> Please see the cross project schedule over at:
> https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Cross%20Project%20workshops%3A

Note that following the TC meeting yesterday, a small adjustment was
pushed to fix a conflict for one of the session moderators. The sessions
from Tuesday at 3:55pm were swapped with the sessions at 5:05pm.

I hope this doesn't create more issues than it solves :)
See you there!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Qiming Teng
On Tue, Oct 11, 2016 at 10:39:11PM -0500, Michał Jastrzębski wrote:
> Hello Tom,
> 
> I must say I think this is bad news - especially for ops-centric
> projects like Kolla.
> One of the reasons we created the PTG in the first place is that the
> Summit became big and expensive, and project developers had a harder
> and harder time attending it due to budget issues.

So is a trip to the PTG really cheaper than the summit? Is the PTG
sponsored by someone?

> The PTG would offer many of these devs
> the opportunity to talk to their peers and other project developers, and
> to build the OpenStack dev community.

> If a project attends the PTG, and most of them
> plan to (again, Kolla included), that means travel for the project team.

A big IF here ...

> If we hold 2 PTGs per year, that's a big hit on the travel budget (but
> still smaller than the summit).
> 
> The PTG becomes very important for the project team; the summit arguably
> will become less important, as many developers will only be able to
> afford the PTGs.

Summit is less (or just NOT) important to developers, emm ... that is
true if 1) the team knows exactly what users/ops want so they don't even
bother to interact with them, just focus on getting things done; 2) the
person who approves your trip request also believes so.

> If we say "Don't expect Ops at the PTG", that means the
> OpenStack dev community will become even more disconnected from the Ops
> community.

Wasn't that part of the plan? Or maybe the Ops will travel four times a
year, go to the summit twice for (watching) shows and go to the PTGs
twice to interact with the team that is busy discussing implementation
details ...

> Let's not forget that OpenStack is ultimately an operators'
> tool; they need to care for it, and in my opinion having a close
> relationship with them is extremely important for the good of the project.
> If we raise the cost of keeping this relationship, that might really hurt
> OpenStack.

> Cheers,
> Michal
> 

I really hope I was totally wrong.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Oct.12

2016-10-12 Thread joehuang
Agenda of Oct.12 weekly meeting:


# Ocata cycle design summit sessions: [1][2]

# Atlanta PTG (Project Teams Gathering) presence

# release for stable/newton and tricircle cleaning

# open discussion


[1]https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Tricircle%3A
[2]https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Tricircle

How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting,
every Wednesday starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Tricircle cleaning

2016-10-12 Thread joehuang
Great, all patches are ready now for Tricircle cleaning:

Review and Merge order:

1. central and local plugin for l3: https://review.openstack.org/#/c/378476/
2. remove api gateway code: https://review.openstack.org/#/c/384182/
3. security group support: https://review.openstack.org/#/c/380054/
4. single node installation: https://review.openstack.org/#/c/384872/
5. multi-node installation: https://review.openstack.org/#/c/385306/

Best Regards
Chaoyi Huang (joehuang)

From: Vega Cai [luckyveg...@gmail.com]
Sent: 12 October 2016 15:52
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]Tricricle cleanning

Hello, the patch for the installation guide update for multi-node deployment
has just been submitted. Here is the link:
https://review.openstack.org/#/c/385306/

BR
Zhiyuan

On Wed, 12 Oct 2016 at 10:17, joehuang wrote:
Hello, Team,

I understand this period is keeping us quite busy, as the Tricircle cleaning is
in parallel with the Newton release period. Fortunately we have made great
progress on the Tricircle cleaning, and the Newton release is nearing completion.

(https://etherpad.openstack.org/p/TricircleSplitting)


  *   DONE 1. update README: https://review.openstack.org/#/c/375218/
  *   DONE 1. local plugin spec: https://review.openstack.org/#/c/368529/
  *   DONE 1. local and central plugin: https://review.openstack.org/#/c/375281/
  *   2. central and local plugin for l3: https://review.openstack.org/#/c/378476/
  *   2. remove api gateway code: https://review.openstack.org/#/c/384182/
  *   3. security group support: https://review.openstack.org/#/c/380054/
  *   3. installation guide update (no api gateway):
      *   Part 1: single node installation: https://review.openstack.org/#/c/384872/
      *   Part 2: multi-node installation: WIP

  *   Try to get these cleaning patches merged before Oct. 19, before the
      Barcelona summit.

Except for the "Multi nodes installation" patch, all other patches are in the
review queue for the Tricircle cleaning. After these patches get merged, we
will have a first clean Tricircle baseline for networking automation; it would
be good to tag this baseline as a milestone.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Tricircle cleaning

2016-10-12 Thread Vega Cai
Hello, the patch for the installation guide update for multi-node deployment
has just been submitted. Here is the link:
https://review.openstack.org/#/c/385306/

BR
Zhiyuan

On Wed, 12 Oct 2016 at 10:17 joehuang  wrote:

> Hello, Team,
>
> I understand this period is keeping us quite busy, as the Tricircle
> cleaning is in parallel with the Newton release period. Fortunately we have
> made great progress on the Tricircle cleaning, and the Newton release is
> nearing completion.
>
> (https://etherpad.openstack.org/p/TricircleSplitting)
>
>
> - DONE 1. update README: https://review.openstack.org/#/c/375218/
> - DONE 1. local plugin spec: https://review.openstack.org/#/c/368529/
> - DONE 1. local and central plugin: https://review.openstack.org/#/c/375281/
> - 2. central and local plugin for l3: https://review.openstack.org/#/c/378476/
> - 2. remove api gateway code: https://review.openstack.org/#/c/384182/
> - 3. security group support: https://review.openstack.org/#/c/380054/
> - 3. installation guide update (no api gateway):
>   - Part 1: single node installation: https://review.openstack.org/#/c/384872/
>   - Part 2: multi-node installation: WIP
>
> - Try to get these cleaning patches merged before Oct. 19, before the
>   Barcelona summit.
>
>
> Except for the "Multi nodes installation" patch, all other patches are in
> the review queue for the Tricircle cleaning. After these patches get
> merged, we will have a first clean Tricircle baseline for networking
> automation; it would be good to tag this baseline as a milestone.
>
> Best Regards
> Chaoyi Huang (joehuang)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Clint Byrum
Excerpts from Steven Dake (stdake)'s message of 2016-10-12 06:42:43 +:
> Tom,
> 
> No flame, just observation about the reality of these changes.
> 
> I think we missed this communication on the mailing list or in the FAQs or 
> somewhere else.  I think most engineering-focused organizations are looking 
> at the PTGs only and not really considering the summit for budget planning.  
> If folks knew the operators were only going to be at the OpenStack Summit, I 
> think that might change budget planning for engineering organizations.  Seems 
> like more siloing to me, not less.  We need to integrate OpenStack’s 
> development with Operators as well as the Operators’ customers (the cloud 
> consumers the Operators deliver to).
> 
> Does the foundation not want to co-locate the ops summit at the PTG because 
> the ops summit feeds into the OpenStack Summit main ops day(s)?
> 

Agree, on the surface it looks like this adds a brick or two on top of
the existing wall that devs throw things over.

However, I think the reality is those bricks were already there, and
we've all been pretending this isn't what is already happening.

So, while I do want to make sure enough of our architects and designers
go to the summit to truly understand user needs, I also think it has
been proven ineffective to also throw all of the coders into that mix and
expect them to be productive.

I recall many of us huddled in the dev pit and at parties at summits
trying desperately to have deep technical conversations while the
maelstrom was happening around us. And then the few who were fortunate
enough to go to the mid-cycle would get into quiet rooms for a few days,
and _actually_ design the things our users need, but 3 months late,
and basically for the next release.

> I don’t have any easy solutions for this problem, but the expectation that 
> project developers are required at 3 week-long events instead of 2 wasn’t 
> clearly communicated and should be rectified beyond a post to the 
> openstack-dev mailing list where most people filter messages by tags (i.e. 
> your message is probably not reaching the correct audience).

Where did you get three?

PTG - write code, design things (replaces mid-cycles)
Summit - listen to users, showcase designs, state plans for next release

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Karbor] Questions about algorithm of building resource graph

2016-10-12 Thread zengchen


Hi karbor guys:


Yesterday, I discussed this question with others at the weekly
meeting. Maybe I did not describe the question clearly.
Today, I discussed it with chenying and reached an agreement with him. I have
updated the question in the attached file and hope for
your ideas. Please take a look at it. Thanks very much!
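
For readers without the attachment, here is a minimal sketch of the
general approach in question, assuming (as in Karbor's
protectable-plugin model) that each resource can report the resources
it depends on; the actual algorithm in the slides may differ:

    from collections import deque

    def build_resource_graph(root_resources, get_dependents):
        """Breadth-first construction of a resource dependency graph.

        get_dependents(resource) returns the resources the given one
        depends on. A shared dependency (e.g. a volume attached to two
        servers) is visited only once, so the result is a DAG, not a tree.
        """
        graph = {}
        queue = deque(root_resources)
        while queue:
            resource = queue.popleft()
            if resource in graph:
                continue  # already visited via another parent
            dependents = get_dependents(resource)
            graph[resource] = dependents
            queue.extend(dependents)
        return graph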


zengchen





At 2016-10-11 09:06:33, "zengchen"  wrote:

Hi karbor guys:


I have some questions about the algorithm for building the resource graph. I
have described them in the attached file.
Any thoughts are welcome. Thanks very much!


zengchen

Questions about algorithm of building resource graph.pptx
Description: MS-Powerpoint 2007 presentation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG from the Ops Perspective - a few short notes

2016-10-12 Thread Steven Dake (stdake)
Tom,

No flame, just observation about the reality of these changes.

I think we missed this communication on the mailing list or in the FAQs or 
somewhere else.  I think most engineering-focused organizations are looking at 
the PTGs only and not really considering the summit for budget planning.  If 
folks knew the operators were only going to be at the OpenStack Summit, I think 
that might change budget planning for engineering organizations.  Seems like more 
siloing to me, not less.  We need to integrate OpenStack’s development with 
Operators as well as the Operators’ customers (the cloud consumers the 
Operators deliver to).

Does the foundation not want to co-locate the ops summit at the PTG because the 
ops summit feeds into the OpenStack Summit main ops day(s)?

I don’t have any easy solutions for this problem, but the expectation that 
project developers are required at 3 week-long events instead of 2 wasn’t 
clearly communicated and should be rectified beyond a post to the openstack-dev 
mailing list where most people filter messages by tags (i.e. your message is 
probably not reaching the correct audience).

Regards
-steve


From: "t...@openstack.org" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, October 11, 2016 at 7:29 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] PTG from the Ops Perspective - a few short notes

Hello all,

It's fantastic to see all of the PTG planning that has been going on in
recent threads. It's clear there's a bit of confusion too, and as
mriedem notes - we "mere mortals" are probably going to take some time
to figure it out. Nothing's final of course, and we're going to take a
while and iterate to success.

With that in mind, I'm going to don the flame-proof suit and try to list
a few very short, simple things to try and help, particularly to
understand from the ops-y side of things. Throwing away all the context
and nuance here that could stave off attacks, so please be nice :)


* The OpenStack Summit is the start of a release cycle *

If you do nothing else, please check out the diagram on the PTG site -
it's good. We're finally acknowledging that a release cycle starts with
planning, rather than when the code branch opens :) It means that we'll
be finalizing development on one release while planning another, though
we've actually been doing that already. The difference is that with this
change, we'll have the summit in the right place to get decent feedback
and ideas from users: at the very start of the cycle.



* The OpenStack Summit is the place where the entire community gets
together *

Just because there's the PTG doesn't mean the Summit becomes some
marketing thing. If you want to have pre-spec brainstorming or feedback
discussions with users: Summit. If you need to be involved in the
strategic direction of OpenStack: Summit. If you just want to hang out
with your project team and talk code only: you're going to love the PTG :)


* Don't expect Ops at the PTG *

The PTG has been designed for you to have a space to get stuff done.
Unless a user is so deep into code that you basically look at them as
"one of the team", they're not going to be there. If you'd like feedback
and ideas from users, plan that to happen at the start of the cycle -
i.e. Summit :)



Thank you for your exceptional patience as we work all this out! Ready
for the flame-tan now :)



Regards,


Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rpm-packaging][chef][puppet][salt][openstack-ansible][HA][tripleo][kolla][fuel] Schema proposal for config file handling for services

2016-10-12 Thread Steven Dake (stdake)
Thomas,

Kolla does not use systemd files (bifrost may be different here – I am not 
certain).  Kolla also does not use default configuration files that are shipped 
with distros.  We find this model to be disruptive to reliable development.  I 
get distros want to ship them and that’s fine by us.  We just ignore them.

What interests Kolla most is that binaries stay in the same place in the same 
packages.  That said, if binaries are moved around, Kolla can deal with it.  We 
adapt to our upstreams (in Kolla’s case tarballs.openstack.org, RDO, UCA, and 
many others ;).

Regards
-steve


From: Thomas Bechtold 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, October 11, 2016 at 12:39 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[rpm-packaging][chef][puppet][salt][openstack-ansible][HA][tripleo][kolla][fuel]
 Schema proposal for config file handling for services

Hi,

On Mon, Oct 10, 2016 at 10:07:05AM -0600, Alex Schultz wrote:
On Mon, Oct 10, 2016 at 5:03 AM, Thomas Bechtold wrote:
> Hi,
>
> in the rpm-packaging project we started to package the services and are
> currently discussing a possible schema for configuration files and
> snippets used by the systemd .service files (but this would also affect
> OCF resource agents).
> This affects packagers, end users and config management systems (Chef,
> Puppet, Ansible, Salt, ...) so I want to discuss the proposal here on
> the list.
This also affects deployment tools so you may want to include tripleo,
kolla, fuel as they are downstream consumers and may have their own
assumptions about how services are launched.

Done.

> Most services (at least for SUSE and RDO) are using a single config
> (e.g. /etc/nova/nova.conf) when starting the service. Some services
> (e.g. neutron services like neutron-l3-agent) use multiple config files.
>
> There are multiple problems with that approach:
> - when using a config-mgmt-system, users may want to override a config
> option (for a feature that is not yet supported) but the
> config-mgmt-system will override the config later again.
Just to chime in here from a puppet standpoint, this is not a problem
because we provide a way for a user to add any extra options they wish
using the provider so it always ends up in the correct configuration
file.

Does that also work if you need extra configuration files for 3rd party
plugins (e.g. neutron plugins)? I guess you could just copy the 3rd
party config file content into the needed neutron config, but that's ugly IMO.

> - when users adjust /etc/$service/$service.conf and a release update is
> done (e.g. mitaka->newton) /etc/$service/$service.conf will not be
> overridden. That's good because the user changed something but otoh the
> file (with all its config options and comments) is no longer correct.
Depending on the configuration management tool, the 'default' options
and comments may not even be there so I'm not sure this is actually
that much of a concern.  Also when you upgrade there is usually some
sort of upgrade process that has to go along with your configuration
management tool which should take care of this for you. So I'm not
sure this needs to be a packaging concern.
> - when config-mgmt-systems use templates for the $service.conf file,
> updating these templates is a lot of work.
Yes, which is why configuration management tools that don't use templates
make this a non-issue.  I'm not sure this needs to be
a concern of packagers as it's an issue with the configuration
management tool of choice and many of these tools have switched away
from templates or are currently handling this.  If the configuration
management tool doesn't support this or is suffering from this, simply
adding conf.d support might help but then you also run into issues
about ensuring things are removed and cleaned up.

There *are* config-mgmt-tools still using templates. And having the
possibility to add config snippets simplifies the process here without
any downside for available solutions. Just don't use it if you don't
need it.
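
As a side note, oslo.config can already stack a main file with a snippet
directory via --config-dir; a minimal sketch in Python (the service name
and paths are examples only):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    # --config-dir reads *.conf files in sorted order after the main
    # file, so later snippets override earlier values; a config-mgmt
    # tool can drop its overrides into the directory without templating
    # the whole nova.conf.
    conf(['--config-file', '/etc/nova/nova.conf',
          '--config-dir', '/etc/nova/nova.conf.d'],
         project='nova')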

> - setting configuration options per service (let's say cinder-volume
> needs other config options than cinder-api) is not possible.
So I agree this is more likely a real problem, but I'm not sure this
should be solved by packaging, as this probably needs to be addressed
upstream.  Unless this is already a thing and it's just never been
properly implemented when deployed via packages. The issue I have with
only solving this via rpm packaging is that for tools that support
both rpms and debs this would lead to 2 different configuration
mechanisms.  So I would vote not to do anything different than what
debs also do unless both packaging methods are updated at the same
time.

Don't you have already a lot of different