[openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-13 Thread Mike Perez
Hello all,

In the spirit of recent Technical Committee discussions, I would like to bring
focus to how we're handling vendor driver discoverability. Today we do this
with the OpenStack Foundation marketplace [1], which is powered by the driverlog
project. In a nutshell, driverlog is a big JSON file [2] that records which
vendor solutions are supported by which projects in which releases. This
information is then parsed to generate the marketplace so that users can
discover them. As discussed in previous TC meetings [3] we need to recognize
vendors that are trying to make great products work in OpenStack so that they
can be successful, which allows our community to be successful and healthy.

In the feedback I have received from various people in the community, some
didn’t know how it worked, and were unhappy that the projects themselves
weren’t owning this. I totally agree that project teams should own this and
should be encouraged to be involved in the reviews. Today that’s not happening.
I’d like to propose we come up with a way for the marketplace to be more
community-driven by the projects that are validating these solutions.

At the Barcelona Summit [4] we discussed ways to improve driverlog. Projects
like Nova have a support matrix of hypervisors in their in-tree documentation.
Various members of the Cinder project also expressed interest in using this
solution. It was suggested in the session that the marketplace should just link
to the projects' appropriate documentation. The problem with that approach is
that the information would not be presented in the consistent way across
projects that driverlog provides today. We could accomplish this instead by using a parsable
format that is stored in each appropriate project's git repository. I'm
thinking of pretty much how driverlog works today, but broken up into
individual projects.
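To make the idea concrete, here is a rough sketch of what a per-project driver file and the parsing step could look like. The schema (field names like `vendor`, `releases`, `ci`) is purely illustrative and is not driverlog's actual format:

```python
import json

# Hypothetical per-project driver registry, modeled loosely on the kind of
# information driverlog's default_data.json records today. All field names
# here are illustrative assumptions, not a proposed standard.
DRIVERS_JSON = """
{
  "project": "openstack/cinder",
  "drivers": [
    {"vendor": "ExampleCorp", "name": "ExampleISCSIDriver",
     "releases": ["newton", "ocata"], "ci": "example-ci"},
    {"vendor": "OtherVendor", "name": "OtherFCDriver",
     "releases": ["ocata"], "ci": null}
  ]
}
"""

def support_matrix(raw):
    """Return (vendor, driver, releases, has_ci) rows for a marketplace page."""
    data = json.loads(raw)
    return [(d["vendor"], d["name"], tuple(d["releases"]), d["ci"] is not None)
            for d in data["drivers"]]

for row in support_matrix(DRIVERS_JSON):
    print(row)
```

The point is only that a small, consistent schema kept in each project's git tree would let both the marketplace and the projects' own docs render the same data.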

The marketplace can parse this information and present it in one place
consistently. Projects may also continue to parse this information in their own
documentation, and we can even write a common tool to do this. The way a vendor
is listed here is based on being validated by the project team itself. Keeping
things in the marketplace would also address the suggestions that came out of
the recent feedback we received from various driver maintainers [4].

The way validation works is completely up to the project team. In my research
as shown in the Summit etherpad [5] there's a clear trend in projects doing
continuous integration for validation. If we wanted to, we could also have the
marketplace give the current CI results, which was also requested in the
feedback from driver maintainers.

I would like to volunteer to create the initial files for each project from
what the marketplace says today.

[1] - https://www.openstack.org/marketplace/drivers/
[2] - 
http://git.openstack.org/cgit/openstack/driverlog/tree/etc/default_data.json
[3] - 
http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-01-10-20.01.log.html#l-106
[4] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109855.html
[5] - https://etherpad.openstack.org/p/driverlog-validation

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Emilien Macchi
Thanks, Doug, for your excellent technical leadership and for always being
helpful. Your impact on release management has been much appreciated.

---
Emilien Macchi

On Jan 13, 2017 9:16 PM, "Anita Kuno"  wrote:

> On 2017-01-13 03:19 PM, Steve Martinelli wrote:
>
>> +++ Thanks for making it 100x easier to release new libraries, it's now
>> something I look forward to.
>>
>> On Fri, Jan 13, 2017 at 3:11 PM, Davanum Srinivas 
>> wrote:
>>
>> Many thanks for all the automation and all other initiatives Doug!
>>>
>>> On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann 
>>> wrote:
>>>
 I announced this at the release team meeting on 6 Jan, but thought
 I should also post to the list as well:  I do not plan to serve as
 the Release Management team PTL for the Pike release cycle.

 It has been my pleasure to serve as PTL, and we've almost finished
 the automation work that I envisioned when I joined the team. Now
 I would like to shift my focus to some other projects within the
 community.  I will still be an active member of the Release Management
 team, helping with new initiatives and reviews for releases.

 Doug

 --
>>> Davanum Srinivas :: https://twitter.com/dims
>>>
>>> Thank you, Doug, for all your good work as PTL of the release team.
>
> Anita.
>
>


Re: [openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Anita Kuno

On 2017-01-13 03:19 PM, Steve Martinelli wrote:

+++ Thanks for making it 100x easier to release new libraries, it's now
something I look forward to.

On Fri, Jan 13, 2017 at 3:11 PM, Davanum Srinivas  wrote:


Many thanks for all the automation and all other initiatives Doug!

On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann 
wrote:

I announced this at the release team meeting on 6 Jan, but thought
I should also post to the list as well:  I do not plan to serve as
the Release Management team PTL for the Pike release cycle.

It has been my pleasure to serve as PTL, and we've almost finished
the automation work that I envisioned when I joined the team. Now
I would like to shift my focus to some other projects within the
community.  I will still be an active member of the Release Management
team, helping with new initiatives and reviews for releases.

Doug


--
Davanum Srinivas :: https://twitter.com/dims


Thank you, Doug, for all your good work as PTL of the release team.

Anita.




Re: [openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Mike Perez
On 15:00 Jan 13, Doug Hellmann wrote:
> I announced this at the release team meeting on 6 Jan, but thought
> I should also post to the list as well:  I do not plan to serve as
> the Release Management team PTL for the Pike release cycle.
> 
> It has been my pleasure to serve as PTL, and we've almost finished
> the automation work that I envisioned when I joined the team. Now
> I would like to shift my focus to some other projects within the
> community.  I will still be an active member of the Release Management
> team, helping with new initiatives and reviews for releases.

Thank you, Doug, for your continued good work in the community. I'm eager to
see what you're going to make better next!

-- 
Mike Perez



Re: [openstack-dev] [cinder] [infra] drbdmanage is no longer GPL2

2017-01-13 Thread Mike Perez
On 09:44 Jan 10, Sean McGinnis wrote:
> On Mon, Dec 12, 2016 at 07:58:17AM +0100, Mehdi Abaakouk wrote:
> > Hi,
> > 
> > I have recently seen that the drbdmanage python library is no longer GPL2,
> > but now requires an end user license agreement [1].
> > 
> > Is this compatible with the driver policy of Cinder?
> > 
> > [1] 
> > http://git.drbd.org/drbdmanage.git/commitdiff/441dc6a96b0bc6a08d2469fa5a82d97fc08e8ec1
> > 
> > Regards
> > 
> > -- 
> > Mehdi Abaakouk
> > mail: sil...@sileht.net
> > irc: sileht
> > 
> 
> It doesn't look like much has changed here. There has been one commit
> that only slightly modified the new license: [1]
> 
> IANAL, and I don't want to make assumptions about what can and can't be
> done, so I'm looking to other more informed folks. Do we need to remove this
> from the Jenkins run CI tests?
> 
> Input would be appreciated.
> 
> Sean

Can we survey the operators community to see if anyone cares? And if some
people do, who wants to step up and maintain a forked version so that support
can remain in Cinder?

-- 
Mike Perez



Re: [openstack-dev] [nova][puppet][tripleo][kolla][fuel] Let's talk nova cell v2 deployments and upgrades

2017-01-13 Thread Matt Riedemann

On 1/13/2017 2:05 PM, Matt Riedemann wrote:


Documenting this is going to be a priority. We should have something up
for review in Nova by next week (like Monday), at least a draft.



Dan Smith has a start on the docs here:

https://review.openstack.org/#/c/420198/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-13 Thread Gregory Haynes
On Wed, Jan 11, 2017, at 03:04 PM, Paul Belanger wrote:
> On Sun, Jan 08, 2017 at 02:45:28PM -0600, Gregory Haynes wrote:
> > On Fri, Jan 6, 2017, at 09:57 AM, Paul Belanger wrote:
> > > On Fri, Jan 06, 2017 at 09:48:31AM +0100, Andre Florath wrote:
> > > > Hello Paul,
> > > > 
> > > > thank you very much for your contribution - it is very appreciated.
> > > > 
> > 
> > Seconded - I'm very excited for some effort to be put in to improving
> > the use case of making containers with DIB. Thanks :).
> > 
> > > > You addressed a topic with your patch set that was IMHO not in a wide
> > > > focus: generating images for containers.  The ideas in the patches are
> > > > good and should be implemented.
> > > > 
> > > > Nevertheless I'm missing the concept behind your patches. What I saw
> > > > are a couple of (independent?) patches - and it looks that there is
> > > > one 'big goal' - but I did not really get it.  My proposal is (as it
> > > > is done for other bigger changes or introducing new concepts) that
> > > > you write a spec for this first [1].  That would help other people
> > > > (see e.g. Matthew) to use the same blueprint also for other
> > > > distributions.
> > 
> > I strongly agree with the point that this is something we're going to end
> > up repeating across many distros so we should make sure there's some
> > common patterns for doing so. A spec seems fine to me, but ideally the
> > end result involves some developer documentation. A spec is probably a
> > good place to get started on getting some consensus which we can turn in
> > to the dev docs.
> > 
> The plan is to start with ubuntu, then move to debian, then fedora and
> finally centos. Fedora and CentOS are obviously harder, since a
> debootstrap tool doesn't exist.
> 

Right, although I believe we've solved a fair amount of the hard bits
with our yum-minimal element which performs a similar operation to
debootstrap for laying down the root file tree.

> > > Sure, I can write a spec if needed but the TL;DR is:
> > > 
> > > Use diskimage-builder to build a debootstrap --variant=minbase chroot, and
> > > nothing else. So I can then take the generated tarball and do something
> > > else with it.
> > > 
> > > > One possibility would be to classify different element sets and define
> > > > the dependency between them.  E.g. to have a element class 'container'
> > > > which can be referenced by other classes, but is not able to reference
> > > > these (e.g. VM or hardware specific things).
> > > > 
> > 
> > It sounds like we need to step back a bit and get a clear idea of how we're
> > going to manage the full use case matrix of distro * (minimal / full) *
> > (container / vm / baremetal), which is something that would be nice to
> > get consensus on in a spec. This is something that keeps tripping up
> > both users and devs and I think adding containers to the matrix is sort
> > of a tipping point in terms of complexity so again, some docs after
> > figuring out our plan would be *awesome*.
> > 
> > Currently we have distro-minimal elements which are minimal
> > vm/baremetal, and distro elements which actually are full vm/baremetal
> > elements. I assume by adding an element class you mean add a set of
> > distro-container elements? If so, I worry that we might be falling into
> > a common dib antipattern of making distro-specific elements. I have an
> > alternate proposal:
> > 
> > Lets make two elements: kernel, and minimal-userspace which,
> > respectively, install the kernel package and a minimal set of userspace
> > packages for dib to function (e.g. dependencies for dib-run-parts,
> > package-installs). The kernel package should be doable as basically a
> > package-installs and a pkg-map. The minimal-userspace element gets
> > tricky because it needs to install deps which are required for things
> > like package-installs to function (which is why the various distro
> > elements do this independently).  Even so, I think it would be nice to
> > take care of installing these from within the chroot rather than from
> > outside (see https://review.openstack.org/#/c/392253/ for a good reason
> > why). If we do this then the minimal-userspace element can have some
> > common logic to enter the chroot as part of root.d and then install the
> > needed deps.
> > 
> > The end result of this would be we have distro-minimal which depends on
> > kernel, minimal-userspace, and yum/debootstrap to build a vm/baremetal
> > capable image. We could also create a distro-container element which
> > only depends on minimal-userspace and yum/debootstrap and creates a
> > minimal container. The point being - the top level -container or
> > -minimal elements are basically convenience elements for exporting a few
> > vars and pulling in the proper elements at this point and the
> > elements/code are broken down by the functionality they provide rather
> > than use case.
> > 
> To be honest, this is a ton of work, just to create a debootstrap
> 
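As a toy illustration of the split discussed earlier in the thread — functional elements (`kernel`, `minimal-userspace`, `debootstrap`) with thin convenience elements on top — a minimal dependency model might look like the sketch below. It is not diskimage-builder's actual element-resolution code; the element names come from the thread, and the resolver is made up:

```python
# Toy model: convenience elements (distro-minimal, distro-container) only
# pull in functional elements, so the code is organized by the function it
# provides rather than by use case.
ELEMENT_DEPS = {
    "debootstrap": [],
    "kernel": [],
    "minimal-userspace": [],
    "distro-container": ["minimal-userspace", "debootstrap"],
    "distro-minimal": ["kernel", "minimal-userspace", "debootstrap"],
}

def expand(element, deps=ELEMENT_DEPS):
    """Return the full, de-duplicated element list for a top-level element."""
    seen = []
    def walk(name):
        for dep in deps[name]:
            walk(dep)
        if name not in seen:
            seen.append(name)
    walk(element)
    return seen

print(expand("distro-container"))  # note: no kernel element pulled in
print(expand("distro-minimal"))
```

The container image simply never pulls in the kernel element, while the vm/baremetal image does — which is the whole point of breaking the elements down by functionality.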

Re: [openstack-dev] [release] Release countdown for week R-5, Jan 16-20

2017-01-13 Thread joehuang
Great, got it, thanks a lot

Best Regards
Chaoyi Huang (joehuang)


From: Doug Hellmann [d...@doughellmann.com]
Sent: 13 January 2017 22:55
To: openstack-dev
Subject: Re: [openstack-dev] [release] Release countdown for week R-5, Jan 16-20

Excerpts from joehuang's message of 2017-01-13 01:23:08 +:
> Hello, Doug,
>
> One question: according to the guide for self-branching [1], the Ocata
> stable branch should be created at the RC1 tag for projects using the
> cycle-with-milestone release model. The date for RC1 is Jan 30 - Feb 03
> according to the schedule [2]. Tricircle is a big-tent project using the
> cycle-with-intermediary model and depends on stable Neutron. Should the
> Tricircle Ocata stable branch be created during Jan 30 - Feb 03 or later
> than Feb 03? I think the Neutron Ocata stable branch will be created
> during Jan 30 - Feb 03.

You'll probably want to wait until after the Neutron stable branch has
been created. You can submit the branch request and either mark it WIP
or add a Depends-On to prevent it from merging before the neutron
request.
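For reference, a branch request is a small patch to the project's deliverable file in the openstack/releases repository; the file name and version below are illustrative, not Tricircle's real values:

```yaml
# deliverables/ocata/tricircle.yaml in openstack/releases (illustrative)
branches:
  - name: stable/ocata
    location: 3.0.0   # the already-released version to branch from (hypothetical)
```

Adding a `Depends-On: <Change-Id of the neutron branch request>` line to the commit message is what keeps gerrit from merging it before the neutron request.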

Doug

>
> [1]http://git.openstack.org/cgit/openstack/releases/tree/README.rst#n66
> [2] https://releases.openstack.org/ocata/schedule.html
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: 13 January 2017 2:34
> To: openstack-dev
> Subject: [openstack-dev] [release] Release countdown for week R-5, Jan 16-20
>
> Focus
> -
>
> Feature work and major refactoring should be starting to wrap up
> as we approach the third milestone and various feature and release
> freeze dates.
>
> The deadline for non-client library releases is Thursday 19 Jan.
> We do not grant Feature Freeze Extensions for any libraries, so
> that is a hard freeze date. Any feature work that requires updates
> to non-client libraries should be prioritized so it can be completed
> by that time.
>
> Release Tasks
> -
>
> As we did at the end of Newton, when the time comes to create
> stable/ocata branches they will be configured so that members of
> the $project-release group in gerrit have permission to approve
> patches.  This group should be a small subset of the core review
> team, aware of the priorities and criteria for patches to be approved
> as we work toward release candidates. Release liaisons should ensure
> that these groups exist in gerrit and that their membership is
> correct for this cycle.  Please coordinate with the release management
> team if you have any questions.
>
> General Notes
> -
>
> We will start the soft string freeze during R-4 (23-27 Jan). See
> https://releases.openstack.org/ocata/schedule.html#o-soft-sf for
> details
>
> The release team is now publishing the release calendar using ICS.
> Subscribe your favorite calendaring software to
> https://releases.openstack.org/schedule.ics for automatic updates.
>
> Important Dates
> ---
>
> Final release of non-client libraries: 19 Jan
>
> Ocata 3 Milestone, with Feature and Requirements Freezes: 26 Jan
>
> Ocata release schedule: http://releases.openstack.org/ocata/schedule.html
>



Re: [openstack-dev] [Ceilometer] Unable to add new metrics using meters.yaml

2017-01-13 Thread Srikanth Vavilapalli
Hi Yurii

Thanks for your inputs. Yes, I have noticed that statement in the guide and
enabled disable_non_metric_meters in my conf file, but that didn't change the
behavior. If you notice, that condition only applies to meters that have a
volume of 1. But in my case, the meter I have defined is a metric type with a
proper value for volume.

I think I have found why my meters are not processed by the notification
agents. If I look at the ceilometer generic meter notification listener module
(ceilometer/meter/notifications.py), it listens for all the meters defined in
meters.yaml on the "notifications.info" topic at the pre-defined rabbitmq
exchanges listed in the ceilometer/exchange_control.py file. But in my case, my
service is publishing its meters to the "notifications.info" topic on a
different exchange. That means I would need to change
ceilometer/meter/notifications.py to also listen on a default rabbitmq exchange
where non-OpenStack services can publish their telemetry data.

So the question is: is there any config that I can use to let
ceilometer/meter/notifications.py listen on other rabbitmq exchanges in
addition to the predefined ones, so that this framework can be extended to
receive meters from non-OpenStack services? Appreciate your inputs.
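For what it's worth, here is a minimal sketch of how a meters.yaml-style definition resolves fields from a notification payload. Real Ceilometer uses jsonpath-rw for the `$.payload.*` expressions; this toy resolver only handles plain dotted paths and exists purely to show the data flow, with a sample payload invented for the cord.dns.cache.size meter:

```python
# Simplified illustration of meter-definition processing; not Ceilometer code.
SAMPLE_NOTIFICATION = {
    "event_type": "cord.dns.cache.size",
    "payload": {"name": "dns.cache.size", "cache_size": 5000,
                "user_id": "u1", "project_id": "p1", "base_id": "42"},
}

METER_DEF = {  # mirrors the meter definition discussed in this thread
    "name": "$.payload.name",
    "event_type": "cord.dns.cache.size",
    "type": "gauge",
    "unit": "entries",
    "volume": "$.payload.cache_size",
}

def resolve(path, notification):
    """Resolve a '$.a.b' style path against a notification; pass literals through."""
    if not isinstance(path, str) or not path.startswith("$."):
        return path
    value = notification
    for key in path[2:].split("."):
        value = value[key]
    return value

def build_sample(defn, notification):
    """Return a meter sample dict, or None if the event_type doesn't match."""
    if notification["event_type"] != defn["event_type"]:
        return None
    return {k: resolve(v, notification) for k, v in defn.items()
            if k != "event_type"}

print(build_sample(METER_DEF, SAMPLE_NOTIFICATION))
```

Note this only covers field resolution; the exchange/topic the listener binds to is a separate question, as described above.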

Thanks
Srikanth



-Original Message-
From: Yurii Prokulevych [mailto:yprok...@redhat.com] 
Sent: Thursday, January 12, 2017 12:34 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Ceilometer] Unable to add new metrics using 
meters.yaml

Hi Srikanth,

As you've noticed, those meters are derived from notifications emitted by
other OpenStack services. So please check that 'cord.dns.cache.size'
events are processed correctly.

Also, the last sentences from the guide:
'''
These meters are not loaded by default. To load these meters, flip the 
`disable_non_metric_meters` option in the ceilometer.conf file '''

Do you have this enabled?

---
Regards,
Yurii


On Thu, 2017-01-12 at 02:01 +, Srikanth Vavilapalli wrote:
> Hi
> 
> I was following the instructions at
> http://docs.openstack.org/admin-guide/telemetry-data-collection.html#meter-definitions
> to add new meters to Ceilometer, but am not able to make it work.
> 
> I verified meters.yaml file in meter/data folder:
> 
> ubuntu@mysite-ceilometer-3:/usr/lib/python2.7/dist-
> packages/ceilometer/meter/data$ ls
> meters.yaml
> 
> 
> I add the following new meter to the end of that file:
> 
>   - name: $.payload.name
> event_type: 'cord.dns.cache.size'
> type: 'gauge'
> unit: 'entries'
> volume: $.payload.cache_size
> user_id: $.payload.user_id
> project_id: $.payload.project_id
> resource_id: '"cord-" + $.payload.base_id'
> 
> When I inject 'cord.dns.cache.size' metric from a sample publisher to 
> rabbitmq server (@ exchange 'openstack') on which the ceilometer 
> notification agents are listening, I don't see these metrics appearing 
> in 'ceilometer meter-list' output. Can any one plz let me know if I 
> missing any config or change that prevents custom meter processing in 
> Ceilometer?
> 
> Appreciate your inputs.
> 
> Thanks
> Srikanth
> 



[openstack-dev] [glance] priorities for the coming week (01/13-01/19)

2017-01-13 Thread Brian Rosmaita
As discussed at the Glance weekly meeting yesterday, please concentrate
on the following items:

(0) Glance coresec: you know what I'm talking about (and if you don't,
contact me immediately offline).  We need to get this wrapped up before
January 18.

(1) Ian wants to release glance_store on Wednesday, January 18.
- https://review.openstack.org/#/c/378460/ is a ceph driver patch that's
OK but could use some tests to prevent regressions; unfortunately the
owner doesn't have time ATM, so this is ripe for someone to pick up and
complete.
- https://review.openstack.org/#/c/120866/ is the buffered reader for
the swift store, let's show the official OpenStack object store some love

(2) Port Glance migrations to Alembic:
- https://review.openstack.org/#/c/382958/
- https://review.openstack.org/#/c/392993/
- https://review.openstack.org/#/c/397409/

(3) Patch to enable better request-id tracking (glanceclient):
- https://review.openstack.org/#/c/352892/
This had some good discussion last week and a new patch is up, let's try
to close it out.

(4) Community images:
We are following the process outlined in tempest/HACKING.rst concerning
a bugfix that causes a breaking change.  As soon as
- https://review.openstack.org/#/c/419658/
is approved by tempest cores, we can merge the main community images patch:
- https://review.openstack.org/#/c/369110/
(which will need another +2; the only test failure is the test that
419658 adds a skip for), and then we've got another patch ready for
tempest to replace the skipped test:
- https://review.openstack.org/#/c/414261/


Light list this week because some people are having a long weekend in
honor of Dr. Martin Luther King, Jr.

cheers,
brian




Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Armando M.
On 13 January 2017 at 15:01, Clint Byrum  wrote:

> Excerpts from Armando M.'s message of 2017-01-13 11:39:33 -0800:
> > On 13 January 2017 at 10:47, Clint Byrum  wrote:
> >
> > > Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > > > Kevin Benton wrote:
> > > > > If you don't want users to specify network details, then use the
> get me
> > > > > a network extension or just have them boot to a public (or other
> > > > > pre-created) network.
> > > > >
> > > > > In your thought experiment, why is your iPhone app developer not
> just
> > > > > using a PaaS that handles instance scaling, load balancing and HA?
> Why
> > > > > would he/she want to spend time managing security updates and log
> > > > > rotation for an operating system running inside another program
> > > > > pretending to be hardware? Different levels of abstraction solve
> > > > > different use cases.
> > > >
> > > > Fair point, probably mr/mrs iPhone app developer should be doing
> that.
> > > >
> > >
> > > I totally disagree. If PaaS was the answer, they'd all be using PaaS.
> > >
> > > Maybe some day, but that's no excuse for having an overly complex story
> > > for the base. I totally appreciate that "Get me a network" is an effort
> > > to address this. But after reading docs on it, I actually have no idea
> > > how it works or how to make use of it (I do have a decent understanding
> > > of how to setup a default subnetpool as an operator).
> > >
> >
> > I'd be happy to improve the docs, but your feedback is not very
> actionable.
> > Any chance you can elaborate on what you're struggling with?
> >
>
> The docs I found are all extremely Neutron-centric. I was told later
> on IRC that once the default subnet pool is setup, Nova would do some
> magic to tell neutron to allocate a subnet from that pool to the user
> when they create an instance.


> Basically, the docs I found were not at all user-centric. They were
> Neutron-centric and they didn't really explain why, as an operator, I'd
> want to allocate a subnet pool. I mean, they do, but because I don't
> really know if I have that problem or what it is, I just wasn't able
> to grasp where this was going. It tells me to go ahead and list default
> subnet pools, and then pass --nic=net-id=$ID from that. Super confusing
> and not really any more friendly than before.
>
>
If you are referring to [1], I wouldn't expect anything less, it's the
OpenStack networking guide after all.


> So what I want is the story from the user's perspective. Something like:
>
> "Without this extension, your users will need to do these steps in order
> to boot servers with networking:..."
>
> and then
>
> "With this extension, your users will not need to perform those steps,
> and the default subnet pools that you setup will be automatically
> allocated to users upon their first server boot."
>

Problem statement from the user's point of view is typically left to the
specs [2,3].
I am sure one can argue that the content there may not be well written or
organized, but it was enough for getting the nova and the neutron team to
have a mutual understanding and agreement on how to design, implement and
test the feature.

From what I hear, there is a gap in the networking guide in that the
rationale behind the feature is missing. I suppose we can fill that gap, and
thus I filed [4].

Thanks.

[1]
http://docs.openstack.org/newton/networking-guide/config-auto-allocation.html
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/mitaka/get-me-a-network.html
[3]
http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/get-me-a-network.html
[4] https://bugs.launchpad.net/openstack-manuals/+bug/1656447




Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Clint Byrum
Excerpts from Armando M.'s message of 2017-01-13 11:39:33 -0800:
> On 13 January 2017 at 10:47, Clint Byrum  wrote:
> 
> > Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > > Kevin Benton wrote:
> > > > If you don't want users to specify network details, then use the get me
> > > > a network extension or just have them boot to a public (or other
> > > > pre-created) network.
> > > >
> > > > In your thought experiment, why is your iPhone app developer not just
> > > > using a PaaS that handles instance scaling, load balancing and HA? Why
> > > > would he/she want to spend time managing security updates and log
> > > > rotation for an operating system running inside another program
> > > > pretending to be hardware? Different levels of abstraction solve
> > > > different use cases.
> > >
> > > Fair point, probably mr/mrs iPhone app developer should be doing that.
> > >
> >
> > I totally disagree. If PaaS was the answer, they'd all be using PaaS.
> >
> > Maybe some day, but that's no excuse for having an overly complex story
> > for the base. I totally appreciate that "Get me a network" is an effort
> > to address this. But after reading docs on it, I actually have no idea
> > how it works or how to make use of it (I do have a decent understanding
> > of how to setup a default subnetpool as an operator).
> >
> 
> I'd be happy to improve the docs, but your feedback is not very actionable.
> Any chance you can elaborate on what you're struggling with?
> 

The docs I found are all extremely Neutron-centric. I was told later
on IRC that once the default subnet pool is setup, Nova would do some
magic to tell neutron to allocate a subnet from that pool to the user
when they create an instance.

Basically, the docs I found were not at all user-centric. They were
Neutron-centric and they didn't really explain why, as an operator, I'd
want to allocate a subnet pool. I mean, they do, but because I don't
really know if I have that problem or what it is, I just wasn't able
to grasp where this was going. It tells me to go ahead and list default
subnet pools, and then pass --nic=net-id=$ID from that. Super confusing
and not really any more friendly than before.

So what I want is the story from the user's perspective. Something like:

"Without this extension, your users will need to do these steps in order
to boot servers with networking:..."

and then

"With this extension, your users will not need to perform those steps,
and the default subnet pools that you setup will be automatically
allocated to users upon their first server boot."
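A toy model of that user story — a first boot with no NICs falls back to a subnet drawn from the operator's default pool — might look like the sketch below. The class and method names are made up for illustration; this is not the Neutron or Nova API:

```python
import ipaddress

class FakeNeutron:
    """Toy stand-in for Neutron's auto-allocated-topology behavior."""
    def __init__(self, default_pool="10.0.0.0/16", prefixlen=24):
        self.pool = ipaddress.ip_network(default_pool)
        self.prefixlen = prefixlen
        self.allocated = {}  # project_id -> subnet

    def get_auto_allocated_topology(self, project_id):
        """Return the project's auto-allocated subnet, carving one on first use."""
        if project_id not in self.allocated:
            subnets = list(self.pool.subnets(new_prefix=self.prefixlen))
            self.allocated[project_id] = subnets[len(self.allocated)]
        return self.allocated[project_id]

def boot_server(neutron, project_id, nics=None):
    # With no NICs requested, fall back to the auto-allocated topology --
    # the "get me a network" behavior described above, in miniature.
    if not nics:
        return {"project": project_id,
                "subnet": str(neutron.get_auto_allocated_topology(project_id))}
    return {"project": project_id, "subnet": nics[0]}

neutron = FakeNeutron()
print(boot_server(neutron, "app-dev-1"))  # first boot: subnet carved from the pool
print(boot_server(neutron, "app-dev-1"))  # second boot: same subnet reused
```

The operator's only job in this model is configuring the default pool; the user never names a network.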



Re: [openstack-dev] [cinder] Cinder replication for multisite openstack clouds

2017-01-13 Thread Klemen Pogacnik
I'm thinking about disaster recovery options. Permanent data of an
application which needs that feature will be put on a replicated volume.
On the secondary site the same application will run in an idle state.
After switchover, the replicated volume will be attached to the app on
the secondary site before it goes into the active state.
At the same time both OpenStacks will normally run their own non-geo
VMs (both sites are in the normal active state).
I've already tried this scenario by hand. I copied a row from the volumes
table of the Cinder DB to the secondary OpenStack, changed some IDs (we
don't have a common Keystone yet), and after switchover, which was done
with Ceph commands (demote, promote), the replicated volume was easily
attached to the app on the secondary site.
It would be nice to have this feature directly in Cinder. In my opinion
two things should be done: replication of Cinder DB data (probably the
volumes table alone is not enough) and demote and promote commands
similar to those of Ceph.
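The manual switchover described above can be sketched roughly like this (an illustration only; the pool/image names are invented, and the exact flow depends on how RBD mirroring is configured):

```shell
# On the primary site, if it is still reachable, demote the mirrored
# image so the secondary copy can take over cleanly:
rbd mirror image demote volumes/volume-demo

# On the secondary site, promote the local copy to primary. If the
# primary site is down and a clean demote was impossible, --force is
# needed (at the risk of a split-brain that must be resolved later):
rbd mirror image promote volumes/volume-demo

# Then, once the volume record exists in the secondary Cinder DB,
# attach it to the standby application's server:
openstack server add volume standby-app volume-demo
```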
What do you think?

Kemo

On Fri, Jan 13, 2017 at 9:59 PM, Jay S. Bryant
 wrote:
> Kemo,
>
> The next phase of development for replication is to enable replication of
> groups of volumes [1].
>
>
> I remember, in the past, there being discussion around how we handle
> replication across multiple data centers and don't know that we came to a
> conclusion.  I think we would need to better understand the use case here?
> Are you picturing two data centers that have all their data replicated
> between the two locations. You want to then be able to have a Cinder access
> those volumes concurrently in both locations?  Theoretically that could be
> possible with a shared database between the two datacenters.  I am not aware
> of anyone actually operating with such a configuration.
>
> Jay
>
> [1] https://blueprints.launchpad.net/cinder/+spec/replication-cg
>
>
> On 01/13/2017 12:15 PM, Klemen Pogacnik wrote:
>>
>> HI!
>> I've been playing with dual site openstack clouds. I use ceph as a backend
>> for
>> Cinder volumes. Each openstack has its own ceph cluster. Data replication
>> is done by ceph rbd mirroring.
>> Current Cinder replication design (cheesecake) only protects against
>> storage failure.
>> What about support for multi site deployments and protection of whole
>> data center.
>> Has any work been planned or done yet on that topic?
>> Kemo
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-13 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2017-01-13 19:44:23 +:
> Don't want to hijack the thread too much but... when the PTG was being sold,
> it was a way to get the various developers into one place and make it
> cheaper for devs to attend. Now it seems to be being made into a place where
> each of the silos can coexist but not talk, and then the summit is still
> required to get cross-project work done, so it only increases the devs' cost
> by requiring attendance at both. This is very troubling. :/ What's the main
> benefit of the PTG then?
> 

I've come to the conclusion that this will still have a net positive
effect for communication.

The reason? Leaders. Not just PTLs, but all of those who are serving as
leaders, whether formally or not.

With the old system, the leaders of each project would be tasked
with attending all of the summit sessions relevant to their project,
whether cross-project, ops-centric, or project-centric. This was a
full-time job for the entirety of the summit for many. As a result,
leaders were unable to attend the conference portion of the event,
which meant no socialization of what is actually happening with their
work to the community.

Basically the leadership was there to plan, facilitate, and listen,
but not to present. They'd also be expected at the mid-cycle to help
keep up on what's really coming down the pipe for the release vs. what
was planned (and to help work on their own efforts for those with time
left to do actual development).

With the new system, the leadership will be at the PTG, and have dev-centric
conversations related to planning all week, and probably be just as busy
as they were at the summit and mid-cycle.

But with that work done at the PTG, a project leader can attend the Forum
and conference and actually participate fully in both. They can talk about
the work the team is doing, they can showcase their company's offerings
(let's keep the lights on please!) and they can spend time in the Forum
on the things that they're needed for there (which should be a fraction
of what they did at the dev summit).

For operators, unless you're sponsoring work, you can ignore the PTG just
like you ignored the mid-cycle. You can come to the forum and expect
to see the most influential developers there, just like you would have
seen them at the summit. But they will have a lot less to do that isn't
listening to you or telling you what's happening in their projects. I've
specifically heard the tales of developers, cornered in summit sessions,
being clear that they simply don't have time to listen to the operators'
needs. We can hope that this new scheme works against that feeling.

So yeah, it's new and scary. But I got over my fear of the change, and
I think you should too. Let's see how it goes, and reserve our final
judgement until after the Forum.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Pike PTG facilities and remote participation

2017-01-13 Thread Brian Rosmaita
Hi Thierry,

I have a quick facilities question about the PTG.  I know of at least
one developer who can't attend physically but is willing to join
via some type of videoconferencing software (Vidyo, Blue Jeans, or
Google Hangouts).  Do you think that will be possible?  The wifi has gotten
better with each summit (I remember sitting right next to one of the wifi
endpoints in Portland and still getting dropped repeatedly; it hasn't been
such a problem at more recent summits), but we're going to be at a much
smaller facility for the PTG.

Anyway, my question is whether it makes sense to plan a few sessions
over video (this dev would probably lead one design session and I'd need
his hearty participation in another -- it's not just that I'd like to
make it possible for someone to follow along, I need the bandwidth to be
sufficient for a really good connection).  So basically, my question is
whether it makes sense to *plan* for remote collaboration, or whether
the remote connection should be looked at as something that might be
nice if it happens, but shouldn't really be planned for.

I realize that the whole point of the PTG was to bring costs down so
that more developers could attend, and I did see your email earlier this
week about the extension to the travel grant program, but nonetheless,
this issue has still come up, hence this email.

thanks,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Draft logos

2017-01-13 Thread Heidi Joy Tretheway
Following up on a Glance ML question, but this applies to all: 
Hi Brian & developers - 

Thanks for the note! We’ve been working with the illustrators on another round of 
revisions, which we expect to see within a week or so. We received 132 
individual responses from devs on the first round of team mascot drafts, and we 
so appreciate the feedback! I’ll also be sending personal notes back to everyone 
who requested them on the feedback form, detailing the changes we made. 

Glance is getting a makeover since we heard loud and clear from your team (and 
six other teams) that our illustration team missed the mark on the key creature 
features. Seven other team logos are still in draft stage because the teams 
switched mascots or we didn’t approve the illustrators’ first efforts. We’ll 
share a new draft with these teams as soon as we get them back from the 
illustrators.

On the positive side, 25 mascots are approved (based on project team feedback 
being generally positive) and moving forward to the final round of tweaks 
(color and linework corrections, to ensure consistency across logos), and 
another 13 have just small revisions requested, such as a change in color or 
the animal’s expression. 

We also got quite a few comments from folks about the heavy line weight, 
geometric style, and bold color choices (some folks loved it, some didn't), and 
so we’re balancing that feedback against the need to ensure all logos are 
presented in a similar style—that style won’t be changing from project to 
project. For example, the heavy line weight helps us create bold marks even 
when the logo is shrunk to a postage stamp size, such as on a PPT slide. 

Our plan is to print stickers for the teams attending the PTG (and they’ll be 
available for all teams in Boston), as well as having digital logo files to 
share with you in advance of the PTG so that you can use them at will. 

Please feel free to reach out to me with more questions!


> On Jan 13, 2017, at 12:41 PM, Brian Rosmaita  
> wrote:
> 
> Hello Heidi Joy,
> 
> At the Glance meeting yesterday, a concerned developer asked about the
> status of the Glance logo.  Do you have any news for us?
> 
> thanks,
> brian
> 
> 
> On 11/3/16, 4:38 PM, "Heidi Joy Tretheway"  wrote:
> 
>> Thanks for the feedback, Brian! That is always HUGELY helpful. I'll
>> convey that to the design team and I'm hoping you also took a moment to
>> include your feedback on the tinyurl.com/OSmascot page (because it makes
>> it a lot easier for us to see all feedback side-by-side). Either way, I'm
>> very appreciative of your thoughtful input!
>> 
>> Best,
>> Heidi Joy
>> 
>>> On Nov 3, 2016, at 9:55 AM, Brian Rosmaita
>>>  wrote:
>>> 
>>> Hello Heidi Joy,
>>> 
>>> First, let me say on behalf of the Glance community that we appreciate
>>> the
>>> hard work you and your team have done in creating all the team
>>> mascot/logos.
>>> 
>>> We discussed the draft logo at the weekly Glance meeting today [0], and
>>> the general sense of the Glance community is that we'd like to request a
>>> re-draft.  I'm not sure we conveyed clearly to you how important the
>>> mouth-full-of-nuts aspect was to our choice of a chipmunk [1].  We could
>>> tell from the video [2] that you started with the sample picture [3] and
>>> worked from there to create something more streamlined, but we were
>>> really
>>> hoping for something more like the sample picture.
>>> 
>>> The importance of the mouth-full-of-nuts aspect is that Glance curates
>>> and
>>> stores virtual machine images and other services consume images supplied
>>> from Glance.  Additionally, this would more clearly distinguish our logo
>>> from that of other projects.
>>> 
>>> Thanks for being so willing to consider feedback!
>>> 
>>> cheers,
>>> brian
>>> 
>>> [0] 
>>> 
>>> http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-11-03-14.
>>> 00
>>> .log.html (beginning 14:14:07)
>>> [1] https://etherpad.openstack.org/p/glance-project-mascot (lines 38-41)
>>> [2] https://youtu.be/JmMTCWyY8Y4?t=41
>>> [3] https://www.fortwhyte.org/wp-content/uploads/2014/12/chip2.jpg
>>> 
>>> 
>>> 
>>> 
>>> On 10/21/16, 12:43 PM, "Brian Rosmaita" 
>>> wrote:
>>> 
 Hello Glancers,
 
 Heidi Joy Tretheway and Todd Morey from the foundation have sent us the
 attached sneak preview of the Glance mascot/logo.  (As you may recall,
 we
 voted for a chipmunk [0].)
 
 They've set up a handy form [1] so that you can provide individual
 feedback.  The project logos will be finalized in time for the February
 PTG, so please provide any feedback before Friday, November 11.
 
 Before providing feedback, you might want to take a look at a short
 (50-second) video [2] to see how the logos developed and how they
 present
 a consistent interface across OpenStack projects.
 
 Since we're looking at 

Re: [openstack-dev] [nova][puppet][tripleo][kolla][fuel] Let's talk nova cell v2 deployments and upgrades

2017-01-13 Thread melanie witt

On Fri, 13 Jan 2017 14:04:27 -0700, Alex Schultz wrote:

Just from the puppet standpoint, it's much easier to create the cell
and populate it after the fact and run some command to sync stuff
after the nodes have been added.  This also would be easier to consume
for scale up/scale down actions.  I'm pretty sure that's also the case
if you're going to implement this in ansible or some other workflow
tooling.  I think this is a cleaner path than having to preplan your
computes, install them and then setup your cell.


Agreed this is the reasonable way for scale up/scale down to occur 
during deployment. The usual expectation for scale up/scale down is on a 
live system, so the compute hosts would be registered at the time you 
are creating cells (and the existing commands support this). But, we 
have a gap for the fresh install case and the scale up during deploy 
case. For others tuned in to this, there's a patch up [1] for a command 
that can create an empty cell, and the operator can add hosts to it at a 
later time. The expected use of the commands would be: at setup time, 
run 'map_cell0' to create cell0 for storage of instances that fail to 
schedule and 'create_cell' to create an empty cell intended for compute 
hosts. Then later after compute hosts are up and running, 
'discover_hosts' to associate hosts with a given cell.
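The expected sequence could thus be sketched as follows (a rough illustration only; the 'create_cell' command is still under review in [1], and the connection URLs below are placeholders):

```shell
# At setup time, before any compute hosts exist:
# cell0 stores instances that fail to schedule.
nova-manage cell_v2 map_cell0 \
    --database_connection 'mysql+pymysql://nova:secret@db/nova_cell0'

# Create an empty cell that compute hosts will later be added to.
nova-manage cell_v2 create_cell --name cell1 \
    --transport-url 'rabbit://nova:secret@rabbit:5672/' \
    --database_connection 'mysql+pymysql://nova:secret@db/nova'

# Later, once compute hosts are up and have registered themselves:
nova-manage cell_v2 discover_hosts
```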



Thanks. We don't have to necessarily fix simple_cell_setup if it's
working the way it's supposed to.  It seems that we're missing pieces
and the knowledge around when we're supposed to run them.  What made this
worse is that nova works without any cell v2 items on a fresh install,
but the upgrade has them as a hard requirement.


I think the current consensus is to leave 'simple_cell_setup' alone 
since it means "set up all the things" and if it can't, it should return 
1. Instead we'll add 'create_cell' to allow each piece to be run 
deliberately at the appropriate stages of a deployment. The 
'simple_cell_setup' was intended to be a lightweight way for people to 
set up things during an upgrade of an existing non-cells-v1 deployment.



That seems
like it should be made consistent and clearer so that the end user can
understand why something is failing. But I would say that anything
that is going to be a hard requirement as part of an upgrade really
should be documented before it is made a hard
requirement.  Just to share, I'd also like to point to
https://review.openstack.org/#/c/267153/ which I was only made aware
of today.  It seems like the start of the documentation that we needed
around this process, but it hasn't been merged yet.  I would personally
like to see that completed and merged before any new hard
requirements get merged in.


I agree. I think the thought was that the release notes covered the 
upgrade requirement, but it's clear there wasn't enough info there. 
What's really needed is a small manual that describes each command and 
its use case, along with how-to steps for each common deployment 
scenario. We'll be working on writing that.


-melanie

[1] https://review.openstack.org/#/c/332713/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Newton Octavia multinode setup

2017-01-13 Thread Michael Johnson
Hi Santhosh,

 

Currently there is not an OpenStack Ansible (OSA) role for Octavia, but one is 
under development now.  Keep an eye on the OSA project for updates.

 

Michael

 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Thursday, January 12, 2017 10:13 PM
To: openstack-dev@lists.openstack.org; johnso...@gmail.com
Subject: [openstack-dev][octavia] Newton Octavia multinode setup

 

Hi all,

 

Is there any documentation or ansible playbook to install octavia on multi-node 
or all-in-one setup?

I am trying to set this up in my lab but am not able to find any documentation. 

 

 

Thanks,

Santhosh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][puppet][tripleo][kolla][fuel] Let's talk nova cell v2 deployments and upgrades

2017-01-13 Thread Alex Schultz
On Fri, Jan 13, 2017 at 1:05 PM, Matt Riedemann
 wrote:
> On 1/13/2017 11:43 AM, Alex Schultz wrote:
>>
>> Ahoy folks,
>>
>> So we've been running into issues with the addition of the cell v2
>> setup as part of the requirements for different parts of nova.  It was
>> recommended that we move the conversation to the ML to get a wider
>> audience.  Basically, cell v2 has been working its way into a
>> required thing for various parts of nova for the Ocata cycle.  We've
>> hit several issues[0][1] because of this.  I put a wider audience than
>> just nova because deployment tools need to understand how this works
>> as it impacts anyone installing or upgrading.
>>
>> What is not clear is what is the expectation around how to install and
>> configure cell v2.  When we hit the first bug in the upgrade, we
>> reached out in irc[2] around this and it seemed that there was little
>> to no documentation around how this stuff all works.  There are
>> mentions of these new commands in the release notes[3] but it's not
>> clear on the specifics for both the upgrade process and also a fresh
>> install.  We attempted to just run simple_cell_setup in the puppet
>> (and tripleo downstream) because we assumed this would handle all the
>> things.  It's become clear that this is not the case.  The latest
>> bug[1] has shown that we do not have a proper understanding of what it
>> means to setup cell v2, so I'd like to use this email to start the
>> conversation as it impacts anyone either installing Ocata fresh or
>> attempting some sort of Newton -> Ocata upgrade.
>>
>> Additionally after some conversations today on irc[4], it's also
>> become clear there is some disconnect around understanding between
>> nova folks and people who deploy as to how this thing would ideally
>> work.  So, what I would like to know is how should people be
>> installing and configuring nova cell v2? Specifically what are the
>> steps so that the deployment tools and operators can properly handle
>> these complexities.  What are the assumptions being baked into
>> simple_cell_setup?  It seems to assume computes should exist before
>> the cell simple setup whereas traditionally computes are the last
>> thing to be setup (for new installs).
>>
>> So, help?
>>
>> Thanks,
>> -Alex
>>
>> [0] https://bugs.launchpad.net/tripleo/+bug/1649341
>> [1] https://bugs.launchpad.net/nova/+bug/1656276
>> [2]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-12-12.log.html#t2016-12-12T17:38:56
>> [3] http://docs.openstack.org/releasenotes/nova/unreleased.html#id12
>> [4]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-01-13.log.html#t2017-01-13T14:11:37
>>
>
> Thanks for starting this thread.
>
> First, I want to apologize for the lack of communication and documentation
> around this. I know this is frustrating. We've been very heads down on
> getting the changes out and getting devstack/grenade working with this stuff
> that we haven't taken the time to document it outside of the release notes,
> which isn't adequate.
>
> Without going into details, there was a major change in personnel this
> release for working on cells v2 so we've been doing some catch-up and that's
> part of why things are a bit scatter-brained.
>
> Based on the discussion in IRC this morning, I took some time to try and
> capture some of the immediate issues/todos/questions in the cells v2 wiki
> [1].
>
> Documenting this is going to be a priority. We should have something up for
> review in Nova by next week (like Monday), at least a draft.
>
> I think it's also important to realize (for the nova team) that we've been
> thinking about cells v2 deployment from an upgrade perspective, which I
> think is why we had the simple_cell_setup command asserting that you needed
> computes already registered to map to hosts and cells. As noted, this is not
> going to work in a fresh install deployment where the control services are
> setup before the computes. We're working on addressing that in [2].
>
> To compare with how simple_cell_setup works, the recently created
> 'nova-status upgrade check' command [3] is OK with there being no compute
> nodes yet (it does fail if you don't have any cell mappings though). It's OK
> with that because of the fresh install scenario. It doesn't fail but it does
> report that you need to remember to discover new hosts and map them once
> they are registered.
>

I think it's great that you've added this into a command that could be
run by the operator.  This is another tool that I would recommend
brushing up documentation on as it seems relatively new and help
people understand how 

Re: [openstack-dev] [cinder] Cinder replication for multisite openstack clouds

2017-01-13 Thread Jay S. Bryant

Kemo,

The next phase of development for replication is to enable replication 
of groups of volumes [1].



I remember, in the past, there being discussion around how we handle 
replication across multiple data centers and don't know that we came to 
a conclusion.  I think we would need to better understand the use case 
here?  Are you picturing two data centers that have all their data 
replicated between the two locations. You want to then be able to have a 
Cinder access those volumes concurrently in both locations?  
Theoretically that could be possible with a shared database between the 
two datacenters.  I am not aware of anyone actually operating with such 
a configuration.


Jay

[1] https://blueprints.launchpad.net/cinder/+spec/replication-cg

On 01/13/2017 12:15 PM, Klemen Pogacnik wrote:

HI!
I've been playing with dual site openstack clouds. I use ceph as a backend for
Cinder volumes. Each openstack has its own ceph cluster. Data replication
is done by ceph rbd mirroring.
Current Cinder replication design (cheesecake) only protects against
storage failure.
What about support for multi site deployments and protection of whole
data center.
Has any work been planned or done yet on that topic?
Kemo




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] draft logo of glance mascot

2017-01-13 Thread Brian Rosmaita
Hello Heidi Joy,

At the Glance meeting yesterday, a concerned developer asked about the
status of the Glance logo.  Do you have any news for us?

thanks,
brian


On 11/3/16, 4:38 PM, "Heidi Joy Tretheway"  wrote:

> Thanks for the feedback, Brian! That is always HUGELY helpful. I'll
> convey that to the design team and I'm hoping you also took a moment to
> include your feedback on the tinyurl.com/OSmascot page (because it makes
> it a lot easier for us to see all feedback side-by-side). Either way, I'm
> very appreciative of your thoughtful input!
>
> Best,
> Heidi Joy
>
>> On Nov 3, 2016, at 9:55 AM, Brian Rosmaita
>>  wrote:
>>
>> Hello Heidi Joy,
>>
>> First, let me say on behalf of the Glance community that we appreciate
>> the
>> hard work you and your team have done in creating all the team
>> mascot/logos.
>>
>> We discussed the draft logo at the weekly Glance meeting today [0], and
>> the general sense of the Glance community is that we'd like to request a
>> re-draft.  I'm not sure we conveyed clearly to you how important the
>> mouth-full-of-nuts aspect was to our choice of a chipmunk [1].  We could
>> tell from the video [2] that you started with the sample picture [3] and
>> worked from there to create something more streamlined, but we were
>> really
>> hoping for something more like the sample picture.
>>
>> The importance of the mouth-full-of-nuts aspect is that Glance curates
>> and
>> stores virtual machine images and other services consume images supplied
>> from Glance.  Additionally, this would more clearly distinguish our logo
>> from that of other projects.
>>
>> Thanks for being so willing to consider feedback!
>>
>> cheers,
>> brian
>>
>> [0] 
>>
>> http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-11-03-14.
>> 00
>> .log.html (beginning 14:14:07)
>> [1] https://etherpad.openstack.org/p/glance-project-mascot (lines 38-41)
>> [2] https://youtu.be/JmMTCWyY8Y4?t=41
>> [3] https://www.fortwhyte.org/wp-content/uploads/2014/12/chip2.jpg
>>
>>
>>
>>
>> On 10/21/16, 12:43 PM, "Brian Rosmaita" 
>> wrote:
>>
>>> Hello Glancers,
>>>
>>> Heidi Joy Tretheway and Todd Morey from the foundation have sent us the
>>> attached sneak preview of the Glance mascot/logo.  (As you may recall,
>>> we
>>> voted for a chipmunk [0].)
>>>
>>> They've set up a handy form [1] so that you can provide individual
>>> feedback.  The project logos will be finalized in time for the February
>>> PTG, so please provide any feedback before Friday, November 11.
>>>
>>> Before providing feedback, you might want to take a look at a short
>>> (50-second) video [2] to see how the logos developed and how they
>>> present
>>> a consistent interface across OpenStack projects.
>>>
>>> Since we're looking at light turnout for the Barcelona summit, I've put
>>> discussion of the logo on the agenda for the November 3 edition of the
>>> Glance weekly meeting [3].
>>>
>>> Keep in mind that the logo is still a draft, so please don't tweet it
>>> or
>>> use it in any promotional materials.
>>>
>>> cheers,
>>> brian
>>>
>>> [0] https://etherpad.openstack.org/p/glance-project-mascot
>>> [1] http://tinyurl.com/OSmascot
>>> [2] https://youtu.be/JmMTCWyY8Y4
>>> [3] https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>
>>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Steve Martinelli
+++ Thanks for making it 100x easier to release new libraries, it's now
something I look forward to.

On Fri, Jan 13, 2017 at 3:11 PM, Davanum Srinivas  wrote:

> Many thanks for all the automation and all other initiatives Doug!
>
> On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann 
> wrote:
> > I announced this at the release team meeting on 6 Jan, but thought
> > I should also post to the list as well:  I do not plan to serve as
> > the Release Management team PTL for the Pike release cycle.
> >
> > It has been my pleasure to serve as PTL, and we've almost finished
> > the automation work that I envisioned when I joined the team. Now
> > I would like to shift my focus to some other projects within the
> > community.  I will still be an active member of the Release Management
> > team, helping with new initiatives and reviews for releases.
> >
> > Doug
> >
> > 
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Davanum Srinivas
Many thanks for all the automation and all other initiatives Doug!

On Fri, Jan 13, 2017 at 3:00 PM, Doug Hellmann  wrote:
> I announced this at the release team meeting on 6 Jan, but thought
> I should also post to the list as well:  I do not plan to serve as
> the Release Management team PTL for the Pike release cycle.
>
> It has been my pleasure to serve as PTL, and we've almost finished
> the automation work that I envisioned when I joined the team. Now
> I would like to shift my focus to some other projects within the
> community.  I will still be an active member of the Release Management
> team, helping with new initiatives and reviews for releases.
>
> Doug
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][puppet][tripleo][kolla][fuel] Let's talk nova cell v2 deployments and upgrades

2017-01-13 Thread Matt Riedemann

On 1/13/2017 11:43 AM, Alex Schultz wrote:

Ahoy folks,

So we've been running into issues with the addition of the cell v2
setup as part of the requirements for different parts of nova.  It was
recommended that we move the conversation to the ML to get a wider
audience.  Basically, cell v2 has been working it's way into a
required thing for various parts of nova for the Ocata cycle.  We've
hit several issues[0][1] because of this.  I put a wider audience than
just nova because deployment tools need to understand how this works
as it impacts anyone installing or upgrading.

What is not clear is what is the expectation around how to install and
configure cell v2.  When we hit the first bug in the upgrade, we
reached out in irc[2] around this and it seemed that there was little
to no documentation around how this stuff all works.  There are
mentions of these new commands in the release notes[3] but it's not
clear on the specifics for both the upgrade process and also a fresh
install.  We attempted to just run simple_cell_setup in the puppet
(and tripleo downstream) because we assumed this would handle all the
things.  It's become clear that this is not the case.  The latest
bug[1] has shown that we do not have a proper understanding of what it
means to setup cell v2, so I'd like to use this email to start the
conversation as it impacts anyone either installing Ocata fresh or
attempting some sort of Newton -> Ocata upgrade.

Additionally after some conversations today on irc[4], it's also
become clear there is some disconnect around understanding between
nova folks and people who deploy as to how this thing would ideally
work.  So, what I would like to know is how should people be
installing and configuring nova cell v2? Specifically what are the
steps so that the deployment tools and operators can properly handle
these complexities.  What are the assumptions being baked into
simple_cell_setup?  It seems to assume computes should exist before
the cell simple setup whereas traditionally computes are the last
thing to be setup (for new installs).

So, help?

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1649341
[1] https://bugs.launchpad.net/nova/+bug/1656276
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-12-12.log.html#t2016-12-12T17:38:56
[3] http://docs.openstack.org/releasenotes/nova/unreleased.html#id12
[4] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-01-13.log.html#t2017-01-13T14:11:37

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for starting this thread.

First, I want to apologize for the lack of communication and 
documentation around this. I know this is frustrating. We've been so 
heads-down on getting the changes out and getting devstack/grenade 
working with this stuff that we haven't taken the time to document it 
outside of the release notes, which isn't adequate.


Without going into details, there was a major change in personnel this 
release for working on cells v2 so we've been doing some catch-up and 
that's part of why things are a bit scatter-brained.


Based on the discussion in IRC this morning, I took some time to try and 
capture some of the immediate issues/todos/questions in the cells v2 
wiki [1].


Documenting this is going to be a priority. We should have something up 
for review in Nova by next week (like Monday), at least a draft.


I think it's also important to realize (for the nova team) that we've 
been thinking about cells v2 deployment from an upgrade perspective, 
which I think is why we had the simple_cell_setup command asserting that 
you needed computes already registered to map to hosts and cells. As 
noted, this is not going to work in a fresh install deployment where the 
control services are set up before the computes. We're working on 
addressing that in [2].


To compare with how simple_cell_setup works, the recently created 
'nova-status upgrade check' command [3] is OK with there being no 
compute nodes yet (it does fail if you don't have any cell mappings 
though). It's OK with that because of the fresh install scenario. It 
doesn't fail but it does report that you need to remember to discover 
new hosts and map them once they are registered.
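
To make the ordering concrete, here's roughly the flow we're talking 
about for a fresh install. Command names are from the Ocata 
nova-manage/nova-status tooling, but the database/transport-url flags 
are elided here and the exact sequence is part of what we're still 
hashing out -- treat this as a sketch, not an official procedure:

```shell
nova-manage api_db sync                       # create the API DB schema
nova-manage cell_v2 map_cell0                 # register cell0
nova-manage cell_v2 create_cell --name cell1  # register the first real cell
nova-manage db sync                           # create the cell DB schema

# ... start control services, then bring up the nova-compute services ...

nova-manage cell_v2 discover_hosts            # map newly registered computes
nova-status upgrade check                     # readiness check; reports
                                              # unmapped hosts rather than
                                              # failing on a fresh install
```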


So for whatever reason the existing commands and code were written under 
the assumption that you'd first create computes (or already have them) 
and then map those to a cell, and we need to adjust the tooling for the 
scenario that you want to create the cell first and map hosts later. I 
think today grenade and devstack are working the same way as far as 
setting up cells v2, but eventually I think we're going to want to make 
grenade do the more specific upgrade path where we expect a compute 

Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-13 Thread Joshua Harlow

Fox, Kevin M wrote:

Don't want to hijack the thread too much but... when the PTG was being sold, it 
was a way to get the various developers into one place and make it cheaper for 
devs to attend. Now it seems to be being made into a place where each of the 
silos can coexist but not talk, and then the summit is still required to get 
cross-project work done, so it only increases the devs' cost by requiring 
attendance at both. This is very troubling. :/ What's the main benefit of the 
PTG then?

Thanks,
Kevin


Sometimes I almost wish we just rented out a football stadium (or 
equivalent, a soccer field?) and put all the contributors in the 'field' 
with bean bags and some tables and a bunch of white boards (and a lot of 
wifi and power cords) and let everyone 'have at it' (ideally in a 
stadium with a roof in the winter). Maybe put all the infra people in a 
circle in the middle and make the foundation people all wear referee 
outfits.


It'd be an interesting social experiment at least :-P




Re: [openstack-dev] [rdo-list] [RDO][DLRN] DLRN worker downtime during the weekend

2017-01-13 Thread Alan Pevec
> We need to run some maintenance operations on the DLRN instance next weekend, 
> starting on Friday 13 @ 19:00 UTC.

I've aborted the purge and restarted Ocata master builder so we can
get reverts built for CI blocker
https://bugs.launchpad.net/nova/+bug/1656276

Cheers,
Alan



[openstack-dev] [release][ptl] not planning to serve as release PTL for Pike

2017-01-13 Thread Doug Hellmann
I announced this at the release team meeting on 6 Jan, but thought
I should also post to the list as well:  I do not plan to serve as
the Release Management team PTL for the Pike release cycle.

It has been my pleasure to serve as PTL, and we've almost finished
the automation work that I envisioned when I joined the team. Now
I would like to shift my focus to some other projects within the
community.  I will still be an active member of the Release Management
team, helping with new initiatives and reviews for releases.

Doug



Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Kevin Benton
"as an operator"? That's not related to the iPhone developer use case (user
usability) at all.

For users, they just boot a VM and Nova will call the API and neutron will
set up a network/router/etc on demand and return it, so there is nothing the
user has to do.
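
Concretely, the flow looks something like this -- assuming the operator 
has already set up a default external network and a default subnetpool; 
flag names and subcommands may vary by client version, so verify against 
the networking guide:

```shell
# Implicit: ask Nova for an auto-allocated network when booting
# (compute API microversion >= 2.37)
nova boot --flavor m1.small --image cirros --nic auto demo-vm

# Explicit: ask Neutron directly for the project's auto-allocated
# topology (network + subnet + router wired to the default external net)
neutron auto-allocated-topology-show
```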

If you have issues with operator usability, that relates to improvements to
the openstack networking guide, but that's not what this thread is about.


On Jan 13, 2017 10:50, "Clint Byrum"  wrote:

Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> Kevin Benton wrote:
> > If you don't want users to specify network details, then use the get me
> > a network extension or just have them boot to a public (or other
> > pre-created) network.
> >
> > In your thought experiment, why is your iPhone app developer not just
> > using a PaaS that handles instance scaling, load balancing and HA? Why
> > would he/she want to spend time managing security updates and log
> > rotation for an operating system running inside another program
> > pretending to be hardware? Different levels of abstraction solve
> > different use cases.
>
> Fair point, probably mr/mrs iPhone app developer should be doing that.
>

I totally disagree. If PaaS was the answer, they'd all be using PaaS.

Maybe some day, but that's no excuse for having an overly complex story
for the base. I totally appreciate that "Get me a network" is an effort
to address this. But after reading docs on it, I actually have no idea
how it works or how to make use of it (I do have a decent understanding
of how to set up a default subnetpool as an operator).



Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-13 Thread Fox, Kevin M
Don't want to hijack the thread too much but... when the PTG was being sold, it 
was a way to get the various developers into one place and make it cheaper for 
devs to attend. Now it seems to be being made into a place where each of the 
silos can coexist but not talk, and then the summit is still required to get 
cross-project work done, so it only increases the devs' cost by requiring 
attendance at both. This is very troubling. :/ What's the main benefit of the 
PTG then?

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Friday, January 13, 2017 6:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Consistent Versioned Endpoints

Sean Dague wrote:
> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
>> [...]
>> Can we get to this "perfect world"? Let's discuss at the PTG.
>> It is my understanding that we do not have the ability to schedule a
>> time or room for such a cross-project discussion. Please chime in if
>> interested, and/or make your interest known to scottda, mordred, or edleafe.
>
> Happy to join in on this, it does seem weird there is no time / space
> for such things at PTG.

We'll have a room available for such inter-team discussions at the PTG.
However, since only a fragment of our community will be present at the
PTG, we need to be careful to avoid exclusion. Ideally we would only use
that room to discuss things that are only relevant to upstream
development teams, and use the "Forum" in Boston to hold truly
cross-project / community-wide discussions. The typical targets for the
discussion room at the PTG are therefore ad-hoc discussions between
project teams, where a separate fishbowl room makes more sense than
holding them in a specific team room.

As far as scheduling goes, the "discussion" room at the PTG should be
available from Monday to Thursday, and scheduled in unconference-style,
to give flexibility to have the discussions we need to have. Current
plan is to use an ethercalc document to share the schedule.

For critical discussions (which don't belong to any given room, fit the
cross-section of our community present, and/or can't wait until Boston)
we could totally hardcode them in the schedule for the discussion
room... but we should probably keep those to a minimum and use team
rooms as much as possible.

How much time do you think that discussion would need? If more than a
few hours it's probably just simpler to give it a full team room.

--
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Armando M.
On 13 January 2017 at 10:47, Clint Byrum  wrote:

> Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> > Kevin Benton wrote:
> > > If you don't want users to specify network details, then use the get me
> > > a network extension or just have them boot to a public (or other
> > > pre-created) network.
> > >
> > > In your thought experiment, why is your iPhone app developer not just
> > > using a PaaS that handles instance scaling, load balancing and HA? Why
> > > would he/she want to spend time managing security updates and log
> > > rotation for an operating system running inside another program
> > > pretending to be hardware? Different levels of abstraction solve
> > > different use cases.
> >
> > Fair point, probably mr/mrs iPhone app developer should be doing that.
> >
>
> I totally disagree. If PaaS was the answer, they'd all be using PaaS.
>
> Maybe some day, but that's no excuse for having an overly complex story
> for the base. I totally appreciate that "Get me a network" is an effort
> to address this. But after reading docs on it, I actually have no idea
> how it works or how to make use of it (I do have a decent understanding
> of how to set up a default subnetpool as an operator).
>

I'd be happy to improve the docs, but your feedback is not very actionable.
Any chance you can elaborate on what you're struggling with?

Thanks,
Armando




Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-13 Thread Dariusz Śmigiel
2017-01-13 11:17 GMT-06:00 Doug Hellmann :
> Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
>> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
>> > hi,
>> >
>> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
>> >> Hi
>> >>
>> >> As of today, the project neutron-vpnaas is no longer part of the neutron
>> >> governance. This was a decision reached after the project saw a dramatic
>> >> drop in active development over a prolonged period of time.
>> >>
>> >> What does this mean in practice?
>> >>
>> >> From a visibility point of view, release notes and documentation will no
>> >> longer appear on openstack.org as of Ocata going forward.
>> >> No more releases will be published by the neutron release team.
>> >> The neutron team will stop proposing fixes for the upstream CI, if not
>> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
>> >>
>> >> How does it affect you, the user or the deployer?
>> >>
>> >> You can continue to use vpnaas and its CLI via the python-neutronclient 
>> >> and
>> >> expect it to work with neutron up until the newton
>> >> release/python-neutronclient 6.0.0. After this point, if you want a 
>> >> release
>> >> that works for Ocata or newer, you need to proactively request a release
>> >> [5], and reach out to a member of the neutron release team [3] for 
>> >> approval.
>> >
>> > i want to make an ocata release. (and more importantly the stable branch,
>> > for the benefit of consuming subprojects)
>> > for the purpose, the next step would be ocata-3, right?
>>
>> Hey Takashi,
>> If you want to release new version of neutron-vpnaas, please look at [1].
>> This is the place, which you need to update and based on provided
>> details, tags and branches will be cut.
>>
>> [1] 
>> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
>
> Unfortunately, since vpnaas is no longer part of an official project,
> we won't be using the releases repository to manage and publish
> information about the releases. It'll need to be done by hand.
>
> Doug
>
Thanks Doug for clarification,

Dariusz



Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-13 Thread Dariusz Śmigiel
2017-01-13 11:13 GMT-06:00 Kevin Benton :
> Sounds like we must have a memory leak in the Linux bridge agent if that's
> the only difference between the Linux bridge job and the ovs ones. Is there
> a bug tracking this?

Just created one [1]. For now, this issue was observed in two cases
(mentioned in bug description).

[1] https://bugs.launchpad.net/neutron/+bug/1656386
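
For whoever picks up the bug: one way to rank allocation sites inside a 
long-running Python agent and confirm a leak is the stdlib tracemalloc 
module. This is a generic sketch (the `churn` function is a made-up 
stand-in, not neutron code):

```python
import tracemalloc


def churn():
    """Stand-in for suspected agent code: allocates and retains objects."""
    return [str(i) * 50 for i in range(10000)]


tracemalloc.start()
before = tracemalloc.take_snapshot()
retained = churn()
after = tracemalloc.take_snapshot()

# Rank source lines by net new memory between the two snapshots; a real
# leak shows the same sites growing across successive snapshots.
for stat in after.compare_to(before, "lineno")[:5]:
    print(stat)
```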



Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2017-01-12 22:38:46 -0800:
> Kevin Benton wrote:
> > If you don't want users to specify network details, then use the get me
> > a network extension or just have them boot to a public (or other
> > pre-created) network.
> >
> > In your thought experiment, why is your iPhone app developer not just
> > using a PaaS that handles instance scaling, load balancing and HA? Why
> > would he/she want to spend time managing security updates and log
> > rotation for an operating system running inside another program
> > pretending to be hardware? Different levels of abstraction solve
> > different use cases.
> 
> Fair point, probably mr/mrs iPhone app developer should be doing that.
> 

I totally disagree. If PaaS was the answer, they'd all be using PaaS.

Maybe some day, but that's no excuse for having an overly complex story
for the base. I totally appreciate that "Get me a network" is an effort
to address this. But after reading docs on it, I actually have no idea
how it works or how to make use of it (I do have a decent understanding
of how to set up a default subnetpool as an operator).



[openstack-dev] [oslo] Pike PTG idea area

2017-01-13 Thread Joshua Harlow

Hi folks,

Just wanted all of you get your thinking caps on.

>>> time.sleep(2)

Ok hopefully you now have it on,

Then with cap *on* if you don't mind adding some of your thoughts to:

https://etherpad.openstack.org/p/oslo-ptg-pike

If you could keep the thoughts you are having targeted/focused at oslo 
and the pike cycle and any kind of discussions focused on openstack 
that'd be super, but if you really must, other thoughts are ok too (just 
maybe put them at the bottom).


>>> write_things_in_etherpad()
>>> exit()

Ok you can now take the cap *off*.

P.S.

For a bigger idea that may require a full etherpad try to put that at:

https://wiki.openstack.org/wiki/PTG/Pike/Etherpads

Thanks,

-Josh



[openstack-dev] [neutron] decomposed plugins broken

2017-01-13 Thread Gary Kotton
Hi,
The l3 patch - https://review.openstack.org/#/c/417604/ has broken the 
decomposed plugins. We need to look at addressing this.
We will fix the code, just a heads up to all other projects.
Thanks
Gary


[openstack-dev] [cinder] Cinder replication for multisite openstack clouds

2017-01-13 Thread Klemen Pogacnik
HI!
I've been playing with dual site openstack clouds. I use ceph as a backend for
Cinder volumes. Each openstack has its own ceph cluster. Data replication
is done by ceph rbd mirroring.
Current Cinder replication design (cheesecake) only protects against
storage failure.
What about support for multi-site deployments and protection of a whole
data center?
Has any work been planned or done yet on that topic?
Kemo



[openstack-dev] [nova][puppet][tripleo][kolla][fuel] Let's talk nova cell v2 deployments and upgrades

2017-01-13 Thread Alex Schultz
Ahoy folks,

So we've been running into issues with the addition of the cell v2
setup as part of the requirements for different parts of nova.  It was
recommended that we move the conversation to the ML to get a wider
audience.  Basically, cell v2 has been working its way into a
required thing for various parts of nova for the Ocata cycle.  We've
hit several issues[0][1] because of this.  I put a wider audience than
just nova because deployment tools need to understand how this works
as it impacts anyone installing or upgrading.

What is not clear is what is the expectation around how to install and
configure cell v2.  When we hit the first bug in the upgrade, we
reached out in irc[2] around this and it seemed that there was little
to no documentation around how this stuff all works.  There are
mentions of these new commands in the release notes[3] but it's not
clear on the specifics for both the upgrade process and a fresh
install.  We attempted to just run simple_cell_setup in the puppet
(and tripleo downstream) because we assumed this would handle all the
things.  It's become clear that this is not the case.  The latest
bug[1] has shown that we do not have a proper understanding of what it
means to setup cell v2, so I'd like to use this email to start the
conversation, as it impacts anyone either installing Ocata fresh or
attempting some sort of Newton -> Ocata upgrade.

Additionally after some conversations today on irc[4], it's also
become clear there is some disconnect around understanding between
nova folks and people who deploy as to how this thing would ideally
work.  So, what I would like to know is how should people be
installing and configuring nova cell v2? Specifically what are the
steps so that the deployment tools and operators can properly handle
these complexities.  What are the assumptions being baked into
simple_cell_setup?  It seems to assume computes should exist before
the simple cell setup, whereas traditionally computes are the last
thing to be set up (for new installs).

So, help?

Thanks,
-Alex

[0] https://bugs.launchpad.net/tripleo/+bug/1649341
[1] https://bugs.launchpad.net/nova/+bug/1656276
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-12-12.log.html#t2016-12-12T17:38:56
[3] http://docs.openstack.org/releasenotes/nova/unreleased.html#id12
[4] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-01-13.log.html#t2017-01-13T14:11:37



Re: [openstack-dev] [glance][tempest][api] community images, tempest tests, and API stability

2017-01-13 Thread Ian Cordasco
-Original Message-
From: Ian Cordasco 
Reply: Ian Cordasco 
Date: January 13, 2017 at 08:12:12
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [glance][tempest][api] community images,
tempest tests, and API stability

> And besides "No one uses Glance" [ref: 
> http://lists.openstack.org/pipermail/openstack-dev/2013-February/005758.html]

I was being a bit glib when I wrote this last sentence this morning,
but in commenting on the Gerrit patch to skip the test in question, I
realized this is actually far more valid than I realized.

Let's look at the state of Glance v2 and be brutally honest:

Interoperability

    Glance v2 is currently incapable of being truly interoperable between
    existing publicly accessible clouds. There are two ways to currently
    upload images to Glance. Work is being done to add a third way that
    suits the needs of all cloud providers. This introduces further
    interoperability incompatibility (say *that* three times fast ;)) and
    honestly, I don't see it being a problem for the next reason.

    Further, the tasks API presents a huge number of interoperability
    problems. We've limited that to users with the admin role, but if you
    have an admin on two clouds operated by different people, there is a
    good likelihood the tasks will not be the same.


v2 deployment and usage

    The best anyone working on Glance can determine, v2 is rarely deployed
    for users, and if it is, it isn't chosen. v2 was written specifically
    to excise some problematic "features" that some users still rely on. As
    such, you see conversations even between Glance and *other services*
    about how to migrate to v2. Nova only recently made the migration. Heat
    still has yet to do so and I think has only just relented in its
    desire to avoid it.


Security Concerns

    There are some serious security issues that will be fixed by this
    change. If we were to add the backwards compatibility shim that the QA
    team has suggested repeatedly that we add, we'd be keeping those security
    issues.


In short, I feel like the constant refrain from the QA team has been two fold:

- "This will cause interoperability problems"
- "This is backwards incompatible"

The Glance team has come to terms with this over the course of several
cycles. I don't think anyone is thrilled about the prospect of
potentially breaking some users' workflows. If we had been that
enthusiastic about it, then we simply would have acted on this when it
was first proposed. It would have completely gone unnoticed except by
some users. There's no acceptable way (without sacrificing security -
which let's be clear, is entirely unacceptable) that we can maintain a
backwards compatibility shim and Glance v2 already has loads of
interoperability problems. We're working on fixing them, but we're
also working on fixing the user experience, which is a big part of
this patch.

--
Ian Cordasco



Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-13 Thread Doug Hellmann
Excerpts from Dariusz Śmigiel's message of 2017-01-13 09:11:01 -0600:
> 2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> > hi,
> >
> > On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
> >> Hi
> >>
> >> As of today, the project neutron-vpnaas is no longer part of the neutron
> >> governance. This was a decision reached after the project saw a dramatic
> >> drop in active development over a prolonged period of time.
> >>
> >> What does this mean in practice?
> >>
> >> From a visibility point of view, release notes and documentation will no
> >> longer appear on openstack.org as of Ocata going forward.
> >> No more releases will be published by the neutron release team.
> >> The neutron team will stop proposing fixes for the upstream CI, if not
> >> solely on a voluntary basis (e.g. I still felt like proposing [2]).
> >>
> >> How does it affect you, the user or the deployer?
> >>
> >> You can continue to use vpnaas and its CLI via the python-neutronclient and
> >> expect it to work with neutron up until the newton
> >> release/python-neutronclient 6.0.0. After this point, if you want a release
> >> that works for Ocata or newer, you need to proactively request a release
> >> [5], and reach out to a member of the neutron release team [3] for 
> >> approval.
> >
> > i want to make an ocata release. (and more importantly the stable branch,
> > for the benefit of consuming subprojects)
> > for the purpose, the next step would be ocata-3, right?
> 
> Hey Takashi,
> If you want to release new version of neutron-vpnaas, please look at [1].
> This is the place, which you need to update and based on provided
> details, tags and branches will be cut.
> 
> [1] 
> https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml

Unfortunately, since vpnaas is no longer part of an official project,
we won't be using the releases repository to manage and publish
information about the releases. It'll need to be done by hand.

Doug

> 
> BR, Dariusz
> 



[openstack-dev] [nova] placement/resource providers update 8

2017-01-13 Thread Chris Dent


Hi, more resource providers and placement information for your reading
pleasure. Things continue to move along, with plenty of stuff to
review and a new bug that someone could work on.

# What Matters Most

The main priority remains the same: Getting the scheduler using a
filtered list of resource providers. That work is in progress near
here:

 https://review.openstack.org/#/c/392569/

# Stuff that's Different Now

Discussion about the upgrade process from newton (where placement is
optional) to ocata (where we _really_ want it to be required) has led
the placement client in the resource tracker (and eventually the
scheduler) to change from trying the placement API and not retrying on
failure to retrying with a log warning:

https://review.openstack.org/#/c/418590/
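
Generically, the retry-with-warning behaviour looks something like the 
following. This is a sketch of the pattern only, not the actual nova 
resource tracker code; names like `flaky_placement_call` are made up:

```python
import functools
import logging
import time

LOG = logging.getLogger(__name__)


def retry_with_warning(attempts=3, delay=0.0, exc_types=(Exception,)):
    """Retry a transiently failing call, logging a warning per failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exc_types as e:
                    LOG.warning("Attempt %d/%d of %s failed: %s",
                                attempt, attempts, func.__name__, e)
                    if attempt == attempts:
                        raise  # give up after the last attempt
                    time.sleep(delay)
        return wrapper
    return decorator


calls = {"n": 0}


@retry_with_warning(attempts=3)
def flaky_placement_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("placement not ready")
    return "ok"


print(flaky_placement_call())  # succeeds on the third attempt
```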

The aggregates API will now return the UUID of an aggregate in the JSON
representation.

# Stuff Needing a Plan

# can_host, aggregates in filtering

See:

http://lists.openstack.org/pipermail/openstack-dev/2017-January/109971.html

in which I go on at some length on this topic in response to last
week's update, but no one has yet confirmed or denied the concerns.

# Pending Planned Work

## Dev Documentation

Work has started on documentation for developers of the placement API.
It is under review starting at

 https://review.openstack.org/#/c/411946/

## Handling Aggregates in Placement Server and Clients

Resource tracker handling of aggregates is the last big piece for
aggregate handling:

 https://review.openstack.org/#/c/407309/

The server side change has merged, after some stumbling over the
proper handling of microversions: there was non-microversioned code on
master that enabled the `member_of` feature to exist, but not actually
work.

The link to the archive above is about the timing of aggregate handling
for the scheduler.

## Resource Tracker Cleanup and Use of Resource Classes For Ironic

The cleanup to the resource tracker so that it can be more effective
and efficient and work correctly with Ironic continues in the
custom-resource-classes topic:

 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/custom-resource-classes

# Stuff Happening Outside of Nova

* Neutron IPV4 Inventory
  https://review.openstack.org/#/c/358658/

* Continued puppet/tripleo work with placement
  https://review.openstack.org/#/c/406309/

# Bugs, Pick up Work, and Miscellaneous

* Allocation bad input handling and dead code fixing
  https://review.openstack.org/#/c/419137/

  This has turned out to be a goldmine for finding bugs. In addition
  to fixing how allocations are created and removing dead code, it
  also exposed that resource provider objects were not having their
  generation being updated in the right places at the right times and
  a bug in gabbi related to exposing failures in fixtures.

* Update the generic resource pools spec to reflect reality
  https://review.openstack.org/#/c/407562/

* [WIP] Placement api: Add json_error_formatter to defaults
  https://review.openstack.org/#/c/395194/

  This is an effort to avoid boilerplate, but no good solution has
  been determined yet. Reviewers can help us figure a good way to
  handle things.

* Demo inventory update script:
  https://review.openstack.org/#/c/382613/

  This one might be considered a WIP because how it chooses to do
  things (rather simply and dumbly) may not be in line with expectations.

* CORS support in placement API:
  https://review.openstack.org/#/c/392891/

* Better debug logging on inventory update failures
  https://review.openstack.org/#/c/414230/

  The request-id wasn't being recorded, which makes it difficult to
  associate client side logs with server side logs.

* Capacity exceeded logging is in the wrong place
  https://review.openstack.org/#/c/410128/

* Python3 fixes that include changes in placement
  https://review.openstack.org/#/c/419476/

  JSON bodies need to be bytes in python3. The code was written
  assuming webob would take care of this but it doesn't. The plan is
  for code like this to merge, and then a follow up to back it out in
  favor of an `EncodeUTF8` middleware, removing the need for
  boilerplate on all handlers.
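
  As a sketch of the idea (hypothetical code, not the actual patch), such
  a middleware would wrap the application and encode any text chunks of
  the response body, so individual handlers can keep returning str:

```python
class EncodeUTF8(object):
    """Encode any text chunks of the WSGI response body to UTF-8 bytes."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for chunk in self.app(environ, start_response):
            if isinstance(chunk, str):
                chunk = chunk.encode('utf-8')
            yield chunk

def handler(environ, start_response):
    start_response('200 OK', [('Content-Type', 'application/json')])
    return ['{"ok": true}']  # a str body: invalid for WSGI under python3

wrapped = EncodeUTF8(handler)
body = b''.join(wrapped({}, lambda status, headers: None))
```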

* Backports to Newton
  https://review.openstack.org/#/q/owner:mriedem%2540us.ibm.com+status:open+branch:stable/newton

  MattR has a passel of backports to Newton, several of which are
  placement bug fixes.

* NEW BUG: DiscoveryFailure when trying to get resource providers
  https://bugs.launchpad.net/nova/+bug/1656075

  This is a newly discovered bug: yet another exception raised
  by keystoneauth1 that is not being handled.

# Post Ocata

## Resource Provider Traits

Proof of concept work on resource provider traits is seeing plenty of
activity lately. This is the functionality that will allow people to
express requirements and preferences about qualitative aspects of
resources.

 

Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-13 Thread Kevin Benton
Sounds like we must have a memory leak in the Linux bridge agent if that's
the only difference between the Linux bridge job and the ovs ones. Is there
a bug tracking this?

On Jan 13, 2017 08:58, "Clark Boylan"  wrote:

> On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> > Does anybody know whether we can bump memory on nodes in the gate
> > without losing resources for running other jobs?
> > Has anybody experience with memory consumption being higher when using
> > linux bridge agents?
> >
> > Any other ideas?
>
> Ideally I think we would see more work to reduce memory consumption.
> Heat has been able to more than halve their memory usage recently [0].
> Perhaps start by identifying the biggest memory hogs and go from there?
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> January/109748.html
>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] Stop using "current" DLRN repo

2017-01-13 Thread John Trowbridge


On 01/13/2017 09:25 AM, Emilien Macchi wrote:
> On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami  wrote:
>>
>> Hi,
>>
>> following a suggestion from Alan Pevec I'm proposing to stop using
>> "current" repo from dlrn and start using "consistent" instead.
>> The main difference should only be that "consistent" is not affected by
>> packages in ftbfs, so we're testing with a bit more stability.
> 
> We might want to exclude tripleo projects because we always need the
> latest on this case. Otherwise, lgtm.

The "current" repo being replaced is actually already only used for the
tripleo gated projects (and stable branches). We definitely need
"current" in the tripleo gated project case, and because we don't have a
special repo for only tripleo projects in the stable case, we need it
there too.

> 
>> This is the proposal
>> https://review.openstack.org/419455
>>
>> please comment, especially if I did not understand correctly what the
>> difference between "current" and "consistent" is.
>>
>>
>> Thanks.
>>
> 
> 
> 



Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-13 Thread Morales, Victor
My two cents on this

Agree with Kevin. IaaS solutions (like CloudStack, OpenNebula, OpenStack,
etc.) offer a deep level of customization for those apps which require
fine-grained control of cloud resources, with the disadvantage of
increasing the time required to develop them. On the other hand, PaaS
solutions (e.g. Cloud Foundry, OpenShift, etc.), usually deployed on top of
IaaS solutions, offer a quicker development process but a lower level of
customization, with performance and scalability controlled by the PaaS
solution. Lastly, my understanding is that the term "legacy apps" refers
to non-cloud-aware applications, usually built with a monolithic
architecture instead of microservices and/or a publish/subscribe pattern.

Regards, 
Victor Morales

irc: electrocucaracha




On 1/13/17, 12:38 AM, "Joshua Harlow"  wrote:

>Kevin Benton wrote:
>> If you don't want users to specify network details, then use the get me
>> a network extension or just have them boot to a public (or other
>> pre-created) network.
>>
>> In your thought experiment, why is your iPhone app developer not just
>> using a PaaS that handles instance scaling, load balancing and HA? Why
>> would he/she want to spend time managing security updates and log
>> rotation for an operating system running inside another program
>> pretending to be hardware? Different levels of abstraction solve
>> different use cases.
>
>Fair point, probably mr/mrs iPhone app developer should be doing that.
>
>>
>> Amazon VPC exists (and is the default) for the same reason neutron
>> provides network virtualization primitives. People moving legacy apps
>> onto these systems end up needing specific addressing schemes and
>> isolation topologies.
>>
>
>What's a legacy app, sounds sorta dirty lol
>


Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-13 Thread Clark Boylan
On Fri, Jan 13, 2017, at 07:48 AM, Jakub Libosvar wrote:
> Does anybody know whether we can bump memory on nodes in the gate 
> without losing resources for running other jobs?
> Has anybody experience with memory consumption being higher when using 
> linux bridge agents?
> 
> Any other ideas?

Ideally I think we would see more work to reduce memory consumption.
Heat has been able to more than halve their memory usage recently [0].
Perhaps start by identifying the biggest memory hogs and go from there?

[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109748.html
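
For example, a quick first pass at identifying the biggest memory hogs on
a node (a generic command, not a specific infra tool):

```shell
# Show the ten processes with the largest resident set size (RSS, in kB),
# plus the header line.
ps -eo rss,pid,comm --sort=-rss | head -n 11
```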

Clark



[openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-13 Thread Antoni Segura Puimedon
Hi fellow kuryrs!

We are getting close to the end of the Ocata cycle and it is time to look
back and appreciate the good work all the contributors did. I would like to
thank you all for the continued dedication and participation in Gerrit, the
weekly meetings, answering queries on IRC, etc.

I also want to propose two people that I think will help us a lot as core
contributors in the next cycles.

For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping has
been contributing a lot since Mitaka, not just in code but also in reviews,
catching important details and fixing bugs. It is overdue that he gets to
help us even more!

For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into Kuryr
at the end of the Newton cycle and has done a wonderful job in the
Kubernetes integration contributing heaps of code and being an important
part of the design discussions and patches. It is also time for him to
start approving patches :-)


Let's keep the votes open until next Friday (unless enough votes are cast
earlier).

Regards,

Toni


[openstack-dev] [kolla] kolla-build building images that are known not to work for a given base/type

2017-01-13 Thread David Moreau Simard
Hi openstack-dev,

I have a simple question: why is there no mechanism to prevent
kolla-build from building images that are known not to work for a
given base/type?
In the Kolla CI gate, we build everything -- and then, if there are
errors, we match them against a list of images that are known to fail
[1].

This feels backwards: why even build things that are known to fail in
the first place?

For example, if I do "kolla-build -b centos -t binary", it should not
even attempt to build "bifrost-base" because there's no support for
that distro/type combination.
As an end user and not a CI system using tox, I don't know what is
supported and what isn't.

Are those reasonable expectations? How hard would it be to implement this?

[1]: https://github.com/openstack/kolla/blob/master/tests/test_build.py#L47-L85
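
A sketch of what such a pre-build filter could look like (hypothetical
code and matrix, not part of kolla today):

```python
# Hypothetical matrix of known-broken images per (base, install_type);
# kolla's real knowledge of this currently lives in tests/test_build.py.
UNBUILDABLE = {
    ('centos', 'binary'): {'bifrost-base'},
    ('ubuntu', 'source'): set(),
}

def filter_buildable(images, base, install_type):
    """Drop images known not to work for this base/type combination."""
    excluded = UNBUILDABLE.get((base, install_type), set())
    return [image for image in images if image not in excluded]

images = ['nova-base', 'bifrost-base', 'glance-api']
to_build = filter_buildable(images, 'centos', 'binary')
# bifrost-base is skipped before any build is attempted
```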

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]



Re: [openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-13 Thread Jeremy Stanley
On 2017-01-13 16:48:26 +0100 (+0100), Jakub Libosvar wrote:
[...]
> Does anybody know whether we can bump memory on nodes in the gate without
> losing resources for running other jobs?
[...]

We picked 8gb back when typical devstack-gate jobs only used around
2gb of memory, to make sure there was a hard upper limit developers
could expect when trying to recreate the same tests locally on their
systems. It would take a lot of convincing to raise that further
(and yes it would reduce the number of test instances we can run in
most of our providers since memory is generally the limiting factor
for our nova quotas).
-- 
Jeremy Stanley



Re: [openstack-dev] [Openstack-operators] [all] [goals] proposing a new goal: "Control Plane API endpoints deployment via WSGI"

2017-01-13 Thread Andy McCrae
>
>
> I have been looking for a Community Goal [1] that would directly help
> Operators, and I found the "run API via WSGI" one useful.
> So I've decided to propose this one as a goal for Pike, but I'll stay
> open to postponing it to Queens if our community thinks we already have
> too many goals for Pike.
>
> Note that this goal might help to achieve 2 other goals later:
> - enable and test SSL everywhere
> - enable and test IPv6 everywhere
>
> Here's the draft:
> https://review.openstack.org/419706
>
> Any feedback is very welcome, thanks for reading so far.
>
> [1] https://etherpad.openstack.org/p/community-goals


I'd be in favour of that being a goal - I was keen to propose this as a
goal for OpenStack-Ansible during the Pike cycle, so if it's community-wide
that would be great.

Aside from SSL everywhere and IPv6 everywhere, I think it helps
deployments become more uniform and easier to manage.

Thanks,
Andy


[openstack-dev] [infra][Neutron] Running out of memory on gate for linux bridge job

2017-01-13 Thread Jakub Libosvar

Hi,

recently I noticed we got the oom-killer in action in one of our jobs [1].
I have seen it several times, so far only with the linux bridge job. The
consequence is that usually mysqld gets killed, as the process that
consumes most of the memory; sometimes even nova-api gets killed.


Does anybody know whether we can bump memory on nodes in the gate 
without losing resources for running other jobs?
Has anybody experience with memory consumption being higher when using 
linux bridge agents?


Any other ideas?

Thanks,
Jakub

[1] 
http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32




Re: [openstack-dev] [all] [goals] proposing a new goal: "Control Plane API endpoints deployment via WSGI"

2017-01-13 Thread Brian Haley

On 01/12/2017 08:40 PM, Emilien Macchi wrote:

Greetings OpenStack community,

I have been looking for a Community Goal [1] that would directly help
Operators, and I found the "run API via WSGI" one useful.
So I've decided to propose this one as a goal for Pike, but I'll stay
open to postponing it to Queens if our community thinks we already have
too many goals for Pike.

Note that this goal might help to achieve 2 other goals later:
- enable and test SSL everywhere
- enable and test IPv6 everywhere


I know IPv6 is a secondary goal, but since this was previously working in 
devstack 
(https://www.openstack.org/summit/tokyo-2015/videos/presentation/deploying-and-operating-an-ipv6-only-openstack-cloud) 
I'll try and help out getting that working again 
(https://bugs.launchpad.net/devstack/+bug/1656329).  Added a line to the 
etherpad as well.  I know there's more to do than just that, but if we had a job 
in place to catch regressions at least we'd have a baseline that *something* works.


-Brian



Here's the draft:
https://review.openstack.org/419706

Any feedback is very welcome, thanks for reading so far.

[1] https://etherpad.openstack.org/p/community-goals






Re: [openstack-dev] [tripleo] Deep dive session about the UI - January 12

2017-01-13 Thread Ana Krivokapic
Hi everyone,

For those who missed the deep dive, I am posting the recording [1] of the
session. Unfortunately I had screen sharing problems which prevented me
from properly covering the part about setting up the development
environment during the deep dive. To make up for this, I made a short
video that walks through this part [2].

Hope you find it useful!


[1] https://bluejeans.com/s/D6TIf/
[2] https://www.youtube.com/watch?v=1puSvUqTKzw


On Thu, Dec 15, 2016 at 11:56 AM, Ana Krivokapic 
wrote:

>
>
> On Tue, Dec 13, 2016 at 5:05 PM, Emilien Macchi 
> wrote:
>
>> On Mon, Dec 12, 2016 at 1:30 PM, Ana Krivokapic 
>> wrote:
>> > Hi Everyone,
>> >
>> > On the 12th of January 2017, I'll lead a TripleO deep dive[1] session
>> on how
>> > to contribute to the TripleO UI. Hope to see many of you there!
>>
>> That's awesome! Thanks for this work.
>> Is there some requirements that we should prepare? (e.g. having an
>> undercloud ready, etc).
>>
>
> Good question. Yes, I will start with an installed undercloud and go from
> there. A virt setup with all the default options is perfectly fine.
> Basically, one would need to follow the TripleO docs [1] up to and
> including "Undercloud Installation". I also recommend looking at Trown's
> excellent deep dive[2] on how to set up a development environment.
>
>
>> Is it a presentation or/and Hands-On?
>>
>
> My idea was to have a short intro segment where I would do a quick
> high-level overview of the TripleO UI. Then I would dive into the specifics
> of setting up a dev environment for UI development, and maybe even
> submit a simple patch for review as part of this. I suppose the audience
> could follow along, although I think it would be easier to just watch me do
> it and then replicate the steps later (with the help of the video and/or
> developer docs). At the end, we will have a special guest Liz Blanchard
> talk about how to contribute designs to the UI.
>
> I started an etherpad[3] with the agenda for the deep dive (still very
> much WIP).
>
> If anyone has any ideas about other things that they would like to see
> included, please by all means let me know. The outline above is just my
> idea of what could be included. We have plenty of time to add other things
> to the list, if people would find them useful. One point though, I did want
> to keep it more about how to contribute, as opposed to how to *use* the UI
> (developer perspective vs user perspective). The latter is a subject that
> would IMO deserve a whole separate session.
>
>
>>
>> Also, I propose that we record it as we did for the previous sessions,
>> so anyone can watch it again.
>>
>
> Absolutely, great idea.
>
>
>>
>> Thanks again.
>>
>> >
>> > [1] https://etherpad.openstack.org/p/tripleo-deep-dive-topics
>> >
>> > --
>> > Regards,
>> > Ana Krivokapic
>> > Senior Software Engineer
>> > OpenStack team
>> > Red Hat Inc.
>> >
>> --
>> Emilien Macchi
>>
>> 
>>
>
> [1] http://docs.openstack.org/developer/tripleo-docs/index.html
> [2] https://www.youtube.com/watch?v=E1d_RmysnA8
> [3] https://etherpad.openstack.org/p/tripleo-deep-dive-ui
>
> --
> Regards,
> Ana Krivokapic
> Senior Software Engineer
> OpenStack team
> Red Hat Inc.
>



-- 
Regards,
Ana Krivokapic
Senior Software Engineer
OpenStack team
Red Hat Inc.


Re: [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-13 Thread Dariusz Śmigiel
2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> hi,
>
> On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
>> Hi
>>
>> As of today, the project neutron-vpnaas is no longer part of the neutron
>> governance. This was a decision reached after the project saw a dramatic
>> drop in active development over a prolonged period of time.
>>
>> What does this mean in practice?
>>
>> From a visibility point of view, release notes and documentation will no
>> longer appear on openstack.org as of Ocata going forward.
>> No more releases will be published by the neutron release team.
>> The neutron team will stop proposing fixes for the upstream CI, if not
>> solely on a voluntary basis (e.g. I still felt like proposing [2]).
>>
>> How does it affect you, the user or the deployer?
>>
>> You can continue to use vpnaas and its CLI via the python-neutronclient and
>> expect it to work with neutron up until the newton
>> release/python-neutronclient 6.0.0. After this point, if you want a release
>> that works for Ocata or newer, you need to proactively request a release
>> [5], and reach out to a member of the neutron release team [3] for approval.
>
> i want to make an ocata release. (and more importantly the stable branch,
> for the benefit of consuming subprojects)
> for the purpose, the next step would be ocata-3, right?

Hey Takashi,
If you want to release a new version of neutron-vpnaas, please look at [1].
This is the place you need to update; based on the provided details, tags
and branches will be cut.

[1] 
https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
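
For reference, an entry in that deliverable file looks roughly like the
following (a hypothetical sketch only: the version number and commit hash
below are placeholders, not a real release request):

```yaml
# deliverables/ocata/neutron-vpnaas.yaml (illustrative values)
launchpad: neutron-vpnaas
team: neutron
release-model: cycle-with-intermediary
releases:
  - version: 10.0.0.0b3
    projects:
      - repo: openstack/neutron-vpnaas
        hash: 0123456789abcdef0123456789abcdef01234567
```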

BR, Dariusz



[openstack-dev] [Heat] Adding support to Neutron Trunking feature

2017-01-13 Thread Norbert Illés

Hi everybody,

I'm interested in adding Heat support to a Neutron feature, called VLAN 
aware VMs [1] (also known as Trunk Port or just Trunking) introduced in 
the Newton release.
A Heat Launchpad Blueprint has already been created [2] by Rabi Mishra,
but I'm wondering if anybody has started or plans to work on this.


If the answer is no, how can I indicate that I'd like to work on this 
Blueprint?


[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://blueprints.launchpad.net/heat/+spec/support-trunk-port

Cheers,
Norbert



Re: [openstack-dev] [release] Release countdown for week R-5, Jan 16-20

2017-01-13 Thread Doug Hellmann
Excerpts from joehuang's message of 2017-01-13 01:23:08 +:
> Hello, Doug,
> 
> One question: according to the guide for self-branching [1], the Ocata
> stable branch should be created from the RC1 tag for projects using the
> cycle-with-milestone release model. The date for RC1 is Jan 30 - Feb 03
> according to the schedule [2]. Tricircle is a big-tent project using the
> cycle-with-intermediary model, and it depends on stable Neutron. Should
> the Tricircle Ocata stable branch be created during Jan 30 - Feb 03 or
> later than Feb 03? I think the Neutron Ocata stable branch will be
> created during Jan 30 - Feb 03.

You'll probably want to wait until after the Neutron stable branch has
been created. You can submit the branch request and either mark it WIP
or add a Depends-On to prevent it from merging before the neutron
request.

Doug

> 
> [1]http://git.openstack.org/cgit/openstack/releases/tree/README.rst#n66
> [2] https://releases.openstack.org/ocata/schedule.html
> 
> Best Regards
> Chaoyi Huang (joehuang)
> 
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: 13 January 2017 2:34
> To: openstack-dev
> Subject: [openstack-dev] [release] Release countdown for week R-5, Jan 16-20
> 
> Focus
> -
> 
> Feature work and major refactoring should be starting to wrap up
> as we approach the third milestone and various feature and release
> freeze dates.
> 
> The deadline for non-client library releases is Thursday 19 Jan.
> We do not grant Feature Freeze Extensions for any libraries, so
> that is a hard freeze date. Any feature work that requires updates
> to non-client libraries should be prioritized so it can be completed
> by that time.
> 
> Release Tasks
> -
> 
> As we did at the end of Newton, when the time comes to create
> stable/ocata branches they will be configured so that members of
> the $project-release group in gerrit have permission to approve
> patches.  This group should be a small subset of the core review
> team, aware of the priorities and criteria for patches to be approved
> as we work toward release candidates. Release liaisons should ensure
> that these groups exist in gerrit and that their membership is
> correct for this cycle.  Please coordinate with the release management
> team if you have any questions.
> 
> General Notes
> -
> 
> We will start the soft string freeze during R-4 (23-27 Jan). See
> https://releases.openstack.org/ocata/schedule.html#o-soft-sf for
> details
> 
> The release team is now publishing the release calendar using ICS.
> Subscribe your favorite calendaring software to
> https://releases.openstack.org/schedule.ics for automatic updates.
> 
> Important Dates
> ---
> 
> Final release of non-client libraries: 19 Jan
> 
> Ocata 3 Milestone, with Feature and Requirements Freezes: 26 Jan
> 
> Ocata release schedule: http://releases.openstack.org/ocata/schedule.html
> 



Re: [openstack-dev] Consistent Versioned Endpoints

2017-01-13 Thread Thierry Carrez
Sean Dague wrote:
> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
>> [...]
>> Can we get to this "perfect world"? Let's discuss at the PTG.
>> It is my understanding that we do not have the ability to schedule a
>> time or room for such a cross-project discussion. Please chime in if
>> interested, and/or make your interest known to scottda, mordred, or edleafe.
> 
> Happy to join in on this, it does seem weird there is no time / space
> for such things at PTG.

We'll have a room available for such inter-team discussions at the PTG.
However, since only a fragment of our community will be present at the
PTG, we need to be careful to avoid exclusion. Ideally we would only use
that room to discuss things that are only relevant to upstream
development teams, and use the "Forum" in Boston to hold truly
cross-project / community-wide discussions. The typical target for the
discussion room at the PTG is therefore ad-hoc discussions between teams,
where a separate fishbowl room makes more sense than holding them
in a specific team room.

As far as scheduling goes, the "discussion" room at the PTG should be
available from Monday to Thursday, and scheduled in unconference-style,
to give flexibility to have the discussions we need to have. Current
plan is to use an ethercalc document to share the schedule.

For critical discussions (which don't belong to any given room, fit the
cross-section of our community present, and/or can't wait until Boston)
we could totally hardcode them in the schedule for the discussion
room... but we should probably keep those to a minimum and use team
rooms as much as possible.

How much time do you think that discussion would need? If more than a
few hours, it's probably just simpler to give it a full team room.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleo][ci] Stop using "current" DLRN repo

2017-01-13 Thread Emilien Macchi
On Fri, Jan 13, 2017 at 9:09 AM, Gabriele Cerami  wrote:
>
> Hi,
>
> following a suggestion from Alan Pevec I'm proposing to stop using
> "current" repo from dlrn and start using "consistent" instead.
> The main difference should only be that "consistent" is not affected by
> packages in ftbfs, so we're testing with a bit more stability.

We might want to exclude tripleo projects because we always need the
latest on this case. Otherwise, lgtm.

> This is the proposal
> https://review.openstack.org/419455
>
> please comment, especially if I did not understand correctly what the
> difference between "current" and "consistent" is.
>
>
> Thanks.
>



-- 
Emilien Macchi



Re: [openstack-dev] [glance][tempest][api] community images, tempest tests, and API stability

2017-01-13 Thread Ian Cordasco
-Original Message-
From: Ken'ichi Ohmichi 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 12, 2017 at 13:35:56
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [glance][tempest][api] community images,
tempest tests, and API stability

> 2017-01-12 5:47 GMT-08:00 Brian Rosmaita :
> > The issue is that we are proposing to introduce an incompatible change
> > in order to address an incoherence with image visibility semantics. We
> > have acknowledged that this is a breaking change, but the impact of the
> > change is mitigated by the way that the default visibility value is
> > assigned in Ocata.
> >
> > As I've documented earlier in the thread, we have discussed this change
> > with the wider community and the API Working Group, and they are OK with it.
> >
> > The current tempest test has done its duty and identified an
> > incompatibility in the Ocata code. We acknowledge that and salute you.
> > On balance, however, this change will be better for users (and as I've
> > documented earlier in the thread, we've minimized the impact of the
> > change), and we want to make it anyway.
>
> It is difficult for us as developers to predict that the impact of an
> API change will be small.

We're not claiming it is small. We have done our best to minimize the
impact though.

> For example, if there are some apps which list images of both
> private and shared visibility by depending on
> https://bugs.launchpad.net/glance/+bug/1394299 , those apps will be
> broken after fixing it.
> The Nova team faced such a situation: we expected the impact of a
> change to be small, but Horizon was broken, so we reverted the change
> in the same dev cycle.

We really wanted to include this in our last beta release but the
Tempest test we're talking about right now prevented exactly that. We
might have been able to garner more information from users that way
but alas, we likely won't have time for users to adopt the changes,
provide us feedback, *and* give us time to revert if it does end up
being as disruptive as some are claiming it will be. (Honestly, I
think it'll probably end up being somewhere between where Brian and
others think it will be and where the QA team is claiming it will be.)

> Nova now has the API microversions mechanism to avoid impacting
> the existing apps.

Right. Microversions do sometimes make changes like this better. That
said, microversioning would probably cause yet *more* confusion around
this change than a hard break would and would likely further introduce
security issues. A microversioned API here would (probably) map
"shared" and "private" in what we're proposing to "private" in what
already exists. That means someone continuing with the old version
could add members to a private image and there would *need* to be an
implicit conversion on the new side. This means someone could create
an image as "private" in our new version, switch to the old and
*force* it to be shared. That's all kinds of awful.
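
As a toy illustration of that lossy mapping (my own sketch, not Glance
code): both the new 'private' and the new 'shared' values would have to
look 'private' to an old-version client, so the distinction is lost as
soon as someone switches versions.

```python
# How the proposed visibility values would collapse onto the old ones if
# the change were hidden behind a microversion (illustrative only).
NEW_TO_OLD = {
    'public': 'public',
    'community': None,      # no equivalent in the old semantics
    'shared': 'private',
    'private': 'private',
}

def as_old_api(new_visibility):
    return NEW_TO_OLD[new_visibility]

old_view_of_shared = as_old_api('shared')
old_view_of_private = as_old_api('private')
# an old-version client cannot tell these two images apart
```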

Microversions are great. But they're not a panacea. They would not
have helped us had we already had them. I would like Glance to add
them, but there's a vocal minority among our team that's opposed.

Moving past the microversioning conversation (that we've had far too
many times to date), it sounds like the tempest project still sees
itself as a sort of wiser and older sibling to other projects. Other
projects breaking their APIs (intentionally, to improve the user
experience) are then treated as if they're children and talked down
to, in each email on a topic. That's probably not at all the team's
intention, but that is absolutely how it feels on this side of the
conversation. (And as a reminder, intentions don't magically fix
things.) I'm surprised by this behaviour. It's not at all what I
expected from another project in OpenStack. I understand the QA team
feels as if it is defending some idealized user, but the reality is
that as of the last User Survey, *at least* 40% of users are *still*
using Kilo. If and when they upgrade, they'll be encountering far more
disruptive changes than tempest can prevent. Most are internal
implementation details that will require a whole lot more work to fix
for large clouds than fixing some API interactions.

In short, the Glance team is ready, willing, and able to own the
responsibility for any breakage this causes users and any
interoperability problems this will pose. We'd really appreciate
cooperation on this.

And besides "No one uses Glance" [ref:
http://lists.openstack.org/pipermail/openstack-dev/2013-February/005758.html]
;)

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [tripleo][ci] Stop using "current" DLRN repo

2017-01-13 Thread Gabriele Cerami

Hi,

following a suggestion from Alan Pevec, I'm proposing to stop using the
"current" repo from DLRN and start using "consistent" instead.
The main difference should be that "consistent" is not affected by
packages that fail to build from source (FTBFS), so we'd be testing with a bit
more stability.

This is the proposal
https://review.openstack.org/419455

please comment, especially if I did not understand correctly what the
difference between "current" and "consistent" is.
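
For illustration, the switch amounts to repointing a repo file at DLRN's "consistent" symlink instead of "current" (a sketch under assumptions — the baseurl below is illustrative, not the exact file touched by the review):

```shell
# Hypothetical sketch: flip a yum .repo file from DLRN's "current" symlink
# to "consistent". The baseurl layout here is assumed for illustration;
# see https://review.openstack.org/419455 for the real change.
repo=delorean.repo
cat > "$repo" <<'EOF'
[delorean]
baseurl=https://trunk.rdoproject.org/centos7/current/
gpgcheck=0
EOF

# "consistent" points at the newest repo snapshot where every package
# built successfully, so FTBFS packages never land in the tested set.
sed -i 's|/current/|/consistent/|' "$repo"
grep baseurl "$repo"
```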


Thanks.



Re: [openstack-dev] Consistent Versioned Endpoints

2017-01-13 Thread Brian Rosmaita
On 1/13/17 7:42 AM, Steve Martinelli wrote:
> On Fri, Jan 13, 2017 at 7:39 AM, Sean Dague  wrote:
> 
>> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
>>> TL;DR: Let's discuss Version Discovery and Endpoints in the Service
>>> Catalog at the PTG in Atlanta.
>>>
>>> The topic of Versioning and the Endpoints discovered in the Service
>>> Catalog was discussed in today's API Working Group Meeting[1].
>>> A previous ML post[2] claimed:
>>>
>>> In a perfect world, every endpoint would return the same type of
>> resource -
>>> most likely the versions resource as described in the API WG
>> Microversions
>>> spec. It would also be nice if version negotiation can happen without
>>> requiring authentication, the easiest path to which would be supporting
>> the
>>> 'max_version' and 'min_version' fields in the root versions resource.
>>>
>>> One problem is multiple versioned service names in the catalog for a
>>> given service[3], as opposed to a single endpoint that would return
>>> version info[4].
>>>
>>> Can we get to this "perfect world"? Let's discuss at the PTG.
>>> It is my understanding that we do not have the ability to schedule a
>>> time or room for such a cross-project discussion. Please chime in if
>>> interested, and/or make your interest known to scottda, mordred, or
>> edleafe.
>>
>> Happy to join in on this, it does seem weird there is no time / space
>> for such things at PTG.
>>
>> We actually had a rough sketch of a plan last year here which would go a
>> slightly different direction, and get that all up into the service
>> catalog. That also ensures consistent schema enforcement, as it would
>> all be through keystone.
>>
>> Definitely need keystone folks in the room to be able to make forward
>> progress here I think, because part of the tension in the past has been
>> understanding domain boundaries, and it would be a shame to do a ton of
>> work in all the projects that could be easily done in one project, just
>> because of communication gaps.
>>
> 
> If there's a room I'll be there, and try to rope in other folks too.
> 

This is possibly the worst suggestion of all time, but I've told the
Glance team that we'll finish up by 11:30 a.m. on Friday, which will
free up the Glance room for Friday afternoon.  If other teams are taking
a similar strategy, there wouldn't be too much conflict, and maybe we
can hold a few sessions like this, which are important but seem to have
slipped through the cracks, on Friday afternoon?




Re: [openstack-dev] Consistent Versioned Endpoints

2017-01-13 Thread Steve Martinelli
On Fri, Jan 13, 2017 at 7:39 AM, Sean Dague  wrote:

> On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
> > TL;DR: Let's discuss Version Discovery and Endpoints in the Service
> > Catalog at the PTG in Atlanta.
> >
> > The topic of Versioning and the Endpoints discovered in the Service
> > Catalog was discussed in today's API Working Group Meeting[1].
> > A previous ML post[2] claimed:
> >
> > In a perfect world, every endpoint would return the same type of
> resource -
> > most likely the versions resource as described in the API WG
> Microversions
> > spec. It would also be nice if version negotiation can happen without
> > requiring authentication, the easiest path to which would be supporting
> the
> > 'max_version' and 'min_version' fields in the root versions resource.
> >
> > One problem is multiple versioned service names in the catalog for a
> > given service[3], as opposed to a single endpoint that would return
> > version info[4].
> >
> > Can we get to this "perfect world"? Let's discuss at the PTG.
> > It is my understanding that we do not have the ability to schedule a
> > time or room for such a cross-project discussion. Please chime in if
> > interested, and/or make your interest known to scottda, mordred, or
> edleafe.
>
> Happy to join in on this, it does seem weird there is no time / space
> for such things at PTG.
>
> We actually had a rough sketch of a plan last year here which would go a
> slightly different direction, and get that all up into the service
> catalog. That also ensures consistent schema enforcement, as it would
> all be through keystone.
>
> Definitely need keystone folks in the room to be able to make forward
> progress here I think, because part of the tension in the past has been
> understanding domain boundaries, and it would be a shame to do a ton of
> work in all the projects that could be easily done in one project, just
> because of communication gaps.
>

If there's a room I'll be there, and try to rope in other folks too.


Re: [openstack-dev] Consistent Versioned Endpoints

2017-01-13 Thread Sean Dague
On 01/12/2017 01:35 PM, Scott D'Angelo wrote:
> TL;DR: Let's discuss Version Discovery and Endpoints in the Service
> Catalog at the PTG in Atlanta.
> 
> The topic of Versioning and the Endpoints discovered in the Service
> Catalog was discussed in today's API Working Group Meeting[1].
> A previous ML post[2] claimed:
> 
> In a perfect world, every endpoint would return the same type of resource -
> most likely the versions resource as described in the API WG Microversions
> spec. It would also be nice if version negotiation can happen without
> requiring authentication, the easiest path to which would be supporting the
> 'max_version' and 'min_version' fields in the root versions resource.
> 
> One problem is multiple versioned service names in the catalog for a
> given service[3], as opposed to a single endpoint that would return
> version info[4].
> 
> Can we get to this "perfect world"? Let's discuss at the PTG.
> It is my understanding that we do not have the ability to schedule a
> time or room for such a cross-project discussion. Please chime in if
> interested, and/or make your interest known to scottda, mordred, or edleafe.
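
For illustration, the unauthenticated negotiation in the quoted "perfect world" could be sketched as follows (hypothetical — the document shape mimics a Nova-style versions resource with `min_version`/`max_version`, not any project's actual payload):

```python
# Hypothetical sketch of version negotiation against a root versions
# resource. The payload shape is assumed for illustration only.

def negotiate(versions_doc, client_min, client_max):
    """Pick the highest microversion supported by both sides, or None."""
    def parse(v):
        major, minor = v.split('.')
        return (int(major), int(minor))

    best = None
    for ver in versions_doc['versions']:
        lo = parse(ver['min_version'])
        hi = parse(ver['max_version'])
        top = min(hi, parse(client_max))        # highest both sides allow
        bottom = max(lo, parse(client_min))     # lowest both sides allow
        if top >= bottom and (best is None or top > best):
            best = top
    return '%d.%d' % best if best else None

# A sample root versions document, as an unauthenticated GET / might return:
doc = {'versions': [
    {'id': 'v2.1', 'min_version': '2.1', 'max_version': '2.42'},
]}
print(negotiate(doc, '2.10', '2.60'))  # -> 2.42
print(negotiate(doc, '3.0', '3.5'))    # -> None (no overlap)
```

If every service returned this shape at its catalog endpoint, a client could negotiate before ever authenticating.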

Happy to join in on this, it does seem weird there is no time / space
for such things at PTG.

We actually had a rough sketch of a plan last year here which would go a
slightly different direction, and get that all up into the service
catalog. That also ensures consistent schema enforcement, as it would
all be through keystone.

Definitely need keystone folks in the room to be able to make forward
progress here I think, because part of the tension in the past has been
understanding domain boundaries, and it would be a shame to do a ton of
work in all the projects that could be easily done in one project, just
because of communication gaps.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova][nova-docker] Time to retire nova-docker?

2017-01-13 Thread Álvaro López García
On 22 Dec 2016 (21:30), Matt Riedemann wrote:
>
> (...)
> 
> I know people are running it and hacking on it outside of the community
> repo, which is fine, and if someone doing that wanted to stand up and say
> they wanted to own the repo and be the core team I'd be fine with that too,
> but so far no one has done that in the last few years. If you're already
> maintaining it outside of the community I don't know why you wouldn't just
> do that development in the open, and maybe get a free bug fix at times from
> another contributor, but I suppose people have their reasons (secret sauce
> and all that). So meh.

We are using nova-docker and we have contributed back several changes
(some improvements and some bugfixes) to the community. We cannot afford
the burden of being the core team for nova-docker, but we would
be happy to see nova-docker maintained at least until Zun is out of beta.

Cheers,
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n   skype: aloga.csic
39005 Santander (SPAIN)



[openstack-dev] [Horizon] Weekly wrap-up

2017-01-13 Thread Richard Jones
Hi folks,

Welcome back from the break - I hope you had a good one!

We kicked off 2017 with our first weekly meeting[1] that covered a few
areas, notably the impending Ocata feature freeze next week.

We also talked a little about the Pike PTG which is just over a month
away, and I've started a planning etherpad[2] for the sessions we'll
have at the PTG. If you're coming, or would like to have something
discussed at the PTG, please contribute to the etherpad.

I'm at a conference next week so most likely won't make the team
meeting, but it should run anyway. If you have anything to discuss at
the weekly meeting please add it to the agenda[3].

In the meantime, let's get those priority reviews[4] going so we can
get as many as possible landed for Ocata!


 Richard

[1] 
http://eavesdrop.openstack.org/meetings/horizon/2017/horizon.2017-01-11-20.00.html
[2] https://etherpad.openstack.org/p/horizon-ptg-pike
[3] https://wiki.openstack.org/wiki/Meetings/Horizon
[4] https://review.openstack.org/#/q/starredby:r1chardj0n3s%20AND%20status:open



Re: [openstack-dev] [containers][magnum]

2017-01-13 Thread BİLGEM BTE

Okay, I did these and it worked.

I have been waiting for 50 minutes but the master node is still in
CREATE_IN_PROGRESS, and the /var/log/magnum folder is empty; I can't see any logs.

Yasemin 

- Original Message -

From: "Yatin Karel"
To: "OpenStack Development Mailing List (not for usage questions)"

Sent: Friday, 13 January 2017 12:49:25
Subject: Re: [openstack-dev] [containers]

Hi Yasemin, 

You can try the following to check the logs:

1) Stop the magnum-api and magnum-conductor processes using service commands,
or use kill -9.

2) Try running the magnum-api and magnum-conductor processes on the console.
To do this, from a root bash shell just run:
# magnum-api
# magnum-conductor
From there you can see the ERROR, if any.

Thanks and Regards 
Yatin Karel 


From: Yasemin DEMİRAL (BİLGEM BTE) [yasemin.demi...@tubitak.gov.tr] 
Sent: Friday, January 13, 2017 1:43 PM 
To: OpenStack Development Mailing List (not for usage questions) 
Subject: [openstack-dev] [nova][magnum][containers] 

Hi 

I use openstack-newton on Ubuntu 16.04. I installed magnum from source, but
its logging is not working, so I can't see errors.
When I create a template it gives "InternalServerError: 'NoneType' object has
no attribute 'find'"

Could you help me?

Thanks 

Yasemin 




Re: [openstack-dev] [containers]

2017-01-13 Thread Yatin Karel
Hi Yasemin,

You can try the following to check the logs:

1) Stop the magnum-api and magnum-conductor processes using service commands,
or use kill -9.

2) Try running the magnum-api and magnum-conductor processes on the console.
To do this, from a root bash shell just run:
# magnum-api
# magnum-conductor
From there you can see the ERROR, if any.
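
A concrete sketch of step 2 (hypothetical — the service names assume a systemd-based install, and may differ on yours). Running the binaries in the foreground prints tracebacks straight to the console, and `tee` keeps a copy on disk:

```shell
# Hypothetical sketch. On a systemd install the stop/run cycle would be:
#
#   sudo systemctl stop magnum-api magnum-conductor
#   magnum-api 2>&1 | tee magnum-api.console.log
#
# The tee capture pattern itself, demonstrated with a stand-in command
# that mimics the error in this thread:
echo "ERROR: 'NoneType' object has no attribute 'find'" 2>&1 | tee console.log
grep -c ERROR console.log
```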

Thanks and Regards
Yatin Karel


From: Yasemin DEMİRAL (BİLGEM BTE) [yasemin.demi...@tubitak.gov.tr]
Sent: Friday, January 13, 2017 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][magnum][containers]

Hi

I use openstack-newton on Ubuntu 16.04. I installed magnum from source, but
its logging is not working, so I can't see errors.
When I create a template it gives "InternalServerError: 'NoneType' object has
no attribute 'find'"

Could you help me?

Thanks

Yasemin


Re: [openstack-dev] [machine learning] Question: Why there is no serious project for machine learning ?

2017-01-13 Thread Yujun Zhang
Not sure what you mean by serious.

Maybe you could have a look at Meteos[1]. It is a young project, but it
definitely focuses on machine learning.

[1]: https://wiki.openstack.org/wiki/Meteos

On Fri, Jan 13, 2017 at 3:57 PM 严超  wrote:

> Hi, all,
> I'm wondering if there is a serious project for machine learning in
> OpenStack, one we could use to easily build models at an industrial,
> operational level.
>
> Thanks,
> Andy Yan
>


[openstack-dev] [nova][magnum][containers]

2017-01-13 Thread BİLGEM BTE
Hi 

I use openstack-newton on Ubuntu 16.04. I installed magnum from source, but
its logging is not working, so I can't see errors.
When I create a template it gives "InternalServerError: 'NoneType' object has
no attribute 'find'"

Could you help me?

Thanks 

Yasemin 


Re: [openstack-dev] [machine learning] Question: Why there is no serious project for machine learning ?

2017-01-13 Thread joehuang
What do you mean by "serious project"?

Best Regards
Chaoyi Huang (joehuang)

From: 严超 [yanchao...@gmail.com]
Sent: 13 January 2017 15:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [machine learning] Question: Why there is no serious 
project for machine learning ?

Hi, all,
I'm wondering if there is a serious project for machine learning in OpenStack,
one we could use to easily build models at an industrial, operational level.

Thanks,
Andy Yan
