Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-10-01 Thread Ghe Rivero
If anyone disagrees with the commit format, please go ahead and fix it (it's
really easy using the Gerrit web UI). For such cosmetic changes (and other
similar ones), we should not wait for the author to do it. Sometimes, for a
stupid comma, and with all the time zones involved, a change can need more than
a day to be fixed and approved.
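As an aside, the 72-column wrapping being debated in this thread is trivial to apply mechanically. A minimal Python sketch (purely illustrative, not part of any OpenStack tooling):

```python
import textwrap

def wrap_commit_body(body: str, width: int = 72) -> str:
    """Wrap each paragraph of a commit message body at `width` columns."""
    paragraphs = body.split("\n\n")
    return "\n\n".join(textwrap.fill(p, width=width) for p in paragraphs)

# Any overlong paragraph comes back as lines of at most 72 characters.
print(wrap_commit_body("A long single-line change description " * 4))
```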

Ghe Rivero

Quoting Ihar Hrachyshka (2015-09-29 18:05:37)
> > On 25 Sep 2015, at 16:44, Ihar Hrachyshka  wrote:
> >
> > Hi all,
> >
> > releases are approaching, so it’s the right time to start some bike 
> > shedding on the mailing list.
> >
> > Recently it was pointed out to me several times [1][2] that I violate our 
> > commit message guideline [3], which says of the message lines: "Subsequent 
> > lines should be wrapped at 72 characters.”
> >
> > I agree that very long commit message lines can be bad, e.g. if they are 
> > 200+ chars. But <= 79 chars?.. Don’t think so. Especially since we have a 
> > 79-char limit for the code.
> >
> > We had a check for the line lengths in openstack-dev/hacking before but it 
> > was killed [4] as per openstack-dev@ discussion [5].
> >
> > I believe commit message lines of <=80 chars are absolutely fine and should 
> > not get -1 treatment. I propose to raise the limit in the wiki guideline 
> > accordingly.
> >
> > Comments?
> >
> > [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> > [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> > [3]: 
> > https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> > [4]: https://review.openstack.org/#/c/142585/
> > [5]: 
> > http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> >
> > Ihar
>
> Thanks everyone for replies.
>
> Now I realize WHY we do it with 72 chars and not 80 chars (git log output). 
> :) I updated the wiki page with how to configure Vim to enforce the rule. I 
> also removed the mention of gating on commit messages because that gate was 
> removed recently.
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread Deepak Shetty
On Sat, Sep 26, 2015 at 4:32 PM, John Spray  wrote:

> On Sat, Sep 26, 2015 at 1:27 AM, Ben Swartzlander 
> wrote:
> > On 09/24/2015 09:49 AM, John Spray wrote:
> >>
> >> Hi all,
> >>
> >> I've recently started work on a CephFS driver for Manila.  The (early)
> >> code is here:
> >> https://github.com/openstack/manila/compare/master...jcsp:ceph
> >
> >
> > Awesome! This is something that's been talked about for quite some time
> > and I'm pleased to see progress on making it a reality.
> >
> >> It requires a special branch of ceph which is here:
> >> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
> >>
> >> This isn't done yet (hence this email rather than a gerrit review),
> >> but I wanted to give everyone a heads up that this work is going on,
> >> and a brief status update.
> >>
> >> This is the 'native' driver in the sense that clients use the CephFS
> >> client to access the share, rather than re-exporting it over NFS.  The
> >> idea is that this driver will be useful for anyone who has such
> >> clients, as well as acting as the basis for a later NFS-enabled
> >> driver.
> >
> >
> > This makes sense, but have you given thought to the optimal way to provide
> > NFS semantics for those who prefer that? Obviously you can pair the existing
> > Manila Generic driver with Cinder running on ceph, but I wonder how that
> > would compare to some kind of ganesha bridge that translates between NFS and
> > cephfs. Is that something you've looked into?
>
> The Ceph FSAL in ganesha already exists, some work is going on at the
> moment to get it more regularly built and tested.  There's some
> separate design work to be done to decide exactly how that part of
> things is going to work, including discussing with all the right
> people, but I didn't want to let that hold up getting the initial
> native driver out there.
>
> >> The export location returned by the driver gives the client the Ceph
> >> mon IP addresses, the share path, and an authentication token.  This
> >> authentication token is what permits the clients access (Ceph does not
> >> do access control based on IP addresses).
> >>
> >> It's just capable of the minimal functionality of creating and
> >> deleting shares so far, but I will shortly be looking into hooking up
> >> snapshots/consistency groups, albeit for read-only snapshots only
> >> (cephfs does not have writeable snapshots).  Currently deletion is
> >> just a move into a 'trash' directory, the idea is to add something
> >> later that cleans this up in the background: the downside to the
> >> "shares are just directories" approach is that clearing them up has a
> >> "rm -rf" cost!
> >
> >
> > All snapshots are read-only... The question is whether you can take a
> > snapshot and clone it into something that's writable. We're looking at
> > allowing for different kinds of snapshot semantics in Manila for Mitaka.
> > Even if there's no create-share-from-snapshot functionality a readable
> > snapshot is still useful and something we'd like to enable.
>
> Enabling creation of snapshots is pretty trivial; the slightly more
> interesting part will be accessing them.  CephFS doesn't provide a
> rollback mechanism, so
>
> > The deletion issue sounds like a common one, although if you don't have the
> > thing that cleans them up in the background yet I hope someone is working on
> > that.
>
> Yeah, that would be me -- the most important sentence in my original
> email was probably "this isn't done yet" :-)
>
> >> A note on the implementation: cephfs recently got the ability (not yet
> >> in master) to restrict client metadata access based on path, so this
> >> driver is simply creating shares by creating directories within a
> >> cluster-wide filesystem, and issuing credentials to clients that
> >> restrict them to their own directory.  They then mount that subpath,
> >> so that from the client's point of view it's like having their own
> >> filesystem.  We also have a quota mechanism that I'll hook in later to
> >> enforce the share size.
> >
> >
> > So quotas aren't enforced yet? That seems like a serious issue for any
> > operator except those that want to support "infinite" size shares. I hope
> > that gets fixed soon as well.
>
> Same again, just not done yet.  Well, actually since I wrote the
> original email I added quota support to my branch, so never mind!
>
> >> Currently the security here requires clients (i.e. the ceph-fuse code
> >> on client hosts, not the userspace applications) to be trusted, as
> >> quotas are enforced on the client side.  The OSD access control
> >> operates on a per-pool basis, and creating a separate pool for each
> >> share is inefficient.  In the future it is expected that CephFS will
> >> be extended to support file layouts that use RADOS namespaces, which
> >> are cheap, such that we can issue a new namespace to each share and
> >> enforce the separation between shares on the OSD side.
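For context, the per-share isolation described above would rest on path-restricted cephx capabilities roughly like the following. This is a hedged sketch of a per-share client's caps (names like `share-1` and pool `cephfs_data` are made up, and the path-based MDS restriction was, per the email, not yet in Ceph master at the time):

```
client.share-1
        mon 'allow r'
        mds 'allow rw path=/volumes/share-1'
        osd 'allow rw pool=cephfs_data'
```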
> >
> 

Re: [openstack-dev] [nova] how to address boot from volume failures

2015-10-01 Thread Alex Xu
2015-10-01 5:45 GMT+08:00 Andrew Laski :

> On 09/30/15 at 05:03pm, Sean Dague wrote:
>
>> Today we attempted to branch devstack and grenade for liberty, and are
>> currently blocked because in liberty with openstack client and
>> novaclient, it's not possible to boot a server from volume using just
>> the volume id.
>>
>> That's because of this change in novaclient -
>> https://review.openstack.org/#/c/221525/
>>
>> That was done to resolve the issue that strong schema validation in Nova
>> started rejecting the kinds of calls that novaclient was making for boot
>> from volume, because the bdm 1 and 2 code was sharing common code and
>> got a bit tangled up. So 3 bdm 2 params were being sent on every request.
>>
>> However, https://review.openstack.org/#/c/221525/ removed the ==1 code
>> path. If you pass in just {"vda": "$volume_id"} the code falls through,
>> volume id is lost, and nothing is booted. This is how the devstack
>> exercises and osc recommends booting from volume. I expect other people
>> might be doing that as well.
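For readers unfamiliar with the code in question, a rough, hypothetical sketch of what the removed `==1` path effectively did with `{"vda": "$volume_id"}`. This is illustrative only and not the actual novaclient implementation:

```python
def parse_legacy_bdm(block_device_mapping):
    """Illustrative sketch: turn a legacy {'vda': '<volume-id>'} mapping
    into a v2-style block-device-mapping entry, as the removed ==1 code
    path effectively did. Not the actual novaclient code."""
    bdms = []
    for device, mapping in block_device_mapping.items():
        # Legacy values may carry ':<type>:<size>:<delete_on_termination>'
        # suffixes; only the leading volume id matters for this sketch.
        volume_id = mapping.split(":")[0]
        bdms.append({
            "device_name": device,
            "uuid": volume_id,
            "source_type": "volume",
            "destination_type": "volume",
            "boot_index": 0,
        })
    return bdms

print(parse_legacy_bdm({"vda": "my-volume-id"}))
```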
>>
>> There seem to be a few options going forward:
>>
>> 1) fix the client without a revert
>>
>> This would bring back a ==1 code path, which is basically just setting
>> volume_id, and moving on. This means that until people upgrade their
>> client they lose access to this function on the server.
>>
>> 2) revert the client and loosen schema validation
>>
>> If we revert the client to the old code, we also need to accept the fact
>> that novaclient has been sending 3 extra parameters to this API call
>> since as long as people can remember. We'd need a nova schema relax to
>> let those in and just accept that people are going to pass those.
>>
>> 3) fix osc and novaclient cli to not use this code path. This will also
>> require everyone upgrades both of those to not explode in the common
>> case of specifying boot from volume on the command line.
>>
>> I slightly lean towards #2 on a compatibility front, but it's a chunk of
>> change at this point in the cycle, so I don't think there is a clear win
>> path. It would be good to collect opinions here. The bug tracking this
>> is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435
>>
>
> I have a slight preference for #1.  Nova is not buggy here; novaclient is,
> so I think we should contain the fix there.
>

+1, this is a novaclient bug.


>
> Is using the v2 API an option?  That should also allow the 3 extra
> parameters mentioned in #2.
>
>
>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-10-01 Thread Gary Kotton
+1

On 10/1/15, 10:56 AM, "Ghe Rivero"  wrote:

>[...]


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-10-01 Thread Sofer Athlan-Guyot
Rich Megginson  writes:

> On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 30/09/15 03:43, Rich Megginson wrote:
 On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>> Gilles Dubreuil  writes:
>>
>>> On 15/09/15 06:53, Rich Megginson wrote:
 On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
> Hi,
>
> Gilles Dubreuil  writes:
>
>> A. The 'composite namevar' approach:
>>
>>   keystone_tenant {'projectX::domainY': ... }
>> B. The 'meaningless name' approach:
>>
>>  keystone_tenant {'myproject': name='projectX',
>> domain=>'domainY',
>> ...}
>>
>> Notes:
>> - Actually using both combined should work too with the domain
>> supposedly overriding the name part of the domain.
>> - Please look at [1] this for some background between the two
>> approaches:
>>
>> The question
>> -
>> Decide between the two approaches, the one we would like to
>> retain for
>> puppet-keystone.
>>
>> Why it matters?
>> ---
>> 1. Domain names are mandatory in every user, group or project.
>> Besides
>> the backward compatibility period mentioned earlier, where no domain
>> means using the default one.
>> 2. Long term impact
>> 3. Both approaches are not completely equivalent which different
>> consequences on the future usage.
> I can't see why they couldn't be equivalent, but I may be missing
> something here.
 I think we could support both.  I don't see it as an either/or
 situation.

>> 4. Being consistent
>> 5. Therefore the community to decide
>>
>> Pros/Cons
>> --
>> A.
> I think it's the B: meaningless approach here.
>
>>  Pros
>>- Easier names
> That's subjective; creating unique and meaningful names doesn't look
> easy to me.
 The point is that this allows choice - maybe the user already has some
 naming scheme, or wants to use a more "natural" meaningful name -
 rather
 than being forced into a possibly "awkward" naming scheme with "::"

 keystone_user { 'heat domain admin user':
   name => 'admin',
   domain => 'HeatDomain',
   ...
 }

 keystone_user_role {'heat domain admin user@::HeatDomain':
   roles => ['admin']
   ...
 }
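Supporting both idioms mostly comes down to how a provider splits a composite title. A hypothetical sketch of that parsing, in Python for illustration (puppet-keystone's actual providers are Ruby, and `parse_title` is a made-up helper):

```python
def parse_title(title, default_domain="Default"):
    """Sketch of supporting both idioms: a composite 'name::domain' title,
    or a plain title that falls back to a default domain (which an explicit
    domain parameter would then override). Hypothetical, not puppet-keystone
    code."""
    if "::" in title:
        # Composite namevar: everything after the last '::' is the domain.
        name, domain = title.rsplit("::", 1)
    else:
        # Meaningless-name idiom: domain comes from elsewhere.
        name, domain = title, default_domain
    return name, domain

print(parse_title("projectX::domainY"))
```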

>>  Cons
>>- Titles have no meaning!
 They have meaning to the user, not necessarily to Puppet.

>>- Cases where 2 or more resources could exists
 This seems to be the hardest part - I still cannot figure out how
 to use
 "compound" names with Puppet.

>>- More difficult to debug
 More difficult than it is already? :P

>>- Titles mismatch when listing the resources (self.instances)
>>
>> B.
>>  Pros
>>- Unique titles guaranteed
>>- No ambiguity between resource found and their title
>>  Cons
>>- More complicated titles
>> My vote
>> 
>> I would love to have the approach A for easier name.
>> But I've seen the challenge of maintaining the providers behind the
>> curtains and the confusion it creates with name/titles and when
>> not sure
>> about the domain we're dealing with.
>> Also I believe that supporting self.instances consistently with
>> meaningful name is saner.
>> Therefore I vote B
> +1 for B.
>
> My view is that this should be the advertised way, but the other
> method
> (meaningless) should be there if the user need it.
>
> So as far as I'm concerned the two idioms should co-exist.  This
> would
> mimic what is possible with all puppet resources.  For instance
> you can:
>
>  file { '/tmp/foo.bar': ensure => present }
>
> and you can
>
>  file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
> present }
>
> The two refer to the same resource.
 Right.

>>> I disagree, using the name for the title is not creating a composite
>>> name. The latter requires adding at least another parameter to be part
>>> of the title.
>>>
>>> Also in the case of the file resource, a 

Re: [openstack-dev] [glance] Models and validation for v2

2015-10-01 Thread Kairat Kushaev
Yep, the way we removed the validation is not a good long-term solution (IMO)
because we are still requesting the schema for unvalidated_model and I am not
sure why we need it.
I will create a spec about it soon so we can discuss it in more detail.

Best regards,
Kairat Kushaev

On Thu, Oct 1, 2015 at 2:44 PM,  wrote:

>
> We've been taking validation out as issues have been reported (it was
> removed from image-list recently for example).
>
> Removing across the board probably does make sense.
>
>
>> Agree with you. That's why I am asking about the reasoning. Perhaps we need
>> to figure out how to get rid of this in glanceclient.
>>
>> Best regards,
>> Kairat Kushaev
>>
>> On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes  wrote:
>>
>> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>>>
>>> Hi All,
 In short terms, I am wondering why we are validating responses from
 server when we are doing
 image-show, image-list, member-list, metadef-namespace-show and other
 read-only requests.

 AFAIK, we are building warlock models when receiving responses from
 server (see [0]). Each model requires schema to be fetched from glance
 server. It means that each time we are doing image-show, image-list,
 image-create, member-list and others we are requesting schema from the
 server. AFAIU, we are using models to dynamically validate that object
 is in accordance with schema but is it the case when glance receives
 responses from the server?

 Could somebody please explain me the reasoning of this implementation?
 Am I missed some usage cases when validation is required for server
 responses?

 I also noticed that we already faced some issues with such
 implementation that leads to "mocking" validation([1][2]).
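To make the cost being questioned concrete, here is a toy stand-in for a schema-backed client model that validates on construction. This is hand-rolled for illustration; it is not warlock's actual API:

```python
class SchemaModel:
    """Toy stand-in for a schema-validated client model: every
    instantiation (i.e. every parsed API response) pays a validation
    cost against the fetched schema. Illustration only, not warlock."""

    def __init__(self, schema, data):
        # Validate each known field's type before accepting the data.
        for key, expected_type in schema.items():
            if key in data and not isinstance(data[key], expected_type):
                raise ValueError(
                    "%r should be %s" % (key, expected_type.__name__))
        self.__dict__.update(data)

# A response that matches the schema builds fine; a mismatching one raises.
image_schema = {"id": str, "size": int}
image = SchemaModel(image_schema, {"id": "abc123", "size": 42})
print(image.size)
```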


>>> The validation should not be done for responses, only ever for requests (and
>>> it's unclear that there is value in doing this on the client side at all,
>>> IMHO).
>>>
>>> -jay
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> -- next part --
>> An HTML attachment was scrubbed...
>> URL: <
>> http://lists.openstack.org/pipermail/openstack-dev/attachments/20150930/5b5dba74/attachment-0001.html
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-10-01 Thread Thierry Carrez
Ihar Hrachyshka wrote:
>> Also:
>> http://ttx.re/new-versioning.html
>>
>> And we finally publish the series / versions map at:
>> http://docs.openstack.org/releases/
> 
> Awesome stuff! Why don’t we see major server projects there, like nova or 
> neutron?

Because they are not released for Liberty yet. This site only contains
formal releases (for now). You can find Nova / Neutron listed for past
releases.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-10-01 Thread Jeremy Stanley
On 2015-10-01 14:30:13 +0200 (+0200), Ihar Hrachyshka wrote:
> Awesome stuff! Why don’t we see major server projects there, like
> nova or neutron?

They haven't released yet in Liberty. If you follow the links to one
of the earlier development cycles you'll see their releases listed
for it (under the old versioning scheme).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-01 Thread Kashyap Chamarthy
On Wed, Sep 30, 2015 at 11:25:12AM +, Murray, Paul (HP Cloud) wrote:
> 
> > Please respond to this post if you have an interest in this and what
> > you would like to see done.  Include anything you are already
> > getting on with so we get a clear picture. 
>
> Thank you to those who replied to this thread. I have used the
> contents to start an etherpad page here:
>
> https://etherpad.openstack.org/p/mitaka-live-migration

I added a couple of URLs for upstream libvirt work that allow for
selective block device migration, and the in-progress generic TLS
support work by Dan Berrange in upstream QEMU.

> I have taken the liberty of listing those that responded to the thread
> and the authors of mentioned patches as interested people.
 
> From the responses and looking at the specs up for review it looks
> like there are about five areas that could be addressed in Mitaka and
> several others that could come later. The first five are:
>
> 
> - migrating instances with a mix of local disks and cinder volumes 

IIUC, this is possible with the selective block device migration work
merged in upstream libvirt:

https://www.redhat.com/archives/libvir-list/2015-May/msg00955.html

> - pause instance during migration
> - cancel migration 
> - migrate suspended instances 
> - improve CI coverage
> 
> Not all of these are covered by specs yet and all the existing specs
> need reviews. Please look at the etherpad and see if there is anything
> you think is missing.
> 

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 8:36 AM, Deepak Shetty  wrote:
>
>
> On Thu, Sep 24, 2015 at 7:19 PM, John Spray  wrote:
>>
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>>
>
> 1) README says driver_handles_share_servers=True, but code says
>
> + if share_server is not None:
> + log.warning("You specified a share server, but this driver doesn't use
> that")

The warning is just for my benefit, so that I could see which bits of
the API were pushing a share server in.  This driver doesn't care
about the concept of a share server, so I'm really just ignoring it
for the moment.

> 2) Would it be good to make the data_isolated option controllable via a
> manila.conf config param?

That's the intention.

> 3) CephFSVolumeClient - it sounds more like CephFSShareClient. Any reason you
> chose the word 'Volume' instead of 'Share'? Volumes are reminiscent of RBD
> volumes, hence the question.

The terminology here is not standard across the industry, so there's
not really any right term.  For example, in docker, a
container-exposed filesystem is a "volume".  I generally use volume to
refer to a piece of storage that we're carving out, and share to refer
to the act of making that visible to someone else.  If I had been
writing Manila originally I wouldn't have called shares shares :-)

The naming in CephFSVolumeClient will not be the same as Manila's,
because it is not intended to be Manila-only code, though that's the
first use for it.

> 4) IIUC there is no need to do access_allow/deny in the cephfs use case? It
> looks like after create_share, you put the cephx keyring on the client and it
> can access the share, as long as the client has network access to the ceph
> cluster. The doc says you don't use the IP-address-based access method, so
> which method is used in the access_allow flow?

Currently, as you say, a share is accessible to anyone who knows the
auth key (created at the time the share is created).

For adding the allow/deny path, I'd simply create and remove new ceph
keys for each entity being allowed/denied.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 12:58 AM, Shinobu Kinjo  wrote:
> Is there any plan to merge those branches to master?
> Or is there anything needs to be done more?

As I said in the original email, this is unfinished code, and my
message was just to let people know this was underway so that the
patch didn't come as a complete surprise.

John

>
> Shinobu
>
> - Original Message -
> From: "Ben Swartzlander" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Saturday, September 26, 2015 9:27:58 AM
> Subject: Re: [openstack-dev] [Manila] CephFS native driver
>
> On 09/24/2015 09:49 AM, John Spray wrote:
>> Hi all,
>>
>> I've recently started work on a CephFS driver for Manila.  The (early)
>> code is here:
>> https://github.com/openstack/manila/compare/master...jcsp:ceph
>
> Awesome! This is something that's been talked about for quite some time
> and I'm pleased to see progress on making it a reality.
>
>> It requires a special branch of ceph which is here:
>> https://github.com/ceph/ceph/compare/master...jcsp:wip-manila
>>
>> This isn't done yet (hence this email rather than a gerrit review),
>> but I wanted to give everyone a heads up that this work is going on,
>> and a brief status update.
>>
>> This is the 'native' driver in the sense that clients use the CephFS
>> client to access the share, rather than re-exporting it over NFS.  The
>> idea is that this driver will be useful for anyone who has such
>> clients, as well as acting as the basis for a later NFS-enabled
>> driver.
>
> This makes sense, but have you given thought to the optimal way to
> provide NFS semantics for those who prefer that? Obviously you can pair
> the existing Manila Generic driver with Cinder running on ceph, but I
> wonder how that would compare to some kind of ganesha bridge that
> translates between NFS and cephfs. Is that something you've looked into?
>
>> The export location returned by the driver gives the client the Ceph
>> mon IP addresses, the share path, and an authentication token.  This
>> authentication token is what permits the clients access (Ceph does not
>> do access control based on IP addresses).
>>
>> It's just capable of the minimal functionality of creating and
>> deleting shares so far, but I will shortly be looking into hooking up
>> snapshots/consistency groups, albeit for read-only snapshots only
>> (cephfs does not have writeable snapshots).  Currently deletion is
>> just a move into a 'trash' directory, the idea is to add something
>> later that cleans this up in the background: the downside to the
>> "shares are just directories" approach is that clearing them up has a
>> "rm -rf" cost!
>
> All snapshots are read-only... The question is whether you can take a
> snapshot and clone it into something that's writable. We're looking at
> allowing for different kinds of snapshot semantics in Manila for Mitaka.
> Even if there's no create-share-from-snapshot functionality a readable
> snapshot is still useful and something we'd like to enable.
>
> The deletion issue sounds like a common one, although if you don't have
> the thing that cleans them up in the background yet I hope someone is
> working on that.
>
>> A note on the implementation: cephfs recently got the ability (not yet
>> in master) to restrict client metadata access based on path, so this
>> driver is simply creating shares by creating directories within a
>> cluster-wide filesystem, and issuing credentials to clients that
>> restrict them to their own directory.  They then mount that subpath,
>> so that from the client's point of view it's like having their own
>> filesystem.  We also have a quota mechanism that I'll hook in later to
>> enforce the share size.
>
> So quotas aren't enforced yet? That seems like a serious issue for any
> operator except those that want to support "infinite" size shares. I
> hope that gets fixed soon as well.
>
>> Currently the security here requires clients (i.e. the ceph-fuse code
>> on client hosts, not the userspace applications) to be trusted, as
>> quotas are enforced on the client side.  The OSD access control
>> operates on a per-pool basis, and creating a separate pool for each
>> share is inefficient.  In the future it is expected that CephFS will
>> be extended to support file layouts that use RADOS namespaces, which
>> are cheap, such that we can issue a new namespace to each share and
>> enforce the separation between shares on the OSD side.
>
> I think it will be important to document all of these limitations. I
> wouldn't let them stop you from getting the driver done, but if I was a
> deployer I'd want to know about these details.
>
>> However, for many people the ultimate access control solution will be
>> to use a NFS gateway in front of their CephFS filesystem: it is
>> expected that an NFS-enabled cephfs driver will follow this native
>> driver in the not-too-distant future.
>
> Okay 

Re: [openstack-dev] [nova] how to address boot from volume failures

2015-10-01 Thread Sean Dague
On 09/30/2015 10:41 PM, melanie witt wrote:
> On Sep 30, 2015, at 14:45, Andrew Laski wrote:
> 
>> I have a slight preference for #1.  Nova is not buggy here novaclient
>> is so I think we should contain the fix there.
>>
>> Is using the v2 API an option?  That should also allow the 3 extra
>> parameters mentioned in #2.

It could be, but it would invalidate the claim that operators can just
point people at v2.1 and it won't be noticed.

> 
> +1. I have put up https://review.openstack.org/229669 in -W mode in case
> we decide to go that route. 
> 
> -melanie 

As the consensus is for item #1, I'm fine with that. +2 on
https://review.openstack.org/229669.

Let's get that landed and cut a new release today.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-10-01 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2015-10-01 10:54:20 +1000 (+1000), Richard Jones wrote:
> [...]
>> I believe that if we are moving to semver, then 12.0.0 is
>> appropriate.
> 
> Each project participating in the release is following semver
> independently, with the first digit indicating the number of
> OpenStack integrated releases in which that project has
> participated. This means the version numbers will vary between
> projects. For Horizon it's 8.0.0 but for, say, Nova it's 12.0.0 and
> Sahara's 3.0.0. This was chosen specifically to avoid future
> assumptions that the version numbers will remain in sync as they
> will naturally diverge in coming cycles anyway.

Also Horizon Liberty RC1 was tagged 8.0.0.0rc1 (and liberty-2 8.0.0.0b2,
and liberty-3 8.0.0.0b3), so it shouldn't come as a complete surprise.

> For further details, see the thread starting around
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/067006.html
> which lists all of them (noting there have since been some
> corrections to that initial plan).

Also:
http://ttx.re/new-versioning.html

And we finally publish the series / versions map at:
http://docs.openstack.org/releases/
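As an aside, the pre-release tags mentioned in this thread (8.0.0.0b2,
8.0.0.0b3, 8.0.0.0rc1) order the way you would hope. A rough sketch of that
ordering (illustrative Python only; `release_key` is a made-up helper, not a
general PEP 440 parser):

```python
import re

def release_key(tag):
    """Crude ordering key for tags like '8.0.0.0b2', '8.0.0.0rc1', '8.0.0'.

    Just enough to show why the beta and release-candidate tags sort
    before the final 8.0.0 release; not a general version parser.
    """
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)(?:\.0(b|rc)(\d+))?$", tag)
    major, minor, patch = int(m.group(1)), int(m.group(2)), int(m.group(3))
    phase = {"b": 0, "rc": 1, None: 2}[m.group(4)]  # beta < rc < final
    return (major, minor, patch, phase, int(m.group(5) or 0))

tags = ["8.0.0", "8.0.0.0b2", "8.0.0.0rc1", "8.0.0.0b3"]
print(sorted(tags, key=release_key))
# ['8.0.0.0b2', '8.0.0.0b3', '8.0.0.0rc1', '8.0.0']
```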

Hope this helps,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-10-01 Thread Andrew Laski

On 10/01/15 at 06:17am, Sean Dague wrote:

> On 09/30/2015 10:41 PM, melanie witt wrote:
>> On Sep 30, 2015, at 14:45, Andrew Laski wrote:
>>>
>>> I have a slight preference for #1.  Nova is not buggy here, novaclient
>>> is, so I think we should contain the fix there.
>>>
>>> Is using the v2 API an option?  That should also allow the 3 extra
>>> parameters mentioned in #2.
>
> It could be, but it would invalidate the claim that operators can just
> point people at v2.1 and it won't be noticed.

I thought we were always clear that it shouldn't be noticed, unless the
client in use was doing something it shouldn't be.  The goal of strict
validation was to make that known to users.  Bugs in Tempest were
discovered this way, and bugs in novaclient, and the previous response
has always been to fix client behavior.

>> +1. I have put up https://review.openstack.org/229669 in -W mode in case
>> we decide to go that route.
>>
>> -melanie
>
> As the consensus is for item #1, I'm fine with that. +2 on
> https://review.openstack.org/229669.
>
> Let's get that landed and cut a new release today.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-10-01 Thread Ken Giusti
+1

On Mon, Sep 28, 2015 at 4:37 AM Victor Stinner  wrote:

> +1 for Brant
>
> Victor
>
> Le 24/09/2015 19:12, Doug Hellmann a écrit :
> > Oslo team,
> >
> > I am nominating Brant Knudson for Oslo core.
> >
> > As liaison from the Keystone team Brant has participated in meetings,
> > summit sessions, and other discussions at a level higher than some
> > of our own core team members.  He is already core on oslo.policy
> > and oslo.cache, and given his track record I am confident that he would
> > make a good addition to the team.
> >
> > Please indicate your opinion by responding with +1/-1 as usual.
> >
> > Doug


Re: [openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-10-01 Thread Evgeniy L
Hi,

Just a small note: we shouldn't remove it completely from the Nailgun
codebase, because we still have old environments to support; we should
remove it from the newest releases only.

Thanks,

On Thu, Oct 1, 2015 at 1:10 AM, Mike Scherbakov wrote:

> Hi team,
> where do we stand with it now? I remember there was a plan to remove
> nova-network support in 7.0, but we've delayed it due to vcenter/dvr or
> something which was not ready for it.
>
> Can we delete it now? The earlier in the cycle we do it, the easier it
> will be.
>
> Thanks!
> --
> Mike Scherbakov
> #mihgen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] CephFS native driver

2015-10-01 Thread John Spray
On Thu, Oct 1, 2015 at 8:26 AM, Deepak Shetty  wrote:
>> > I think it will be important to document all of these limitations. I
>> > wouldn't let them stop you from getting the driver done, but if I was a
>> > deployer I'd want to know about these details.
>>
>> Yes, definitely.  I'm also adding an optional flag when creating
>> volumes to give them their own RADOS pool for data, which would make
>> the level of isolation much stronger, at the cost of using more
>> resources per volume.  Creating separate pools has a substantial
>> overhead, but in sites with a relatively small number of shared
>> filesystems it could be desirable.  We may also want to look into
>> making this a layered thing with a pool per tenant, and then
>> less-isolated shares within that pool.  (pool in this paragraph means
>> the ceph concept, not the manila concept).
>>
>> At some stage I would like to add the ability to have physically
>> separate filesystems within ceph (i.e. filesystems don't share the
>> same MDSs), which would add a second optional level of isolation for
>> metadata as well as data.
>>
>> Overall though, there's going to be sort of a race here between the
>> native ceph multitenancy capability, and the use of NFS to provide
>> similar levels of isolation.
>
>
> Thanks for the explanation, this helps understand things nicely, tho' I have
> a small doubt. When you say separate filesystems within ceph cluster, you
> meant the same as mapping them to different RADOS namespaces, and each
> namespace will have its own MDS, thus providing addnl isolation on top of
> having 1 pool per tenant ?

Physically separate filesystems would be using separate MDSs, and
separate RADOS pools.  For ultra isolation, the RADOS pools would also
be configured to map to different OSDs.

Separate RADOS namespaces do not provide physical separation (multiple
namespaces exist within one pool, hence on the same OSDs), but they
would provide server-side security for preventing clients seeing into
one another's data pools.  The terminology is confusing because RADOS
namespace is a distinct ceph specific concept from filesystem
namespaces.

CephFS doesn't currently have either the "separate MDSs" isolation, or
the support for using RADOS namespaces in layouts.  They're both
pretty well understood and not massively complex to implement though,
so it's pretty much just a matter of time.

This is all very ceph-implementation-specific stuff, so apologies if
it's not crystal clear at this stage.
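To make the pool-vs-namespace trade-off concrete, here is a rough sketch of
the OSD capability strings involved (illustrative Python; the helper and
share names are invented, and while Ceph's OSD caps do support pool and
namespace restrictions of roughly this shape, treat the exact strings as an
assumption rather than driver code):

```python
def osd_cap_for_share(pool, namespace=None):
    """Sketch the OSD capability string a share's client key would get.

    Pool-per-share gives strong isolation at a high per-share cost;
    namespace-per-share keeps many shares in one pool while still
    letting the OSDs enforce separation server-side.
    """
    cap = "allow rw pool={0}".format(pool)
    if namespace is not None:
        cap += " namespace={0}".format(namespace)
    return cap

# Expensive: a dedicated RADOS pool per share.
print(osd_cap_for_share("share_alpha"))
# Cheap: one shared data pool, one RADOS namespace per share.
print(osd_cap_for_share("cephfs_data", namespace="share_alpha"))
```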


>>
>>
>> >> However, for many people the ultimate access control solution will be
>> >> to use a NFS gateway in front of their CephFS filesystem: it is
>> >> expected that an NFS-enabled cephfs driver will follow this native
>> >> driver in the not-too-distant future.
>> >
>> >
>> > Okay this answers part of my above question, but how to you expect the
>> > NFS
>> > gateway to work? Ganesha has been used successfully in the past.
>>
>> Ganesha is the preferred server right now.  There is probably going to
>> need to be some level of experimentation needed to confirm that it's
>> working and performing sufficiently well compared with knfs on top of
>> the cephfs kernel client.  Personally though, I have a strong
>> preference for userspace solutions where they work well enough.
>>
>> The broader question is exactly where in the system the NFS gateways
>> run, and how they get configured -- that's the very next conversation
>> to have after the guts of this driver are done.  We are interested in
>> approaches that bring the CephFS protocol as close to the guests as
>> possible before bridging it to NFS, possibly even running ganesha
>> instances locally on the hypervisors, but I don't think we're ready to
>> draw a clear picture of that just yet, and I suspect we will end up
>> wanting to enable multiple methods, including the lowest common
>> denominator "run a VM with a ceph client and ganesha" case.
>
>
> By the lowest denominator case, you mean the manila concept
> of running the share server inside a service VM or something else ?

Yes, that's exactly what I mean.  To be clear, by "lowest common
denominator" I don't mean least good, I mean most generic.

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-10-01 Thread Sean Dague
Some of us are actively watching the thread / participating. I'll make
sure it gets on the TC agenda in the near future.

I think most of the recommendations are quite good, especially on the
client support front for clients / tools within our community.

On 09/30/2015 10:37 PM, Matt Fischer wrote:
> Thanks for summarizing this Mark. What's the best way to get feedback
> about this to the TC? I'd love to see some of the items which I think
> are common sense for anyone who can't just blow away devstack and start
> over to get added for consideration.
> 
> On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker wrote:
> 
> 
> Mark T. Voelker
> 
> 
> 
> > On Sep 29, 2015, at 12:36 PM, Matt Fischer wrote:
> >
> >
> >
> > I agree with John Griffith. I don't have any empirical evidence to
> > back my "feelings" on that one but it's true that we weren't able to
> > enable Cinder v2 until now.
> >
> > Which makes me wonder: When can we actually deprecate an API
> version? I
> > *feel* we are fast to jump on the deprecation when the replacement
> isn't
> > 100% ready yet for several versions.
> >
> > --
> > Mathieu
> >
> >
> > I don't think it's too much to ask that versions can't be
> deprecated until the new version is 100% working, passing all tests,
> and the clients (at least python-xxxclients) can handle it without
> issues. Ideally I'd like to also throw in the criteria that
> devstack, rally, tempest, and other services are all using and
> exercising the new API.
> >
> > I agree that things feel rushed.
> 
> 
> FWIW, the TC recently created an assert:follows-standard-deprecation
> tag.  Ivan linked to a thread in which Thierry asked for input on
> it, but FYI the final language as it was approved last week [1] is a
> bit different than originally proposed.  It now requires one release
> plus 3 linear months of deprecated-but-still-present-in-the-tree as
> a minimum, and recommends at least two full stable releases for
> significant features (an entire API version would undoubtedly fall
> into that bucket).  It also requires that a migration path will be
> documented.  However to Matt’s point, it doesn’t contain any
> language that says specific things like:
> 
> In the case of major API version deprecation:
> * $oldversion and $newversion must both work with
> [cinder|nova|whatever]client and openstackclient during the
> deprecation period.
> * It must be possible to run $oldversion and $newversion
> concurrently on the servers to ensure end users don’t have to switch
> overnight.
> * Devstack uses $newversion by default.
> * $newversion works in Tempest/Rally/whatever else.
> 
> What it *does* do is require that a thread be started here on
> openstack-operators [2] so that operators can provide feedback.  I
> would hope that feedback like “I can’t get clients to use it so
> please don’t remove it yet” would be taken into account by projects,
> which seems to be exactly what’s happening in this case with Cinder
> v1.  =)
> 
> I’d hazard a guess that the TC would be interested in hearing about
> whether you think that plan is a reasonable one (and given that TC
> election season is upon us, candidates for the TC probably would too).
> 
> [1] https://review.openstack.org/#/c/207467/
> [2]
> 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> 
> At Your Service,
> 
> Mark T. Voelker
> 
> 
> >
> >


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-10-01 Thread Sean Dague
This is now queued up for discussion this week -
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda

On 10/01/2015 06:22 AM, Sean Dague wrote:
> Some of us are actively watching the thread / participating. I'll make
> sure it gets on the TC agenda in the near future.
> 
> I think most of the recommendations are quite good, especially on the
> client support front for clients / tools within our community.
> 
> On 09/30/2015 10:37 PM, Matt Fischer wrote:
>> Thanks for summarizing this Mark. What's the best way to get feedback
>> about this to the TC? I'd love to see some of the items which I think
>> are common sense for anyone who can't just blow away devstack and start
>> over to get added for consideration.
>>
>> On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker wrote:
>>
>>
>> Mark T. Voelker
>>
>>
>>
>> > On Sep 29, 2015, at 12:36 PM, Matt Fischer wrote:
>> >
>> >
>> >
>> > I agree with John Griffith. I don't have any empirical evidence to
>> > back my "feelings" on that one but it's true that we weren't able to
>> > enable Cinder v2 until now.
>> >
>> > Which makes me wonder: When can we actually deprecate an API
>> version? I
>> > *feel* we are fast to jump on the deprecation when the replacement
>> isn't
>> > 100% ready yet for several versions.
>> >
>> > --
>> > Mathieu
>> >
>> >
>> > I don't think it's too much to ask that versions can't be
>> deprecated until the new version is 100% working, passing all tests,
>> and the clients (at least python-xxxclients) can handle it without
>> issues. Ideally I'd like to also throw in the criteria that
>> devstack, rally, tempest, and other services are all using and
>> exercising the new API.
>> >
>> > I agree that things feel rushed.
>>
>>
>> FWIW, the TC recently created an assert:follows-standard-deprecation
>> tag.  Ivan linked to a thread in which Thierry asked for input on
>> it, but FYI the final language as it was approved last week [1] is a
>> bit different than originally proposed.  It now requires one release
>> plus 3 linear months of deprecated-but-still-present-in-the-tree as
>> a minimum, and recommends at least two full stable releases for
>> significant features (an entire API version would undoubtedly fall
>> into that bucket).  It also requires that a migration path will be
>> documented.  However to Matt’s point, it doesn’t contain any
>> language that says specific things like:
>>
>> In the case of major API version deprecation:
>> * $oldversion and $newversion must both work with
>> [cinder|nova|whatever]client and openstackclient during the
>> deprecation period.
>> * It must be possible to run $oldversion and $newversion
>> concurrently on the servers to ensure end users don’t have to switch
>> overnight.
>> * Devstack uses $newversion by default.
>> * $newversion works in Tempest/Rally/whatever else.
>>
>> What it *does* do is require that a thread be started here on
>> openstack-operators [2] so that operators can provide feedback.  I
>> would hope that feedback like “I can’t get clients to use it so
>> please don’t remove it yet” would be taken into account by projects,
>> which seems to be exactly what’s happening in this case with Cinder
>> v1.  =)
>>
>> I’d hazard a guess that the TC would be interested in hearing about
>> whether you think that plan is a reasonable one (and given that TC
>> election season is upon us, candidates for the TC probably would too).
>>
>> [1] https://review.openstack.org/#/c/207467/
>> [2]
>> 
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
>>
>> At Your Service,
>>
>> Mark T. Voelker
>>
>>

[openstack-dev] [Rally][Meeting][Agenda]

2015-10-01 Thread Roman Vasilets
Hi, it's a friendly reminder that if you want to discuss some topics at
Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information about
the topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Liberty release - what is the correct version - 2015.2.0, 8.0.0 or 12.0.0?

2015-10-01 Thread Ihar Hrachyshka

> On 01 Oct 2015, at 14:21, Thierry Carrez  wrote:
> 
> Jeremy Stanley wrote:
>> On 2015-10-01 10:54:20 +1000 (+1000), Richard Jones wrote:
>> [...]
>>> I believe that if we are moving to semver, then 12.0.0 is
>>> appropriate.
>> 
>> Each project participating in the release is following semver
>> independently, with the first digit indicating the number of
>> OpenStack integrated releases in which that project has
>> participated. This means the version numbers will vary between
>> projects. For Horizon it's 8.0.0 but for, say, Nova it's 12.0.0 and
>> Sahara's 3.0.0. This was chosen specifically to avoid future
>> assumptions that the version numbers will remain in sync as they
>> will naturally diverge in coming cycles anyway.
> 
> Also Horizon Liberty RC1 was tagged 8.0.0.0rc1 (and liberty-2 8.0.0.0b2,
> and liberty-3 8.0.0.0b3), so it shouldn't come as a complete surprise.
> 
>> For further details, see the thread starting around
>> http://lists.openstack.org/pipermail/openstack-dev/2015-June/067006.html
>> which lists all of them (noting there have since been some
>> corrections to that initial plan).
> 
> Also:
> http://ttx.re/new-versioning.html
> 
> And we finally publish the series / versions map at:
> http://docs.openstack.org/releases/

Awesome stuff! Why don’t we see major server projects there, like nova or 
neutron?

Ihar




Re: [openstack-dev] [glance] Models and validation for v2

2015-10-01 Thread stuart . mclaren


We've been taking validation out as issues have been reported (it was
removed from image-list recently for example).

Removing across the board probably does make sense.



Agree with you. That's why I am asking about the reasoning. Perhaps we need
to figure out how to get rid of this in glanceclient.

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes  wrote:


On 09/30/2015 09:31 AM, Kairat Kushaev wrote:


Hi All,
In short terms, I am wondering why we are validating responses from the
server when we are doing image-show, image-list, member-list,
metadef-namespace-show and other read-only requests.

AFAIK, we are building warlock models when receiving responses from
server (see [0]). Each model requires schema to be fetched from glance
server. It means that each time we are doing image-show, image-list,
image-create, member-list and others we are requesting schema from the
server. AFAIU, we are using models to dynamically validate that object
is in accordance with schema but is it the case when glance receives
responses from the server?

Could somebody please explain the reasoning behind this implementation?
Am I missing some use cases where validation is required for server
responses?

I also noticed that we already faced some issues with such an
implementation that led to "mocking" validation ([1][2]).



The validation should not be done for responses, only ever requests (and
it's unclear that there is value in doing this on the client side at all,
IMHO).

-jay
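The overhead being questioned can be shown with a toy model (pure-Python
sketch; `FakeSchemaClient` and its methods are invented for illustration and
are not warlock or glanceclient code):

```python
class FakeSchemaClient:
    """Mimics a client that validates every server response against a
    schema fetched from the server -- the pattern questioned above."""

    def __init__(self):
        self.schema_fetches = 0

    def _get_schema(self):
        # In a real client this would be an HTTP round-trip per model build.
        self.schema_fetches += 1
        return {"required": ["id", "name"]}

    def image_show(self, response):
        # Validating the *server's* response costs a schema fetch even
        # though the server already guarantees its own output format.
        schema = self._get_schema()
        missing = [k for k in schema["required"] if k not in response]
        if missing:
            raise ValueError("missing keys: {0}".format(missing))
        return response

client = FakeSchemaClient()
for _ in range(3):
    client.image_show({"id": "42", "name": "cirros"})
print(client.schema_fetches)  # 3 schema fetches for 3 read-only calls
```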

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-10-01 Thread Sergii Golovatiuk
Hi,

Can we get the latest supported release version? We will remove it from
nailgun after the End of Life of that particular release...


--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-10-01 Thread Flavio Percoco

On 30/09/15 12:46 +, Kuvaja, Erno wrote:

Hi all,



I’d like to propose following changes to glance-stable-maint team:

1)  Removing Zhi Yan Liu from the group; unfortunately he has moved on to
other ventures and is not actively participating in our operations anymore.

2)  Adding Mike Fedosin to the group; Mike has been reviewing and
backporting patches to glance stable branches and is working with the right
mindset. I think he would be a great addition to share the workload around.




+1 to all the above! Thanks Erno for checking into this. Thanks to
Zhi Yan Liu for all the time he dedicated to Glance and thanks to
Mike Fedosin for the time he's dedicating to reviewing stable
branches.

Let's give this some extra time and I'll make the changes.
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Victor Stinner

Hi,

Le 01/10/2015 15:50, Nikolay Makhotkin a écrit :

In Mistral, we are trying to fix the codebase to support python3,
but we are not able to do this since we hit an issue with the pecan library.
If you want to see the details, see this traceback - [1].


I tried to run Mistral tests on Python 3 and I got the same error. For 
me, it's an obvious Python 3 bug in Pecan. Maybe Pecan has only
partial Python 3 support?


If value is a bound method, you can replace value.im_class with
type(value.__self__).
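A quick standalone illustration of that fix (plain Python, nothing
pecan-specific; `Controller` is just an example class):

```python
class Controller(object):
    def index(self):
        return "ok"

method = Controller().index  # a bound method

# Python 2 exposed the defining class as method.im_class; that attribute
# no longer exists on Python 3, where the bound instance is reachable
# through __self__ instead.
assert not hasattr(method, "im_class")        # True on Python 3
assert type(method.__self__) is Controller    # the Python 3 replacement
assert method.__func__ is Controller.index    # underlying plain function
```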


Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Some reminders on the specs process

2015-10-01 Thread Jim Rollenhagen
Hi all,

Just a couple quick reminders about our specs process, given our new
release model:

* New specs should be proposed to the specs/approved/ directory, not a
  mitaka directory.

* Specs approved in a previous development cycle do not need to be
  re-approved for Mitaka. The team reserves the right to un-approve a
  spec if it no longer makes sense; please do let us know if you see
  a spec like that. Otherwise, any approved spec is fair game for
  implementing.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Sean M. Collins
On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> - more changes with less infra tinkering! neutron devs should not need to go 
> to infra projects so often to make an impact;
> -- make our little neat devstack plugin, currently used for qos and sr-iov
> only, replace the huge pile of bash code that is currently stored in devstack
> and is proudly called neutron-legacy now; and make the latter obsolete and
> eventually removed from devstack;

We may need to discuss this. I am currently doing a refactor of the
Neutron DevStack integration in 

https://review.openstack.org/168438

If I understand your message correctly, I disagree that we should be
moving all the DevStack support for Neutron out of DevStack and making
it a plugin. All that does is move the mess from one corner of the room
to another.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-10-01 Thread Ivan Kolodyazhny
Sean,

Thanks for bringing this topic to TC meeting.

Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/

On Thu, Oct 1, 2015 at 1:43 PM, Sean Dague  wrote:

> This is now queued up for discussion this week -
> https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda
>
> On 10/01/2015 06:22 AM, Sean Dague wrote:
> > Some of us are actively watching the thread / participating. I'll make
> > sure it gets on the TC agenda in the near future.
> >
> > I think most of the recommendations are quite good, especially on the
> > client support front for clients / tools within our community.
> >
> > On 09/30/2015 10:37 PM, Matt Fischer wrote:
> >> Thanks for summarizing this Mark. What's the best way to get feedback
> >> about this to the TC? I'd love to see some of the items which I think
> >> are common sense for anyone who can't just blow away devstack and start
> >> over to get added for consideration.
> >>
> >> On Tue, Sep 29, 2015 at 11:32 AM, Mark Voelker wrote:
> >>
> >>
> >> Mark T. Voelker
> >>
> >>
> >>
> >> > On Sep 29, 2015, at 12:36 PM, Matt Fischer wrote:
> >> >
> >> >
> >> >
> >> > I agree with John Griffith. I don't have any empirical evidence to
> >> > back my "feelings" on that one but it's true that we weren't able to
> >> > enable Cinder v2 until now.
> >> >
> >> > Which makes me wonder: When can we actually deprecate an API
> >> version? I
> >> > *feel* we are fast to jump on the deprecation when the replacement
> >> isn't
> >> > 100% ready yet for several versions.
> >> >
> >> > --
> >> > Mathieu
> >> >
> >> >
> >> > I don't think it's too much to ask that versions can't be
> >> deprecated until the new version is 100% working, passing all tests,
> >> and the clients (at least python-xxxclients) can handle it without
> >> issues. Ideally I'd like to also throw in the criteria that
> >> devstack, rally, tempest, and other services are all using and
> >> exercising the new API.
> >> >
> >> > I agree that things feel rushed.
> >>
> >>
> >> FWIW, the TC recently created an assert:follows-standard-deprecation
> >> tag.  Ivan linked to a thread in which Thierry asked for input on
> >> it, but FYI the final language as it was approved last week [1] is a
> >> bit different than originally proposed.  It now requires one release
> >> plus 3 linear months of deprecated-but-still-present-in-the-tree as
> >> a minimum, and recommends at least two full stable releases for
> >> significant features (an entire API version would undoubtedly fall
> >> into that bucket).  It also requires that a migration path will be
> >> documented.  However to Matt’s point, it doesn’t contain any
> >> language that says specific things like:
> >>
> >> In the case of major API version deprecation:
> >> * $oldversion and $newversion must both work with
> >> [cinder|nova|whatever]client and openstackclient during the
> >> deprecation period.
> >> * It must be possible to run $oldversion and $newversion
> >> concurrently on the servers to ensure end users don’t have to switch
> >> overnight.
> >> * Devstack uses $newversion by default.
> >> * $newversion works in Tempest/Rally/whatever else.
> >>
> >> What it *does* do is require that a thread be started here on
> >> openstack-operators [2] so that operators can provide feedback.  I
> >> would hope that feedback like “I can’t get clients to use it so
> >> please don’t remove it yet” would be taken into account by projects,
> >> which seems to be exactly what’s happening in this case with Cinder
> >> v1.  =)
> >>
> >> I’d hazard a guess that the TC would be interested in hearing about
> >> whether you think that plan is a reasonable one (and given that TC
> >> election season is upon us, candidates for the TC probably would
> too).
> >>
> >> [1] https://review.openstack.org/#/c/207467/
> >> [2]
> >>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> >>
> >> At Your Service,
> >>
> >> Mark T. Voelker
> >>
> >>
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> ___
> >> OpenStack-operators mailing list
> >> 

[openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Nikolay Makhotkin
Hi, pecan folks!

I have a question for you about python3 support in the pecan library.

In Mistral, we are trying to fix the codebase to support python3, but
we are not able to do this since we faced an issue with the pecan library. If
you want to see the details, see this traceback - [1].
(Actually, something is wrong with HooksController and the walk_controller
method)

Does pecan officially support python3 (especially, python3.4 or python3.5)
or not?
I didn't find any info about that in pecan repository.

[1] http://paste.openstack.org/show/475041/

-- 
Best Regards,
Nikolay Makhotkin
@Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing 
tenancy model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

* Project A
* Project A-1
* Project A-2

Then you can assign users to projects in the following ways:

* Assign team 1 members to both Project A and Project A-1
* Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and past 
experience tells me this won't be practical or scale; however, from experience 
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration, we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 clusters still sucks.

From my point of view, an ideal use case for companies like ours (yahoo/godaddy) 
would be the ability to support hierarchical projects in magnum.  That way we 
could create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can create their own bays (but support of the 
kube cluster would then fall to that team).  When a sub-project spins up a pod 
on a bay, minions get created inside that team's sub-project and the containers 
in that pod run on the capacity that was spun up under that project; the 
minions for each pod would be in a scaling group and as such grow/shrink as 
dictated by load.

The above would make it so we support a minimal, yet imho reasonable, number of 
kube clusters, give people who can't/don't want to fall in line with the 
provided resource a way to make their own, and still offer a "good enough for a 
single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full on integration with OpenStack within the COE itself.  However,
>that model, which is what I believe you proposed, is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density.  I personally don't feel that pain is worth the gain.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Armando M.
On 1 October 2015 at 06:45, Ihar Hrachyshka  wrote:

> Hi all,
>
> I talked recently with several contributors about what each of us plans
> for the next cycle, and found it’s quite useful to share thoughts with
> others, because you have immediate yay/nay feedback, and maybe find
> companions for next adventures, and what not. So I’ve decided to ask
> everyone what you see the team and you personally doing the next cycle, for
> fun or profit.
>
> That’s like a PTL nomination letter, but open to everyone! :) No
> commitments, no deadlines, just list random ideas you have in mind or in
> your todo lists, and we’ll all appreciate the huge pile of awesomeness no
> one will ever have time to implement even if scheduled for Xixao release.
>

You mean Xixao, once we have already rotated the alphabet? :)


>
> To start the fun, I will share my silly ideas in the next email.
>

Thanks for starting this thread. I think having people share ideas can be
useful to help us have a sense of what they are going to work on during the
next release. Obviously some of these ideas will eventually feed into
neutron-specs.

Kyle also tried to capture people's workload in [1] at one point. Perhaps
we can revisit that idea and develop it further based on your input here.

Definitely food for thought.

[1]
https://github.com/openstack/neutron-specs/blob/master/priorities/kilo-priorities.rst


>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Nikolay Makhotkin
>
>
> If value is a bound method, you can replace value.im_class with
> type(value.__self__).
>
>
Yes. Python 3 does not support the im_class attribute for methods.
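
A minimal sketch of the portable substitution (the class and method names here
are made up for illustration, not taken from pecan):

```python
class HooksController:
    def walk(self):
        return "ok"

value = HooksController().walk  # a bound method

# Python 2 exposed the owning class as value.im_class; Python 3 dropped
# it, but bound methods in both versions carry the instance as
# __self__, so this substitution works everywhere:
cls = type(value.__self__)

print(cls.__name__)  # -> HooksController
```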

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] ubuntu trusty beaker jobs are broken

2015-10-01 Thread Emilien Macchi
fog-google, which is a beaker dependency, dropped Ruby 1.x support and now
enforces 2.x.
Ubuntu Trusty does not provide Ruby 2.x, so our CI is broken.

Good news! A fix is ongoing in beaker to pin fog-google:
https://github.com/puppetlabs/beaker/pull/973

Hopefully it will be merged & released soon.
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 15:45, Ihar Hrachyshka  wrote:
> 
> Hi all,
> 
> I talked recently with several contributors about what each of us plans for 
> the next cycle, and found it’s quite useful to share thoughts with others, 
> because you have immediate yay/nay feedback, and maybe find companions for 
> next adventures, and what not. So I’ve decided to ask everyone what you see 
> the team and you personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
> no deadlines, just list random ideas you have in mind or in your todo lists, 
> and we’ll all appreciate the huge pile of awesomeness no one will ever have 
> time to implement even if scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.

Here is my silly list of stuff to do.

- start adopting NeutronDbObject for core resources (ports, networks) [till 
now, it’s used in QoS only];

- introduce a so-called ‘core resource extender manager’ that would be able to 
replace the ml2 extension mechanism and become a plugin-agnostic way of extending 
core resources by additional plugins (think of port security or qos being 
available for ml2 only - that sucks!);

- more changes with less infra tinkering! neutron devs should not need to go to 
infra projects so often to make an impact;
-- extend our little neat devstack plugin (currently used for qos and sr-iov 
only) to absorb the huge pile of bash code that is currently stored in devstack 
and is proudly called neutron-legacy now, and make the latter obsolete and 
eventually removed from devstack;
-- make tempest jobs use a gate hook as we already do for api jobs;

- qos:
-- once we have the gate hook triggered, finally introduce qos into tempest runs 
to allow the first qos scenarios to merge;
-- remove the RPC upgrade tech debt that we left in L (that should open a path 
for new QoS rules that are currently blocked by it);
-- look into races in the rpc.callbacks notification pattern (Kevin mentioned he 
had ideas in mind around that);

- oslo:
-- kill the incubator: we have a single module consumed from there (cache); 
Mitaka is the time for the witch to die in pain;
-- adopt oslo.reports: that is something I failed to do in Liberty, so I have a 
great chance to do the same in Mitaka; basically, allow neutron services to dump 
‘useful info’ when SIGUSR2 is sent; hopefully this will make debugging a bit 
easier;

- upgrades:
-- we should return to a partial job for neutron; it's not OK that our upgrade 
strategy works by pure luck;
-- overall, I feel we need to provide more details about how upgrades are 
expected to work in OpenStack (the order of service upgrades; constraints; 
managing RPC versions and deprecations; etc.) A devref would probably be a good 
start. I talked to some nova folks involved in upgrades there, and we may join 
the armies on that, since the general upgrade strategy should be similar 
throughout the meta-project.

- stable:
-- with a stadium of the size we have, it becomes a burden for 
neutron-stable-maint to track backports for all projects; we should think of 
opening the doors for more per-sub-project stable cores for those subprojects 
that seem sane in terms of development practices and stable awareness; that way 
we offload neutron-stable-maint folks for stuff with greater impact (aka stuff 
they actually know).
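
As a tiny illustration of the oslo.reports idea above — dumping "useful info"
when SIGUSR2 arrives — here is a minimal hand-rolled version of the
signal-handler pattern (oslo.reports itself produces much richer "Guru
Meditation" reports; this is just a sketch of the mechanism):

```python
import signal
import sys
import traceback

def dump_state(signum, frame):
    # Print a stack trace for every thread in the process; a rough
    # stand-in for the report oslo.reports would generate.
    for tid, stack in sys._current_frames().items():
        print("Thread %d:" % tid)
        traceback.print_stack(stack, file=sys.stdout)

# After this, `kill -USR2 <pid>` makes the service dump its state
# without interrupting it.
signal.signal(signal.SIGUSR2, dump_state)
```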

And what are you folks thinking of?

Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
Hi all,

I talked recently with several contributors about what each of us plans for the 
next cycle, and found it’s quite useful to share thoughts with others, because 
you have immediate yay/nay feedback, and maybe find companions for next 
adventures, and what not. So I’ve decided to ask everyone what you see the team 
and you personally doing the next cycle, for fun or profit.

That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
no deadlines, just list random ideas you have in mind or in your todo lists, 
and we’ll all appreciate the huge pile of awesomeness no one will ever have 
time to implement even if scheduled for Xixao release.

To start the fun, I will share my silly ideas in the next email.

Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Mitaka summit session proposal deadline

2015-10-01 Thread Jim Rollenhagen
Hi folks,

We need to have our summit sessions decided by October 15. So, I'd like
to propose a couple of dates.

Oct 9: session proposal deadline.
Oct 12: we go through session proposals in the Ironic meeting and try to
get a rough order of session priority.

Sometime between Oct 12 and 15, we'll get the cores together to
make our final list, and submit it.

As a reminder, you can propose a session here:
https://etherpad.openstack.org/p/mitaka-ironic-design-summit-ideas

Please do add your name to your proposal, and a link to a spec if
applicable.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-10-01 Thread Russell Bryant
On 09/30/2015 06:01 PM, Murali R wrote:
> Yes, sfc without nsh is what I am looking into and I am thinking ovn can
> have a better approach.
> 
> I did an implementation of sfc around nsh that used ovs & flows from
> custom ovs-agent back in mar-may. I added fields in ovs agent to send
> additional info for actions as well. Neutron side was quite trivial. But
> the solution required an implementation of ovs to listen on a different
> port to handle nsh header so doubled the number of tunnels. The ovs code
> we used/modified to was either from the link you sent or some other
> similar impl from Cisco folks (I don't recall) that had actions and
> conditional commands for the field. If we have generic ovs code to
> compare or set actions on any configured address field was my thought.
> But haven't thought through much on how to do that. In any case, with
> ovn we cannot define custom flows directly on ovs, so that approach is
> dated now. But hoping some similar feature can be added to ovn which can
> transpose some header field to geneve options.

Thanks for the detail of what you're trying to do.

I'm not sure how much you've looked into how OVN works.  OVN works by
defining the network in terms of "logical flows".  These logical flows
look similar to OpenFlow, but it talks about network resources in the
logical sense (not based on where they are physically located).  I think
we can implement SFC purely in the logical space.  So, most of the work
I think is in defining the northbound db schema and then converting that
into the right logical flows.  I looked at the API being proposed by the
networking-sfc project, and that's giving me a pretty good idea of what
the northbound schema could look like for OVN.

https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/api.rst

The networking-sfc API talks about a "chain parameter".  That's where
NSH could come in.  The spec proposes "mpls" as something OVS can
already support.  Given a single VIF, we need a way to differentiate
traffic associated with different chains.  This is *VERY* similar to
what OVN is already doing with parent/child ports, originally intended
for the containers-in-VM use case.  This same concept seems to fit here
quite well.  Today, we only support VLAN IDs for this, but we could
extend it to support mpls, NSH, or whatever.
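
For reference, the parent/child port mechanism looks roughly like this in
ovn-nbctl (syntax from the early OVN tooling of this era; these commands were
later renamed ls-add/lsp-add, so check against your OVS version):

```shell
# A logical switch with a VM port, plus a child port whose traffic is
# distinguished from the parent's by VLAN tag 42 (the container-in-VM case).
ovn-nbctl lswitch-add sw0
ovn-nbctl lport-add sw0 vm1-port
ovn-nbctl lport-add sw0 container1-port vm1-port 42
```

Supporting SFC chain parameters would mean allowing something other than a
VLAN tag (mpls, NSH, etc.) as that last discriminator.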

Anyway, those are just my high level thoughts so far.  I haven't tried
to really dig into a detailed design yet.

> I am trying something right now with ovn and will be attending ovs
> conference in nov. I am skipping openstack summit to attend something
> else in far-east during that time. But lets keep the discussion going
> and collaborate if you work on sfc.

I look forward to meeting you in November!  :-)

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress and Monasca Joint Session at Tokyo Design Summit

2015-10-01 Thread Tim Hinrichs
Hi Fabio,

The Congress team talked this over during our IRC meeting yesterday.  It looks
like we can meet with your team during one of our meeting slots.  As far as I
know the schedule for those meetings hasn't been set.  But once it is, I'll
reach out (or you can) to discuss the day/time.

Tim

On Mon, Sep 28, 2015 at 2:51 PM Tim Hinrichs  wrote:

>
> Hi Fabio: Thanks for reaching out.  We should definitely talk at the
> summit.  I don't know if we can devote 1 of the 3 allocated Congress
> sessions to Monasca, but we'll talk it over during IRC on Wed and let you
> know.  Or do you have a session we could use for the discussion?  In any
> case, I'm confident we can make good progress toward integrating Congress
> and Monasca in Tokyo.  Monasca sounds interesting--I'm looking forward to
> learning more!
>
> Congress team: if we could all quickly browse the Monasca wiki before
> Wed's IRC, that would be great:
> https://wiki.openstack.org/wiki/Monasca
>
> Tim
>
>
>
> On Mon, Sep 28, 2015 at 1:50 PM Fabio Giannetti (fgiannet) <
> fgian...@cisco.com> wrote:
>
>> Tim and Congress folks,
>>   I am writing on behalf of the Monasca community and I would like to
>> explore the possibility of holding a joint session during the Tokyo Design
>> Summit.
>> We would like to explore:
>>
>>1. how to integrate Monasca with Congress so then Monasca can provide
>>metrics, logs and event data for policy evaluation/enforcement
>>2. How to leverage Monasca alarming to automatically notify about
>>statuses that may imply policy breach
>>3. How to automatically (if possible) convert policies (or subparts)
>>into Monasca alarms.
>>
>> Please point me to a submission page if I have to create a formal
>> proposal for the topic and/or let me know other forms we can interact at
>> the Summit.
>> Thanks in advance,
>> Fabio
>>
>> *Fabio Giannetti*
>> Cloud Innovation Architect
>> Cisco Services
>> fgian...@cisco.com
>> Phone: *+1 408 527 1134*
>> Mobile: *+1 408 854 0020*
>>
>> *Cisco Systems, Inc.*
>> 285 W. Tasman Drive
>> San Jose
>> California
>> 95134
>> United States
>> Cisco.com 
>>
>>  Think before you print.
>>
>> This email may contain confidential and privileged material for the sole
>> use of the intended recipient. Any review, use, distribution or disclosure
>> by others is strictly prohibited. If you are not the intended recipient (or
>> authorized to receive for the recipient), please contact the sender by
>> reply email and delete all copies of this message.
>>
>> Please click here for
>> Company Registration Information.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Traditional question about Heat IRC meeting time.

2015-10-01 Thread Zane Bitter

On 29/09/15 05:56, Sergey Kraynev wrote:

Hi Heaters!

Previously we had a constant "tradition" of changing the meeting time to
involve more people from different time zones.
However, the last release cycle showed that the two meetings at 07:00
and 20:00 UTC are comfortable for most of our contributors. Both time
values are acceptable for me and I plan to attend both meetings. So I
suggest we leave it without any changes.

What do you think about it ?


Sadly I can only make the 2000 UTC, but +1 anyway... it seems to be 
working ok at the moment.


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Kyle Mestery
On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:

> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > - more changes with less infra tinkering! neutron devs should not need
> to go to infra projects so often to make an impact;
> > -- make our little neat devstack plugin used for qos and sr-iov only a
> huge pile of bash code that is currently stored in devstack and is proudly
> called neutron-legacy now; and make the latter obsolete and eventually
> removed from devstack;
>
> We may need to discuss this. I am currently doing a refactor of the
> Neutron DevStack integration in
>
> https://review.openstack.org/168438
>
> If I understand your message correctly, I disagree that we should be
> moving all the DevStack support for Neutron out of DevStack and making
> it a plugin. All that does is move the mess from one corner of the room,
> to another corner.
>
> I would actually be in favor of cleaning up the mess AND moving it into
neutron. If it's in Neutron, we control our own destiny with regards to
landing patches which affect devstack and ultimately our gate jobs. To me,
that's a huge win-win. Thus, cleanup first, then move to Neutron.


> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptls] Proposed Design Summit track/room/time allocation

2015-10-01 Thread Thierry Carrez
Hi PTLs,

A few weeks ago I posted here the allocation of slots for each track at
the Design Summit:

http://lists.openstack.org/pipermail/openstack-dev/2015-September/073654.html

You replied with a set of constraints, which makes it a fun scheduling
problem to solve. I did my best to take those constraints into account,
as well as avoiding booking a slot when the project PTL is giving a talk
at the conference. Here is the resulting proposed room/time layout:

http://tinyurl.com/mitaka-design-summit

Feel free to reach out to me if you spot any glaring issue with the
proposed schedule. I don't promise I'll fix it to your satisfaction,
because the equilibrium is pretty fragile here.

I'll push this proposed layout to Cheddar early next week (probably on
Monday) so you can start scheduling; please comment before then. The
site is not up yet, but I already pushed instructions on how to do the
scheduling at:

https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs

Eagle eyes will have noticed that there are two empty slots (highlighted
in red in the page). Those are up for grabs, if you're interested let me
know (there may be a few more available if any project team decides they
don't need that much space after all). If there are a lot of requests
for those, I plan to give priority to the late-comers in the tent which
don't have any space yet.

See you all in 3.5 weeks in Tokyo!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Sean M. Collins
On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
> 
> > On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > > - more changes with less infra tinkering! neutron devs should not need
> > to go to infra projects so often to make an impact;
> > > -- make our little neat DevStack plugin used for qos and sr-iov only a
> > huge pile of bash code that is currently stored in DevStack and is proudly
> > called neutron-legacy now; and make the latter obsolete and eventually
> > removed from DevStack;
> >
> > We may need to discuss this. I am currently doing a refactor of the
> > Neutron DevStack integration in
> >
> > https://review.openstack.org/168438
> >
> > If I understand your message correctly, I disagree that we should be
> > moving all the DevStack support for Neutron out of DevStack and making
> > it a plugin. All that does is move the mess from one corner of the room,
> > to another corner.
> >
> >
> I would actually be in favor of cleaning up the mess AND moving it into
> neutron. If it's in Neutron, we control our own destiny with regards to
> landing patches which affect DevStack and ultimately our gate jobs. To me,
> that's a huge win-win. Thus, cleanup first, then move to Neutron.

Frankly we have a bad track record in DevStack, if we are to make an
argument about controlling our own destiny. Neutron-lib is in a sad
state of affairs because we haven't had the discipline to keep things
simple.

In fact, I think the whole genesis of the Neutron plugin for DevStack is
a great example of how controlling our own destiny has started to grow
the mess. Yes, we needed it to gate the QoS code. But now things are
starting to get added.

https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8

The trend is now that people are going to throw things into the Neutron
DevStack plugin to get their doo-dad up and running, because making a
new repo is harder than creating a patch (which maybe shows our repo
creation process needs streamlining). I was originally for making
Neutron DevStack plugins that exist in their own repos, instead of
putting them in the Neutron tree. At least that makes things small,
manageable, and straightforward. Yes, it makes for more plugin lines in
your DevStack configuration, but at least you know what each one does,
instead of being an agglomeration.

If we are not careful, the Neutron DevStack plugin will grow into the big
mess that neutron-legacy is.

Finally, look at how many configuration knobs we have, and how there is
a tendency to introduce new ones, instead of using local.conf to inject
configuration into Neutron and the associated components. This ends up
making it very complicated for someone to actually run Neutron in their
DevStack, and I think a lot of people would give up and just run
Nova-Network, which I will note is *still the default*.
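
For what it's worth, the local.conf injection pattern needs no new knobs at
all; DevStack's post-config meta-sections already write arbitrary options into
a service's config file. A sketch (the option shown is just an example):

```ini
[[local|localrc]]
enable_plugin neutron https://git.openstack.org/openstack/neutron

# Injected into neutron.conf after DevStack generates it:
[[post-config|$NEUTRON_CONF]]
[DEFAULT]
debug = True
```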

We need to keep our ties strong with other projects, and improve them in
some cases. I think culturally, if we start trying to move things into
our corner of the sandbox because working with other groups is hard, we
send bad signals to others. This will eventually come back to bite us.

/rant

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][gnocchi] Tokyo design session planning

2015-10-01 Thread gord chung

hi,

just a reminder: please submit any topics ASAP so we can properly vote 
on items. If you have a feature you'd like to submit this cycle, it'd 
greatly help your chances of merging if you present/explain your 
topic to us common folks.


On 10/09/15 02:03 PM, gord chung wrote:

hi,

as mentioned during today's meeting, since we have our slots for 
design summit, we'll start accepting proposals for the 
telemetry-related topics for the Tokyo summit.


similar to previous design summits, anyone is welcome to propose 
topics related to ceilometer, aodh, gnocchi, or any other 
monitoring/metering/alarming related project.


official proposals can be submitted here: 
https://docs.google.com/spreadsheets/d/1ea_P2k1kNy_SILEnC-5Zl61IFhqEOrlZ5L8VY4-xNB0/edit?usp=sharing


we've also created an etherpad for those wishing to brainstorm ideas 
before adding formal proposal: 
https://etherpad.openstack.org/p/tokyo-ceilometer-design-summit


we have tentatively set submission deadline for October 5, 2015, which 
will be followed by a public vote.


cheers,
--
gord


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] acceptance tests status

2015-10-01 Thread Emilien Macchi
Hi,

Most of our modules have acceptance tests (run in beaker CI jobs) which
verify that our modules can be deployed and actually deploy OpenStack
services. They are also really helpful for reviewers to validate that
patches do not break the module.

While we are currently working on integration testing, I would like to
engage some work on modules that do not have acceptance tests:

* puppet-zaqar
* puppet-murano
* puppet-monasca
* puppet-mistral
* puppet-gnocchi (wip by me)
* puppet-aodh
* puppet-barbican

For each module, we need to:
* make sure UCA and/or RDO provide packaging (at least in liberty)
* push first acceptance tests (see [1] for an example)

For now, it's really hard to review patches for those modules because we
don't really test them. That means patches take more time to land because
our core team likes functional testing.

I would like our community to make more effort on this during the Mitaka
cycle; please raise your hand if you're willing to help.

[1]
https://github.com/openstack/puppet-glance/blob/master/spec/acceptance/basic_glance_spec.rb
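
For anyone picking up one of the modules above, a first acceptance test is
usually just a smoke test that applies the module twice and checks for
idempotency. A sketch modeled on the glance example in [1] (the zaqar class
name and parameters are illustrative):

```ruby
require 'spec_helper_acceptance'

describe 'basic zaqar' do
  it 'should apply with no errors and be idempotent' do
    pp = <<-EOS
      class { '::zaqar': }
    EOS

    # First run must succeed; second run must change nothing.
    apply_manifest(pp, :catch_failures => true)
    apply_manifest(pp, :catch_changes => true)
  end
end
```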


Thanks,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Johnston, Nate
I was wondering if you could provide a little more background on the QoS item, 
"-- remove RPC upgrade tech debt that we left in L (that should open path for 
new QoS rules that are currently blocked by it);”.  Is this related to the 
issue you point out in the commit message for the 
https://review.openstack.org/#/c/211520 change?  Does this block work on 
implementing QoS DSCP in Mitaka, and if so, are there bugs that we can pitch 
in on in order to get past it?

Thanks,

—N.

> On Oct 1, 2015, at 10:02 AM, Ihar Hrachyshka  wrote:
> 
>> On 01 Oct 2015, at 15:45, Ihar Hrachyshka  wrote:
>> 
>> Hi all,
>> 
>> I talked recently with several contributors about what each of us plans for 
>> the next cycle, and found it’s quite useful to share thoughts with others, 
>> because you have immediate yay/nay feedback, and maybe find companions for 
>> next adventures, and what not. So I’ve decided to ask everyone what you see 
>> the team and you personally doing the next cycle, for fun or profit.
>> 
>> That’s like a PTL nomination letter, but open to everyone! :) No 
>> commitments, no deadlines, just list random ideas you have in mind or in 
>> your todo lists, and we’ll all appreciate the huge pile of awesomeness no 
>> one will ever have time to implement even if scheduled for Xixao release.
>> 
>> To start the fun, I will share my silly ideas in the next email.
> 
> Here is my silly list of stuff to do.
> 
> - start adopting NeutronDbObject for core resources (ports, networks) [till 
> now, it’s used in QoS only];
> 
> - introduce a so called ‘core resource extender manager’ that would be able 
> to replace ml2 extension mechanism and become a plugin agnostic way of 
> extending core resources by additional plugins (think of port security or qos 
> available for ml2 only - that sucks!);
> 
> - more changes with less infra tinkering! neutron devs should not need to go 
> to infra projects so often to make an impact;
> -- make our little neat devstack plugin used for qos and sr-iov only a huge 
> pile of bash code that is currently stored in devstack and is proudly called 
> neutron-legacy now; and make the latter obsolete and eventually removed from 
> devstack;
> -- make tempest jobs use a gate hook as we already do for api jobs;
> 
> - qos:
> -- once we have gate hook triggered, finally introduce qos into tempest runs 
> to allow first qos scenarios merged;
> -- remove RPC upgrade tech debt that we left in L (that should open path for 
> new QoS rules that are currently blocked by it);
> -- look into races in rpc.callbacks notification pattern (Kevin mentioned he 
> had ideas in mind around that);
> 
> - oslo:
> -- kill the incubator: we have a single module consumed from there (cache); 
> Mitaka is the time for the witch to die in pain;
> -- adopt oslo.reports: that is something I failed to do in Liberty so that I 
> would have a great chance to do the same in Mitaka; basically, allow neutron 
> services to dump ‘useful info’ on SIGUSR2 sent; hopefully will make debugging 
> a bit easier;
> 
> - upgrades:
> -- we should return to partial job for neutron; it’s not ok our upgrade 
> strategy works by pure luck;
> -- overall, I feel that it’s needed to provide more details about how 
> upgrades are expected to work in OpenStack (the order of service upgrades; 
> constraints; managing RPC versions and deprecations; etc.) Probably devref 
> should be a good start. I talked to some nova folks involved in upgrades 
> there, and we may join the armies on that since general upgrade strategy 
> should be similar throughout the meta-project.
> 
> - stable:
> -- with a stadium of the size we have, it becomes a burden for 
> neutron-stable-maint to track backports for all projects; we should think of 
> opening doors for more per-sub-project stable cores for those subprojects 
> that seem sane in terms of development practices and stable awareness side; 
> that way we offload neutron-stable-maint folks for stuff with greater impact 
> (aka stuff they actually know).
> 
> And what are you folks thinking of?
> 
> Ihar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 17:05, Kyle Mestery  wrote:
> 
> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > - more changes with less infra tinkering! neutron devs should not need to 
> > go to infra projects so often to make an impact;
> > -- make our little neat devstack plugin used for qos and sr-iov only a huge 
> > pile of bash code that is currently stored in devstack and is proudly 
> > called neutron-legacy now; and make the latter obsolete and eventually 
> > removed from devstack;
> 
> We may need to discuss this. I am currently doing a refactor of the
> Neutron DevStack integration in
> 
> https://review.openstack.org/168438
> 
> If I understand your message correctly, I disagree that we should be
> moving all the DevStack support for Neutron out of DevStack and making
> it a plugin. All that does is move the mess from one corner of the room,
> to another corner.
> 
> I would actually be in favor of cleaning up the mess AND moving it into 
> neutron. If it's in Neutron, we control our own destiny with regards to 
> landing patches which affect devstack and ultimately our gate jobs. To me, 
> that's a huge win-win. Thus, cleanup first, then move to Neutron.
> 

The idea is to make it *both* clean and under neutron team control. The latter 
has huge benefits for the team, allowing a quick turnaround on patches. We 
landed qos and sr-iov support in no time, and I believe the same should apply 
for all goods we want to ship. Ideally, devstack would only contain a single 
neutron related line that enables our plugin.
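For context, devstack's plugin interface means a deployer's local.conf would 
only need something like the following to pull in neutron's devstack support. 
The repo URL and the q-qos service name follow the conventions discussed in 
this thread, but treat the exact lines as an illustrative sketch, not the 
final interface:

```ini
[[local|localrc]]
# Fetch and run neutron's devstack code through the plugin interface,
# instead of relying on devstack's in-tree neutron-legacy support.
enable_plugin neutron git://git.openstack.org/openstack/neutron

# Optional feature 'roles' exposed by the plugin as services.
enable_service q-qos
```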

No more dependency on the devstack core team, no more devstack folks burdened 
by random networking stuff they don’t really care much about. Ain’t it a 
win-win?

I agree we should not step on each other’s feet and I am ok to help with 
cleanup as Sean sees it (note: tell me more though about what is envisioned for 
the cleanup), or step out of it while Sean takes care of his stuff.

Ihar




Re: [openstack-dev] [Congress] hands on lab

2015-10-01 Thread Alex Yip
Hi all,

I updated the hands-on lab.


Here's the newest version of the instructions:

https://goo.gl/W0Rhcv


Here's the newest version of the VirtualBox image:

https://goo.gl/o062Kc​


Please give it a try and let me know if you run into any problems.

- Alex




From: Tim Hinrichs 
Sent: Monday, September 28, 2015 10:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] hands on lab

Hi Alex,

I went through the HOL.  Some feedback...

1.  When starting up, it paused for a while showing the following messages.  
Maybe that caused the issues I mention below.
Waiting for network configuration...
Waiting up to 60 more seconds for network configuration...

2.  Maybe spell out the first paragraph of "Starting the Devstack VM".  Maybe 
something like

To run the devstack VM, first install VirtualBox.  Then download the image from 
XXX and start the VM.  (From VirtualBox's File menu, Import Appliance and then 
choose the file you downloaded.)

Now you'll want to ssh to the VM from your laptop so it's easier to copy and 
paste.  (The VM is setup to use bridge networking with DHCP.) To find the IP 
address, login to the VM through the console.

username: congress
password: congress

Run ‘ifconfig’ to get the IP address of the VM for eth0

$ ifconfig eth0

Now open a terminal from your laptop and ssh to that IP address using the same 
username and password from above.



Now that devstack is running, you can point your laptop's browser to the VM's 
IP address you found earlier to use Horizon.

 3. Connection problems.
Horizon.  The browser brought up the login screen but gave me the message 
"Unable to establish connection to keystone endpoint." after I provided 
admin/password.

Congress.  From the congress log I see connection errors happening for keystone 
and neutron.

Looking at the Keystone screen log I don't see much of anything.  Here's the 
API log in its entirety.  Seems it hasn't gotten any requests today (Sept 28).


10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "POST /v2.0/tokens HTTP/1.1" 200 
4287 "-" "python-keystoneclient" 52799(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "GET /v3/auth/tokens HTTP/1.1" 200 
7655 "-" "python-keystoneclient" 65278(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "POST /v2.0/tokens HTTP/1.1" 200 
4287 "-" "python-keystoneclient" 63428(us)

10.1.184.61 - - [17/Sep/2015:12:05:08 -0700] "GET /v3/auth/tokens HTTP/1.1" 200 
7655 "-" "python-keystoneclient" 135994(us)

10.1.184.61 - - [17/Sep/2015:12:05:09 -0700] "POST /v2.0/tokens HTTP/1.1" 200 
4287 "-" "python-keystoneclient" 56581(us)

10.1.184.61 - - [17/Sep/2015:12:05:09 -0700] "GET /v3/auth/tokens HTTP/1.1" 200 
7655 "-" "python-keystoneclient" 55412(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/users HTTP/1.1" 200 
1679 "-" "python-keystoneclient" 13630(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/OS-KSADM/roles 
HTTP/1.1" 200 397 "-" "python-keystoneclient" 10940(us)

10.1.184.61 - - [17/Sep/2015:12:05:12 -0700] "GET /v2.0/tenants HTTP/1.1" 200 
752 "-" "python-keystoneclient" 12387(us)

The VM has eth0 bridged.  I can ping google.com from inside the VM; I can ssh 
to the VM from my laptop.

Any ideas what's going on?  (I'm trying to unstack/clean/stack to see if that 
works, but it'll take a while.)

Tim




On Thu, Sep 17, 2015 at 6:05 PM Alex Yip wrote:
Hi all,
I have created a VirtualBox VM that matches the Vancouver handson-lab here:

https://drive.google.com/file/d/0B94E7u1TIA8oTEdOQlFERkFwMUE/view?usp=sharing

There's also an updated instruction document here:

https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub

If you have some time, please try it out to see if it all works as expected.
thanks, Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [heat] Traditional question about Heat IRC meeting time.

2015-10-01 Thread x Lyn
+1 for 7:00

Sent from my iPhone

> On Oct 1, 2015, at 10:36 PM, Zane Bitter  wrote:
> 
>> On 29/09/15 05:56, Sergey Kraynev wrote:
>> Hi Heaters!
>> 
>> Previously we had constant "tradition" to change meeting time for
>> involving more people from different time zones.
>> However last release cycle show, that two different meetings with 07:00
>> and 20:00 UTC are comfortable for most of our contributors. Both time
>> values are acceptable for me and I plan to visit both meetings. So I
>> suggested to leave it without any changes.
>> 
>> What do you think about it ?
> 
> Sadly I can only make the 2000 UTC, but +1 anyway... it seems to be working 
> ok at the moment.
> 
> - ZB
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding Doug Fish to horizon stable-maint

2015-10-01 Thread David Lyle
+1

On Thu, Oct 1, 2015 at 3:21 AM, Matthias Runge  wrote:
> Hello,
>
> I would like to propose to add
>
> Doug Fish (doug-fish)
>
> to horizon-stable-maint team.
>
> I'd volunteer and introduce him to stable branch policy.
>
> Matthias
> --
> Matthias Runge 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Armando M.
On 1 October 2015 at 08:42, Sean M. Collins  wrote:

> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
> > On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins 
> wrote:
> >
> > > On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
> > > > - more changes with less infra tinkering! neutron devs should not
> need
> > > to go to infra projects so often to make an impact;
> > > > -- make our little neat DevStack plugin used for qos and sr-iov only
> a
> > > huge pile of bash code that is currently stored in DevStack and is
> proudly
> > > called neutron-legacy now; and make the latter obsolete and eventually
> > > removed from DevStack;
> > >
> > > We may need to discuss this. I am currently doing a refactor of the
> > > Neutron DevStack integration in
> > >
> > > https://review.openstack.org/168438
> > >
> > > If I understand your message correctly, I disagree that we should be
> > > moving all the DevStack support for Neutron out of DevStack and making
> > > it a plugin. All that does is move the mess from one corner of the
> room,
> > > to another corner.
> > >
> > >
> > I would actually be in favor of cleaning up the mess AND moving it into
> > neutron. If it's in Neutron, we control our own destiny with regards to
> > landing patches which affect DevStack and ultimately our gate jobs. To
> me,
> > that's a huge win-win. Thus, cleanup first, then move to Neutron.
>
> Frankly we have a bad track record in DevStack, if we are to make an
> argument about controlling our own destiny. Neutron-lib is in a sad
> state of affairs because we haven't had the discipline to keep things
> simple.
>

IMO we can't make these statements; otherwise, what's the point in looking
forward if all we do is base our actions on some _indelible_ past?

As for the rest, I am gonna let this thread sink in a bit before I come up
with a more elaborate answer that this thread deserves.


>
> In fact, I think the whole genesis of the Neutron plugin for DevStack is
> a great example of how controlling our own destiny has started to grow
> the mess. Yes, we needed it to gate the QoS code. But now things are
> starting to get added.
>
>
> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8
>
> The trend is now that people are going to throw things into the Neutron
> DevStack plugin to get their doo-dad up and running, because making a
> new repo is harder than creating a patch (which maybe shows our repo
> creation process needs streamlining). I was originally for making
> Neutron DevStack plugins that exist in their own repos, instead of
> putting them in the Neutron tree. At least that makes things small,
> manageable, and straight forward. Yes, it makes for more plugin lines in
> your DevStack configuration, but at least you know what each one does,
> instead of being an agglomeration.
>
> If we are not careful, the Neutron DevStack plugin will grow into the big
> mess that neutron-legacy is.
>
> Finally, Look at how many configuration knobs we have, and how there is
> a tendency to introduce new ones, instead of using local.conf to inject
> configuration into Neutron and the associated components. This ends up
> making it very complicated for someone to actually run Neutron in their
> DevStack, and I think a lot of people would give up and just run
> Nova-Network, which I will note is *still the default*.
>
> We need to keep our ties strong with other projects, and improve them in
> some cases. I think culturally, if we start trying to move things into
> our corner of the sandbox because working with other groups is hard, we
> send bad signals to others. This will eventually come back to bite us.
>
> /rant
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-10-01 Thread Fox, Kevin M
+1.

From: Ghe Rivero [ghe.riv...@gmail.com]
Sent: Thursday, October 01, 2015 12:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] -1 due to line length violation in commit
messages

If anyone disagrees with the commit format, please, go ahead and fix it (It's
really easy using the gerrit web) For such cosmetic changes (and others
similars), we should not wait for the author to do it. Sometimes, for a stupid
comma, and with all the TZ, a change can need more than a day to be fixed and
approved.

Ghe Rivero

Quoting Ihar Hrachyshka (2015-09-29 18:05:37)
> > On 25 Sep 2015, at 16:44, Ihar Hrachyshka  wrote:
> >
> > Hi all,
> >
> > releases are approaching, so it’s the right time to start some bike 
> > shedding on the mailing list.
> >
> > Recently I got pointed out several times [1][2] that I violate our commit 
> > message requirement [3] for the message lines that says: "Subsequent lines 
> > should be wrapped at 72 characters.”
> >
> > I agree that very long commit message lines can be bad, f.e. if they are 
> > 200+ chars. But <= 79 chars?.. Don’t think so. Especially since we have 79 
> > chars limit for the code.
> >
> > We had a check for the line lengths in openstack-dev/hacking before but it 
> > was killed [4] as per openstack-dev@ discussion [5].
> >
> > I believe commit message lines of <=80 chars are absolutely fine and should 
> > not get -1 treatment. I propose to raise the limit for the guideline on 
> > wiki accordingly.
> >
> > Comments?
> >
> > [1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG
> > [2]: https://review.openstack.org/#/c/227319/2//COMMIT_MSG
> > [3]: 
> > https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure
> > [4]: https://review.openstack.org/#/c/142585/
> > [5]: 
> > http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519
> >
> > Ihar
>
> Thanks everyone for replies.
>
> Now I realize WHY we do it with 72 chars and not 80 chars (git log output). 
> :) I updated the wiki page with how to configure Vim to enforce the rule. I 
> also removed the notion of gating on commit messages because we have them 
> removed since recently.
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Fox, Kevin M
I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing 
tenancy model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?
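In keystone terms, that layout is just a handful of project and role 
assignments. The sketch below assumes python-openstackclient and the default 
member role of that era; the project and user names are made up for 
illustration:

```shell
# Hypothetical names: dept-a is shared, dept-a-team1/2 are per-team.
openstack project create dept-a
openstack project create dept-a-team1
openstack project create dept-a-team2

# Team 1 members get roles on both the shared and the team project.
openstack role add --user alice --project dept-a _member_
openstack role add --user alice --project dept-a-team1 _member_

# Team 2 members likewise.
openstack role add --user bob --project dept-a _member_
openstack role add --user bob --project dept-a-team2 _member_
```

A bay created in dept-a is then reachable by both teams, while bays in 
dept-a-team1/2 stay team-private.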

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers 
company wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past 
experience tells me this won't be practical or scale; however, from experience 
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.

From my point of view, an ideal use case for companies like ours 
(Yahoo/GoDaddy) would be the ability to support hierarchical projects in 
magnum.  That way we could create a project for each department, and then the 
subteams of those departments can have their own projects.  We create a bay 
per department.  Sub-projects, if they want to, can support creation of their 
own bays (but support of the kube cluster would then fall to that team).  When 
a sub-project spins up a pod on a bay, minions get created inside that team's 
sub-project and the containers in that pod run on the capacity that was spun 
up under that project; the minions for each pod would be in a scaling group 
and as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet IMHO reasonable, number 
of kube clusters, give people who can't/don’t want to fall in line with the 
provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater then the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  Its not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full on integration with OpenStack within the COE itself.  However,
>that model which is what I believe you proposed is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density.  I 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ihar Hrachyshka
> On 01 Oct 2015, at 17:42, Sean M. Collins  wrote:
> 
> On Thu, Oct 01, 2015 at 11:05:29AM EDT, Kyle Mestery wrote:
>> On Thu, Oct 1, 2015 at 9:57 AM, Sean M. Collins  wrote:
>> 
>>> On Thu, Oct 01, 2015 at 10:02:24AM EDT, Ihar Hrachyshka wrote:
 - more changes with less infra tinkering! neutron devs should not need
>>> to go to infra projects so often to make an impact;
 -- make our little neat DevStack plugin used for qos and sr-iov only a
>>> huge pile of bash code that is currently stored in DevStack and is proudly
>>> called neutron-legacy now; and make the latter obsolete and eventually
>>> removed from DevStack;
>>> 
>>> We may need to discuss this. I am currently doing a refactor of the
>>> Neutron DevStack integration in
>>> 
>>> https://review.openstack.org/168438
>>> 
>>> If I understand your message correctly, I disagree that we should be
>>> moving all the DevStack support for Neutron out of DevStack and making
>>> it a plugin. All that does is move the mess from one corner of the room,
>>> to another corner.
>>> 
>>> 
>> I would actually be in favor of cleaning up the mess AND moving it into
>> neutron. If it's in Neutron, we control our own destiny with regards to
>> landing patches which affect DevStack and ultimately our gate jobs. To me,
>> that's a huge win-win. Thus, cleanup first, then move to Neutron.
> 
> Frankly we have a bad track record in DevStack, if we are to make an
> argument about controlling our own destiny. Neutron-lib is in a sad
> state of affairs because we haven't had the discipline to keep things
> simple.
> 
> In fact, I think the whole genesis of the Neutron plugin for DevStack is
> a great example of how controlling our own destiny has started to grow
> the mess. Yes, we needed it to gate the QoS code. But now things are
> starting to get added.
> 
> https://github.com/openstack/neutron/commit/bd07b74045d93c46483aa261b8686072d9b448e8
> 
> The trend is now that people are going to throw things into the Neutron
> DevStack plugin to get their doo-dad up and running, because making a
> new repo is harder than creating a patch (which maybe shows our repo
> creation process needs streamlining). I was originally for making
> Neutron DevStack plugins that exist in their own repos, instead of
> putting them in the Neutron tree. At least that makes things small,
> manageable, and straight forward. Yes, it makes for more plugin lines in
> your DevStack configuration, but at least you know what each one does,
> instead of being an agglomeration.
> 

Scattering devstack plugins in separate repos that are far from the code that 
they actually try to manage seems to me like a huge waste of time and 
resources. Once a component is out of the tree, I agree their devstack pieces 
should go away too. But while we keep QoS or SR-IOV in the tree, I think it’s 
the right place to have all stuff related in.

> If we are not careful, the Neutron DevStack plugin will grow into the big
> mess that neutron-legacy is.
> 

With your valuable reviewer comments, there is no way it will come to such a 
pitiful state. ;)

> Finally, Look at how many configuration knobs we have, and how there is
> a tendency to introduce new ones, instead of using local.conf to inject
> configuration into Neutron and the associated components. This ends up
> making it very complicated for someone to actually run Neutron in their
> DevStack, and I think a lot of people would give up and just run
> Nova-Network, which I will note is *still the default*.
> 

local.conf is fine but I believe we should still hide predefined sets of 
configuration values that would define ‘roles’ like QoS or L3 or VPNaaS, under 
‘services’ (like q-qos or q-sriov).

I don’t believe the number of non-default knobs is the issue that bothers 
people and makes them use nova-network. The fact that the default installation 
does not set up networking properly is the issue though.

> We need to keep our ties strong with other projects, and improve them in
> some cases. I think culturally, if we start trying to move things into
> our corner of the sandbox because working with other groups is hard, we
> send bad signals to others. This will eventually come back to bite us.

Well, it seems that the general trend in -dev projects like devstack or grenade 
is to give projects a plugin interface and then push them into adopting their 
pieces of code thru that interface. I will merely note that for QoS, the 
initial idea was to introduce q-qos service into devstack, but devstack core 
team (reasonably) pushed us into plugin world. They forbid new features in 
grenade for the same reason, so that we have motivation to move out of tree.

Ihar



Re: [openstack-dev] [Sender Auth Failure] Re: [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Miguel Angel Ajo



Johnston, Nate wrote:

I was wondering if you provide a little bit more background on the QoS item, 
"-- remove RPC upgrade tech debt that we left in L (that should open path for 
new QoS rules that are currently blocked by it);”.  Is this related to the issue you 
point out in the commit message for the https://review.openstack.org/#/c/211520 
change?  Does this block work on implementing QoS DSCP in Mitaka, and if so are 
there bugs that we can pitch in to in order to get past it?



Ok, that's something we need to implement before pushing new rules. 
Basically, RPC callbacks push neutron objects over the wire: QoSPolicy, 
to be exact. The QoSPolicy version depends on the rules it contains, so 
if we add a new type of rule, we need to bump that version on the 
server, but still make the old agents in the field work until you 
upgrade them.


I have several different strategies in mind [1], but I have to put together a 
document so we can discuss and decide on the best one and put it in place.


[1] 
https://github.com/openstack/neutron/blob/master/doc/source/devref/rpc_callbacks.rst 
(read around "Considering rolling upgrades")
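The bump-and-backport idea can be sketched in a few lines of plain Python. 
The class below only mimics the oslo.versionedobjects pattern referenced in 
[1]; the version numbers, method shape, and rule names are made up for 
illustration and are not Neutron's actual implementation:

```python
# Sketch: a versioned object that can serialize itself for older peers.
# Names are illustrative; real Neutron uses oslo.versionedobjects.

class QoSPolicy:
    # Pretend 1.0 knew only bandwidth-limit rules and 1.1 added a
    # hypothetical dscp-marking rule type (the version bump Miguel means).
    VERSION = '1.1'

    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # list of (rule_type, params) tuples

    def obj_to_primitive(self, target_version=None):
        """Serialize, backporting for agents that only speak 1.0."""
        rules = self.rules
        if target_version == '1.0':
            # Drop rule types the old agent cannot understand.
            rules = [r for r in rules if r[0] != 'dscp-marking']
        return {'version': target_version or self.VERSION,
                'name': self.name,
                'rules': rules}


policy = QoSPolicy('gold', [('bandwidth-limit', {'max_kbps': 1000}),
                            ('dscp-marking', {'dscp_mark': 26})])

# A new agent gets everything; an old agent gets a 1.0-compatible view.
new_wire = policy.obj_to_primitive()
old_wire = policy.obj_to_primitive(target_version='1.0')
print(len(new_wire['rules']), len(old_wire['rules']))  # prints: 2 1
```

In a rolling upgrade the old agent reports the highest version it knows, and 
the server serializes with that target version, which is roughly what "make 
the old agents in the field work" amounts to.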




Thanks,

—N.


On Oct 1, 2015, at 10:02 AM, Ihar Hrachyshka  wrote:


On 01 Oct 2015, at 15:45, Ihar Hrachyshka  wrote:

Hi all,

I talked recently with several contributors about what each of us plans for the 
next cycle, and found it’s quite useful to share thoughts with others, because 
you have immediate yay/nay feedback, and maybe find companions for next 
adventures, and what not. So I’ve decided to ask everyone what you see the team 
and you personally doing the next cycle, for fun or profit.

That’s like a PTL nomination letter, but open to everyone! :) No commitments, 
no deadlines, just list random ideas you have in mind or in your todo lists, 
and we’ll all appreciate the huge pile of awesomeness no one will ever have 
time to implement even if scheduled for Xixao release.

To start the fun, I will share my silly ideas in the next email.

Here is my silly list of stuff to do.

- start adopting NeutronDbObject for core resources (ports, networks) [till 
now, it’s used in QoS only];

- introduce a so called ‘core resource extender manager’ that would be able to 
replace ml2 extension mechanism and become a plugin agnostic way of extending 
core resources by additional plugins (think of port security or qos available 
for ml2 only - that sucks!);

- more changes with less infra tinkering! neutron devs should not need to go to 
infra projects so often to make an impact;
-- make our neat little devstack plugin (currently used for qos and sr-iov 
only) replace the huge pile of bash code that is stored in devstack and is 
proudly called neutron-legacy now; and make the latter obsolete and eventually 
removed from devstack;
-- make tempest jobs use a gate hook as we already do for api jobs;

- qos:
-- once we have gate hook triggered, finally introduce qos into tempest runs to 
allow first qos scenarios merged;
-- remove RPC upgrade tech debt that we left in L (that should open path for 
new QoS rules that are currently blocked by it);
-- look into races in rpc.callbacks notification pattern (Kevin mentioned he 
had ideas in mind around that);

- oslo:
-- kill the incubator: we have a single module consumed from there (cache); 
Mitaka is the time for the witch to die in pain;
-- adopt oslo.reports: that is something I failed to do in Liberty so that I 
would have a great chance to do the same in Mitaka; basically, allow neutron 
services to dump ‘useful info’ on SIGUSR2 sent; hopefully will make debugging a 
bit easier;

- upgrades:
-- we should return to a partial job for neutron; it’s not OK that our upgrade 
strategy works by pure luck;
-- overall, I feel that it’s needed to provide more details about how upgrades 
are expected to work in OpenStack (the order of service upgrades; constraints; 
managing RPC versions and deprecations; etc.) Probably devref should be a good 
start. I talked to some nova folks involved in upgrades there, and we may join 
the armies on that since general upgrade strategy should be similar throughout 
the meta-project.

- stable:
-- with a stadium of the size we have, it becomes a burden for 
neutron-stable-maint to track backports for all projects; we should think of 
opening doors to more per-subproject stable cores for those subprojects that 
seem sane in terms of development practices and stable-branch awareness; that 
way we free up neutron-stable-maint folks for stuff with greater impact (aka 
stuff they actually know).

And what are you folks thinking of?

Ihar



Re: [openstack-dev] [nova] live migration in Mitaka

2015-10-01 Thread Mathieu Gagné
On 2015-10-01 7:26 AM, Kashyap Chamarthy wrote:
> On Wed, Sep 30, 2015 at 11:25:12AM +, Murray, Paul (HP Cloud) wrote:
>>
>>> Please respond to this post if you have an interest in this and what
>>> you would like to see done.  Include anything you are already
>>> getting on with so we get a clear picture. 
>>
>> Thank you to those who replied to this thread. I have used the
>> contents to start an etherpad page here:
>>
>> https://etherpad.openstack.org/p/mitaka-live-migration
> 
> I added a couple of URLs for upstream libvirt work that allow for
> selective block device migration, and the in-progress generic TLS
> support work by Dan Berrange in upstream QEMU.
> 
>> I have taken the liberty of listing those that responded to the thread
>> and the authors of mentioned patches as interested people.
>  
>> From the responses and looking at the specs up for review it looks
>> like there are about five areas that could be addressed in Mitaka and
>> several others that could come later. The first five are:
>>
>>
>> - migrating instances with a mix of local disks and cinder volumes 
> 
> IIUC, this is possible with the selective block device migration work
> merged in upstream libvirt:
> 
> https://www.redhat.com/archives/libvir-list/2015-May/msg00955.html
> 

Can someone explain to me what is the actual "disk name" I have to pass
in to libvirt? I couldn't find any documentation about how to use this
feature.

-- 
Mathieu



[openstack-dev] [Fuel] Bugfixing status. 12 weeks before SCF.

2015-10-01 Thread Dmitry Pyzhov
Guys,

I was not able to participate in our weekly IRC meeting, so I'd like to
share our bug status for the 8.0 release by e-mail instead.

We have 494 Fuel bugs on Launchpad. This number can be split into
several piles.

1) Critical and High priority bugs. We have 48 of them now. 2 in UI, 31 in
python, 15 in library. This is our focus, and we are working on reducing the
numbers.
2) Medium/Low/Wishlist priority bugs. We have 241 bugs. 72 in UI, 119 in
python, 50 in library.
3) Features reported as bugs and bugs that can be fixed only by
implementing new blueprints. We have 133 of them. 3 in UI, 106 in python
and 24 in library. These bugs are marked with 'feature', 'covered-by-bp'
and 'need-bp' tags. Numbers look scary but only 40 of them have high and
critical priority.
4) Technical debt. Things that we should do better from developer's point
of view. 72 bugs in total. 60 in python, 12 in library. They are marked
with 'tech-debt' tag.
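The bucket counts above can be cross-checked quickly (numbers exactly as reported in this mail; the tech-debt bucket has no UI component):

```python
# Per-bucket bug counts from the report above.
buckets = {
    'critical_high':   {'ui': 2,  'python': 31,  'library': 15},
    'medium_low_wish': {'ui': 72, 'python': 119, 'library': 50},
    'feature_bugs':    {'ui': 3,  'python': 106, 'library': 24},
    'tech_debt':       {'ui': 0,  'python': 60,  'library': 12},
}

subtotals = {name: sum(parts.values()) for name, parts in buckets.items()}
print(subtotals)
# -> {'critical_high': 48, 'medium_low_wish': 241,
#     'feature_bugs': 133, 'tech_debt': 72}
print(sum(subtotals.values()))  # -> 494, matching the reported total
```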

My personal opinion is that we can ignore our medium-priority technical
debt bugs for now. We should fix them but there is nothing that cannot be
postponed here. We will continue fixing them in background. The only
exception here should be bugs related to alignment with global
requirements, tox and oslo-related changes. We definitely should fix this
stuff.

You can see that we have a big demand for python developers. Here is my early
estimate: at the current pace we can fix all existing library bugs in 8.0.
We can also fix all existing high priority bugs in python, including
technical debt and maybe feature-bugs. It looks like we are able to fix
about half of the medium priority python bugs. I don't have any estimate for
medium priority feature-bugs in python, and I'd prefer to be pessimistic
there. We will also fix only a very small number of medium priority technical
debt bugs.

There is a good chance that the number of incoming bugs will become smaller
over time and we will fix most of the existing medium priority python bugs.


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-10-01 Thread Murali R
Russell,

" These logical flows
look similar to OpenFlow, but it talks about network resources in the
logical sense (not based on where they are physically located).  I think
we can implement SFC purely in the logical space. "

Exactly. I was at the OVN presentation in Vancouver, and at that time it
felt like we could use these for SFC; that is why I am on this project now. I
am checking whether the logical flows will do what I want to do, or whether we
can extend the internal implementation without impacting the larger neutron or
other CMS interaction. For a standalone solution, the number of flows to
manage is too high with plain OVS, and ovs-agent has its own limitations on
how we can define custom flows.

Missed the OVN meeting today, but I have notes from the log. Nice usage blog :)
Thank you for all you do, Russell, helping us get on board.

-Murali

On Thu, Oct 1, 2015 at 7:32 AM, Russell Bryant  wrote:

> On 09/30/2015 06:01 PM, Murali R wrote:
> > Yes, sfc without nsh is what I am looking into and I am thinking ovn can
> > have a better approach.
> >
> > I did an implementation of sfc around nsh that used ovs & flows from
> > custom ovs-agent back in mar-may. I added fields in ovs agent to send
> > additional info for actions as well. Neutron side was quite trivial. But
> > the solution required an implementation of ovs to listen on a different
> > port to handle nsh header so doubled the number of tunnels. The ovs code
> > we used/modified to was either from the link you sent or some other
> > similar impl from Cisco folks (I don't recall) that had actions and
> > conditional commands for the field. If we have generic ovs code to
> > compare or set actions on any configured address field was my thought.
> > But haven't thought through much on how to do that. In any case, with
> > ovn we cannot define custom flows directly on ovs, so that approach is
> > dated now. But hoping some similar feature can be added to ovn which can
> > transpose some header field to geneve options.
>
> Thanks for the detail of what you're trying to do.
>
> I'm not sure how much you've looked into how OVN works.  OVN works by
> defining the network in terms of "logical flows".  These logical flows
> look similar to OpenFlow, but it talks about network resources in the
> logical sense (not based on where they are physically located).  I think
> we can implement SFC purely in the logical space.  So, most of the work
> I think is in defining the northbound db schema and then converting that
> into the right logical flows.  I looked at the API being proposed by the
> networking-sfc project, and that's giving me a pretty good idea of what
> the northbound schema could look like for OVN.
>
>
> https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/api.rst
>
> The networking-sfc API talks about a "chain parameter".  That's where
> NSH could come in.  The spec proposes "mpls" as something OVS can
> already support.  Given a single VIF, we need a way to differentiate
> traffic associated with different chains.  This is *VERY* similar to
> what OVN is already doing with parent/child ports, originally intended
> for the containers-in-VM use case.  This same concept seems to fit here
> quite well.  Today, we only support VLAN IDs for this, but we could
> extend it to support mpls, NSH, or whatever.
>
> Anyway, those are just my high level thoughts so far.  I haven't tried
> to really dig into a detailed design yet.
>
> > I am trying something right now with ovn and will be attending ovs
> > conference in nov. I am skipping openstack summit to attend something
> > else in far-east during that time. But lets keep the discussion going
> > and collaborate if you work on sfc.
>
> I look forward to meeting you in November!  :-)
>
> --
> Russell Bryant
>


Re: [openstack-dev] [Horizon] Selenium is now green - please pay attention to it

2015-10-01 Thread Rob Cresswell (rcresswe)
Also, please rebase patches to make sure they are passing selenium and
integration, even if they have been previously verified.
Yes, it's a little frustrating, but selenium is still non-voting so it
will not block merges.

Rob


On 01/10/2015 19:34, "Doug Fish"  wrote:

>Thanks for your work on this Richard!
>
>A good place to check this is during code reviews. If you see a patch
>causing these tests to fail, that's a good reason to -1!
>
>Sent from my iPhone
>
>> On Sep 30, 2015, at 10:45 PM, Richard Jones 
>>wrote:
>> 
>> Hi folks,
>> 
>> Selenium tests "gate-horizon-selenium-headless" are now green in master
>>again.
>> 
>> Please pay attention if it goes red. I will probably notice, but if I
>>don't, and you can't figure out what's going on, please feel free to get
>>in touch with me (r1chardj0n3s on IRC in #openstack-horizon, or email).
>>Let's try to keep it green!
>> 
>> 
>> Richard
>> 
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Ryan Petrello
Yep, this definitely looks like a Python3-specific bug.  If you'll open 
a ticket, I'll take a look as soon as I get a chance :)!


On 10/01/15 02:44 PM, Doug Hellmann wrote:

Excerpts from Nikolay Makhotkin's message of 2015-10-01 16:50:04 +0300:

Hi, pecan folks!

I have a question for you about python3 support in the pecan library.

In Mistral, we are trying to fix the codebase to support python3, but we
are not able to do this since we hit an issue with the pecan library. If you
want to see the details, see this traceback - [1].
(Actually, something is wrong with HooksController and walk_controller
method)

Does pecan officially support python3 (especially, python3.4 or python3.5)
or not?
I didn't find any info about that in the pecan repository.

[1] http://paste.openstack.org/show/475041/



The intent is definitely to support python 3. This sounds like a bug, so
I recommend opening a ticket in the pecan bug tracker.

Doug



--
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com



Re: [openstack-dev] [tripleo] How to selectively enable new services?

2015-10-01 Thread Dan Prince
On Wed, 2015-09-30 at 21:05 +0100, Steven Hardy wrote:
> Hi all,
> 
> So I wanted to start some discussion on $subject, because atm we have
> a
> couple of patches adding support for new services (which is great!):
> 
> Manila: https://review.openstack.org/#/c/188137/
> Sahara: https://review.openstack.org/#/c/220863/
> 
> So, firstly I am *not* aiming to be any impediment to those landing,
> and I
> know they have been in-progress for some time.  These look pretty
> close to
> being ready to land and overall I think new service integration is a
> very
> good thing for TripleO.
> 
> However, given the recent evolution towards the "big tent" of
> OpenStack, I
> wanted to get some ideas on what an effective way to selectively
> enable
> services would look like, as I can imagine not all users of TripleO
> want to
> deploy all-the-services all of the time.
> 
> I was initially thinking we simply have e.g "EnableSahara" as a
> boolean in
> overcloud-without-mergepy, and wire that in to the puppet manifests,
> such
> that the services are not configured/started.  However comments in
> the
> Sahara patch indicate it may be more complex than that, in particular
> requiring changes to the loadbalancer puppet code and os-cloud
> -config.
> 
> This is all part of the more general "composable roles" problem, but
> is
> there an initial step we can take, which will make it easy to simply
> disable services (and ideally not pay the cost of configuring them at
> all)
> on deployment?
> 
> Interested in peoples thoughts on this - has anyone already looked
> into it,
> or is there any existing pattern we can reuse?

A couple of ideas I would throw out that might help us pare down the
larger controller role into more composable roles.

On the Heat side we could use individual role templates directly. So new
nested stack templates for Sahara and Manila (or any service really).
These templates would still wire into the controller.yaml in much the
same way... but we would compose the resulting configuration metadata
based on what was configured in the resource registry. So if Sahara or
Manila is set to a noop resource we would essentially get a controller
without those services. If a service is enabled then we would pull in the
extra configuration metadata (hiera) and wire it into the structured config
as normal. Perhaps a new 'roles' directory would help organize these
services... The mechanism for composability on the Heat side is really
the resource registry, so we need to be careful to name and document
these things correctly.
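The registry-driven idea above — a service mapped to a noop resource contributes no configuration at all — can be sketched generically. This is a toy model of the mechanism, not the actual tripleo-heat-templates layout; the service names, template paths, and the use of None for "noop" are illustrative:

```python
# Toy model of resource-registry-driven composition: each optional
# service maps either to a role template (enabled) or to None (noop).
REGISTRY = {
    'sahara': 'roles/sahara.yaml',   # enabled: pull in its template + hiera
    'manila': None,                  # noop: controller built without it
}


def compose_controller_metadata(registry):
    """Assemble configuration metadata only for enabled services."""
    metadata = {}
    for service, template in registry.items():
        if template is None:
            continue  # noop mapping: we never pay the configuration cost
        metadata[service] = {'template': template,
                             'hiera': service + '.yaml'}
    return metadata


print(sorted(compose_controller_metadata(REGISTRY)))  # -> ['sahara']
```

The payoff is the one Steven asked about: disabled services are not merely stopped — their configuration is never generated in the first place.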



On the Puppet side the initial goal was to avoid creating what we
called a composition layer, which is basically a new set of puppet
modules that act as a front end for all of your roles etc. This isn't
to say that we don't want composability with the Puppet (we do) but
just that our preference was to write role manifests that used direct
include statements to OpenStack Puppet modules. This proved to be a
more hackable set of templates and forces us to use (as much as
possible) OpenStack puppet or add functionality there rather than go
and write our own puppet-tripleo to rule the world.

In practice that didn't fully work out and we do have a small bit of
Puppet code in puppet-tripleo. I consider much of this to be technical
debt and over the mid to long term would prefer to refactor and add the
puppet-tripleo bits elsewhere.

One area we could improve our puppet architecture might be with regards
to how we actually deploy the puppet manifests themselves. Currently we
deploy the manifests with Heat directly by using get_file to deploy the
manifest alongside of the SoftwareConfig (looks something like this:
http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/puppet/compute-post.yaml#n23).
This works really nicely
in that our puppet manifests can live alongside of the (lightly)
coupled Heat templates which pass in parameters via Hiera. However the
approach is limited in that we can only deploy a single manifest (not a
directory of roles that could be dynamically assembled or included on
the fly). Perhaps we could expand this capability a bit so that we
could structure the puppet a bit more freely into decomposed role
manifests. We already have a similar thing going on w/ the Hiera
element where we can deploy multiple Hiera files for each role. So
perhaps a script step to deploy multiple puppet files (generic so it
could be used in multiple templates), or perhaps we could use Swift to
deploy these alongside of Heat (I'm not sure we have the required
OS::Swift heat resources for this ATM though).

Again we've already got mechanisms to compose puppet Hiera, we just
need a few more tricks to more easily compose the role manifests
themselves...

Dan



> 
> As mentioned above, not aiming to block anything on this, I guess we
> can
> figure it out and retro-fit it to whatever services folks want to
> selectively disable later if needed.

Agree with not blocking the Manila and Sahara 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Do you mean this proposal 
http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html
 ? It looks like support for hierarchical roles/privileges, and I couldn't find 
anything related to resource sharing. I am not sure it can address the use 
cases Kris mentioned.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: October-01-15 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing 
tenancy model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

* Project A

* Project A-1

* Project A-2

Then you can assign users to projects in the following ways:

* Assign team 1 members to both Project A and Project A-1

* Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?
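Hongbin's three-project layout can be modeled to check who ends up able to reach which bay. This is a toy sketch of Keystone-style role assignments, not real API calls; the user and bay names are made up:

```python
# Keystone-style assignments from Hongbin's workaround: each team is
# assigned to the shared department project plus its own subteam project.
assignments = {
    'team1_user': {'project-A', 'project-A-1'},
    'team2_user': {'project-A', 'project-A-2'},
}

# A department-wide bay in project A, plus a bay team 1 created for itself.
bays = {'dept-bay': 'project-A', 'team1-bay': 'project-A-1'}


def visible_bays(user):
    """Bays a user can reach via any of their project assignments."""
    return sorted(b for b, proj in bays.items()
                  if proj in assignments[user])


print(visible_bays('team1_user'))  # -> ['dept-bay', 'team1-bay']
print(visible_bays('team2_user'))  # -> ['dept-bay']
```

Both teams share the department bay, while team-private bays stay isolated — which is the multi-tenancy compromise the workaround is aiming for.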

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers 
company wide at Godaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past 
experience tells me this won't be practical or scale; however, from experience 
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10-20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.
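Kris's back-of-the-envelope numbers work out roughly like this (the fractions and per-cluster minimum are the figures stated in his mail, not measured data):

```python
# Capacity estimate from the mail: ~4k projects, ~1/4 doing containers,
# HA minimum of 2 kube nodes per cluster (plus lbaas vips + floating ips).
projects = 4000
containerizing_fraction = 0.25            # "about 1/4 of the projects"
clusters = int(projects * containerizing_fraction)
min_nodes_per_cluster = 2                 # stated HA minimum
nodes = clusters * min_nodes_per_cluster

pessimism = 0.75                          # "even if I am off by 75%"
print(clusters, nodes, int(clusters * (1 - pessimism)))  # -> 1000 2000 250
```

Even the heavily discounted figure of 250 clusters is what motivates the per-department bay model below.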

From my point of view an ideal use case for companies like ours 
(yahoo/godaddy) would be able to support hierarchical projects in magnum.  
That way we could create a project for each department, and then the subteams 
of those departments can have their own projects.  We create a bay per 
department.  Sub-projects, if they want to, can support creation of their own 
bays (but support of the kube cluster would then fall to that team).  When a 
sub-project spins up a pod on a bay, minions get created inside that team's 
sub-projects and the containers in that pod run on the capacity that was spun 
up under that project; the minions for each pod would be in a scaling group 
and as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable, number 
of kube clusters, give people who can't or don't want to fall in line with the 
provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to "look like" it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems 

Re: [openstack-dev] [all] [ptls] Proposed Design Summit track/room/time allocation

2015-10-01 Thread Hayes, Graham
On 01/10/15 17:17, Thierry Carrez wrote:
> Hi PTLs,
> 
> A few weeks ago I posted here the allocation of slots for each track at
> the Design Summit:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073654.html
> 
> You replied with a set of constraints, which makes it a fun scheduling
> problem to solve. I did my best to take those constraints into account,
> as well as avoiding to book a slot when the project PTL is giving a talk
> at the conference. Here is the resulting proposed room/time layout result:
> 
> http://tinyurl.com/mitaka-design-summit
> 
> Feel free to reach out to me if you spot any glaring issue with the
> proposed schedule. I don't promise I'll fix it to your satisfaction,
> because the equilibrium is pretty fragile here.
> 
> I'll push this proposed layout to Cheddar early next week (probably on
> Monday) so you can start scheduling, so please comment before then. The
> site is not up yet, but I already pushed instructions on how to do the
> scheduling at:
> 
> https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs
> 
> Eagle eyes will have noticed that there are two empty slots (highlighted
> in red in the page). Those are up for grabs, if you're interested let me
> know (there may be a few more available if any project team decides they
> don't need that much space after all). If there are a lot of requests
> for those, I plan to give priority to the late-comers in the tent which
> don't have any space yet.
> 
> See you all in 3.5 weeks in Tokyo!
> 

In previous years we have had "related open source project" sessions at
the design summit, is that happening this year, or is space at too much
of a premium?



[openstack-dev] [nova] It's time to update the Liberty release notes

2015-10-01 Thread Matt Riedemann
We need to go through the changes which were tagged for release notes 
[1]. Generally this means anything with the UpgradeImpact tag but can 
also mean things with DocImpact.


Here are the commits in liberty that had the UpgradeImpact tag:

mriedem@ubuntu:~/git/nova$ git log --oneline -i --grep UpgradeImpact 
stable/kilo..stable/liberty

0b49934 CONF.allow_resize_to_same_host should check only once in controller
4a9e14a Update ComputeNode values with allocation ratios in the RT
4a18f7d api: use v2.1 only in api-paste.ini
11507ee api: deprecate the concept of extensions in v2.1
9c91781 Add missing rules in policy.json
1b8a2e0 Adding user_id handling to keypair index, show and create api calls
0283234 libvirt: Always default device names at boot
725c54e Remove db layer hard-code permission checks for 
quota_class_create/update
1dbb322 Remove db layer hard-code permission checks for 
quota_class_get_all_by_name

4d6a50a Remove db layer hard-code permission checks for floating_ip_dns
55e63f8 Allow non-admin to list all tenants based on policy
92807d6 Remove redundant policy check from security_group_default_rule
2a01a1b Remove hv_type translation shim for powervm
dcd4be6 Remove db layer hard-code permission checks for quota_get_all_*
06e6056 Remove cell policy check
d03b716 libvirt: deprecate libvirt version usage < 0.10.2
5309120 Update kilo version alias

So if you own one of those, please update the release notes.

Here are the DocImpact changes:

mriedem@ubuntu:~/git/nova$ git log --oneline -i --grep DocImpact 
stable/kilo..stable/liberty

bc6f30d Give instance default hostname if hostname is empty
4ee4f9f RT: track evacuation migrations
9095b36 Expose keystoneclient's session and auth plugin loading parameters
4a9e14a Update ComputeNode values with allocation ratios in the RT
4a18f7d api: use v2.1 only in api-paste.ini
11507ee api: deprecate the concept of extensions in v2.1
45d1e3c Expose VIF net-id attribute in os-virtual-interfaces
9d353e5 libvirt: take account of disks in migration data size
17e5911 Add deprecated_for_removal parm for deprecated neutron_ops
95940cc Don't allow instance to overcommit against itself
9cd9e66 Add rootwrap daemon mode support
c250aca Allow compute monitors in different namespaces
434ce2a Added processing /compute URL
2c0a306 Limit parallel live migrations in progress
da33ab4 libvirt: set caps on maximum live migration time
07c7e5c libvirt: support management of downtime during migration
60d08e6 Add documentation for the nova-cells command.
ae5a329 libvirt:Rsync remote FS driver was added
9a09674 libvirt: enable virtio-net multiqueue
8a7b1e8 :Add documentation for the nova-idmapshift command.
bf91d9f Added missed '-' to the rest_api_version_history.rst
1b8a2e0 Adding user_id handling to keypair index, show and create api calls
622a845 Metadata: support proxying loadbalancers
2f7403b VMware: map one nova-compute to one VC cluster
ace11d3 VMware: add serial port device
ab35779 Handle SSL termination proxies for version list
6739df7 Include DiskFilter in the default list
5e5ef99 VMware: Add support for swap disk
49a572a Show 'locked' information in server details
4252420 VMware: add resource limits for disk
f1f46a0 VMware: Resource limits for memory
7aec88c VMware: add support for cores per socket
bc3b6cc libvirt: rename parallels driver to virtuozzo
95f1d47 Add console allowed origins setting
d0ee3ab libvirt:Add a driver API to inject an NMI
50c8f93 Add MKS console support
abf20cd Execute _poll_shelved_instances only if shelved_offload_time is > 0
973f312 Use stevedore for loading monitor extensions
9260ea1 Include project_id in instance metadata.
d9c696a Make evacuate leave a record for the source compute host to process
6fe967b Cells: add instance cell registration utility to nova-manage
93a5a67 Removed extra '-' from rest_api_version_history.rst
bad76e6 VMware: convert driver to use nova.objects.ImageMeta
56feb2b Add microversion to allow server search option ip6 for non-admin
55e63f8 Allow non-admin to list all tenants based on policy
92807d6 Remove redundant policy check from security_group_default_rule
b06867c neutron: remove deprecated allow_duplicate_networks config option
5793d56 cells: remove deprecated mute_weight_value option
8237666 VMware: verify vCenter server certificate
b9bae02 fix "down" nova-compute service spuriously marked as "up"
66e1427 Add AggregateTypeAffinityFilter multi values support
f1a0d85 Cleanups for pci stats in preparation for RT using ComputeNode
27e7871 VMware: enforce minimum support VC version
438a48d Remove db layer hard-code permission checks for v2.1 cells
ad8d50f Add policy to cover snapshotting of volume backed instances
5c521e7 libvirt: deprecate the remove_unused_kernels config option
bcb9769 API: remove unuseful expected error code from v2.1 service 
delete api

a586b1b VMware: add support for NFS 4.1
0951cf3 Remove orphaned tables - iscsi_targets, volumes
f16dbab Add ability to inject routes in interfaces.template
d09785b Add config 

Re: [openstack-dev] [puppet] [infra] split integration jobs

2015-10-01 Thread Emilien Macchi

On 09/30/2015 07:37 PM, Jeremy Stanley wrote:

[...]
> 
> I don't think adding one more job is going to put a strain on our
> available resources. In fact it consumes just about as much to run a
> single job twice as long since we're constrained on the number of
> running instances in our providers (ignoring for a moment the
> spin-up/tear-down overhead incurred per job which, if you're
> talking about long-running jobs anyway, is less wasteful than it is
> for lots of very quick jobs). The number of puppet changes and
> number of jobs currently run on each is considerably lower than a
> lot of our other teams as well.

Ok, so I propose that we split the jobs.
Please review:
https://review.openstack.org/#/c/230013/
https://review.openstack.org/#/c/230064/

Thanks,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Selenium is now green - please pay attention to it

2015-10-01 Thread Doug Fish
Thanks for your work on this Richard!

A good place to check this is during code reviews. If you see a patch causing 
these tests to fail, that's a good reason to -1!

Sent from my iPhone

> On Sep 30, 2015, at 10:45 PM, Richard Jones  wrote:
> 
> Hi folks,
> 
> Selenium tests "gate-horizon-selenium-headless" are now green in master again.
> 
> Please pay attention if it goes red. I will probably notice, but if I don't, 
> and you can't figure out what's going on, please feel free to get in touch 
> with me (r1chardj0n3s on IRC in #openstack-horizon, or email). Let's try to 
> keep it green!
> 
> 
> Richard
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Doug Hellmann
Excerpts from Nikolay Makhotkin's message of 2015-10-01 16:50:04 +0300:
> Hi, pecan folks!
> 
> I have an question for you about python3 support in pecan library.
> 
> In Mistral, we are trying to fix the codebase for supporting python3, but
> we are not able to do this since we faced issue with pecan library. If you
> want to see the details, see this traceback - [1].
> (Actually, something is wrong with HooksController and walk_controller
> method)
> 
> Does pecan officially support python3 (especially, python3.4 or python3.5)
> or not?
> I didn't find any info about that in pecan repository.
> 
> [1] http://paste.openstack.org/show/475041/
> 

The intent is definitely to support python 3. This sounds like a bug, so
I recommend opening a ticket in the pecan bug tracker.

Doug
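Until the bug is tracked down, one version-safe pattern for this kind of controller walking is worth noting. The sketch below is illustrative only — it is NOT pecan's actual `walk_controller`, and the `exposed` attribute is a made-up stand-in for pecan's `@expose` decorator. The point is that enumerating members via `inspect`/`getattr` avoids the python2-only method attributes (`im_func`, `func_name`, etc.) that commonly break such code on python3:

```python
import inspect

# Illustrative only -- not pecan's implementation.
class RootController(object):
    def index(self):
        return 'hello'
    index.exposed = True  # hypothetical marker, standing in for @expose

    def _private(self):
        return 'hidden'

def walk_exposed(controller):
    """Return the names of the controller's exposed (routable) methods."""
    exposed = []
    for name, member in inspect.getmembers(controller):
        # callable + getattr works the same on python2 and python3,
        # unlike poking at im_func/func_name directly.
        if callable(member) and getattr(member, 'exposed', False):
            exposed.append(name)
    return exposed

print(walk_exposed(RootController))   # -> ['index']
```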



[openstack-dev] [Congress] Tokyo sessions

2015-10-01 Thread Tim Hinrichs
Hi all,

We just got a tentative assignment for our meeting times in Tokyo.  Our 3
meetings are scheduled back-to-back-to-back on Wed afternoon from
2:00-4:30p.  I don't think there's much chance of getting the meetings
moved, but does anyone have a hard conflict?

Here's our schedule for Wed:

Wed 11:15-12:45 HOL
Wed 2:00-2:40 Working meeting
Wed 2:50-3:30 Working meeting
Wed 3:40-4:20 Working meeting

Tim


Re: [openstack-dev] [ops] Operator Local Patches

2015-10-01 Thread Matt Riedemann



On 9/30/2015 5:47 PM, Matt Fischer wrote:

Is the purge deleted a replacement for nova-manage db archive-deleted?
It hasn't worked for several cycles and so I assume it's abandoned.

On Sep 30, 2015 4:16 PM, "Matt Riedemann" wrote:



On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:

Hello All,

We have some pretty good contributions of local patches on the etherpad.
We are going through right now and trying to group patches that multiple
people are carrying and patches that people may not be carrying but that
solve a problem they are running into.  If you can take some time and
either add your own local patches to the etherpad or add +1's next to
the patches that are laid out, it would help us immensely.

The etherpad can be found at:
https://etherpad.openstack.org/p/operator-local-patches

Thanks for your help!

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches

Hello all,

Friendly reminder: if you have local patches and haven't yet done so,
please contribute to the etherpad at:
https://etherpad.openstack.org/p/operator-local-patches

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Friday, September 18, 2015 at 4:35 PM
To: openstack-operators
Cc: Tom Fifield
Subject: Operator Local Patches

Hello Operators!

During the ops meetup in Palo Alto we were talking about sessions for
Tokyo.  A session that I proposed, which got a bunch of +1's, was about
local patches that operators were carrying.  From my experience this is
done to either implement business logic, fix assumptions in projects
that do not apply to your implementation, implement business
requirements that are not yet implemented in openstack, or fix scale
related bugs.  What I would like to do is get a working group together
to do the following:

1.) Document local patches that operators have (even those that are in
gerrit right now waiting to be committed upstream)
2.) Figure out commonality in those patches
3.) Either upstream the common fixes to the appropriate projects or
figure out if a hook can be added to allow people to run their code at
that specific point
4.) 
5.) Profit

To start this off, I have documented every patch, along with a
description of what it does and why we did it (where needed), that
GoDaddy is running [1].  What I am asking is that the operator community
please update the etherpad with the patches that you are running, so
that we have a good starting point for discussions in Tokyo and beyond.

[1] - https://etherpad.openstack.org/p/operator-local-patches
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy





I saw this originally on the ops list and it's a great idea - cat
herding the bazillion ops patches and seeing what common things rise
to the top would be helpful.  Hopefully some of that can then be
pushed into the projects.

There are a couple of things I could note that are specifically
operator driven which could use eyes again.

1. purge deleted instances from nova database:


http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html

The spec is approved for mitaka, the code is out for review.  If
people could test the change out it'd be helpful to vet its usefulness.

2. I'm trying to revive a spec that was approved in liberty but the
code never landed:

https://review.openstack.org/#/c/226925/

That's for force resetting quotas for a project/user so that on the
next pass it gets recalculated. A question came up about making the
user optional in that command so it's going 

Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-10-01 Thread Thomas Goirand
On 09/30/2015 01:58 PM, Thomas Goirand wrote:
> 3/ Horizon dependencies still in NEW queue
> ==========================================
> 
> It is also worth noting that Horizon hasn't been fully FTP master
> approved, and that some packages are still remaining in the NEW queue.
> This isn't the first release with such an issue with Horizon. I hope
> that 1/ FTP masters will approve the remaining packages soon 2/ for
> Mitaka, the Horizon team will care about freezing external dependencies
> (ie: new Javascript objects) earlier in the development cycle. I am
> hereby proposing that the Horizon 3rd party dependency freeze happens
> not later than Mitaka b2, so that we don't experience it again for the
> next release. Note that this problem affects both Debian and Ubuntu, as
> Ubuntu syncs dependencies from Debian.

Good news. All of the dependencies for Horizon have been FTP master
approved. Only Horizon itself is now in the FTP master NEW queue because
I added a horizon-doc binary package.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Fuel][PTL] PTL Candidates Q Session

2015-10-01 Thread Mike Scherbakov
> we may mix technical direction / tech debt roadmap and process,
political, and people management work of PTL.
sorry, of course I meant that we rather should NOT mix these things.

To make my email very short, I'd say PTL role is more political and
process-wise rather than architectural.

On Wed, Sep 30, 2015 at 5:48 PM Mike Scherbakov 
wrote:

> Vladimir,
> we may mix technical direction / tech debt roadmap and process, political,
> and people management work of PTL.
>
> PTL definition in OpenStack [1] reflects many things which PTL becomes
> responsible for. This applies to Fuel as well.
>
> I'd like to reflect some things here which I'd expect PTL doing, most of
> which will intersect with [1]:
> - Participate in cross-project initiatives & resolution of issues around
> it. Great example is puppet-openstack vs Fuel [2]
> - Organize required processes around launchpad bugs & blueprints
> - Provide personal feedback to Fuel contributors & public suggestions
> when needed
> - Define architecture direction & review majority of design specs. Rely on
> Component Leads and Core Reviewers
> - Ensure that roadmap & use cases are aligned with architecture work
> - Resolve conflicts between core reviewers, component leads. Get people to
> the same page
> - Watch for code review queues and quality of reviews. Ensure discipline
> of code review.
> - Testing / coverage have to be at the high level
>
> Considering all above, contributors actually have been working with all of
> us and know who could be better handling such a hard work. I don't think
> special Q is needed. If there are concerns / particular process/tech
> questions we'd like to discuss - those should be just open as email threads.
>
> [1] https://wiki.openstack.org/wiki/PTL_Guide
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/066685.html
>
> Thank you,
>
> On Tue, Sep 29, 2015 at 3:47 AM Vladimir Kuklin 
> wrote:
>
>> Folks
>>
>> I think it is awesome we have three candidates for PTL position in Fuel.
>> I read all candidates' emails (including my own several times :-) ) and I
>> got a slight thought of not being able to really differentiate the
>> candidates platforms as they are almost identical from the high-level point
>> of view. But we all know that the devil is in details. And this details
>> will actually affect project future.
>>
>> Thus I thought about Q session at #fuel-dev channel in IRC. I think
>> that this will be mutually beneficial for everyone to get our platforms a
>> little bit more clear.
>>
>> Let's do it before or right at the start of actual voting so that our
>> contributors can make better decisions based on this session.
>>
>> I suggest the following format:
>>
>> 1) 3 questions from electorate members - let's put them onto an etherpad
>> 2) 2 questions from a candidate to his opponents (1 question per opponent)
>> 3) external moderator - I suppose, @xarses as our weekly meeting
>> moderator could help us
>> 4) time and date - Wednesday or Thursday comfortable for both timezones,
>> e.g. after 4PM UTC or right after fuel weekly meeting.
>>
>> What do you think, folks?
>>
>> --
>> Yours Faithfully,
>> Vladimir Kuklin,
>> Fuel Library Tech Lead,
>> Mirantis, Inc.
>> +7 (495) 640-49-04
>> +7 (926) 702-39-68
>> Skype kuklinvv
>> 35bk3, Vorontsovskaya Str.
>> Moscow, Russia,
>> www.mirantis.com 
>> www.mirantis.ru
>> vkuk...@mirantis.com
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
>
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-10-01 Thread Murali R
Apologize typo "...us get overboard." ==> should be "... get us onboard" :)

On Thu, Oct 1, 2015 at 11:38 AM, Murali R  wrote:

> Russell,
>
> " These logical flows
> look similar to OpenFlow, but it talks about network resources in the
> logical sense (not based on where they are physically located).  I think
> we can implement SFC purely in the logical space. "
>
> Exactly. I was in the ovn presentation at Vancouver and at that time it
> felt we could use these for sfc and that is why I am on this project now. I
> am checking if the logical flows will do what I want to do. Or we can
> extend the internal impl without impacting the larger neutron or other cms
> interaction. For a standalone solution, the number of flows to manage is too
> much with plain ovs, and ovs-agent has its own limitations on how we can
> define custom flows.
>
> Missed the ovn meeting today but have notes from log. Nice usage blog :)
> Thank you for all you do Russell, helping us get overboard.
>
> -Murali
>
> On Thu, Oct 1, 2015 at 7:32 AM, Russell Bryant  wrote:
>
>> On 09/30/2015 06:01 PM, Murali R wrote:
>> > Yes, sfc without nsh is what I am looking into and I am thinking ovn can
>> > have a better approach.
>> >
>> > I did an implementation of sfc around nsh that used ovs & flows from
>> > custom ovs-agent back in mar-may. I added fields in ovs agent to send
>> > additional info for actions as well. Neutron side was quite trivial. But
>> > the solution required an implementation of ovs to listen on a different
>> > port to handle nsh header so doubled the number of tunnels. The ovs code
>> > we used/modified to was either from the link you sent or some other
>> > similar impl from Cisco folks (I don't recall) that had actions and
>> > conditional commands for the field. If we have generic ovs code to
>> > compare or set actions on any configured address field was my thought.
>> > But haven't thought through much on how to do that. In any case, with
>> > ovn we cannot define custom flows directly on ovs, so that approach is
>> > dated now. But hoping some similar feature can be added to ovn which can
>> > transpose some header field to geneve options.
>>
>> Thanks for the detail of what you're trying to do.
>>
>> I'm not sure how much you've looked into how OVN works.  OVN works by
>> defining the network in terms of "logical flows".  These logical flows
>> look similar to OpenFlow, but it talks about network resources in the
>> logical sense (not based on where they are physically located).  I think
>> we can implement SFC purely in the logical space.  So, most of the work
>> I think is in defining the northbound db schema and then converting that
>> into the right logical flows.  I looked at the API being proposed by the
>> networking-sfc project, and that's giving me a pretty good idea of what
>> the northbound schema could look like for OVN.
>>
>>
>> https://git.openstack.org/cgit/openstack/networking-sfc/tree/doc/source/api.rst
>>
>> The networking-sfc API talks about a "chain parameter".  That's where
>> NSH could come in.  The spec proposes "mpls" as something OVS can
>> already support.  Given a single VIF, we need a way to differentiate
>> traffic associated with different chains.  This is *VERY* similar to
>> what OVN is already doing with parent/child ports, originally intended
>> for the containers-in-VM use case.  This same concept seems to fit here
>> quite well.  Today, we only support VLAN IDs for this, but we could
>> extend it to support mpls, NSH, or whatever.
>>
>> Anyway, those are just my high level thoughts so far.  I haven't tried
>> to really dig into a detailed design yet.
>>
>> > I am trying something right now with ovn and will be attending ovs
>> > conference in nov. I am skipping openstack summit to attend something
>> > else in far-east during that time. But lets keep the discussion going
>> > and collaborate if you work on sfc.
>>
>> I look forward to meeting you in November!  :-)
>>
>> --
>> Russell Bryant
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>


Re: [openstack-dev] [Congress] Congress and Monasca Joint Session at Tokyo Design Summit

2015-10-01 Thread Fabio Giannetti (fgiannet)
Thanks a lot Tim.
I really appreciate.
Fabio

From: Tim Hinrichs
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, October 1, 2015 at 7:40 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Congress] Congress and Monasca Joint Session at 
Tokyo Design Summit

Hi Fabio,

The Congress team talked this over during our IRC meeting yesterday.  It looks 
like we can meet with your team during one of our meeting slots.  As far as I 
know the schedule for those meetings hasn't been set, but once it is I'll reach 
out (or you can) to discuss the day/time.

Tim

On Mon, Sep 28, 2015 at 2:51 PM Tim Hinrichs wrote:

Hi Fabio: Thanks for reaching out.  We should definitely talk at the summit.  I 
don't know if we can devote 1 of the 3 allocated Congress sessions to Monasca, 
but we'll talk it over during IRC on Wed and let you know.  Or do you have a 
session we could use for the discussion?  In any case, I'm confident we can 
make good progress toward integrating Congress and Monasca in Tokyo.  Monasca 
sounds interesting--I'm looking forward to learning more!

Congress team: if we could all quickly browse the Monasca wiki before Wed's 
IRC, that would be great:
https://wiki.openstack.org/wiki/Monasca

Tim



On Mon, Sep 28, 2015 at 1:50 PM Fabio Giannetti (fgiannet) wrote:
Tim and Congress folks,
  I am writing on behalf of the Monasca community and I would like to explore 
the possibility of holding a joint session during the Tokyo Design Summit.
We would like to explore:

  1.  how to integrate Monasca with Congress so then Monasca can provide 
metrics, logs and event data for policy evaluation/enforcement
  2.  How to leverage Monasca alarming to automatically notify about statuses 
that may imply policy breach
  3.  How to automatically (if possible) convert policies (or subparts) into 
Monasca alarms.

Please point me to a submission page if I have to create a formal proposal for 
the topic and/or let me know other forms we can interact at the Summit.
Thanks in advance,
Fabio


Fabio Giannetti
Cloud Innovation Architect
Cisco Services
fgian...@cisco.com
Phone: +1 408 527 1134
Mobile: +1 408 854 0020


Cisco Systems, Inc.
285 W. Tasman Drive
San Jose
California
95134
United States
Cisco.com









Re: [openstack-dev] [neutron][ovn] New cycle started. What are you up to, folks?

2015-10-01 Thread Russell Bryant
On 10/01/2015 09:45 AM, Ihar Hrachyshka wrote:
> Hi all,
> 
> I talked recently with several contributors about what each of us 
> plans for the next cycle, and found it’s quite useful to share 
> thoughts with others, because you have immediate yay/nay feedback, 
> and maybe find companions for next adventures, and what not. So
> I’ve decided to ask everyone what you see the team and you
> personally doing the next cycle, for fun or profit.
> 
> That’s like a PTL nomination letter, but open to everyone! :) No 
> commitments, no deadlines, just list random ideas you have in mind
> or in your todo lists, and we’ll all appreciate the huge pile of 
> awesomeness no one will ever have time to implement even if
> scheduled for Xixao release.
> 
> To start the fun, I will share my silly ideas in the next email.

Nice thread!

Here's a rough cut of what I have in mind.  My Neutron related
development time covers a few areas: Neutron, OVN itself, and the
networking-ovn plugin for Neutron.

For Neutron:

 - general code reviews.  I'm especially interested in reviewing the
   design and implementation of any new APIs people are adding.  Feel
   free to add me to reviews you think I could help with and I'll take
   a look.

 - plugin infrastructure.  Ihar mentioned in one of his items that
   there are features implemented as ML2 specific.  That has started
   to be a pain for networking-ovn.  For example, the provider networks
   extension is only implemented for ML2, and the data is only stored
   in an ML2 specific db table.  The db table and related code should
   be reusable by other plugins.  I'd like to help start to clean that
   up for the sake of other plugins.

For OVN and networking-ovn:

First, for anyone not already familiar with OVN, it is an effort
within the Open vSwitch project to build a virtual networking control
plane for OVS.  There are several projects that have implemented
something in this space (including Neutron).  OVN is intended to be a
new, common implementation of this functionality that can be reused in
many contexts, including by Neutron.  It includes a focus on
implementation of features taking advantage of the newest features of
OVS, including some still being added as we go.  There was a
presentation about this in Vancouver [1] and we'll be doing another
one covering current status in Tokyo [2].

This plugin is developed in parallel with OVN itself.  My time on each
changes week to week, depending on what I'm working on.  The dev items
I expect in the near future (this release cycle at least) include:

 - security groups.  This is being implemented using conntrack support
   in OVS.  There's actually WIP code for this including kernel
   changes, ovs userspace changes, OVN, and networking-ovn.  This is a
   complex stack of dependencies, but it's starting to fall into place.
    Most of the kernel changes have been accepted, though there's
   another change being reviewed now.  The OVS userspace changes have
   been under review in the last few weeks and are close to being
   merged.  Once that's done, we can finish up testing the OVN and
   networking-ovn patches.  We expect this to be done by Tokyo.

 - provider networks.  There's a lot of demand in OpenStack for
   supporting direct connectivity to existing networks as a simpler
   deployment model without any overlays.  I've done most of the work
   for both OVN and networking-ovn for this now and expect to have it
   wrapped up in the next week or so.

 - L3.  So far we've been using Neutron's L3 agent with networking-ovn.
   OVN will have native L3 support (will no longer use the L3 agent)
   and development on that has now started.  We aim to at least have
   initial distributed L3 support by Tokyo.  Some of it will certainly
   extend past that, though.  For example, NAT will be implemented with
   an OVS native solution, and that will likely not be ready by Tokyo.
   We may be able to deliver an intermediary NAT solution quicker.

 - SFC.  There's a ton of interest in SFC.  I've been casually following
   the networking-sfc project and the Neutron API they are proposing.
   I've also been thinking about how to implement it in OVN.  I think
   OVN's logical flows abstraction is going to make it surprisingly easy
   to implement.  I think I have a good idea of what needs to be done,
   but I'm not sure of when it will bubble up on my todo list.  I hope
   to work on it for this dev cycle though.  I'd first be implementing
   it in OVN, and then later doing the support for the networking-sfc
   API.

 - l2gw.  OVN already includes support for VTEP gateways.  We also just
   merged a patch to support them through the Neutron API using a
   networking-ovn specific binding:profile.  This is just a first step,
   though.  We'd like to support this with a proper Neutron abstraction.
   The networking-l2gw project is working on that abstraction so I'd
   like to help out there, expand it as necessary, and get an OVN
   backend for 

Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Ryan Moats
(With apologies to the Who)...

"Meet the new things, same as the old things"

DVR - let's make it real folks :)

Performance - I keep turning over rocks and finding things that just don't
make sense to me...

I suspect others will come a calling as we go...

Ryan Moats


Re: [openstack-dev] [Horizon] Selenium is now green - please pay attention to it

2015-10-01 Thread Jeremy Stanley
On 2015-10-01 18:38:49 + (+), Rob Cresswell (rcresswe) wrote:
> Also, please rebase patches to make sure they are passing selenium and
> integration, even if they have been previously verified.
> Yes, it's a little frustrating, but selenium is still non-voting so it
> will not block merges.

You shouldn't need to rebase your changes for this. Jobs are run
with proposed changes merged to the current target branch tip, so
just leaving a "recheck" review comment should be sufficient to
rerun the jobs and get updated results.
-- 
Jeremy Stanley
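What Jeremy describes can be simulated locally. The sketch below (all repo, branch, and file names are made up) shows that merging a proposed change onto the *current* branch tip picks up commits that landed after the change was written, which is why a plain "recheck" is enough and no manual rebase is needed:

```shell
# Rough local simulation of the CI merge step (names are invented).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name "CI"
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git

echo base > base.txt
git add base.txt && git commit -qm "initial"

# A proposed change, branched from an older tip.
git checkout -qb proposed
echo change > change.txt
git add change.txt && git commit -qm "proposed change"

# Meanwhile the target branch moves forward.
git checkout -q "$trunk"
echo newer > newer.txt
git add newer.txt && git commit -qm "trunk advances"

# CI tests the proposed change merged onto the *current* tip, so rerunning
# the job (a "recheck") already sees the newer trunk state.
git checkout -qb ci-test "$trunk"
git merge -q --no-edit proposed
ls change.txt newer.txt
```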



Re: [openstack-dev] [pecan] [mistral] Pecan python3 compatibility

2015-10-01 Thread Salvatore Orlando
Or, since many OpenStack projects now use Pecan, we could fix this
ourselves as a thank you note to Pecan developers!

Salvatore

On 1 October 2015 at 21:08, Ryan Petrello 
wrote:

> Yep, this definitely looks like a Python3-specific bug.  If you'll open a
> ticket, I'll take a look as soon as I get a chance :)!
>
>
> On 10/01/15 02:44 PM, Doug Hellmann wrote:
>
>> Excerpts from Nikolay Makhotkin's message of 2015-10-01 16:50:04 +0300:
>>
>>> Hi, pecan folks!
>>>
>>> I have an question for you about python3 support in pecan library.
>>>
>>> In Mistral, we are trying to fix the codebase for supporting python3, but
>>> we are not able to do this since we faced issue with pecan library. If
>>> you
>>> want to see the details, see this traceback - [1].
>>> (Actually, something is wrong with HooksController and walk_controller
>>> method)
>>>
>>> Does pecan officially support python3 (especially, python3.4 or
>>> python3.5)
>>> or not?
>>> I didn't find any info about that in pecan repository.
>>>
>>> [1] http://paste.openstack.org/show/475041/
>>>
>>>
>> The intent is definitely to support python 3. This sounds like a bug, so
>> I recommend opening a ticket in the pecan bug tracker.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [fuel][fuel-library] librarian workflows

2015-10-01 Thread Alex Schultz
Hey Fuel folks,

We have recently had some concerns around the librarian workflows and
custom patches to upstream modules.  I have updated the Fuel wiki[0]
with additional information to try and clarify how librarian is used
within fuel-library and what the rules are around the use of custom
upstream patches.

Additionally, I have created a script[1] to be used as part of Fuel
CI[2] to validate the Puppetfile based on the policies laid out on the
wiki page. Please take some time to review the wiki and the script and
provide feedback.

Thanks,
-Alex

[0] https://wiki.openstack.org/wiki/Fuel/Library_and_Upstream_Modules
[1] https://review.openstack.org/#/c/229605/
[2] https://review.fuel-infra.org/#/c/12344/
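For anyone curious what such a Puppetfile policy check might look like, here is a minimal illustrative sketch — not the actual script in [1] — that flags git-sourced modules lacking a pinned :ref. It assumes one `mod` entry per line, which real Puppetfiles don't guarantee:

```python
import re

# Policy sketch: every module pulled from git must pin an explicit :ref,
# so builds stay reproducible and custom patches trace to a commit.
MOD_RE = re.compile(r"^mod\s+'(?P<name>[^']+)'(?P<rest>.*)$")

def check_puppetfile(text):
    """Return names of modules whose git source lacks a pinned :ref."""
    violations = []
    for line in text.splitlines():
        m = MOD_RE.match(line.strip())
        if not m:
            continue
        rest = m.group('rest')
        if ':git' in rest and ':ref' not in rest:
            violations.append(m.group('name'))
    return violations

puppetfile = """
mod 'stdlib', :git => 'https://github.com/puppetlabs/puppetlabs-stdlib.git', :ref => '4.9.0'
mod 'ntp', :git => 'https://github.com/puppetlabs/puppetlabs-ntp.git'
"""
print(check_puppetfile(puppetfile))   # -> ['ntp']
```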



Re: [openstack-dev] [Fuel] Core Reviewers groups restructure

2015-10-01 Thread Dmitry Borodaenko
This commit brings Fuel ACLs in sync with each other and in line with
the agreement on this thread:
https://review.openstack.org/230195

Please review carefully. Note that I intentionally didn't touch any of
the plugins ACLs, primarily to save time for us and the
openstack-infra team until after the stackforge->openstack namespace
migration.

On Mon, Sep 21, 2015 at 4:17 PM, Mike Scherbakov wrote:
> Thanks guys.
> So for fuel-octane then there are no actions needed.
>
> For fuel-agent-core group [1], looks like we are already good (it doesn't
> have fuel-core group nested). But it would need to include fuel-infra group
> and remove Aleksandra Fedorova (she will be a part of fuel-infra group).
>
> python-fuel-client-core [2] is good as well (no nested fuel-core). However,
> there is another group python-fuelclient-release [3], which has to be
> eliminated, and main python-fuelclient-core would just have fuel-infra group
> included for maintenance purposes.
>
> [1] https://review.openstack.org/#/admin/groups/995,members
> [2] https://review.openstack.org/#/admin/groups/551,members
> [3] https://review.openstack.org/#/admin/groups/552,members
>
>
> On Mon, Sep 21, 2015 at 11:06 AM Oleg Gelbukh  wrote:
>>
>> FYI, we have a separate core group for stackforge/fuel-octane repository
>> [1].
>>
>> I'm supporting the move to modularization of Fuel with cleaner separation
>> of authority and better defined interfaces. Thus, I'm +1 to such a change as
>> a part of that move.
>>
>> [1] https://review.openstack.org/#/admin/groups/1020,members
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Sun, Sep 20, 2015 at 11:56 PM, Mike Scherbakov
>>  wrote:
>>>
>>> Hi all,
>>> As part of my larger proposal on improvements to code review workflow [1],
>>> we need cores per repository, not for the whole of Fuel. It is the path we
>>> have been taking for a while, with new core reviewers added to specific
>>> repos only. Now we need to complete this work.
>>>
>>> My proposal is:
>>>
>>> Get rid of one common fuel-core [2] group, members of which can merge
>>> code anywhere in Fuel. Some members of this group may cover a couple of
>>> repositories, but can't really be cores in all repos.
>>> Extend existing groups, such as fuel-library [3], with members from
>>> fuel-core who are keeping up with large number of reviews / merges. This
>>> data can be queried at Stackalytics.
>>> Establish a new group "fuel-infra", and ensure that it's included in
>>> every other core group. This is for maintenance purposes; it is expected to
>>> be used only in exceptional cases. The Fuel Infra team will have to decide
>>> whom to include in this group.
>>> Ensure that fuel-plugin-* repos will not be affected by removal of
>>> fuel-core group.
>>>
>>> #2 needs specific details. Stackalytics can show active cores easily; we
>>> can look at people marked with *:
>>> http://stackalytics.com/report/contribution/fuel-web/180. This is for
>>> fuel-web; change the link for other repos accordingly. If people were added
>>> specifically to a particular group, leave them as is (some of them are no
>>> longer active, but let's clean them up separately from this group
>>> restructure process).
>>>
>>> fuel-library-core [3] group will have following members: Bogdan D.,
>>> Sergii G., Alex Schultz, Vladimir Kuklin, Alex Didenko.
>>> fuel-web-core [4]: Sebastian K., Igor Kalnitsky, Alexey Kasatkin, Vitaly
>>> Kramskikh, Julia Aranovich, Evgeny Li, Dima Shulyak
>>> fuel-astute-core [5]: Vladimir Sharshov, Evgeny Li
>>> fuel-dev-tools-core [6]: Przemek Kaminski, Sebastian K.
>>> fuel-devops-core [7]: Tatyana Leontovich, Andrey Sledzinsky, Nastya
>>> Urlapova
>>> fuel-docs-core [8]: Irina Povolotskaya, Denis Klepikov, Evgeny
>>> Konstantinov, Olga Gusarenko
>>> fuel-main-core [9]: Vladimir Kozhukalov, Roman Vyalov, Dmitry Pyzhov,
>>> Sergii Golovatyuk, Vladimir Kuklin, Igor Kalnitsky
>>> fuel-nailgun-agent-core [10]: Vladimir Sharshov, V.Kozhukalov
>>> fuel-ostf-core [11]: Tatyana Leontovich, Nastya Urlapova, Andrey
>>> Sledzinsky, Dmitry Shulyak
>>> fuel-plugins-core [12]: Igor Kalnitsky, Evgeny Li, Alexey Kasatkin
>>> fuel-qa-core [13]: Andrey Sledzinsky, Tatyana Leontovich, Nastya Urlapova
>>> fuel-stats-core [14]: Alex Kislitsky, Alexey Kasatkin, Vitaly Kramskikh
>>> fuel-tasklib-core [15]: Igor Kalnitsky, Dima Shulyak, Alexey Kasatkin
>>> (this project seems to be dead; let's consider retiring it)
>>> fuel-specs-core: there is no such group at the moment. I propose to
>>> create one with the following members, based on stackalytics data [16]: Vitaly
>>> Kramskikh, Bogdan Dobrelia, Evgeny Li, Sergii Golovatyuk, Vladimir Kuklin,
>>> Igor Kalnitsky, Alexey Kasatkin, Roman Vyalov, Dmitry Borodaenko, Mike
>>> Scherbakov, Dmitry Pyzhov. We would need to reconsider who can merge after
>>> Fuel PTL/Component Leads elections
>>> fuel-octane-core: needs to be created. Members: Yury Taraday, Oleg
>>> Gelbukh, 

[openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Kevin L. Mitchell
It looks like Pillow (pulled in by blockdiag, pulled in by
sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
others) had a 3.0.0 release today, and now the gate is breaking because
libjpeg isn't available in the image…thoughts on how best to address
this problem?
-- 
Kevin L. Mitchell 
Rackspace




Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Clark Boylan
On Thu, Oct 1, 2015, at 03:48 PM, Kevin L. Mitchell wrote:
> It looks like Pillow (pulled in by blockdiag, pulled in by
> sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
> others) had a 3.0.0 release today, and now the gate is breaking because
> libjpeg isn't available in the image…thoughts on how best to address
> this problem?
Two changes are already in flight to address this.

The first updates global requirements to require an older version of
Pillow:
https://review.openstack.org/#/c/230167/
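For illustration, such a cap would take roughly this form in global-requirements.txt (the exact bound is whatever the review above settles on):

```text
# Cap Pillow until gate images carry the libjpeg headers it needs to build
Pillow<3.0.0
```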

The second updates the images for Jenkins slaves that run things like
unittests and doc builds:
https://review.openstack.org/#/c/230175/ This one will take more time to
get through as it requires new images be built.

It is also a possibility that devstack will need to install zlib and
libjpeg headers depending on whether or not sphinxcontrib is needed in
devstack (I think it is but protected now by constraints).

A completely different approach would be to not use blockdiag, but these
changes should get us moving along again and we can worry about making
it more correct later.

Clark



[openstack-dev] [Magnum]: PTL Voting is now open

2015-10-01 Thread Tony Breeds
If you are a Foundation individual member and had a commit in one of Magnum's
projects[0] over the Kilo-Liberty timeframe (September 18, 2014 06:00 UTC to
September 18, 2015 05:59 UTC) then you are eligible to vote. You should find
your email with a link to the Condorcet page to cast your vote in the inbox of
your gerrit preferred email[1].

What to do if you don't see the email and have a commit in at least one of the
programs having an election:
  * check the trash or spam folders of your gerrit Preferred Email address, in
case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project repos[0] and
  * email myself and Tristan[2] at the below email addresses. If we can confirm
that you are entitled to vote, we will add you to the voters list for the
appropriate election.

Our democratic process is important to the health of OpenStack, please exercise
your right to vote.
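The eligibility window above is easy to check programmatically. As an illustrative sketch (the dates come from this email; everything else is hypothetical):

```python
from datetime import datetime, timezone

# Kilo-Liberty electorate window from this email (UTC)
WINDOW_START = datetime(2014, 9, 18, 6, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2015, 9, 18, 5, 59, tzinfo=timezone.utc)

def eligible(commit_date):
    """True if a commit's author date falls inside the electorate window."""
    return WINDOW_START <= commit_date <= WINDOW_END

print(eligible(datetime(2015, 1, 1, tzinfo=timezone.utc)))   # True
print(eligible(datetime(2015, 9, 30, tzinfo=timezone.utc)))  # False
```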

Candidate statements/platforms can be found linked to Candidate names on this
page:
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates

Happy voting,

[0] The list of the program projects eligible for electoral status:
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections#n1151

[1] Sign into review.openstack.org:
Go to Settings > Contact Information.
Look at the email listed as your Preferred Email.
That is where the ballot has been sent.

[2] Tony's email: tony at bakeyournoodle dot com
Tristan's email: tdecacqu at redhat dot com

Yours Tony.




Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Tony Breeds
On Thu, Oct 01, 2015 at 05:48:25PM -0500, Kevin L. Mitchell wrote:
> It looks like Pillow (pulled in by blockdiag, pulled in by
> sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
> others) had a 3.0.0 release today, and now the gate is breaking because
> libjpeg isn't available in the image…thoughts on how best to address
> this problem?

For those playing along at home:
https://review.openstack.org/#/c/230167/

Yours Tony.




Re: [openstack-dev] [Magnum]: PTL Voting is now open

2015-10-01 Thread Anne Gentle
I want to thank personally and publicly our election officials for their
willingness to take care of this election. Nice work, Tony and Tristan.
Much appreciated.

Anne

On Thu, Oct 1, 2015 at 5:58 PM, Tony Breeds  wrote:

> If you are a Foundation individual member and had a commit in one of
> Magnum's
> projects[0] over the Kilo-Liberty timeframe (September 18, 2014 06:00 UTC
> to
> September 18, 2015 05:59 UTC) then you are eligible to vote. You should
> find
> your email with a link to the Condorcet page to cast your vote in the
> inbox of
> your gerrit preferred email[1].
>
> What to do if you don't see the email and have a commit in at least one of
> the
> programs having an election:
>   * check the trash or spam folders of your gerrit Preferred Email
> address, in
> case it went into trash or spam
>   * wait a bit and check again, in case your email server is a bit slow
>   * find the sha of at least one commit from the program project repos[0]
> and
>   * email myself and Tristan[2] at the below email addresses. If we can
> confirm
> that you are entitled to vote, we will add you to the voters list for
> the
> appropriate election.
>
> Our democratic process is important to the health of OpenStack, please
> exercise
> your right to vote.
>
> Candidate statements/platforms can be found linked to Candidate names on
> this
> page:
>
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>
> Happy voting,
>
> [0] The list of the program projects eligible for electoral status:
>
> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections#n1151
>
> [1] Sign into review.openstack.org:
> Go to Settings > Contact Information.
> Look at the email listed as your Preferred Email.
> That is where the ballot has been sent.
>
> [2] Tony's email: tony at bakeyournoodle dot com
> Tristan's email: tdecacqu at redhat dot com
>
> Yours Tony.
>
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com


Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Kevin L. Mitchell
On Thu, 2015-10-01 at 15:53 -0700, Clark Boylan wrote:
> On Thu, Oct 1, 2015, at 03:48 PM, Kevin L. Mitchell wrote:
> > It looks like Pillow (pulled in by blockdiag, pulled in by
> > sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
> > others) had a 3.0.0 release today, and now the gate is breaking because
> > libjpeg isn't available in the image…thoughts on how best to address
> > this problem?
> Two changes are already in flight to address this.
> 
> The first updates global requirements to require an older version of
> Pillow:
> https://review.openstack.org/#/c/230167/

Pillow is not explicitly listed in nova's requirements; would this still
be sufficient to unwedge the gate?

> The second updates the images for Jenkins slaves that run things like
> unittests and doc builds:
> https://review.openstack.org/#/c/230175/ This one will take more time to
> get through as it requires new images be built.
> 
> It is also a possibility that devstack will need to install zlib and
> libjpeg headers depending on whether or not sphinxcontrib is needed in
> devstack (I think it is but protected now by constraints).
> 
> A completely different approach would be to not use blockdiag, but these
> changes should get us moving along again and we can worry about making
> it more correct later.

-- 
Kevin L. Mitchell 
Rackspace




[openstack-dev] [operators] [cinder] Does anyone use Cinder XML API?

2015-10-01 Thread Ivan Kolodyazhny
Hi all,

I would like to ask if anybody uses Cinder XML API.

AFAIR, XML-related tests were removed from Tempest almost a year ago [1].
The XML API was removed from Nova last year [2] and isn't supported by Heat.
I didn't check the other projects.

For now, I don't know whether the Cinder XML API works or not. It's not
tested in the gate, except by some unit tests.

I proposed a blueprint [3] to remove it, but I would like to ask first if
anybody uses this API. We don't want to break any Cinder API users. We need
to decide whether to remove the XML API as Nova did, or to keep supporting
it and implement tests.


[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-November/051384.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052443.html
[3] https://blueprints.launchpad.net/cinder/+spec/remove-xml-api

Regards,
Ivan Kolodyazhny


Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Anita Kuno
On 10/01/2015 09:23 PM, Carlos Garza wrote:
> I fixed this on my local Ubuntu 14.04 box by doing "apt-get install
> libjpeg-dev".
> Can we just make that a low-level package dependency on the images in gate
> so that we can move forward?

The patch to do so has merged: https://review.openstack.org/#/c/230175/
Images are currently being rebuilt with the dependency.

This is moving forward. You should have seen the days when it took us hours
to find the issue and the fix.

Thanks,
Anita.


> 
> On 10/1/15, 5:48 PM, "Kevin L. Mitchell" 
> wrote:
> 
>> It looks like Pillow (pulled in by blockdiag, pulled in by
>> sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
>> others) had a 3.0.0 release today, and now the gate is breaking because
>> libjpeg isn't available in the image…thoughts on how best to address
>> this problem?
>> -- 
>> Kevin L. Mitchell 
>> Rackspace
>>
>>
> 
> 
> 




Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Robert Collins
On 2 October 2015 at 12:00, Kevin L. Mitchell
 wrote:
> On Thu, 2015-10-01 at 15:53 -0700, Clark Boylan wrote:
>> On Thu, Oct 1, 2015, at 03:48 PM, Kevin L. Mitchell wrote:
>> > It looks like Pillow (pulled in by blockdiag, pulled in by
>> > sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
>> > others) had a 3.0.0 release today, and now the gate is breaking because
>> > libjpeg isn't available in the image…thoughts on how best to address
>> > this problem?
>> Two changes are already in flight to address this.
>>
>> The first updates global requirements to require an older version of
>> Pillow:
>> https://review.openstack.org/#/c/230167/
>
> Pillow is not explicitly listed in nova's requirements; would this still
> be sufficient to unwedge the gate?

No. (Technically the gate isn't wedged, it's just broken... for a lot of folk :))

You will need to manually submit the same Pillow cap to Nova (with a
comment before it), like this:
https://review.openstack.org/#/c/230245/ (Thanks Tony).

Once we're all sorted out we can remove the direct reference to pillow.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Kuryr] tox -egenconfig not working

2015-10-01 Thread Michal Rostecki
Hi,

On Wed, Sep 30, 2015 at 2:27 PM, Vikas Choudhary
 wrote:
> Hi,
>
> I tried to generate a sample kuryr config using "tox -e genconfig", but it
> is failing:
>
> genconfig create: /home/vikas/kuryr/.tox/genconfig
> genconfig installdeps: -r/home/vikas/kuryr/requirements.txt,
> -r/home/vikas/kuryr/test-requirements.txt
> ERROR: could not install deps [-r/home/vikas/kuryr/requirements.txt,
> -r/home/vikas/kuryr/test-requirements.txt]
> ___ summary
> ___
> ERROR:   genconfig: could not install deps
> [-r/home/vikas/kuryr/requirements.txt,
> -r/home/vikas/kuryr/test-requirements.txt]
>
> 

Command "tox -e genconfig" is working perfectly for me. Please:
- ensure you have an up-to-date repo
- try removing the .tox/ directory and running the command again

>
> But if i run "pip install -r requirements.txt", its giving no error.

Does "pip install -r test-requirements.txt" give no error as well?

>
> How do I generate the sample config file? Please suggest.
>
>
> -Vikas

Regards,
Michal Rostecki



Re: [openstack-dev] [Neutron] Defect management

2015-10-01 Thread Armando M.
Hi neutrinos,

Whilst we go down the path of revising the way we manage/process bugs in
Neutron, and transition to the proposed model [1], I was wondering if I can
solicit some volunteers to screen the bugs outlined in [2]. It's only 24
bugs so it should be quick ;)

Btw, you can play with filters and Google Sheets insights to see how well
(or badly) we've done this week.

Cheers,
Armando

[1] https://review.openstack.org/#/c/228733/
[2]
https://docs.google.com/spreadsheets/d/1UpxSOsFKQWN0IF-mN0grFJJap-j-8tnZHmG4f3JYmIQ/edit#gid=1296831500

On 28 September 2015 at 23:06, Armando M.  wrote:

> Hi folks,
>
> One of the areas I would like to look into during the Mitaka cycle is
> 'stability' [1]. The team has done a great job improving test coverage, and
> at the same time increasing reliability of the product.
>
> However, regressions are always around the corner, and there is a huge
> backlog of outstanding bugs (800+ of new/confirmed/triaged/in progress
> actively reported) that pressure the team. Having these slip through the
> cracks or leave them lingering is not cool.
>
> To this aim, I would like to propose a number of changes in the way the
> team manage defeats, and I will be going through the process of proposing
> these changes via code review by editing [2] (like done in [3]).
>
> Feedback most welcome.
>
> Many thanks,
> Armando
>
>
> [1]
> http://git.openstack.org/cgit/openstack/election/tree/candidates/mitaka/Neutron/Armando_Migliaccio.txt#n25
> [2] http://docs.openstack.org/developer/neutron/policies/index.html
> [3] https://review.openstack.org/#/c/228733/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Carlos Garza
I fixed this on my local Ubuntu 14.04 box by doing "apt-get install
libjpeg-dev".
Can we just make that a low-level package dependency on the images in gate
so that we can move forward?

On 10/1/15, 5:48 PM, "Kevin L. Mitchell" 
wrote:

>It looks like Pillow (pulled in by blockdiag, pulled in by
>sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
>others) had a 3.0.0 release today, and now the gate is breaking because
>libjpeg isn't available in the image…thoughts on how best to address
>this problem?
>-- 
>Kevin L. Mitchell 
>Rackspace
>
>




[openstack-dev] [Ceilometer] Capturing isolated events

2015-10-01 Thread Sachin Manpathak
Can ceilometer-gurus advise?

[For Ceilometer/Juno]
Use case:
catch instance creation error notifications with ceilometer.

I tried the notifications as well as metering route, but haven't quite
figured out how to do this --

- Added a notification class to ceilometer/compute/notifications/instance.py
- Noticed that there is a class called "Instance" which already subscribes
to compute.instance.* events; disabled that by removing it from the egg
entry points (setup.cfg)
- Created a pipeline to define a meter for my notification class.

I noticed that the ceilometer-agent-notification service does not consume
any notifications with the above changes. I had to set
store_events=true in ceilometer.conf.

Now, ceilometer generates events, but they are of a generic type.

1. Should I change event_definitions.yaml to store event information in the
format that I want?
2. Is there a way to generate only the event I am interested in, rather than
all events by default?
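For question 1, an event definition entry along these lines might work — a hypothetical sketch of the event_definitions.yaml trait-mapping format, where the field paths depend on the actual notification payload:

```yaml
# Hypothetical entry mapping instance-creation error notifications to traits
- event_type: compute.instance.create.error
  traits:
    instance_id:
      fields: payload.instance_id
    tenant_id:
      fields: payload.tenant_id
    state:
      fields: payload.state
```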


Thanks,


Re: [openstack-dev] [Kuryr] tox -egenconfig not working

2015-10-01 Thread Carlos Garza

    If it's because of an error like "ValueError: --enable-jpeg requested
but jpeg not found, aborting", triggered by a dependency pull of
Python Pillow, then I got around it by doing an "apt-get install
libjpeg-dev" on my Ubuntu build.
I agree it seemed odd that it happened with tox but not with pip, but I
noticed it was during the documentation tests.


On 10/1/15, 7:20 PM, "Michal Rostecki"  wrote:

>Hi,
>
>On Wed, Sep 30, 2015 at 2:27 PM, Vikas Choudhary
> wrote:
>> Hi,
>>
>> I tried to generate a sample kuryr config using "tox -e genconfig", but it
>> is failing:
>>
>> genconfig create: /home/vikas/kuryr/.tox/genconfig
>> genconfig installdeps: -r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt
>> ERROR: could not install deps [-r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt]
>> ___
>>summary
>> ___
>> ERROR:   genconfig: could not install deps
>> [-r/home/vikas/kuryr/requirements.txt,
>> -r/home/vikas/kuryr/test-requirements.txt]
>>
>> 
>
>Command "tox -e genconfig" is working perfectly for me. Please:
>- ensure you have an up-to-date repo
>- try removing the .tox/ directory and running the command again
>
>>
>> But if i run "pip install -r requirements.txt", its giving no error.
>
>Does "pip install -r test-requirements.txt" give no error as well?
>
>>
>> How do I generate the sample config file? Please suggest.
>>
>>
>> -Vikas
>
>Regards,
>Michal Rostecki
>




Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-01 Thread Jay Lau
+1 for both! Welcome!

On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu  wrote:

> +1 for both. Welcome!
>
>
>
> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
> *Sent:* September-30-15 7:00 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>
>
>
> +1 from me for both Vilobh and Hua.
>
>
>
> Thanks,
>
> Dims
>
>
>
> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
> wrote:
>
> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
>
>
>
>
>
> --
>
> Davanum Srinivas :: https://twitter.com/dims
>
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-10-01 Thread Andrey Kurilin
>1) fix the client without a revert

I prefer to go with this option, since the fix is already done and pending review.

>This means that until people upgrade their
>client they lose access to this function on the server.

This applies to any of the proposed options.

On Thu, Oct 1, 2015 at 12:03 AM, Sean Dague  wrote:

> Today we attempted to branch devstack and grenade for liberty, and are
> currently blocked because in liberty with openstack client and
> novaclient, it's not possible to boot a server from volume using just
> the volume id.
>
> That's because of this change in novaclient -
> https://review.openstack.org/#/c/221525/
>
> That was done to resolve the issue that strong schema validation in Nova
> started rejecting the kinds of calls that novaclient was making for boot
> from volume, because the bdm 1 and 2 code was sharing common code and
> got a bit tangled up. So 3 bdm 2 params were being sent on every request.
>
> However, https://review.openstack.org/#/c/221525/ removed the ==1 code
> path. If you pass in just {"vda": "$volume_id"} the code falls through,
> volume id is lost, and nothing is booted. This is how the devstack
> exercises and osc recommends booting from volume. I expect other people
> might be doing that as well.
>
> There seem to be a few options going forward:
>
> 1) fix the client without a revert
>
> This would bring back a ==1 code path, which is basically just setting
> volume_id, and move on. This means that until people upgrade their
> client they lose access to this function on the server.
>
> 2) revert the client and loosen schema validation
>
> If we revert the client to the old code, we also need to accept the fact
> that novaclient has been sending 3 extra parameters to this API call
> since as long as people can remember. We'd need to relax the nova schema to
> let those in and just accept that people are going to pass those.
>
> 3) fix osc and the novaclient CLI to not use this code path. This will also
> require that everyone upgrade both of those to not explode in the common
> case of specifying boot from volume on the command line.
>
> I slightly lean towards #2 on a compatibility front, but it's a chunk of
> change at this point in the cycle, so I don't think there is a clear win
> path. It would be good to collect opinions here. The bug tracking this
> is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>



-- 
Best regards,
Andrey Kurilin.
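For context, the removed "==1" code path Sean describes can be sketched roughly like this — an illustrative reconstruction, not the actual novaclient code (which also handles "id:type:size:delete-on-terminate" suffixes and many other cases):

```python
def parse_block_device_mapping(bdm):
    """Sketch of the legacy path: {"vda": "<volume_id>"} boots from that volume.

    Only the single-entry, bare-volume-id form is covered here; the field
    names below are illustrative, not the exact API payload.
    """
    if len(bdm) == 1:
        device, spec = next(iter(bdm.items()))
        # Drop any ":type:size:delete" suffix, keeping just the volume id
        volume_id = spec.split(':')[0]
        return [{'device_name': device,
                 'uuid': volume_id,
                 'source_type': 'volume',
                 'destination_type': 'volume',
                 'boot_index': 0}]
    raise ValueError('only the single-entry legacy form is sketched here')

print(parse_block_device_mapping({'vda': 'my-volume-id'}))
```

Dropping this branch means {"vda": "$volume_id"} falls through, the volume id is lost, and nothing is booted — which is exactly the failure devstack hit.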


[openstack-dev] What's Up, Doc? 2 October

2015-10-01 Thread Lana Brindley

Hi everyone,

Welcome to October, which means: it's nearly release day! I've been
brushing up on the release management tasks, and continued my
housekeeping duties to make sure we're in shape. The next two weeks will
be spent watching the Installation Guide pretty closely, and making sure
we get it all tested and ready to ship. If you are looking for something
to do this week, please treat Install Guide testing as your main
priority from here until 15 October. Also, don't forget to vote in the
TC election next week!

== Progress towards Liberty ==

12 days to go

534 bugs closed so far for this release.

The main things that still need to be done as we hurtle towards release:
* Testing, testing, testing:
https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting
* Reviews:
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals,n,z
and
https://review.openstack.org/#/q/status:open+project:openstack/api-site,n,z
* Bug triage:
https://bugs.launchpad.net/openstack-manuals/+bugs?search=Search&field.status=New

== Mitaka Summit Prep ==

Thank you to everyone who provided suggestions for Design Summit
sessions. I've now mangled them into a draft schedule, which looks a
little like this:

Workrooms (Thursday AM):
- Docs toolchain/infra info session
- IA working session - Ops/Arch Guides & User Guides
- Doc Contributor Guide: new content addition
- API Docs Session

Fishbowls (Thursday PM):
- The docs process, plus best practices
- Mitaka planning - deliverables for the next release

Meetup (Friday AM):
- Contributors meetup. No agenda. We can assist with setting up
devstack/working through the Install Guide during this session

Once I have access to Cheddar and Sched, I'll get this allocated more
accurately. In the meantime, feedback or questions are welcome.

== Bug Triage Process ==

The core team have been noticing a lot of people trying to close bugs
that haven't been triaged recently. It is very important that you have
someone else confirm your bug before you go ahead and create a patch.
Waiting just a little while for triage can save you (and our core team!)
a lot of work, and makes sure we don't let wayward changes past the
gate. If you're unsure about whether you need to wait or not, or if
you're in a hurry to work on a bug, you can always ping one of the core
team on IRC or on our mailing list. The core team members are listed
here, and we're always happy to help you out with a bug or a patch:
https://launchpad.net/~openstack-doc-core

This is even more important than usual as we get closer to the Liberty
release, so remember: If it's not triaged, don't fix it!

== Doc team meeting ==

The APAC meeting was held this week. The minutes are here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-09-30

The next meetings will be the final meetings before the Liberty release:
US: Wednesday 7 October, 14:00:00 UTC
APAC: Wednesday 14 October, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openst...@lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] New cycle started. What are you up to, folks?

2015-10-01 Thread Gal Sagie
This is going to be a busy cycle for me, but with many exciting and
interesting topics:

*Kuryr*

Kuryr is starting to gain interest from the community and is going to be
demoed and presented at OpenStack Tokyo. I think all this positive
feedback means we are on the right track, and we have a very busy
roadmap for Mitaka.

   1) The Kuryr spec in the Neutron repository [1] has been accepted by
most of the Magnum team and the Kuryr team. We are planning to demo the
first milestone in Tokyo
   2) Integration with Magnum and support for nested containers inside
VMs
   3) Containerised Neutron plugins using Kolla
   4) Finalising the generic VIF binding layer for OVS (the reference
implementation), which can also be used by OVN
   5) Testing and stability
   6) Kubernetes integration

*Neutron*

   1) Add tags to Neutron resources spec [2] (hopefully it gets accepted
by the community)
   2) Port forwarding in Neutron [3] (still needs work to finalise the
design details and spec). In my view this is a much-needed feature; one
of its use cases is supporting Docker port mapping for Kuryr
   3) Mentoring program - I would like to present an initiative which
will help both new contributors and experienced members find each other
and delegate tasks based on mutual interest. I hope this will help both
sides in the long run and will simplify the process for everyone.

*Dragonflow*

The Dragonflow Liberty release already works as a distributed control
plane across all compute nodes and implements L2, distributed L3 and
distributed DHCP.
It has a pluggable DB layer which already supports different backends
(etcd, RethinkDB, OVSDB and RAMCloud), and the process of integrating a
new DB framework is very simple and easy.
Dragonflow aims at solving some of the known scale/performance/HA/latency
problems in today's SDN.

   1) There are some new people joining and starting to work on the
project (some remotely), which means I will have to do some coaching and
helping of new contributors. New people interested in the project are
always welcome.
   2) The pluggable DB layer currently supports full pro-activeness (all
data is synced with all nodes). In the next cycle we plan to support
selective pro-activeness (meaning each node will only be synced with the
relevant data, depending on the virtual network topology), and a
proactive-reactive approach with a local cache, which means not all data
is fully distributed but is queried on demand and cached on the local
node.
   3) Support for missing Neutron APIs (provider networks)
   4) Extendability - we have some interesting ideas about how external
applications or network functions can control parts of the Dragonflow
pipeline without code changes, and implement them in a distributed
manner (using OpenFlow)
   5) Smart NICs - leveraging HW offloading and current acceleration
techniques with an adjusted Dragonflow pipeline (we already have some
partnerships going on in this area)
   6) Distributed SNAT/DNAT
   7) Scale/performance/testing - using projects like stackforge/shaker
to deploy Dragonflow on large-scale deployments

*OVN*

   1) OVN integration with Kuryr
   2) L3 in OVN
   3) Fault detection and management/monitoring in OVN
   4) Testing and picking up on tasks
   5) Hopefully sharing some of the Dragonflow concepts (pluggable DB,
extensibility) with OVN

*Blogging*

Sharing information and status on my blog [4] is very important in my
view, and I hope to continue doing so for all the above topics,
hopefully providing more visibility and help to the community around
these projects.

[1] https://review.openstack.org/#/c/213490/
[2] https://review.openstack.org/#/c/216021/
[3] https://review.openstack.org/#/c/224727/
[4] http://galsagie.github.io


On Thu, Oct 1, 2015 at 10:32 PM, Ryan Moats  wrote:

> (With apologies to the Who)...
>
> "Meet the new things, same as the old things"
>
> DVR - let's make it real folks :)
>
> Performance - I keep turning over rocks and finding things that just don't
> make sense to me...
>
> I suspect others will come a calling as we go...
>
> Ryan Moats
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][all] Pillow breaking gate?

2015-10-01 Thread Michael Still
I've +2'ed Tony's Nova patch, but it would be nice to get another core to
take a look as well.

Michael

On Fri, Oct 2, 2015 at 12:34 PM, Robert Collins 
wrote:

> On 2 October 2015 at 12:00, Kevin L. Mitchell
>  wrote:
> > On Thu, 2015-10-01 at 15:53 -0700, Clark Boylan wrote:
> >> On Thu, Oct 1, 2015, at 03:48 PM, Kevin L. Mitchell wrote:
> >> > It looks like Pillow (pulled in by blockdiag, pulled in by
> >> > sphinxcontrib-seqdiag, in test-requirements.txt of nova and probably
> >> > others) had a 3.0.0 release today, and now the gate is breaking
> because
> >> > libjpeg isn't available in the image…thoughts on how best to address
> >> > this problem?
> >> Two changes are already in flight to address this.
> >>
> >> The first updates global requirements to require an older version of
> >> Pillow:
> >> https://review.openstack.org/#/c/230167/
> >
> > Pillow is not explicitly listed in nova's requirements; would this still
> > be sufficient to unwedge the gate?
>
> No. (Technically the gate isn't wedged, it's just broken... for a lot of
> folk :))
>
> You will need to manually submit the same Pillow cap to Nova (with a
> comment before it), like this:
> https://review.openstack.org/#/c/230245/ (Thanks Tony).
>
> Once we're all sorted out we can remove the direct reference to pillow.
>
> -Rob
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
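For reference, a version cap like the one discussed above looks something
like this in a project's requirements file (a hypothetical fragment; the
exact bound is whatever the linked reviews settled on):

```
# Pillow 3.0.0 needs libjpeg, which the gate images don't currently
# provide; cap it until the images are updated.
Pillow<3.0.0
```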



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][horizon] adding Doug Fish to horizon stable-maint

2015-10-01 Thread Matthias Runge
Hello,

I would like to propose adding

Doug Fish (doug-fish)

to the horizon-stable-maint team.

I'd volunteer to introduce him to the stable branch policy.

Matthias
--
Matthias Runge 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-10-01 Thread Nikola Đipanov
On 09/30/2015 10:45 PM, Andrew Laski wrote:
> On 09/30/15 at 05:03pm, Sean Dague wrote:
>> Today we attempted to branch devstack and grenade for liberty, and are
>> currently blocked because in liberty with openstack client and
>> novaclient, it's not possible to boot a server from volume using just
>> the volume id.
>>
>> That's because of this change in novaclient -
>> https://review.openstack.org/#/c/221525/
>>
>> That was done to resolve the issue that strong schema validation in Nova
>> started rejecting the kinds of calls that novaclient was making for boot
> >> from volume, because the BDM v1 and v2 code was sharing common code and
> >> got a bit tangled up. So 3 BDM v2 params were being sent on every request.
>>
>> However, https://review.openstack.org/#/c/221525/ removed the ==1 code
>> path. If you pass in just {"vda": "$volume_id"} the code falls through,
>> volume id is lost, and nothing is booted. This is how the devstack
>> exercises and osc recommends booting from volume. I expect other people
>> might be doing that as well.
>>
>> There seem to be a few options going forward:
>>
>> 1) fix the client without a revert
>>
> >> This would bring back a ==1 code path, which is basically just setting
> >> volume_id, and moving on. This means that until people upgrade their
> >> client they lose access to this function on the server.
>>
> >> 2) revert the client and loosen up schema validation
>>
>> If we revert the client to the old code, we also need to accept the fact
>> that novaclient has been sending 3 extra parameters to this API call
>> since as long as people can remember. We'd need a nova schema relax to
>> let those in and just accept that people are going to pass those.
>>
>> 3) fix osc and novaclient cli to not use this code path. This will also
>> require everyone upgrades both of those to not explode in the common
>> case of specifying boot from volume on the command line.
>>
>> I slightly lean towards #2 on a compatibility front, but it's a chunk of
>> change at this point in the cycle, so I don't think there is a clear win
>> path. It would be good to collect opinions here. The bug tracking this
>> is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435
> 
> I have a slight preference for #1.  Nova is not buggy here, novaclient is,
> so I think we should contain the fix there.
> 

+1 - this is obviously a client bug

> Is using the v2 API an option?  That should also allow the 3 extra
> parameters mentioned in #2.
> 

This could be a short term solution I guess, but long term we want to be
testing the code that is there to stay so really we want to fix the
client ASAP.

N.
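To make the "==1" code path concrete for readers following along: the
legacy client-side parsing looked roughly like the sketch below. This is a
hypothetical illustration, not the actual novaclient code (the real fix
lives in the reviews linked above); the field names are assumptions.

```python
def parse_legacy_bdm(device, spec):
    """Hypothetical sketch of legacy block-device-mapping parsing.

    'spec' has the form '<volume-id>[:<type>[:<size>[:<delete-on-term>]]]',
    so {"vda": "<volume-id>"} is the one-element case discussed above.
    """
    parts = spec.split(':')
    bdm = {'device_name': device, 'volume_id': parts[0]}
    if len(parts) == 1:
        # The '==1' path: only a volume id was given. Removing this branch
        # is what let the volume id fall through and get lost.
        return bdm
    if parts[1]:
        bdm['source_type'] = parts[1]
    if len(parts) > 2 and parts[2]:
        bdm['volume_size'] = int(parts[2])
    if len(parts) > 3 and parts[3]:
        bdm['delete_on_termination'] = parts[3] in ('1', 'True', 'true')
    return bdm

print(parse_legacy_bdm('vda', 'vol-123'))
# → {'device_name': 'vda', 'volume_id': 'vol-123'}
```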


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][shotgun] do we still use subs?

2015-10-01 Thread Igor Kalnitsky
Hey Alex,

I'm +1 for removal. I don't think shotgun should be responsible for
log sanitizing, because sensitive data shouldn't get into the logs in
the first place.

Thanks,
Igor
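For reference, the kind of substitution Subs performs (replacing
credentials/hostnames/IPs with meaningless values, per the original mail
below) can be sketched roughly like this. This is a hypothetical
illustration, not the actual shotgun driver code; the patterns and
placeholders are assumptions.

```python
import re

# Hypothetical substitution table: each entry maps a pattern for
# sensitive data to a meaningless placeholder.
SUBS = [
    (re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b'), 'x.x.x.x'),      # IPv4 addresses
    (re.compile(r'(password\s*[:=]\s*)\S+', re.I), r'\1******'),  # credentials
]

def sanitize(line):
    """Apply every substitution pattern to a single log line."""
    for pattern, replacement in SUBS:
        line = pattern.sub(replacement, line)
    return line

print(sanitize('auth to 10.20.0.2 failed: password=s3cret'))
# → auth to x.x.x.x failed: password=******
```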

On Wed, Sep 30, 2015 at 4:05 PM, Alexander Gordeev
 wrote:
> Hello fuelers,
>
> My question is related to shotgun tool[1] which will be invoked in
> order to generate the diagnostic snapshot.
>
> It has possibilities to substitute particular sensitive data such as
> credentials/hostnames/IPs/etc with meaningless values. It's done by
> Subs [2] object driver.
>
> However, it seems that subs is not used anymore. Well, at least it was
> turned off by default for fuel 5.1 [3] and newer. I wasn't able to find
> any traces of its usage in the code in the fuel-web repo.
>
> Seems that this piece of code for subs could be ditched. Even more, it
> should be ditched, as it looks like a fifth wheel from the project
> architecture point of view. Shotgun is totally about getting the
> actual logs, not about corrupting them unpredictably with sed
> scripts.
>
> Proper log sanitization is another story entirely. I doubt it
> could be fitted into shotgun while being effective and/or well designed
> at the same time.
>
> Perhaps I missed something and subs is still being used actively.
> So, folks, don't hesitate to respond if you know something which helps
> to shed light on subs.
>
> Let's discuss anything related to subs, or even vote on its removal.
> Maybe we need to wait another 2 years until we can
> finally get rid of it.
>
> Let me know your thoughts.
>
> Thanks!
>
>
> [1] https://github.com/stackforge/fuel-web/tree/master/shotgun
> [2] 
> https://github.com/stackforge/fuel-web/blob/master/shotgun/shotgun/driver.py#L165-L233
> [3] 
> https://github.com/stackforge/fuel-web/blob/stable/5.1/nailgun/nailgun/settings.yaml
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

