[openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Steven Dake (stdake)
Hi folks,

I would like to propose Lei Zhang for our core reviewer team.  Count this
proposal as a +1 vote from me.  Lei has done a fantastic job in his reviews
over the last 6 weeks and has managed to produce some really nice
implementation work along the way.  He participates in IRC regularly, and has a
commitment from his management team at his employer to work full time, 100%
committed to Kolla, for the foreseeable future (although things can always
change :)

Please vote +1 if you approve of Lei for core reviewer, or -1 if you wish to
veto his nomination.  Remember, just one -1 vote is a complete veto, so if
you're on the fence, another option is to abstain from voting.

As our core team has grown, I would like to change from our current requirement
of 3 votes to a simple majority of core reviewers with no veto votes.  As we
have 9 core reviewers, this means Lei requires 4 more +1 votes, with no veto
vote within the voting window, to join the core reviewer team.

I will leave the voting open for 1 week, until January 26th, as is the case
with our other core reviewer nominations.  If the vote is unanimous, or there is
a veto vote before January 26th, I will close voting early.  I'll make the
appropriate changes to gerrit permissions if Lei is voted into the core
reviewer team.

Thank you for your time in evaluating Lei for the core review team.

Regards
-steve



Re: [openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-19 Thread Thierry Carrez

Kyle Mestery wrote:

One question I have is, what should the version for projects be? For
example, for Neutron, M1 was set to 8.0.0.0b1. Should the M2 Neutron
milestone be 8.0.0.0c1? Or 8.0.0.0b2?


Good question! It should be X.0.0.0b2, so 8.0.0.0b2 for Neutron.
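As a quick sanity check of that ordering: the milestone tags are PEP 440
pre-release versions, so b2 sorts after b1 and before the final release. A
small sketch (the `packaging` library is used here purely for illustration):

    # Confirm the PEP 440 ordering of the milestone tags.
    from packaging.version import Version

    assert Version("8.0.0.0b1") < Version("8.0.0.0b2")  # M1 sorts before M2
    assert Version("8.0.0.0b2") < Version("8.0.0")      # betas sort before the final release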
Cheers!

--
Thierry Carrez (ttx)



Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Paul Bourke
I was also thinking of adding Lei. He was a big help to me only recently 
with fixing plugin support and adding unit test support to build.py. +1 
from me.


-Paul

On 19/01/16 08:26, Steven Dake (stdake) wrote:

Hi folks,

I would like to propose Lei Zhang for our core reviewer team.  Count
this proposal as a +1 vote from me.  Lei has done a fantastic job in his
reviews over the last 6 weeks and has managed to produce some really
nice implementation work along the way.  He participates in IRC
regularly, and has a commitment from his management team at his employer
to work full time 100% committed to Kolla for the foreseeable future
(although things can always change in the future :)

Please vote +1 if you approve of Lei for core reviewer, or –1 if you wish to
veto his nomination.  Remember, just one –1 vote is a complete veto, so
if you're on the fence, another option is to abstain from voting.

I would like to change from our 3 votes required, as our core team has
grown, to requiring a simple majority of core reviewers with no veto
votes.  As we have 9 core reviewers, this means Lei requires 4 more  +1
votes with no veto vote in the voting window to join the core reviewer team.

I will leave the voting open for 1 week as is the case with our other
core reviewer nominations until January 26th.  If the vote is unanimous
or there is a veto vote before January 26th I will close voting.  I'll
make appropriate changes to gerrit permissions if Lei is voted into the
core reviewer team.

Thank you for your time in evaluating Lei for the core review team.

Regards
-steve





Re: [openstack-dev] [Fuel] How to auto allocate VIPs for roles in different network node groups?

2016-01-19 Thread Aleksandr Didenko
Hi,

I would also prefer the second solution. The only real downside of it is the
possibility of configuring an invalid cluster (for instance, configuring default
"controller" roles in different racks). But such an invalid configuration is
possible only under some conditions:
- The user has configured a multi-rack environment (network node groups). I'd
say this is rather advanced Fuel usage and the user will most likely follow our
documentation, so we can describe the possible problems there.
- The user has ignored notifications about possible problems from Fuel. I
must say that this is quite possible when using the CLI, because notifications
have to be checked manually in this case.

Solution #1 is much safer, of course. But to me it looks like "let's
forbid as much as we can just to avoid any risks". I prefer to give Fuel
users a choice here, which is possible only with the second solution.

> What if neither of node is in default group? Still use default group?
> And prey that some third-party plugin will handle this case properly?

No, let's show a warning to the user. I don't think that forbidding is the
proper way of handling such situations, especially when we're not going to
forbid such a setup in 9.0.

> Default is just pre-created nodegroup and that's it, so there's nothing
special in it.

Not quite. The default group is the group that the Fuel node is connected to.

> We don't support load-balancing for nodes in different racks out-of-box.

True. But we're going to block deployment of roles that share a VIP (created
by a plugin, for instance) even when no load-balancing is involved at all -
just to be safe.
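To make the preferred behaviour concrete, here is a minimal sketch of option #2
as I read it (all names below are made up for illustration; this is not Nailgun
code): allocate from the rack's own network when possible, otherwise fall back
to the default nodegroup and warn instead of failing:

    class NodeGroup(object):
        def __init__(self, name, free_ips):
            self.name = name
            self.free_ips = list(free_ips)

        def allocate_vip(self):
            return self.free_ips.pop(0)

    def auto_allocate_vip(role_nodegroups, default_nodegroup, warn):
        groups = set(role_nodegroups)
        if len(groups) == 1:
            # All roles sharing the VIP sit in one rack: use that rack's network.
            return groups.pop().allocate_vip()
        # Roles span several racks: allocate from the default nodegroup and
        # warn the user instead of blocking the deployment.
        warn("Roles sharing this VIP span several nodegroups; allocating the "
             "VIP from the default nodegroup '%s'." % default_nodegroup.name)
        return default_nodegroup.allocate_vip()

    default = NodeGroup("default", ["10.20.0.2", "10.20.0.3"])
    rack1 = NodeGroup("rack-1", ["10.30.0.2"])
    print(auto_allocate_vip([default, rack1], default, warn=print))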

Regards,
Alex

On Fri, Jan 15, 2016 at 10:50 AM, Bogdan Dobrelya 
wrote:

> On 15.01.2016 10:19, Aleksandr Didenko wrote:
> > Hi,
> >
> > We need to come up with some solution for a problem with VIP generation
> > (auto allocation), see the original bug [0].
> >
> > The main problem here is: how do we know what exactly IPs to auto
> > allocate for VIPs when needed roles are in different nodegroups (i.e. in
> > different IP networks)?
> > For example 'public_vip' for 'controller' roles.
> >
> > Currently we have two possible solutions.
> >
> > 1) Fail early in pre-deployment check (when user hit "Deploy changes")
> > with error about inability to auto allocate VIP for nodes in different
> > nodegroups (racks). So in order to run deploy user has to put all roles
> > with the same VIPs in the same nodegroups (for example: all controllers
> > in the same nodegroup).
> >
> > Pros:
> >
> >   * VIPs are always correct, they are from the same network as nodes
> > that are going to use them, thus user simply can't configure invalid
> > VIPs for cluster and break deployment
> >
> > Cons:
> >
> >   * hardcoded limitation that is impossible to bypass, does not allow to
> > spread roles with VIPs across multiple racks even if it's properly
> > handled by Fuel Plugin, i.e. made so by design
>
> That'd be no good at all.
>
> >
> >
> > 2) Allow to move roles that use VIPs into different nodegroups, auto
> > allocate VIPs from "default" nodegroup and send an alert/notification to
> > user that such configuration may not work and it's up to user how to
> > proceed (either fix config or deploy at his/her own risk).
>
> It seems we have not much choice then, but use the option 2
>
> >
> > Pros:
> >
> >   * relatively simple solution
> >
> >   * impossible to break VIP serialization because in the worst case we
> > allocate VIPs from default nodegroup
> >
> > Cons:
> >
> >   * user can deploy invalid environment that will fail during deployment
> > or will not operate properly (for example when public_vip is not
> > able to migrate to controller from different rack)
> >
> >   * which nodegroup to choose to allocate VIPs? default nodegroup?
> > random pick? in case of random pick troubleshooting may become
> > problematic
>
> Random choices aren't good IMHO, let's use defaults.
>
> >
> >   * waste of IPs - IP address from the network range will be implicitly
> > allocated and marked as used, even it's not used by deployment
> > (plugin uses own ones)
> >
> >
> > *Please also note that this solution is needed for 8.0 only.*In 9.0 we
> > have new feature for manual VIPs allocation [1]. So in 9.0, if we can't
> > auto allocate VIPs for some cluster configuration, we can simply ask
> > user to manually set those problem VIPs or move roles to the same
> > network node group (rack).
> >
> > So, guys, please feel free to share your thoughts on this matter. Any
> > input is greatly appreciated.
> >
> > Regards,
> > Alex
> >
> > [0] https://bugs.launchpad.net/fuel/+bug/1524320
> > [1] https://blueprints.launchpad.net/fuel/+spec/allow-any-vip
> >
> >
> >
> >

Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread Premysl Kouril
> I'm not a Nova developer. I am interesting in clarifying what you are
> asking.
>
> Are you asking for current Nova developers to work on this feature? Or
> is your company interested in having your developers interact with Nova
> developers?
>
> Thank you,
> Anita.


Both. We are first trying the "Are you asking for current Nova
developers to work on this feature?" route, and if we don't find anybody we
will move on to "is your company interested in having your developers
interact with Nova developers".

Thanks,
Prema



Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Clark, Robert Graham
+1

Doing this, and doing this well, provides critical functionality to OpenStack 
while keeping said functionality reasonably decoupled from the COE API vagaries 
that would inevitably encumber a solution that sought to provide ‘one api to 
control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally and effectively requires
the usage of a dedicated COE tool such as Swarm, Kubernetes or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R
email: mike.met...@rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is that the container abstraction brings containers into OpenStack: Keystone
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu 
> wrote:
Disagree.

If the container-managing part is removed, Magnum is just a COE deployment
tool. This is really a scope mismatch IMO. The middle ground I can see is to
have a flag that allows operators to turn off the container-managing part. If
it is turned off, COEs are not managed by Magnum and requests sent to the
/containers endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral 
[mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers
starts to get into a gray area that could easily lead to:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long
run, seeing how all COEs operate on and are based on various different
paradigms for describing & managing containers, and this divergence will only
continue to grow with time.
  *   Not to mention, recreating functionality around managing containers in
Magnum seems redundant, as this is the very reason to want to use a COE in the
first place – because it's a better-suited tool for the task.

If there is low-hanging fruit in terms of common functionality across all
COEs, then those generic capabilities could be abstracted and integrated into
Magnum, but these have to be carefully examined beforehand to ensure true
parity exists for each capability across all COEs.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace

From: Hongbin Lu >
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a
native API call. In particular, here is how it works.

If a user creates a container through the Magnum endpoint, Magnum will do the
following (see the sketch below):
1.   Generate a uuid (if not provided).
2.   Call the Docker Swarm API to create a container, with its hostname equal
to the generated uuid.
3.   Persist the container to the DB with the generated uuid.
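A rough sketch of that flow (illustrative only, not Magnum's actual code; the
Swarm endpoint URL and image name are stand-ins, and the HTTP call below is
the plain Docker Remote API):

    import uuid

    import requests

    SWARM_API = "http://swarm-master:2375"  # stand-in for the bay's endpoint

    def create_container(image, magnum_uuid=None):
        magnum_uuid = magnum_uuid or str(uuid.uuid4())        # step 1
        resp = requests.post(                                 # step 2
            SWARM_API + "/containers/create",
            params={"name": magnum_uuid},
            json={"Image": image, "Hostname": magnum_uuid},
        )
        resp.raise_for_status()
        docker_id = resp.json()["Id"]
        persist_to_db(magnum_uuid, docker_id)                 # step 3
        return magnum_uuid

    def persist_to_db(magnum_uuid, docker_id):
        # Stand-in for Magnum's DB layer: record the uuid <-> Docker id mapping.
        print("mapped %s -> %s" % (magnum_uuid, docker_id))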

If users perform an operation on an existing 

Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-19 Thread Tony Breeds
On Mon, Jan 18, 2016 at 04:47:29PM -0600, Matt Riedemann wrote:

> Tony is now part of the nova-stable-maint core team. Congrats Tony!

Thanks so much to you Matt for mentoring me.  Also thanks to those that
supported my inclusion, even if I do speak funny ;D

I owe some people Beer/Coffee/Burgers.

Y'all can collect in Bristol next week :)

Yours Tony.




Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Michal Rostecki

On 01/19/2016 10:20 AM, Paul Bourke wrote:

I was also thinking of adding Lei. He was a big help to me only recently
with fixing plugin support and adding unit test support to build.py. +1
from me.

-Paul

On 19/01/16 08:26, Steven Dake (stdake) wrote:

Hi folks,

I would like to propose Lei Zhang for our core reviewer team.  Count
this proposal as a +1 vote from me.  Lei has done a fantastic job in his
reviews over the last 6 weeks and has managed to produce some really
nice implementation work along the way.  He participates in IRC
regularly, and has a commitment from his management team at his employer
to work full time 100% committed to Kolla for the foreseeable future
(although things can always change in the future :)

Please vote +1 if you approve of Lei for core reviewer, or –1 if you wish to
veto his nomination.  Remember, just one –1 vote is a complete veto, so
if you're on the fence, another option is to abstain from voting.

I would like to change from our 3 votes required, as our core team has
grown, to requiring a simple majority of core reviewers with no veto
votes.  As we have 9 core reviewers, this means Lei requires 4 more  +1
votes with no veto vote in the voting window to join the core reviewer
team.

I will leave the voting open for 1 week as is the case with our other
core reviewer nominations until January 26th.  If the vote is unanimous
or there is a veto vote before January 26th I will close voting.  I'll
make appropriate changes to gerrit permissions if Lei is voted into the
core reviewer team.

Thank you for your time in evaluating Lei for the core review team.

Regards
-steve



+1

He did an amazing job on plugins, unit tests and introducing oslo.config.



Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-19 Thread Kekane, Abhishek
> Hi Abishek,

> In my understanding, hacking check is enabled for most (or all) of
> python-*client.
> For example, flake8 is run for each neutronclient review [1].
> test-requirements installs hacking, so I believe hacking check is enabled.
> openstackclient and novaclient do the same [2] [3].
> Am I missing something?

Hi Akihiro Motoki,

Individual OpenStack projects have a separate hacking module (e.g.
nova/hacking/checks.py) which contains additional rules on top of the standard
PEP8 errors/warnings.
Can we do the same in the python-*clients in a similar way?

Abhishek

> [1] 
> http://git.openstack.org/cgit/openstack/python-neutronclient/tree/tox.ini#n26
> [2] 
> http://git.openstack.org/cgit/openstack/python-openstackclient/tree/tox.ini#n15
> [3] http://git.openstack.org/cgit/openstack/python-novaclient/tree/tox.ini#n24

From: Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
Sent: 19 January 2016 10:49
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [all][clients] Enable hacking in python-*clients

Hi Devs,

As of now, all OpenStack projects have hacking checks, which make sure that
OpenStack guideline issues are caught while running the PEP8 checks using tox.
There are no such checks in any of the python-*clients.

IMO it's worth enabling hacking checks in the python-*clients as well, which
will catch guideline issues in the local environment itself.

Please let me know your opinion on the same.

Thanks & Regards,

Abhishek Kekane



Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-19 Thread Andreas Jaeger

On 2016-01-19 10:44, Abhishek Kekane wrote:
>> Hi Abishek,
>>
>> In my understanding, hacking check is enabled for most (or all) of
>> python-*client.
>> For example, flake8 is run for each neutronclient review [1].
>> test-requirements installs hacking, so I believe hacking check is enabled.
>> openstackclient and novaclient do the same [2] [3].
>> Am I missing something?
>
> Hi Akhiro Motoki,
>
> Individual OpenStack projects has separate hacking module (e.g.
> nova/hacking/checks.py) which contains additional rules other than
> standard PEP8 errors/warnings.
>
> In similar mode can we do same in python-*clients?


Let's share one common set of rules and not have each repo add its own.
So, if those checks are useful, propose them for the hacking repo.

To answer your question: sure, it can be done, but why?

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Martin André
On Tue, Jan 19, 2016 at 6:36 PM, Michal Rostecki 
wrote:

> On 01/19/2016 10:20 AM, Paul Bourke wrote:
>
>> I was also thinking of adding Lei. He was a big help to me only recently
>> with fixing plugin support and adding unit test support to build.py. +1
>> from me.
>>
>> -Paul
>>
>> On 19/01/16 08:26, Steven Dake (stdake) wrote:
>>
>>> Hi folks,
>>>
>>> I would like to propose Lei Zhang for our core reviewer team.  Count
>>> this proposal as a +1 vote from me.  Lei has done a fantastic job in his
>>> reviews over the last 6 weeks and has managed to produce some really
>>> nice implementation work along the way.  He participates in IRC
>>> regularly, and has a commitment from his management team at his employer
>>> to work full time 100% committed to Kolla for the foreseeable future
>>> (although things can always change in the future :)
>>>
>>> Please vote +1 if you approve of Lei for core reviewer, or –1 if you wish to
>>> veto his nomination.  Remember, just one –1 vote is a complete veto, so
>>> if you're on the fence, another option is to abstain from voting.
>>>
>>> I would like to change from our 3 votes required, as our core team has
>>> grown, to requiring a simple majority of core reviewers with no veto
>>> votes.  As we have 9 core reviewers, this means Lei requires 4 more  +1
>>> votes with no veto vote in the voting window to join the core reviewer
>>> team.
>>>
>>> I will leave the voting open for 1 week as is the case with our other
>>> core reviewer nominations until January 26th.  If the vote is unanimous
>>> or there is a veto vote before January 26th I will close voting.  I'll
>>> make appropriate changes to gerrit permissions if Lei is voted into the
>>> core reviewer team.
>>>
>>> Thank you for your time in evaluating Lei for the core review team.
>>>
>>> Regards
>>> -steve
>>>
>>>
> +1
>
> He did amazing job on plugins, unit tests and introducing oslo.config.


Big +1 for me.

Martin




Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-19 Thread Kekane, Abhishek


-Original Message-
From: Andreas Jaeger [mailto:a...@suse.com] 
Sent: 19 January 2016 15:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

On 2016-01-19 10:44, Abhishek Kekane wrote:
>> Hi Abishek,
>
>> In my understanding, hacking check is enabled for most (or all) of
>
>> python-*client.
>
>> For example, flake8 is run for each neutronclient review [1].
>
>> test-requirements installs hacking, so I believe hacking check is enabled.
>
>> openstackclient and novaclient do the same [2] [3].
>
>> Am I missing something?
>
> Hi Akhiro Motoki,
>
> Individual OpenStack projects has separate hacking module (e.g.
> nova/hacking/checks.py) which contains additional rules other than 
> standard PEP8 errors/warnings.
>
> In similar mode can we do same in python-*clients?

Let's share one common set of rules and not have each repo additional ones. So, 
if those are useful, propose them for the hacking repo.

To answer your questions: Sure, it can be done but why?

Because this way we catch the issues in the local environment itself; also, we
can add custom checks (see the sketch below) like:
1. use six.string_types instead of basestring
2. use dict.items() or six.iteritems(dict) instead of dict.iteritems()
3. checks on assertions, etc.
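A minimal sketch of what such a project-local check could look like (the rule
code and file path below are made up; registration assumes the usual
flake8/hacking local-check-factory mechanism wired up via tox.ini):

    # e.g. novaclient/hacking/checks.py
    import re

    RE_BASESTRING = re.compile(r"\bbasestring\b")

    def check_no_basestring(logical_line):
        """N901 - use six.string_types instead of basestring."""
        if RE_BASESTRING.search(logical_line):
            yield (0, "N901: use six.string_types instead of basestring")

    def factory(register):
        # Called by the local-check-factory hook; registers the custom checks.
        register(check_no_basestring)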

Abhishek

Andreas
--
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




[openstack-dev] [puppet] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Xingchao Yu
Hi,  all:

Recently I submitted some patches to add rabbit_ha_queues and to correct
the section name of the memcached_servers parameter in each module, and I found
I was just repeating the same things:

   1. Adding one parameter related to oslo.* or authtoken to
all puppet modules
   2. Correcting the section of a parameter, moving it from a deprecated
section to an oslo_* section, and applying this to all puppet modules

 We have more than 30 modules now, which means we have to repeat a simple
change on the oslo_* common configs 10 or 20 times.

 Besides, the number of oslo_* section is growing, for example :

   - oslo_messaging_amqp
   - oslo_messaging_rabbit
   - oslo_middleware
   - oslo_policy
   - oslo_concurrency
   - oslo_versionedobjects
   ...

Now we maintain these oslo_* parameters separately in each module,
and this has led to some problems:

1.  oslo_* params are inconsistent across modules
2.  an explosion of common params in each module
3.  no convenient way of managing oslo_* params

While I was doing some work on keystone::resource::authtoken
(https://review.openstack.org/#/c/266723/), I had the idea of adding a
puppet-oslo project that uses a set of defined resources to unify the oslo_*
configs across modules.

I have written a prototype to show how it works with oslo.cache:

https://github.com/NewpTone/puppet-oslo/blob/master/manifests/cache.pp

Please let me know your opinion on the same.

Thanks & Regards.

-- 
 Xingchao Yu


Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread Matthew Booth
Hello, Premysl,

I'm not working on these features, however I am working in this area of
code implementing the libvirt storage pools spec. If anybody does start
working on this, please reach out to coordinate as I have a bunch of
related patches. My work should also make your features significantly
easier to implement.

Out of curiosity, can you explain why you want to use LVM specifically over
the file-based backends?

Matt

On Mon, Jan 18, 2016 at 7:49 PM, Premysl Kouril 
wrote:

> Hello everybody,
>
> we are a Europe based operator and we have a case for LVM based nova
> instances in our new cloud infrastructure. We are currently
> considering to contribute to OpenStack Nova to implement some features
> which are currently not supported for LVM based instances (they are
> only supported for raw/qcow2 file based instances). As an example of
> such features - nova block live migration or thin provisioning - these
> nowadays don't work with LVM based instances (they do work for file
> based).
>
> Before actually diving into development here internally - we wanted to
> check on possibility to actually sponsor this development within
> existing community. So if there is someone who would be interested in
> this work please drop me an email.
>
> Regards,
> Prema
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)


Re: [openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Ihar Hrachyshka

Andreas Scheuring  wrote:


Hi everybody,

I stumbled over a definition that explains the difference between a
Provider network and a self service network. [1]

To summarize it says:
- Provider Network: primarily uses layer2 services and vlan segmentation
and cannot be used for advanced services (fwaas,..)
- Self-service Network: is Neutron configured to use a overlay network
and supports advanced services (fwaas,..)


But my understanding is more like this:
- Provider Network: The Openstack user needs information about the
underlying network infrastructure to create a virtual network that
exactly matches this infrastructure.

- Self service network: The Openstack user can create virtual networks
without knowledge about the underlaying infrastructure on the data
network. This can also include vlan networks, if the l2 plugin/agent was
configured accordingly.



I believe your understanding and wording is a lot more in line with  
reality. It also captures main differences, and does not mention advanced  
services that are not really relevant here.
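To make the distinction concrete, a small sketch using python-neutronclient
(the credentials and the physnet name are placeholders, not taken from the
install guide):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # Provider network: the creator has to know the underlying infrastructure
    # (segmentation type, physical network name, VLAN id).
    neutron.create_network({'network': {
        'name': 'provider-vlan-101',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 101,
    }})

    # Self-service network: no knowledge of the data network needed; Neutron
    # picks the segmentation details from its configured ranges.
    neutron.create_network({'network': {'name': 'demo-net'}})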




Did the meaning of a provider network change in the meantime, or is my
understanding just wrong?

Thanks!




[1]
http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4


--
-
Andreas (IRC: scheuran)




[openstack-dev] [nova][docker][powervm] out of tree virt driver breakage

2016-01-19 Thread Daniel P. Berrange
This is an alert for anyone who maintains an out of tree virt driver
for Nova (docker & powervm are the 2 I know of).

The following change has just merged changing the nova/virt/driver.py
API, and as such it will break any out of tree virt drivers until they
are updated

  commit fbe31e461ac3f16edb795993558a2314b4c16b52
  Author: Daniel P. Berrange 
  Date:   Mon Jun 8 17:58:09 2015 +0100

compute: convert manager to use nova.objects.ImageMeta

Update the virt driver API so that all methods with an
'image_meta' parameter take a nova.objects.ImageMeta
instance instead of a dict.

NB, this will break out of tree virt drivers until they
convert their code to use the new object.

Blueprint: mitaka-objects
Change-Id: I75465a2029b53aa4d338b80619ed7380e0d19e6a

Anywhere your virt driver impl uses the 'image_meta' parameter, it should be
updated to use the nova.objects.ImageMeta instance rather than assuming it
is a dict.
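For illustration, a before/after of one access site (the field and property
names here are just examples, and the ImageMetaProps.get() helper is assumed,
as used by the in-tree drivers):

    def _wants_guest_agent(image_meta):
        # Before: image_meta was a plain dict
        #   props = image_meta.get('properties', {})
        #   return props.get('hw_qemu_guest_agent') == 'yes'
        # After: image_meta is a nova.objects.ImageMeta instance
        return image_meta.properties.get('hw_qemu_guest_agent', False)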

If you have any trouble understanding how to update the code, reply
to this message or find me on IRC for guidance, or look at changes
made to the libvirt/xenapi/vmware drivers in tree.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [TripleO] Deploy Overcloud Keystone in HTTPD

2016-01-19 Thread Jiří Stránský

On 19.1.2016 03:59, Adam Young wrote:

I have a review here for switching Keystone to HTTPD

https://review.openstack.org/#/c/269377/

But I have no idea how to kick off the CI to really test it.  The check
came back way too quick for it to have done a full install; less than 3
minutes.  I think it was little more than a lint check.

How can I get a real sense of if it is this easy or if there is
something more that needs to be done?


Jenkins reports in two phases, first come the unit tests (in minutes), 
then the integration tests (in about 1.5 hrs minimum, depending on the 
CI load).


Jirka



Re: [openstack-dev] [glance][all] glance_store drivers deprecation/stabilization: Volunteers needed

2016-01-19 Thread stuart . mclaren

On 11/01/16 15:52 -0500, Flavio Percoco wrote:

Greetings,

Gentle reminder that this is happening next week.

Cheers,
Flavio

- Original Message -

From: "Flavio Percoco" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, December 10, 2015 9:16:09 AM
Subject: [glance][all] glance_store drivers deprecation/stabilization: 
Volunteers needed

Greetings,

As some of you know, there's a proposal (still a rough draft) for
refactoring the glance_store API. This library is the home for the
store drivers used by Glance to save or access the image data.

As with other drivers in OpenStack, this library is facing the issue of
having unmaintained, untested and incomplete implementations of stores
that are, hypothetically, being used in production environments.

In order to guarantee some level of stability and, more important,
maintenance, the Glance team is looking for volunteers to sign up as
maintainers/keepers of the existing drivers.

Unfortunately, given the fact that our team is not as big as we would
like and that we don't have the knowledge to provide support for every
single driver, the Glance team will have to deprecate, and later
remove, the drivers that will remain without a maintainer.

Each driver will have to have a voting CI job running (maintained by
the driver maintainer) that will have to run Glance's functional tests
to ensure the API features are also supported by the driver.

There are 2 drivers I believe shouldn't fall into this category and
that should be maintained by the Glance community itself. These
drivers are:

- Filesystem
- Http

Please, find the full list of drivers here[0] and feel free to sign up
as volunteer in as many drivers as your time permits to maintain.
Please, provide all the information required as the lack of it will
result in the candidacy not being valid. As some sharp eyes will
notice, the Swift driver is not in the list above. The reason for that
is that, although it's a key piece of OpenStack, not everyone in the
Glance community knows the code of that driver well-enough and there
are enough folks that know it that could perhaps volunteer as
maintainers/reviewers for it. Furthermore, adding the swift driver
there would mean we should probably add the Cinder one as well as it's
part of OpenStack just like Swift. We can extend that list later. For
now, I'd like to focus on bringing some stability to the library.

The above information, as soon as it's complete or the due date is
reached, will be added to glance_store's docs so that folks know where
to find the drivers maintainers and who to talk to when things go
south.

Here's an attempt to schedule some of this work (please refer to
this tag[0.1] and this soon-to-be-approved review[0.2] to have more
info w.r.t the deprecation times and backwards compatibility
guarantees):

- By mitaka 2 (Jan 16-22), all drivers should have a maintainer.
  Drivers without one, will be marked as deprecated in Mitaka.


This has been done!

http://docs.openstack.org/developer/glance_store/drivers/index.html

Only 1 driver was left without maintainer and, as established, I've marked it as
deprecated:

https://review.openstack.org/#/c/266077/

Thanks to everyone who volunteered!


Thanks for leading this Flavio!


Flavio





[openstack-dev] [nova] need your help on bug #1449498

2016-01-19 Thread jialiang_song517
Hi guys,

I am working on bug #1449498.

Reproduction steps w/ devstack and Liberty:
1) create a tenant bug_test
2) create a user test1 in tenant bug_test
3) update the quota instances of test1 as 5 (the default instances value is 10)
4) delete user test1
5) query the quota information for user test1 in tenant bug_test
In step 5, the expected result should indicate that user test1 doesn't exist, while
nova returned the deleted user test1's quota information with instances as 5.

After investigation, it was found that quota_get_all_by_project_and_user() and
quota_get_all_by_project() invoke

model_query(context, model, args=None, session=None, use_slave=False,
            read_deleted=None, project_only=False)

to query the quota information for a project or a project & user. However,
model_query() does not work as expected here: in case a user was deleted, even
when read_deleted is set to 'no', the quota information associated with the
deleted user is still returned.

I am not sure if this is designed behavior or if this could be a problem in
oslo_db. Could you give some guidance on how to proceed? Thanks.
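My understanding of what read_deleted should do, as a simplified sketch (not
nova's actual implementation); soft-deleted rows carry a non-zero 'deleted'
column:

    def model_query(session, model, read_deleted='no'):
        query = session.query(model)
        if read_deleted == 'no':
            query = query.filter(model.deleted == 0)   # hide soft-deleted rows
        elif read_deleted == 'only':
            query = query.filter(model.deleted != 0)   # only soft-deleted rows
        # read_deleted == 'yes': no extra filter, return everything
        return query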

Any other comments are welcome.

Best Regards,
Jialiang



jialiang_song517


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Ricardo Rocha
Hi.

I agree with this. It's great that magnum does the setup and config of the
container cluster backends, but if that were all it did, we could also call
heat ourselves.

Taking a common use case we have:
- create and expose a volume using a nfs backend so that multiple
clients can access the same data simultaneously (manila)
- launch a set of containers exposing and crunching data mounting that
nfs endpoint

Is there a reason why magnum can't do this for me (or aim at doing
it)? Handling all the required linking of containers with block
storage or filesystems would be great (and we have multiple block
storage backends, credentials not available to clients). There will be
other cases where all we want is a kubernetes or swarm endpoint, but
here we want containers to integrate with all the rest openstack
already manages.

Ricardo

On Tue, Jan 19, 2016 at 11:10 PM, Hongbin Lu  wrote:
> I don't see why the existence of the /containers endpoint blocks your workflow.
> However, with /containers gone, the alternate workflows are blocked.
>
> As a counterexample, some users want to manage containers through an 
> OpenStack API for various reasons (i.e. single integration point, lack of 
> domain knowledge of COEs, orchestration with other OpenStack resources: VMs, 
> networks, volumes, etc.):
>
> * Deployment of a cluster
> * Management of that cluster
> * Creation of a container
> * Management of that container
>
> As another counterexample, some users just want a container:
>
> * Creation of a container
> * Management of that container
>
> Then, should we remove the /bays endpoint as well? Magnum is currently at an
> early stage, so workflows are diverse, non-static, and hypothetical. It is a
> risk to have Magnum overfit to a specific workflow by removing others.
>
> For your analogies, Cinder is a block storage service so it doesn't abstract
> the filesystems. Magnum is a container service [1] so it is reasonable to
> abstract containers. Again, if your logic is applied, should Nova have an
> endpoint that lets you work with an individual hypervisor? Probably not,
> because Nova is a Compute service.
>
> [1] 
> https://github.com/openstack/magnum/blob/master/specs/containers-service.rst
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
> Sent: January-19-16 2:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
>
> With /containers gone, what Magnum offers is a workflow for consuming 
> container orchestration engines:
>
> * Deployment of a cluster
> * Management of that cluster
> * Key handling (creation, upload, revocation, etc.)
>
> The first two are handled underneath by Nova + Heat, the last is in the 
> purview of Barbican. That doesn't matter though.
>
> What users care about is getting access to these resources without having to 
> write their own heat template, create a backing key store, etc. They'd like 
> to get started immediately with container technologies that are proven.
>
> If you're looking for analogies Hongbin, this would be more like saying that 
> Cinder shouldn't have an endpoint that let you work with individual files on 
> a volume. It would be unreasonable to try to abstract across filesystems in a 
> meaningful and sustainable way.
>
> 
> From: Hongbin Lu 
> Sent: Tuesday, January 19, 2016 9:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
>
> Assume your logic is applied. Should Nova remove the endpoint of managing 
> VMs? Should Cinder remove the endpoint of managing volumes?
>
> I think the best way to deal with the heterogeneity is to introduce a common 
> abstraction layer, not to decouple from it. The real critical functionality 
> Magnum could offer to OpenStack is to provide a Container-as-a-Service. If 
> Magnum is a Deployment-as-a-service, it will be less useful and won't bring 
> too much value to the OpenStack ecosystem.
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
> Sent: January-19-16 5:19 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
>
> +1
>
> Doing this, and doing this well, provides critical functionality to OpenStack 
> while keeping said functionality reasonably decoupled from the COE API 
> vagaries that would inevitably encumber a solution that sought to provide 
> ‘one api to control them all’.
>
> -Rob
>
> From: Mike Metral
> Reply-To: OpenStack List
> Date: Saturday, 16 January 2016 02:24
> To: OpenStack List
> Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
>
> The requirements that running a fully 

Re: [openstack-dev] [heat]Delete the stack while the status of stack is CREATE_IN_PROGRESS?

2016-01-19 Thread Zane Bitter

On 19/01/16 11:33, zhu4236926 wrote:

Hi guys,
I found an interesting problem in the Juno version.
 First, I create a new stack that contains three resources, e.g. (in
my test there are six resources):
heat_template_version: 2014-10-16
resources:
  volume1:
    type: OS::Cinder::Volume
    properties: {name: test1, size: 2}
  volume2:
    type: OS::Cinder::Volume
    properties: {name: test2, size: 2}
  volume3:
    type: OS::Cinder::Volume
    properties: {name: test3, size: 2}
 While the stack is creating, I delete the stack. At this point some
resources have completed creation and some resources may still be creating.
E.g. volume3 is creating in Green Thread 1: the volume test3 is being created
in cinder and the volume id has not been returned yet, so the resource id of
volume3 is none.
Now the stack starts deleting in Green Thread 2. Because the resource id of
volume3 is none, it returns and the delete succeeds, but the volume test3 has
been created successfully in cinder and is not deleted by heat, because the
resource id of volume3 is none.
 So how can this problem be solved, or has this bug already been fixed?


Good question. I don't think this is actually solvable in general, because 
every OpenStack service has a short window in which you can lose the 
response to a call and then not be able to tell whether it succeeded or 
not. However, we can hope to minimise the number of ways to lose the 
response.


I think the solution should be to stop the existing thread in the first 
instance by raising the ForcedCancel exception at the next sleep. This 
would ensure that we would only stop threads at an explicit 'yield' 
point (i.e. between handle_create and check_create_complete). Our 
current reliance on stopping the greenthread itself means that it is 
liable to be killed whenever eventlet gets control (which may be during 
a sleep(), but equally could be when doing I/O). We should give it a 
while to stop itself before then moving on to stopping the greenthread.
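As a toy illustration of that cooperative-cancellation idea (not Heat's code;
the ForcedCancel name and the stub prints are just borrowed from the discussion
above):

    import eventlet

    class ForcedCancel(BaseException):
        """Stand-in for Heat's cancellation exception."""

    def create_resource():
        try:
            print("handle_create issued")          # e.g. the Cinder API call
            while True:
                eventlet.sleep(1)                  # explicit yield: safe cancel point
                print("check_create_complete polled")
        except ForcedCancel:
            print("cancelled at a yield point; resource id already recorded")

    gt = eventlet.spawn(create_resource)
    eventlet.sleep(2.5)
    gt.kill(ForcedCancel)   # raised inside the greenthread at its next sleep()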


AFAICT this won't have changed since Juno.

cheers,
Zane.



Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Hongbin Lu
I don't see why the existence of the /containers endpoint blocks your workflow.
However, with /containers gone, the alternate workflows are blocked.

As a counterexample, some users want to manage containers through an OpenStack 
API for various reasons (i.e. single integration point, lack of domain 
knowledge of COEs, orchestration with other OpenStack resources: VMs, networks, 
volumes, etc.):

* Deployment of a cluster
* Management of that cluster
* Creation of a container
* Management of that container

As another counterexample, some users just want a container:

* Creation of a container
* Management of that container

Then, should we remove the /bays endpoint as well? Magnum is currently at an
early stage, so workflows are diverse, non-static, and hypothetical. It is a
risk to have Magnum overfit to a specific workflow by removing others.

For your analogies, Cinder is a block storage service, so it doesn't abstract
the filesystems. Magnum is a container service [1], so it is reasonable for it
to abstract containers. Again, if your logic is applied, should Nova have an
endpoint that lets you work with an individual hypervisor? Probably not,
because Nova is a Compute service.

[1] https://github.com/openstack/magnum/blob/master/specs/containers-service.rst

Best regards,
Hongbin

-Original Message-
From: Kyle Kelley [mailto:kyle.kel...@rackspace.com] 
Sent: January-19-16 2:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

With /containers gone, what Magnum offers is a workflow for consuming container 
orchestration engines:

* Deployment of a cluster
* Management of that cluster
* Key handling (creation, upload, revocation, etc.)

The first two are handled underneath by Nova + Heat, the last is in the purview 
of Barbican. That doesn't matter though.

What users care about is getting access to these resources without having to 
write their own heat template, create a backing key store, etc. They'd like to 
get started immediately with container technologies that are proven.

If you're looking for analogies, Hongbin, this would be more like saying that
Cinder shouldn't have an endpoint that lets you work with individual files on a
volume. It would be unreasonable to try to abstract across filesystems in a
meaningful and sustainable way.


From: Hongbin Lu 
Sent: Tuesday, January 19, 2016 9:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Assume your logic is applied. Should Nova remove the endpoint of managing VMs? 
Should Cinder remove the endpoint of managing volumes?

I think the best way to deal with the heterogeneity is to introduce a common 
abstraction layer, not to decouple from it. The real critical functionality 
Magnum could offer to OpenStack is to provide a Container-as-a-Service. If 
Magnum is a Deployment-as-a-service, it will be less useful and won't bring too 
much value to the OpenStack ecosystem.

Best regards,
Hongbin

-Original Message-
From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
Sent: January-19-16 5:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

+1

Doing this, and doing this well, provides critical functionality to OpenStack 
while keeping said functionality reasonably decoupled from the COE API vagaries 
that would inevitably encumber a solution that sought to provide ‘one api to 
control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally and effectively requires
the usage of a dedicated COE tool such as Swarm, Kubernetes or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R
email: mike.met...@rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is the container abstraction brings containers to OpenStack: Keystone 
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On 

Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-19 Thread Martin Hickey
Hi,

+1 for me on Kyle's suggestion.

Regards,
Martin



From:   Kyle Mestery 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   19/01/2016 16:39
Subject:Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC



On Tue, Jan 19, 2016 at 10:14 AM, Ihar Hrachyshka 
wrote:
> Rossella Sblendido  wrote:
>
>>
>>
>> On 01/19/2016 04:36 PM, Miguel Angel Ajo Pelayo wrote:
>>>
>>> Thinking of this, I had another idea, a bit raw yet.
>>>
>>> But how does it sound to have two meetings a week, one in a EU/ASIA
>>> friendlier
>>> timezone, and another for USA/AU (current one), with different chairs.
>>>
>>> We don't impose unnatural-working hours (too early, too late for
family,
>>> etc..)
>>> to anyone, we encourage gathering as a community (may be split by
>>> timezones, but
>>> it feels more human and faster than ML conversations..) and also people
>>> able
>>> to make to both, could serve as bridges for both meetings.
>>>
>>>
>>> Thoughts?
>>
>>
>> I think that is what Kyle was proposing and if I am not wrong that's
what
>> they do in nova.
>
>
> My understanding is that Kyle proposed to switch back to bi-weekly
> alternating meetings, and have a separate chair for each.
>
> I think Kyle’s suggestion is wiser since it won’t leave the community
split
> into two separate parts, and it won’t waste two hours each week where we
> could make it with just one.
>
Yes, I was proposing two bi-weekly meetings with different chairs. We
could even have kept the existing schedule and just had a different
chair for the 1400UTC meeting on Tuesday.

> Ihar
>
>
>


Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-19 Thread Pete Zaitcev
On Tue, 22 Dec 2015 08:56:08 -0800
Clint Byrum  wrote:

> You could create a unique swift container, upload things to that, and
> then update a pointer in a well-known location to point at that container
> for the new plan only after you've verified it is available. This is a
> primitive form of Read-copy-update.

It's worse than you think. Container updates lag often in Swift.
I suggest a pseudo-container or a manifest object instead. However,
renames in Swift are copies. Ergo, an external database has to point
to the current tip or the latest generation manifest. Which brings us
to...

> So if you are only using the DB for consistency, you might want to just
> use tooz+swift.

Yep. Still has to store the templates themselves somewhere though.

-- Pete



Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread Fox, Kevin M
One feature I think we would like to see that could benefit from LVM is some
kind of multidisk support with better fault tolerance.

For example:
Say you have a node, and there are 20 VMs on it, and that's all the disk I/O it
can take. But say you have spare CPU/RAM capacity beyond the disk I/O being
used up. It would be nice to be able to add a second disk and be able to
launch 20 more VMs located on the other disk.

If you combined the disks into one file system (linear append or raid0),
you could lose all 40 VMs if something went wrong. That may be more than you
want to risk. If you could keep them as separate file systems or logical
volumes (maybe with contiguous LVs?), each VM could only top out a spindle, but
it would be much more fault tolerant to failures on the machine. I can see some
cases where the tradeoff between individual VM performance and the number of
VMs affected by a device failure can lean in that direction.

Thoughts?

Thanks,
Kevin

From: Premysl Kouril [premysl.kou...@gmail.com]
Sent: Tuesday, January 19, 2016 4:40 AM
To: OpenStack Development Mailing List (not for usage questions); 
mbo...@redhat.com
Subject: Re: [openstack-dev] [Nova] sponsor some LVM development

Hi Matt,

thanks for letting me know, we will definitely reach out to you if we
start some activity in this area.

To answer your question: the main reasons for LVM are simplicity and
performance. Our benchmarks suggest that LVM behaves more predictably than
a filesystem backend when processing many IOPS (tens of thousands). Also, a
filesystem is generally a heavier and more complex technology than LVM, and
we wanted to stay as simple as possible on the IO datapath, to make
everything (maintaining, tuning, configuring) easier.

Do you see this as reasonable argumentation? Do you see some major
benefits of file-based backend over the LVM one?

Cheers,
Prema

On Tue, Jan 19, 2016 at 12:18 PM, Matthew Booth  wrote:
> Hello, Premysl,
>
> I'm not working on these features, however I am working in this area of code
> implementing the libvirt storage pools spec. If anybody does start working
> on this, please reach out to coordinate as I have a bunch of related
> patches. My work should also make your features significantly easier to
> implement.
>
> Out of curiosity, can you explain why you want to use LVM specifically over
> the file-based backends?
>
> Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Jeff Peeler
+1 from me as well

On Tue, Jan 19, 2016 at 1:52 PM, Gareth  wrote:
> +1 to lei :)
>
> On Wed, Jan 20, 2016 at 12:57 AM, Swapnil Kulkarni  wrote:
>> On Tue, Jan 19, 2016 at 1:56 PM, Steven Dake (stdake) 
>> wrote:
>>>
>>> Hi folks,
>>>
>>> I would like to propose Lei Zhang for our core reviewer team.  Count this
>>> proposal as a +1 vote from me.  Lei has done a fantastic job in his reviews
>>> over the last 6 weeks and has managed to produce some really nice
>>> implementation work along the way.  He participates in IRC regularly, and
>>> has a commitment from his management team at his employer to work full time
>>> 100% committed to Kolla for the foreseeable future (although things can
>>> always change in the future :)
>>>
>>> Please vote +1 if you approve of Lei for core reviewer, or –1 if wish to
>>> veto his nomination.  Remember just one –1 vote is a complete veto, so if
>>> your on the fence, another option is to abstain from voting.
>>>
>>> I would like to change from our 3 votes required, as our core team has
>>> grown, to requiring a simple majority of core reviewers with no veto votes.
>>> As we have 9 core reviewers, this means Lei requires 4 more  +1 votes with
>>> no veto vote in the voting window to join the core reviewer team.
>>>
>>> I will leave the voting open for 1 week as is the case with our other core
>>> reviewer nominations until January 26th.  If the vote is unanimous or there
>>> is a veto vote before January 26th I will close voting.  I'll make
>>> appropriate changes to gerrit permissions if Lei is voted into the core
>>> reviewer team.
>>>
>>> Thank you for your time in evaluating Lei for the core review team.
>>>
>>> Regards
>>> -steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][infra] Ability to run newer QEMU in Gate jobs

2016-01-19 Thread Daniel P. Berrange
On Tue, Jan 19, 2016 at 05:47:58PM +, Jeremy Stanley wrote:
> On 2016-01-19 18:32:38 +0100 (+0100), Kashyap Chamarthy wrote:
> [...]
> > Matt Riedemann tells me on IRC that multi-node live migration job is
> > currently Ubuntu only, and to get a newer QEMU, it has to be added to
> > Ubuntu Cloud Archive.
> [...]
> 
> As discussed recently on another thread[1], we're not currently
> using UCA in jobs either. We can discuss it, but generally by the
> time people start actively wanting newer whatever we're only a few
> months away from the next LTS anyway. In this case I have hopes that
> in a few months we'll be able to start running jobs on Ubuntu 16.04
> LTS, which looks like it's going to ship with QEMU 2.5.

We'll almost certainly need to be able to test QEMU 2.6 in the N
release cycle, since that'll (hopefully) include support for TLS
encrypted migration & nbd traffic.  So I don't think waiting for
LTS releases is a viable strategy in general - we'll need UCA to
be available for at least some set of jobs we run. Alternatively
stick with LTS release for Ubuntu, and run other jobs with Fedora
and the virt-preview repository to give us coverage of the cutting
edge QEMU/libvirt stack.

> Alternatively, look into getting a live migration job running on
> CentOS 7 or Fedora 23 if it can't wait until after Mitaka.

CentOS 7 might be a nice target, since I think it'll likely have
more reliable migration support at the QEMU level than any distros
shipping close-to-upstream QEMU versions.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Doug Wiegley
But, by requiring an external subnet, you are assuming that the packets always 
originate from inside a neutron network. That is not necessarily the case with 
a physical device.

doug


> On Jan 19, 2016, at 11:55 AM, Michael Johnson  wrote:
> 
> I feel that the subnet should be mandatory as there are too many
> ambiguity issues due to overlapping subnets and multiple routes.
> In the case of an IP being outside of the tenant networks, the user
> would specify an external network that has the appropriate routes.  We
> cannot always assume which tenant network with an external (or VPN)
> route is the appropriate one to use.
> 
> Michael
> 
> On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  
> wrote:
>> Vivek--
>> 
>> "Member" in this case refers to an IP address that (probably) lives on a
>> tenant back-end network. We can't specify just the IP address when talking
>> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
>> this case, subnet is required). In the case of the namespace driver and
>> Octavia, we use the subnet parameter for all members to determine which
>> back-end networks the load balancing software needs a port on.
>> 
>> I think the original use case for making subnet optional was the idea that
>> sometimes a tenant would like to add a "member" IP that is not part of their
>> tenant networks at all--  this is more than likely an IP address that lives
>> outside the local cloud. The assumption, then, would be that this IP address
>> should be reachable through standard routing from wherever the load balancer
>> happens to live on the network. That is to say, the load balancer will try
>> to get to such an IP address via its default gateway, unless it has a more
>> specific route.
>> 
>> As far as I'm aware, this use case is still valid and being asked for by
>> tenants. Therefore, I'm in favor of making member subnet optional.
>> 
>> Stephen
>> 
>> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
>>> 
>>> If member port (IP address) is allocated by neutron, then why do we need
>>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>> 
>>> Thanks,
>>> Vivek
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>>> 
 Btw.
 
 I am still in favor on associating the subnets to the LB and then not
 specify them per node at all.
 
 -Sam.
 
 
 -Original Message-
 From: Samuel Bercovici [mailto:samu...@radware.com]
 Sent: Sunday, January 17, 2016 10:14 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
 optional on member create?
 
 +1
 Subnet should be mandatory
 
 The only thing this makes supporting load balancing servers which are not
 running in the cloud more challenging to support.
 But I do not see this as a huge user story (lb in cloud load balancing
 IPs outside the cloud)
 
 -Sam.
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
 Sent: Saturday, January 16, 2016 6:56 AM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
 optional on member create?
 
 I filed a bug [1] a while ago that subnet_id should be an optional
 parameter for member creation.  Currently it is required.  Review [2] is
 makes it optional.
 
 The original thinking was that if the load balancer is ever connected to
 that same subnet, be it by another member on that subnet or the vip on that
 subnet, then the user does not need to specify the subnet for new member if
 that new member is on one of those subnets.
 
 At the midcycle we discussed it and we had an informal agreement that it
 required too many assumptions on the part of the end user, neutron lbaas,
 and driver.
 
 If anyone wants to voice their opinion on this matter, do so on the bug
 report, review, or in response to this thread.  Otherwise, it'll probably 
 be
 abandoned and not done at some point.
 
 Thanks,
 Brandon
 
 [1] https://bugs.launchpad.net/neutron/+bug/1426248
 [2] https://review.openstack.org/#/c/267935/
>>> 
> __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
>>> 
> __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [nova][infra] Ability to run newer QEMU in Gate jobs

2016-01-19 Thread Clark Boylan
On Tue, Jan 19, 2016, at 09:32 AM, Kashyap Chamarthy wrote:
> Heya,
> 
> Currently the live migration test job[1] is using relatively old version
> of QEMU (2.0 -- 2 years old, 17-APR-2014).  And, libvirt 1.2.2
> (released on 02-MAR-2014).  For libvirt, I realize there's an
> in-progress thread[2] to get to a state to run a bleeding edge libvirt.
> 
> How can we go about bumping up QEMU to its newest stable release (2.5,
> DEC-2015)?
> 
> Matt Riedemann tells me on IRC that multi-node live migration job is
> currently Ubuntu only, and to get a newer QEMU, it has to be added to
> Ubuntu Cloud Archive.  It'd be useful to get that done.  Who can help
> with it?
I would start by pushing a change to devstack or devstack-gate that
turns on cloud archive by default. Then libvirt and friends will all get
installed with the newer version and give you an idea of whether or not
the cloud archive is functional for us. 
> 
> It'll allow us to confirm our suspicion that a couple of Nova live
> migration bugs[3][4] are likely fixed with that version.  Also it (the
> newer QEMU) has some additional debug logging capabilities which can
> help us with root cause analysis of some live migration issues.
Assuming nothing breaks you can then use the same change to iterate on
this to see if the bugs are corrected and get access to the richer logs.
> 
> 
> [1]
> https://jenkins05.openstack.org/job/gate-tempest-dsvm-multinode-live-migration/
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/079679.html
> [3] https://bugs.launchpad.net/nova/+bug/1524898 -- Volume based live
> migrtion aborted unexpectedly
> [4] https://bugs.launchpad.net/nova/+bug/1535232/ -- live-migration ci
> failure on nfs shared storage 
> 
Since both devstack and devstack-gate are self testing you can typically
do a first draft attempt at changes like this just by pushing a change,
letting the tests run, then checking the results. We may not want to go
with the simple change long term (as it likely won't cache packages
properly among other things), but it is very good at giving quick
results on whether or not it is viable at all.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Heads up about Mitaka-2 and Non-priority Feature Freeze

2016-01-19 Thread John Garbutt
Hi,

So at the end of Thursday we hit the non-priority (that's, roughly, all
the low-priority blueprints) Feature Freeze. I will forward more
details about the exception process once we know the scale of what is
required.

I have already postponed low-priority blueprints that had no code up
for review as of late last week, so we can give more review time (this
week) to those that already have code uploaded.

As usual we track the reviews to focus on here:
https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking

Please note that the mitaka-2 release happening this week is
technically independent of the non-priority feature freeze. They just
happen in the same week so there are fewer dates to remember.

Any questions, as usual, just let me know.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-19 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-01-19 09:48:19 +0100:
> Kyle Mestery wrote:
> > One question I have is, what should the version for projects be? For
> > example, for Neutron, M1 was set to 8.0.0.0b1. Should the M2 Neutron
> > milestone be 8.0.0.0c1? Or 8.0.0.0b2?
> 
> Good question! It should be X.0.0.0b2, so 8.0.0.0b2 for Neutron.
> Cheers!
> 

Right, the "b" means "beta" not just "after a". So we'll have a second
beta, designated by 0b2.
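
For anyone who wants to sanity-check the ordering, a tiny illustration (not
part of the release tooling) using the packaging library:

# Tiny illustration only: PEP 440 ordering of the milestone tags.
# Both betas sort before the final 8.0.0 release.
from packaging.version import Version

assert Version('8.0.0.0b1') < Version('8.0.0.0b2') < Version('8.0.0')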

Another question that came up related to adding the milestone tags was
whether to replace the old 0b1 tag info in the releases repository or to
add the new one. Please add a new section for 0b2 tags in the
appropriate deliverable file(s). We want to preserve the full history
of the tags for documentation.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Michael Johnson
I feel that the subnet should be mandatory as there are too many
ambiguity issues due to overlapping subnets and multiple routes.
In the case of an IP being outside of the tenant networks, the user
would specify an external network that has the appropriate routes.  We
cannot always assume which tenant network with an external (or VPN)
route is the appropriate one to use.

Michael
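
For readers following along, the two member-create request shapes under
debate look roughly like this with python-neutronclient (LBaaS v2). All IDs
and addresses below are placeholders, and the second form is only what the
review would allow, not current behaviour:

# Illustrative only: the two member-create shapes under discussion,
# with python-neutronclient (LBaaS v2).  All IDs/addresses are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

POOL_ID = 'POOL-UUID'

# Today: subnet_id is mandatory, so the caller states which tenant subnet
# the backend lives on (overlapping CIDRs make it ambiguous otherwise).
neutron.create_lbaas_member(POOL_ID, {
    'member': {'address': '10.0.1.5',
               'protocol_port': 80,
               'subnet_id': 'TENANT-SUBNET-UUID'},
})

# Proposed: omit subnet_id for members reachable only via the load
# balancer's default route (e.g. an IP outside the cloud).
neutron.create_lbaas_member(POOL_ID, {
    'member': {'address': '203.0.113.10',
               'protocol_port': 80},
})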

On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  wrote:
> Vivek--
>
> "Member" in this case refers to an IP address that (probably) lives on a
> tenant back-end network. We can't specify just the IP address when talking
> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> this case, subnet is required). In the case of the namespace driver and
> Octavia, we use the subnet parameter for all members to determine which
> back-end networks the load balancing software needs a port on.
>
> I think the original use case for making subnet optional was the idea that
> sometimes a tenant would like to add a "member" IP that is not part of their
> tenant networks at all--  this is more than likely an IP address that lives
> outside the local cloud. The assumption, then, would be that this IP address
> should be reachable through standard routing from wherever the load balancer
> happens to live on the network. That is to say, the load balancer will try
> to get to such an IP address via its default gateway, unless it has a more
> specific route.
>
> As far as I'm aware, this use case is still valid and being asked for by
> tenants. Therefore, I'm in favor of making member subnet optional.
>
> Stephen
>
> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
>>
>> If member port (IP address) is allocated by neutron, then why do we need
>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
>>
>> Thanks,
>> Vivek
>>
>>
>>
>>
>>
>>
>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>>
>> >Btw.
>> >
>> >I am still in favor on associating the subnets to the LB and then not
>> > specify them per node at all.
>> >
>> >-Sam.
>> >
>> >
>> >-Original Message-
>> >From: Samuel Bercovici [mailto:samu...@radware.com]
>> >Sent: Sunday, January 17, 2016 10:14 AM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >+1
>> >Subnet should be mandatory
>> >
>> >The only thing this makes supporting load balancing servers which are not
>> > running in the cloud more challenging to support.
>> >But I do not see this as a huge user story (lb in cloud load balancing
>> > IPs outside the cloud)
>> >
>> >-Sam.
>> >
>> >-Original Message-
>> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>> >Sent: Saturday, January 16, 2016 6:56 AM
>> >To: openstack-dev@lists.openstack.org
>> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>> > optional on member create?
>> >
>> >I filed a bug [1] a while ago that subnet_id should be an optional
>> > parameter for member creation.  Currently it is required.  Review [2] is
>> > makes it optional.
>> >
>> >The original thinking was that if the load balancer is ever connected to
>> > that same subnet, be it by another member on that subnet or the vip on that
>> > subnet, then the user does not need to specify the subnet for new member if
>> > that new member is on one of those subnets.
>> >
>> >At the midcycle we discussed it and we had an informal agreement that it
>> > required too many assumptions on the part of the end user, neutron lbaas,
>> > and driver.
>> >
>> >If anyone wants to voice their opinion on this matter, do so on the bug
>> > report, review, or in response to this thread.  Otherwise, it'll probably 
>> > be
>> > abandoned and not done at some point.
>> >
>> >Thanks,
>> >Brandon
>> >
>> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
>> >[2] https://review.openstack.org/#/c/267935/
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 

Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread Premysl Kouril
Hi James,


>
> You still haven't answered Anita's question: when you say "sponsor" do
> you mean provide resources to existing developers to work on your
> feature or provide new developers.
>

I did, I am copy-pasting my response to Anita here again:

Both. We are first trying the "Are you asking for current Nova
developers to work on this feature?" route, and if we don't find anybody we
will move on to "is your company interested in having your developers
interact with Nova developers".


>
> Heh, this is history repeating itself from over a decade ago when
> Oracle would have confidently told you that Linux had to have raw
> devices because that's the only way a database will perform.  Fast
> forward to today and all oracle databases use file backends.
>
> Simplicity is also in the eye of the beholder.  LVM has a very simple
> naming structure whereas filesystems have complex hierarchical ones.
>  Once you start trying to scale to millions of instances, you'll find
> there's quite a management penalty for the LVM simplicity.

We definitely won't have millions of instances on hypervisors, but we can
certainly have applications demanding a million IOPS (in sum) from a
hypervisor in the near future.

>
>>  It seems from our benchmarks that LVM behavior when
>> processing many IOPs (10s of thousands) is more stable than if
>> filesystem is used as backend.
>
> It sounds like you haven't enabled directio here ... that was the
> solution to the oracle issue.


If you mean O_DIRECT mode, then we had that enabled during our benchmarks.
Here is our benchmark setup and results:

testing box configuration:

  CPU: 4x E7-8867 v3 (total of 64 physical cores)
  RAM: 1TB
  Storage: 12x enterprise class SSD disks (each disk 140 000/120 000
IOPS read/write)
disks connected via 12Gb/s SAS3 lanes

  So we are using big boxes which can run quite a lot of VMs.

  Out of the disks we create a Linux md RAID (we tried raid5 and raid10)
and do some fine tuning:

1) echo 8 > /sys/block/md127/md/group_thread_cnt - this increases
parallelism for raid5
2) we boot the kernel with scsi_mod.use_blk_mq=Y to activate block IO multi-queueing
3) we increase the cache size (for raid5)

 On that RAID we create either an LVM volume group or a filesystem, depending
on whether we are testing the LVM Nova backend or the file-based Nova backend.


On this hypervisor we run Nova/KVM, provision 10-20 VMs, run benchmark
tests from these VMs, and try to saturate the IO on the hypervisor.

We use the following command running inside the VMs:

fio --randrepeat=1 --ioengine=libaio --direct=1 -gtod_reduce=1
--name=test1 --bs=4k --iodepth=256 --size=20G --numjobs=1
--readwrite=randwrite

So you can see that in the guest OS we use --direct=1, which causes the
test file to be opened with O_DIRECT. Actually I am not sure now, but
when using the file-based backend I would hope that the virtual disk is
automatically opened with O_DIRECT, and that this is done by libvirt/qemu
by default without any explicit configuration.

Anyway, with this we have the following results:

If we use the file-based backend in Nova, an ext4 filesystem and RAID5, then
with 8 parallel VMs we were able to achieve ~3000 IOPS per machine, which
means in total about 32000 IOPS.

If we use the LVM-based backend, RAID5 and 8 parallel VMs, we achieve ~11000
IOPS per machine, in total about 90000 IOPS.

This is a significant difference.

This test was done about half a year ago by one of our engineers who
no longer works for us, but we still have the box and everything, so
if the community is interested I can re-run the tests, validate the
results again, do any reconfiguration, etc.



> And this was precisely the Oracle argument.  The reason it foundered is
> that most FS complexity goes to manage the data structures ... the I/O
> path can still be made short and fast, as DirectIO demonstrates.  Then
> the management penalty you pay (having to manage all the data
> structures that the filesystem would have managed for you) starts to
> outweigh any minor performance advantages.

The only thing O_DIRECT does is instruct the kernel to skip the
filesystem cache for the file opened in this mode. The rest of the
filesystem complexity remains in the IO datapath. Note, for example, that
we did a test of the file-based backend with BTRFS and the results were
absolutely horrible - there is just too much work a filesystem has to do
when processing IOs, and we believe a lot of it is simply not necessary
when the storage is only used to store virtual disks.

Anyway, I am really glad that you brought up these views. We are happy to
reconsider our decisions, so let's have a discussion - I am sure we
missed many things when we were evaluating both backends.

One more question: what about Cinder? I think they are using LVM
for storing volumes, right? Why don't they use files?

Thanks,
Prema

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [kolla] Kolla Midcycle February 9th, 10th please RSVP via Eventbrite

2016-01-19 Thread Steven Dake (stdake)
Hello folks,

If you're attending the Kolla midcycle please RSVP via Eventbrite.  If you're a 
core reviewer and won't be able to make it, would you do me a solid and send me 
your regrets either via email or an IRC message so I know how many folks to 
expect?

Breakfast, lunch and dinner will be provided Tuesday February 9th, and 
breakfast and lunch will be provided Wednesday February 10th to help cut down 
on food expenses.  We will finish up early on Wednesday at 3:30 PM to give 
folks on the west coast a chance to make it home Wednesday and save on hotel 
expenses.

The facilities do not offer any way to facilitate remote participation, so if 
you want to participate, it must be done in person.

The Eventbrite RSVP link is here:
https://www.eventbrite.com/e/kolla-midcycle-event-tickets-20861426087

The Agenda (work in progress) is here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

Whether you're attending or not, please take some time and vote (name +1, or name 
+0) for each session in the agenda.  This will help me prioritize the agenda to 
suit the needs of the Kolla community.

Thanks a bunch
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-19 Thread Egor Guz
Ton,

I believe so; I will create a separate patch from 
https://review.openstack.org/#/c/251158/
Also we need to explore the possibility of creating a volume on the /dev/vda2 device (it 
has about 5G of free space).
Unfortunately Atomic has very little documentation, so the plan is to use a Cinder 
volume until we can figure out a better way.

—
Egor

On Jan 18, 2016, at 22:27, Ton Ngo > 
wrote:


Hi Egor,
Do we need to add a cinder volume to the master nodes for Kubernetes as well? 
We did not run Docker on the master node before so the volume was not needed.
Ton Ngo,


Hongbin Lu ---01/18/2016 12:29:09 PM---Hi Egor, Thanks for 
investigating on the issue. I will review the patch. Agreed. We can definitely e

From: Hongbin Lu >
To: Egor Guz >, OpenStack 
Development Mailing List 
>
Date: 01/18/2016 12:29 PM
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate





Hi Egor,

Thanks for investigating on the issue. I will review the patch. Agreed. We can 
definitely enable the swarm tests if everything works fine.

Best regards,
Hongbin

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-18-16 2:42 PM
To: OpenStack Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I did some digging and found that the docker storage driver wasn’t configured 
correctly on the agent nodes.
Also it looks like the Atomic folks recommend using dedicated volumes for DeviceMapper 
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried creating volumes on local 
storage, but there is not even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996. I did around ~12 
gate runs and got only 2 failures (tests cannot connect to the master, but all 
container logs look alright, e.g. 
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312);
 we have similar error rates with Kube. So after merging this code we can try to 
enable voting for the Swarm tests, thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu 
>
 wrote:

There are other symptoms as well, which I have no idea about without a deeper dig.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: 
openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should comment 
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu 
>
 wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu 
>
 wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing. A patch that was able to pass the check 
pipeline is blocked in gate pipeline, due to the instability of the test. The 
removal of unstable test from gate pipeline aims to unblock the patches that 
already passed the check.

An alternative is to remove the unstable test from check pipeline as well or 
mark it as non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Honbgin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks the swarm func test is currently unstable, which negatively impacts 
the patch 

Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Colleen Murphy
On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu  wrote:

> Hi, Emilien:
>
>  Thanks for your efforts on this topic, I didn't attend V release
> summit and missed related discussion about puppet-oslo.
>
>  As the reason for not using a unified way to manage oslo_* parameters
> is there maybe exist different oslo_* version between openstack projects.
>
>  I have an idea to solve this potential problem,we can maintain
> several versions of puppet-oslo, each module can map to different version
> of puppet-oslo.
>
> It would be something like follows: (the map info is not true, just
> for example)
>
> In Mitaka release
> puppet-nova maps to puppet-oslo with 8.0.0
> puppet-designate maps to puppet-oslo with 7.0.0
> puppet-murano maps to puppet-oslo with 6.0.0
>
> In Newton release
> puppet-nova maps to puppet-oslo with 9.0.0
> puppet-designate maps to puppet-oslo with 9.0.0
> puppet-murano maps to puppet-oslo with 7.0.0
>
For the simplest case of puppet infrastructure configuration, which is a
single puppetmaster with one environment, you cannot have multiple versions
of a single puppet module installed. This means you absolutely cannot have
an openstack infrastructure depend on having different versions of a single
module installed. In your example, a user would not  be able to use both
puppet-nova and puppet-designate since they are using different versions of
the puppet-oslo module.

When we put out puppet modules, we guarantee that version X.x.x of a given
module works with the same version of every other module, and this proposal
would totally break that guarantee.

>
> And by the way, most of projects' requirements.txt
> and test-requirements.txt are maintained automatically by requirements
> project(https://github.com/openstack/requirements), they have the same
> version of oslo.* projects.
> So there maybe minor projects would need extra efforts.
>
> If projects seem to be converging together, maybe this isn't such an issue
anymore? I have no insight here.

Colleen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Gareth
+1 to lei :)

On Wed, Jan 20, 2016 at 12:57 AM, Swapnil Kulkarni  wrote:
> On Tue, Jan 19, 2016 at 1:56 PM, Steven Dake (stdake) 
> wrote:
>>
>> Hi folks,
>>
>> I would like to propose Lei Zhang for our core reviewer team.  Count this
>> proposal as a +1 vote from me.  Lei has done a fantastic job in his reviews
>> over the last 6 weeks and has managed to produce some really nice
>> implementation work along the way.  He participates in IRC regularly, and
>> has a commitment from his management team at his employer to work full time
>> 100% committed to Kolla for the foreseeable future (although things can
>> always change in the future :)
>>
>> Please vote +1 if you approve of Lei for core reviewer, or –1 if wish to
>> veto his nomination.  Remember just one –1 vote is a complete veto, so if
>> your on the fence, another option is to abstain from voting.
>>
>> I would like to change from our 3 votes required, as our core team has
>> grown, to requiring a simple majority of core reviewers with no veto votes.
>> As we have 9 core reviewers, this means Lei requires 4 more  +1 votes with
>> no veto vote in the voting window to join the core reviewer team.
>>
>> I will leave the voting open for 1 week as is the case with our other core
>> reviewer nominations until January 26th.  If the vote is unanimous or there
>> is a veto vote before January 26th I will close voting.  I'll make
>> appropriate changes to gerrit permissions if Lei is voted into the core
>> reviewer team.
>>
>> Thank you for your time in evaluating Lei for the core review team.
>>
>> Regards
>> -steve
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> +1 :)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Gareth (Kun Huang)

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Xingchao Yu
Hi, Emilien:

 Thanks for your efforts on this topic. I didn't attend the V release
summit and missed the related discussion about puppet-oslo.

 As I understand it, the reason for not using a unified way to manage oslo_*
parameters is that different oslo_* versions may exist between OpenStack
projects.

 I have an idea to solve this potential problem: we can maintain several
versions of puppet-oslo, and each module can map to a different version of
puppet-oslo.

It would be something like the following (the mapping info is not real, just
an example):

In Mitaka release
puppet-nova maps to puppet-oslo with 8.0.0
puppet-designate maps to puppet-oslo with 7.0.0
puppet-murano maps to puppet-oslo with 6.0.0

In Newton release
puppet-nova maps to puppet-oslo with 9.0.0
puppet-designate maps to puppet-oslo with 9.0.0
puppet-murano maps to puppet-oslo with 7.0.0

And by the way, most projects' requirements.txt
and test-requirements.txt are maintained automatically by the requirements
project (https://github.com/openstack/requirements), so they have the same
version of the oslo.* projects.
So maybe only a few projects would need extra effort.

2016-01-19 20:44 GMT+08:00 Emilien Macchi :

> Hi,
>
> Adding [oslo] tag for more visibility.
>
> On 01/19/2016 05:01 AM, Xingchao Yu wrote:
> > Hi,  all:
> >
> > Recently I submit some patches for adding rabbit_ha_queues and
> > correct the section name of memcached_servers params to each modules,
> > then I find I just did repeated things:
> >
> >1. Adding one parameters which related to oslo.*  or authtoken to
> > all puppet modules
> >2. Correct section of parameters, move it from deprecated section
> > to oslo_* section, apply it on all puppet modules
> >
> >  We have more than 30+ modules for now, that means we have to repeat
> > 10+ or 20+ times if we want to do a simple change on oslo_* common
> configs.
> >
> >  Besides, the number of oslo_* section is growing, for example :
> >
> >- oslo_messaging_amqp
> >- oslo_messaging_rabbit
> >- oslo_middleware
> >- oslo_policy
> >- oslo_concurrency
> >- oslo_versionedobjects
> >...
> >
> > Now we maintain these oslo_* parameters separately in each modules,
> >  this has lead some problems:
> >
> > 1.  oslo_* params are inconsistent in each modules
> > 2.  common params explosion in each modules
> > 3.  no convenient way for managing oslo_* params
> >
> > When I was doing some work on keystone::resource::authtoken
> >  (https://review.openstack.org/#/c/266723/)
> >
> > Then I have a idea about adding puppet-oslo project, using a bunch
> > of define resources to unify oslo_* configs in each modules.
> >
> > I just write a prototype to show how does it works with oslo.cache:
> >
> >
> https://github.com/NewpTone/puppet-oslo/blob/master/manifests/cache.pp
> >
> > Please let me know your opinion on the same.
>
> We already talked about this topics during Vancouver Summit:
> https://etherpad.openstack.org/p/liberty-summit-design-puppet
>
> Real output is documented here:
> http://my1.fr/blog/puppet-openstack-plans-for-liberty/
>
> And I already initiated some code 8 months ago:
> https://github.com/redhat-cip/puppet-oslo
>
> At this time, we decided not to go this way because some OpenStack
> projects were not using the same version of oslo.*. sometimes.
> So it could have lead to something like:
> "nova using newest version of oslo messaging parameters comparing to
> murano" (that's an example, probably wrong...), so puppet-oslo would
> have been risky to use here.
> I would like to know from Oslo folks if we can safely configure Oslo
> projects the same way during a cycle (Ex: Mitaka, then N, etc) or if
> some projects are using too old versions of Oslo that makes impossible a
> consistent configuration across all OpenStack projects.
>
> So indeed, I'm still convinced this topic should be brought alive again.
> We would need to investigate with Oslo team if it makes sense and if we
> can safely do that for all our modules.
> If we have positive feedback, we can create the new module and
> refactorize our modules that will consume puppet-oslo.
> It will help a lot in keeping our modules consistent and eventually drop
> a lot of duplicated code.
>
> Thoughts?
>
> >
> > Thanks & Regards.
> >
> > --
> >  Xingchao Yu
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

[openstack-dev] [kolla] US/APAC meeting scheduling

2016-01-19 Thread Steven Dake (stdake)
Please vote on every time slot you can make on Wednesdays on a recurring basis. 
 I will give preference in the time slots to the folks in APAC since this 
meeting is targeted at including those folks.  I'll leave polling open for 1 
week, until January 27th.

EMEA, apologies, you won't be able to make these time slots.  Please still 
vote, especially if you're a core reviewer, so we will know who not to expect.  
It's not ideal, but we will still have the other bi-monthly meeting for US/EMEA.

http://doodle.com/poll/vn98bkn8e6nhn226

Regards,
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Kyle Kelley
With /containers gone, what Magnum offers is a workflow for consuming container 
orchestration engines:

* Deployment of a cluster
* Management of that cluster
* Key handling (creation, upload, revocation, etc.)

The first two are handled underneath by Nova + Heat, the last is in the purview 
of Barbican. That doesn't matter though.

What users care about is getting access to these resources without having to 
write their own heat template, create a backing key store, etc. They'd like to 
get started immediately with container technologies that are proven.

If you're looking for analogies, Hongbin, this would be more like saying that 
Cinder shouldn't have an endpoint that lets you work with individual files on a 
volume. It would be unreasonable to try to abstract across filesystems in a 
meaningful and sustainable way.


From: Hongbin Lu 
Sent: Tuesday, January 19, 2016 9:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Assuming your logic is applied, should Nova remove the endpoint for managing VMs? 
Should Cinder remove the endpoint for managing volumes?

I think the best way to deal with the heterogeneity is to introduce a common 
abstraction layer, not to decouple from it. The really critical functionality 
Magnum could offer to OpenStack is Container-as-a-Service. If 
Magnum is only a Deployment-as-a-Service, it will be less useful and won't bring 
much value to the OpenStack ecosystem.

Best regards,
Hongbin

-Original Message-
From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
Sent: January-19-16 5:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

+1

Doing this, and doing this well, provides critical functionality to OpenStack 
while keeping said functionality reasonably decoupled from the COE API vagaries 
that would inevitably encumber a solution that sought to provide ‘one api to 
control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally & effectively requires the 
use of a dedicated COE tool such as Swarm, Kubernetes or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R
email: mike.met...@rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is the container abstraction brings containers to OpenStack: Keystone 
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu 
> wrote:
Disagree.

If the container managing part is removed, Magnum is just a COE deployment 
tool. This is really a scope mismatch IMO. The middle ground I can see is to 
have a flag that allows operators to turn off the container managing part. If 
it is turned off, COEs are not managed by Magnum and requests sent to the 
/container endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral 
[mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us 
in the long run, seeing how all COEs operate & are based on various different 
paradigms in terms of describing & managing containers, 

Re: [openstack-dev] [nova][infra] Ability to run newer QEMU in Gate jobs

2016-01-19 Thread Jeremy Stanley
On 2016-01-19 18:32:38 +0100 (+0100), Kashyap Chamarthy wrote:
[...]
> Matt Riedemann tells me on IRC that multi-node live migration job is
> currently Ubuntu only, and to get a newer QEMU, it has to be added to
> Ubuntu Cloud Archive.
[...]

As discussed recently on another thread[1], we're not currently
using UCA in jobs either. We can discuss it, but generally by the
time people start actively wanting newer whatever we're only a few
months away from the next LTS anyway. In this case I have hopes that
in a few months we'll be able to start running jobs on Ubuntu 16.04
LTS, which looks like it's going to ship with QEMU 2.5.

Alternatively, look into getting a live migration job running on
CentOS 7 or Fedora 23 if it can't wait until after Mitaka.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-January/084148.html
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docker][powervm] out of tree virt driver breakage

2016-01-19 Thread Andrew Thorstensen
Thanks Daniel.  We will be proposing an update to the powervm driver today to
handle this change.

Appreciate you reaching out about this!


Thanks.

Drew Thorstensen



From:   "Daniel P. Berrange" 
To: openstack-dev@lists.openstack.org
Date:   01/19/2016 06:31 AM
Subject:[openstack-dev] [nova][docker][powervm] out of tree virt 
driver  breakage



This is an alert for anyone who maintains an out of tree virt driver
for Nova (docker & powervm are the 2 I know of).

The following change has just merged changing the nova/virt/driver.py
API, and as such it will break any out of tree virt drivers until they
are updated

  commit fbe31e461ac3f16edb795993558a2314b4c16b52
  Author: Daniel P. Berrange 
  Date:   Mon Jun 8 17:58:09 2015 +0100

compute: convert manager to use nova.objects.ImageMeta
 
Update the virt driver API so that all methods with an
'image_meta' parameter take a nova.objects.ImageMeta
instance instead of a dict.
 
NB, this will break out of tree virt drivers until they
convert their code to use the new object.
 
Blueprint: mitaka-objects
Change-Id: I75465a2029b53aa4d338b80619ed7380e0d19e6a

Anywhere in your virt driver impl that uses the 'image_meta' parameter
should be updated to use the nova.objects.ImageMeta instance rather
than assuming it has a dict.

If you have any trouble understanding how to update the code, reply
to this message or find me on IRC for guidance, or look at changes
made to the libvirt/xenapi/vmware drivers in tree.
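
As a rough illustration for out-of-tree maintainers (not taken from the commit
itself), the change is essentially dict lookups becoming object attribute
access; the property name used below is just an example:

# Rough sketch only: reading image metadata through nova.objects.ImageMeta
# instead of a plain dict.  The hw_disk_bus property is just an example.
from nova import objects

objects.register_all()  # makes ImageMeta available on nova.objects


def describe_image(image_meta_dict):
    # Code that still holds a legacy dict can build the object itself;
    # in-tree callers now hand the driver an ImageMeta instance directly.
    image_meta = objects.ImageMeta.from_dict(image_meta_dict)

    # Attribute access replaces dict lookups ...
    disk_format = image_meta.disk_format
    # ... and image properties live on a typed ImageMetaProps object.
    disk_bus = image_meta.properties.get('hw_disk_bus')
    return disk_format, disk_bus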

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ 
:|
|: http://libvirt.org  -o- http://virt-manager.org 
:|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 
:|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc 
:|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][release] reno release 1.3.1 (independent)

2016-01-19 Thread doug
We are amped to announce the release of:

reno 1.3.1: RElease NOtes manager

This release is part of the independent release series.

With source available at:

http://git.openstack.org/cgit/openstack/reno

With package available at:

https://pypi.python.org/pypi/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.


Changes in reno 1.3.0..1.3.1


052206e manage stderr output from external commands

Diffstat (except docs and test files)
-

reno/utils.py | 18 --
1 file changed, 16 insertions(+), 2 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [horizon] Metadata definitions catalog concept demo

2016-01-19 Thread Tripp, Travis S
Hi Glance & Horizon team,

This past week I had 3 people from different companies contact me asking for a 
few pointers on where to find more about the metadata definitions catalog and 
its usage. While trying to grab some links and screenshots to send along, I 
just decided to record a quick video and post it to YouTube. It's nothing fancy 
since I only had time to do one take, but here you go:

https://youtu.be/zJpHXdBOoeM

I hope this is at least somewhat useful to a few folks!

Thanks,
Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Cody Herriges
Colleen Murphy wrote:
> On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu  > wrote:
> 
> Hi, Emilien:
> 
>  Thanks for your efforts on this topic, I didn't attend V
> release summit and missed related discussion about puppet-oslo.
> 
>  As the reason for not using a unified way to manage oslo_*
> parameters is there maybe exist different oslo_* version between
> openstack projects.
> 
>  I have an idea to solve this potential problem,we can maintain
> several versions of puppet-oslo, each module can map to different
> version of puppet-oslo.
> 
> It would be something like follows: (the map info is not true,
> just for example)
> 
> In Mitaka release
> puppet-nova maps to puppet-oslo with 8.0.0
> puppet-designate maps to puppet-oslo with 7.0.0
> puppet-murano maps to puppet-oslo with 6.0.0
> 
> In Newton release
> puppet-nova maps to puppet-oslo with 9.0.0
> puppet-designate maps to puppet-oslo with 9.0.0
> puppet-murano maps to puppet-oslo with 7.0.0
> 
> For the simplest case of puppet infrastructure configuration, which is a
> single puppetmaster with one environment, you cannot have multiple
> versions of a single puppet module installed. This means you absolutely
> cannot have an openstack infrastructure depend on having different
> versions of a single module installed. In your example, a user would not
>  be able to use both puppet-nova and puppet-designate since they are
> using different versions of the puppet-oslo module.
> 
> When we put out puppet modules, we guarantee that version X.x.x of a
> given module works with the same version of every other module, and this
> proposal would totally break that guarantee. 
> 

How does OpenStack solve this issue?

* Do they literally install several different versions of the same
python library?
* Does every project vendor oslo?
* Is the oslo library itself API compatible with older versions?


-- 
Cody



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog][TOSCA] IRC Meeting Thursday January 21st at 17:00UTC

2016-01-19 Thread Christopher Aedo
Join us this Thursday for our weekly meeting, January 21st at 17:00UTC in
#openstack-meeting-3.

The agenda can be found here, and please add to it if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

This week we hope to have someone working on TOSCA join us to talk
about the metadata around their assets, and agree to a plan to add
TOSCA elements to the App Catalog.

Hope to see you there on Thursday!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-19 Thread Shraddha Pandhe
Hi Doug,

What would be the reason for such a timeout? Based on my current test, it
doesn't take more than a few hundred milliseconds to return.

What I am trying to do is,

I have a Neutron extension that returns IP usage per subnet per network. It
needs to support:

1. Return usage info for all networks (default if no filters specified)
2. Return usage info for one network id
3. Return usage info for several network ids. This can go up to 1000
network ids.

I have added similar comments to the existing implementation currently
being reviewed: https://review.openstack.org/#/c/212955/16
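
(As a stop-gap on the client side, independent of which API shape wins, the id
list can always be split into chunks small enough to keep the query string
under proxy URI limits. A rough sketch with placeholder endpoint and parameter
names:)

# Client-side stop-gap sketch only: batch the network ids into several GET
# requests so each query string stays short.  The endpoint, parameter name
# and chunk size are placeholders.
import requests

ENDPOINT = 'http://hostname:9696/v2.0/extension_name.json'
CHUNK = 50  # comfortably below common 8k request-URI limits


def get_usage(token, network_ids):
    payloads = []
    for i in range(0, len(network_ids), CHUNK):
        params = [('net-id', nid) for nid in network_ids[i:i + CHUNK]]
        resp = requests.get(ENDPOINT, params=params,
                            headers={'X-Auth-Token': token,
                                     'Accept': 'application/json'})
        resp.raise_for_status()
        payloads.append(resp.json())  # caller merges the per-chunk payloads
    return payloads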




On Tue, Jan 19, 2016 at 4:14 PM, Doug Wiegley 
wrote:

> It would have to be a POST, but even that will start to have timeout
> issues. An API which requires that kind of input is kind of a bad idea.
> Perhaps you could describe what you’re trying to do?
>
> Thanks,
> doug
>
> On Jan 19, 2016, at 4:59 PM, Shraddha Pandhe 
> wrote:
>
> Hi folks,
>
>
> I am writing a Neutron extension which needs to take 1000s of network-ids
> as argument for filtering. The CURL call is as follows:
>
> curl -i -X GET '
> http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
> -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
> "X-Auth-Token: "
>
>
> The list of net-ids can go up to 1000s. The problem is, with such large
> url, I get the "Request URI too long" error. I don't want to update this
> limit as proxies can have their own limits.
>
> What options do I have to send 1000s of network IDs?
>
> 1. -d '{}' is not a recommended option for GET call and wsgi Controller
> drops the data part when routing the request.
>
> 2. Use POST instead of GET? I will need to write the get_ logic
> inside create_resource logic for this to work. Its a hack, but complies
> with HTTP standard.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][[Infra] Gate failure

2016-01-19 Thread Robert Collins
I suspect we'll see fallout in unit tests too, once new images are built.

On 20 January 2016 at 14:47, Davanum Srinivas  wrote:
> https://review.openstack.org/#/c/269954/ is the plan of action.
> @mtreinish is driving it. Plan is to request infra to promote it once
> it passes check. This is not a neutron only break. it breaks all dsvm
> jobs.
>
> -- Dims
>
> On Tue, Jan 19, 2016 at 8:28 PM, Armando M.  wrote:
>> Hi neutrinos,
>>
>> New week, new gate failure. This time this might be infra related [1]. This
>> one fails with [2]. If you know what's going on, spread the word!
>>
>> Cheers,
>> Armando
>>
>> [1] https://review.openstack.org/#/c/269937/
>> [2]
>> http://logs.openstack.org/37/269937/1/check/gate-tempest-dsvm-neutron-full/a91b641/logs/devstacklog.txt.gz#_2016-01-20_01_12_41_571
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][[Infra] Gate failure

2016-01-19 Thread Armando M.
On 19 January 2016 at 17:53, Robert Collins 
wrote:

> I suspect we'll see fallout in unit tests too, once new images are built.
>

Thanks for the quick feedback. I knew people were already on top of it!

Cheers,
Armando


>
> On 20 January 2016 at 14:47, Davanum Srinivas  wrote:
> > https://review.openstack.org/#/c/269954/ is the plan of action.
> > @mtreinish is driving it. Plan is to request infra to promote it once
> > it passes check. This is not a neutron only break. it breaks all dsvm
> > jobs.
> >
> > -- Dims
> >
> > On Tue, Jan 19, 2016 at 8:28 PM, Armando M.  wrote:
> >> Hi neutrinos,
> >>
> >> New week, new gate failure. This time this might be infra related [1].
> This
> >> one fails with [2]. If you know what's going on, spread the word!
> >>
> >> Cheers,
> >> Armando
> >>
> >> [1] https://review.openstack.org/#/c/269937/
> >> [2]
> >>
> http://logs.openstack.org/37/269937/1/check/gate-tempest-dsvm-neutron-full/a91b641/logs/devstacklog.txt.gz#_2016-01-20_01_12_41_571
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Davanum Srinivas :: https://twitter.com/dims
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Brandon Logan
So it really comes down to driver (or driver's appliance)
implementation.  Here's some scenarios to consider:

1) vip on tenant network, members on tenant network
- if a user wants to add an external IP to this configuration, how do we
handle that?  If the subnet is optional and the member just uses default
routing, then it won't ever reach the external IP unless the backend
implementation sets up routing to external from the load balancer.  I
think that is a bad idea because the tenant probably wants these
networks isolated.  But if the backend puts a load balancer on it with
external connectivity, it's not as isolated as it was.  So to me, if
subnet is optional, the best choice is to use default routing, which
*SHOULD* fail in this scenario.   This, of course, is something a tenant
will have to realize.  The good thing about a required subnet_id is that
the tenant has explicitly stated they want external connectivity and
the backend is not making assumptions about whether they want it or
not.

2) vip on public network, members on tenant network
- defaults route should be able to route out to external IPs now so if
subnet_id is optional it works.  If subnet_id is required then the
tenant would have to specify the public network again, which is less
than ideal and also has other issues brought up in this thread.

All other scenario permutations are similar to the above ones so I don't
think i need to go through them.

Basically, I'm waffling on this and am currently on the optional
subnet_id side, but as the builders of Octavia, I don't think we should
allow a load balancer external access unless the tenant has in some way
given permission through the configuration they've explicitly set.  Though
that, too, should be defined.

Thanks,
Brandon
On Tue, 2016-01-19 at 12:07 -0700, Doug Wiegley wrote:
> But, by requiring an external subnet, you are assuming that the packets 
> always originate from inside a neutron network. That is not necessarily the 
> case with a physical device.
> 
> doug
> 
> 
> > On Jan 19, 2016, at 11:55 AM, Michael Johnson  wrote:
> > 
> > I feel that the subnet should be mandatory as there are too many
> > ambiguity issues due to overlapping subnets and multiple routes.
> > In the case of an IP being outside of the tenant networks, the user
> > would specify an external network that has the appropriate routes.  We
> > cannot always assume which tenant network with an external (or VPN)
> > route is the appropriate one to use.
> > 
> > Michael
> > 
> > On Mon, Jan 18, 2016 at 2:45 PM, Stephen Balukoff  
> > wrote:
> >> Vivek--
> >> 
> >> "Member" in this case refers to an IP address that (probably) lives on a
> >> tenant back-end network. We can't specify just the IP address when talking
> >> to such an IP since tenant subnets may use overlapping IP ranges (ie. in
> >> this case, subnet is required). In the case of the namespace driver and
> >> Octavia, we use the subnet parameter for all members to determine which
> >> back-end networks the load balancing software needs a port on.
> >> 
> >> I think the original use case for making subnet optional was the idea that
> >> sometimes a tenant would like to add a "member" IP that is not part of 
> >> their
> >> tenant networks at all--  this is more than likely an IP address that lives
> >> outside the local cloud. The assumption, then, would be that this IP 
> >> address
> >> should be reachable through standard routing from wherever the load 
> >> balancer
> >> happens to live on the network. That is to say, the load balancer will try
> >> to get to such an IP address via its default gateway, unless it has a more
> >> specific route.
> >> 
> >> As far as I'm aware, this use case is still valid and being asked for by
> >> tenants. Therefore, I'm in favor of making member subnet optional.
> >> 
> >> Stephen
> >> 
> >> On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:
> >>> 
> >>> If member port (IP address) is allocated by neutron, then why do we need
> >>> to specify it explicitly? It can be derived by LBaaS driver implicitly.
> >>> 
> >>> Thanks,
> >>> Vivek
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> 
> >>> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
> >>> 
>  Btw.
>  
>  I am still in favor on associating the subnets to the LB and then not
>  specify them per node at all.
>  
>  -Sam.
>  
>  
>  -Original Message-
>  From: Samuel Bercovici [mailto:samu...@radware.com]
>  Sent: Sunday, January 17, 2016 10:14 AM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
>  optional on member create?
>  
>  +1
>  Subnet should be mandatory
>  
>  The only thing this makes supporting load balancing servers which are not
>  running in the cloud more challenging to support.
>  But I do not see this 

[openstack-dev] [neutron][api] GET call with huge argument list

2016-01-19 Thread Shraddha Pandhe
Hi folks,


I am writing a Neutron extension which needs to take 1000s of network-ids
as argument for filtering. The CURL call is as follows:

curl -i -X GET 
'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: "


The list of net-ids can go up to 1000s. The problem is, with such a large
url, I get the "Request URI too long" error. I don't want to update this
limit as proxies can have their own limits.

What options do I have to send 1000s of network IDs?

1. -d '{}' is not a recommended option for a GET call, and the wsgi Controller
drops the data part when routing the request.

2. Use POST instead of GET? I will need to write the get_ logic
inside create_resource logic for this to work. It's a hack, but complies
with HTTP standard.
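
For what it's worth, option 2 would look roughly like the sketch below. The
"network_ids" body field and the helper function are made up purely for
illustration; this is not an existing API:

    # Rough sketch of option 2: send the filter list in a POST body instead
    # of the query string. The "network_ids" field name is hypothetical.
    import requests

    def get_usage(endpoint, token, network_ids):
        # endpoint e.g. http://hostname:port/neutron/v2.0/extension_name.json
        resp = requests.post(
            endpoint,
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json",
                     "Accept": "application/json"},
            json={"network_ids": network_ids})  # can hold 1000s of IDs
        resp.raise_for_status()
        return resp.json()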
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][[Infra] Gate failure

2016-01-19 Thread Matthew Treinish
On Tue, Jan 19, 2016 at 05:28:46PM -0800, Armando M. wrote:
> Hi neutrinos,
> 
> New week, new gate failure. This time this might be infra related [1]. This
> one fails with [2]. If you know what's going on, spread the word!

Pip 8 was just released and made trying to uninstall distutils-installed
packages fatal (it was previously just a deprecation warning). A cap is already
up:

https://review.openstack.org/#/c/269954/

I'll fast approve it once the gate comes back.

-Matt Treinish

> 
> [1] https://review.openstack.org/#/c/269937/
> [2]
> http://logs.openstack.org/37/269937/1/check/gate-tempest-dsvm-neutron-full/a91b641/logs/devstacklog.txt.gz#_2016-01-20_01_12_41_571


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][[Infra] Gate failure

2016-01-19 Thread Davanum Srinivas
https://review.openstack.org/#/c/269954/ is the plan of action.
@mtreinish is driving it. The plan is to request infra to promote it once
it passes check. This is not a neutron-only break; it breaks all dsvm
jobs.

-- Dims

On Tue, Jan 19, 2016 at 8:28 PM, Armando M.  wrote:
> Hi neutrinos,
>
> New week, new gate failure. This time this might be infra related [1]. This
> one fails with [2]. If you know what's going on, spread the word!
>
> Cheers,
> Armando
>
> [1] https://review.openstack.org/#/c/269937/
> [2]
> http://logs.openstack.org/37/269937/1/check/gate-tempest-dsvm-neutron-full/a91b641/logs/devstacklog.txt.gz#_2016-01-20_01_12_41_571
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking guide meeting] Meeting Tomorrow!

2016-01-19 Thread Edgar Magana
Folks,

After the new year meetings madness, we all have to reload our ICS files for 
the networking-guide meetings:
http://eavesdrop.openstack.org/#Networking_Guide_Team_Meeting


So, after doing that you will notice that our next meeting will be this 
Thursday January 21st at 16:00 UTC. Please, plan to attend it.

Agenda: https://wiki.openstack.org/wiki/Documentation/NetworkingGuide/Meetings

Thanks,

Edgar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-19 Thread Doug Wiegley
It would have to be a POST, but even that will start to have timeout issues. An 
API which requires that kind of input is kind of a bad idea. Perhaps you could 
describe what you’re trying to do?

Thanks,
doug

> On Jan 19, 2016, at 4:59 PM, Shraddha Pandhe  
> wrote:
> 
> Hi folks,
> 
> 
> I am writing a Neutron extension which needs to take 1000s of network-ids as 
> argument for filtering. The CURL call is as follows:
> 
> curl -i -X GET 
> 'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
>  -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H 
> "X-Auth-Token: "
> 
> 
> The list of net-ids can go up to 1000s. The problem is, with such large url, 
> I get the "Request URI too long" error. I don't want to update this limit as 
> proxies can have their own limits.
> 
> What options do I have to send 1000s of network IDs? 
> 
> 1. -d '{}' is not a recommended option for GET call and wsgi Controller drops 
> the data part when routing the request.
> 
> 2. Use POST instead of GET? I will need to write the get_ logic 
> inside create_resource logic for this to work. Its a hack, but complies with 
> HTTP standard.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NEUTRON] Need you help.THANKS!!

2016-01-19 Thread hao li
Hi, everybody. I am a newcomer. First of all, I don't know whether the neutron
contributors can receive this letter. If not, could you tell me how to contact
them? We are a neutron team. We added an ''AC-L2 Mech Driver'' to the ML2
plug-in to support our company's controllers. We also added an ''AC-VPN Service
Driver'' to the VPNaaS plug-in to support our company's controllers. Based on
the spirit of the "four opens", we want to open this code as a sub-project.
Of course, our team tries to make our plugins conform to the specs.
Are you interested in having a look at our code, documents and PPT?

Apologies for the confusion.

Hao Li
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-19 Thread Lingxian Kong
Thanks Doug for letting us know that!

On Wed, Jan 20, 2016 at 6:53 AM, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2016-01-19 09:48:19 +0100:
>> Kyle Mestery wrote:
>> > One question I have is, what should the version for projects be? For
>> > example, for Neutron, M1 was set to 8.0.0.0b1. Should the M2 Neutron
>> > milestone be 8.0.0.0c1? Or 8.0.0.0b2?
>>
>> Good question! It should be X.0.0.0b2, so 8.0.0.0b2 for Neutron.
>> Cheers!
>>
>
> Right, the "b" means "beta" not just "after a". So we'll have a second
> beta, designated by 0b2.
>
> Another question that came up related to adding the milestone tags was
> whether to replace the old 0b1 tag info in the releases repository or to
> add the new one. Please add a new section for 0b2 tags in the
> appropriate deliverable file(s). We want to preserve the full history
> of the tags for documentation.
>
> Thanks,
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NEUTRON] Need you help.THANKS!!

2016-01-19 Thread Henry Gessau
hao li  wrote:
> Hi,everybody.
> I am a new hand.At first,I don't know whether the neutron's contributors can
> receive this letter.
> If not,could you tell me how to contact with them?
> We are a neutron team.we add a ''AC-L2 Mech Driver'' to the ML2 Plug-in to
> support our company controllers.We also add a  ''AC-VPN Service Driver'' to
> the Vpnaas Plug-in to support our company controllers.
> Based on the spirit of the "four open",we want to get these code open by the
> way of sub-project .
> Of course our team try to make our Plugins to conform to the specs. 
> Are you interested to have a look at our codes documents and ppt?
>  
> Apologies for the confusion.
>  
> Hao Li

I think most of what you are asking is answered here:
http://docs.openstack.org/developer/neutron/devref/contribute.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-19 Thread Stephen Balukoff
Michael-- I think you're assuming that adding an external subnet ID means
that the load balancing service will route requests out an interface
with a route to said external subnet. However, the model we have is
actually too simple to convey this information to the load balancing
service. This is because while we know the member's IP and a subnet to
which the load balancing service should connect in order to talk to
said IP, we don't have any kind of actual routing information for the IP
address (like, say, a default route for the subnet).

Consider this not far-fetched example: Suppose a tenant wants to add a
back-end member which is reachable only over a VPN, the gateway for which
lives on a tenant internal subnet. If we had a more feature-rich model to
work with here, the tenant could specify the member IP, the subnet
containing the VPN gateway and the gateway's IP address. In theory the load
balancing service could add local routing rules to make sure that
communication to that member happens on the tenant subnet and gets routed
to the VPN gateway.

If we want to support this use case, then we'd probably need to add an
optional gateway IP parameter to the member object. (And I'd still be in
favor of assuming the subnet_id on the member is optional, and that default
routing should be used if not specified.)

Let me see if I can break down several use cases we could support with this
model. Let's assume the member model contains (among other things) the
following attributes:

ip_address (member IP, required)
subnet_id (member or gateway subnet, optional)
gateway_ip (VPN or other layer-3 gateway that should be used to access the
member_ip; optional)

Expected behaviors:

Scenario 1:
ip_address specified, subnet_id and gateway_ip are None:  Load balancing
service assumes member IP address is reachable through default routing.
Appropriate for members that are not part of the local cloud that are
accessible from the internet.

Scenario 2:
ip_address and subnet_id specified, gateway_ip is None: Load balancing
service assumes it needs an interface on subnet_id to talk directly to the
member IP address. Appropriate for members that live on tenant networks.
member_ip should exist within the subnet specified by subnet_id. This is
the only scenario supported under the current model if we make subnet_id a
required field and don't add a gateway_ip.

Scenario 3:
ip_address, subnet_id and gateway_ip are all specified:  Load balancing
service assumes it needs an interface on subnet_id to talk to the
gateway_ip. Load balancing service should add local routing rule (ie. to
the host and / or local network namespace context of the load balancing
service itself, not necessarily to Neutron or anything) to route any
packets destined for member_ip to the gateway_ip. gateway_ip should exist
within the subnet specified by subnet_id. Appropriate for members that are
on the other side of a VPN links, or reachable via other local routing
within a tenant network or local cloud.

Scenario 4:
ip_address and gateway_ip are specified, subnet_id is None: This is an
invalid configuration.
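
To make the expected behaviors concrete, here is a rough sketch (plain
Python, not actual Octavia or Neutron-LBaaS code) of how a driver might
classify a member under the four scenarios above; the attribute names are
just the ones listed earlier:

    # Rough sketch only: decide how the load balancing service should reach
    # a member, following the four scenarios above.
    def classify_member(ip_address, subnet_id=None, gateway_ip=None):
        if not ip_address:
            raise ValueError("ip_address is required")
        if subnet_id is None and gateway_ip is None:
            # Scenario 1: rely on default routing (e.g. internet-reachable IP).
            return {"reach": "default_routing"}
        if gateway_ip is None:
            # Scenario 2: plug into subnet_id, talk to the member directly.
            return {"reach": "direct", "plug_subnet": subnet_id}
        if subnet_id is not None:
            # Scenario 3: plug into subnet_id, route ip_address via gateway_ip.
            return {"reach": "via_gateway", "plug_subnet": subnet_id,
                    "next_hop": gateway_ip}
        # Scenario 4: gateway_ip without subnet_id is invalid.
        raise ValueError("gateway_ip requires subnet_id")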

So what do y'all think of this? Am I smoking crack with how this should
work?

For what it's worth, I think the "member is on the other side of a VPN"
scenario is not one our customers are champing at the bit to have, so I'm
fine with not supporting that kind of topology if nobody else wants it. I'm
still in favor of making subnet_id optional, as this supports both
Scenarios 1 and 2 above.

Stephen


On Tue, Jan 19, 2016 at 7:09 PM, Brandon Logan 
wrote:

> So it really comes down to driver (or driver's appliance)
> implementation.  Here's some scenarios to consider:
>
> 1) vip on tenant network, members on tenant network
> - if a user wants to add an external IP to this configuration, how do we
> handle that?  If the subnet is optional the it just uses the default
> routing, then it won't ever get external unless the backend
> implementation sets up routing to external from the load balancer.  I
> think this is a bad idea because the tenant would probably want these
> networks isolated.  But if the backend puts a load balancer on it with
> external connectivity, its not as isolated as it was.  So to me, if
> subnet is optional the best choice is to do default routing which
> *SHOULD* fail on default routing.   This of course is something a tenant
> will have to realize.  The good thing about a required subnet_id is that
> the tenant has explicitly stated they wanted external connectivity and
> the backend is not making assumptions as to whether they want it or
> don't.
>
> 2) vip on public network, members on tenant network
> - defaults route should be able to route out to external IPs now so if
> subnet_id is optional it works.  If subnet_id is required then the
> tenant would have to specify the public network again, which is less
> than ideal and also has other issues brought up in this thread.
>
> All other 

Re: [openstack-dev] [Kuryr] Need review help on IPAM patches

2016-01-19 Thread Vikas Choudhary
Hi Team,

I recently rebased https://review.openstack.org/#/c/265744/ and
https://review.openstack.org/#/c/267302/ and since then the gate tests are
failing. I am not able to understand the reason.

http://logs.openstack.org/02/267302/3/check/gate-install-dsvm-kuryr/11de58c/console.html#_2016-01-20_04_48_08_797

2016-01-20 04:48:08.797 | + RETVAL=2
2016-01-20 04:48:08.797 | + '[' 2 -ne 0 ']'
2016-01-20 04:48:08.797 | + echo 'ERROR: the main setup script run by this job failed - exit code: 2'
2016-01-20 04:48:08.797 | ERROR: the main setup script run by this job failed - exit code: 2
2016-01-20 04:48:08.797 | + echo 'please look at the relevant log files to determine the root cause'
2016-01-20 04:48:08.798 | please look at the relevant log files to determine the root cause
2016-01-20 04:48:08.798 | + echo 'Running devstack worlddump.py'
2016-01-20 04:48:08.798 | Running devstack worlddump.py
2016-01-20 04:48:08.798 | + sudo /opt/stack/new/devstack/tools/worlddump.py -

Will appreciate any help or pointers.

-Vikas



On Tue, Jan 19, 2016 at 9:15 AM, Vikas Choudhary  wrote:

> Hi Kuryr Team,
>
> As discussed in last meeting, here i am sharing review requests which are
> related to IPAM. Please have a look:
>
> https://review.openstack.org/#/q/owner:vikaschoudhary16+status:open
>
>- https://review.openstack.org/#/c/265094/
>
>- https://review.openstack.org/#/c/265732/
>- https://review.openstack.org/#/c/265744/
>- https://review.openstack.org/#/c/267302/
>
>
> Thanks & Regards
> Vikas Choudhary
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

2016-01-19 Thread Steve Martinelli

Hmm, looking at:
https://github.com/openstack/keystonemiddleware/compare/4.0.0...4.1.0 the
only change that I can think of that might be the culprit is:
https://github.com/openstack/keystonemiddleware/commit/f27d7f776e8556d976f75d07c99373455106de52

I'll dig into this some more soon, but it might be worth trying things out
with that commit reverted.

stevemar



From:   "Armando M." 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   2016/01/20 01:59 AM
Subject:Re: [openstack-dev] [keystone][neutron][requirements] -
keystonemiddleware-4.1.0 performance regression





On 19 January 2016 at 22:46, Kevin Benton  wrote:
  Hi all,

  We noticed a major jump in the neutron tempest and API test run times
  recently in Neutron. After digging through logstash I found out that it
  first occurred on the requirements bump here:
  https://review.openstack.org/#/c/265697/

  After locally testing each requirements change individually, I found that
  the keystonemiddleware change seems to be the culprit. It almost doubles
  the time it takes to fulfill simple port-list requests in Neutron.

  Armando pushed up a patch here to confirm:
  https://review.openstack.org/#/c/270024/
  Once that's verified, we should probably put a cap on the middleware
  because it's causing the tests to run up close to their time limits.

Kevin,

As usual your analytical skills are to be praised.

I wonder if anyone else is aware of the issue/s, because during the usual
hunting I could see other projects being affected and showing abnormally
high run times of the dsvm jobs.

I am not sure that [1] is the right approach, but it should give us some
data points if executed successfully.

Cheers,
Armando

[1]  https://review.openstack.org/#/c/270024/


  --
  Kevin Benton

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Lei Zhang
Thank you all for the nomination and votes. I will try to maintain my
current level of activity and make Kolla better.

On Wed, Jan 20, 2016 at 12:13 PM, Steven Dake (stdake) 
wrote:

> Lei,
>
> Looks like its unanimous in under 24 hours :)  Welcome to the Kolla core
> reviewer team!  I made the appropriate changes in gerrit.
>
> Regards
> -steve
>
>
> From: Steven Dake 
> Reply-To: "openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, January 19, 2016 at 1:26 AM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in
> English) - jeffrey4l on irc
>
> Hi folks,
>
> I would like to propose Lei Zhang for our core reviewer team.  Count this
> proposal as a +1 vote from me.  Lei has done a fantastic job in his reviews
> over the last 6 weeks and has managed to produce some really nice
> implementation work along the way.  He participates in IRC regularly, and
> has a commitment from his management team at his employer to work full time
> 100% committed to Kolla for the foreseeable future (although things can
> always change in the future :)
>
> Please vote +1 if you approve of Lei for core reviewer, or –1 if wish to
> veto his nomination.  Remember just one –1 vote is a complete veto, so if
> your on the fence, another option is to abstain from voting.
>
> I would like to change from our 3 votes required, as our core team has
> grown, to requiring a simple majority of core reviewers with no veto
> votes.  As we have 9 core reviewers, this means Lei requires 4 more  +1
> votes with no veto vote in the voting window to join the core reviewer team.
>
> I will leave the voting open for 1 week as is the case with our other core
> reviewer nominations until January 26th.  If the vote is unanimous or there
> is a veto vote before January 26th I will close voting.  I'll make
> appropriate changes to gerrit permissions if Lei is voted into the core
> reviewer team.
>
> Thank you for your time in evaluating Lei for the core review team.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jeffrey Zhang
Blog: http://xcodest.me
twitter/weibo: @jeffrey4l
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

2016-01-19 Thread Armando M.
On 19 January 2016 at 22:46, Kevin Benton  wrote:

> Hi all,
>
> We noticed a major jump in the neutron tempest and API test run times
> recently in Neutron. After digging through logstash I found out that it
> first occurred on the requirements bump here:
> https://review.openstack.org/#/c/265697/
>
> After locally testing each requirements change individually, I found that
> the keystonemiddleware change seems to be the culprit. It almost doubles
> the time it takes to fulfill simple port-list requests in Neutron.
>
> Armando pushed up a patch here to confirm:
> https://review.openstack.org/#/c/270024/
> Once that's verified, we should probably put a cap on the middleware
> because it's causing the tests to run up close to their time limits.
>

Kevin,

As usual your analytical skills are to be praised.

I wonder if anyone else is aware of the issue/s, because during the usual
hunting I could see other projects being affected and showing abnormally
high run times of the dsvm jobs.

I am not sure that [1] is the right approach, but it should give us some
data points if executed successfully.

Cheers,
Armando

[1]  https://review.openstack.org/#/c/270024/


> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Steven Dake (stdake)
Lei,

Looks like its unanimous in under 24 hours :)  Welcome to the Kolla core 
reviewer team!  I made the appropriate changes in gerrit.

Regards
-steve


From: Steven Dake
Reply-To: "openstack-dev@lists.openstack.org"
Date: Tuesday, January 19, 2016 at 1:26 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in 
English) - jeffrey4l on irc

Hi folks,

I would like to propose Lei Zhang for our core reviewer team.  Count this 
proposal as a +1 vote from me.  Lei has done a fantastic job in his reviews 
over the last 6 weeks and has managed to produce some really nice 
implementation work along the way.  He participates in IRC regularly, and has a 
commitment from his management team at his employer to work full time 100% 
committed to Kolla for the foreseeable future (although things can always 
change in the future :)

Please vote +1 if you approve of Lei for core reviewer, or -1 if wish to veto 
his nomination.  Remember just one -1 vote is a complete veto, so if your on 
the fence, another option is to abstain from voting.

I would like to change from our 3 votes required, as our core team has grown, 
to requiring a simple majority of core reviewers with no veto votes.  As we 
have 9 core reviewers, this means Lei requires 4 more  +1 votes with no veto 
vote in the voting window to join the core reviewer team.

I will leave the voting open for 1 week as is the case with our other core 
reviewer nominations until January 26th.  If the vote is unanimous or there is 
a veto vote before January 26th I will close voting.  I'll make appropriate 
changes to gerrit permissions if Lei is voted into the core reviewer team.

Thank you for your time in evaluating Lei for the core review team.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-19 Thread Ian Wienand

On 01/20/2016 04:14 PM, Ian Wienand wrote:

On 01/20/2016 12:53 PM, Robert Collins wrote:

I suspect we'll see fallout in unit tests too, once new images are
built.


If the images can build ...


yeah, dib is not happy about this either


Just removing the directory as pip
used to do has been enough to keep things going.


To be clear, what happens is that pip removes the egg-info file and
then overwrites the system installed files.  This is, of course,
unsafe, but we generally get away with it.


Presume we can't remove the system python-* packages for these tools
because other bits of the system rely on it.  We've been down the path
of creating dummy packages before, I think ... that never got very
far.


Another option would be for us to just keep a list of egg-info files
to remove within devstack and more or less do what pip was doing
before.
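
Roughly, that would amount to something like the sketch below (illustrative
Python only, not actual devstack code; devstack would presumably do the
equivalent in shell):

    # Illustrative only: remove the egg-info of a distutils-installed package
    # so a later pip install can over-install it, as pip < 8 effectively did.
    import glob
    import os
    import shutil

    def drop_egg_info(name, site_dir="/usr/lib/python2.7/dist-packages"):
        for path in glob.glob(os.path.join(site_dir, name + "-*.egg-info")):
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)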


Would pip accept maybe an environment flag to restore the old ability
to remove based on the egg-info?  Is it really so bad given what
devstack is doing?


I proposed a revert in [1] which I'm sure people will have opinions
on.

[1] https://github.com/pypa/pip/pull/3389

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

2016-01-19 Thread Kevin Benton
Hi all,

We noticed a major jump in the neutron tempest and API test run times
recently in Neutron. After digging through logstash I found out that it
first occurred on the requirements bump here:
https://review.openstack.org/#/c/265697/

After locally testing each requirements change individually, I found that
the keystonemiddleware change seems to be the culprit. It almost doubles
the time it takes to fulfill simple port-list requests in Neutron.

Armando pushed up a patch here to confirm:
https://review.openstack.org/#/c/270024/
Once that's verified, we should probably put a cap on the middleware
because it's causing the tests to run up close to their time limits.
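
For illustration, a cap would be a one-line change to the requirements list,
something along the lines of the following (exact bounds still to be decided):

    keystonemiddleware>=4.0.0,<4.1.0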

-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-01-19 Thread Mars Ma
hi Vikas,

Thanks for your reply. I tried your method and it looks like the previous
problem disappeared,
but a new problem came up:

$ sudo docker network create -d kuryr --ipam-driver=kuryr kuryr
Error response from daemon: failed to allocate gateway (): invalid CIDR
address: /24

$ neutron subnetpool-list
+--------------------------------------+-------+-------------------+-------------------+------------------+
| id                                   | name  | prefixes          | default_prefixlen | address_scope_id |
+--------------------------------------+-------+-------------------+-------------------+------------------+
| 3c52c9dd-579e-4648-8ea7-e2af059d2914 | kuryr | [u'10.10.1.0/24'] | 24                |                  |
+--------------------------------------+-------+-------------------+-------------------+------------------+

It got an invalid gateway CIDR address ("/24"), and I don't know why.

Thanks you & Best regards !
Mars Ma
-

On Wed, Jan 20, 2016 at 11:53 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

> Hi Mars,
>
> Please use "--ipam-driver=kuryr" also. Kuryr has its own ipam driver.
>
> Please refer this also:
> https://github.com/openstack/kuryr/blob/master/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking
>
> Hope this helps.
>
> -Vikas
>
> On Wed, Jan 20, 2016 at 9:01 AM, Mars Ma  wrote:
>
>> hi,
>>
>> I used the devstack to deploy kuryr to integrate openstack neutron and
>> docker, but encounter some errors like:
>>
>> $ sudo docker network create -d kuryr kuryr
>> Error response from daemon: failed to parse pool request for address
>> space "GlobalDefault" pool "" subpool "": cannot find address space
>> GlobalDefault (most likely the backing datastore is not configured)
>>
>> $ ./scripts/run_kuryr.sh
>>  * Running on http://0.0.0.0:2377/ (Press CTRL+C to quit)
>>  * Restarting with stat
>>  * Debugger is active!
>>  * Debugger pin code: 451-362-807
>>
>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /Plugin.Activate HTTP/1.1" 200
>> -
>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /NetworkDriver.GetCapabilities
>> HTTP/1.1" 200 -
>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST
>> /IpamDriver.GetDefaultAddressSpaces HTTP/1.1" 200 -
>>
>> any comment is appreciated ?
>>
>> Thanks you & Best regards !
>> Mars Ma
>> -
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-01-19 Thread Vikas Choudhary
Hi Mars,

Your code seems to be missing this fix:
https://review.openstack.org/#/c/265732/

Please try with this and let us know if any issues further.

Thanks
-Vikas


On Wed, Jan 20, 2016 at 11:41 AM, Mars Ma  wrote:

> hi Vikas,
>
> Thanks for your reply, I tried your method, looks like that the previous
> problem disappear,
> but a new problem came out:
>
> $ sudo docker network create -d kuryr --ipam-driver=kuryr kuryr
> Error response from daemon: failed to allocate gateway (): invalid CIDR
> address: /24
>
> $ neutron subnetpool-list
>
> +--+---+---+---+--+
> | id   | name  | prefixes  |
> default_prefixlen | address_scope_id |
>
> +--+---+---+---+--+
> | 3c52c9dd-579e-4648-8ea7-e2af059d2914 | kuryr | [u'10.10.1.0/24'] | 24
>  |  |
>
> +--+---+---+---+--+
>
> it got invalid gateway CIDR address:  /24 , I don't know why
>
> Thanks you & Best regards !
> Mars Ma
> -
>
> On Wed, Jan 20, 2016 at 11:53 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi Mars,
>>
>> Please use "--ipam-driver=kuryr" also. Kuryr has its own ipam driver.
>>
>> Please refer this also:
>> https://github.com/openstack/kuryr/blob/master/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking
>>
>> Hope this helps.
>>
>> -Vikas
>>
>> On Wed, Jan 20, 2016 at 9:01 AM, Mars Ma  wrote:
>>
>>> hi,
>>>
>>> I used the devstack to deploy kuryr to integrate openstack neutron and
>>> docker, but encounter some errors like:
>>>
>>> $ sudo docker network create -d kuryr kuryr
>>> Error response from daemon: failed to parse pool request for address
>>> space "GlobalDefault" pool "" subpool "": cannot find address space
>>> GlobalDefault (most likely the backing datastore is not configured)
>>>
>>> $ ./scripts/run_kuryr.sh
>>>  * Running on http://0.0.0.0:2377/ (Press CTRL+C to quit)
>>>  * Restarting with stat
>>>  * Debugger is active!
>>>  * Debugger pin code: 451-362-807
>>>
>>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /Plugin.Activate HTTP/1.1"
>>> 200 -
>>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST
>>> /NetworkDriver.GetCapabilities HTTP/1.1" 200 -
>>> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST
>>> /IpamDriver.GetDefaultAddressSpaces HTTP/1.1" 200 -
>>>
>>> any comment is appreciated ?
>>>
>>> Thanks you & Best regards !
>>> Mars Ma
>>> -
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC

2016-01-19 Thread Vikram Choudhary
+1 for Kyle's suggestion

Thanks
Vikram

On Wed, Jan 20, 2016 at 2:40 AM, Martin Hickey 
wrote:

> Hi,
>
> +1 for me on Kyle's suggestion.
>
> Regards,
> Martin
>
> [image: Inactive hide details for Kyle Mestery ---19/01/2016 16:39:05---On
> Tue, Jan 19, 2016 at 10:14 AM, Ihar Hrachyshka  ---19/01/2016 16:39:05---On Tue, Jan 19, 2016 at 10:14 AM, Ihar Hrachyshka <
> ihrac...@redhat.com> wrote: > Rossella Sblendido
>
> From: Kyle Mestery 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 19/01/2016 16:39
> Subject: Re: [openstack-dev] [Neutron] Team meeting on Tuesday 1400UTC
> --
>
>
>
> On Tue, Jan 19, 2016 at 10:14 AM, Ihar Hrachyshka 
> wrote:
> > Rossella Sblendido  wrote:
> >
> >>
> >>
> >> On 01/19/2016 04:36 PM, Miguel Angel Ajo Pelayo wrote:
> >>>
> >>> Thinking of this, I had another idea, a bit raw yet.
> >>>
> >>> But how does it sound to have two meetings a week, one in a EU/ASIA
> >>> friendlier
> >>> timezone, and another for USA/AU (current one), with different chairs.
> >>>
> >>> We don't impose unnatural-working hours (too early, too late for
> family,
> >>> etc..)
> >>> to anyone, we encourage gathering as a community (may be split by
> >>> timezones, but
> >>> it feels more human and faster than ML conversations..) and also people
> >>> able
> >>> to make to both, could serve as bridges for both meetings.
> >>>
> >>>
> >>> Thoughts?
> >>
> >>
> >> I think that is what Kyle was proposing and if I am not wrong that's
> what
> >> they do in nova.
> >
> >
> > My understanding is that Kyle proposed to switch back to bi-weekly
> > alternating meetings, and have a separate chair for each.
> >
> > I think Kyle’s suggestion is wiser since it won’t leave the community
> split
> > into two separate parts, and it won’t waste two hours each week where we
> > could make it with just one.
> >
> Yes, I was proposing two bi-weekly meetings with different chairs. We
> could even have kept the existing schedule and just had a different
> chair for the 1400UTC meeting on Tuesday.
>
> > Ihar
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-19 Thread Ian Wienand

On 01/20/2016 12:53 PM, Robert Collins wrote:

I suspect we'll see fallout in unit tests too, once new images are
built.


If the images can build ...

This was marked as deprecated, I understand, but the removal is very
unfortunate [1] considering it's really just a
shoot-yourself-in-the-foot operation.

From the latest runs, on ubuntu we are using pip to over-install
system packages of

 six
 requests
 netaddr
 PyYAML
 PyOpenSSL
 jsonpointer
 urllib3
 PyYAML
 pyOpenSSL

On CentOS it is

 requests
 PyYAML
 enum34
 ipaddress
 numpy

The problem is that we can't remove these system packages with the
package-manager from the base images, because other packages we need
rely on having them installed.  Just removing the directory as pip
used to do has been enough to keep things going.

So, what to do?  We can't stay at pip < 8 forever, because I'm sure
there will be some pip problem we need to patch soon enough.

Presume we can't remove the system python-* packages for these tools
because other bits of the system rely on it.  We've been down the path
of creating dummy packages before, I think ... that never got very
far.

I really don't know how with the world of devstack plugins we'd deploy
a strict global virtualenv.  Heaven knows what "creative" things
plugins are going to come up with if someone hits this (not that I've
proposed anything elegant)...

Would pip accept maybe an environment flag to restore the old ability
to remove based on the egg-info?  Is it really so bad given what
devstack is doing?

-i

[1] 
https://github.com/pypa/pip/commit/6afc718307fea36b9ffddd376c1395ee1061795c


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-01-19 Thread Mars Ma
hi,

I used devstack to deploy kuryr to integrate openstack neutron and
docker, but encountered some errors like:

$ sudo docker network create -d kuryr kuryr
Error response from daemon: failed to parse pool request for address space
"GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault
(most likely the backing datastore is not configured)

$ ./scripts/run_kuryr.sh
 * Running on http://0.0.0.0:2377/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: 451-362-807

127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /Plugin.Activate HTTP/1.1" 200 -
127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /NetworkDriver.GetCapabilities
HTTP/1.1" 200 -
127.0.0.1 - - [14/Jan/2016 09:23:03] "POST
/IpamDriver.GetDefaultAddressSpaces HTTP/1.1" 200 -

any comment is appreciated ?

Thanks you & Best regards !
Mars Ma
-
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Failed to create network with kuryr driver type

2016-01-19 Thread Vikas Choudhary
Hi Mars,

Please use "--ipam-driver=kuryr" also. Kuryr has its own ipam driver.

Please refer this also:
https://github.com/openstack/kuryr/blob/master/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking

Hope this helps.

-Vikas

On Wed, Jan 20, 2016 at 9:01 AM, Mars Ma  wrote:

> hi,
>
> I used the devstack to deploy kuryr to integrate openstack neutron and
> docker, but encounter some errors like:
>
> $ sudo docker network create -d kuryr kuryr
> Error response from daemon: failed to parse pool request for address space
> "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault
> (most likely the backing datastore is not configured)
>
> $ ./scripts/run_kuryr.sh
>  * Running on http://0.0.0.0:2377/ (Press CTRL+C to quit)
>  * Restarting with stat
>  * Debugger is active!
>  * Debugger pin code: 451-362-807
>
> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /Plugin.Activate HTTP/1.1" 200 -
> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST /NetworkDriver.GetCapabilities
> HTTP/1.1" 200 -
> 127.0.0.1 - - [14/Jan/2016 09:23:03] "POST
> /IpamDriver.GetDefaultAddressSpaces HTTP/1.1" 200 -
>
> any comment is appreciated ?
>
> Thanks you & Best regards !
> Mars Ma
> -
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] ceilometer Floatingip Pollster change

2016-01-19 Thread Pradeep Kilambi
The Ceilometer floatingip pollster currently polls the nova api periodically
to get the floating ip info. But due to limitations in the nova api, as listed
in this bug [1], the data ceilometer receives isn't all that useful: nova
doesn't return this info for all tenants. There was some work in the Juno time
frame, but it was reverted because the nova api logs were being spammed [2].

Due to these concerns, the proposal now is to use the neutron api to get
this data as proposed in this patch[3]. What we would like to know is,


1. Is the data gathered from the current floating ip pollster being used? If
so, how and in what context?

2. This might only be an issue for the pure nova-network scenario, but even
there I'm not sure how useful the data we currently gather is.

3. Are there any other concerns regarding this change that we should
discuss?
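
For context, below is a rough sketch of what pulling this data from the
neutron API amounts to. This is illustrative only, not the code in [3]; the
real pollster would use ceilometer's configured service credentials rather
than building a client like this:

    # Illustrative only: list floating IPs via the neutron API. With admin
    # credentials this returns entries for all tenants, which is the gap in
    # the nova-api based approach.
    from neutronclient.v2_0 import client as neutron_client

    def list_floating_ips(auth_url, username, password, tenant_name):
        neutron = neutron_client.Client(username=username,
                                        password=password,
                                        tenant_name=tenant_name,
                                        auth_url=auth_url)
        return neutron.list_floatingips().get('floatingips', [])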

Any feedback appreciated,


Thanks,
~ Pradeep

[1] https://bugs.launchpad.net/nova/+bug/1402514

[2] http://lists.openstack.org/pipermail/openstack-dev/2014-June/037304.html

[3] https://review.openstack.org/#/c/269369/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Scheduler EDP patch request to be merged

2016-01-19 Thread lu jander
Hi, List



Thanks for the reminder to add release notes for new features.



I have add the release notes for scheduling EDP jobs
https://review.openstack.org/#/c/268881/





https://review.openstack.org/#/c/182310/45

This patch has been under review for a long time and has passed the integration
test for scheduler EDP jobs, so I request that it be merged before mitaka-2.



Thx
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NEUTRON] Need you help.THANKS!!

2016-01-19 Thread rzang_openstack
I do not know whether this fits your case, but here is the process for creating
an OpenStack project:
http://docs.openstack.org/infra/manual/creators.html
On January 20, 2016 at 10:16, "hao li" wrote:
Hi,everybody.
I am a new hand.At first,I don't know whether the neutron's contributors can 
receive this letter.
If not,could you tell me how to contact with them?
We are a neutron team.we add a ''AC-L2 Mech Driver'' to the ML2 Plug-in to 
support our company controllers.We also add a  ''AC-VPN Service Driver'' to the 
Vpnaas Plug-in to support our company controllers.
Based on the spirit of the "four open",we want to get these code open by the 
way of sub-project .
Of course our team try to make our Plugins to conform to the specs. 
Are you interested to have a look at our codes documents and ppt?
 
Apologies for the confusion.
 
Hao Li__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] pip 8 no longer over-installs system packages [was: Gate failure]

2016-01-19 Thread Andreas Jaeger
Now docs, pep8, and python27 are broken as well here:
https://review.openstack.org/#/c/268687/

Andreas

On 2016-01-20 06:14, Ian Wienand wrote:
> On 01/20/2016 12:53 PM, Robert Collins wrote:
>> I suspect we'll see fallout in unit tests too, once new images are
>> built.
> 
> If the images can build ...
> 
> This was marked as deprecated, I understand, but the removal is very
> unfortunate [1] considering it's really just a
> shoot-yourself-in-the-foot operation.
> 
> From the latest runs, on ubuntu we are using pip to over-install
> system packages of
> 
>  six
>  requests
>  netaddr
>  PyYAML
>  PyOpenSSL
>  jsonpointer
>  urllib3
>  PyYAML
>  pyOpenSSL
> 
> On CentOS it is
> 
>  requests
>  PyYAML
>  enum34
>  ipaddress
>  numpy
> 
> The problem is that we can't remove these system packages with the
> package-manager from the base images, because other packages we need
> rely on having them installed.  Just removing the directory as pip
> used to do has been enough to keep things going.
> 
> So, what to do?  We can't stay at pip < 8 forever, because I'm sure
> there will be some pip problem we need to patch soon enough.
> 
> Presume we can't remove the system python-* packages for these tools
> because other bits of the system rely on it.  We've been down the path
> of creating dummy packages before, I think ... that never got very
> far.
> 
> I really don't know how with the world of devstack plugins we'd deploy
> a strict global virtualenv.  Heaven knows what "creative" things
> plugins are going to come up with if someone hits this (not that I've
> proposed anything elegant)...
> 
> Would pip accept maybe a environment flag to restore the old ability
> to remove based on the egg-info?  Is it really so bad given what
> devstack is doing?
> 
> -i
> 
> [1]
> https://github.com/pypa/pip/commit/6afc718307fea36b9ffddd376c1395ee1061795c
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Akihiro Motoki
I agree that the current definition can be improved.

"Provider Network" vs "Self service network" highlights who can
provision a network.

In my understanding, "Provider Network" is a network provisioned by
the cloud operator. In practice the operator cannot provision a separate network
for every tenant, so a single provider network is shared by tenants.

On the other hand, the "Self-service network" scenario allows OpenStack users
to provision their own networks.

In the "provider network" scenario, a single network is shared by
multiple tenants, and network-related Neutron API calls should be
disallowed for tenants. It is reasonable to disallow tenants from
provisioning routers, firewalls or VPNs as well. LBaaS can still be used.

I hope this helps improve the text.

Akihiro


2016-01-19 16:33 GMT+09:00 Andreas Scheuring :
> Hi everybody,
>
> I stumbled over a definition that explains the difference between a
> Provider network and a self service network. [1]
>
> To summarize it says:
> - Provider Network: primarily uses layer2 services and vlan segmentation
> and cannot be used for advanced services (fwaas,..)
> - Self-service Network: is Neutron configured to use a overlay network
> and supports advanced services (fwaas,..)
>
> But my understanding is more like this:
> - Provider Network: The Openstack user needs information about the
> underlying network infrastructure to create a virtual network that
> exactly matches this infrastructure.
>
> - Self service network: The Openstack user can create virtual networks
> without knowledge about the underlying infrastructure on the data
> network. This can also include vlan networks, if the l2 plugin/agent was
> configured accordingly.
>
>
> Did the meaning of a provider network change in the meantime, or is my
> understanding just wrong?
>
> Thanks!
>
>
>
>
> [1]
> http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4
>
>
> --
> -
> Andreas (IRC: scheuran)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-19 Thread Premysl Kouril
Hi Matt,

thanks for letting me know; we will definitely reach out to you if we
start some activity in this area.

To answer your question: the main reasons for LVM are simplicity and
performance. Our benchmarks suggest that LVM behaviour when processing
many IOPS (tens of thousands) is more stable than when a filesystem is
used as the backend. Also, a filesystem is generally a heavier and more
complex technology than LVM, and we wanted to stay as simple as possible
on the IO datapath - to make everything (maintaining, tuning,
configuring) easier.

Do you see this as reasonable argumentation? Do you see some major
benefits of file-based backend over the LVM one?

Cheers,
Prema

On Tue, Jan 19, 2016 at 12:18 PM, Matthew Booth  wrote:
> Hello, Premysl,
>
> I'm not working on these features, however I am working in this area of code
> implementing the libvirt storage pools spec. If anybody does start working
> on this, please reach out to coordinate as I have a bunch of related
> patches. My work should also make your features significantly easier to
> implement.
>
> Out of curiosity, can you explain why you want to use LVM specifically over
> the file-based backends?
>
> Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread John Belamaric
Yes, I think of it as:

A provider network in OpenStack is simply a record specifying the necessary 
details of the underlying infrastructure so that OpenStack can utilize it. The 
actual networking services (layer 2 and 3 forwarding, for example) are provided 
by the infrastructure and configured independently.
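
For concreteness, those "necessary details" are expressed through the
provider extension attributes. A minimal sketch with python-neutronclient
(credentials, physical network name and VLAN ID are only examples):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # The provider:* attributes map the Neutron network onto existing
    # infrastructure -- here an 802.1q VLAN 101 trunked on physnet1.
    neutron.create_network({'network': {
        'name': 'provider-vlan-101',
        'shared': True,
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 101,
    }})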

John

> On Jan 19, 2016, at 4:32 AM, Neil Jerram  wrote:
> 
> On 19/01/16 07:36, Andreas Scheuring wrote:
>> Hi everybody, 
>> 
>> I stumbled over a definition that explains the difference between a
>> Provider network and a self service network. [1] 
> 
> I've also spent time trying to understand this, so am happy to offer
> that understanding here (for checking?)...
> 
> I believe the _definition_ of a 'provider' network is that it is a
> network provisioned by the cloud operator - as opposed to 'tenant'
> networks that are provisioned by non-admin tenants aka users aka projects.
> 
> (I've not seen the term 'Self service' before, but presumably it means
> what I'm calling 'tenant'.)
> 
> Corollaries - but not strictly part of the definition - are that:
> 
> - Provider networks typically 'map more closely' in some sense onto the
> cloud's underlying physical network than tenant networks do.  The
> 'provider' API extension - which is usually limited by policy to
> operators only, and hence can only be used with provider networks -
> allows the operator to specify that mapping, for example which VLAN to
> map on to.  Tenant networks are typically implemented with additional
> layers of encapsulation, in comparison with provider networks, in order
> to allow many tenant networks to coexist on the same compute hosts and
> yet be isolatable from each other.
> 
> - Provider networks typically use the real IP address space, whereas
> tenant networks typically use private IP address space so that multiple
> tenant networks can use the same IP addresses.
> 
> The network that is on the external side of a Neutron Router has its
> router:external property True, and also has to be a provider network. 
> Floating IPs come from a subnet that is associated with that provider
> network.
> 
> It's possible to attach VMs directly to a provider network, as well as
> to tenant networks.
> 
>> 
>> To summarize it says:
>> - Provider Network: primarily uses layer2 services
> 
> I don't know what this means.  All networks have a layer 2 somewhere.
> 
>> and vlan segmentation
> 
> Yes, but they don't have to.  A provider network can be 'flat', which
> means that its VM interfaces are bridged onto one of the physical
> interfaces of the compute host (and it is assumed that all hosts'
> physical interfaces are themselves bridged together).  So then any VLAN
> that a VM used would be trunked through the physical network.
> 
>> and cannot be used for advanced services (fwaas,..)
> 
> (I didn't know that, but OK.)
> 
>> - Self-service Network: is Neutron configured to use a overlay network
> 
> Grammar?
> 
>> and supports advanced services (fwaas,..)
>> 
>> 
>> But my understanding is more like this:
>> - Provider Network: The Openstack user needs information about the
>> underlying network infrastructure to create a virtual network that
>> exactly matches this infrastructure. 
> 
> Agreed, if s/user/operator/ and s/virtual//.  OpenStack _users_ cannot
> create provider networks, and I wouldn't call a provider network 'virtual'.
> 
> 
>> 
>> - Self service network: The Openstack user can create virtual networks
>> without knowledge about the underlying infrastructure on the data
>> network. This can also include vlan networks, if the l2 plugin/agent was
>> configured accordingly.
> 
> Agreed.
>> 
>> 
>> Did the meaning of a provider network change in the meantime, or is my
>> understanding just wrong?
>> 
>> Thanks!
>> 
>> 
>> 
>> 
>> [1]
>> http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4
>> 
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] weekly meeting of Jan.20

2016-01-19 Thread joehuang
Hello,

After the stateless design was moved to the master branch, we also moved the
focus back to the master branch.

The weekly meeting will be held at 1300 UTC in #openstack-meeting on Jan. 20.

Agenda:
- Progress of To-do list review: https://etherpad.openstack.org/p/TricircleToDo
- L3 networking N-S.
- Quota management.

Design doc: 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit#

Best Regards
Chaoyi Huang ( Joe Huang )

From: joehuang [joehu...@huawei.com]
Sent: 14 January 2016 17:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [tricircle] move to stateless design in master branch.

Hello,

As the stateless design in the experiment branch has received quite positive feedback,
it has been moved from the experiment branch to the master branch.

You can try it through Devstack: https://github.com/openstack/tricircle

If you find a bug, please feel free to report it at 
https://bugs.launchpad.net/tricircle

You can learn the source code via the BP and spec: 
https://blueprints.launchpad.net/tricircle/+spec/implement-stateless

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage graph design

2016-01-19 Thread AFEK, Ifat (Ifat)
Hi,

I added the Vitrage graph design; you can find it at: 
https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-graph-design.rst

Feel free to comment.

Thanks, 
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Bulk Instance Delete support

2016-01-19 Thread vishal yadav
Hey guys,

Would like to know the plan for support of bulk instance delete feature.
There was a blueprint registered a while ago [1] but its status is not
clear, and there is no corresponding API [2].

[1] https://blueprints.launchpad.net/nova/+spec/bulk-delete-servers
[2] http://developer.openstack.org/api-ref-compute-v2.1.html

Regards,
Vishal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-19 Thread Akihiro Motoki
2016-01-19 18:58 GMT+09:00 Kekane, Abhishek :
>
>
> -Original Message-
> From: Andreas Jaeger [mailto:a...@suse.com]
> Sent: 19 January 2016 15:19
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][clients] Enable hacking in python-*clients
>
> On 2016-01-19 10:44, Abhishek Kekane wrote:
>>> Hi Abishek,
>>
>>> In my understanding, hacking check is enabled for most (or all) of
>>
>>> python-*client.
>>
>>> For example, flake8 is run for each neutronclient review [1].
>>
>>> test-requirements installs hacking, so I believe hacking check is enabled.
>>
>>> openstackclient and novaclient do the same [2] [3].
>>
>>> Am I missing something?
>>
>> Hi Akhiro Motoki,
>>
>> Individual OpenStack projects have separate hacking modules (e.g.
>> nova/hacking/checks.py) which contain additional rules beyond the
>> standard PEP8 errors/warnings.
>>
>> In a similar manner, can we do the same in the python-*clients?
>
> Let's share one common set of rules and not have additional ones in each repo.
> So, if those are useful, propose them for the hacking repo.

Totally agree.

> To answer your questions: Sure, it can be done but why?
>
> Because we can then catch these issues in local environments; also, we can
> add custom checks like:
> 1. use six.string_types instead of basestring
> 2. use dict.items() or six.iteritems(dict) instead of dict.iteritems()
> 3. checks on assertions etc.

If you find hacking rules that you feel are useful across projects,
I would suggest you try to add them to the hacking repo first.

If there is still a reasonable case for adding rules to individual repos,
you can propose them there (though I believe that is rare).

I am not sure there are common rules specific to the python-*client repos,
but that does not seem to be your case as far as I read this thread.

Akihiro
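
For reference, a project-local check of the kind Abhishek lists above
usually looks something like this (a sketch; the N3xx code and the exact
wiring vary per project -- the factory is typically hooked up via the
local-check-factory option in tox.ini):

    import re

    _BASESTRING_RE = re.compile(r"\bbasestring\b")

    def check_no_basestring(logical_line):
        """N326: use six.string_types instead of basestring."""
        if _BASESTRING_RE.search(logical_line):
            yield (0, "N326: use six.string_types instead of basestring")

    def factory(register):
        # hacking/flake8 call this factory and then run every registered
        # check against each logical line of the reviewed code.
        register(check_no_basestring)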


>
> Abhishek
>
> Andreas
> --
>   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
>  GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Jay Pipes

On 01/19/2016 02:33 AM, Andreas Scheuring wrote:

Hi everybody,

I stumbled over a definition that explains the difference between a
Provider network and a self service network. [1]

To summarize it says:
- Provider Network: primarily uses layer2 services and vlan segmentation
and cannot be used for advanced services (fwaas,..)
- Self-service Network: is Neutron configured to use a overlay network
and supports advanced services (fwaas,..)

But my understanding is more like this:
- Provider Network: The Openstack user needs information about the
underlying network infrastructure to create a virtual network that
exactly matches this infrastructure.

- Self service network: The Openstack user can create virtual networks
without knowledge about the underlying infrastructure on the data
network. This can also include vlan networks, if the l2 plugin/agent was
configured accordingly.

Did the meaning of a provider network change in the meantime, or is my
understanding just wrong?


I don't know the answer to the above questions, however in reading some 
of the networking guide last night, I ran into a similar question around 
provider networks.


In the "Scenario: Provider Networks with Linux bridge" document [0], the 
second paragraph has this statement:


"Also, provider networks lack the concept of fixed and floating IP 
addresses because they only handle layer-2 connectivity for instances."


and then, three paragraphs later, this statement is made:

"To improve performance and reliability, provider networks move layer-3 
operations to the physical network infrastructure."


So, which is it exactly? Do provider networks handle layer 3 or don't they?

Best,
-jay

[0] 
http://docs.openstack.org/liberty/networking-guide/scenario_provider_lb.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Neil Jerram
On 19/01/16 07:36, Andreas Scheuring wrote:
> Hi everybody, 
>
> I stumbled over a definition that explains the difference between a
> Provider network and a self service network. [1] 

I've also spent time trying to understand this, so am happy to offer
that understanding here (for checking?)...

I believe the _definition_ of a 'provider' network is that it is a
network provisioned by the cloud operator - as opposed to 'tenant'
networks that are provisioned by non-admin tenants aka users aka projects.

(I've not seen the term 'Self service' before, but presumably it means
what I'm calling 'tenant'.)

Corollaries - but not strictly part of the definition - are that:

- Provider networks typically 'map more closely' in some sense onto the
cloud's underlying physical network than tenant networks do.  The
'provider' API extension - which is usually limited by policy to
operators only, and hence can only be used with provider networks -
allows the operator to specify that mapping, for example which VLAN to
map on to.  Tenant networks are typically implemented with additional
layers of encapsulation, in comparison with provider networks, in order
to allow many tenant networks to coexist on the same compute hosts and
yet be isolatable from each other.

- Provider networks typically use the real IP address space, whereas
tenant networks typically use private IP address space so that multiple
tenant networks can use the same IP addresses.

The network that is on the external side of a Neutron Router has its
router:external property True, and also has to be a provider network. 
Floating IPs come from a subnet that is associated with that provider
network.

It's possible to attach VMs directly to a provider network, as well as
to tenant networks.

>
> To summarize it says:
> - Provider Network: primarily uses layer2 services

I don't know what this means.  All networks have a layer 2 somewhere.

>  and vlan segmentation

Yes, but they don't have to.  A provider network can be 'flat', which
means that its VM interfaces are bridged onto one of the physical
interfaces of the compute host (and it is assumed that all hosts'
physical interfaces are themselves bridged together).  So then any VLAN
that a VM used would be trunked through the physical network.

> and cannot be used for advanced services (fwaas,..)

(I didn't know that, but OK.)

> - Self-service Network: is Neutron configured to use a overlay network

Grammar?

> and supports advanced services (fwaas,..)
>
>
> But my understanding is more like this:
> - Provider Network: The Openstack user needs information about the
> underlying network infrastructure to create a virtual network that
> exactly matches this infrastructure. 

Agreed, if s/user/operator/ and s/virtual//.  OpenStack _users_ cannot
create provider networks, and I wouldn't call a provider network 'virtual'.


>
> - Self service network: The Openstack user can create virtual networks
> without knowledge about the underlying infrastructure on the data
> network. This can also include vlan networks, if the l2 plugin/agent was
> configured accordingly.

Agreed.
>
>
> Did the meaning of a provider network change in the meantime, or is my
> understanding just wrong?
>
> Thanks!
>
>
>
>
> [1]
> http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Review Day results

2016-01-19 Thread Flavio Percoco

Greetings,

The Glance team had a review day yesterday. Patches from glance, glanceclient,
glance_store and glance-specs were reviewed and several of them merged.

I wanted to take the chance to thank everyone who joined and to share some stats
taken from Stackalytics. They might not look impressive compared to some bigger
teams, but it was a great effort that the Glance team should be proud of.

Stats:

Total reviews: 115 (57.5 per day)
Total reviewers: 21 (2.7 per reviewer per day)
Total reviews by core team: 71 (35.5 per day)
Core team size: 7 (5.1 per core per day)

For a great M-2 and a better M-3!
Thanks everyone,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] need your help on bug #1449498

2016-01-19 Thread Zhenyu Zheng
This is a bug: tenants and users are maintained by Keystone while quotas are
stored in the Nova DB, so there are cleanup problems.
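
To see why read_deleted does not help here, a much-simplified sketch of the
filtering it controls (modelled loosely on model_query; names abbreviated):
it only looks at Nova's own soft-delete column, and deleting a user in
Keystone never marks Nova's quota rows as deleted.

    def model_query(context, model, session, read_deleted='no'):
        query = session.query(model)
        if read_deleted == 'no':
            # hide rows soft-deleted in Nova's DB -- unrelated to users
            # that were deleted in Keystone
            query = query.filter(model.deleted == 0)
        elif read_deleted == 'only':
            query = query.filter(model.deleted != 0)
        # read_deleted == 'yes': no filtering at all
        return query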

On Tue, Jan 19, 2016 at 8:12 PM, jialiang_song517 
wrote:

> Hi guys,
>
> I am working on bug #1449498 (the quota of a user that has been deleted is still displayed).
>
> Reproduction steps w/ devstack and Liberty:
> 1) create a tenant bug_test
> 2) create a user test1 in tenant bug_test
> 3) update the instances quota of test1 to 5 (the default instances value
> is 10)
> 4) delete user test1
> 5) query the quota information for user test1 in tenant bug_test
> In step 5, the expected result should indicate that user test1 doesn't exist,
> while nova returned the deleted user test1's quota information with
> instances as 5.
>
> After investigation, it is found that quota_get_all_by_project_and_user()
> and quota_get_all_by_project() invoke
> model_query(context, model, args=None, session=None, use_slave=False,
>             read_deleted=None, project_only=False)
> to query the quota information specified by project or by project & user.
> However, model_query() does not work as expected: even when read_deleted
> is set to 'no', the quota information associated with a deleted user is
> still returned.
>
> I am not sure whether this is designed behaviour or a problem in
> oslo_db. Could you give some guidance on how to proceed? Thanks.
>
> Any other comments are welcome.
>
> Best Regards,
> Jialiang
>
> --
> jialiang_song517
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Emilien Macchi
Hi,

Adding [oslo] tag for more visibility.

On 01/19/2016 05:01 AM, Xingchao Yu wrote:
> Hi,  all:
> 
> Recently I submitted some patches adding rabbit_ha_queues and
> correcting the section name of the memcached_servers params in each module,
> and I found I was just repeating the same work:
> 
>    1. Adding a parameter related to oslo.* or authtoken to
> all puppet modules
>    2. Correcting the section of a parameter, moving it from a deprecated
> section to an oslo_* section, applied to all puppet modules
> 
>  We have more than 30 modules now, which means we have to repeat a change
> 10 or 20 times if we want to make a simple change to common oslo_* configs.
> 
>  Besides, the number of oslo_* sections is growing, for example: 
> 
>- oslo_messaging_amqp
>- oslo_messaging_rabbit
>- oslo_middleware
>- oslo_policy
>- oslo_concurrency
>- oslo_versionedobjects
>...
>  
> Now we maintain these oslo_* parameters separately in each module,
>  and this has led to some problems:
> 
> 1.  oslo_* params are inconsistent across modules
> 2.  common-param explosion in each module
> 3.  no convenient way to manage oslo_* params
> 
> When I was doing some work on keystone::resource::authtoken  
>  (https://review.openstack.org/#/c/266723/)
> 
> Then I had the idea of adding a puppet-oslo project, using a bunch
> of defined resources to unify oslo_* configs across modules.
> 
> I wrote a prototype to show how it works with oslo.cache:
>   
> https://github.com/NewpTone/puppet-oslo/blob/master/manifests/cache.pp
>   
> Please let me know your opinion on the same.

We already talked about this topic during the Vancouver Summit:
https://etherpad.openstack.org/p/liberty-summit-design-puppet

Real output is documented here:
http://my1.fr/blog/puppet-openstack-plans-for-liberty/

And I already initiated some code 8 months ago:
https://github.com/redhat-cip/puppet-oslo

At that time, we decided not to go this way because some OpenStack
projects were sometimes not using the same version of oslo.*.
So it could have led to something like:
"nova using a newer version of oslo.messaging parameters compared to
murano" (that's an example, probably wrong...), so puppet-oslo would
have been risky to use here.
I would like to know from Oslo folks if we can safely configure Oslo
projects the same way during a cycle (e.g. Mitaka, then N, etc.) or if
some projects are using versions of Oslo that are too old, which would
make a consistent configuration across all OpenStack projects impossible.

So indeed, I'm still convinced this topic should be brought back to life.
We would need to investigate with the Oslo team whether it makes sense and
whether we can safely do that for all our modules.
If we get positive feedback, we can create the new module and
refactor our modules to consume puppet-oslo.
It will help a lot in keeping our modules consistent and eventually drop
a lot of duplicated code.

Thoughts?

> 
> Thanks & Regards.
> 
> -- 
>  Xingchao Yu
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-19 Thread Matt Kassawara
No. However, we ought to determine what happens when both DHCP and RA
advertise it.
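
As a rough illustration of the "global MTU value - network encap overhead"
calculation discussed below, a sketch with commonly quoted IPv4 overhead
values (assumptions, not authoritative; the right numbers depend on the
deployment):

    # Per-encapsulation overhead in bytes; VLAN tags live in the Ethernet
    # header, so they do not eat into the payload MTU here.
    ENCAP_OVERHEAD = {'flat': 0, 'vlan': 0, 'gre': 42, 'vxlan': 50}

    def advertised_mtu(physical_mtu, network_type):
        # MTU the DHCP agent could hand to instances on this network.
        return physical_mtu - ENCAP_OVERHEAD[network_type]

    # e.g. a VXLAN tenant network on a 1500-byte physical fabric -> 1450
    print(advertised_mtu(1500, 'vxlan'))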

On Tue, Jan 19, 2016 at 12:36 AM, Kevin Benton  wrote:

> >Yup. We mostly attempt to do that now.
>
> Right, but not by default. Can you think of a scenario where advertising
> it would be harmful?
> On Jan 18, 2016 23:57, "Matt Kassawara"  wrote:
>
>>
>>
>> On Mon, Jan 18, 2016 at 4:14 PM, Kevin Benton  wrote:
>>
>>> Thanks for the awesome writeup.
>>>
>>> >5) A bridge or veth pair with an IP address can participate in path
>>> MTU discovery (PMTUD). However, these devices do not appear to understand
>>> namespaces and originate the ICMP message from the host instead of a
>>> namespace. Therefore, the message never reaches the destination...
>>> typically a host outside of the deployment.
>>>
>>> I suspect this is because we don't put the bridges into namespaces. Even
>>> if we did do this, we would need to allocate IP addresses for every compute
>>> node to use to chat on the network...
>>>
>>
>> Yup. Moving the MTU disparity to the first layer-3 device a packet
>> traverses inbound to a VM saves us from burning IPs too.
>>
>>
>>>
>>>
>>>
>>> >At least for the Linux bridge agent, I think we can address ingress
>>> MTU disparity (to the VM) by moving it to the first device in the chain
>>> capable of layer-3 operations, particularly the neutron router namespace.
>>> We can address the egress MTU disparity (from the VM) by advertising the
>>> MTU of the overlay network to the VM via DHCP/RA or using manual interface
>>> configuration.
>>>
>>> So when setting up DHCP for the subnet, would telling the DHCP agent to
>>> use an MTU we calculate based on (global MTU value - network encap
>>> overhead) achieve what you are suggesting here?
>>>
>>
>> Yup. We mostly attempt to do that now.
>>
>> On Fri, Jan 15, 2016 at 10:41 AM, Sean M. Collins 
 wrote:

> MTU has been an ongoing issue in Neutron for _years_.
>
> It's such a hassle, that most people just throw up their hands and set
> their physical infrastructure to jumbo frames. We even document it.
>
>
> http://docs.openstack.org/juno/install-guide/install/apt-debian/content/neutron-network-node.html
>
> > Ideally, you can prevent these problems by enabling jumbo frames on
> > the physical network that contains your tenant virtual networks.
> Jumbo
> > frames support MTUs up to approximately 9000 bytes which negates the
> > impact of GRE overhead on virtual networks.
>
> We've pushed this onto operators and deployers. There's a lot of
> code in provisioning projects to handle MTUs.
>
> http://codesearch.openstack.org/?q=MTU=nope==
>
> We have mentions to it in our architecture design guide
>
>
> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/arch-design/source/network-focus-architecture.rst#n150
>
> I want to get Neutron to the point where it starts discovering this
> information and automatically configuring, in the optimistic cases. I
> understand that it can be complex and have corner cases, but the issue
> we have today is that it is broken in some multinode jobs, where even Neutron
> developers aren't configuring it correctly.
>
> I also had this discussion on the DevStack side in
> https://review.openstack.org/#/c/112523/
> where basically, sure we can fix it in DevStack and at the gate, but it
> doesn't fix the problem for anyone who isn't using DevStack to deploy
> their cloud.
>
> Today we have a ton of MTU configuration options sprinkled throughout
> the
> L3 agent, dhcp agent, l2 agents, and at least one API extension to the
> REST API for handling MTUs.
>
> So yeah, a lot of knobs and not a lot of documentation on how to make
> this thing work correctly. I'd like to try and simplify.
>
>
> Further reading:
>
>
> http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html
>
> http://lists.openstack.org/pipermail/openstack/2013-October/001778.html
>
>
> https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/
>
>
> https://ask.openstack.org/en/question/12499/forcing-mtu-to-1400-via-etcneutrondnsmasq-neutronconf-per-daniels/
>
>
> http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/
>
> https://twitter.com/search?q=openstack%20neutron%20MTU
>
> --
> Sean M. Collins
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 

Re: [openstack-dev] [TripleO] Deploy Overcloud Keystone in HTTPD

2016-01-19 Thread Emilien Macchi


On 01/18/2016 09:59 PM, Adam Young wrote:
> I have a review here for switching Keystone to HTTPD
> 
> https://review.openstack.org/#/c/269377/

Adam, I think your patch overlaps with my patch:
https://review.openstack.org/#/c/269377

Feel free to take it over if you feel it is missing something.
I haven't worked on it for a long time now, and it will need to be
rebased.

Thanks,

> 
> But I have no idea how to kick off the CI to really test it.  The check
> came back way too quick for it to have done a full install; less than 3
> minutes.  I think it was little more than a lint check.
> 
> How can I get a real sense of if it is this easy or if there is
> something more that needs to be done?
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Ryan Hallisey
+1 nice work!

-Ryan

- Original Message -
From: "Steven Dake (stdake)" 
To: openstack-dev@lists.openstack.org
Sent: Tuesday, January 19, 2016 3:26:38 AM
Subject: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in 
English) - jeffrey4l on irc

Hi folks, 

I would like to propose Lei Zhang for our core reviewer team. Count this 
proposal as a +1 vote from me. Lei has done a fantastic job in his reviews over 
the last 6 weeks and has managed to produce some really nice implementation 
work along the way. He participates in IRC regularly, and has a commitment from 
his management team at his employer to work full time 100% committed to Kolla 
for the foreseeable future (although things can always change in the future :) 

Please vote +1 if you approve of Lei for core reviewer, or –1 if wish to veto 
his nomination. Remember just one –1 vote is a complete veto, so if your on the 
fence, another option is to abstain from voting. 

I would like to change from our 3 votes required, as our core team has grown, 
to requiring a simple majority of core reviewers with no veto votes. As we have 
9 core reviewers, this means Lei requires 4 more +1 votes with no veto vote in 
the voting window to join the core reviewer team. 

I will leave the voting open for 1 week as is the case with our other core 
reviewer nominations until January 26th. If the vote is unanimous or there is a 
veto vote before January 26th I will close voting. I'll make appropriate 
changes to gerrit permissions if Lei is voted into the core reviewer team. 

Thank you for your time in evaluating Lei for the core review team. 

Regards 
-steve 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bulk Instance Delete support

2016-01-19 Thread Michael Still
Heya,

I am not aware of anyone working on this. That said, it's also not clear to
me that this is actually a good idea. Why can't you just loop through the
instances and delete them one at a time?

Michael
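
For what it's worth, that per-instance loop is only a few lines with
python-novaclient (a sketch; credentials are placeholders):

    from novaclient import client

    # version, username, api_key, project, auth_url -- placeholders only
    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    for server in nova.servers.list():
        server.delete()   # each delete is accepted (202) and runs async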

On Wed, Jan 20, 2016 at 12:08 AM, vishal yadav 
wrote:

> Hey guys,
>
> Would like to know the plan for support of bulk instance delete feature.
> There was a blueprint registered a while ago [1] but it's status is not
> clear. No corresponding API [2]
>
> [1] https://blueprints.launchpad.net/nova/+spec/bulk-delete-servers
> [2] http://developer.openstack.org/api-ref-compute-v2.1.html
>
> Regards,
> Vishal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Bogdan Dobrelya
On 19.01.2016 15:31, Bogdan Dobrelya wrote:
> On 19.01.2016 15:19, Akihiro Motoki wrote:
>> I agree that the current definition can be improved.
> 
> Here is a docs bug [0]
> 
> [0] https://bugs.launchpad.net/fuel/+bug/1513421

I pasted a wrong link, sorry. Here is the correct one [0]

[0] https://bugs.launchpad.net/openstack-manuals/+bug/1535744

> 
>>
>> "Provider Network" vs "Self service network" highlights who can
>> provision a network.
>>
>> In my understanding, "Provider Network" is a network provisioned by
>> the cloud operator. Practically the operator cannot provision a network
>> for a tenant, so a single provider network is shared by tenants.
>>
>> On the other hand, "Self-service network" scenario allows OpenStack users
>> to provision their own networks.
>>
>> In the scenario of "provider network", a single network is shared by
>> multiple tenants.
>> and network-related Neutron API calls should be disallowed for tenants.
>> It is reasonable to disallow tenants to provision routers, firewalls
>> or VPNs as well.
>> LBaaS can be used.
>>
>> I hope this helps improve the text.
>>
>> Akihiro
>>
>>
>> 2016-01-19 16:33 GMT+09:00 Andreas Scheuring :
>>> Hi everybody,
>>>
>>> I stumbled over a definition that explains the difference between a
>>> Provider network and a self service network. [1]
>>>
>>> To summarize it says:
>>> - Provider Network: primarily uses layer2 services and vlan segmentation
>>> and cannot be used for advanced services (fwaas,..)
>>> - Self-service Network: is Neutron configured to use a overlay network
>>> and supports advanced services (fwaas,..)
>>>
>>> But my understanding is more like this:
>>> - Provider Network: The Openstack user needs information about the
>>> underlying network infrastructure to create a virtual network that
>>> exactly matches this infrastructure.
>>>
>>> - Self service network: The Openstack user can create virtual networks
>>> without knowledge about the underlaying infrastructure on the data
>>> network. This can also include vlan networks, if the l2 plugin/agent was
>>> configured accordingly.
>>>
>>>
>>> Did the meaning of a provider network change in the meantime, or is my
>>> understanding just wrong?
>>>
>>> Thanks!
>>>
>>>
>>>
>>>
>>> [1]
>>> http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4
>>>
>>>
>>> --
>>> -
>>> Andreas (IRC: scheuran)
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Bulk Instance Delete support

2016-01-19 Thread Sean Dague
I think it's also important to think about what efficiency might be
gained, which isn't much. DELETE is an async action that returns 202. If
you care about those resources or quota so you can build more, you need
to monitor those deletes.

Which means you are going to be looping on servers to determine status.
The bulk operation as fire-and-forget isn't any more efficient on the
server; all the same teardown needs to happen there.
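
Something like this polling helper, for example (a sketch; nova is an
authenticated novaclient instance and server_ids are the instances that
were deleted):

    import time

    from novaclient import exceptions

    def wait_for_deletes(nova, server_ids, poll=2):
        pending = set(server_ids)
        while pending:
            for sid in list(pending):
                try:
                    nova.servers.get(sid)        # still tearing down
                except exceptions.NotFound:
                    pending.discard(sid)         # gone: quota released
            time.sleep(poll)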

On 01/19/2016 09:32 AM, Michael Still wrote:
> Heya,
> 
> I am not aware of anyone working on this. That said, its also not clear
> to me that this is actually a good idea. Why can't you just loop through
> the instances and delete them one at a time?
> 
> Michael
> 
> On Wed, Jan 20, 2016 at 12:08 AM, vishal yadav  > wrote:
> 
> Hey guys,
> 
> Would like to know the plan for support of bulk instance delete
> feature. There was a blueprint registered a while ago [1] but it's
> status is not clear. No corresponding API [2]
> 
> [1] https://blueprints.launchpad.net/nova/+spec/bulk-delete-servers
> [2] http://developer.openstack.org/api-ref-compute-v2.1.html
> 
> Regards,
> Vishal
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Rackspace Australia
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #67

2016-01-19 Thread Emilien Macchi


On 01/18/2016 08:54 AM, Emilien Macchi wrote:
> Hello Puppeteers,
> 
> Tomorrow we will have our weekly meeting at UTC 1500.
> Here is our agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160119
> 
> Feel free to add more topics, reviews, bugs, as usual.
> 
> See you there,

Quick meeting today :-)
You can read notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-01-19-15.00.html

Thanks for your participation,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in English) - jeffrey4l on irc

2016-01-19 Thread Michał Jastrzębski
+1 :)

On 19 January 2016 at 07:28, Ryan Hallisey  wrote:
> +1 nice work!
>
> -Ryan
>
> - Original Message -
> From: "Steven Dake (stdake)" 
> To: openstack-dev@lists.openstack.org
> Sent: Tuesday, January 19, 2016 3:26:38 AM
> Subject: [openstack-dev] [kolla] Nominating Lei Zhang (Jeffrey Zhang in 
> English) - jeffrey4l on irc
>
> Hi folks,
>
> I would like to propose Lei Zhang for our core reviewer team. Count this 
> proposal as a +1 vote from me. Lei has done a fantastic job in his reviews 
> over the last 6 weeks and has managed to produce some really nice 
> implementation work along the way. He participates in IRC regularly, and has 
> a commitment from his management team at his employer to work full time 100% 
> committed to Kolla for the foreseeable future (although things can always 
> change in the future :)
>
> Please vote +1 if you approve of Lei for core reviewer, or –1 if wish to veto 
> his nomination. Remember just one –1 vote is a complete veto, so if your on 
> the fence, another option is to abstain from voting.
>
> I would like to change from our 3 votes required, as our core team has grown, 
> to requiring a simple majority of core reviewers with no veto votes. As we 
> have 9 core reviewers, this means Lei requires 4 more +1 votes with no veto 
> vote in the voting window to join the core reviewer team.
>
> I will leave the voting open for 1 week as is the case with our other core 
> reviewer nominations until January 26th. If the vote is unanimous or there is 
> a veto vote before January 26th I will close voting. I'll make appropriate 
> changes to gerrit permissions if Lei is voted into the core reviewer team.
>
> Thank you for your time in evaluating Lei for the core review team.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-19 Thread Bogdan Dobrelya
On 19.01.2016 15:19, Akihiro Motoki wrote:
> I agree that the current definition can be improved.

Here is a docs bug [0]

[0] https://bugs.launchpad.net/fuel/+bug/1513421

> 
> "Provider Network" vs "Self service network" highlights who can
> provision a network.
> 
> In my understanding, "Provider Network" is a network provisioned by
> the cloud operator. Practically the operator cannot provision a network
> for a tenant, so a single provider network is shared by tenants.
> 
> On the other hand, "Self-service network" scenario allows OpenStack users
> to provision their own networks.
> 
> In the scenario of "provider network", a single network is shared by
> multiple tenants.
> and network-related Neutron API calls should be disallowed for tenants.
> It is reasonable to disallow tenants to provision routers, firewalls
> or VPNs as well.
> LBaaS can be used.
> 
> I hope this helps improve the text.
> 
> Akihiro
> 
> 
> 2016-01-19 16:33 GMT+09:00 Andreas Scheuring :
>> Hi everybody,
>>
>> I stumbled over a definition that explains the difference between a
>> Provider network and a self service network. [1]
>>
>> To summarize it says:
>> - Provider Network: primarily uses layer2 services and vlan segmentation
>> and cannot be used for advanced services (fwaas,..)
>> - Self-service Network: is Neutron configured to use a overlay network
>> and supports advanced services (fwaas,..)
>>
>> But my understanding is more like this:
>> - Provider Network: The Openstack user needs information about the
>> underlying network infrastructure to create a virtual network that
>> exactly matches this infrastructure.
>>
>> - Self service network: The Openstack user can create virtual networks
>> without knowledge about the underlying infrastructure on the data
>> network. This can also include vlan networks, if the l2 plugin/agent was
>> configured accordingly.
>>
>>
>> Did the meaning of a provider network change in the meantime, or is my
>> understanding just wrong?
>>
>> Thanks!
>>
>>
>>
>>
>> [1]
>> http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4
>>
>>
>> --
>> -
>> Andreas (IRC: scheuran)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron: accessing the host network.

2016-01-19 Thread Atif Saeed
Hi All, 

I am doing a rough experiment: I want to access the host network from an
instance's console. Any ideas would help me a lot. 

A. 


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposal of adding puppet-oslo to OpenStack

2016-01-19 Thread Bogdan Dobrelya
On 19.01.2016 11:01, Xingchao Yu wrote:
> Hi,  all:
> 
> Recently I submitted some patches adding rabbit_ha_queues and
> correcting the section name of the memcached_servers params in each module,
> and I found I was just repeating the same work:
> 
>    1. Adding a parameter related to oslo.* or authtoken to
> all puppet modules
>    2. Correcting the section of a parameter, moving it from a deprecated
> section to an oslo_* section, applied to all puppet modules
> 
>  We have more than 30 modules now, which means we have to repeat a change
> 10 or 20 times if we want to make a simple change to common oslo_* configs.
> 
>  Besides, the number of oslo_* sections is growing, for example: 
> 
>- oslo_messaging_amqp
>- oslo_messaging_rabbit
>- oslo_middleware
>- oslo_policy
>- oslo_concurrency
>- oslo_versionedobjects
>...
>  
> Now we maintain these oslo_* parameters separately in each module,
>  and this has led to some problems:
> 
> 1.  oslo_* params are inconsistent across modules
> 2.  common-param explosion in each module
> 3.  no convenient way to manage oslo_* params
> 
> When I was doing some work on keystone::resource::authtoken  
>  (https://review.openstack.org/#/c/266723/)
> 
> Then I had the idea of adding a puppet-oslo project, using a bunch
> of defined resources to unify oslo_* configs across modules.
> 
> I wrote a prototype to show how it works with oslo.cache:
>   
> https://github.com/NewpTone/puppet-oslo/blob/master/manifests/cache.pp
>   
> Please let me know your opinion on the same.

I liked the idea very much! And the oslo.cache PoC looks simple and elegant.

> 
> Thanks & Regards.
> 
> -- 
>  Xingchao Yu
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Deploy Overcloud Keystone in HTTPD

2016-01-19 Thread Adam Young

On 01/19/2016 07:54 AM, Emilien Macchi wrote:


On 01/18/2016 09:59 PM, Adam Young wrote:

I have a review here for switching Keystone to HTTPD

https://review.openstack.org/#/c/269377/

Adam, I think your patch overlaps with my patch:
https://review.openstack.org/#/c/269377


Yep.  I wanted to test out just the Overcloud subset.  I'll abandon 
mine; CI ran.




Feel free to take over it if you feel like it miss something.
I haven't worked on it since lot of time now, and it will need to be
rebased.

Thanks,


But I have no idea how to kick off the CI to really test it.  The check
came back way too quick for it to have done a full install; less than 3
minutes.  I think it was little more than a lint check.

How can I get a real sense of if it is this easy or if there is
something more that needs to be done?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >