[openstack-dev] [neutron][SFC]

2016-06-03 Thread Alioune
Problem with OpenStack SFC
Hi all,
I've installed OpenStack SFC with devstack and all modules are running
correctly except the neutron L2-agent.

After a "screen -rd", it seems that there is a conflict between the l2-agent
and SFC (see the trace below).
I solved the issue by running "sudo ovs-vsctl set bridge <bridge>
protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13" on every Open vSwitch
bridge (br-int, br-ex, br-tun and br-mgmt0).
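For anyone hitting the same thing, here is a minimal sketch of that workaround
(Python; the bridge names are the ones from my setup, so adjust to your
environment):

import subprocess

# Enable OpenFlow 1.0-1.3 on every bridge so both the L2 agent and
# networking-sfc (which calls ovs-ofctl with "-O openflow13") can talk to them.
BRIDGES = ["br-int", "br-ex", "br-tun", "br-mgmt0"]

for bridge in BRIDGES:
    subprocess.check_call([
        "sudo", "ovs-vsctl", "set", "bridge", bridge,
        "protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13",
    ])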
I would like to know:
  - Does anyone know why this error arises?
  - Is there another way to solve it?

Regards,

2016-06-03 12:51:56.323 WARNING
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead.
OVSNeutronAgent will keep running and checking OVS status periodically.
2016-06-03 12:51:56.330 DEBUG
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop -
iteration:4722 completed. Processed ports statistics: {'regular':
{'updated': 0, 'added': 0, 'removed': 0}}. Elapsed:0.086 from (pid=12775)
loop_count_and_wait
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1680
2016-06-03 12:51:58.256 DEBUG
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop -
iteration:4723 started from (pid=12775) rpc_loop
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1732
2016-06-03 12:51:58.258 DEBUG neutron.agent.linux.utils
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Running command
(rootwrap daemon): ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int',
'table=23'] from (pid=12775) execute_rootwrap_daemon
/opt/stack/neutron/neutron/agent/linux/utils.py:101
2016-06-03 12:51:58.311 ERROR neutron.agent.linux.utils
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
Exit code: 1
Stdin:
Stdout:
Stderr:
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt:
version negotiation failed (we support version 0x04, peer supports version
0x01)
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 12:51:58.323 ERROR
networking_sfc.services.sfc.common.ovs_ext_lib
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
Exit code: 1
Stdin:
Stdout:
Stderr:
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt:
version negotiation failed (we support version 0x04, peer supports version
0x01)
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Traceback (most recent call
last):
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib   File
"/opt/stack/networking-sfc/networking_sfc/services/sfc/common/ovs_ext_lib.py",
line 125, in run_ofctl
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib
process_input=process_input)
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib   File
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib raise RuntimeError(m)
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib RuntimeError:
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Command: ['ovs-ofctl', '-O
openflow13', 'dump-flows', 'br-int', 'table=23']
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Exit code: 1
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Stdin:
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Stdout:
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib Stderr:
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt:
version negotiation failed (we support version 0x04, peer supports version
0x01)
2016-06-03 12:51:58.323 TRACE
networking_sfc.services.sfc.common.ovs_ext_lib ovs-ofctl: br-int: failed to
connect to socket (Broken pipe)
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
2016-06-03 12:51:58.335 ERROR
networking_sfc.services.sfc.common.ovs_ext_lib
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Unable to execute
['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23'].
2016-06-03 12:51:58.337 WARNING
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead.
OVSNeutronAgent will keep running and checking OVS status periodically.
2016-06-03 12:51:58.341 DEBUG
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None 

Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-03 Thread Rich Megginson

On 06/03/2016 01:34 AM, Sergii Golovatiuk wrote:

I would vote for POSM - "Puppet OpenStack Modules"



+1 - possum, American slang for the animal "opossum"


--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Jun 1, 2016 at 7:25 PM, Cody Herriges > wrote:



> On Jun 1, 2016, at 5:56 AM, Dmitry Tantsur > wrote:
>
> On 06/01/2016 02:20 PM, Jason Guiditta wrote:
>> On 01/06/16 18:49 +0800, Xingchao Yu wrote:
>>>  Hi, everyone:
>>>
>>>  Do we need to give an abbreviation for the PuppetOpenstack
>>>  project? Because it's really a long name when I introduce this
>>>  project to people or write articles about it.
>>>
>>>  How about POM(PuppetOpenstack Modules) or POP(PuppetOpenstack
>>>  Project) ?
>>>
>>>  I would like +1 for POM.
>>>  Just an idea, please feel free to give your comment :D
>>>  Xingchao Yu
>>
>> For RDO and OSP, we package it as 'openstack-puppet-modules',
>> or OPM for short.
>
> I definitely love POM as it reminds me of pomeranians :) but I
> agree that OPM will probably be more easily recognizable.

The project's official name is in fact "Puppet OpenStack" so OPM
would be kinda confusing.  I'd put my vote on POP because it is
closer to the actual definition of an acronym[1], which I
generally find easier to remember overall when it comes to the
shortening of long phrases.

[1] http://www.merriam-webster.com/dictionary/acronym

--
Cody


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible] Splitting out generic testing components

2016-06-03 Thread Andy McCrae
TL;DR We're doing a PoC to split out common testing plays/vars into a
generic repository, for the purposes of making in-role testing more uniform
and generic. Looking for feedback, comments, informal reviews, ideas etc!
https://github.com/andymcc/openstack-ansible-testing-core (Includes a
simple README)

We now have a lot of duplication after moving to a single role per
repository structure with specific testing per role. For example, almost
all repositories require Galera/Rabbit/Keystone in order to deploy testing
successfully. This has led to a scenario where each repository essentially
carries the same deployment code.


Aims:
- The primary aim of extracting the testing infrastructure into a single
repository is to reduce the cases where a simple change is needed which then
dominoes into a patch to each of 15 repositories. Instead, a change to the
uniform testing repository would resolve the issue for all other
repositories' testing.
- In the future, we are looking to deploy scenario testing. For example, we
may want to test glance with a swift backend, or keystone with memcache. If
the testing play to deploy swift is already in a uniform repository, the
changes required to get glance testing enabled for that use case would be
minimal.
- This allows new projects to consume existing structure/playbooks to
deploy common components and vars, which should simplify the manner in
which new openstack-ansible projects set up their testing.


Steps taken so far:
- The deployment plays for each project have been split out into the
separate testing role.
- Each role only carries a specific "Test project" play.
- The test playbooks have been made generic so they use the inventory
specified per repository (defining what hosts/roles there are).
- The test-vars have been put in the testing-repository and moved out of
the generic role.
- An override file has been created for each project and included using
"-e" (the highest precedence) but at present, of the 4 projects converted
the maximum number of overrides used is 2, so these overrides are minimal.
- Adjustments to tox.ini and var file references have been made to use the
new repository.


Further work to look into:
- It may be worth looking into making the tox.ini more generic; if we
were to make a sweeping change that impacts tox.ini we would still need
to make changes to each repository. (I am not sure how feasible this is,
though.)


Raised concerns:
- This creates a situation where a change may require me to make a change
to the central testing repository before changing the role repository. For
example, in order to get the generic testing for a keystone change I would
have to change the testing repository in advance, and then change the
keystone repository. Or override the var, adjust the testing repo and then
remove the keystone override.
- Changes to the testing repo can cause all other repos' tests (aside from
the overarching openstack-ansible repository) to break.


Where to find the PoC, what next?

The repository can be found here:
https://github.com/andymcc/openstack-ansible-testing-core

This is a proof of concept, so in no way is it considered complete or
perfect. We are asking for more eyes on this: test it, and let us know what
you like/do not like/want changed, plus any additional ideas to improve it.

Thanks!

Andy
irc: andymccr
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Lance Bragstad
On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  wrote:

>
> On 3 Jun 2016, at 01:22, Adam Young  wrote:
>
> On 06/02/2016 07:22 PM, Henry Nash wrote:
>
> Hi
>
> As you know, I have been working on specs that change the way we handle
> the uniqueness of project names in Newton. The goal of this is to better
> support project hierarchies, which as they stand today are restrictive in
> that all project names within a domain must be unique, irrespective of
> where in the hierarchy that projects sits (unlike, say, the unix directory
> structure where a node name only has to be unique within its parent). Such
> a restriction is particularly problematic when enterprises start modelling
> things like test, QA and production as branches of a project hierarchy,
> e.g.:
>
> /mydivsion/projectA/dev
> /mydivsion/projectA/QA
> /mydivsion/projectA/prod
> /mydivsion/projectB/dev
> /mydivsion/projectB/QA
> /mydivsion/projectB/prod
>
> Obviously the idea of a project name (née tenant) being unique has been
> around since near the beginning of (OpenStack) time, so we must be
> cautious. There are two alternative specs proposed:
>
> 1) Relax project name constraints:
> 
> https://review.openstack.org/#/c/310048/
> 2) Hierarchical project naming:
> 
> https://review.openstack.org/#/c/318605/
>
> First, here’s what they have in common:
>
> a) They both solve the above problem
> b) They both allow an authorization scope to use a path rather than just a
> simple name, hence allowing you to address a project anywhere in the
> hierarchy
> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you
> just have a flat layer of projects in a domain, then they have no API or
> semantic impact (since both ensure that a project’s name must still be
> unique within a parent)
>
> Here’s how they differ:
>
> - Relax project name constraints (1), keeps the meaning of the ‘name’
> attribute of a project to be its node-name in the hierarchy, but formally
> relaxes the uniqueness constraint to say that it only has to be unique
> within its parent. In other words, let’s really model this a bit like a
> unix directory tree.
>
I think I lean towards relaxing the project name constraint. The reason is
that we already expose `domain_id`, `parent_id`, and `name` of a
project. By relaxing the constraint we can give the user everything they
need to know about a project without really changing any of these. When
using 3.7, you know what domain your project is in, you know the identifier
of the parent project, and you know that your project name is unique within
the parent.

> - Hierarchical project naming (2), formally changes the meaning of the
> ‘name’ attribute to include the path to the node as well as the node name,
> and hence ensures that the (new) value of the name attribute remains unique.
>
Do we intend to *store* the full path as the name, or just build it out on
demand? If we do store the full path, we will have to think about our
current data model, since the depth of the organization or domain would be
limited by the max possible name length. Will performance be a concern if we
build the full path on every request?

>
> While whichever approach we chose would only be included in a new
> microversion (3.7) of the Identity API, although some relevant APIs can
> remain unaffected for a client talking 3.6 to a Newton server, not all can
> be. As pointed out by jamielennox, this is a data modelling problem - if a
> Newton server has created multiple projects called “dev” in the hierarchy,
> a 3.6 client trying to scope a token simply to “dev” cannot be answered
> correctly (and it is proposed we would have to return an HTTP 409 Conflict
> error if multiple nodes with the same name were detected). This is true for
> both approaches.
>
> Other comments on the approaches:
>
> - Having a full path as the name seems duplicative with the current
> project entity - since we already return the parent_id (hence parent_id +
> name is, today, sufficient to place a project in the hierarchy).
>
>
> The one thing I like is the ability to specify just the full path for the
> OS_PROJECT_NAME env var, but we could make that a separate variable.  Just
> as DOMAIN_ID + PROJECT_NAME is unique today, OS_PROJECT_PATH should be able
> to fully specify a project unambiguously.  I'm not sure which would have a
> larger impact on users.
>
> Agreed - and this could be done for both approaches (since this is all
> part of the “auth data flow").
>
>
> - In the past, we have been concerned about the issue of what we do if
> there is a project further up the tree that we do not have any roles on. In
> such cases, APIs like list project parents will not display anything other
> than the project ID for such projects. In the case of making the name the
> full path, we would be effectively exposing the name of all projects above
> 

[openstack-dev] [ironic] Trello board

2016-06-03 Thread Jim Rollenhagen
Hey all,

Myself and some other cores have had trouble tracking our priorities
using Launchpad and friends, so we put together a Trello board to help
us track it. This should also help us focus on what to review or work
on.

https://trello.com/b/ROTxmGIc/ironic-newton-priorities

Some notes on this:

* This is not the "official" tracking system for ironic - everything
  should still be tracked in Launchpad as we've been doing. This just
  helps us organize that.

* This is not free software, unfortunately. Sorry. If this is a serious
  problem for you in practice, let's chat on IRC and try to come up with
  a solution.

* I plan on only giving cores edit access on this board to help keep it
  non-chaotic.

* I'd like to keep this restricted to the priorities we decided on at
  the summit (including the small stuff not on our priorities page). I'm
  okay with adding a small number of things here and there, if something
  comes up that is super important or we think is a nice feature we
  definitely want to finish in Newton. I don't want to put everything
  being worked on in this (at least for now).

If you're a core and want edit access to the board, please PM me on IRC
with your Trello username and I'll add you.

Feedback welcome. :)

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-03 Thread Hongbin Lu
I agree that a heterogeneous cluster is more advanced and harder to control, but
I don't get why we (as service developers/providers) should care about that. If
there is a significant portion of users asking for advanced topologies (i.e.
heterogeneous clusters) and willing to deal with the complexities, Magnum should
just provide them (unless there are technical difficulties or other valid
arguments). From my point of view, Magnum should support the basic use cases
well (i.e. homogeneous), *and* be flexible enough to accommodate various advanced
use cases if we can.

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 7:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> I am really struggling to accept the idea of heterogeneous clusters. My
> experience causes me to question whether a heterogeneous cluster makes
> sense for Magnum. I will try to explain why I have this hesitation:
> 
> 1) If you have a heterogeneous cluster, it suggests that you are using
> external intelligence to manage the cluster, rather than relying on it
> to be self-managing. This is an anti-pattern that I refer to as “pets"
> rather than “cattle”. The anti-pattern results in brittle deployments
> that rely on external intelligence to manage (upgrade, diagnose, and
> repair) the cluster. The automation of the management is much harder
> when a cluster is heterogeneous.
> 
> 2) If you have a heterogeneous cluster, it can fall out of balance.
> This means that if one of your “important” or “large” members fail,
> there may not be adequate remaining members in the cluster to continue
> operating properly in the degraded state. The logic of how to track and
> deal with this needs to be handled. It’s much simpler in the
> homogeneous case.
> 
> 3) Heterogeneous clusters are complex compared to homogeneous clusters.
> They are harder to work with, and that usually means that unplanned
> outages are more frequent, and last longer than they would with a homogeneous
> cluster.
> 
> Summary:
> 
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
> 
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the
> cluster controller
>   - More reliable
> 
> My bias is to assert that applications that want a heterogeneous mix of
> system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up
> with a composition of simple systems rather than a larger complex one.
> 
> Adrian
> 
> 
> > On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> >
> > Personally, I think this is a good idea, since it can address a set
> of similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud.
> For example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Message-
> >> From: Hongbin Lu
> >> Sent: June-01-16 11:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> >> managing the bay nodes
> >>
> >> Hi team,
> >>
> >> A blueprint was created for tracking this idea:
> >> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> >> nodes . I won't approve the BP until there is a team decision on
> >> accepting/rejecting the idea.
> >>
> >> From the discussion in design summit, it looks everyone is OK with
> >> the idea in general (with some disagreements in the API style).
> >> However, from the last team meeting, it looks some people disagree
> >> with the idea fundamentally. so I re-raised this ML to re-discuss.
> >>
> >> If you agree or disagree with the idea of manually managing the Heat
> >> stacks (that contains individual bay nodes), please write down your
> >> arguments here. Then, we can start debating on that.
> >>
> >> Best regards,
> >> Hongbin
> >>
> >>> -Original Message-
> >>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> >>> Sent: May-16-16 5:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [magnum] Discuss 

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash

> On 3 Jun 2016, at 16:38, Lance Bragstad  wrote:
> 
> 
> 
> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  > wrote:
> 
>> On 3 Jun 2016, at 01:22, Adam Young > > wrote:
>> 
>> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>> Hi
>>> 
>>> As you know, I have been working on specs that change the way we handle the 
>>> uniqueness of project names in Newton. The goal of this is to better 
>>> support project hierarchies, which as they stand today are restrictive in 
>>> that all project names within a domain must be unique, irrespective of 
>>> where in the hierarchy that projects sits (unlike, say, the unix directory 
>>> structure where a node name only has to be unique within its parent). Such 
>>> a restriction is particularly problematic when enterprise start modelling 
>>> things like test, QA and production as branches of a project hierarchy, 
>>> e.g.:
>>> 
>>> /mydivsion/projectA/dev
>>> /mydivsion/projectA/QA
>>> /mydivsion/projectA/prod
>>> /mydivsion/projectB/dev
>>> /mydivsion/projectB/QA
>>> /mydivsion/projectB/prod
>>> 
>>> Obviously the idea of a project name (née tenant) being unique has been 
>>> around since near the beginning of (OpenStack) time, so we must be 
>>> cautions. There are two alternative specs proposed:
>>> 
>>> 1) Relax project name constraints:  
>>> https://review.openstack.org/#/c/310048/
>>>   
>>> 2) Hierarchical project naming:  
>>> https://review.openstack.org/#/c/318605/
>>>  
>>> 
>>> First, here’s what they have in common:
>>> 
>>> a) They both solve the above problem
>>> b) They both allow an authorization scope to use a path rather than just a 
>>> simple name, hence allowing you to address a project anywhere in the 
>>> hierarchy
>>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you 
>>> just have a flat layer of projects in a domain, then they have no API or 
>>> semantic impact (since both ensure that a project’s name must still be 
>>> unique within a parent)
>>> 
>>> Here’s how the differ:
>>> 
>>> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
>>> attribute of a project to be its node-name in the hierarchy, but formally 
>>> relaxes the uniqueness constraint to say that it only has to be unique 
>>> within its parent. In other words, let’s really model this a bit like a 
>>> unix directory tree.
> 
> I think I lean towards relaxing the project name constraint. The reason is 
> because we already expose `domain_id`, `parent_id`, and `name` of a project. 
> By relaxing the constraint we can give the user everything the need to know 
> about a project without really changing any of these. When using 3.7, you 
> know what domain your project is in, you know the identifier of the parent 
> project, and you know that your project name is unique within the parent.  
>>> - Hierarchical project naming (2), formally changes the meaning of the 
>>> ‘name’ attribute to include the path to the node as well as the node name, 
>>> and hence ensures that the (new) value of the name attribute remains unique.
> 
> Do we intend to *store* the full path as the name, or just build it out on 
> demand? If we do store the full path, we will have to think about our current 
> data model since the depth of the organization or domain would be limited by 
> the max possible name length. Will performance be something to think about 
> building the full path on every request?   
I now mention this issue in the spec for hierarchical project naming (the relax 
naming approach does not suffer from this issue). As you say, we’ll have to change 
the DB (today it is only 64 chars) if we do store the full path. This is 
slightly problematic since the maximum depth of the hierarchy is controlled by a 
config option, and hence could be changed. We will absolutely have to be able to 
build the path on-the-fly in order to support legacy drivers (which won’t be able 
to store more than 64 chars). We may need to do some performance tests to 
ascertain whether we can get away with building the path on-the-fly in all cases and 
avoid changing the table. One additional point is that, of course, the API 
will return this path whenever it returns a project, so clients will need to 
be aware of this increase in size. 
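As a rough illustration, a minimal sketch of the build-on-the-fly option
(get_project() here is a hypothetical lookup by ID returning a dict with
'name' and 'parent_id'; this is illustrative only, not the proposed driver
code):

def build_project_path(get_project, project_id, separator='/'):
    # Walk up the parent chain and join the node names into a full path.
    segments = []
    current = get_project(project_id)
    while current is not None:
        segments.append(current['name'])
        parent_id = current.get('parent_id')
        current = get_project(parent_id) if parent_id else None
    return separator + separator.join(reversed(segments))

# e.g. build_project_path(lookup, dev_project_id) -> '/mydivsion/projectA/dev'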
>>> 
>>> While whichever approach we chose would only be included in a new 
>>> microversion (3.7) of the Identity API, although some relevant APIs can 
>>> remain unaffected for a client talking 3.6 to a Newton server, not all can 
>>> be. As pointed out be jamielennox, this is a data modelling problem - if a 
>>> Newton server has created multiple projects called “dev” in the hierarchy, 
>>> a 3.6 client trying to scope a token simply to “dev” cannot be answered 
>>> correctly (and 

[openstack-dev] [nova] We are now past the non-priority spec approval deadline

2016-06-03 Thread Matt Riedemann
Yesterday was the non-priority spec approval deadline per the nova 
release schedule [1].


We have 80 approved blueprints for Newton so far [2]. In my opinion 
we're already way over-committed for the release, but we'll get done 
what we can. The upside to getting your spec approved in Newton is that even 
if the code doesn't make this release, we've hashed out the spec, so 
re-approval for Ocata should be easier.


For those non-priority specs which were not approved already, the Ocata 
specs directory is already available [3]. If you plan on continuing to 
push your spec for the Ocata release, update the change and move the 
spec from the specs/newton/approved/ directory to the 
specs/ocata/approved directory.


Note, however, that the nova-specs core team will not be actively 
reviewing specs for Ocata.


Any specs that aren't moved in the next week or two will be subject to 
being abandoned.


For specs that are related to priorities for Newton [4], those should be 
brought to the attention of the nova-specs core team [5] in IRC or via 
the nova meeting since we won't be spending review time in that repo for 
new things.


[1] https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
[2] https://blueprints.launchpad.net/nova/newton
[3] https://review.openstack.org/#/c/324397/
[4] 
https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html

[5] https://review.openstack.org/#/admin/groups/302,members

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [infra] Conflict between publishing jobs

2016-06-03 Thread Andreas Jaeger
On 06/02/2016 03:38 PM, Julien Danjou wrote:
> Hi,
> 
> While importing Panko¹ into OpenStack, Andreas informed me that the jobs
> "openstack-server-release-jobs" and "publish-to-pypi" were incompatible
> and that the release team would know that. We actually want to publish
> Panko as an OpenStack server and also to PyPI.
> 
> We already have both these jobs for Gnocchi without any problem.
> 
> Could the infra team enlighten us about the possible issue here?
> 
> Thanks!
> 
> ¹  https://review.openstack.org/#/c/318677/

The issue is that the infra jobs are set up with the assumption that a
server project does not publish to pypi.

Both templates contain the same jobs, and thus announcements might be
sent twice.

So, if you want to publish to PyPI, remove the server-release-jobs
template...

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Trello board

2016-06-03 Thread Alexis Monville
Hi,

On Fri, Jun 3, 2016 at 5:39 PM, Jim Rollenhagen  wrote:
> Hey all,
>
> Myself and some other cores have had trouble tracking our priorities
> using Launchpad and friends, so we put together a Trello board to help
> us track it. This should also help us focus on what to review or work
> on.
>
> https://trello.com/b/ROTxmGIc/ironic-newton-priorities
>
> Some notes on this:
>
> * This is not the "official" tracking system for ironic - everything
>   should still be tracked in Launchpad as we've been doing. This just
>   helps us organize that.
>
> * This is not free software, unfortunately. Sorry. If this is a serious
>   problem for you in practice, let's chat on IRC and try to come up with
>   a solution.
>
> * I plan on only giving cores edit access on this board to help keep it
>   non-chaotic.
>
> * I'd like to keep this restricted to the priorities we decided on at
>   the summit (including the small stuff not on our priorities page). I'm
>   okay with adding a small number of things here and there, if something
>   comes up that is super important or we think is a nice feature we
>   definitely want to finish in Newton. I don't want to put everything
>   being worked on in this (at least for now).
>
> If you're a core and want edit access to the board, please PM me on IRC
> with your Trello username and I'll add you.
>
> Feedback welcome. :)

I would like to know if you are aware of this specs around StoryBoard:
http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html

Maybe it could be interesting to have a look at it and see if it could
fit your needs?


>
> // jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Alexis


Alexis Monville
alexis.monvi...@redhat.com
+33 6 75 73 54 82
irc: alexismonville
Linkedin: fr.linkedin.com/in/alexismonville
Twitter: https://twitter.com/alexismonville


Bringing Red Hat Openstack teams to continuously deliver more
impactful innovation in a sustainable way

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] OSIC cluster accepted

2016-06-03 Thread Michał Jastrzębski
Hello Kollagues,

Some of you might know that I submitted a request for 130 nodes of the
OSIC cluster for testing Kolla. We just got accepted. The time window will
be 3 weeks between 7/22 and 8/14, so we need to make the most of it. I'd
like some volunteers to help me with tests, setup and such. We need to
prepare test scenarios, streamline bare metal deployment and prepare the
architectures we want to run through. I would also like to make use of our
global distribution to keep the nodes utilized 24h a day.

The nodes we're talking about are pretty powerful: 256 GB of RAM each, 12
SSD disks in each and 10 Gig networking all the way. We will get IPMI
access to them, so bare metal provisioning will have to be there too
(a good time to test out bifrost, right? :))

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-03 Thread Joshua Harlow

Deja, Dawid wrote:

On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:



On 05 May 2016, at 01:49, Mehdi Abaakouk > wrote:


Le 2016-05-04 10:04, Renat Akhmerov a écrit :

No problem. Let’s not call it RPC (btw, I completely agree with that).
But it’s one of the messaging patterns and hence should be under
oslo.messaging I guess, no?


Yes and no, we currently have two APIs (rpc and notification). And
personally I regret to have the notification part in oslo.messaging.

RPC and Notification are different beasts, and both are today limited
in terms of feature because they share the same driver implementation.

Our RPC errors handling is really poor, for example Nova just put
instance in ERROR when something bad occurs in oslo.messaging layer.
This enforces deployer/user to fix the issue manually.

Our Notification system doesn't allow fine grain routing of message,
everything goes into one configured topic/queue.

And now we want to add a new one... I'm not against this idea,
but I'm not a huge fan.


Thoughts from folks (mistral and oslo)?

Also, I was not at the Summit; should I conclude that the Tooz+taskflow
approach (which ensures the idempotency of the application within the
library API) has not been accepted by the Mistral folks?

Speaking about idempotency, IMO it’s not a central question that we
should be discussing here. Mistral users should have a choice: if they
manage to make their actions idempotent, that's excellent (in many cases
idempotency is certainly possible, btw). If not, then they know about the
potential consequences.


You shouldn't mix up the idempotency of the user task and the idempotency
of a Mistral action (which will, in the end, run the user task).
You can make your Mistral task runner implementation idempotent and just
make the workflow behaviour configurable for the case where the user task
is interrupted or finishes badly, whether or not the user task is idempotent.
This makes things very predictable. You will know, for example:
* whether the user task has started or not,
* whether the error is due to a node power cut while the user task runs,
* whether you can safely retry a non-idempotent user task on another node,
* that you will not be impacted by a rabbitmq restart or TCP connection issues,
* ...

With the oslo.messaging approach, everything will just end up in a
generic MessageTimeout error.

The RPC API already has this kind of issue. Applications have
unfortunately dealt with that (and I think they want something better now).
I'm just not convinced we should add a new "working queue" API in
oslo.messaging for task scheduling that has the same issue we already
have with RPC.

Anyway, that's your choice; if you want to rely on this poor structure,
I will not be against it, as I'm not involved in Mistral. I just want
everybody to be aware of this.


And even in this case there are usually a number
of measures that can be taken to mitigate those consequences (rerunning
workflows from certain points after manually fixing problems, rollback
scenarios, etc.).


taskflow allows you to describe and automate this kind of workflow really
easily.


What I’m saying is: let’s not make that crucial decision now about
what a messaging framework should support or not, let’s make it more
flexible to account for variety of different usage scenarios.


I think the confusion is in the "messaging" keyword: currently
oslo.messaging is an "RPC" framework and a "Notification" framework on top
of 'messaging' frameworks.

The messaging frameworks we use are 'kombu', 'pika', 'zmq' and 'pingus'.


It’s normal for frameworks to give more rather than less.


I disagree; here we mix different concepts into one library, and all
concepts have to be implemented by each different 'messaging framework'.
So, fortunately, we give less, to make things just work in the same way
with all drivers for all APIs.


One more thing: at the summit we were discussing the possibility of
defining at-most-once/at-least-once individually for Mistral tasks. This
is in demand because there are cases where we need to do it; advanced users
may choose one or the other depending on a task/action's semantics.
However, it won’t be possible to implement w/o changes in the
underlying messaging framework.


If we go that way, oslo.messaging users and Mistral users have to be aware
that their job/task/action/whatever will perhaps not be called
(at-most-once) or perhaps be called twice (at-least-once).

The oslo.messaging/Mistral API and docs must be clear about this behavior,
so that we don't have bugs opened against oslo.messaging because a script
written via the Mistral API is not executed as expected "sometimes".
"sometimes" == when deployers have trouble with their rabbitmq (or whatever)
broker, and even just when a deployer restarts a broker node or when a TCP
issue occurs. In the end, the backtrace in these cases always shows only an
oslo.messaging trace (the well-known MessageTimeout...).


Also oslo.messaging is already a fragile brick used by everybody that
a very small subset of people 

[openstack-dev] [new] nova_powervm 2.0.2 release (mitaka)

2016-06-03 Thread no-reply
We are satisfied to announce the release of:

nova_powervm 2.0.2: PowerVM driver for OpenStack Nova.

This release is part of the mitaka stable release series.

For more details, please see below.

Changes in nova_powervm 2.0.1..2.0.2


c9e0ea4 hdisk discovery in disconnect if no Storage XAG
312bf58 Reuse existing connection
7a75a01 Smart save in SwiftSlotManager
e2c07b1 get_active_vioses should not use host UUID
620714d Scrub before RebuildSlotMap
c020cfa Wait for more than just one VIOS on start up

Diffstat (except docs and test files)
-

nova_powervm/virt/powervm/nvram/swift.py   |  70 +++---
nova_powervm/virt/powervm/slot.py  |  39 
nova_powervm/virt/powervm/vios.py  |  86 +
nova_powervm/virt/powervm/volume/vscsi.py  |  19 ++--
10 files changed, 286 insertions(+), 146 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Trello board

2016-06-03 Thread Jim Rollenhagen
On Fri, Jun 03, 2016 at 06:29:13PM +0200, Alexis Monville wrote:
> Hi,
> 
> On Fri, Jun 3, 2016 at 5:39 PM, Jim Rollenhagen  
> wrote:
> > Hey all,
> >
> > Myself and some other cores have had trouble tracking our priorities
> > using Launchpad and friends, so we put together a Trello board to help
> > us track it. This should also help us focus on what to review or work
> > on.
> >
> > https://trello.com/b/ROTxmGIc/ironic-newton-priorities
> >
> > Some notes on this:
> >
> > * This is not the "official" tracking system for ironic - everything
> >   should still be tracked in Launchpad as we've been doing. This just
> >   helps us organize that.
> >
> > * This is not free software, unfortunately. Sorry. If this is a serious
> >   problem for you in practice, let's chat on IRC and try to come up with
> >   a solution.
> >
> > * I plan on only giving cores edit access on this board to help keep it
> >   non-chaotic.
> >
> > * I'd like to keep this restricted to the priorities we decided on at
> >   the summit (including the small stuff not on our priorities page). I'm
> >   okay with adding a small number of things here and there, if something
> >   comes up that is super important or we think is a nice feature we
> >   definitely want to finish in Newton. I don't want to put everything
> >   being worked on in this (at least for now).
> >
> > If you're a core and want edit access to the board, please PM me on IRC
> > with your Trello username and I'll add you.
> >
> > Feedback welcome. :)
> 
> I would like to know if you are aware of this specs around StoryBoard:
> http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html
> 
> Maybe it could be interesting to have a look at it and see if it could
> fit your needs?

I'm aware of it, and keeping storyboard on my radar.

I am excited for the time when it's feasible to move the project from
Launchpad to storyboard, but I don't think that time has come yet.

I don't want to disrupt all of our tracking right now. We simply need a
high-level view of what's currently important to the ironic project,
where those important things are in terms of getting done, and
aggregating pointers to the resources needed to continue working on
those things.

We aren't moving our bug/feature list to Trello, simply using it as a
way to stay more organized. :)

// jim

> 
> 
> >
> > // jim
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> -- 
> 
> Alexis
> 
> 
> Alexis Monville
> alexis.monvi...@redhat.com
> +33 6 75 73 54 82
> irc: alexismonville
> Linkedin: fr.linkedin.com/in/alexismonville
> Twitter: https://twitter.com/alexismonville
> 
> 
> Bringing Red Hat Openstack teams to continuously deliver more
> impactful innovation in a sustainable way
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Lance Bragstad
On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash  wrote:

>
> On 3 Jun 2016, at 16:38, Lance Bragstad  wrote:
>
>
>
> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  wrote:
>
>>
>> On 3 Jun 2016, at 01:22, Adam Young  wrote:
>>
>> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>
>> Hi
>>
>> As you know, I have been working on specs that change the way we handle
>> the uniqueness of project names in Newton. The goal of this is to better
>> support project hierarchies, which as they stand today are restrictive in
>> that all project names within a domain must be unique, irrespective of
>> where in the hierarchy that projects sits (unlike, say, the unix directory
>> structure where a node name only has to be unique within its parent). Such
>> a restriction is particularly problematic when enterprise start modelling
>> things like test, QA and production as branches of a project hierarchy,
>> e.g.:
>>
>> /mydivsion/projectA/dev
>> /mydivsion/projectA/QA
>> /mydivsion/projectA/prod
>> /mydivsion/projectB/dev
>> /mydivsion/projectB/QA
>> /mydivsion/projectB/prod
>>
>> Obviously the idea of a project name (née tenant) being unique has been
>> around since near the beginning of (OpenStack) time, so we must be
>> cautions. There are two alternative specs proposed:
>>
>> 1) Relax project name constraints:
>> 
>> https://review.openstack.org/#/c/310048/
>> 2) Hierarchical project naming:
>> 
>> https://review.openstack.org/#/c/318605/
>>
>> First, here’s what they have in common:
>>
>> a) They both solve the above problem
>> b) They both allow an authorization scope to use a path rather than just
>> a simple name, hence allowing you to address a project anywhere in the
>> hierarchy
>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you
>> just have a flat layer of projects in a domain, then they have no API or
>> semantic impact (since both ensure that a project’s name must still be
>> unique within a parent)
>>
>> Here’s how the differ:
>>
>> - Relax project name constraints (1), keeps the meaning of the ‘name’
>> attribute of a project to be its node-name in the hierarchy, but formally
>> relaxes the uniqueness constraint to say that it only has to be unique
>> within its parent. In other words, let’s really model this a bit like a
>> unix directory tree.
>>
>> I think I lean towards relaxing the project name constraint. The reason
> is because we already expose `domain_id`, `parent_id`, and `name` of a
> project. By relaxing the constraint we can give the user everything the
> need to know about a project without really changing any of these. When
> using 3.7, you know what domain your project is in, you know the identifier
> of the parent project, and you know that your project name is unique within
> the parent.
>
>> - Hierarchical project naming (2), formally changes the meaning of the
>> ‘name’ attribute to include the path to the node as well as the node name,
>> and hence ensures that the (new) value of the name attribute remains unique.
>>
>> Do we intend to *store* the full path as the name, or just build it out
> on demand? If we do store the full path, we will have to think about our
> current data model since the depth of the organization or domain would be
> limited by the max possible name length. Will performance be something to
> think about building the full path on every request?
>
> I now mention this issue in the spec for hierarchical project naming (the
> relax naming approach does not suffer this issue). As you say, we’ll have
> to change the DB (today it is only 64 chars) if we do store the full path .
> This is slightly problematic since the maximum depth of hierarchy is
> controlled by a config option, and hence could be changed. We will
> absolutely have be able to build the path on-the-fly in order to support
> legacy drivers (who won’t be able to store more than 64 chars). We may need
> to do some performance tests to ascertain if we can get away with building
> the path on-the-fly in all cases and avoid changing the table.  One
> additional point is that, of course, the API will return this path whenever
> it returns a project - so clients will need to be aware of this increase in
> size.
>

These are all good points that continue to push me towards relaxing the
project naming constraint :)

>
>> While whichever approach we chose would only be included in a new
>> microversion (3.7) of the Identity API, although some relevant APIs can
>> remain unaffected for a client talking 3.6 to a Newton server, not all can
>> be. As pointed out be jamielennox, this is a data modelling problem - if a
>> Newton server has created multiple projects called “dev” in the hierarchy,
>> a 3.6 client trying to scope a token simply to “dev” cannot be answered
>> correctly (and it is proposed we would have to return an HTTP 409 Conflict

Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Paul Michali
Thanks for the link Tim!

Right now, I have two things I'm unsure about...

One is that I had 1945 huge pages left (of size 2048k) and tried to create
a VM with a small flavor (2GB), which should need 1024 pages, but Nova
indicated that it wasn't able to find a host (and QEMU reported an
allocation issue).

The other is that VMs are not being evenly distributed across my two NUMA
nodes; instead, they are all getting created on one NUMA node. I'm not sure
whether that is expected (and whether setting mem_page_size to 2048 is the
proper way to address it).
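
For reference, a minimal sketch of the arithmetic I'm assuming (flavor RAM in
MB, huge page size in KB; purely illustrative):

def huge_pages_needed(flavor_ram_mb, page_size_kb=2048):
    # A 2048 MB (m1.small) flavor with 2048 KB huge pages needs 1024 pages,
    # so 1945 free pages should be enough for one more guest.
    return (flavor_ram_mb * 1024) // page_size_kb

print(huge_pages_needed(2048))  # -> 1024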

Regards,

PCM


On Fri, Jun 3, 2016 at 1:21 PM Tim Bell  wrote:

> The documentation at
> http://docs.openstack.org/admin-guide/compute-flavors.html is gradually
> improving. Are there areas which were not covered in your clarifications ?
> If so, we should fix the documentation too since this is a complex area to
> configure and good documentation is a great help.
>
>
>
> BTW, there is also an issue around how the RAM for the BIOS is shadowed. I
> can’t find the page from a quick google but we found an imbalance when we
> used 2GB pages as the RAM for BIOS shadowing was done by default in the
> memory space for only one of the NUMA spaces.
>
>
>
> Having a look at the KVM XML can also help a bit if you are debugging.
>
>
>
> Tim
>
>
>
> *From: *Paul Michali 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Friday 3 June 2016 at 15:18
> *To: *"Daniel P. Berrange" , "OpenStack Development
> Mailing List (not for usage questions)"  >
> *Subject: *Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling
>
>
>
> See PCM inline...
>
> On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
> wrote:
>
> On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> > Hi!
> >
> > I've been playing with Liberty code a bit and had some questions that I'm
> > hoping Nova folks may be able to provide guidance on...
> >
> > If I set up a flavor with hw:mem_page_size=2048, and I'm creating
> (Cirros)
> > VMs with size 1024, will the scheduling use the minimum of the number of
>
> 1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?
>
>
>
> PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the
> page size is 2048K, so 1024 pages? Hope I have the units right.
>
>
>
>
>
>
> > huge pages available and the size requested for the VM, or will it base
> > scheduling only on the number of huge pages?
> >
> > It seems to be doing the latter, where I had 1945 huge pages free, and
> > tried to create another VM (1024) and Nova rejected the request with "no
> > hosts available".
>
> From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.
>
> Anyway, when you request huge pages to be used for a flavour, the
> entire guest RAM must be able to be allocated from huge pages.
> ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
> of huge pages available. It is not possible for a VM to use
> 1.5 GB of huge pages and 500 MB of normal sized pages.
>
>
>
> PCM: Right, so, with 2GB of RAM, I need 1024 huge pages of size 2048K. In
> this case, there are 1945 huge pages available, so I was wondering why it
> failed. Maybe I'm confusing sizes/pages?
>
>
>
>
>
>
> > Is this still the same for Mitaka?
>
> Yep, this use of huge pages has not changed.
>
> > Where could I look in the code to see how the scheduling is determined?
>
> Most logic related to huge pages is in nova/virt/hardware.py
>
> > If I use mem_page_size=large (what I originally had), should it evenly
> > assign huge pages from the available NUMA nodes (there are two in my
> case)?
> >
> > It looks like it was assigning all VMs to the same NUMA node (0) in this
> > case. Is the right way to change to 2048, like I did above?
>
> Nova will always avoid spreading your VM across 2 host NUMA nodes,
> since that gives bad performance characteristics. IOW, it will always
> allocate huge pages from the NUMA node that the guest will run on. If
> you explicitly want your VM to spread across 2 host NUMA nodes, then
> you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
> will then place each guest NUMA node, on a separate host NUMA node
> and allocate huge pages from node to match. This is done using
> the hw:numa_nodes=2 parameter on the flavour
>
>
>
> PCM: Gotcha, but that was not the issue I'm seeing. With this small flavor
> (2GB = 1024 pages), I had 13107 huge pages initially. As I created VMs,
> they were *all* placed on the same NUMA node (0). As a result, when I got
> to more than half the available pages, Nova failed to allow further VMs,
> even though I had 6963 available on one compute node, and 5939 on another.
>
>
>
> It seems that all the assignments were to node zero. Someone suggested to
> me to set mem_page_size to 2048, and at that point it started assigning to
> both NUMA nodes evenly.
>
>
>
> Thanks for 

[openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Darek Smigiel
Hello,
While doing reviews I noticed that Liu Yong submitted a bug [1] describing a
problem with removing subnets.

In short: if a tenant wants to delete a network with subnets, and at least one of
the subnets was created by an admin, they are not able to do so.
Liu also prepared a bugfix for it [2], but now it's starting to get much more
complicated.

What is the desired solution in this case?
One suggestion is to elevate the context, remove all subnets and nuke
everything (see the sketch below). That can cause a problem, since one tenant
could then remove another tenant's subnets.
The other is to just tell the tenant that they are not allowed to delete the
network. But at the same time, it would be strange for the owner not to be
able to get rid of *their own* network and subnets.
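
A minimal sketch of the "elevate and delete" option, just to make the
discussion concrete (this is not the actual patch in [2]; it only assumes the
usual core plugin get_subnets/delete_subnet/delete_network methods and
context.elevated()):

def delete_network_with_subnets(plugin, context, network_id):
    # elevated() returns an admin-capable copy of the request context, so
    # subnets created by the admin in a tenant-owned network can be removed too.
    admin_context = context.elevated()
    subnets = plugin.get_subnets(admin_context,
                                 filters={'network_id': [network_id]})
    for subnet in subnets:
        plugin.delete_subnet(admin_context, subnet['id'])
    plugin.delete_network(context, network_id)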

If you have any opinions or suggestions, please feel free to share them.

[1] https://bugs.launchpad.net/neutron/+bug/1588228
[2] https://review.openstack.org/#/c/324617/


Darek
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] OSIC cluster accepted

2016-06-03 Thread Steven Dake (stdake)
Michal,

Great news.

Just as a followup, the midcycle is scheduled for the 12th and 13th of
July.  I am working on the eventbrite page today, but the initial
information is here:
https://wiki.openstack.org/wiki/Sprints#Newton_sprints


Unlike last midcycles, there is no vote on the date this time because
finding conference space funded by the community is extremely hard to come
by.  This was the only week that was available to me.

We will use time at the midcycle to plan usage and execution of the
cluster resources OSIC has made so graciously available to us.

Sean,

It would be super helpful if we could have bifrost functional and
integrated with Kolla by then - even if its not fully merged upstream.
That is six weeks or so to work in.  If you don't think you can make it
let us know.

Regards
-steve

On 6/3/16, 9:58 AM, "Michał Jastrzębski"  wrote:

>Hello Kollagues,
>
>Some of you might know that I submitted request for 130 nodes out of
>osic cluster for testing Kolla. We just got accepted. Time window will
>be 3 weeks between 7/22 and 8/14, so we need to make most of it. I'd
>like some volunteers to help me with tests, setup and such. We need to
>prepare test scenerios, streamline bare metal deployment and prepare
>architectures we want to run through. I would also make use of our
>global distribution to keep nodes being utilized 24h.
>
>Nodes we're talking about are pretty powerful 256gigs of ram each, 12
>ssd disks in each and 10Gig networking all the way. We will get IPMI
>access to it so bare metal provisioning will have to be there too
>(good time to test out bifrost right?:))
>
>Cheers,
>Michal
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Replacing user based policies in Nova

2016-06-03 Thread Tim Bell

With https://review.openstack.org/324068 (thanks ☺), the key parts of user 
based policies as currently deployed would be covered in the short term. 
However, my understanding is that this functionality needs to be replaced with 
something sustainable in the long term and consistent with the approach that 
permissions should be on a per-project basis rather than a per-instance/object basis.

Looking at the use cases:


-  Shared pools of quota between smaller teams

-  Protection from a VM created by one team being shutdown/deleted/etc 
by another

I think much of this could be handled using nested projects in the future.

Specifically,


-  Given a project ‘long tail’, smaller projects could be created under 
that which would share the total ‘long tail’ quota with other siblings

-  Project ‘higgs’ could be a sub-project of ‘long tail’ and have its 
own role assignments so that the members of the team of sub-project ‘diphoton’ 
could not affect the ‘higgs’ VMs

-  The administrator of the project ‘long tail’ would be responsible 
for setting up the appropriate user<->role mappings for the sub projects and 
not require tickets to central support teams

-  This could potentially be taken to the ‘personal project’ use case 
following the implementation of https://review.openstack.org/#/c/324055 in 
Keystone and implementation in other projects

Does this sound doable ?
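
For what it's worth, a minimal sketch of the nested-project part of this, using
the plain Keystone v3 REST API (the endpoint, token and parent project id below
are placeholders):

    import requests

    KEYSTONE = 'http://keystone.example.com:5000/v3'  # placeholder endpoint
    TOKEN = 'gAAAA...'                                # placeholder token with sufficient rights
    LONG_TAIL_ID = 'ae3f...'                          # placeholder id of project 'long tail'

    # Create project 'higgs' as a child of 'long tail'; its role assignments
    # (and, eventually, nested quota) would then be managed within that subtree.
    body = {'project': {'name': 'higgs',
                        'domain_id': 'default',
                        'parent_id': LONG_TAIL_ID}}
    resp = requests.post(KEYSTONE + '/projects',
                         headers={'X-Auth-Token': TOKEN}, json=body)
    resp.raise_for_status()
    print(resp.json()['project']['id'])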

The major missing piece that I would see for the implementation would be the 
nested quotas in Nova/Cinder. The current structure seems to be to try to build a 
solution on top of the delimiter library, but this is early days.

I’d be happy for feedback on the technical viability of this proposal and then 
I can review with those who have raised the need to see if it would work for 
them.

Tim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Lance Bragstad
Hey all,

I have been curious about the impact of providing performance feedback as part
of the review process. From what I understand, keystone used to have a
performance job that would run against proposed patches (I've only heard
about it so someone else will have to keep me honest about its timeframe),
but it sounds like it wasn't valued.

I think revisiting this topic is valuable, but it raises a series of
questions.

Initially it probably only makes sense to test a reasonable set of
defaults. What do we want these defaults to be? Should they be determined
by DevStack, openstack-ansible, or something else?

What does the performance test criteria look like and where does it live?
Does it just consist of running tempest?
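
(As a strawman, the criteria don't have to be tempest at all; a micro-benchmark
can be as small as timing token issuance in a tight loop against a test
deployment. A rough sketch, with the endpoint and credentials as placeholders:)

    import time

    import requests

    KEYSTONE = 'http://localhost:5000/v3'   # placeholder devstack endpoint
    AUTH = {'auth': {
        'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'demo',
                                           'domain': {'name': 'Default'},
                                           'password': 'secret'}}},
        'scope': {'project': {'name': 'demo',
                              'domain': {'name': 'Default'}}}}}

    # Time 100 token-issuance requests and report a few simple percentiles.
    timings = []
    for _ in range(100):
        start = time.time()
        requests.post(KEYSTONE + '/auth/tokens', json=AUTH).raise_for_status()
        timings.append(time.time() - start)

    timings.sort()
    print('median: %.3fs  p90: %.3fs  max: %.3fs' %
          (timings[len(timings) // 2], timings[int(len(timings) * 0.9)],
           timings[-1]))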

From a contributor and reviewer perspective, it would be nice to have the
ability to compare performance results across patch sets. I understand that
keeping all performance results for every patch for an extended period of
time is unrealistic. Maybe we take a daily performance snapshot against
master and use that to map performance patterns over time?

Have any other projects implemented a similar workflow?

I'm open to suggestions and discussions because I can't imagine there
aren't other folks out there interested in this type of pre-merge data
point.

Thanks!

Lance
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Trello board

2016-06-03 Thread Tristan Cacqueray
On 06/03/2016 07:08 PM, Jim Rollenhagen wrote:
> On Fri, Jun 03, 2016 at 06:29:13PM +0200, Alexis Monville wrote:
>> Hi,
>>
>> On Fri, Jun 3, 2016 at 5:39 PM, Jim Rollenhagen  
>> wrote:
>>> Hey all,
>>>
>>> Myself and some other cores have had trouble tracking our priorities
>>> using Launchpad and friends, so we put together a Trello board to help
>>> us track it. This should also help us focus on what to review or work
>>> on.
>>>
>>> https://trello.com/b/ROTxmGIc/ironic-newton-priorities
>>>
>>> Some notes on this:
>>>
>>> * This is not the "official" tracking system for ironic - everything
>>>   should still be tracked in Launchpad as we've been doing. This just
>>>   helps us organize that.
>>>
>>> * This is not free software, unfortunately. Sorry. If this is a serious
>>>   problem for you in practice, let's chat on IRC and try to come up with
>>>   a solution.
>>>
>>> * I plan on only giving cores edit access on this board to help keep it
>>>   non-chaotic.
>>>
>>> * I'd like to keep this restricted to the priorities we decided on at
>>>   the summit (including the small stuff not on our priorities page). I'm
>>>   okay with adding a small number of things here and there, if something
>>>   comes up that is super important or we think is a nice feature we
>>>   definitely want to finish in Newton. I don't want to put everything
>>>   being worked on in this (at least for now).
>>>
>>> If you're a core and want edit access to the board, please PM me on IRC
>>> with your Trello username and I'll add you.
>>>
>>> Feedback welcome. :)
>>
>> I would like to know if you are aware of this specs around StoryBoard:
>> http://specs.openstack.org/openstack-infra/infra-specs/specs/task-tracker.html
>>
>> Maybe it could be interesting to have a look at it and see if it could
>> fit your needs?
> 
> I'm aware of it, and keeping storyboard on my radar.
> 
> I am excited for the time when it's feasible to move the project from
> Launchpad to storyboard, but I don't think that time has come yet.
> 
> I don't want to disrupt all of our tracking right now. We simply need a
> high-level view of what's currently important to the ironic project,
> where those important things are in terms of getting done, and
> aggregating pointers to the resources needed to continue working on
> those things.
> 
> We aren't moving our bug/feature list to Trello, simply using it as a
> way to stay more organized. :)
> 

Without moving your project from launchpad to storyboard, it seems like
you can already use storyboard to keep things organized with a kanban
board, e.g.:
  https://storyboard.openstack.org/#!/board/15

To create a new board, you need to click "Create new" then "board".
Cards are in fact normal stories that you can update and reference directly.

Is there something missing that makes Trello a better solution ?

-Tristan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Brandon Logan
To me, it seems more appropriate to delete all the subnets no matter
who they're owned by if the owner of the network decided they wanted to
delete it.  If there is a subnet associated with their network that
they do not see, then the delete network call would have to fail.
 That's going to be quite confusing to a user, especially if they get a
message saying that a particular subnet is preventing the deletion and
the owner can't even see that subnet exists.

One thing I may not be thinking about is shared networks and/or rbac.
 I'm not sure some tenant/project can even create a subnet on another
tenant/project's shared/rbac'ed network.  I just attempted to do it
quickly on the CLI and it failed, but the error message was a big
policy splat.  I doubt that's even meant to happen, so perhaps this
case hasn't been thought about.

Thanks,
Brandon

On Fri, 2016-06-03 at 12:16 -0500, Darek Smigiel wrote:
> Hello,
> Doing reviews I noticed, that Liu Yong submitted a bug [1] where we
> have a problem with removing subnets.
> 
> In short: if tenant wants to delete network with subnets, where at
> least one of subnets is created by admin, he’s not able to do this.
> Liu also prepared bugfix for it [2], but now it’s starting to be much
> more complicated.
> 
> What is desired solution in this case?
> One of suggestions is to elevate context, remove all subnets and nuke
> everything. It can cause a problem, when one tenant can remove
> others’ tenant subnets.
> The other is to just show info to tenant, that he’s not allowed to
> delete network. But in the same time, it could be strange, that owner
> is not able to just get rid of *his* network and subnets.
> 
> If you have any opinions, suggestions, please feel free to share
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1588228
> [2] https://review.openstack.org/#/c/324617/
> 
> 
> Darek
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Lance Bragstad
Dedicated and isolated infrastructure is a must if we want consistent
performance numbers. If we can come up with a reasonable plan, I'd be happy
to ask for resources. Even with dedicated infrastructure we would still
have to keep in mind that it's a data point from a single provider that
hopefully highlights a general trend about performance.

Here is a list of focus points as I see them so far:


   - Dedicated hardware is a requirement in order to achieve somewhat
   consistent results
   - Tight loop micro benchmarks
   - Tests highlighting the performance cases we care about
   - The ability to determine a sane control
   - The ability to test proposed patches, compare them to the control,
   and leave comments on reviews
   - Reproducible setup and test runner so that others can run these
   against a dedicated performance environment
   - Daily snapshots of performance published publicly (nice to have)



On Fri, Jun 3, 2016 at 3:16 PM, Brant Knudson  wrote:

>
>
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad 
> wrote:
>
>> Hey all,
>>
>> I have been curious about impact of providing performance feedback as
>> part of the review process. From what I understand, keystone used to have a
>> performance job that would run against proposed patches (I've only heard
>> about it so someone else will have to keep me honest about its timeframe),
>> but it sounds like it wasn't valued.
>>
>>
> We had a job running rally for a year (I think) that nobody ever looked at
> so we decided it was a waste and stopped running it.
>
>
>> I think revisiting this topic is valuable, but it raises a series of
>> questions.
>>
>> Initially it probably only makes sense to test a reasonable set of
>> defaults. What do we want these defaults to be? Should they be determined
>> by DevStack, openstack-ansible, or something else?
>>
>>
> A performance test is going to depend on the environment (the machines,
> disks, network, etc), the existing data (tokens, revocations, users, etc.),
> and the config (fernet, uuid, caching, etc.). If these aren't consistent
> between runs then the results are not going to be usable. (This is the
> problem with running rally on infra hardware.) If the data isn't realistic
> (1000s of tokens, etc.) then the results are going to be at best not useful
> or at worst misleading.
>
> What does the performance test criteria look like and where does it live?
>> Does it just consist of running tempest?
>>
>>
> I don't think tempest is going to give us numbers that we're looking for
> for performance. I've seen a few scripts and have my own for testing
> performance of token validation, token creation, user creation, etc. which
> I think will do the exact tests we want and we can get the results
> formatted however we like.
>
> From a contributor and reviewer perspective, it would be nice to have the
>> ability to compare performance results across patch sets. I understand that
>> keeping all performance results for every patch for an extended period of
>> time is unrealistic. Maybe we take a daily performance snapshot against
>> master and use that to map performance patterns over time?
>>
>>
> Where are you planning to store the results?
>
>
>> Have any other projects implemented a similar workflow?
>>
>> I'm open to suggestions and discussions because I can't imagine there
>> aren't other folks out there interested in this type of pre-merge data
>> points.
>>
>>
> Thanks!
>>
>> Lance
>>
>>
> Since the performance numbers are going to be very dependent on the
> machines I think the only way this is going to work is if somebody's
> willing to set up dedicated hardware to run the tests on. If you're doing
> that then set it up to mimic how you deploy keystone, deploy the patch
> under test, run the performance tests, and report the results. I'd be fine
> with something like this commenting on keystone changes. None of this has
> to involve openstack infra. Gerrit has a REST API to get the current
> patches.
>
> Everyone that's got performance requirements should do the same. Maybe I
> can get the group I'm in to try it sometime.
>
> - Brant
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Carl Baldwin
On Fri, Jun 3, 2016 at 11:16 AM, Darek Smigiel
 wrote:
> Hello,
> Doing reviews I noticed, that Liu Yong submitted a bug [1] where we have a 
> problem with removing subnets.

This makes me wonder what the use case is that gets us into this situation.

> In short: if tenant wants to delete network with subnets, where at least one 
> of subnets is created by admin, he’s not able to do this.
> Liu also prepared bugfix for it [2], but now it’s starting to be much more 
> complicated.
>
> What is desired solution in this case?
> One of suggestions is to elevate context, remove all subnets and nuke 
> everything. It can cause a problem, when one tenant can remove others’ tenant 
> subnets.

Ignoring implementation details, I think if I own a network, I ought
to be able to delete it regardless of who has created subnets on it.
A network is composed of subnets.  They are nothing more than the IPAM
details of the network.  I usually think of subnets as part of the
network for this reason.  I'm not even sure why a subnet has its own
owner that is allowed to be different from the network owner.

The only place where I've seen access to a network differ from
access to the subnets is on a shared network where regular tenants
have not been able to view the subnets on an admin-owned shared
network.  I'm not even sure this is important.

I think ports are a little different.  A port represents a connection
from something (like a VM) to the network.  Depending on what ports
exist on a network we should (and do) prevent the deletion of the
network.

> The other is to just show info to tenant, that he’s not allowed to delete 
> network. But in the same time, it could be strange, that owner is not able to 
> just get rid of *his* network and subnets.

It's like if I owned a car but my neighbor owned the seats.  I can't
sell or dispose of the car without my neighbor's permission?  That
doesn't make any sense.

> If you have any opinions, suggestions, please feel free to share

I think we need to figure out how to enable deleting the network
without error.  We can take that up in the review.

Carl

> [1] https://bugs.launchpad.net/neutron/+bug/1588228
> [2] https://review.openstack.org/#/c/324617/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Henry Gessau
Darek Smigiel  wrote:
> strange, that owner is not able to just get rid of *his* network and subnets.

But not all the subnets are his, and consequently the network is partially not
his.

Why did the admin create a subnet on the user's network in [1]?

IMO the admin messed things up for the user.

[1] https://bugs.launchpad.net/neutron/+bug/1588228

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Henry Gessau
Carl Baldwin  wrote:
> On Fri, Jun 3, 2016 at 2:26 PM, Henry Gessau  wrote:
>> Darek Smigiel  wrote:
>>> strange, that owner is not able to just get rid of *his* network and 
>>> subnets.
>>
>> But not all the subnets are his, and consequently the network is partially 
>> not
>> his.
> 
> To me, this is a nonsensical outcome and tells me that subnets
> probably shouldn't really have owners distinct from the network's.

Right. So are you saying we should prevent that?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Matthew Treinish
On Fri, Jun 03, 2016 at 01:53:16PM -0600, Matt Fischer wrote:
> On Fri, Jun 3, 2016 at 1:35 PM, Lance Bragstad  wrote:
> 
> > Hey all,
> >
> > I have been curious about impact of providing performance feedback as part
> > of the review process. From what I understand, keystone used to have a
> > performance job that would run against proposed patches (I've only heard
> > about it so someone else will have to keep me honest about its timeframe),
> > but it sounds like it wasn't valued.
> >
> > I think revisiting this topic is valuable, but it raises a series of
> > questions.
> >
> > Initially it probably only makes sense to test a reasonable set of
> > defaults. What do we want these defaults to be? Should they be determined
> > by DevStack, openstack-ansible, or something else?
> >
> > What does the performance test criteria look like and where does it live?
> > Does it just consist of running tempest?
> >
> 
> Keystone especially has some calls that are used 1000x or more relative to
> others and so I'd be more concerned about them. For me this is token
> validation #1 and token creation #2. Tempest checks them of course but
> might be too coarse? There are token benchmarks like the ones Dolph and I
> use, but they don't mimic a real workflow.  Something to consider.
> 
> 
> 
> >
> > From a contributor and reviewer perspective, it would be nice to have the
> > ability to compare performance results across patch sets. I understand that
> > keeping all performance results for every patch for an extended period of
> > time is unrealistic. Maybe we take a daily performance snapshot against
> > master and use that to map performance patterns over time?
> >
> 
> Having some time series data captured would be super useful. Could we have
> daily charts stored indefinitely?

We are already doing this to a certain extent with results from the gate using
subunit2sql and openstack-health. I pointed Lance to this on IRC as an example:

http://status.openstack.org/openstack-health/#/test/tempest.api.identity.v3.test_tokens.TokensV3Test.test_create_token

This shows all the execution times for tempest's V3 test_create_token for
all runs in the gate and periodic queues (resampled to the hour). This is all
done automatically for everything that emits subunit (not just tempest jobs).
We're storing 6 months of data in the DB right now.
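
The hourly resampling itself is nothing fancy; assuming you have pulled
(start time, run time) rows for one test out of the subunit2sql DB, it boils
down to something like this sketch (the sample rows are made up):

    import pandas as pd

    # Made-up rows: (test start time, run time in seconds), the shape of data a
    # subunit2sql query for a single test would give you.
    rows = [('2016-06-01 10:05:00', 1.9),
            ('2016-06-01 10:40:00', 2.3),
            ('2016-06-01 11:10:00', 2.0),
            ('2016-06-01 12:20:00', 2.6)]

    df = pd.DataFrame(rows, columns=['start_time', 'run_time'])
    df['start_time'] = pd.to_datetime(df['start_time'])

    # Resample to the hour, averaging the run times that fall in each hour,
    # which is the kind of smoothing described above.
    hourly = df.set_index('start_time')['run_time'].resample('1H').mean()
    print(hourly)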

FWIW, I've written some blog posts about this and how to interact with it:

http://blog.kortar.org/?p=212

and

http://blog.kortar.org/?p=279

(although some of the info is a bit dated)

The issue with doing this in the gate though is that it's inherently noisy given
that we're running everything in guests on multiple different public clouds.
It's impossible to get enough consistency in the results to do any useful benchmarking
when looking at a single change (or even a small group of changes). Good
examples of this are some of tempest's scenario tests, like:

http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern

which shows a normal variance of about 60sec between runs.

-Matt Treinish

> 
> 
> 
> >
> > Have any other projects implemented a similar workflow?
> >
> > I'm open to suggestions and discussions because I can't imagine there
> > aren't other folks out there interested in this type of pre-merge data
> > points.
> >
> > Thanks!
> >
> > Lance
> >


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-03 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2016-06-03 09:14:05 -0700:
> Deja, Dawid wrote:
> > On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:
> >>
> >>> On 05 May 2016, at 01:49, Mehdi Abaakouk  wrote:
> >>>
> >>>
> >>> Le 2016-05-04 10:04, Renat Akhmerov a écrit :
>  No problem. Let’s not call it RPC (btw, I completely agree with that).
>  But it’s one of the messaging patterns and hence should be under
>  oslo.messaging I guess, no?
> >>>
> >>> Yes and no, we currently have two APIs (rpc and notification). And
> >>> personally I regret to have the notification part in oslo.messaging.
> >>>
> >>> RPC and Notification are different beasts, and both are today limited
> >>> in terms of feature because they share the same driver implementation.
> >>>
> >>> Our RPC errors handling is really poor, for example Nova just put
> >>> instance in ERROR when something bad occurs in oslo.messaging layer.
> >>> This enforces deployer/user to fix the issue manually.
> >>>
> >>> Our Notification system doesn't allow fine grain routing of message,
> >>> everything goes into one configured topic/queue.
> >>>
> >>> And now we want to add a new one... I'm not against this idea,
> >>> but I'm not a huge fan.
> >>>
> >>> Thoughts from folks (mistral and oslo)?
> > Also, I was not at the Summit, should I conclude the Tooz+taskflow
> > approach (that ensure the idempotent of the application within the
> > library API) have not been accepted by mistral folks ?
>  Speaking about idempotency, IMO it’s not a central question that we
>  should be discussing here. Mistral users should have a choice: if they
>  manage to make their actions idempotent it’s excellent, in many cases
>  idempotency is certainly possible, btw. If no, then they know about
>  potential consequences.
> >>>
> >>> You shouldn't mix the idempotency of the user task and the idempotency
> >>> of a Mistral action (that will at the end run the user task).
> >>> You can have your Mistral task runner implementation idempotent and just
> >>> make the workflow to use configurable in case the user task is
> >>> interrupted or badly finished even if the user task is idempotent or not.
> >>> This makes the thing very predictable. You will know for example:
> >>> * if the user task has started or not,
> >>> * if the error is due to a node power cut when the user task runs,
> >>> * if you can safely retry a not idempotent user task on an other node,
> >>> * you will not be impacted by rabbitmq restart or TCP connection issues,
> >>> * ...
> >>>
> >>> With the oslo.messaging approach, everything will just end up in a
> >>> generic MessageTimeout error.
> >>>
> >>> The RPC API already have this kind of issue. Applications have
> >>> unfortunately
> >>> dealt with that (and I think they want something better now).
> >>> I'm just not convinced we should add a new "working queue" API in
> >>> oslo.messaging for tasks scheduling that have the same issue we already
> >>> have with RPC.
> >>>
> >>> Anyway, that's your choice, if you want rely on this poor structure,
> >>> I will
> >>> not be against, I'm not involved in Mistral. I just want everybody is
> >>> aware
> >>> of this.
> >>>
>  And even in this case there’s usually a number
>  of measures that can be taken to mitigate those consequences (reruning
>  workflows from certain points after manually fixing problems, rollback
>  scenarios etc.).
> >>>
> >>> taskflow allows to describe and automate this kind of workflow really
> >>> easily.
> >>>
>  What I’m saying is: let’s not make that crucial decision now about
>  what a messaging framework should support or not, let’s make it more
>  flexible to account for variety of different usage scenarios.
> >>>
> >>> I think the confusion is in the "messaging" keyword, currently
> >>> oslo.messaging
> >>> is a "RPC" framework and a "Notification" framework on top of 'messaging'
> >>> frameworks.
> >>>
> >>> Messaging framework we uses are 'kombu', 'pika', 'zmq' and 'pingus'.
> >>>
>  It’s normal for frameworks to give more rather than less.
> >>>
> >>> I disagree, here we mix different concepts into one library, all concepts
> >>> have to be implemented by different 'messaging framework',
> >>> So we fortunately give less to make thing just works in the same way
> >>> with all
> >>> drivers for all APIs.
> >>>
>  One more thing, at the summit we were discussing the possibility to
>  define at-most-once/at-least-once individually for Mistral tasks. This
>  is demanded because there cases where we need to do it, advanced users
>  may choose one or another depending on a task/action semantics.
>  However, it won’t be possible to implement w/o changes in the
>  underlying messaging framework.
> >>>
> >>> If we goes that way, oslo.messaging users and Mistral users have to
> >>> be aware
> >>> that their job/task/action/whatever 

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Matt Fischer
On Fri, Jun 3, 2016 at 1:35 PM, Lance Bragstad  wrote:

> Hey all,
>
> I have been curious about impact of providing performance feedback as part
> of the review process. From what I understand, keystone used to have a
> performance job that would run against proposed patches (I've only heard
> about it so someone else will have to keep me honest about its timeframe),
> but it sounds like it wasn't valued.
>
> I think revisiting this topic is valuable, but it raises a series of
> questions.
>
> Initially it probably only makes sense to test a reasonable set of
> defaults. What do we want these defaults to be? Should they be determined
> by DevStack, openstack-ansible, or something else?
>
> What does the performance test criteria look like and where does it live?
> Does it just consist of running tempest?
>

Keystone especially has some calls that are used 1000x or more relative to
others and so I'd be more concerned about them. For me this is token
validation #1 and token creation #2. Tempest checks them of course but
might be too coarse? There are token benchmarks like the ones Dolph and I
use, but they don't mimic a real workflow.  Something to consider.



>
> From a contributor and reviewer perspective, it would be nice to have the
> ability to compare performance results across patch sets. I understand that
> keeping all performance results for every patch for an extended period of
> time is unrealistic. Maybe we take a daily performance snapshot against
> master and use that to map performance patterns over time?
>

Having some time series data captured would be super useful. Could we have
daily charts stored indefinitely?



>
> Have any other projects implemented a similar workflow?
>
> I'm open to suggestions and discussions because I can't imagine there
> aren't other folks out there interested in this type of pre-merge data
> points.
>
> Thanks!
>
> Lance
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Higgins -- Container management service for OpenStack

2016-06-03 Thread Hongbin Lu
Hi all,

We would like to introduce you to a new container project for OpenStack called 
Higgins (it might be renamed later [1]).

Higgins is a Container Management service for OpenStack. The key objective of 
the Higgins project is to enable tight integration between OpenStack and 
container technologies. Until now, there has been no complete solution that 
effectively brings containers to OpenStack. Magnum provides a service to provision 
and manage Container Orchestration Engines (COEs), such as Kubernetes, Docker 
Swarm, and Apache Mesos, on top of Nova instances, but container management is 
out of its scope [2]. Nova-docker enables operating Docker containers through 
existing Nova APIs, but it can't support container features that go beyond the 
compute model. The Heat docker plugin allows using Docker containers as Heat 
resources, but it has a similar limitation to nova-docker. Generally speaking, 
OpenStack lacks a container management service that can integrate 
containers with OpenStack, and Higgins was created to fill the gap.

Higgins aims to provide an OpenStack-native API for launching and managing 
containers backed by different container technologies, such as Docker, Rocket, 
etc. Higgins doesn't require calling other services/tools to provision the 
container infrastructure. Instead, it relies on existing infrastructure that is 
set up by operators. In our vision, the key value Higgins brings to OpenStack is 
enabling one platform for provisioning and managing VMs, baremetal nodes, and 
containers as compute resources. In particular, VMs, baremetal nodes, and containers 
will share the following:
- Single authentication and authorization system: Keystone
- Single UI Dashboard: Horizon
- Single resource and quota management
- Single block storage pools: Cinder
- Single networking layer: Neutron
- Single CLI: OpenStackClient
- Single image management: Glance
- Single Heat template for orchestration
- Single resource monitoring and metering system: Telemetry

For more information, please find below:
Wiki: https://wiki.openstack.org/wiki/Higgins
The core team: https://review.openstack.org/#/admin/groups/1382,members
Team meeting: Every Tuesday 0300 UTC at #openstack-meeting

NOTE: we are looking for feedback to shape the project roadmap. If you're 
interested in this project, we appreciate your inputs in the etherpad: 
https://etherpad.openstack.org/p/container-management-service

Best regards,
The Higgins team

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095746.html
[2] https://review.openstack.org/#/c/311476/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Armando M.
On 3 June 2016 at 13:31, Carl Baldwin  wrote:

> On Fri, Jun 3, 2016 at 2:26 PM, Henry Gessau  wrote:
> > Darek Smigiel  wrote:
> >> strange, that owner is not able to just get rid of *his* network and
> subnets.
> >
> > But not all the subnets are his, and consequently the network is
> partially not
> > his.
>
> To me, this is a nonsensical outcome and tells me that subnets
> probably shouldn't really have owners distinct from the network's.
>

This might turn out to be a PEBCAK, as an admin can create a subnet on
behalf of a tenant by specifying his/her tenant id on the request, and that
may well be the reason why this was never tackled before and we have a
latent loop in the code.

Having said that, I think I lean toward avoiding the ransomware situation where
a tenant cannot delete his/her own resources, unless the other tenant frees
up the resource explicitly, but only for situations where the resource is
indeed idle. I would be extra cautious about elevating the context
indiscriminately though.


>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] newton 1 milestone closed

2016-06-03 Thread Anita Kuno
On 06/03/2016 05:32 PM, Doug Hellmann wrote:
> The Newton 1 milestone deadline is past, and the release team has
> processed all but a few of the tag requests. We had some technical
> issues with a few requests that we expect to have resolved early
> next week.
> 
> A few projects missed the deadline. Please review the schedule [1]
> and add the date for the Newton 2 milestone to your calendar. Keep
> in mind that the deadline is the Thursday of the designated week,
> as experienced in the western hemisphere. For Newton 2 that's July 14.
> 
> A few projects not following the milestone release model submitted
> release requests anyway. That's fine, but since we gave priority
> to the milestone projects this week we have postponed processing
> some of those other requests until next week.
> 
> Thanks,
> Doug
> 
> [1] http://releases.openstack.org/newton/schedule.html
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Thanks to the Release Team for your awesome work!

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Brant Knudson
On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad  wrote:

> Hey all,
>
> I have been curious about impact of providing performance feedback as part
> of the review process. From what I understand, keystone used to have a
> performance job that would run against proposed patches (I've only heard
> about it so someone else will have to keep me honest about its timeframe),
> but it sounds like it wasn't valued.
>
>
We had a job running rally for a year (I think) that nobody ever looked at
so we decided it was a waste and stopped running it.


> I think revisiting this topic is valuable, but it raises a series of
> questions.
>
> Initially it probably only makes sense to test a reasonable set of
> defaults. What do we want these defaults to be? Should they be determined
> by DevStack, openstack-ansible, or something else?
>
>
A performance test is going to depend on the environment (the machines,
disks, network, etc), the existing data (tokens, revocations, users, etc.),
and the config (fernet, uuid, caching, etc.). If these aren't consistent
between runs then the results are not going to be usable. (This is the
problem with running rally on infra hardware.) If the data isn't realistic
(1000s of tokens, etc.) then the results are going to be at best not useful
or at worst misleading.

What does the performance test criteria look like and where does it live?
> Does it just consist of running tempest?
>
>
I don't think tempest is going to give us numbers that we're looking for
for performance. I've seen a few scripts and have my own for testing
performance of token validation, token creation, user creation, etc. which
I think will do the exact tests we want and we can get the results
formatted however we like.

> From a contributor and reviewer perspective, it would be nice to have the
> ability to compare performance results across patch sets. I understand that
> keeping all performance results for every patch for an extended period of
> time is unrealistic. Maybe we take a daily performance snapshot against
> master and use that to map performance patterns over time?
>
>
Where are you planning to store the results?


> Have any other projects implemented a similar workflow?
>
> I'm open to suggestions and discussions because I can't imagine there
> aren't other folks out there interested in this type of pre-merge data
> points.
>
>
Thanks!
>
> Lance
>
>
Since the performance numbers are going to be very dependent on the
machines I think the only way this is going to work is if somebody's
willing to set up dedicated hardware to run the tests on. If you're doing
that then set it up to mimic how you deploy keystone, deploy the patch
under test, run the performance tests, and report the results. I'd be fine
with something like this commenting on keystone changes. None of this has
to involve openstack infra. Gerrit has a REST API to get the current
patches.
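
For reference, a rough sketch of pulling the current open keystone changes out
of Gerrit's REST API (the only quirk is stripping the ")]}'" prefix Gerrit puts
in front of its JSON):

    import json

    import requests

    # Ask Gerrit for a handful of open keystone changes. Gerrit prefixes its
    # JSON responses with ")]}'", so drop the first line before parsing.
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': 'project:openstack/keystone status:open',
                                'n': 5})
    changes = json.loads(resp.text.split('\n', 1)[1])
    for change in changes:
        print(change['_number'], change['subject'])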

Everyone that's got performance requirements should do the same. Maybe I
can get the group I'm in to try it sometime.

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Elevating context to remove subnets created by admin

2016-06-03 Thread Carl Baldwin
On Fri, Jun 3, 2016 at 2:26 PM, Henry Gessau  wrote:
> Darek Smigiel  wrote:
>> strange, that owner is not able to just get rid of *his* network and subnets.
>
> But not all the subnets are his, and consequently the network is partially not
> his.

To me, this is a nonsensical outcome and tells me that subnets
probably shouldn't really have owners distinct from the network's.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] newton 1 milestone closed

2016-06-03 Thread Doug Hellmann
The Newton 1 milestone deadline is past, and the release team has
processed all but a few of the tag requests. We had some technical
issues with a few requests that we expect to have resolved early
next week.

A few projects missed the deadline. Please review the schedule [1]
and add the date for the Newton 2 milestone to your calendar. Keep
in mind that the deadline is the Thursday of the designated week,
as experienced in the western hemisphere. For Newton 2 that's July 14.

A few projects not following the milestone release model submitted
release requests anyway. That's fine, but since we gave priority
to the milestone projects this week we have postponed processing
some of those other requests until next week.

Thanks,
Doug

[1] http://releases.openstack.org/newton/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-03 Thread Morgan Fainberg
On Jun 3, 2016 13:16, "Brant Knudson"  wrote:
>
>
>
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad 
wrote:
>>
>> Hey all,
>>
>> I have been curious about impact of providing performance feedback as
part of the review process. From what I understand, keystone used to have a
performance job that would run against proposed patches (I've only heard
about it so someone else will have to keep me honest about its timeframe),
but it sounds like it wasn't valued.
>>
>
> We had a job running rally for a year (I think) that nobody ever looked
at so we decided it was a waste and stopped running it.
>
>>
>> I think revisiting this topic is valuable, but it raises a series of
questions.
>>
>> Initially it probably only makes sense to test a reasonable set of
defaults. What do we want these defaults to be? Should they be determined
by DevStack, openstack-ansible, or something else?
>>
>
> A performance test is going to depend on the environment (the machines,
disks, network, etc), the existing data (tokens, revocations, users, etc.),
and the config (fernet, uuid, caching, etc.). If these aren't consistent
between runs then the results are not going to be usable. (This is the
problem with running rally on infra hardware.) If the data isn't realistic
(1000s of tokens, etc.) then the results are going to be at best not useful
or at worst misleading.
>
>> What does the performance test criteria look like and where does it
live? Does it just consist of running tempest?
>>
>
> I don't think tempest is going to give us numbers that we're looking for
for performance. I've seen a few scripts and have my own for testing
performance of token validation, token creation, user creation, etc. which
I think will do the exact tests we want and we can get the results
formatted however we like.
>
>> From a contributor and reviewer perspective, it would be nice to have
the ability to compare performance results across patch sets. I understand
that keeping all performance results for every patch for an extended period
of time is unrealistic. Maybe we take a daily performance snapshot against
master and use that to map performance patterns over time?
>>
>
> Where are you planning to store the results?
>
>>
>> Have any other projects implemented a similar workflow?
>>
>> I'm open to suggestions and discussions because I can't imagine there
aren't other folks out there interested in this type of pre-merge data
points.
>>
>>
>> Thanks!
>>
>> Lance
>>
>
> Since the performance numbers are going to be very dependent on the
machines I think the only way this is going to work is if somebody's
willing to set up dedicated hardware to run the tests on. If you're doing
that then set it up to mimic how you deploy keystone, deploy the patch
under test, run the performance tests, and report the results. I'd be fine
with something like this commenting on keystone changes. None of this has
to involve openstack infra. Gerrit has a REST API to get the current
patches.
>
> Everyone that's got performance requirements should do the same. Maybe I
can get the group I'm in to try it sometime.
>
> - Brant
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

You have outlined everything I was asking for from rally as a useful metric,
but simply getting the resources was a problem.

Unfortunately I have not seen anyone willing to offer these dedicated
resources and/or reporting the delta over time or per patchset.

There is only so much we can do without consistent / reliably identical test
environments.

I would be very happy to see this type of testing consistently reported,
especially if it mimics real workloads as well as synthetic ones like rally or what
Matt and Dolph use.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash
Both proposals allow you to provide a path as the project name in auth (so you 
can still use domain name + project path name). The difference between the two 
is whether you formally represent the path in the name attribute of a project, 
i.e. when it is returned by GET /project.  The relaxed name constraints proposal works 
like the Linux directory tree. If I do an ‘ls’ I get the node names of all the 
entities in that directory, but I can still do 'cd /a/b/c' to jump right to 
where I want.
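
As a toy illustration of the difference (the data below is made up, not
keystone's actual model): under the relaxed-constraint proposal the stored name
stays the node name and the full path can be built on demand from the parent
links, while the hierarchical-naming proposal would store that full path in the
name attribute itself.

    # Toy project tree keyed by id: (name, parent_id).
    projects = {
        'p1': ('mydivision', None),
        'p2': ('projectA', 'p1'),
        'p3': ('dev', 'p2'),
    }

    def full_path(project_id):
        # Walk the parent links and join the node names, unix-style.
        parts = []
        while project_id is not None:
            name, parent = projects[project_id]
            parts.append(name)
            project_id = parent
        return '/' + '/'.join(reversed(parts))

    print(projects['p3'][0])  # 'dev'                      -> name under proposal 1
    print(full_path('p3'))    # '/mydivision/projectA/dev' -> name under proposal 2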

Henry
> On 3 Jun 2016, at 23:53, Morgan Fainberg  wrote:
> 
> 
> On Jun 3, 2016 12:42, "Lance Bragstad"  wrote:
> >
> >
> >
> > On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash  wrote:
> >>
> >>
> >>> On 3 Jun 2016, at 16:38, Lance Bragstad  wrote:
> >>>
> >>>
> >>>
> >>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  wrote:
> 
> 
> > On 3 Jun 2016, at 01:22, Adam Young  wrote:
> >
> > On 06/02/2016 07:22 PM, Henry Nash wrote:
> >>
> >> Hi
> >>
> >> As you know, I have been working on specs that change the way we 
> >> handle the uniqueness of project names in Newton. The goal of this is 
> >> to better support project hierarchies, which as they stand today are 
> >> restrictive in that all project names within a domain must be unique, 
> >> irrespective of where in the hierarchy that projects sits (unlike, 
> >> say, the unix directory structure where a node name only has to be 
> >> unique within its parent). Such a restriction is particularly 
> >> problematic when enterprise start modelling things like test, QA and 
> >> production as branches of a project hierarchy, e.g.:
> >>
> >> /mydivision/projectA/dev
> >> /mydivision/projectA/QA
> >> /mydivision/projectA/prod
> >> /mydivision/projectB/dev
> >> /mydivision/projectB/QA
> >> /mydivision/projectB/prod
> >>
> >> Obviously the idea of a project name (née tenant) being unique has 
> >> been around since near the beginning of (OpenStack) time, so we must 
> >> be cautions. There are two alternative specs proposed:
> >>
> >> 1) Relax project name constraints: 
> >> https://review.openstack.org/#/c/310048/
> >> 2) Hierarchical project naming: 
> >> https://review.openstack.org/#/c/318605/
> >>
> >> First, here’s what they have in common:
> >>
> >> a) They both solve the above problem
> >> b) They both allow an authorization scope to use a path rather than 
> >> just a simple name, hence allowing you to address a project anywhere 
> >> in the hierarchy
> >> c) Neither have any impact if you are NOT using a hierarchy - i.e. if 
> >> you just have a flat layer of projects in a domain, then they have no 
> >> API or semantic impact (since both ensure that a project’s name must 
> >> still be unique within a parent)
> >>
> >> Here’s how the differ:
> >>
> >> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
> >> attribute of a project to be its node-name in the hierarchy, but 
> >> formally relaxes the uniqueness constraint to say that it only has to 
> >> be unique within its parent. In other words, let’s really model this a 
> >> bit like a unix directory tree.
> >>>
> >>> I think I lean towards relaxing the project name constraint. The reason 
> >>> is because we already expose `domain_id`, `parent_id`, and `name` of a 
> >>> project. By relaxing the constraint we can give the user everything the 
> >>> need to know about a project without really changing any of these. When 
> >>> using 3.7, you know what domain your project is in, you know the 
> >>> identifier of the parent project, and you know that your project name is 
> >>> unique within the parent.  
> >>
> >> - Hierarchical project naming (2), formally changes the meaning of the 
> >> ‘name’ attribute to include the path to the node as well as the node 
> >> name, and hence ensures that the (new) value of the name attribute 
> >> remains unique.
> >>>
> >>> Do we intend to *store* the full path as the name, or just build it out 
> >>> on demand? If we do store the full path, we will have to think about our 
> >>> current data model since the depth of the organization or domain would be 
> >>> limited by the max possible name length. Will performance be something to 
> >>> think about building the full path on every request?   
> >>
> >> I now mention this issue in the spec for hierarchical project naming (the 
> >> relax naming approach does not suffer this issue). As you say, we’ll have 
> >> 

[openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-03 Thread Sridhar Ramaswamy
Tackers,

I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker
core team. Bharath has been contributing to Tacker from the Liberty cycle,
and he has grown into a key member of this project. His contribution has
steadily increased as he picked up bigger pieces to deliver [1].
Specifically, he contributed the automatic resource creation blueprint [2]
in the Mitaka release. Plus tons of other RFEs and bug fixes [3]. Bharath
is also a key contributor in tosca-parser and heat-translator projects
which is an added plus.

Please provide your +1/-1 votes.

Thanks Bharath for your contributions so far and much more to come !!

[1]
http://stackalytics.com/?project_type=openstack=all=commits_id=bharath-ves=tacker-group
[2]
https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
[3] https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-03 Thread Sripriya Seetharam
+1. Welcome onboard Bharath!

-Sripriya

From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Friday, June 03, 2016 6:21 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core 
team

Tackers,

I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker core 
team. Bharath has been contributing to Tacker from the Liberty cycle, and he 
has grown into a key member of this project. His contribution has steadily 
increased as he picked up bigger pieces to deliver [1]. Specifically, he 
contributed the automatic resource creation blueprint [2] in the Mitaka 
release. Plus tons of other RFEs and bug fixes [3]. Bharath is also a key 
contributor in tosca-parser and heat-translator projects which is an added plus.

Please provide your +1/-1 votes.

Thanks Bharath for your contributions so far and much more to come !!

[1] 
http://stackalytics.com/?project_type=openstack=all=commits_id=bharath-ves=tacker-group
[2] 
https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
[3] 
https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Google Hangouts discussion for dueling specifications for Dockerfile customization

2016-06-03 Thread Steven Dake (stdake)
Ihor,

I don't plan to have hangouts be a common theme in our community.  However, the 
IT department of one of the main contributors to the conversation requires Google 
Hangouts or nothing at all.  So we were stuck with that; otherwise I'd have used WebEx and 
recorded it.

Regards
-steve


From: Ihor Dvoretskyi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Friday, June 3, 2016 at 3:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [kolla] Google Hangouts discussion for dueling 
specifications for Dockerfile customization

Steve,

If you have any concerns with recording the Hangouts meetings, we may try to 
run Zoom for that.

On Fri, Jun 3, 2016 at 3:50 AM, Steven Dake (stdake) wrote:
Hey folks,

IRC and mailing list were going far too slow for us to make progress on the 
competing specifications for handling Dockerfile customization.  Instead we 
held a hangout, which I don't like because it isn't recorded, but it is high 
bandwidth and permitted us to work through the problem in 1 hour instead of 1 
month.

The essence of the discussion:

  1.  I will use inc0's patch as a starting point and will do the following:
 *   Prototype base with block operations using the specification items 
in the elemental DSL
 *   Prototype mariadb with block operations using the specification 
items in the elemental DSL
 *   I will create a document assuming these two prototypes work that 
describes how to use the jinja2 block operations to replace or merge sections 
of Dockerfile.j2 files.
 *   We will stop specification development as it has served its purpose 
(of defining the requirements) assuming the prototypes meet people's taste test
  2.  We believe the Jinja2 block operation will meet the requirements set 
forth in the original elemental DSL specification (see the sketch after this list)
  3.  We as a community will need to modify our 115 dockerfiles, of which I'd 
like people to take 1 or 2 container sets each (40 in total), in a distributed 
fashion to implement the documentation described in section 1.3
  4.  We will produce an optional DSL parser (based upon the prototyping work) 
that outputs the proper Dockerfile.j2 files or alternatively operators 
can create their own block syntax files
  5.  All customization will be done in one master block replacement file
  6.  Original dockerfile.j2 files will stay intact with the addition of a 
bunch of block operations
  7.  Some RUN layer compression will be lost (the && in our Dockerfiles)
  8.  There are 8 DSL operations but we will need twice as many to handle both 
override and merging in a worst case scenario.  That means 16 blocks will need 
to be added to each Dockerfile.
  9.  Operators that have already customized their Dockerfile.j2 files can 
carry those changes or migrate to this new customization technique when this 
feature hits Newton, up to them
  10. If the prototypes don't work, back to the drawing board - that said I am 
keen to have any solution that meets the requirements so I will do a thorough 
job on the prototypes of inc0's work
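
To make the block idea concrete, here is a small self-contained sketch of a
Jinja2 block in a Dockerfile.j2 being overridden from a separate customization
template (the template names, block name, and packages are made up for
illustration; they are not the final Kolla conventions):

    from jinja2 import DictLoader, Environment

    # A stripped-down "base" Dockerfile.j2 that exposes a block an operator can
    # override, plus an operator-supplied template that extends it.
    templates = {
        'base/Dockerfile.j2': (
            'FROM {{ base_image }}\n'
            'RUN yum -y install sudo\n'
            '{% block base_footer %}{% endblock %}\n'
        ),
        'operator-override.j2': (
            '{% extends "base/Dockerfile.j2" %}\n'
            '{% block base_footer %}'
            'RUN yum -y install my-internal-agent'
            '{% endblock %}\n'
        ),
    }

    env = Environment(loader=DictLoader(templates))
    # Rendering the override yields the base Dockerfile with the operator's
    # extra RUN layer injected at the block's position.
    print(env.get_template('operator-override.j2').render(base_image='centos:7'))

If no override template is supplied, the empty block renders to nothing and the
stock Dockerfile comes out unchanged, which is what keeps point 6 above true.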

If you have questions, or I missed key points, please feel free to ask or speak 
up.

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Fwd: Re: Kilo version of Kolla

2016-06-03 Thread Paul Bourke

fyi

 Forwarded Message 
Subject: Re: Kilo version of Kolla
Date: Fri, 3 Jun 2016 16:01:59 +
From: Steven Dake (stdake) 
To: Paul Bourke, POLAKU, CHANDRA
CC: VENKATSUBRAMANIAM, SHRINATH, prabhu...@cognizant.com


Thanks Paul for sharing!

Would you mind ccing the mailing list with this response (I'd do it, but
I'm not sure if this link is special internal or not).

On 6/3/16, 2:14 AM, "Paul Bourke"  wrote:


Hi,

I think Steve summed things up pretty well including the challenges
involved in making Kolla deploy Kilo.

Our latest stable release based on Kilo is available at
http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html
and source code is available at
https://oss.oracle.com/git/?p=openstack-kolla.git;a=summary which you're
welcome to check out.

Beyond Kilo support many of our tweaks are related to Oracle provided
configurations such as support for Oracle VM server and using
mysqlcluster rather than galera which may or may not be of interest to
you.

Regards,
-Paul

On 03/06/16 02:14, Steven Dake (stdake) wrote:

Chandra,

The first version of Kolla the community released that actually worked
was 1.0.0 (Liberty).  Prior to that all the work was R prototyping.
  That said, I'd recommend using 1.1.0 if you plan to use Liberty.  Note
1.1.0 can be used to deploy Kilo, but it requires custom work which I
estimate at about 3 weeks for an engineer with 5-8 years of experience
and a thorough understanding of OpenStack configuration.

The hard part is changing the configuration options to work with Kilo as
well as sorting out where to get the repositories (or tarballs) from.
  Most everything should work as is, but there will be some changes.
  Oracle has a Kilo-based OpenStack Kolla product available that has
these customizations.  They didn't share mainly because we didn't call
Kilo our 1.0.0 release and there is no branch for their code to land in.
  Paul (cc) may be willing to share his Kilo customizations, or it may be
secret sauce as their product is open core.

Regards
-steve


From: "POLAKU, CHANDRA" >
Date: Wednesday, June 1, 2016 at 7:52 AM
To: Steven Dake >
Cc: "VENKATSUBRAMANIAM, SHRINATH" >, "prabhu...@cognizant.com
" >
Subject: Kilo version of Kolla

Hello Steve,

We want to install the Kilo version of Kolla in an OpenStack
environment. I came across this link,

https://github.com/openstack/kolla/blob/master/docs/developer-env.md.
However, it seems to have been moved or deleted. I am wondering if you can
help me in this regard. Any help will be greatly appreciated.

Thanks & Regards,

Chandra Polaku

Data Fabric Developer


Kforce Inc

990 Hammond Dr NE #930

Atlanta, GA 30328


P: 404.441.3667

cp6...@att.com 

TEXTING and DRIVING... It Can Wait.

Take the pledge today and pass it on.






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Tim Bell
The documentation at http://docs.openstack.org/admin-guide/compute-flavors.html 
is gradually improving. Are there areas which were not covered in your 
clarifications? If so, we should fix the documentation too, since this is a 
complex area to configure and good documentation is a great help.

BTW, there is also an issue around how the RAM for the BIOS is shadowed. I 
can’t find the page from a quick google but we found an imbalance when we used 
2GB pages as the RAM for BIOS shadowing was done by default in the memory space 
for only one of the NUMA nodes.

Having a look at the KVM XML can also help a bit if you are debugging.

Tim

From: Paul Michali 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday 3 June 2016 at 15:18
To: "Daniel P. Berrange" , "OpenStack Development Mailing 
List (not for usage questions)" 
Subject: Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

See PCM inline...
On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
> wrote:
On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> Hi!
>
> I've been playing with Liberty code a bit and had some questions that I'm
> hoping Nova folks may be able to provide guidance on...
>
> If I set up a flavor with hw:mem_page_size=2048, and I'm creating (Cirros)
> VMs with size 1024, will the scheduling use the minimum of the number of

1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?

PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the page 
size is 2048K, so 1024 pages? Hope I have the units right.



> huge pages available and the size requested for the VM, or will it base
> scheduling only on the number of huge pages?
>
> It seems to be doing the latter, where I had 1945 huge pages free, and
> tried to create another VM (1024) and Nova rejected the request with "no
> hosts available".

From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.

Anyway, when you request huge pages to be used for a flavour, the
entire guest RAM must be able to be allocated from huge pages.
ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
of huge pages available. It is not possible for a VM to use
1.5 GB of huge pages and 500 MB of normal sized pages.

PCM: Right, so, with 2GB of RAM, I need 1024 huge pages of size 2048K. In this 
case, there are 1945 huge pages available, so I was wondering why it failed. 
Maybe I'm confusing sizes/pages?



> Is this still the same for Mitaka?

Yep, this use of huge pages has not changed.

> Where could I look in the code to see how the scheduling is determined?

Most logic related to huge pages is in nova/virt/hardware.py

> If I use mem_page_size=large (what I originally had), should it evenly
> assign huge pages from the available NUMA nodes (there are two in my case)?
>
> It looks like it was assigning all VMs to the same NUMA node (0) in this
> case. Is the right way to change to 2048, like I did above?

Nova will always avoid spreading your VM across 2 host NUMA nodes,
since that gives bad performance characteristics. IOW, it will always
allocate huge pages from the NUMA node that the guest will run on. If
you explicitly want your VM to spread across 2 host NUMA nodes, then
you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
will then place each guest NUMA node, on a separate host NUMA node
and allocate huge pages from node to match. This is done using
the hw:numa_nodes=2 parameter on the flavour

PCM: Gotcha, but that was not the issue I'm seeing. With this small flavor (2GB 
= 1024 pages), I had 13107 huge pages initially. As I created VMs, they were 
*all* placed on the same NUMA node (0). As a result, when I got to more than 
half the available pages, Nova failed to allow further VMs, even though I had 
6963 available on one compute node, and 5939 on another.

It seems that all the assignments were to node zero. Someone suggested to me to 
set mem_page_size to 2048, and at that point it started assigning to both NUMA 
nodes evenly.
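
Daniel's placement rule above can be sketched roughly as follows (the numbers
and structure are illustrative only; this is not the actual logic in
nova/virt/hardware.py):

# Rough sketch: a guest (or each guest NUMA node) must fit entirely in the
# free 2048K huge pages of a single host NUMA node.

def fits(host_free_pages_per_node, guest_mem_mb, guest_numa_nodes=1,
         page_size_kb=2048):
    pages_needed = (guest_mem_mb * 1024) // page_size_kb   # 2048 MB -> 1024 pages
    per_guest_node = pages_needed // guest_numa_nodes
    free = dict(host_free_pages_per_node)
    for _ in range(guest_numa_nodes):
        # each guest NUMA node must land on its own host NUMA node
        host_node = next((n for n, p in free.items() if p >= per_guest_node),
                         None)
        if host_node is None:
            return False    # "no hosts available", even if the total is enough
        del free[host_node]
    return True

# ~1945 pages free in total, but split across the two host NUMA nodes:
print(fits({0: 972, 1: 973}, guest_mem_mb=2048))                      # False
print(fits({0: 972, 1: 973}, guest_mem_mb=2048, guest_numa_nodes=2))  # True

The second call shows the effect of hw:numa_nodes=2 on the flavour: each guest
NUMA node can then be satisfied from a different host NUMA node.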

Thanks for the help!!!


Regards,

PCM


> Again, has this changed at all in Mitaka?

Nope. Well aside from random bug fixes.

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [ironic] [oslo] Template to follow for policy support?

2016-06-03 Thread Devananda van der Veen


On 05/31/2016 04:01 PM, Jay Faulkner wrote:
> Hi all,
> 
> 
> During this cycle, on behalf of OSIC, I'll be working on implementing proper
> oslo.policy support for Ironic. The reasons this is needed probably don't need
> to be explained here, so I won't :).
> 
> 
> I have two requests for the list regarding this though:
> 
> 
> 1) Is there a general guideline to follow when designing policy roles? There
> appears to have been some discussion around this already
> here: https://review.openstack.org/#/c/245629/, but it hasn't moved in over a
> month. I want Ironic's implementation of policy to be as 'standard' as 
> possible;
> but I've had trouble finding any kind of standard.
> 
> 
> 2) A general call for contributors to help make this happen in Ironic. I want,
> in the next week, to finish up the research and start on a spec. Anyone 
> willing
> to help with the design or implementation let me know here or in IRC so we can
> work together.
> 
> 
> Thanks in advance,
> 
> Jay Faulkner
> 

Hi Jay,

Morgan and I sat down earlier this week to brainstorm on adding policy checks to
Ironic's API. Turns out, all the glue for enforcing policy is *already* in place
in the project, but we haven't implemented enforcement within specific API
methods yet. It's not going to be that much work -- I already have a POC up
locally with a few new policy settings.
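
For anyone following along, the kind of per-API-method enforcement being
described looks roughly like the sketch below (the rule names and the
target/creds dicts are invented for the example; this is not Ironic's actual
code or its final policy set):

from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
enforcer = policy.Enforcer(CONF)
enforcer.register_defaults([
    # illustrative rule names, not Ironic's real defaults
    policy.RuleDefault('admin_api', 'role:admin or role:administrator'),
    policy.RuleDefault('baremetal:node:get', 'rule:admin_api',
                       description='Retrieve a node record'),
])

def get_node(context, node):
    # do_raise=True makes enforce() raise PolicyNotAuthorized on failure
    enforcer.enforce('baremetal:node:get',
                     target={'node': node.get('uuid')},
                     creds={'roles': context.get('roles', [])},
                     do_raise=True)
    return node

print(get_node({'roles': ['admin']}, {'uuid': 'abc-123'}))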

I'll be happy to work on the spec with you, and hope to have the POC in a
shareable form within a couple days.

--Devananda


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-03 Thread Tony Breeds
On Fri, Jun 03, 2016 at 09:29:34AM +0200, Matthias Runge wrote:
> On 02/06/16 12:31, Tony Breeds wrote:
> > The list of 171 projects that match above is at [1].  There are
> > another 68
> I just abandoned open reviews for django_openstack_auth in kilo version.

Thanks.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-03 Thread Tony Breeds
On Thu, Jun 02, 2016 at 08:31:43PM +1000, Tony Breeds wrote:
> Hi all,
> In early May we tagged/EOL'd several (13) projects.  We'd like to do a
> final round for a more complete set.  We looked for projects that meet one or more
> of the following criteria:
> - The project is openstack-dev/devstack, openstack-dev/grenade or
>   openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
>   project-config:zuul/layout.yaml
> - The project is listed in governance:reference/projects.yaml and is tagged
>   with 'release:managed' or 'stable:follows-policy' (or both).

So we've had a few people opt into EOL'ing, which is great.

I've moved the lists from paste.o.o to a gist.  The reason for that is that I can
update them, the URL doesn't change, and there is a revision history (of sorts).

The 2 lists are now at: 
https://gist.github.com/tbreeds/7de812a5d363fab4bd425beae5084c87

Given that there are now only 39 repos that are not (yet) EOL'ing, I'm inclined
to default to EOL'ing everything that isn't a deployment project.

That is to say I'm suggesting that:
openstack/cloudkitty  cloudkitty   1
openstack/cloudkitty-dashboardcloudkitty   1
openstack/cloudpulse BigTent
openstack/compute-hyperv BigTent
openstack/fuel-plugin-purestorage-cinder BigTent
openstack/group-based-policy BigTent   4
openstack/group-based-policy-automation  BigTent
openstack/group-based-policy-ui  BigTent
openstack/murano-apps murano   3
openstack/nova-solver-scheduler  BigTent
openstack/openstack-resource-agents  BigTent
openstack/oslo-incubatoroslo
openstack/powervc-driver BigTent   1
openstack/python-cloudkittyclient cloudkitty   1
openstack/python-cloudpulseclientBigTent
openstack/python-group-based-policy-client   BigTent
openstack/swiftonfileBigTent
openstack/training-labsDocumentation
openstack/yaql   BigTent   2

Get added to the EOL list.

With the following hanging back for a while as they might need small tweaks
based on the kilo-eol tag.

openstack/cookbook-openstack-bare-metal   Chef OpenStack
openstack/cookbook-openstack-block-storageChef OpenStack
openstack/cookbook-openstack-client   Chef OpenStack
openstack/cookbook-openstack-common   Chef OpenStack
openstack/cookbook-openstack-compute  Chef OpenStack
openstack/cookbook-openstack-dashboardChef OpenStack
openstack/cookbook-openstack-data-processing  Chef OpenStack
openstack/cookbook-openstack-database Chef OpenStack
openstack/cookbook-openstack-identity Chef OpenStack
openstack/cookbook-openstack-imageChef OpenStack
openstack/cookbook-openstack-integration-test Chef OpenStack
openstack/cookbook-openstack-network  Chef OpenStack
openstack/cookbook-openstack-object-storage   Chef OpenStack
openstack/cookbook-openstack-ops-database Chef OpenStack
openstack/cookbook-openstack-ops-messagingChef OpenStack
openstack/cookbook-openstack-orchestrationChef OpenStack
openstack/cookbook-openstack-telemetryChef OpenStack
openstack/openstack-ansible OpenStackAnsible
openstack/openstack-chef-repo Chef OpenStack
openstack/packstack  BigTent

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-03 Thread Matthias Runge

On 02/06/16 12:31, Tony Breeds wrote:
> Hi all, In early May we tagged/EOL'd several (13) projects.  We'd
> like to do a final round for a more complete set.  We looked for
> projects meet one or more of the following criteria: - The project
> is openstack-dev/devstack, openstack-dev/grenade or 
> openstack/requirements - The project has the 'check-requirements'
> job listed as a template in project-config:zuul/layout.yaml - The
> project is listed in governance:reference/projects.yaml and is
> tagged with 'release:managed' or 'stable:follows-policy' (or
> both).
> 
> The list of 171 projects that match above is at [1].  There are
> another 68
I just abandoned open reviews for django_openstack_auth in kilo version.

django_openstack_auth is tightly coupled with horizon and should
follow the same schedule.

-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] openstack Unauthorized HTTP 401 'Could not find user' when sahara call heat

2016-06-03 Thread taget

Oh, Juno is so old that even Kilo has reached EOL already.

I suggest you add [sahara] to the subject.

On 2016年06月03日 09:51, 阮鹏飞 wrote:





Hi, Friends,

I am using OpenStack Juno heat and keystone with Mitaka sahara on CentOS 7. 
Sahara is installed in a docker container using the host network.
When sahara calls heat to create a hadoop cluster, the error below 
occurs.
Could you help check this issue? I guess heat couldn't create the 
user in keystone. The attachments are the conf file and the log file. Please 
refer to them.

Thanks for your help in advance. Hoping for your answer.

2016-05-31 11:22:45.625 41759 INFO urllib3.connectionpool [-] Starting 
new HTTP connection (1): 10.252.100.4
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource Traceback 
(most recent call last):
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 435, 
in _action_recorder

2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource yield
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 505, 
in _do_action
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/scheduler.py", line 286, 
in wrapper
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource step = 
next(subtask)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 476, 
in action_handler_task
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource 
handler_data = handler(*args)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resources/wait_condition.py", 
line 143, in handle_create
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource token = 
self._user_token()
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/stack_user.py", line 75, 
in _user_token
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource 
project_id=project_id, password=password)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/common/heat_keystoneclient.py", 
line 410, in stack_domain_user_token
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource 
authenticated=False)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/session.py", line 
430, in post
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource return 
self.request(url, 'POST', **kwargs)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/utils.py", line 318, 
in inner
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource return 
func(*args, **kwargs)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/session.py", line 
346, in request
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource raise 
exceptions.from_response(resp, method, url)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource Unauthorized: 
Could not find user: 
haddp45018380-test-master-ajdlwfudliu2-0-hnnojqmbzkbr-test-master-wc-handle-dp4cqhkmtykr 
(Disable debug mode to suppress these details.) (HTTP 401)

2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource

Fred Ruan







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards,
Eli Qiao (乔立勇), Intel OTC.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] how to install networking-sfc on compute node

2016-06-03 Thread Na Zhu
Yes, but networking-sfc rewrites the q-agt binary file. When I install 
networking-sfc in all-in-one mode, the q-agt binary file is:
juno@sfc:~/devstack$ cat /usr/local/bin/neutron-openvswitch-agent
#!/usr/bin/python
# PBR Generated from u'console_scripts'

import sys

from networking_sfc.services.sfc.agent.agent import main


if __name__ == "__main__":
sys.exit(main())
steve@sfc:~/devstack$ 





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Vikram Choudhary 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   2016/06/03 15:05
Subject:Re: [openstack-dev] [networking-sfc] how to install 
networking-sfc on compute node





On Thu, Jun 2, 2016 at 9:11 PM, Na Zhu  wrote:
Hi,

From this link 
https://github.com/openstack/networking-sfc/tree/master/devstack, it is 
about installing networking-sfc together with neutron-server,
I want to install networking-sfc on compute node, can anyone tell me how 
to set the local.conf? 
networking-sfc support is only required on the controller node as it uses 
q-agt (ovs driver implementation) for downloading flows to the ovs. By 
default, we already run q-agt on the compute node.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Discussion of PuppetOpenstack Project abbreviation

2016-06-03 Thread Sergii Golovatiuk
I would vote for POSM - "Puppet OpenStack Modules"

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Jun 1, 2016 at 7:25 PM, Cody Herriges  wrote:

>
> > On Jun 1, 2016, at 5:56 AM, Dmitry Tantsur  wrote:
> >
> > On 06/01/2016 02:20 PM, Jason Guiditta wrote:
> >> On 01/06/16 18:49 +0800, Xingchao Yu wrote:
> >>>  Hi, everyone:
> >>>
> >>>  Do we need to give an abbreviation for the PuppetOpenstack project? B/C
> >>>  it's really a long name when I introduce this project to people or
> >>>  write articles about it.
> >>>
> >>>  How about POM(PuppetOpenstack Modules) or POP(PuppetOpenstack
> >>>  Project) ?
> >>>
> >>>  I would like +1 for POM.
> >>>  Just an idea, please feel free to give your comment :D
> >>>  Xingchao Yu
> >>
> >> For rdo and osp, we package it as 'openstack-puppet-modules', or OPM
> >> for short.
> >
> > I definitely love POM as it reminds me of pomeranians :) but I agree
> that OPM will probably be easier recognizable.
>
> The project's official name is in fact "Puppet OpenStack" so OPM would be
> kinda confusing.  I'd put my vote on POP because it is closer to the actual
> definition of an acronym[1], which I generally find easier to remember over
> all when it comes to the shortening of long phrases.
>
> [1] http://www.merriam-webster.com/dictionary/acronym
>
> --
> Cody
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Henry Nash

> On 3 Jun 2016, at 01:22, Adam Young  wrote:
> 
> On 06/02/2016 07:22 PM, Henry Nash wrote:
>> Hi
>> 
>> As you know, I have been working on specs that change the way we handle the 
>> uniqueness of project names in Newton. The goal of this is to better support 
>> project hierarchies, which as they stand today are restrictive in that all 
>> project names within a domain must be unique, irrespective of where in the 
>> hierarchy that projects sits (unlike, say, the unix directory structure 
>> where a node name only has to be unique within its parent). Such a 
>> restriction is particularly problematic when enterprise start modelling 
>> things like test, QA and production as branches of a project hierarchy, e.g.:
>> 
>> /mydivision/projectA/dev
>> /mydivision/projectA/QA
>> /mydivision/projectA/prod
>> /mydivision/projectB/dev
>> /mydivision/projectB/QA
>> /mydivision/projectB/prod
>> 
>> Obviously the idea of a project name (née tenant) being unique has been 
>> around since near the beginning of (OpenStack) time, so we must be cautious. 
>> There are two alternative specs proposed:
>> 
>> 1) Relax project name constraints:  
>> https://review.openstack.org/#/c/310048/
>>   
>> 2) Hierarchical project naming:  
>> https://review.openstack.org/#/c/318605/
>>  
>> 
>> First, here’s what they have in common:
>> 
>> a) They both solve the above problem
>> b) They both allow an authorization scope to use a path rather than just a 
>> simple name, hence allowing you to address a project anywhere in the 
>> hierarchy
>> c) Neither have any impact if you are NOT using a hierarchy - i.e. if you 
>> just have a flat layer of projects in a domain, then they have no API or 
>> semantic impact (since both ensure that a project’s name must still be 
>> unique within a parent)
>> 
>> Here’s how they differ:
>> 
>> - Relax project name constraints (1), keeps the meaning of the ‘name’ 
>> attribute of a project to be its node-name in the hierarchy, but formally 
>> relaxes the uniqueness constraint to say that it only has to be unique 
>> within its parent. In other words, let’s really model this a bit like a unix 
>> directory tree.
>> - Hierarchical project naming (2), formally changes the meaning of the 
>> ‘name’ attribute to include the path to the node as well as the node name, 
>> and hence ensures that the (new) value of the name attribute remains unique.
>> 
>> While whichever approach we chose would only be included in a new 
>> microversion (3.7) of the Identity API, although some relevant APIs can 
>> remain unaffected for a client talking 3.6 to a Newton server, not all can 
>> be. As pointed out by jamielennox, this is a data modelling problem - if a 
>> Newton server has created multiple projects called “dev” in the hierarchy, a 
>> 3.6 client trying to scope a token simply to “dev” cannot be answered 
>> correctly (and it is proposed we would have to return an HTTP 409 Conflict 
>> error if multiple nodes with the same name were detected). This is true for 
>> both approaches.
>> 
>> Other comments on the approaches:
>> 
>> - Having a full path as the name seems duplicative with the current project 
>> entity - since we already return the parent_id (hence parent_id + name is, 
>> today, sufficient to place a project in the hierarchy).
> 
> The one thing I like is the ability to specify just the full path for the 
> OS_PROJECT_NAME env var, but we could make that a separate variable.  Just as 
> DOMAIN_ID + PROJECT_NAME is unique today, OS_PROJECT_PATH should be able to 
> fully specify a project unambiguously.  I'm not sure which would have a 
> larger impact on users.
> 
Agreed - and this could be done for both approaches (since this is all part of 
the “auth data flow").
> 
>> - In the past, we have been concerned about the issue of what we do if there 
>> is a project further up the tree that we do not have any roles on. In such 
>> cases, APIs like list project parents will not display anything other than 
>> the project ID for such projects. In the case of making the name the full 
>> path, we would be effectively exposing the name of all projects above us, 
>> irrespective of whether we had roles on them. Maybe this is OK, maybe it 
>> isn’t.
> 
> I think it is OK.  If this info needs to be hidden from a user, the project 
> should probably be in a different domain.
> 
>> - While making the name the path keeps it unique, this is fine if clients 
>> blindly use this attribute to plug back into another API to call. However 
>> if, for example, you are Horizon and are displaying them in a UI then you 
>> need to start breaking down the path into its components, where you don’t 
>> today.
>> - One area where names as the hierarchical path DOES look right is calling 
>> the /auth/projects API - where what the caller wants 

Re: [openstack-dev] [networking-sfc] how to install networking-sfc on compute node

2016-06-03 Thread Vikram Choudhary
On Thu, Jun 2, 2016 at 9:11 PM, Na Zhu  wrote:

> Hi,
>
> From this link
> https://github.com/openstack/networking-sfc/tree/master/devstack, it is
> about installing networking-sfc together with neutron-server,
> I want to install networking-sfc on compute node, can anyone tell me how
> to set the local.conf?

networking-sfc support is only required on the controller node as it uses
q-agt (ovs driver implementation) for downloading flows to the ovs. By
default, we already run q-agt on the compute node.

>
>
>
>
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> District, Shanghai, China (201203)
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all][murano] Tagging kilo-eol for "the world"

2016-06-03 Thread Ihar Hrachyshka

> On 03 Jun 2016, at 12:56, Kirill Zaitsev  wrote:
> 
> I’d like to ask to keep murano-apps kilo branch alive. It’s indeed not a 
> deployable project, but a collection of reference apps for murano. While no 
> active development happens for murano on kilo itself anymore, the apps repo 
> is intended to provide reference application for kilo users.
> 
> Is it possible for us to keep that branch alive?

You will still have kilo-eol tag for external references. Isn’t it enough?

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] reviewed by multiple eyes

2016-06-03 Thread Shinobu Kinjo
Hi Team,

There are some patch sets reviewed by only myself.
From my point of view, any patch set needs to be reviewed by multiple eyes.

It's because no one is perfect, and there could be something missing.
Please take a look if you get a notification to review.

Cheers,
Shinobu

-- 
Email:
shin...@linux.com
shin...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][mistral] Saga of process than ack and where can we go from here...

2016-06-03 Thread Deja, Dawid
On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:

On 05 May 2016, at 01:49, Mehdi Abaakouk 
> wrote:


Le 2016-05-04 10:04, Renat Akhmerov a écrit :
No problem. Let’s not call it RPC (btw, I completely agree with that).
But it’s one of the messaging patterns and hence should be under
oslo.messaging I guess, no?

Yes and no, we currently have two APIs (rpc and notification). And
personally I regret to have the notification part in oslo.messaging.

RPC and Notification are different beasts, and both are today limited
in terms of feature because they share the same driver implementation.

Our RPC error handling is really poor; for example Nova just puts the
instance in ERROR when something bad occurs in the oslo.messaging layer.
This forces the deployer/user to fix the issue manually.

Our Notification system doesn't allow fine-grained routing of messages;
everything goes into one configured topic/queue.

And now we want to add a new one... I'm not against this idea,
but I'm not a huge fan.

Thoughts from folks (mistral and oslo)?
Also, I was not at the Summit; should I conclude that the Tooz+taskflow approach 
(which ensures the idempotency of the application within the library API) has not 
been accepted by the mistral folks?
Speaking about idempotency, IMO it’s not a central question that we
should be discussing here. Mistral users should have a choice: if they
manage to make their actions idempotent it’s excellent, in many cases
idempotency is certainly possible, btw. If no, then they know about
potential consequences.

You shouldn't mix up the idempotency of the user task and the idempotency
of a Mistral action (which will in the end run the user task).
You can make your Mistral task runner implementation idempotent and just
make the workflow behaviour configurable for the case where the user task is
interrupted or finishes badly, whether or not the user task is idempotent.
This makes the thing very predictable. You will know for example:
* if the user task has started or not,
* if the error is due to a node power cut when the user task runs,
* if you can safely retry a not idempotent user task on an other node,
* you will not be impacted by rabbitmq restart or TCP connection issues,
* ...

With the oslo.messaging approach, everything will just end up in a
generic MessageTimeout error.

The RPC API already has this kind of issue. Applications have unfortunately
dealt with that (and I think they want something better now).
I'm just not convinced we should add a new "working queue" API in
oslo.messaging for task scheduling that has the same issue we already
have with RPC.

Anyway, that's your choice; if you want to rely on this poor structure, I will
not be against it, I'm not involved in Mistral. I just want everybody to be aware
of this.

And even in this case there’s usually a number
of measures that can be taken to mitigate those consequences (reruning
workflows from certain points after manually fixing problems, rollback
scenarios etc.).

taskflow allows to describe and automate this kind of workflow really easily.

What I’m saying is: let’s not make that crucial decision now about
what a messaging framework should support or not, let’s make it more
flexible to account for variety of different usage scenarios.

I think the confusion is in the "messaging" keyword, currently oslo.messaging
is a "RPC" framework and a "Notification" framework on top of 'messaging'
frameworks.

The messaging frameworks we use are 'kombu', 'pika', 'zmq' and 'pyngus'.

It’s normal for frameworks to give more rather than less.

I disagree; here we mix different concepts into one library, and all concepts
have to be implemented by the different 'messaging frameworks'.
So we fortunately give less, to make things just work in the same way with all
drivers for all APIs.

One more thing: at the summit we were discussing the possibility of
defining at-most-once/at-least-once individually for Mistral tasks. This
is in demand because there are cases where we need to do it; advanced users
may choose one or the other depending on a task/action's semantics.
However, it won’t be possible to implement w/o changes in the
underlying messaging framework.

If we go that way, oslo.messaging users and Mistral users have to be aware
that their job/task/action/whatever will perhaps not be called (at-most-once)
or will perhaps be called twice (at-least-once).
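
For readers who have not followed the whole thread, the difference comes down
to when the ack is sent relative to running the task; roughly (illustrative
sketch only, this is not the oslo.messaging API):

import queue

class FakeQueue(object):
    """Tiny in-memory stand-in with get()/ack() so the sketch is runnable."""
    def __init__(self, items):
        self._q = queue.Queue()
        for item in items:
            self._q.put(item)

    def get(self):
        return self._q.get()

    def ack(self, msg):
        print('acked %r' % (msg,))

def run(msg):
    print('running %r' % (msg,))

def consume_at_most_once(q, handler):
    msg = q.get()
    q.ack(msg)       # ack first: a crash inside handler() loses the task,
    handler(msg)     # so the task runs at most once

def consume_at_least_once(q, handler):
    msg = q.get()
    handler(msg)     # run first: a crash before ack() means redelivery,
    q.ack(msg)       # so the task may run twice

consume_at_most_once(FakeQueue(['task-1']), run)
consume_at_least_once(FakeQueue(['task-2']), run)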

The oslo.messaging/Mistral API and docs must be clear about this behavior to
avoid having bugs opened against oslo.messaging because a script written via the Mistral
API is not executed as expected "sometimes".
"sometimes" == when deployers have trouble with their rabbitmq (or whatever)
broker, and even just when a deployer restarts a broker node or when a TCP
issue occurs. In the end, the backtrace in these cases always shows only an
oslo.messaging trace (the well known MessageTimeout...).


Also oslo.messaging is already a fragile brick used by everybody that a very 
small subset of people maintain (thanks to them).

I'm afraid 

Re: [openstack-dev] [tc][security] Item #5 of the VMT

2016-06-03 Thread Rob C
Doug Chivers might have some thoughts on this but I'm happy with your
proposal Steve, kind of you to do the leg-work.

-rob

On Fri, Jun 3, 2016 at 1:29 AM, Steven Dake (stdake) 
wrote:

> Hi folks,
>
> I think we are nearly done with Item #5 [1] of the VMT.  One question
> remains.
>
> We need to know which repo the analysis documentation will land in .
> There is security-doc we could use for this purpose, but we could also
> create a new repository called "security-analysis" (or open to other
> names).  I'll create the repo, get reno integrated with it, get sphinx
> integrated with it, and get a basic documentation index.rst in place using
> cookiecutter + extra git reviews.  I'll also set up project-config for
> you.  After that, I don't think there is much I can do as my plate is
> pretty full :)
>
> Regards
> -steve
>
> [1] https://review.openstack.org/#/c/300698/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all][murano] Tagging kilo-eol for "the world"

2016-06-03 Thread Kirill Zaitsev
I’d like to ask to keep murano-apps kilo branch alive. It’s indeed not a
deployable project, but a collection of reference apps for murano. While no
active development happens for murano on kilo itself anymore, the apps repo
is intended to provide reference application for kilo users.

Is it possible for us to keep that branch alive?

-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc

On 3 June 2016 at 09:26:58, Tony Breeds (t...@bakeyournoodle.com) wrote:

to default to EOL'ing everything that that isn't a deployment project.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?

2016-06-03 Thread Sam Betts (sambetts)
I personally think that we need IPA versioning, but not so that we can pin a 
version. We need versioning so that we can do more intelligent graceful 
degradation in Ironic without just watching for errors and guessing if a 
feature isn’t available. If we add a new feature in Ironic that requires a 
feature in IPA, then we should add code in Ironic that checks the version of 
IPA (either via an API or reported at lookup) and turns on/off that feature 
based on the version of IPA we’re talking to. Doing this would allow for both 
backwards and forward IPA version compatibility:

Old Ironic with newer IPA: Should just work
New Ironic with old IPA: Ironic should intelligently turn off unsupported 
features, with Warnings in the logs telling the operator if a feature is 
skipped.
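
As a rough sketch of that kind of gating (the feature names, minimum versions
and the idea of a reported agent version string below are invented for
illustration; this is not Ironic code):

from distutils.version import StrictVersion
import logging

LOG = logging.getLogger(__name__)

# minimum agent version each optional feature needs (made-up numbers)
FEATURE_MIN_VERSION = {
    'partition_images': StrictVersion('1.2.0'),
    'streaming_raw_images': StrictVersion('1.4.0'),
}

def supported_features(agent_version):
    """Return the features we may use with an agent reporting agent_version."""
    version = StrictVersion(agent_version)
    features = set()
    for name, minimum in sorted(FEATURE_MIN_VERSION.items()):
        if version >= minimum:
            features.add(name)
        else:
            LOG.warning('Skipping %s: agent %s is older than %s',
                        name, version, minimum)
    return features

print(supported_features('1.3.1'))   # older IPA: streaming_raw_images is skipped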

Sam

From: Dmitry Tantsur >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, 2 June 2016 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?


On 2 June 2016 at 10:19 PM, "Loo, Ruby" wrote:
>
> Hi,
>
> I recently reviewed a patch [1] that is trying to address an issue with 
> ironic (master) talking to a ramdisk that has a mitaka IPA lurking around.
>
> It made me think that IPA may no longer be a teenager (yay, boo). IPA now has 
> a stable branch. I think it is time it grows up and acts responsibly. Ironic 
> needs to know which era of IPA it is talking to. Or conversely, does ironic 
> want to specify which microversion of IPA it wants to use? (Sorry, Dmitry, I 
> realize you are cringing.)

With versioning in place we'll have to fix one IPA version in ironic. Meaning, 
as soon as we introduce a new feature, we have to explicitly break 
compatibility with old ramdisk by requesting a version it does not support. 
Even if the feature itself is optional. Or we have to wait some long time 
before using new IPA features in ironic. I hate both options.

Well, or we can use some different versioning procedure :)

>
> Has anyone thought more than I have about this (i.e., more than 2ish minutes)?
>
> If the solution (whatever it is) is going to take a long time to implement, 
> is there anything we can do in the short term (ie, in this cycle)?
>
> --ruby
>
> [1] https://review.openstack.org/#/c/319183/
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Addition to the core team

2016-06-03 Thread Saad Zaher
+1

On Fri, Jun 3, 2016 at 10:11 AM, Vanni, Fabrizio 
wrote:

> +1
>
> -Original Message-
> From: Lopes do Sacramento, Cynthia
> Sent: 02 June 2016 17:21
> To: Zaher, Saad ; Mathieu, Pierre-Arthur <
> pierre-arthur.math...@hpe.com>; openstack-dev@lists.openstack.org
> Cc: freezer-eskimos 
> Subject: RE: [openstack-dev][freezer] Addition to the core team
>
> +1
>
> -Original Message-
> From: Zaher, Saad
> Sent: 02 June 2016 17:13
> To: Mathieu, Pierre-Arthur ;
> openstack-dev@lists.openstack.org
> Cc: freezer-eskimos 
> Subject: RE: [openstack-dev][freezer] Addition to the core team
>
> +1
>
> -Original Message-
> From: Mathieu, Pierre-Arthur
> Sent: Thursday, June 2, 2016 4:42 PM
> To: openstack-dev@lists.openstack.org
> Cc: freezer-eskimos 
> Subject: Re: [openstack-dev][freezer] Addition to the core team
>
> Small correction for the final line of the last email.
> I am proposing Deklan and not Saad as core.
>
> - Pierre
>
> 
> From: Mathieu, Pierre-Arthur
> Sent: Thursday, June 2, 2016 4:29:29 PM
> To: openstack-dev@lists.openstack.org
> Cc: freezer-eskimos
> Subject: [openstack-dev][freezer] Addition to the core team
>
> Hello,
>
> I would like to propose that we make Deklan Dieterly (ddieterly) core on
> freezer.
> He has been a highly valuable developer for the past few months, mainly
> working on integration testing for Freezer components.
> He has also been helping a lot with features and UX testing.
>
>
> His work can be found here: [1]
> And his stackalitics profile here: [2]
>
> Unless there is a disagreement I plan to make Saad core by the end of the
> week.
>
>
> Thanks
> - Pierre, Freezer PTL
>
> [1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
> [2] http://stackalytics.com/?user_id=deklan=all=loc
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
--
Best Regards,
Saad!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] how to install networking-sfc on compute node

2016-06-03 Thread Mohan Kumar
Juno Zhu ,

Please check this wiki link:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining#Multi-Host_Installation

Thanks.,
Mohankumar.N


On Fri, Jun 3, 2016 at 12:43 PM, Na Zhu  wrote:

> Yes, but networking-sfc rewrite the q-agt binary file, when i install
> networking-sfc in allinone mode, the q-aget binary file is:
> juno@sfc:~/devstack$ cat /usr/local/bin/neutron-openvswitch-agent
> #!/usr/bin/python
> # PBR Generated from u'console_scripts'
>
> import sys
>
> from networking_sfc.services.sfc.agent.agent import main
>
>
> if __name__ == "__main__":
> sys.exit(main())
> steve@sfc:~/devstack$
>
>
>
>
>
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> District, Shanghai, China (201203)
>
>
>
> From:Vikram Choudhary 
> To:"OpenStack Development Mailing List (not for usage questions)"
> 
> Date:2016/06/03 15:05
> Subject:Re: [openstack-dev] [networking-sfc] how to install
> networking-sfc on compute node
> --
>
>
>
>
>
> On Thu, Jun 2, 2016 at 9:11 PM, Na Zhu <na...@cn.ibm.com> wrote:
> Hi,
>
> From this link
> https://github.com/openstack/networking-sfc/tree/master/devstack, it is
> about installing networking-sfc together with neutron-server,
> I want to install networking-sfc on compute node, can anyone tell me how
> to set the local.conf?
> networking-sfc support is only required on the controller node as it uses
> q-agt (ovs driver implementation) for downloading flows to the ovs. By
> default, we already run q-agt on the compute node.
>
>
>
>
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> District, Shanghai, China (201203)
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-03 Thread Karthik Natarajan
+1. Thanks for your awesome contributions Bharath !

From: Sripriya Seetharam [mailto:ssee...@brocade.com]
Sent: Friday, June 03, 2016 6:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker 
core team

+1. Welcome onboard Bharath!

-Sripriya

From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Friday, June 03, 2016 6:21 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core 
team

Tackers,

I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker core 
team. Bharath has been contributing to Tacker from the Liberty cycle, and he 
has grown into a key member of this project. His contribution has steadily 
increased as he picked up bigger pieces to deliver [1]. Specifically, he 
contributed the automatic resource creation blueprint [2] in the Mitaka 
release. Plus tons of other RFEs and bug fixes [3]. Bharath is also a key 
contributor in tosca-parser and heat-translator projects which is an added plus.

Please provide your +1/-1 votes.

Thanks Bharath for your contributions so far and much more to come !!

[1] 
http://stackalytics.com/?project_type=openstack=all=commits_id=bharath-ves=tacker-group
[2] 
https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
[3] 
https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-03 Thread Haddleton, Bob (Nokia - US)
+1

Bob

On Jun 3, 2016, at 8:24 PM, Sridhar Ramaswamy 
> wrote:

Tackers,

I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker core 
team. Bharath has been contributing to Tacker from the Liberty cycle, and he 
has grown into a key member of this project. His contribution has steadily 
increased as he picked up bigger pieces to deliver [1]. Specifically, he 
contributed the automatic resource creation blueprint [2] in the Mitaka 
release. Plus tons of other RFEs and bug fixes [3]. Bharath is also a key 
contributor in tosca-parser and heat-translator projects which is an added plus.

Please provide your +1/-1 votes.

Thanks Bharath for your contributions so far and much more to come !!

[1] 
http://stackalytics.com/?project_type=openstack=all=commits_id=bharath-ves=tacker-group
[2] https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
[3] 
https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Kuryr did not detect neutron tag plugin in devstack

2016-06-03 Thread Liping Mao (limao)
Hi Kuryr team,

I noticed that kuryr did not detect the neutron tag plugin in devstack [1].
This is because when the kuryr process starts up in devstack,
neutron-server has not yet finished loading the tag plugin.
Kuryr uses an API call to detect the neutron tag extension, so kuryr will not detect it.
After I manually restart the kuryr process, everything works well.

I'm not familiar with devstack, and not sure if there is any way to
make sure neutron-server has finished starting before kuryr starts up.
I submitted a patch [2] that simply restarts kuryr in the extra stage; at that stage,
neutron-server has already finished starting.
Any comments or better ideas on how to solve this?
Please add your comments to the patch or here. Thanks.

[1] https://bugs.launchpad.net/kuryr/+bug/1587522
[2] https://review.openstack.org/#/c/323453/
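
Another possible direction (purely illustrative, not what the patch in [2]
does) would be for the kuryr startup to wait until neutron-server actually
reports the 'tag' extension; the endpoint URL and token below are assumptions
made for the sketch:

import time
import requests

NEUTRON_URL = 'http://127.0.0.1:9696/v2.0'   # assumed local neutron endpoint
TOKEN = 'ADMIN_TOKEN'                        # assumed pre-fetched auth token

def wait_for_tag_extension(timeout=120, interval=5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            resp = requests.get(NEUTRON_URL + '/extensions',
                                headers={'X-Auth-Token': TOKEN})
            if resp.status_code == 200:
                aliases = [ext['alias']
                           for ext in resp.json().get('extensions', [])]
                if 'tag' in aliases:
                    return True
        except requests.RequestException:
            pass                             # neutron-server not up yet
        time.sleep(interval)
    return False

if not wait_for_tag_extension():
    raise RuntimeError('neutron tag extension not available; '
                       'kuryr would fall back to non-tag mode')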

Regards,
Liping Mao


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-03 Thread Morgan Fainberg
On Jun 3, 2016 12:42, "Lance Bragstad"  wrote:
>
>
>
> On Fri, Jun 3, 2016 at 11:20 AM, Henry Nash  wrote:
>>
>>
>>> On 3 Jun 2016, at 16:38, Lance Bragstad  wrote:
>>>
>>>
>>>
>>> On Fri, Jun 3, 2016 at 3:20 AM, Henry Nash  wrote:


> On 3 Jun 2016, at 01:22, Adam Young  wrote:
>
> On 06/02/2016 07:22 PM, Henry Nash wrote:
>>
>> Hi
>>
>> As you know, I have been working on specs that change the way we
handle the uniqueness of project names in Newton. The goal of this is to
better support project hierarchies, which as they stand today are
restrictive in that all project names within a domain must be unique,
irrespective of where in the hierarchy that projects sits (unlike, say, the
unix directory structure where a node name only has to be unique within its
parent). Such a restriction is particularly problematic when enterprise
start modelling things like test, QA and production as branches of a
project hierarchy, e.g.:
>>
>> /mydivision/projectA/dev
>> /mydivision/projectA/QA
>> /mydivision/projectA/prod
>> /mydivision/projectB/dev
>> /mydivision/projectB/QA
>> /mydivision/projectB/prod
>>
>> Obviously the idea of a project name (née tenant) being unique has
been around since near the beginning of (OpenStack) time, so we must be
cautions. There are two alternative specs proposed:
>>
>> 1) Relax project name constraints:
https://review.openstack.org/#/c/310048/
>> 2) Hierarchical project naming:
https://review.openstack.org/#/c/318605/
>>
>> First, here’s what they have in common:
>>
>> a) They both solve the above problem
>> b) They both allow an authorization scope to use a path rather than
just a simple name, hence allowing you to address a project anywhere in the
hierarchy
>> c) Neither have any impact if you are NOT using a hierarchy - i.e.
if you just have a flat layer of projects in a domain, then they have no
API or semantic impact (since both ensure that a project’s name must still
be unique within a parent)
>>
>> Here’s how they differ:
>>
>> - Relax project name constraints (1), keeps the meaning of the
‘name’ attribute of a project to be its node-name in the hierarchy, but
formally relaxes the uniqueness constraint to say that it only has to be
unique within its parent. In other words, let’s really model this a bit
like a unix directory tree.
>>>
>>> I think I lean towards relaxing the project name constraint. The reason
is because we already expose `domain_id`, `parent_id`, and `name` of a
project. By relaxing the constraint we can give the user everything the
need to know about a project without really changing any of these. When
using 3.7, you know what domain your project is in, you know the identifier
of the parent project, and you know that your project name is unique within
the parent.
>>
>> - Hierarchical project naming (2), formally changes the meaning of
the ‘name’ attribute to include the path to the node as well as the node
name, and hence ensures that the (new) value of the name attribute remains
unique.
>>>
>>> Do we intend to *store* the full path as the name, or just build it out
on demand? If we do store the full path, we will have to think about our
current data model since the depth of the organization or domain would be
>>> limited by the max possible name length. Will performance be something to
>>> think about if we build the full path on every request?
>>
>> I now mention this issue in the spec for hierarchical project naming
>> (the relax naming approach does not suffer from this issue). As you say, we’ll
>> have to change the DB (today it is only 64 chars) if we do store the full
>> path. This is slightly problematic since the maximum depth of hierarchy is
>> controlled by a config option, and hence could be changed. We will
>> absolutely have to be able to build the path on-the-fly in order to support
>> legacy drivers (who won’t be able to store more than 64 chars). We may need
>> to do some performance tests to ascertain if we can get away with building
>> the path on-the-fly in all cases and avoid changing the table.  One
>> additional point is that, of course, the API will return this path whenever
>> it returns a project - so clients will need to be aware of this increase in
>> size.
>
>
> These are all good points that continue to push me towards relaxing the
project naming constraint :)
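
For reference, building the full path on the fly from the existing parent_id
links is cheap to sketch (illustrative data and code only, not keystone's):

# Illustrative only: derive a project's hierarchical "name" from parent_id
# links, so nothing longer than the node name ever needs to be stored.
projects = {
    'p1': {'name': 'mydivision', 'parent_id': None},
    'p2': {'name': 'projectA',   'parent_id': 'p1'},
    'p3': {'name': 'dev',        'parent_id': 'p2'},
    'p4': {'name': 'projectB',   'parent_id': 'p1'},
    'p5': {'name': 'dev',        'parent_id': 'p4'},  # same name, different parent
}

def full_path(project_id):
    parts = []
    current = project_id
    while current is not None:
        node = projects[current]
        parts.append(node['name'])
        current = node['parent_id']
    return '/' + '/'.join(reversed(parts))

print(full_path('p3'))   # /mydivision/projectA/dev
print(full_path('p5'))   # /mydivision/projectB/dev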
>>
>>
>> While whichever approach we chose would only be included in a new
microversion (3.7) of the Identity API, although some relevant APIs can
remain unaffected for a client talking 3.6 to a Newton server, not all can
>> be. As pointed out by jamielennox, this is a data modelling problem - if a
Newton server has created multiple projects called “dev” in the hierarchy,
a 3.6 client trying to scope a token simply to “dev” cannot be answered
correctly (and it is proposed we would have to return an 

[openstack-dev] [Kuryr] roles of kuryr server in server/agent mode

2016-06-03 Thread Vikas Choudhary
Hi Fawad,

While I was going through nested-containers-spec

,
found it difficult to understand the roles and responsibilities of
kuryr-server, which is supposed to be run on controller nodes.

To me it seems like all queries such vlan-ID allocation, subport creation,
ips etc , kuryr(running inside vm) should be able to make to neutron.

Will appreciate few inputs from your side.



Thanks & Regards
Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Daniel P. Berrange
On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> Hi!
> 
> I've been playing with Liberty code a bit and had some questions that I'm
> hoping Nova folks may be able to provide guidance on...
> 
> If I set up a flavor with hw:mem_page_size=2048, and I'm creating (Cirros)
> VMs with size 1024, will the scheduling use the minimum of the number of

1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?

> huge pages available and the size requested for the VM, or will it base
> scheduling only on the number of huge pages?
> 
> It seems to be doing the latter, where I had 1945 huge pages free, and
> tried to create another VM (1024) and Nova rejected the request with "no
> hosts available".

From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.

Anyway, when you request huge pages to be used for a flavour, the
entire guest RAM must be able to be allocated from huge pages.
ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
of huge pages available. It is not possible for a VM to use
1.5 GB of huge pages and 500 MB of normal sized pages.

> Is this still the same for Mitaka?

Yep, this use of huge pages has not changed.

> Where could I look in the code to see how the scheduling is determined?

Most logic related to huge pages is in nova/virt/hardware.py
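
As an illustration only (a deliberately simplified stand-in, not the actual
nova/virt/hardware.py logic), the per-NUMA-cell fitting check amounts to
something like:

    # Simplified stand-in: the guest's entire RAM must be covered by free
    # pages of the requested size on a single host NUMA cell.
    def cell_can_fit_guest(guest_ram_kb, page_size_kb, free_pages_on_cell):
        required_pages, remainder = divmod(guest_ram_kb, page_size_kb)
        if remainder:
            return False  # guest RAM must be a whole multiple of the page size
        return required_pages <= free_pages_on_cell

    # A 2 GB guest with 2048 KB pages needs 1024 free pages on one cell,
    # so 1945 free pages on a cell would normally be enough.
    assert cell_can_fit_guest(2 * 1024 * 1024, 2048, 1945)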

> If I use mem_page_size=large (what I originally had), should it evenly
> assign huge pages from the available NUMA nodes (there are two in my case)?
> 
> It looks like it was assigning all VMs to the same NUMA node (0) in this
> case. Is the right way to change to 2048, like I did above?

Nova will always avoid spreading your VM across 2 host NUMA nodes,
since that gives bad performance characteristics. IOW, it will always
allocate huge pages from the NUMA node that the guest will run on. If
you explicitly want your VM to spread across 2 host NUMA nodes, then
you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
will then place each guest NUMA node on a separate host NUMA node
and allocate huge pages from each host node to match. This is done using
the hw:numa_nodes=2 parameter on the flavour.
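
As an aside, a hedged sketch of how such a flavour could be defined with
python-novaclient; the credentials, flavour name and sizes below are
placeholders, not values from this thread:

    # Sketch: a 2 GB flavour backed entirely by 2 MB huge pages and split
    # across two guest NUMA nodes.  All credential values are placeholders.
    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                    username='admin', password='secret',
                                    project_name='admin',
                                    user_domain_name='Default',
                                    project_domain_name='Default')
    nova = client.Client('2', session=session.Session(auth=auth))

    flavor = nova.flavors.create(name='m1.small.hugepages',
                                 ram=2048, vcpus=2, disk=20)
    flavor.set_keys({
        'hw:mem_page_size': '2048',  # guest RAM must come entirely from 2 MB pages
        'hw:numa_nodes': '2',        # request two *guest* NUMA nodes
    })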

> Again, has this changed at all in Mitaka?

Nope. Well aside from random bug fixes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Paul Michali
Hi!

I've been playing with Liberty code a bit and had some questions that I'm
hoping Nova folks may be able to provide guidance on...

If I set up a flavor with hw:mem_page_size=2048, and I'm creating (Cirros)
VMs with size 1024, will the scheduling use the minimum of the number of
huge pages available and the size requested for the VM, or will it base
scheduling only on the number of huge pages?

It seems to be doing the latter, where I had 1945 huge pages free, and
tried to create another VM (1024) and Nova rejected the request with "no
hosts available".

Is this still the same for Mitaka?

Where could I look in the code to see how the scheduling is determined?

If I use mem_page_size=large (what I originally had), should it evenly
assign huge pages from the available NUMA nodes (there are two in my case)?

It looks like it was assigning all VMs to the same NUMA node (0) in this
case. Is the right way to change to 2048, like I did above?

Again, has this changed at all in Mitaka?

Lastly, I had a case where there was not enough huge pages, so the create
failed and the VM was in ERROR state. It had created and bound a neutron
port.  I then deleted the VM. The VM disappeared from the list of VMs, but
the Neutron port was still there. I don't see anything in the neutron log
to request deleting the port.  Shouldn't the port have been unbound/deleted?

Any thoughts on how to figure out why not?
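
One way to inspect and clean this up by hand is sketched below with
python-neutronclient; the session object and the instance UUID are
placeholders, and this says nothing about why nova skipped the cleanup:

    # Sketch: list and delete ports still bound to an instance that no longer
    # exists.  `sess` is assumed to be an authenticated keystoneauth1 session,
    # and the device_id is a placeholder for the deleted VM's UUID.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=sess)
    deleted_vm_id = '11111111-2222-3333-4444-555555555555'  # placeholder UUID

    for port in neutron.list_ports(device_id=deleted_vm_id)['ports']:
        print('Removing orphaned port %s' % port['id'])
        neutron.delete_port(port['id'])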


Thanks in advance!

PCM
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-03 Thread Alan Pevec
> openstack/packstack  BigTent

Just to clarify, Packstack has not formally applied to the Big Tent yet; it
has only been automatically migrated from the stackforge to the openstack
namespace.
But yes, please keep its kilo branch for now until we properly wrap it up.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Addition to the core team

2016-06-03 Thread Vanni, Fabrizio
+1

-Original Message-
From: Lopes do Sacramento, Cynthia 
Sent: 02 June 2016 17:21
To: Zaher, Saad ; Mathieu, Pierre-Arthur 
; openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: RE: [openstack-dev][freezer] Addition to the core team

+1

-Original Message-
From: Zaher, Saad 
Sent: 02 June 2016 17:13
To: Mathieu, Pierre-Arthur ; 
openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: RE: [openstack-dev][freezer] Addition to the core team

+1

-Original Message-
From: Mathieu, Pierre-Arthur 
Sent: Thursday, June 2, 2016 4:42 PM
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: Re: [openstack-dev][freezer] Addition to the core team

Small correction for the final line of the last email.
I am proposing Deklan and not Saad as core.

- Pierre


From: Mathieu, Pierre-Arthur
Sent: Thursday, June 2, 2016 4:29:29 PM
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos
Subject: [openstack-dev][freezer] Addition to the core team

Hello,

I would like to propose that we make Deklan Dieterly (ddieterly) core on 
freezer.
He has been a highly valuable developer for the past few months, mainly working 
on integration testing for Freezer components.
He has also been helping a lot with features and UX testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2] http://stackalytics.com/?user_id=deklan=all=loc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] trove-image-builder project

2016-06-03 Thread Amrith Kumar
We've had some lengthy discussions over the past couple of release cycles, most 
recently in Austin, and on the ML on the subject of the trove-image-builder 
project[1]. Subsequently the trove spec [2] has been merged and yesterday the 
infra and governance changes [3] and [4] have also merged.

The empty trove-image-builder project is now up at 
http://git.openstack.org/openstack/trove-image-builder.

The project [2] will populate it with elements and scripts (from the 
trove-integration project) to make it easier for people to build guest images 
for use with Trove. 

As stated in that spec, 

"This change proposes to start with existing DIB elements
derived from the trove-integration project. These currently
comprise Ubuntu and Fedora (F22 or higher). Work is underway
to convert the existing Fedora elements to CentOS 7."
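
As a rough idea of what building such an image might look like once those
elements land (a hypothetical sketch; 'ubuntu-guest' is a placeholder element
name, while 'ubuntu' and 'vm' are standard diskimage-builder elements):

    # Hypothetical sketch of invoking diskimage-builder for a Trove guest image.
    import subprocess

    elements = ['ubuntu', 'vm', 'ubuntu-guest']  # last element name is a placeholder
    subprocess.check_call(['disk-image-create', '-a', 'amd64',
                           '-o', 'trove-ubuntu-guest'] + elements)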

Thanks,

-amrith

[1] http://openstack.markmail.org/thread/6zgvhkswxyfle77w
[2] https://review.openstack.org/#/c/315141/
[3] https://review.openstack.org/#/c/312806/
[4] https://review.openstack.org/#/c/312805/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Google Hangouts discussion for dueling specifications for Dockerfile customization

2016-06-03 Thread Ihor Dvoretskyi
Steve,

If you have any concerns with recording the Hangouts meetings, we may try
to run Zoom for that.

On Fri, Jun 3, 2016 at 3:50 AM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
> IRC and mailing list were going far too slow for us to make progress on
> the competing specifications for handling Dockerfile customization.
> Instead we held a hangout, which I don't like because it isn't recorded,
> but it is high bandwidth and permitted us to work through the problem in 1
> hour instead of 1 month.
>
> The essence of the discussion:
>
>1. I will use inc0's patch as a starting point and will do the
>following:
>       1. Prototype base with block operations using the specification
>       items in the elemental DSL
>       2. Prototype mariadb with block operations using the
>       specification items in the elemental DSL
>       3. I will create a document, assuming these two prototypes work, that
>       describes how to use the jinja2 block operations to replace or merge
>       sections of Dockerfile.j2 files (a minimal sketch follows this message)
>   4. We will stop specification development as it has served its
>   purpose (of defining the requirements) assuming the prototypes meet
>   people's taste test
>    2. We believe the Jinja2 block operation will meet the requirements
>set forth in the original elemental DSL specification
>3. We as a community will need to modify our 115 dockerfiles, of which
>I'd like people to take 1 or 2 container sets each (40 in total), in a
>distributed fashion to implement the documentation described in section 1.3
>4. We will produce an optional DSL parser (based upon the prototyping
>work) that outputs the proper  Dockerfile.j2 files or alternatively
>operators can create their own block syntax files
>5. All customization will be done in one master block replacement file
>6. Original dockerfile.j2 files will stay intact with the addition of
>a bunch of block operations
>7. Some RUN layer compression will be lost (the && in our Dockerfiles)
>8. There are 8 DSL operations but we will need twice as many to handle
>both override and merging in a worst case scenario.  That means 16 blocks
>will need to be added to each Dockerfile.
>9. Operators that have already customized their Dockerfile.j2 files
>can carry those changes or migrate to this new customization technique when
>this feature hits Newton, up to them
>10. If the prototypes don't work, back to the drawing board – that
>said I am keen to have any solution that meets the requirements so I will
>do a thorough job on the prototypes of inc0's work
>
> If you have questions, or I missed key points, please feel free to ask or
> speak up.
>
> Regards
> -steve
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
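
To make the block override/merge idea above concrete, here is a minimal,
self-contained sketch; the template text, block name and package names are
hypothetical and not taken from kolla's actual Dockerfile.j2 files:

    # Hypothetical sketch of Jinja2 block override.  Using {{ super() }}
    # inside the overriding block would merge with the original content
    # instead of replacing it.
    import jinja2

    # Base template (stand-in for a kolla Dockerfile.j2) exposing a named block.
    base_dockerfile = (
        "FROM {{ base_image }}\n"
        "{% block mariadb_packages %}"
        "RUN yum -y install mariadb-server"
        "{% endblock %}\n"
        'CMD ["mysqld_safe"]\n'
    )

    # An operator's customisation file extends the base and replaces one block.
    operator_override = (
        '{% extends "Dockerfile.j2" %}\n'
        "{% block mariadb_packages %}"
        "RUN yum -y install mariadb-server my-extra-plugin"
        "{% endblock %}\n"
    )

    env = jinja2.Environment(loader=jinja2.DictLoader({
        'Dockerfile.j2': base_dockerfile,
        'override.j2': operator_override,
    }))
    print(env.get_template('override.j2').render(base_image='centos:7'))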


-- 
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [Infra][tricircle] patch not able to be merged

2016-06-03 Thread Jeremy Stanley
On 2016-06-03 03:45:31 + (+), joehuang wrote:
> There is one quite strange issue in the Tricircle stable/mitaka branch
> (https://github.com/openstack/tricircle/tree/stable/mitaka). Even
> though the patch ( https://review.openstack.org/#/c/324209/ ) was given
> Code-Review +2 and Workflow +1, the gating job did not start, and
> the patch was not merged. 
> 
> This also happens even when we cherry-pick a patch from the master
> branch to the stable/mitaka branch, for example,
> https://review.openstack.org/#/c/307627/.
> 
> Is there configuration missing for the stable branch after
> tagging, or some issue in infra?

For some reason (based on my reading of the debugging logs), when Zuul
queried Gerrit for open changes needed by that one it decided
https://review.openstack.org/306278 was required to merge first but
thought it wasn't merged. Then it attempted to enqueue it but
couldn't because it was actually already merged a couple months ago.
I manually instructed Zuul to try and requeue 324209 and that seems
to have worked.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Addition to the core team

2016-06-03 Thread Mathieu, Pierre-Arthur
Welcome to the core team Deklan !
Don't forget, with great power comes great responsibility ;-)

- Pierre
  
From: Saad Zaher 
Sent: Friday, June 3, 2016 11:01:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Lopes do Sacramento, Cynthia; Zaher, Saad; Mathieu, Pierre-Arthur; 
freezer-eskimos
Subject: Re: [openstack-dev] [freezer] Addition to the core team
  

+1


On Fri, Jun 3, 2016 at 10:11 AM, Vanni, Fabrizio   
wrote:
 +1

-Original Message-
From: Lopes do Sacramento, Cynthia
Sent: 02 June 2016 17:21
To: Zaher, Saad ; Mathieu, Pierre-Arthur 
; openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: RE: [openstack-dev][freezer] Addition to the core team

+1

-Original Message-
From: Zaher, Saad
Sent: 02 June 2016 17:13
To: Mathieu, Pierre-Arthur ; 
openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: RE: [openstack-dev][freezer] Addition to the core team

+1

-Original Message-
From: Mathieu, Pierre-Arthur
Sent: Thursday, June 2, 2016 4:42 PM
To: openstack-dev@lists.openstack.org


Cc: freezer-eskimos 
Subject: Re: [openstack-dev][freezer] Addition to the core team

Small correction for the final line of the last email.
I am proposing Deklan and not Saad as core.

- Pierre


From: Mathieu, Pierre-Arthur
Sent: Thursday, June 2, 2016 4:29:29 PM
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos
Subject: [openstack-dev][freezer] Addition to the core team

Hello,

I would like to propose that we make Deklan Dieterly (ddieterly) core on 
freezer.
He has been a highly valuable developer for the past few months, mainly working 
on integration testing for Freezer components.
He has also been helping a lot with features and UX testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1]  https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2]  http://stackalytics.com/?user_id=deklan=all=loc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best Regards,
Saad!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 UTC

2016-06-03 Thread D'Angelo, Scott
For those interested in various aspects of Cinder testing, we're planning on 
discussing and coordinating efforts. Please join us:
 #openstack-cinder
 1500 UTC Wednesdays
 (just before the Weekly Cinder meeting)

Testing subjects:
Multi-node Cinder testing
Active-Active HA testing
Improved Tempest coverage
Improved Functional Tests
Cleanup Unit tests
Partial multi-node Grenade testing
More details in the etherpads below

from the Newton Summit:
https://etherpad.openstack.org/p/cinder-newton-testingprocess
multi-node:
https://etherpad.openstack.org/p/cinder-multinode-testing

Cheers,
Scott DAngelo (scottda)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 UTC

2016-06-03 Thread D'Angelo, Scott
In the interest of yet-another-etherpad I created:
https://etherpad.openstack.org/p/Cinder-testing

I'll put up an agenda, folks can sign up to be pinged, and we'll keep the notes 
and action items there, etc.

From: D'Angelo, Scott
Sent: Friday, June 03, 2016 8:14 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Cinder] Discuss Cinder testing Wednesdays at 1500 
UTC

For those interested in various aspects of Cinder testing, we're planning on 
discussing and coordinating efforts. Please join us:
 #openstack-cinder
 1500 UTC Wednesdays
 (just before the Weekly Cinder meeting)

Testing subjects:
Multi-node Cinder testing
Active-Active HA testing
Improved Tempest coverage
Improved Functional Tests
Cleanup Unit tests
Partial multi-node Grenade testing
More details in the etherpads below

from the Newton Summit:
https://etherpad.openstack.org/p/cinder-newton-testingprocess
multi-node:
https://etherpad.openstack.org/p/cinder-multinode-testing

Cheers,
Scott DAngelo (scottda)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Paul Michali
See PCM inline...

On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
wrote:

> On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> > Hi!
> >
> > I've been playing with Liberty code a bit and had some questions that I'm
> > hoping Nova folks may be able to provide guidance on...
> >
> > If I set up a flavor with hw:mem_page_size=2048, and I'm creating
> (Cirros)
> > VMs with size 1024, will the scheduling use the minimum of the number of
>
> 1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?
>

PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the
page size is 2048K, so 1024 pages? Hope I have the units right.



> > huge pages available and the size requested for the VM, or will it base
> > scheduling only on the number of huge pages?
> >
> > It seems to be doing the latter, where I had 1945 huge pages free, and
> > tried to create another VM (1024) and Nova rejected the request with "no
> > hosts available".
>
> From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.
>
> Anyway, when you request huge pages to be used for a flavour, the
> entire guest RAM must be able to be allocated from huge pages.
> ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
> of huge pages available. It is not possible for a VM to use
> 1.5 GB of huge pages and 500 MB of normal sized pages.
>

PCM: Right, so, with 2GB of RAM, I need 1024 huge pages of size 2048K. In
this case, there are 1945 huge pages available, so I was wondering why it
failed. Maybe I'm confusing sizes/pages?



>
> > Is this still the same for Mitaka?
>
> Yep, this use of huge pages has not changed.
>
> > Where could I look in the code to see how the scheduling is determined?
>
> Most logic related to huge pages is in nova/virt/hardware.py
>
> > If I use mem_page_size=large (what I originally had), should it evenly
> > assign huge pages from the available NUMA nodes (there are two in my
> case)?
> >
> > It looks like it was assigning all VMs to the same NUMA node (0) in this
> > case. Is the right way to change to 2048, like I did above?
>
> Nova will always avoid spreading your VM across 2 host NUMA nodes,
> since that gives bad performance characteristics. IOW, it will always
> allocate huge pages from the NUMA node that the guest will run on. If
> you explicitly want your VM to spread across 2 host NUMA nodes, then
> you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
> will then place each guest NUMA node, on a separate host NUMA node
> and allocate huge pages from node to match. This is done using
> the hw:numa_nodes=2 parameter on the flavour
>

PCM: Gotcha, but that was not the issue I'm seeing. With this small flavor
(2GB = 1024 pages), I had 13107 huge pages initially. As I created VMs,
they were *all* placed on the same NUMA node (0). As a result, when I got
to more than half the available pages, Nova failed to allow further VMs,
even though I had 6963 available on one compute node, and 5939 on another.

It seems that all the assignments were to node zero. Someone suggested to
me to set mem_page_size to 2048, and at that point it started assigning to
both NUMA nodes evenly.
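
A quick way to watch this from the host (a small diagnostic sketch; it only
assumes the standard Linux sysfs layout for 2048 kB huge pages):

    # Print free 2 MB huge pages per host NUMA node, to see whether guests
    # are being spread across nodes or piling up on node 0.
    import glob
    import os

    for node_dir in sorted(glob.glob('/sys/devices/system/node/node[0-9]*')):
        counter = os.path.join(node_dir,
                               'hugepages/hugepages-2048kB/free_hugepages')
        if os.path.exists(counter):
            with open(counter) as f:
                print('%s: %s free 2048kB pages'
                      % (os.path.basename(node_dir), f.read().strip()))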

Thanks for the help!!!


Regards,

PCM


>
> > Again, has this changed at all in Mitaka?
>
> Nope. Well aside from random bug fixes.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-03 Thread Emilien Macchi
On Thu, Jun 2, 2016 at 7:32 PM, Tony Breeds  wrote:
> On Thu, Jun 02, 2016 at 07:10:23PM -0400, Emilien Macchi wrote:
>
>> I think that all openstack/puppet-* projects that have stable/kilo can
>> be kilo-EOLed.
>> Let me know if it's ok and I'll abandon all open reviews.
>
> Totally fine with me.
>
> I've added them.  Feel free to abanond the reviews.  Any you don't get to by
> 2016-06-09 00:00 UTC  I'll take care of.

Done.
https://review.openstack.org/#/q/branch:stable/kilo+project:%22%255Eopenstack/puppet-.*%2524%22+status:open,n,z

Thanks,

> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev