[openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-25 Thread Xu Han Peng

Currently, extra_dhcp_opts has the following API format on a port:

{
    "port": {
        "extra_dhcp_opts": [
            {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
            {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
            {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
        ],
        ...
    }
}

During the development of the DHCPv6 function for IPv6 subnets, we found 
this format doesn't work anymore because a port can have both IPv4 and 
IPv6 addresses. So we need to find a new way to specify extra_dhcp_opts 
for DHCPv4 and DHCPv6, respectively.
(https://bugs.launchpad.net/neutron/+bug/1356383)


Here are some thoughts about the new format:

Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or 
v6) so we can distinguish opts for v4 or v6 by parsing the opt_name. For 
backward compatibility, no prefix means an IPv4 DHCP opt.


"extra_dhcp_opts": [
{"opt_value": "testfile.1","opt_name": "bootfile-name"},
{"opt_value": "123.123.123.123", "opt_name": 
"*v4:*tftp-server"},
{"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": 
"*v6:*dns-server"}

]

Option2: Break extra_dhcp_opts into IPv4 opts and IPv6 opts. For 
backward compatibility, both the old format and the new format are acceptable, 
and the old format means IPv4 DHCP opts.


"extra_dhcp_opts": {
 "ipv4": [
{"opt_value": "testfile.1","opt_name": 
"bootfile-name"},
{"opt_value": "123.123.123.123", "opt_name": 
"tftp-server"},

 ],
 "ipv6": [
{"opt_value": "[2001:0200:feed:7ac0::1]", 
"opt_name": "dns-server"}

 ]
}

The pro of Option1 is that there is no need to change the API structure; we only 
need to add validation and parsing of opt_name. The con of Option1 is 
that users need to input a prefix for every opt_name, which can be error 
prone. The pro of Option2 is that it's clearer than Option1. The con is 
that we need to check two formats for backward compatibility.
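
For illustration, here is a rough sketch (not actual Neutron code; the function
names are made up) of how the prefix parsing for Option1 and the
backward-compatibility check for Option2 could look:

def split_prefixed_opt_name(opt_name):
    """Option1: parse an optional 'v4:'/'v6:' prefix; no prefix means IPv4."""
    for prefix, version in (('v4:', 4), ('v6:', 6)):
        if opt_name.startswith(prefix):
            return version, opt_name[len(prefix):]
    return 4, opt_name

def normalize_extra_dhcp_opts(extra_dhcp_opts):
    """Option2: accept both the old list format and the new ipv4/ipv6 dict."""
    if isinstance(extra_dhcp_opts, dict):
        # Already the new format; just make sure both keys exist.
        return {'ipv4': extra_dhcp_opts.get('ipv4', []),
                'ipv6': extra_dhcp_opts.get('ipv6', [])}
    # Old list format -- treated as IPv4 DHCP opts for backward compatibility.
    return {'ipv4': list(extra_dhcp_opts or []), 'ipv6': []}

print(split_prefixed_opt_name('v6:dns-server'))
# -> (6, 'dns-server')
print(normalize_extra_dhcp_opts(
    [{'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}]))
# -> {'ipv4': [{'opt_name': 'tftp-server', 'opt_value': '123.123.123.123'}], 'ipv6': []}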


We discussed this in the IPv6 sub-team meeting and we think Option2 is 
preferred. Can I also get the community's feedback on which one is preferred, 
or any other comments?


Thanks,
Xu Han
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-25 Thread melanie witt
On Sep 21, 2014, at 23:31, Deepak Shetty  wrote:

> Even better, whenever ./run_tests fail... maybe put a msg stating the 
> following C libs needs to be installed, have the user check the 
> same..something like that would help too.

I don't think it should be a human-maintained list, otherwise it's prone to 
fall out of date or be incomplete in some way.

FWIW, simply typing the error message in google and taking the first result 
e.g. "fatal error: my_config.h: No such file or directory" solves these 
obstacles in seconds, at least for me.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Angus Salkeld
On Fri, Sep 26, 2014 at 2:01 PM, Angus Lees  wrote:

> On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
> > Doesn't nova with a docker driver and heat autoscaling handle case 2 and
> 3
> > for control jobs? Has anyone tried yet?
>
> For reference, the cases were:
>
> > - Something to deploy the code (docker / distro packages / pip install /
> > etc)
> > - Something to choose where to deploy
> > - Something to respond to machine outages / autoscaling and re-deploy as
> > necessary
>
>
> I tried for a while, yes.  The problems I ran into (and I'd be interested
> to
> know if there are solutions to these):
>
> - I'm trying to deploy into VMs on rackspace public cloud (just because
> that's
> what I have).  This means I can't use the nova docker driver, without
> constructing an entire self-contained openstack undercloud first.
>
> - heat+cloud-init (afaics) can't deal with circular dependencies (like
> nova<-
> >neutron) since the machines need to exist first before you can refer to
> their
> IPs.
> From what I can see, TripleO gets around this by always scheduling them on
> the
> same machine and just using the known local IP.  Other installs declare
> fixed
> IPs up front - on rackspace I can't do that (easily).
> I can't use loadbalancers via heat for this because the loadbalancers need
> to
> know the backend node addresses, which means the nodes have to exist first
> and
> you're back to a circular dependency.
>
> For comparison, with kubernetes you declare the loadbalancer-equivalents
> (services) up front with a search expression for the backends.  In a second
> pass you create the backends (pods) which can refer to any of the
> loadbalanced
> endpoints.  The loadbalancers then reconfigure themselves on the fly to
> find the
> new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with
> openstack too, but not with heat and not just "out of the box".
>

Do you have a minimal template that shows what you are trying to do?
(just to demonstrate the circular dependency).


> - My experiences using heat for anything complex have been extremely
> frustrating.  The version on rackspace public cloud is ancient and limited,
> and quite easy to get into a state where the only fix is to destroy the
> entire
> stack and recreate it.  I'm sure these are fixed in newer versions of
> heat, but
> last time I tried I was unable to run it standalone against an arms-length
> keystone because some of the recursive heat callbacks became confused about
> which auth token to use.
>

Gus, we are working on improving standalone (Steven Baker has some patches out
for this).


>
> (I'm sure this can be fixed, if it wasn't already just me using it wrong
> in the
> first place.)
>
> - As far as I know, nothing in a heat/loadbalancer/nova stack will actually
> reschedule jobs away from a failed machine.  There's also no lazy
>

This might go part of the way there; the other part is detecting the
failed machine and somehow marking it as failed.
 https://review.openstack.org/#/c/105907/

> discovery/nameservice mechanism, so updating IP address declarations in
> cloud-
> configs tend to ripple through the heat config and cause all sorts of
> VMs/containers to be reinstalled without any sort of throttling or rolling
> update.
>
>
> So: I think there's some things to learn from the kubernetes approach,
> which
> is why I'm trying to gain more experience with it.  I know I'm learning
> more
> about the various OpenStack components along the way too ;)
>

This is valuable feedback; we need to improve Heat to make these use cases
work better.
But I also don't believe there is one tool for all jobs, so I see little harm
in trying other things out too.

Thanks
Angus


>
> --
>  - Gus
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Christopher Yeoh
On Thu, 25 Sep 2014 08:49:12 -0400
Sean Dague  wrote:

> 
> #1 - tried to get a lock, but someone else has it. Then we know we've
> got lock contention.
> #2 - something is still holding a lock after some "long" amount of
> time.

+1 to both.

> #2 turned out to be a critical bit in understanding one of the worst
> recent gate impacting issues.
> 
> You can write a tool today that analyzes the logs and shows you these
> things. However, I wonder if we could actually do something creative
> in the code itself to do this already. I'm curious if the creative
> use of Timers might let us emit log messages under the conditions
> above (someone with better understanding of python internals needs to
> speak up here). Maybe it's too much overhead, but I think it's worth
> at least asking the question.

Even a simple log message at the end, when a lock is finally released after
being held for a long time, would be handy, as matching up acquire/release by
eye is not easy.

I don't think we get a log message when an acquire is attempted but
fails. That might help get a measure of lock contention?
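
For illustration, a minimal sketch (purely hypothetical, not existing oslo
lockutils code) of the Timer idea Sean mentions above -- warn if a lock is
still held after some threshold:

import logging
import threading
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)

@contextmanager
def warn_if_held_too_long(lock, name, warn_after=10):
    # Acquire `lock`, log a warning if it is still held `warn_after`
    # seconds later, and log how long it was held on release.
    timer = threading.Timer(
        warn_after,
        lambda: LOG.warning('Lock "%s" still held after %ss', name, warn_after))
    with lock:
        start = time.time()
        timer.start()
        try:
            yield
        finally:
            timer.cancel()
            LOG.debug('Lock "%s" held for %.2fs', name, time.time() - start)

# Usage:
# iptables_lock = threading.Lock()
# with warn_if_held_too_long(iptables_lock, 'iptables'):
#     apply_iptables_rules()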

> The same issue exists when it comes to processutils I think, warning
> that a command is still running after 10s might be really handy,
> because it turns out that issue #2 was caused by this, and it took
> quite a bit of decoding to figure that out.

Also, I think that a log message when a periodic task takes longer
than the interval at which the task is meant to run would be a useful
warning sign that something odd is going on.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Angus Lees
On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
> Doesn't nova with a docker driver and heat autoscaling handle case 2 and 3
> for control jobs? Has anyone tried yet?

For reference, the cases were:

> - Something to deploy the code (docker / distro packages / pip install /
> etc)
> - Something to choose where to deploy
> - Something to respond to machine outages / autoscaling and re-deploy as
> necessary


I tried for a while, yes.  The problems I ran into (and I'd be interested to 
know if there are solutions to these):

- I'm trying to deploy into VMs on rackspace public cloud (just because that's 
what I have).  This means I can't use the nova docker driver, without 
constructing an entire self-contained openstack undercloud first.

- heat+cloud-init (afaics) can't deal with circular dependencies (like
nova<->neutron) since the machines need to exist first before you can refer to their 
IPs.
From what I can see, TripleO gets around this by always scheduling them on the 
same machine and just using the known local IP.  Other installs declare fixed 
IPs up front - on rackspace I can't do that (easily).
I can't use loadbalancers via heat for this because the loadbalancers need to 
know the backend node addresses, which means the nodes have to exist first and 
you're back to a circular dependency.

For comparison, with kubernetes you declare the loadbalancer-equivalents 
(services) up front with a search expression for the backends.  In a second 
pass you create the backends (pods) which can refer to any of the loadbalanced 
endpoints.  The loadbalancers then reconfigure themselves on the fly to find the 
new backends.  You _can_ do a similar lazy-loadbalancer-reconfig thing with 
openstack too, but not with heat and not just "out of the box".

- My experiences using heat for anything complex have been extremely 
frustrating.  The version on rackspace public cloud is ancient and limited, 
and quite easy to get into a state where the only fix is to destroy the entire 
stack and recreate it.  I'm sure these are fixed in newer versions of heat, but 
last time I tried I was unable to run it standalone against an arms-length 
keystone because some of the recursive heat callbacks became confused about 
which auth token to use.

(I'm sure this can be fixed, if it wasn't already just me using it wrong in the 
first place.)

- As far as I know, nothing in a heat/loadbalancer/nova stack will actually 
reschedule jobs away from a failed machine.  There's also no lazy 
discovery/nameservice mechanism, so updating IP address declarations in cloud-
configs tend to ripple through the heat config and cause all sorts of 
VMs/containers to be reinstalled without any sort of throttling or rolling 
update.


So: I think there's some things to learn from the kubernetes approach, which 
is why I'm trying to gain more experience with it.  I know I'm learning more 
about the various OpenStack components along the way too ;)

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Bringing back auto-abandon

2014-09-25 Thread James Polley
On Thu, Sep 11, 2014 at 8:32 AM, James E. Blair  wrote:

> James Polley  writes:
>
> > On Thu, Sep 11, 2014 at 6:52 AM, James E. Blair 
> wrote:
> >
> >> Steven Hardy  writes:
> >>
> >> > Yeah, I don't know what the optimal solution is - my attention has
> >> recently
> >> > been drawn to queries generated via gerrit-dash-creator, which I'm
> >> finding
> >> > help a lot.
> >>
> >> This is one of several great solutions to the problem.  Any query in
> >> Gerrit can include an age specifier.  To get the old behavior, just add
> >> "age:-2week" (that translates to "last updated less than 2 weeks ago")
> >> to any query -- whether a dashboard or your own bookmarked query like
> >> this one:
> >>
> >>
> >>
> https://review.openstack.org/#/q/status:open+age:-2week+project:openstack/nova,n,z
> >
> >
> > If someone uploads a patch, and 15 days later it's had no comments at
> all,
> > would it be visible in this query? My understanding is that it wouldn't,
> as
> > it was last updated more than two weeks ago
> >
> > In my mind, a patch that's had no comments in two weeks should be high on
> > the list of thing that need feedback. As far as I know, Gerrit doesn't
> have
> > any way to sort by oldest-first though, so even if a two-week-old patch
> was
> > visible in the query, it would be at the bottom of the list.
>
> Indeed, however, a slightly different query will get you exactly what
> you're looking for.  This will show changes that are at least 2 days
> old, have no code reviews, are not WIP, and have passed Jenkins:
>
>   project:openstack/nova status:open label:Verified>=1,jenkins NOT
> label:Workflow<=-1 NOT label:Code-Review<=2 age:2d
>
> or the direct link:
>
>
> https://review.openstack.org/#/q/project:openstack/nova+status:open+label:Verified%253E%253D1%252Cjenkins+NOT+label:Workflow%253C%253D-1+NOT+label:Code-Review%253C%253D2+age:2d,n,z


Weeks later I finally went to add this to our dashboard, only to find that
we already have something similar, if I'm reading correctly.

http://git.openstack.org/cgit/stackforge/gerrit-dash-creator/tree/dashboards/tripleo.dash#n15

[section "5 Days Without Feedback"]
query = label:Verified>=1%2cjenkins NOT owner:self NOT
project:openstack/tripleo-specs NOT label:Code-Review<=2 age:5d


(plus the other qualifiers added in the header: status:open NOT
label:Workflow<=-1 NOT label:Code-Review<=-2)

This particular section is okay right now - only one change visible. The
"Needs Approval" section (query = label:Verified>=1%2cjenkins NOT
owner:self label:Code-Review>=2 NOT label:Code-Review-1) is more of a
problem; 43 reviews, with the oldest hidden at the bottom.

I can see a few ways I could improve this; one would be to split "Needs
approval" into multiple sections - "Needs Approval" for 0-5 days, "Really
needs approval" for 5-10, and so on. Another would be to add something to
enable oldest-first sorting in Gerrit. I'm thinking that it doesn't even
need to be server-side; a client-side script (just like the one that adds
the "Toggle CI" button) would probably suffice to enable the sorting.

If anyone has other ideas before I start tinkering with jquery to make the
tables sortable, I'd love to hear them - but from my limited experience with
jquery I don't think it should be too much of an issue.


>
>
> Incidentally, that is the query in the "Wayward Changes" section of the
> "Review Inbox" dashboard (thanks Sean!); for nova, you can see it here:
>
>
> https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
>
> The key here is that there are a lot of changes in a lot of different
> states, and one query isn't going to do everything that everyone wants
> it to do.  Gerrit has a _very_ powerful query language that can actually
> help us make sense of all the changes we have in our system without
> externalizing the cost of that onto contributors in the form of
> forced-abandoning of changes.  Dashboards can help us share the
> knowledge of how to get the most out of it.
>
>   https://review.openstack.org/Documentation/user-dashboards.html
>   https://review.openstack.org/Documentation/user-search.html
>
> -Jim
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Token Constraints

2014-09-25 Thread Robert Collins
On 26 September 2014 14:18, Adam Young  wrote:
> There are a few Keystone features that are coming together for Kilo.
...
> For endpoint binding, an endpoint will have to know its own id.   So the
> endpoint_id will be recorded in the config file.  This means that the
> endpoint should be created in keystone before bringing up the server.  Since
> we already require workflow like this to create the service users, this
> should not be too big a burden.  Then that becomes a check here:

That will break TripleO. We currently deploy everything and *then*
configure keystone. That is, we don't follow that workflow for service
users today.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSN 0024] Sensitive data is exposed in log statements by python-keystoneclient

2014-09-25 Thread Nathan Kinder

Sensitive data is exposed in log statements by python-keystoneclient
---------------------------------------------------------------------

### Summary ###
Python-keystoneclient is a client tool for the OpenStack Identity API,
which is implemented by the Keystone project. Various OpenStack services
including the OpenStack Dashboard depend on python-keystoneclient to
consume the OpenStack Identity API service. A particular log level
setting in python-keystoneclient can lead to exposure of user sensitive
data (e.g., passwords or tokens) in log statements.

### Affected Services / Software ###
python-keystoneclient <= 0.10.0

### Discussion ###
Python-keystoneclient provides an interface for making Identity API
requests to the OpenStack Identity Service, Keystone.
Python-keystoneclient handles user sensitive data such as user passwords
and tokens when sending requests or receiving responses from a Keystone
server. Like all OpenStack projects, python-keystoneclient uses a python
logger to log request/response activities. When python-keystoneclient
runs with the DEBUG log level enabled, sensitive data such as user
passwords and tokens associated with requests/responses will be exposed
in log statements. For example:

-  begin example 
$ keystone --debug user-list
DEBUG:keystoneclient.session:REQ: curl -i -X POST
http://10.0.0.15:5000/v2.0/tokens -H "Content-Type:application/json"
-H "User-Agent: python-keystoneclient"
DEBUG:keystoneclient.session:REQ BODY: {"auth": {"tenantName": "admin",
"passwordCredentials": {"username": "admin", "password": "stack"
}}}
-  end example 

This sensitive data can potentially be exploited by an attacker with
access to the log statements.

Python-keystoneclient is used by Horizon and other Identity consuming
services to authenticate a user against the Identity API service,
Keystone. A user providing password or token for authentication to these
services could result in the capture of this sensitive data in the
respective services log statements.

### Recommended Actions ###
Version 0.10.1 of python-keystoneclient has addressed this issue by not
exposing user password and token information in log statements. Any
service using version 0.10.1 or later of python-keystoneclient is not
affected by this issue. Services using older versions should
upgrade to a fixed version of python-keystoneclient.

For a fresh installation of a service which depends on
python-keystoneclient, make sure it uses at least version 0.10.1 of
python-keystoneclient. One way to do this is to set a specific version
in the requirements.txt file. For example, in Horizon, update the
horizon/requirements.txt file:

-  begin example 
python-keystoneclient>=0.10.1
-  end example 

For existing installations, upgrade python-keystoneclient to the
latest version. For example, the Python package manager (pip) can be used
to upgrade existing installations.

-  begin example 
$ pip install python-keystoneclient --upgrade
-  end example 

An alternate approach is to never run a production system with the log
level in DEBUG mode.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0024
Original Launchpad Bug:
https://bugs.launchpad.net/python-keystoneclient/+bug/1004114
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1004114
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Token Constraints

2014-09-25 Thread Adam Young
There are a few Keystone features that are coming together for Kilo.  
Endpoint binding of tokens is one, and I think it can be handled completely 
in keystonemiddleware.auth_token.  Another, though, is based on a few 
requests we've had to be able to have policy performed against a 
specific object or API based on what is in the token.


The token would have a new section 'constraints', parallel to 'scope'.  
It will contain one or more of the following.


  `endpoints`: a list with each of the endpoint ids explicitly enumerated.
  `operations`: a list of the APIs as defined in the policy rules file.
  `entities`: a list of the object identifiers.

If any section is not explicitly set, there are no constraints of that kind.

For example, if all three were specified, the token would contain 
something like:



constraints: {
    endpoints: ['novaepid', 'glanceepid'],
    operations: ["compute:create", "compute:start",
                 "network:associate", "network:get_floating_ip",
                 "image:get_image", "network:update_port"],
    entities: ['imageid1', 'networkport2']
}

Since the nova server would not have created the instance yet, there 
would be no restriction on the create call.  Only the specified image 
with id 'imageid1' would be accessible from glance, and only the 
"get_image" API would be allowed on glance.  Only access to 
'networkport2' would be granted from Neutron.



To enforce the 'operations' constraint, we can modify the policy 
enforcer  here:


http://git.openstack.org/cgit/openstack/keystone/tree/keystone/openstack/common/policy.py#n290

A check like:

operations = creds.token.get('constraints', {}).get('operations')
if operations and rule not in operations:
    raise PolicyNotAuthorized(rule)


I'm not, however, certain how to standardize the "entities" portion of 
it.  Any suggestions?



For endpoint binding, an endpoint will have to know its own id. So the 
endpoint_id will be recorded in the config file.  This means that the 
endpoint should be created in keystone before bringing up the server.  
Since we already require workflow like this to create the service users, 
this should not be too big a burden.  Then that becomes a check here:


http://git.openstack.org/cgit/openstack/keystonemiddleware/tree/keystonemiddleware/auth_token.py#n863

which looks like :

if (data['access']['token'].get('constraints', {}).get('endpoints') and
        CONF.endpoint_id not in
        data['access']['token']['constraints']['endpoints']):
    raise InvalidToken('Endpoint constraint not met')
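
Pulling the two checks together, here is a minimal self-contained sketch of
the intended enforcement (the function and exception names here are just
placeholders for illustration, not the actual keystone or keystonemiddleware
code):

class PolicyNotAuthorized(Exception):
    pass

class InvalidToken(Exception):
    pass

def check_operations_constraint(token, rule):
    # Policy-enforcer side: if the token carries an 'operations' constraint,
    # the rule being enforced must be listed in it.
    operations = token.get('constraints', {}).get('operations')
    if operations and rule not in operations:
        raise PolicyNotAuthorized(rule)

def check_endpoints_constraint(token, endpoint_id):
    # auth_token middleware side: if the token carries an 'endpoints'
    # constraint, this endpoint's configured id must be listed in it.
    endpoints = token.get('constraints', {}).get('endpoints')
    if endpoints and endpoint_id not in endpoints:
        raise InvalidToken('Endpoint constraint not met')

token = {'constraints': {'endpoints': ['novaepid', 'glanceepid'],
                         'operations': ['compute:create', 'image:get_image'],
                         'entities': ['imageid1', 'networkport2']}}
check_operations_constraint(token, 'image:get_image')   # allowed
check_endpoints_constraint(token, 'glanceepid')         # allowed
try:
    check_operations_constraint(token, 'volume:create')
except PolicyNotAuthorized:
    print('volume:create rejected by the operations constraint')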

The WIP spec is here: https://review.openstack.org/#/c/123726/

Please provide feedback on the content; I'll deal with formatting once it 
is roughed out.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Tom Fifield
On 26/09/14 03:35, Morgan Fainberg wrote:
> -Original Message-
> From: John Griffith 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> >
> Date: September 25, 2014 at 12:27:52
> To: OpenStack Development Mailing List (not for usage questions) 
> >
> Subject:  Re: [openstack-dev] [Ironic] Get rid of the sample config file
> 
>> On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni <
>> devdatta.kulka...@rackspace.com> wrote:
>>  
>>> Hi,
>>>
>>> We have faced this situation in Solum several times. And in fact this was
>>> one of the topics
>>> that we discussed in our last irc meeting.
>>>
>>> We landed on separating the sample check from pep8 gate into a non-voting
>>> gate.
>>> One reason to keep the sample check is so that when say a feature in your
>>> code fails
>>> due to some upstream changes and for which you don't have coverage in your
>>> functional tests then
>>> a non-voting but failing sample check gate can be used as a starting point
>>> of the failure investigation.
>>>
>>> More details about the discussion can be found here:
>>>
>>> http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt
>>>   
>>>
>>> - Devdatta
>>>
>>> --
>>> *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
>>> *Sent:* Thursday, September 25, 2014 12:42 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file
>>>
>>> Hi!
>>>
>>> On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes <
>>> lucasago...@gmail.com> wrote:
>>>
 Hi,

 Today we have hit the problem of having an outdated sample
 configuration file again[1]. The problem of the sample generation is
 that it picks up configuration from other projects/libs
 (keystoneclient in that case) and this break the Ironic gate without
 us doing anything.

 So, what you guys think about removing the test that compares the
 configuration files and makes it no longer gate[2]?

 We already have a tox command to generate the sample configuration
 file[3], so folks that needs it can generate it locally.

 Does anyone disagree?


>>> +1 to this, but I think we should document how to generate the sample
>>> config
>>> in our documentation (install guide?).
>>>
>>> -Dave
>>> --
>>> David Shrewsbury (Shrews)
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> I tried this in Cinder a while back and was actually rather surprised by
>> the overwhelming push-back I received from the Operator community, and
>> whether I agreed with all of it or not, the last thing I want to do is
>> ignore the Operators that are actually standing up and maintaining what
>> we're building.
>>  
>> Really at the end of the day this isn't really that big of a deal. It's
>> relatively easy to update the config in most of the projects "tox
>> -egenconfig" see my posting back in May [1]. For all the more often this
>> should happen I'm not sure why we can't have enough contributors that are
>> just pro-active enough to "fix it up" when they see it falls out of date.
>>  
>> John
>>  
>> [1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html 
>>  
> 
> +1 to what John just said.
>  
> I know in Keystone we update the sample config (usually) whenever we notice 
> it out of date. Often we ask developers making config changes to run `tox 
> -esample_config` and re-upload their patch. If someone misses we (the cores) 
> will do a patch that just updates the sample config along the way. Ideally we 
> should have a check job that just reports the config is out of date (instead 
> of blocking the review).
> 
> The issue is the premise that there are 2 options:
> 
> 1) Gate on the sample config being current
> 2) Have no sample config in the tree.
> 
> The missing third option is the proactive approach (plus having something 
> convenient like `tox -egenconfig` or `tox -eupdate_sample_config` to make it 
> convenient to update the sample config) is the approach that covers both 
> sides nicely. The Operators/deployers have the sample config in tree, the 
> developers don’t get patches rejected in the gate because the sample config 
> doesn’t match new options in an external library.
> 
> I know a lot of operators and deployers appreciate the sample config being 
> in-tree.

Just confirming this is definitely the case.

Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-25 Thread Vishvananda Ishaya
You are going to have to make this a separate binary and call it
via rootwrap ip netns exec. While it is possible to change network
namespaces in python, you aren’t going to be able to do this consistently
without root access, so it will need to be guarded by rootwrap anyway.

Vish
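
For illustration, a minimal sketch of that approach -- the scapy call from the
message below moved into a standalone helper that can be run with ip netns
exec (the script name and its argument handling are assumptions; in Neutron
the invocation would go through rootwrap rather than being called directly):

#!/usr/bin/env python
# send_na.py (hypothetical helper): send an unsolicited neighbor advertisement.
# Invoked as: ip netns exec <namespace> python send_na.py <src> <target> <mac> <iface>
import sys

from scapy.all import send, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr


def send_unsolicited_na(source, target, mac_address, interface_name):
    target_ll_addr = ICMPv6NDOptDstLLAddr(lladdr=mac_address)
    unsolicited_na = ICMPv6ND_NA(R=1, S=0, O=1, tgt=target)
    packet = IPv6(src=source) / unsolicited_na / target_ll_addr
    send(packet, iface=interface_name, count=10, inter=0.2)


if __name__ == '__main__':
    send_unsolicited_na(*sys.argv[1:5])

The agent would then shell out to something like "ip netns exec qrouter-<id>
python send_na.py ..." through the usual rootwrap machinery.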

On Sep 25, 2014, at 7:00 PM, Xu Han Peng  wrote:

> Sending unsolicited NA by scapy is like this:
> 
> from scapy.all import send, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr
> 
> target_ll_addr = ICMPv6NDOptDstLLAddr(lladdr = mac_address)
> unsolicited_na=ICMPv6ND_NA(R=1, S=0, O=1, tgt=target)
> packet=IPv6(src=source)/unsolicited_na/target_ll_addr
> send(packet, iface=interface_name, count=10, inter=0.2)
> 
> It's not actually a python script but a python method. Any ideas?
> 
> On 09/25/2014 06:20 PM, Kevin Benton wrote:
>> Does running the python script with ip netns exec not work correctly?
>> 
>> On Thu, Sep 25, 2014 at 2:05 AM, Xu Han Peng  wrote:
>>> Hi,
>>> 
>>> As we talked in last IPv6 sub-team meeting, I was able to construct and send
>>> IPv6 unsolicited neighbor advertisement for external gateway interface by
>>> python tool scapy:
>>> 
>>> http://www.secdev.org/projects/scapy/
>>> 
>>> http://www.idsv6.de/Downloads/IPv6PacketCreationWithScapy.pdf
>>> 
>>> 
>>> However, I am having trouble to send this unsolicited neighbor advertisement
>>> in a given namespace. All the current namespace operations leverage ip netns
>>> exec and shell command. But we cannot do this to scapy since it's python
>>> code. Can anyone advise me on this?
>>> 
>>> Thanks,
>>> Xu Han
>>> 
>>> 
>>> On 09/05/2014 05:46 PM, Xu Han Peng wrote:
>>> 
>>> Carl,
>>> 
>>> Seems so. I think internal router interface and external gateway port GARP
>>> are taken care of by keepalived during failover. And if HA is not enabled,
>>> _send_gratuitous_arp is called to send out GARP.
>>> 
>>> I think we will need to take care of IPv6 for both cases since keepalived 1.2.0
>>> supports IPv6. We may need a separate BP. For the case where HA is enabled externally,
>>> we still need unsolicited neighbor advertisement for gateway failover. But
>>> for internal router interface, since Router Advertisement is automatically
>>> send out by RADVD after failover, we don't need to send out neighbor
>>> advertisement anymore.
>>> 
>>> Xu Han
>>> 
>>> 
>>> On 09/05/2014 03:04 AM, Carl Baldwin wrote:
>>> 
>>> Hi Xu Han,
>>> 
>>> Since I sent my message yesterday there has been some more discussion
>>> in the review on that patch set.  See [1] again.  I think your
>>> assessment is likely correct.
>>> 
>>> Carl
>>> 
>>> [1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py
>>> 
>>> On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng  wrote:
>>> 
>>> Carl,
>>> 
>>> Thanks a lot for your reply!
>>> 
>>> If I understand correctly, in VRRP case, keepalived will be responsible for
>>> sending out GARPs? By checking the code you provided, I can see all the
>>> _send_gratuitous_arp_packet call are wrapped by "if not is_ha" condition.
>>> 
>>> Xu Han
>>> 
>>> 
>>> 
>>> On 09/04/2014 06:06 AM, Carl Baldwin wrote:
>>> 
>>> It should be noted that "send_arp_for_ha" is a configuration option
>>> that preceded the more recent in-progress work to add VRRP controlled
>>> HA to Neutron's router.  The option was added, I believe, to cause the
>>> router to send (default) 3 GARPs to the external gateway if the router
>>> was removed from one network node and added to another by some
>>> external script or manual intervention.  It did not send anything on
>>> the internal network ports.
>>> 
>>> VRRP is a different story and the code in review [1] sends GARPs on
>>> internal and external ports.
>>> 
>>> Hope this helps avoid confusion in this discussion.
>>> 
>>> Carl
>>> 
>>> [1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py
>>> 
>>> On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng  wrote:
>>> 
>>> Anthony,
>>> 
>>> Thanks for your reply.
>>> 
>>> If HA method like VRRP are used for IPv6 router, according to the VRRP RFC
>>> with IPv6 included, the servers should be auto-configured with the active
>>> router's LLA as the default route before the failover happens and still
>>> remain that route after the failover. In other word, there should be no need
>>> to use two LLAs for default route of a subnet unless load balance is
>>> required.
>>> 
>>> When the backup router become the master router, the backup router should be
>>> responsible for sending out an unsolicited ND neighbor advertisement with
>>> the associated LLA (the previous master's LLA) immediately to update the
>>> bridge learning state and sending out router advertisement with the same
>>> options with the previous master to maintain the route and bridge learning.
>>> 
>>> This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
>>> actions backup router should take after failover is documented here:
>>> http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for i

Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-09-25 Thread Xu Han Peng

Sending unsolicited NA by scapy is like this:

from scapy.all import send, IPv6, ICMPv6ND_NA, ICMPv6NDOptDstLLAddr

target_ll_addr = ICMPv6NDOptDstLLAddr(lladdr = mac_address)
unsolicited_na=ICMPv6ND_NA(R=1, S=0, O=1, tgt=target)
packet=IPv6(src=source)/unsolicited_na/target_ll_addr
send(packet, iface=interface_name, count=10, inter=0.2)

It's not actually a python script but a python method. Any ideas?

On 09/25/2014 06:20 PM, Kevin Benton wrote:

Does running the python script with ip netns exec not work correctly?

On Thu, Sep 25, 2014 at 2:05 AM, Xu Han Peng  wrote:

Hi,

As we talked in last IPv6 sub-team meeting, I was able to construct and send
IPv6 unsolicited neighbor advertisement for external gateway interface by
python tool scapy:

http://www.secdev.org/projects/scapy/

http://www.idsv6.de/Downloads/IPv6PacketCreationWithScapy.pdf


However, I am having trouble to send this unsolicited neighbor advertisement
in a given namespace. All the current namespace operations leverage ip netns
exec and shell command. But we cannot do this to scapy since it's python
code. Can anyone advise me on this?

Thanks,
Xu Han


On 09/05/2014 05:46 PM, Xu Han Peng wrote:

Carl,

Seems so. I think internal router interface and external gateway port GARP
are taken care of by keepalived during failover. And if HA is not enabled,
_send_gratuitous_arp is called to send out GARP.

I think we will need to take care of IPv6 for both cases since keepalived 1.2.0
supports IPv6. We may need a separate BP. For the case where HA is enabled externally,
we still need unsolicited neighbor advertisement for gateway failover. But
for internal router interface, since Router Advertisement is automatically
send out by RADVD after failover, we don't need to send out neighbor
advertisement anymore.

Xu Han


On 09/05/2014 03:04 AM, Carl Baldwin wrote:

Hi Xu Han,

Since I sent my message yesterday there has been some more discussion
in the review on that patch set.  See [1] again.  I think your
assessment is likely correct.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Thu, Sep 4, 2014 at 3:32 AM, Xu Han Peng  wrote:

Carl,

Thanks a lot for your reply!

If I understand correctly, in VRRP case, keepalived will be responsible for
sending out GARPs? By checking the code you provided, I can see all the
_send_gratuitous_arp_packet call are wrapped by "if not is_ha" condition.

Xu Han



On 09/04/2014 06:06 AM, Carl Baldwin wrote:

It should be noted that "send_arp_for_ha" is a configuration option
that preceded the more recent in-progress work to add VRRP controlled
HA to Neutron's router.  The option was added, I believe, to cause the
router to send (default) 3 GARPs to the external gateway if the router
was removed from one network node and added to another by some
external script or manual intervention.  It did not send anything on
the internal network ports.

VRRP is a different story and the code in review [1] sends GARPs on
internal and external ports.

Hope this helps avoid confusion in this discussion.

Carl

[1] https://review.openstack.org/#/c/70700/37/neutron/agent/l3_ha_agent.py

On Mon, Sep 1, 2014 at 8:52 PM, Xu Han Peng  wrote:

Anthony,

Thanks for your reply.

If HA method like VRRP are used for IPv6 router, according to the VRRP RFC
with IPv6 included, the servers should be auto-configured with the active
router's LLA as the default route before the failover happens and still
remain that route after the failover. In other word, there should be no need
to use two LLAs for default route of a subnet unless load balance is
required.

When the backup router become the master router, the backup router should be
responsible for sending out an unsolicited ND neighbor advertisement with
the associated LLA (the previous master's LLA) immediately to update the
bridge learning state and sending out router advertisement with the same
options with the previous master to maintain the route and bridge learning.

This is shown in http://tools.ietf.org/html/rfc5798#section-4.1 and the
actions backup router should take after failover is documented here:
http://tools.ietf.org/html/rfc5798#section-6.4.2. The need for immediate
messaging sending and periodic message sending is documented here:
http://tools.ietf.org/html/rfc5798#section-2.4

Since the keepalived manager support for L3 HA is merged:
https://review.openstack.org/#/c/68142/43. And keepalived release 1.2.0
supports VRRP IPv6 features ( http://www.keepalived.org/changelog.html, see
Release 1.2.0 | VRRP IPv6 Release). I think we can check if keepalived can
satisfy our requirement here and if that will cause any conflicts with
RADVD.

Thoughts?

Xu Han


On 08/28/2014 10:11 PM, Veiga, Anthony wrote:



Anthony and Robert,

Thanks for your reply. I don't know if the arping is there for NAT, but I am
pretty sure it's for HA setup to broadcast the router's own change since the
arping is controlled by "send_arp_for_ha" config. By checking the

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Vishvananda Ishaya

On Sep 25, 2014, at 4:01 PM, Robert Collins  wrote:

> So I guess I'm saying:
> 
>Lets decouple 'what is openstack' from 'what we test together on
> every commit'.

It seems that this discussion has actually illustrated shortcomings in our
answers to 3 separate questions, and people have been throwing out ideas
that attempt to solve all 3. Perhaps we need to address each one individually.

The three questions are:

1. Which projects are “part of openstack”?
2. Which projects are released as a single unit?
3. Which projects are tested together?

The current answers are:
1. Three levels: incubation, integration, core
2. Things that reach the integration level
3. Things that reach the integration level.

Some proposed answers:
1. Lightweight incubation a la apache
2. Monty’s layer1
3. Direct dependencies and close collaborators

Discussing the proposed answers (in reverse order):
I think we have rough consensus around 3: that we should move
towards functional testing for direct dependencies and let the
projects decide when they want to co-gate. The functional
co-gating should ideally be based on important use-cases.

2 is a bit murkier. In the interest of staying true to our roots
the best we can probably do is to allow projects to opt out of
the coordinated release and for Thierry to specifically select
which projects he is willing to coordinate. Any other project
could co-release with the integrated release but wouldn’t be
centrally managed by Thierry. There is also a decision about
what the TC’s role is in these projects.

1 has some unanswered questions, like: is there another
level, “graduation”, where the TC has some kind of technical
oversight? What are the criteria for it? etc.

Maybe addressing these things separately will allow us to make progress.

Vish





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Qiming Teng
On Thu, Sep 25, 2014 at 11:51:23AM -0400, gordon chung wrote:
> > mysql> select count(*) from metadata_text;
> > +----------+
> > | count(*) |
> > +----------+
> > | 25249913 |
> > +----------+
> > 1 row in set (3.83 sec)
> > There were 25M records in one table.  The deletion time is reaching an
> > unacceptable level (7 minutes for 4M records) and it was not increasing
> > in a linear way.  Maybe DB experts can show me how to optimize this?
> we don't do any customisations in default ceilometer package so i'm sure 
> there's way to optimise... not sure if any devops ppl read this list. 
> > Another question: does the mongodb backend support events now?
> > (I asked this question in IRC, but, just as usual, no response from
> > anyone in that community, no matter a silly question or not is it...)
> regarding events, are you specifically asking about events 
> (http://docs.openstack.org/developer/ceilometer/events.html) in ceilometer or 
> using events term in generic sense? the table above has no relation to events 
> in ceilometer, it's related to samples and corresponding resource.  we did do 
> some remodelling of sql backend this cycle which should shrink the size of 
> the metadata tables.
> there's a euro-bias in ceilometer so you'll be more successful reaching 
> people on irc during euro work hours... that said, you'll probably get best 
> response by posting to list or pinging someone on core team directly.
> cheers,gord 

Thanks for the responses above.
TBH, I am unaware of any performance problems based on my previous
experience using MongoDB as the backend.  I switched over to MySQL
simply because only the SQLAlchemy backend supports Ceilometer events.
Sorry for the confusion -- the metadata table size wasn't a direct
result of using events, though it does seem like an indirect result of
switching to MySQL (not sure about this either).

I'll try Euro work hours in future.  Thanks for the hints!

Cheers,
Qiming

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Rochelle.RochelleGrober
+1

Exactly what I was thinking.  Semaphore races and deadlocks are important to be 
able to trace, but the normal production cloud doesn't want to see those 
messages.  

What might be even better would be to also put a counter on the semaphores so 
that if they ever are >1 or <0 they report an error on normal log levels.  I'm 
assuming it would be an error.  I can't see why it would be just a warn or 
info, but, I don't know the guts of the code here.

--Rocky

-Original Message-
From: Joshua Harlow [mailto:harlo...@outlook.com] 
Sent: Thursday, September 25, 2014 12:23 PM
To: openst...@nemebean.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [oslo] logging around olso lockutils

Or how about we add in a new log level?

A few libraries I have come across support the log level 5 (which is less than 
debug (10) but greater than notset (0))...

One usage of this is in the multiprocessing library in python itself @

https://hg.python.org/releasing/3.4/file/8671f89107c8/Lib/multiprocessing/util.py#l34

Kazoo calls it the 'BLATHER' level @

https://github.com/python-zk/kazoo/blob/master/kazoo/loggingsupport.py

Since these messages can actually be useful for lock_utils developers it could 
be worthwhile to keep them [1]?

Just a thought...

[1] One man's DEBUG is another man's garbage, ha.
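
For illustration, a minimal sketch of what registering such a level looks like
with the stdlib logging module (the BLATHER name and the value 5 just follow
the examples above; this is not existing oslo code):

import logging

BLATHER = 5
logging.addLevelName(BLATHER, 'BLATHER')


def blather(self, msg, *args, **kwargs):
    # Convenience method so callers can write LOG.blather(...) for
    # lower-than-DEBUG chatter such as lock acquire/release tracing.
    self.log(BLATHER, msg, *args, **kwargs)


logging.Logger.blather = blather

logging.basicConfig(level=BLATHER)
LOG = logging.getLogger(__name__)
LOG.blather('Acquired semaphore "%s"', 'iptables')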

On Sep 25, 2014, at 12:06 PM, Ben Nemec  wrote:

> On 09/25/2014 07:49 AM, Sean Dague wrote:
>> Spending a ton of time reading logs, oslo locking ends up basically
>> creating a ton of output at DEBUG that you have to mentally filter to
>> find problems:
>> 
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Created new semaphore "iptables" internal_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Acquired semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Attempting to grab external lock "iptables" external_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got file lock "/opt/stack/data/nova/nova-iptables" acquire
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got semaphore / lock "_do_refresh_provider_fw_rules" inner
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
>> 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
>> [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
>> DeleteServersAdminTestXML-1408641898
>> DeleteServersAdminTestXML-469708524] [instance:
>> 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
>> BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
>> _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Released file lock "/opt/stack/data/nova/nova-iptables" release
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Releasing semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
>> 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Semaphore / lock released "_do_refresh_provider_fw_rules" inner
>> 
>> Also readable here:
>> http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
>> 
>> (Yes, it's kind of ugly)
>> 
>> What occured to me is that in debugging locking iss

[openstack-dev] [infra] [all] Announcing the project-config repo

2014-09-25 Thread James E. Blair
We have moved (most) project configuration data out of the
openstack-infra/config repository into a new repository called
openstack-infra/project-config.

This repo contains only config files related to configuring software
projects in the OpenStack project infrastructure.  This includes:

  * Zuul
  * Jenkins Job Builder
  * Gerrit
  * Nodepool
  * IRC bots
  * The index page for specs.openstack.org

There are some things that are still in the config repo that we would
like to move but require further refactoring.  However, the bulk of
project related configuration is in the new repository.

Why Was This Done?
==

We have done this for a number of reasons:

  * To make it easier for people who care about the "big tent" of
OpenStack to review changes to add new projects and changes to the
CI system.

  * To make it easier for people who care about system administration of
the project infrastructure to review those changes.

  * To make the software that we use to run the infrastructure a little
more reusable by downstream consumers.

For more about the rationale and the mechanics of the split itself, see
this spec:

  
http://specs.openstack.org/openstack-infra/infra-specs/specs/config-repo-split.html

How To Use the New Repo
===

All of the same files are present with their history, but we have
reorganized the repo to make it a bit more convenient.  Most files are
simply one or two levels down from the root directory under what I
sincerely hope is a meaningful name.  For instance:

  zuul/layout.yaml
  jenkins/jobs/devstack-gate.yaml
  ...and so on...

Here is a browseable link to the repo:

  http://git.openstack.org/cgit/openstack-infra/project-config/tree/

And you know about our documentation, right?  It's all been updated with
the new paths.  Highlights include:

  * The stackforge howto:  http://ci.openstack.org/stackforge.html
  * Our Zuul docs:  http://ci.openstack.org/zuul.html
  * Our JJB docs:  http://ci.openstack.org/jjb.html
  * And many others accessible from:  http://ci.openstack.org/

Finally, all those neat jobs that tell you that you added a job without
a definition or didn't put something in alphabetical order are all
running on the new repo as well.

What Next?
==

If you had an outstanding patch against the config repo that was
affected by the split, you will need to re-propose it to the
project-config repo.

You should review changes in project-config.  Yes -- you.  If you have
any idea what this stuff is, reviewing changes to this repo will be a
big help to all the projects that are using our infrastructure.

This repo has its own group of core reviewers.  Currently it includes
only infra-core, but regular reviewers who understand the major systems
involved, the general requirements for new projects, and the overall
direction of the testing infrastructure will be nominated for membership
in the project-config-core team.

As always, feel free to reply to this email or visit us in
#openstack-infra with any questions.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NOVA] security group fails to attach to an instance if port-id is specified during boot.

2014-09-25 Thread Day, Phil
I think the expectation is that if a user is already interacting with Neutron 
to create ports then they should do the security group assignment in Neutron as 
well.

The trouble I see with supporting this way of assigning security groups is what 
should the correct behavior be if the user passes more than one port into the 
Nova boot command ?   In the case where Nova is creating the ports it kind of 
feels (just)  Ok to assign the security groups to all the ports.  In the case 
where the ports have already been created then it doesn’t feel right to me that 
Nova modifies them.






From: Oleg Bondarev [mailto:obonda...@mirantis.com]
Sent: 25 September 2014 08:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NOVA] security group fails to attach to an 
instance if port-id is specified during boot.

Hi Parikshit,

Looks like a bug. Currently if a port is specified its security groups are not 
updated; it should be fixed.
I've reported https://bugs.launchpad.net/nova/+bug/1373774 to track this.
Thanks for reporting!

Thanks,
Oleg

On Thu, Sep 25, 2014 at 10:15 AM, Parikshit Manur 
mailto:parikshit.ma...@citrix.com>> wrote:
Hi All,
Creation of server with command  ‘nova boot  --image  
--flavor m1.medium --nic port-id= --security-groups   ’ 
fails to attach the security group to the port/instance. The response payload 
has the security group added but only default security group is attached to the 
instance.  Separate action has to be performed on the instance to add sec_grp, 
and it is successful. Supplying the same with ‘--nic net-id=’ works as 
expected.

Is this the expected behaviour, or are there any other options which need to be 
specified to add the security group when a port-id needs to be attached during 
boot?

Thanks,
Parikshit Manur

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Robert Collins
On 26 September 2014 10:28, Zane Bitter  wrote:

> So it goes without saying that I support the latter part ("functionally test
> against their real dependencies"). I'm not convinced by the idea of not
> having an integrated release though. Time-based releases seem to be pretty
> popular lately - the Linux kernel, most distributions and the two major
> open-source web browsers use them, for example - and as a developer I don't
> feel especially qualified to second-guess what the release schedule for a
> particular component should actually be. The current cycle works decently
> well, is not obviously worse than any particular alternative, and aligns
> semi-nicely with the design summits (FWIW I think I would actually prefer
> the design summit take place _before_ the release, but that is a whole other
> discussion).
>
> We actually discussed with Monty at this week's Heat meeting his proposal to
> move the UI projects to a continuous release schedule. (For the moment Heat
> is actually blocked on severe limitations in the usefulness of standalone
> mode, but we expect those to shake out over the near term anyway.) I think
> there was general agreement that there would be some big upsides - it really
> sucks telling the 15th user that we already redesigned that thing to solve
> your issue like 4 months ago, but since you're stuck on Icehouse we can't
> help. On the other hand, there's big downsides too. We're still making major
> changes, and it's really nice to be able to let them bed in as part of a
> release cycle.

I think bedding them in is great, but sometimes that's more than a
cycle. We benefit if we can decouple 'make big change' from 'big
change is bedded in and ready for users to use'.

> (Since I started this discussion talking about bias, it's worth calling out
> a *huge* one here: my team has to figure out a way to package and distribute
> this stuff.)

:).

> That said, if we can get to a situation where we *choose* to do a
> co-ordinated release for the convenience of developers, distributors and
> (hopefully) users - rather than being forced into it through sheer terror
> that everything will fall over in a flaming heap if we don't - that would
> obviously be a win :)

+1.


> Right, if we _have_ to have it let's not name it something aspirational like
> "Layer 1" or "ring 0". Let's call it "Cluster Foxtrot" and make sure that
> projects are queueing up to get _out_. We can start with Designate, which
> afaik has done nothing to bring upon themselves such a fate ;)
>
> I'm not convinced that it has to be a formal, named thing though. In terms
> of the gating thing, that would be an organisational solution to a purely
> technical problem: up to now there's been no middle ground between a cluster
> of projects that all gate against each other and just not gating against
> each other at all. I'm totally confident that the QA and Infra teams can fix
> that. Those folks are superb at what they do.
>
> And on the release side, I think we're running before we can walk. It's not
> clear that most, or even many, projects would want to abandon a co-ordinated
> release anyway. (I'd actually have no problem with letting projects opt-out
> if they have some reason to - TripleO already effectively did.) External
> stakeholders (including the board) would _inevitably_ treat a shrinking of
> the co-ordinated release as an endorsement of which projects we actually
> care about, and by extension which projects they should care about.

Hmm, this isn't really representative of TripleO's position. We
*want*, *desperately* to be in the integrated gate, and we're nearly
there in terms of donated capacity - we're focusing on reliability
at the moment. We have no integrated API servers [yet], but Tuskar's
got a very clear plan taking it into the integrated release. Projects
!= Programs :). The non-API-server components of OpenStack as a whole
are mostly not part of the integrated release. The plan with Tuskar
was to get it stable enough to meet the incubation requirements and
then apply. Of course if incubation goes away, that's different :).

> If we ever reach a point where interfaces are stable enough that we don't
> need a co-ordinated release, let's consider _then_ whether to tear the whole
> thing down - in one fell swoop. Taking a half-measure sends exactly the
> wrong signal.

So as soon as you said that the Board would care about what is in the
integrated release, that re-instates the winners-and-losers thing that
a lot of this discussion is about. And Swift is already on a separate
schedule, but it's one of our most popular projects... I think there's
something fundamentally mixed up here :0.

> Finally, I would add that unco-ordinated releases, where distributions have
> to select a set of components that synced to the global requirements at
> different times, are not going to be truly feasible until we have a
> container-based deployment system. I expect it to not be an issue in the
> future, but that w

Re: [openstack-dev] [Heat] Question regarding Stack updates and templates

2014-09-25 Thread Zane Bitter

On 22/09/14 11:04, Anant Patil wrote:

Hi,

In convergence, we discuss having concurrent updates to a stack. I
wanted to know if it is safe to assume that an update will be a
superset of its previous updates. Understanding this is critical to
arrive at an implementation of concurrent stack operations.

Assuming that an admin will have VCS setup and will issue requests by
checking-out the template and modifying it, I could see that the updates
will be incremental and not discrete. Is this assumption correct? When
an update is issued before a previous update is complete, would the
template for that be based on the template of the previously issued
incomplete update or the last completed one?


Neither.

The only thing that matters is that we get to the state described in the 
latest template. With multiple updates rippling through at a time, it's 
likely that neither the last completed update nor the last issued update 
is representative of the current state that we need to modify.


Actually, given that updates can fail part-way through, that's already 
the case. That's why we now update the template incrementally in the 
database as we make individual changes.
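
To illustrate the point, here is a toy sketch (not actual Heat code) of the 
behaviour described above:

class Stack(object):
    # Toy model of convergence-style updates (illustration only).

    def __init__(self, template=None):
        self.goal_template = template     # what the user most recently asked for
        self.current_template = template  # what has actually been built so far

    def update(self, new_template):
        # A later update need not be a superset of earlier ones; it simply
        # replaces the goal. In-flight updates only matter through whatever
        # they have already persisted into current_template.
        self.goal_template = new_template
        self.converge()

    def converge(self):
        # Walk the diff between current_template and goal_template, applying
        # one resource change at a time and writing each change back into
        # current_template, so a failure part-way through still leaves an
        # accurate record of the real state.
        pass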


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Zane Bitter

On 25/09/14 15:12, Vishvananda Ishaya wrote:


On Sep 24, 2014, at 10:55 AM, Zane Bitter  wrote:


On 18/09/14 14:53, Monty Taylor wrote:

Hey all,

I've recently been thinking a lot about Sean's Layers stuff. So I wrote
a blog post which Jim Blair and Devananda were kind enough to help me edit.

http://inaugust.com/post/108


I think there are a number of unjustified assumptions behind this arrangement 
of things. I'm going to list some here, but I don't want anyone to interpret 
this as a personal criticism of Monty. The point is that we all suffer from 
biases - not for any questionable reasons but purely as a result of our own 
experiences, who we spend our time talking to and what we spend our time 
thinking about - and therefore we should all be extremely circumspect about 
trying to bake our own mental models of what OpenStack should be into the 
organisational structure of the project itself.


I think there were some assumptions that led to the Layer1 model. Perhaps a 
little insight into the in-person debate[1] at OpenStack-SV might help explain 
where Monty was coming from.


Thanks Vish, that is indeed useful background. Apparently I need to get 
out more ;)



The initial thought was a radical idea (pioneered by Jay) to completely 
dismantle the integrated release and have all projects release independently 
and functionally test against their real dependencies. This gained support from 
various people and I still think it is a great long-term goal.


So it goes without saying that I support the latter part ("functionally 
test against their real dependencies"). I'm not convinced by the idea of 
not having an integrated release though. Time-based releases seem to be 
pretty popular lately - the Linux kernel, most distributions and the two 
major open-source web browsers use them, for example - and as a 
developer I don't feel especially qualified to second-guess what the 
release schedule for a particular component should actually be. The 
current cycle works decently well, is not obviously worse than any 
particular alternative, and aligns semi-nicely with the design summits 
(FWIW I think I would actually prefer the design summit take place 
_before_ the release, but that is a whole other discussion).


We actually discussed with Monty at this week's Heat meeting his 
proposal to move the UI projects to a continuous release schedule. (For 
the moment Heat is actually blocked on severe limitations in the 
usefulness of standalone mode, but we expect those to shake out over the 
near term anyway.) I think there was general agreement that there would 
be some big upsides - it really sucks telling the 15th user that we 
already redesigned that thing to solve your issue like 4 months ago, but 
since you're stuck on Icehouse we can't help. On the other hand, there's 
big downsides too. We're still making major changes, and it's really 
nice to be able to let them bed in as part of a release cycle.


(Since I started this discussion talking about bias, it's worth calling 
out a *huge* one here: my team has to figure out a way to package and 
distribute this stuff.)


That said, if we can get to a situation where we *choose* to do a 
co-ordinated release for the convenience of developers, distributors and 
(hopefully) users - rather than being forced into it through sheer 
terror that everything will fall over in a flaming heap if we don't - 
that would obviously be a win :)



The worry that Monty (and others) had are two-fold:

1. When we had no co-gating in the past, we ended up with a lot of 
cross-project breakage. If we jump right into this we could end up in the wild 
west where different projects expect different keystone versions and there is no 
way to deploy a functional cloud.
2. We have set expectations in our community (and especially with 
distributions), that we release a set of things that all work together. It is 
not acceptable for us to just pull the rug out from under them.

These concerns show that we must (in the short term) provide some kind of 
integrated testing and release. I see the layer1 model as a stepping stone 
towards the long term goal of having the projects release independently and 
depend on stable interfaces. We aren’t going to get there immediately, so 
having a smaller, integrated set of services representing our most common use 
case seems like a good first step. As our interfaces get more stable and our 
testing gets better it could move to a (once every X months) release that just 
packages the current version of the layer1 projects or even be completely 
managed by distributions.

We need a way to move forward, but I’m hoping we can do it without a concept of 
“specialness” around layer1 projects. I actually see it as a limitation of 
these projects that we have to take this stepping stone and cannot disaggregate 
completely. Instead it should be seen as a necessary evil so that we don’t 
break our users.


Right, if we _have_ to have it let's not name i

Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-09-25 Thread Robert Collins
On 24 September 2014 11:03, Robert Collins  wrote:

> So... FWIW I think I've got a cleaner implementation of namespaces
> *for our context* - it takes inspiration from the PEP-420 discussion
> and final design. It all started when Mike reported issues with testr
> to me.
>
> https://bugs.launchpad.net/oslo.db/+bug/1372250
>
> tl;dr: we should stop using pkg_resources style namespace packages and
> instead have an effectively empty oslo package that sets up the
> namespace, which all namespaced libraries would depend on. With a stub
> __init__ in local source trees that adds the site-packages path to
> itself automatically, and excluding that file in sdist, it should be
> entirely transparent to developers and packagers, with no file
> conflicts etc.
>
> This works with the existing pkg_resources namespace packages, lets us
> migrate away from the pkg_resources implementation one package at a
> time, and we don't need to rename any of the packages, and it works
> fine with uninstalled and install -e installed source trees.
>
> We need:
>  - a new oslo package to introduce a common oslo/__init__.py
> (recommended in the pre-PEP420 world)
>  - a tiny pbr bugfix: https://review.openstack.org/123597
>  - and a patch like so to each project: https://review.openstack.org/123604
>
> I have such an oslo package https://github.com/rbtcollins/oslo, if
> this sounds reasonable I will push up an infra patch to create it.
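
(For anyone skimming: the "effectively empty" oslo/__init__.py is essentially
the classic pre-PEP-420 idiom sketched below. This is an illustration of the
idea only, not necessarily the exact contents of the package linked above.)

# oslo/__init__.py - illustrative sketch only.
# Extend this package's __path__ with any other 'oslo' directories found on
# sys.path, so oslo.* libraries installed elsewhere (e.g. site-packages)
# remain importable alongside an in-tree checkout.
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)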

Doug raised on IRC a concern about system-site-packages.

I have tested this, and I can make it work, but I'm not sure its
needed: it is totally broken today:

# Put oslo.config in the system site and oslo.i18n not yet installed
sudo apt-get install oslo.config
sudo apt-get remove oslo.i18n
# make a virtualenv with system site packages
mkvirtualenv --system-site-packages test-system-site
# install oslo.i18n
pip install oslo.i18n
# now when I tested, oslo.i18n doesn't depend on oslo.config, but let's be sure:
python -c 'import oslo.config; print oslo.config.__file__'
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named config

# Oh look! can't import oslo.config.
pip install oslo.db
...
python -c 'import oslo.config; print oslo.config.__file__'
/home/robertc/.virtualenvs/test-system-site/local/lib/python2.7/site-packages/oslo/config/__init__.pyc

# Now we need it, it got pulled in.


Now, as I say, I can fix this quite easily with a virtualenv aware pth
file, but since it's broken today and AFAIK there isn't a bug open
about this, I think it will be fine.

When you install e.g. oslo.db which *does* depend on oslo.config,
oslo.config is being installed within the venv. I'm not sure if that's
strictly due to version constraints, or if it's systemic.

So - I'd like to say that it's a separate preexisting issue and we can
loop back and tackle it should it show up as a problem.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] PTL candidacy

2014-09-25 Thread Mark Washenberger
Thanks, Nikhil, for offering to take on this responsibility.

I know you've had a lot of experience with Glance in the past and I feel
comfortable knowing that you'll be around to keep the project moving
forwards!

Cheers!

On Thu, Sep 25, 2014 at 1:56 PM, Nikhil Komawar <
nikhil.koma...@rackspace.com> wrote:

>   Hi,
>
>  I would like to take this opportunity and announce my candidacy for the
> role of Glance PTL.
>
>  I have been part of this program since Folsom release and have had 
> opportunity
> to work with an awesome team. There have been really challenging changes in
> the way Glance works and it has been a pleasure to contribute my reviews
> and code to many of those changes.
>
>  With the change in mission statement [1], that now provides a direction
> for other services to upload and discover data assets using Glance, it
> would be my focus to enable new features like 'Artifacts' to merge smoothly
> into master. This is a paradigm change in the way Glance is consumed and
> would be my priority to see this through. In addition, Glance is supporting a
> few new features like async workers and metadef, as of Juno that could be
> improved in terms of bugs and their maintainability. Seeing this through
> would be my next priority.
>
>  In addition to these, there are a few other challenges which Glance
> project faces - review/feedback time, triaging ever growing bug list, BP
> 'validation and followup' etc. I have some ideas to develop more momentum
> in each of these processes. With the advent of the Artifacts feature, new
> developers would be contributing to Glance. I would like to encourage and
> work with them become core members sooner than later. Also, there are many
> merge propositions which become stale due to lack of reviews from
> core-reviewers. My plan is to have bi-weekly sync-ups with the core and
> driver members to keep the review cycle active. As a good learning lesson
> from Juno, I would like to work closely with all the developers and
> involved core reviewers to know their sincere intent of accomplishing a
> feature within the scope of release timeline. There are some really
> talented people involved in Glance and I would like to keep synthesizing
> the ecosystem to enable everyone involved to do their best.
>
>  Lastly, my salutations to Mark. He has provided great direction and
> leadership to this project. I would like to keep his strategy of rotation
> of weekly meeting times to accommodate the convenience of people from
> various time zones.
>
>  Thanks for reading and I hope you will support my candidacy!
>
>  [1]
> https://github.com/openstack/governance/blob/master/reference/programs.yaml#L26
>
>  -Nikhil Komawar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] PTL candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 04:56 PM, Nikhil Komawar wrote:
> Hi,
> 
> I would like to take this opportunity and announce my candidacy for the role 
> of Glance PTL.
> 
> I have been part of this program since Folsom release and have had 
> opportunity to work with an awesome team. There have been really challenging 
> changes in the way Glance works and it has been a pleasure to contribute my 
> reviews and code to many of those changes.
> 
> With the change in mission statement [1], that now provides a direction for 
> other services to upload and discover data assets using Glance, it would be 
> my focus to enable new features like 'Artifacts' to merge smoothly into 
> master. This is a paradigm change in the way Glance is consumed and would be 
> my priority to see this through. In addition, Glance is supporting a few new 
> features like async workers and metadef, as of Juno that could be improved in 
> terms of bugs and their maintainability. Seeing this through would be my next 
> priority.
> 
> In addition to these, there are a few other challenges which Glance project 
> faces - review/feedback time, triaging ever growing bug list, BP 'validation 
> and followup' etc. I have some ideas to develop more momentum in each of 
> these processes. With the advent of the Artifacts feature, new developers 
> would be contributing to Glance. I would like to encourage and work with them 
> become core members sooner than later. Also, there are many merge 
> propositions which become stale due to lack of reviews from core-reviewers. 
> My plan is to have bi-weekly sync-ups with the core and driver members to 
> keep the review cycle active. As a good learning lesson from Juno, I would 
> like to work closely with all the developers and involved core reviewers to 
> know their sincere intent of accomplishing a feature within the scope of 
> release timeline. There are some really talented people involved in Glance 
> and I would like to keep synthesizing the ecosystem to enable everyone 
> involved to do their best.
> 
> Lastly, my salutations to Mark. He has provided great direction and 
> leadership to this project. I would like to keep his strategy of rotation of 
> weekly meeting times to accommodate the convenience of people from various 
> time zones.
> 
> Thanks for reading and I hope you will support my candidacy!
> 
> [1] 
> https://github.com/openstack/governance/blob/master/reference/programs.yaml#L26
> 
> -Nikhil Komawar
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 04:51 PM, Nikhil Manchanda wrote:
> I'd like to announce my candidacy for the PTL role of the Database
> (Trove) program for Kilo.
> 
> I'm the current PTL for Trove for Juno, and during the Juno time frame
> we made some really good progress on multiple fronts. We completed the
> Neutron integration work that we had started in Icehouse. We've added
> support for asynchronous mysql master-slave replication. We added a
> clustering API, and an initial implementation of clusters for MongoDB.
> We furthered the testability of Trove, by adding more Trove related
> tests to Tempest, and are continuing to make good progress updating
> and cleaning up our developer docs, install guide, and user
> documentation.
> 
> For Kilo, I'd like us to keep working on clustering, with the end goal
> of being able to provision fully HA database clusters in Trove. This
> means a continued focus on clustering for datastores (including a
> semi-synchronous mysql clustering solution), as well as heat
> integration. I'd also like to ensure that we make progress towards our
> goal of integrating trove with a monitoring solution to enable
> scenarios like auto-failover, which will be crucial to HA (for async
> replication scenarios). I'd also like to ensure that we do a better job
> integrating with the oslo libraries. And additionally, I'd like to
> keep our momentum going with regards to improving Trove testability
> and documentation.
> 
> Some of the other work-items that I hope we can get to in Kilo include:
> 
> - Packaging the Guest Agent separately from the other Trove services.
> - Automated guest agent upgrades.
> - Enabling hot pools for Trove instances.
> - User access of datastore logs.
> - Automated, and scheduled backups for instances.
> 
> No PTL candidate email is complete without the commit / review stats,
> so here they are:
> 
> * My Patches:
>   https://review.openstack.org/#/q/owner:slicknik,n,z
> 
> * My Reviews:
>   https://review.openstack.org/#/q/-owner:slicknik+reviewer:slicknik,n,z
> 
> Thanks for taking the time to make it this far,
> -Nikhil
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-09-25 Thread Steve Baker

On 26/09/14 05:36, Timur Sufiev wrote:

Hello, folks!

Following Drago Rosson's introduction of Barricade.js and our 
discussion in the ML about the possibility of using it in Merlin [1], I've 
decided to change the plans for the PoC: now the goal for Merlin's PoC is 
to implement a Mistral Workbook builder on top of Barricade.js. The 
reasons for that are:


* To better understand Barricade.js's potential as a data abstraction 
layer in Merlin, I need to learn much more about its possibilities and 
limitations than simply examining/reviewing its source code allows. 
The best way to do this is by building upon it.
* It's becoming too crowded in the HOT builder's sandbox - doing the 
same work as Drago currently does [2] seems like a waste of resources 
to me (especially in case he open-sources his HOT builder someday 
just as he did with Barricade.js).


Drago, it would be to everyone's benefit if your HOT builder efforts 
were developed on a public git repository, no matter how functional it 
is currently.


Is there any chance you can publish what you're working on to 
https://github.com/dragorosson or rackerlabs for a start?


* Why Mistral and not Murano or Solum? Because Mistral's YAML 
templates have a simpler structure than Murano's and are better 
defined at the moment than the ones in Solum.


There are already some commits in https://github.com/stackforge/merlin and 
since the client-side app doesn't talk to Mistral's server yet, it is 
pretty easy to run it (just follow the instructions in README.md) and 
then see it in the browser at http://localhost:8080. The UI is not great yet, 
as the current focus is data abstraction layer exploration, i.e. how 
to exploit Barricade.js capabilities to reflect all relations between 
Mistral's entities. I hope to finish the minimal set of features in a 
few weeks - and will certainly announce it in the ML.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044591.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044186.html




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-25 Thread Alexandre Levine

Hi All,

I'm looking for a way to set port_filter flag to False for port binding. 
Is there a way to do this in IceHouse or in current Juno code? I use 
devstack with the default ML2 plugin and configuration.


According to this guide 
(http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html) 
it should be done via binding:profile but it gets only recorded in the 
dictionary of binding:profile and doesn't get reflected in vif_details 
as supposed to.
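
(To make the attempt concrete, this is the kind of update I'm sending, with 
placeholder values; the attribute names come from the guide above, and whether 
port_filter is honoured inside binding:profile is exactly what I'm unsure about:)

PUT /v2.0/ports/<port-id>
{
    "port": {
        "binding:profile": {"port_filter": false}
    }
}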


I tried to find any code in Neutron that can potentially do this 
transferring from incoming binding:profile into binding:vif_details and 
found none.


I'd be very grateful if anybody can point me in the right direction.

And by the by, the reason I'm trying to do this is because I want to use 
one instance as NAT for another one in a private subnet. As a result of 
pinging 8.8.8.8 from the private instance via the NAT instance, the reply gets 
dropped by the security rule in iptables on the TAP interface of the NAT 
instance, because the source is different from the NAT instance IP. So I 
suppose that port_filter is responsible for this behavior, and that disabling it will 
remove this restriction in iptables.


Best regards,
  Alex Levine

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-25 Thread Alexandre Levine

Sorry,

I managed to misplace my question into the existing thread.


On 9/26/14, 12:56 AM, Alexandre Levine wrote:

Hi All,

I'm looking for a way to set port_filter flag to False for port 
binding. Is there a way to do this in IceHouse or in current Juno 
code? I use devstack with the default ML2 plugin and configuration.


According to this guide 
(http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html) 
it should be done via binding:profile but it gets only recorded in the 
dictionary of binding:profile and doesn't get reflected in vif_details 
as supposed to.


I tried to find any code in Neutron that can potentially do this 
transferring from incoming binding:profile into binding:vif_details 
and found none.


I'd be very grateful if anybody can point me in the right direction.

And by the by the reason I'm trying to do this is because I want to 
use one instance as NAT for another one in private subnet. As a result 
of ping 8.8.8.8 from private instance to NAT instance the reply gets 
Dropped by the security rule in iptables on TAP interface of NAT 
instance because the source is different from the NAT instance IP. So 
I suppose that port_filter is responsible for this behavior and will 
remove this restriction in iptables.


Best regards,
  Alex Levine



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] How to set port_filter in port binding?

2014-09-25 Thread Alexandre Levine

Hi All,

I'm looking for a way to set port_filter flag to False for port binding. 
Is there a way to do this in IceHouse or in current Juno code? I use 
devstack with the default ML2 plugin and configuration.


According to this guide 
(http://docs.openstack.org/api/openstack-network/2.0/content/binding_ext_ports.html) 
it should be done via binding:profile but it gets only recorded in the 
dictionary of binding:profile and doesn't get reflected in vif_details 
as supposed to.


I tried to find any code in Neutron that can potentially do this 
transferring from incoming binding:profile into binding:vif_details and 
found none.


I'd be very grateful if anybody can point me in the right direction.

And by the by the reason I'm trying to do this is because I want to use 
one instance as NAT for another one in private subnet. As a result of 
ping 8.8.8.8 from private instance to NAT instance the reply gets 
Dropped by the security rule in iptables on TAP interface of NAT 
instance because the source is different from the NAT instance IP. So I 
suppose that port_filter is responsible for this behavior and will 
remove this restriction in iptables.


Best regards,
  Alex Levine


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] PTL candidacy

2014-09-25 Thread Nikhil Komawar
Hi,

I would like to take this opportunity and announce my candidacy for the role of 
Glance PTL.

I have been part of this program since the Folsom release and have had the opportunity 
to work with an awesome team. There have been really challenging changes in the 
way Glance works and it has been a pleasure to contribute my reviews and code 
to many of those changes.

With the change in mission statement [1], that now provides a direction for 
other services to upload and discover data assets using Glance, it would be my 
focus to enable new features like 'Artifacts' to merge smoothly into master. 
This is a paradigm change in the way Glance is consumed and would be my 
priority to see this through. In addition, as of Juno Glance supports a few new 
features, like async workers and metadef, that could be improved in 
terms of bugs and their maintainability. Seeing this through would be my next 
priority.

In addition to these, there are a few other challenges which Glance project 
faces - review/feedback time, triaging ever growing bug list, BP 'validation 
and followup' etc. I have some ideas to develop more momentum in each of these 
processes. With the advent of the Artifacts feature, new developers would be 
contributing to Glance. I would like to encourage them and work with them to become 
core members sooner rather than later. Also, there are many merge proposals which 
become stale due to lack of reviews from core-reviewers. My plan is to have 
bi-weekly sync-ups with the core and driver members to keep the review cycle 
active. As a good learning lesson from Juno, I would like to work closely with 
all the developers and involved core reviewers to know their sincere intent of 
accomplishing a feature within the scope of release timeline. There are some 
really talented people involved in Glance and I would like to keep synthesizing 
the ecosystem to enable everyone involved to do their best.

Lastly, my salutations to Mark. He has provided great direction and leadership 
to this project. I would like to keep his strategy of rotation of weekly 
meeting times to accommodate the convenience of people from various time zones.

Thanks for reading and I hope you will support my candidacy!

[1] 
https://github.com/openstack/governance/blob/master/reference/programs.yaml#L26

-Nikhil Komawar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] PTL Candidacy

2014-09-25 Thread Nikhil Manchanda
I'd like to announce my candidacy for the PTL role of the Database
(Trove) program for Kilo.

I'm the current PTL for Trove for Juno, and during the Juno time frame
we made some really good progress on multiple fronts. We completed the
Neutron integration work that we had started in Icehouse. We've added
support for asynchronous mysql master-slave replication. We added a
clustering API, and an initial implementation of clusters for MongoDB.
We furthered the testability of Trove, by adding more Trove related
tests to Tempest, and are continuing to make good progress updating
and cleaning up our developer docs, install guide, and user
documentation.

For Kilo, I'd like us to keep working on clustering, with the end goal
of being able to provision fully HA database clusters in Trove. This
means a continued focus on clustering for datastores (including a
semi-synchronous mysql clustering solution), as well as heat
integration. I'd also like to ensure that we make progress towards our
goal of integrating trove with a monitoring solution to enable
scenarios like auto-failover, which will be crucial to HA (for async
replication scenarios). I'd also like to ensure that we do a better job
integrating with the oslo libraries. And additionally, I'd like to
keep our momentum going with regards to improving Trove testability
and documentation.

Some of the other work-items that I hope we can get to in Kilo include:

- Packaging the Guest Agent separately from the other Trove services.
- Automated guest agent upgrades.
- Enabling hot pools for Trove instances.
- User access of datastore logs.
- Automated, and scheduled backups for instances.

No PTL candidate email is complete without the commit / review stats,
so here they are:

* My Patches:
  https://review.openstack.org/#/q/owner:slicknik,n,z

* My Reviews:
  https://review.openstack.org/#/q/-owner:slicknik+reviewer:slicknik,n,z

Thanks for taking the time to make it this far,
-Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] PTL candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 02:50 PM, John Dickinson wrote:
> I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
> specifically and OpenStack in general since the beginning. I'd like to 
> continue to serve in the role as Swift PTL.
> 
> In my last candidacy email[1], I talked about several things I wanted to 
> focus on in Swift.
> 
> 1) Storage policies. This is done, and we're currently building on it to 
> implement erasure code storage in Swift.
> 
> 2) Focus on performance and efficiency. This is an ongoing thing that is 
> never "done", but we have made improvements here, and there are some other 
> interesting things in-progress right now (like zero-copy data paths).
> 
> 3) Better QA. We've added a third-party test cluster to the CI system, but 
> I'd like to improve this further, for example by adding our internal 
> integration tests (probe tests) to our QA pipeline.
> 
> 4) Better community efficiency. Again, we've made some small improvements 
> here, but we have a ways to go yet. Our review backlog is large, and it takes 
> a while for patches to land. We need to continue to improve community 
> efficiency on these metrics.
> 
> Overall, I want to ensure that Swift continues to provide a stable and robust 
> object storage engine. Focusing on the areas listed above will help us do 
> that. We'll continue to build functionality that allows applications to rely 
> on Swift to take over hard problems of storage so that apps can focus on 
> adding their value without worrying about storage.
> 
> My vision for Swift is that everyone will use it every day, even if they 
> don't realize it. Together we can make it happen.
> 
> --John
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031450.html
> 
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 03:24 PM, Sergey Lukjanov wrote:
> Hey folks,
> 
> I'd like to announce my intention to continue being PTL of the Data
> Processing program (Sahara).
> 
> I’m working on Sahara (ex. Savanna) project from scratch, from the
> initial proof of concept implementation and till now. I have been the
> acting/elected PTL since Sahara was an idea. Additionally, I’m
> contributing to other OpenStack projects, especially Infrastructure
> for the last two releases where I’m core/root teams member now.
> 
> My high-level focus as PTL is to coordinate work of subteams, code
> review, release management and general architecture/design tracking.
> 
> During the Juno cycle I was especially focused on stability, improving
> testing and supporting for the different data processing tools in
> addition to the Apache Hadoop. The very huge lists of bugs and
> improvements has  been done during the cycle and I’m glad that we’re
> ending the Juno with completed list of planned features and new
> plugins available to end users including Cloudera and Spark. The great
> work was done on keeping backward compatibility together with security
> and usability improvements.
> 
> For the Kilo I’d like to keep my own focus on the same stuff -
> coordination, review, release management and general approach
> tracking. As about the overall project focus I’d like to continue
> working on stability and tests coverage, distributed architecture,
> improved UX for non-expert EDP users, ability to use Sahara out of the
> box and etc. Additionally, I’m thinking about adopting an idea of
> czars system for Sahara in Kilo release and I’d like to discuss it on
> the summit. So, my vision of Kilo is to continue moving forward in
> implementing scalable and flexible Data Processing aaS for OpenStack
> ecosystem by investing in quality and new features.
> 
> A few words about myself: I’m Principle Software Engineer in Mirantis.
> I was working a lot with  Big Data projects and technologies (Hadoop,
> HDFS, Cassandra, Twitter Storm, etc.) and enterprise-grade solutions
> before starting working on Sahara in OpenStack ecosystem. You can see
> my commit history [0], review history [1] using the links below.
> 
> [0] http://stackalytics.com/?user_id=slukjanov&metric=commits&release=all
> [1] http://stackalytics.com/?user_id=slukjanov&metric=marks&release=all
> 
> Thanks.
> 
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Of wiki and contributors docs (was Re: [Nova] [All] API standards working group)

2014-09-25 Thread Stefano Maffulli
On 09/24/2014 09:09 PM, Anne Gentle wrote:
> I think the wiki is a great place to get ideas out while we look for a
> cross-project specs workflow in the meantime. 

The wiki is a great place to store things temporarily until they mature
and find a stable home :)

Speaking of wiki, those of you that follow the recent changes may have
noticed that I've been doing quite a bit of gardening lately in the
Category namespace[1].

The wiki pages have been growing at a fast pace, at a time when thinking of a
taxonomy and more structure was not really an option. Given the feedback
I'm getting from people interested in becoming contributors, I think
it's time to give the wiki more shape.

Some time ago, Katherine Cranford (a trained taxonomist) volunteered to
get through the wiki pages and draft a taxonomy for us. Shari Mahrdt, a
recent hire by the Foundation, volunteered a few hours per week to
implement it and I finally took the lead for a project to reorganize
content for developers (as in contributors) community[2].

We have a proposed taxonomy[3] and a first try at implementing it is
visible as a navigable tree on
https://wiki.openstack.org/wiki/Category:Home

Shari and I are keeping track of things to do on this etherpad:
https://etherpad.openstack.org/p/Action_Items_OpenStack_Wiki

We're very early in this project, things may change and we'll need help
from each editor of the wiki. I just wanted to let you know that work is
being done to improve life for new contributors. More details will follow.

/stef

[1]
https://wiki.openstack.org/w/index.php?namespace=14&tagfilter=&translations=filter&hideminor=1&title=Special%3ARecentChanges
[2]
http://maffulli.net/2014/09/18/improving-documentation-for-new-openstack-contributors/
[3]
https://docs.google.com/a/openstack.org/spreadsheets/d/1MA_u8RRnqCJC3AWQYLOz4r_zqOCewoP_ds1t_yvBak4/edit#gid=1014544834

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Need a solution for large catalog in PKI tokens

2014-09-25 Thread Dolph Mathews
On Thu, Sep 25, 2014 at 3:21 PM, Ved Lad  wrote:

> The Openstack installation (Havana) at our company has a large number of
> service endpoints in the catalog. As a consequence, when using PKI tokens,
> my HTTP request header gets too big to handle for services like neutron. Im
> evaluating different options for reducing the size of the catalog in the
> PKI token. Some that I have found are:
>
> 1. Using the per tenant endpoint filtering extension: This could break if
> the per tenant endpoint list gets too big
>

In Juno, there's a revision to this which makes the management easier:


https://blueprints.launchpad.net/keystone/+spec/multi-attribute-endpoint-grouping


>
> 2. Using PKIZ Tokens(In Juno): Were using Havana, so I cant use this
> feature, but it still doesnt look scalable
>

You're correct, it's a step in the right direction that we should have
taken in the first place, but it's still going to run into the same problem
with (even larger) large catalogs.


>
> 3. Using the ?nocatalog option. This is the best option for scalability
> but isnt the catalog a required component for authorization?
>

The catalog (historically) does not convey any sort of authorization
information, but does provide some means of obscurity. There's been an
ongoing effort to make keystonemiddleware aware of the endpoint it's
protecting, and thus the catalog becomes pertinent authZ data in that
scenario. The bottom line is that the ?nocatalog auth flow is not a
completely viable code path yet.


>
> Are there any other solutions that i am unaware of, that scale with number
> of endpoints?
>

Use UUID tokens, which Keystone defaults to in Juno for some of the same
pain points that you're experiencing. UUID provides the same level of
security as PKI, with different scaling characteristics.
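
(For example, switching the provider back to UUID is a one-line keystone.conf
change; the exact provider path below is from memory and worth double-checking
against your release:)

[token]
provider = keystone.token.providers.uuid.Provider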


>
> Thanks,
> Ved
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-25 Thread Dolph Mathews
On Wed, Sep 24, 2014 at 9:48 AM, Day, Phil  wrote:

> > >
> > > I think we should aim to /always/ have 3 notifications using a pattern
> > > of
> > >
> > >try:
> > >   ...notify start...
> > >
> > >   ...do the work...
> > >
> > >   ...notify end...
> > >except:
> > >   ...notify abort...
> >
> > Precisely my viewpoint as well. Unless we standardize on the above, our
> > notifications are less than useful, since they will be open to
> interpretation by
> > the consumer as to what precisely they mean (and the consumer will need
> to
> > go looking into the source code to determine when an event actually
> > occurred...)
> >
> > Smells like a blueprint to me. Anyone have objections to me writing one
> up
> > for Kilo?
> >
> > Best,
> > -jay
> >
> Hi Jay,
>
> So just to be clear, are you saying that we should generate 2 notification
> messages on Rabbit for every DB update?   That feels like big overkill
> to me.   If I follow that logic then the current state transition
> notifications should also be changed to "Starting to update task state /
> finished updating task state"  - which seems daft and confuses
> logging with notifications.
>
> Sandy's answer where start /end are used if there is a significant amount
> of work between the two and/or the transaction spans multiple hosts makes a
> lot more sense to me.   Bracketing a single DB call with two notification
> messages rather than just a single one on success to show that something
> changed would seem to me to be much more in keeping with the concept of
> notifying on key events.
>

+1 Following similar thinking, Keystone recently dropped a "pending"
notification that proceeded a single DB call, which was always followed by
either a success or failure notification.
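
(For what it's worth, the try/notify pattern Jay describes fits naturally into a
small helper; this is a sketch only, not something that exists in oslo today. The
notifier is assumed to expose info()/error() in the style of oslo.messaging's
Notifier:)

import contextlib

@contextlib.contextmanager
def notify_span(notifier, ctxt, event_type, payload):
    # Emit <event_type>.start before the block, .end after it completes,
    # or .abort if the block raises.
    notifier.info(ctxt, event_type + '.start', payload)
    try:
        yield
    except Exception:
        notifier.error(ctxt, event_type + '.abort', payload)
        raise
    notifier.info(ctxt, event_type + '.end', payload)

Used as:

with notify_span(notifier, ctxt, 'compute.instance.resize', payload):
    do_the_work()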


>
> Phil
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Morgan Fainberg
-Original Message-
From: John Griffith 
Reply: OpenStack Development Mailing List (not for usage questions) 
>
Date: September 25, 2014 at 12:27:52
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject:  Re: [openstack-dev] [Ironic] Get rid of the sample config file

> On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni <
> devdatta.kulka...@rackspace.com> wrote:
>  
> > Hi,
> >
> > We have faced this situation in Solum several times. And in fact this was
> > one of the topics
> > that we discussed in our last irc meeting.
> >
> > We landed on separating the sample check from pep8 gate into a non-voting
> > gate.
> > One reason to keep the sample check is so that when say a feature in your
> > code fails
> > due to some upstream changes and for which you don't have coverage in your
> > functional tests then
> > a non-voting but failing sample check gate can be used as a starting point
> > of the failure investigation.
> >
> > More details about the discussion can be found here:
> >
> > http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt
> >   
> >
> > - Devdatta
> >
> > --
> > *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
> > *Sent:* Thursday, September 25, 2014 12:42 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file
> >
> > Hi!
> >
> > On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes <
> > lucasago...@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> Today we have hit the problem of having an outdated sample
> >> configuration file again[1]. The problem of the sample generation is
> >> that it picks up configuration from other projects/libs
> >> (keystoneclient in that case) and this break the Ironic gate without
> >> us doing anything.
> >>
> >> So, what you guys think about removing the test that compares the
> >> configuration files and makes it no longer gate[2]?
> >>
> >> We already have a tox command to generate the sample configuration
> >> file[3], so folks that needs it can generate it locally.
> >>
> >> Does anyone disagree?
> >>
> >>
> > +1 to this, but I think we should document how to generate the sample
> > config
> > in our documentation (install guide?).
> >
> > -Dave
> > --
> > David Shrewsbury (Shrews)
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> I tried this in Cinder a while back and was actually rather surprised by
> the overwhelming push-back I received from the Operator community, and
> whether I agreed with all of it or not, the last thing I want to do is
> ignore the Operators that are actually standing up and maintaining what
> we're building.
>  
> Really at the end of the day this isn't really that big of a deal. It's
> relatively easy to update the config in most of the projects "tox
> -egenconfig" see my posting back in May [1]. For all the more often this
> should happen I'm not sure why we can't have enough contributors that are
> just pro-active enough to "fix it up" when they see it falls out of date.
>  
> John
>  
> [1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html  

+1 to what John just said.
 
I know in Keystone we update the sample config (usually) whenever we notice it is 
out of date. Often we ask developers making config changes to run `tox 
-esample_config` and re-upload their patch. If someone misses it, we (the cores) 
will do a patch that just updates the sample config along the way. Ideally we 
should have a check job that just reports that the config is out of date (instead of 
blocking the review).

The issue is the premise that there are 2 options:

1) Gate on the sample config being current
2) Have no sample config in the tree.

The missing third option is the proactive approach (plus having something 
convenient like `tox -egenconfig` or `tox -eupdate_sample_config` to make it 
easy to update the sample config), which covers both sides 
nicely. The operators/deployers have the sample config in tree, and the developers 
don’t get patches rejected in the gate because the sample config doesn’t match 
new options in an external library.
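
(For illustration, in oslo-incubator-based projects `tox -egenconfig` was roughly
a wrapper around a call like the following; the script path and flags are from
memory and differ slightly per project:)

./tools/config/generate_sample.sh -b . -p keystone -o etc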

I know a lot of operators and deployers appreciate the sample config being 
in-tree.

—Morgan







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread John Griffith
On Thu, Sep 25, 2014 at 12:34 PM, Devdatta Kulkarni <
devdatta.kulka...@rackspace.com> wrote:

>  Hi,
>
> We have faced this situation in Solum several times. And in fact this was
> one of the topics
> that we discussed in our last irc meeting.
>
> We landed on separating the sample check from pep8 gate into a non-voting
> gate.
> One reason to keep the sample check is so that when say a feature in your
> code fails
> due to some upstream changes and for which you don't have coverage in your
> functional tests then
> a non-voting but failing sample check gate can be used as a starting point
> of the failure investigation.
>
> More details about the discussion can be found here:
>
> http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt
>
> - Devdatta
>
>  --
> *From:* David Shrewsbury [shrewsbury.d...@gmail.com]
> *Sent:* Thursday, September 25, 2014 12:42 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Ironic] Get rid of the sample config file
>
>   Hi!
>
> On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
>
>> Hi,
>>
>> Today we have hit the problem of having an outdated sample
>> configuration file again[1]. The problem of the sample generation is
>> that it picks up configuration from other projects/libs
>> (keystoneclient in that case) and this break the Ironic gate without
>> us doing anything.
>>
>> So, what you guys think about removing the test that compares the
>> configuration files and makes it no longer gate[2]?
>>
>> We already have a tox command to generate the sample configuration
>> file[3], so folks that needs it can generate it locally.
>>
>> Does anyone disagree?
>>
>>
>  +1 to this, but I think we should document how to generate the sample
> config
> in our documentation (install guide?).
>
>  -Dave
>  --
>  David Shrewsbury (Shrews)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I tried this in Cinder a while back and was actually rather surprised by
the overwhelming push-back I received from the Operator community, and
whether I agreed with all of it or not, the last thing I want to do is
ignore the Operators that are actually standing up and maintaining what
we're building.

Really at the end of the day this isn't really that big of a deal.  It's
relatively easy to update the config in most of the projects "tox
-egenconfig" see my posting back in May [1].  For all the more often this
should happen I'm not sure why we can't have enough contributors that are
just pro-active enough to "fix it up" when they see it falls out of date.

John

[1]: http://lists.openstack.org/pipermail/openstack-dev/2014-May/036438.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] PTL Candidacy

2014-09-25 Thread Sergey Lukjanov
Hey folks,

I'd like to announce my intention to continue being PTL of the Data
Processing program (Sahara).

I’ve been working on the Sahara (ex-Savanna) project from scratch, from the
initial proof-of-concept implementation until now. I have been the
acting/elected PTL since Sahara was an idea. Additionally, I’m
contributing to other OpenStack projects, especially Infrastructure,
for the last two releases, where I’m a core/root team member now.

My high-level focus as PTL is to coordinate work of subteams, code
review, release management and general architecture/design tracking.

During the Juno cycle I was especially focused on stability, improving
testing and support for the different data processing tools in
addition to Apache Hadoop. A very large list of bugs and
improvements has been completed during the cycle and I’m glad that we’re
ending Juno with the planned features completed and new
plugins available to end users, including Cloudera and Spark. Great
work was done on keeping backward compatibility together with security
and usability improvements.

For Kilo I’d like to keep my own focus on the same stuff -
coordination, review, release management and general approach
tracking. As for the overall project focus, I’d like to continue
working on stability and test coverage, distributed architecture,
improved UX for non-expert EDP users, the ability to use Sahara out of the
box, etc. Additionally, I’m thinking about adopting the idea of a
czars system for Sahara in the Kilo release and I’d like to discuss it at
the summit. So, my vision of Kilo is to continue moving forward in
implementing a scalable and flexible Data Processing aaS for the OpenStack
ecosystem by investing in quality and new features.

A few words about myself: I’m a Principal Software Engineer at Mirantis.
I worked a lot with Big Data projects and technologies (Hadoop,
HDFS, Cassandra, Twitter Storm, etc.) and enterprise-grade solutions
before starting to work on Sahara in the OpenStack ecosystem. You can see
my commit history [0] and review history [1] using the links below.

[0] http://stackalytics.com/?user_id=slukjanov&metric=commits&release=all
[1] http://stackalytics.com/?user_id=slukjanov&metric=marks&release=all

Thanks.


-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Joshua Harlow
Or how about we add in a new log level?

A few libraries I have come across support the log level 5 (which is less than 
debug (10) but greater than notset (0))...

One usage of this is in the multiprocessing library in python itself @

https://hg.python.org/releasing/3.4/file/8671f89107c8/Lib/multiprocessing/util.py#l34

Kazoo calls it the 'BLATHER' level @

https://github.com/python-zk/kazoo/blob/master/kazoo/loggingsupport.py

Since these messages can actually be useful for lock_utils developers, it could 
be worthwhile to keep them[1]?

Just a thought...

[1] One man's DEBUG is another man's garbage, ha.
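
(Registering such a level is only a couple of lines with the stdlib logging
module; a sketch, not a concrete oslo proposal:)

import logging

BLATHER = 5  # below DEBUG (10), above NOTSET (0)
logging.addLevelName(BLATHER, 'BLATHER')

LOG = logging.getLogger('nova.openstack.common.lockutils')

def blather(logger, msg, *args, **kwargs):
    # Convenience wrapper so callers don't deal with the numeric level.
    if logger.isEnabledFor(BLATHER):
        logger.log(BLATHER, msg, *args, **kwargs)

blather(LOG, 'Acquired semaphore "%s"', 'iptables')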

On Sep 25, 2014, at 12:06 PM, Ben Nemec  wrote:

> On 09/25/2014 07:49 AM, Sean Dague wrote:
>> Spending a ton of time reading logs, oslo locking ends up basically
>> creating a ton of output at DEBUG that you have to mentally filter to
>> find problems:
>> 
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Created new semaphore "iptables" internal_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Acquired semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Attempting to grab external lock "iptables" external_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got file lock "/opt/stack/data/nova/nova-iptables" acquire
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got semaphore / lock "_do_refresh_provider_fw_rules" inner
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
>> 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
>> [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
>> DeleteServersAdminTestXML-1408641898
>> DeleteServersAdminTestXML-469708524] [instance:
>> 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
>> BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
>> _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Released file lock "/opt/stack/data/nova/nova-iptables" release
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Releasing semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
>> 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Semaphore / lock released "_do_refresh_provider_fw_rules" inner
>> 
>> Also readable here:
>> http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
>> 
>> (Yes, it's kind of ugly)
>> 
>> What occured to me is that in debugging locking issues what we actually
>> care about is 2 things semantically:
>> 
>> #1 - tried to get a lock, but someone else has it. Then we know we've
>> got lock contention. .
>> #2 - something is still holding a lock after some "long" amount of time.
> 
> We did just merge https://review.openstack.org/#/c/122166/ which adds
> some contention/timing information to the log messages and should at
> least be a step toward what you're talking about.
> 
> For context, we had some bad logging that resulted in
> https://bugs.launchpad.net/oslo.concurrency/+bug/1367941 .  That has
> been cleaned up to at least be accurate, but it did add an extra log
> message (created and acquired).  The reason we cared about that is we
> thought there might be a bad interaction between our code and eventlet,
> so we wanted to know whether we were in fact locking the same semaphore
> twice or mistakenly creating two separate ones (as it turns out, neither
> - it was just the bad logging I mentioned earlier).

[openstack-dev] [keystone] Need a solution for large catalog in PKI tokens

2014-09-25 Thread Ved Lad
The OpenStack installation (Havana) at our company has a large number of
service endpoints in the catalog. As a consequence, when using PKI tokens,
my HTTP request header gets too big for services like neutron to handle. I'm
evaluating different options for reducing the size of the catalog in the
PKI token. Some that I have found are:

1. Using the per tenant endpoint filtering extension: This could break if
the per tenant endpoint list gets too big

2. Using PKIZ tokens (in Juno): We're using Havana, so I can't use this
feature, but it still doesn't look scalable

3. Using the ?nocatalog option. This is the best option for scalability but
isn't the catalog a required component for authorization?

Are there any other solutions that I am unaware of that scale with the number
of endpoints?
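
For reference, option 3 looks roughly like this with python-requests against
the v3 API (the URL and credentials below are placeholders, and I'm assuming
the deployment exposes the v3 API):

import json

import requests

AUTH_URL = 'http://keystone.example.com:5000/v3'
body = {'auth': {'identity': {'methods': ['password'],
                              'password': {'user': {'name': 'demo',
                                                    'domain': {'id': 'default'},
                                                    'password': 'secret'}}}}}

# ?nocatalog asks keystone to leave the service catalog out of the token,
# which is what keeps the PKI token (and the headers built from it) small.
resp = requests.post(AUTH_URL + '/auth/tokens?nocatalog',
                     data=json.dumps(body),
                     headers={'Content-Type': 'application/json'})
token = resp.headers['X-Subject-Token']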

Thanks,
Ved
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-25 Thread Vishvananda Ishaya

On Sep 24, 2014, at 10:55 AM, Zane Bitter  wrote:

> On 18/09/14 14:53, Monty Taylor wrote:
>> Hey all,
>> 
>> I've recently been thinking a lot about Sean's Layers stuff. So I wrote
>> a blog post which Jim Blair and Devananda were kind enough to help me edit.
>> 
>> http://inaugust.com/post/108
> 
> I think there are a number of unjustified assumptions behind this arrangement 
> of things. I'm going to list some here, but I don't want anyone to interpret 
> this as a personal criticism of Monty. The point is that we all suffer from 
> biases - not for any questionable reasons but purely as a result of our own 
> experiences, who we spend our time talking to and what we spend our time 
> thinking about - and therefore we should all be extremely circumspect about 
> trying to bake our own mental models of what OpenStack should be into the 
> organisational structure of the project itself.

I think there were some assumptions that led to the Layer1 model. Perhaps a 
little insight into the in-person debate[1] at OpenStack-SV might help explain 
where Monty was coming from.

The initial thought was a radical idea (pioneered by Jay) to completely 
dismantle the integrated release and have all projects release independently 
and functionally test against their real dependencies. This gained support from 
various people and I still think it is a great long-term goal.

The worry that Monty (and others) had is two-fold:

1. When we had no co-gating in the past, we ended up with a lot of 
cross-project breakage. If we jump right into this we could end up in the wild 
west where different projects expect different keystone versions and there is no 
way to deploy a functional cloud.
2. We have set expectations in our community (and especially with 
distributions), that we release a set of things that all work together. It is 
not acceptable for us to just pull the rug out from under them.

These concerns show that we must (in the short term) provide some kind of 
integrated testing and release. I see the layer1 model as a stepping stone 
towards the long term goal of having the projects release independently and 
depend on stable interfaces. We aren’t going to get there immediately, so 
having a smaller, integrated set of services representing our most common use 
case seems like a good first step. As our interfaces get more stable and our 
testing gets better it could move to a (once every X months) release that just 
packages the current version of the layer1 projects or even be completely 
managed by distributions.

We need a way to move forward, but I’m hoping we can do it without a concept of 
“specialness” around layer1 projects. I actually see it as a limitation of 
these projects that we have to take this stepping stone and cannot disaggregate 
completely. Instead it should be seen as a necessary evil so that we don’t 
break our users.

In addition, we should encourage other shared use cases in OpenStack, both for 
testing (functional tests against groups of services) and for releases (shared 
releases of related projects).

[1] Note this wasn’t a planned debate, but a spontaneous discussion that 
included (at various points) Monty Taylor, Jay Pipes, Joe Gordon, John 
Dickinson, myself, and (undoubtedly) one or two people I'm forgetting.


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around oslo lockutils

2014-09-25 Thread Ben Nemec
On 09/25/2014 07:49 AM, Sean Dague wrote:
> Spending a ton of time reading logs, oslo locking ends up basically
> creating a ton of output at DEBUG that you have to mentally filter to
> find problems:
> 
> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Created new semaphore "iptables" internal_lock
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Acquired semaphore "iptables" lock
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Attempting to grab external lock "iptables" external_lock
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Got file lock "/opt/stack/data/nova/nova-iptables" acquire
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Got semaphore / lock "_do_refresh_provider_fw_rules" inner
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
> 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
> [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
> DeleteServersAdminTestXML-1408641898
> DeleteServersAdminTestXML-469708524] [instance:
> 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
> BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
> _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Released file lock "/opt/stack/data/nova/nova-iptables" release
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Releasing semaphore "iptables" lock
> /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
> 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
> Semaphore / lock released "_do_refresh_provider_fw_rules" inner
> 
> Also readable here:
> http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
> 
> (Yes, it's kind of ugly)
> 
> What occured to me is that in debugging locking issues what we actually
> care about is 2 things semantically:
> 
> #1 - tried to get a lock, but someone else has it. Then we know we've
> got lock contention. .
> #2 - something is still holding a lock after some "long" amount of time.

We did just merge https://review.openstack.org/#/c/122166/ which adds
some contention/timing information to the log messages and should at
least be a step toward what you're talking about.
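
Roughly the idea (this is just a sketch, not the actual patch) is to time the
acquire and only log when we actually had to wait, which maps to case #1 above:

import logging
import time

LOG = logging.getLogger(__name__)


def timed_acquire(name, sem):
    # Try a non-blocking grab first; if it fails we know there is real
    # contention (case #1) and it is worth logging how long we end up waiting.
    start = time.time()
    if not sem.acquire(False):
        sem.acquire()
        LOG.debug('Waited %.3fs for lock "%s"', time.time() - start, name)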

For context, we had some bad logging that resulted in
https://bugs.launchpad.net/oslo.concurrency/+bug/1367941 .  That has
been cleaned up to at least be accurate, but it did add an extra log
message (created and acquired).  The reason we cared about that is we
thought there might be a bad interaction between our code and eventlet,
so we wanted to know whether we were in fact locking the same semaphore
twice or mistakenly creating two separate ones (as it turns out, neither
- it was just the bad logging I mentioned earlier).

So, given that I think everyone involved agrees that the double-locking
thing was a cosmetic issue and not a functional one we could probably
just remove the created/using messages here:
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L202
which would eliminate one message per lock operation without
significantly impacting debuggability of our code.  Those messages only
exist because we didn't trust what was going on at a lower level.

It would also be nice to reduc

[openstack-dev] [Swift] PTL candidacy

2014-09-25 Thread John Dickinson
I'm announcing my candidacy for Swift PTL. I've been involved with Swift 
specifically and OpenStack in general since the beginning. I'd like to continue 
to serve in the role as Swift PTL.

In my last candidacy email[1], I talked about several things I wanted to focus 
on in Swift.

1) Storage policies. This is done, and we're currently building on it to 
implement erasure code storage in Swift.

2) Focus on performance and efficiency. This is an ongoing thing that is never 
"done", but we have made improvements here, and there are some other 
interesting things in-progress right now (like zero-copy data paths).

3) Better QA. We've added a third-party test cluster to the CI system, but I'd 
like to improve this further, for example by adding our internal integration 
tests (probe tests) to our QA pipeline.

4) Better community efficiency. Again, we've made some small improvements here, 
but we have a ways to go yet. Our review backlog is large, and it takes a while 
for patches to land. We need to continue to improve community efficiency on 
these metrics.

Overall, I want to ensure that Swift continues to provide a stable and robust 
object storage engine. Focusing on the areas listed above will help us do that. 
We'll continue to build functionality that allows applications to rely on Swift 
to take over hard problems of storage so that apps can focus on adding their 
value without worrying about storage.

My vision for Swift is that everyone will use it every day, even if they don't 
realize it. Together we can make it happen.

--John

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031450.html






signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Sept 25 1800 UTC

2014-09-25 Thread Andrew Lazarev
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.html
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-09-25-18.02.log.html

Andrew.

On Wed, Sep 24, 2014 at 2:50 PM, Sergey Lukjanov 
wrote:

> Hi folks,
>
> We'll be having the Sahara team meeting as usual in
> #openstack-meeting-alt channel.
>
> Agenda:
> https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
>
>
> http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140925T18
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Devdatta Kulkarni
Hi,

We have faced this situation in Solum several times. In fact, this was one 
of the topics
that we discussed in our last IRC meeting.

We landed on separating the sample check from the pep8 gate into a non-voting gate.
One reason to keep the sample check is that when, say, a feature in your code 
fails
due to some upstream change for which you don't have coverage in your 
functional tests, then
a non-voting but failing sample check gate can be used as a starting point for 
the failure investigation.

More details about the discussion can be found here:
http://eavesdrop.openstack.org/meetings/solum_team_meeting/2014/solum_team_meeting.2014-09-23-16.00.log.txt

- Devdatta


From: David Shrewsbury [shrewsbury.d...@gmail.com]
Sent: Thursday, September 25, 2014 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic] Get rid of the sample config file

Hi!

On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes wrote:
Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this break the Ironic gate without
us doing anything.

So, what you guys think about removing the test that compares the
configuration files and makes it no longer gate[2]?

We already have a tox command to generate the sample configuration
file[3], so folks that needs it can generate it locally.

Does anyone disagree?


+1 to this, but I think we should document how to generate the sample config
in our documentation (install guide?).

-Dave
--
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around oslo lockutils

2014-09-25 Thread Davanum Srinivas
Logged as high priority bug -
https://bugs.launchpad.net/oslo.concurrency/+bug/1374075

On Thu, Sep 25, 2014 at 1:57 PM, Jay Pipes  wrote:
> +1 for making those two changes. I also have been frustrated doing debugging
> in the gate recently, and any operational-ease-of-debugging things like this
> would be appreciated.
>
> -jay
>
> On 09/25/2014 08:49 AM, Sean Dague wrote:
>>
>> Spending a ton of time reading logs, oslo locking ends up basically
>> creating a ton of output at DEBUG that you have to mentally filter to
>> find problems:
>>
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Created new semaphore "iptables" internal_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Acquired semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Attempting to grab external lock "iptables" external_lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:178
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got file lock "/opt/stack/data/nova/nova-iptables" acquire
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:93
>> 2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Got semaphore / lock "_do_refresh_provider_fw_rules" inner
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
>> 2014-09-24 18:44:49.244 DEBUG nova.compute.manager
>> [req-b91cb1c1-f211-43ef-9714-651eeb3b2302
>> DeleteServersAdminTestXML-1408641898
>> DeleteServersAdminTestXML-469708524] [instance:
>> 98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
>>
>> BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
>> _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Released file lock "/opt/stack/data/nova/nova-iptables" release
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:115
>> 2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Releasing semaphore "iptables" lock
>> /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
>> 2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
>> [req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
>> ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
>> Semaphore / lock released "_do_refresh_provider_fw_rules" inner
>>
>> Also readable here:
>>
>> http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240
>>
>> (Yes, it's kind of ugly)
>>
>> What occured to me is that in debugging locking issues what we actually
>> care about is 2 things semantically:
>>
>> #1 - tried to get a lock, but someone else has it. Then we know we've
>> got lock contention. .
>> #2 - something is still holding a lock after some "long" amount of time.
>>
>> #2 turned out to be a critical bit in understanding one of the worst
>> recent gate impacting issues.
>>
>> You can write a tool today that analyzes the logs and shows you these
>> things. However, I wonder if we could actually do something creative in
>> the code itself to do this already. I'm curious if the creative use of
>> Timers might let us emit log messages under the conditions above
>> (someone with better understanding of python internals needs to speak up
>> here). Maybe it's too much overhead, but I think it's worth at least
>> asking the question.
>>
>> The same issue exists when it comes to processutils I think, warning
>> that a command is still running after 10s might be really handy, because
>> it turns out that issue #2 was caused by this, and it took quite a bit
>> of decoding to figure that out

Re: [openstack-dev] [Glance] Concurrent update issue in Glance v2 API

2014-09-25 Thread Mark Washenberger
Thanks for diving on this grenade, Alex!

FWIW, I agree with all of your assessments. Just in case I am mistaken, I
summarize them as smaller updates > logical clocks > wall clocks (due to
imprecision and skew).

Given the small size of your patch [4], I'd say let's try to land that. It
is nicer to solve this problem with software rather than with db schema if
that is possible.
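
For anyone skimming, a toy sketch of how the two approaches combine at the SQL
level (illustrative only; the table and column names below are made up and are
not Glance's actual schema or either patch):

from sqlalchemy import Column, DateTime, MetaData, String, Table

metadata = MetaData()
images = Table('images', metadata,
               Column('id', String(36), primary_key=True),
               Column('name', String(255)),
               Column('updated_at', DateTime))


def save_image(conn, image_id, fetched_updated_at, changed_values):
    # [4]: send only the attributes that actually changed.
    # [3]: the extra WHERE makes the write conditional on the row being
    # untouched since we read it (optimistic concurrency control).
    query = (images.update()
             .where(images.c.id == image_id)
             .where(images.c.updated_at == fetched_updated_at)
             .values(**changed_values))
    if conn.execute(query).rowcount == 0:
        # Someone else updated the row first: report a 409 or re-read and retry.
        raise RuntimeError('concurrent image update detected')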

On Thu, Sep 25, 2014 at 9:21 AM, Alexander Tivelkov 
wrote:

> Hi folks!
>
> There is a serious issue [0] in the v2 API of Glance which may lead to
> race conditions during the concurrent updates of Images' metadata.
> It can be fixed in a number of ways, but we need to have some solution
> soon, as we are approaching rc1 release, and the race in image updates
> looks like a serious problem which has to be fixed in J, imho.
>
> A quick description of the problem:
> When the image-update is called (PUT /v2/images/%image_id%/) we get the
> image from the repository, which fetches a record from the DB and forms its
> content into an Image Domain Object ([1]), which is then modified (has its
> attributes updated) and passed through all the layers of our domain model.
> This object is not managed by the SQLAlchemy's session, so the
> modifications of its attributes are not tracked anywhere.
> When all the processing is done and the updated object is passed back to
> the DB repository, it serializes all the attributes of the image into a
> dict ([2]) and then this dict is used to create an UPDATE query for the
> database.
> As this serialization includes all the attributes of the object (rather
> than only the modified ones), the update query updates all the columns of
> the appropriate database row, putting there the values which were
> originally fetched when the processing began. This may obviously overwrite
> the values which could be written there by some other concurrent request.
>
> There are two possible solutions to fix this problem.
> First, known as the optimistic concurrency control, checks if the
> appropriate database row was modified between the data fetching and data
> updates. In case of such modification the update operation reports a
> "conflict" and fails (and may be retried based on the updated data if
> needed). Modification detection is usually based on the timstamps, i.e. the
> query updates the row in database only if the timestamp there matches the
> timestamp of initially fetched data.
> I've introduced this approach in this patch [3], however it has a major
> flaw: I used the 'updated_at' attribute as a timestamp, and this attribute
> is mapped to a DateTime-typed column. In many RDBMS's (including
> MySql<5.6.4) this column stores values with per-second precision and does
> not store fractions of seconds. So, even if patch [3] is merged the race
> conditions may still occur if there are many updates happening at the same
> moment of time.
> A better approach would be to add a new column with int (or longint) type
> to store millisecond-based (or even microsecond-based) timestamps instead
> of (or additionally to) date-time based updated_at. But data model
> modification will require to add new migration etc, which is a major step
> and I don't know if we want to make it so close to the release.
>
> The second solution is to keep track of the changed attributes and
> properties for the image and do not include the unchanged ones into the
> UPDATE query, so nothing gets overwritten. This dramatically reduces the
> threat of races, as the updates of different properties do not interfere
> with each other. Also this is a useful change regardless of the race
> itself: being able to differentiate between changed and unchanged
> attributes may have its own value for other purposes; the DB performance
> will also be better when updating just the needed fields instead of all of
> them.
> I've submitted a patch with this approach as well [4], but it still breaks
> some unittests and I am working to fix them right now.
>
> So, we need to decide which of these approaches (or their combination) to
> take: we may stick with optimistic locking on timestamp (and then decide if
> we are ok with a per-second timestamps or we need to add a new column),
> choose to track state of attributes or combine them together. So, could you
> folks please review patches [3] and [4] and come up with some ideas on them?
>
> Also, probably we should consider targeting [0] to juno-rc1 milestone to
> make sure that this bug is fixed in J. Do you guys think it is possible at
> this stage?
>
> Thanks!
>
>
> [0] https://bugs.launchpad.net/glance/+bug/1371728
> [1]
> https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L74
> [2]
> https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L169
> [3] https://review.openstack.org/#/c/122814/
> [4] https://review.openstack.org/#/c/123722/
>
> --
> Regards,
> Alexander Tivelkov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lis

Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Vishvananda Ishaya
Ok new versions have reversed the order so we can take:

https://review.openstack.org/#/c/121663/4

before:

https://review.openstack.org/#/c/119521/10

I still strongly recommend that we take the second so we at least have
the possibility of backporting the other two patches. And I also wouldn’t
complain if we just took all 4 :)

Vish

On Sep 25, 2014, at 9:44 AM, Vishvananda Ishaya  wrote:

> To explain my rationale:
> 
> I think it is totally reasonable to be conservative and wait to merge
> the actual fixes to the network calls[1][2] until Kilo and have them
> go through the stable/backports process. Unfortunately, due to our object
> design, if we block https://review.openstack.org/#/c/119521/ then there
> is no way we can backport those fixes, so we are stuck for a full 6
> months with abysmal performance. This is why I’ve been pushing to get
> that one fix in. That said, I will happily decouple the two patches.
> 
> Vish
> 
> [1] https://review.openstack.org/#/c/119522/9
> [2] https://review.openstack.org/#/c/119523/10
> 
> On Sep 24, 2014, at 3:51 PM, Michael Still  wrote:
> 
>> Hi,
>> 
>> so, I'd really like to see https://review.openstack.org/#/c/121663/
>> merged in rc1. That patch is approved right now.
>> 
>> However, it depends on https://review.openstack.org/#/c/119521/, which
>> is not approved. 119521 fixes a problem where we make five RPC calls
>> per call to get_network_info, which is an obvious efficiency problem.
>> 
>> Talking to Vish, who is the author of these patches, it sounds like
>> the efficiency issue is a pretty big deal for users of nova-network
>> and he'd like to see 119521 land in Juno. I think that means he's
>> effectively arguing that the bug is release critical.
>> 
>> On the other hand, its only a couple of days until rc1, so we're
>> trying to be super conservative about what we land now in Juno.
>> 
>> So... I'd like to see a bit of a conversation on what call we make
>> here. Do we land 119521?
>> 
>> Michael
>> 
>> -- 
>> Rackspace Australia
> 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] logging around oslo lockutils

2014-09-25 Thread Jay Pipes
+1 for making those two changes. I also have been frustrated doing 
debugging in the gate recently, and any operational-ease-of-debugging 
things like this would be appreciated.
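
Something as small as a Timer-based context manager might already cover the
"still holding a lock after N seconds" case Sean describes below. A rough
sketch only (the names and the 10s default are made up):

import contextlib
import logging
import threading

LOG = logging.getLogger(__name__)


@contextlib.contextmanager
def warn_if_held_too_long(name, timeout=10):
    # The timer only fires if the protected block is still running (i.e. the
    # lock is still held) when the timeout expires - Sean's case #2.
    timer = threading.Timer(
        timeout, LOG.warning,
        ['Lock "%s" still held after %ss', name, timeout])
    timer.start()
    try:
        yield
    finally:
        timer.cancel()

Wrapped around the critical section, that would give the breadcrumb without
any log scraping, at the cost of one timer thread per held lock.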


-jay

On 09/25/2014 08:49 AM, Sean Dague wrote:

Spending a ton of time reading logs, oslo locking ends up basically
creating a ton of output at DEBUG that you have to mentally filter to
find problems:

2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Created new semaphore "iptables" internal_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Acquired semaphore "iptables" lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Attempting to grab external lock "iptables" external_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:178
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got file lock "/opt/stack/data/nova/nova-iptables" acquire
/opt/stack/new/nova/nova/openstack/common/lockutils.py:93
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got semaphore / lock "_do_refresh_provider_fw_rules" inner
/opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-24 18:44:49.244 DEBUG nova.compute.manager
[req-b91cb1c1-f211-43ef-9714-651eeb3b2302
DeleteServersAdminTestXML-1408641898
DeleteServersAdminTestXML-469708524] [instance:
98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
_cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Released file lock "/opt/stack/data/nova/nova-iptables" release
/opt/stack/new/nova/nova/openstack/common/lockutils.py:115
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Releasing semaphore "iptables" lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Semaphore / lock released "_do_refresh_provider_fw_rules" inner

Also readable here:
http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240

(Yes, it's kind of ugly)

What occured to me is that in debugging locking issues what we actually
care about is 2 things semantically:

#1 - tried to get a lock, but someone else has it. Then we know we've
got lock contention. .
#2 - something is still holding a lock after some "long" amount of time.

#2 turned out to be a critical bit in understanding one of the worst
recent gate impacting issues.

You can write a tool today that analyzes the logs and shows you these
things. However, I wonder if we could actually do something creative in
the code itself to do this already. I'm curious if the creative use of
Timers might let us emit log messages under the conditions above
(someone with better understanding of python internals needs to speak up
here). Maybe it's too much overhead, but I think it's worth at least
asking the question.

The same issue exists when it comes to processutils I think, warning
that a command is still running after 10s might be really handy, because
it turns out that issue #2 was caused by this, and it took quite a bit
of decoding to figure that out.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: Thursday, September 25, 2014 9:44 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
> Manage OpenStack using Kubernetes and Docker
> 
> Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
> > Why can't you manage baremetal and containers from a single host with
> nova/neutron? Is this a current missing feature, or have the development
> teams said they will never implement it?
> >
> 
> It's a bug.
> 
> But it is also a complexity that isn't really handled well in Nova's current
> design. Nova wants to send the workload onto the machine, and that is it. In
> this case, you have two workloads, one hosted on the other, and Nova has
> no model for that. You end up in a weird situation where one
> (baremetal) is host for other (containers) and no real way to separate the
> two or identify that dependency.

Ideally, like you say, you should be able to have one host managed by two 
different nova drivers in the same cell. But I think today you can simply use 
two different cells and it should work: one cell for deploying bare metal 
images, one of which contains the nova docker compute resources, and the other 
cell for launching docker instances on those hosts. To the end user it still 
looks like one unified cloud, like we all want, but under the hood it's two 
separate subclouds: an undercloud and an overcloud.

> I think it's worth pursuing in OpenStack, but Steven is solving deployment of
> OpenStack today with tools that exist today. I think Kolla may very well
> prove that the container approach is too different from Nova's design and
> wants to be more separate, at which point our big tent will be in an
> interesting position: Do we adopt Kubernetes and put an OpenStack API on
> it, or do we re-implement it.

That is a very interesting question, worth pursuing.

I think either way, most of the work is going to be in dockerizing the 
services. So that alone is worth playing with too.

I managed to get libvirt to work in docker once. It was a pain. Getting nova 
and neutron bits in that container too would be even harder. I'm waiting to try 
again until I know that systemd will run nicely inside a docker container. It 
would make managing the startup/stopping of the container much easier to get 
right. 

Thanks,
Kevin

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M


> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: Thursday, September 25, 2014 9:35 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
> Manage OpenStack using Kubernetes and Docker
> 
> First, Kevin, please try to figure out a way to reply in-line when you're
> replying to multiple levels of threads. Even if you have to copy and quote it
> manually.. it took me reading your message and the previous message 3
> times to understand the context.

I'm sorry. I think your frustration with it mirrors the frustration I have with 
having to use this blankity blank microsoft webmail that doesn't support inline 
commenting, or having to rdesktop to a windows terminal server so I can reply 
inline. :/

 
> Second, I don't think anybody minds having a control plane for each level of
> control. The point isn't to replace the undercloud, but to replace nova
> rebuild as the way you push out new software while retaining the benefits
> of the image approach.

I don't quite follow. Wouldn't you be using heat autoscaling, not nova directly?

Thanks,
Kevin
 
> Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
> > Then you still need all the kubernetes api/daemons for the master and
> slaves. If you ignore the complexity this adds, then it seems simpler than
> just using openstack for it. But really, it still is an under/overcloud kind 
> of
> setup, you're just using kubernetes for the undercloud, and openstack for the
> overcloud?
> >
> > Thanks,
> > Kevin
> > 
> > From: Steven Dake [sd...@redhat.com]
> > Sent: Wednesday, September 24, 2014 8:02 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla:
> > Deploy and Manage OpenStack using Kubernetes and Docker
> >
> > On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
> > Steven
> > I have to ask what is the motivation and benefits we get from integrating
> Kubernetes into Openstack? Would be really useful if you can elaborate and
> outline some use cases and benefits Openstack and Kubernetes can gain.
> >
> > /Alan
> >
> > Alan,
> >
> > I am either unaware or ignorant of another Docker scheduler that is
> currently available that has a big (100+ folks) development community.
> Kubernetes meets these requirements and is my main motivation for using
> it to schedule Docker containers.  There are other ways to skin this cat - The
> TripleO folks wanted at one point to deploy nova with the nova docker VM
> manager to do such a thing.  This model seemed a little clunky to me since it
> isn't purpose built around containers.
> >
> > As far as use cases go, the main use case is to run a specific Docker
> container on a specific Kubernetes "minion" bare metal host.  These docker
> containers are then composed of the various config tools and services for
> each detailed service in OpenStack.  For example, mysql would be a
> container, and tools to configure the mysql service would exist in the
> container.  Kubernetes would pass config options for the mysql database
> prior to scheduling and once scheduled, Kubernetes would be responsible
> for connecting the various containers together.
> >
> > Regards
> > -steve
> >
> >
> >
> > From: Steven Dake [mailto:sd...@redhat.com]
> > Sent: September-24-14 7:41 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla:
> > Deploy and Manage OpenStack using Kubernetes and Docker
> >
> > On 09/24/2014 10:12 AM, Joshua Harlow wrote:
> > Sounds like an interesting project/goal and will be interesting to see
> where this goes.
> >
> > A few questions/comments:
> >
> > How much golang will people be exposed to with this addition?
> >
> > Joshua,
> >
> > I expect very little.  We intend to use Kubernetes as an upstream project,
> rather then something we contribute to directly.
> >
> >
> > Seeing that this could be the first 'go' using project it will be 
> > interesting to
> see where this goes (since afaik none of the infra support exists, and people
> aren't likely to familiar with go vs python in the openstack community
> overall).
> >
> > What's your thoughts on how this will affect the existing openstack
> container effort?
> >
> > I don't think it will have any impact on the existing Magnum project.  At
> some point if Magnum implements scheduling of docker containers, we
> may add support for Magnum in addition to Kubernetes, but it is impossible
> to tell at this point.  I don't want to derail either project by trying to 
> force
> them together unnaturally so early.
> >
> >
> > I see that kubernetes isn't exactly a small project either (~90k LOC, for
> those who use these types of metrics), so I wonder how that will affect
> people getting involved here, aka, who has the
> resources/operators/other... available to actually setup/deploy/run
> kube

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread David Shrewsbury
Hi!

On Thu, Sep 25, 2014 at 12:23 PM, Lucas Alvares Gomes  wrote:

> Hi,
>
> Today we have hit the problem of having an outdated sample
> configuration file again[1]. The problem of the sample generation is
> that it picks up configuration from other projects/libs
> (keystoneclient in that case) and this break the Ironic gate without
> us doing anything.
>
> So, what you guys think about removing the test that compares the
> configuration files and makes it no longer gate[2]?
>
> We already have a tox command to generate the sample configuration
> file[3], so folks that needs it can generate it locally.
>
> Does anyone disagree?
>
>
+1 to this, but I think we should document how to generate the sample config
in our documentation (install guide?).

-Dave
-- 
David Shrewsbury (Shrews)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] [Heat] [Mistral] Merlin project PoC update: shift from HOT builder to Mistral Workbook builder

2014-09-25 Thread Timur Sufiev
Hello, folks!

Following Drago Rosson's introduction of Barricade.js and our discussion in
the ML about the possibility of using it in Merlin [1], I've decided to change
the plans for the PoC: the goal for Merlin's PoC is now to implement a Mistral
Workbook builder on top of Barricade.js. The reasons for that are:

* To better understand Barricade.js's potential as a data abstraction layer in
Merlin, I need to learn much more about its possibilities and limitations
than simply examining/reviewing its source code allows. The best way to
do this is by building upon it.
* It's becoming too crowded in the HOT builder's sandbox - doing the same
work as Drago is currently doing [2] seems like a waste of resources to me
(especially if he open-sources his HOT builder someday, just as he did
with Barricade.js).
* Why Mistral and not Murano or Solum? Because Mistral's YAML templates
have a simpler structure than Murano's and are better defined at the
moment than Solum's.

There are already some commits in https://github.com/stackforge/merlin and,
since the client-side app doesn't talk to the Mistral server yet, it is
pretty easy to run it (just follow the instructions in README.md) and then
see it in a browser at http://localhost:8080. The UI is not great yet, as the
current focus is exploring the data abstraction layer, i.e. how to exploit
Barricade.js capabilities to reflect all the relations between Mistral's
entities. I hope to finish the minimal set of features in a few weeks - and
will certainly announce it in the ML.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044591.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/044186.html

-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Jay Faulkner

On Sep 25, 2014, at 9:23 AM, Lucas Alvares Gomes  wrote:

> Hi,
> 
> Today we have hit the problem of having an outdated sample
> configuration file again[1]. The problem of the sample generation is
> that it picks up configuration from other projects/libs
> (keystoneclient in that case) and this break the Ironic gate without
> us doing anything.
> 
> So, what you guys think about removing the test that compares the
> configuration files and makes it no longer gate[2]?
> 
> We already have a tox command to generate the sample configuration
> file[3], so folks that needs it can generate it locally.
> 

+1

In a perfect world, one would be generated and put somewhere for easy access 
without a development environment setup. However, I think the impact of having 
this config file break pep8 non-interactively is significant enough that we should 
do it now and worry about generating one for the docs later. :)

-
Jay Faulkner

> Does anyone disagree?
> 
> [1] https://review.openstack.org/#/c/124090/
> [2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
> [3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 12:31 PM, Douglas Mendizabal wrote:
> Hi OpenStack-dev,
> 
> I would like to put my name in the hat for PTL of the Key Management Service
> Program, which includes Barbican, python-barbicanclient, Kite, and
> python-kiteclient.
> 
> I’ve had the pleasure of being a part of the Barbican team since the very
> beginning of the project.  During the last year and half I’ve helped
> Barbican grow from a project that only a couple of Rackers were hacking on,
> to an Incubated OpenStack project that continues to gain adoption in the
> community, and I would like to see that momentum continue through the Kilo
> cycle.
> 
> I’ve been a big fan and supporter of Jarret Raim’s vision for Barbican, and
> it would be an honor for me to continue his work as the new PTL for the Key
> Management Program.  One of my goals for the Kilo cycle is to move Barbican
> through the Integration process by working with other OpenStack projects to
> enable the security minded use-cases that are now possible with Barbican.
> Additionally, I would like to continue to focus on the quality of Barbican
> code by leveraging the knowledge and lessons learned from deploying Barbican
> at Rackspace.
> 
> Thank you,
> Douglas Mendizábal
> 
> 
> Douglas Mendizábal
> IRC: redrobot
> PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Vishvananda Ishaya
To explain my rationale:

I think it is totally reasonable to be conservative and wait to merge
the actual fixes to the network calls[1][2] until Kilo and have them
go through the stable/backports process. Unfortunately, due to our object
design, if we block https://review.openstack.org/#/c/119521/ then there
is no way we can backport those fixes, so we are stuck for a full 6
months with abysmal performance. This is why I’ve been pushing to get
that one fix in. That said, I will happily decouple the two patches.

Vish

[1] https://review.openstack.org/#/c/119522/9
[2] https://review.openstack.org/#/c/119523/10

On Sep 24, 2014, at 3:51 PM, Michael Still  wrote:

> Hi,
> 
> so, I'd really like to see https://review.openstack.org/#/c/121663/
> merged in rc1. That patch is approved right now.
> 
> However, it depends on https://review.openstack.org/#/c/119521/, which
> is not approved. 119521 fixes a problem where we make five RPC calls
> per call to get_network_info, which is an obvious efficiency problem.
> 
> Talking to Vish, who is the author of these patches, it sounds like
> the efficiency issue is a pretty big deal for users of nova-network
> and he'd like to see 119521 land in Juno. I think that means he's
> effectively arguing that the bug is release critical.
> 
> On the other hand, its only a couple of days until rc1, so we're
> trying to be super conservative about what we land now in Juno.
> 
> So... I'd like to see a bit of a conversation on what call we make
> here. Do we land 119521?
> 
> Michael
> 
> -- 
> Rackspace Australia



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2014-09-25 09:13:26 -0700:
> Why can't you manage baremetal and containers from a single host with 
> nova/neutron? Is this a current missing feature, or have the development teams 
> said they will never implement it?
> 

It's a bug.

But it is also a complexity that isn't really handled well in Nova's
current design. Nova wants to send the workload onto the machine, and
that is it. In this case, you have two workloads, one hosted on the other,
and Nova has no model for that. You end up in a weird situation where one
(baremetal) is host for other (containers) and no real way to separate
the two or identify that dependency.

I think it's worth pursuing in OpenStack, but Steven is solving deployment
of OpenStack today with tools that exist today. I think Kolla may very
well prove that the container approach is too different from Nova's design
and wants to be more separate, at which point our big tent will be in
an interesting position: Do we adopt Kubernetes and put an OpenStack
API on it, or do we re-implement it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican] PTL for Barbican

2014-09-25 Thread Jarret Raim
All,


It has been my pleasure to lead the Key Management program and Barbican
over the last year and a half. I'm proud of the work we have done, the
problems we are solving and the community that has developed around the
project. 

It should be no surprise to our community members that my day job has
pulled me further and further away from Barbican on a day to day basis. It
is for this reason that I am planning to step down as PTL for the program.

Thankfully, I've had great support from my team as Douglas Mendizabal has
stepped in to help with many of my PTL duties. He's been running our
weekly meetings, releases and shepherding specs through for a good chunk
of the Juno release cycle. Simply put, without his hard work, we wouldn't
have made the progress we have made for this release.

I encourage all our community members to support Douglas. He has my full
endorsement and I'm confident he is the right person to lead us through
the Kilo cycle, graduation and the first public Cloud deployment of
Barbican at Rackspace.



Thanks,

--
Jarret Raim 
@jarretraim




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Ah. So the goal of project Kolla, then, is to deploy OpenStack via Docker using 
whatever means works, not to deploy OpenStack using Docker+Kubernetes 
specifically; the first stab at an implementation just happens to use Kubernetes. 
That seems like a much more reasonable goal to me.

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Thursday, September 25, 2014 8:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/25/2014 12:01 AM, Clint Byrum wrote:
> Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:
>> Clint Byrum  wrote on 09/25/2014 12:13:53 AM:
>>
>>> Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
 Steven Dake  wrote on 09/24/2014 11:02:49 PM:
> ...
 ...
 Does TripleO require container functionality that is not available
 when using the Docker driver for Nova?

 As far as I can tell, the quantitative handling of capacities and
 demands in Kubernetes is much inferior to what Nova does today.

>>> Yes, TripleO needs to manage baremetal and containers from a single
>>> host. Nova and Neutron do not offer this as a feature unfortunately.
>> In what sense would Kubernetes "manage baremetal" (at all)?
>> By "from a single host" do you mean that a client on one host
>> can manage remote baremetal and containers?
>>
>> I can see that Kubernetes allows a client on one host to get
>> containers placed remotely --- but so does the Docker driver for Nova.
>>
> I mean that one box would need to host Ironic, Docker, and Nova, for
> the purposes of deploying OpenStack. We call it the "undercloud", or
> sometimes the "Deployment Cloud".
>
> It's not necessarily something that Nova/Neutron cannot do by design,
> but it doesn't work now.
>
> As far as use cases go, the main use case is to run a specific
> Docker container on a specific Kubernetes "minion" bare metal host.
>> Clint, in another branch of this email tree you referred to
>> "the VMs that host Kubernetes".  How does that square with
>> Steve's text that seems to imply bare metal minions?
>>
> That was in a more general context, discussing using Kubernetes for
> general deployment. Could have just as easily have said "hosts",
> "machines", or "instances".
>
>> I can see that some people have had much more detailed design
>> discussions than I have yet found.  Perhaps it would be helpful
>> to share an organized presentation of the design thoughts in
>> more detail.
>>
> I personally have not had any detailed discussions about this before it
> was announced. I've just dug into the design and some of the code of
> Kubernetes because it is quite interesting to me.
>
 If TripleO already knows it wants to run a specific Docker image
 on a specific host then TripleO does not need a scheduler.

>>> TripleO does not ever specify destination host, because Nova does not
>>> allow that, nor should it. It does want to isolate failure domains so
>>> that all three Galera nodes aren't on the same PDU, but we've not really
>>> gotten to the point where we can do that yet.
>> So I am still not clear on what Steve is trying to say is the main use
>> case.
>> Kubernetes is even farther from balancing among PDUs than Nova is.
>> At least Nova has a framework in which this issue can be posed and solved.
>> I mean a framework that actually can carry the necessary information.
>> The Kubernetes scheduler interface is extremely impoverished in the
>> information it passes and it uses GO structs --- which, like C structs,
>> can not be subclassed.
> I don't think this is totally clear yet. The thing that Steven seems to be
> trying to solve is deploying OpenStack using docker, and Kubernetes may
> very well be a better choice than Nova for this. There are some really
> nice features, and a lot of the benefits we've been citing about image
> based deployments are realized in docker without the pain of a full OS
> image to redeploy all the time.

This is precisely the problem I want to solve.  I looked at Nova+Docker
as a solution, and it seems to me the runway to get to a successful
codebase is longer with more risk.  That is why this is an experiment to
see if a Kubernetes-based approach would work.  If at the end of the day
we throw out Kubernetes as a scheduler once we have the other problems
solved and reimplement Kubernetes in Nova+Docker, I think that would be
an acceptable outcome, but not something I want to *start* with but
*finish* with.

Regards
-steve

> The structs vs. classes argument is completely out of line and has
> nothing to do with where Kubernetes might go in the future. It's like
> saying because cars use internal combustion engines they are limited. It
> is just a facet of how it works today.
>
>> Nova's filter scheduler includes a fatal bug that bites when balancing and
>> you want

Re: [openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Dmitry Tantsur

On 09/25/2014 06:23 PM, Lucas Alvares Gomes wrote:

Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this breaks the Ironic gate without
us doing anything.

So, what do you guys think about removing the test that compares the
configuration files and making it no longer gate [2]?

We already have a tox command to generate the sample configuration
file [3], so folks that need it can generate it locally.

Does anyone disagree?
It's a pity we won't have sample config by default, but I guess it can't 
be helped. +1 from me.




[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Clint Byrum
First, Kevin, please try to figure out a way to reply in-line when you're
replying to multiple levels of threads. Even if you have to copy and
quote it manually.. it took me reading your message and the previous
message 3 times to understand the context.

Second, I don't think anybody minds having a control plane for each
level of control. The point isn't to replace the undercloud, but to
replace nova rebuild as the way you push out new software while
retaining the benefits of the image approach.

Excerpts from Fox, Kevin M's message of 2014-09-25 09:07:10 -0700:
> Then you still need all the kubernetes api/daemons for the master and slaves. 
> If you ignore the complexity this adds, then it seems simpler than just using 
> openstack for it. But really, it still is an under/overcloud kind of setup; 
> you're just using kubernetes for the undercloud, and openstack for the 
> overcloud?
> 
> Thanks,
> Kevin
> 
> From: Steven Dake [sd...@redhat.com]
> Sent: Wednesday, September 24, 2014 8:02 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and 
> Manage OpenStack using Kubernetes and Docker
> 
> On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
> Steven
> I have to ask what is the motivation and benefits we get from integrating 
> Kubernetes into Openstack? Would be really useful if you can elaborate and 
> outline some use cases and benefits Openstack and Kubernetes can gain.
> 
> /Alan
> 
> Alan,
> 
> I am either unaware or ignorant of another Docker scheduler that is currently 
> available that has a big (100+ folks) development community.  Kubernetes 
> meets these requirements and is my main motivation for using it to schedule 
> Docker containers.  There are other ways to skin this cat - The TripleO folks 
> wanted at one point to deploy nova with the nova docker VM manager to do such 
> a thing.  This model seemed a little clunky to me since it isn't purpose 
> built around containers.
> 
> As far as use cases go, the main use case is to run a specific Docker 
> container on a specific Kubernetes "minion" bare metal host.  These docker 
> containers are then composed of the various config tools and services for 
> each detailed service in OpenStack.  For example, mysql would be a container, 
> and tools to configure the mysql service would exist in the container.  
> Kubernetes would pass config options for the mysql database prior to 
> scheduling and once scheduled, Kubernetes would be responsible for connecting 
> the various containers together.
> 
> Regards
> -steve
> 
> 
> 
> From: Steven Dake [mailto:sd...@redhat.com]
> Sent: September-24-14 7:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and 
> Manage OpenStack using Kubernetes and Docker
> 
> On 09/24/2014 10:12 AM, Joshua Harlow wrote:
> Sounds like an interesting project/goal and will be interesting to see where 
> this goes.
> 
> A few questions/comments:
> 
> How much golang will people be exposed to with this addition?
> 
> Joshua,
> 
> I expect very little.  We intend to use Kubernetes as an upstream project, 
> rather than something we contribute to directly.
> 
> 
> Seeing that this could be the first 'go' using project it will be interesting 
> to see where this goes (since afaik none of the infra support exists, and 
> people aren't likely to be familiar with go vs python in the openstack community 
> overall).
> 
> What's your thoughts on how this will affect the existing openstack container 
> effort?
> 
> I don't think it will have any impact on the existing Magnum project.  At 
> some point if Magnum implements scheduling of docker containers, we may add 
> support for Magnum in addition to Kubernetes, but it is impossible to tell at 
> this point.  I don't want to derail either project by trying to force them 
> together unnaturally so early.
> 
> 
> I see that kubernetes isn't exactly a small project either (~90k LOC, for 
> those who use these types of metrics), so I wonder how that will affect 
> people getting involved here, aka, who has the resources/operators/other... 
> available to actually setup/deploy/run kubernetes, when operators are likely 
> still just struggling to run openstack itself (at least operators are getting 
> used to the openstack warts, a new set of kubernetes warts could not be so 
> helpful).
> 
> Yup it is fairly large in size.  Time will tell if this approach will work.
> 
> This is an experiment as Robert and others on the thread have pointed out :).
> 
> Regards
> -steve
> 
> 
> On Sep 23, 2014, at 3:40 PM, Steven Dake 
> mailto:sd...@redhat.com>> wrote:
> 
> 
> Hi folks,
> 
> I'm pleased to announce the development of a new project Kolla which is Greek 
> for glue :). Kolla has a goal of providing an implementation that deploys 
> OpenStack using Kubernetes and Docker. This proje

[openstack-dev] [barbican] PTL Candidacy

2014-09-25 Thread Douglas Mendizabal
Hi OpenStack-dev,

I would like to put my name in the hat for PTL of the Key Management Service
Program, which includes Barbican, python-barbicanclient, Kite, and
python-kiteclient.

I’ve had the pleasure of being a part of the Barbican team since the very
beginning of the project.  During the last year and half I’ve helped
Barbican grow from a project that only a couple of Rackers were hacking on,
to an Incubated OpenStack project that continues to gain adoption in the
community, and I would like to see that momentum continue through the Kilo
cycle.

I’ve been a big fan and supporter of Jarret Raim’s vision for Barbican, and
it would be an honor for me to continue his work as the new PTL for the Key
Management Program.  One of my goals for the Kilo cycle is to move Barbican
through the Integration process by working with other OpenStack projects to
enable the security minded use-cases that are now possible with Barbican.
Additionally, I would like to continue to focus on the quality of Barbican
code by leveraging the knowledge and lessons learned from deploying Barbican
at Rackspace.

Thank you,
Douglas Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C




smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Get rid of the sample config file

2014-09-25 Thread Lucas Alvares Gomes
Hi,

Today we have hit the problem of having an outdated sample
configuration file again[1]. The problem of the sample generation is
that it picks up configuration from other projects/libs
(keystoneclient in that case) and this breaks the Ironic gate without
us doing anything.

So, what do you guys think about removing the test that compares the
configuration files and making it no longer gate [2]?

We already have a tox command to generate the sample configuration
file [3], so folks that need it can generate it locally.
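
(For reference, generating it locally is just a matter of running, assuming
the genconfig environment from the tox.ini linked in [3]:

    tox -egenconfig

which regenerates the sample file in your working tree.)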

Does anyone disagree?

[1] https://review.openstack.org/#/c/124090/
[2] https://github.com/openstack/ironic/blob/master/tox.ini#L23
[3] https://github.com/openstack/ironic/blob/master/tox.ini#L32-L34

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Concurrent update issue in Glance v2 API

2014-09-25 Thread Alexander Tivelkov
Hi folks!

There is a serious issue [0] in the v2 API of Glance which may lead to race
conditions during the concurrent updates of Images' metadata.
It can be fixed in a number of ways, but we need to have some solution
soon, as we are approaching rc1 release, and the race in image updates
looks like a serious problem which has to be fixed in J, imho.

A quick description of the problem:
When the image-update is called (PUT /v2/images/%image_id%/) we get the
image from the repository, which fetches a record from the DB and forms its
content into an Image Domain Object ([1]), which is then modified (has its
attributes updated) and passed through all the layers of our domain model.
This object is not managed by SQLAlchemy's session, so the
modifications of its attributes are not tracked anywhere.
When all the processing is done and the updated object is passed back to
the DB repository, it serializes all the attributes of the image into a
dict ([2]) and then this dict is used to create an UPDATE query for the
database.
As this serialization includes all the attributes of the object (rather than
only the modified ones), the update query updates all the columns of the
appropriate database row, putting back the values which were originally
fetched when the processing began. This may obviously overwrite values
written there by some other concurrent request.

There are two possible solutions to fix this problem.
First, known as the optimistic concurrency control, checks if the
appropriate database row was modified between the data fetching and data
updates. In case of such modification the update operation reports a
"conflict" and fails (and may be retried based on the updated data if
needed). Modification detection is usually based on timestamps, i.e. the
query updates the row in the database only if the timestamp there matches the
timestamp of the initially fetched data.
I've introduced this approach in this patch [3], however it has a major
flaw: I used the 'updated_at' attribute as a timestamp, and this attribute
is mapped to a DateTime-typed column. In many RDBMSs (including
MySQL < 5.6.4) this column stores values with per-second precision and does
not store fractions of seconds. So, even if patch [3] is merged the race
conditions may still occur if there are many updates happening at the same
moment of time.
A better approach would be to add a new column with int (or longint) type
to store millisecond-based (or even microsecond-based) timestamps instead
of (or in addition to) the datetime-based updated_at. But a data model
modification will require adding a new migration etc., which is a major step,
and I don't know if we want to make it so close to the release.
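
Either way, the conditional update at the heart of this approach boils down to
something like the sketch below (a simplified stand-in table, not Glance's
actual models or repository code):

    from sqlalchemy import MetaData, Table, Column, String, DateTime

    metadata = MetaData()
    images = Table('images', metadata,
                   Column('id', String(36), primary_key=True),
                   Column('name', String(255)),
                   Column('updated_at', DateTime))

    def update_image(conn, image_id, new_values, seen_updated_at):
        # Only touch the row if it still carries the timestamp we read earlier.
        stmt = (images.update()
                .where(images.c.id == image_id)
                .where(images.c.updated_at == seen_updated_at)
                .values(**new_values))
        if conn.execute(stmt).rowcount == 0:
            # The row changed under us: report a conflict (409) and let the
            # caller retry on top of fresh data.
            raise RuntimeError('concurrent image update detected')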

The second solution is to keep track of the changed attributes and
properties of the image and not include the unchanged ones in the
UPDATE query, so nothing gets overwritten. This dramatically reduces the
threat of races, as the updates of different properties do not interfere
with each other. Also this is a useful change regardless of the race
itself: being able to differentiate between changed and unchanged
attributes may have its own value for other purposes; the DB performance
will also be better when updating just the needed fields instead of all of
them.
I've submitted a patch with this approach as well [4], but it still breaks
some unittests and I am working to fix them right now.
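
In caricature, this second approach just means remembering which attributes
were touched and serializing only those; a rough sketch (not the actual code
in [4]):

    class TrackedImage(object):
        def __init__(self, **db_values):
            # Values loaded from the DB are not "changes".
            self.__dict__['_changed'] = set()
            self.__dict__.update(db_values)

        def __setattr__(self, name, value):
            if getattr(self, name, None) != value:
                self._changed.add(name)
            self.__dict__[name] = value

        def dirty_values(self):
            # Only these keys end up in the UPDATE statement.
            return dict((name, self.__dict__[name]) for name in self._changed)

The repository layer would then build the UPDATE query from dirty_values()
alone instead of from the full attribute dict.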

So, we need to decide which of these approaches (or their combination) to
take: we may stick with optimistic locking on timestamp (and then decide if
we are ok with per-second timestamps or need to add a new column),
choose to track the state of attributes, or combine the two. So, could you
folks please review patches [3] and [4] and come up with some ideas on them?

Also, probably we should consider targeting [0] to juno-rc1 milestone to
make sure that this bug is fixed in J. Do you guys think it is possible at
this stage?

Thanks!


[0] https://bugs.launchpad.net/glance/+bug/1371728
[1]
https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L74
[2]
https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L169
[3] https://review.openstack.org/#/c/122814/
[4] https://review.openstack.org/#/c/123722/

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread John Garbutt
On 25 September 2014 14:10, Daniel P. Berrange  wrote:
>> The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
>> we work harder on getting people to buy into the priorities that are
>> set, and actively provoke more debate on their "correctness", and we
>> reduce the bar for what needs a blueprint.
>>
>> We can't have 50 high priority blueprints, it doesn't mean anything,
>> right? We need to trim the list down to a manageable number, based on
>> the agreed project priorities. Thats all I mean by slots / runway at
>> this point.
>
> I would suggest we don't try to rank high/medium/low as that is
> too coarse, but rather just an ordered priority list. Then you
> would not be in the situation of having 50 high blueprints. We
> would instead naturally just start at the highest priority and
> work downwards.

OK. I guess I was fixating on fitting things into launchpad.

I guess having both might be what happens.

>> > The runways
>> > idea is just going to make me less efficient at reviewing. So I'm
>> > very much against it as an idea.
>>
>> This proposal is different to the runways idea, although it certainly
>> borrows aspects of it. I just don't understand how this proposal has
>> all the same issues?
>>
>>
>> The key to the kilo-3 proposal, is about getting better at saying no,
>> this blueprint isn't very likely to make kilo.
>>
>> If we focus on a smaller number of blueprints to review, we should be
>> able to get a greater percentage of those fully completed.
>>
>> I am just using slots/runway-like ideas to help pick the high priority
>> blueprints we should concentrate on, during that final milestone.
>> Rather than keeping the distraction of 15 or so low priority
>> blueprints, with those poor submitters jamming up the check queue, and
>> constantly rebasing, and having to deal with the odd stray review
>> comment they might get lucky enough to get.
>>
>> Maybe you think this bit is overkill, and thats fine. But I still
>> think we need a way to stop wasting so much of peoples time on things
>> that will not make it.
>
> The high priority blueprints are going to end up being mostly the big
> scope changes which take alot of time to review & probably go through
> many iterations. The low priority blueprints are going to end up being
> the small things that don't consume significant resource to review and
> are easy to deal with in the time we're waiting for the big items to
> go through rebases or whatever. So what I don't like about the runways
> slots idea is that removes the ability to be agile and take the initiative
> to review & approve the low priority stuff that would otherwise never
> make it through.

The idea is more around concentrating on the *same* list of things.

Certainly we need to avoid the priority inversion of concentrating
only on the big things.

It's also why I suggested that for kilo-1 and kilo-2, we allow any
blueprint to merge, and only restrict it to a specific list in kilo-3,
the idea being to maximise the number of things that get completed,
rather than merging some half-finished blueprints but not getting to the
good bits.


Anyways, it seems like this doesn't hit a middle ground that would
gain pre-summit approval. Or at least needs some online chat time to
work out something.


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Why can't you manage baremetal and containers from a single host with 
nova/neutron? Is this currently a missing feature, or have the development teams 
said they will never implement it?

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Wednesday, September 24, 2014 9:13 PM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
> Steven Dake  wrote on 09/24/2014 11:02:49 PM:
>
> > On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
> > Steven
> > I have to ask what is the motivation and benefits we get from
> > integrating Kubernetes into Openstack? Would be really useful if you
> > can elaborate and outline some use cases and benefits Openstack and
> > Kubernetes can gain.
> >
> > /Alan
> >
> > Alan,
> >
> > I am either unaware or ignorant of another Docker scheduler that is
> > currently available that has a big (100+ folks) development
> > community.  Kubernetes meets these requirements and is my main
> > motivation for using it to schedule Docker containers.  There are
> > other ways to skin this cat - The TripleO folks wanted at one point
> > to deploy nova with the nova docker VM manager to do such a thing.
> > This model seemed a little clunky to me since it isn't purpose built
> > around containers.
>
> Does TripleO require container functionality that is not available
> when using the Docker driver for Nova?
>
> As far as I can tell, the quantitative handling of capacities and
> demands in Kubernetes is much inferior to what Nova does today.
>

Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

> > As far as use cases go, the main use case is to run a specific
> > Docker container on a specific Kubernetes "minion" bare metal host.
>
> If TripleO already knows it wants to run a specific Docker image
> on a specific host then TripleO does not need a scheduler.
>

TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

> > These docker containers are then composed of the various config
> > tools and services for each detailed service in OpenStack.  For
> > example, mysql would be a container, and tools to configure the
> > mysql service would exist in the container.  Kubernetes would pass
> > config options for the mysql database prior to scheduling
>
> I am not sure what is meant here by "pass config options" nor how it
> would be done prior to scheduling; can you please clarify?
> I do not imagine Kubernetes would *choose* the config values,
> K8s does not know anything about configuring OpenStack.
> Before scheduling, there is no running container to pass
> anything to.
>

Docker containers tend to use environment variables passed to the initial
command to configure things. The Kubernetes API allows setting these
environment variables on creation of the container.
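
For concreteness, a container entry in a pod definition carries its
configuration roughly like this (field names are indicative only, not an
exact Kubernetes API payload, and the image name is made up):

    mariadb_container = {
        "name": "mariadb",
        "image": "example/fedora-mariadb",   # hypothetical image name
        "env": [
            {"name": "MYSQL_ROOT_PASSWORD", "value": "secret"},
            {"name": "MYSQL_DATABASE", "value": "keystone"},
        ],
    }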

> >   and once
> > scheduled, Kubernetes would be responsible for connecting the
> > various containers together.
>
> Kubernetes has a limited role in connecting containers together.
> K8s creates the networking environment in which the containers
> *can* communicate, and passes environment variables into containers
> telling them from what protocol://host:port/ to import each imported
> endpoint.  Kubernetes creates a universal reverse proxy on each
> minion, to provide endpoints that do not vary as the servers
> move around.
> It is up to stuff outside Kubernetes to decide
> what should be connected to what, and it is up to the containers
> to read the environment variables and actually connect.
>

This is a nice simple interface though, and I like that it is narrowly
defined, not trying to be "anything that containers want to share with
other containers."

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-25 Thread Day, Phil
> > Hi Jay,
> >
> > So just to be clear, are you saying that we should generate 2
> > notification messages on Rabbit for every DB update?   That feels
> > like a big overkill for me.   If I follow that login then the current
> > state transition notifications should also be changed to "Starting to
> > update task state / finished updating task state"  - which seems just
> > daft and confuisng logging with notifications.
> > Sandy's answer where start /end are used if there is a significant
> > amount of work between the two and/or the transaction spans multiple
> > hosts makes a lot more sense to me.   Bracketing a single DB call
> > with two notification messages rather than just a single one on
> > success to show that something changed would seem to me to be much
> > more in keeping with the concept of notifying on key events.
> 
> I can see your point, Phil. But what about when the set of DB calls takes a
> not-insignificant amount of time? Would the event be considered significant
> then? If so, sending only the "I completed creating this thing" notification
> message might mask the fact that the total amount of time spent creating
> the thing was significant.

Sure, I think there's a judgment call to be made on a case by case basis on 
this.   In general though I'd say it's tasks that do more than just update the 
database that need to provide this kind of timing data.   Simple object 
creation / db table inserts don't really feel like they need to be individually 
timed by pairs of messages - if there is value in providing the creation time 
that could just be part of the payload of the single message, rather than 
doubling up on messages.
 
> 
> That's why I think it's safer to always wrap tasks -- a series of actions that
> *do* one or more things -- with start/end/abort context managers that send
> the appropriate notification messages.
> 
> Some notifications are for events that aren't tasks, and I don't think those
> need to follow start/end/abort semantics. Your example of an instance state
> change is not a task, and therefore would not need a start/end/abort
> notification manager. However, the user action of say, "Reboot this server"
> *would* have a start/end/abort wrapper for the "REBOOT_SERVER" event.
> In between the start and end notifications for this REBOOT_SERVER event,
> there may indeed be multiple SERVER_STATE_CHANGED notification
> messages sent, but those would not have start/end/abort wrappers around
> them.
> 
> Make a bit more sense?
> -jay
> 
Sure - it sounds like we're agreed in principle then that not all operations 
need start/end/abort messages, only those that are a series of operations.

So in that context the server group operations to me still look like they fall 
into the first group.
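
(For anyone following along, the kind of task wrapper being discussed is
roughly the following; the names are hypothetical and don't match Nova's
actual notifier API:

    import contextlib

    @contextlib.contextmanager
    def task_notifications(notifier, event_type, payload):
        # Emit .start, then .end on success, or .abort if the task raises.
        notifier.info(event_type + '.start', payload)
        try:
            yield
        except Exception:
            notifier.error(event_type + '.abort', payload)
            raise
        notifier.info(event_type + '.end', payload)

    # e.g. with task_notifications(notifier, 'compute.instance.reboot', payload):
    #          ...do the multi-step work...

whereas a simple object creation would just send the one message on success.)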

Phil



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Choice of Series goal of a blueprint

2014-09-25 Thread Joe Gordon
On Thu, Sep 25, 2014 at 7:22 AM, Angelo Matarazzo <
angelo.matara...@dektech.com.au> wrote:

> Hi all,
> Can I create a blueprint and choose a previous Series goal (e.g. Icehouse)?
> I think that it can be possible but no reviewer or driver will be
> interested in it.
> Right?
>
>
I am not sure what the 'why' is here, but Icehouse is in stable
maintenance mode, so it is not accepting new features.


> Best regards,
> Angelo
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Migrations in feature branch

2014-09-25 Thread Mike Bayer

If Neutron is ready for more Alembic features I could in theory begin work on 
https://bitbucket.org/zzzeek/alembic/issue/167/multiple-heads-branch-resolution-support
 .Folks should ping me on IRC regarding this.


On Sep 24, 2014, at 5:30 AM, Salvatore Orlando  wrote:

> Relying again on automatic schema generation could be error-prone. It can 
> only be enabled globally, and does not work when models are altered if the 
> table for the model being altered already exists in the DB schema.
> 
> I don't think it would be a big problem to put these migrations in the main 
> sequence once the feature branch is merged back into master.
> Alembic unfortunately does not yet do a great job in maintaining multiple 
> timelines. Even if only a single migration branch is supported, in theory one 
> could have a separate alembic environment for the feature branch, but that in 
> my opinion just creates the additional problem of handling a new environment, 
> and does not solve the initial problem of re-sequencing migrations.
> 
> Re-sequencing at merge time is not going to be a problem in my opinion. 
> However, keeping all the lbaas migrations chained together will help. You can 
> also do as Henry suggests, but that option has the extra (possibly 
> negligible) cost of squashing all migrations for the whole feature branch at 
> merge time.
> 
> As an example:
> 
> MASTER  ---> X -> X+1 -> ... -> X+n
> \
> FEATURE  \-> Y -> Y+1 -> ... -> Y+m
> 
> At every rebase, the migration timeline for the feature branch could 
> be rearranged as follows:
> 
> MASTER  ---> X -> X+1 -> ... -> X+n --->
>  \
> FEATURE   \-> Y=X+n -> Y+1 -> ... -> Y+m = X+n+m
> 
> And therefore when the final merge in master comes, all the migrations in the 
> feature branch can be inserted in sequence on top of master's HEAD.
> I have not tried this, but I reckon that conceptually it should work.
> 
> Salvatore
> 
> 
> On 24 September 2014 08:16, Kevin Benton  wrote:
> If these are just feature branches and they aren't intended to be
> deployed for long life cycles, why don't we just skip the db migration
> and enable auto-schema generation inside of the feature branch? Then a
> migration can be created once it's time to actually merge into master.
> 
> On Tue, Sep 23, 2014 at 9:37 PM, Brandon Logan
>  wrote:
> > Well the problem with resequencing on a merge is that a code change for
> > the first migration must be added first and merged into the feature
> > branch before the merge is done.  Obviously this takes review time
> > unless someone of authority pushes it through.  We'll run into this same
> > problem on rebases too if we care about keeping the migration sequenced
> > correctly after rebases (which we don't have to, only on a merge do we
> > really need to care).  If we did what Henry suggested in that we only
> > keep one migration file for the entire feature, we'd still have to do
> > the same thing.  I'm not sure that buys us much other than keeping the
> > feature's migration all in one file.
> >
> > I'd also say that code in master should definitely NOT be dependent on
> > code in a feature branch, much less a migration.  This was a requirement
> > of the incubator as well.
> >
> > So yeah this sounds like a problem but one that really only needs to be
> > solved at merge time.  There will definitely need to be coordination
> > with the cores when merge time comes.  Then again, I'd be a bit worried
> > if there wasn't since a feature branch being merged into master is a
> > huge deal.  Unless I am missing something I don't see this as a big
> > problem, but I am highly capable of being blind to many things.
> >
> > Thanks,
> > Brandon
> >
> >
> > On Wed, 2014-09-24 at 01:38 +, Doug Wiegley wrote:
> >> Hi Eugene,
> >>
> >>
> >> Just my take, but I assumed that we’d re-sequence the migrations at
> >> merge time, if needed.  Feature branches aren’t meant to be optional
> >> add-on components (I think), nor are they meant to live that long.
> >>  Just a place to collaborate and work on a large chunk of code until
> >> it’s ready to merge.  Though exactly what those merge criteria are is
> >> also yet to be determined.
> >>
> >>
> >> I understand that you’re raising a general problem, but given lbaas
> >> v2’s state, I don’t expect this issue to cause many practical problems
> >> in this particular case.
> >>
> >>
> >> This is also an issue for the incubator, whenever it rolls around.
> >>
> >>
> >> Thanks,
> >> doug
> >>
> >>
> >>
> >>
> >> On September 23, 2014 at 6:59:44 PM, Eugene Nikanorov
> >> (enikano...@mirantis.com) wrote:
> >>
> >> >
> >> > Hi neutron and lbaas folks.
> >> >
> >> >
> >> > Recently I briefly looked at one of lbaas proposed into feature
> >> > branch.
> >> > I see migration IDs there are lined into a general migration
> >> > sequence.
> >> >
> >> >
> >> > I think something is definitely wrong with this approach as
> >> > feat

Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Then you still need all the kubernetes api/daemons for the master and slaves. 
If you ignore the complexity this adds, then it seems simpler than just using 
openstack for it. But really, it still is an under/overcloud kind of setup; 
you're just using kubernetes for the undercloud, and openstack for the overcloud?

Thanks,
Kevin

From: Steven Dake [sd...@redhat.com]
Sent: Wednesday, September 24, 2014 8:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
Steven
I have to ask what is the motivation and benefits we get from integrating 
Kubernetes into Openstack? Would be really useful if you can elaborate and 
outline some use cases and benefits Openstack and Kubernetes can gain.

/Alan

Alan,

I am either unaware or ignorant of another Docker scheduler that is currently 
available that has a big (100+ folks) development community.  Kubernetes meets 
these requirements and is my main motivation for using it to schedule Docker 
containers.  There are other ways to skin this cat - The TripleO folks wanted 
at one point to deploy nova with the nova docker VM manager to do such a thing. 
 This model seemed a little clunky to me since it isn't purpose built around 
containers.

As far as use cases go, the main use case is to run a specific Docker container 
on a specific Kubernetes "minion" bare metal host.  These docker containers are 
then composed of the various config tools and services for each detailed 
service in OpenStack.  For example, mysql would be a container, and tools to 
configure the mysql service would exist in the container.  Kubernetes would 
pass config options for the mysql database prior to scheduling and once 
scheduled, Kubernetes would be responsible for connecting the various 
containers together.

Regards
-steve



From: Steven Dake [mailto:sd...@redhat.com]
Sent: September-24-14 7:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and 
Manage OpenStack using Kubernetes and Docker

On 09/24/2014 10:12 AM, Joshua Harlow wrote:
Sounds like an interesting project/goal and will be interesting to see where 
this goes.

A few questions/comments:

How much golang will people be exposed to with this addition?

Joshua,

I expect very little.  We intend to use Kubernetes as an upstream project, 
rather than something we contribute to directly.


Seeing that this could be the first 'go' using project it will be interesting 
to see where this goes (since afaik none of the infra support exists, and 
people aren't likely to be familiar with go vs python in the openstack community 
overall).

What's your thoughts on how this will affect the existing openstack container 
effort?

I don't think it will have any impact on the existing Magnum project.  At some 
point if Magnum implements scheduling of docker containers, we may add support 
for Magnum in addition to Kubernetes, but it is impossible to tell at this 
point.  I don't want to derail either project by trying to force them together 
unnaturally so early.


I see that kubernetes isn't exactly a small project either (~90k LOC, for those 
who use these types of metrics), so I wonder how that will affect people 
getting involved here, aka, who has the resources/operators/other... available 
to actually setup/deploy/run kubernetes, when operators are likely still just 
struggling to run openstack itself (at least operators are getting used to the 
openstack warts, a new set of kubernetes warts could not be so helpful).

Yup it is fairly large in size.  Time will tell if this approach will work.

This is an experiment as Robert and others on the thread have pointed out :).

Regards
-steve


On Sep 23, 2014, at 3:40 PM, Steven Dake 
mailto:sd...@redhat.com>> wrote:


Hi folks,

I'm pleased to announce the development of a new project Kolla which is Greek 
for glue :). Kolla has a goal of providing an implementation that deploys 
OpenStack using Kubernetes and Docker. This project will begin as a StackForge 
project separate from the TripleO/Deployment program code base. Our long term 
goal is to merge into the TripleO/Deployment program rather then create a new 
program.



Docker is a container technology for delivering hermetically sealed 
applications and has about 620 technical contributors [1]. We intend to produce 
docker images for a variety of platforms beginning with Fedora 20. We are 
completely open to any distro support, so if folks want to add new Linux 
distribution to Kolla please feel free to submit patches :)



Kubernetes at the most basic level is a Docker scheduler produced by and used 
within Google [2]. Kubernetes has in excess of 100 technical contributors. 
Kubernetes is more than just a scheduler; it provides additional fun

[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 25-09-2014

2014-09-25 Thread Ilya Sviridov
Hello team,

Thank you for attending meeting today.

I'm putting the meeting minutes and links to the logs here [1] [2].

Please note that we are holding the meeting in #magnetodb because of a
schedule conflict.
The meeting agenda is open for updates [3].

[1]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.log.html
[2]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-25-13.00.txt
[3] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Meeting summary

   1. from last meeting:
      http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-09-18-13.01.html
      (isviridov, 13:02:11)

   2. Go through action items (isviridov, 13:02:35)
      1. https://wiki.openstack.org/wiki/MagnetoDB/specs/async-schema-operations
         (ikhudoshyn, 13:03:38)
      2. https://review.openstack.org/#/c/122404/ (ikhudoshyn, 13:07:23)
      3. ACTION: provide numbers about performance impact from big PKI
         token in ML (isviridov, 13:09:01)

   3. Asynchronous table creation and removal (isviridov, 13:09:25)

   4. Monitoring API (isviridov, 13:16:26)
      1. https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api
         (isviridov, 13:18:28)

   5. Light weight session for authorization (isviridov, 13:24:48)

   6. Review tempest tests and move to stable test dir (isviridov, 13:33:49)
      1. https://blueprints.launchpad.net/magnetodb/+spec/review-tempest-tests
         (isviridov, 13:35:09)

   7. Monitoring - healthcheck http request (isviridov, 13:41:16)
      1. AGREED: file missed tests as bugs (isviridov, 13:42:02)
      2. ACTION: aostapenko write a spec about healthcheck (isviridov, 13:46:00)

   8. Log management (isviridov, 13:46:16)
      1. https://blueprints.launchpad.net/magnetodb/+spec/log-rotating
         (isviridov, 13:46:23)
      2. AGREED: put log rotation configs in mdb config. No separate
         logging config (isviridov, 13:54:35)

   9. Open discussion (isviridov, 13:56:05)
      1. https://blueprints.launchpad.net/magnetodb/+spec/oslo-notify
         (ikhudoshyn, 14:00:13)
      2. ACTION: ikhudoshyn write a spec for migration to
         oslo.messaging.notify (isviridov, 14:01:04)
      3. ACTION: isviridov look how to created magentodb-spec repo
         (isviridov, 14:02:12)
      4. ACTION: ajayaa write spec for RBAC (isviridov, 14:03:43)

Meeting ended at 14:07:00 UTC (full logs: [1]).

Action items

   1. provide numbers about performance 

Re: [openstack-dev] [oslo] adding James Carey to oslo-i18n-core

2014-09-25 Thread Ben Nemec
+1.  He's on the short list of people who actually understand how all
that lazy translation stuff works. :-)

-Ben

On 09/23/2014 04:03 PM, Doug Hellmann wrote:
> James Carey (jecarey) from IBM has done the 3rd most reviews of oslo.i18n 
> this cycle [1]. His feedback has been useful, and I think he would be a good 
> addition to the team for maintaining oslo.i18n.
> 
> Let me know what you think, please.
> 
> Doug
> 
> [1] http://stackalytics.com/?module=oslo.i18n
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Fox, Kevin M
Doesn't nova with a docker driver and heat autoscaling handle case 2 and 3 for 
control jobs? Has anyone tried yet?

Thanks,
Kevin

From: Angus Lees [g...@inodes.org]
Sent: Wednesday, September 24, 2014 6:33 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
Manage OpenStack using Kubernetes and Docker

On Wed, 24 Sep 2014 10:31:19 PM Alan Kavanagh wrote:
> Steven
> I have to ask what is the motivation and benefits we get from integrating
> Kubernetes into Openstack? Would be really useful if you can elaborate and
> outline some use cases and benefits Openstack and Kubernetes can gain.

I've no idea what Steven's motivation is, but here's my reasoning for going
down a similar path:

OpenStack deployment is basically two types of software:
1. "Control" jobs, various API servers, etc that are basically just regular
python wsgi apps.
2. Compute/network node agents that run under hypervisors, configure host
networking, etc.

The 2nd group probably wants to run on baremetal and is mostly identical on
all such machines, but the 1st group wants higher level PaaS type things.

In particular, for the control jobs you want:

- Something to deploy the code (docker / distro packages / pip install / etc)
- Something to choose where to deploy
- Something to respond to machine outages / autoscaling and re-deploy as
necessary

These last few don't have strong existing options within OpenStack yet (as far
as I'm aware).  Having explored a few different approaches recently, kubernetes
is certainly not the only option - but is a reasonable contender here.


So: I certainly don't see kubernetes as competing with anything in OpenStack -
but as filling a gap in job management with something that has a fairly
lightweight config syntax and is relatively simple to deploy on VMs or
baremetal.  I also think the phrase "integrating kubernetes into OpenStack" is
overstating the task at hand.

The primary downside I've discovered so far seems to be that kubernetes is
very young and still has an awkward cli, a few easy to encounter bugs, etc.

 - Gus

> From: Steven Dake [mailto:sd...@redhat.com]
> Sent: September-24-14 7:41 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
> Manage OpenStack using Kubernetes and Docker
>
> On 09/24/2014 10:12 AM, Joshua Harlow wrote:
> Sounds like an interesting project/goal and will be interesting to see where
> this goes.
>
> A few questions/comments:
>
> How much golang will people be exposed to with this addition?
>
> Joshua,
>
> I expect very little.  We intend to use Kubernetes as an upstream project,
> rather than something we contribute to directly.
>
>
> Seeing that this could be the first 'go' using project it will be
> interesting to see where this goes (since afaik none of the infra support
> exists, and people aren't likely to be familiar with go vs python in the
> openstack community overall).
>
> What's your thoughts on how this will affect the existing openstack
> container effort?
>
> I don't think it will have any impact on the existing Magnum project.  At
> some point if Magnum implements scheduling of docker containers, we may add
> support for Magnum in addition to Kubernetes, but it is impossible to tell
> at this point.  I don't want to derail either project by trying to force
> them together unnaturally so early.
>
>
> I see that kubernetes isn't exactly a small project either (~90k LOC, for
> those who use these types of metrics), so I wonder how that will affect
> people getting involved here, aka, who has the resources/operators/other...
> available to actually setup/deploy/run kubernetes, when operators are
> likely still just struggling to run openstack itself (at least operators
> are getting used to the openstack warts, a new set of kubernetes warts
> could not be so helpful).
>
> Yup it is fairly large in size.  Time will tell if this approach will work.
>
> This is an experiment as Robert and others on the thread have pointed out
> :).
>
> Regards
> -steve
>
>
> On Sep 23, 2014, at 3:40 PM, Steven Dake
> mailto:sd...@redhat.com>> wrote:
>
>
> Hi folks,
>
> I'm pleased to announce the development of a new project Kolla which is
> Greek for glue :). Kolla has a goal of providing an implementation that
> deploys OpenStack using Kubernetes and Docker. This project will begin as a
> StackForge project separate from the TripleO/Deployment program code base.
> Our long term goal is to merge into the TripleO/Deployment program rather
> then create a new program.
>
>
>
> Docker is a container technology for delivering hermetically sealed
> applications and has about 620 technical contributors [1]. We intend to
> produce docker images for a variety of platforms beginning with Fedora 20.
> We are completely open to any distro support, so if folks want to add new
> Linux distrib

Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Clint Byrum
Excerpts from Daniele Venzano's message of 2014-09-25 02:40:11 -0700:
> On 09/25/14 10:12, Qiming Teng wrote:
> > Yes, just about 3 VMs running on two hosts, for at most 3 weeks. This 
> > is leading me to another question -- any best practices/tools to 
> > retire the old data on a regular basis? Regards, Qiming
> 
> There is a tool: ceilometer-expirer
> 
> I tried to use it on a mysql database, since I had the same table size 
> problem as you and it made the machine hit swap. I think it tries to 
> load the whole table in memory.
> Just to see if it would eventually finish, I let it run for 1 week 
> before throwing away the whole database and move on.
> 
> Now I use Ceilometer's pipeline to forward events to elasticsearch via 
> udp + logstash and do not use Ceilometer's DB or API at all.
> 

Interesting, this almost sounds like what should be the default
configuration honestly.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-25 Thread Ben Nemec
On 09/22/2014 01:29 AM, Deepak Shetty wrote:
> That's incorrect; as I said in my original mail, I am using devstack+manila
> and it wasn't very clear to me that mysql-devel needs to be installed, and
> it didn't get installed. I am on F20; not sure if that causes this, and if
> yes, then we need to debug and fix this.

This is because by default devstack only installs the packages needed to
actually run OpenStack.  For unit test deps, you need the
INSTALL_TESTONLY_PACKAGES variable set to true in your localrc.  I've
advocated to get it enabled by default in the past but was told that
running unit tests on a devstack vm isn't the recommended workflow so
they don't want to do that.
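
(Concretely, that is a single line in your localrc, something like

    INSTALL_TESTONLY_PACKAGES=True

before running stack.sh.)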

> 
> Maybe it's a good idea to put a comment in requirements.txt stating that the
> following C libs need to be installed for the venv to work smoothly. That
> would help too for the short term.

It's worth noting that you would need multiple entries for each lib
since every distro tends to call them something different.

> 
> On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov <
> vponomar...@mirantis.com> wrote:
> 
>> Dep "MySQL-python" is already in test-requirements.txt file. As Andreas
>> said, second one "mysql-devel" is C lib and can not be installed via pip.
>> So, project itself, as all projects in OpenStack, can not install it.
>>
>> C lib deps are handled by Devstack, if it is used. See:
>> https://github.com/openstack-dev/devstack/tree/master/files/rpms
>>
>> https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895
>>
>> Yes, Manila could have its files in the same way in
>> https://github.com/openstack/manila/tree/master/contrib/devstack , but
>> this lib is already exist in deps for other projects. So, I guess you used
>> Manila "run_tests.sh" file on host without devstack installation, in that
>> case all other projects would fail in the same way.
>>
>> On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt 
>> wrote:
>>
>>> And yet it's a dependency so I'm with Deepak and it should at least be
>>> mentioned in the prerequisites on a webpage somewhere .. :) I might even
>>> try and update/add that myself as it caught me out a few times too..
>>>
>>> Alex
>>>  On 20 Sep 2014 12:44, "Andreas Jaeger"  wrote:
>>>
 On 09/20/2014 09:34 AM, Deepak Shetty wrote:
> thanks , that worked.
> Any idea why it doesn't install it automatically and/or it isn't
 present
> in requirements.txt ?
> I thought that was the purpose of requirements.txt ?

 AFAIU requirements.txt has only python dependencies while
 mysql-devel is a C development package,

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kind Regards
>> Valeriy Ponomaryov
>> www.mirantis.com
>> vponomar...@mirantis.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread gordon chung
> mysql> select count(*) from metadata_text;
> +--+
> | count(*) |
> +--+
> | 25249913 |
> +--+
> 1 row in set (3.83 sec)
>
> There were 25M records in one table.  The deletion time is reaching an
> unacceptable level (7 minutes for 4M records) and it was not increasing
> in a linear way.  Maybe DB experts can show me how to optimize this?
we don't do any customisations in the default ceilometer package so i'm sure 
there's a way to optimise... not sure if any devops ppl read this list. 
> Another question: does the mongodb backend support events now?
> (I asked this question in IRC but, as usual, got no response from
> anyone in that community, no matter whether the question is silly or not...)
regarding events, are you specifically asking about events 
(http://docs.openstack.org/developer/ceilometer/events.html) in ceilometer or 
using the events term in a generic sense? the table above has no relation to events 
in ceilometer; it's related to samples and the corresponding resources.  we did do 
some remodelling of the sql backend this cycle which should shrink the size of the 
metadata tables.
there's a euro-bias in ceilometer so you'll be more successful reaching people 
on irc during euro work hours... that said, you'll probably get the best response 
by posting to the list or pinging someone on the core team directly.
cheers, gord
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/24/2014 10:01 PM, Mike Spreitzer wrote:

Clint Byrum  wrote on 09/25/2014 12:13:53 AM:

> Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
> > Steven Dake  wrote on 09/24/2014 11:02:49 PM:
> > > ...
> > ...
> > Does TripleO require container functionality that is not available
> > when using the Docker driver for Nova?
> >
> > As far as I can tell, the quantitative handling of capacities and
> > demands in Kubernetes is much inferior to what Nova does today.
> >
>
> Yes, TripleO needs to manage baremetal and containers from a single
> host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes "manage baremetal" (at all)?
By "from a single host" do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.

>
> > > As far as use cases go, the main use case is to run a specific
> > > Docker container on a specific Kubernetes "minion" bare metal host.

Clint, in another branch of this email tree you referred to
"the VMs that host Kubernetes".  How does that square with
Steve's text that seems to imply bare metal minions?

I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.



Mike,

I have had no such design discussions.  Thus far the furthest along we 
are in the project is determining we need Docker containers for each of 
the OpenStack daemons.  We are working a bit on how that design should 
operate.  For example, our current model on reconfiguration of a docker 
container is to kill the docker container and start a fresh one with the 
new configuration.


This is literally where the design discussions have finished.  We have 
not had much discussion about Kubernetes at all other than I know it is 
a docker scheduler and I know it can get the job done :) I think other 
folks' design discussions so far on this thread are speculation about 
what an architecture should look like.  That is great - let's have those 
discussions Monday 2000 UTC in #openstack-meeting at our first Kolla meeting.


Regards
-steve


> >
> > If TripleO already knows it wants to run a specific Docker image
> > on a specific host then TripleO does not need a scheduler.
> >
>
> TripleO does not ever specify destination host, because Nova does not
> allow that, nor should it. It does want to isolate failure domains so
> that all three Galera nodes aren't on the same PDU, but we've not really
> gotten to the point where we can do that yet.

So I am still not clear on what Steve is trying to say is the main use 
case.

Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and 
solved.

I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes and it uses GO structs --- which, like C structs,
can not be subclassed.
Nova's filter scheduler includes a fatal bug that bites when balancing 
and you want more than one element per area, see 
https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.

Thanks,
Mike


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

2014-09-25 Thread Steven Dake

On 09/25/2014 12:01 AM, Clint Byrum wrote:

Excerpts from Mike Spreitzer's message of 2014-09-24 22:01:54 -0700:

Clint Byrum  wrote on 09/25/2014 12:13:53 AM:


Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:

Steven Dake  wrote on 09/24/2014 11:02:49 PM:

...

...
Does TripleO require container functionality that is not available
when using the Docker driver for Nova?

As far as I can tell, the quantitative handling of capacities and
demands in Kubernetes is much inferior to what Nova does today.


Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

In what sense would Kubernetes "manage baremetal" (at all)?
By "from a single host" do you mean that a client on one host
can manage remote baremetal and containers?

I can see that Kubernetes allows a client on one host to get
containers placed remotely --- but so does the Docker driver for Nova.


I mean that one box would need to host Ironic, Docker, and Nova, for
the purposes of deploying OpenStack. We call it the "undercloud", or
sometimes the "Deployment Cloud".

It's not necessarily something that Nova/Neutron cannot do by design,
but it doesn't work now.


As far as use cases go, the main use case is to run a specific
Docker container on a specific Kubernetes "minion" bare metal host.

Clint, in another branch of this email tree you referred to
"the VMs that host Kubernetes".  How does that square with
Steve's text that seems to imply bare metal minions?


That was in a more general context, discussing using Kubernetes for
general deployment. Could have just as easily have said "hosts",
"machines", or "instances".


I can see that some people have had much more detailed design
discussions than I have yet found.  Perhaps it would be helpful
to share an organized presentation of the design thoughts in
more detail.


I personally have not had any detailed discussions about this before it
was announced. I've just dug into the design and some of the code of
Kubernetes because it is quite interesting to me.


If TripleO already knows it wants to run a specific Docker image
on a specific host then TripleO does not need a scheduler.


TripleO does not ever specify destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

So I am still not clear on what Steve is saying the main use case is.
Kubernetes is even farther from balancing among PDUs than Nova is.
At least Nova has a framework in which this issue can be posed and solved.
I mean a framework that actually can carry the necessary information.
The Kubernetes scheduler interface is extremely impoverished in the
information it passes and it uses Go structs --- which, like C structs,
cannot be subclassed.

I don't think this is totally clear yet. The thing that Steven seems to be
trying to solve is deploying OpenStack using docker, and Kubernetes may
very well be a better choice than Nova for this. There are some really
nice features, and a lot of the benefits we've been citing about image
based deployments are realized in docker without the pain of a full OS
image to redeploy all the time.


This is precisely the problem I want to solve.  I looked at Nova+Docker 
as a solution, and it seems to me the runway to get to a successful 
codebase is longer with more risk.  That is why this is an experiment to 
see if a Kubernetes-based approach would work.  If at the end of the day 
we throw out Kubernetes as a scheduler once we have the other problems 
solved and reimplement Kubernetes in Nova+Docker, I think that would be 
an acceptable outcome, but it is something I want to *finish* with, not 
*start* with.


Regards
-steve


The structs vs. classes argument is completely out of line and has
nothing to do with where Kubernetes might go in the future. It's like
saying because cars use internal combustion engines they are limited. It
is just a facet of how it works today.


Nova's filter scheduler includes a fatal bug that bites when balancing and
you want more than
one element per area, see https://bugs.launchpad.net/nova/+bug/1373478.
However: (a) you might not need more than one element per area and
(b) fixing that bug is a much smaller job than expanding the mind of K8s.


Perhaps. I am quite a fan of set based design, and Kubernetes is a
narrowly focused single implementation solution, where Nova is a broadly
focused abstraction layer for VM's. I think it is worthwhile to push
a bit into the Kubernetes space and see whether the limitations are
important or not.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@list

Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Matt Riedemann



On 9/25/2014 9:15 AM, Dan Smith wrote:

and I don't see how https://review.openstack.org/#/c/121663/ is actually
dependent on https://review.openstack.org/#/c/119521/.


Yeah, agreed. I think that we _need_ the fix patch in Juno. The query
optimization is good, and something we should take, but it makes me
nervous sliding something like that in at the last minute without more
exposure. Especially given that it has been like this for more than one
release, it seems like Kilo material to me.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I agree with this and said the same in IRC a few times when it was 
brought up.  Unfortunately the optimization patch was approved at one 
point but had to be rebased.  Then about three weeks went by and we're 
sitting on top of rc1, and I think that optimization is too risky at this 
point, i.e. we have known gate issues and I wouldn't like to see us add to 
that.  Granted, this might actually help with some gate races, I'm not 
sure, but it seems too risky to me without more time to bake it in 
before we do release candidates.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 05:23:22PM +0200, Pasquale Porreca wrote:
> This is correct Daniel, except that it is done by the virtual
> firmware/BIOS of the virtual machine and not by the OS (not yet installed at
> that time).
> 
> This is the reason we thought about UUID: it is already used by the iPXE client
> to be included in Bootstrap Protocol messages, it is taken from the <uuid>
> field in the libvirt template and the <uuid> in libvirt is set by OpenStack; the
> only missing piece is the ability to set the UUID in OpenStack instead of
> having it randomly generated.
> 
> Having another user-defined tag in libvirt won't help for our issue, since
> it won't be included in Bootstrap Protocol messages, not without changes in
> the virtual BIOS/firmware (as you stated too), and honestly my team doesn't
> have interest in this (nor the competence).
> 
> I don't think the configdrive or metadata service would help either: the OS
> on the instance is not yet installed at that time (the target of the network
> boot is exactly to install the OS on the instance!), so it won't be able to
> mount it.

Ok, yes, if we're considering the DHCP client inside the iPXE BIOS
blob, then I don't see any currently viable options besides UUID.
There's no mechanism for passing any other data into iPXE that I
am aware of, though if there is a desire to do that it could be
raised on the QEMU mailing list for discussion.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] keystonemiddleware release 1.2.0

2014-09-25 Thread Morgan Fainberg
The Keystone team has released keystonemiddleware 1.2.0 [1]. This version is 
meant to be the release coinciding with the Juno release of OpenStack. 

Details of new features and bug fixes included in the 1.2.0 release of 
keystonemiddleware can be found on the milestone information page [2].


Cheers, 
Morgan Fainberg 

[1] https://pypi.python.org/pypi/keystonemiddleware/1.2.0
[2] https://launchpad.net/keystonemiddleware/+milestone/1.2.0



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Pasquale Porreca
This is correct Daniel, except that it is done by the virtual 
firmware/BIOS of the virtual machine and not by the OS (not yet 
installed at that time).


This is the reason we thought about UUID: it is already used by the iPXE 
client to be included in Bootstrap Protocol messages, it is taken from 
the <uuid> field in the libvirt template and the <uuid> in libvirt is set 
by OpenStack; the only missing piece is the ability to set the UUID in 
OpenStack instead of having it randomly generated.
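
To make that missing piece concrete, here is a purely hypothetical sketch of
what a create-server request carrying a caller-chosen, class-prefixed UUID
could look like.  The "uuid" attribute does not exist in the Nova API today;
it is exactly what is being proposed, and the other values are placeholders:

import json
import uuid

CLASS_PREFIX = "1a"  # two hex digits identifying the class of instance

custom = uuid.UUID(CLASS_PREFIX + uuid.uuid4().hex[2:])

request_body = {
    "server": {
        "name": "pxe-client-01",
        "imageRef": "IMAGE_UUID",
        "flavorRef": "2",
        "uuid": str(custom),  # proposed field, set instead of random generation
    }
}
print(json.dumps(request_body, indent=2))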


Having another user-defined tag in libvirt won't help for our issue, 
since it won't be included in Bootstrap Protocol messages, not without 
changes in the virtual BIOS/firmware (as you stated too), and honestly my 
team doesn't have interest in this (nor the competence).


I don't think the configdrive or metadata service would help either: the 
OS on the instance is not yet installed at that time (the target of the 
network boot is exactly to install the OS on the instance!), so it won't 
be able to mount it.


On 09/25/14 16:24, Daniel P. Berrange wrote:

On Thu, Sep 25, 2014 at 09:19:03AM -0500, Matt Riedemann wrote:


On 9/25/2014 8:26 AM, Pasquale Porreca wrote:

The problem to use a different tag than UUID is that it won't be
possible (for what I know) to include this tag in the Bootstrap Protocol
messages exchanged during the pre-boot phase.

Our original idea was to use the Client-identifier (option 61) or Vendor
class identifier (option 60) of the dhcp request to achieve our target,
but these fields cannot be controlled in libvirt template and so they
cannot be set in OpenStack either. Instead the UUID is set in the
libvirt template by OpenStack and it is included in the messages
exchanged in the pre-boot phase (option 97) by the instance trying to
boot from network.

Reference:
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml

[snip]


If it's a matter of getting the instance tag information down to the libvirt
driver on boot that shouldn't be a problem, there are others asking for
similar things, i.e. I want to tag my instances at create time and store
that tag metadata in some namespace in the libvirt domain xml so I can have
an application outside of openstack consuming those domain xml's and reading
that custom namespace information.

Perhaps I'm misunderstanding something, but isn't the DHCP client that
needs to send the tag running in the guest OS ? Libvirt is involved wrt
UUID, because UUID is populated in the guest's virtual BIOS and then
extracted by the guest OS and from there used by the DHCP client. If
we're talking about making a different tag/identifier available for
the DHCP client, then this is probably not going to involve libvirt
unless it also gets pushed up via the virtual BIOS. IOW, couldn't you
just pass whatever tag is needed to the guest OS via the configdrive
or metadata service.

Regards,
Daniel


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] client release deadline - Sept 18th

2014-09-25 Thread Morgan Fainberg
Keystone team has released Keystonemiddleware 1.2.0

https://pypi.python.org/pypi/keystonemiddleware/1.2.0

This should be the version coinciding with the Juno OpenStack release. 


—
Morgan Fainberg


-Original Message-
From: Sergey Lukjanov 
Reply: OpenStack Development Mailing List (not for usage questions) 
>
Date: September 23, 2014 at 17:12:16
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject:  Re: [openstack-dev] [release] client release deadline - Sept 18th

> We have a final sahara client release for Juno -
> https://pypi.python.org/pypi/python-saharaclient/0.7.4
>  
> On Tue, Sep 23, 2014 at 12:59 PM, Eoghan Glynn wrote:
> >
> > The ceilometer team released python-ceilometerclient vesion 1.0.11 
> > yesterday:  
> >
> > https://pypi.python.org/pypi/python-ceilometerclient/1.0.11
> >
> > Cheers,
> > Eoghan
> >
> >> Keystone team has released 0.11.1 of python-keystoneclient. Due to some
> >> delays getting things through the gate this took a few extra days.
> >>
> >> https://pypi.python.org/pypi/python-keystoneclient/0.11.1
> >>
> >> —Morgan
> >>
> >>
> >> —
> >> Morgan Fainberg
> >>
> >>
> >> -Original Message-
> >> From: John Dickinson  
> >> Reply: OpenStack Development Mailing List (not for usage questions)
> >> >
> >> Date: September 17, 2014 at 20:54:19
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >
> >> Subject: Re: [openstack-dev] [release] client release deadline - Sept 18th
> >>
> >> > I just release python-swiftclient 2.3.0
> >> >
> >> > In addition to some smaller changes and bugfixes, the biggest changes are
> >> > the support
> >> > for Keystone v3 and a refactoring that allows for better testing and
> >> > extensibility of
> >> > the functionality exposed by the CLI.
> >> >
> >> > https://pypi.python.org/pypi/python-swiftclient/2.3.0
> >> >
> >> > --John
> >> >
> >> >
> >> >
> >> > On Sep 17, 2014, at 8:14 AM, Matt Riedemann wrote:
> >> >
> >> > >
> >> > >
> >> > > On 9/15/2014 12:57 PM, Matt Riedemann wrote:
> >> > >>
> >> > >>
> >> > >> On 9/10/2014 11:08 AM, Kyle Mestery wrote:
> >> > >>> On Wed, Sep 10, 2014 at 10:01 AM, Matt Riedemann
> >> > >>> wrote:
> >> > 
> >> > 
> >> >  On 9/9/2014 4:19 PM, Sean Dague wrote:
> >> > >
> >> > > As we try to stabilize OpenStack Juno, many server projects need to
> >> > > get
> >> > > out final client releases that expose new features of their 
> >> > > servers.
> >> > > While this seems like not a big deal, each of these clients 
> >> > > releases
> >> > > ends up having possibly destabilizing impacts on the OpenStack 
> >> > > whole
> >> > > (as
> >> > > the clients do double duty in cross communicating between 
> >> > > services).
> >> > >
> >> > > As such in the release meeting today it was agreed clients should
> >> > > have
> >> > > their final release by Sept 18th. We'll start applying the 
> >> > > dependency
> >> > > freeze to oslo and clients shortly after that, all other 
> >> > > requirements
> >> > > should be frozen at this point unless there is a high priority bug
> >> > > around them.
> >> > >
> >> > > -Sean
> >> > >
> >> > 
> >> >  Thanks for bringing this up. We do our own packaging and need time
> >> >  for legal
> >> >  clearances and having the final client releases done in a reasonable
> >> >  time
> >> >  before rc1 is helpful. I've been pinging a few projects to do a 
> >> >  final
> >> >  client release relatively soon. python-neutronclient has a release
> >> >  this
> >> >  week and I think John was planning a python-cinderclient release 
> >> >  this
> >> >  week
> >> >  also.
> >> > 
> >> > >>> Just a slight correction: python-neutronclient will have a final
> >> > >>> release once the L3 HA CLI changes land [1].
> >> > >>>
> >> > >>> Thanks,
> >> > >>> Kyle
> >> > >>>
> >> > >>> [1] https://review.openstack.org/#/c/108378/
> >> > >>>
> >> >  --
> >> > 
> >> >  Thanks,
> >> > 
> >> >  Matt Riedemann
> >> > 
> >> > 
> >> > 
> >> >  ___
> >> >  OpenStack-dev mailing list
> >> >  OpenStack-dev@lists.openstack.org
> >> >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
> >> > >>>
> >> > >>> ___
> >> > >>> OpenStack-dev mailing list
> >> > >>> OpenStack-dev@lists.openstack.org
> >> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
> >> > >>>
> >> > >>
> >> > >> python-cinderclient 1.1.0 was released on Saturday:
> >> > >>
> >> > >> https://pypi.python.org/pypi/python-cinderclient/1.1.0
> >> > >>
> >> > >
> >> > > python-novaclient 2.19.0 was released yesterday [1].
> >> > >
> >> > > List of changes:
> >> > >
> >> > > mriedem@ubuntu:~/git/python-novaclient$ git log 2.18.1..2.19.0 
> >> > > --oneline  
> >> > > --n

[openstack-dev] [elections] Last hours for PTL candidate announcements

2014-09-25 Thread Anita Kuno
Tristan has been doing a great job verifying most of the current
candidate announcements - thank you, Tristan! - while I am head down on
the project-config split in infra, but I did want to send out the
reminder that we are in the last hours for PTL candidate announcements.

If you want to stand for PTL, don't delay, follow the instructions on
the wikipage and make sure we know your intentions:
https://wiki.openstack.org/wiki/PTL_Elections_September/October_2014

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 09:19:03AM -0500, Matt Riedemann wrote:
> 
> 
> On 9/25/2014 8:26 AM, Pasquale Porreca wrote:
> >The problem to use a different tag than UUID is that it won't be
> >possible (for what I know) to include this tag in the Bootstrap Protocol
> >messages exchanged during the pre-boot phase.
> >
> >Our original idea was to use the Client-identifier (option 61) or Vendor
> >class identifier (option 60) of the dhcp request to achieve our target,
> >but these fields cannot be controlled in libvirt template and so they
> >cannot be set in OpenStack either. Instead the UUID is set in the
> >libvirt template by OpenStack and it is included in the messages
> >exchanged in the pre-boot phase (option 97) by the instance trying to
> >boot from network.
> >
> >Reference:
> >http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml

[snip]

> If it's a matter of getting the instance tag information down to the libvirt
> driver on boot that shouldn't be a problem, there are others asking for
> similar things, i.e. I want to tag my instances at create time and store
> that tag metadata in some namespace in the libvirt domain xml so I can have
> an application outside of openstack consuming those domain xml's and reading
> that custom namespace information.

Perhaps I'm misunderstanding something, but isn't the DHCP client that
needs to send the tag running in the guest OS ? Libvirt is involved wrt
UUID, because UUID is populated in the guest's virtual BIOS and then
extracted by the guest OS and from there used by the DHCP client. If
we're talking about making a different tag/identifier available for
the DHCP client, then this is probably not going to involve libvirt
unless it also gets pushed up via the virtual BIOS. IOW, couldn't you
just pass whatever tag is needed to the guest OS via the configdrive
or metadata service.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Choice of Series goal of a blueprint

2014-09-25 Thread Angelo Matarazzo

Hi all,
Can I create a blueprint and choose a previous Series goal (e.g. Icehouse)?
I think it is possible, but no reviewer or driver will be 
interested in it.

Right?

Best regards,
Angelo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Matt Riedemann



On 9/25/2014 8:26 AM, Pasquale Porreca wrote:

The problem to use a different tag than UUID is that it won't be
possible (for what I know) to include this tag in the Bootstrap Protocol
messages exchanged during the pre-boot phase.

Our original idea was to use the Client-identifier (option 61) or Vendor
class identifier (option 60) of the dhcp request to achieve our target,
but these fields cannot be controlled in libvirt template and so they
cannot be set in OpenStack either. Instead the UUID is set in the
libvirt template by OpenStack and it is included in the messages
exchanged in the pre-boot phase (option 97) by the instance trying to
boot from network.

Reference:
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml



On 09/25/14 14:43, Andrew Laski wrote:


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:

I will briefly explain our use case. This idea is related to another
project to enable the network boot in OpenStack
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance

We want to make use of the extra-dhcp-opt to indicate as tftp server
a specific instance inside our deployed system, so it will provide
the right operating system to the other instances booting from
network (once the feature from the linked blueprint will be
implemented).

On the tftp server we want to be able to filter what boot file to
provide to different classes of instances, and our idea was to identify
each class with 2 hexadecimal digits of the UUID (while the rest would be
randomly generated, still effectively guaranteeing its uniqueness).


It seems like this would still be achievable using the instance tags
feature that Matt mentioned.  And it would be more clear since you
could use human readable class names rather than relying on knowing
that part of the UUID had special meaning.

If you have a need to add specific information to an instance like
'boot class' or want to indicate that an instance in two different
clouds is actually the same one, the Pumphouse use case, that
information should be something we layer on top of an instance and not
something we encode in the UUID.



Anyway this is a customization for our specific environment and for a
feature that is still in early proposal stage, so we wanted to
propose as a separate feature to allow user custom UUID and manage
the generation out of OpenStack.
On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
mailto:rpodoly...@mirantis.com>> wrote:

Are there any known gotchas with support of this feature in
REST APIs
(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we
knew there was no one using null instance UUIDs or duplicates for
that matter.

The instance object already enforces that the UUID field is unique
but the database schema doesn't.  I'll be re-proposing that for Kilo
when it opens up.

If it's a matter of tagging an instance, there is also the tags
blueprint [2] which will probably be proposed again for Kilo.

[1]
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




If it's a matter of getting the instance tag information down to the 
libvirt driver on boot that shouldn't be a problem, there are others 
asking for similar things, i.e. I want to tag my instances at create 
time and store that tag metadata in some namespace in the libvirt domain 
xml so I can have an application outside of openstack consuming those 
domain xml's and reading that custom namespace information.
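
For illustration, a minimal sketch of how such a custom namespace could be
built for the domain XML <metadata> element; the namespace URI and element
names are invented, not an agreed format:

import xml.etree.ElementTree as ET

NS = "http://example.com/openstack/instance-tags/1.0"

def build_tag_metadata(tags):
    """Return a <metadata> fragment carrying instance tags in a private namespace."""
    metadata = ET.Element("metadata")
    tags_el = ET.SubElement(metadata, "{%s}tags" % NS)
    for tag in tags:
        ET.SubElement(tags_el, "{%s}tag" % NS).text = tag
    return ET.tostring(metadata)

print(build_tag_metadata(["boot-class-1a", "tftp-server"]))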


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release criticality of bug 1365606 (get_network_info efficiency for nova-network)

2014-09-25 Thread Dan Smith
> and I don't see how https://review.openstack.org/#/c/121663/ is actually
> dependent on https://review.openstack.org/#/c/119521/.

Yeah, agreed. I think that we _need_ the fix patch in Juno. The query
optimization is good, and something we should take, but it makes me
nervous sliding something like that in at the last minute without more
exposure. Especially given that it has been like this for more than one
release, it seems like Kilo material to me.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Morgan Fainberg
On Thursday, September 25, 2014, Thierry Carrez 
wrote:

> Thierry Carrez wrote:
> > Kilo Design Summit: Nov 4-7
> > Kilo-1 milestone: Dec 11
> > Kilo-2 milestone: Jan 29
> > Kilo-3 milestone, feature freeze: March 12
> > 2015.1 ("Kilo") release: Apr 23
> > L Design Summit: May 18-22
>
> Following feedback on the mailing-list and at the cross-project meeting,
> there is growing consensus that shifting one week to the right would be
> better. It makes for a short L cycle, but avoids losing 3 weeks between
> Kilo release and L design summit. That gives:
>
> Kilo Design Summit: Nov 4-7
> Kilo-1 milestone: Dec 18
> Kilo-2 milestone: Feb 5
> Kilo-3 milestone, feature freeze: Mar 19
> 2015.1 ("Kilo") release: Apr 30
> L Design Summit: May 18-22
>
> If you prefer a picture, see attached PDF.
>
> --
> Thierry Carrez (ttx)
>

+1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Docs] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 25/09/14 12:10 AM, Anne Gentle wrote:
> I'm writing to announce my candidacy for the Documentation Program
> Technical Lead (PTL).
> 
> The past six months have flown by. I still recall writing up wish lists
> per-deliverable on the plane home from the Atlanta Summit and the great
> news is many are completed. Of course we still have a lot to do.
> 
> We face many challenges as an open source community as we grow and define
> ourselves through our users. As documentation specialists, we have to be
> creative with our resourcing for documentation as the number of teams and
> services increases each release. This release we have:
> - experimented with using RST sourcing for a chapter about Heat Templates
> - managed to keep automating where it makes sense, using the toolset we
> keep improving upon
> - held another successful book sprint for the Architecture and Design Guide
> - split out a repo for the training group focusing not only on training
> guides but also scripts and other training specialties
> - split out the Security Guide with their own review team; completed a
> thorough review of that guide
> - split out the High Availability Guide with their own review team from
> discussions at the Ops Meetup
> - began a Networking Guide pulling together as many interested parties as
> possible before and after the Ops Meetup with a plan for hiring a contract
> writer to work on it with the community
> - added the openstack common client help text to the CLI Reference
> - added Chinese, German, French, and Korean language landing pages to the
> docs site
> - generated config option tables with each milestone release (with few
> exceptions of individual projects)
> - lost a key contributor to API docs (Diane's stats didn't decline far yet:
> http://stackalytics.com/?user_id=diane-fleming&release=juno)
> - still working towards a new design for page-based docs
> - still working on API reference information
> - still working on removing "spec" API documents to avoid duplication and
> confusion
> - still testing three of four install guides for the JUNO release (that
> we're nearly there is just so great)
> 
> So you can see we have much more to do, but we have come so far. Even in
> compiling this list I worry I'm missing items, there's just so much scope
> to OpenStack docs. We serve users, deployers, administrators, and app
> developers. It continues to be challenging but we keep looking for ways to
> make it work.
> 
> We have seen amazing contributors like Andreas Jaeger, Matt Kassawara,
> Gauvain Pocentek, and Christian Berendt find their stride and shine. Yes, I
> could name more but these people have done an incredible job this release.
> 
> I'm especially eager to continue collaborating with great managers like
> Nick Chase at Mirantis and Lana Brindley at Rackspace -- they see what we
> can accomplish when enterprise doc teams work well with an upstream.
> They're behind-the-scenes much of the time but I must express my gratitude
> to these two pros up front.
> 
> Thanks for your consideration. I'd be honored to continue to serve in this
> role.
> Anne
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 24/09/14 05:19 PM, James Slagle wrote:
> I'd like to announce my candidacy for TripleO PTL.
> 
> I think most folks who have worked in the TripleO community probably know me.
> For those who don't, I work for Red Hat, and over the last year and a half 
> that
> I've been involved with TripleO I've worked in different areas. My focus has
> been on improvements to the frameworks to support things such as other 
> distros,
> packages, and offering deployment choices. I've also tried to focus on
> stabilization and documentation as well.
> 
> I stand by what I said in my last candidacy announcement[1], so I'm not going
> to repeat all of that here :-).
> 
> One of the reasons I've been so active in reviewing changes to the project is
> because I want to help influence the direction and move progress forward for
> TripleO. The spec process was new for TripleO during the Juno cycle, and I 
> also
> helped define that. I think that process is working well and will continue to
> evolve during Kilo as we find what works best.
> 
> The TripleO team has made a lot of great progress towards full HA deployments,
> CI improvements, rearchitecting Tuskar as a deployment planning service, and
> driving features in Heat to support our use cases. I support this work
> continuing in Kilo.
> 
> I continue to believe in TripleO's mission to use OpenStack itself.  I think
> the feedback provided by TripleO to other projects is very valuable. Given the
> complexity to deploy OpenStack, TripleO has set a high bar for other
> integrated projects to meet to achieve this goal. The resulting new features
> and bug fixes that have surfaced as a result has been great for all of
> OpenStack.
> 
> Given that TripleO is the Deployment program though, I also support 
> alternative
> implementations where they make sense. Those implementations may be in
> TripleO's existing projects themselves, new projects entirely, or pulling in
> existing projects under the Deployment program where a desire exists. Not 
> every
> operator is going to deploy OpenStack the same way, and some organizations
> already have entrenched and accepted tooling.
> 
> To that end, I would also encourage integration with other deployment tools.
> Puppet is one such example and already has wide support in the broader
> OpenStack community. I'd also like to see TripleO support different update
> mechanisms potentially with Heat's SoftwareConfig feature, which didn't yet
> exist when TripleO first defined an update strategy.
> 
> The tripleo-image-elements repository is a heavily used part of our process 
> and
> I've seen some recurring themes come up that I'd like to see addressed. 
> Element
> idempotence seems to often come up, as well as the ability to edit already
> built images. I'd also like to see our elements more generally applicable to
> installing OpenStack vs. just installing OpenStack in an image building
> context.  Personally, I support these features, but mostly, I'd like to drive
> to a consensus on those points during Kilo.
> 
> I'd love to see more people developing and using TripleO where they can and
> providing feedback. To enable that, I'd like for easier developer setups to
> be a focus during Kilo so that it's simpler for people to contribute without
> such a large initial learning curve investment. Downloadable prebuilt images
> could be one way we could make that process easier.
> 
> There have been a handful of mailing list threads recently about the
> organization of OpenStack and how TripleO/Deployment may fit into that going
> forward. One thing is clear, the team has made a ton of great progress since
> its inception. I think we should continue on the mission of OpenStack owning
> its own production deployment story, regardless of how programs may be
> organized in the future, or what different paths that story may take.
> 
> Thanks for your consideration!
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2014-April/031772.html
> 
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] PTL Candidacy

2014-09-25 Thread Tristan Cacqueray
confirmed

On 24/09/14 04:37 PM, Swartzlander, Ben wrote:
> Hello! I have been the de-facto PTL for Manila from the conception of
> the project up to now. Since Manila is an officially incubated OpenStack
> program, I have the opportunity to run for election and hopefully become
> the officially elected PTL for the Manila project.
> 
> I'm running because I feel that the vision of the Manila project is
> still not achieved, even though we've made tremendous strides in the
> last year, and I want to see the project mature and become part of
> core OpenStack.
> 
> Some of you may remember the roots of the Manila project, when we
> proposed shared file system management as an extension to the
> then-nascent Cinder project during the Folsom release. It's taken a lot
> of attempts and failures to arrive at the current Manila project, and
> it's been an exciting and humbling journey, where along the way I've
> had the opportunity to work with many great individuals.
> 
> My vision for the future of the Manila includes:
> * Getting more integrated with the rest of OpenStack. We have Devstack,
>   Tempest, and Horizon integration, and I'd like to get that code into
>   the right places where it can be maintained. We also need to add Heat
>   integration, and more complete documentation.
> * Working with distributions on issues related to packaging and
>   installation to make Manila as easy to use as possible. This includes
>   work with Chef and Puppet.
> * Making Manila usable in more environments. Manila's design center has
>   been large-scale public clouds, but we haven't spent enough time on
>   small/medium scale environments -- the kind the developers typically
>   have and the kind that users typically start out with.
> * Taking good ideas from the rest of OpenStack. We're a small team and
>   we can't do everything ourselves. The OpenStack ecosystem is full of
>   excellent technology and I want to make sure we take the best ideas
>   and apply them to Manila. In particular, there are some features I'd
>   like to copy from the Cinder project.
> * A focus on quality. I want to make sure we keep test coverage high
>   as we add new features, and increase test coverage on existing
>   features. I also want to try to start vendor CI similar to what
>   Cinder has.
> * Lastly, I expect to work with vendors to get more drivers contributed
>   to expand Manila's hardware support. I am very interested in
>   smoothing out some of the networking complexities that make it
>   difficult to write drivers today.
> 
> I hope you will support my candidacy so I can continue to lead Manila
> towards eventual integration with OpenStack and realize my dream of
> shared file system management in the cloud.
> 
> Thank you,
> Ben Swartzlander
> Manila PTL, NetApp Architect
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] MySQL performance and Mongodb backend maturity question

2014-09-25 Thread Qiming Teng
On Thu, Sep 25, 2014 at 11:40:11AM +0200, Daniele Venzano wrote:
> On 09/25/14 10:12, Qiming Teng wrote:
> >Yes, just about 3 VMs running on two hosts, for at most 3 weeks.
> >This is leading me to another question -- any best practices/tools
> >to retire the old data on a regular basis? Regards, Qiming
> 
> There is a tool: ceilometer-expirer
> 
> I tried to use it on a mysql database, since I had the same table
> size problem as you and it made the machine hit swap. I think it
> tries to load the whole table in memory.
> Just to see if it would eventually finish, I let it run for 1 week
> before throwing away the whole database and move on.
> 
> Now I use Ceilometer's pipeline to forward events to elasticsearch
> via udp + logstash and do not use Ceilometer's DB or API at all.

Ah, that is something worth a try.  Thanks.

Regards,
 Qiming
 
> Best,
> Daniele
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Sean Dague
On 09/25/2014 09:36 AM, Thierry Carrez wrote:
> Thierry Carrez wrote:
>> Kilo Design Summit: Nov 4-7
>> Kilo-1 milestone: Dec 11
>> Kilo-2 milestone: Jan 29
>> Kilo-3 milestone, feature freeze: March 12
>> 2015.1 ("Kilo") release: Apr 23
>> L Design Summit: May 18-22
> 
> Following feedback on the mailing-list and at the cross-project meeting,
> there is growing consensus that shifting one week to the right would be
> better. It makes for a short L cycle, but avoids losing 3 weeks between
> Kilo release and L design summit. That gives:
> 
> Kilo Design Summit: Nov 4-7
> Kilo-1 milestone: Dec 18
> Kilo-2 milestone: Feb 5
> Kilo-3 milestone, feature freeze: Mar 19
> 2015.1 ("Kilo") release: Apr 30
> L Design Summit: May 18-22
> 
> If you prefer a picture, see attached PDF.

+1

-Sean


-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Kilo release cycle schedule proposal

2014-09-25 Thread Thierry Carrez
Thierry Carrez wrote:
> Kilo Design Summit: Nov 4-7
> Kilo-1 milestone: Dec 11
> Kilo-2 milestone: Jan 29
> Kilo-3 milestone, feature freeze: March 12
> 2015.1 ("Kilo") release: Apr 23
> L Design Summit: May 18-22

Following feedback on the mailing-list and at the cross-project meeting,
there is growing consensus that shifting one week to the right would be
better. It makes for a short L cycle, but avoids losing 3 weeks between
Kilo release and L design summit. That gives:

Kilo Design Summit: Nov 4-7
Kilo-1 milestone: Dec 18
Kilo-2 milestone: Feb 5
Kilo-3 milestone, feature freeze: Mar 19
2015.1 ("Kilo") release: Apr 30
L Design Summit: May 18-22

If you prefer a picture, see attached PDF.

-- 
Thierry Carrez (ttx)


kilo.pdf
Description: Adobe PDF document
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Pasquale Porreca
The problem to use a different tag than UUID is that it won't be 
possible (for what I know) to include this tag in the Bootstrap Protocol 
messages exchanged during the pre-boot phase.


Our original idea was to use the Client-identifier (option 61) or Vendor 
class identifier (option 60) of the dhcp request to achieve our target, 
but these fields cannot be controlled in libvirt template and so they 
cannot be set in OpenStack either. Instead the UUID is set in the
libvirt template by OpenStack and it is included in the messages 
exchanged in the pre-boot phase (option 97) by the instance trying to 
boot from network.


Reference: 
http://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml
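
For completeness, a rough sketch of the server-side handling we have in mind:
read the machine UUID from option 97 (one type byte followed by the 16-byte
UUID) and choose a boot file from its first two hexadecimal digits.  The
class-to-bootfile mapping and the helper name below are illustrative only:

import uuid

BOOTFILES_BY_CLASS = {
    "1a": "pxelinux/compute.0",
    "2b": "pxelinux/controller.0",
}

def bootfile_for_option97(raw_option_97):
    machine_uuid = uuid.UUID(bytes=raw_option_97[1:17])  # skip the type byte
    class_id = machine_uuid.hex[:2]
    return BOOTFILES_BY_CLASS.get(class_id, "pxelinux/default.0")

# e.g. bootfile_for_option97(b'\x00' + uuid.UUID('1a' + 30 * '0').bytes)
#      -> 'pxelinux/compute.0'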



On 09/25/14 14:43, Andrew Laski wrote:


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:
I will briefly explain our use case. This idea is related to another 
project to enable the network boot in OpenStack 
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance


We want to make use of the extra-dhcp-opt to indicate as tftp server 
a specific instance inside our deployed system, so it will provide 
the right operating system to the other instances booting from 
network (once the feature from the linked blueprint will be 
implemented).


On the tftp server we want to be able to filter what boot file to 
provide to different classes of instances, and our idea was to identify 
each class with 2 hexadecimal digits of the UUID (while the rest would be 
randomly generated, still effectively guaranteeing its uniqueness).


It seems like this would still be achievable using the instance tags 
feature that Matt mentioned.  And it would be more clear since you 
could use human readable class names rather than relying on knowing 
that part of the UUID had special meaning.


If you have a need to add specific information to an instance like 
'boot class' or want to indicate that an instance in two different 
clouds is actually the same one, the Pumphouse use case, that 
information should be something we layer on top of an instance and not 
something we encode in the UUID.



Anyway this is a customization for our specific environment and for a 
feature that is still in early proposal stage, so we wanted to 
propose as a separate feature to allow user custom UUID and manage 
the generation out of OpenStack.

On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
mailto:rpodoly...@mirantis.com>> wrote:

Are there any known gotchas with support of this feature in 
REST APIs

(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the 
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we 
knew there was no one using null instance UUIDs or duplicates for 
that matter.


The instance object already enforces that the UUID field is unique 
but the database schema doesn't.  I'll be re-proposing that for Kilo 
when it opens up.


If it's a matter of tagging an instance, there is also the tags 
blueprint [2] which will probably be proposed again for Kilo.


[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread Daniel P. Berrange
On Thu, Sep 25, 2014 at 01:52:48PM +0100, John Garbutt wrote:
> On 25 September 2014 11:44, Daniel P. Berrange  wrote:
> > To use the runway system, we need to have a frequently updated list
> > of blueprints which are a priority to review / merge. Once we have
> > such a list, IMHO, adding the fixed runway slots around that does
> > not do anything positive for me as a reviewer. If we have a priority
> > list of blueprints that is accurate & timely updated, I'd be far
> > more effective if I just worked directly from that list.
> 
> I am proposing we do that for kilo-1 and kilo-2.
> 
> > Please just focus on the maintaining
> > the blueprint priority list.
> 
> I am trying to. I clearly failed.
> 
> 
> The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
> we work harder on getting people to buy into the priorities that are
> set, and actively provoke more debate on their "correctness", and we
> reduce the bar for what needs a blueprint.
> 
> We can't have 50 high priority blueprints, it doesn't mean anything,
> right? We need to trim the list down to a manageable number, based on
> the agreed project priorities. That's all I mean by slots / runway at
> this point.

I would suggest we don't try to rank high/medium/low as that is
too coarse, but rather just an ordered priority list. Then you
would not be in the situation of having 50 high blueprints. We
would instead naturally just start at the highest priority and
work downwards. 

> > The runways
> > idea is just going to make me less efficient at reviewing. So I'm
> > very much against it as an idea.
> 
> This proposal is different to the runways idea, although it certainly
> borrows aspects of it. I just don't understand how this proposal has
> all the same issues?
> 
> 
> The key to the kilo-3 proposal, is about getting better at saying no,
> this blueprint isn't very likely to make kilo.
> 
> If we focus on a smaller number of blueprints to review, we should be
> able to get a greater percentage of those fully completed.
>
> I am just using slots/runway-like ideas to help pick the high priority
> blueprints we should concentrate on, during that final milestone.
> Rather than keeping the distraction of 15 or so low priority
> blueprints, with those poor submitters jamming up the check queue, and
> constantly rebasing, and having to deal with the odd stray review
> comment they might get lucky enough to get.
>
> Maybe you think this bit is overkill, and that's fine. But I still
> think we need a way to stop wasting so much of people's time on things
> that will not make it.

The high priority blueprints are going to end up being mostly the big
scope changes which take a lot of time to review & probably go through
many iterations. The low priority blueprints are going to end up being
the small things that don't consume significant resource to review and
are easy to deal with in the time we're waiting for the big items to
go through rebases or whatever. So what I don't like about the runways
slots idea is that removes the ability to be agile and take the initiative
to review & approve the low priority stuff that would otherwise never
make it through.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo Blueprints and Specs

2014-09-25 Thread John Garbutt
On 25 September 2014 11:44, Daniel P. Berrange  wrote:
> To use the runway system, we need to have a frequently updated list
> of blueprints which are a priority to review / merge. Once we have
> such a list, IMHO, adding the fixed runway slots around that does
> not do anything positive for me as a reviewer. If we have a priority
> list of blueprints that is accurate & timely updated, I'd be far
> more effective if I just worked directly from that list.

I am proposing we do that for kilo-1 and kilo-2.

> Please just focus on the maintaining
> the blueprint priority list.

I am trying to. I clearly failed.


The proposal is to keep kilo-1, kilo-2 much the same as juno. Except,
we work harder on getting people to buy into the priorities that are
set, and actively provoke more debate on their "correctness", and we
reduce the bar for what needs a blueprint.

We can't have 50 high priority blueprints, it doesn't mean anything,
right? We need to trim the list down to a manageable number, based on
the agreed project priorities. That's all I mean by slots / runway at
this point.

Does this sound reasonable?


> The runways
> idea is just going to make me less efficient at reviewing. So I'm
> very much against it as an idea.

This proposal is different to the runways idea, although it certainly
borrows aspects of it. I just don't understand how this proposal has
all the same issues?


The key to the kilo-3 proposal, is about getting better at saying no,
this blueprint isn't very likely to make kilo.

If we focus on a smaller number of blueprints to review, we should be
able to get a greater percentage of those fully completed.

I am just using slots/runway-like ideas to help pick the high priority
blueprints we should concentrate on, during that final milestone.
Rather than keeping the distraction of 15 or so low priority
blueprints, with those poor submitters jamming up the check queue, and
constantly rebasing, and having to deal with the odd stray review
comment they might get lucky enough to get.


Maybe you think this bit is overkill, and that's fine. But I still
think we need a way to stop wasting so much of people's time on things
that will not make it.


Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] logging around olso lockutils

2014-09-25 Thread Sean Dague
Spending a ton of time reading logs, oslo locking ends up basically
creating a ton of output at DEBUG that you have to mentally filter to
find problems:

2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Created new semaphore "iptables" internal_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Acquired semaphore "iptables" lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Attempting to grab external lock "iptables" external_lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:178
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got file lock "/opt/stack/data/nova/nova-iptables" acquire
/opt/stack/new/nova/nova/openstack/common/lockutils.py:93
2014-09-24 18:44:49.240 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Got semaphore / lock "_do_refresh_provider_fw_rules" inner
/opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-24 18:44:49.244 DEBUG nova.compute.manager
[req-b91cb1c1-f211-43ef-9714-651eeb3b2302
DeleteServersAdminTestXML-1408641898
DeleteServersAdminTestXML-469708524] [instance:
98eb8e6e-088b-4dda-ada5-7b2b79f62506] terminating bdm
BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2014-09-24T18:44:42Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/vda',device_type='disk',disk_bus=None,guest_format=None,id=43,image_id='262ab8a2-0790-49b3-a8d3-e8ed73e3ed71',instance=,instance_uuid=98eb8e6e-088b-4dda-ada5-7b2b79f62506,no_device=False,snapshot_id=None,source_type='image',updated_at=2014-09-24T18:44:42Z,volume_id=None,volume_size=None)
_cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2407
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Released file lock "/opt/stack/data/nova/nova-iptables" release
/opt/stack/new/nova/nova/openstack/common/lockutils.py:115
2014-09-24 18:44:49.248 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Releasing semaphore "iptables" lock
/opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-24 18:44:49.249 DEBUG nova.openstack.common.lockutils
[req-d7443a9c-1eb5-42c3-bf40-aadd20f0452f
ListImageFiltersTestXML-1316776531 ListImageFiltersTestXML-132181290]
Semaphore / lock released "_do_refresh_provider_fw_rules" inner

Also readable here:
http://logs.openstack.org/01/123801/1/check/check-tempest-dsvm-full/b5f8b37/logs/screen-n-cpu.txt.gz#_2014-09-24_18_44_49_240

(Yes, it's kind of ugly)

What occurred to me is that in debugging locking issues what we actually
care about is 2 things semantically:

#1 - tried to get a lock, but someone else has it. Then we know we've
got lock contention.
#2 - something is still holding a lock after some "long" amount of time.

#2 turned out to be a critical bit in understanding one of the worst
recent gate impacting issues.

You can write a tool today that analyzes the logs and shows you these
things. However, I wonder if we could actually do something creative in
the code itself to do this already. I'm curious if the creative use of
Timers might let us emit log messages under the conditions above
(someone with better understanding of python internals needs to speak up
here). Maybe it's too much overhead, but I think it's worth at least
asking the question.
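
Something along these lines, as a standalone sketch (not oslo code; the helper
name and the 10s threshold are made up):

import logging
import threading
import time
from contextlib import contextmanager

LOG = logging.getLogger(__name__)

@contextmanager
def monitored_lock(lock, name, warn_after=10.0):
    # Condition #1: a non-blocking attempt first tells us the lock is contended.
    if not lock.acquire(False):
        LOG.debug('lock "%s" is contended, waiting', name)
        lock.acquire()
    acquired_at = time.time()

    # Condition #2: the Timer fires only if we are still holding the lock later.
    timer = threading.Timer(
        warn_after,
        lambda: LOG.warning('lock "%s" still held after %.1fs',
                            name, time.time() - acquired_at))
    timer.start()
    try:
        yield
    finally:
        timer.cancel()
        lock.release()

# usage:
#   iptables_lock = threading.Lock()
#   with monitored_lock(iptables_lock, "iptables"):
#       do_slow_thing()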

The same issue exists when it comes to processutils, I think: warning
that a command is still running after 10s might be really handy, because
it turns out that issue #2 was caused by this, and it took quite a bit
of decoding to figure that out.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-25 Thread Andrew Laski


On 09/25/2014 04:18 AM, Pasquale Porreca wrote:
I will briefly explain our use case. This idea is related to another 
project to enable the network boot in OpenStack 
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance


We want to make use of the extra-dhcp-opt to indicate as tftp server a 
specific instance inside our deployed system, so it will provide the 
right operating system to the other instances booting from network 
(once the feature from the linked blueprint will be implemented).


On the tftp server we want to be able to filter what boot file to 
provide to different classes of instances, and our idea was to identify 
each class with 2 hexadecimal digits of the UUID (while the rest would be 
randomly generated, still effectively guaranteeing its uniqueness).


It seems like this would still be achievable using the instance tags 
feature that Matt mentioned.  And it would be more clear since you could 
use human readable class names rather than relying on knowing that part 
of the UUID had special meaning.


If you have a need to add specific information to an instance like 'boot 
class' or want to indicate that an instance in two different clouds is 
actually the same one, the Pumphouse use case, that information should 
be something we layer on top of an instance and not something we encode 
in the UUID.



Anyway this is a customization for our specific environment and for a 
feature that is still in early proposal stage, so we wanted to propose 
as a separate feature to allow user custom UUID and manage the 
generation out of OpenStack.

On 09/24/14 23:15, Matt Riedemann wrote:



On 9/24/2014 3:17 PM, Dean Troyer wrote:

On Wed, Sep 24, 2014 at 2:58 PM, Roman Podoliaka
mailto:rpodoly...@mirantis.com>> wrote:

Are there any known gotchas with support of this feature in REST 
APIs

(in general)?


I'd be worried about relying on a user-defined attribute in that use
case, that's ripe for a DOS.  Since these are cloud-unique I wouldn't
even need to be in your project to block you from creating that clone
instance if I knew your UUID.

dt

--

Dean Troyer
dtro...@gmail.com 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this a bit before approving the 
'enforce-unique-instance-uuid-in-db' blueprint [1].  As far as we 
knew there was no one using null instance UUIDs or duplicates for 
that matter.


The instance object already enforces that the UUID field is unique 
but the database schema doesn't.  I'll be re-proposing that for Kilo 
when it opens up.


If it's a matter of tagging an instance, there is also the tags 
blueprint [2] which will probably be proposed again for Kilo.


[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db

[2] https://blueprints.launchpad.net/nova/+spec/tag-instances






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >