Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-03 Thread Aleksey Kasatkin
Igor,

> For VIP allocation we should use POST request. It's ok to use PUT for
setting (changing) IP address.

My proposal is about setting IP addresses for VIPs only (auto and manual);
no other allocations are involved.
Do you propose to use POST for first-time IP allocation and PUT for IP
re-allocation?
Or to use POST for adding entries to some new 'vips' table (so that all VIP
descriptions will be added there from network roles)?

> We don't store network_role, namespace and node_roles within VIPs.
> They belong to network roles. So how are you going to retrieve
> them? Did you plan to make some changes to our data model? You know,
> it's not a good idea to make connections between network roles and
> VIPs each time you make a GET request to list them.

This is the current format we use in the API when VIPs are retrieved.
Do you propose to use a different one for address allocation?

> Should we return VIPs that aren't allocated, and if so - why? If they
> would be just, you know, fetched from network roles - that's a bad
> design. Each VIP should have an explicit entry in VIPs database table.

I propose to return VIPs even without IP addresses, to show the user what
VIPs exist so that he can assign IP addresses to them. Yes, I assumed that
the information will be retrieved from network roles, as it is done now. Do
you propose to create a separate table for VIPs, or to extend the ip_addrs
table to store VIP information?

> We definitely should handle `null` this way, but I think from the API POV
> it would be clearer just not to pass the `ipaddr` value if the user wants
> it to be auto-allocated. I mean, let's keep `null` as an implementation
> detail and force API users simply not to pass this key if they don't
> want to.

Oh, I didn't write it here: I thought about keeping IP addresses as-is when
the corresponding key is skipped by the user.

> The thing is that there's no way to *warn* users through the API. You
> can either reject or accept a request. So all we can do is to
> introduce some `force` flag, and if it's passed - ignore overlapping.

Network verification currently works this way: it can pass with a warning
message.
But I like your proposal better.

> overlaps with the network of current environment which does not
> match the network role of the VIP?

So, when the IP address of the VIP matches some IP range that corresponds
to a network different from the one that the network role bound to the VIP
is mapped to.
E.g. the IP address matches the 'public' network, but the VIP is bound to
the 'management/vip' role, which is mapped to the 'management' network.
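To make the manual/auto/skip semantics being discussed concrete, here is a minimal Python sketch of how a PUT handler could merge a payload into existing VIP records. This is not actual Fuel/Nailgun code; the payload shape (keyed by network role) and the `allocate_automatically` helper are assumptions made for illustration only.

```python
def allocate_automatically(vip):
    """Hypothetical stand-in for the auto-allocation step: pick a free
    address from the network that the VIP's network role is mapped to."""
    return '192.168.0.2'


def apply_vip_update(current_vips, changes):
    """Merge a PUT payload into existing VIP records.

    Semantics discussed in this thread:
      'ipaddr': '10.10.10.10' -> set the address manually
      'ipaddr': None          -> (re-)allocate the address automatically
      key omitted entirely    -> keep the current address as-is
    """
    updated = []
    for vip in current_vips:
        change = changes.get(vip['network_role'], {})
        new = dict(vip)
        if 'ipaddr' in change:
            if change['ipaddr'] is None:
                new['ipaddr'] = allocate_automatically(vip)
            else:
                new['ipaddr'] = change['ipaddr']
        updated.append(new)
    return updated
```

The point of the sketch is that an explicit `null` and an omitted key can carry different meanings, which is exactly the distinction the thread is debating.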


Thanks,



Aleksey Kasatkin


On Mon, Nov 2, 2015 at 7:06 PM, Igor Kalnitsky 
wrote:

> Hey Aleksey,
>
> I agree that we need a separate API call for VIP allocation, though I
> don't agree on some points you have proposed. See my comments below.
>
> > use PUT to change VIPs addresses (set them manually or request
> > to allocate them automatically)
>
> PUT requests SHOULD NOT be used for VIP allocation; by RESTful API
> convention, a PUT request should be used for changing (editing)
> entities, not for creating new ones. For VIP allocation we should use
> a POST request. It's OK to use PUT for setting (changing) an IP address.
>
> > vips: [
> > {
> > 'network_role': 'management',
> > 'namespace': 'haproxy',
> > 'ipaddr': '10.10.10.10',
> > 'node_roles': ['controller']
> > },...
> > ]
>
> There I have two comments:
>
> * We don't need the "vips" word in API output - let's return a JSON
> list with VIPs and that's it.
> * We don't store network_role, namespace and node_roles within VIPs.
> They belong to network roles. So how are you going to retrieve
> them? Did you plan to make some changes to our data model? You know,
> it's not a good idea to make connections between network roles and
> VIPs each time you make a GET request to list them.
>
> > When it is set to None, IP address will be allocated automatically
>
> We definitely should handle `null` this way, but I think from the API POV
> it would be clearer just not to pass the `ipaddr` value if the user wants
> it to be auto-allocated. I mean, let's keep `null` as an implementation
> detail and force API users simply not to pass this key if they don't
> want to.
>
> > When the user runs GET request for the first time, all 'ipaddr'
> > fields are equal to None.
>
> Should we return VIPs that aren't allocated, and if so - why? If they
> would be just, you know, fetched from network roles - that's a bad
> design. Each VIP should have an explicit entry in VIPs database table.
>
> > There is a question, what to do when the given address overlaps
> > with the network from another environment? My opinion that those
> > should pass with a warning message.
>
> The thing is that there's no way to *warn* users through the API. You
> can either reject or accept a request. So all we can do is to
> introduce some `force` flag, and if it's passed - ignore overlapping.
>
> I didn't get what you mean by:
>
> > overlaps 

[openstack-dev] [NFV][Telco] Resigning TelcoWG core team

2015-11-03 Thread Marc Koderer
Hello TelcoWG,

Due to personal reasons I have to resign my TelcoWG core team membership.
I will remove myself from the core reviewer group.

Thanks for all the support!

Regards
Marc


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DevStack errors...

2015-11-03 Thread Neil Jerram
On 02/11/15 23:56, Thales wrote:
I'm trying to get DevStack to work, but am getting errors.  Is this a good list 
to ask questions about this?  I can't seem to get answers anywhere I look.  I 
tried the openstack list, but it kind of moves slowly.

Thanks for any help.

Regards, John

In case it helps, I had no problem using DevStack's stable/liberty branch 
yesterday.  If you don't specifically need master, you might try that too:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

  ...

I also just looked again at your report on openstack@.  Were you using Python 
2.7?

I expect you'll have seen discussions like 
http://stackoverflow.com/questions/23176697/importerror-no-module-named-io-in-ubuntu-14-04.
  It's not obvious to me how those can be relevant, though, as they seem to 
involve corruption of an existing virtualenv, whereas DevStack I believe 
creates a virtualenv from scratch.

When you say 'on Ubuntu 14.04', are we talking a completely fresh install with 
nothing else on it?  That's the most reliable way to run DevStack - people 
normally create a fresh disposable VM for this kind of work.

Regards,
Neil



Re: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders OpenvStorage CI - request to re-add the cinder driver

2015-11-03 Thread Eduard Matei
Hi,

Trying to get more attention to this ...

We had our driver removed by commit:
https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
 due to no CI.

Please let me know if there is something wrong so we can fix it ASAP and
have the driver back in M.

The CI is commenting using the name "Open vStorage CI" instead of
"CloudFounders OpenvStorage CI".

Thanks,

Eduard

On Thu, Sep 3, 2015 at 10:33 AM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

>
> Hi,
>
> Trying to get more attention to this ...
>
> We had our driver removed by commit:
> https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
> due to no CI.
>
> Pls let me know if there is something wrong so we can fix it asap so we
> can have the driver back in Liberty (if possible).
>
> The CI is commenting using the name "Open vStorage CI" instead of
> "CloudFounders OpenvStorage CI".
>
> Thanks,
>
> Eduard
>
>
>


-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

Disclaimer:
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed.
If you are not the named addressee or an employee or agent responsible
for delivering this message to the named addressee, you are hereby
notified that you are not authorized to read, print, retain, copy or
disseminate this message or any part of it. If you have received this
email in error we request you to notify us by reply e-mail and to
delete all electronic files of the message. If you are not the
intended recipient you are notified that disclosing, copying,
distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
E-mail transmission cannot be guaranteed to be secure or error free as
information could be intercepted, corrupted, lost, destroyed, arrive
late or incomplete, or contain viruses. The sender therefore does not
accept liability for any errors or omissions in the content of this
message, and shall have no liability for any loss or damage suffered
by the user, which arise as a result of e-mail transmission.


Re: [openstack-dev] [cinder] Specifications to support High Availability Active-Active configurations in Cinder

2015-11-03 Thread Dulko, Michal
On Tue, 2015-10-20 at 20:17 +0200, Gorka Eguileor wrote:
> Hi,
> 
> We finally have ready for review all specifications required to support
> High Availability Active/Active configurations in Cinder's Volume nodes.
> 
> There is a Blueprint to track this effort [1] and the specs are as follow:
> 
> - General description of the issues and solutions [2]
> - Removal of Races on API nodes [3]
> - Job distribution to clusters [4]
> - Cleanup process of crashed nodes [5]
> - Data corruption prevention [6]
> - Removing local file locks from the manager [7]
> - Removing local file locks from drivers [8]
> 
> (snip)
> 
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support
> [2]: https://review.openstack.org/232599
> [3]: https://review.openstack.org/207101
> [4]: https://review.openstack.org/232595
> [5]: https://review.openstack.org/236977
> [6]: https://review.openstack.org/237076
> [7]: https://review.openstack.org/237602
> [8]: https://review.openstack.org/237604

I just want to give a heads-up that during the Summit we discussed
this topic, and the specs will be modified to reflect decisions made there.
General notes from the sessions can be found in [1], [2]. The main points
are that at the DLM session [3] it was decided that projects can hard-depend
on a DLM - which may make things easier for us. Also, we want to disable
automatic cleanup of stale resources in the first version of c-vol A/A,
because such an implementation should be simpler and safer.

[1] https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa
[2] https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks
[3] https://etherpad.openstack.org/p/mitaka-cross-project-dlm




Re: [openstack-dev] About the response parameters of the API List volumes

2015-11-03 Thread Duncan Thomas
Absolutely an oversight as far as I can tell - probably similar problems in
the admin --all-tenants view of various other cinder resources. A patch is
welcome once we get microversions landed; in the meantime it would be
great if you could file a bug, and ideally check any other views (snaps,
backups, cgs, etc.) for the same problem.

Thanks

On 30 October 2015 at 17:45, Matt Riedemann 
wrote:

>
>
> On 10/27/2015 9:57 PM, chenying wrote:
>
>> hi, Folks
>>
>> The API: GET/v2/​{tenant_id}​/volumes  List volumes
>>
>> When we use the tenant admin to list all the created volumes, we can
>> list all tenants' volumes. But the response parameters do not include
>>
>> the parameter tenant_id. For an administrator, it is reasonable to see
>> the tenant_id of a volume from the response.
>>
>> So why don't we add the tenant_id to the response parameters of this
>> API? What's the reason? Thanks.
>>
>>
>> Best regard.
>> chenying(IRC)
>> ying.c...@huawei.com
>>
>>
>>
>>
>>
> It might just be an oversight. There is a similar issue with listing all
> tenant server groups as an admin in nova, and a spec was proposed [1] to
> fix that; we use microversions (in nova) for such changes. Cinder is
> working toward microversion support in Mitaka, I believe, to be able to
> easily make API changes like this, which are otherwise backward-incompatible
> since you're changing the response.
>
> [1] https://review.openstack.org/#/c/209917/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- 
Duncan Thomas


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-03 Thread Aleksandr Didenko
Hi,

let me try to rephrase this a bit; Bogdan will correct me if I'm wrong
or missing something.

We have a set of top-scope manifests (called Fuel puppet tasks) that we use
for OpenStack deployment. We execute those tasks with "puppet apply". Each
task is supposed to bring the target system into some desired state, so
puppet compiles a catalog and applies it. So basically, a puppet catalog =
the desired system state.

So we can compile* catalogs for all top-scope manifests in the master branch
and store those compiled* catalogs in the fuel-library repo. Then, for each
proposed patch, CI will compare the new catalogs with the stored ones and
print out the difference, if any. This will pretty much show what is going
to be changed in the system configuration by the proposed patch.

We have discussed such checks several times before, IIRC, but we did not
have the right tools to implement them. Well, now we do :) I think it could
be quite useful even in non-voting mode.

* By "compiled" catalogs I don't mean actual/real puppet catalogs; I mean
sorted lists of all classes/resources with all the parameters that we find
during puppet-rspec tests in our Noop test framework, something like
standard puppet-rspec coverage. See example [0] for the networks.pp task [1].
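As a rough illustration of the comparison step, here is a hypothetical sketch (not the actual fuel-library CI code) that diffs two such "compiled" catalogs, represented as plain dicts mapping resource titles to their parameters:

```python
def diff_catalogs(committed, proposed):
    """Compare two 'compiled' catalogs, each a dict mapping a resource
    title to its {parameter: value} dict, and report every resource or
    parameter that was added, removed, or changed by a patch."""
    changes = []
    for res in sorted(set(committed) | set(proposed)):
        old, new = committed.get(res), proposed.get(res)
        if old is None:
            changes.append('+ resource %s' % res)
        elif new is None:
            changes.append('- resource %s' % res)
        else:
            for param in sorted(set(old) | set(new)):
                if param not in old:
                    changes.append('+ %s[%s] = %r' % (res, param, new[param]))
                elif param not in new:
                    changes.append('- %s[%s] (was %r)' % (res, param, old[param]))
                elif old[param] != new[param]:
                    changes.append('~ %s[%s]: %r -> %r'
                                   % (res, param, old[param], new[param]))
    return changes
```

E.g. a patch that drops the mpm_module parameter from the apache task would produce a single '-' line, which reviewers would then have to acknowledge by updating the committed state.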

Regards,
Alex

[0] http://paste.openstack.org/show/477839/
[1]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp


On Mon, Nov 2, 2015 at 5:35 PM, Bogdan Dobrelya 
wrote:

> Here is a docs update [1] for the patch [2] - which is rather a
> framework - being discussed here.
> Note that the tool fuel_noop_tests.rb that Dmitry Ilyin wrote became a
> Noop testing framework, which is Fuel-specific. But the same approach may
> be used for any set of puppet modules and a composition-layer manifest
> with a dataset of deployment parameters that you may want tracked
> against potential regressions.
>
> I believe we should think about how that Noop testing framework (and
> the deployment data checks under discussion as well) might benefit the
> puppet community.
>
> [1] https://review.openstack.org/240901
> [2] https://review.openstack.org/240015
>
> On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> > Hello.
> > There are a few types of deployment regressions possible: when changing
> > a module version to be used from upstream (or an internal module repo),
> > for example from Liberty to Mitaka; or when changing the composition
> > layer (modular tasks in Fuel) - specifically, adding/removing/changing
> > classes and class parameters.
> >
> > An example regression for swift deployment data [0]: something was
> > changed unnoticed by existing noop tests, and as a result
> > the swift data began to be stored in the root partition.
> >
> > The suggested per-commit regression detection [1] for deployment data
> > automatically detects whether a class in a noop catalog run has
> > gained or lost a parameter, or had one updated to another value, by
> > a patch under test. Later, this check could even replace existing noop
> > tests - everything will be checked automatically, provided every
> > deployment scenario is covered by a corresponding template; templates
> > are represented as YAML files [2] in Fuel.
> > Note: The tool [3] can help to get all deployment cases (-Y) and all
> > deployment tasks (-S) as well.
> >
> > I propose to review the patch [1], understand how it works (see the
> > tl;dr section below) and start using it ASAP. The earlier we commit the
> > "initial" data layer state, the fewer regressions will pop up.
> >
> > (tl;dr)
> > The check should be done for every modular component (aka deployment
> > task). Data generated in the noop catalog run for all classes and
> > defines of a given deployment task should be verified against its
> > "acknowledged" (committed) state.
> > And fail the test gate if changes have been found, like a new parameter
> > with a defined value, a removed parameter, or a changed parameter value.
> >
> > In order to remove a regression, a patch author will have to add (and
> > reviewers should acknowledge) detected changes in the committed state of
> > the deployment data. This may be done manually, with a tool like [3] or
> > by a pre-commit hook, or even at the CI side!
> > The regression check should show the diff between committed state and a
> > new state proposed in a patch. Changed state should be *reviewed* and
> > accepted with a patch to become a committed one. So the deployment data
> > will evolve with *only* approved changes. And those changes would be
> > very easy to discover for each patch under review!
> > No more regressions, everyone happy.
> >
> > Examples:
> >
> > - A. A patch author removed the mpm_module parameter from the
> > composition layer (apache modular task). The test should fail with a
> >
> > Diff:
> >   @@ -90,7 +90,7 @@
> >  manage_user=> 'true',
> >  max_keepalive_requests => '100',
> 

Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-11-03 Thread Ihar Hrachyshka

Reviving the thread.

At the design summit session dedicated to agent and plugin extensions [1],
the following was stated for l2 agent extensions (I'd appreciate it if
someone checks me on the following though):


- current l2 agent extensions mechanism lacks insight into agent details  
like bridges or vlan maps;


- in some cases, we don’t care about extension portability across multiple  
agents, so it’s not of concern if some of them use implementation details  
like bridges to set specific flows, or to wire up some additional ports to  
them;


- that said, we still don’t want extensions to have unlimited access to  
agent details; the rationale for hard constraints on what is seen inside  
extensions is that we cannot support backwards compatibility for *all*  
possible internal attributes of an agent; instead, we should explicitly  
define where we can make an effort to provide stable API into agent  
details, and what’s, on contrary, beyond real life use cases and hence can  
be left to be broken/refactored as neutron developers see fit; this API can  
be agent specific though;


- agent details that are to be passed into extensions should be driven by  
actual use cases. There were several subprojects mentioned in the session  
that are assumed to lack enough access to agent attributes to do their job  
without patching core ovs agent files. Those are: BGP-VPN, SFC, (anything  
else?) Those subprojects that are interested in extending l2 agent  
extension framework are expected to come up with a list of things missing  
in current implementation, so that neutron developers can agree on proper  
abstractions to provide missing details to extensions. For that goal, I set  
up a new etherpad to collect feedback from subprojects [2].


Once we collect use cases there and agree on an agent API for extensions
(even if per agent type), we will implement it, define it as a stable API,
and then pass objects that implement the API into extensions through the
extension manager. If extensions support multiple agent types, they can
still distinguish which API to use based on the agent type string passed
into the extension manager.
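To illustrate what such per-agent-type API objects might look like, here is a hypothetical Python sketch. The class and method names are invented for illustration and are not the interface neutron eventually settled on:

```python
class OVSAgentExtensionAPI(object):
    """Hypothetical stable facade an OVS agent could hand to its l2
    extensions, exposing only the agreed-upon details instead of the
    whole agent object."""
    agent_type = 'Open vSwitch agent'

    def __init__(self, int_br, tun_br):
        self._int_br = int_br
        self._tun_br = tun_br

    def request_int_br(self):
        return self._int_br

    def request_tun_br(self):
        return self._tun_br


class BgpvpnLikeExtension(object):
    """An extension that inspects the agent type string to decide which
    API calls are available, as described in the mail above."""

    def initialize(self, agent_api):
        if agent_api.agent_type == 'Open vSwitch agent':
            # OVS-specific wiring: e.g. add patch ports/flows on tun_br.
            self.bridge = agent_api.request_tun_br()
        else:
            # Feature not portable to this agent type.
            self.bridge = None
```

The facade keeps backward-compatibility promises manageable: only what the facade exposes is stable, while the rest of the agent remains free to be refactored.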


I really hope we start to collect use cases early, so that we have time to
polish the agent API and make it part of l2 extensions early in the Mitaka
cycle.


[1]: https://etherpad.openstack.org/p/mitaka-neutron-core-extensibility
[2]: https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion

Ihar

Ihar Hrachyshka  wrote:


On 30 Sep 2015, at 12:53, Miguel Angel Ajo  wrote:



Ihar Hrachyshka wrote:

On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?
Right now, what the bagpipe driver for networking-bgpvpn needs to
interact with is:

- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP entries)

Sounds very tightly coupled to OVS agent.
Please bear in mind, the extension interface will be available from  
different agent types
(OVS, SR-IOV, [eventually LB]), so this interface you're talking  
about could also serve as
a translation driver for the agents (where the translation is  
possible), I totally understand
that most extensions are specific agent bound, and we must be able  
to identify

the agent we're serving back exactly.
Yes, I do have this in mind, but what we've identified for now seems  
to be OVS specific.
Indeed it does. Maybe you can try to define the needed pieces in high  
level actions, not internal objects you need to access to. Like ‘-  
connect endpoint X to Y’, ‘determine segmentation id for a network’  
etc.
I've been thinking about this, but would tend to reach the conclusion  
that the things we need to interact with are pretty hard to abstract  
out into something that would be generic across different agents.   
Everything we need to do in our case relates to how the agents use  
bridges and represent networks internally: linuxbridge has one bridge  
per Network, while OVS has a limited number of bridges playing  
different roles for all networks with internal segmentation.


To look at the two things you  mention:
- "connect endpoint X to Y": what we need to do is redirect the
traffic destined to the gateway of a Neutron network to the thing
that will do the MPLS forwarding for the right BGP VPN context (called
a VRF), in our case br-mpls (that could be done with an OVS table too);
that action might be abstracted out to hide the details specific to
OVS, but I'm not sure how to name the destination in a way that
would be agnostic to these details, and this is not really relevant to
do until we have a relevant context in which the linuxbridge would
pass packets to something doing MPLS forwarding (OVS is currently the

[openstack-dev] [Magnum] [RFC] split pip line of functional testing

2015-11-03 Thread Qiao,Liyong

hi Magnum hackers:

Currently there is a pipeline in project-config to do Magnum functional
testing [1].

At the summit, we discussed that we need to split it per COE [2]. We can
do this by adding a new pipeline for testing:

  - '{pipeline}-functional-dsvm-magnum{coe}{job-suffix}'

where coe could be swarm/mesos/k8s, and then passing the COE in our
post_test_hook.sh [3]. Is this a good idea?

I still have other questions that need to be addressed before splitting
functional testing per COE:

1. How can we pass the COE parameter to tox in [4]? Or should we add some
new envs like [testenv:functional-swarm], [testenv:functional-k8s], etc.?
(Or is that stupid?)

2. There are also some common test cases - should we run them for all
COEs? (I think so.) But then how do we structure the source tree?

  functional/swarm
  functional/k8s
  functional/common


[1]https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L2288
[2]https://etherpad.openstack.org/p/mitaka-magnum-functional-testing
[3]https://github.com/openstack/magnum/blob/master/magnum/tests/contrib/post_test_hook.sh#L100
[4]https://github.com/openstack/magnum/blob/master/tox.ini#L19
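For question 1, one option that avoids passing a COE parameter through tox entirely is a dedicated testenv per COE, each pointing the test runner at its own subtree. This is a hypothetical sketch only - the env names, paths, and runner invocation are assumptions based on this mail, not the actual Magnum tox.ini:

```ini
# Each COE env narrows the test path to its own subtree;
# shared settings could live in a common base section.
[testenv:functional-swarm]
setenv = OS_TEST_PATH=./magnum/tests/functional/swarm
commands = ostestr {posargs}

[testenv:functional-k8s]
setenv = OS_TEST_PATH=./magnum/tests/functional/k8s
commands = ostestr {posargs}

[testenv:functional-mesos]
setenv = OS_TEST_PATH=./magnum/tests/functional/mesos
commands = ostestr {posargs}
```

Common cases under functional/common could then run either as their own env or as base classes imported by each COE tree.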

--
BR, Eli(Li Yong)Qiao



Re: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders OpenvStorage CI - request to re-add the cinder driver

2015-11-03 Thread Duncan Thomas
Hi

Have you posted a review to re-add the driver? I can't see one, though I
might be missing it.

If you have a review posted, please add a link here, but you are in good
shape to make the M release.

If you don't yet have a review posted, please prepare and submit one using
the normal gerrit process. Referencing the name of the CI in the commit
message makes reviewer's lives easier so is worth doing. If you post a
review now, again you will be in very good shape to get back in M.

Hope this helps.

-- 
Duncan Thomas

On 3 November 2015 at 11:43, Eduard Matei 
wrote:

> Hi,
>
> Trying to get more attention to this ...
>
> We had our driver removed by commit:
> https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
>  due to no CI.
>
> Pls let me know if there is something wrong so we can fix it asap so we
> can have the driver back in M.
>
> The CI is commenting using the name "Open vStorage CI" instead of
> "CloudFounders OpenvStorage CI".
>
> Thanks,
>
> Eduard
>
> On Thu, Sep 3, 2015 at 10:33 AM, Eduard Matei <
> eduard.ma...@cloudfounders.com> wrote:
>
>>
>> Hi,
>>
>> Trying to get more attention to this ...
>>
>> We had our driver removed by commit:
>> https://github.com/openstack/cinder/commit/f0ab819732d77a8a6dd1a91422ac183ac4894419
>> due to no CI.
>>
>> Pls let me know if there is something wrong so we can fix it asap so we
>> can have the driver back in Liberty (if possible).
>>
>> The CI is commenting using the name "Open vStorage CI" instead of
>> "CloudFounders OpenvStorage CI".
>>
>> Thanks,
>>
>> Eduard
>>
>>
>>
>
>
> --
>
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com
>  | eduard.ma...@cloudfounders.com
>
>
>
> *CloudFounders, The Private Cloud Software Company*
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas


[openstack-dev] [all][glance] No mid-cycle meetup for Mitaka

2015-11-03 Thread Flavio Percoco

Greetings

At the Mitaka summit, we discussed whether having a mid-cycle summit
was worth it for Mitaka. The outcome is that we should skip it this
time. Some reasons below:

1. We have a well-defined, narrow list of priorities that need to be
worked on.

2. Based on the above, there won't be enough content to make a
mid-cycle meetup worth it. We'll have ad-hoc meetings whenever extra
discussions are required.

3. There's a growing feeling that mid-cycle meetups have become a
requirement, and the team doesn't believe they should be. The team believes
meetups should happen when there are enough things to discuss and meeting
in person is the best way to do that.

4. Budget wise, it'll make all of our lives simpler.

I'd like to thank Sabari Kumar Murugesan from VMWare and Stuart
Mclaren/Erno Kuvaja from HP for offering their office space for the
mid-cycle. I hope such an offer will still be available in the future,
should a mid-cycle meetup be needed.

For those of you who didn't make it to the summit, I'd love to know
your thoughts on the above.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Jeff Peeler
On Tue, Nov 3, 2015 at 1:44 AM, Michal Rostecki  wrote:
> Hi,
>
> +1 to what Steven said about Kubernetes.
>
> I'd like to add that these 3 things (pid=host, net=host, -v) are supported
> by Marathon, so probably it's much less problematic for us than Kubernetes
> at this moment.

I don't actively track Kubernetes upstream, so this seemed like a
natural point of reevaluation. If Kubernetes still doesn't support the
Docker features Kolla needs, obviously it's a non-starter. Nice to
hear that Marathon does though.



[openstack-dev] [Nova] live migration sub-team meeting

2015-11-03 Thread Murray, Paul (HP Cloud)
Hi all,

Live migration was confirmed as a Nova priority for the Mitaka cycle and a 
sub-team section can be found on the priorities tracking page [1].

Most team members expressed they would like a regular IRC meeting for tracking 
work and raising blocking issues. Looking at the contributors here [2], most of 
the participants seem to be in the European continent (in time zones ranging 
from UTC to UTC+3) with a few in the US (please correct me if I am wrong). That 
suggests that a time around 1500 UTC makes sense.

I would like to invite suggestions for a day and time for a weekly meeting - 
you do not have to go by the above suggestion. When we have a time I will 
arrange the meeting and create a corresponding wiki page.

Please note that attendance is not a requirement for participation, but it will 
help to track or coordinate some activities.

Paul

[1]  https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
[2] https://etherpad.openstack.org/p/mitaka-live-migration

Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited   |   Registered Office: Cain Road, Bracknell, 
Berkshire, RG12 1HN   |Registered No: 690597 England   |VAT Number: GB 
314 1496 79

This e-mail may contain confidential and/or legally privileged material for the 
sole use of the intended recipient.  If you are not the intended recipient (or 
authorized to receive for the recipient) please contact the sender by reply 
e-mail and delete all copies of this message.  If you are receiving this 
message internally within the Hewlett Packard group of companies, you should 
consider the contents "CONFIDENTIAL".



Re: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders OpenvStorage CI - request to re-add the cinder driver

2015-11-03 Thread Eduard Matei
Hi Duncan,
here is the review: https://review.openstack.org/#/c/241174

Thanks,
Eduard


[openstack-dev] [all][glance] Summary from the Mitaka summit

2015-11-03 Thread Flavio Percoco

Greetings,

As usual, here's my summary from the summit. This time, it's limited
mostly to Glance as I wasn't able to spread myself across other rooms as I
normally do. Before I get into the details, I'd like to encourage
folks to do the same and write their own summaries (especially PTLs).
It's nice to read what will happen in other projects and to compare
the different visions and data from people who attended the summit.
Also, remember that there are many folks who had to skip the summit
but are still interested in hearing the news and, hopefully, providing
extra feedback.

Anyway, here's my collection of thoughts. Note that more detail can be
found in each spec and each etherpad.

For folks that didn't make it to the summit, I'd like to remind you
that, despite the agreements made at the summit, there's still a chance
for you to comment and weigh in. Nothing is written in stone, so
please provide your feedback.

Glance Trusts
=============

We discussed the implementation of Keystone's trusts in Glance[0][1].
We spent some time throughout the week talking with folks from the
Keystone team about whether it was recommended to use `Service Tokens` or
go ahead with trusts. The short version of that discussion is that,
while the service token work is being finalized, we'll go ahead and add
support for trusts. This may be superseded in the future by the use of
`service tokens`, but that will have to be discussed when the time comes.

A small note for folks not familiar with some of Glance's issues: this
work is important as it'll help fix a long-standing issue with the
upload/download of big images. In current Glance, tokens may
expire during such operations (unless the token lifetime is extended
in keystone), which results in an auth error being returned to
the user.

Trusts won't be used for every request but rather for those that
involve sending and receiving data (upload and download). This is to
reduce the performance impact creating trusts has on the API.

[0] https://review.openstack.org/#/c/229878/
[1] https://etherpad.openstack.org/p/mitaka-glance-trusts

Glance Artifacts REpository (Glare)
===================================

Do you remember the *EXPERIMENTAL* Glance V3 API? We had that
famous discussion again, the one we had in Vancouver, Paris and
Atlanta :) This time, however, we were able to reason about it with
the implementation in mind and, for the sake of backwards
compatibility, DefCore support and not having another major API
release, we've agreed to pull it out into its own endpoint/process.

In addition to the above, the experimental version of this API will be
refactored a bit to be compliant with DefCore requirements. More
precisely, the team has engaged with the API WG and asked them to review
the API implementation. There was quite some feedback, which will be
addressed during Mitaka. It's still unclear whether the API will be
considered stable at the end of the cycle. This will be revisited when
the time comes.

As far as the python bindings go, we'll pull into glanceclient the
work that was done during liberty. Therefore, glanceclient will be the
python library to use, whereas the CLI will be in openstackclient.

We also participated in Murano's and App Catalog's meetup to discuss
how we can move forward with this. The result of that discussion is
that these teams will look into using Glare. They had several
questions and we went through all of them. I'm personally super happy
about this collaboration.

Glance image upload reloaded
============================

We spent most of our summit time discussing this topic. This was
discussed in 1 fishbowl session, 1 working session and half meetup.
The topics discussed were related to improving the current upload
workflow by refactoring it into something that is interoperable,
discoverable and has a lower impact on deployments without affecting
the requested functionality by users.

This all sounds like a whole bunch of fancy words put together to
describe something that won't happen, ever. However, I have to say that
after the lengthy discussions with the community before and during the
summit, we've reached a state in the proposal that seems to accomplish
all of the above in a nice way.

I'm not going to get into the details of what the results have been
because to explain that, I'd have to also explain all the problems and
requirements. This is exactly what the spec[0] does. Please, go and
read it, provide feedback, and help out. This is important.[1]

Thanks Brian Rosmaita and Stuart Mclaren for preparing and moderating
the session.

[0] https://review.openstack.org/#/c/232371/
[1] https://etherpad.openstack.org/p/Mitaka-glance-image-import-reloaded

Glance Priorities
=================

Starting this cycle, we've agreed on having a list of
priorities[0][1]. These priorities mention what the team is going to
be focused on. In other words, reviewers will give a higher priority
to the topics mentioned on that list as they impact 

Re: [openstack-dev] [Neutron][db][migration] Neutron db migration by python scripts

2015-11-03 Thread Ihar Hrachyshka

Zhi Chang  wrote:


Hi, all
Now, I have to change database model definitions if I want to upgrade the db. A database migration
script is then generated when I run "neutron-db-manage revision -m "description of revision"
--autogenerate", and the database is upgraded when I run "neutron-db-manage upgrade head".
I want to upgrade the db, and I plan to write the db migration scripts manually
instead of changing database model definitions. Is there a way to do this?


I would start by asking *why* you may want to go with manual instead of
automatic generation. It's so much easier to do it the automatic way, and you
won't be bothered by classifying operations into the corresponding branches.


Ihar



Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-03 Thread Davanum Srinivas
Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims


On Mon, Nov 2, 2015 at 11:05 PM, Joshua Harlow  wrote:
> Thanks robert,
>
> I've started to tweak https://review.openstack.org/#/c/209661/ with regard
> to the outcome of that (at least to cover the basics)... Should be finished
> up soon (I hope).
>
>
> Robert Collins wrote:
>>
>> Hi, at the summit we had a big session on distributed lock managers
>> (DLMs).
>>
>> I'd just like to highlight the conclusions we came to in the session (
>>  https://etherpad.openstack.org/p/mitaka-cross-project-dlm
>>  )
>>
>> Firstly OpenStack projects that want to use a DLM can make it a hard
>> dependency. Previously we've had an unwritten policy that DLMs should
>> be optional, which has led to us writing poor DLM-like things backed
>> by databases :(. So this is a huge and important step forward in our
>> architecture.
>>
>> As in our existing pattern of usage for database and message-queues,
>> we'll use an oslo abstraction layer: tooz. This doesn't preclude a
>> different answer in special cases - but they should be considered
>> special and exceptional, not the general case.
>>
>> Based on the project requirements surfaced in the discussion, it seems
>> likely that all of consul, etcd and zookeeper will be able to have
>> suitable production ready drivers written for tooz. Specifically no
>> project required a fair locking implementation in the DLM.
>>
>> After our experience with oslo.messaging however, we wanted to avoid
>> the situation of having unmaintained drivers and no signalling to
>> users about them.
>>
>> So, we resolved to adopt roughly the oslo.messaging requirements for
>> drivers, with a couple of tweaks...
>>
>> Production drivers in-tree will need:
>>   - two nominated developers responsible for it
>>   - gating functional tests that use dsvm
>> Test drivers in-tree will need:
>>   - clear identification that the driver is a test driver - in the
>> module name at minimum
>>
>> All hail our new abstraction overlords.
>>
>> -Rob
>>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [nova][bugs] weekly bug status + bug tag contacts

2015-11-03 Thread Sylvain Bauza



Le 03/11/2015 12:53, Markus Zoeller a écrit :

Hey Nova folks, this is your bug lord speaking. You may have noticed
that we have around 3 bugs open ... just kidding, it's above 1000 and
keeps growing. During the summit, we concluded that the "fix all the
crap" action item was a bit too unspecific, so let's try another thing.
Beginning at the end of this week, I'll give a weekly report on the state of
the bug list on the ML. This is intended to keep the attention at a certain
level. I'm also going to ping the subteams more actively if bugs in
their area of expertise are getting lost in the shuffle. To do that I need
you to double-check on the wiki [1] whether you are still a valid contact for a
specific bug tag or whether you want to become a contact for a bug tag. For
example, we need contacts for "api", "console", "db", "network" and "pci".
Please let me know the necessary updates and whether I missed a subteam.


What I'd love to see is each subteam holding a weekly meeting to make
sure that they review the bugs, and not only the needed features, like
we do for the critical ones in the nova meeting.


Also, like I said on IRC, I'd like to see a process that tells bug
owners how to flag regressions to us. It was really difficult to find the
open bugs during the RC period because some of them were 'in progress'
but were really important, as the related patch was fixing a regression.


my 2.65 yens,
-Sylvain


Just to be clear, this is *not* intended to be finger-pointing in any
way. It's an attempt to organize the effort of bug solving to get a
more stable product.

Regards, Markus Zoeller (markus_z)

References:
[1]
https://wiki.openstack.org/wiki/Nova/BugTriage#Step_2:_Triage_Tagged_Bugs







Re: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders OpenvStorage CI - request to re-add the cinder driver

2015-11-03 Thread Eduard Matei
Hi,

Thanks for the quick reply.
What do you mean by a review? Should I resubmit the driver code to the cinder
repo? Do I also need the driver certification tests?

Thanks,
Eduard


Re: [openstack-dev] [Policy][Group-based-policy] Service chain node creation fails

2015-11-03 Thread NareshA kumar
Hi Sumit,
As you mentioned, I used the AWS template format and am now able to create a
service chain successfully. But I don't see any firewall or LB created in
my network. Am I missing anything here?

Regards,
Naresh.

On Mon, Nov 2, 2015 at 1:42 PM, NareshA kumar 
wrote:

> Hi Sumit,
> Thanks for your response. I am just trying to create a VM as a dumb
> firewall. Attached the template file for your reference.
>
> Regards,
> Naresh
>
> On Mon, Nov 2, 2015 at 1:34 PM, Sumit Naiksatam 
> wrote:
>
>> Hi Naresh,
>>
>> Please send me your template file in response to this email, and I can
>> take a look.
>>
>> Thanks,
>> ~Sumit.
>>
>> On Sun, Nov 1, 2015 at 10:26 PM, NareshA kumar
>>  wrote:
>> > Hi,
>> > When I try to create a Service chain node by giving the yaml file as
>> > a heat template, it says "Invalid file format". But with the same file
>> > I am able to create a stack using the "heat stack-create" command.
>> >
>> > What am I missing here?
>> > Is there any log file that can provide me the details of my error?
>> >
>> > Anyone please help me. I am stuck here for a long time.
>> >
>> > Regards,
>> > Naresh.
>> >
>> >
>> >
>>
>
>


[openstack-dev] [Neutron][db][migration] Neutron db migration by python scripts

2015-11-03 Thread Zhi Chang
Hi, all
Now, I have to change database model definitions if I want to upgrade the db.
A database migration script is then generated when I run "neutron-db-manage
revision -m "description of revision" --autogenerate", and the database is
upgraded when I run "neutron-db-manage upgrade head".
I want to upgrade the db, and I plan to write the db migration scripts manually
instead of changing database model definitions. Is there a way to do this?
Does anyone have some good ideas?


Thanks
Zhi Chang


Re: [openstack-dev] [Neutron][db][migration] Neutron db migration by python scripts

2015-11-03 Thread Anna Kamyshnikova
You can create a new migration using "neutron-db-manage revision -m 'desc'
--expand/--contract", depending on what changes you want to make in the
migration: expand - add something, contract - delete or modify.
More information:
http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
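To make the expand/contract split concrete, here is a rough sketch of what the two phases do at the SQL level, using an in-memory SQLite table (illustrative only — the table and column names are made up, and this is not Neutron code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routers (id TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO routers VALUES ('r1', 'router-one')")

# Expand phase: purely additive, safe to apply while old code still runs.
conn.execute("ALTER TABLE routers ADD COLUMN description TEXT")

# Contract phase: destructive (here, dropping the old 'name' column),
# modelled as a table rebuild since older SQLite cannot drop columns.
conn.executescript("""
    CREATE TABLE routers_new (id TEXT PRIMARY KEY, description TEXT);
    INSERT INTO routers_new (id, description)
        SELECT id, description FROM routers;
    DROP TABLE routers;
    ALTER TABLE routers_new RENAME TO routers;
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(routers)")]
print(cols)  # ['id', 'description']
```

Online-upgrade tooling keeps these phases in separate branches so that additive changes can be applied before services are restarted, and destructive ones only after.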

On Tue, Nov 3, 2015 at 1:52 PM, Zhi Chang  wrote:

> Hi, all
> Now, I should make some database model definitions if I want to
> upgrade db. And a database migration script will generated when I run 
> "neutron-db-manage
> revision -m "description of revision" --autogenerate". The database will
> upgraded when run "neutron-db-manage upgrade head".
> I want to upgrade db and I plan to write db migration scripts manually
> instead of change database model definitions. Is there some ways to realize
> it?
> Does anyone have some good ideas?
>
> Thanks
> Zhi Chang
>
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [Neutron][db][migration] Neutron db migrationby python scripts

2015-11-03 Thread Zhi Chang
Thanks for your reply.
 There is an error when I run migration cmd:


stack@devstack:~/neutron/neutron/db/migration$ neutron-db-manage revision -m 
'desc' --contract
usage: neutron-db-manage [-h] [--config-dir DIR] [--config-file PATH]
 [--core_plugin CORE_PLUGIN] [--nosplit_branches]
 [--service SERVICE] [--split_branches]
 [--subproject SUBPROJECT] [--version]
 [--database-connection DATABASE_CONNECTION]
 [--database-engine DATABASE_ENGINE]
 
{current,history,branches,check_migration,upgrade,downgrade,stamp,revision}
 ...
neutron-db-manage: error: unrecognized arguments: --contract

 
 
-- Original --
From:  "Anna Kamyshnikova";
Date:  Tue, Nov 3, 2015 07:03 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [Neutron][db][migration] Neutron db migrationby 
python scripts

 
You can create a new migration using "neutron-db-manage revision -m 'desc'
--expand/--contract", depending on what changes you want to make in the
migration: expand - add something, contract - delete or modify. More information -
http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html


On Tue, Nov 3, 2015 at 1:52 PM, Zhi Chang  wrote:
Hi, all
Now, I should make some database model definitions if I want to upgrade db. 
And a database migration script will generated when I run "neutron-db-manage 
revision -m "description of revision" --autogenerate". The database will 
upgraded when run "neutron-db-manage upgrade head". 
I want to upgrade db and I plan to write db migration scripts manually 
instead of change database model definitions. Is there some ways to realize it?
Does anyone have some good ideas?


Thanks
Zhi Chang


 





-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [Neutron][db][migration] Neutron db migrationby python scripts

2015-11-03 Thread Henry Gessau
Your installed neutron is not current. When developing new db migrations you
should be working with the master branch. Fast-forward your repo and re-run
devstack to get the latest neutron-db-manage.

On Tue, Nov 03, 2015, Zhi Chang  wrote:
> Thanks for your reply.
>  There is an error when I run migration cmd:
>
> stack@devstack:~/neutron/neutron/db/migration$ neutron-db-manage revision -m
> 'desc' --contract
> usage: neutron-db-manage [-h] [--config-dir DIR] [--config-file PATH]
>  [--core_plugin CORE_PLUGIN] [--nosplit_branches]
>  [--service SERVICE] [--split_branches]
>  [--subproject SUBPROJECT] [--version]
>  [--database-connection DATABASE_CONNECTION]
>  [--database-engine DATABASE_ENGINE]
>
>  {current,history,branches,check_migration,upgrade,downgrade,stamp,revision}
>  ...
> neutron-db-manage: error: unrecognized arguments: --contract
>  
>  
> -- Original --
> *From: * "Anna Kamyshnikova";
> *Date: * Tue, Nov 3, 2015 07:03 PM
> *To: * "OpenStack Development Mailing List (not for usage
> questions)";
> *Subject: * Re: [openstack-dev] [Neutron][db][migration] Neutron db
> migrationby python scripts
>  
> You can create a new migration using "neutron-db-manage revision -m 'desc'
> --expand/--contract", depending on what changes you want to make in the
> migration: expand - add something, contract - delete or modify.
> More information:
> http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
>
> On Tue, Nov 3, 2015 at 1:52 PM, Zhi Chang  > wrote:
>
> Hi, all
> Now, I should make some database model definitions if I want to
> upgrade db. And a database migration script will generated when I run
> "neutron-db-manage revision -m "description of revision" --autogenerate".
> The database will upgraded when run "neutron-db-manage upgrade head". 
> I want to upgrade db and I plan to write db migration scripts manually
> instead of change database model definitions. Is there some ways to
> realize it?
> Does anyone have some good ideas?
>
> Thanks
> Zhi Chang
>
>
>
>
>
> -- 
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
>



[openstack-dev] [nova][bugs] weekly bug status + bug tag contacts

2015-11-03 Thread Markus Zoeller
Hey Nova folks, this is your bug lord speaking. You may have noticed
that we have around 3 bugs open ... just kidding, it's above 1000 and
keeps growing. During the summit, we concluded that the "fix all the
crap" action item was a bit too unspecific, so let's try another thing.
Beginning at the end of this week, I'll give a weekly report on the state of
the bug list on the ML. This is intended to keep the attention at a certain
level. I'm also going to ping the subteams more actively if bugs in
their area of expertise are getting lost in the shuffle. To do that I need
you to double-check on the wiki [1] whether you are still a valid contact for a
specific bug tag or whether you want to become a contact for a bug tag. For
example, we need contacts for "api", "console", "db", "network" and "pci".
Please let me know the necessary updates and whether I missed a subteam.

Just to be clear, this is *not* intended to be finger-pointing in any
way. It's an attempt to organize the effort of bug solving to get a
more stable product.

Regards, Markus Zoeller (markus_z)

References:
[1] 
https://wiki.openstack.org/wiki/Nova/BugTriage#Step_2:_Triage_Tagged_Bugs




Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo.

2015-11-03 Thread Victor Stinner

Le 02/11/2015 19:40, Brant Knudson a écrit :

(...) by typing something like:

$ bandit-conf-generator --disable try_except_pass --out bandit.yaml
oslo.messaging ~/openstack/bandit/bandit/config/bandit.yaml


(...) we should have a config file for bandit-conf-generator...
but then why not just have bandit know how to read the
bandit-conf-generator config file and skip the extra step?


Hi,

I don't like very long command lines; they're hard to document or
comment. I prefer configuration files. But bandit.yaml, the
"template", is already a configuration file!?


As Brant wrote, we should enhance Bandit to use a simpler configuration
file. Or maybe we should have our own configuration file which only
contains the "differences" between the YAML template and the expected YAML
output configuration file. Basically, the "differences" are what you
wrote on the command line.


Anyway, it would be better to add this new bandit-conf-generator tool
(or make the config simpler) directly in Bandit. What do you think, Cyril?


Victor



Re: [openstack-dev] [cinder][ThirdPartyCI]CloudFounders OpenvStorage CI - request to re-add the cinder driver

2015-11-03 Thread Duncan Thomas
Hi

Yes, you should resubmit the code to cinder. There is no need to do the
certification tests; they are replaced by the CI.

On 3 November 2015 at 12:10, Eduard Matei 
wrote:

> Hi,
>
> Thanks for the quick reply.
> What do you mean a review? Should i resubmit the driver code to the cinder
> repo? Do i need also the driver certification tests?
>
> Thanks,
> Eduard
>
>
>


-- 
-- 
Duncan Thomas


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-03 Thread Sofer Athlan-Guyot
He's a very good reviewer with a deep knowledge of keystone and puppet.
Thank you, Richard, for your help.

+1

Emilien Macchi  writes:

> At the Summit we discussed about scaling-up our team.
> We decided to investigate the creation of sub-groups specific to our
> modules that would have +2 power.
>
> I would like to start with puppet-keystone:
> https://review.openstack.org/240666
>
> And propose Richard Megginson part of this group.
>
> Rich has been leading puppet-keystone work since the Juno cycle. Without his
> leadership and skills, I'm not sure we would have Keystone v3 support
> in our modules.
> He's a good Puppet reviewer and takes care of backward compatibility.
> He also has strong knowledge at how Keystone works. He's always
> willing to lead our roadmap regarding identity deployment in
> OpenStack.
>
> Having him on board is an awesome opportunity for us to be ahead of
> other deployment tools and support many features in Keystone that
> real deployments actually need.
>
> I would like to propose him part of the new puppet-keystone-core
> group.
>
> Thank you Rich for your work, which is very appreciated.

-- 
Sofer Athlan-Guyot



Re: [openstack-dev] [Neutron][db][migration] Neutron db migrationby python scripts

2015-11-03 Thread Anna Kamyshnikova
Do you have the latest Neutron code? Change [1] that allows this was
merged on the 22nd of October.

[1] -
https://github.com/openstack/neutron/commit/9d069c48aed3a087c5c51366c8e70b29f339e794

On Tue, Nov 3, 2015 at 2:10 PM, Zhi Chang  wrote:

> Thanks for your reply.
>  There is an error when I run migration cmd:
>
> stack@devstack:~/neutron/neutron/db/migration$ neutron-db-manage revision
> -m 'desc' --contract
> usage: neutron-db-manage [-h] [--config-dir DIR] [--config-file PATH]
>  [--core_plugin CORE_PLUGIN] [--nosplit_branches]
>  [--service SERVICE] [--split_branches]
>  [--subproject SUBPROJECT] [--version]
>  [--database-connection DATABASE_CONNECTION]
>  [--database-engine DATABASE_ENGINE]
>
>  {current,history,branches,check_migration,upgrade,downgrade,stamp,revision}
>  ...
> neutron-db-manage: error: unrecognized arguments: --contract
>
>
> -- Original --
> *From: * "Anna Kamyshnikova";
> *Date: * Tue, Nov 3, 2015 07:03 PM
> *To: * "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [Neutron][db][migration] Neutron db
> migrationby python scripts
>
> You can create a new migration using "neutron-db-manage revision -m 'desc'
> --expand/--contract", depending on what changes you want to make in the
> migration: expand - add something, contract - delete or modify.
> More information:
> http://docs.openstack.org/developer/neutron/devref/alembic_migrations.html
>
> On Tue, Nov 3, 2015 at 1:52 PM, Zhi Chang 
> wrote:
>
>> Hi, all
>> Now, I should make some database model definitions if I want to
>> upgrade db. And a database migration script will generated when I run 
>> "neutron-db-manage
>> revision -m "description of revision" --autogenerate". The database will
>> upgraded when run "neutron-db-manage upgrade head".
>> I want to upgrade db and I plan to write db migration scripts
>> manually instead of change database model definitions. Is there some ways
>> to realize it?
>> Does anyone have some good ideas?
>>
>> Thanks
>> Zhi Chang
>>
>>
>>
>
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [Policy][Group-based-policy] SFC Use Case

2015-11-03 Thread NareshA kumar
Hi Sumit,
Thanks for your quick help. I think I am almost done. I am able to
successfully create the sfc chain spec with the aws template. I have also
created group members and am able to ping between them. But I don't see any
firewall or loadbalancer namespaces created.
Am I missing something here? Thanks in advance.

Regards,
Naresh.


[openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-03 Thread Jay Pipes

Hi all,

A spec [1] that proposes adding a new server_ids query string parameter 
to the existing GET /servers/detail URI resource has highlighted an 
interesting issue.


The point of the spec is to add an ability to filter the results for the 
GET /servers/detail API call to a specified set of instance UUIDs. 
However, Tony Breeds points out that there will be a rather small limit 
(~55 or so, maximum) on the number of UUIDs that can be specified in the 
query parameters due to length limitations of the URI.
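The exact ceiling depends on which URI length limit you assume and on how the UUIDs are encoded, but a quick back-of-the-envelope sketch (the repeated `server_ids=` query-parameter form and the byte caps below are assumptions, not anything Nova or HTTP mandates) shows the bound stays small either way:

```python
# Estimate how many 36-character UUIDs fit on a request line before
# hitting common URL length caps (assumed values, not mandated anywhere).
BASE = len("GET /v2.1/servers/detail? HTTP/1.1")  # fixed request-line overhead
PER_UUID = len("server_ids=") + 36 + 1            # key + UUID + '&' separator

for limit in (2048, 8192):  # conservative client cap vs. common proxy cap
    print(limit, (limit - BASE) // PER_UUID)
# 2048 -> 41 UUIDs, 8192 -> 169 UUIDs
```

Either way the limit is far below what callers with thousands of instances would need, which is what motivates moving the filter out of the URI.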


My suggestion was to add a new POST /servers/search URI resource that 
can take a request body containing large numbers of filter arguments, 
encoded in a JSON object.


API working group, what thoughts do you have about this? Please add your 
comments to the Gerrit spec patch if you have time.


Thank you!
-jay

[1] https://review.openstack.org/#/c/239286/
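To make the length constraint concrete, here is a rough back-of-the-envelope sketch in Python. The repeated `server_ids` query-parameter encoding and the `/servers/search` body shape are illustrative assumptions from this thread, not an existing Nova API, and real URI limits vary by web server and proxy:

```python
import json
import uuid

server_ids = [str(uuid.uuid4()) for _ in range(500)]

# With GET, every UUID must ride in the query string, so the request line
# grows linearly and quickly blows past typical URI limits (commonly 2-8 KB,
# depending on the web server or proxy in front of the API).
query = "&".join("server_ids=" + sid for sid in server_ids)
get_request_line = "GET /v2.1/servers/detail?" + query + " HTTP/1.1"
print(len(get_request_line))  # roughly 24,000 characters for 500 UUIDs

# The proposed POST /servers/search would carry the same filters in a JSON
# request body, which has no comparable length restriction.
post_body = json.dumps({"server_ids": server_ids})
```

With a strict 2 KB effective limit and real-world base URLs, only a few dozen UUIDs fit, which matches the ~55 figure quoted above.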

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #57

2015-11-03 Thread Emilien Macchi


On 11/02/2015 08:02 AM, Emilien Macchi wrote:
> Hello!
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20151103
> 
> Feel free to add any items you'd like to discuss.
> If our schedule allows it, we'll make bug triage during the meeting.
> 
> Thanks,

We did our meeting (late, thanks to the clock change):
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-11-03-15.16.html

Thanks,
-- 
Emilien Macchi





Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo.

2015-11-03 Thread Cyril Roelandt

On 11/03/2015 10:50 AM, Victor Stinner wrote:

Hi,

I don't like very long command lines; they are hard to document or
comment. I prefer configuration files. But bandit.yaml, the
"template", is already a configuration file!?



Yes, the config file provided by bandit is some kind of "enable all 
checkers" configuration. Basically, it seems to me that people just 
re-use that with minor tweaks.



As Brant wrote, we should enhance Bandit to use a simpler configuration
file. Or maybe we should have our own configuration file which only
contains the "differences" between the YAML template and the expected YAML
output configuration file. Basically, the "differences" are what you
wrote on the command line.



I think we do not want bandit to start supporting N different 
configuration formats. I like that "bandit" reads "bandit.yaml", in its 
current state. It is *simple*.


Now, writing a working "bandit.yaml" could be less of a burden. To 
achieve this, bandit could provide a tool that allows developers to say 
"well, I want everything but this particular checker" or "well, I need 
this tweak to the configuration of that checker".


The right "architecture" would be:
- bandit-conf-generator (possibly included in the bandit git repo) reads 
a 'bandit-conf' config file and generates 'bandit.yaml';

- 'bandit' reads 'bandit.yaml' and does its job.

The configuration file for bandit-conf-generator could look something like:

[general]
project_name = oslo.messaging
path_to_src = oslo_messaging
disabled_tests = try_except_pass,assert_used

And then some code to configure the checkers that require additional 
configuration. It might be harder to think of something easy to write, 
though :)
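A minimal sketch of what such a generator could do, assuming the `[general]` format above and modelling the bandit.yaml template as a plain dict (the real template's schema, and the checker names used here, may differ):

```python
import configparser

# "Enable everything" template, modelled as an already-parsed dict; a real
# bandit-conf-generator would load/dump the upstream bandit.yaml via PyYAML.
TEMPLATE = {"profiles": {"All": {"include": [
    "try_except_pass", "assert_used", "exec_used", "hardcoded_password",
]}}}

CONF = """\
[general]
project_name = oslo.messaging
path_to_src = oslo_messaging
disabled_tests = try_except_pass,assert_used
"""

def generate(template, conf_text):
    """Derive a project profile by subtracting disabled_tests from the
    template's all-checkers profile."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    disabled = {t.strip() for t in cfg["general"]["disabled_tests"].split(",")}
    profile = {"include": [t for t in template["profiles"]["All"]["include"]
                           if t not in disabled]}
    return {"profiles": {cfg["general"]["project_name"]: profile}}

result = generate(TEMPLATE, CONF)
print(result["profiles"]["oslo.messaging"]["include"])
# ['exec_used', 'hardcoded_password']
```

Re-running this against each new bandit release's template would pick up new checkers automatically, which is the workflow described later in this thread.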



Anyway, it would be better to add this new bandit-conf-generator tool
(or making config simpler) directly in Bandit. What do you think Cyril?



Yes. I should write a blueprint :)

Cyril.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-03 Thread Gregory Haynes
Hello everyone,

I would like to propose adding Ian Wienand as a core reviewer on the
diskimage-builder project. Ian has been making a significant number of
contributions for some time to the project, and has been a great help in
reviews lately. Thus, I think we could benefit greatly by adding him as
a core reviewer.

Current cores - Please respond with any approvals/objections by next Friday
(November 13th).

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo.

2015-11-03 Thread Cyril Roelandt

On 11/02/2015 07:32 PM, Davanum Srinivas wrote:


If we can add this command directly in our tox.ini and entirely avoid
having the bandit.yaml would that be even better?


Why not, but it'd have some drawbacks as well:

- should the conf generator be broken for some reason, the gate may end 
up being blocked for a while, because fixing it would be harder than 
fixing a bandit.yaml file;
- newcomers will feel overwhelmed knowing that a tool writes a config 
file for another tool that generates a report, so I'd rather keep it 
stupid simple.


WDYT?

Cyril.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo

2015-11-03 Thread McPeak, Travis
Hi Cyril,

This is a really cool idea.  It should be fairly easy to implement and
can only help make Bandit more usable.  To be honest, enhancing the way
we're using the 'bandit.yaml' file has been on our list for a while.

A tool like this seems like it would be a nice intermediate solution
until we get a better config file approach.  I'd like for it to live
in the Bandit repo.

The best way forward is probably to create a quick blueprint to
track the work and whoever wants to take it forward can assign it to
themselves.

By the way, it's really cool to see Oslo using Bandit!

Thanks,
 -Travis




On Mon, Nov 2, 2015 at 1:22 PM, Cyril Roelandt 
wrote:


>Whenever a new version of bandit comes out, one can grab the latest
>config file example from the bandit release, and re-run the above
>command. The generated config file will include all the new checkers.
>
>What do you think? Could this be a useful tool to handle bandit
>configurations?



Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo.

2015-11-03 Thread Cyril Roelandt

On 11/02/2015 07:40 PM, Brant Knudson wrote:


We could use something like this in keystone since we've got a few
repositories. There should be a way to document why the test was skipped
since otherwise we'll have to figure it out every time we update the
file. Putting a comment on the command line would wind up being
unwieldy, so we should have a config file for bandit-conf-generator...
but then why not just have bandit know how to read the
bandit-conf-generator config file and skip the extra step?



The bandit.yaml from python-keystoneclient supports multiple profiles, 
which is already something my tool, in its current state, cannot do.


I don't know exactly which set of features should be supported by a 
configuration generator. If it becomes too hard to write the 
configuration for the configuration generator, we might as well just 
write the configuration for bandit manually :)


See my answer to Victor about enhancing Bandit so that it can read a 
"simpler" config file. I'm not a big fan of it.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-03 Thread Ihar Hrachyshka

Hi all,

on fwaas design session in Tokyo it was pointed out that for new fwaas
API we may want to use service groups [1] that were approved but never
merged. As far as I can tell, service groups are designed to catch
several criteria to describe a type of networking application
'service'. Possible criteria are e.g.: port, protocol, icmp code or
type, ...

I have to admit this was the first time I had heard about this proposed
feature. And it immediately struck me that it somehow resembles the
traffic classifier feature [2] that we are looking into in the QoS
context for Mitaka. The classifier is designed to describe traffic types,
and is expected to support multiple criteria, including: port, mac,
ether type, protocol, ... You can see that the lists of possible
criteria are quite similar, and it's of no surprise since for what I
wonder both features are designed to do the same thing: to allow to
match and classify traffic based on criteria, and then use those sets
of criteria to apply different policies (whether it's firewall, QoS
marks, or any other action you can think of for specific traffic type).

Now, I don't think that we need two APIs for the same thing. I would
be glad if we instead converge on a single API, making sure all cases
are covered. In the end, the feature is just a building block for
other features, like fwaas, security groups, or QoS.

We could build traffic classifier use cases on top of service groups,
though the name of the latter is a bit specific, and could use some
more generalization to cover other cases where we need to classify
traffic that may belong to different services; or, vice versa, that may
split into several categories even while coming from a single service source.

I encourage those who work on traffic classifier, and those who will
implement and review service group feature, to start discussion on how
we converge and avoid multiple APIs for similar things.

Am I making any sense?

[1]:
http://specs.openstack.org/openstack/neutron-specs/specs/juno/service-group.html
[2]: https://review.openstack.org/#/c/190463/

Ihar


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-03 Thread Salvatore Orlando
This plan makes a lot of sense to me.
With the staggering number of sub-projects in neutron it is impossible for
the stable team to cope with the load. Delegation and decentralisation are a
must, and both sub-project maintainers and the stable team will benefit from
it.
Also, since patches can always be reverted and rights revoked in case of
misbehaviour I do not see any major downside.
I am sure the stable maint team will periodically monitor what's being
backported in order to intervene quickly if backport policies are violated.

Salvatore



On 3 November 2015 at 18:09, Kyle Mestery  wrote:

> On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka 
> wrote:
>
>>
>> Hi all,
>>
>> currently we have a single neutron-wide stable-maint gerrit group that
>> maintains all stable branches for all stadium subprojects. I believe
>> that in lots of cases it would be better to have subproject members to
>> run their own stable maintenance programs, leaving
>> neutron-stable-maint folks to help them in non-obvious cases, and to
>> periodically validate that project-wide stable policies are still honored.
>>
>> I suggest we open gate to creating subproject stable-maint teams where
>> current neutron-stable-maint members feel those subprojects are ready
>> for that and can be trusted to apply stable branch policies in
>> consistent way.
>>
>> Note that I don't suggest we grant those new permissions completely
>> automatically. If neutron-stable-maint team does not feel safe to give
>> out those permissions to some stable branches, their feeling should be
>> respected.
>>
>> I believe it will be beneficial both for subprojects that would be
>> able to iterate on backports in more efficient way; as well as for
>> neutron-stable-maint members who are often busy with other stuff, and
>> often times are not the best candidates to validate technical validity
>> of backports in random stadium projects anyway. It would also be in
>> line with general 'open by default' attitude we seem to embrace in
>> Neutron.
>>
>> If we decide it's the way to go, there are alternatives on how we
>> implement it. For example, we can grant those subproject teams all
>> permissions to merge patches; or we can leave +W votes to
>> neutron-stable-maint group.
>>
>> I vote for opening the gates, *and* for granting +W votes where
>> projects showed reasonable quality of proposed backports before; and
>> leaving +W to neutron-stable-maint in those rare cases where history
>> showed backports could get more attention and safety considerations
>> [with expectation that those subprojects will eventually own +W votes
>> as well, once quality concerns are cleared].
>>
>> If we indeed decide to bootstrap subproject stable-maint teams, I
>> volunteer to reach the candidate teams for them to decide on initial
>> lists of stable-maint members, and walk them thru stable policies.
>>
>> Comments?
>>
>>
> As someone who spends a considerable amount of time reviewing stable
> backports on a regular basis across all the sub-projects, I'm in favor of
> this approach. I'd like to be included when selecting teams which are
> appropriate to have their own stable teams as well. Please include me when
> doing that.
>
> Thanks,
> Kyle
>
>
>> Ihar


[openstack-dev] Neutron DVR Subteam Meeting Resumes from Nov 4th 2015.

2015-11-03 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
We would like to resume the Neutron DVR Subteam Meeting starting November 4th 
2015.
If you are an active technical contributor in OpenStack and would like to
discuss any DVR-related items, feel free to join the meeting.

Here are the meeting details.

IRC channel:  #openstack-meeting-alt
Time: 1500 UTC on Wednesdays.

For Agenda and topics to be discussed look at the Wiki below.
https://wiki.openstack.org/wiki/Meetings/Neutron-DVR

Thanks.
Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] process changes for mitaka releases

2015-11-03 Thread Doug Hellmann
Release liaisons,

At the summit we discussed some of the process changes we are putting
in place for the Mitaka cycle. This email thread is the official
notification of those changes for folks who weren't able to be in
the room for the discussion, and the reminder for those who were.

The biggest change is that we are going to shift away from treating
milestones as strict synchronization points. We will still have
milestones on the schedule [1], but we will treat them as reminders
for the projects to have their own status checkpoints rather than
strict deadlines for everyone to be following. Each project still
will be responsible for handling its milestone tasks during the
relevant week on the schedule, but we will not be trying to have
all of the tags and launchpad updates applied on the same day.

As part of the desynchronization this cycle, we are going to rely
on release liaisons to request all tags, including for milestones,
for their deliverables through the openstack/releases repository.
We will no longer schedule 1-on-1 meetings to coordinate those tags
or review progress on bugs and blueprints in launchpad.  It will
be up to all of you to handle this for your projects (stay tuned
for another email thread about how we intend to make change management
tracking simpler this cycle).

Please make sure you are familiar with the schedule for this cycle
so you can help your team keep up. To help you stay on top of things,
we will be sending out periodic emails to this list as reminders
about the sorts of things project teams should be doing as we move
through the cycle. If you have concerns or questions, reply to the
relevant email thread or check with us in #openstack-release.

Doug

[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-03 Thread Sean M. Collins
Hi Ihar,

This sounds good. I actually had a draft e-mail that I've been saving
until I got back, that may be relevant. Some contributors met on Friday
to discuss the packet classification framework, mostly centered around
just building a reusable library that can be shared among multiple
services.

It was my view that just getting the different APIs to share a common
data model would be a big first step, since we can refactor a lot of
common internal data structures without any user facing API changes. 

I quickly went back to my hotel room on Friday (after stealing some red bulls 
from the
dev lounge) to start hacking on a shared library for packet
classification, that can be re-used by other projects.

At this point, the code is mostly SQLAlchemy models, but the objective is to
try and see if the models are actually useful, and can be re-used by multiple 
services.

On the FwaaS side I plan on proving out the models by attempting to
replace some of the FwaaS database models with models from the
common-classifier. I also plan on putting together some simple tests to
see if it can also handle classifiers for security groups in the future,
since there has already been some ideas about creating a common backend
for both FwaaS and the Security Group implementation.

Anyway, the code is currently up on GitHub - I just threw it on there
because I wanted to scratch my hacking itch quickly.

https://github.com/sc68cal/neutron-classifier

Hopefully this can help spur more discussion.
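For readers who have not clicked through: the shape being explored is roughly a polymorphic classifier record that several services can reference. The sketch below uses plain dataclasses rather than the repo's actual SQLAlchemy models, and the class and field names are illustrative, not taken from neutron-classifier:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Classifier:
    id: str
    classifier_type: str  # e.g. 'ethernet', 'ip', 'transport'

@dataclass
class IpClassifier(Classifier):
    source_ip_prefix: Optional[str] = None
    destination_ip_prefix: Optional[str] = None
    protocol: Optional[str] = None

@dataclass
class TransportClassifier(Classifier):
    source_port_min: Optional[int] = None
    source_port_max: Optional[int] = None
    destination_port_min: Optional[int] = None
    destination_port_max: Optional[int] = None

# A FWaaS rule and a security-group rule could both point at the same
# classifier by id instead of each re-defining 10+ match columns.
http_match = TransportClassifier(id="c1", classifier_type="transport",
                                 destination_port_min=80,
                                 destination_port_max=80)
print(http_match.destination_port_min)  # 80
```

Splitting the criteria into per-type records like this is what avoids the "single database table with 10+ columns" mentioned later in the thread.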

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Michal Rostecki

Egor,

I don't have much experience with Aurora, but as far as I can see, it
doesn't support mounting volumes from the host yet:


https://issues.apache.org/jira/browse/AURORA-1107

In my opinion we should try to investigate the currently existing frameworks
that meet our main criteria before making a decision about creating our
own scheduler, just so we do not re-invent things.


Regards,
Michal

On 11/03/2015 06:35 PM, Egor Guz wrote:

Michal/Steve,


could you elaborate on the choice of Marathon vs. Aurora vs. a custom scheduler
(to implement very precise control around placement/failures/etc.)?

--
Egor


On 11/2/15, 22:44, "Michal Rostecki"  wrote:


Hi,

+1 to what Steven said about Kubernetes.

I'd like to add that these 3 things (pid=host, net=host, -v) are
supported by Marathon, so probably it's much less problematic for us
than Kubernetes at this moment.

Regards,
Michal

On 11/03/2015 12:18 AM, Steven Dake (stdake) wrote:

Gosh,

Kubernetes as an underlay is an interesting idea.  We tried it for the
first 6 months of Kolla's existence and it almost killed the project.
Essentially, Kubernetes lacks support for pid=host, net=host, and -v
bind mounting.  All 3 are required to deliver an operational OpenStack.

This is why current Kolla goes with a bare-metal underlay: all the Docker
options we need are available.

Regards
-steve


From: Georgy Okrokvertskhov >
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
Date: Monday, November 2, 2015 at 3:47 PM
To: "OpenStack Development Mailing List (not for usage questions)"
>
Subject: Re: [openstack-dev] [kolla] Mesos orchestration as discussed at
mid cycle (action required from core reviewers)

Hi Steve,

Thank you for the update. This is a really interesting direction for
Kolla.
I agree with Jeff. It is interesting to see what other frameworks will
be used. I suspect Marathon framework is under consideration as it adds
most of the application centric functionality like HA\restarter, scaling
and rolling-restarts\upgrades. Kubernetes might be also a good candidate
for that.

Thanks
Gosha

On Mon, Nov 2, 2015 at 2:00 PM, Jeff Peeler > wrote:

 On Mon, Nov 2, 2015 at 12:02 PM, Steven Dake (stdake)
 > wrote:
 > Hey folks,
 >
 > We had an informal vote at the mid cycle from the core reviewers,
and it was
 > a majority vote, so we went ahead and started the process of the
 > introduction of mesos orchestration into Kolla.
 >
 > For background for our few core reviewers that couldn¹t make it
and the
 > broader community, Angus Salkeld has committed himself and 3
other Mirantis
 > engineers full time to investigate if Mesos could be used as an
 > orchestration engine in place of Ansible.  We are NOT dropping
our Ansible
 > implementation in the short or long term.  Kolla will continue to
lead with
 > Ansible.  At some point in Mitaka or the N cycle we may move the
ansible
 > bits to a repository called "kolla-ansible" and the kolla repository would
repository would
 > end up containing the containers only.
 >
 > The general consensus was that if folks wanted to add additional
 > orchestration systems for Kolla, they were free to do so if they
did the
 > development and made a commitment to maintaining one core
reviewer team with
 > broad expertise among the core reviewer team of how these various
systems
 > work.
 >
 > Angus has agreed to the following
 >
 > A new team called "kolla-mesos-core" with 2 members.  One of the
members is
 > Angus Salkeld, the other is selected by Angus Salkeld since this
is a cookie
 > cutter empty repository.  This is typical of how new projects
would operate,
 > but we don¹t want a code dump and instead want an integrated core
team.  To
 > prevent a situation in which the current Ansible experts shy away
 > from the
 > Mesos implementation, the core reviewer team has committed to
reviewing the
 > mesos code to get a feel for it.
 > Over the next 6-8 weeks these two folks will strive to join the
Kolla core
 > team by typical means 1) irc participation 2) code generation 3)
effective
 > and quality reviews 4) mailing list participation
 > Angus will create a technical specification which will we will
roll-call
 > voted and only accepted once a majority of core review team is
satisfied
 > with the solution.
 > The kolla-mesos deliverable will be under Kolla governance and be
managed by
 > the Kolla core reviewer team after the kolla-mesos-core team is
deprecated.
 > If the experiment fails, kolla-mesos will be placed in the attic.
There is
 > no specific 

Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-03 Thread Vikram Choudhary
Thanks for all your efforts Sean.

I was actually thinking a separate IRC channel for this effort would be great
and would help all the interested people come together and develop.

Any thoughts on this?

Thanks
Vikram
On Nov 3, 2015 11:54 PM, "Sean M. Collins"  wrote:

> I made a very quick attempt to jot down my thoughts about how it could
> be used. It's based off what I proposed in
> https://review.openstack.org/238812,
> and is my attempt to take that review and use SQLAlchemy to make it
> actually work.
>
>
> https://github.com/sc68cal/neutron-classifier/blob/master/doc/source/usage.rst
>
> It's very basic, it was just me hacking away at a proof of concept to
> demonstrate that what I was proposing was possible, that we didn't need
> a single database table with 10+ columns.
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-03 Thread Ben Nemec
+1

On 11/03/2015 09:25 AM, Gregory Haynes wrote:
> Hello everyone,
> 
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
> 
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).
> 
> Cheers,
> Greg
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-03 Thread Clint Byrum
Excerpts from Gregory Haynes's message of 2015-11-03 07:25:27 -0800:
> Hello everyone,
> 
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
> 
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).
> 

+1

Ian has been super helpful and really has been putting in the time and effort
to improve diskimage-builder lately.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Michal Rostecki

On 11/03/2015 04:16 PM, Jeff Peeler wrote:

On Tue, Nov 3, 2015 at 1:44 AM, Michal Rostecki  wrote:

Hi,

+1 to what Steven said about Kubernetes.

I'd like to add that these 3 things (pid=host, net=host, -v) are supported
by Marathon, so probably it's much less problematic for us than Kubernetes
at this moment.


I don't actively track Kubernetes upstream, so this seemed like a
natural point of reevaluation. If Kubernetes still doesn't support the
Docker features Kolla needs, obviously it's a non-starter. Nice to
hear that Marathon does though.



After taking a look at the docs, issues and pull requests in Kubernetes I
have to admit that:


- net=host is supported now - 
https://github.com/kubernetes/kubernetes/pull/5886
- volumes seems to be supported - 
https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md#hostpath 
- the question is whether this option provides 'rw' mode


I couldn't find any info about pid=host. But if there is support for
that, I will have to change my mind about Kubernetes.
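For reference, the Docker options under discussion map onto a Kubernetes v1 Pod manifest roughly as follows (built as a Python dict here; the image name and paths are placeholders, and whether Kubernetes 1.0 had a pid=host equivalent is exactly the open question above, so none is shown):

```python
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "kolla-service"},
    "spec": {
        "hostNetwork": True,  # net=host equivalent (merged in PR #5886)
        "containers": [{
            "name": "svc",
            "image": "example/kolla-service",  # placeholder image name
            "volumeMounts": [{"name": "cfg", "mountPath": "/etc/svc"}],
        }],
        # hostPath is the -v bind-mount analogue; the 'rw' question in the
        # message above is about this volume type.
        "volumes": [{"name": "cfg", "hostPath": {"path": "/etc/kolla/svc"}}],
    },
}
manifest = json.dumps(pod, indent=2)  # what would be fed to the API server
```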


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][bandit] Handling bandit configuration files in Oslo.

2015-11-03 Thread Doug Hellmann
Excerpts from Cyril Roelandt's message of 2015-11-03 16:46:25 +0100:
> On 11/02/2015 07:32 PM, Davanum Srinivas wrote:
> >
> > If we can add this command directly in our tox.ini and entirely avoid
> > having the bandit.yaml would that be even better?
> 
> Why not, but it'd have some drawbacks as well:
> 
> - should the conf generator be broken for some reason, the gate may end 
> up being blocked for a while, because fixing it would be harder than 
> fixing a bandit.yaml file;
> - newcomers will feel overwhelmed knowing that a tool writes a config 
> file for another tool that generates a report, so I'd rather keep it 
> stupid simple.

We have a hacking plugin for flake8 that we use to apply common rules
across projects. Unfortunately, since not all projects are ready to
apply those rules at the same time we have to carefully upgrade hacking
in a way that doesn't break anyone's gate jobs until their code matches
the rules in the new version.

That said, since we already have the pattern of a plugin providing a
common set of rules, I wonder if we could do something similar for
bandit. Maybe the bandit developers don't want a plugin, which would
make a wrapper program easier to implement. The wrapper could own the
common contents of the YAML file, with each application able to override
the settings by updating its local copy of the file. We could then
manage updates of this tool in the same way we do for
hacking/flake8/pep8.

Doug

PS - Bonus points for naming the wrapper program "smokey" [1].

[1] https://en.wikipedia.org/wiki/Smokey_and_the_Bandit
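A sketch of that wrapper idea, with dicts standing in for the parsed YAML files; the merge semantics and key names are assumptions, and a real tool would load both files with PyYAML before invoking bandit on the merged result:

```python
# Shared defaults owned by the wrapper, analogous to hacking's common rules.
COMMON = {
    "exclude_dirs": [".tox", "tests"],
    "profiles": {"common": {"include": ["exec_used", "assert_used"]}},
}

def merge(common, local):
    """Shallow one-level merge where the project's local settings win over
    the shared defaults owned by the wrapper."""
    merged = dict(common)
    for key, value in local.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged

# A project keeps only its deltas in-tree; everything else is inherited,
# so bumping the wrapper version updates all projects in lockstep.
local_overrides = {"profiles": {"keystone": {"include": ["exec_used"]}}}
result = merge(COMMON, local_overrides)
print(sorted(result["profiles"]))  # ['common', 'keystone']
```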

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-03 Thread Michał Dubiel
Hi all,

We have a simple patch that allows using OpenContrail's vRouter with
vhostuser VIF types (currently only OVS has support for that). We would
like to contribute it.

However, we would like this change to land in the next maintenance release
of Kilo. Is that possible? What should be the process for this? Should we
prepare a blueprint and review request for the 'master' branch first? It is
a small, self-contained change, so I believe it does not need a nova-spec.

Regards,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] stable branch release process changes

2015-11-03 Thread Doug Hellmann
As we discussed at the summit, we are going to be changing the way we
handle releases from stable branches, starting with stable/liberty this
cycle.

In the past the release team and stable maintenance team have
coordinated stable releases of all projects at specific times during the
life cycle of each stable branch. With the liberty release we
re-versioned most projects so they are no longer using the same release
numbers, so synchronizing their stable releases makes less sense. We
originally wanted to either stop releasing from stable branches at all,
or to treat every stable commit as a release. However, it doesn’t always
make sense to tag every commit to a stable branch so we compromised on
having projects release from stable branches frequently and when they
need to do so.

Starting immediately, projects maintaining stable/liberty branches will
need to manage their stable branch releases by proposing tags to the
openstack/releases repository when the maintenance team feels it is
appropriate.
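
For reference, such a proposal is typically a small patch to a deliverable file in the openstack/releases repository, along these lines (layout shown from memory for illustration; the file path, version, and hash are placeholders):

```yaml
# deliverables/liberty/example-project.yaml in openstack/releases
launchpad: example-project
releases:
  - version: 1.0.1
    projects:
      - repo: openstack/example-project
        hash: 0123456789abcdef0123456789abcdef01234567
```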

Our recommendation is to release relatively often, to avoid having
fixes land without being pushed out to users. Security fixes should
probably be released immediately, and the urgency of releases for
other fixes should be based on the nature of the change. For example,
if there are several stable fixes queued up to be merged in a short
period of time, it would make sense to wait to release all of them
instead of releasing each separately.  This is going to involve
applying judgment, and we’ll need to collaborate to decide what
heuristics to use.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-03 Thread Kyle Mestery
On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi all,
>
> currently we have a single neutron-wide stable-maint gerrit group that
> maintains all stable branches for all stadium subprojects. I believe
> that in lots of cases it would be better to have subproject members to
> run their own stable maintenance programs, leaving
> neutron-stable-maint folks to help them in non-obvious cases, and to
> periodically validate that project-wide stable policies are still honored.
>
> I suggest we open gate to creating subproject stable-maint teams where
> current neutron-stable-maint members feel those subprojects are ready
> for that and can be trusted to apply stable branch policies in a
> consistent way.
>
> Note that I don't suggest we grant those new permissions completely
> automatically. If neutron-stable-maint team does not feel safe to give
> out those permissions to some stable branches, their feeling should be
> respected.
>
> I believe it will be beneficial both for subprojects that would be
> able to iterate on backports in a more efficient way, as well as for
> neutron-stable-maint members who are often busy with other stuff, and
> often times are not the best candidates to validate technical validity
> of backports in random stadium projects anyway. It would also be in
> line with general 'open by default' attitude we seem to embrace in
> Neutron.
>
> If we decide it's the way to go, there are alternatives on how we
> implement it. For example, we can grant those subproject teams all
> permissions to merge patches; or we can leave +W votes to
> neutron-stable-maint group.
>
> I vote for opening the gates, *and* for granting +W votes where
> projects showed reasonable quality of proposed backports before; and
> leaving +W to neutron-stable-maint in those rare cases where history
> showed backports could get more attention and safety considerations
> [with expectation that those subprojects will eventually own +W votes
> as well, once quality concerns are cleared].
>
> If we indeed decide to bootstrap subproject stable-maint teams, I
> volunteer to reach the candidate teams for them to decide on initial
> lists of stable-maint members, and walk them thru stable policies.
>
> Comments?
>
>
As someone who spends a considerable amount of time reviewing stable
backports on a regular basis across all the sub-projects, I'm in favor of
this approach. I'd also like to be included when selecting the teams which
are appropriate to have their own stable teams.

Thanks,
Kyle


> Ihar
> -BEGIN PGP SIGNATURE-
>
> iQEcBAEBAgAGBQJWOOWkAAoJEC5aWaUY1u57sVIIALrnqvuj3t7c25DBHvywxBZV
> tCMlRY4cRCmFuVy0VXokM5DxGQ3VRwbJ4uWzuXbeaJxuVWYT2Kn8JJ+yRjdg7Kc4
> 5KXy3Xv0MdJnQgMMMgyjJxlTK4MgBKEsCzIRX/HLButxcXh3tqWAh0oc8WW3FKtm
> wWFZ/2Gmf4K9OjuGc5F3dvbhVeT23IvN+3VkobEpWxNUHHoALy31kz7ro2WMiGs7
> GHzatA2INWVbKfYo2QBnszGTp4XXaS5KFAO8+4H+HvPLxOODclevfKchOIe6jthH
> F1z4JcJNMmQrQDg1WSqAjspAlne1sqdVLX0efbvagJXb3Ju63eSLrvUjyCsZG4Q=
> =HE+y
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-03 Thread Mike Perez
On 15:26 Nov 03, Robert Collins wrote:
> Hi, at the summit we had a big session on distributed lock managers (DLMs).
> 
> I'd just like to highlight the conclusions we came to in the session (
> https://etherpad.openstack.org/p/mitaka-cross-project-dlm
> )

Also Cinder will be spearheading some Tooz integration work:

https://review.openstack.org/#/c/185646/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Egor Guz
Michal/Steve,


could you elaborate on the choice of Marathon vs. Aurora vs. a custom
scheduler (to implement very precise control around placement, failures, etc.)?

—
Egor


On 11/2/15, 22:44, "Michal Rostecki"  wrote:

>Hi,
>
>+1 to what Steven said about Kubernetes.
>
>I'd like to add that these 3 things (pid=host, net=host, -v) are
>supported by Marathon, so probably it's much less problematic for us
>than Kubernetes at this moment.
>
>Regards,
>Michal
>
>On 11/03/2015 12:18 AM, Steven Dake (stdake) wrote:
>> Gosh,
>>
>> Kubernetes as an underlay is an interesting idea.  We tried it for the
>> first 6 months of Kolla's existence and it almost killed the project.
>>   Essentially kubernetes lacks support for pid=host, net=host, and -v
>> bind mounting.  All 3 are required to deliver an operational OpenStack.
>>
>> This is why current Kolla goes with a bare metal underlay - all docker
>> options we need are available.
>>
>> Regards
>> -steve
>>
>>
>> From: Georgy Okrokvertskhov > >
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> Date: Monday, November 2, 2015 at 3:47 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> Subject: Re: [openstack-dev] [kolla] Mesos orchestration as discussed at
>> mid cycle (action required from core reviewers)
>>
>> Hi Steve,
>>
>> Thank you for the update. This is really interesting direction for
>>Kolla.
>> I agree with Jeff. It is interesting to see what other frameworks will
>> be used. I suspect Marathon framework is under consideration as it adds
>> most of the application centric functionality like HA\restarter, scaling
>> and rolling-restarts\upgrades. Kubernetes might be also a good candidate
>> for that.
>>
>> Thanks
>> Gosha
>>
>> On Mon, Nov 2, 2015 at 2:00 PM, Jeff Peeler > > wrote:
>>
>> On Mon, Nov 2, 2015 at 12:02 PM, Steven Dake (stdake)
>> > wrote:
>> > Hey folks,
>> >
>> > We had an informal vote at the mid cycle from the core reviewers,
>>and it was
>> > a majority vote, so we went ahead and started the process of the
>> > introduction of mesos orchestration into Kolla.
>> >
>> > For background for our few core reviewers that couldn't make it
>>and the
>> > broader community, Angus Salkeld has committed himself and 3
>>other Mirantis
>> > engineers full time to investigate if Mesos could be used as an
>> > orchestration engine in place of Ansible.  We are NOT dropping
>>our Ansible
>> > implementation in the short or long term.  Kolla will continue to
>>lead with
>> > Ansible.  At some point in Mitaka or the N cycle we may move the
>>ansible
>> > bits to a repository called "kolla-ansible" and the kolla
>>repository would
>> > end up containing the containers only.
>> >
>> > The general consensus was that if folks wanted to add additional
>> > orchestration systems for Kolla, they were free to do so if they
>>did the
>> > development and made a commitment to maintaining one core
>>reviewer team with
>> > broad expertise among the core reviewer team of how these various
>>systems
>> > work.
>> >
>> > Angus has agreed to the following
>> >
>> > A new team called "kolla-mesos-core" with 2 members.  One of the
>>members is
>> > Angus Salkeld, the other is selected by Angus Salkeld since this
>>is a cookie
>> > cutter empty repository.  This is typical of how new projects
>>would operate,
>> > but we don't want a code dump and instead want an integrated core
>>team.  To
>> > prevent a situation in which the current Ansible experts shy away
>>from the
>> > Mesos implementation, the core reviewer team has committed to
>>reviewing the
>> > mesos code to get a feel for it.
>> > Over the next 6-8 weeks these two folks will strive to join the
>>Kolla core
>> > team by typical means 1) irc participation 2) code generation 3)
>>effective
>> > and quality reviews 4) mailing list participation
>> > Angus will create a technical specification which will be
>>roll-call
>> > voted and only accepted once a majority of the core review team is
>>satisfied
>> > with the solution.
>> > The kolla-mesos deliverable will be under Kolla governance and be
>>managed by
>> > the Kolla core reviewer team after the kolla-mesos-core team is
>>deprecated.
>> > If the experiment fails, kolla-mesos will be placed in the attic.
>> There is
>> > no specific window for the experiments, it is really up to Angus
>>to decide
>> > if the technique is viable down the road.
>> > For the purpose of voting, the kolla-mesos-core team 

Re: [openstack-dev] [puppet] about $::os_service_default

2015-11-03 Thread Clayton O'Neill
What is the issue with logging?  Can someone other than Yanis look into
this?

On Tue, Nov 3, 2015 at 8:57 AM, Emilien Macchi  wrote:

> I'm seeing a lot of patches using the new $::os_service_default.
>
> Please stop trying to use it at this time. The feature is not stable
> yet and we're testing it only in the puppet-cinder module.
> I've heard Yanis found something that is not backward compatible with
> logging, but he's away this week so I suggest we wait until next week.
>
> In the meantime, please do not use $::os_service_default outside
> puppet-cinder.
>
> Thanks a lot,
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-03 Thread Sean M. Collins
I made a very quick attempt to jot down my thoughts about how it could
be used. It's based off what I proposed in https://review.openstack.org/238812, 
and is my attempt to take that review and use SQLAlchemy to make it
actually work.

https://github.com/sc68cal/neutron-classifier/blob/master/doc/source/usage.rst

It's very basic, it was just me hacking away at a proof of concept to
demonstrate that what I was proposing was possible, that we didn't need
a single database table with 10+ columns.
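
The design point — several narrow per-type tables joined to a small base table, instead of one wide table full of NULLs — can be sketched with an in-memory schema. All table and column names below are invented for illustration; they are not the actual neutron-classifier schema.

```python
import sqlite3

# A small base table records each classifier and its type; the
# type-specific attributes live in their own tables, joined by id,
# rather than as 10+ mostly-NULL columns on one table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classifiers (
    id INTEGER PRIMARY KEY,
    type TEXT NOT NULL          -- e.g. 'ethernet', 'ip'
);
CREATE TABLE ethernet_classifiers (
    classifier_id INTEGER REFERENCES classifiers(id),
    ethertype TEXT
);
CREATE TABLE ip_classifiers (
    classifier_id INTEGER REFERENCES classifiers(id),
    protocol TEXT,
    dscp INTEGER
);
""")
conn.execute("INSERT INTO classifiers VALUES (1, 'ip')")
conn.execute("INSERT INTO ip_classifiers VALUES (1, 'tcp', 0)")

# Fetching a classifier only joins the one table its type needs.
row = conn.execute("""
    SELECT c.type, i.protocol
    FROM classifiers c JOIN ip_classifiers i ON i.classifier_id = c.id
    WHERE c.id = 1
""").fetchone()
print(row)  # ('ip', 'tcp')
```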
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Jeff Peeler
On Tue, Nov 3, 2015 at 1:25 PM, Michal Rostecki  wrote:
> On 11/03/2015 04:16 PM, Jeff Peeler wrote:
>>
>> On Tue, Nov 3, 2015 at 1:44 AM, Michal Rostecki 
>> wrote:
>>>
>>> Hi,
>>>
>>> +1 to what Steven said about Kubernetes.
>>>
>>> I'd like to add that these 3 things (pid=host, net=host, -v) are
>>> supported
>>> by Marathon, so probably it's much less problematic for us than
>>> Kubernetes
>>> at this moment.
>>
>>
>> I don't actively track Kubernetes upstream, so this seemed like a
>> natural point of reevaluation. If Kubernetes still doesn't support the
>> Docker features Kolla needs, obviously it's a non-starter. Nice to
>> hear that Marathon does though.
>>
>
> After taking a look on docs, issues and pull requests in Kubernetes I have
> to admit that:
>
> - net=host is supported now -
> https://github.com/kubernetes/kubernetes/pull/5886
> - volumes seems to be supported -
> https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md#hostpath
> - the question is whether this option provides 'rw' mode
>
> I couldn't find any info about pid=host. But if there is a support for that,
> I will have to change my mind about Kubernetes.

This might be helpful:
https://github.com/kubernetes/kubernetes/pull/3817

I don't have any vested interest in usage of Kubernetes, just thought
it was worth looking into.
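
For reference, the three Docker options under discussion map onto Kubernetes pod-spec fields roughly as follows. This is a hedged sketch against the v1 API as of late 2015; whether `hostPID` is available and whether `hostPath` mounts are read-write is exactly the open question above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-service
spec:
  hostNetwork: true        # docker run --net=host
  hostPID: true            # docker run --pid=host (support unclear, see above)
  containers:
    - name: example
      image: example/image
      volumeMounts:
        - name: config
          mountPath: /etc/example
  volumes:
    - name: config
      hostPath:
        path: /etc/example   # docker run -v /etc/example:/etc/example
```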

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-03 Thread Dulko, Michal
On Tue, 2015-11-03 at 18:57 +0100, Michał Dubiel wrote:
> Hi all,

> We have a simple patch allowing to use OpenContrail's vrouter with
> vhostuser vif types (currently only OVS has support for that). We
> would like to contribute it. 

> However, We would like this change to land in the next maintenance
> release of Kilo. Is it possible? What should be the process for this?
> Should we prepare a blueprint and review request for the 'master'
> branch first? It is small self contained change so I believe it does
> not need a nova-spec.

> Regards,
> Michal

The policy is that backports to Kilo are now possible for security fixes
only [1]. Even if your commit falls into the security bugfix category, it
would need to be merged to master first.

I think your best call is contributing the feature to the current master
(Mitaka) and prepare downstream backport for your internal needs.

[1] https://wiki.openstack.org/wiki/Releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Davanum Srinivas
+1 Doug

On Tue, Nov 3, 2015 at 3:28 PM, Doug Hellmann  wrote:
> lifeless had some proposals about managing stable requirements and
> constraints that he presented during the summit. We should get those
> written down before we start approving any changes.
>
> Doug
>
> Excerpts from Davanum Srinivas (dims)'s message of 2015-11-03 14:57:01 -0500:
>> Matthew,
>>
>> There's a failure in grenade  - https://review.openstack.org/#/c/232918/
>> There's a fix in progress - https://review.openstack.org/#/c/240371/
>>
>> Once that gets released in oslo.reports, we should be able to unclog
>> those updates.
>>
>> thanks,
>> Dims
>>
>> On Tue, Nov 3, 2015 at 2:37 PM, Matthew Thode  
>> wrote:
>> > Are these constraints locked in place for the entire release cycle?  It
>> > doesn't look like the stable/liberty version has been updated since
>> > release and it'd be nice if newer versions of the packages were tested,
>> > specifically babel in my case (as we have to have users mask the newer,
>> > stable version).
>> >
>> > --
>> > Matthew Thode (prometheanfire)
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-03 Thread Pavlo Shchelokovskyy
Hi all,

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.

As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing the
ipmi password, it should not only notify/update Ironic's knowledge on that
node, but also notify/update the CMDB on that change - at least there must
be a possibility (a ready-to-use plug point) to do that before we roll out
such feature.

Best regards,
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-03 Thread Derek Higgins



On 03/11/15 15:25, Gregory Haynes wrote:

Hello everyone,

I would like to propose adding Ian Wienand as a core reviewer on the
diskimage-builder project. Ian has been making a significant number of
contributions for some time to the project, and has been a great help in
reviews lately. Thus, I think we could benefit greatly by adding him as
a core reviewer.

Current cores - Please respond with any approvals/objections by next Friday
(November 13th).


+1 from me, Ian has been putting in a lot of good reviews in DIB over 
the last few months.




Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Enabling and Reading notifications

2015-11-03 Thread Pratik Mallya
Hello,

I was looking for guidance on how to enable notifications in Heat, and
whether there is already a tool that can read those events. Looking through
the code gives somewhat conflicting information as to the extent to which
notifications are supported: e.g. [1] says it's not supported, but there is an
integration test [2] available.

Thanks,
Pratik

[1]: 
https://github.com/openstack/heat/blob/aa6449ce5df64a95df29a15bfe3edacbefb8f1aa/heat/api/openstack/v1/stacks.py#L222
[2]: 
https://github.com/openstack/heat/blob/master/heat_integrationtests/functional/test_notifications.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
Hi stackers,

Projects like Heat, Tempest, Rally, Scalar, and other tools that work with
OpenStack usually interact with resources (e.g. VMs, volumes, images, ...)
in the following way:

>>> resource = api.resource_do_some_stuff()
>>> while api.resource_get(resource["uuid"])["status"] != expected_status:
>>>     sleep(a_bit)

For each async operation they poll, calling resource_get() many times, which
creates significant load on the API and DB layers due to the nature of this
request. (Getting full information about a resource usually produces SQL
queries that contain multiple JOINs; e.g. for a Nova VM it's 6 joins.)

What if we add a new API method that just returns the resource status by
UUID? Or even extend the GET request with a new argument that returns
only the status?
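
A toy schema makes the cost difference concrete: the status-only lookup reads a single indexed column from one table, while the full GET needs joins. The layout below is invented for illustration and only loosely mirrors Nova's.

```python
import sqlite3

# One "resource" table plus one related table stand in for the several
# tables a real GET joins across. Schema and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instances (uuid TEXT PRIMARY KEY, status TEXT, flavor_id INT);
CREATE TABLE flavors (id INT PRIMARY KEY, name TEXT);
INSERT INTO flavors VALUES (1, 'm1.small');
INSERT INTO instances VALUES ('abc', 'ACTIVE', 1);
""")

def get_full(uuid):
    # What resource_get() effectively does today: joins on every poll.
    return conn.execute("""
        SELECT i.uuid, i.status, f.name
        FROM instances i JOIN flavors f ON f.id = i.flavor_id
        WHERE i.uuid = ?
    """, (uuid,)).fetchone()

def get_status(uuid):
    # The proposed cheap call: a single-table, single-column lookup.
    row = conn.execute(
        "SELECT status FROM instances WHERE uuid = ?", (uuid,)).fetchone()
    return row[0] if row else None

print(get_full("abc"))    # ('abc', 'ACTIVE', 'm1.small')
print(get_status("abc"))  # ACTIVE
```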

Thoughts?


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] notification subteam

2015-11-03 Thread Michael Still
I'd be interested in being involved with this, and I know Paul Murray is
interested as well.

I went to make a doodle, but then realised the only non-terrible timeslot
for Australia / UK / US Central is 8pm UTC (7am Australia, 8pm London, 2pm
Central US). So what do people think of that time slot?

Michael

On Wed, Nov 4, 2015 at 1:46 AM, Balázs Gibizer 
wrote:

> Hi,
>
> We discussed in the summit that nova notification API needs some care.  It
> was suggested that forming a subteam around notifications might help
> coordinate and track the effort.  So I have created a wikipage [1] and I
> added a section in the Mitaka tracking etherpad [2] for the subteam.
>
> We can have regular meetings on IRC if we want but we have to find a
> suitable timeslot. I work in CET time zone, so from UTC 7:00  to UTC 17:00
> is best for me, but a bit later might also be OK.
>
> Any comments and suggestions are welcome.
>
> Cheers,
> Gibi
>
> [1] https://wiki.openstack.org/wiki/Meetings/NovaNotification
> [2] https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Steven Dake (stdake)
With the spec that Angus proposed, a read/write bind mount volume is not a
requirement as the configuration comes from the central data store (etcd
or zookeeper).

Just pid=host and net=host are hard requirements.  Nova will not work
without net=host, at least on RPM based distributions.  Sam Yaple I think
indicated it had worked in the past on Ubuntu, but we really want to stick
with an "all the distros" policy even for experimental repositories.

I found getting neutron networking with nova operational inside
Kubernetes to be non-viable, but I couldn't get very far because of the
lack of net=host.

Using replication controllers also didn't work as I would have liked
because there is no way to specify "replicate to all nodes 1 copy of the
pod".

I'm unclear how the kubernetes service construct would work in a world
with net=host, but perhaps it is detailed in the implementation.

I would recommend picking the tech you want to work with, either Mesos or
Kubernetes, so as to put all effort behind one arrow rather than two :)

Regards
-steve

On 11/3/15, 11:25 AM, "Michal Rostecki"  wrote:

>On 11/03/2015 04:16 PM, Jeff Peeler wrote:
>> On Tue, Nov 3, 2015 at 1:44 AM, Michal Rostecki
>> wrote:
>>> Hi,
>>>
>>> +1 to what Steven said about Kubernetes.
>>>
>>> I'd like to add that these 3 things (pid=host, net=host, -v) are
>>>supported
>>> by Marathon, so probably it's much less problematic for us than
>>>Kubernetes
>>> at this moment.
>>
>> I don't actively track Kubernetes upstream, so this seemed like a
>> natural point of reevaluation. If Kubernetes still doesn't support the
>> Docker features Kolla needs, obviously it's a non-starter. Nice to
>> hear that Marathon does though.
>>
>
>After taking a look on docs, issues and pull requests in Kubernetes I
>have to admit that:
>
>- net=host is supported now -
>https://github.com/kubernetes/kubernetes/pull/5886
>- volumes seems to be supported -
>https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/
>volumes.md#hostpath
>- the question is whether this option provides 'rw' mode
>
>I couldn't find any info about pid=host. But if there is a support for
>that, I will have to change my mind about Kubernetes.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-03 Thread Igor Kalnitsky
Hi Javeria,

Try to use 'master' in 'role' field. Example:

- role: 'master'
  stage: pre_deployment
  type: shell
  parameters:
    cmd: echo all > /tmp/plugin.all
    timeout: 42

Let me know if you need additional help.

Thanks,
Igor

P.S: Since Fuel 7.0 it's recommended to use deployment_tasks.yaml
instead of tasks.yaml. Please see Fuel Plugins wiki page for details.

On Tue, Nov 3, 2015 at 10:26 PM, Javeria Khan  wrote:
> Hey everyone,
>
> I've been working on a fuel plugin and for some reason just cant figure out
> how to run a task on the fuel master node through the tasks.yaml. Is there
> even a role for it?
>
> Something similar to what ansible does with localhost would work.
>
> Thanks,
> Javeria
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-03 Thread Matt Fischer
Sorry I replied to this right away but used the wrong email address and it
bounced!

> I've appreciated all of richs v3 contributions to keystone. +1 from me.

On Tue, Nov 3, 2015 at 4:38 AM, Sofer Athlan-Guyot 
wrote:

> He's very good reviewer with a deep knowledge of keystone and puppet.
> Thank you Richard for your help.
>
> +1
>
> Emilien Macchi  writes:
>
> > At the Summit we discussed about scaling-up our team.
> > We decided to investigate the creation of sub-groups specific to our
> > modules that would have +2 power.
> >
> > I would like to start with puppet-keystone:
> > https://review.openstack.org/240666
> >
> > And propose Richard Megginson part of this group.
> >
> > Rich is leading puppet-keystone work since our Juno cycle. Without his
> > leadership and skills, I'm not sure we would have Keystone v3 support
> > in our modules.
> > He's a good Puppet reviewer and takes care of backward compatibility.
> > He also has strong knowledge at how Keystone works. He's always
> > willing to lead our roadmap regarding identity deployment in
> > OpenStack.
> >
> > Having him on-board is for us an awesome opportunity to be ahead of
> > other deployments tools and supports many features in Keystone that
> > real deployments actually need.
> >
> > I would like to propose him part of the new puppet-keystone-core
> > group.
> >
> > Thank you Rich for your work, which is very appreciated.
>
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-03 Thread Sławek Kapłoński
Hello,

I'm now working on a patch to neutron to add QoS support in linuxbridge:
https://review.openstack.org/#/c/236210/
The patch is not finished yet, but I have a problem with some tests. For
example, the Microsoft Hyper-V CI checks are failing. When I checked the logs
of these tests in the http://64.119.130.115/neutron/236210/7/results.html.gz
file, I found an error like:

ft1.1: setUpClass 
(tempest.api.network.test_networks.NetworksIpV6TestAttrs)_StringException: 
Traceback (most recent call last):
  File "tempest/test.py", line 274, in setUpClass
six.reraise(etype, value, trace)
  File "tempest/test.py", line 267, in setUpClass
cls.resource_setup()
  File "tempest/api/network/test_networks.py", line 65, in resource_setup
cls.network = cls.create_network()
  File "tempest/api/network/base.py", line 152, in create_network
body = cls.networks_client.create_network(name=network_name)
  File "tempest/services/network/json/networks_client.py", line 21, in 
create_network
return self.create_resource(uri, post_data)
  File "tempest/services/network/json/base.py", line 59, in create_resource
resp, body = self.post(req_uri, req_post_data)
  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 259, in post
return self.request('POST', url, extra_headers, headers, body)
  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 639, in request
resp, resp_body)
  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 757, in _error_checker
resp=resp)
tempest_lib.exceptions.UnexpectedResponseCode: Unexpected response code 
received
Details: 503


It is strange to me because it looks like the error happens somewhere in
create_network, and I didn't change anything in the code that creates
networks. The other tests look fine to me.
So my question is: should I investigate the cause of these errors and try to
fix them in my patch as well? Or how should I proceed with this kind of error?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Davanum Srinivas
Matthew,

There's a failure in grenade  - https://review.openstack.org/#/c/232918/
There's a fix in progress - https://review.openstack.org/#/c/240371/

Once that gets released in oslo.reports, we should be able to unclog
those updates.

thanks,
Dims

On Tue, Nov 3, 2015 at 2:37 PM, Matthew Thode  wrote:
> Are these constraints locked in place for the entire release cycle?  It
> doesn't look like the stable/liberty version has been updated since
> release and it'd be nice if newer versions of the packages were tested,
> specifically babel in my case (as we have to have users mask the newer,
> stable version).
>
> --
> Matthew Thode (prometheanfire)
>
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [tripleO] appropriate location for docker image uploading

2015-11-03 Thread Jeff Peeler
I'm looking at introducing the ability for tripleoclient to upload
docker images into a docker registry (planning for it to be installed
in the undercloud [1]). I wanted to make sure something like this
would be accepted or get suggestions on an alternate approach.
Ultimately it may end up looking something like the patch below, which
I'm still waiting for further feedback on:
https://review.openstack.org/#/c/239090/


[1] https://review.openstack.org/#/c/238238/



[openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-03 Thread Javeria Khan
Hey everyone,

I've been working on a Fuel plugin and for some reason just can't figure out
how to run a task on the Fuel master node through tasks.yaml. Is there
even a role for it?

Something similar to what ansible does with localhost would work.

Thanks,
Javeria


[openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Matthew Thode
Are these constraints locked in place for the entire release cycle?  It
doesn't look like the stable/liberty version has been updated since
release and it'd be nice if newer versions of the packages were tested,
specifically babel in my case (as we have to have users mask the newer,
stable version).

-- 
Matthew Thode (prometheanfire)





[openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-03 Thread Doug Hellmann
As we discussed at the summit, the release management team is
modifying our change management tracking tools and processes this
cycle. This email is the official announcement of those changes,
with more detail than we provided at the summit.

In past cycles, we have used a combination of Launchpad milestone
pages and our wiki to track changes in releases. We used to pull
together release notes for stable point releases at the time of
release. Most of that work fell to the stable maintenance and release
teams. Similarly, the release managers worked with PTLs and release
liaisons at each milestone checkpoint to update Launchpad to
accurately reflect the work completed at each stage of development.
It's a lot of work to fix up Launchpad and assemble the notes and
make sure they are accurate, which has caused us to be a bottleneck
for clear and complete communication at the time of the release.
We have been looking for ways to reduce that effort for these tasks
and eliminate the bottleneck for some time.

This cycle, to address these problems for our ever-growing set of
projects, the release management team is introducing a new tool for
handling release notes as files in-tree, to allow us to simply and
continuously build the release notes for stable branch point releases
and milestones on the master branch. The idea is to use small YAML
files, usually one per note or patch, to avoid merge conflicts on
backports and then to compile those files in a deterministic way
into a more readable document for readers. Files containing release
notes can be included in patches directly, or you can create a
separate patch with release notes if you want to document a feature
that spans several patches.  The tool is called Reno, and it currently
supports ReStructuredText and Sphinx for converting note input files
to HTML for publication.  Reno is git branch-aware, so we can have
separate release notes documents for each release series published
together from the master build.
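As a sketch of what one of those small per-patch note files might contain (the section names and wording here are illustrative assumptions, not Reno's required schema):

```python
from pathlib import Path

# One small file per note avoids merge conflicts when notes are
# backported; the tool later compiles all such files into one document.
note = """\
---
features:
  - Added an option to import images from a remote URL.
fixes:
  - Fixed a race when two imports target the same image.
"""

notes_dir = Path("releasenotes/notes")
notes_dir.mkdir(parents=True, exist_ok=True)
(notes_dir / "example-note.yaml").write_text(note)
```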

The documentation for Reno, including design principles and basic
usage instructions, is available at [1]. For now we are focusing
on Sphinx integration so that release notes are published online.
We will add setuptools integration in a future version of Reno so
that the release notes can be built with the source distribution.

As part of this rollout, I will also be updating the settings for
the gerrit hook script so that when a patch with "Closes-Bug" in
the commit message is merged the bug will be marked as "Fix Released"
instead of "Fix Committed" (since "Fix Committed" is not a closed
state). When that work is done, I'll send another email to let PTLs
know they can go through their existing bugs and change their status.

We are ready to start rolling out Reno for use with Liberty stable
branch releases and in master for the Mitaka release. We need the
release liaisons to create and merge a few patches for each project
between now and the Mitaka-1 milestone.

1. We need one patch to the master branch of the project to add the
   instructions for publishing the notes as part of the project
   sphinx documentation build.  An example patch for Glance is in
   [2].

2. We need another patch to the stable/liberty branch of the project
   to set up Reno and introduce the first release note for that
   series. An example patch for Glance is in [3].

3. Each project needs to turn on the relevant jobs in project-config.
   An example patch using Glance is in [4]. New patches will need
   to be based on the change that adds the necessary template [5],
   until that lands.

4. Reno was not ready before the summit, so we started by using the
   wiki for release notes for the initial Liberty releases. We also
   need liaisons to convert those notes to reno YAML files in the
   stable/liberty branch of each project.

Please use the topic "add-reno" for all patches so we can track
adoption.

Once those merge, project teams can stop using Launchpad for tracking
completed work. We will still use Launchpad for bug reports, for
now. If a team wants to continue using it for tracking blueprints,
that's fine.  If a team wants to use Launchpad for scheduling work
to be done in the future, but not for release tracking, that is
also fine. The release management team will no longer be reviewing
or updating Launchpad as part of the release process.

Thanks,
Doug

[1] http://docs.openstack.org/developer/reno/
[2] https://review.openstack.org/241323
[3] https://review.openstack.org/241322
[4] https://review.openstack.org/241344
[5] https://review.openstack.org/241343



Re: [openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Matthew Thode
On 11/03/2015 01:57 PM, Davanum Srinivas wrote:
> Matthew,
> 
> There's a failure in grenade  - https://review.openstack.org/#/c/232918/
> There's a fix in progress - https://review.openstack.org/#/c/240371/
> 
> Once that gets released in oslo.reports. We should be able to unclog
> those updates
> 
> thanks,
> Dims
> 
> On Tue, Nov 3, 2015 at 2:37 PM, Matthew Thode  
> wrote:
>> Are these constraints locked in place for the entire release cycle?  It
>> doesn't look like the stable/liberty version has been updated since
>> release and it'd be nice if newer versions of the packages were tested,
>> specifically babel in my case (as we have to have users mask the newer,
>> stable version).
>>
>> --
>> Matthew Thode (prometheanfire)
>>
>>
>>
> 
> 
> 
good to know, thanks

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [tripleO] appropriate location for docker image uploading

2015-11-03 Thread Clint Byrum
Excerpts from Jeff Peeler's message of 2015-11-03 11:54:24 -0800:
> I'm looking at introducing the ability for tripleoclient to upload
> docker images into a docker registry (planning for it to be installed
> in the undercloud [1]). I wanted to make sure something like this
> would be accepted or get suggestions on an alternate approach.
> Ultimately may end up looking something like the patch below, which
> I'm still waiting for further feedback on:
> https://review.openstack.org/#/c/239090/
> 

I'm curious if you could push toward just using Magnum for this, which
would give you all of the power of the k8s registry, for instance.

> 
> [1] https://review.openstack.org/#/c/238238/
> 



Re: [openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Doug Hellmann
lifeless had some proposals about managing stable requirements and
constraints that he presented during the summit. We should get those
written down before we start approving any changes.

Doug

Excerpts from Davanum Srinivas (dims)'s message of 2015-11-03 14:57:01 -0500:
> Matthew,
> 
> There's a failure in grenade  - https://review.openstack.org/#/c/232918/
> There's a fix in progress - https://review.openstack.org/#/c/240371/
> 
> Once that gets released in oslo.reports. We should be able to unclog
> those updates
> 
> thanks,
> Dims
> 
> On Tue, Nov 3, 2015 at 2:37 PM, Matthew Thode  
> wrote:
> > Are these constraints locked in place for the entire release cycle?  It
> > doesn't look like the stable/liberty version has been updated since
> > release and it'd be nice if newer versions of the packages were tested,
> > specifically babel in my case (as we have to have users mask the newer,
> > stable version).
> >
> > --
> > Matthew Thode (prometheanfire)
> >
> >
> >
> 



Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-03 Thread Steve Kowalik
On 04/11/15 02:25, Gregory Haynes wrote:
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).

+1 from me as well.

-- 
Steve
"...In the UNIX world, people tend to interpret `non-technical user'
 as meaning someone who's only ever written one device driver."
 - Daniel Pead



Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-03 Thread Fei Long Wang

Hi Doug,

Thanks for posting this. I'm working on this for Zaqar now and I have
a question: for the stable/liberty patch, where does the
"60fdcaba00e30d02" in [1] come from? Thanks.


[1] 
https://review.openstack.org/#/c/241322/1/releasenotes/notes/60fdcaba00e30d02-start-using-reno.yaml
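For what it's worth, that prefix looks like a random hex string generated when the note file is created, presumably to keep filenames unique across branches and backports. A sketch of how such a name could be produced (this is an assumption about the tool's behavior, not taken from its source):

```python
import binascii
import os

def new_note_filename(slug):
    # 8 random bytes -> 16 hex characters, e.g. '60fdcaba00e30d02'
    prefix = binascii.b2a_hex(os.urandom(8)).decode("ascii")
    return "%s-%s.yaml" % (prefix, slug)

print(new_note_filename("start-using-reno"))
```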


On 04/11/15 08:46, Doug Hellmann wrote:

[...]


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Morgan Fainberg
On Nov 3, 2015 4:29 PM, "Clint Byrum"  wrote:
>
> Excerpts from Boris Pavlovic's message of 2015-11-03 14:20:10 -0800:
> > Hi stackers,
> >
> > Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> > that works with OpenStack are working with resources (e.g. VM, Volumes,
> > Images, ..) in the next way:
> >
> > >>> resource = api.resouce_do_some_stuff()
> > >>> while api.resource_get(resource["uuid"]) != expected_status
> > >>>sleep(a_bit)
> >
> > For each async operation they are polling and call many times
> > resource_get() which creates significant load on API and DB layers due the
> > nature of this request. (Usually getting full information about resources
> > produces SQL requests that contain multiple JOINs, e.g. for nova vm it's 6
> > joins).
> >
> > What if we add a new API method that will just return resource status by
> > UUID? Or even just extend get request with the new argument that returns
> > only status?
>
> I like the idea of being able to pass in the set of fields you want to
> see with each get. In SQL, often times only passing in indexed fields
> will allow a query to be entirely serviced by a brief range scan in
> the B-tree. For instance, if you have an index on '(UUID, status)',
> then this lookup will be a single read from an index in MySQL/MariaDB:
>
> SELECT status FROM instances WHERE UUID='foo';
>
> The explain on this will say 'Using index' and basically you'll just do
> a range scan on the UUID portion, and only find one entry, which will
> be lightning fast, and return only status since it already has it there
> in the index. Maintaining the index is not free, but probably worth it
> if your users really do poll this way a lot.
>
> That said, this is optimizing for polling, and I'm not a huge fan. I'd
> much rather see a pub/sub model added to the API, so that users can
> simply subscribe to changes in resources, and poll only when a very long
> timeout has passed. This will reduce load on API services, databases,

++ this is a much better long term solution if we are investing engineering
resources along these lines.

> caches, etc. There was a thread some time ago about using Nova's built
> in notifications to produce an Atom feed per-project. That seems like
> a much more scalable model, as even polling just that super fast query
> will still incur quite a bit more cost than a GET with If-Modified-Since
> on a single xml file.
>


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
Clint, Morgan,

I totally agree that the pub/sub model is a better approach.

However, there are two great things about polling:
1) it's simpler to use than pub/sub (especially in shell)
2) it has a really simple implementation & we can get this into OpenStack in a
few days/weeks

What about just supporting both approaches?
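The polling pattern under discussion, with a bounded exponential backoff to soften the load on the API, might look like this (a sketch; `get_status` stands in for whatever hypothetical client call fetches the resource state):

```python
import time

def wait_for_status(get_status, expected, timeout=300,
                    interval=1.0, factor=2.0, max_interval=30.0):
    """Poll get_status() until it returns `expected` or `timeout` expires.

    Backing off exponentially keeps shell-style polling simple while
    reducing the number of GET requests hitting the API and DB layers.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status == expected:
            return status
        if time.monotonic() + interval > deadline:
            raise TimeoutError("resource did not reach %r within %ss"
                               % (expected, timeout))
        time.sleep(interval)
        interval = min(interval * factor, max_interval)
```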


Best regards,
Boris Pavlovic

On Wed, Nov 4, 2015 at 9:33 AM, Morgan Fainberg 
wrote:

>
> On Nov 3, 2015 4:29 PM, "Clint Byrum"  wrote:
> >
> > Excerpts from Boris Pavlovic's message of 2015-11-03 14:20:10 -0800:
> > > Hi stackers,
> > >
> > > Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> > > that works with OpenStack are working with resources (e.g. VM, Volumes,
> > > Images, ..) in the next way:
> > >
> > > >>> resource = api.resouce_do_some_stuff()
> > > >>> while api.resource_get(resource["uuid"]) != expected_status
> > > >>>sleep(a_bit)
> > >
> > > For each async operation they are polling and call many times
> > > resource_get() which creates significant load on API and DB layers due the
> > > nature of this request. (Usually getting full information about resources
> > > produces SQL requests that contain multiple JOINs, e.g. for nova vm it's 6
> > > joins).
> > >
> > > What if we add a new API method that will just return resource status by
> > > UUID? Or even just extend get request with the new argument that returns
> > > only status?
> >
> > I like the idea of being able to pass in the set of fields you want to
> > see with each get. In SQL, often times only passing in indexed fields
> > will allow a query to be entirely serviced by a brief range scan in
> > the B-tree. For instance, if you have an index on '(UUID, status)',
> > then this lookup will be a single read from an index in MySQL/MariaDB:
> >
> > SELECT status FROM instances WHERE UUID='foo';
> >
> > The explain on this will say 'Using index' and basically you'll just do
> > a range scan on the UUID portion, and only find one entry, which will
> > be lightning fast, and return only status since it already has it there
> > in the index. Maintaining the index is not free, but probably worth it
> > if your users really do poll this way a lot.
> >
> > That said, this is optimizing for polling, and I'm not a huge fan. I'd
> > much rather see a pub/sub model added to the API, so that users can
> > simply subscribe to changes in resources, and poll only when a very long
> > timeout has passed. This will reduce load on API services, databases,
>
> ++ this is a much better long term solution if we are investing
> engineering resources along these lines.
>
> > caches, etc. There was a thread some time ago about using Nova's built
> > in notifications to produce an Atom feed per-project. That seems like
> > a much more scalable model, as even polling just that super fast query
> > will still incur quite a bit more cost than a GET with If-Modified-Since
> > on a single xml file.
> >
> >
>
>


Re: [openstack-dev] [all][glance] Summary from the Mitaka summit

2015-11-03 Thread Christopher Aedo
On Tue, Nov 3, 2015 at 1:48 AM, Flavio Percoco  wrote:
> [...]
> Glance Artifacts REpository (Glare)
> ===
>
> Do you remember the Glance *EXPERIMENTAL* Glance V3 API? We had that
> famous discussion again, the one we had in Vancouver, Paris and
> Atlanta :) This time, however, we were able to reason about this with
> the implementation in mind and, for the sake of backwards
> compatibility, DefCore support and not having another major API
> release, we've agreed to pull it out into its own endpoint/process.
>
> In addition to the above, the experimental version of this API will be
> refactored a bit to be compliant with DefCore requirements. Or better,
> the team has engaged with the API WG team and asked them to review the
> API implementation. There was quite some feedback that will be
> addressed during Mitaka. It's still unsure whether it'll be considered
> stable at the end of the cycle. This will be revisited when the time
> comes.
>
> As far as the python bindings go, we'll pull into glanceclient the
> work that was done during liberty. Therefore, glanceclient will be the
> python library to use, whereas the CLI will be in openstackclient.
>
> We also participated in Murano's and App Catalog's meetup to discuss
> how we can move forward with this. The result of that discussion is
> that these teams will look into using Glare. They had several
> questions and we went through all of them. I'm personally super happy
> about this collaboration.

I'm really happy we had the chance to get a much closer look at all
the great work Alexander Tivelkov and others have put into Glare.
Thank you for making time to come to our sessions, demonstrate the
potential ways we could come together on this, and discuss the path
forward.  I'm hopeful this is going to work out and will turn out to
be something we can implement for our use-case in the next few months
:)

-Christopher



Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread michael mccune

On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

What if we add new API method that will just resturn resource status by
UUID? Or even just extend get request with the new argument that returns
only status?

Thoughts?


I'm not sure I understand the "resource status by UUID" part, could you
explain that a little more.


As for changing the GET request to return only the status, can't you
have a filter on the GET URL that instructs it to return only the status?
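Such a filter could be as simple as projecting the serialized resource down to the requested keys before returning it. A minimal sketch (the `fields` query parameter name and the sample resource are hypothetical):

```python
def project_fields(resource, fields=None):
    # e.g. GET /servers/<uuid>?fields=status would pass fields=['status']
    # and the response body would shrink to just {'status': ...}.
    if not fields:
        return dict(resource)
    return {k: v for k, v in resource.items() if k in fields}

vm = {"uuid": "foo", "status": "ACTIVE", "name": "vm1", "flavor": "m1.small"}
print(project_fields(vm, ["status"]))  # → {'status': 'ACTIVE'}
```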


mike



[openstack-dev] [tacker] Summary from the Mitaka summit

2015-11-03 Thread Sridhar Ramaswamy
Here are the notes related to OpenStack Tacker project from Tokyo summit.

Tacker Developer Meetup
---

Rough notes are captured in the tacker design summit etherpad [1].
As discussed in the previous IRC meeting [2], the team considered
possible new IRC meeting slots for the upcoming cycle. We decided
to go with the Tuesday 1700 UTC slot.

Tacker - tosca-parser / heat-translator Meetup
--

Tacker developers had a meetup with tosca-parser / heat-translator
core-team members. Bob from the Tacker team will pitch in to bring
tosca-nfv profile support to the tosca-parser library. He will also
coordinate bringing tosca-parser-based template parsing to Tacker.

Birds of a Feather on NFV Orchestration using Tacker
-

We had quite a packed room full of attendees for this session.
We provided an update on what we achieved in Liberty and, more
importantly, presented the planned roadmap for Mitaka and
solicited input.

There was a strong support for the key features listed - Multi-VIM,
Service Function Chaining (SFC) and Platform aware VNF placement.
There were many questions on the layer of separation between VNFM
and NFVO. Some of the attendees requested the flexibility to use Tacker
just as a VNFM or just as an NFVO. There was also a concern about
whether Tacker is taking on too many things under its scope. The
answer to this is that the features added so far and the ones planned
are in response to requests from the initial adopters of Tacker.
Again, the plan is to leverage as many existing OpenStack projects
(like Heat) and libraries (like tosca-parser) as possible. There was
an ask to create a few generic configuration-mgmt drivers like
NETCONF/YANG for quick on-boarding of VNFs.

The team will take these inputs into account in the upcoming blueprints and
RFEs.


Tacker-related demos and talks
---

1. NFV Orchestration using OpenStack Tacker / vBrownBag -
https://youtu.be/y9fYiIsIErc
2. Tacker demo orchestrating vRouter and vEPC (complex VNF) -
https://youtu.be/EfqWArz25Hg

- Sridhar

[1]  

https://etherpad.openstack.org/p/mitaka-tacker-design-summit
[2]

http://eavesdrop.openstack.org/meetings/tacker/2015/tacker.2015-10-22-16.09.log.html


Re: [openstack-dev] [Nova] notification subteam

2015-11-03 Thread Michael Davies
On Wed, Nov 4, 2015 at 8:49 AM, Michael Still  wrote:

> I'd be interested in being involved with this, and I know Paul Murray is
> interested as well.
>
> I went to make a doodle, but then realised the only non-terrible timeslot
> for Australia / UK / US Central is 8pm UTC (7am Australia, 8pm London, 2pm
> Central US). So what do people think of that time slot?
>

I'm interested, along with Mario, in making sure Ironic and Nova
notifications follow similar paths, so I'd probably lurk along to this as
well (and the proposed time slot works for me).
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2015-11-03 14:20:10 -0800:
> Hi stackers,
> 
> Usually, projects like Heat, Tempest, Rally, Scalar, and other tools
> that work with OpenStack are working with resources (e.g. VM, Volumes,
> Images, ..) in the next way:
> 
> >>> resource = api.resouce_do_some_stuff()
> >>> while api.resource_get(resource["uuid"]) != expected_status
> >>>sleep(a_bit)
> 
> For each async operation they are polling and call many times
> resource_get() which creates significant load on API and DB layers due the
> nature of this request. (Usually getting full information about resources
> produces SQL requests that contain multiple JOINs, e.g. for nova vm it's 6
> joins).
> 
> What if we add a new API method that will just return resource status by
> UUID? Or even just extend get request with the new argument that returns
> only status?

I like the idea of being able to pass in the set of fields you want to
see with each get. In SQL, often times only passing in indexed fields
will allow a query to be entirely serviced by a brief range scan in
the B-tree. For instance, if you have an index on '(UUID, status)',
then this lookup will be a single read from an index in MySQL/MariaDB:

SELECT status FROM instances WHERE UUID='foo';

The explain on this will say 'Using index' and basically you'll just do
a range scan on the UUID portion, and only find one entry, which will
be lightning fast, and return only status since it already has it there
in the index. Maintaining the index is not free, but probably worth it
if your users really do poll this way a lot.
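The covering-index behaviour described above can be reproduced with a toy schema. This sketch uses SQLite instead of MySQL/MariaDB, and the table and column names are made up for illustration, but SQLite's planner reports the equivalent of MySQL's 'Using index':

```python
import sqlite3

# Toy stand-in for the instances table, with an index on (uuid, status).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT, status TEXT, metadata TEXT)")
conn.execute("CREATE INDEX idx_uuid_status ON instances (uuid, status)")
conn.execute("INSERT INTO instances VALUES ('foo', 'ACTIVE', 'big joined payload')")

# The status lookup is served entirely from the index -- no table access.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT status FROM instances WHERE uuid = 'foo'"
).fetchone()
print(plan[-1])  # mentions "COVERING INDEX idx_uuid_status"
```

Dropping the index (or selecting a non-indexed column such as metadata) makes the plan fall back to scanning the table itself.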

That said, this is optimizing for polling, and I'm not a huge fan. I'd
much rather see a pub/sub model added to the API, so that users can
simply subscribe to changes in resources, and poll only when a very long
timeout has passed. This will reduce load on API services, databases,
caches, etc. There was a thread some time ago about using Nova's built
in notifications to produce an Atom feed per-project. That seems like
a much more scalable model, as even polling just that super fast query
will still incur quite a bit more cost than a GET with If-Modified-Since
on a single xml file.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance][murano][app-catalog] Summary from the Mitaka summit

2015-11-03 Thread Fox, Kevin M
+1. I think we all had a great experience coming together, finding common 
ground, and coming up with a solid plan to move forward that helps everyone 
make progress. Thanks everyone for not skipping over a potentially unpleasant 
conversation and getting together to talk it through. I think we all won by 
working together. Go OpenStack! :)

Kevin

From: Christopher Aedo [d...@aedo.net]
Sent: Tuesday, November 03, 2015 3:29 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][glance] Summary from the Mitaka summit

On Tue, Nov 3, 2015 at 1:48 AM, Flavio Percoco  wrote:
> [...]
> Glance Artifacts REpository (Glare)
> ===
>
> Do you remember the *EXPERIMENTAL* Glance V3 API? We had that
> famous discussion again, the one we had in Vancouver, Paris and
> Atlanta :) This time, however, we were able to reason about this with
> the implementation in mind and, for the sake of backwards
> compatibility, DefCore support and not having another major API
> release, we've agreed to pull it out into its own endpoint/process.
>
> In addition to the above, the experimental version of this API will be
> refactored a bit to be compliant with DefCore requirements. Or better,
> the team has engaged with the API WG team and asked them to review the
> API implementation. There was quite some feedback that will be
> addressed during Mitaka. It's still unclear whether it'll be considered
> stable at the end of the cycle. This will be revisited when the time
> comes.
>
> As far as the python bindings go, we'll pull into glanceclient the
> work that was done during liberty. Therefore, glanceclient will be the
> python library to use, whereas the CLI will be in openstackclient.
>
> We also participated in Murano's and App Catalog's meetup to discuss
> how we can move forward with this. The result of that discussion is
> that these teams will look into using Glare. They had several
> questions and we went through all of them. I'm personally super happy
> about this collaboration.

I'm really happy we had the chance to get a much closer look at all
the great work Alexander Tivelkov and others have put in to Glare.
Thank you for making time to come to our sessions, demonstrate the
potential ways we could come together on this, and discuss the path
forward.  I'm hopeful this is going to work out and will turn out to
be something we can implement for our use-case in the next few months
:)

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack] [nova] Instance snapshot creation locally and Negative values returned by Resource Tracker

2015-11-03 Thread Vilobh Meshram
Hi All,

I see negative values being returned by the resource tracker, which is
surprising, since enough capacity is available on the Hypervisor (as seen
in the df -ha output [0]). In my setup I have configured nova.conf to
create instance snapshots locally, and I *don't have* the disk filter enabled.

Local instance snapshot means the snapshot creation (and conversion from
RAW=>QCOW2) happens on the Hypervisor where the instance was created. After
the conversion the snapshot is uploaded to Glance and deleted from the
Hypervisor.

Questions are :-

1. compute_nodes['free_disk_gb'] is not in-sync with the actual free disk
capacity for that partition (as seen by df -ha) [0]  (see /home).

This is because the resource tracker is returning negative values for
free_disk_gb [1], and that is because the value of resources['local_gb_used']
is greater than resources['local_gb']. The value of
resources['local_gb_used'] should ideally be the local gigabytes (787G [0])
used by the Hypervisor, but in fact it is the local gigabytes allocated on
the Hypervisor (3525G [0]). Allocated is the sum of the used capacity on the
Hypervisor plus the space consumed by the instances spawned on that
Hypervisor (their size depends on which flavor each VM was spawned with).
Because of [2], the used space on the Hypervisor is discarded and only the
space consumed by the instances on the HV is taken into consideration.

Was there a specific reason to do so, specifically [2], i.e. resetting the
value of resources['local_gb_used']?
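The arithmetic behind the negative value can be sketched with the figures from [0]; the names below mirror the resource tracker fields, but this is an illustration, not the actual tracker code:

```python
# Figures from the df output in [0]: the partition is 787G, while the
# flavor root disks of all instances on the hypervisor add up to 3525G.
local_gb = 787
allocated_gb = 3525

# Because of [2], local_gb_used records *allocated* GB, not GB actually
# used, so it can exceed the partition size once disk is overcommitted.
local_gb_used = allocated_gb
free_disk_gb = local_gb - local_gb_used
print(free_disk_gb)  # -2738, the negative value the tracker reports
```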

2. Is seeing negative values for compute_nodes['free_disk_gb'] and
compute_nodes['disk_available_least'] a normal pattern? When can we expect
to see them?

3. Let's say in the future I plan to enable the disk filter; the scheduler
logic will then make sure not to pick this Hypervisor when it is reaching
its capacity (considering it might need enough space for snapshot creation
and, later, scratch space for the snapshot conversion from RAW => QCOW2).
Will that help so that the resource tracker does not return negative
values? Is there a recommended overcommit ratio for this scenario, where
you create/convert snapshots locally before uploading to Glance?

4. How will multiple snapshot requests for instances on the same
Hypervisor be handled? Until a request reaches the compute node, it has no
clear idea about the free capacity on the HV, which might leave instances
unusable. Will something of this sort [3] help? How do people using local
snapshots handle it right now?

-Vilobh

[0] http://paste.openstack.org/show/477926/
[1]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L576
[2]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L853
[3] https://review.openstack.org/#/c/208078/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Jay Lau
+1 to Georgy, we may need a framework on top of Mesos to enable Mesos to
manage long-running services with capabilities like scaling, HA, etc., and
Marathon is a good choice. Thanks.

On Tue, Nov 3, 2015 at 6:47 AM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi Steve,
>
> Thank you for the update. This is really interesting direction for Kolla.
> I agree with Jeff. It is interesting to see what other frameworks will be
> used. I suspect Marathon framework is under consideration as it adds most
> of the application centric functionality like HA\restarter, scaling and
> rolling-restarts\upgrades. Kubernetes might be also a good candidate for
> that.
>
> Thanks
> Gosha
>
> On Mon, Nov 2, 2015 at 2:00 PM, Jeff Peeler  wrote:
>
>> On Mon, Nov 2, 2015 at 12:02 PM, Steven Dake (stdake) 
>> wrote:
>> > Hey folks,
>> >
>> > We had an informal vote at the mid cycle from the core reviewers, and
>> it was
>> > a majority vote, so we went ahead and started the process of the
>> > introduction of mesos orchestration into Kolla.
>> >
>> > For background for our few core reviewers that couldn’t make it and the
>> > broader community, Angus Salkeld has committed himself and 3 other
>> Mirantis
>> > engineers full time to investigate if Mesos could be used as an
>> > orchestration engine in place of Ansible.  We are NOT dropping our
>> Ansible
>> > implementation in the short or long term.  Kolla will continue to lead
>> with
>> > Ansible.  At some point in Mitaka or the N cycle we may move the ansible
>> > bits to a repository called “kolla-ansible” and the kolla repository
>> would
>> > end up containing the containers only.
>> >
>> > The general consensus was that if folks wanted to add additional
>> > orchestration systems for Kolla, they were free to do so if they did the
>> > development and made a commitment to maintaining one core reviewer team
>> with
>> > broad expertise among the core reviewer team of how these various
>> systems
>> > work.
>> >
>> > Angus has agreed to the following
>> >
>> > A new team called “kolla-mesos-core” with 2 members.  One of the
>> members is
>> > Angus Salkeld, the other is selected by Angus Salkeld since this is a
>> cookie
>> > cutter empty repository.  This is typical of how new projects would
>> operate,
>> > but we don’t want a code dump and instead want an integrated core
>> team.  To
>> > prevent a situation which the current Ansible expertise shy away from
>> the
>> > Mesos implementation, the core reviewer team has committed to reviewing
>> the
>> > mesos code to get a feel for it.
>> > Over the next 6-8 weeks these two folks will strive to join the Kolla
>> core
>> > team by typical means 1) irc participation 2) code generation 3)
>> effective
>> > and quality reviews 4) mailing list participation
>> > Angus will create a technical specification which will we will roll-call
>> > voted and only accepted once a majority of core review team is satisfied
>> > with the solution.
>> > The kolla-mesos deliverable will be under Kolla governance and be
>> managed by
>> > the Kolla core reviewer team after the kolla-mesos-core team is
>> deprecated.
>> > If the experiment fails, kolla-mesos will be placed in the attic.
>> There is
>> > no specific window for the experiments, it is really up to Angus to
>> decide
>> > if the technique is viable down the road.
>> > For the purpose of voting, the kolla-mesos-core team won’t be permitted
>> to
>> > vote (on things like this or other roll-call votes in the community)
>> until
>> > they are “promoted” to the kolla-core reviewer team.
>> >
>> >
>> > The core reviewer team has agreed to the following
>> >
>> > Review patches in kolla-mesos repository
>> > Actively learn how the mesos orchestration system works in the context
>> of
>> > Kolla
>> > Actively support Angus’s effort in the existing Kolla code base as long
>> as
>> > it is not harmful to the Kolla code base
>> >
>> > We all believe this will lead to a better outcome than Mirantis
>> developing
>> > some code on their own and later dumping it into the Kolla governance or
>> > operating as a fork.
>> >
>> > I’d like to give the core reviewers another chance to vote since the
>> voting
>> > was semi-rushed.
>> >
>> > I am +1 given the above constraints.  I think this will help Kolla grow
>> and
>> > potentially provide a better (or arguably different) orchestration
>> system
>> > and is worth the investigation.  At no time will we put the existing
>> Kolla
>> > Ansible + Docker goodness into harms way, so I see no harm in an
>> independent
>> > repository especially if the core reviewer team strives to work as one
>> team
>> > (rather than two independent teams with the same code base).
>> >
>> > Abstaining is the same as voting as –1, so please vote one way or
>> another
>> > with a couple line blob about your thoughts on the idea.
>> >
>> > Note of the core reviewers there, we had 7 +1 votes (and we have a 9
>> > individual 

[openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-03 Thread Gabriel Bezerra

Hi,

The change in https://review.openstack.org/237122 touches a feature from 
ironic that has not been released in any tag yet.


At first, we on the team who wrote the patch thought that, as it 
has not been part of any release, we could make backwards-incompatible 
changes to that part of the code. As it turned out from discussing with 
the community, ironic commits to keeping the master branch backwards 
compatible, and a deprecation process is needed in that case.


That stated, the question at hand is: How long should this deprecation 
process last?


This spec specifies the deprecation policy we should follow: 
https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


As the excerpt below states, the minimum obsolescence period must be 
max(next_release, 3 months).


"""
Based on that data, an obsolescence date will be set. At the very 
minimum the feature (or API, or configuration option) should be marked 
deprecated (and still be supported) in the next stable release branch, 
and for at least three months linear time. For example, a feature 
deprecated in November 2015 should still appear in the Mitaka release 
and stable/mitaka stable branch and cannot be removed before the 
beginning of the N development cycle in April 2016. A feature deprecated 
in March 2016 should still appear in the Mitaka release and 
stable/mitaka stable branch, and cannot be removed before June 2016.

"""

This spec, however, only covers released and/or tagged code.

tl;dr:

How should we proceed regarding code/features/configs/APIs that have not 
even been tagged yet?


Isn't waiting for the next OpenStack release in this case too long? 
Otherwise, we are going to have features/configs/APIs/etc. that are 
deprecated from their very first tag/release.


How about sticking to min(next_release, 3 months)? Or next_tag? Or 3 
months? max(next_tag, 3 months)?



Best regards,
Gabriel Bezerra.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread John Griffith
On Tue, Nov 3, 2015 at 3:20 PM, Boris Pavlovic  wrote:

> Hi stackers,
>
> Usually, projects like Heat, Tempest, Rally, Scalar, and other tools
> that work with OpenStack handle resources (e.g. VMs, Volumes,
> Images, ...) in the following way:
>
> >>> resource = api.resource_do_some_stuff()
> >>> while api.resource_get(resource["uuid"]) != expected_status:
> >>>     sleep(a_bit)
>
> For each async operation they poll, calling resource_get() many times,
> which creates significant load on the API and DB layers due to the
> nature of this request. (Usually, getting full information about a
> resource produces SQL requests that contain multiple JOINs; e.g. for a
> Nova VM it's 6 joins.)
>
> What if we add a new API method that just returns the resource status
> by UUID? Or even just extend the GET request with a new argument that
> returns only the status?
>
> Thoughts?
>
>
> Best regards,
> Boris Pavlovic
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hey Boris,

As I asked in IRC, I'm kinda curious what the difference is here in terms
of API and DB calls.  I very well might be missing an idea here, but
currently we do a get by ID in that loop that you mention, the only
difference I see in what you're suggesting is a reduced payload maybe?  A
response that only includes the status?

I may be missing an important idea here, but it seems to me that you would
still have the same number of API calls and DB request, just possibly a
slightly smaller payload.  Let me know if I'm missing the idea here.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [networking-powervm] Please create networking-powervm on PyPI

2015-11-03 Thread Kyle Mestery
I'm reaching out to whoever owns the networking-powervm project [1]. I have
a review out [2] which updates the PyPI publishing jobs so we can push
releases for networking-powervm. However, in looking at PyPI, I don't see a
networking-powervm project, but instead a neutron-powervm project. Is there
a reason for the PyPI project to have a different name? I believe this will
not allow us to push releases, as the name of the projects need to match.
Further, the project creation guide recommends naming them the same [4].

Can someone from the PowerVM team look at registering networking-powervm on
PyPI and correcting this please?

Thanks!
Kyle

[1] https://launchpad.net/neutron-powervm
[2] https://review.openstack.org/#/c/233466/
[3] https://pypi.python.org/pypi/neutron-powervm/0.1.0
[4] http://docs.openstack.org/infra/manual/creators.html#pypi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread John Griffith
On Tue, Nov 3, 2015 at 4:57 PM, michael mccune  wrote:

> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
>
>> What if we add a new API method that just returns the resource status
>> by UUID? Or even just extend the GET request with a new argument that
>> returns only the status?
>>
>> Thoughts?
>>
>
> not sure i understand the resource status by UUID, could you explain that
> a little more.
>
> as for changing the get request to return only the status, can't you have
> a filter on the get url that instructs it to return only the status?
>

Yes, we already have that capability and it's used in a number of places.
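Conceptually, such a filter is just key selection on the serialized resource before it goes over the wire — a minimal sketch of the idea, not the actual Nova/Cinder implementation:

```python
def apply_fields_filter(resource, fields=None):
    """Mimic a ?fields=status query parameter: return only requested keys."""
    if not fields:
        return resource
    return {key: value for key, value in resource.items() if key in fields}

# Hypothetical serialized server resource.
server = {'id': 'foo', 'status': 'ACTIVE', 'flavor': 'm1.small', 'addresses': {}}
print(apply_fields_filter(server, fields=['status']))  # {'status': 'ACTIVE'}
```

The DB query cost is unchanged unless the filter is also pushed down into the SELECT, which is Boris's point about JOINs.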


>
> mike
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] updating upper-constraints for stable branches

2015-11-03 Thread Robert Collins
On 4 November 2015 at 09:28, Doug Hellmann  wrote:
> lifeless had some proposals about managing stable requirements and
> constraints that he presented during the summit. We should get those
> written down before we start approving any changes.

The tl;dr is that we'd make a copy of upper-constraints.txt to
release-constraints.txt and then test both cases in the gate.

We can always pull up an older version of upper-constraints.txt in
liberty's history, so I see no particular reason not to approve manual
point release changes to upper-constraints.txt in liberty.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][tripleo] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-03 Thread Zane Bitter

On 02/11/15 18:33, Steven Dake (stdake) wrote:


Blame the core team :)  I suspect you will end up retrying a lot of
patterns we tried and failed with in Kubernetes.  Kubernetes was eventually
found to be non-viable through the delivery of this 2-week project:

https://github.com/sdake/compute-upgrade

Documented in this blog:

http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/


I don't recognise half of the names of tools y'all have been talking 
about here, but I can't help wondering whether the assumption that 
exactly one of these tools has to do all of the things has gone 
unchallenged.


I think we all agree that using something _like_ Kubernetes would be 
extremely interesting for controller services, where you have a bunch of 
heterogeneous services with scheduling constraints (HA) that may need 
to be scaled out at different rates, and so on.


IMHO it's not interesting at all for compute nodes though, where the 
scheduling is not only fixed but well-defined in advance. (It's... one 
compute node per compute node. Duh.)


e.g. I could easily imagine a future containerised TripleO where the 
controller services were deployed with Magnum but the compute nodes were 
configured directly with Heat software deployments.


In such a scenario the fact that you can't use Kubernetes for compute 
nodes diminishes its value not at all. So while I'm guessing net=host is 
still a blocker (for Neutron services on the controller - although 
another message in this thread suggests that K8s now supports it 
anyway), I don't think pid=host needs to be since AFAICT it appears to 
be required only for libvirt.


Something to think about...

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Upgrading elasticsearch cluster Monday 11/9

2015-11-03 Thread Clark Boylan
Hello,

The infrastructure team will be upgrading the Elasticsearch cluster used
by Logstash and elastic-recheck on Monday November 9th starting at about
1700UTC.

Because we are upgrading from 0.90.9 to 1.7.3 this will require a full
cluster restart. During the cluster restart you will not be able to make
searches against the cluster. We will be queuing new log indexing jobs
in Gearman so indexing will resume after the cluster restart.

This upgrade gives us a few new features like aggregations, rolling
upgrades within a major release, and should improve performance of the
cluster. It will also enable us to update the versions of Logstash and
Kibana (again) that we use, modernizing the whole setup. Also,
Elasticsearch 2.0 was just released, so we may repeat this process as soon
as the other tooling and libs support the newer API version.

Sorry for the disruption but this should lead to even shinier
Elasticsearchy things during the Mitaka cycle. Do let us know if you
have questions, concerns, or crazy Elasticsearch upgrade stories to
share.

Thank you,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-03 Thread Matt Riedemann



On 11/3/2015 11:57 AM, Michał Dubiel wrote:

Hi all,

We have a simple patch that allows using OpenContrail's vrouter with
vhostuser vif types (currently only OVS has support for that). We would
like to contribute it.

However, we would like this change to land in the next maintenance
release of Kilo. Is it possible? What should be the process for this?
Should we prepare a blueprint and a review request for the 'master' branch
first? It is a small, self-contained change, so I believe it does not need
a nova-spec.

Regards,
Michal


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The short answer is 'no' to backporting features to stable branches.

As the other reply said, feature changes are targeted to master.

The full stable branch policy is here:

https://wiki.openstack.org/wiki/StableBranch

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Learning to Debug the Gate

2015-11-03 Thread Anita Kuno
On 11/02/2015 12:39 PM, Anita Kuno wrote:
> On 10/29/2015 10:42 PM, Anita Kuno wrote:
>> On 10/29/2015 08:27 AM, Anita Kuno wrote:
>>> On 10/28/2015 12:14 AM, Matt Riedemann wrote:


 On 10/27/2015 4:08 AM, Anita Kuno wrote:
> Learning how to debug the gate was identified as a theme at the
> "Establish Key Themes for the Mitaka Cycle" cross-project session:
> https://etherpad.openstack.org/p/mitaka-crossproject-themes
>
> I agreed to take on this item and facilitate the process.
>
> Part one of the conversation includes referencing this video created by
> Sean Dague and Dan Smith:
> https://www.youtube.com/watch?v=fowBDdLGBlU
>
> Please consume this as you are able.
>
> Other suggestions for how to build on this resource were mentioned and
> will be coming in the future but this was an easy, actionable first step.
>
> Thank you,
> Anita.
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tales-from-the-gate-how-debugging-the-gate-helps-your-enterprise


>>>
>>> The source for the definition of "the gate":
>>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n34
>>>
>>> Thanks for following along,
>>> Anita.
>>>
>>
>> This is the status page showing the status of our running jobs,
>> including patches in the gate pipeline: http://status.openstack.org/zuul/
>>
>> Thank you,
>> Anita.
>>
> 
> This is a simulation of how the gate tests patches:
> http://docs.openstack.org/infra/publications/zuul/#%2818%29
> 
> Click in the browser window to advance the simulation.
> 
> Thank you,
> Anita.
> 

Here is a presentation that uses the slide deck linked above, I
recommend watching: https://www.youtube.com/watch?v=WDoSCGPiFDQ

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]L2/L3 switching between SR-IOV ports

2015-11-03 Thread daya kamath
All,

I would like to check if there is a Neutron driver or agent supporting
switching between SR-IOV ports on the same server. The wiki mentions
802.1Qbg for switching in the server NIC (VEB), or hairpinning the packets
in the next-hop physical TOR (VEPA). I wanted to check if this is a
supported test case in Neutron, and whether there is any network equipment
that supports this today in an OpenStack context. Is there any thought or
blueprint towards orchestrating this functionality via Neutron? Perhaps
the L2-GW plugin could be the right fit?

thanks!
daya kamath


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Boris Pavlovic
John,


The main point here is to reduce the amount of data that we request from
the DB, that is processed by the API services, and that is sent over the
network, and to make the SQL requests simpler (remove JOINs from SELECT).

So if you fetch 10 bytes instead of 1000 bytes, you process 100 times less
data, and it will scale 100 times better and work overall 100 times faster.

On the other side, polling may easily cause 100 API requests/second and
create significant load on the cloud.

Clint,

Please do not forget the fact that we are removing JOINs from the SQL
requests.

Here is how the SQL request that gets VM info looks:
http://paste.openstack.org/show/477934/ (it has 6 joins)

This is how it looks for a Glance image:
http://paste.openstack.org/show/477933/ (it has 2 joins)

So the performance/scale impact will be higher.

Best regards,
Boris Pavlovic


On Wed, Nov 4, 2015 at 4:18 PM, Clint Byrum  wrote:

> Excerpts from Boris Pavlovic's message of 2015-11-03 17:32:43 -0800:
> > Clint, Morgan,
> >
> > I totally agree that the pub/sub model is a better approach.
> >
> > However, there are 2 great things about polling:
> > 1) it's simpler to use than pub/sub (especially in shell)
>
> I envision something like this:
>
>
> while changes=$(openstack compute server-events --run react-to-status
> --fields status id1 id2 id3 id4) ; do
>   for id_and_status in $changes ; do
>     id=${id_and_status%%:*}
>     status=${id_and_status##*:}
>   done
> done
>
> Not exactly "hard"
>
> > 2) it has a really simple implementation & we can get this into
> > OpenStack in a few days/weeks
> >
>
> It doesn't actually solve a ton of things though. Even if we optimize
> it down to the fewest operations, it is still ultimately a DB query and
> extra churn in the API service.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-03 Thread Tang Chen

Hi all,

Just FYI, the WIP patch-set is now available here:

https://review.openstack.org/241476
https://review.openstack.org/241477
https://review.openstack.org/241478
https://review.openstack.org/241479
https://review.openstack.org/241480

Thanks.

On 10/14/2015 10:05 AM, Tang Chen wrote:

Hi, all,

Please help to review this BP.

https://blueprints.launchpad.net/nova/+spec/live-migration-state-machine


Currently, the migration_status field in the Migration object indicates
the status of the migration process. But in the current code, it is
represented by pure strings, like 'migrating', 'finished', and so on.

The strings can be confusing to different developers; e.g. there are 3
statuses representing a migration process that ended successfully
('finished', 'completed' and 'done') and 2 for a migration in progress
('running' and 'migrating').

So I think we should use constants or an enum for these statuses.


Furthermore, Nikola has proposed to create a state machine for the
statuses, which was part of another, abandoned BP. This is also the work
I'd like to carry on. Please refer to:
https://review.openstack.org/#/c/197668/
https://review.openstack.org/#/c/197669/




Another proposal: introduce a new member named "state" into Migration.
Use a state machine to handle Migration.state, and leave the
migration_status field as a descriptive, human-readable free-form string.


So what do you think?

Thanks.
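The constants/enum idea could look like the following minimal sketch; the state names and legacy aliases here are illustrative, not the actual Nova set:

```python
import enum

class MigrationStatus(enum.Enum):
    # One canonical name per state, replacing the synonymous strings.
    MIGRATING = 'migrating'   # also covers the old 'running'
    COMPLETED = 'completed'   # also covers 'finished' and 'done'
    ERROR = 'error'

# Map the legacy free-form strings onto the canonical states.
LEGACY_ALIASES = {
    'running': MigrationStatus.MIGRATING,
    'finished': MigrationStatus.COMPLETED,
    'done': MigrationStatus.COMPLETED,
}

def normalize(raw):
    """Turn any of the historical strings into one canonical status."""
    try:
        return MigrationStatus(raw)
    except ValueError:
        return LEGACY_ALIASES[raw]

print(normalize('done'))  # MigrationStatus.COMPLETED
```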


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-03 Thread Clint Byrum
Excerpts from Boris Pavlovic's message of 2015-11-03 17:32:43 -0800:
> Clint, Morgan,
> 
> I totally agree that the pub/sub model is a better approach.
> 
> However, there are 2 great things about polling:
> 1) it's simpler to use than pub/sub (especially in shell)

I envision something like this:


while changes=$(openstack compute server-events --run react-to-status \
    --fields status id1 id2 id3 id4) ; do
  for id_and_status in $changes ; do
    id=${id_and_status%%:*}
    status=${id_and_status##*:}
  done
done

Not exactly "hard"

> 2) it has a really simple implementation & we can get this into OpenStack
> in a few days/weeks
> 

It doesn't actually solve a ton of things though. Even if we optimize
it down to the fewest operations, it is still ultimately a DB query and
extra churn in the API service.
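The hybrid Clint describes — subscribe for pushed events and poll only after a long timeout — can be sketched in-process with a queue; this is illustrative only, not a real notification transport:

```python
import queue
import threading

events = queue.Queue()  # stand-in for a pub/sub subscription

def wait_for_status(target, poll_fn, timeout=0.2):
    """Block on pushed events; fall back to one poll per timeout window."""
    while True:
        try:
            status = events.get(timeout=timeout)  # pushed notification
        except queue.Empty:
            status = poll_fn()  # rare fallback poll after the long timeout
        if status == target:
            return status

# Simulate the cloud pushing a status change shortly after we subscribe.
threading.Timer(0.05, events.put, args=('ACTIVE',)).start()
print(wait_for_status('ACTIVE', poll_fn=lambda: 'BUILD'))  # ACTIVE
```

The API service does work only when a state actually changes, instead of per poll request.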

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

