[openstack-dev] [glance] Proposal of a virtual mid-cycle instead of the co-located

2016-05-29 Thread Nikhil Komawar
Hello,


I would like to propose a two-day Glance virtual mid-cycle, with a 4-hour
session each day, on Wednesday June 15th & Thursday June 16th, from 1400 UTC
onward. This replaces the Glance mid-cycle meetup that we cancelled. Some
people have already expressed items they would like to discuss, and I would
like us to spend a couple of hours on the glance-specs so that we can apply
the spec soft freeze [1] in a better capacity.


We can try to accommodate topics according to time zone; for example, topics
proposed by folks in EMEA could go earlier in the day, while those from folks
in the PDT time zone could go in the later part of the event.


Please vote with +1, 0, -1. If the time/date doesn't work, please
propose 2-3 additional slots.


We can use Hangouts, BlueJeans, or an IBM conferencing tool as required;
this will be finalized closer to the event.


I will set up an agenda etherpad once we decide on the date/time.


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096175.html

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Steve Baker

On 29/05/16 08:16, Hongbin Lu wrote:



-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com]
Sent: May-27-16 6:31 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
Gap analysis: Heat as a k8s orchestrator

I spent a bit of time exploring the idea of using Heat as an external
orchestration layer on top of Kubernetes - specifically in the case of
TripleO controller nodes but I think it could be more generally useful
too - but eventually came to the conclusion it doesn't work yet, and
probably won't for a while. Nevertheless, I think it's helpful to
document a bit to help other people avoid going down the same path, and
also to help us focus on working toward the point where it _is_
possible, since I think there are other contexts where it would be
useful too.

We tend to refer to Kubernetes as a "Container Orchestration Engine"
but it does not actually do any orchestration, unless you count just
starting everything at roughly the same time as 'orchestration'. Which
I wouldn't. You generally handle any orchestration requirements between
services within the containers themselves, possibly using external
services like etcd to co-ordinate. (The Kubernetes project refer to
this as "choreography", and explicitly disclaim any attempt at
orchestration.)

What Kubernetes *does* do is more like an actively-managed version of
Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap:
SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map
of resource names to server UUIDs and it creates a SoftwareDeployment
for each server. You have to generate the list of servers somehow to
give it (the easiest way is to obtain it from the output of another
ResourceGroup containing the servers). If e.g. a server goes down you
have to detect that externally, and trigger a Heat update that removes
it from the templates, redeploys a replacement server, and regenerates
the server list before a replacement SoftwareDeployment is created. In
contrast, Kubernetes is running on a cluster of servers, can use rules
to determine where to run containers, and can very quickly redeploy
without external intervention in response to a server or container
falling over. (It also does rolling updates, which Heat can also do
albeit in a somewhat hacky way when it comes to SoftwareDeployments -
which we're planning to fix.)

So this seems like an opportunity: if the dependencies between services
could be encoded in Heat templates rather than baked into the
containers then we could use Heat as the orchestration layer following
the dependency-based style I outlined in [1]. (TripleO is already
moving in this direction with the way that composable-roles uses
SoftwareDeploymentGroups.) One caveat is that fully using this style
likely rules out for all practical purposes the current Pacemaker-based
HA solution. We'd need to move to a lighter-weight HA solution, but I
know that TripleO is considering that anyway.

What's more though, assuming this could be made to work for a
Kubernetes cluster, a couple of remappings in the Heat environment file
should get you an otherwise-equivalent single-node non-HA deployment
basically for free. That's particularly exciting to me because there
are definitely deployments of TripleO that need HA clustering and
deployments that don't and which wouldn't want to pay the complexity
cost of running Kubernetes when they don't make any real use of it.

So you'd have a Heat resource type for the controller cluster that maps
to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay,
and a bunch of software deployments that map to either a
OS::Heat::SoftwareDeployment that calls (I assume) docker-compose
directly or a Kubernetes Pod resource to be named later.

The first obstacle is that we'd need that Kubernetes Pod resource in
Heat. Currently there is no such resource type, and the OpenStack API
that would be expected to provide that API (Magnum's /container
endpoint) is being deprecated, so that's not a long-term solution.[2]
Some folks from the Magnum community may or may not be working on a
separate project (which may or may not be called Higgins) to do that.
It'd be some time away though.

An alternative, though not a good one, would be to create a Kubernetes
resource type in Heat that has the credentials passed in somehow. I'm
very against that though. Heat is just not good at handling credentials
other than Keystone ones. We haven't ever created a resource type like
this before, except for the Docker one in /contrib that serves as a
prime example of what *not* to do. And if it doesn't make sense to wrap
an OpenStack API around this then IMO it isn't going to make any more
sense to wrap a Heat resource around it.

There are ways to alleviate the credential handling issue. First, Kubernetes 
supports Keystone authentication [1]. Magnum has a BP [2] to turn on this 
feature. In addition, there is a Kubernetes 

Re: [openstack-dev] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-05-29 Thread Nikhil Komawar
Hi Ops team(s),


Can we get some RSVPs on the etherpad to help us plan and coordinate the
event? I'd like to cancel this by June 4th if there isn't significant
interest (i.e., fewer than 3 operator friends).


https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync




On 5/26/16 1:30 AM, Nikhil Komawar wrote:
> Thanks Sam. We purposefully chose that time to accommodate some of our
> community members from the Pacific. I'm assuming it's just your case
> that's not working out for that time? So, hopefully other Australian/NZ
> friends can join.
>
>
> On 5/26/16 12:59 AM, Sam Morrison wrote:
>> I’m hoping some people from the Large Deployment Team can come along. It’s 
>> not a good time for me in Australia but hoping someone else can join in.
>>
>> Sam
>>
>>
>>> On 26 May 2016, at 2:16 AM, Nikhil Komawar  wrote:
>>>
>>> Hello,
>>>
>>>
>>> Firstly, I would like to thank Fei Long for bringing up a few operator
>>> centric issues to the Glance team. After chatting with him on IRC, we
>>> realized that there may be more operators who would want to contribute
>>> to the discussions to help us take some informed decisions.
>>>
>>>
>>> So, I would like to call for a 2 hour sync for the Glance team along
>>> with interested operators on Thursday June 9th, 2016 at 2000UTC. 
>>>
>>>
>>> If you are interested in participating please RSVP here [1], and
>>> participate in the poll for the tool you'd prefer. I've also added a
>>> section for Topics and provided a template to document the issues clearly.
>>>
>>>
>>> Please be mindful of everyone's time and if you are proposing issue(s)
>>> to be discussed, come prepared with well documented & referenced topic(s).
>>>
>>>
>>> If you've feedback that you are not sure if appropriate for the
>>> etherpad, you can reach me on irc (nick: nikhil).
>>>
>>>
>>> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
>>>
>>> -- 
>>>
>>> Thanks,
>>> Nikhil Komawar
>>> Newton PTL for OpenStack Glance
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-29 Thread Alex Xu
2016-05-20 20:05 GMT+08:00 Sean Dague :

> There are a number of changes up for spec reviews that add parameters to
> LIST interfaces in Newton:
>
> * keypairs-pagination (MERGED) -
>
> https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
> * os-instances-actions - https://review.openstack.org/#/c/240401/
> * hypervisors - https://review.openstack.org/#/c/240401/
> * os-migrations - https://review.openstack.org/#/c/239869/
>
> I think that limit / marker is always a legit thing to add, and I almost
> wish we just had a single spec which is "add limit / marker to the
> following APIs in Newton"
>
>
Are you looking for code sharing or a single microversion? For code sharing,
it sounds OK if people coordinate the work; we probably need a common
pagination-aware model_query function for all of those APIs. For a single
microversion, I'm a little hesitant: should we keep each change small, or
enable everything in one microversion? But if we have some base code for
pagination support, we could probably make pagination the default behavior
for every list method.
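
For what it's worth, here is a minimal sketch of the kind of shared helper I
have in mind (the names are illustrative only, not existing Nova code; it
assumes a SQLAlchemy model whose integer primary key id is unique and
indexed):

    def paginate(query, model, limit=None, marker=None):
        """Apply limit/marker pagination in primary-key order.

        `marker` is the id of the last item on the previous page;
        ordering by the primary key keeps the marker stable between
        requests.
        """
        query = query.order_by(model.id)
        if marker is not None:
            # Return only rows strictly after the marker row.
            query = query.filter(model.id > marker)
        if limit is not None:
            query = query.limit(limit)
        return query

    # e.g. paginate(model_query(context, models.KeyPair),
    #               models.KeyPair, limit=100, marker=last_id)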


> Most of these came in with sort_keys as well. We currently don't have
> schema enforcement on sort_keys, so I don't think we should add any more
> instances of it until we scrub it. Right now sort_keys is mostly a way
> to generate a lot of database load because users can sort by things not
> indexed in your DB. We really should close that issue in the future, but
> I don't think we should make it any worse. I have -1s on
> os-instance-actions and hypervisors for that reason.
>
> os-instances-actions and os-migrations are time based, so they are
> proposing a changes-since. That seems logical and fine. Date seems like
> the natural sort order for those anyway, so it's "almost" limit/marker,
> except from end not the beginning. I think that in general changes-since
> on any resource which is time based should be fine, as long as that
> resource is going to natural sort by the time field in question.
>
> So... I almost feel like this should just be soft policy at this point:
>
> limit / marker - always ok
> sort_* - no more until we have a way to scrub sort (and we fix weird
> sort key issues we have)
> changes-since - ok on any resource that will natural sort with the
> updated time
>
>
> That should make proposing these kinds of additions easier for folks,
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]consistency and disaster recovery

2016-05-29 Thread joehuang
Hello, 

This spec [1] proposes exposing a quiesce/unquiesce API. It had been approved
in Mitaka, but the code was not merged in time.

The major motivation for this spec is to enable application-level consistency
snapshots, so that the backup of the snapshot in the remote site can be
recovered correctly in case of disaster recovery. Currently there is only
single-VM-level consistency snapshotting (through creating an image from a
VM), and that is not enough.

First, disaster recovery is mainly an action at the infrastructure level in
case of catastrophic failures (flood, earthquake, propagating software fault):
the cloud service provider recovers the infrastructure and the applications
without help from each application owner. You cannot just recover OpenStack
and then send a notification to all application owners asking them to restore
their applications on their own. As the cloud service provider, they should be
responsible for both infrastructure and application recovery in case of
disaster.

Second, this requirement is not about making OpenStack bend over backwards for
NFV. Although the requirement was first raised by OPNFV, application-level
consistency snapshots are a general requirement. For example, take OpenStack
itself as the application running in the cloud: we can deploy a different DB
for each service, i.e. Nova has its own MySQL server (nova-db-VM) and Neutron
has its own MySQL server (neutron-db-VM). In fact, I have seen production
deployments split the Nova/Cinder/Neutron databases across different DB
servers for scalability. We know that there is interaction between Nova and
Neutron when booting a new VM; during the boot, some data will sit in the
memory cache of nova-db-VM/neutron-db-VM. If we just create snapshots of the
volumes of nova-db-VM/neutron-db-VM in Cinder, data which has not yet been
flushed to disk will not be in the volume snapshots. We can't be sure when the
data in the memory cache will be flushed, so there is a random chance that the
data in the snapshot is not consistent with what actually happened inside the
nova-db-VM/neutron-db-VM virtual machines. In this case, Nova/Neutron may boot
successfully in the disaster recovery site, but some port information may be
corrupted because it was not flushed into neutron-db-VM when the snapshot was
taken, and in severe cases the VM may not even be able to recover to a running
state. Although there is a project called Dragon [2], Dragon cannot guarantee
the consistency of the application snapshot through the OpenStack API either.
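
To make the gap concrete, here is a rough sketch of the manual workaround
available today (SSH access into the DB guest and the python-cinderclient
credentials shown are assumptions for illustration; the quiesce/unquiesce API
proposed in [1] would trigger the same filesystem freeze through the QEMU
guest agent without needing guest-level access):

    import subprocess
    from cinderclient import client as cinder_client

    def consistent_snapshot(guest_ip, mount_point, volume_id, cinder):
        # Freeze the filesystem so cached writes are flushed to the volume.
        subprocess.check_call(
            ['ssh', 'root@%s' % guest_ip, 'fsfreeze', '-f', mount_point])
        try:
            # The Cinder snapshot now captures a consistent on-disk state.
            return cinder.volume_snapshots.create(
                volume_id, force=True, name='dr-consistent-snap')
        finally:
            # Always thaw the filesystem, even if the snapshot call fails.
            subprocess.check_call(
                ['ssh', 'root@%s' % guest_ip, 'fsfreeze', '-u', mount_point])

    cinder = cinder_client.Client('2', 'admin', 'password', 'admin',
                                  'http://controller:5000/v2.0')
    # consistent_snapshot('192.168.0.5', '/var/lib/mysql', vol_id, cinder)

Doing this per guest, by hand, with guest credentials, is exactly what the
proposed API is meant to avoid.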

Third, for those applications which can decide which data and checkpoints
should be replicated to the disaster recovery site, this is the third option
discussed and described in our analysis:
https://git.opnfv.org/cgit/multisite/tree/docs/requirements/multisite-vnf-gr-requirement.rst.
Unfortunately in Cinder, after volume replication v2.1 was developed,
tenant-granularity volume replication is still being discussed and is still
not available at the single-volume level. And, as mentioned in the first
point, both the application level and the infrastructure level are needed:
you can't just expect each application owner to do recovery after disaster
recovery of a site's OpenStack. Applications can usually deal with the data
they generate, but protecting configuration changes is out of scope for the
application. There are several options for disaster recovery, but that
doesn't mean one option fits all.

There are several -1s on this re-proposed spec, which had been approved in
Mitaka, so this explanation is sent to the mailing list for discussion. If
someone can propose another way to guarantee application-level snapshots for
disaster recovery purposes, that is also welcome.

[1] Re-Propose Expose quiesce/unquiesce API:  
https://review.openstack.org/#/c/295595/
[2] Dragon: 
https://github.com/os-cloud-storage/openstack-workload-disaster-recovery

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] NOTICE: Glance Soft Spec freeze R-16 approaching

2016-05-29 Thread Nikhil Komawar
Hi all,


As stated in my email [1], we will be having the soft spec freeze for
glance-specs on June 17th R-16 [2].


As a part of this freeze we will be evaluating the already proposed
specs for their likelihood of getting done in Newton, based on the
size/complexity/conflict-with-priorities/comprehensiveness of the
proposal, author follow-up, the severity of comments, etc. We will
then come up with a list of specs that are likely to make it into Newton
and keep our core reviewers' focus on those. We will then ask other
specs to be re-proposed against Ocata or later.


This also means that we will not allow new specs to be proposed against
Newton.


This freeze does not apply to lite-specs.


Some of the important factors that will go into this evaluation:

* Core reviewer's bandwidth

* Priorities and focus/interest of those core reviewers involved therein

* Interest shown by the cores on your spec

* Other dependencies involving your proposal -- for example, if you are
proposing adding a store driver to glance_store, is there a maintainer
ready to sign up, does another spec or lite-spec overlap or conflict
with yours, etc.

* Other important factors raised by cores on the spec or during R-16



I urge all authors/committers to pay close attention to this deadline,
as missing the soft freeze will severely affect your plans to propose
anything against Newton. If you have
questions or are looking for more help, you are welcome to join us
during one of our glance weekly meetings [3] or reach out to one or more
of the current glance-cores [4].


Please DO NOT ask for reviews on ML or during the meetings, all that
space and time is reserved for project discussions. You are welcome to
use normal media of communication like irc pings, direct emails, etc.
for review requests but not the public channels.


As always, feel free to reach out for any questions or concerns.


[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094780.html

[2] http://releases.openstack.org/newton/schedule.html

[3] http://eavesdrop.openstack.org/#Glance_Team_Meeting

[4] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094221.html


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Focus for week R-18 May 30-June 03

2016-05-29 Thread Nikhil Komawar
Hi team,


The focus for week R-18 is as follows:


* Mon, part of Tues: Reviews that look good enough to be merged and
suitable for the newton-1 release, any reviews that help fix important
bugs, or those that may need further testing later in the release.
Please keep an eye on the gate to ensure things are in place. I plan to
propose a review for the newton-1 release tag (a bit early) on Tuesday
May 31st. Over the last few days I have pushed quite a few reviews that
looked clear enough to be merged; now we need to merge any remaining
reviews that people are motivated to get ready for newton-1.

* Rest of Tues, Wed: Reviews that help us get a good store and client
release. As discussed in the meeting [1], I plan to 'propose' a store
and a client release tag later in the week (possibly late Wednesday or
on Thursday). I want the Glance team to be ready from the release
perspective; the actual release will happen whenever the release team
gets a chance, and given R-18 is a general release week, that looks
less likely to happen then.

* Fri: Reviews on glance-specs. See if the author has followed up on
your comments, and see which reviews warrant an early indication to the
author that the spec is not likely to be accepted for Newton if we do
not deem it fit during our discussions at the spec soft freeze of June
17, R-16 [2].


Please be mindful that Monday May 30th is a federal holiday [3] in the US so 
you may see folks (across OpenStack) OOO. However, I will be available on 
Monday.


Reference for the week numbers:
http://releases.openstack.org/newton/schedule.html


(unlike some of my emails meant to be notices, this email is okay for 
discussion)


As always, feel free to reach out for any questions or concerns.


[1] 
http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-05-26-14.03.log.html#l-88
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094780.html
[3] https://en.wikipedia.org/wiki/Memorial_Day

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [smaug] gate fullstack is shown as NOT_REGISTERED

2016-05-29 Thread xiangxinyong
Hello Jeremy, 


Thanks for your great work.
The gate-smaug-dsvm-fullstack-nv is working now.


Best Regards,
  xiangxinyong


On Sat, May 28, 2016 07:56 PM, Jeremy Stanley wrote:
> On 2016-05-28 11:11:27 +0800 (+0800), xiangxinyong wrote:
> [...]
> gate-smaug-dsvm-fullstack-nv is shown as NOT_REGISTERED.
> [...]
> 
> There was an error in https://review.openstack.org/317566 which got
> overlooked during review. I have submitted
> https://review.openstack.org/322431 to correct it for you.
> -- 
> Jeremy Stanley
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Hongbin Lu


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: May-29-16 3:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev]
> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a
> k8s orchestrator
> 
> Quick question below.
> 
> On 5/28/16, 1:16 PM, "Hongbin Lu"  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Zane Bitter [mailto:zbit...@redhat.com]
> >> Sent: May-27-16 6:31 PM
> >> To: OpenStack Development Mailing List
> >> Subject: [openstack-dev]
> >> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
> >> Gap analysis: Heat as a k8s orchestrator
> >>
> >> I spent a bit of time exploring the idea of using Heat as an
> external
> >> orchestration layer on top of Kubernetes - specifically in the case
> >> of TripleO controller nodes but I think it could be more generally
> >> useful too - but eventually came to the conclusion it doesn't work
> >> yet, and probably won't for a while. Nevertheless, I think it's
> >> helpful to document a bit to help other people avoid going down the
> >> same path, and also to help us focus on working toward the point
> >> where it _is_ possible, since I think there are other contexts where
> >> it would be useful too.
> >>
> >> We tend to refer to Kubernetes as a "Container Orchestration Engine"
> >> but it does not actually do any orchestration, unless you count just
> >> starting everything at roughly the same time as 'orchestration'.
> >> Which I wouldn't. You generally handle any orchestration
> requirements
> >> between services within the containers themselves, possibly using
> >> external services like etcd to co-ordinate. (The Kubernetes project
> >> refer to this as "choreography", and explicitly disclaim any attempt
> >> at
> >> orchestration.)
> >>
> >> What Kubernetes *does* do is more like an actively-managed version
> of
> >> Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief
> recap:
> >> SoftwareDeploymentGroup is a type of ResourceGroup; you give it a
> map
> >> of resource names to server UUIDs and it creates a
> SoftwareDeployment
> >> for each server. You have to generate the list of servers somehow to
> >> give it (the easiest way is to obtain it from the output of another
> >> ResourceGroup containing the servers). If e.g. a server goes down
> you
> >> have to detect that externally, and trigger a Heat update that
> >> removes it from the templates, redeploys a replacement server, and
> >> regenerates the server list before a replacement SoftwareDeployment
> >> is created. In constrast, Kubernetes is running on a cluster of
> >> servers, can use rules to determine where to run containers, and can
> >> very quickly redeploy without external intervention in response to a
> >> server or container falling over. (It also does rolling updates,
> >> which Heat can also do albeit in a somewhat hacky way when it comes
> >> to SoftwareDeployments - which we're planning to fix.)
> >>
> >> So this seems like an opportunity: if the dependencies between
> >> services could be encoded in Heat templates rather than baked into
> >> the containers then we could use Heat as the orchestration layer
> >> following the dependency-based style I outlined in [1]. (TripleO is
> >> already moving in this direction with the way that composable-roles
> >> uses
> >> SoftwareDeploymentGroups.) One caveat is that fully using this style
> >> likely rules out for all practical purposes the current
> >> Pacemaker-based HA solution. We'd need to move to a lighter-weight
> HA
> >> solution, but I know that TripleO is considering that anyway.
> >>
> >> What's more though, assuming this could be made to work for a
> >> Kubernetes cluster, a couple of remappings in the Heat environment
> >> file should get you an otherwise-equivalent single-node non-HA
> >> deployment basically for free. That's particularly exciting to me
> >> because there are definitely deployments of TripleO that need HA
> >> clustering and deployments that don't and which wouldn't want to pay
> >> the complexity cost of running Kubernetes when they don't make any
> real use of it.
> >>
> >> So you'd have a Heat resource type for the controller cluster that
> >> maps to either an OS::Nova::Server or (the equivalent of) an
> >> OS::Magnum::Bay, and a bunch of software deployments that map to
> >> either a OS::Heat::SoftwareDeployment that calls (I assume)
> >> docker-compose directly or a Kubernetes Pod resource to be named
> later.
> >>
> >> The first obstacle is that we'd need that Kubernetes Pod resource in
> >> Heat. Currently there is no such resource type, and the OpenStack
> API
> >> that would be expected to provide that API (Magnum's /container
> >> endpoint) is being deprecated, so that's not a long-term solution.[2]
> >> Some folks from the Magnum community may or may not be working on a
> >> separate project (which may or may not be called Higgins) to do that.
> >> It'd be some time a

[openstack-dev] [nova] [placement] aggregates associated with multiple resource providers

2016-05-29 Thread Chris Dent


I'm currently doing some thinking on step 4 ("Modify resource tracker
to pull information on aggregates the compute node is associated with
and the resource pools available for those aggregates.") of the
work items for the generic resource pools spec[1] and I've run into
a brain teaser that I need some help working out.

I'm not sure if I've run into an issue, or am just being ignorant. The
latter is quite likely.

This gets a bit complex (to me) but: The idea for step 4 is that the
resource tracker will be modified such that:

* if the compute node being claimed by an instance is a member of some
  aggregates
* and one of those  aggregates is associated with a resource provider 
* and the resource provider has inventory of resource class DISK_GB


then rather than claiming disk on the compute node, claim it on the
resource provider.

The first hurdle to overcome when doing this is to trace the path
from compute node, through aggregates, to a resource provider. We
can get a list of aggregates by host, and then we can use those
aggregates to get a list of resource providers by joining across
ResourceProviderAggregates, and we can join further to get just
those ResourceProviders which have Inventory of resource class
DISK_GB.
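
A rough sketch of that lookup (the table and column names here are
assumptions based on the description above, not the actual Nova schema):

    from sqlalchemy import and_, select

    def disk_gb_providers_for_host(conn, host_aggregate_ids, tables):
        """All resource providers supplying DISK_GB for the host's
        aggregates -- note this can return more than one row."""
        rp = tables['resource_providers']
        rpa = tables['resource_provider_aggregates']
        inv = tables['inventories']
        join = rp.join(
            rpa, rpa.c.resource_provider_id == rp.c.id).join(
            inv, inv.c.resource_provider_id == rp.c.id)
        query = select([rp]).select_from(join).where(and_(
            rpa.c.aggregate_id.in_(host_aggregate_ids),
            inv.c.resource_class == 'DISK_GB'))
        return conn.execute(query).fetchall()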

The issue here is that the result is a list. As far as I can tell
we can end up with >1 ResourceProviders providing DISK_GB for this
host because it is possible for a host to be in more than one
aggregate and it is necessary for an aggregate to be able to associate
with more than one resource provider.

If the above is true and we can find two resource providers providing
DISK_GB, how does:

* the resource tracker know where (to which provider) to write its
  disk claim?
* the scheduler (the next step in the work items) make choices and
  declarations amongst providers? (Yes, place on that node, but use
  disk provider X, not Y.)

If the above is not true, why is it not true? (show me the code
please)

If the above is an issue, but we'd like to prevent it, how do we fix it?
Do we need to make it so that when we associate an aggregate with a
resource provider we check to see that it is not already associated with
some other provider of the same resource class? This would be a
troubling approach because, as things currently stand, we can add Inventory
of any class and aggregates to a provider at any time; the amount of checking
that would need to happen is at least bi-directional if not multi-directional,
and that level of complexity is not a great direction to be going in.

So, yeah, if someone could help me tease this out, that would be
great, thanks.


[1] 
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/generic-resource-pools.html#work-items

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [oslo][pylockfile] update documentation

2016-05-29 Thread Joshua Harlow

I'll work on getting that fixed this week.

Either I or one of the other owners of pylockfile (Doug Hellmann)
should be able to get that resolved.


Thanks for bringing it up!

The code repository btw is at 
https://git.openstack.org/cgit/openstack/pylockfile (mirrored at 
https://github.com/openstack/pylockfile),


-Josh

anatoly techtonik wrote:

-- Forwarded message --
From: anatoly techtonik
Date: Fri, Apr 8, 2016 at 6:04 PM
Subject: [oslo][pylockfile] update documentation
To: openstack-dev@lists.openstack.org


HI,

https://pypi.python.org/pypi/lockfile links to
https://pythonhosted.org/lockfile/
both in text in metadata, which is outdated. Is it possible to update those
links to point to latest information at:
http://docs.openstack.org/developer/pylockfile/

?

Also, where is the code repository?

--
anatoly t.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [oslo][pylockfile] update documentation

2016-05-29 Thread Doug Hellmann
Excerpts from anatoly techtonik's message of 2016-05-29 13:33:40 +0300:
> -- Forwarded message --
> From: anatoly techtonik 
> Date: Fri, Apr 8, 2016 at 6:04 PM
> Subject: [oslo][pylockfile] update documentation
> To: openstack-dev@lists.openstack.org
> 
> 
> HI,
> 
> https://pypi.python.org/pypi/lockfile links to
> https://pythonhosted.org/lockfile/
> both in text in metadata, which is outdated. Is it possible to update those
> links to point to latest information at:
> http://docs.openstack.org/developer/pylockfile/
> 
> ?
> 
> Also, where is the code repository?

The code is in http://git.openstack.org/cgit/openstack/pylockfile and
you'll find instructions for getting set up to contribute patches at
http://docs.openstack.org/infra/manual/developers.html

It looks like since you reported the issue someone has tried to update
the readme metadata on PyPI. I did a little cleanup to make the RST
render properly.

Thanks for reporting the issue,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-29 Thread Steven Dake (stdake)

>On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:
>
>>
>>
>>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
>>wrote:
>>
>>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake) 
>>>wrote:
 Hey folks,

 While Swapnil has been busy churning the dockerfile.j2 files to all
 match the same style, and we also had summit where we declared we would
 solve the plugin problem, I have decided to begin work on a DSL
 prototype.

 Here are the problems I want to solve, in order of importance, by this
 work:

 * Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
 * Provide a programmatic way to manage Dockerfile construction rather
   than a manual (with vi or emacs or the like) mechanism
 * Allow complete overrides of every facet of Dockerfile construction,
   most especially repositories per container (rather than in the base
   container) to permit the use case of dependencies from one version
   with dependencies in another version of a different service
 * Get out of the business of maintaining 100+ dockerfiles but instead
   maintain one master file which defines the data that needs to be used
   to construct Dockerfiles
 * Permit different types of optimizations of Dockerfile building by
   changing around the parser implementation - to allow layering of each
   operation, or alternatively to merge layers as we do today

 I don't believe we can proceed with both binary and source plugins
 given our current implementation of Dockerfiles in any sane way.

 I further don't believe it is possible to customize repositories &
 installed files per container, which I receive increasing requests for
 offline.

 To that end, I've created a very, very rough prototype which builds the
 base container as well as a mariadb container.  The mariadb container
 builds and I suspect would work.

 An example of the DSL usage is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml

 A very poorly written parser is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/load.py

 I played around with INI as a format, to take advantage of oslo.config
 and kolla-build.conf, but that didn't work out.  YML is the way to go.

 I'd appreciate reviews on the YML implementation especially.

 How I see this work progressing is as follows:

 * A yml file describing all docker containers for all distros is
   placed in kolla/docker
 * The build tool adds an option --use-yml which uses the YML file
 * A parser (such as load.py above) is integrated into build.py to lay
   down the Dockerfiles
 * Wait 4-6 weeks for people to find bugs and complain
 * Make --use-yml the default for 4-6 weeks
 * Once we feel confident in the yml implementation, remove all
   Dockerfile.j2 files
 * Remove the --use-yml option
 * Remove all jinja2-isms from build.py

 This is similar to the work that took place to convert from raw
 Dockerfiles to Dockerfile.j2 files.  We are just reusing that pattern.
 Hopefully this will be the last major refactor of the dockerfiles
 unless someone has some significant complaints about the approach.

 Regards
 -steve


On 5/27/16, 3:44 AM, "Britt Houser (bhouser)"  wrote:

>I admit I'm not as knowledgable about the Kolla codebase as I'd like to
>be, so most of what you're saying is going over my head.  I think mainly
>I don't understand the problem statement.  It looks like you're pulling
>all the "hard coded" things out of the docker files, and making them user
>replaceable?  So the dockerfiles just become a list of required steps,
>and the user can change how each step is implemented?  Would this also
>unify the dockerfiles so there wouldn't be huge if statements between
>Centos and Ubuntu?
>
>Thx,
>Britt
>

What is being pulled out is all of the metadata used by the Dockerfiles or
Kolla in general.  This metadata, being structured either as a dictionary
or an ordered list, can be manipulated by simple python tools to do things
like merge sections, override sections, or optimize the built images.
FWIW, without even trying, the Dockerfiles produced by the parser yield a
50MB smaller image.  The jinja2 templates we have today cannot be easily
overridden; we have to provide a new key for each type of override, which
is super onerous on the build.py tool.

To your question of the docker files being a list of required steps: with
this method Dockerfile.j2 would go away permanently.  On each build, the
elemental.yml file would be processed into a Dockerfile for that particular
set of build options, just as a jinja2 file is processed into a Dockerfile
now.
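
To illustrate the shape of the idea, here is a toy sketch (not the actual
dsl.yml or load.py from the review; every name below is made up for the
example): the container definitions live in one YML-parsed data structure,
and a small parser emits a Dockerfile per distro.

    import yaml

    ELEMENTAL = """
    mariadb:
      centos:
        packages: [mariadb, mariadb-server]
      ubuntu:
        packages: [mariadb-server]
      footer: ['COPY start.sh /', 'CMD ["/start.sh"]']
    """

    INSTALL = {'centos': 'yum -y install', 'ubuntu': 'apt-get -y install'}

    def render_dockerfile(name, distro, data, base='kolla/%s-binary-base'):
        container = data[name]
        lines = ['FROM %s' % (base % distro)]
        # One merged RUN layer, as the current templates do; a different
        # parser could instead emit one RUN per package to layer the image.
        pkgs = ' '.join(container[distro]['packages'])
        lines.append('RUN %s %s' % (INSTALL[distro], pkgs))
        lines.extend(container.get('footer', []))
        return '\n'.join(lines) + '\n'

    print(render_dockerfile('mariadb', 'centos', yaml.safe_load(ELEMENTAL)))

Merging or overriding a section then becomes a plain dict update on the
loaded data before rendering, which is the part that is hard to do with the
jinja2 templates today.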




 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Steven Dake (stdake)
Quick question below.

On 5/28/16, 1:16 PM, "Hongbin Lu"  wrote:

>
>
>> -Original Message-
>> From: Zane Bitter [mailto:zbit...@redhat.com]
>> Sent: May-27-16 6:31 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
>> Gap analysis: Heat as a k8s orchestrator
>> 
>> I spent a bit of time exploring the idea of using Heat as an external
>> orchestration layer on top of Kubernetes - specifically in the case of
>> TripleO controller nodes but I think it could be more generally useful
>> too - but eventually came to the conclusion it doesn't work yet, and
>> probably won't for a while. Nevertheless, I think it's helpful to
>> document a bit to help other people avoid going down the same path, and
>> also to help us focus on working toward the point where it _is_
>> possible, since I think there are other contexts where it would be
>> useful too.
>> 
>> We tend to refer to Kubernetes as a "Container Orchestration Engine"
>> but it does not actually do any orchestration, unless you count just
>> starting everything at roughly the same time as 'orchestration'. Which
>> I wouldn't. You generally handle any orchestration requirements between
>> services within the containers themselves, possibly using external
>> services like etcd to co-ordinate. (The Kubernetes project refer to
>> this as "choreography", and explicitly disclaim any attempt at
>> orchestration.)
>> 
>> What Kubernetes *does* do is more like an actively-managed version of
>> Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap:
>> SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map
>> of resource names to server UUIDs and it creates a SoftwareDeployment
>> for each server. You have to generate the list of servers somehow to
>> give it (the easiest way is to obtain it from the output of another
>> ResourceGroup containing the servers). If e.g. a server goes down you
>> have to detect that externally, and trigger a Heat update that removes
>> it from the templates, redeploys a replacement server, and regenerates
>> the server list before a replacement SoftwareDeployment is created. In
>> constrast, Kubernetes is running on a cluster of servers, can use rules
>> to determine where to run containers, and can very quickly redeploy
>> without external intervention in response to a server or container
>> falling over. (It also does rolling updates, which Heat can also do
>> albeit in a somewhat hacky way when it comes to SoftwareDeployments -
>> which we're planning to fix.)
>> 
>> So this seems like an opportunity: if the dependencies between services
>> could be encoded in Heat templates rather than baked into the
>> containers then we could use Heat as the orchestration layer following
>> the dependency-based style I outlined in [1]. (TripleO is already
>> moving in this direction with the way that composable-roles uses
>> SoftwareDeploymentGroups.) One caveat is that fully using this style
>> likely rules out for all practical purposes the current Pacemaker-based
>> HA solution. We'd need to move to a lighter-weight HA solution, but I
>> know that TripleO is considering that anyway.
>> 
>> What's more though, assuming this could be made to work for a
>> Kubernetes cluster, a couple of remappings in the Heat environment file
>> should get you an otherwise-equivalent single-node non-HA deployment
>> basically for free. That's particularly exciting to me because there
>> are definitely deployments of TripleO that need HA clustering and
>> deployments that don't and which wouldn't want to pay the complexity
>> cost of running Kubernetes when they don't make any real use of it.
>> 
>> So you'd have a Heat resource type for the controller cluster that maps
>> to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay,
>> and a bunch of software deployments that map to either a
>> OS::Heat::SoftwareDeployment that calls (I assume) docker-compose
>> directly or a Kubernetes Pod resource to be named later.
>> 
>> The first obstacle is that we'd need that Kubernetes Pod resource in
>> Heat. Currently there is no such resource type, and the OpenStack API
>> that would be expected to provide that API (Magnum's /container
>> endpoint) is being deprecated, so that's not a long-term solution.[2]
>> Some folks from the Magnum community may or may not be working on a
>> separate project (which may or may not be called Higgins) to do that.
>> It'd be some time away though.
>> 
>> An alternative, though not a good one, would be to create a Kubernetes
>> resource type in Heat that has the credentials passed in somehow. I'm
>> very against that though. Heat is just not good at handling credentials
>> other than Keystone ones. We haven't ever created a resource type like
>> this before, except for the Docker one in /contrib that serves as a
>> prime example of what *not* to do. And if it doesn't make sense to wrap
>> an OpenStack API around this then 

Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Steven Dake (stdake)
Hongbin,

Re network coverage, he is talking about the best-practice way to deploy
an OpenStack cloud.  I have a diagram here:

http://www.gliffy.com/go/publish/10486755

I think what Zane is getting at is that having the network diagram above
magically map into Kubernetes is not possible at present, and may never
be possible.

Regards
-steve


On 5/28/16, 1:16 PM, "Hongbin Lu"  wrote:

>
>
>> -Original Message-
>> From: Zane Bitter [mailto:zbit...@redhat.com]
>> Sent: May-27-16 6:31 PM
>> To: OpenStack Development Mailing List
>> Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
>> Gap analysis: Heat as a k8s orchestrator
>> 
>> I spent a bit of time exploring the idea of using Heat as an external
>> orchestration layer on top of Kubernetes - specifically in the case of
>> TripleO controller nodes but I think it could be more generally useful
>> too - but eventually came to the conclusion it doesn't work yet, and
>> probably won't for a while. Nevertheless, I think it's helpful to
>> document a bit to help other people avoid going down the same path, and
>> also to help us focus on working toward the point where it _is_
>> possible, since I think there are other contexts where it would be
>> useful too.
>> 
>> We tend to refer to Kubernetes as a "Container Orchestration Engine"
>> but it does not actually do any orchestration, unless you count just
>> starting everything at roughly the same time as 'orchestration'. Which
>> I wouldn't. You generally handle any orchestration requirements between
>> services within the containers themselves, possibly using external
>> services like etcd to co-ordinate. (The Kubernetes project refer to
>> this as "choreography", and explicitly disclaim any attempt at
>> orchestration.)
>> 
>> What Kubernetes *does* do is more like an actively-managed version of
>> Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap:
>> SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map
>> of resource names to server UUIDs and it creates a SoftwareDeployment
>> for each server. You have to generate the list of servers somehow to
>> give it (the easiest way is to obtain it from the output of another
>> ResourceGroup containing the servers). If e.g. a server goes down you
>> have to detect that externally, and trigger a Heat update that removes
>> it from the templates, redeploys a replacement server, and regenerates
>> the server list before a replacement SoftwareDeployment is created. In
>> constrast, Kubernetes is running on a cluster of servers, can use rules
>> to determine where to run containers, and can very quickly redeploy
>> without external intervention in response to a server or container
>> falling over. (It also does rolling updates, which Heat can also do
>> albeit in a somewhat hacky way when it comes to SoftwareDeployments -
>> which we're planning to fix.)
>> 
>> So this seems like an opportunity: if the dependencies between services
>> could be encoded in Heat templates rather than baked into the
>> containers then we could use Heat as the orchestration layer following
>> the dependency-based style I outlined in [1]. (TripleO is already
>> moving in this direction with the way that composable-roles uses
>> SoftwareDeploymentGroups.) One caveat is that fully using this style
>> likely rules out for all practical purposes the current Pacemaker-based
>> HA solution. We'd need to move to a lighter-weight HA solution, but I
>> know that TripleO is considering that anyway.
>> 
>> What's more though, assuming this could be made to work for a
>> Kubernetes cluster, a couple of remappings in the Heat environment file
>> should get you an otherwise-equivalent single-node non-HA deployment
>> basically for free. That's particularly exciting to me because there
>> are definitely deployments of TripleO that need HA clustering and
>> deployments that don't and which wouldn't want to pay the complexity
>> cost of running Kubernetes when they don't make any real use of it.
>> 
>> So you'd have a Heat resource type for the controller cluster that maps
>> to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay,
>> and a bunch of software deployments that map to either a
>> OS::Heat::SoftwareDeployment that calls (I assume) docker-compose
>> directly or a Kubernetes Pod resource to be named later.
>> 
>> The first obstacle is that we'd need that Kubernetes Pod resource in
>> Heat. Currently there is no such resource type, and the OpenStack API
>> that would be expected to provide that API (Magnum's /container
>> endpoint) is being deprecated, so that's not a long-term solution.[2]
>> Some folks from the Magnum community may or may not be working on a
>> separate project (which may or may not be called Higgins) to do that.
>> It'd be some time away though.
>> 
>> An alternative, though not a good one, would be to create a Kubernetes
>> resource type in Heat that has the credentials passed in somehow. I'm
>> very against that

[openstack-dev] [octavia] enabling new topologies

2016-05-29 Thread Sergey Guenender
I'm working with the IBM team implementing the Active-Active N+1 topology 
[1].

I've been tasked with helping integrate the code supporting the new
topology while a) making as few code changes as possible and b) reusing
as much code as possible.

To make sure the changes to existing code are future-proof, I'd like to
implement them outside of AA N+1, submit them on their own, and let the
AA N+1 work build on top of them.

--TL;DR--

what follows is a description of the challenges I'm facing and the way I 
propose to solve them. Please skip down to the end of the email to see the 
actual questions.

--The details--

I've been studying the code for a few weeks now to see where the best 
places for minimal changes might be.

Currently I see two options:

   1. introduce a new kind of entity (the distributor) and make sure it's 
being handled on any of the 6 levels of controller worker code (endpoint, 
controller worker, *_flows, *_tasks, *_driver)

   2. leave most of the code layers intact by building on the fact that 
distributor will inherit most of the controller worker logic of amphora


In Active-Active topology, very much like in Active/StandBy:
* top level of distributors will have to run VRRP
* the distributors will have a Neutron port made on the VIP network
* the distributors' neutron ports on the VIP network will need the same
  security groups
* the amphorae facing the pool member networks still require:
  * ports on the pool member networks
  * "peers" HAProxy configuration for real-time state exchange
  * VIP network connections with the right security groups

The fact that existing topologies lack the notion of a distributor,
together with an inspection of the 30-or-so existing references to
amphora clusters, swayed me towards the second option.

The easiest way to make use of existing code seems to be by splitting 
load-balancer's amphorae into three overlapping sets:
1. The front-facing - those connected to the VIP network
2. The back-facing - subset of front-facing amphorae, also connected to 
the pool members' networks
3. The VRRP-running - subset of front-facing amphorae, making sure the VIP 
routing remains highly available

At the code-changes level
* the three sets can be simply added as properties of 
common.data_model.LoadBalancer
* the existing amphorae cluster references would switch to using one of 
these properties, for example
* the VRRP sub-flow would loop over only the VRRP amphorae
* the network driver, when plugging the VIP, would loop over the 
front-facing amphorae
* when connecting to the pool members' networks, 
network_tasks.CalculateDelta would only loop over the back-facing amphorae

In terms of backwards compatibility, Active-StandBy topology would have 
the 3 sets equal and contain both of its amphorae.

An even more future-proof approach might be to implement the set getters
as selector methods, supporting operation on subsets of each kind of
amphora. For instance, when growing/shrinking the back-facing amphora
cluster, only the added/removed ones would need to be processed.
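
To illustrate, here is a minimal sketch of what those properties could look
like on the data model (purely illustrative, not actual Octavia code; the
per-amphora attributes are placeholders for however role membership ends up
being recorded):

    class LoadBalancer(object):
        def __init__(self, amphorae):
            self.amphorae = amphorae  # every amphora of this load balancer

        @property
        def front_facing_amphorae(self):
            # Connected to the VIP network.
            return [a for a in self.amphorae if a.has_vip_port]

        @property
        def back_facing_amphorae(self):
            # Subset of front-facing, also plugged into member networks.
            return [a for a in self.front_facing_amphorae
                    if a.has_member_ports]

        @property
        def vrrp_amphorae(self):
            # Subset of front-facing that keeps the VIP highly available.
            return [a for a in self.front_facing_amphorae if a.runs_vrrp]

    # In Active/StandBy all three flags are true on both amphorae, so the
    # three sets are equal, preserving today's behaviour.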

Finally (thank you for your patience, dear reader), my question is: if any 
of the above makes sense, and to facilitate the design/code review, what 
would be the best way to move forward?

Should I create a mini-blueprint describing the changes and implement it?
Should I just open a bug for it and supply a fix?

Thanks,
-Sergey.

[1] https://review.openstack.org/#/c/234639

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] IPAM issue with multiple docker networks having same cidr subnets

2016-05-29 Thread Vikas Choudhary
Hi All,

I humbly request everybody to review this bp [1] and provide input once
you get some time.



Thanks
Vikas

[1] https://blueprints.launchpad.net/kuryr/+spec/address-scopes-spaces

On Sat, May 28, 2016 at 10:08 AM, Vikas Choudhary <
choudharyvika...@gmail.com> wrote:

> Thanks Toni, after some thought process, to me also also addressSpace
> approach making sense. We can map neutron addressScopes to docker
> addressSpaces.
>
> Have drafted a blueprint here [1]. I think i have got enough clarity on
> approach. Will be pushing a spec for this soon.
>
>
>
> Thanks
> Vikas
>
>  [1]https://blueprints.launchpad.net/kuryr/+spec/address-scopes-spaces
>
> On Sat, May 28, 2016 at 1:54 AM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Thu, May 26, 2016 at 9:48 PM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> Recently, Banix observed and brought into notice this issue [1].
>>>
>>> To solve this, i could think of two approaches:
>>> 1. Modifying the libnetwork apis to get PoolID also at network creation.
>>>  OR
>>> 2. Enhancing the /network docker api to get PoolID details also
>>>
>>> Problem with the first approach is that it is changing libnetwork
>>> interface which is common for all remote drivers and thus chances of any
>>> break-ups are high. So I preferred second one.
>>>
>>> Here is the patch I pushed to docker [2].
>>>
>>> Once this is merged, we can easily fix this issue by tagging poolID to
>>> neutron networks and filtering subnets at address request time based on
>>> this information.
>>>
>>> Any thoughts/suggestions?
>>>
>>
>> I think following the address scope proposal at [2] is the best course of
>> action. Thanks for taking
>> it up with Docker upstream!
>>
>>
>>>
>>>
>>> Thanks
>>> Vikas
>>>
>>> [1] https://bugs.launchpad.net/kuryr/+bug/1585572
>>> [2] https://github.com/docker/docker/issues/23025
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [oslo][pylockfile] update documentation

2016-05-29 Thread anatoly techtonik
-- Forwarded message --
From: anatoly techtonik 
Date: Fri, Apr 8, 2016 at 6:04 PM
Subject: [oslo][pylockfile] update documentation
To: openstack-dev@lists.openstack.org


HI,

https://pypi.python.org/pypi/lockfile links to
https://pythonhosted.org/lockfile/
both in text in metadata, which is outdated. Is it possible to update those
links to point to latest information at:
http://docs.openstack.org/developer/pylockfile/

?

Also, where is the code repository?

--
anatoly t.


-- 
anatoly t.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-29 Thread Hong Hui Xiao
Hi ML2 team.

I created this patch [1] based on the discussion in the mailing list. Since
it touches the code in ml2 (especially in the segment part), could you
review it and give some advice on it?

[1] https://review.openstack.org/#/c/317358/

HongHui Xiao(肖宏辉)



From:   Carl Baldwin 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   05/19/2016 05:34
Subject:Re: [openstack-dev] [Neutron][ML2][Routed Networks]



On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  
wrote:
> I update [1] to auto delete dhcp port if there is no other ports. But
> after the dhcp port is deleted, the dhcp service is not usable. I can

I think this is what I expect.

> resume the dhcp service by adding another subnet, but I don't think it 
is
> a good way. Do we need to consider bind dhcp port to another segment 
when
> deleting the existing one?

Where would you bind the port?  DHCP requires L2 connectivity to the
segment which it serves.  But, you deleted the segment.  So, it makes
sense that it wouldn't work.

Brandon is working on DHCP scheduling which should take care of this.
DHCP should be scheduled to all of the segments with DHCP enabled
subnets.  It should have a port for each of these segments.  So, if a
segment (and its ports) are deleted, I think the right thing to do is
to make sure that DHCP scheduling removes DHCP from that segment.  I
would expect this to happen automatically when the subnet is deleted.
We should check with Brandon to make sure this works (or will work
when his work merges).

Carl

> [1] https://review.openstack.org/#/c/317358

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev