Re: [openstack-dev] [nova] python-novaclient region setting

2016-02-21 Thread Andrey Kurilin
Hi!
The `novaclient.client.Client` entry point supports almost the same arguments
as `novaclient.v2.client.Client`; the only difference is the api_version
argument, so you can set the region via `novaclient.client.Client` in the same
way as with `novaclient.v2.client.Client`.
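
For example, a minimal sketch using a keystoneauth1 session (the auth URL,
credentials and region name below are placeholders, and the exact set of
accepted keyword arguments can vary between novaclient releases):

```python
from keystoneauth1 import loading, session
from novaclient import client

# Build a Keystone session (placeholder credentials and auth URL).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://keystone.example.com:5000/v3',
                                username='demo', password='secret',
                                project_name='demo',
                                user_domain_name='Default',
                                project_domain_name='Default')
sess = session.Session(auth=auth)

# The versioned entry point passes kwargs through to v2.client.Client,
# including region_name, so the region can be selected right here.
nova = client.Client('2', session=sess, region_name='RegionOne')
print(nova.servers.list())
```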

On Mon, Feb 22, 2016 at 6:11 AM, Xav Paice  wrote:

> Hi,
>
> In http://docs.openstack.org/developer/python-novaclient/api.html it's
> got some pretty clear instructions not to use novaclient.v2.client.Client
> but I can't see another way to specify the region - there's more than one
> in my installation, and no param for region in novaclient.client.Client
>
> Shall I hunt down/write a blueprint for that?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Midcycle summary part 3/6

2016-02-21 Thread 守屋哲 / MORIYA,SATORU
Thanks, Jim, for summarizing the mid-cycle meeting.

I'd like to clarify the next step for boot-from-volume things.
In the etherpad (https://etherpad.openstack.org/p/ironic-mitaka-midcycle),
there's a high-level plan:

* High level plan
  * Review specs
  * Write new specs for the base drivers - This may need the composable driver 
spec as there are numerous permutations
* the base implementation should be fully open source
  ...
* Julia to write a spec for this reference implementation
  * Then begin letting in the other drivers

So the next step is reviewing the specs, which are:
* ironic
  https://review.openstack.org/#/c/200496/
* nova-ironic driver
  https://review.openstack.org/#/c/211101/

We need to agree on the ironic spec (200496) before the Nova PTL reviews the
nova-ironic driver spec (211101). In addition, the Nova spec freeze date is
usually set around milestone-1 (quite early in the development cycle), so I'd
like the Ironic cores to review the specs above before the next summit.

200496 has already received lots of reviews (thanks, everyone!), but
unfortunately it has not yet had enough attention from core reviewers.

Regards,
Satoru


> -Original Message-
> From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Thursday, February 18, 2016 5:14 AM
> To: openstack-dev@lists.openstack.org
> Subject: [!][openstack-dev] [ironic] Midcycle summary part 3/6
>  
> * Discussed boot-from-volume things
>   * This is something we'd like to start working on in Newton, though
> depending on priority it may be an Otaca thing.
>   * Would like to ship a reference implementation first as the "base
> case"; this would support a deployment where all of the below are
> true:
> * Deployment has metadata service (configdrive not supported)
> * Deployment does not require local boot (in other words, this will
>   only support booting the instance via iPXE)
> * Hardware supports iPXE
> * Hardware supports the UEFI 2.4 spec
>   * Once that ships, vendors are free to use vendor-specific features to
> provide a better experience
>   * TheJulia will be writing a spec for this reference implementation
>   * Talked about how we might get a configdrive to the instance
> * Some hardware may support it via virtualmedia
> * Some hardware may support it via a second volume
> * ironic-conductor could mount the volume and carve out a
>   configdrive partition. This has implications on network and
>   customer data security, cannot be used with encrypted volumes, and
>   could break an image that doesn't have sufficient space at the
>   end.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Thomas Goirand
On 02/18/2016 11:38 PM, D'Angelo, Scott wrote:
> Cinder team is proposing to add support for API microversions [1]. It came up 
> at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on 
> IRC have raised questions about this [3]
> 
> Please weigh in on the design decision to add a new /v3 endpoint for Cinder 
> for clients to use when they wish to have api-microversions.
> 
> PRO add new /v3 endpoint: A client should not ask for new behaviour against the
> old /v2 endpoint, because that might hit an old pre-microversion (e.g.
> Liberty) server, and that server might carry on with the old behaviour. The
> client would not know this without checking, and so strange things happen
> silently.
> It is possible for the client to check the response from the server, but this
> requires an extra round trip.
> It is possible to implement some type of caching of the supported
> (micro-)version, but not all clients will do this.
> The basic argument is that continuing to use the /v2 endpoint either requires an
> extra round trip for each request (absent caching), meaning a performance
> slow-down, or the possibility of unnoticed errors.
> 
> CON add new endpoint:
> The downstream cost of changing endpoints is large. It took ~3 years to move from
> /v1 -> /v2 and we will have to support the deprecated /v2 endpoint forever.
> If we add microversions to the /v2 endpoint, old scripts will keep working on
> /v2 unchanged.
> We would assume that people who choose to use microversions will check that
> the server supports them.
> 
> Scottda

I'd vote for the extra round trip and an implementation of caching whenever
possible. Using another endpoint is really annoying; I already have
specific handling for cinder to set up both the v1 and v2 endpoints, as v2
doesn't fully implement what's in v1. BTW, where are we with this? Can
I fully get rid of the v1 endpoint, or will I still experience some
Tempest failures?
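
(For what it's worth, a minimal sketch of the "check once, then cache" pattern
being discussed; the discovery-document layout and the final header name are
assumptions here and may differ from what Cinder actually ships:)

```python
import requests

_cache = {}

def max_microversion(endpoint):
    """Return the max microversion the endpoint advertises, caching the result.

    Assumes the endpoint root returns a version-discovery document whose
    CURRENT entry carries 'min_version'/'version' fields (an assumption,
    not a guarantee for every release).
    """
    if endpoint not in _cache:
        doc = requests.get(endpoint, timeout=10).json()
        current = next((v for v in doc.get('versions', [])
                        if v.get('status') == 'CURRENT'), {})
        _cache[endpoint] = current.get('version') or None
    return _cache[endpoint]

def headers_for(endpoint, wanted):
    # Pre-microversion (e.g. Liberty) servers advertise no max version:
    # fall back to plain v2 behaviour instead of failing silently.
    if max_microversion(endpoint) is None:
        return {}
    return {'OpenStack-API-Version': 'volume %s' % wanted}
```

The extra round trip happens once per endpoint; everything after that is served
from the cache.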

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] python-novaclient region setting

2016-02-21 Thread Xav Paice
Hi,

In http://docs.openstack.org/developer/python-novaclient/api.html it's got
some pretty clear instructions not to use novaclient.v2.client.Client but I
can't see another way to specify the region - there's more than one in my
installation, and no param for region in novaclient.client.Client

Shall I hunt down/write a blueprint for that?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Joshua Harlow

Gary Kotton wrote:

I think that IBM has a very interesting policy in that two IBM cores
should not approve a patch posted by one of their colleagues (that is
what Chris RIP used to tell me). It would be nice if the community would
follow this policy.
Thanks
Gary


Sounds similar to a representative government vs. a democratic one. I'm
not sure, though, if it's needed or applicable (the aspiration of it
sounds nice, but meh). As long as people are good, use their heads,
and we believe that people will do the right thing (and handle the cases
where this is violated in a polite and considerate manner), then meh,
more power to everyone...


My 2 cents



From: "Armando M." mailto:arma...@gmail.com>>
Reply-To: OpenStack List mailto:openstack-dev@lists.openstack.org>>
Date: Sunday, February 21, 2016 at 6:40 PM
To: OpenStack List mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [kolla] discussion about core reviewer
limitations by company



On 20 February 2016 at 14:06, Kevin Benton <ke...@benton.pub> wrote:

I don't think neutron has a limit. There are 4 from redhat and 3
from hp and mirantis right now.
https://review.openstack.org/#/admin/groups/38,members


By the way, technically speaking, some of those also limit
themselves to merging only in their area of expertise.

On Feb 20, 2016 13:02, "Steven Dake (stdake)" <std...@cisco.com> wrote:

Neutron, the largest project in OpenStack by active committers
and reviewers as measured by the governance repository teamstats
tool, has a limit of 2 core reviewers per company. They do that
for a reason. I expect Kolla will grow over time (we are about
1/4 their size in terms of contributors and reviewers). I
believe other projects besides Neutron that already have good diversity
follow a similar pattern (and intend to keep it in place).

Regards
-steve


From: Gal Sagie <gal.sa...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Saturday, February 20, 2016 at 10:38 AM
To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] discussion about core
reviewer limitations by company

I think setting these limits is wrong; some companies have
more overall representation than others.
The core reviewer job should be on a personal basis and not
on a company basis. I think the PTL of each project needs
to make sure the diversity and the community voice is heard
in each project and the correct path is taken, even if
many (or even all) of the cores are from the same company.
If you really want to set limits, then I would go with
something like "2 cores from the same company cannot +2 the
same patch", but
again, I am against such things personally..

Disclaimer: i am not personally involved in Kolla or know
how things are running there.

On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake)
<std...@cisco.com> wrote:

Hey folks,

Mirantis has been developing a big footprint in the core
review team, and Red Hat already has a big footprint in
the core review team. These are all good things, but I
want to avoid in the future a situation in which one
company has a majority of core reviewers. Since core
reviewers set policy for the project, the project could
be harmed if one company has such a majority. This is
one reason why project diversity is so important and has
its own special snowflake tag in the governance repository.

I'd like your thoughts on how to best handle this
situation, before I trigger a vote we can all agree on.

I was thinking of something simple like:
"1 company may not have more then 33% of core reviewers.
At the conclusion of PTL elections, the current cycle's
6 months of reviews completed will be used as a metric
to select the core reviewers from that particular
company if the core review team has shrunk as a result
of removal of core reviewers during the cycle."

Thoughts, comments, questions, concerns, etc?

Regards,
-steve



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lis

Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-21 Thread Vega Cai
Hi Yipei,

One reason for that error is that the API service is down. You can run
"rejoin-stack.sh" under your DevStack folder to enter the "screen" console
of DevStack and check whether the services are running. If you are not familiar
with "screen", which is a terminal multiplexer for Linux, you can do a brief
search.

One more thing you can try: change the IP address to 127.0.0.1 and issue
the request on the machine hosting the services, to see whether you still get the
"Connection refused" error.

BR
Zhiyuan

On 20 February 2016 at 20:49, Yipei Niu  wrote:

> Hi Joe and Zhiyuan,
>
> I encounter an error when executing the following command:
>
> stack@nyp-VirtualBox:~/devstack$ curl -X POST
> http://192.168.56.101:1/v1.0/pods -H "Content-Type: application/json"
> -H "X-Auth-Token: 0ead350329ef4b07ab3b823a9d37b724" -d '{"pod":
> {"pod_name":  "RegionOne"}}'
> curl: (7) Failed to connect to 192.168.56.101 port 1: Connection
> refused
>
> Before executing the command, I source the file "userrc_early", whose
> content is as follows:
> export OS_IDENTITY_API_VERSION=3
> export OS_AUTH_URL=http://192.168.56.101:35357
> export OS_USERNAME=admin
> export OS_USER_DOMAIN_ID=default
> export OS_PASSWORD=nypnyp0316
> export OS_PROJECT_NAME=admin
> export OS_PROJECT_DOMAIN_ID=default
> export OS_REGION_NAME=RegionOne
>
> Furthermore, the results of "openstack endpoint list" are as follows:
> stack@nyp-VirtualBox:~/devstack$ openstack endpoint list
>
> +--+---+--++-+---++
> | ID   | Region| Service Name | Service
> Type   | Enabled | Interface | URL
>|
>
> +--+---+--++-+---++
> | 0702ff208f914910bf5c0e1b69ee73cc | RegionOne | nova_legacy  |
> compute_legacy | True| internal  |
> http://192.168.56.101:8774/v2/$(tenant_id)s|
> | 07fe31211a234566a257e3388bba0393 | RegionOne | nova_legacy  |
> compute_legacy | True| admin |
> http://192.168.56.101:8774/v2/$(tenant_id)s|
> | 11cea2de9407459480a30b190e005a5c | Pod1  | neutron  | network
>  | True| internal  | http://192.168.56.101:20001/
>   |
> | 16c0d9f251d84af897dfdd8df60f76dd | Pod2  | nova_legacy  |
> compute_legacy | True| admin |
> http://192.168.56.102:8774/v2/$(tenant_id)s|
> | 184870e1e5df48629e8e1c7a13c050f8 | RegionOne | cinderv2 | volumev2
> | True| public| http://192.168.56.101:19997/v2/$(tenant_id)s
>   |
> | 1a068f85aa12413582c4f4d256d276af | Pod2  | nova | compute
>  | True| admin | http://192.168.56.102:8774/v2.1/$(tenant_id)s
>  |
> | 1b3799428309490bbce57043e87ac815 | RegionOne | cinder   | volume
> | True| internal  | http://192.168.56.101:8776/v1/$(tenant_id)s
>  |
> | 221d74877fdd4c03b9b9b7d752e30473 | Pod2  | neutron  | network
>  | True| internal  | http://192.168.56.102:9696/
>|
> | 413de19152f04fc6b2b1f3a1e43fd8eb | Pod2  | cinderv2 | volumev2
> | True| public| http://192.168.56.102:8776/v2/$(tenant_id)s
>  |
> | 42e1260ab0854f3f807dcd67b19cf671 | RegionOne | keystone | identity
> | True| admin | http://192.168.56.101:35357/v2.0
>   |
> | 45e4ccd5e16a423e8cb9f59742acee27 | Pod1  | neutron  | network
>  | True| public| http://192.168.56.101:20001/
>   |
> | 464dd469545b4eb49e53aa8dafc114bc | RegionOne | cinder   | volume
> | True| admin | http://192.168.56.101:8776/v1/$(tenant_id)s
>  |
> | 47351cda93a54a2a9379b83c0eb445ca | Pod2  | neutron  | network
>  | True| admin | http://192.168.56.102:9696/
>|
> | 56d6f7641ee84ee58611621c4657e45d | Pod2  | nova_legacy  |
> compute_legacy | True| internal  |
> http://192.168.56.102:8774/v2/$(tenant_id)s|
> | 57887a9d15164d6cb5b58d9342316cf7 | RegionOne | glance   | image
>  | True| internal  | http://192.168.56.101:9292
>   |
> | 5f2a4f69682941edbe54a85c45a5fe1b | Pod1  | cinderv2 | volumev2
> | True| public| http://192.168.56.101:8776/v2/$(tenant_id)s
>  |
> | 6720806fe7e24a7c8335159013dba948 | Pod2  | cinderv2 | volumev2
> | True| admin | http://192.168.56.102:8776/v2/$(tenant_id)s
>  |
> | 72e2726b55414d25928b4fc9925a06ed | RegionOne | nova_legacy  |
> compute_legacy | True| public|
> http://192.168.56.101:8774/v2/$(tenant_id)s|
> | 75163e97c3014a389ab56184f970908f | Pod2  | neutron  | network
>  | True| public| http://192.168.56.102:9696/
>|
> | 77b67589282e4776916ead802646de11 | Pod1  | nova | compute
>  | True| internal  | http://192.168.56.101:8774/v2.1/$(tenant_id)s
>  |
> | 789f3456899f4381aef850822c412436 | Pod2  | nova | compute
>  | True| internal  | http://19

Re: [openstack-dev] [swift] zones and partition

2016-02-21 Thread Matthew Oliver
Kiru,

That just means you have put equal weight on all your drives, so you're
telling swift to store it that way.

So the short answer is there is more to it than that. Sure, an evenly balanced
cluster makes life easier, but it doesn't have to be the case. You can set drive
weights and the overload factor to tune/balance data placement throughout the
cluster. Further, you have more than just regions and zones; swift knows
about servers and disks, and will always attempt to keep objects as
dispersed and durable as possible.

If there is ever a case where some partitions have 2 replicas in the one
zone, then you'd find they live on different servers or, if there is only 1
server, on different disks. The more failure domains you add, the more durably
your data is stored.

Regards,
Matt

On Mon, Feb 22, 2016 at 2:00 PM, Kirubakaran Kaliannan <
kiru...@zadarastorage.com> wrote:

>
>
> Hi,
>
>
>
> I have 3 zones, with different capacity in each. Say I have 4 x 1TB disks
> (r0z1 - 1TB, r0z2 - 1TB, r0z3 - 2TB).
>
>
>
> The ring builder (rebalance code) keeps ¼-partitions of all 3 replicas in
> Zone-3. This is the current default behavior of the rebalance code.
>
> This puts pressure on the storage user to increase the storage capacity
> evenly across the zones. Is this the correct understanding?
>
>
>
> If so, why have we chosen this approach rather than enforcing zone-based
> partition placement (where the partition size on Z1 and Z2 may be smaller than on Z3)?
>
> That would make sure we have 100% zone-level protection and no loss of data on
> a single-zone failure?
>
>
>
> Thanks,
>
> -kiru
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-21 Thread Jeffrey Zhang
+1 Nice work!

On Mon, Feb 22, 2016 at 10:27 AM, Ryan Hallisey  wrote:

> +1.  Nice work Angus!
>
> -Ryan
>
> > On Feb 19, 2016, at 11:51 PM, Michał Jastrzębski 
> wrote:
> >
> > +1 on condition that he will appear in kolla itself, after
> > all...you'll be a kolla core as well right?;)
> >
> >> On 19 February 2016 at 21:44, Sam Yaple  wrote:
> >> +1 of course. I mean, its Angus. Who can say no to Angus?
> >>
> >> Sam Yaple
> >>
> >> On Fri, Feb 19, 2016 at 10:57 PM, Michal Rostecki <
> mroste...@mirantis.com>
> >> wrote:
> >>>
>  On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:
> 
>  Angus is already in kolla-mesos-core but doesn't have broad ability to
>  approve changes for all of kolla-core.  We agreed by majority vote in
>  Tokyo that folks in kolla-mesos-core that integrated well with the
>  project would be moved from kolla-mesos-core to kolla-core.  Once
>  kolla-mesos-core is empty, we will deprecate that group.
> 
>  Angus has clearly shown his commitment to Kolla:
>  He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
>  solid PDE of 64 (meaning 64 days of interaction with either reviews,
>  commits, or mailing list participation).
> 
>  Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
>  vote of –1 is a VETO vote, which terminates the voting process.  If
>  there is unanimous approval prior to February 26, or a veto vote, the
>  voting will be closed and appropriate changes made.
> 
>  Remember now we agreed it takes a majority vote to approve a core
>  reviewer, which means Angus needs a +1 support from at least 6 core
>  reviewers with no veto votes.
> 
>  Regards,
>  -steve
> >>>
> >>> +1
> >>> Good job, Angus!
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Contributor Awards

2016-02-21 Thread Steve Martinelli

limited edition (and hilarious) t-shirts are always fun :)

++ on raspberry pis, those are always a hit.

stevemar



From:   Hugh Blemings 
To: "OpenStack Development Mailing List (not for usage questions)"
, OpenStack Operators

Date:   2016/02/21 09:54 PM
Subject:Re: [openstack-dev] OpenStack Contributor Awards



Hiya,

On 16/02/2016 21:43, Tom Fifield wrote:
> Hi all,
>
> I'd like to introduce a new round of community awards handed out by the
> Foundation, to be presented at the feedback session of the summit.
>
> Nothing flashy or starchy - the idea is that these are to be a little
> informal, quirky ... but still recognising the extremely valuable work
> that we all do to make OpenStack excel.
>
> [...]
 >
> in the meantime, let's use this thread to discuss the fun part: goodies.
> What do you think we should lavish award winners with? Soft toys?
> Perpetual trophies? baseball caps ?

I can't help but think that given the scale of a typical OpenStack
deployment and the desire for these awards to be a bit quirky, giving
recipients something at the other end of the computing scale - an
Arduino, or cluster of Raspberry Pis or similar could be kinda fun :)

Cheers,
Hugh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-21 Thread joehuang
Hi, Yipei,

Is this port blocked on your host? Please check iptables.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Saturday, February 20, 2016 8:49 PM
To: openstack-dev@lists.openstack.org
Cc: joehuang; Zhiyuan Cai
Subject: [tricircle] playing tricircle with devstack under two-region 
configuration

Hi Joe and Zhiyuan,

I encounter an error when executing the following command:

stack@nyp-VirtualBox:~/devstack$ curl -X POST 
http://192.168.56.101:1/v1.0/pods -H "Content-Type: application/json" -H 
"X-Auth-Token: 0ead350329ef4b07ab3b823a9d37b724" -d '{"pod": {"pod_name":  
"RegionOne"}}'
curl: (7) Failed to connect to 192.168.56.101 port 1: Connection refused

Before executing the command, I source the file "userrc_early", whose content 
is as follows:
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://192.168.56.101:35357
export OS_USERNAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PASSWORD=nypnyp0316
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne

Furthermore, the results of "openstack endpoint list" are as follows:
stack@nyp-VirtualBox:~/devstack$ openstack endpoint list
+--+---+--++-+---++
| ID   | Region| Service Name | Service Type   
| Enabled | Interface | URL|
+--+---+--++-+---++
| 0702ff208f914910bf5c0e1b69ee73cc | RegionOne | nova_legacy  | compute_legacy 
| True| internal  | http://192.168.56.101:8774/v2/$(tenant_id)s|
| 07fe31211a234566a257e3388bba0393 | RegionOne | nova_legacy  | compute_legacy 
| True| admin | http://192.168.56.101:8774/v2/$(tenant_id)s|
| 11cea2de9407459480a30b190e005a5c | Pod1  | neutron  | network
| True| internal  | http://192.168.56.101:20001/   |
| 16c0d9f251d84af897dfdd8df60f76dd | Pod2  | nova_legacy  | compute_legacy 
| True| admin | http://192.168.56.102:8774/v2/$(tenant_id)s|
| 184870e1e5df48629e8e1c7a13c050f8 | RegionOne | cinderv2 | volumev2   
| True| public| http://192.168.56.101:19997/v2/$(tenant_id)s   |
| 1a068f85aa12413582c4f4d256d276af | Pod2  | nova | compute
| True| admin | http://192.168.56.102:8774/v2.1/$(tenant_id)s  |
| 1b3799428309490bbce57043e87ac815 | RegionOne | cinder   | volume 
| True| internal  | http://192.168.56.101:8776/v1/$(tenant_id)s|
| 221d74877fdd4c03b9b9b7d752e30473 | Pod2  | neutron  | network
| True| internal  | http://192.168.56.102:9696/|
| 413de19152f04fc6b2b1f3a1e43fd8eb | Pod2  | cinderv2 | volumev2   
| True| public| http://192.168.56.102:8776/v2/$(tenant_id)s|
| 42e1260ab0854f3f807dcd67b19cf671 | RegionOne | keystone | identity   
| True| admin | http://192.168.56.101:35357/v2.0   |
| 45e4ccd5e16a423e8cb9f59742acee27 | Pod1  | neutron  | network
| True| public| http://192.168.56.101:20001/   |
| 464dd469545b4eb49e53aa8dafc114bc | RegionOne | cinder   | volume 
| True| admin | http://192.168.56.101:8776/v1/$(tenant_id)s|
| 47351cda93a54a2a9379b83c0eb445ca | Pod2  | neutron  | network
| True| admin | http://192.168.56.102:9696/|
| 56d6f7641ee84ee58611621c4657e45d | Pod2  | nova_legacy  | compute_legacy 
| True| internal  | http://192.168.56.102:8774/v2/$(tenant_id)s|
| 57887a9d15164d6cb5b58d9342316cf7 | RegionOne | glance   | image  
| True| internal  | http://192.168.56.101:9292 |
| 5f2a4f69682941edbe54a85c45a5fe1b | Pod1  | cinderv2 | volumev2   
| True| public| http://192.168.56.101:8776/v2/$(tenant_id)s|
| 6720806fe7e24a7c8335159013dba948 | Pod2  | cinderv2 | volumev2   
| True| admin | http://192.168.56.102:8776/v2/$(tenant_id)s|
| 72e2726b55414d25928b4fc9925a06ed | RegionOne | nova_legacy  | compute_legacy 
| True| public| http://192.168.56.101:8774/v2/$(tenant_id)s|
| 75163e97c3014a389ab56184f970908f | Pod2  | neutron  | network
| True| public| http://192.168.56.102:9696/|
| 77b67589282e4776916ead802646de11 | Pod1  | nova | compute
| True| internal  | http://192.168.56.101:8774/v2.1/$(tenant_id)s  |
| 789f3456899f4381aef850822c412436 | Pod2  | nova | compute
| True| internal  | http://192.168.56.102:8774/v2.1/$(tenant_id)s  |
| 78da748f6fdc41e8b690acefec8e2838 | Pod1  | cinderv2 | volumev2   
| True| internal  | http:

[openstack-dev] [swift] zones and partition

2016-02-21 Thread Kirubakaran Kaliannan
Hi,



I have 3 zones, with different capacity in each. Say I have 4 x 1TB disks
(r0z1 - 1TB, r0z2 - 1TB, r0z3 - 2TB).



The ring builder (rebalance code) keeps ¼-partitions of all 3 replicas in
Zone-3. This is the current default behavior of the rebalance code.

This puts pressure on the storage user to increase the storage capacity
evenly across the zones. Is this the correct understanding?



If so, why have we chosen this approach rather than enforcing zone-based
partition placement (where the partition size on Z1 and Z2 may be smaller than on Z3)?

That would make sure we have 100% zone-level protection and no loss of data on
a single-zone failure?



Thanks,

-kiru
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-21 Thread joehuang
Hello, Ian and Jay,

The issue that this use case addresses is described in more detail here:

Telecom operators often run dozens of data centers; more than 10 data centers
is quite normal, and these data centers are geographically distributed, with
lots of small edge data centers for fast media/data transfer.

There are two ways to manage images in such a cloud with many geographically
distributed data centers involved:

1. Use a shared Glance for all data centers.
  The Glance interface, driver and backend need to support distributing an image
to all data centers, on demand or automatically.

  Suppose a new image was uploaded to Glance in DC1 (or to the backend storage
in DC1, with the location registered on the Glance image), but the user wants to boot
a new virtual machine in any of the other data centers, for example DC2 or DC3, ...
DCn. Do we have to download the image from DC1 when booting a new VM in each
other data center? Is there any data-center-level image cache mechanism
supported in Glance image management?

  How do we deal with the use case of creating an image from a VM (or volume) in
DCn but booting a VM (or volume) in DCm, in a dozens-of-data-centers
scenario?

  Is there any Glance driver and backend that can replicate an image to dozens
of data centers on demand or automatically? I have not found such a Glance
driver/backend yet. Even a single Swift instance is not able to support dozens of
data centers. Or do we keep an image repository outside Glance and upload the image
to each data center one by one? Then why do we have to do duplicated image management
outside Glance?

  Is there any interface in Glance that can tell Glance to replicate an image from
one location to another? No, I have not found such an interface in Glance, let alone
a driver/backend to support it.

  Making the Glance registry / DB / API distributed across dozens of data
centers is quite similar to Keystone, where the lightweight Fernet token
is supported to enable the distribution. But the difference is how to deal with
the bulk image data, and how to avoid downloading the image across data centers
each time.

2. Use a separate Glance for each data center, with image import capability.

  An end user is able to import an image from another Glance in another OpenStack
cloud while sharing the same identity management (Keystone). This is the preferred
proposal, for the following reasons:

  1) A crash in one data center should not affect another data center's service, so
OpenStack services in each data center should be kept as independent as possible. The
only exception is Keystone, because of the requirement that "a user
should, using a single authentication point, be able to manage virtual resources
spread over multiple OpenStack regions."
https://gerrit.opnfv.org/gerrit/#/c/1357/6/multisite-identity-service-management.rst
 . Of course, someone may use Keystone federation for this purpose, but
inter-federation is not recommended for dozens of data centers.

  2) If no cross-Glance image import capability is supported, then we have to
use a 3rd-party tool to download an image from the Glance in DCn and then upload it
to the Glance in DCm. The image data bits need to be passed through the tool (one more
data-plane bottleneck), these images have to be managed outside Glance, and upper-layer
software like MANO has to deal with a non-Glance interface (see the sketch of such a
copy step after this list).

  3) Using Swift as the shared backend for multiple Glance services in different
data centers is applicable only to a very limited number of data centers; it can't
support dozens of them. So for dozens of data centers, multiple Glance
services with different backends is still an issue to address.

It's reasonable to have one solution that addresses the NFV scenario. If you have
other ideas for multi-data-center image management, please share your
thoughts in this thread.
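
A rough sketch of the copy step described in 2) above, using python-glanceclient's
v2 API (endpoints, token handling and image properties are placeholders; this only
illustrates the extra data-plane hop through the copying host, not a recommended
tool):

```python
from glanceclient import Client

def copy_image(src_endpoint, dst_endpoint, token, image_id):
    """Download an image from the Glance in DCn and re-upload it to DCm.

    Every byte of the image flows through the machine running this code,
    which is exactly the extra data-plane bottleneck described above.
    """
    src = Client('2', endpoint=src_endpoint, token=token)
    dst = Client('2', endpoint=dst_endpoint, token=token)

    image = src.images.get(image_id)
    new = dst.images.create(name=image.name,
                            disk_format=image.disk_format,
                            container_format=image.container_format)
    # images.data() streams the bits down; images.upload() pushes them out again.
    dst.images.upload(new.id, src.images.data(image_id))
    return new.id
```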

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Ian Cordasco [mailto:sigmaviru...@gmail.com] 
Sent: Saturday, February 20, 2016 3:11 AM
To: Jay Pipes; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

 

-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 19, 2016 at 06:45:38
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

> On 02/18/2016 10:29 PM, joehuang wrote:
> > There is difference between " An end user is able to import image 
> > from another Glance
> in another OpenStack cloud while sharing same identity management( KeyStone )"
>  
> This is an invalid use case, IMO. What's wrong with exporting the 
> image from one OpenStack cloud and importing it to another? What does 
> a shared identity management service have to do with anything?

I have to agree with Jay. I'm not sure 

Re: [openstack-dev] OpenStack Contributor Awards

2016-02-21 Thread Hugh Blemings

Hiya,

On 16/02/2016 21:43, Tom Fifield wrote:

Hi all,

I'd like to introduce a new round of community awards handed out by the
Foundation, to be presented at the feedback session of the summit.

Nothing flashy or starchy - the idea is that these are to be a little
informal, quirky ... but still recognising the extremely valuable work
that we all do to make OpenStack excel.

[...]

>

in the meantime, let's use this thread to discuss the fun part: goodies.
What do you think we should lavish award winners with? Soft toys?
Perpetual trophies? baseball caps ?


I can't help but think that given the scale of a typical OpenStack 
deployment and the desire for these awards to be a bit quirky, giving 
recipients something at the other end of the computing scale - an 
Arduino, or cluster of Raspberry Pis or similar could be kinda fun :)


Cheers,
Hugh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] Proposing Angus Salkeld for kolla-core

2016-02-21 Thread Ryan Hallisey
+1.  Nice work Angus!

-Ryan

> On Feb 19, 2016, at 11:51 PM, Michał Jastrzębski  wrote:
> 
> +1 on condition that he will appear in kolla itself, after
> all...you'll be a kolla core as well right?;)
> 
>> On 19 February 2016 at 21:44, Sam Yaple  wrote:
>> +1 of course. I mean, its Angus. Who can say no to Angus?
>> 
>> Sam Yaple
>> 
>> On Fri, Feb 19, 2016 at 10:57 PM, Michal Rostecki 
>> wrote:
>>> 
 On 02/19/2016 07:04 PM, Steven Dake (stdake) wrote:
 
 Angus is already in kolla-mesos-core but doesn't have broad ability to
 approve changes for all of kolla-core.  We agreed by majority vote in
 Tokyo that folks in kolla-mesos-core that integrated well with the
 project would be moved from kolla-mesos-core to kolla-core.  Once
 kolla-mesos-core is empty, we will deprecate that group.
 
 Angus has clearly shown his commitment to Kolla:
 He is #9 in reviews for Mitaka and #3 in commits(!), and shows a
 solid PDE of 64 (meaning 64 days of interaction with either reviews,
 commits, or mailing list participation).
 
 Count my vote as a +1.  If you're on the fence, feel free to abstain.  A
 vote of –1 is a VETO vote, which terminates the voting process.  If
 there is unanimous approval prior to February 26, or a veto vote, the
 voting will be closed and appropriate changes made.
 
 Remember now we agreed it takes a majority vote to approve a core
 reviewer, which means Angus needs a +1 support from at least 6 core
 reviewers with no veto votes.
 
 Regards,
 -steve
>>> 
>>> +1
>>> Good job, Angus!
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Arkady_Kanevsky
With Nova and Keystone both at v3, it helps to have consistent versioning across
all projects.
We still need documentation for transitioning clients from one API version to the next.
Since the new functionality is simply not available in the previous version, that
should be easier to handle than changes to existing API behaviour.


-Original Message-
From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
Sent: Friday, February 19, 2016 4:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] adding a new /v3 endpoint for 
api-microversions


>> But, there are no such clients today. And there is no library that
>> does this yet. It will be 4 - 6 months (or even more likely 12+)
>> until that's in the ecosystem. Which is why adding the header
>> validation to existing
>> v2 API, and backporting to liberty / kilo, will provide really
>> substantial coverage for the concern the bswartz is bringing forward.
> Yeah, I have to agree with that. We can certainly have the protection
> out in time.
>
> The only concern there is the admin who set up his Kilo initial
> release cloud and doesn't want to touch it for updates. But they
> likely have more pressing issues than this any way.
>
>> -Sean
>>
>>

Not that I'm adding much to this conversation that hasn't been said already, 
but I am pro v2 API, purely because of how painful and long it's been to get 
the official OpenStack projects to adopt the v2 API from v1. I know we need to 
be somewhat concerned about other 'clients'
that call the API, but for me that's way down the list of concerns.
If we go to a v3 API, most likely it's going to be another 3+ years before folks
can use the new Cinder features that the microversioned changes will provide.
This in effect invalidates the microversion capability in Cinder's API 
completely.

/sadness
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-21 Thread Chris Dent

On Sun, 21 Feb 2016, Jay Pipes wrote:

I don't see how the shared-state scheduler is getting the most accurate 
resource view. It is only in extreme circumstances that the resource-provider 
scheduler's view of the resources in a system (all of which is stored without 
caching in the database) would differ from the "actual" inventory on a 
compute node.


I'm pretty sure this paragraph is central to the whole discussion. It's a
question of where the final truth lies and what that positioning allows
and forbids. In resource-providers, the truth, or at least the truth
that is acted upon, is in the database. In shared-state, the scheduler
mirrors the resources. People have biases about that sort of stuff.

Generalizing quite a bit:

All that mirroring costs quite a bit in communication terms and can go
funky if the communication goes awry. But it does mean that the compute
nodes are authoritative about themselves and have the possibility of
using/claiming/placing resources that are not under control of the
scheduler (or even nova in general).

Centralizing things in the DB cuts way back on messaging and appears to
provide both a computationally and conceptually efficient way of
calculating placement, but it does so at the cost of the compute nodes
having less flexibility about managing their own resources, unless we want
the failure mode you describe elsewhere to be more common than you
implied.

I heard somewhere, but this may be wrong or out of date, that one of the
constraints with compute-nodes is that it should be possible to spawn
VMs on them that are not managed by nova. If, in the full-blown
version of the resource-provider-based scheduler, we are not sending
resource usage updates to the scheduler DB on compute-node state changes
but only on failure, the retry rate goes up in a
heterogeneous environment. That could well be fine, a price you pay,
but I wonder if it is a concern?

I could get into some noodling here about the artifact world versus
the real world, but that's probably belaboring the point. I'm not
trying to diss or support either approach, just flesh out some of
the gaps in at least my understanding.


b) Simplicity

Goes to the above point about debuggability, but I've always tried to follow 
the mantra that the best software design is not when you've added the last 
piece to it, but rather when you've removed the last piece from it and still 
have a functioning and performant system. Having a scheduler that can tackle 
the process of tracking resources, deciding on placement, and claiming those 
resources instead of playing an intricate dance of keeping state caches valid 
will, IMHO, lead to a better scheduler.


I think it is moving in the right direction. Removing the dance of
keeping state caches valid will be a big improvement.

Better still would be removing the duplication and persistence of
information that already exists on the compute nodes. That would be
really cool, but doesn't yet seem possible with the way we do messaging
nor with the way we track shared resources (resource-pools ought to
help).

--
Chris Dent   (╯°□°)╯︵┻━┻ http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Doug Wiegley

> On Feb 21, 2016, at 10:38 AM, Steven Dake (stdake)  wrote:
> 
> Armando,
> 
> I apologize if neutron does not have a limit of 2 core reviewers per company 
> – I had heard this through the grapevine but a google search of the mailing 
> list shows no such limitation.

It goes back to what Armando mentioned. If I don’t trust my fellow core 
reviewers, for *whatever reason*, we have much bigger problems than company 
affiliation.

I was told when I joined that the same company shouldn’t +2/+2/+A, which I 
follow, but even then, it’s a judgement call. I mean, who cares if the same 
company merges proposal bot? I certainly don’t. Nor would I think ill of it 
even for a second for a gate fix or the like.

It’s usually pretty obvious when one entity is trying to shove something in to 
the detriment of the project. And I’d rather just have a conversation with 
those folks at the time, and deal with the social problem, rather than trying 
to pass a million bureaucratic rules to cover every what-if. It’s not that I 
like having difficult conversations; I just like a world with a million little 
rules even less.

doug


> 
> Regards
> -steve
> 
> 
> From: "Armando M." mailto:arma...@gmail.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Sunday, February 21, 2016 at 9:38 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [kolla] discussion about core reviewer 
> limitations by company
> 
> 
> 
> On 20 February 2016 at 12:58, Steven Dake (stdake) wrote:
> Neutron, the largest project in OpenStack by active committers and reviewers 
> as measured by the governance repository teamstats tool, has a limit of 2 
> core reviewers per company.  They do that for a reason.  I expect Kolla will 
> grow over time (we are about 1/4 their size in terms of contributors and 
> reviewers).  I believe other projects besides Neutron that already have good
> diversity follow a similar pattern (and intend to keep it in place).
> 
> Where did you find this information? I do not believe this is true. I agree 
> wholeheartedly with Joshua: I personally value the judgement of the people I 
> trust rather than looking at affiliation. 
>  
> 
> Regards
> -steve
> 
> 
> From: Gal Sagie <gal.sa...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Saturday, February 20, 2016 at 10:38 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla] discussion about core reviewer 
> limitations by company
> 
> I think setting these limits is wrong; some companies have more overall
> representation than others.
> The core reviewer job should be on a personal basis and not on a company
> basis. I think the PTL of each project needs
> to make sure the diversity and the community voice is heard in each project
> and the correct path is taken, even if
> many (or even all) of the cores are from the same company.
> If you really want to set limits, then I would go with something like "2 cores
> from the same company cannot +2 the same patch", but
> again, I am against such things personally..
> 
> Disclaimer: i am not personally involved in Kolla or know how things are 
> running there.
> 
> On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake) wrote:
> Hey folks,
> 
> Mirantis has been developing a big footprint in the core review team, and Red 
> Hat already has a big footprint in the core review team.  These are all good 
> things, but I want to avoid in the future a situation in which one company 
> has a majority of core reviewers.  Since core reviewers set policy for the 
> project, the project could be harmed if one company has such a majority.  
> This is one reason why project diversity is so important and has its own 
> special snowflake tag in the governance repository.
> 
> I'd like your thoughts on how to best handle this situation, before I trigger 
>  a vote we can all agree on.
> 
> I was thinking of something simple like:
> "1 company may not have more then 33% of core reviewers.  At the conclusion 
> of PTL elections, the current cycle's 6 months of reviews completed will be 
> used as a metric to select the core reviewers from that particular company if 
> the core review team has shrunk as a result of removal of core reviewers 
> during the cycle."
> 
> Thoughts, comments, questions, concerns, etc?
> 
> Regards,
> -steve
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 

Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-21 Thread Jay Pipes
Yingxin, sorry for the delay in responding to this thread. My comments 
inline.


On 02/17/2016 12:45 AM, Cheng, Yingxin wrote:

To better illustrate the differences between the shared-state,
resource-provider and legacy schedulers, I've drawn 3 simplified pictures
[1] emphasizing the location of the resource view, the location of claims
and resource consumption, and the resource update/refresh pattern in the
three kinds of schedulers. Hoping I'm correct in the "resource-provider
scheduler" part.


No, the diagram is not correct for the resource-provider scheduler.

Problems with your depiction of the resource-provider scheduler:

1) There is no proposed cache at all in the resource-provider scheduler 
so all the arrows for "cache refresh" can be eliminated.


2) Claims of resource amounts are done in a database transaction 
atomically within each scheduler process. Therefore there are no "cache 
updates" arrows going back from compute nodes to the resource-provider 
DB. The only time a compute node would communicate with the 
resource-provider DB (and thus the scheduler at all) would be in the 
case of a *failed* attempt to initialize already-claimed resources.
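
As a rough illustration of that claim-in-one-transaction pattern (the table and
column names below are invented for the example, not the actual Nova or placement
schema):

```python
import sqlalchemy as sa

# Illustrative schema only -- not the real resource-provider tables.
metadata = sa.MetaData()
inventories = sa.Table(
    'inventories', metadata,
    sa.Column('resource_provider_id', sa.Integer, primary_key=True),
    sa.Column('resource_class', sa.String(32), primary_key=True),
    sa.Column('total', sa.Integer, nullable=False),
    sa.Column('used', sa.Integer, nullable=False, default=0),
)

def claim(engine, rp_id, resource_class, amount):
    """Atomically claim `amount` of a resource, or fail without side effects.

    The WHERE clause guards against exceeding capacity, so two concurrent
    schedulers cannot both succeed past the limit -- no cache, no messaging.
    """
    with engine.begin() as conn:
        result = conn.execute(
            inventories.update()
            .where(inventories.c.resource_provider_id == rp_id)
            .where(inventories.c.resource_class == resource_class)
            .where(inventories.c.used + amount <= inventories.c.total)
            .values(used=inventories.c.used + amount)
        )
        return result.rowcount == 1  # True if the claim succeeded
```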



A point of view from my analysis in comparing three schedulers (before
real experiment):

1. Performance: The performance bottleneck of the resource-provider and
legacy schedulers comes from the centralized DB and scheduler cache
refreshing.


You must first prove that there is a bottleneck with the 
resource-provider scheduler.


> It can be alleviated by changing to a stand-alone high-performance
> database.


It doesn't need to be high-performance at all. In my benchmarks, a 
small-sized stock MySQL database server is able to fulfill thousands of 
placement queries and claim transactions per minute using completely 
isolated non-shared, non-caching scheduler processes.


> And the cache refreshing is designed to be
> replaced by direct SQL queries according to the resource-provider
> scheduler spec [2].


Yes, this is correct.

> The performance bottleneck of the shared-state scheduler
> may come from the overwhelming update messages; it can also be
> alleviated by changing to a stand-alone distributed message queue and by
> using the "MessagePipe" to merge messages.


In terms of the number of messages used in each design, I see the 
following relationship:


resource-providers < legacy < shared-state-scheduler

would you agree with that?

The resource-providers proposal actually uses no update messages at all 
(except in the abnormal case of a compute node failing to start the 
resources that had previously been claimed by the scheduler). All 
updates are done in a single database transaction when the claim is made.


The legacy scheduler has each compute node sending an update message
(actually a database update in the form of ComputeNode.save() that
is done at the completion of the local nova.compute.claims.Claim()
context manager). In addition to these update messages, the legacy
scheduler has a problem with retries (because the scheduler operates on
non-fresh data when there is more than one scheduler process and they
both make the same placement decision).


The shared-state scheduler has the most amount of update messages. It 
sends an update message to each scheduler in the system every time 
anything at all happens on the compute node, in addition to messages 
involving claims -- sending, confirming and timing them out -- all of 
which affect each scheduler process' state cache.



2. Final decision accuracy: I think the accuracy of the final decision
are high in all three schedulers, because until now the consistent
resource view and the final resource consumption with claims are all in
the same place. It’s resource trackers in shared-state scheduler and
legacy scheduler, and it’s the resource-provider db in resource-provider
scheduler.


Agreed, I don't believe the final decision accuracy will be affected 
much by the three designs. It's the speed by which the decision can be 
reached and the concurrency at which placement decisions can be made 
that are the differing metrics we are measuring.



3. Scheduler decision accuracy: IMO the order of accuracy of a single
schedule decision is resource-provider > shared-state >> legacy
scheduler. The resource-provider scheduler can get the accurate resource
view directly from db. Shared-state scheduler is getting the most
accurate resource view by constantly collecting updates from resource
trackers and by tracking the scheduler claims from schedulers to RTs.
Legacy scheduler’s decision is the worst because it doesn’t track its
claims and get resource views from compute nodes records which are not
that accurate.


I don't see how the shared-state scheduler is getting the most accurate 
resource view. It is only in extreme circumstances that the 
resource-provider scheduler's view of the resources in a system (all of 
which is stored without caching in the database) would differ from the 
"actual" inventory on a 

Re: [openstack-dev] [api] header non proliferation (that naming thing, _again_)

2016-02-21 Thread Jay Pipes

On 02/21/2016 12:50 PM, Chris Dent wrote:


In a recent api-wg meeting I set forth the idea that it is a bad idea
both to add lots of different headers and to add headers which
have meaning in the name of the header (rather than just in the value).
This proved to be a bit confusing, so I was asked to write it up. I
did:

 https://review.openstack.org/#/c/280381/

When I did, the best example for how _not_ to do things is the way in
which we are currently doing microversion headers.

So two questions:

* Is my position on header non proliferation right?


Yes, I believe so.


* Is it so right that we should consider doing microversions
   differently?


Ship has sailed on a number of things, including this. I *do* think it 
would be great to just use OpenStack-API-Version: $SERVICE_TYPE X.Y, 
however we'll need to add another microversion to support that of 
course. Isn't it ironic? Don't you think?
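
To make the contrast concrete, a small illustration (the microversion values,
token and endpoint below are placeholders; Nova's per-service header is shown
only as an example of meaning carried in the header *name*):

```python
import requests

# Per-service style: the service is encoded in the header name itself.
nova_style = {'X-OpenStack-Nova-API-Version': '2.25'}

# Single-header style discussed above: one fixed name, with the service
# type and version carried in the value.
proposed = {'OpenStack-API-Version': 'compute 2.25'}

# Placeholder endpoint and token, purely for illustration.
resp = requests.get('http://compute.example.com:8774/v2.1/servers',
                    headers=dict(proposed, **{'X-Auth-Token': 'TOKEN'}))
print(resp.status_code)
```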


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Gary Kotton
I think that IBM has a very interesting policy in that two IBM cores should not 
approve a patch posted by one of their colleagues (that is what Chris RIP used 
to tell me). It would be nice if the community would follow this policy.
Thanks
Gary

From: "Armando M." mailto:arma...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Sunday, February 21, 2016 at 6:40 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [kolla] discussion about core reviewer limitations 
by company



On 20 February 2016 at 14:06, Kevin Benton
<ke...@benton.pub> wrote:

I don't think neutron has a limit. There are 4 from redhat and 3 from hp and 
mirantis right now. https://review.openstack.org/#/admin/groups/38,members

By the way, technically speaking, some of those also limit themselves to
merging only in their area of expertise.

On Feb 20, 2016 13:02, "Steven Dake (stdake)"
<std...@cisco.com> wrote:
Neutron, the largest project in OpenStack by active committers and reviewers as 
measured by the governance repository teamstats tool, has a limit of 2 core 
reviewers per company.  They do that for a reason.  I expect Kolla will grow 
over time (we are about 1/4 their size in terms of contributors and reviewers). 
 I believe other projects besides Neutron that already have good diversity
follow a similar pattern (and intend to keep it in place).

Regards
-steve


From: Gal Sagie <gal.sa...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Saturday, February 20, 2016 at 10:38 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] discussion about core reviewer limitations 
by company

I think setting these limits is wrong; some companies have more overall
representation than others.
The core reviewer job should be on a personal basis and not on a company basis.
I think the PTL of each project needs
to make sure the diversity and the community voice is heard in each project and
the correct path is taken, even if
many (or even all) of the cores are from the same company.
If you really want to set limits, then I would go with something like "2 cores
from the same company cannot +2 the same patch", but
again, I am against such things personally..

Disclaimer: i am not personally involved in Kolla or know how things are 
running there.

On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake) 
<std...@cisco.com> wrote:
Hey folks,

Mirantis has been developing a big footprint in the core review team, and Red 
Hat already has a big footprint in the core review team.  These are all good 
things, but I want to avoid in the future a situation in which one company has 
a majority of core reviewers.  Since core reviewers set policy for the project, 
the project could be harmed if one company has such a majority.  This is one 
reason why project diversity is so important and has its own special snowflake 
tag in the governance repository.

I'd like your thoughts on how to best handle this situation, before I trigger  
a vote we can all agree on.

I was thinking of something simple like:
"1 company may not have more then 33% of core reviewers.  At the conclusion of 
PTL elections, the current cycle's 6 months of reviews completed will be used 
as a metric to select the core reviewers from that particular company if the 
core review team has shrunk as a result of removal of core reviewers during the 
cycle."

Thoughts, comments, questions, concerns, etc?

Regards,
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Regards ,

The G.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Duncan Thomas
On 21 February 2016 at 19:34, Jay S. Bryant 
wrote:

> Spent some time talking to Sean about this on Friday afternoon and bounced
> back and forth between the two options.  At first, /v3 made the most sense
> to me ... at least it did at the meetup.  With people like Sean Dague and
> Morgan Fainberg weighing in with concerns, it seems like we should
> reconsider.  Duncan, your comment here about customers moving when they are
> ready is somewhat correct.  That, however, I am concerned, is a small
> subset of the users.  I think many users want to move but don't know any
> better.  That was what we encountered with our consumers.  They didn't
> understand that they needed to update the endpoint and couldn't figure out
> why their new functions weren't working.
>
> So, I am leaning towards going with the /v2 endpoint and making sure that
> the clients we can control are set up properly and we put safety checks in
> the server end.  I think that may be the safest way to go.
>

So we can't get users to change endpoints, or write our libraries to have
sensible defaults, but we're somehow going to magically get consumers to do
the much harder job of doing version probes in their code/libraries so that
they don't get surprised by unexpected results? This seems to be entirely
nuts. If 'they' can't change endpoints (and we can't make the libraries we
write just do the right thing without needing to change endpoints) then how
are 'they' expected to do the probing magic that will be required at some
unpredictable point in the future, but which you'll get away without until
then?
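
(For the record, the "probing" being argued about would look roughly like the
sketch below; the min_version/version fields are assumed from the nova-style
version document rather than anything cinder has committed to, so treat it as
an illustration only.)

    import requests

    def microversion_range(endpoint):
        # GET the unversioned API root and look for a version entry that
        # advertises a microversion range.
        doc = requests.get(endpoint).json()
        for ver in doc.get("versions", []):
            if ver.get("min_version") and ver.get("version"):
                return ver["min_version"], ver["version"]
        return None  # no microversion support advertised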

This would also make us inconsistent with the other projects that have
implemented microversions - so we're changing a known working pattern, to
try to avoid the problem of a user having to get their settings right if
they want new functionality, and hoping this doesn't introduce entirely
predictable and foreseeable bugs in the future that can't actually be fixed
except by checking/changing every client library out there? There's no way
that's a sensible API design.


--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] header non proliferation (that naming thing, _again_)

2016-02-21 Thread Chris Dent


In a recent api-wg meeting I set forth the idea that it is both a
bad idea to add lots of different headers and to add headers which
have meaning in the name of the header (rather than just the value).
This proved to be a bit confusing, so I was asked to write it up. I
did:

https://review.openstack.org/#/c/280381/

When I did, the best example for how _not_ to do things is the way in
which we are currently doing microversion headers.

So two questions:

* Is my position on header non proliferation right?
* Is it so right that we should consider doing microversions
  differently?
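
To make the contrast concrete, a minimal sketch (python-requests; the nova
header name reflects current practice, the single generic header is only an
illustration of the alternative, and the endpoint/token are placeholders):

    import requests

    auth = {"X-Auth-Token": "<token>"}

    # Meaning in the header *name*: every service mints its own header.
    requests.get("http://nova.example.com/v2.1/servers",
                 headers=dict(auth, **{"X-OpenStack-Nova-API-Version": "2.12"}))

    # Meaning in the header *value*: one shared header, service in the value.
    requests.get("http://nova.example.com/v2.1/servers",
                 headers=dict(auth, **{"OpenStack-API-Version": "compute 2.12"}))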

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Steven Dake (stdake)
Armando,

I apologize if neutron does not have a limit of 2 core reviewers per company – 
I had heard this through the grapevine but a google search of the mailing list 
shows no such limitation.

Regards
-steve


From: "Armando M." <arma...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Sunday, February 21, 2016 at 9:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] discussion about core reviewer limitations 
by company



On 20 February 2016 at 12:58, Steven Dake (stdake) <std...@cisco.com> wrote:
Neutron, the largest project in OpenStack by active committers and reviewers as 
measured by the governance repository teamstats tool, has a limit of 2 core 
reviewers per company.  They do that for a reason.  I expect Kolla will grow 
over time (we are about 1/4 their size in terms of contributors and reviewers). 
 I believe other projects follow a similar pattern besides Neutron that already 
have good diversity (and intend to keep it in place).

Where did you find this information? I do not believe this is true. I agree 
wholeheartedly with Joshua: I personally value the judgement of the people I 
trust rather than looking at affiliation.


Regards
-steve


From: Gal Sagie <gal.sa...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Saturday, February 20, 2016 at 10:38 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] discussion about core reviewer limitations 
by company

I think setting these limits is wrong; some companies have more overall 
representation than others.
The core reviewer job should be on a personal basis and not on a company basis, 
i think the PTL of each project needs
to make sure the diversity and the community voice is heard in each project and 
the correct path is taken even if
many (or even if all) of the cores are from the same company.
If you really want to set limits then i would go with something like 2 cores 
from the same company cannot +2 the same patch, but
again i am against such things personally..

Disclaimer: i am not personally involved in Kolla or know how things are 
running there.

On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake) <std...@cisco.com> wrote:
Hey folks,

Mirantis has been developing a big footprint in the core review team, and Red 
Hat already has a big footprint in the core review team.  These are all good 
things, but I want to avoid in the future a situation in which one company has 
a majority of core reviewers.  Since core reviewers set policy for the project, 
the project could be harmed if one company has such a majority.  This is one 
reason why project diversity is so important and has its own special snowflake 
tag in the governance repository.

I'd like your thoughts on how to best handle this situation, before I trigger  
a vote we can all agree on.

I was thinking of something simple like:
"1 company may not have more then 33% of core reviewers.  At the conclusion of 
PTL elections, the current cycle's 6 months of reviews completed will be used 
as a metric to select the core reviewers from that particular company if the 
core review team has shrunk as a result of removal of core reviewers during the 
cycle."

Thoughts, comments, questions, concerns, etc?

Regards,
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Regards ,

The G.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-21 Thread Jay S. Bryant



On 02/20/2016 04:42 PM, Duncan Thomas wrote:



On 20 Feb 2016 00:21, "Walter A. Boring IV" > wrote:


> Not that I'm adding much to this conversation that hasn't been said 
already, but I am pro v2 API, purely because of how painful and long 
it's been to get the official OpenStack projects to adopt the v2 API 
from v1.


I think there's a slightly different argument here. We aren't taking 
away the v2 API, probably ever. Clients that are satisfied with it can 
continue to use it, as it is, forever. For clients that aren't trying 
to do anything beyond the current basics will quite possibly be happy 
with that. Consumers have no reason to change over without compelling 
value from the change - that will come from what we implement on top 
of microversions, or not. Unlike the v1 transition, we aren't trying 
to get rid of v2, just stop changing existing semantics of it.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Spent some time talking to Sean about this on Friday afternoon and 
bounced back and forth between the two options.  At first, /v3 made the 
most sense to me ... at least it did at the meetup.  With people like 
Sean Dague and Morgan Fainberg weighing in with concerns, it seems like 
we should reconsider.  Duncan, your comment here about customers moving 
when they are ready is somewhat correct.  That, however, I am concerned, 
is a small subset of the users.  I think many users want to move but 
don't know any better.  That was what we encountered with our 
consumers.  They didn't understand that they needed to update the 
endpoint and couldn't figure out why their new functions weren't working.


So, I am leaning towards going with the /v2 endpoint and making sure 
that the clients we can control are set up properly and we put safety 
checks in the server end.  I think that may be the safest way to go.


Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][horizon] translation support in sahara-dashboard and INSTALLED_APPS

2016-02-21 Thread Akihiro Motoki
Hi sahara-dashboard team (perhaps horizon and sahara dev teams too),

I am working on translation support in sahara-dashboard for feature
parity with Liberty horizon.

TL;DR: Why doesn't sahara-dashboard use 'sahara_dashboard' in INSTALLED_APPS?
Can't we add 'sahara_dashboard' to INSTALLED_APPS?

Long version:

sahara-dashboard now uses

  ADD_INSTALLED_APPS = \
  ["sahara_dashboard.content.data_processing",
   "sahara_dashboard.content.data_processing.clusters", ]

instead of 'sahara_dashboard' as most horizon plugins do.

On the other hand, the translation setup infra scripts use
sahara_dashboard.locale as the locale directory,
and Django searches $app/locale for each app specified in
INSTALLED_APPS.

To make translation work for sahara-dashboard, we need to place the
'locale' directory in one of the INSTALLED_APPS directories.
It seems the easiest way is to add 'sahara_dashboard' to INSTALLED_APPS
without changing the others.
As far as I checked the sahara-dashboard repo, adding it to INSTALLED_APPS
causes no problem.
I proposed https://review.openstack.org/282883 based on my
investigation results.
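
For illustration, a rough sketch of the shape of that change in the plugin's
enabled file (the file name and the exact existing entries below are
assumptions based on the snippet above, not the actual patch):

  # hypothetical sahara_dashboard/enabled/_18xx_data_processing.py fragment
  ADD_INSTALLED_APPS = [
      "sahara_dashboard",  # lets Django find sahara_dashboard/locale
      "sahara_dashboard.content.data_processing",
      "sahara_dashboard.content.data_processing.clusters",
  ]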

Note: I am not sure why the current ADD_INSTALLED_APPS entries were chosen.
However, changing all of them seems problematic because template paths
seem to depend on INSTALLED_APPS.

If it does not work, I would like to have more ideas from sahara-dashboard team
on how to support translation in Mitaka.

Thanks in advance!

Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Armando M.
On 20 February 2016 at 14:06, Kevin Benton  wrote:

> I don't think neutron has a limit. There are 4 from redhat and 3 from hp
> and mirantis right now.
> https://review.openstack.org/#/admin/groups/38,members
>

By the way, technically speaking, some of those also limit their own merge
rights to their area of expertise.


> On Feb 20, 2016 13:02, "Steven Dake (stdake)"  wrote:
>
>> Neutron, the largest project in OpenStack by active committers and
>> reviewers as measured by the governance repository teamstats tool, has a
>> limit of 2 core reviewers per company.  They do that for a reason.  I
>> expect Kolla will grow over time (we are about 1/4 their size in terms of
>> contributors and reviewers).  I believe other projects follow a similar
>> pattern besides Neutron that already have good diversity (and intend to
>> keep it in place).
>>
>> Regards
>> -steve
>>
>>
>> From: Gal Sagie 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Saturday, February 20, 2016 at 10:38 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [kolla] discussion about core reviewer
>> limitations by company
>>
>> I think setting these limits is wrong, some companies have more overall
>> representation then others.
>> The core reviewer job should be on a personal basis and not on a company
>> basis, i think the PTL of each project needs
>> to make sure the diversity and the community voice is heard in each
>> project and the correct path is taken even if
>> many (or even if all) of the cores are from the same company.
>> If you really want to set limits then i would go with something like 2
>> cores from the same company cannot +2 the same patch, but
>> again i am against such things personally..
>>
>> Disclaimer: i am not personally involved in Kolla or know how things are
>> running there.
>>
>> On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake) 
>> wrote:
>>
>>> Hey folks,
>>>
>>> Mirantis has been developing a big footprint in the core review team,
>>> and Red Hat already has a big footprint in the core review team.  These are
>>> all good things, but I want to avoid in the future a situation in which one
>>> company has a majority of core reviewers.  Since core reviewers set policy
>>> for the project, the project could be harmed if one company has such a
>>> majority.  This is one reason why project diversity is so important and has
>>> its own special snowflake tag in the governance repository.
>>>
>>> I'd like your thoughts on how to best handle this situation, before I
>>> trigger  a vote we can all agree on.
>>>
>>> I was thinking of something simple like:
>>> "1 company may not have more then 33% of core reviewers.  At the
>>> conclusion of PTL elections, the current cycle's 6 months of reviews
>>> completed will be used as a metric to select the core reviewers from that
>>> particular company if the core review team has shrunk as a result of
>>> removal of core reviewers during the cycle."
>>>
>>> Thoughts, comments, questions, concerns, etc?
>>>
>>> Regards,
>>> -steve
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-21 Thread Armando M.
On 20 February 2016 at 12:58, Steven Dake (stdake)  wrote:

> Neutron, the largest project in OpenStack by active committers and
> reviewers as measured by the governance repository teamstats tool, has a
> limit of 2 core reviewers per company.  They do that for a reason.  I
> expect Kolla will grow over time (we are about 1/4 their size in terms of
> contributors and reviewers).  I believe other projects follow a similar
> pattern besides Neutron that already have good diversity (and intend to
> keep it in place).
>

Where did you find this information? I do not believe this is true. I agree
wholeheartedly with Joshua: I personally value the judgement of the people
I trust rather than looking at affiliation.


>
> Regards
> -steve
>
>
> From: Gal Sagie 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Saturday, February 20, 2016 at 10:38 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla] discussion about core reviewer
> limitations by company
>
> I think setting these limits is wrong, some companies have more overall
> representation then others.
> The core reviewer job should be on a personal basis and not on a company
> basis, i think the PTL of each project needs
> to make sure the diversity and the community voice is heard in each
> project and the correct path is taken even if
> many (or even if all) of the cores are from the same company.
> If you really want to set limits then i would go with something like 2
> cores from the same company cannot +2 the same patch, but
> again i am against such things personally..
>
> Disclaimer: i am not personally involved in Kolla or know how things are
> running there.
>
> On Sat, Feb 20, 2016 at 7:09 PM, Steven Dake (stdake) 
> wrote:
>
>> Hey folks,
>>
>> Mirantis has been developing a big footprint in the core review team, and
>> Red Hat already has a big footprint in the core review team.  These are all
>> good things, but I want to avoid in the future a situation in which one
>> company has a majority of core reviewers.  Since core reviewers set policy
>> for the project, the project could be harmed if one company has such a
>> majority.  This is one reason why project diversity is so important and has
>> its own special snowflake tag in the governance repository.
>>
>> I'd like your thoughts on how to best handle this situation, before I
>> trigger  a vote we can all agree on.
>>
>> I was thinking of something simple like:
>> "1 company may not have more then 33% of core reviewers.  At the
>> conclusion of PTL elections, the current cycle's 6 months of reviews
>> completed will be used as a metric to select the core reviewers from that
>> particular company if the core review team has shrunk as a result of
>> removal of core reviewers during the cycle."
>>
>> Thoughts, comments, questions, concerns, etc?
>>
>> Regards,
>> -steve
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] Publishing kolla images to docker-registry.openstack.org

2016-02-21 Thread Michał Jastrzębski
I'd say 5 gigs should be enough for all the images per distro (maybe
less if we have to squeeze). Since we have 2 strongly supported
distros, that's 10 gigs. If we would like to add all the distros we
support, that's 20-25 gigs (I think). It also depends on how many older
versions we want to keep (current+stable would be the absolute minimum;
we might increase it to milestones). We have lots of options to tweak
so no one will get hurt, and if we have a dedicated machine for us
(which we should, because apart from disk space the registry can
actually eat up lots of IOPS; it can be a VM, though, with a disk that
can handle that), I think any dedicated, industry-standard disk should
be enough (but an SSD would be great).

Cheers,
Michal

On 20 February 2016 at 16:14, Ricardo Carrillo Cruz
 wrote:
> Hi Steve
>
> When you say the registry would require a machine with plenty of disk space,
> do you have an estimate of storage needed?
>
> Regards
>
> 2016-02-20 14:21 GMT+01:00 Steven Dake (stdake) :
>>
>> Infra folks,
>>
>> I'd like to see a full CI/CD pipeline of Kolla to an OpenStack
>> infrastructure hosted registry.
>>
>> With docker registry 2.2 and earlier, a Docker push of Kolla containers
>> took 5-10 hours.  This is because of design problems in Docker which made a
>> push upload each layer of each Docker image repeatedly.  This has been
>> rectified in docker-registry 2.3 (the latest hub-tagged docker registry).
>> The 5-10 hour upload times are now down to about 15 minutes: it takes
>> approximately 15 minutes to push all 115 kolla containers on a gigabit network.
>>
>> Kolla in general wants to publish to a docker registry at least per tag,
>> and possibly per commit (or alternatively daily).  We already build Kolla
>> images in the gate, and although sometimes our jobs time out on CentOS the
>> build on Ubuntu is about 12 minutes.  The reason our jobs time out on CentOS
>> is because we lack local to the infrastructure mirrors as is available on
>> Ubuntu from a recent patch I believe that Monty offered.
>>
>> We have one of two options going forward
>>
>> We could publish to the docker hub registry
>> We could publish to docker-registry.openstack.org
>>
>> Having a docker-registry.openstack.org would be my preference, but
>> requires a machine with plenty of disk space and a copy of docker 1.10.1 or
>> later running on it.  The docker-registry 2.3 and later runs as a container
>> inside Docker.  The machine could be Ubuntu or CentOS – we have gate scripts
>> for both that do the machine setup which the infrastructure team could begin
>> with[1][2].  I don't care which distro is used for the docker registry – it
>> really shouldn't matter, as it will be super lightweight and really only
>> needs a /var/lib/docker that is fast and large.  Kolla devs can help get the
>> docker registry set up and provide guidance to the infrastructure team on how
>> to set up Docker, but I'm unclear if OpenStack has resources to make this
>> particular request happen.
>>
>> NB the machine need not be baremetal – it  really doesn't matter.  It does
>> need fast bi-directional networking and fast disk IO to meet the gate
>> timeout requirements and Operator requirements that a pull is speedy.  The
>> other change needed is a CentOS mirror internal to the infrastructure, so
>> our CentOS jobs don't time out and we can push per commit (or we could add a
>> nightly job).
>>
>> This is something new OpenStack hasn't done before, so feedback from the
>> infrastructure team welcome if that team is willing to maintain a
>> docker-registry.openstack.org.  The other challenge here will be
>> authentication – we setup our gate Docker without TLS because we throw away
>> the VMs but infra will want to setup TLS with the docker registry.  Folks
>> wanting to use the docker reigstry service from OpenStack will need to be
>> able to put TLS credentials in the gating in some way.  I'm not sure we want
>> to just check these credentials into our repository – which means they need
>> to somehow be injected into our VMs to protect the security of the Docker
>> images.
>>
>> If infra decides they don’t want to take on a
>> docker-registry.openstack.org, guidance on how to get our credentials
>> securely into our built VM would be helpful.
>>
>> One final note – Docker can be setup to use Swift as a storage backend, or
>> alternatively can use straight up disk space on the node.  It can also
>> publish to an AWS storage backend and has many other storage backend modes.
>>
>> Regards
>> -steve
>>
>>
>> [1] https://github.com/openstack/kolla/blob/master/tools/setup_RedHat.sh
>> [2] https://github.com/openstack/kolla/blob/master/tools/setup_Debian.sh
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __

Re: [openstack-dev] [kolla][vote] port neutron thin containers to stable/liberty

2016-02-21 Thread Michał Jastrzębski
So for thin containers, as opposed to data containers, there is no
migration script needed whatsoever. All it takes is to tear down
neutron-agents and start thin containers.

On 21 February 2016 at 06:47, Jeffrey Zhang  wrote:
> I like the thin container idea, and I am +1 too. But the only concern is
> that we MUST provide a robust migrate script( or Ansible role task) to do
> the convert stuff. Doesn't we have enough time for this?
>
> On Sun, Feb 21, 2016 at 3:44 PM, Michal Rostecki 
> wrote:
>>
>> On 02/20/2016 05:39 PM, Steven Dake (stdake) wrote:
>>>
>>> Sam,
>>>
>>> I seem to recall Paul was not in favor, so there was not a majority of
>>> cores there.  There were 6 core reviewers at the midcycle, and if you
>>> only count kolla-core (which at this time I do for policy changes) that
>>> means we had a vote of 5.  We have 11 core reviewers, so we need a vote
>>> of 6+ for simple majority. I was also sort of –1 because it is an
>>> exception, but I do agree the value is warranted.  I believe I expressed
>>> at  the midcycle that I was –1 to the idea, atleast until the broader
>>> core review team voted.  If I wasn't clear on that, I apologize.
>>>
>>> I'll roll with the community on this one unless I have to tie break –
>>> then groan :)
>>>
>>> That is why a decision was made by the group to take this to the mailing
>>> list.
>>>
>>> Regards
>>> -steve
>>>
>>> From: Sam Yaple <sam...@yaple.net>
>>> Reply-To: "s...@yaple.net " >> >, "OpenStack Development Mailing List (not for
>>> usage questions)" >> >
>>> Date: Saturday, February 20, 2016 at 9:32 AM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> >> >
>>> Subject: Re: [openstack-dev] [kolla][vote] port neutron thin containers
>>> to stable/liberty
>>>
>>> I was under the impression we did have a majority of cores in favor
>>> of the idea at the midcycle. But if this is a vote-vote, then I am a
>>> very strong +1 as well. This is something operators will absolutely
>>> want and and need.
>>>
>>> Sam Yaple
>>>
>>> On Sat, Feb 20, 2016 at 4:27 PM, Michał Jastrzębski <inc...@gmail.com> wrote:
>>>
>>> Strong +1 from me. This have multiple benefits:
>>> Easier (aka possible) debugging of networking in running envs
>>> (not
>>> having tools like tcpdump at your disposal is a pain) - granted,
>>> there
>>> are ways to get this working without thin containers but require
>>> fair
>>> amount of docker knowledge.
>>> Docker daemon restart will not break routers - currently with
>>> docker
>>> restart container with namespace dies and we lose our routers
>>> (they
>>> will migrate using HA, but well, still a networking downtime).
>>> This
>>> will no longer be the case so...
>>> Upgrades with no vm downtime whatsoever depends on this one.
>>> If we could deploy liberty code with all these nice stuff, I'd be
>>> happier person;)
>>>
>>> Cheers,
>>> Michal
>>>
>>> On 20 February 2016 at 07:40, Steven Dake (stdake) <std...@cisco.com> wrote:
>>> > Just clarifying, this is not a "revote" - there were not enough
>>> core
>>> > reviewers in favor of this idea at the Kolla midcycle, so we
>>> need to have a
>>> > vote on the mailing list to sort out this policy decision of
>>> managing
>>> > stable/liberty.
>>> >
>>> > Regards,
>>> > -steve
>>> >
>>> >
>>> > From: Steven Dake <std...@cisco.com>
>>> > Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)"
>>> > >> >
>>> > Date: Saturday, February 20, 2016 at 6:28 AM
>>> > To: "OpenStack Development Mailing List (not for usage
>>> questions)"
>>> > >> >
>>> > Subject: [openstack-dev] [kolla][vote] port neutron thin
>>> containers to
>>> > stable/liberty
>>> >
>>> > Folks,
>>> >
>>> > There were not enough core reviewers to pass a majority
>>> approval of the
>>> > neutron thin container backport idea, so we separated it out
>>> from fixing
>>> > stable/liberty itself.
>>> >
>>> > I am going to keep voting open for *2* weeks this time.  The
>>> reason for the
>>> > two weeks is I would like a week of discussion before people
>>> just blindly
>>> > vote ;)
>>> >
>>> > Voting begins now and concludes March 4th.  Since this is a
>>> policy decision,
>>> > no veto votes are permitted, just a +1 and a  -1.  Abstaining
>>> is the same as
>>> > voting –1.
>>> >
>>
>>
>> I'm +1, but under condition that we will provide some script to migrate
>> from supervisord-container to thin-containers (even if such a script will
>> bring risk of downtime of the cloud).

[openstack-dev] [nova] Network operations on shelve_offload'd servers

2016-02-21 Thread Shoham Peller
Hi,

A recently merged patch from 2 weeks ago allows attaching/detaching volumes to
a shelved_offloaded server:
https://review.openstack.org/#/c/259528/

Network operations on a shelved_offloaded server are currently not allowed.
There's a bug and a change proposal from a year-and-a-half ago regarding
this issue:
https://bugs.launchpad.net/nova/+bug/1299333
https://review.openstack.org/#/c/87081/

In the change, I see that core contributors Dan Smith and Andrew Laski
commented that such operations shouldn't be allowed on shelved servers.

Do the latest volume operations changes mean that approach has changed? I'd
like to revive this change and to re-propose it.

Thank you,
Shoham Peller
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-21 Thread Cheng, Yingxin
On 19 February 2016 at 5:58, John Garbutt wrote:
> On 17 February 2016 at 17:52, Clint Byrum  wrote:
> > Excerpts from Cheng, Yingxin's message of 2016-02-14 21:21:28 -0800:
> >> Hi,
> >>
> >> I've uploaded a prototype https://review.openstack.org/#/c/280047/ to
> >> testify its design goals in accuracy, performance, reliability and
> >> compatibility improvements. It will also be an Austin Summit Session
> >> if elected:
> >> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presen
> >> tation/7316
> 
> Long term, I see a world where there are multiple scheduler Nova is able to 
> use,
> depending on the deployment scenario.
> 
> We have tried to stop any more scheduler going in tree (like the solver 
> scheduler)
> while we get the interface between the nova-scheduler and the rest of Nova
> straightened out, to make that much easier.

Technically, what I've implemented is a new type of scheduler host manager,
`shared_state_manager.SharedHostManager`[1], with the ability to synchronize host
states directly from resource trackers. The filter scheduler driver can choose to
load this manager through stevedore[2], and thus get a different update model for
its internal caches. This new manager is highly compatible with the current
scheduler architecture: a filter scheduler using HostManager can even run alongside
schedulers that load SharedHostManager at the same time (tested).
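
As a rough illustration of that loading model (the entry-point namespace and
alias below are assumptions mirroring the setup.cfg hunk in [2], not the exact
strings used there):

    from stevedore import driver

    def load_host_manager(name="shared_host_manager"):
        # Resolve the configured host manager through a stevedore entry point,
        # so HostManager and SharedHostManager stay interchangeable.
        mgr = driver.DriverManager(
            namespace="nova.scheduler.host_manager",  # assumed namespace
            name=name,
            invoke_on_load=True,
        )
        return mgr.driver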

So why not have this in tree to give operators more options in choosing host
managers? I would also argue that the caching scheduler is not exactly a new kind
of scheduler driver: it only behaves differently in updating host states, and it
should be implemented as a new kind of host manager instead.

What concerns me is that the resource provider scheduler is going to change the
architecture of the filter scheduler in Jay Pipes' bp[3]. There will be no host
manager, and not even host state caches, in the future. So what I've done to keep
compatibility will become an incompatibility in the future.

[1] 
https://review.openstack.org/#/c/280047/2/nova/scheduler/shared_state_manager.py
 L55
[2] https://review.openstack.org/#/c/280047/2/setup.cfg L194
[3] https://review.openstack.org/#/c/271823 

> 
> So a big question for me is, does the new scheduler interface work if you 
> look at
> slotting in your prototype scheduler?
> 
> Specifically I am thinking about this interface:
> https://github.com/openstack/nova/blob/master/nova/scheduler/client/__init_
> _.py


> There are several problems in this update model, proven in experiments[3]:
> >> 1. Performance: The scheduler performance is largely affected by db access
> in retrieving compute node records. The db block time of a single request is
> 355ms in average in the deployment of 3 compute nodes, compared with only
> 3ms in in-memory decision-making. Imagine there could be at most 1k nodes,
> even 10k nodes in the future.
> >> 2. Race conditions: This is not only a parallel-scheduler problem,
> >> but also a problem using only one scheduler. The detailed analysis of one-
> scheduler-problem is located in bug analysis[2]. In short, there is a gap 
> between
> the scheduler makes a decision in host state cache and the compute node
> updates its in-db resource record according to that decision in resource 
> tracker.
> A recent scheduler resource consumption in cache can be lost and overwritten
> by compute node data because of it, result in cache inconsistency and
> unexpected retries. In a one-scheduler experiment using 3-node deployment,
> there are 7 retries out of 31 concurrent schedule requests recorded, results 
> in
> 22.6% extra performance overhead.
> >> 3. Parallel scheduler support: The design of filter scheduler leads to an 
> >> "even
> worse" performance result using parallel schedulers. In the same experiment
> with 4 schedulers on separate machines, the average db block time is increased
> to 697ms per request and there are 16 retries out of 31 schedule requests,
> namely 51.6% extra overhead.
> >
> > This mostly agrees with recent tests I've been doing simulating 1000
> > compute nodes with the fake virt driver.
> 
> Overall this agrees with what I saw in production before moving us to the
> caching scheduler driver.
> 
> I would love a nova functional test that does that test. It will help us 
> compare
> these different schedulers and find the strengths and weaknesses.

I'm also working on implementing functional tests for the nova scheduler; there
is a patch showing my latest progress: https://review.openstack.org/#/c/281825/

IMO scheduler functional tests are not good at testing the real performance of
different schedulers, because all of the services run as green threads
instead of real processes. I think the better way to analyze the real performance
and the strengths and weaknesses is to start services in different processes with
the fake virt driver (i.e. Clint Byrum's work), or to use Jay Pipes' work in
emulating different designs.

> >> 2. Since the scheduler claims 

Re: [openstack-dev] [kolla][vote] port neutron thin containers to stable/liberty

2016-02-21 Thread Jeffrey Zhang
I like the thin container idea, and I am +1 too. But my only concern is
that we MUST provide a robust migration script (or Ansible role task) to do
the conversion. Do we have enough time for this?

On Sun, Feb 21, 2016 at 3:44 PM, Michal Rostecki 
wrote:

> On 02/20/2016 05:39 PM, Steven Dake (stdake) wrote:
>
>> Sam,
>>
>> I seem to recall Paul was not in favor, so there was not a majority of
>> cores there.  There were 6 core reviewers at the midcycle, and if you
>> only count kolla-core (which at this time I do for policy changes) that
>> means we had a vote of 5.  We have 11 core reviewers, so we need a vote
>> of 6+ for simple majority. I was also sort of –1 because it is an
>> exception, but I do agree the value is warranted.  I believe I expressed
>> at  the midcycle that I was –1 to the idea, atleast until the broader
>> core review team voted.  If I wasn't clear on that, I apologize.
>>
>> I'll roll with the community on this one unless I have to tie break –
>> then groan :)
>>
>> That is why a decision was made by the group to take this to the mailing
>> list.
>>
>> Regards
>> -steve
>>
>> From: Sam Yaple <sam...@yaple.net>
>> Reply-To: "s...@yaple.net " > >, "OpenStack Development Mailing List (not for
>> usage questions)" > >
>> Date: Saturday, February 20, 2016 at 9:32 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> > >
>> Subject: Re: [openstack-dev] [kolla][vote] port neutron thin containers
>> to stable/liberty
>>
>> I was under the impression we did have a majority of cores in favor
>> of the idea at the midcycle. But if this is a vote-vote, then I am a
>> very strong +1 as well. This is something operators will absolutely
>> want and and need.
>>
>> Sam Yaple
>>
>> On Sat, Feb 20, 2016 at 4:27 PM, Michał Jastrzębski <inc...@gmail.com> wrote:
>>
>> Strong +1 from me. This have multiple benefits:
>> Easier (aka possible) debugging of networking in running envs (not
>> having tools like tcpdump at your disposal is a pain) - granted,
>> there
>> are ways to get this working without thin containers but require
>> fair
>> amount of docker knowledge.
>> Docker daemon restart will not break routers - currently with
>> docker
>> restart container with namespace dies and we lose our routers
>> (they
>> will migrate using HA, but well, still a networking downtime).
>> This
>> will no longer be the case so...
>> Upgrades with no vm downtime whatsoever depends on this one.
>> If we could deploy liberty code with all these nice stuff, I'd be
>> happier person;)
>>
>> Cheers,
>> Michal
>>
>> On 20 February 2016 at 07:40, Steven Dake (stdake) <std...@cisco.com> wrote:
>> > Just clarifying, this is not a "revote" - there were not enough
>> core
>> > reviewers in favor of this idea at the Kolla midcycle, so we
>> need to have a
>> > vote on the mailing list to sort out this policy decision of
>> managing
>> > stable/liberty.
>> >
>> > Regards,
>> > -steve
>> >
>> >
>> > From: Steven Dake <std...@cisco.com>
>> > Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)"
>> > > >
>> > Date: Saturday, February 20, 2016 at 6:28 AM
>> > To: "OpenStack Development Mailing List (not for usage
>> questions)"
>> > > >
>> > Subject: [openstack-dev] [kolla][vote] port neutron thin
>> containers to
>> > stable/liberty
>> >
>> > Folks,
>> >
>> > There were not enough core reviewers to pass a majority
>> approval of the
>> > neutron thin container backport idea, so we separated it out
>> from fixing
>> > stable/liberty itself.
>> >
>> > I am going to keep voting open for *2* weeks this time.  The
>> reason for the
>> > two weeks is I would like a week of discussion before people
>> just blindly
>> > vote ;)
>> >
>> > Voting begins now and concludes March 4th.  Since this is a
>> policy decision,
>> > no veto votes are permitted, just a +1 and a  -1.  Abstaining
>> is the same as
>> > voting –1.
>> >
>>
>
> I'm +1, but under condition that we will provide some script to migrate
> from supervisord-container to thin-containers (even if such a script will
> bring risk of downtime of the cloud).
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Kuryr] will we use os-vif in kuryr

2016-02-21 Thread Gal Sagie
Yes, the intention from the start was to see if we can converge and use
os-vif, and I certainly see us using it.


On Thu, Feb 18, 2016 at 12:32 PM, Daniel P. Berrange 
wrote:

> On Thu, Feb 18, 2016 at 09:01:35AM +, Liping Mao (limao) wrote:
> > Hi Kuryr team,
> >
> > I see couple of commits to add support for vif plug.
> > https://review.openstack.org/#/c/280411/
> > https://review.openstack.org/#/c/280878/
> >
> > Do we have plan to use os-vif?
> > https://github.com/openstack/os-vif
>
> FYI, we're trying reasonably hard to *not* make any assumptions about
> what compute or network services are using os-vif. ie, we want os-vif
> as a framework to be usable from Nova, or any other compute manager,
> and likewise be usable from Neutron or any other network manager.
> Obviously the actual implementations may be different, but the general
> os-vif framework tries to be agnostic.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Installing networking-* pythonclient extensions to multiple locations

2016-02-21 Thread Javeria Khan
Hey everyone,

At the moment OSA installs python-neutronclient in a few locations,
including the neutron-server, utility, heat, and tempest containers.

Now neutron has a bunch of sub-projects like networking-l2gw [1],
networking-bgpvpn [2], networking-plumgrid [5], etc., which have their own
python-neutronclient CLI extensions [3][4][5] in their respective
repositories and packages.

These CLI extensions are not part of the neutron packages and must be
enabled by installing the additional networking-* packages. We don't
install most of these sub-projects in OSA at the moment; however, moving
forward, do you think it's reasonable to install said packages in every
location that installs the neutron client inside the OSA plays? If so,
how would you recommend we go about it, given that the installation will be
conditional on the enabling of the relevant neutron subproject features?

[1] https://github.com/openstack/networking-l2gw
[2] https://github.com/openstack/networking-bgpvpn
[3]
https://github.com/openstack/networking-l2gw/tree/master/networking_l2gw/l2gatewayclient
[4]
https://github.com/openstack/networking-bgpvpn/tree/master/networking_bgpvpn/neutronclient
[5] https://github.com/openstack/networking-plumgrid


Thanks,
Javeria
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] next Team meeting cancelled (Feb-22)

2016-02-21 Thread Gary Kotton
Thanks for the update. Will there be an option of connecting remotely? Google 
chat? Webex?

From: "Armando M." <arma...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Saturday, February 20, 2016 at 2:32 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron] next Team meeting cancelled (Feb-22)

Hi Neutrinos,

This week is Mid-cycle week [1], and some of us will be potentially enroute to 
the destination. For this reason, the meeting is cancelled.

If you're interested in participating remotely, please keep an eye on the 
etherpad for updates.

Cheers,
Armando

[1] https://etherpad.openstack.org/p/neutron-mitaka-midcycle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Dragonflow] IRC Meeting tomorrow (2/22) - 0900 UTC

2016-02-21 Thread Gal Sagie
Hello All,

We will have an IRC meeting tomorrow (Monday, 2/22) at 0900 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Dragonflow

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/dragonflow/2016/dragonflow.2016-02-15-09.00.html

Please update the agenda if there is any subject you would like to discuss.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev