[openstack-dev] Requirements for becoming approved official project

2016-04-29 Thread Shinobu Kinjo
Hi Tom,

First sorry for bothering you -;

We are trying to make the tricircle project [1] one of the OpenStack
official projects, and we are referring to the project team guide [2] to
make sure what the requirements are. Reading this guide, what we need to
consider right now is open development, I think (but not 100% sure).
[3]

We have a blueprint. [4] We also have git repositories on
openstack.org and github.com. [5] [6]
There are a few bugs filed. [7] There are a few contributors.

What we don't have now is official documentation, which is supposed to
be located at openstack.org. [8] This is because we are not an officially
approved project. [9] This situation is now a huge bottleneck for our
project.

There was some advice from one of the developers, pointing to these
guides. [1] [10]
If you could provide any suggestions, advice, or whatever is really
necessary for becoming an officially approved project, it would be MUCH
appreciated.

[1] https://wiki.openstack.org/wiki/Tricircle
[2] http://docs.openstack.org/project-team-guide/
[3] http://docs.openstack.org/project-team-guide/open-development.html
[4] https://launchpad.net/tricircle
[5] https://git.openstack.org/openstack/tricircle
[6] https://github.com/openstack/tricircle/
[7] http://bugs.launchpad.net/tricircle
[8] http://docs.openstack.org/developer/tricircle
[9] http://docs.openstack.org/infra/manual/creators.html#add-link-to-your-developer-documentation
[10] http://governance.openstack.org/reference/new-projects-requirements.html

Thanks for your great help in advance!

Cheers,
Shinobu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Notes for Magnum design summit

2016-04-29 Thread Hongbin Lu
Hi team,

For reference, below is a summary of the discussions/decisions at the Austin design 
summit. Please feel free to point out if anything is incorrect or incomplete. 
Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Each bay driver can have API extensions, and the Magnum CLI could load the 
extensions dynamically
- Work incrementally and support the same API before and after the driver change
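The extension-loading idea above can be sketched roughly as follows. This is purely illustrative, not Magnum's actual code: the registry, the driver names, and the `api_extensions` attribute are all hypothetical.

```python
# Rough sketch of versioned bay drivers whose API extensions a CLI could
# discover dynamically. Everything here is hypothetical, not Magnum code.
DRIVERS = {}

def register_driver(coe, version):
    """Class decorator: record a bay driver under a (coe, version) key."""
    def wrap(cls):
        DRIVERS[(coe, version)] = cls
        return cls
    return wrap

@register_driver("kubernetes", "1.0")
class KubernetesBayDriver:
    api_extensions = ["/certificates"]  # extra endpoints this driver adds

def load_extensions():
    """What a CLI could do at run time: collect extensions from all drivers."""
    exts = []
    for driver in DRIVERS.values():
        exts.extend(getattr(driver, "api_extensions", []))
    return exts

print(load_extensions())
```

Versioning the key rather than the class means two versions of the same driver can coexist during an upgrade, which matches the "same API before and after the driver change" goal.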

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discussed how to scope/pass/store OpenStack credentials in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

8. Unified abstraction for COEs: 
https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
- Create a new project for this effort
- Alter Magnum mission statement to clarify its goal (Magnum is not a container 
service, it is sort of a COE management service)

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
- Clear Container support: Clear Container needs to integrate with COEs first. 
After the integration is done, Magnum team will revisit bringing the Clear 
Container COE to Magnum.
- Enhance mesos bay to DCOS bay: Need to do it step-by-step: First, create a 
new DCOS bay type. Then, deprecate and delete the mesos bay type.
- Start enforcing API deprecation policy: 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
- Freeze API v1 after some patches are merged.
- Multi-tenancy within a bay: not the priority in Newton cycle
- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (i.e. different 
availability zones, flavors), but need to elaborate the details further.


Re: [openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

2016-04-29 Thread Steven Dake (stdake)
Hey folks,

This vote passed with 8 core reviewers +1ing the retirement of the kolla-mesos 
repository.

As such I will submit patches in the following week to retire this project and 
remove it from the technical committee's governance repository for the Kolla 
Project.

Regards
-steve

From: Steven Dake
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, April 22, 2016 at 7:08 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [kolla][vote] Place kolla-mesos in openstack-attic

Fellow Core Reviewers,

Since many of the engineers working on the kolla-mesos repository are moving on 
to other things[1], possibly including implementing Kubernetes as an underlay 
for OpenStack containers, I propose we move the kolla-mesos repository into the 
attic[2].  This will allow folks to focus on Ryan's effort[3] to use Kubernetes 
as an underlay for Kolla containers for folks that want a software based 
underlay rather than bare metal.  I understand Mirantis's position that 
Kubernetes may have some perceived "more mojo", and if we are to work on an 
underlay, it might as well be a fresh effort based upon the experience of the 
past two failures to develop a software underlay for OpenStack services.  We 
can come back to Mesos once Kubernetes is implemented, with a fresh perspective 
on the problem.

Please vote +1 to attic the repo, or -1 not to attic the repo.  I'll leave the 
voting open until everyone has voted, or for 1 week until April 29th, 2016.

Regards
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-April/093143.html
[2] https://github.com/openstack-attic
[3] https://review.openstack.org/#/c/304182/


Re: [openstack-dev] [Nova] Live Migration: Austin summit update

2016-04-29 Thread Matt Riedemann

On 4/29/2016 5:32 PM, Murray, Paul (HP Cloud) wrote:

The following summarizes status of the main topics relating to live
migration after the Newton design summit. Please feel free to correct
any inaccuracies or add additional information.



Paul



-



Libvirt storage pools



The storage pools work has been selected as one of the project review
priorities for Newton.

(see https://etherpad.openstack.org/p/newton-nova-summit-priorities )



Continuation of the libvirt storage pools work was discussed in the live
migration session. The proposal has grown to include a refactor of the
existing libvirt driver instance storage code. Justification for this is
based on three factors:

1.   The code needs to be refactored to use storage pools

2.   The code is complicated and uses inspection, which is poor practice

3.   During the investigation Matt Booth discovered two CVEs in the
code – suggesting further work is justified



So the proposal is now to follow three stages:

1.   Refactor the instance storage code

2.   Adapt to use storage pools for the instance storage

3.   Use storage pools to drive resize/migration


We also talked about the need for some additional test coverage for the 
refactor work:


1. A job that uses LVM on the experimental queue.

2. ploop should be covered by the Virtuozzo Compute third party CI but 
we'll need to double-check the test coverage there (is it running the 
tests that hit the code paths being refactored). Note that they have 
their own blueprint for implementing resize for ploop:


https://blueprints.launchpad.net/nova/+spec/virtuozzo-instance-resize-support

3. Ceph testing - we already have a single-node job for Ceph that will 
test the resize paths. We should also be testing Ceph-backed live 
migration in the special live-migration job that Timofey has been 
working on.


4. NFS testing - this also falls into the special live migration CI job 
that will test live migration in different storage configurations within 
a single run.






Matt has code already starting the refactor and will continue with help
from Paul Carlton + Paul Murray. We will look for additional
contributors to help as we plan out the patches.



https://review.openstack.org/#/c/302117 : Persist libvirt instance
storage metadata

https://review.openstack.org/#/c/310505 : Use libvirt storage pools

https://review.openstack.org/#/c/310538 : Migrate libvirt volumes



Post copy



The spec to add post copy migration support in the libvirt driver was
discussed in the live migration session. Post copy guarantees completion
of a migration in linear time without needing to pause the VM. This can
be used as an alternative to pausing in live-migration-force-complete.
Pause or complete could also be invoked automatically under some
circumstances. The issue slowing these specs is how to decide which
method to use given they provide a different user experience but we
don’t want to expose virt specific features in the API. Two additional
specs listed below suggest possible generic ways to address the issue.



No conclusions were reached in the session, so the debate will
continue on the specs. The first link below is the main spec for the feature.



https://review.openstack.org/#/c/301509 : Adds post-copy live migration
support to Nova

https://review.openstack.org/#/c/305425 : Define instance availability
profiles

https://review.openstack.org/#/c/306561 : Automatic Live Migration
Completion



Live Migration orchestrated via conductor



The proposal to move orchestration of live migration to conductor was
discussed in the working session on Friday, presented by Andrew Laski on
behalf of Timofey Durakov. This one threw up a lot of debate both for
and against the general idea, though without support for the patches that
have been submitted along with the spec so far. The general feeling was that
we need to attack this, but need to take some simple cleanup steps
first to get a better idea of the problem. Dan Smith proposed moving the
stateless pre-migration steps to a sequence of calls from conductor (as
opposed to going back and forth between the computes) as the first step.



https://review.openstack.org/#/c/292271 : Remove compute-compute
communication in live-migration



Cold and Live Migration Scheduling



When this patch merges, all migrations will use the request spec for
scheduling: https://review.openstack.org/#/c/284974

Work is still ongoing for check destinations (allowing the scheduler to
check a destination chosen by the admin). When that is complete,
migrations will have three ways to be placed:

1.   Destination chosen by scheduler

2.   Destination chosen by admin but checked by scheduler

3.   Destination forced by admin



https://review.openstack.org/#/c/296408 Re-Proposes to check destination
on migrations



PCI + NUMA claims



Moshe and Jay are making great progress refactoring Nicola’s patches to fix PCI
and NUMA handling in migrations. The patch series should be completed soon.

Re: [openstack-dev] [Ceph][ceilometer][libvirt] Libvirt error during instance disk allocation metering

2016-04-29 Thread ZhiQiang Fan
Hi Ceph devs,

I am raising this bug again because, as Ceph becomes more popular, our
customers suffer from it and I have no solution for it.

*Is there any way to get the usage of a disk which is a Ceph volume? (Not
sure if the term is right or not)*

Ceilometer uses libvirt.domain.blockInfo(device) to get the usage
(allocation/physical/capacity) of a disk [1]. This works fine when the VM
is booted from the local file system, and according to the note from [2], it
seems blockInfo uses stat instead of querying qemu, so **maybe** this is
why it doesn't work for network type disks: internal error: missing
storage backend for network files using rbd protocol.

libvirt.domain.blockStats() doesn't return usage, only r/w bytes/requests,
and neither does domainListGetStats.

I haven't tried virStorageVolInfo yet, because I don't know the args, but it
doesn't return all three usage dimensions of a disk, only capacity and
allocation. [3]

So I'm asking if anyone can help me to resolve this issue.
Thank you very much!

PS: I changed the subject to get the Ceph devs' attention.

[1] https://review.openstack.org/#/c/145819/
[2]  https://www.redhat.com/archives/libvir-list/2014-December/msg00762.html

[3] https://libvirt.org/html/libvirt-libvirt-storage.html#virStorageVolInfo
[4] https://bugs.launchpad.net/ceilometer/+bug/1457440
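One possible workaround until blockInfo works for rbd-backed disks is to query Ceph directly for the image backing the volume. The sketch below parses the JSON from `rbd du --format json`; note that the JSON field names used here are an assumption to verify against your Ceph release, not something taken from the thread.

```python
import json

def disk_usage_from_rbd_du(rbd_du_json, image_name):
    """Return (capacity, allocation) in bytes for one image.

    Assumes `rbd du --format json` output shaped like
    {"images": [{"name": ..., "provisioned_size": ..., "used_size": ...}]}.
    These field names are an assumption; check them against your Ceph version.
    """
    data = json.loads(rbd_du_json)
    for image in data.get("images", []):
        if image["name"] == image_name:
            return image["provisioned_size"], image["used_size"]
    raise KeyError("image %r not found in rbd du output" % image_name)

# Example with canned output; a real caller would run something like
# `rbd du --pool volumes --format json` via subprocess and pass its stdout.
sample = json.dumps({"images": [
    {"name": "volume-1234", "provisioned_size": 10737418240,
     "used_size": 1073741824}]})
print(disk_usage_from_rbd_du(sample, "volume-1234"))
```

This gives capacity and allocation but not the "physical" dimension blockInfo reports, so it only partially fills the gap.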

On Wed, Nov 18, 2015 at 11:46 PM, Ilya Tyaptin 
wrote:

> Hi, folks!
>
> In our deployed envs we met with a libvirt error *"missing storage
> backend for network files using rbd protocol"* in *virDomainGetBlockInfo*
>  call [1] 
> .
> This exception is raised when Ceilometer is trying to get info about VM
> disk usage and allocation.
> It only affects getting measures for some disk pollsters which were added in
> this CR [2]
> 
>  with specified libvirt call [3]
> 
>  .
> These pollsters were added in the Kilo cycle and worked successfully in
> Kilo deployments, but they don't work now.
>
> Also, we have a bug in the upstream launchpad [4]
> 
>  but it has not been fixed yet.
>
> I would be glad to see any ideas about the root cause of this issue or ways
> to fix it.
>
> Thank you in advance!
>
> References:
> [1] Traceback 
>
>
> 2015-11-17 16:20:54.807 14107 ERROR ceilometer.compute.pollsters.disk
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters/disk.py", line 703, in get_samples
>     instance,
>   File "/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters/disk.py", line 672, in _populate_cache
>     for disk, info in disk_info:
>   File "/usr/lib/python2.7/dist-packages/ceilometer/compute/virt/libvirt/inspector.py", line 215, in inspect_disk_info
>     block_info = domain.blockInfo(device)
>   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 658, in blockInfo
>     if ret is None: raise libvirtError('virDomainGetBlockInfo() failed', dom=self)
> libvirtError: internal error: missing storage backend for network files using rbd protocol
>
> [2] CR with this commit:
> https://review.openstack.org/#/c/145819/23/ceilometer/compute/virt/libvirt/inspector.py,cm
>
> [3] Code entry:
> https://github.com/openstack/ceilometer/blob/stable/liberty/ceilometer/compute/virt/libvirt/inspector.py#L215
> [4] Upstream bug: https://bugs.launchpad.net/ceilometer/+bug/1457440
>
>
> Best regards,
>
> Tyaptin Ilya,
>
> Ceilometer developer,
>
> Mirantis Inc.
>
>
>
>
>

Re: [openstack-dev] [magnum] Proposed Revision to Magnum's Mission

2016-04-29 Thread Davanum Srinivas
Adrian,

fyi, there's one more already filed by Josh -
https://review.openstack.org/#/c/310941/

-- Dims

On Fri, Apr 29, 2016 at 7:47 PM, Adrian Otto  wrote:
> Magnum Team,
>
> In accordance with our Fishbowl discussion yesterday at the Newton Design
> Summit in Austin, I have proposed the following revision to Magnum’s mission
> statement:
>
> https://review.openstack.org/311476
>
> The idea is to narrow the scope of our Magnum project to allow us to focus
> on making popular COE software work great with OpenStack, and make it easy
> for OpenStack cloud users to quickly set up fleets of cloud capacity managed
> by chosen COE software (such as Swarm, Kubernetes, Mesos, etc.). Cloud
> operators and users will value multi-tenancy for COEs, tight integration
> with OpenStack, and the ability to source this all as a self-service
> resource.
>
> We agreed to deprecate and remove the /containers resource from Magnum’s
> API, and will leave the door open for a new OpenStack project with its own
> name and mission to satisfy the interests of our community members who want
> an OpenStack API service that abstracts one or more COEs.
>
> Regards,
>
> Adrian
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [magnum] Proposed Revision to Magnum's Mission

2016-04-29 Thread Adrian Otto
Magnum Team,

In accordance with our Fishbowl discussion yesterday at the Newton Design 
Summit in Austin, I have proposed the following revision to Magnum’s mission 
statement:

https://review.openstack.org/311476

The idea is to narrow the scope of our Magnum project to allow us to focus on 
making popular COE software work great with OpenStack, and make it easy for 
OpenStack cloud users to quickly set up fleets of cloud capacity managed by 
chosen COE software (such as Swarm, Kubernetes, Mesos, etc.). Cloud operators 
and users will value multi-tenancy for COEs, tight integration with OpenStack, 
and the ability to source this all as a self-service resource.

We agreed to deprecate and remove the /containers resource from Magnum’s API, 
and will leave the door open for a new OpenStack project with its own name and 
mission to satisfy the interests of our community members who want an OpenStack 
API service that abstracts one or more COEs.

Regards,

Adrian


[openstack-dev] [nova][neutron] restarting SR-IOV/PCI Passthrough meeting

2016-04-29 Thread Moshe Levi
Hi all,

I would like to restart the SR-IOV/PCI passthrough meeting.
The main focus is to improve CI coverage and fix bugs on the Nova side.
I updated the agenda etherpad [1] with stuff to cover at the next meeting, May
3rd 2016. The meeting will be biweekly.
I would like the Mellanox/Intel and other vendors' CI representatives to join
the meeting, so that we can make progress in that area.
Please review the etherpad and add stuff that you want to talk about (remember,
the current focus is on CI and bug fixes).
I updated the SR-IOV meeting chair; see [2].

Thanks,
Moshe Levi.
[1] - https://etherpad.openstack.org/p/sriov_meeting_agenda
[2] - https://review.openstack.org/#/c/311472/1




Re: [openstack-dev] [Nova] RFC Host Maintenance

2016-04-29 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

Maintenance was discussed at the OpenStack summit in "Ops: Nova Maintenance - 
how do you do it?"

It was decided the best alternative is just to expose the nova-compute 
disabled_reason to the owner of the server. This field can then have a URL to 
more detailed status given in an external tool. There were also all kinds of 
requirements from operators that we did not have time to go through, but those 
are a base for what the external tool could handle.
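As a sketch of how a client could consume that field: the convention of embedding a URL in the free-form disabled_reason text is an assumption here, since the proposal leaves the exact format to the external tool.

```python
import re

def maintenance_url(disabled_reason):
    """Pull the first http(s) URL out of a free-form disabled_reason string.

    The URL-in-text convention is an assumption based on the proposal above;
    nothing in the API enforces it.
    """
    if not disabled_reason:
        return None
    match = re.search(r"https?://\S+", disabled_reason)
    return match.group(0) if match else None

reason = "planned maintenance 2016-05-10, details: http://ops.example.com/maint/42"
print(maintenance_url(reason))
```

A server owner's tooling could poll this and follow the link for the detailed maintenance status the external tool publishes.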

As result, original spec is now abandoned:
https://review.openstack.org/296995
And new made:
https://review.openstack.org/310510
Also as part of the whole story, filtering by host_status:
https://review.openstack.org/276671

Thank you for all the comments and for moving this NFV requirement forward.

Br,
Tomi

> -Original Message-
> From: Juvonen, Tomi (Nokia - FI/Espoo)
> Sent: Wednesday, April 13, 2016 2:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: RE: [openstack-dev] [Nova] RFC Host Maintenance
> 
> > -Original Message-
> > From: EXT Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> > Sent: Tuesday, April 12, 2016 4:46 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Nova] RFC Host Maintenance
> >
> > On Thu, Apr 07, 2016 at 06:36:20AM -0400, Sean Dague wrote:
> > > On 04/07/2016 03:26 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
> > > > Hi Nova, Ops, stackers,
> > > >
> > > > I am trying to figure out different use cases and requirements there
> > > > would be for host maintenance and would like to get feedback and
> > > > transfer all this to spec and discussion what could and should land
> for
> > > > Nova or other places.
> > > >
> > > > As working in OPNFV Doctor project that has the Telco perspective
> about
> > > > related requirements, I started to draft a spec based on something
> > > > smaller that would be nice to have in Nova and less complicated to
> have
> > > > it in single cycle. Anyhow the feedback from Nova API team was to
> look
> > > > this as a whole and gather more. This is why asking this here and not
> > > > just through the spec, to get input for requirements and use cases with
> > wider
> > > > audience. Here is the draft spec proposing first just maintenance
> > window
> > > > to be added:
> > > > _https://review.openstack.org/296995/_
> > > >
> > > > Here is link to OPNFV Doctor requirements:
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/02-
> > use_cases.html#nvfi-maintenance_
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/03-
> > architecture.html#nfvi-maintenance_
> > > > _http://artifacts.opnfv.org/doctor/docs/requirements/05-
> > implementation.html#nfvi-maintenance_
> > > >
> > > > Here is what I could transfer as use cases, but would ask feedback to
> > > > get more:
> > > >
> > > > As admin I want to set maintenance period for certain host.
> > > >
> > > > As admin I want to know when host is ready to actions to be done by
> > admin
> > > > during the maintenance. Meaning physical resources are emptied.
> > > >
> > > > As owner of a server I want to prepare for maintenance to minimize
> > downtime,
> > > > keep capacity on needed level and switch HA service to server not
> > > > affected by
> > > > maintenance.
> > > >
> > > > As owner of a server I want to know when my servers will be down
> > because of
> > > > host maintenance as it might be servers are not moved to another
> host.
> > > >
> > > > As owner of a server I want to know if host is to be totally removed,
> > so
> > > > instead of keeping my servers on host during maintenance, I want to
> > move
> > > > them
> > > > to somewhere else.
> > > >
> > > > As owner of a server I want to send acknowledgement to be ready for
> > host
> > > > maintenance and I want to state if servers are to be moved or kept on
> > host.
> > > > Removal and creating of server is in owner's control already.
> > Optionally
> > > > server
> > > > Configuration data could hold information about automatic actions to
> be
> > > > done
> > > > when host is going down unexpectedly or in controlled manner. Also
> > > > actions at
> > > > the same if down permanently or only temporarily. Still this needs
> > > > acknowledgement from server owner as he needs time for application
> > level
> > > > controlled HA service switchover.
> > >
> > > While I definitely understand the value of these in a deployement, I'm
> a
> > > bit concerned of baking all this structured data into Nova itself. As
> it
> > > effectively means putting some degree of a ticket management system in
> > > Nova that's specific to a workflow you've decided on here. Baked in
> > > workflow is hard to change when the needs of an industry do.
> > >
> > > My counter proposal on your spec was to provide a free form field
> > > associated with maintenance mode which could contain a url linking to
> > > the details. This 

[openstack-dev] [heat] Summit Friday Night Dinner

2016-04-29 Thread Jay Dobies

Torchy's Tacos
1311 S 1st St
Austin, TX 78704

It's about a 20 minute walk from the Radisson (pretty much straight down 
Congress Ave) and then over a block.


Meeting in the Radisson lobby at 7:10 to walk over. Apologies in advance 
if we see another snake and I jump on someone's shoulders (Zane, you're 
tall, so you're likely my target).




[openstack-dev] [Nova] Live Migration: Austin summit update

2016-04-29 Thread Murray, Paul (HP Cloud)
The following summarizes status of the main topics relating to live migration 
after the Newton design summit. Please feel free to correct any inaccuracies or 
add additional information.

Paul

-

Libvirt storage pools

The storage pools work has been selected as one of the project review 
priorities for Newton.
(see https://etherpad.openstack.org/p/newton-nova-summit-priorities )

Continuation of the libvirt storage pools work was discussed in the live 
migration session. The proposal has grown to include a refactor of the existing 
libvirt driver instance storage code. Justification for this is based on three 
factors:

1.   The code needs to be refactored to use storage pools

2.   The code is complicated and uses inspection, which is poor practice

3.   During the investigation Matt Booth discovered two CVEs in the code - 
suggesting further work is justified

So the proposal is now to follow three stages:

1.   Refactor the instance storage code

2.   Adapt to use storage pools for the instance storage

3.   Use storage pools to drive resize/migration

Matt has code already starting the refactor and will continue with help from 
Paul Carlton + Paul Murray. We will look for additional contributors to help as 
we plan out the patches.

https://review.openstack.org/#/c/302117 : Persist libvirt instance storage 
metadata
https://review.openstack.org/#/c/310505 : Use libvirt storage pools
https://review.openstack.org/#/c/310538 : Migrate libvirt volumes

Post copy

The spec to add post copy migration support in the libvirt driver was discussed 
in the live migration session. Post copy guarantees completion of a migration 
in linear time without needing to pause the VM. This can be used as an 
alternative to pausing in live-migration-force-complete. Pause or complete 
could also be invoked automatically under some circumstances. The issue slowing 
these specs is how to decide which method to use given they provide a different 
user experience but we don't want to expose virt specific features in the API. 
Two additional specs listed below suggest possible generic ways to address the 
issue.

No conclusions were reached in the session, so the debate will continue on 
the specs. The first link below is the main spec for the feature.

https://review.openstack.org/#/c/301509 : Adds post-copy live migration support 
to Nova
https://review.openstack.org/#/c/305425 : Define instance availability profiles
https://review.openstack.org/#/c/306561 : Automatic Live Migration Completion

Live Migration orchestrated via conductor

The proposal to move orchestration of live migration to conductor was discussed 
in the working session on Friday, presented by Andrew Laski on behalf of 
Timofey Durakov. This one threw up a lot of debate both for and against the 
general idea, though without support for the patches that have been submitted 
along with the spec so far. The general feeling was that we need to attack this, 
but need to take some simple cleanup steps first to get a better idea of the 
problem. Dan Smith proposed moving the stateless pre-migration steps to a 
sequence of calls from conductor (as opposed to going back and forth 
between the computes) as the first step.

https://review.openstack.org/#/c/292271 : Remove compute-compute communication 
in live-migration

Cold and Live Migration Scheduling

When this patch merges, all migrations will use the request spec for scheduling: 
https://review.openstack.org/#/c/284974
Work is still ongoing for check destinations (allowing the scheduler to check a 
destination chosen by the admin). When that is complete, migrations will have 
three ways to be placed:

1.   Destination chosen by scheduler

2.   Destination chosen by admin but checked by scheduler

3.   Destination forced by admin

https://review.openstack.org/#/c/296408 Re-Proposes to check destination on 
migrations

PCI + NUMA claims

Moshe and Jay are making great progress refactoring Nicola's patches to fix PCI 
and NUMA handling in migrations. The patch series should be completed soon.



Re: [openstack-dev] [magnum] Seek advices for a licence issue

2016-04-29 Thread Hongbin Lu
Hi Jay,

Thanks for taking the responsibility. I have approved the blueprint [1] and 
assigned it to you. According to the decision at the design summit, we prefer 
to have a new bay type for DC/OS, and the original Mesos bay should remain 
untouched until the new DC/OS bay is mature. I am looking forward to your 
contribution.

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: April-24-16 6:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Seek advices for a licence issue

Yes, I will contribute to this project.
Thanks

On Sat, Apr 23, 2016 at 8:44 PM, Hongbin Lu wrote:
Jay,

I will discuss the proposal [1] in the design summit. Do you plan to contribute 
to this effort, or is someone from the DC/OS community interested in contributing?

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: April-22-16 12:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Seek advices for a licence issue

I got confirmation from Mesosphere that we can use the open source DC/OS in 
Magnum now, so it is a good time to enhance the Mesos bay to support the open 
source DC/OS.
From Mesosphere
DC/OS software is licensed under the Apache License, so you should feel free to 
use it within the terms of that license.
---
Thanks.

On Thu, Apr 21, 2016 at 5:35 AM, Hongbin Lu wrote:
Hi Mark,

I have gone through the announcement in detail. From my point of view, it seems 
to resolve the license issue that was blocking us before. I have included 
the Magnum team in the ML to see if our team members have any comments.

Thanks for the support from foundation.

Best regards,
Hongbin

From: Mark Collier [mailto:m...@openstack.org]
Sent: April-19-16 12:36 PM
To: Hongbin Lu
Cc: foundat...@lists.openstack.org; 
Guang Ya GY Liu
Subject: Re: [OpenStack Foundation] [magnum] Seek advices for a licence issue

Hopefully today’s news that Mesosphere is open sourcing major components of 
DC/OS under an Apache 2.0 license will make things easier:

https://mesosphere.com/blog/2016/04/19/open-source-dcos/

I’ll be interested to hear your take after you have time to look at it in more 
detail, Hongbin.

Mark



On Apr 9, 2016, at 10:02 AM, Hongbin Lu wrote:

Hi all,

A brief introduction to myself. I am the Magnum Project Team Lead (PTL). Magnum 
is the OpenStack container service. I wrote this email because the Magnum team 
is seeking clarification for a licence issue for shipping third-party software 
(DCOS [1] in particular) and I was advised to consult OpenStack Board of 
Directors in this regards.

Before getting into the question, I think it is better to provide some 
background information. A feature provided by Magnum is to provision a container 
management tool on top of a set of Nova instances. One of the container 
management tools Magnum supports is Apache Mesos [2]. Generally speaking, Magnum 
ships Mesos by providing a custom cloud image with the necessary packages 
pre-installed. So far, all the shipped components are open source with 
appropriate licenses, so we are good so far.

Recently, one of our contributors suggested extending the Mesos support to DCOS 
[3]. The Magnum team is unclear whether there is a license issue with shipping DCOS, 
which looks like a closed-source product but has a community version on Amazon Web 
Services [4]. I want to know what appropriate actions the Magnum team 
should take in this pursuit, or whether we should stop pursuing this direction. 
Advice is greatly appreciated. Please let us know if we need to provide 
further information. Thanks.

[1] https://docs.mesosphere.com/
[2] http://mesos.apache.org/
[3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
[4] 
https://docs.mesosphere.com/administration/installing/installing-community-edition/

Best regards,
Hongbin



___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation





--
Thanks,
Jay Lau (Guangya Liu)


[openstack-dev] [heat][qa] devstack plugin

2016-04-29 Thread Ken'ichi Ohmichi
Hi Heat-team,

Thanks for discussing the devstack plugin topic with us at the Austin summit.
As you know, many projects have started to carry a devstack plugin in their
own repo, and it would be nice to use a plugin in heat as well.

A good sample is ironic one:
https://review.openstack.org/#/q/topic:ironic-devstack-plugin
The way is like:
1. (heat) Add devstack plugin by copying the code from devstack
2. (project-config) Switch to use devstack plugin
3. (devstack) Remove lib/heat, ...

The related doc is http://docs.openstack.org/developer/devstack/plugins.html
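Once step 1 lands, deployers would enable the plugin in their local.conf
roughly like this (a sketch: the heat plugin does not yet exist at the time
of this email, so the exact plugin name is an assumption):

```ini
[[local|localrc]]
# enable_plugin <name> <git-url> [branch] pulls the devstack plugin
# from the project's own repo instead of devstack's lib/heat.
enable_plugin heat https://git.openstack.org/openstack/heat
```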
I would be happy to get help moving this forward.

Thanks
Ken Omichi



[openstack-dev] [trove] final notes from Trove summit design summit sessions

2016-04-29 Thread Amrith Kumar
Here are the updated (final) notes from the Trove design summit in
Austin, TX.

Some changes from yesterday's email are highlighted. 

Changes are marked with "::NOTE::"

-amrith


Python3 support

- make python34 a voting test in the gate (was already done, so no
  action required)

- peterstac to ensure that trove-dashboard is python3 enabled and
  enable python34 gate changes in trove-dashboard


Multiple datastores with a single manager

etherpad: trove-newton-summit-multiple-datastores

- needs more discussion


Management client and openstack client

::NOTE:: Change from yesterday's summary

- decided to add quota management to the Trove API and expose it
  through the trove client [flavio, maybe]


Trove upgrades

etherpad: trove-newton-summit-trove-upgrades
spec: https://review.openstack.org/#/c/302416/

- the team was in agreement with the proposal that has been up for
  review.

- pete mackinnon had a specific request that some thought and research
  be put into the area of how nova rebuild may fail, and how a user
  would have to recover from this.


Extend backend persistent storage

etherpad: trove-newton-summit-extensible-backend-storage
spec: https://review.openstack.org/#/c/302952/

- the team was in agreement with the proposal that has been up for
  review.

- need to investigate supporting replicated volumes for DR and
  restoring a new database instance from a volume replica. This
  workflow is not currently possible.


Trove container support

etherpad: trove-newton-summit-container
spec: https://review.openstack.org/307883

- rather than having trove integrate with magnum, we agreed that in
  the short term we would provide containers through the nova lxc or
  lxd drivers

- the spec will need to be rewritten to reflect this, and to identify
  the work involved in doing that

- in the future, trove will have to realize how it will deal with
  clouds where this solution would not work. for example, clouds that
  run kubernetes on native bare metal and don't have nova.


Snapshot as a backup strategy

etherpad: trove-newton-summit-snapshot-as-a-backup-strategy
spec: https://review.openstack.org/#/c/306620/

- the team was in agreement with the proposal that was up for review,
  in principle. some changes are required such as to deal with
  quiescing the database.

- there was discussion of whether the operation of taking the snapshot
  would be taken by the guest agent or the task manager and the
  consensus was that the task manager should be the one generating the
  snapshot.

- it was recommended that the task manager should make a synchronous
  call to the guest agent to quiesce the database, and release it
  after the snapshot was taken. this would avoid the asynchronous
  mechanism based on cast().

- there are two operations here that could cause an interruption in
  service of the database and these relate to the period of time when
  the database is quiesced. it is possible that the operation to
  quiesce the database may take a while to happen (for example if
  there is a long running transaction going on). this could cause the
  call() from the task manager to timeout. Similarly once the database
  is quiesced, the snapshot may take a while. It was suggested that
  there should be mechanisms to place limits on how long either of
  these can take.

- it should be configurable to determine where the snapshot will get
  stored; if we have to actually stream it somewhere. this is not
  necessarily the case for all storage solutions.

- point in time rollback is not a deliverable of this project.
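The quiesce-then-snapshot flow with time limits discussed above could be
sketched roughly as follows (hypothetical method names, not the actual Trove
code; a real implementation would enforce the snapshot limit while it runs,
here it is only checked afterwards):

```python
import time

QUIESCE_TIMEOUT = 30     # seconds allowed for the synchronous quiesce call()
SNAPSHOT_TIMEOUT = 300   # seconds allowed for taking the snapshot


class SnapshotTimeout(Exception):
    pass


def snapshot_with_quiesce(guest, storage):
    """Task-manager side: quiesce via a synchronous call, snapshot,
    then always release the database, even on failure."""
    guest.quiesce(timeout=QUIESCE_TIMEOUT)  # may raise on timeout
    try:
        start = time.time()
        snap = storage.take_snapshot()
        if time.time() - start > SNAPSHOT_TIMEOUT:
            raise SnapshotTimeout("snapshot exceeded %ss" % SNAPSHOT_TIMEOUT)
        return snap
    finally:
        guest.unquiesce()  # release the database no matter what
```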


Make it easier to build guest images

etherpad: trove-newton-summit-easier-to-build-images
spec: https://review.openstack.org/#/c/295274/

- the consensus was that we should create a new repository (named
  something like trove-image-builder) and pick up all the elements
  from trove-integration and move them here

- we need to write a new element to install the guestagent code into
  the image

- we need to write an image-builder script that would do the moral
  equivalent of redstack build-image

- that script/tool should allow for creation of both
  development/testing images and images that a customer could use.

- [amrith] to send a note to the ML with the [dib] in the subject line
  and keep the dib folks informed if there are real problems that we
  face with the tool

- pete mackinnon will be working to update and provide elements with
  dib that will generate database images for some databases and
  operating system (CentOS 7)

- there was no consensus on whether the future should be dib based or
  libvirt-customize based

- there was a concern that if we ended up with a different tool in the
  future, there would end up being duplication and redundancy in the
  image-builder tool

- once we get the elements out of trove-integration, if there's not
  much left there we can move it to the trove project and get rid of
  trove-integration altogether

::NOTE:: Change from 

[openstack-dev] Please propose and join me for the first meeting of new Project Cloudlet

2016-04-29 Thread prakash RAMCHANDRAN
Hi all 

I am trying to organize a new module called "Cloudlet" in OpenStack. At the 
Product Working Group, and after discussions with the Infrastructure team, we 
decided to start putting together pieces of past work and to plan for a new 
"Cloudlet" project for Edge Cloud Services using Cloudlet in OpenStack.
Background:

As you all know, OpenStack is a set of loosely coupled infrastructure services 
that include compute, networking, and storage, plus some global services like 
identity management, endpoint management, and image management that bind them 
together to serve cloud administrators and application administrators or 
end users.
OpenStack serves a central or core cloud. Can it serve an edge cloud?

An Edge Cloud Service is one at the edge gateway (provider side, e.g. a data 
center close to a base station, or customer premise, e.g. a WiFi access point) 
and serves the mobile or nomadic user clients, through a VM (or a composition 
of VMs), who are authorized to use it by the provider. This edge service 
through a Cloudlet may be a free or paid service, as the provider chooses.
The provider uses a standard base VM. We will use VMs built from Ubuntu 14.04 
or CentOS 7.3 with a minimum 4 or 8 GB footprint. These VMs will have 
applications running within them to deliver the provider's offer to the 
customer.

Two commercial use cases we consider:
1. Smart City garbage collection services for the City of San Jose, California
2. Forest fire fighting for the State of California

These are typical utility services for smart cities all over the world, as well 
as for preventing disasters due to various natural calamities. One can tolerate 
slow response times whereas the other requires real-time response; orders of 
hours to minutes is what we state for simplicity, but actual applications may 
seek better latency and throughput as they evolve. The idea here is to derive 
workflows and see what APIs will be needed to fulfill these requirements.
We can have a Doodle poll once I know how many folks are interested and when 
we should meet.
Agenda for the IRC meeting on #openstack-dev:
1. What is the state of Cloudlet in the world today?
   a. Open Edge Computing & OPNFV
   b. OpenFog
   c. Are the use cases listed above a good starting point? Review the specs
      and code used by OpenStack++ from CMU.
   d. Should we proceed with the new project Cloudlet, and what was the
      gap-analysis feedback from PTLs and cross-project teams at the Product
      Working Group?
   e. Who is willing to volunteer as initial committers and contributors for
      the new "Cloudlet" module?
   f. Any other items missed, and finally action items and a plan for the
      next meeting (IRC)
Please send your feedback, and we will try to get the O4 requirements of 
OpenStack before ensuring Cloudlet gets its due at the next Barcelona Summit.
Thanks,
Prakash







Re: [openstack-dev] [horizon][release] freeze timelines for horizon in newton

2016-04-29 Thread Rob Cresswell (rcresswe)
This has been discussed (just now) in the release management plan 
(https://etherpad.openstack.org/p/newton-relmgt-plan) See point 8 under 
Communication/Governance Changes. From an immediate standpoint, the RC phase of 
this cycle will be much stricter to prevent late breakages. Going forward, 
we’re likely going to establish an earlier feature freeze too, pending 
community discussion.

On a separate note, this email prompted me to scan the governance for the 
dashboard plugins and it became apparent that several have changed their 
release tags, without informing Horizon of this release cadence expectation via 
IRC, email, or our plugin feedback fishbowl. If we are to continue building a 
good plugin ecosystem, the plugins *must* communicate their expectations to us 
upstream; we do not have the time to monitor every plugin.

Rob


On 29 Apr 2016, at 11:26, Amrith Kumar wrote:

In the Trove review of the release schedule this morning, and in the 
retrospective of the mitaka release process, one question which was raised was 
the linkage between projects like Trove and Horizon.

This came up in the specific context of projects like Trove (in the form of 
the trove-dashboard repository): late in the Mitaka cycle, a change in Horizon 
caused Trove to break very close to the feature freeze date [3].

So the question is whether we can assume that projects like Horizon will freeze 
in R-6 to ensure that (for example) Trove will freeze in R-5.

Thanks,

-amrith

[1] http://releases.openstack.org/newton/schedule.html
[2] https://review.openstack.org/#/c/311123/
[3] https://review.openstack.org/#/c/307221/



[openstack-dev] [tripleo] composable roles team

2016-04-29 Thread Emilien Macchi
Hi,

One of the most urgent tasks we need to achieve in TripleO during
Newton cycle is the composable roles support.
So we decided to build a team that would focus on it during the next weeks.

We started this etherpad:
https://etherpad.openstack.org/p/tripleo-composable-roles-work

So anyone can help or check where we are.
We're pushing / going to push a lot of patches, we would appreciate
some reviews and feedback.

Also, I would like to propose to -1 every patch that is not
composable-role-helpful, it will help us to move forward. Our team
will be available to help in the patches, so we can all converge
together.

Any feedback is welcome, thanks.
-- 
Emilien Macchi



Re: [openstack-dev] [nova] next min libvirt?

2016-04-29 Thread Matt Riedemann

On 4/29/2016 10:28 AM, Daniel P. Berrange wrote:

On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:

We've just landed the patch bumping the minimum required libvirt to 1.2.1. It's
probably a good time to consider the appropriate bump for Ocata.

By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
(1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.


By the time Ocata is released, I think it'll be valid to ignore
RHEL-7.1, as we'll already be onto 7.3 at that time.


My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
that NUMA support in libvirt (excepting the blacklists) and huge page
support is assumed on x86_64.


If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.


Is there a simple reason why ignoring RHEL 7.1 is OK? Honestly I can't 
remember which OpenStack release came out around that time, was it Kilo?





We should also now consider our minimum QEMU versions. Jessie will have
QEMU 2.1.0,  16.04 LTS will have 2.5.0 and  RHEL 7.2 will have 2.3.0

So that'd suggest a valid QEMU/KVM version of 2.1.0, vs our current
1.5.3 version.

Regards,
Daniel




--

Thanks,

Matt Riedemann
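As an aside, the minimum-version gating being discussed in this thread boils
down to a tuple comparison of version numbers; a minimal illustration (my own
sketch, not Nova's actual helper):

```python
MIN_LIBVIRT_VERSION = (1, 2, 8)  # candidate minimum discussed in the thread


def parse_version(text):
    """Turn a dotted version string like '1.2.17' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))


def has_min_version(found, minimum=MIN_LIBVIRT_VERSION):
    """True if the detected version meets the minimum.

    Tuple comparison is element-wise, so (1, 2, 17) >= (1, 2, 8)
    even though '1.2.17' < '1.2.8' as strings.
    """
    return parse_version(found) >= minimum
```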




[openstack-dev] [openstack-health][QA] We need your feedbacks

2016-04-29 Thread Masayuki Igawa
Hi,

Now we are making OpenStack-Health [1], a dashboard for
visualizing test results of OpenStack CI jobs. It is still heavily under
development, but it works and you can use it now.

We'd like to get your feedback to improve its UI/UX. So, if you
have any comments or feedback, please feel free to file a
bug [2] and/or submit patches [3].

This is a kind of advertisement; however, I think openstack-health
could be useful for all projects for tracking the development
status of the OpenStack projects.


[1] http://status.openstack.org/openstack-health/
[2] https://bugs.launchpad.net/openstack-health
[3] http://git.openstack.org/cgit/openstack/openstack-health

Best Regards,
-- Masayuki Igawa



Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-29 Thread Roman Podoliaka
Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; then, when the original node comes back
up, HAProxy starts opening connections to it again, but OpenStack
services may still have connections open to the backup in their
connection pools. So now you may have connections to multiple MySQL
nodes at the same time, exactly what you wanted to avoid by using
active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], when the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled  via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets 201 (and Neutron actually *commits* the DB
transaction) and then makes another REST API request to get a list of
floating IPs by address - the latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag' - it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as it's two different REST API requests, potentially served by
two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.
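Within a single connection, the session variable mentioned above can enforce
read-your-writes; a minimal sketch of wrapping a read with it (hypothetical
DB-API cursor wrapper, not code from Fuel or this thread; wsrep_sync_wait=1
covers READ statements, see the Galera docs for the other bit values):

```python
def read_your_writes(cursor, query, params=()):
    """Run a SELECT only after this node has applied all cluster write-sets."""
    # Ask Galera to wait for causal consistency before serving the read.
    cursor.execute("SET SESSION wsrep_sync_wait = 1")
    try:
        cursor.execute(query, params)
        return cursor.fetchall()
    finally:
        # Restore the default so later queries keep normal performance.
        cursor.execute("SET SESSION wsrep_sync_wait = 0")
```

Note this only helps when both the write and the read happen over the same
logical session; as the bug above shows, it cannot span two REST API requests
served by different backends.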

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2] 
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
>
> Hello.
> I wrote this paper [0] to demonstrate an approach how we can leverage a
> Jepsen framework for QA/CI/CD pipeline for OpenStack projects like Oslo
> (DB) or Trove, Tooz DLM and perhaps for any integration projects which
> rely on distributed systems. Although all tests are yet to be finished,
> results are quite visible, so I better off share early for a review,
> discussion and comments.
>
> I have similar tests done for the RabbitMQ OCF RA clusterers as well,
> although have yet wrote a report.
>
> PS. I'm sorry for so many tags I placed in the topic header, should I've
> used just "all" :) ? Have a nice weekends and take care!
>
> [0] https://goo.gl/VHyIIE
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
>



[openstack-dev] Multi-region openstack liberty setup

2016-04-29 Thread kiran vemuri UH
Hello All,

I am trying to deploy a multi-region Liberty OpenStack setup and I am
following http://docs.openstack.org/liberty/install-guide-ubuntu/ to bring
up each node. I was able to bring up region one successfully.

Now, while bringing up region two, when I try to create the keystone service
with OS_URL exported as http://regionone:35357/v3, it gives me an error:

"You are not authorized to perform the requested action:
identity:create_service (HTTP 403)"

anyone have any idea how to get past this? am i doing something wrong?

Thanks in advance


Thanks,
Kiran Vemuri


*kiran vemuri*
www.kiranvemuri.info
Tel: (832)-701-8281
kkvem...@uh.edu


[openstack-dev] [oslo] Backwards compatibility policy

2016-04-29 Thread Robert Collins
Yesterday, in 
https://etherpad.openstack.org/p/newton-oslo-backwards-compat-testing
we actually reached consensus - well, 7/10 for and noone against -
doing 1 cycle backwards compatibility in Oslo.

That means that after we release a thing, code using it must not break
(due to an Oslo change) until a release from the cycle *after* the
next cycle.

Concretely, add a thing at the start of L, we can break code using
that exact API etc at the start of N. We could try to be clever and
say 'Release of L, can break at the start of N', but 'release of L' is
now much fuzzier than it used to be - particularly with independent
releases of some things. So I'd rather have a really simple easy to
define and grep for rule than try to optimise the window. Happy if
someone else can come up with a way to optimise it.

This is obviously not binding on any non-oslo projects: os-brick,
python-neutronclient etc etc: though I believe folks in such projects
are generally cautious as well.

The key thing here is that we already have to do backwards compat to
move folk off of a poor API onto a new one: all this policy says is
that the grace period needs to cross release boundaries.
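In practice, a grace period like this means deprecating an API in one cycle
and removing it a cycle later; a generic sketch with stdlib warnings (my own
illustration; Oslo's actual tooling for this is debtcollector):

```python
import warnings


def new_api():
    """The replacement API that callers should migrate to."""
    return "result"


def old_api():
    """Kept working for the grace period, but warns on every use."""
    warnings.warn(
        "old_api is deprecated as of release L and will be removed in N; "
        "use new_api instead",
        DeprecationWarning, stacklevel=2)
    return new_api()
```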

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [kolla] Deploying kolla from a container?

2016-04-29 Thread Jamie Hannaford
But if I use eth0 (public internet) for network_interface and the VM's public 
IPv4 as the kolla_internal_vip_address, I'm not sure it'll work, because the 
prechecks do the following:

TASK: [prechecks | Checking if kolla_internal_vip_address and 
kolla_external_vip_address are not pingable from any node] ***
failed: [localhost] => (item=172.99.69.125) => {"changed": false, "cmd": 
["ping", "-c", "3", "172.99.69.125"], "delta": "0:00:02.002623", "end": 
"2016-04-29 16:42:27.298748", "failed": true, "failed_when_result": true, 
"item": "172.99.69.125", "rc": 0, "start": "2016-04-29 16:42:25.296125", 
"stdout_lines": ["PING 172.99.69.125 (172.99.69.125) 56(84) bytes of data.", 
"64 bytes from 172.99.69.125: icmp_seq=1 ttl=64 time=0.052 ms", "64 bytes from 
172.99.69.125: icmp_seq=2 ttl=64 time=0.060 ms", "64 bytes from 172.99.69.125: 
icmp_seq=3 ttl=64 time=0.056 ms", "", "--- 172.99.69.125 ping statistics ---", 
"3 packets transmitted, 3 received, 0% packet loss, time 1999ms", "rtt 
min/avg/max/mdev = 0.052/0.056/0.060/0.003 ms"], "warnings": []}
stdout: PING 172.99.69.125 (172.99.69.125) 56(84) bytes of data.
64 bytes from 172.99.69.125: icmp_seq=1 ttl=64 time=0.052 ms
64 bytes from 172.99.69.125: icmp_seq=2 ttl=64 time=0.060 ms

Can the neutron_external_interface be any user-created Neutron private network, 
or does it need access to the public internet?

If there's some kind of guide that explains the networking configuration for an 
all-in-one VM on public cloud, that would be great!

Jamie

From: Michał Jastrzębski 
Sent: 29 April 2016 15:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Deploying kolla from a container?

So network_interface is the interface that the APIs will bind to. That
means it's the network that OpenStack traffic travels on.

neutron_external_interface is the interface tenant networks will be placed on.
In your case you could use eth0 as network_interface and eth1 as
neutron_external_interface. If it has an IP that's ok... it simply won't be
used anywhere :)

On 28 April 2016 at 21:48, Jamie Hannaford
 wrote:
> Okay, that makes sense. For a normal Ubuntu VM (in my case on Rackspace
> cloud), what would the networking configuration look like? Usually eth0 is
> the interface for the public internet, eth1 is servicenet, and I have eth2
> as an arbitrary neutron private network.
>
>
> For the `network_interface` config value I used eth2 and for
> `kolla_internal_vip_address` a free VIP on its subnet - does that sound
> right?
>
>
> For `neutron_external_interface`, it says in your dev guide that you can use
> a veth pair when there's only a single public interface on a machine, which
> is the case here. Is there any documentation available for how to do that?
>
>
> Jamie
>
>
> 
> From: Michał Jastrzębski 
> Sent: 28 April 2016 04:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Deploying kolla from a container?
>
>
> Hey,
>
> So privileged containers are required by stuff like libvirt, and there isn't
> much we can do about it. Shared /run is required by openvswitch afair. We
> didn't try to run Kolla with swarm, but I'm afraid that privileged containers
> and network host are unfortunately a must. OpenStack wasn't really built to
> be run in containers, so we had to make sacrifices here and there. We are
> experimenting with kubernetes now, as it supports both priv containers and
> net host.
>
> Let me know if I can be of any help.
>
>
> Michal
>
> On Apr 27, 2016 5:27 PM, "Jamie Hannaford" 
> wrote:
>>
>> Hi,
>>
>>
>> Is it possible to deploy Kolla from a container rather than an
>> ubuntu/centos VM? I have a Swarm cluster, so I don't really want to leave
>> that ecosystem and start creating other cloud resources.
>>
>>
>> I got quite far with the dev guide, but the step which seems to throw a
>> spanner in the works is setting the MountFlags. You recommend either systemd
>> (15.04+) or `mount --make-shared /run` (14.04), both of which require a
>> container running in privileged mode, which I can't do on my swarm cluster.
>> Is there any workaround here?
>>
>>
>> Alternatively, is it possible to run a privileged container locally in
>> virtualbox and have it deploy to a remote Swarm cluster?
>>
>>
>> Any advice you have here would be really appreciated. Kolla looks like a
>> great project!
>>
>>
>> Jamie
>>
>>
>> 
>> Rackspace International GmbH a company registered in the Canton of Zurich,
>> Switzerland (company identification number CH-020.4.047.077-1) whose
>> registered office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland.
>> Rackspace International GmbH privacy policy can be viewed at
>> www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may
>> contain confidential or privileged 

[openstack-dev] [horizon][release] freeze timelines for horizon in newton

2016-04-29 Thread Amrith Kumar
In the Trove review of the release schedule this morning, and in the 
retrospective of the mitaka release process, one question which was raised was 
the linkage between projects like Trove and Horizon.

This came up in the specific context of projects like Trove (in the form of 
the trove-dashboard repository): late in the Mitaka cycle, a change in Horizon 
caused Trove to break very close to the feature freeze date [3].

So the question is whether we can assume that projects like Horizon will freeze 
in R-6 to ensure that (for example) Trove will freeze in R-5.

Thanks,

-amrith

[1] http://releases.openstack.org/newton/schedule.html
[2] https://review.openstack.org/#/c/311123/
[3] https://review.openstack.org/#/c/307221/


Re: [openstack-dev] [neutron] Social at the summit

2016-04-29 Thread Miguel Lavalle
A good number of us (about 20) got to Bangers around 6:15 and stayed there
until around 10. There were plenty of tables available. We had a great time

On Thu, Apr 28, 2016 at 7:00 PM, Zhou, Han  wrote:

> When I went there I didn't see any of our folks so I went to another
> dinner. Have fun!
>
>
> On Thursday, April 28, 2016, Nate Johnston 
> wrote:
>
>> A few of us made it in, but the number that they can fit is probably only
>> one or two more.
>>
>> --N.
>>
>> On Thursday, April 28, 2016, Kyle Mestery  wrote:
>>
>>> Folks, unfortunately this will have to be postponed. I was too busy
>>> doing a standup routine with Armando to find another place. Apologies.
>>>
>>> Thanks,
>>> Kyle
>>>
>>> On Apr 28, 2016, at 12:02 PM, Darek Smigiel 
>>> wrote:
>>>
>>> Unfortunately, I’ve got response from Bangers, that they’re fully booked
>>> for Today
>>>
>>> "Thank you for your interest in hosting a business dinner with us at
>>> Banger's tonight. Unfortunately we are booked with reservations this
>>> evening, so I am unable to accommodate your request. I wish you all the
>>> best in finding the perfect venue for your event.”
>>>
>>> Are we trying to find some other spot, or just keep Bangers and we will
>>> see?
>>>
>>> Darek
>>>
>>> On Apr 26, 2016, at 8:27 PM, Kyle Mestery  wrote:
>>>
>>> I propose we meet at Bangers on Rainey St. at 6PM. I don't have a
>>> reservation but it should be able to hold 50+ people. See y'all at 6PM
>>> Thursday!
>>>
>>> Kyle
>>>
>>> On Apr 25, 2016, at 1:07 PM, Kyle Mestery  wrote:
>>>
>>> OK, there is enough interest, I'll find a place on 6th Street for us
>>> and get a reservation for Thursday around 7 or so.
>>>
>>> Thanks folks!
>>>
>>> On Mon, Apr 25, 2016 at 12:30 PM, Zhou, Han  wrote:
>>> +1 :)
>>>
>>> Han Zhou
>>> Irc: zhouhan
>>>
>>>
>>> On Monday, April 25, 2016, Korzeniewski, Artur
>>>  wrote:
>>>
>>>
>>> Sign me up :)
>>>
>>> Artur
>>> IRC: korzen
>>>
>>> -Original Message-
>>> From: Darek Smigiel [mailto:smigiel.dari...@gmail.com]
>>> Sent: Monday, April 25, 2016 7:19 PM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Subject: Re: [openstack-dev] [neutron] Social at the summit
>>>
>>> Count me in!
>>> Will be good to meet all you guys!
>>>
>>> Darek (dasm) Smigiel
>>>
>>> On Apr 25, 2016, at 12:13 PM, Doug Wiegley
>>>  wrote:
>>>
>>>
>>> On Apr 25, 2016, at 12:01 PM, Ihar Hrachyshka 
>>> wrote:
>>>
>>> WAT???
>>>
>>> It was never supposed to be core only. Everyone is welcome!
>>>
>>>
>>> +2
>>>
>>> irony intended.
>>>
>>> Socials are not controlled by gerrit ACLs.  :-)
>>>
>>> doug
>>>
>>>
>>> Sent from my iPhone
>>>
>>> On 25 Apr 2016, at 11:56, Edgar Magana 
>>> wrote:
>>>
>>> Would you extend it to ex-cores?
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 4/25/16, 10:55 AM, "Kyle Mestery"  wrote:
>>>
>>> Ihar, Henry and I were talking and we thought Thursday night makes
>>> sense for a Neutron social in Austin. If others agree, reply on this
>>> thread
>>> and we'll find a place.
>>>
>>> Thanks!
>>> Kyle
>>>
>>> ___
>>> ___ OpenStack Development Mailing List (not for usage
>>> questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> 
>>> __ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> _
>>> _ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>>  OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>>> ?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing 

[openstack-dev] [neutron] dvr with ovs-dpdk.

2016-04-29 Thread Mooney, Sean K
Hi
If any of the DVR team are around I would love to
meet with ye to discuss how to make DVR work efficiently when using OVS with
the DPDK datapath.
I'll be around the Hilton all day and am currently in room 400, but if anyone
wants to meet up to
discuss this, let me know.

Regards
Seán

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] next min libvirt?

2016-04-29 Thread Sean Dague
On 04/29/2016 10:28 AM, Daniel P. Berrange wrote:
> On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
>> We've just landed the libvirt min to bump us up to 1.2.1 required. It's
>> probably a good time to consider the appropriate bump for Ocata.
>>
>> By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
>> (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.
> 
> By the time Ocata is released, I think it'll be valid to ignore
> RHEL-7.1, as we'll already be onto 7.3 at that time.
> 
>> My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
>> that NUMA support in libvirt (excepting the blacklists) and huge page
>> support is assumed on x86_64.
> 
> If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.
> 
> 
> We should also now consider our minimum QEMU versions. Jessie will have
> QEMU 2.1.0,  16.04 LTS will have 2.5.0 and  RHEL 7.2 will have 2.3.0
> 
> So that'd suggest a valid QEMU/KVM version of 2.1.0, vs our current
> 1.5.3 version.

Works for me. Would you like to propose that review? I updated the wiki
with what I found about current versions; any 7.3 info would also be
useful.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Discussing DLMs in the Unplugged Track

2016-04-29 Thread John Schwarz
Hi guys,

We'll meet at 11:00 at Salon C, Hilton.

John.

On Sun, Apr 24, 2016 at 8:59 AM, Joshua Harlow  wrote:
> I'll try to be there as well, since I know a little bit about tooz ;)
>
> -Josh
>
>
> Gary Kotton wrote:
>>
>> Hi,
>> I suggest that you speak with Kobi Samoray - he implemented this for the
>> vmware_nsx repository using tooz. Its pretty cool.
>> Thanks
>> Gary
>>
>>
>>
>>
>> On 4/23/16, 5:16 PM, "John Schwarz"  wrote:
>>
>>> Hi guys,
>>>
>>> I'm interested in discussing the DLM RFE [1] during the unplugged
>>> track on Friday morning, around 11:00am. A face-to-face talk about
>>> this with all parties interested will surely prove fruitful and will
>>> set us up on the track we want to go through in respect to this
>>> feature.
>>>
>>> You can find past discussions about this topic in previous driver
>>> meetings here: [2], [3].
>>>
>>> If you're interested in this topic, please come to talk :)
>>>
>>> [1]: https://bugs.launchpad.net/neutron/+bug/1552680
>>> [2]:
>>> http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-03-31-22.00.log.html#l-231
>>> [3]:
>>> http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-04-14-22.00.log.html#l-153
>>>
>>> --
>>> John Schwarz,
>>> Red Hat.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] next min libvirt?

2016-04-29 Thread Daniel P. Berrange
On Fri, Apr 29, 2016 at 10:13:42AM -0500, Sean Dague wrote:
> We've just landed the libvirt min to bump us up to 1.2.1 required. It's
> probably a good time to consider the appropriate bump for Ocata.
> 
> By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
> (1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.

By the time Ocata is released, I think it'll be valid to ignore
RHEL-7.1, as we'll already be onto 7.3 at that time.

> My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
> that NUMA support in libvirt (excepting the blacklists) and huge page
> support is assumed on x86_64.

If we ignore RHEL 7.1, we could go to 1.2.9 which is the min in Jessie.


We should also now consider our minimum QEMU versions. Jessie will have
QEMU 2.1.0,  16.04 LTS will have 2.5.0 and  RHEL 7.2 will have 2.3.0

So that'd suggest a valid QEMU/KVM version of 2.1.0, vs our current
1.5.3 version.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-29 Thread Dina Belova
Folks,

change https://review.openstack.org/#/c/310419/ proposed to
performance-docs. Thanks Ann, soon it'll appear on
http://docs.openstack.org/developer/performance-docs/#

Cheers,
Dina

On Wed, Apr 20, 2016 at 10:56 PM, Edgar Magana 
wrote:

> Indeed it will be a terrific contribution.
>
> Edgar
>
>
> On Apr 20, 2016, at 4:10 AM, Dina Belova  wrote:
>
> Folks,
>
> I think Ann's report is super cool and 100% worth publishing on OpenStack
> performance-docs.
> This is really good information to share community-wide.
>
> Ann, please think if you would like to contribute to performance
> documentation.
>
> Cheers,
> Dina
>
> On Wed, Apr 20, 2016 at 12:34 PM, Anna Kamyshnikova <
> akamyshnik...@mirantis.com> wrote:
>
>> Unfortunately, I won't attend summit in Austin, that is why I decided to
>> present these results in the mailing list instead.
>>
>> On Tue, Apr 19, 2016 at 7:29 PM, Edgar Magana 
>> wrote:
>>
>>> Is there any session presenting these results during the Summit? It will
>>> be awesome to have a session on this. I could extend the invite to the Ops
>>> Meet-up. We have a section on lightning talks where the team will be very
>>> interested in learning from your testing.
>>>
>>> Edgar
>>>
>>> From: Anna Kamyshnikova 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: Tuesday, April 19, 2016 at 5:30 AM
>>> To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev@lists.openstack.org>
>>> Subject: Re: [openstack-dev] [Neutron] L3 HA testing on scale
>>>
>>> >I would definitely like to see how these results are affected by
>>> >https://review.openstack.org/#/c/305774/ but understandably 49
>>> >physical nodes are hard to come by.
>>>
>>> Yes, I'm planning to check how situation will change with all recent
>>> fixes, but I will be able to do this in May or later.
>>>
>>> >About testing on scale it’s not so problematic because of the Cloud For
>>> All project.
>>> >Here [1] you can request for a multi node cluster which you can use to
>>> >perform tests. Exact requirements are specified on that website.
>>>
>>> [1] http://osic.org
>>>
>>> Thanks for pointing this!
>>>
>>> >It's a great report, thanks for sharing that! Do you plan to run similar
>>> >scale tests on other scenarios e.g. dvr?
>>>
>>> Thanks! I have testing L3 HA + DVR in plans.
>>>
>>> P. S.
>>>
>>> I've updated environment description in report with some details.
>>>
>>> On Tue, Apr 19, 2016 at 12:52 PM, Rossella Sblendido <
>>> rsblend...@suse.com> wrote:
>>>


 On 04/18/2016 04:15 PM, Anna Kamyshnikova wrote:
 > Hi guys!
 >
 > As a developer I use Devstack or multinode OpenStack installation (4-5
 > nodes) for work, but these are "abstract" environments, where you are
 > not able to perform some scenarios as your machine is not powerful
 > enough. But it is really important to understand the issues that real
 > deployments have.
 >
 > Recently I've performed testing of L3 HA on the scale environment 49
 > nodes (3 controllers, 46 computes) Fuel 8.0. On this environment I ran
 > shaker and rally tests and also performed some
 > manual destructive scenarios. I think that this is very important to
 > share these results. Ideally, I think that we should collect
 statistics
 > for different configurations each release to compare and check it to
 > make sure that we are heading the right way.
 >
 > The results of shaker and rally tests [1]. I put detailed report in
 > google doc [2]. I would appreciate all comments on these results.

 It's a great report, thanks for sharing that! Do you plan to run similar
 scale tests on other scenarios e.g. dvr?

 Rossella

 >
 > [1] - http://akamyshnikova.github.io/neutron-benchmark-results/
 > [2]
 > -
 https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing
 >
 > Regards,
 > Ann Kamyshnikova
 > Mirantis, Inc
 >
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ann Kamyshnikova
>>> Mirantis, Inc
>>>
>>>
>>> 

[openstack-dev] [nova] next min libvirt?

2016-04-29 Thread Sean Dague
We've just landed the libvirt min to bump us up to 1.2.1 required. It's
probably a good time to consider the appropriate bump for Ocata.

By that time our Ubuntu LTS will be 16.04 (libvirt 1.3.1), RHEL 7.1
(1.2.8). Additionally Debian Jessie has 1.2.9. RHEL 7.2 is at 1.2.17.

My suggestion is we set MIN_LIBVIRT_VERSION to 1.2.8. This will mean
that NUMA support in libvirt (excepting the blacklists) and huge page
support is assumed on x86_64.
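As background for anyone checking their own deployment: libvirt encodes the version returned by virConnectGetLibVersion() as a single integer (major * 1,000,000 + minor * 1,000 + micro), so a minimum-version check reduces to a tuple comparison. A minimal sketch follows — illustrative only, not Nova's actual code; the MIN_LIBVIRT_VERSION constant here is just the value proposed in this thread:

```python
# Proposed minimum from this thread (illustrative, not Nova's constant).
MIN_LIBVIRT_VERSION = (1, 2, 8)

def decode_libvirt_version(encoded):
    """Decode libvirt's integer encoding: major*1000000 + minor*1000 + micro."""
    return (encoded // 1000000, (encoded // 1000) % 1000, encoded % 1000)

def meets_minimum(encoded, minimum=MIN_LIBVIRT_VERSION):
    """Tuple comparison orders versions correctly across all components."""
    return decode_libvirt_version(encoded) >= minimum

print(decode_libvirt_version(1002008))  # (1, 2, 8)
print(meets_minimum(1002017))           # True  -- 1.2.17 (RHEL 7.2)
print(meets_minimum(1002001))           # False -- 1.2.1 (current minimum)
```

The same decoding applies to the QEMU version integers libvirt reports, which is why bumping both minimums can be handled with one comparison helper.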

And it keeps the ball moving forward.

Objections?

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] informal meetup during summit

2016-04-29 Thread Sergey Kraynev
I will be there in 15 min.
On 28 Apr 2016 at 7:14 PM, "Rico Lin" 
wrote:

> *Tomorrow 10:00 am at alta's cafe*
>
> See you guys there :)
>
> http://altascafe.com/
>
> map
> 
> On Apr 25, 2016 8:11 PM, "Steve Baker"  wrote:
>
>> We are now at Terry Black's BBQ for anyone who wants to join us. Thanks
>> for organising Rico! See you when you get there.
>>
>> - ZB
>>
>>
>> On 22/04/16 20:01, Rico Lin wrote:
>> > Let's settle down on with
>> >
>> > A meet up on Monday night 7:00pm
>> > At continentalclub <http://continentalclub.com>
>> > address : 1315 S Congress Ave
>> > Austin, TX 78704 http://continentalclub.com
>> > And
>> > Friday morning 10:00 venue:TBD
>> >
>> > Is the time and venue fine with everyone?
>> >
>> > Everyone are welcome :)
>> > Feel free to let me know if you're coming, just for easy pre-booking
>> > purpose:)
>> >
>> > On Apr 22, 2016 12:13 AM, "Zane Bitter" > > > wrote:
>> >
>> > On 20/04/16 13:00, Rico Lin wrote:
>> >
>> > Hi team
>> > Let plan for more informal meetup(relax) time! Let all heaters
>> > and any
>> > other projects can have fun and chance for technical discussions
>> > together.
>> >
>> > After discuss in meeting, we will have a pre-meetup-meetup on
>> Friday
>> > morning to have a cup of cafe or some food. Would like to ask if
>> > anyone
>> > knows any nice place for this meetup?:)
>> >
>> >
>> > According to
>> > https://www.openstack.org/summit/austin-2016/guide-to-austin/ if we
>> > line up at Franklin's at 7am then we can be eating barbeque by 11
>> > and still make it back in time for the afternoon meetup :))
>> >
>> > Also open for other chance for all can go out for a nice dinner
>> and
>> > beer. Right now seems maybe Monday or Friday night could be the
>> best
>> > candidate for this wonderful task, what all think about this? :)
>> >
>> >
>> > +1. I'll be around on Friday, but I imagine a few people will be
>> > leaving so Monday is probably better.
>> >
>> > cheers,
>> > Zane.
>> >
>> >
>>  __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > <
>> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-29 Thread Clint Byrum
Excerpts from Matt Riedemann's message of 2016-04-29 05:38:17 -0700:
> 
> So, we're all here in person this week (with 1 day left). The Nova team 
> has a meetup session all day (Salon A in the Hilton). Clint/Ed, can you 
> guys show up to that and bring these issues up in person so we can 
> actually talk through this? Preferably in the morning since people are 
> starting to leave after lunch.
> 

Unfortunately, I'm already back home in Los Angeles due to family
needs. But I do hope you all can have a discussion this morning, and
I'm happy to join via phone/skype/hangouts/etc. after 11:00am Austin time.

The reason I didn't bring any of this up while there was mostly that I
spent the time learning what is actually planned for Cells v2, which I
think I've now gotten wrong 3 times while learning it.
It has taken me another day or so to be able to articulate why I think
we may want to separate the concept of cells from the concept of scaling.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Deploying kolla from a container?

2016-04-29 Thread Michał Jastrzębski
So network_interface is the interface that the APIs will bind to. That
means it's the network that OpenStack traffic travels on.

neutron_external_interface is the interface that tenant networks will be
placed on. In your case you could use eth0 as the network interface and
eth1 as the neutron external one. If it has an IP that's ok... it simply
won't be used anywhere :)
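In globals.yml terms, that suggestion maps to something like the following fragment — the interface names follow the layout discussed in this thread and the VIP is a placeholder, not Kolla defaults:

```yaml
# kolla globals.yml fragment (illustrative values, not defaults)
network_interface: "eth0"                 # interface the OpenStack APIs bind to
neutron_external_interface: "eth1"        # interface tenant networks are placed on
kolla_internal_vip_address: "10.0.0.250"  # a free VIP on the management subnet
```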

On 28 April 2016 at 21:48, Jamie Hannaford
 wrote:
> Okay, that makes sense. For a normal Ubuntu VM (in my case on Rackspace
> cloud), what would the networking configuration look like? Usually eth0 is
> the interface for the public internet, eth1 is servicenet, and I have eth2
> as an arbitrary neutron private network.
>
>
> For the `network_interface` config value I used eth2 and for
> `kolla_internal_vip_address` a free VIP on its subnet - does that sound
> right?
>
>
> For `neutron_external_interface`, it says in your dev guide that you can use
> a veth pair when there's only a single public interface on a machine, which
> is the case here. Is there any documentation available for how to do that?
>
>
> Jamie
>
>
> 
> From: Michał Jastrzębski 
> Sent: 28 April 2016 04:36
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Deploying kolla from a container?
>
>
> Hey,
>
> So privileged containers are required by stuff like libvirt, and there isn't
> much we can do about it. Shared /run is required by openvswitch afair. We
> didn't try to run Kolla with swarm, but I'm afraid that privileged container
> and network host are unfortunately a must. OpenStack wasn't really built to
> be run in containers, so we had to make sacrifices here and there. We are
> experimenting with Kubernetes now, as it supports both priv containers and
> net host.
>
> Let me know if I can be of any help.
>
>
> Michal
>
> On Apr 27, 2016 5:27 PM, "Jamie Hannaford" 
> wrote:
>>
>> Hi,
>>
>>
>> Is it possible to deploy Kolla from a container rather than an
>> ubuntu/centos VM? I have a Swarm cluster, so I don't really want to leave
>> that ecosystem and start creating other cloud resources.
>>
>>
>> I got quite far with the dev guide, but the step which seems to throw a
>> spanner in the works is setting the MountFlags. You recommend either systemd
>> (15.04+) or `mount --make-shared /run` (14.04), both of which require a
>> container running in privileged mode, which I can't do on my swarm cluster.
>> Is there any workaround here?
>>
>>
>> Alternatively, is it possible to run a privileged container locally in
>> virtualbox and have it deploy to a remote Swarm cluster?
>>
>>
>> Any advice you have here would be really appreciated. Kolla looks like a
>> great project!
>>
>>
>> Jamie
>>
>>
>> 
>> Rackspace International GmbH a company registered in the Canton of Zurich,
>> Switzerland (company identification number CH-020.4.047.077-1) whose
>> registered office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland.
>> Rackspace International GmbH privacy policy can be viewed at
>> www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may
>> contain confidential or privileged information intended for the recipient.
>> Any dissemination, distribution or copying of the enclosed material is
>> prohibited. If you receive this transmission in error, please notify us
>> immediately by e-mail at ab...@rackspace.com and delete the original
>> message. Your cooperation is appreciated.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Meetup in Salon B - split between Magnum and Kolla

2016-04-29 Thread Steven Dake (stdake)
Hey folks,

The meetup space for this morning is split between two teams, so we will be
sharing a room in Salon B.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [qos] gathering Friday 9:30

2016-04-29 Thread Frances, Margaret
Hi Miguel,

Nate and I have a conflicting meeting, about FWaaS, at 9:30.  I’m reaching out 
to him now to see how we might fan out to get coverage.

Margaret

--
Margaret Frances
Eng 4, Prodt Dev Engineering



> On Apr 28, 2016, at 6:02 PM, Miguel Angel Ajo Pelayo  
> wrote:
> 
> Does governors ballroom in Hilton sound ok?
> 
> We can move to somewhere else if necessary.
> 



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Summary of design summit for Newton

2016-04-29 Thread Ken'ichi Ohmichi
Hi QA-team,

Thanks for joining the QA sessions at the OpenStack Summit in Austin.
They were interesting and helped set directions for moving forward in
this development cycle.
This is a summary of those sessions and the next steps; I hope it helps
our work.

1. Tempest: Cleanup
  https://etherpad.openstack.org/p/newton-qa-cruft-busters
  Main assignee / organizer: andreaf
  Action items:
  * Test resources as fixtures (jswarren, mtreinish, dwalleck)
  * Remote client debuggability (andreaf)
  * Test class hierarchy (dmellado)
  * Neutron OO wrappers (jlwhite, hockeynut)
  * Documentation review and cleanup (andreaf)
  * Test audit (Revital)
  * Cleanup client and manager aliases
  * Refactor test base class setup and teardown steps

2. Devstack
  https://etherpad.openstack.org/p/newton-qa-devstack-roadmap
  Action items:
  * Neutron cleanup (sc68cal)
  * Add tempest config hook to devstack plugin interface (call it
test-config) (mtreinish)
  * Talk to remove heat from devstack (oomichi)

3. openstack-health
  https://etherpad.openstack.org/p/newton-qa-openstack-health
  Main assignee / organizer: mtreinish, masayukig
  Action items:
  * Elastic recheck integration
  * Figure out check data solution
  * Add a bug link to the page (- low-hanging-fruit)

4. Tempest: Remaining CLI work
  https://etherpad.openstack.org/p/newton-qa-tempest-cli
  Main assignee / organizer: mtreinish
  Action items:
  * Run (dpaterson, dmellado)
- spec
- impl

5. Tempest: Negative Test
  https://etherpad.openstack.org/p/newton-qa-negative-testing
  Main assignee / organizer: oomichi
  Action items:
  * Doc update: Remove negative tests framework from Doc (luzC)
  * Write guideline of negative tests (hogepodge, dwalleck, and oomichi)
  * Will consider removing or keeping the negative test framework after the
    guideline is written (refstack doesn't use the framework)

6.Tempest: tempest-lib/plugin
  https://etherpad.openstack.org/p/newton-qa-tempest-lib-and-tempest-plugin
  Main assignee / organizer: gmann
  Action items:
  * Remove client manager use from credential providers and move to
lib (andreaf)
  * Need to push reno on pip site: Talk with doughellmann
  * Extending plugin for CLI Tools(cleanup) - spec - (gmann)
  * Migrate pending interfaces to /lib
create new etherpad and ML for volunteers (gmann)

7. Tempest: Client Manager Refactor
  Main assignee / organizer: andreaf
  https://review.openstack.org/#/c/92804/

8. Tempest: Test resource management
  Main assignee / organizer: oomichi, jswarren?
  https://review.openstack.org/#/c/173334/

9. Tempest: Microversion tests
  Main assignee / organizer: gmann
  Action items:
  * compute microversion tests - gmann (Low priority)

NOTE: Helpful ways for new volunteers to join the QA team
  * Join #openstack-qa
  * Triage bugs on LP

If you have questions, please send mail to me or the relevant "Main assignee / organizer".
Thanks for your help.

Ken Omichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-29 Thread Chris Dent

On Fri, 29 Apr 2016, Matt Riedemann wrote:

So, we're all here in person this week (with 1 day left). The Nova team has a 
meetup session all day (Salon A in the Hilton). Clint/Ed, can you guys show 
up to that and bring these issues up in person so we can actually talk 
through this? Preferably in the morning since people are starting to leave 
after lunch.


If this happens, I hope the notes will be excellent, because this
seems like a rather important topic and those of us not able to be in
the room...

It seems to me like there are several overlapping general topics
(below) which are, some of the time, conflated under the topic of
"cells", but at least some people think Cells is a symptom, not a
solution, other people think it is overreaching, and yet others think
it isn't reaching far enough.

* Which kind of distributed is Nova supposed to be?
* Are we scaling the right stuff?
* Are we isolating the right stuff?
* What's right/wrong with how we do state management?
* What's right/wrong with how we do messaging?
* What's right/wrong with how we do RPC (not the same thing, despite the
  current precedent)?
* Is there such a thing as an event? Should there be?

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-04-29 Thread Matt Riedemann

On 4/28/2016 8:54 PM, Edward Leafe wrote:

On Apr 28, 2016, at 5:35 PM, Clint Byrum  wrote:


- Vitess [2] is a proven technology that serves _every_ request to
 Youtube, and provides a familiar SQL interface with sharding built
 in. Shard by project ID and you can just use regular index semantics.
 Or if that's unacceptable (IMO it's fine since Vitess provides enough
 redundancy that one shard has plenty of failure-domain reliability),
 you can also use the built-in Hadoop support they have for doing
 exactly what has been described (merge sorting the result of cross-cell
 queries).


Thanks for that reference. I hadn’t heard of Vitess before, but it looks pretty 
capable.


So, I have to ask, why is cells v2 being pushed so hard without looking
outside OpenStack for actual existing solutions, which, IMO, are
_numerous_, battle hardened, and simpler than cells.


Cells are a great concept, but of course the devil is in the implementation. So 
if having cells is an advantage (and that is a separate discussion that already 
seems settled), then we should focus on the best way to implement it for 
(short-term) efficiency and (long-term) maintainability.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So, we're all here in person this week (with 1 day left). The Nova team 
has a meetup session all day (Salon A in the Hilton). Clint/Ed, can you 
guys show up to that and bring these issues up in person so we can 
actually talk through this? Preferably in the morning since people are 
starting to leave after lunch.
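For readers of the archive: the "shard by project ID" idea quoted above amounts to deterministically mapping every row to a shard by hashing its project ID, so project-scoped queries touch exactly one shard. A toy sketch follows — illustrative only; Vitess handles this internally, and nothing here is its actual API:

```python
import hashlib

NUM_SHARDS = 16  # example shard count, not a recommendation

def shard_for_project(project_id, num_shards=NUM_SHARDS):
    """Map a project ID onto a fixed shard deterministically."""
    digest = hashlib.md5(project_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# All rows for one project land on the same shard, so a query scoped
# to that project only needs to visit that shard.
print(shard_for_project("b04b7c0a-6b18-4e22-9c4a-3a9d1c6a2e01"))
print(shard_for_project("b04b7c0a-6b18-4e22-9c4a-3a9d1c6a2e01"))  # same value
```

Cross-project queries would still need a scatter-gather (merge-sort) across shards, which is the part the quoted mail notes Vitess supports via its Hadoop integration.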


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Interest in contributing to OpenStack

2016-04-29 Thread Djimeli Konrad
Hello,

My name is Djimeli Konrad, a second-year computer science student at
the University of Buea, Cameroon, Africa. I am a GSoC 2015 participant,
and I have also worked on some open-source projects on GitHub
(http://github.com/djkonro) and SourceForge
(https://sourceforge.net/u/konrado/profile/). I am very passionate
about cloud development, distributed systems, and virtualization, and I
would like to contribute to OpenStack.

I have gone through the OpenStack GSoC2016/Outreachy2016 projects
(https://wiki.openstack.org/wiki/Internship_ideas) and I am
interested in working on "Glance - Extended support for requests
library" and "Glance - Develop a python based GLARE (GLance Artifacts
REpository) client library and shell API". I would like to get some
details regarding the projects. Are these priority projects? What are
the expected outcomes? And what are some starting points?

I am proficient with C, C++ and Python, and I have successfully built
and set up OpenStack using DevStack.

Thanks
Konrad
https://www.linkedin.com/in/konrad-djimeli-69809b97

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-04-29 Thread Timofei Durakov
Hi,

At first sight there are no restrictions on the instance_path option.
Could you please provide some details about the use case?
I wonder whether this functionality would really be useful for
cloud operators, or whether we should just add a description to the
instance_path option, advising the same path on all compute nodes.

Timofey



On Fri, Apr 29, 2016 at 4:47 AM, Eli Qiao  wrote:

> hi team,
>
> Is there any requirement that all compute nodes' instance_dir should be the same?
>
> I recently hit an issue while doing live migration with the migrateToURI3
> method of the libvirt Python interface: if the
> source and dest hosts are configured with different instance_dir values,
> migration will fail, since migrateToURI3 requires a new XML, but we
> don't modify the instance_dir in the dest XML.
>
> I reported a bug at [1]_ and would like to get confirmation before spending
> effort working on it.
>
> [1] https://bugs.launchpad.net/nova/+bug/1576245
>
> Thanks.
>
> --
>
> Best Regards, Eli Qiao (乔立勇)
> Intel OTC China
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
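[Editor's sketch] A minimal illustration of the fix Eli is describing: before
calling migrateToURI3, rewrite the disk source paths in the domain XML so they
point at the destination host's instance_dir. This is not Nova's actual code;
`rewrite_instance_paths` is a hypothetical helper, and only Python's standard
ElementTree is used here so the idea stands alone without libvirt installed.

```python
# Hypothetical helper (not Nova's implementation): move disk file paths
# in a libvirt domain XML from the source instance_dir to the dest one.
import xml.etree.ElementTree as ET

def rewrite_instance_paths(domain_xml, src_dir, dest_dir):
    """Return domain XML with <disk><source file=...> paths moved to dest_dir."""
    root = ET.fromstring(domain_xml)
    for source in root.findall("./devices/disk/source"):
        path = source.get("file")
        if path and path.startswith(src_dir + "/"):
            # Keep the per-instance suffix, swap only the base directory.
            source.set("file", dest_dir + path[len(src_dir):])
    return ET.tostring(root, encoding="unicode")

# Example with a minimal domain snippet:
xml = (
    "<domain type='kvm'><devices>"
    "<disk type='file'><source file='/var/lib/nova/instances/uuid/disk'/></disk>"
    "</devices></domain>"
)
# Prints the XML with the disk path rebased under /opt/nova/instances
print(rewrite_instance_paths(xml, "/var/lib/nova/instances", "/opt/nova/instances"))
```

In real live-migration code the rewritten XML would then be passed to libvirt,
e.g. as the `libvirt.VIR_MIGRATE_PARAM_DEST_XML` parameter of
`dom.migrateToURI3(dest_uri, params, flags)` (assuming libvirt-python).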
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] TaaS meetup in Austin

2016-04-29 Thread Shigeta, Soichi

Hi,

> It will be held at 9:30 a.m., Meeting Room 400 (Level 4, Hilton Austin).
  This is just an announcement of the rendezvous point.
  We may move somewhere else if necessary.


From: Shigeta, Soichi [mailto:shigeta.soi...@jp.fujitsu.com]
Sent: Friday, April 29, 2016 6:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][taas] TaaS meetup in Austin


Hi,

   It will be held at 9:30 a.m., Meeting Room 400 (Level 4, Hilton Austin).

   Regards,
   Soichi


From: Anil Rao [mailto:anil@gigamon.com]
Sent: Friday, April 29, 2016 2:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][taas] TaaS meetup in Austin

Hi,

The TaaS team is planning to meet on Friday morning (29th Apr) in the Design 
Summit area to discuss project status, pending issues and new work items. 
Anyone interested in the project is welcome to join.

We will send out the time and location as soon as there is an agreement.

Regards,
Anil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-29 Thread Ma, Wen-Tao (Mike, HP Servers-PSC-BJ)
Hi Eli,

Copy that, I will consider that. Thanks.

Regards
Mike


Date: Thu, 28 Apr 2016 17:05:44 +0800
From: Eli Qiao
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
provision minion nodes
Message-ID: <5721d268.4040...@intel.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

hi Mike

Can you please also consider the effect on rebuilding/resizing a bay if you
want to support more than one Nova flavor?

There was some discussion at the Austin summit; check
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations

Thanks
Eli.

On 2016-04-28 16:52, Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) wrote:
>
> Hi Kai Qiang,
>
> Thanks for your comments; your consideration is very comprehensive. I
> think it is a good way to implement this feature.
>
> Regards
> Mike
>
> Date: Wed, 27 Apr 2016 17:38:32 +0800
> From: "Kai Qiang Wu"
> To: "OpenStack Development Mailing List \(not for usage questions\)"
> Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
> Message-ID: <201604271004.u3ra49v4008...@d23av04.au.ibm.com>
> Content-Type: text/plain; charset="gb2312"
>
> Hi Mike,
>
> Since right now we also support bay-update (node_count),
> I am thinking about the following cases:
>
> 1> baymodel-create has a default flavor, and extra labels specify the
> (other node flavors) requirements:
> if (other node flavors) count <= bay(node_count), the extra nodes
> would be created using the default flavor;
> if (other node flavors) count > bay(node_count), it should raise an
> error, since it is not clear which flavor to use.
>
> 2> magnum bay-update k8sbay replace node_count < existing node_count
> should be OK, same as the old behavior.
> If node_count > existing node_count, all new nodes would use the default
> flavor_id (if not, we need to find a better policy to handle that).
>
> Refer:
> https://github.com/openstack/magnum/blob/master/doc/source/dev/quickstart.rst
>
> What do you think?
>
> Thanks
>
> Best Wishes,
> ----------------------------------------------------------------
> Kai Qiang Wu (Kennan)
> IBM China System and Technology Lab, Beijing
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28 (Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District, Beijing, P.R. China
> 100193
> ----------------------------------------------------------------
> Follow your heart. You are miracle!
>
> From: "Ma, Wen-Tao (Mike, HP Servers-PSC-BJ)"
> To: "openstack-dev@lists.openstack.org"
> Date: 27/04/2016 03:10 pm
> Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
>
> Hi Hongbin,
>
> Thanks very much. It's a good suggestion; I think using labels for the
> extra flavors is a good way. But I notice that there is no --node-count
> parameter in baymodel, so I think it doesn't need to specify the
> minion-flavor-0 count via --node-count.
>
> We can specify all of the flavor ids and count ratios in the labels.
> Magnum will check the minion node count against this ratio from the
> labels when creating a bay with a specified total minion node count. If
> the node-count in bay-create doesn't match the flavor ratio, it will
> return a ratio mismatch error message. If there is no multi-flavor-ratio
> key in the labels, it will just use minion-flavor-0 to create 10 minion
> nodes.
>
> $ magnum baymodel-create --name k8sbaymodel --flavor-id minion-flavor-0
>   --labels multi-flavor-ratio=minion-flavor-0:3,minion-flavor-1:5,minion-flavor-2:2
> $ magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 10
>
> What do you think about it?
>
> > -Original Message-
> > From: Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) [mailto:wentao...@hpe.com]
> > Sent: April-26-16 3:01 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> > provision minion nodes
> >
> > Hi Hongbin, Ricardo
Re: [openstack-dev] [neutron][taas] TaaS meetup in Austin

2016-04-29 Thread Shigeta, Soichi

Hi,

   It will be held at 9:30 a.m., Meeting Room 400 (Level 4, Hilton Austin).

   Regards,
   Soichi


From: Anil Rao [mailto:anil@gigamon.com]
Sent: Friday, April 29, 2016 2:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][taas] TaaS meetup in Austin

Hi,

The TaaS team is planning to meet on Friday morning (29th Apr) in the Design 
Summit area to discuss project status, pending issues and new work items. 
Anyone interested in the project is welcome to join.

We will send out the time and location as soon as there is an agreement.

Regards,
Anil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]call for contributors for Tricircle project - for massive distributed edge clouds and large scale cloud.

2016-04-29 Thread Shinobu Kinjo
Hi Chaoyi,

At the meeting today, we had a great update from you.
I won't write any details here, but I'm quite happy with it.
I believe that we will attract awesome contributors and that Tricircle
will become one of the important projects soon.

Cheers,
Shinobu


On Mon, Apr 25, 2016 at 1:59 PM, joehuang  wrote:
> Hi,
>
>
>
> Two sessions to learn more about Tricircle:
>
>
>
> 1. lightning talks about Tricircle use case: modularized capacity expansion
> in OpenStack based public cloud
> Monday, April 25 14:50 – 16:20 in the Hilton room Salon E
> a. https://www.openstack.org/summit/austin-2016/summit-schedule/events/9499
> b. https://www.openstack.org/summit/austin-2016/venues/#venue=39
> c. https://etherpad.openstack.org/p/AUS-ops-lightning-talks
>
>
>
> 2. Tricircle in a nutshell – 15 minutes talk in Massively Distributed Clouds
> Working Group Inaugural Meeting
>
> Tuesday, April 26, 4:40pm-6:10pm (Hilton, Level 6, Salon J)
>
> a.https://www.openstack.org/summit/austin-2016/summit-schedule/events/9533
>
> b. https://etherpad.openstack.org/p/massively-distributed-clouds
>
> BR
>
> Chaoyi Huang(joehuang)
>
> 
> From: joehuang
> Sent: 19 April 2016 14:37
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev][all]call for contributors for Tricircle project -
> for massive distributed edge clouds and large scale cloud.
>
> Hello,
>
>
>
> There are lots of challenges in massively distributed edge clouds. Enterprises,
> as customers of large public clouds, have already asked for the cloud to be put
> close to the end users at their distributed branches, to fulfill user
> experience expectations for bandwidth- and latency-sensitive applications like
> CAD modeling; the experience of running them in a remote centralized cloud is
> not good. The common problem domain is how to address the challenges in a
> cloud that consists of a lot of OpenStack instances, in one site or
> distributed across multiple sites:
>
>
>
> For example:
>
> Tenant-level L2/L3 networking across OpenStack instances, for isolation of a
> tenant's E-W traffic
>
> Tenant-level volume/VM/object backup/migration/distribution across OpenStack
> instances
>
> Distributed image management: if a user creates an image from a VM/volume, how
> to use the image in another OpenStack instance in another site
>
> Distributed quota management: how to control quota if a tenant's resources
> spread into multiple OpenStack instances across multiple sites
> ...
>
>
>
> All these challenges and requirements have already arisen in production
> clouds built upon OpenStack.
>
>
>
> The Tricircle project tries to provide an OpenStack API gateway and networking
> automation to allow multiple OpenStack instances, spanning one site or
> multiple sites or a hybrid cloud, to be managed as a single OpenStack
> cloud.
>
>
>
> None of the challenges mentioned above have been addressed in Tricircle yet;
> we hope to address them over the next several cycles, and we need your
> contribution.
>
>
>
> All source code in Tricircle has been written from scratch since June 2015,
> decoupled from the current Nova/Cinder/Neutron services; that means Tricircle
> is developed loosely coupled with current OpenStack. The code base is still
> very small, about 25 kloc (including test cases), and we are working on the
> first release, so it's easy to get on board.
>
>
>
> Compared to other broker methods for multi-OpenStack management, Tricircle
> provides the OpenStack API and works seamlessly together with software like
> Heat, Magnum, Murano, CLI, Horizon, SDK, etc.
>
>
>
> Tricircle will be discussed in two sessions at the OpenStack Austin Summit (an
> NFV cloud is multi-site in nature, but that doesn't mean Tricircle is only for
> NFV):
>
>
>
> 1) multisite-openstack-for-nfv-bridging-the-gap:
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/7480/multisite-openstack-for-nfv-bridging-the-gap
>
> 2) NFV Orchestration - Project Landscape:
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/8468
>
>
>
> How to get involved in Tricircle quickly:
>
> 1.  Read wiki of Tricircle: https://wiki.openstack.org/wiki/Tricircle
>
> 2.  Register a BP or report a bug in https://launchpad.net/tricircle; find
> items not yet implemented in
> https://etherpad.openstack.org/p/TricircleToDo; submit your patch for review
> just like any other OpenStack project.
>
> 3.  Regular weekly meeting in #openstack-meeting every Wednesday
> starting at 13:00 UTC
>
> 4.  openstack-dev mailing list discussion, with the [Tricircle] tag in the
> mail subject
>
> 5.  You can follow the framework blueprint to read the source code:
> https://blueprints.launchpad.net/tricircle/+spec/implement-stateless the
> design doc for the blueprint is
> https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit?usp=sharing
>
>
>
> Hope this mail will help you to join Tricircle :)
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>