Re: [openstack-dev] [nova] Migration progress

2016-02-03 Thread Daniel P. Berrange
On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> > Hello everyone,
> > 
> > On the yesterday's live migration meeting we had concerns that interval of
> > writing migration progress to the database is too short.
> > 
> > Information about migration progress will be stored in the database and
> > exposed through the API (/servers//migrations/). In current
> > proposition [1] migration progress will be updated every 2 seconds. It
> > basically means that every 2 seconds a call through RPC will go from compute
> > to conductor to write migration data to the database. In case of parallel
> > live migrations each migration will report progress by itself.
> > 
> > Isn't 2 seconds interval too short for updates if the information is exposed
> > through the API and it requires RPC and DB call to actually save it in the
> > DB?
> > 
> > Our default configuration allows only for 1 concurrent live migration [2],
> > but it might vary between different deployments and use cases as it is
> > configurable. Someone might want to trigger 10 (or even more) parallel live
> > migrations and each might take even a day to finish in case of block
> > migration. Also if deployment is big enough rabbitmq might be fully-loaded.
> > I'm not sure whether updating each migration every 2 seconds makes sense in
> > this case. On the other hand it might be hard to observe fast enough that
> > migration is stuck if we increase this interval...
> 
> Do we have any actual data that this is a real problem. I have a pretty hard
> time believing that a database update of a single field every 2 seconds is
> going to be what pushes Nova over the edge into a performance collapse, even
> if there are 20 migrations running in parallel, when you compare it to the
> amount of DB queries & updates done across other areas of the code for pretty
> much every singke API call and background job.

Also note that progress is rounded to the nearest integer. So even if the
migration runs all day, there is a maximum of 100 possible changes in value
for the progress field, so most of the updates should turn in to no-ops at
the database level.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-03 Thread Daniel P. Berrange
On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> Hello everyone,
> 
> On the yesterday's live migration meeting we had concerns that interval of
> writing migration progress to the database is too short.
> 
> Information about migration progress will be stored in the database and
> exposed through the API (/servers//migrations/). In current
> proposition [1] migration progress will be updated every 2 seconds. It
> basically means that every 2 seconds a call through RPC will go from compute
> to conductor to write migration data to the database. In case of parallel
> live migrations each migration will report progress by itself.
> 
> Isn't 2 seconds interval too short for updates if the information is exposed
> through the API and it requires RPC and DB call to actually save it in the
> DB?
> 
> Our default configuration allows only for 1 concurrent live migration [2],
> but it might vary between different deployments and use cases as it is
> configurable. Someone might want to trigger 10 (or even more) parallel live
> migrations and each might take even a day to finish in case of block
> migration. Also if deployment is big enough rabbitmq might be fully-loaded.
> I'm not sure whether updating each migration every 2 seconds makes sense in
> this case. On the other hand it might be hard to observe fast enough that
> migration is stuck if we increase this interval...

Do we have any actual data that this is a real problem. I have a pretty hard
time believing that a database update of a single field every 2 seconds is
going to be what pushes Nova over the edge into a performance collapse, even
if there are 20 migrations running in parallel, when you compare it to the
amount of DB queries & updates done across other areas of the code for pretty
much every singke API call and background job.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-03 Thread Ton Ngo

Thanks Adrian and Magnum team for having me as part of the team.  It has
been a lot of fun working with everyone and I look forward to continuing
the great progress of the project.
Ton,



From:   Jay Lau 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   02/02/2016 08:28 PM
Subject:Re: [openstack-dev] [Magnum] New Core Reviewers



Welcome Ton and Egor!!

On Wed, Feb 3, 2016 at 12:04 AM, Adrian Otto 
wrote:
  Thanks everyone for your votes. Welcome Ton and Egor to the core team!

  Regards,

  Adrian

  > On Feb 1, 2016, at 7:58 AM, Adrian Otto 
  wrote:
  >
  > Magnum Core Team,
  >
  > I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core
  Reviewers. Please respond with your votes.
  >
  > Thanks,
  >
  > Adrian Otto


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Migration progress

2016-02-03 Thread Koniszewski, Pawel
Hello everyone,

On the yesterday's live migration meeting we had concerns that interval of
writing migration progress to the database is too short.

Information about migration progress will be stored in the database and
exposed through the API (/servers//migrations/). In current
proposition [1] migration progress will be updated every 2 seconds. It
basically means that every 2 seconds a call through RPC will go from compute
to conductor to write migration data to the database. In case of parallel
live migrations each migration will report progress by itself.

Isn't 2 seconds interval too short for updates if the information is exposed
through the API and it requires RPC and DB call to actually save it in the
DB?

Our default configuration allows only for 1 concurrent live migration [2],
but it might vary between different deployments and use cases as it is
configurable. Someone might want to trigger 10 (or even more) parallel live
migrations and each might take even a day to finish in case of block
migration. Also if deployment is big enough rabbitmq might be fully-loaded.
I'm not sure whether updating each migration every 2 seconds makes sense in
this case. On the other hand it might be hard to observe fast enough that
migration is stuck if we increase this interval...

What's worth mentioning is that during Nova Midcycle we had discussion about
refactoring live migration flow. Proposition was to get rid of
compute->compute communication during live migration and make conductor
conduct whole process. Nikola proposed that in such case nova-compute should
be stateful and store migration status on compute node [3]. We might be able
to use state stored on compute node and get rid of RPC-DB queries every 2
seconds.

Thoughts?

Kind Regards,
Pawel Koniszewski

[1] https://review.openstack.org/#/c/258813
[2]
http://docs.openstack.org/liberty/config-reference/content/list-of-compute-c
onfig-options.html
[3] https://etherpad.openstack.org/p/mitaka-nova-midcycle


smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-03 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: 03 February 2016 10:49
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Feng, Shaohe
> Subject: Re: [openstack-dev] [nova] Migration progress
> 
> On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> > On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> > > Hello everyone,
> > >
> > > On the yesterday's live migration meeting we had concerns that
> > > interval of writing migration progress to the database is too short.
> > >
> > > Information about migration progress will be stored in the database
> > > and exposed through the API (/servers//migrations/). In
> > > current proposition [1] migration progress will be updated every 2
> > > seconds. It basically means that every 2 seconds a call through RPC
> > > will go from compute to conductor to write migration data to the
> > > database. In case of parallel live migrations each migration will report
> progress by itself.
> > >
> > > Isn't 2 seconds interval too short for updates if the information is
> > > exposed through the API and it requires RPC and DB call to actually
> > > save it in the DB?
> > >
> > > Our default configuration allows only for 1 concurrent live
> > > migration [2], but it might vary between different deployments and
> > > use cases as it is configurable. Someone might want to trigger 10
> > > (or even more) parallel live migrations and each might take even a
> > > day to finish in case of block migration. Also if deployment is big enough
> rabbitmq might be fully-loaded.
> > > I'm not sure whether updating each migration every 2 seconds makes
> > > sense in this case. On the other hand it might be hard to observe
> > > fast enough that migration is stuck if we increase this interval...
> >
> > Do we have any actual data that this is a real problem. I have a
> > pretty hard time believing that a database update of a single field
> > every 2 seconds is going to be what pushes Nova over the edge into a
> > performance collapse, even if there are 20 migrations running in
> > parallel, when you compare it to the amount of DB queries & updates
> > done across other areas of the code for pretty much every singke API call
> and background job.

As a data point: when we were doing live migrations in HP public cloud for 
rolling updates we were maintaining approximately 150 concurrent migrations 
through the process. At 2s intervals that would make approx. 75 updates per 
second. We don't feel that would have been a problem.

We also spoke to Michael Still and he thought it wouldn't be a problem for Rack 
Space (remembering they have cells). Having said that I have no idea of numbers 
I their case and would rather they spoke for themselves. In this thread.



> 
> Also note that progress is rounded to the nearest integer. So even if the
> migration runs all day, there is a maximum of 100 possible changes in value
> for the progress field, so most of the updates should turn in to no-ops at the
> database level.
> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Make a separate library from /neutron/agent/ovsdb

2016-02-03 Thread Petr Horacek
Hello,

would it be possible to change /neutron/agent/ovsdb package into a
separate library, independent on openstack? It's a pity that there is
no high-level python library for ovs handling available and your
implementation seems to be great. The module is dependent only or some
openstack utils, would be packaging a problem?

Thanks,
Petr

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] changes to keystone-core!

2016-02-03 Thread Marek Denis

++!

On 01.02.2016 01:26, Lance Bragstad wrote:

++

I'm happy to see this go through! Samuel and Dave have been helping me 
out a lot lately. Both make great additions to the team!


On Thu, Jan 28, 2016 at 9:12 AM, Brad Topol > wrote:


CONGRATULATIONS Dave and Samuel. Very well deserved!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646 
Internet: bto...@us.ibm.com 
Assistant: Kendra Witherspoon (919) 254-0680


Inactive hide details for "Steve Martinelli" ---01/27/2016
05:17:12 PM---Hello everyone! We've been talking about this for a
lo"Steve Martinelli" ---01/27/2016 05:17:12 PM---Hello everyone!
We've been talking about this for a long while, and I am very
pleased to

From: "Steve Martinelli" >
To: openstack-dev >
Date: 01/27/2016 05:17 PM
Subject: [openstack-dev] [keystone] changes to keystone-core!





Hello everyone!

We've been talking about this for a long while, and I am very
pleased to announce that at the midcycle we have made changes to
keystone-core. The project has grown and our review queue grows
ever longer. Effective immediately, we'd like to welcome the
following new Guardians of the Gate to keystone-core:

+ Dave Chen (davechen)
+ Samuel de Medeiros Queiroz (samueldmq)

Happy code reviewing!

Steve Martinelli
OpenStack Keystone Project Team

Lead__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Marek Denis
[marek.de...@cern.ch]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting minutes

2016-02-03 Thread Afek, Ifat (Nokia - IL)
Hi,

You can find the meeting minutes of Vitrage meeting: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-03-09.00.html
 
Meeting log: 
http://eavesdrop.openstack.org/meetings/vitrage/2016/vitrage.2016-02-03-09.00.log.html
 

See you next week,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-03 Thread Paul Carlton

On 03/02/16 10:49, Daniel P. Berrange wrote:

On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:

On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:

Hello everyone,

On the yesterday's live migration meeting we had concerns that interval of
writing migration progress to the database is too short.

Information about migration progress will be stored in the database and
exposed through the API (/servers//migrations/). In current
proposition [1] migration progress will be updated every 2 seconds. It
basically means that every 2 seconds a call through RPC will go from compute
to conductor to write migration data to the database. In case of parallel
live migrations each migration will report progress by itself.

Isn't 2 seconds interval too short for updates if the information is exposed
through the API and it requires RPC and DB call to actually save it in the
DB?

Our default configuration allows only for 1 concurrent live migration [2],
but it might vary between different deployments and use cases as it is
configurable. Someone might want to trigger 10 (or even more) parallel live
migrations and each might take even a day to finish in case of block
migration. Also if deployment is big enough rabbitmq might be fully-loaded.
I'm not sure whether updating each migration every 2 seconds makes sense in
this case. On the other hand it might be hard to observe fast enough that
migration is stuck if we increase this interval...

Do we have any actual data that this is a real problem. I have a pretty hard
time believing that a database update of a single field every 2 seconds is
going to be what pushes Nova over the edge into a performance collapse, even
if there are 20 migrations running in parallel, when you compare it to the
amount of DB queries & updates done across other areas of the code for pretty
much every singke API call and background job.

Also note that progress is rounded to the nearest integer. So even if the
migration runs all day, there is a maximum of 100 possible changes in value
for the progress field, so most of the updates should turn in to no-ops at
the database level.

Regards,
Daniel

I agree with Daniel, these rpc and db access ops are a tiny percentage
of the overall load on rabbit and mysql and properly configured these
subsystems should have no issues with this workload.

One correction, unless I'm misreading it, the existing
_live_migration_monitor code updates the progress field of the instance
record every 5 seconds.  However this value can go up and down so
an infinate number of updates are possible?

However, the issue raised here is not with the existing implementation
but with the proposed change
https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
This add a save() operation on the migration object every 2 seconds

Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Office:+44 (0)117 316 2189
Email:mailto:paul.carlt...@hpe.com
irc:  paul-carlton2

Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, 
Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Logging and traceback at same time.

2016-02-03 Thread Julien Danjou
On Wed, Feb 03 2016, Khayam Gondal wrote:

> Is there a way to do logging the information and traceback at the same
> Let me know if this is correct way?

Pass exc_info=True in your LOG.() call.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Integrating physical appliance into virtual infrastructure

2016-02-03 Thread Kevin Benton
I think you will want to write an ML2 mechanism driver for your appliance
so it can receive all port/network/subnet information that you can forward
onto your appliance.

To automatically have Neutron's l2pop mechanism setup tunnels on the agent
to point to your appliance, you need to register the appliance in the
Neutron db as an agent with the IP address of the appliance and a unique
hostname to identify it. Then whenever you create a neutron port with the
hostname of your appliance, the other L2 agents in Neutron will be informed
by L2pop of the IP address of your appliance to setup a forwarding entry to
it.


On Mon, Feb 1, 2016 at 10:00 AM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>
>
> L2GW seems like a good option for bridging/linking /integrating physical
> appliances which does not support overlay technology (say VXLAN) natively.
>
>
>
> In my case the physical appliance supports VXLAN natively, meaning it can
> act as a VTEP. The appliance is capable of decapsulating packets that are
> received and encapsulating packets that are sent (looking at the forwarding
> table).
>
>
>
> Now we want to add the capability in the  middleware/controller so that
> forwarding tables in the appliance can be populated and also let the rest
> of infrastructure know about the physical appliance (VTEP) and its L2 info?
>
>
>
> Is it possible to achieve this?
>
>
>
> Thanks,
>
> Vijay V.
>
>
>
>
>
>
>
> *From:* Gal Sagie [mailto:gal.sa...@gmail.com]
> *Sent:* 01 February 2016 19:38
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Neutron] Integrating physical appliance
> into virtual infrastructure
>
>
>
> There is a project that aims at solving your use cases (at least from a
> general view)
>
> Its called L2GW and uses OVSDB Hardware VTEP schema (which is supported by
> many physical appliances for switching capabilities)
>
>
>
> Some information: https://wiki.openstack.org/wiki/Neutron/L2-GW
>
>
>
> There are also other possible solutions, depending what you are trying to
> do and what is the physical applicance job.
>
>
>
>
>
>
>
> On Mon, Feb 1, 2016 at 3:44 PM, Vijay Venkatachalam <
> vijay.venkatacha...@citrix.com> wrote:
>
> Hi ,
>
>
>
> How to integrate a physical appliance into the virtual OpenStack
> infrastructure (with L2 population)? Can you please point me to any
> relevant material.
>
>
>
> We want to add the capability to “properly” schedule the port on the
> physical appliance, so that the rest of the virtual infrastructure knows
> that a new port is scheduled in the physical appliance.  How to do this?
>
>
>
> We manage the appliance through a middleware. Today, when it creates a
> neutron port, that is to be hosted on the physical appliance, the port is
> dangling.  Meaning, the virtual infrastructure does not know where this
> port is hosted/implemented. How to fix this?
>
>
>
> Also, we want the physical appliance plugged into L2 population mechanism.
> Looks like the L2 population driver is distributing L2 info to all virtual
> infrastructure nodes where a neutron agent is running. Can we leverage this
> framework? We don’t want to run the neutron agent in the physical
> appliance, can it run in the middle ware?
>
>
>
> Thanks,
>
> Vijay V.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding Ironic node kills devstack configuration

2016-02-03 Thread Pavlo Shchelokovskyy
Hi Pavel,

1. Unfortunately no, it is not possible to "update" a running devstack by
executing stack.sh, everything is created from scratch - e.g. empty DB is
created and all migration scripts are being run to ensure that the DB is in
the state required by the component.
2. Could you please post (to pastebin for example) your full local.conf? It
is not clear - are you trying to deploy a multi-node devstack with ironic
on one node, or just deploy an all-in-one with Ironic?

For example, here is my working devstack config (all-in-one with Ironic)
for master branch (reinstalled with it about two days ago). May be you find
some inspiration there :)

http://paste.openstack.org/show/485815/

Cheers,

On Tue, Feb 2, 2016 at 1:43 PM Pavel Fedin  wrote:

>  Hello again!
>
>  Now i am trying to add Ironic-driven compute note to existing devstack.
> Below is my local.conf for it. When i run stack.sh, it does
> something, then starts to reinitialize projects, users, groups, tenants,
> etc, effectively destroying my existing configuration.
> After that it dies with "cannot connect to... something", and my system is
> in non-working state, ready for reinstalling from
> scratch.
>  Actually, two questions:
> 1. Is it possible to tell stack.sh to keep old configuration? Rebuilding
> it every time is very tedious task.
> 2. Why does my compute node wipe everything out? Because i enable 'key'
> (keystone ?) service? But ironic forces me to do it ("key"
> service is required by ironic). So how do i install the thing correctly ?
>
> --- cut ---
> [[local|localrc]]
> HOST_IP=10.51.0.5
> SERVICE_HOST=10.51.0.4
> MYSQL_HOST=$SERVICE_HOST
> RABBIT_HOST=$SERVICE_HOST
> GLANCE_HOSTPORT=$SERVICE_HOST:9292
> ADMIN_PASSWORD=nfv
> DATABASE_PASSWORD=$ADMIN_PASSWORD
> RABBIT_PASSWORD=$ADMIN_PASSWORD
> SERVICE_PASSWORD=$ADMIN_PASSWORD
> DATABASE_TYPE=mysql
>
> # Services that a compute node runs
> ENABLED_SERVICES=n-cpu,rabbit,q-agt
>
> ## Open vSwitch provider networking options
> PHYSICAL_NETWORK=public
> OVS_PHYSICAL_BRIDGE=br-ex
> PUBLIC_INTERFACE=ens33
> Q_USE_PROVIDER_NETWORKING=True
> Q_L3_ENABLED=False
>
> # Enable Ironic plugin
> enable_plugin ironic git://git.openstack.org/openstack/ironic
>
> enable_service key
> enable_service glance
>
> # Enable Swift for agent_* drivers
> enable_service s-proxy
> enable_service s-object
> enable_service s-container
> enable_service s-account
>
> # Swift temp URL's are required for agent_* drivers.
> SWIFT_ENABLE_TEMPURLS=True
>
> # Create 3 virtual machines to pose as Ironic's baremetal nodes.
> IRONIC_VM_COUNT=2
> IRONIC_VM_SSH_PORT=22
> IRONIC_BAREMETAL_BASIC_OPS=True
> IRONIC_DEPLOY_DRIVER_ISCSI_WITH_IPA=True
>
> # Enable Ironic drivers.
> IRONIC_ENABLED_DRIVERS=fake,agent_ssh,agent_ipmitool,pxe_ssh,pxe_ipmitool
>
> # Change this to alter the default driver for nodes created by devstack.
> # This driver should be in the enabled list above.
> IRONIC_DEPLOY_DRIVER=pxe_ssh
>
> # The parameters below represent the minimum possible values to create
> # functional nodes.
> IRONIC_VM_SPECS_RAM=1024
> IRONIC_VM_SPECS_DISK=10
>
> # Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
> IRONIC_VM_EPHEMERAL_DISK=0
>
> # To build your own IPA ramdisk from source, set this to True
> IRONIC_BUILD_DEPLOY_RAMDISK=False
>
> VIRT_DRIVER=ironic
> --- cut ---
>
> Kind regards,
> Pavel Fedin
> Senior Engineer
> Samsung Electronics Research center Russia
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with usecasesfor multiple L3 backends

2016-02-03 Thread Germy Lure
People need high performance but also xaaS integrated, slow and free but
also packet logged. And lots of back-ends have multiple characters.
According to the example described in this thread, those characters really
should be modeled as different flavors.
Indeed, I think people just want to know what features can those backends
provide and chose one of them to deploy her or his business. Flavor
sub-system can help people to choose easier.
So flavor should be understood by user, any change that facing to user
should introduce a NEW flavor. One vendor for one flavor, even every
version of a vendor for one flavor.

IMHO, no interruption, no rescheduling. Everything should be ready when
user creates a router, according to a flavor gets from neutron.

Thanks.
Germy


On Wed, Feb 3, 2016 at 12:01 PM, rzang  wrote:

> Is it possible that the third router interface that the user wants to add
> will bind to a provider network that the chosen driver (for bare metal
> routers) can not access physically? Even though the chosen driver has the
> capability for that type of network? Is it a third dimension that needs to
> take into consideration besides flavors and capabilities? If this case is
> possible, it is a problem even we restrict all the drivers in the same
> flavor should have the same capability set.
>
>
> -- Original --
> *From: * "Kevin Benton";;
> *Send time:* Wednesday, Feb 3, 2016 9:43 AM
> *To:* "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [neutron] - L3 flavors and issues with
> usecasesfor multiple L3 backends
>
> So flavors are for routers with different behaviors that you want the user
> to be able to choose from (e.g. High performance, slow but free, packet
> logged, etc). Multiple drivers are for when you have multiple backends
> providing the same flavor (e.g. The high performance flavor has several
> drivers for various bare metal routers).
> On Feb 2, 2016 18:22, "rzang"  wrote:
>
>> What advantage can we get from putting multiple drivers into one flavor
>> over strictly limit one flavor one driver (or whatever it is called).
>>
>> Thanks,
>> Rui
>>
>> -- Original --
>> *From: * "Kevin Benton";;
>> *Send time:* Wednesday, Feb 3, 2016 8:55 AM
>> *To:* "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] [neutron] - L3 flavors and issues with
>> usecases for multiple L3 backends
>>
>> Choosing from multiple drivers for the same flavor is scheduling. I
>> didn't mean automatically selecting other flavors.
>> On Feb 2, 2016 17:53, "Eichberger, German" 
>> wrote:
>>
>>> Not that you could call it scheduling. The intent was that the user
>>> could pick the best flavor for his task (e.g. a gold router as opposed to a
>>> silver one). The system then would “schedule” the driver configured for
>>> gold or silver. Rescheduling wasn’t really a consideration…
>>>
>>> German
>>>
>>> From: Doug Wiegley > doug...@parksidesoftware.com>>
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> > openstack-dev@lists.openstack.org>>
>>> Date: Monday, February 1, 2016 at 8:17 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev@lists.openstack.org>> openstack-dev@lists.openstack.org>>
>>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
>>> cases for multiple L3 backends
>>>
>>> Yes, scheduling was a big gnarly wart that was punted for the first
>>> pass. The intention was that any driver you put in a single flavor had
>>> equivalent capabilities/plumbed to the same networks/etc.
>>>
>>> doug
>>>
>>>
>>> On Feb 1, 2016, at 7:08 AM, Kevin Benton > blak...@gmail.com>> wrote:
>>>
>>>
>>> Hi all,
>>>
>>> I've been working on an implementation of the multiple L3 backends
>>> RFE[1] using the flavor framework and I've run into some snags with the
>>> use-cases.[2]
>>>
>>> The first use cases are relatively straightforward where the user
>>> requests a specific flavor and that request gets dispatched to a driver
>>> associated with that flavor via a service profile. However, several of the
>>> use-cases are based around the idea that there is a single flavor with
>>> multiple drivers and a specific driver will need to be used depending on
>>> the placement of the router interfaces. i.e. a router cannot be bound to a
>>> driver until an interface is attached.
>>>
>>> This creates some painful coordination problems amongst drivers. For
>>> example, say the first two networks that a user attaches a router to can be
>>> reached by all drivers because they use overlays so the first driver chosen
>>> by the 

Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread Corey O'Brien
The service-* commands aren't related to the magnum services (e.g.
magnum-conductor). The service-* commands are for services on the bay that
the user creates and deletes.

Corey

On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao  wrote:

> hi
> Whey I try to run magnum service-list to list all services (seems now we
> only have m-cond service), it m-cond is down(which means no conductor at
> all),
> API won't response and will return a timeout error.
>
> taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
> ERROR: Timed out waiting for a reply to message ID
> fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)
>
> And I debug more and compared with nova service-list, nova will give
> response and will tell the conductor is down.
>
> and deeper I get this in magnum-api boot up:
>
> *# Enable object backporting via the conductor*
> *base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()*
>
> so in magnum_service api code
>
> return objects.MagnumService.list(context, limit, marker, sort_key,
>   sort_dir)
>
> will require to use magnum-conductor to access DB, but no magnum-conductor
> at all, then we get a 500 error.
> (nova-api doesn't specify *indirection_api so nova-api can access DB*)
>
> My question is:
>
> 1) is this by designed that we don't allow magnum-api to access DB
> directly ?
> 2) if 1) is by designed, then `magnum service-list` won't work, and the
> error message should be improved such as "magnum service is down , please
> check magnum conductor is alive"
>
> What do you think?
>
> P.S. I tested comment this line:
> *# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()*
> magnum-api will response but failed to create bay(), which means api
> service have read access but can not write it at all since(all db write
> happened in conductor layer).
>
> --
> Best Regards, Eli(Li Yong)Qiao
> Intel OTC China
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 2 February 2016 at 02:28, Sam Yaple  wrote:

>
> I disagree with this statement strongly as I have stated before. Nova has
> snapshots. Cinder has snapshots (though they do say cinder-backup). Freezer
> wraps Nova and Cinder. Snapshots are not backups. They are certainly not
> _incremental_ backups. They can have neither compression, nor encryption.
> With this in mind, Freezer does not have this "feature" at all. Its not
> that it needs improvement, it simply does not exist in Freezer. So a
> separate project dedicated to that one goal is not unreasonable. The real
> question is whether it is practical to merge Freezer and Ekko, and this is
> the question Ekko and the Freezer team are attempting to answer.
>

You're misinformed of the cinder feature set there - cinder has both
snapshots (usually fast COW thing on the same storage backend) and backups
(copy to a different storage backend, usually swift but might be
NFS/ceph/TSM) - the backups support incremental and compression. Separate
encryption to the volume encryption is not yet supported or implemented,
merely because nobody has written it yet. There's also live backup
(internally via a snapshot) merged last cycle.

I can see a place for other backup solutions, I just want to make the
existing ones clear.

-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Migration progress

2016-02-03 Thread Daniel P. Berrange
On Wed, Feb 03, 2016 at 11:27:16AM +, Paul Carlton wrote:
> On 03/02/16 10:49, Daniel P. Berrange wrote:
> >On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> >>On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> >>>Hello everyone,
> >>>
> >>>On the yesterday's live migration meeting we had concerns that interval of
> >>>writing migration progress to the database is too short.
> >>>
> >>>Information about migration progress will be stored in the database and
> >>>exposed through the API (/servers//migrations/). In current
> >>>proposition [1] migration progress will be updated every 2 seconds. It
> >>>basically means that every 2 seconds a call through RPC will go from 
> >>>compute
> >>>to conductor to write migration data to the database. In case of parallel
> >>>live migrations each migration will report progress by itself.
> >>>
> >>>Isn't 2 seconds interval too short for updates if the information is 
> >>>exposed
> >>>through the API and it requires RPC and DB call to actually save it in the
> >>>DB?
> >>>
> >>>Our default configuration allows only for 1 concurrent live migration [2],
> >>>but it might vary between different deployments and use cases as it is
> >>>configurable. Someone might want to trigger 10 (or even more) parallel live
> >>>migrations and each might take even a day to finish in case of block
> >>>migration. Also if deployment is big enough rabbitmq might be fully-loaded.
> >>>I'm not sure whether updating each migration every 2 seconds makes sense in
> >>>this case. On the other hand it might be hard to observe fast enough that
> >>>migration is stuck if we increase this interval...
> >>Do we have any actual data that this is a real problem. I have a pretty hard
> >>time believing that a database update of a single field every 2 seconds is
> >>going to be what pushes Nova over the edge into a performance collapse, even
> >>if there are 20 migrations running in parallel, when you compare it to the
> >>amount of DB queries & updates done across other areas of the code for 
> >>pretty
> >>much every singke API call and background job.
> >Also note that progress is rounded to the nearest integer. So even if the
> >migration runs all day, there is a maximum of 100 possible changes in value
> >for the progress field, so most of the updates should turn in to no-ops at
> >the database level.
> >
> >Regards,
> >Daniel
> I agree with Daniel, these rpc and db access ops are a tiny percentage
> of the overall load on rabbit and mysql and properly configured these
> subsystems should have no issues with this workload.
> 
> One correction, unless I'm misreading it, the existing
> _live_migration_monitor code updates the progress field of the instance
> record every 5 seconds.  However this value can go up and down so
> an infinate number of updates are possible?

Oh yes, you are in fact correct. Technically you could have an unbounded
number of updates if migration goes backwards. Some mitigation against
this is if we see progress going backwards we'll actually abort the
migration if it gets stuck for too long. We'll also be progressively
increasing the permitted downtime. So except in pathelogical scenarios
I think the number of updates should still be relatively small.

> However, the issue raised here is not with the existing implementation
> but with the proposed change
> https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
> This add a save() operation on the migration object every 2 seconds

Ok, that is more heavy weight since it is recording the raw byte values
and so it is guaranteed to do a database update pretty much every time.
It still shouldn't be too unreasonable a loading though. FWIW I think
it is worth being consistent in the update frequency betweeen the
progress value & the migration object save, so switching to be every
5 seconds probably makes more sense, so we know both objects are
reflecting the same point in time.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] microversion spec

2016-02-03 Thread Sean Dague
I've been looking through the reviews on and where it's gotten to -
https://review.openstack.org/#/c/243429/4/guidelines/microversion_specification.rst


A couple of questions / concerns.

There was major push back from API-WG on 'API' itself being in the
headers. What is the data on what services are already doing? My
understanding is this is convention for all every service so far, mostly
because that's how we did it in Nova. Forcing a header change for that
seems massively bike shed. There is zero value gained in such a change
by anyone, and just confusion.

On moving from code names to service types, I'm completely onboard with
that providing value. However there is a bigger issue about the fact
that service types don't really have a central registry. That's why Nova
didn't do this up front because that's a whole other thing to figure out
which has some really big implications on our community.

Code names are self namespaced because they are based on git repo -
openstack/nova, openstack/ironic. We get a registry for free that won't
have conflicts.

I actually agree these should be service types, however, that requires
understanding how service types are going to be handed out. Having a
project just start using 'monitoring' or 'policy' as a service type is
going to go poorly in the long term when they get told they have to
change that, and now all their clients are broken.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-03 Thread Simon Pasquier
On Tue, Feb 2, 2016 at 5:08 PM, Foley, Emma L 
wrote:

> Hi Simon,
>
>
>
> So collectd acts as a statsd server, and the metrics are aggregated and
> dispatched to the collectd daemon.
>
> Collectd’s write plugins then output the stats to wherever we want them to
> go.
>
>
>
> In order to interact with gnocchi using statsd, we require collectd to act
> as a statsd client and dispatch the metrics to gnocchi-statsd service.
>

AFAICT there's no such thing out of the box but it should be fairly
straightforward to implement a StatsD writer using the collectd Python
plugin.

Simon

[1] https://collectd.org/documentation/manpages/collectd-python.5.shtml


>
>
> Regards,
>
> Emma
>
>
>
>
>
> *From:* Simon Pasquier [mailto:spasqu...@mirantis.com]
> *Sent:* Monday, February 1, 2016 9:02 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>; Foley, Emma L 
> *Subject:* Re: [openstack-dev] [telemetry][ceilometer] New project:
> collectd-ceilometer-plugin
>
>
>
>
>
>
>
> On Fri, Jan 29, 2016 at 6:30 PM, Julien Danjou  wrote:
>
> On Fri, Jan 29 2016, Foley, Emma L wrote:
>
> > Supporting statsd would require some more investigation, as collectd's
> > statsd plugin supports reading stats from the system, but not writing
> > them.
>
> I'm not sure what that means?
> https://collectd.org/wiki/index.php/Plugin:StatsD seems to indicate it
> can send metrics to a statsd daemon.
>
>
>
> Nope that is the opposite: collectd can act as a statsd server. The man
> page [1] is clearer than the collectd Wiki.
>
> Simon
>
>
> [1]
> https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_statsd
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] alarm-definition-list is failing with "service unavailable"

2016-02-03 Thread Pradip Mukhopadhyay
Hello,


Seeing the alarm-definition-list is returning "service unavailable" for
admin user:


stack@ubuntu:~/devstack$ monasca --os-username admin --os-password
secretadmin --os-project-name admin alarm-definition-list
ERROR (exc:65) exception: {
"title": "Service unavailable",
"description": ""
}
HTTPException code=500 message={
"title": "Service unavailable",
"description": ""
}


However can see other APIs like notification listing etc. working fine.

stack@ubuntu:~/devstack$ monasca --os-username admin --os-password
secretadmin --os-project-name admin notification-list
+---+--+---++
| name  | id   | type  |
address|
+---+--+---++
| pradipm_email | bf60996d-d500-4b59-b42f-a942e9121859 | EMAIL | email_id |
+---+--+---++



The following changes we did to make it work for 'admin' user:

Do the following changes:
1. /opt/stack/monasca-api/java/src/main/resources/api-config.yml  --- add
'admin' in 'defaultAuthorizedRoles'
2./etc/monasca/api-config.conf  --- add 'admin' in default_authorized_roles
3. sudo service monasca-api restart
4. sudo service monasca-thresh restart
5. sudo service monasca-agent restart




Any help will be appreciated.



Thanks,
Pradip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS bootstrap image retirement

2016-02-03 Thread Igor Kalnitsky
No objections from my side. Let's do it.

On Tue, Feb 2, 2016 at 8:35 PM, Dmitry Klenov  wrote:
> Hi Sergey,
>
> I fully support this idea. It was our plan as well when we were developing
> Ubuntu Bootstrap feature. So let's proceed with CentOS bootstrap removal.
>
> BR,
> Dmitry.
>
> On Tue, Feb 2, 2016 at 2:55 PM, Sergey Kulanov 
> wrote:
>>
>> Hi Folks,
>>
>> I think it's time to declare CentOS bootstrap image retirement.
>> Since Fuel 8.0 we've switched to Ubuntu bootstrap image usage [1, 2] and
>> CentOS one became deprecated,
>> so in Fuel 9.0 we can freely remove it [2].
>> For now we are building CentOS bootstrap image together with ISO and then
>> package it into rpm [3], so by removing fuel-bootstrap-image [3] we:
>>
>> * simplify patching/update story, since we don't need to rebuild/deliver
>> this
>>   package on changes in dependent packages [4].
>>
>> * speed-up ISO build process, since building centos bootstrap image takes
>> ~ 20%
>>   of build-iso time.
>>
>> We've prepared related blueprint for this change [5] and spec [6]. We also
>> have some draft patchsets [7]
>> which passed BVT tests.
>>
>> So the next steps are:
>> * get feedback by reviewing the spec/patches;
>> * remove related code from the rest fuel projects (fuel-menu, fuel-devops,
>> fuel-qa).
>>
>>
>> Thank you
>>
>>
>> [1]
>> https://specs.openstack.org/openstack/fuel-specs/specs/7.0/fuel-bootstrap-on-ubuntu.html
>> [2]
>> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/dynamically-build-bootstrap.html
>> [3]
>> https://github.com/openstack/fuel-main/blob/master/packages/rpm/specs/fuel-bootstrap-image.spec
>> [4]
>> https://github.com/openstack/fuel-main/blob/master/bootstrap/module.mk#L12-L50
>> [5]
>> https://blueprints.launchpad.net/fuel/+spec/remove-centos-bootstrap-from-fuel
>> [6] https://review.openstack.org/#/c/273159/
>> [7]
>> https://review.openstack.org/#/q/topic:bp/remove-centos-bootstrap-from-fuel
>>
>>
>> --
>> Sergey
>> DevOps Engineer
>> IRC: SergK
>> Skype: Sergey_kul
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-03 Thread Sean Dague
On 02/02/2016 10:03 PM, Matthew Treinish wrote:
> On Tue, Feb 02, 2016 at 05:09:47PM -0800, Armando M. wrote:
>> Folks,
>>
>> We have some IPv6 related bugs [1,2,3] that have been lingering for some
>> time now. They have been hurting the gate (e.g. [4] the most recent
>> offending failure) and since it looks like they have been without owners
>> nor a plan of action for some time, I made the hard decision of skipping
>> them [5] ahead of the busy times ahead.
> 
> So TBH I don't think the failure rate for these tests are really at a point
> necessitating a skip:
> 
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
> 
> (also just a cool side-note, you can see the very obvious performance 
> regression
> caused by the keystonemiddleware release and when we excluded that version in
> requirements)
> 
> Well, test_dualnet_dhcp6_stateless_from_os is kinda there with a ~10% failure
> rate, but the other 2 really aren't. I normally would be -1 on the skip patch
> because of that. We try to save the skips for cases where the bugs are really
> severe and preventing productivity at a large scale. 
> 
> But, in this case these ipv6 tests are kinda of out of place in tempest. 
> Having
> all the permutations of possible ip allocation configurations always seemed a
> bit too heavy handed. These tests are also consistently in the top 10 slowest
> for a run. We really should have trimmed down this set a while ago so we're 
> only
> have a single case in tempest. Neutron should own the other possible
> configurations as an in-tree test.
> 
> Brian Haley has a patch up from Dec. that was trying to clean it up:
> 
> https://review.openstack.org/#/c/239868/
> 
> We probably should revisit that soon, since quite clearly no one is looking at
> these right now.

We definitely shouldn't be running all the IPv6 tests.

But I also think the assumption that the failure rate is low is not a
valid reason to keep a test. Unreliable tests that don't have anyone
looking into them should be deleted. They are providing negative value.
Because people just recheck past them even if their code made the race
worse. So any legitimate issues they are exposing are being ignored.

If the neutron PTL wants tests pulled, we should just do it.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] do not account compute resource of instances in state SHELVED_OFFLOADED

2016-02-03 Thread John Garbutt
On 2 February 2016 at 14:11, Sascha Vogt  wrote:
> Hi,
>
> Am 31.01.2016 um 18:57 schrieb John Garbutt:
>> We need to make sure we don't have configuration values that change
>> the semantic of our API.
>> Such things, at a minimum, need to be discoverable, but are best avoided.
> I totally agree on that. I
>
>>> I think an off-loaded / shelved resource should still count against the
>>> quota being used (instance, allocated floating IPs, disk space etc) just
>>> not the resources which are no longer consumed (CPU and RAM)
>>
>> OK, but that does mean unshelve can fail due to qutoa. Maybe thats OK?
> For me that would be ok, just like a boot could fail. Even now I think
> an unshelve can fail, because a new scheduling run is triggered and
> depending on various things you could get a "no valid host" (e.g. we
> have properties on Windows instances to only run them on a host with a
> datacenter license. If that host is full (we only have one at the
> moment), unshelve shouldn't work, should it?).
>
>> The quota really should live with the project that owns the resource.
>> i.e. nova has the "ephemeral" disk quota, but glance should have the
>> glance quota.
> Oh sure, I didn't mean to have that quota in Nova just to have them in
> general "somewhere". When I first started playing around with OpenStack,
> I was surprised that there are no quotas for images and ephemeral disks.
>
> What is the general feeling about this? Should I ask on "operators" if
> there is someone else who would like to have this added?

I think the best next step is to write up a nova-spec for newton:
http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-code-merged

But from a wider project view the quota system is very fragile, and is
proving hard to evolve. There are some suggested approaches to fix
that, but no one has had the time to take on that work. There is a bit
of a backlog of quota features right now.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-03 Thread Bogdan Dobrelya
On 02.02.2016 17:35, Alexey Shtokolov wrote:
> Hi Fuelers!
> 
> As you may be aware, since [0] Fuel has implemented a new orchestration
> engine [1]
> We switched the deployment paradigm from role-based (aka granular) to
> task-based and now Fuel can deploy all nodes simultaneously using
> cross-node dependencies between deployment tasks.

That is great news! Please do not forget about docs updates as well.
Those docs are always forgotten like poor orphans... I submitted a patch
[0] to MOS docs, please review and add more details, if possible, for
plugins impact as well.

[0] https://review.fuel-infra.org/#/c/16509/

> 
> This feature is experimental in Fuel 8.0 and will be enabled by default
> for Fuel 9.0
> 
> Allow me to show you the results. We made some benchmarks on our bare
> metal lab [2]
> 
> Case #1. 3 controllers + 7 computes w/ ceph. 
> Task-based deployment takes *~38* minutes vs *~1h15m* for granular (*~2*
> times faster)
> Here and below the deployment time is average time for 10 runs
> 
> Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
> Task-based deployment takes *~41* minutes vs *~1h32m* for granular
> (*~2.24* times faster)
> 
> 
> 
> Also we took measurements for Fuel CI test cases. Standard BVT (Master
> node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs on one host)
> 
> Fuel CI slaves with *4 *cores *~1.1* times faster
> In case of 4 cores for 7 VMs they are fighting for CPU resources and it
> marginalizes the gain of task-based deployment
> 
> Fuel CI slaves with *6* cores *~1.6* times faster
> 
> Fuel CI slaves with *12* cores *~1.7* times faster

These are really outstanding results!
(tl;dr)
I believe the next step may be to leverage the "external install & svc
management" feature (example [1]) of the Liberty release (7.0.0) of
Puppet-Openstack (PO) modules. So we could use separate concurrent
cross-depends based tasks *within a single node* as well, like:
- task: install_all_packages - a singleton task for a node,
- task: [configure_x, for each x] - concurrent for a node,
- task: [manage_service_x, for each x] - some may be concurrent for a
node, while another shall be serialized.

So, one might use the "--tags" separator for concurrent puppet runs to
make things go even faster, for example:

# cat test.pp
notify
{"A": tag => "a" }
notify
{"B": tag => "b" }

# puppet apply test.pp
Notice: A
Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'
Notice: B
Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'

# puppet apply test.pp --tags a
Notice: A
Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'

# puppet apply test.pp --tags a & puppet apply test.pp --tags b
Notice: B
Notice: /Stage[main]/Main/Notify[B]/message: defined 'message' as 'B'
Notice: A
Notice: /Stage[main]/Main/Notify[A]/message: defined 'message' as 'A'

Which is supposed to be faster, although not for this example.

[1] https://review.openstack.org/#/c/216926/3/manifests/init.pp

> 
> You can see additional information and charts in the presentation [3].
> 
> [0]
> - http://lists.openstack.org/pipermail/openstack-dev/2015-December/082093.html
> [1]
> - 
> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/task-based-deployment-mvp.html
> [2] -  3 x HP ProLiant DL360p Gen8 (XeonE5 6 cores/64GB/SSD)  + 7 x HP
> ProLiant DL320p Gen8 (XeonE3 4 cores/8-16GB/HDD)
> [3] -
> https://docs.google.com/presentation/d/1jZCFZlXHs_VhjtVYS2VuWgdxge5Q6sOMLz4bRLuw7YE
> 
> ---
> WBR, Alexey Shtokolov
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Port Query Performance Test

2016-02-03 Thread Vega Cai
Hi all,

I did a test about the performance of port query in Tricircle yesterday.
The result is attached.

Three observations in the test result:
(1) Neutron client costs much more time than curl, the reason may be
neutron client needs to apply for a new token in each run.
(2) Eventlet doesn't bring much improvement, the reason may be we only have
two bottom pods. I will add some logs to do a further investigation.
(3) Query 1000 ports in top pod costs about 1.5s when using curl, which is
acceptable.

BR
Zhiyuan


tricircle-query-test.xlsx
Description: MS-Excel 2007 spreadsheet
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread Hongbin Lu
I can clarify Eli’s question further.

1) is this by designed that we don't allow magnum-api to access DB directly ?
Yes, that is what it is. Actually, The magnum-api was allowed to access DB 
directly in before. After the indirection API patch landed [1], magnum-api 
starts using magnum-conductor as a proxy to access DB. According to the inputs 
from oslo team, this design allows operators to take down either magnum-api or 
magnum-conductor to upgrade. This is not the same as nova-api, because 
nova-api, nova-scheduler, and nova-conductor are assumed to be shutdown all 
together as an atomic unit.

I think we should make our own decision here. If we can pair magnum-api with 
magnum-conductor as a unit, we can remove the indirection API and allow both 
binaries to access DB. This could mitigate the potential performance bottleneck 
of message queue. On the other hand, if we stay with the current design, we 
would allow magnum-api and magnum-conductor to scale independently. Thoughts?

[1] https://review.openstack.org/#/c/184791/

Best regards,
Hongbin

From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
Sent: February-03-16 10:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

Corey the one you are talking about has changed to coe-service-*.

Eli, IMO we should display proper error message. M-api service should only have 
read permission.

Regards,
Madhuri

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: Wednesday, February 3, 2016 6:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

The service-* commands aren't related to the magnum services (e.g. 
magnum-conductor). The service-* commands are for services on the bay that the 
user creates and deletes.

Corey

On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao 
> wrote:
hi
Whey I try to run magnum service-list to list all services (seems now we only 
have m-cond service), it m-cond is down(which means no conductor at all),
API won't response and will return a timeout error.

taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
ERROR: Timed out waiting for a reply to message ID 
fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)

And I debug more and compared with nova service-list, nova will give response 
and will tell the conductor is down.

and deeper I get this in magnum-api boot up:

# Enable object backporting via the conductor
base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

so in magnum_service api code

return objects.MagnumService.list(context, limit, marker, sort_key,
  sort_dir)

will require to use magnum-conductor to access DB, but no magnum-conductor at 
all, then we get a 500 error.
(nova-api doesn't specify indirection_api so nova-api can access DB)

My question is:

1) is this by designed that we don't allow magnum-api to access DB directly ?
2) if 1) is by designed, then `magnum service-list` won't work, and the error 
message should be improved such as "magnum service is down , please check 
magnum conductor is alive"

What do you think?

P.S. I tested comment this line:
# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()
magnum-api will response but failed to create bay(), which means api service 
have read access but can not write it at all since(all db write happened in 
conductor layer).


--

Best Regards, Eli(Li Yong)Qiao

Intel OTC China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-03 Thread Russell Bryant
On 11/30/2015 07:56 PM, Armando M. wrote:
> I would like to suggest that we evolve the structure of the Neutron
> governance, so that most of the deliverables that are now part of the
> Neutron stadium become standalone projects that are entirely
> self-governed (they have their own core/release teams, etc).

After thinking over the discussion in this thread for a while, I have
started the following proposal to implement the stadium renovation that
Armando originally proposed in this thread.

https://review.openstack.org/#/c/275888

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][release] reno 1.4.0 release (independent)

2016-02-03 Thread Doug Hellmann
We are delighted to announce the release of:

reno 1.4.0: RElease NOtes manager

This release is part of the independent release series.

With source available at:

http://git.openstack.org/cgit/openstack/reno

With package available at:

https://pypi.python.org/pypi/reno

Please report issues through launchpad:

http://bugs.launchpad.net/reno

For more details, please see below.

1.4.0
^


New Features


* Add a flag to collapse pre-release notes into their final release,
  if the final release tag is present.


Bug Fixes
*

* Resolves a bug with properly detecting pre-release versions in the
  existing history of a repository that resulted in some release notes
  not appearing in the report output.


Changes in reno 1.3.1..1.4.0


53891dd add flag to collapse pre-releases into final releases
2077767 fix detection of pre-release tags in git log

Diffstat (except docs and test files)
-

.../notes/bug-1537451-f44591da125ba09d.yaml|   6 +
.../collapse-pre-releases-0b24e0bab46d7cf1.yaml|   4 +
reno/lister.py |   6 +-
reno/main.py   |  12 ++
reno/report.py |   6 +-
reno/scanner.py|  61 -
reno/sphinxext.py  |   7 +-
9 files changed, 236 insertions(+), 9 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Steve Gordon
- Original Message -
> From: "Hongbin Lu" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> I would vote for a quick fix + a blueprint.
> 
> BTW, I think it is a general consensus that we should move away from Atomic
> for various reasons (painful image building, lack of document, hard to use,
> etc.). We are working on fixing the CoreOS templates which could replace
> Atomic in the future.
> 
> Best regards,
> Hongbin

Hi Hongbin,

I had heard this previously in Tokyo and again when I was asking around about 
the image support on IRC last week, is there a list of the exact issues with 
image building etc. with regards to Atomic? When I was following up on this it 
seemed like the main issue is that the docs in the magnum repo are quite out of 
date (versus the upstream fedora atomic docs) both with regards to the content 
of the image and the process used to (re)build it - there didn't seem to be 
anything quantifiable that's wrong with the current Atomic images but perhaps I 
was asking the wrong folks. I was able to rebuild fairly trivially using the 
Fedora built artefacts [1][2].

So are the exact requirements of Magnum w.r.t. the image and how they aren't 
currently met listed somewhere? If there are quantifiable issues then I can get 
them in front of the atomic folks to address them.

Thanks,

Steve

[1] https://git.fedorahosted.org/git/spin-kickstarts.git
[2] https://git.fedorahosted.org/git/fedora-atomic.git


> From: Corey O'Brien [mailto:coreypobr...@gmail.com]
> Sent: February-03-16 2:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] Bug 1541105 options
> 
> As long as configurations for 2.2 and 2.0 are compatible we shouldn't have an
> issue I wouldn't think. I just don't know enough about etcd deployment to be
> sure about that.
> 
> If we want to quickly improve the gate, I can patch the problematic areas in
> the templates and then we can make a blueprint for upgrading to Atomic 23.
> 
> Corey
> 
> On Wed, Feb 3, 2016 at 1:47 PM Vilobh Meshram
> >
> wrote:
> Hi Corey,
> 
> This is slowing down our merge rate and needs to be fixed IMHO.
> 
> What risk are you talking about when using newer version of etcd ? Is it
> documented somewhere for the team to have a look ?
> 
> -Vilobh
> 
> On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien
> > wrote:
> Hey team,
> 
> I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which
> covers a bug with etcdctl, and I wanted opinions on how best to fix it.
> 
> Should we update the image to include the latest version of etcd? Or, should
> we temporarily install the latest version as a part of notify-heat (see bug
> for patch)?
> 
> I'm personally in favor of updating the image, but there is presumably some
> small risk with using a newer version of etcd.
> 
> Thanks,
> Corey O'Brien

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Fullstack tests question

2016-02-03 Thread Sławek Kapłoński
Hello,

I'm currently working on patch https://review.openstack.org/#/c/248938/7 which 
will provide support for LinuxBridge agent in fullstack tests (and make 
connectivity test for that type of agents).
In this test I'm spawning LinuxBridge agent in "host" namespace.
Generally tests are working fine now if I run it as root. When I run it as 
normal user then LinuxBridge agent process is not spawned in namespace and 
tests are failing.
I suppose that problem is with missing rootwrap rule for spawning such process 
in namespace but I have no idea how to add such rootwrap rules for fullstack 
tests. Can You maybe help me with that or point some links to documentation 
where I could find any info about that?

--
Pozdrawiam / Best regards
Sławek Kapłoński
sla...@kaplonski.pl

signature.asc
Description: This is a digitally signed message part.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Hongbin Lu
I would vote for a quick fix + a blueprint.

BTW, I think it is a general consensus that we should move away from Atomic for 
various reasons (painful image building, lack of document, hard to use, etc.). 
We are working on fixing the CoreOS templates which could replace Atomic in the 
future.

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-03-16 2:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Bug 1541105 options

As long as configurations for 2.2 and 2.0 are compatible we shouldn't have an 
issue I wouldn't think. I just don't know enough about etcd deployment to be 
sure about that.

If we want to quickly improve the gate, I can patch the problematic areas in 
the templates and then we can make a blueprint for upgrading to Atomic 23.

Corey

On Wed, Feb 3, 2016 at 1:47 PM Vilobh Meshram 
> 
wrote:
Hi Corey,

This is slowing down our merge rate and needs to be fixed IMHO.

What risk are you talking about when using newer version of etcd ? Is it 
documented somewhere for the team to have a look ?

-Vilobh

On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien 
> wrote:
Hey team,

I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which 
covers a bug with etcdctl, and I wanted opinions on how best to fix it.

Should we update the image to include the latest version of etcd? Or, should we 
temporarily install the latest version as a part of notify-heat (see bug for 
patch)?

I'm personally in favor of updating the image, but there is presumably some 
small risk with using a newer version of etcd.

Thanks,
Corey O'Brien

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Port Query Performance Test

2016-02-03 Thread Kevin Benton
+1. The neutron client can only operate on the environmental variables it
has access to, it doesn't store any other state. So if all it has is
credentials, it has to use those to fetch a token.

On Wed, Feb 3, 2016 at 12:05 PM, Rick Jones  wrote:

> On 02/03/2016 05:32 AM, Vega Cai wrote:
>
>> Hi all,
>>
>> I did a test about the performance of port query in Tricircle yesterday.
>> The result is attached.
>>
>> Three observations in the test result:
>> (1) Neutron client costs much more time than curl, the reason may be
>> neutron client needs to apply for a new token in each run.
>>
>
> Is "needs" a little strong there?  When I have been doing things with
> Neutron CLI at least and needed to issue a lot of requests over a somewhat
> high latency path, I've used the likes of:
>
> token=$(keystone token-get | awk '$2 == "id" {print$4}')
> NEUTRON="neutron --os-token=$token --os-url=https://mutter
>
> to avoid grabbing a token each time.  Might that be possible with what you
> are testing?
>
> rick jones
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 3 February 2016 at 16:32, Sam Yaple  wrote:

>
> Looking into it, however, shows Cinder has no mechanism to delete backups
> in the middle of a chain since you use dependent backups (please correct me
> if I am wrong here). This means after a number of incremental backups you
> _must_ take another full to ensure the chain doesn't get to long. That is a
> problem Ekko is purposing to solve as well. Full backups are costly in
> terms of IO, storage, bandwidth and time. A full backup being required in a
> backup plan is a big problem for backups when we talk about volumes that
> are terabytes large.
>

You're right that this is an issue currently. Cinder actually has enough
info in theory to be able to trivially squash backups to be able to break
the chain, it's only a bit of metadata ref counting and juggling, however
nobody has yet written the code.


> Luckily, digging into it it appears cinder already has all the
> infrastructure in place to handle what we had talked about in a separate
> email thread Duncan. It is very possible Ekko can leverage the existing
> features to do it's backup with no change from Cinder. This isn't the
> initial priority for Ekko though, but it is good information to have. Thank
> you for your comments!
>


Always interested in better ways to solve backup.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
On 3 February 2016 at 17:27, Sam Yaple  wrote:


>
> And here we get to the meat of the matter. Squashing backups is awful in
> object storage. It requires you to pull both backups, merge them, then
> reupload. This also has the downside of casting doubt on a backup since you
> are now modifying data after it has been backed up (though that doubt is
> lessened with proper checksuming/hashing which cinder does it looks like).
> This is the issue Ekko can solve (and has solved over the past 2 years).
> Ekko can do this "squashing" in a non-traditional way, without ever
> modifying content or merging anything. With deletions only. This means we
> do not have to pull two backups, merge, and reupload to delete a backup
> from the chain.
>

I'm sure we've lost most of the audience by this point, but I might as well
reply here as anywhere else...

In the cinder backup case, since the backup is chunked in object store, all
that is required is to reference count the chunks that are required for the
backups you want to keep, get rid of the rest, and re-upload the (very
small) json mapping file. You can either upload over the old json, or
create a new one. Either way, the bulk data does not need to be touched.



-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Logging and traceback at same time.

2016-02-03 Thread Sean McGinnis
On Wed, Feb 03, 2016 at 02:46:28PM +0800, 王华 wrote:
> You can use LOG.exception.

Yes, I highly recommend using LOG.exception in this case. That is
exactly what it's used for. LOG.exception is pretty much exactly like
LOG.error, but with the additional behavior that it will log out the
details of whatever exception is currently in scope.

> 
> Regards,
> Wanghua
> 
> On Wed, Feb 3, 2016 at 2:28 PM, Khayam Gondal 
> wrote:
> 
> > Is there a way to do logging the information and traceback at the same
> > time. Currently I am doing it like this.
> >
> >
> >
> >
> >
> > LOG.error(_LE('Record already exists: %(exception)s '
> >
> >  '\n %(traceback)'),
> >
> >{'exception': e1},
> >
> >{'traceback': traceback.print_stack()}).
> >
> >
> >
> > Let me know if this is correct way?
> >
> > Regards
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 1:41 PM, Duncan Thomas 
wrote:

> On 2 February 2016 at 02:28, Sam Yaple  wrote:
>
>>
>> I disagree with this statement strongly as I have stated before. Nova has
>> snapshots. Cinder has snapshots (though they do say cinder-backup). Freezer
>> wraps Nova and Cinder. Snapshots are not backups. They are certainly not
>> _incremental_ backups. They can have neither compression, nor encryption.
>> With this in mind, Freezer does not have this "feature" at all. Its not
>> that it needs improvement, it simply does not exist in Freezer. So a
>> separate project dedicated to that one goal is not unreasonable. The real
>> question is whether it is practical to merge Freezer and Ekko, and this is
>> the question Ekko and the Freezer team are attempting to answer.
>>
>
> You're misinformed of the cinder feature set there - cinder has both
> snapshots (usually fast COW thing on the same storage backend) and backups
> (copy to a different storage backend, usually swift but might be
> NFS/ceph/TSM) - the backups support incremental and compression. Separate
> encryption to the volume encryption is not yet supported or implemented,
> merely because nobody has written it yet. There's also live backup
> (internally via a snapshot) merged last cycle.
>
> You are right Duncan. I was working on outdated information that Cinder
does not have incremental backups. I apologize for the misstep there, we
haven't started on the Cinder planning yet so I haven't looked into it in
great detail.

Looking into it, however, shows Cinder has no mechanism to delete backups
in the middle of a chain since you use dependent backups (please correct me
if I am wrong here). This means after a number of incremental backups you
_must_ take another full to ensure the chain doesn't get to long. That is a
problem Ekko is purposing to solve as well. Full backups are costly in
terms of IO, storage, bandwidth and time. A full backup being required in a
backup plan is a big problem for backups when we talk about volumes that
are terabytes large.

Luckily, digging into it it appears cinder already has all the
infrastructure in place to handle what we had talked about in a separate
email thread Duncan. It is very possible Ekko can leverage the existing
features to do it's backup with no change from Cinder. This isn't the
initial priority for Ekko though, but it is good information to have. Thank
you for your comments!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Jeremy Stanley
On 2016-02-03 14:32:36 + (+), Sam Yaple wrote:
[...]
> Luckily, digging into it it appears cinder already has all the
> infrastructure in place to handle what we had talked about in a
> separate email thread Duncan. It is very possible Ekko can
> leverage the existing features to do it's backup with no change
> from Cinder.
[...]

If Cinder's backup facilities already do most of
what you want from it and there's only a little bit of development
work required to add the missing feature, why jump to implementing
this feature in a completely separate project instead rather than
improving Cinder's existing solution so that people who have been
using that can benefit directly?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] CentOS bootstrap image retirement

2016-02-03 Thread Vladimir Kuklin
+1


On Wed, Feb 3, 2016 at 4:45 PM, Igor Kalnitsky 
wrote:

> No objections from my side. Let's do it.
>
> On Tue, Feb 2, 2016 at 8:35 PM, Dmitry Klenov 
> wrote:
> > Hi Sergey,
> >
> > I fully support this idea. It was our plan as well when we were
> developing
> > Ubuntu Bootstrap feature. So let's proceed with CentOS bootstrap removal.
> >
> > BR,
> > Dmitry.
> >
> > On Tue, Feb 2, 2016 at 2:55 PM, Sergey Kulanov 
> > wrote:
> >>
> >> Hi Folks,
> >>
> >> I think it's time to declare CentOS bootstrap image retirement.
> >> Since Fuel 8.0 we've switched to Ubuntu bootstrap image usage [1, 2] and
> >> CentOS one became deprecated,
> >> so in Fuel 9.0 we can freely remove it [2].
> >> For now we are building CentOS bootstrap image together with ISO and
> then
> >> package it into rpm [3], so by removing fuel-bootstrap-image [3] we:
> >>
> >> * simplify patching/update story, since we don't need to rebuild/deliver
> >> this
> >>   package on changes in dependent packages [4].
> >>
> >> * speed-up ISO build process, since building centos bootstrap image
> takes
> >> ~ 20%
> >>   of build-iso time.
> >>
> >> We've prepared related blueprint for this change [5] and spec [6]. We
> also
> >> have some draft patchsets [7]
> >> which passed BVT tests.
> >>
> >> So the next steps are:
> >> * get feedback by reviewing the spec/patches;
> >> * remove related code from the rest fuel projects (fuel-menu,
> fuel-devops,
> >> fuel-qa).
> >>
> >>
> >> Thank you
> >>
> >>
> >> [1]
> >>
> https://specs.openstack.org/openstack/fuel-specs/specs/7.0/fuel-bootstrap-on-ubuntu.html
> >> [2]
> >>
> https://specs.openstack.org/openstack/fuel-specs/specs/8.0/dynamically-build-bootstrap.html
> >> [3]
> >>
> https://github.com/openstack/fuel-main/blob/master/packages/rpm/specs/fuel-bootstrap-image.spec
> >> [4]
> >>
> https://github.com/openstack/fuel-main/blob/master/bootstrap/module.mk#L12-L50
> >> [5]
> >>
> https://blueprints.launchpad.net/fuel/+spec/remove-centos-bootstrap-from-fuel
> >> [6] https://review.openstack.org/#/c/273159/
> >> [7]
> >>
> https://review.openstack.org/#/q/topic:bp/remove-centos-bootstrap-from-fuel
> >>
> >>
> >> --
> >> Sergey
> >> DevOps Engineer
> >> IRC: SergK
> >> Skype: Sergey_kul
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-03 Thread Foley, Emma L

AFAICT there's no such thing out of the box but it should be fairly 
straightforward to implement a StatsD writer using the collectd Python plugin.
Simon

[1] https://collectd.org/documentation/manpages/collectd-python.5.shtml

I guess that’ll have to be the plan now: get a prototype in place and have a 
look at how well it does.
The first one is always the most difficult, so it should be fairly quick to get 
this going.

Regards,
Emma


--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Logging and traceback at same time.

2016-02-03 Thread Sean McGinnis
On Wed, Feb 03, 2016 at 10:36:55AM +0100, Julien Danjou wrote:
> On Wed, Feb 03 2016, Khayam Gondal wrote:
> 
> > Is there a way to do logging the information and traceback at the same
> > Let me know if this is correct way?
> 
> Pass exc_info=True in your LOG.() call.

Ooo, great tip!

> 
> -- 
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Domain Specific Roles vs Local Groups

2016-02-03 Thread Adam Young

On 02/02/2016 10:47 PM, Morgan Fainberg wrote:



On Feb 2, 2016 19:38, "Yee, Guang" > wrote:

>
> I presume there’s a spec coming for this “seductive approach”? Not 
sure if I get all of it. From what’s been described here, 
conceptually, isn’t “local groups”, DSRs, or role groups the same thing?

>

Subtle differences. Local groups would be locked to a specific scope / 
group of scopes. And Domain Specific Role (dont use the 
initialism/acronym overloaded), would be global that could be assinged 
to many various scopes.




So long as local groups are considered in addition to Domain specific 
roles, and not as a replacement.  We can do local groups today, by 
allowing users from, say a Federated backend, to be enrolled into a 
group defined in a separate domain.



E.g. local group would be role x, Y, z on domain q.

Domain specific role would be "role a, which is role x, y, z", and 
works like any other role for user/project(ordomain) combination.


The local groups we have all the code to do today.

--M
>
>
>
>
> Guang
>
>
>
>
>
> From: Henry Nash [mailto:henryna...@mac.com 
]

> Sent: Monday, February 01, 2016 3:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [keystone] Domain Specific Roles vs Local 
Groups

>
>
>
> Hi
>
>
>
> During the recent keystone midcycle, it was suggested than an 
alternative domain specific roles (see spec: 
https://github.com/openstack/keystone-specs/blob/master/specs/mitaka/domain-specific-roles.rst and 
code patches starting at: https://review.openstack.org/#/c/261846/) 
might be to somehow re-use the group concept. This was actually 
something we had discussed in previous proposals for this 
functionality. As I mentioned during the last day, while this is a 
seductive approach, it doesn’t actually scale well (or in fact provide 
the right abstraction). The best way to illustrate this is with an 
example:

>
>
>
> Let’s say a customer is being hosted by a cloud provider. The 
customer has their own domain containing their own users and groups, 
to keep them segregated from other customers. The cloud provider, 
wanting to attract as many different types of customer as possible, 
has created a set of fine-grained global roles tied to APIs via the 
policy files. The domain admin of the customer wants to create a 
collection of 10 such fine-grained roles that represent some function 
that is meaningful to their setup (perhaps it’s job that allows you to 
monitor resources and fix a subset of problems).

>
>
>
> With domain specific roles (DSR) , the domain admin creates a DSR 
(which is just a role with a domain_id attribute), and then adds the 
10 global policy roles required using the implied roles API. They can 
then assign this DSR to all the projects they need to, probably as a 
group assignment (where the groups could be local, federated or LDAP). 
One assignment per project is required, so if there were, over time, 
100 projects, then that’s 100 assignments. Further, if they want to 
add another global role (maybe to allow access to a new API) to that 
DSR, then it’s a single API call to do it.

>
>
>
> The proposal to use groups instead would work something like this: 
We would support a concept of “local groups” in keystone, that would 
be independent of whatever groups the identity backend was mapped to. 
In order to represent the DSR, a local group would be created (perhaps 
with the name of the functional job members of the group could carry 
out). User who could carry out this function would be added to this 
group (presumably we might also have to support “remote” groups being 
members of such local groups, a concept we don’t really support today, 
but not too much of a stretch). This group would then need to be 
assigned to each project in turn, but for each of the 10 global roles 
that this “DSR equivalent” provided in turn (so an immediate increase 
by a factor of N API calls, where N is the number of roles per DSR) - 
so 1000 assignments in our example. If the domain admin wanted to add 
a new role to (or remove a role from) the “DSR”, they would have to do 
another assignment to each project that this “DSR” was being used (100 
new assignments in our example).  Again, I would suggest, much less 
convenient.

>
>
>
> Given the above, I believe the current DSR proposal does provide the 
right abstraction and scalability, and we should continue to review 
and merge it as planned. Obviously this is still dependant on Implied 
Roles (either in its current form, or a modified version). Alternative 
code of doing a one-level-only inference part of DSRs does exist (from 
an earlier attempt), but I don’t think we want to do that if we are 
going to have any kind of implied roles.

>
>
>
> Henry
>
>
> 
__

> OpenStack Development Mailing List (not for usage questions)
> 

Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 2:37 PM, Duncan Thomas 
wrote:

>
>
> On 3 February 2016 at 16:32, Sam Yaple  wrote:
>
>>
>> Looking into it, however, shows Cinder has no mechanism to delete backups
>> in the middle of a chain since you use dependent backups (please correct me
>> if I am wrong here). This means after a number of incremental backups you
>> _must_ take another full to ensure the chain doesn't get to long. That is a
>> problem Ekko is purposing to solve as well. Full backups are costly in
>> terms of IO, storage, bandwidth and time. A full backup being required in a
>> backup plan is a big problem for backups when we talk about volumes that
>> are terabytes large.
>>
>
> You're right that this is an issue currently. Cinder actually has enough
> info in theory to be able to trivially squash backups to be able to break
> the chain, it's only a bit of metadata ref counting and juggling, however
> nobody has yet written the code.
>
>
And here we get to the meat of the matter. Squashing backups is awful in
object storage. It requires you to pull both backups, merge them, then
reupload. This also has the downside of casting doubt on a backup since you
are now modifying data after it has been backed up (though that doubt is
lessened with proper checksuming/hashing which cinder does it looks like).
This is the issue Ekko can solve (and has solved over the past 2 years).
Ekko can do this "squashing" in a non-traditional way, without ever
modifying content or merging anything. With deletions only. This means we
do not have to pull two backups, merge, and reupload to delete a backup
from the chain.


> Luckily, digging into it it appears cinder already has all the
>> infrastructure in place to handle what we had talked about in a separate
>> email thread Duncan. It is very possible Ekko can leverage the existing
>> features to do it's backup with no change from Cinder. This isn't the
>> initial priority for Ekko though, but it is good information to have. Thank
>> you for your comments!
>>
>
>
> Always interested in better ways to solve backup.
>

Thats the plan!

>
> --
> Duncan Thomas
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] compatibility of puppet upstream modules

2016-02-03 Thread Ptacek, MichalX
Hi all,

I have one general question,
currently I am deploying liberty openstack as described in 
https://wiki.openstack.org/wiki/Puppet/Deploy
Unfortunately puppet modules specified in 
puppet-openstack-integration/Puppetfile are not compatible
and some are also missing as visible from following output of "puppet module 
list"

Warning: Setting templatedir is deprecated. See 
http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in 
`issue_deprecation_warning')
Warning: Module 'openstack-openstacklib' (v7.0.0) fails to meet some 
dependencies:
  'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 
<7.0.0)
  'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)
Warning: Module 'puppetlabs-postgresql' (v4.4.2) fails to meet some 
dependencies:
  'openstack-openstacklib' (v7.0.0) requires 'puppetlabs-postgresql' (>=3.3.0 
<4.0.0)
Warning: Missing dependency 'deric-storm':
  'openstack-monasca' (v1.0.0) requires 'deric-storm' (>=0.0.1 <1.0.0)
Warning: Missing dependency 'deric-zookeeper':
  'openstack-monasca' (v1.0.0) requires 'deric-zookeeper' (>=0.0.1 <1.0.0)
Warning: Missing dependency 'dprince-qpid':
  'openstack-cinder' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
  'openstack-manila' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
  'openstack-nova' (v7.0.0) requires 'dprince-qpid' (>=1.0.0 <2.0.0)
Warning: Missing dependency 'jdowning-influxdb':
  'openstack-monasca' (v1.0.0) requires 'jdowning-influxdb' (>=0.3.0 <1.0.0)
Warning: Missing dependency 'opentable-kafka':
  'openstack-monasca' (v1.0.0) requires 'opentable-kafka' (>=1.0.0 <2.0.0)
Warning: Missing dependency 'puppetlabs-stdlib':
  'antonlindstrom-powerdns' (v0.0.5) requires 'puppetlabs-stdlib' (>= 0.0.0)
Warning: Missing dependency 'puppetlabs-corosync':
  'openstack-openstack_extras' (v7.0.0) requires 'puppetlabs-corosync' (>=0.1.0 
<1.0.0)
/etc/puppet/modules
├── antonlindstrom-powerdns (v0.0.5)
├── duritong-sysctl (v0.0.11)
├── nanliu-staging (v1.0.4)
├── openstack-barbican (v0.0.1)
├── openstack-ceilometer (v7.0.0)
├── openstack-cinder (v7.0.0)
├── openstack-designate (v7.0.0)
├── openstack-glance (v7.0.0)
├── openstack-gnocchi (v7.0.0)
├── openstack-heat (v7.0.0)
├── openstack-horizon (v7.0.0)
├── openstack-ironic (v7.0.0)
├── openstack-keystone (v7.0.0)
├── openstack-manila (v7.0.0)
├── openstack-mistral (v0.0.1)
├── openstack-monasca (v1.0.0)
├── openstack-murano (v7.0.0)
├── openstack-neutron (v7.0.0)
├── openstack-nova (v7.0.0)
├── openstack-openstack_extras (v7.0.0)
├── openstack-openstacklib (v7.0.0)  invalid
├── openstack-sahara (v7.0.0)
├── openstack-swift (v7.0.0)
├── openstack-tempest (v7.0.0)
├── openstack-trove (v7.0.0)
├── openstack-tuskar (v7.0.0)
├── openstack-vswitch (v3.0.0)
├── openstack-zaqar (v0.0.1)
├── openstack_integration (???)
├── puppet-aodh (v7.0.0)
├── puppet-corosync (v0.8.0)
├── puppetlabs-apache (v1.4.1)
├── puppetlabs-apt (v2.1.1)
├── puppetlabs-concat (v1.2.5)
├── puppetlabs-firewall (v1.6.0)
├── puppetlabs-inifile (v1.4.3)
├── puppetlabs-mongodb (v0.11.0)
├── puppetlabs-mysql (v3.6.2)
├── puppetlabs-postgresql (v4.4.2)  invalid
├── puppetlabs-rabbitmq (v5.2.3)
├── puppetlabs-rsync (v0.4.0)
├── puppetlabs-stdlib (v4.6.0)
├── puppetlabs-vcsrepo (v1.3.2)
├── puppetlabs-xinetd (v1.5.0)
├── qpid (???)
├── saz-memcached (v2.8.1)
├── stankevich-python (v1.8.0)
└── theforeman-dns (v3.0.0)


Most of the warning can be probably ignored, e.g I assume that latest barbican 
& zaqar are compatible with liberty (7.0) version of openstack-openstacklib
  'openstack-barbican' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 
<7.0.0)
  'openstack-zaqar' (v0.0.1) requires 'openstack-openstacklib' (>=6.0.0 <7.0.0)

Am I right or I need to get rid of all of these compatibility warnings before 
proceeding further ?

I tried both,  but during subsequent deployments I reached some intermediate 
issue with number of parallel mysql connections

2016-02-03 00:01:03.326 90406 DEBUG oslo_db.api [-] Loading backend 
'sqlalchemy' from 'nova.db.sqlalchemy.api' _load_backend 
/usr/lib/python2.7/dist-packages/oslo_db/api.py:238
2016-02-03 00:01:03.333 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 10 attempts left.
2016-02-03 00:01:13.345 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 9 attempts left.
2016-02-03 00:01:23.358 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 8 attempts left.
2016-02-03 00:01:33.361 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 7 attempts left.
2016-02-03 00:01:43.374 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 6 attempts left.
2016-02-03 00:01:53.387 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 5 attempts left.
2016-02-03 00:02:03.400 90406 WARNING oslo_db.sqlalchemy.engines [-] SQL 
connection failed. 4 attempts left.
2016-02-03 00:02:13.412 90406 WARNING 

Re: [openstack-dev] [glance] Virtual Mid-Cycle meeting next week

2016-02-03 Thread Flavio Percoco

On 03/02/16 12:20 -0500, Nikhil Komawar wrote:

Hi,

The time allocation for both days Thursdays and Friday is 2 hours and the
agenda proposed already seems to be already handful for the time allocated
(please correct me if I am wrong).

Nevertheless, I have proposed another couple of topics in the agenda that I
would like be discussed briefly. I would like another 30 mins slots both days
or an hour slot on either one of the days to cover these topics.


The schedule is not fixed, let's work together on improving it. As far as
extenging the time allocation goes, I'm ok with that but I think we'll figure
this out on the go. We could probably just add 1h to Friday.

Cheers,
Flavio



On 2/2/16 12:07 PM, Flavio Percoco wrote:

   On 29/01/16 09:33 -0430, Flavio Percoco wrote:

   Greetings,

   As promissed (although I promissed it yday), here's the link to vote
   for the
   days you'd like the Glance Virtual Midcycle to happen. We'll be meeting
   just for
   2 days and at maximum for 3 hours. The 2 days with more votes are the
   ones that
   will be picked. Since there's such a short notice, I'll be actively
   pinging you
   all and I'll close the vote on Monday Feb 1st.

   http://doodle.com/poll/eck5hr5d746fdxh6

   Thank you all for jumping in with such a short notice,
   Flavio

   P.S: I'll be sending the details of the meeting out with the
   invitation.

   -- 
   @flaper87
   Flavio Percoco



   Hey Folks,

   So, Let's do this:

   I've started putting together an agenda for these 2 days here:

   https://etherpad.openstack.org/p/glance-mitaka-virtual-mid-cycle

   Please, chime in and comment on what topics you'd like to talk about.

   The virtual mid-cycle will be held on the follwing dates:

   Thursday 4th from 15:00 UTC to 17:00 UTC

   Thursday 5th from 15:00 UTC to 17:00 UTC

   The calls will happen on BlueJeans and it's open to everyone. Please, do
   reply
   off-list if you'd like to get a proper invite on your calendar. Otherwise,
   you
   can simply join the link below at the meeting time and meet us there.

   Bluejeans link: https://bluejeans.com/1759335191

   One more note. The virtual mid-cycle will be recorded and when you join,
   the
   recording will likely have been started already.

   Hope to see you all there!
   Flavio


  


   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--

Thanks,
Nikhil



--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Mentoring

2016-02-03 Thread Mike Perez
Mentoring is an important tool for growing our OpenStack community.  Mentors 
help new community members come on board and existing community members expand 
their skills and reputation.  Mentees help their mentors expand their worldview 
and challenge their mindset. This process of gaining knowledge and challenging 
existing ideas is vital to our community.

The OpenStack community currently has a few levels of mentoring:

* Outreachy mentoring - an intense internship-type experience over a particular 
three month period https://wiki.openstack.org/wiki/Outreachy.
* Upstream University - a two-day class for beginners in the community, held 
on-site the two days prior to the summit

The Women of OpenStack group has seen the need for additional types of 
mentoring, specifically long term but lightweight mentoring aimed at bringing 
together mentors and mentees who may not be co-located.  

* Technical mentoring - mentorship is spread over several months. Mentors with 
experience in a particular area of OpenStack help their mentees grow in that 
area. This could be a focus area like release management or marketing, or a 
particular project like Nova or Neutron.
* Career mentoring - mentorship is spread out over several months or years. 
Mentors help their mentees define what kind of career they'd like in the 
community and move towards those goals.   

Mentors and mentees may or may not have the same focus area in the community. 
We hope to kick off these new mentoring programs before the Austin summit, to 
tie in with a new Speed Mentoring session there.

We are currently looking for mentors of all shapes, sizes and backgrounds to 
take part in the program pilot.  Mentors should have between 1 and 4 hours a 
month free to spend with their menthe.  

Read our guidelines 
(https://drive.google.com/file/d/0BxtM4AiszlEyVkEtdktmWjBPN3c/view) and if 
you're ready to sign up, please fill out our questionnaire 
https://openstackfoundation.formstack.com/forms/mentoring!

There is an optional speed mentoring session that will be on April 25th Monday 
morning at the OpenStack conference in Austin before the first keynote. In this 
event, mentees will be able to meet a variety of mentors in a short amount of 
time to see who they would like to pair up with. Please check the appropriate 
box on the questionnaire form if you’re interested in attending.

--  
Mike Perez


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Neutron] IPv6 related intermittent test failures

2016-02-03 Thread Armando M.
On 3 February 2016 at 04:28, Sean Dague  wrote:

> On 02/02/2016 10:03 PM, Matthew Treinish wrote:
> > On Tue, Feb 02, 2016 at 05:09:47PM -0800, Armando M. wrote:
> >> Folks,
> >>
> >> We have some IPv6 related bugs [1,2,3] that have been lingering for some
> >> time now. They have been hurting the gate (e.g. [4] the most recent
> >> offending failure) and since it looks like they have been without owners
> >> nor a plan of action for some time, I made the hard decision of skipping
> >> them [5] ahead of the busy times ahead.
> >
> > So TBH I don't think the failure rate for these tests are really at a
> point
> > necessitating a skip:
> >
> >
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_multi_prefix_slaac
> >
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_dhcp6_stateless_from_os
> >
> http://status.openstack.org/openstack-health/#/test/tempest.scenario.test_network_v6.TestGettingAddress.test_dhcp6_stateless_from_os
> >
> > (also just a cool side-note, you can see the very obvious performance
> regression
> > caused by the keystonemiddleware release and when we excluded that
> version in
> > requirements)
> >
> > Well, test_dualnet_dhcp6_stateless_from_os is kinda there with a ~10%
> failure
> > rate, but the other 2 really aren't. I normally would be -1 on the skip
> patch
> > because of that. We try to save the skips for cases where the bugs are
> really
> > severe and preventing productivity at a large scale.
> >
> > But, in this case these ipv6 tests are kinda of out of place in tempest.
> Having
> > all the permutations of possible ip allocation configurations always
> seemed a
> > bit too heavy handed. These tests are also consistently in the top 10
> slowest
> > for a run. We really should have trimmed down this set a while ago so
> we're only
> > have a single case in tempest. Neutron should own the other possible
> > configurations as an in-tree test.
> >
> > Brian Haley has a patch up from Dec. that was trying to clean it up:
> >
> > https://review.openstack.org/#/c/239868/
> >
> > We probably should revisit that soon, since quite clearly no one is
> looking at
> > these right now.
>
> We definitely shouldn't be running all the IPv6 tests.
>
> But I also think the assumption that the failure rate is low is not a
> valid reason to keep a test. Unreliable tests that don't have anyone
> looking into them should be deleted. They are providing negative value.
> Because people just recheck past them even if their code made the race
> worse. So any legitimate issues they are exposing are being ignored.
>
> If the neutron PTL wants tests pulled, we should just do it.
>
>
Thanks for the support! Having said, I think it's important to make a
judgement call on a case by case basis, because removing tests blindly
might as well backfire.

In this specific instance and all things considered, merging [2] (or even
better [1]) feel like a net gain.

Cheers,
Armando

[1] https://review.openstack.org/#/c/239868/
[2] https://review.openstack.org/#/c/275457/


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser] [heat-translator] [heat] TOSCA-Parser 0.4.0 PyPI release

2016-02-03 Thread Sahdev P Zala
Hello Everyone, 

On behalf of the TOSCA-Parser team, I am pleased to announce the 0.4.0 
PyPI release of tosca-parser which can be downloaded from 
https://pypi.python.org/pypi/tosca-parser
This release includes following enhancements:
1) Initial support for TOSCA Simple Profile for Network Functions 
Virtualization (NFV) v1.0
2) Support for TOSCA Groups and Group Type
3) Initial support for TOSCA Policy and Policy Types
4) Support for TOSCA Namespaces
5) Many bug fixes and minor enhancements including:
-Fix for proper inheritance among types and custom 
relationships based on it
-Updated min and max length with map
-New get_property function for HOST properties similar to 
get_attribute function
  -Updated datatype_definition 
   -Support for nested properties
-Fix for incorrect inheritance in properties of 
capabilities 
-High level validation of imported template types
-Six compatibility for urllib
-Test updates
  -Documentation updates

Please let me know if you have any questions or comments.

Thanks!

Regards,
Sahdev Zala
PTL, Tosca-Parser

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tacker] TOSCA-Parser 0.4.0 PyPI release

2016-02-03 Thread Sahdev P Zala
Hello Tacker team,

Sorry I forgot to include the project name in the subject of the following 
original email, so FYI. 

Thanks! 

Regards, 
Sahdev Zala

- Forwarded by Sahdev P Zala/Durham/IBM on 02/03/2016 09:03 PM -

From:   Sahdev P Zala/Durham/IBM@IBMUS
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   02/03/2016 08:16 PM
Subject:[openstack-dev] [tosca-parser] [heat-translator] [heat] 
TOSCA-Parser 0.4.0 PyPI release



Hello Everyone, 

On behalf of the TOSCA-Parser team, I am pleased to announce the 0.4.0 
PyPI release of tosca-parser which can be downloaded from 
https://pypi.python.org/pypi/tosca-parser
This release includes following enhancements:
1) Initial support for TOSCA Simple Profile for Network Functions 
Virtualization (NFV) v1.0
2) Support for TOSCA Groups and Group Type
3) Initial support for TOSCA Policy and Policy Types
4) Support for TOSCA Namespaces
5) Many bug fixes and minor enhancements including:
-Fix for proper inheritance among types and custom 
relationships based on it
-Updated min and max length with map
-New get_property function for HOST properties similar to 
get_attribute function
-Updated datatype_definition 
-Support for nested properties
-Fix for incorrect inheritance in properties of 
capabilities 
-High level validation of imported template types
-Six compatibility for urllib
-Test updates
  -Documentation updates

Please let me know if you have any questions or comments.

Thanks!

Regards,
Sahdev Zala
PTL, Tosca-Parser
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Steven Dake (stdake)
Steve,

Comments inline

On 2/3/16, 3:08 PM, "Steve Gordon"  wrote:

>- Original Message -
>> From: "Hongbin Lu" 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>> 
>> I would vote for a quick fix + a blueprint.
>> 
>> BTW, I think it is a general consensus that we should move away from
>>Atomic
>> for various reasons (painful image building, lack of document, hard to
>>use,
>> etc.). We are working on fixing the CoreOS templates which could replace
>> Atomic in the future.
>> 
>> Best regards,
>> Hongbin
>
>Hi Hongbin,
>
>I had heard this previously in Tokyo and again when I was asking around
>about the image support on IRC last week, is there a list of the exact
>issues with image building etc. with regards to Atomic? When I was
>following up on this it seemed like the main issue is that the docs in
>the magnum repo are quite out of date (versus the upstream fedora atomic
>docs) both with regards to the content of the image and the process used
>to (re)build it - there didn't seem to be anything quantifiable that's
>wrong with the current Atomic images but perhaps I was asking the wrong
>folks. I was able to rebuild fairly trivially using the Fedora built
>artefacts [1][2].

Steve,

I hope you can forgive my directness and lack of diplomacy in this
message. :)

At least when I was heavily involved with Magnum, building atomic images
resulted in a situation in which the binaries built did not work properly.
 I begged on the irc channels for help and begged on the mailing list for
help for _ months _ on end and nobody listened.  It is almost as if nobody
is actually working on Atomic.  If there are people, they do not maintain
any kind of support footprint upstream to make Atomic a viable platform
for Magnum.

I taught Tango how to build the images, who wrote the instructions down in
the Magnum documentation.  That documentation ends up producing images
that randomly don't always work. The binaries return some weird system
call error, ebadlink I think but not sure.  Tango may remember.

Perhaps the rpm-ostree defect has been resolved now.  I have to be clear
that I was told "please wait 6 months for us to fix the build system and
bugs" while Atomic was our only distro implemented.  It was very
maddening.  I was so frustrated with Atomic, at the start of Mitaka I was
going to propose deprecating Atomic because of a complete lack of upstream
responsiveness.  I decided to let other folks make the call about what
they wanted to do with Atomic since I was myself unresponsive with the
Magnum upstream because of my full-time Kolla commitment.

I am pretty sure a bug was filed about this issue in the Red Hat bugzilla,
but I can't find it.

Personally if I was running the Atomic project, I'd containerize all of
the etcd/flannel/kubernetes and only leave docker in the base image.  Next
up I'd get the distro being produced in a CI pipeline with atleast some
basic dead chicken testing.  I am pretty sure this would fit Magnum's use
case well.  This would make interacting with the 6 month release cycle of
Fedora much more viable.  But alas I'm not running Atomic, and I don't
have the bandwidth to make a contribution here other then to say Atomic
needs more attention from it's upstream if it is to have any hope.

Warm regards,
-steve


>
>So are the exact requirements of Magnum w.r.t. the image and how they
>aren't currently met listed somewhere? If there are quantifiable issues
>then I can get them in front of the atomic folks to address them.
>
>Thanks,
>
>Steve
>
>[1] https://git.fedorahosted.org/git/spin-kickstarts.git
>[2] https://git.fedorahosted.org/git/fedora-atomic.git
>
>
>> From: Corey O'Brien [mailto:coreypobr...@gmail.com]
>> Sent: February-03-16 2:53 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Magnum] Bug 1541105 options
>> 
>> As long as configurations for 2.2 and 2.0 are compatible we shouldn't
>>have an
>> issue I wouldn't think. I just don't know enough about etcd deployment
>>to be
>> sure about that.
>> 
>> If we want to quickly improve the gate, I can patch the problematic
>>areas in
>> the templates and then we can make a blueprint for upgrading to Atomic
>>23.
>> 
>> Corey
>> 
>> On Wed, Feb 3, 2016 at 1:47 PM Vilobh Meshram
>> 
>>>om>>
>> wrote:
>> Hi Corey,
>> 
>> This is slowing down our merge rate and needs to be fixed IMHO.
>> 
>> What risk are you talking about when using newer version of etcd ? Is it
>> documented somewhere for the team to have a look ?
>> 
>> -Vilobh
>> 
>> On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien
>> > wrote:
>> Hey team,
>> 
>> I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105
>>which
>> covers a bug with etcdctl, and I wanted opinions on how best to 

Re: [openstack-dev] Logging and traceback at same time.

2016-02-03 Thread Doug Hellmann
Excerpts from Khayam Gondal's message of 2016-02-03 11:28:52 +0500:
> Is there a way to do logging the information and traceback at the same time. 
> Currently I am doing it like this.
> 
>  
> 
>  
> 
> LOG.error(_LE('Record already exists: %(exception)s '
> 
>  '\n %(traceback)'),
> 
>{'exception': e1},
> 
>{'traceback': traceback.print_stack()}).
> 
>  
> 
> Let me know if this is correct way?
> 
> Regards

You've had a couple of responses with tips for how to log the traceback.
Before you adopt those approaches, please consider whether you actually
need it or not, though.

Tracebacks should be logged for errors the application can't recover
from. They shouldn't be logged for user errors, warnings, or errors
that an operator can fix without changing source code. Errors the
operator can handle should be logged with details about what happened,
but not the traceback.

Our logging guidelines are available at
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Make a separate library from /neutron/agent/ovsdb

2016-02-03 Thread Jakub Libosvar
On 02/03/2016 12:23 PM, Petr Horacek wrote:
> Hello,
> 
> would it be possible to change /neutron/agent/ovsdb package into a
> separate library, independent on openstack? It's a pity that there is
> no high-level python library for ovs handling available and your
> implementation seems to be great. The module is dependent only or some
> openstack utils, would be packaging a problem?
> 
> Thanks,
> Petr
Hi,

there is an initiative to move some parts of the code from neutron tree
into neutron-lib[1][2] so that other services like neutron-*aas don't
necessarily need to depend on neutron. ovsdb sounds like a reusable
library code.

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-lib.html
[2] https://github.com/openstack/neutron-lib
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Duncan Thomas
tOn 3 February 2016 at 17:52, Sam Yaple  wrote:


> This is a very similiar method to what Ekko is doing. The json mapping in
> Ekko is a manifest file which is a sqlite database. The major difference I
> see is Ekko is doing backup trees. If you launch 1000 instances from the
> same glance image, you don't need 1000 fulls, you need 1 full and 1000
> incrementals. Doing that means you save a ton of space, time, bandwidth,
> IO, but it also means n number of backups can reference the same chunk of
> data and it makes deletion of that data much harder than you describe in
> Cinder. When restoring a backup, you don't _need_ a new full, you need to
> start your backups based on the last restore point and the same point about
> saving applies. It also means that Ekko can provide "backups can scale with
> OpenStack" in that sense. Your backups will only ever be your changed data.
>
> I recognize that isn't probably a huge concern for Cinder, with volumes
> typically being just unique data and not duplicate data, but with nova I
> would argue _most_ instances in an OpenStack deployment will be based on
> the same small subset of images and thats alot of duplicate data to
> consider backing up especially at scale.
>
>

So this sounds great. If your backup formats are similar enough, it is
worth considering putting a backup export function in that spits out a
cinder-backup compatible JSON file (it's a dead simple format) and perhaps
an import for the same. That would allow cinder backup and Ekko to exchange
data where desired. I'm not sure if this is possible, but I'd certainly
suggest looking at it.

Thanks for keeping the dialog open, it has definitely been useful.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Corey O'Brien
Hey team,

I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which
covers a bug with etcdctl, and I wanted opinions on how best to fix it.

Should we update the image to include the latest version of etcd? Or,
should we temporarily install the latest version as a part of notify-heat
(see bug for patch)?

I'm personally in favor of updating the image, but there is presumably some
small risk with using a newer version of etcd.

Thanks,
Corey O'Brien
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Preston L. Bannister
On Wed, Feb 3, 2016 at 6:32 AM, Sam Yaple  wrote:

> [snip]
>
Full backups are costly in terms of IO, storage, bandwidth and time. A full
> backup being required in a backup plan is a big problem for backups when we
> talk about volumes that are terabytes large.
>

As an incidental note...

You have to collect full backups, periodically. To do otherwise
assumes *absolutely
no failures* anywhere in the entire software/hardware stack -- ever -- and
no failures in storage over time. (Which collectively is a tad optimistic,
at scale.) Whether due to a rare software bug, a marginal piece of
hardware, or a stray cosmic ray - an occasional bad block will slip through.

More exactly, you need some means of doing occasional full end-to-end
verification of stored backups. Periodic full backups are one
safeguard. How you go about performing full verification, and how often is
a subject for design and optimization. This is where things get a *bit*
more complex. :)

Or you just accept a higher error rate. (How high depends on the
implementation.)

And "Yes", multi-terabyte volumes *are* a challenge.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 3:36 PM, Duncan Thomas 
wrote:

> On 3 February 2016 at 17:27, Sam Yaple  wrote:
>
>
>>
>> And here we get to the meat of the matter. Squashing backups is awful in
>> object storage. It requires you to pull both backups, merge them, then
>> reupload. This also has the downside of casting doubt on a backup since you
>> are now modifying data after it has been backed up (though that doubt is
>> lessened with proper checksuming/hashing which cinder does it looks like).
>> This is the issue Ekko can solve (and has solved over the past 2 years).
>> Ekko can do this "squashing" in a non-traditional way, without ever
>> modifying content or merging anything. With deletions only. This means we
>> do not have to pull two backups, merge, and reupload to delete a backup
>> from the chain.
>>
>
> I'm sure we've lost most of the audience by this point, but I might as
> well reply here as anywhere else...
>

That's ok. We are talking and thats important for featuresets that people
don't even know they want!

>
> In the cinder backup case, since the backup is chunked in object store,
> all that is required is to reference count the chunks that are required for
> the backups you want to keep, get rid of the rest, and re-upload the (very
> small) json mapping file. You can either upload over the old json, or
> create a new one. Either way, the bulk data does not need to be touched.
>

This is a very similiar method to what Ekko is doing. The json mapping in
Ekko is a manifest file which is a sqlite database. The major difference I
see is Ekko is doing backup trees. If you launch 1000 instances from the
same glance image, you don't need 1000 fulls, you need 1 full and 1000
incrementals. Doing that means you save a ton of space, time, bandwidth,
IO, but it also means n number of backups can reference the same chunk of
data and it makes deletion of that data much harder than you describe in
Cinder. When restoring a backup, you don't _need_ a new full, you need to
start your backups based on the last restore point and the same point about
saving applies. It also means that Ekko can provide "backups can scale with
OpenStack" in that sense. Your backups will only ever be your changed data.

I recognize that isn't probably a huge concern for Cinder, with volumes
typically being just unique data and not duplicate data, but with nova I
would argue _most_ instances in an OpenStack deployment will be based on
the same small subset of images and thats alot of duplicate data to
consider backing up especially at scale.

I will have to understand a bit more about cinder-backup before I approach
that subject with Ekko (which right now is on the newton roadmap). What you
have told me absolutely justifies the cinder-backup name (rather than
cinder-snapshot) so thank you for correcting me on that point!


>
>
>
> --
> --
> Duncan Thomas
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Third Party CI Deadlines for Mitaka and N

2016-02-03 Thread Mike Perez
On 17:00 Nov 30, Mike Perez wrote:
> On October 28th 2015 at the Ironic Third Party CI summit session [1], there 
> was
> consensus by the Ironic core and participating vendors that the set of
> deadlines will be:
> 
> * Mitaka-2ː Driver teams will have registered their intent to run CI by 
> creating
> system accounts and identifying a point of contact for their CI team in the
> Third party CI wiki [2].
> * Mitaka Feature Freezeː All driver systems show the ability to receive events
> and post comments in the sandbox.
> * N release feature freezeː Per patch testing and posting comments.
> 
> There are requirements set for OpenStack Third Party CI's [3]. In addition
> Ironic third party CI's must:
> 
> 1) Test all drivers your company has integrated in Ironic.
> 
> For example, if your company has two drivers in Ironic, you would need to have
> a CI that tests against the two and reports the results for each, for every
> Ironic upstream patch. The tests come from a Devstack Gate job template [4], 
> in
> which you just need to switch the "deploy_driver" to your driver.
> 
> To get started, read OpenStack's third party testing documentation [5]. There
> are efforts by OpenStack Infra to allow others to run third party CI similar 
> to
> the OpenStack upstream CI using Puppet [6] and instruction are available [7].
> Don't forget to register your CI in the wiki [2], there is no need to announce
> about it on any mailing list.
> 
> OpenStack Infra also provides third party CI help via meetings [8], and the
> Ironic team has designated people to answer questions with setting up a third
> party CI in the #openstack-ironic room [9].
> 
> If a solution does not have a CI watching for events and posting comments to
> the sandbox [10] by the Mitaka feature freeze, it'll be assumed the driver is
> not active, and can be removed from the Ironic repository as of the Mitaka
> release.
> 
> If a solution is not being tested in a CI system and reporting to OpenStack
> gerrit Ironic patches by the deadline of the N release feature freeze, an
> Ironic driver could be removed from the Ironic repository. Without a CI 
> system,
> Ironic core is unable to verify your driver works in the N release of Ironic.
> 
> If there is something not clear about this email, please email me *directly*
> with your question. You can also reach me as thingee on Freenode IRC in the
> #openstack-ironic channel. Again I want you all to be successful in this, and
> take advantage of this testing you will have with your product. Please
> communicate with me and reach out to the team for help.
> 
> [1] - https://etherpad.openstack.org/p/summit-mitaka-ironic-third-party-ci
> [2] - https://wiki.openstack.org/wiki/ThirdPartySystems
> [3] - 
> http://docs.openstack.org/infra/system-config/third_party.html#requirements
> [4] - 
> https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L961
> [5] - http://docs.openstack.org/infra/system-config/third_party.html
> [6] - https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/
> [7] - 
> https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
> [8] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
> [9] - https://wiki.openstack.org/wiki/Ironic/Testing#Questions
> [10] - https://review.openstack.org/#/q/project:+openstack-dev/sandbox,n,z

Hi all,

Just a reminder that M-2 has passed and all Ironic drivers at this point should
have a service account [1] registered in the third party CI wiki [2] per our
agreed spec [3] for bringing third party CI support in Ironic.

If you are being cc'd directly on this email, it's because you're known as
being a maintainer of a driver, and have been previously contacted on November
30th 2016 about this.

By not having a service account registered for the M-2 deadline, you are
expressing the driver is inactive in the Ironic project and therefore the team
will be unable to verify your driver works.

As expressed in the quoted email, if your driver has no CI reporting in the
sandbox by Mitaka feature freeze, it can be removed in Mitaka.

Please use the resources provided by getting help in the third party CI help
meeting [4] that meets twice a week and different time zones. Also see the
Ironic third party CI information page [5].  Thanks!

[1] - 
http://docs.openstack.org/infra/system-config/third_party.html#creating-a-service-account
[2] - https://wiki.openstack.org/wiki/ThirdPartySystems
[3] - 
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/third-party-ci.html
[4] - https://wiki.openstack.org/wiki/Meetings/ThirdParty
[5] - https://wiki.openstack.org/wiki/Ironic/Testing

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Doc] Further DocImpact Changes

2016-02-03 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

TL;DR: DocImpact is changing, again, please review 276065

Some time ago, docs implemented a blueprint[1] to modify the behaviour of the 
DocImpact script. The agreed change was that all projects *with the exception 
of the five defcore projects* would have DocImpact bugs raised in their own 
repo. In other words, docs were anticipating continuing to handle DocImpact 
bugs for Nova, Glance, Swift, Keystone, and Cinder, but no other projects. In 
order to make this triaging process easier, we also wanted to enforce a 
description when DocImpact was used in a commit message. However, upon 
implementing this in Nova, as a trial, we discovered that it had a non-trivial 
impact on processing times, and the change was reverted[2]. After some 
discussion, we agreed[3] to try just resetting DocImpact to the individual 
groups, *including* the five defcore projects. This eliminates the need the run 
a check on the commit message for a description, as we're moving the burden of 
triage away from the docs team, and into the individual project teams. One of 
the ben
 e
fits of this change is that all product groups can now use DocImpact in a way 
that best suits their individual needs, rather than being tied to what Docs 
expects.

I've now gotten in-principle approval from each of the PTLs affected by this 
change, but wanted to ensure everyone had a chance to go over the new patch[4], 
and discuss these changes.

This is, as always, a reversible decision. If this solution happens to fall in 
a heap, I'm willing to try something different :)

Cheers,
Lana

1: 
http://specs.openstack.org/openstack/docs-specs/specs/mitaka/review-docimpact.html
2: https://review.openstack.org/#/c/259569/
3: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083806.html
4: https://review.openstack.org/#/c/276065/ 

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJWsviTAAoJELppzVb4+KUyz2EH/0IDHPOYzxjaypoJfFF7SopZ
fZLvGZ0EgcIZc39582N38BUW8INFTFG90YrqPbaaPSF0Ri0HBLcs9PuosUaoaQYe
BPe38zm11k5V8rQG6r7WA7vhMsTvHRhNNQpZ89cWkYMWLkFVDYA6LoKN+5RjDeEB
wCpb+4zef2L22UKffaxEZ2bCUYXMx1LCppMN2EFhDou9blZ1yBtAYqS9wPPuUzVB
DbSuRlZ6TMHiVntpJ1X/E8/PpErtaMkd3HPnbz3Ag2oT9O1qqdLCFdDkPaGsnkfJ
aEtPjpdgeDfSVJW5UEJLJdeGhqY7aOtVq/3suLKWf+D/UzY/YBLWHMYacDjabPE=
=0uUc
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Remove time costing case from gate-functional-dsvm-magnum-api

2016-02-03 Thread 王华
Dimitry Ushakov,

Heat wait condition has a timeout, now the default for it is 6000 in our
Heat template. I think we can change it to a reasonable value.

Regards,
wanghua

On Thu, Feb 4, 2016 at 12:38 PM, Dimitry Ushakov <
dimitry.usha...@rackspace.com> wrote:

> Eli,
>
> I’m ok with removing that test but we’ll still have the real problem,
> which is the fact that heat hangs in heat_stack_create while /something/ in
> cloud init fails to complete.  Currently, the theory is that etcdctl hangs,
> which leaves the entire stack in the progress state [1].  Case in point,
> when bay create hangs, the test would still fail after about an hour [2] by
> giving up trying to create a bay.  The last couple of days we’ve really
> been seeing a myriad of issues in the gates not related to Magnum, from
> keystone version bump to apt-get failing (which caused the majority of
> check failures).  I agree with the frustration but I’m afraid that just
> taking that test out won’t fix all our problems.
>
> Thanks,
> Dimitry
>
> [1] https://bugs.launchpad.net/magnum/+bug/1541105
> [2]
> http://logs.openstack.org/65/272965/7/check/gate-functional-dsvm-magnum-api/4a35917/console.html#_2016-02-04_02_57_03_244
>
> From: Eli Qiao 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, February 3, 2016 at 10:03 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Magnum] Remove time costing case from
> gate-functional-dsvm-magnum-api
>
> hello
> all, as you see that[1], gate failed to merge patch though gate since
> gate-functional-dsvm-magnum-api will cause timeout error and make job
> failed.
> by investigate cases in gate-functional-dsvm-magnum-api, these 2 are time
> costing.
>
> 2016-02-03 22:25:42.834 
> 
>  | 2016-02-03 22:25:42.811 | 
> magnum.tests.functional.api.v1.test_bay.BayTest.test_update_bay_name_for_existing_bay[negative]
>   1350.7702016-02-03 22:25:42.836 
> 
>  | 2016-02-03 22:25:42.814 | 
> magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays[positive]
>1300.981
>
>
> I suggest to remove test_update_bay_name_for_exiting_bay as it can be
> covered by unit test.
> I proposed 2 patch to achieve that, please help to do review[2], let avoid
> reverify time to time.
>
> [1]https://review.openstack.org/#/c/260894/
> [2]https://review.openstack.org/276028https://review.openstack.org/276029
>
> --
> Best Regards, Eli(Li Yong)Qiao
> Intel OTC China
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Remove time costing case from gate-functional-dsvm-magnum-api

2016-02-03 Thread Eli Qiao

hello
all, as you see that[1], gate failed to merge patch though gate since 
gate-functional-dsvm-magnum-api will cause timeout error and make job 
failed.
by investigate cases in gate-functional-dsvm-magnum-api, these 2 are 
time costing.


2016-02-03 22:25:42.834  

  | 2016-02-03 22:25:42.811 | 
magnum.tests.functional.api.v1.test_bay.BayTest.test_update_bay_name_for_existing_bay[negative]
  1350.770
2016-02-03 22:25:42.836  

  | 2016-02-03 22:25:42.814 | 
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays[positive]
   1300.981


I suggest to remove test_update_bay_name_for_exiting_bay as it can be 
covered by unit test.
I proposed 2 patch to achieve that, please help to do review[2], let 
avoid reverify time to time.


[1]https://review.openstack.org/#/c/260894/
[2]https://review.openstack.org/276028 https://review.openstack.org/276029

--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Remove time costing case from gate-functional-dsvm-magnum-api

2016-02-03 Thread Dimitry Ushakov
Eli,

I'm ok with removing that test but we'll still have the real problem, which is 
the fact that heat hangs in heat_stack_create while /something/ in cloud init 
fails to complete.  Currently, the theory is that etcdctl hangs, which leaves 
the entire stack in the progress state [1].  Case in point, when bay create 
hangs, the test would still fail after about an hour [2] by giving up trying to 
create a bay.  The last couple of days we've really been seeing a myriad of 
issues in the gates not related to Magnum, from keystone version bump to 
apt-get failing (which caused the majority of check failures).  I agree with 
the frustration but I'm afraid that just taking that test out won't fix all our 
problems.

Thanks,
Dimitry

[1] https://bugs.launchpad.net/magnum/+bug/1541105
[2] 
http://logs.openstack.org/65/272965/7/check/gate-functional-dsvm-magnum-api/4a35917/console.html#_2016-02-04_02_57_03_244

From: Eli Qiao >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, February 3, 2016 at 10:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [Magnum] Remove time costing case from 
gate-functional-dsvm-magnum-api

hello
all, as you see that[1], gate failed to merge patch though gate since 
gate-functional-dsvm-magnum-api will cause timeout error and make job failed.
by investigate cases in gate-functional-dsvm-magnum-api, these 2 are time 
costing.


2016-02-03 
22:25:42.834
 | 2016-02-03 22:25:42.811 | 
magnum.tests.functional.api.v1.test_bay.BayTest.test_update_bay_name_for_existing_bay[negative]
  1350.770
2016-02-03 
22:25:42.836
 | 2016-02-03 22:25:42.814 | 
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays[positive]
   1300.981

I suggest to remove test_update_bay_name_for_exiting_bay as it can be covered 
by unit test.
I proposed 2 patch to achieve that, please help to do review[2], let avoid 
reverify time to time.

[1]https://review.openstack.org/#/c/260894/
[2]https://review.openstack.org/276028https://review.openstack.org/276029

--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-02-03 Thread IWAMOTO Toshihiro
At Sat, 30 Jan 2016 02:08:55 +,
Wuhongning wrote:
> 
> By our testing, ryu openflow has greatly improved the performance, with 500 
> port vxlan flow table, from 15s to 2.5s, 6 times better.

That's quite a impressive number.
What tests did you do?  Could you share some details?

Also, although unlikely, but please make sure your measurements aren't
affected by https://bugs.launchpad.net/neutron/+bug/1538368 .


> 
> From: IWAMOTO Toshihiro [iwam...@valinux.co.jp]
> Sent: Monday, January 25, 2016 5:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] OVS flow modification performance
> 
> At Thu, 21 Jan 2016 02:59:16 +,
> Wuhongning wrote:
> >
> > I don't think 400 flows can show the difference , do you have setup any 
> > tunnel peer?
> >
> > In fact we may set the network type as "vxlan", then make a fake MD 
> > simulate sending l2pop fdb add messages, to push ten's of thousands flows 
> > into the testing ovs agent.
> 
> I chose this method because I didn't want to write such extra code for
> measurements. ;)
> Of course, I'd love to see data from other test environments and other
> workload than agent restarts.
> 
> Also, we now have https://review.openstack.org/#/c/271939/ and can
> profile neutron-server (and probably others, too).
> I couldn't find non-trivial findings until now, though.
> 
> > 
> > From: IWAMOTO Toshihiro [iwam...@valinux.co.jp]
> > Sent: Monday, January 18, 2016 4:37 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron] OVS flow modification performance
> >
> > At Mon, 18 Jan 2016 00:42:32 -0500,
> > Kevin Benton wrote:
> > >
> > > Thanks for doing this. A couple of questions:
> > >
> > > What were your rootwrap settings when running these tests? Did you just
> > > have it calling sudo directly?
> >
> > I used devstack's default, which runs root_helper_daemon.
> >
> > > Also, you mention that this is only ~10% of the time spent during flow
> > > reconfiguration. What other areas are eating up so much time?
> >
> >
> > In another run,
> >
> > $ for f in `cat tgidlist.n2`; do echo -n $f; opreport -n tgid:$f --merge 
> > tid|head -1|tr -d '\n'; (cd bg; opreport -n tgid:$f --merge tid|head 
> > -1);echo; done|sort -nr -k +2
> > 10071   239058 100.000 python2.714922 100.000 python2.7
> > 999592328 100.000 python2.711450 100.000 python2.7
> > 757988202 100.000 python2.7(18596)
> > 1109451560 100.000 python2.747964 100.000 python2.7
> > 703549687 100.000 python2.740678 100.000 python2.7
> > 1109349380 100.000 python2.736004 100.000 python2.7
> > (legend:
> >  )
> >
> > These processes are neutron-server, nova-api,
> > neutron-openvswitch-agent, nova-conductor, dstat and nova-conductor in
> > a decending order.
> >
> > So neutron-server uses about 3x CPU time than the ovs agent,
> > nova-api's CPU usage is similar to the ovs agent's, and the others
> > aren't probably significant.
> >
> > > Cheers,
> > > Kevin Benton
> > >
> > > On Sun, Jan 17, 2016 at 10:12 PM, IWAMOTO Toshihiro 
> > > 
> > > wrote:
> > >
> > > > I'm sending out this mail to share the finding and discuss how to
> > > > improve with those interested in neutron ovs performance.
> > > >
> > > > TL;DR: The native of_interface code, which has been merged recently
> > > > and isn't default, seems to consume less CPU time but gives a mixed
> > > > result.  I'm looking into this for improvement.
> > > >
> > > > * Introduction
> > > >
> > > > With an ML2+ovs Neutron configuration, openflow rule modification
> > > > happens often and is somewhat a heavy operation as it involves
> > > > exec() of the ovs-ofctl command.
> > > >
> > > > The native of_interface driver doesn't use the ovs-ofctl command and
> > > > should have less performance impact on the system.  This document
> > > > tries to confirm this hypothesis.
> > > >
> > > >
> > > > * Method
> > > >
> > > > In order to focus on openflow rule operation time and avoid noise from
> > > > other operations (VM boot-up, etc.), neutron-openvswitch-agent was
> > > > restarted and the time it took to reconfigure the flows was measured.
> > > >
> > > > 1. Use devstack to start a test environment.  As debug logs generate
> > > >considable amount of load, ENABLE_DEBUG_LOG_LEVEL was set to false.
> > > > 2. Apply https://review.openstack.org/#/c/267905/ to enable
> > > >measurement of flow reconfiguration times.
> > > > 3. Boot 80 m1.nano instances.  In my setup, this generates 404 br-int
> > > >flows.  If you have >16G RAM, more could be booted.
> > > > 4. Stop neutron-openvswitch-agent and restart with --run-once arg.
> > > >Use time, oprofile, and python's cProfile (use --profile arg) to
> > > >collect data.
> > > >
> > > > * Results
> > > >
> > > > Execution time (averages of 3 

Re: [openstack-dev] [docs][all] Software design in openstack

2016-02-03 Thread Nick Yeates
Josh, thanks for pointing this out and in being hospitable to an outsider.

Oslo is definitely some of what I was looking for. As you stated, the fact that 
there is an extensive review system with high participation, that this alone 
organically leads to particular trends in sw design. I will have to read more 
about ‘specs', as I don’t quite get what they are and how they are different 
from blueprints.

When I said "What encourages or describes good design in OpenStack?”, I meant, 
what mechanism's/qualities/artifact's/whatever create code that is 
well-received, well-used, efficient. effective, secure… basically: successful 
from a wider-ecosystem standpoint. It sounds to me like much is built into 1) 
the detailed system of reviews, 2) an informal hierarchy of wise technicians, 
and now 3) modularization efforts like this Oslo. Did I summarize this 
adequately?

What artifacts were you going to send me at?
I have still yet to find a good encompassing architecture diagram or white 
paper.

Thanks again!
-N

> On Feb 3, 2016, at 3:05 PM, Joshua Harlow  wrote:
> 
> 
> Nick Yeates wrote:
>> I have been scouring OpenStack artifacts to find examples of what
>> encourages good software design / patterns / architecture in the wider
>> system and code. The info will be used in teaching university students.
>> I suppose it would be good for new developers of the community too.
>> 
>> I found hacking.rst files, along with blueprints and bugs and code
>> reviews, but cant piece together a full picture of how good architecture
>> and design are encouraged via process and/or documents.
>> - Architecture descriptions (ex: http://www.aosabook.org/en/index.html )?
>> - Code standards?
>> - Design rules of thumb?
>> I see the Design Summits, but have not yet found in-depth design
>> recommendations or a process.
> 
> Perhaps oslo is a good start? It starts to feel that good patterns begin 
> either there or in projects, and then those good patterns start to move into 
> a shared location (or library) and then get adopted by others.
> 
> As for a process, the spec process is part of it IMHO, organically it also 
> happens by talking to people in the community and learning who the 
> experienced folks are and what there thoughts are on specs, code (the review 
> process) but that one (organic) is harder to pinpoint exactly when it happens.
> 
>> 
>> Does it come from Developers personal experience, or are there some sort
>> of artifacts to point at? I am looking for both specific examples of
>> design patterns, but more a meta of that. What encourages or describes
>> good design in OpenStack?
> 
> As an oslo core, I can point u at artifacts, but it depends on having more 
> information on what u want, because 'good design' and what encourages it or 
> discourages it is highly relative to the persons definition of the word 
> 'good' (which is connected itself to many things, experience, time in 
> community... prior designs/code/systems built...).
> 
>> 
>> Thanks,
>> -Nick Yeates
>> IRC: nyeates (freenode)
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-03 Thread Gal Sagie
As i have commented on the patch i will also send this to the mailing list:

I really dont see why Dragonflow is not part of this list, given the
criteria you listed.

Dragonflow is fully developed under Neutron/OpenStack, no other
repositories. It is fully Open source and already have a community of
people contributing and interest from various different companies and
OpenStack deployers. (I can prepare the list of active contributions and of
interested parties) It also puts OpenStack Neutron APIs and use cases as
first class citizens and working on being an integral part of OpenStack.

I agree that OVN needs to be part of the list, but you brought up this
criteria in regards to ODL, so: OVN like ODL is not only Neutron and
OpenStack and is even running/being implemented on a whole different
governance model and requirements to it.

I think you also forgot to mention some other projects as well that are
fully open source with a vibrant and diverse community, i will let them
comment here by themselves.

Frankly this approach disappoints me, I have honestly worked hard to make
Dragonflow fully visible and add and support open discussion and follow the
correct guidelines to work in a project. I think that Dragonflow community
has already few members from various companies and this is only going to
grow in the near future. (in addition to deployers that are considering it
as a solution)  we also welcome anyone that wants to join and be part of
the process to step in, we are very welcoming

I also think that the correct way to do this is to actually add as
reviewers all lieutenants of the projects you are now removing from Neutron
big stadium and letting them comment.

Gal.

On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant  wrote:

> On 11/30/2015 07:56 PM, Armando M. wrote:
> > I would like to suggest that we evolve the structure of the Neutron
> > governance, so that most of the deliverables that are now part of the
> > Neutron stadium become standalone projects that are entirely
> > self-governed (they have their own core/release teams, etc).
>
> After thinking over the discussion in this thread for a while, I have
> started the following proposal to implement the stadium renovation that
> Armando originally proposed in this thread.
>
> https://review.openstack.org/#/c/275888
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread 王华
I think we should allow magnum-api to access DB directly like nova-api.

As describe in [1], nova may have many compute nodes and it may take an
hour or a month to upgrade. But the number of magnum-api and
magnum-conductor is limited, the upgrade of them is fast. They don't
benefit from the method. We should upgrade them like the control services
in nova and upgrade them together.

In this step, you will upgrade everything but the compute nodes. This means
nova-api, nova-scheduler, nova-conductor, nova-consoleauth, nova-network,
and nova-cert. In reality, this needs to be done fairly atomically. So,
shut down all of the affected services, roll the new code, and start them
back up. This will result in some downtime for your API, but in reality, it
should be easy to quickly perform the swap. In later releases, we’ll reduce
the pain felt here by eliminating the need for the control services to go
together.

[1]
http://www.danplanet.com/blog/2015/06/26/upgrading-nova-to-kilo-with-minimal-downtime/


On Thu, Feb 4, 2016 at 4:59 AM, Hongbin Lu  wrote:

> I can clarify Eli’s question further.
>
>
>
> 1) is this by designed that we don't allow magnum-api to access DB
> directly ?
>
> Yes, that is what it is. Actually, The magnum-api was allowed to access DB
> directly in before. After the indirection API patch landed [1], magnum-api
> starts using magnum-conductor as a proxy to access DB. According to the
> inputs from oslo team, this design allows operators to take down either
> magnum-api or magnum-conductor to upgrade. This is not the same as
> nova-api, because nova-api, nova-scheduler, and nova-conductor are assumed
> to be shutdown all together as an atomic unit.
>
>
>
> I think we should make our own decision here. If we can pair magnum-api
> with magnum-conductor as a unit, we can remove the indirection API and
> allow both binaries to access DB. This could mitigate the potential
> performance bottleneck of message queue. On the other hand, if we stay with
> the current design, we would allow magnum-api and magnum-conductor to scale
> independently. Thoughts?
>
>
>
> [1] https://review.openstack.org/#/c/184791/
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> *Sent:* February-03-16 10:57 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Magnum] API service won't work if
> conductor down?
>
>
>
> Corey the one you are talking about has changed to coe-service-*.
>
>
>
> Eli, IMO we should display proper error message. M-api service should only
> have read permission.
>
>
>
> Regards,
>
> Madhuri
>
>
>
> *From:* Corey O'Brien [mailto:coreypobr...@gmail.com
> ]
> *Sent:* Wednesday, February 3, 2016 6:50 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Magnum] API service won't work if
> conductor down?
>
>
>
> The service-* commands aren't related to the magnum services (e.g.
> magnum-conductor). The service-* commands are for services on the bay that
> the user creates and deletes.
>
>
>
> Corey
>
>
>
> On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao  wrote:
>
> hi
> Whey I try to run magnum service-list to list all services (seems now we
> only have m-cond service), it m-cond is down(which means no conductor at
> all),
> API won't response and will return a timeout error.
>
> taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
> ERROR: Timed out waiting for a reply to message ID
> fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)
>
> And I debug more and compared with nova service-list, nova will give
> response and will tell the conductor is down.
>
> and deeper I get this in magnum-api boot up:
>
>
> * # Enable object backporting via the conductor
> base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()*
>
> so in magnum_service api code
>
> return objects.MagnumService.list(context, limit, marker, sort_key,
>   sort_dir)
>
> will require to use magnum-conductor to access DB, but no magnum-conductor
> at all, then we get a 500 error.
> (nova-api doesn't specify *indirection_api so nova-api can access DB*)
>
> My question is:
>
> 1) is this by designed that we don't allow magnum-api to access DB
> directly ?
> 2) if 1) is by designed, then `magnum service-list` won't work, and the
> error message should be improved such as "magnum service is down , please
> check magnum conductor is alive"
>
> What do you think?
>
> P.S. I tested comment this line:
> *# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()*
> magnum-api will response but failed to create bay(), which means api
> service have read access but can not write it at all since(all db write
> happened in conductor layer).
>
> --
>
> Best Regards, Eli(Li Yong)Qiao
>
> Intel OTC China
>
> 

[openstack-dev] [tricircle] DHCP port problem

2016-02-03 Thread Vega Cai
Hi all,

When implementing L3 north-south networking functionality, I meet the DHCP
port problem again.

First let me briefly explain the DHCP port problem. In Tricircle, we have a
Neutron server using Tricircle plugin in top pod to take control all the
Neutron servers in bottom pods. The strategy of Tricircle to avoid IP
address conflict is that IP address allocation is done on top and we create
port with IP address specified in bottom pod. However, the behavior of
Neutron to create DHCP port has been changed. Neutron no longer waits for
the creation of the first VM to schedule DHCP agent, but schedule DHCP
agent when subnet is created, then the bound DHCP agent will automatically
create DHCP port. So we have no chance to specify the IP address of DHCP
port. Since the IP address of DHCP port is not reserved in top pod, we have
risk to encounter IP address conflict.

How we solve this problem for VM creation is that we still create a DHCP
port first, then use the IP address of the port to create DHCP port in
bottom pod. If we get an IP address conflict exception, we check if the
bottom port is a DHCP port, if so, we directly use this bottom port and
build a id mapping. If we successfully create the bottom DHCP port, we
check if there are other DHCP ports in bottom pod in the same subnet and
remove them.

Now let's go back to the L3 north-south networking functionality
implementation. If user creates a port and then associates it with a
floating IP before booting a VM, Tricircle plugin needs to create the
bottom internal port first in order to setup bottom floating IP. So again
we have risk that the IP address of the internal port conflicts with the IP
address of a bottom DHCP port.

Below I list some choices to solve this problem:
(1) Always create internal port in Nova gateway so we can directly use the
codes handling DHCP problem in Nova gateway. This will also leave floating
IP stuff to Nova gateway.

(2) Transplant the codes handling DHCP problem from Nova gateway to
Tricircle plugin. Considering there are already a lot of things to do when
associating floating IP, this will make floating IP association more
complex.

(3) Anytime we need to create a bottom subnet, we disable DHCP in this
subnet first so bottom DHCP port will not be created automatically. When we
are going to boot a VM, we create DHCP port in top and bottom pod then
enable DHCP in bottom subnet. When a DHCP agent is scheduled, it will check
if there exists a port whose device_id is "reserved_dhcp_port" and use it
as the DHCP port. By creating a bottom DHCP port with device_id set to
"reserved_dhcp_port", we can guide DHCP agent to use the port we create.

I think this problem can be solved in a separate patch and I will add a
TODO in the patch for L3 north-south networking functionality.

Any comments or suggestions?

BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Question regarding pep8 test

2016-02-03 Thread Vega Cai
Hi Khayam,

Could you check the version of your flake8? It's located in .tox/pep8/bin.

D102 is one of the pep8 error, and I find it's not ignored in Tricircle
tox.ini, so I guess that flake8 on Jenkins doesn't enable this error by
default but in your flake8, this error is enabled.

I check the log of one successful Jenkins job on Tricircle and list the
version of flake8 and related packages Jenkins uses below:

flake8==2.2.4
hacking==0.10.2
pyflakes==0.8.1
mccabe==0.2.1

BR
Zhiyuan

On 3 February 2016 at 14:53, Zhipeng Huang  wrote:

> Hi Khayam,
>
> We try to get every communication on the mailing list for transparency :)
> So I copied your email here.
>
> -
>
> Zhiyan pls check to see what happened.
>
> Sent from HUAWEI AnyOffice
>
>
> -
>
> Hi Chaoyi,
>
> When I run local *pep8 *test using* tox -e pep8*, It always shows error
> regarding
>
> *./tricircle/tests/unit/network/test_plugin.py:268:1: D102  Missing
> docstring in public method*
>
> but on Jenkins no such errors are shown. Due to this contradictory
> behavior, I am unable to locally test *pep8*.
>
> May be there is something I am missing or not well using. Is it possible
> for you to guide me through.
>
>
>
> Regards
>
> Khayam
>
> 
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Prooduct Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.messaging 4.1.0 release (mitaka)

2016-02-03 Thread davanum
We are gleeful to announce the release of:

oslo.messaging 4.1.0: Oslo Messaging API

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.


Changes in oslo.messaging 4.0.0..4.1.0
--

e7d6e92 [zmq] Fix slow down
e32560e Update translation setup
b515859 Let PikaDriver inherit base.BaseDriver
062fedc Improve simulator.py
a0d806f Fixed some warnings about imports and variable
79c9b46 Updated from global requirements
07822a0 Adds document and configuration guide
3c41df9 [zmq] Support KeyboardInterrupt for broker
08dd23d [zmq] Reduce proxy for direct messaging
166cfbf Fixed a couple of pep8 errors/warnings
890125e assertEquals is deprecated, use assertEqual
fb1164f Updated from global requirements
3e5a9f6 Updated from global requirements
7aa7a88 Trivial: Remove unused logging import
6e2f4ef replace string format arguments with function parameters
2706e16 Adds params field to BlockingConnection object
2d2f6ca Python 3 deprecated the logger.warn method in favor of warning
c066c96 Fix URL in warning message
a0a58da [zmq] Implement background redis polling from the client-side
bb8f950 rabbit: Add option to configure QoS prefetch count
0870604 rabbit: making interval_max configurable
4ca6583 Imported Translations from Zanata
7d71a98 Updated from global requirements
39729e4 Logging rpc client/server targets
c818e85 Updated from global requirements
9880765 Topic/server arguments changed in simulator.py
0b350f2 [zmq] Update zmq-guide with new options
3fd208e [zmq] Listeners management cleanup
5a78019 Drop H237,H402,H904 in flake8 ignore list
5150661 Replace deprecated library function os.popen() with subprocess
5ff2dfc py3: Replaces xrange() with six.moves.range()
10625ee Kombu: make reply and fanout queues expire instead of auto-delete
3706263 fix .gitreview - bad merge from pika branch
19921a9 Explicitly add pika dependencies
2c8f393 Add duration option to simulator.py
6f6a0ae [zmq] Added redis sentinel HA implementation to zmq driver
c5825e2 rabbit: set interval max for auto retry
f99a459 [zmq] Add TTL to redis records
9f41070 Updated from global requirements
ca6c34a make enforce_type=True in CONF.set_override
b7fee71 Use assertTrue/False instead of assertEqual(T/F)
6b20fa8 Improvement of logging acorrding to oslo.i18n guideline
91273fe Updated from global requirements
d49ddc3 rabbit: fix unit conversion error of expiration
87e06d9 list_opts: update the notification options group
3d4babe rabbit: Missing to pass parameter timeout to next
817cb0c Fix formatting of code blocks in zmq docs
83a08d4 Adds unit tests for pika_poll module
7c723af Updated from global requirements
7e6470a [zmq] Switch notifications to PUB/SUB pattern
e0a9b0c Optimize sending of a reply in RPC server
2251966 Optimize simulator.py for better throughput
54d0d59 Remove stale directory synced from oslo-incubator
f342573 Fix wrong bugs report URL in CONTRIBUTING
e8703dc zmq: Don't log error when can't import zmq module
417f079 assertIsNone(val) instead of assertEqual(None,val)
5149461 Adds tests for pika_message.py
3976a2f Fixes conflicts after merging master
bee303c Adds comment for pika_pooler.py
438a808 Adds comment, updates pika-pool version
bbf0efa Preparations for configurable serialization
a30fcdf Adds comments and small fixes
46ac91e Provide missing parts of error messages
8caa4be Removes additional select module patching
e24f4fa Fix delay before host reconnecting
cc3db22 Implements more smart retrying
8737dea Splits pika driver into several files
968d3e6 Fixes and improvements after testing on RabbitMQ cluster:
9cae182 Fix fanout exchange name pattern
ad2f475 Implements rabbit-pika driver
6fceab1 bootstrap branch

Diffstat (except docs and test files)
-

CONTRIBUTING.rst   |   2 +-
.../en_GB/LC_MESSAGES/oslo.messaging-log-error.po  |  31 -
.../en_GB/LC_MESSAGES/oslo.messaging-log-info.po   |  27 -
.../LC_MESSAGES/oslo.messaging-log-warning.po  |  34 --
.../es/LC_MESSAGES/oslo.messaging-log-error.po |  32 --
.../fr/LC_MESSAGES/oslo.messaging-log-error.po |  27 -
oslo.messaging/locale/oslo.messaging-log-error.pot |  30 -
oslo.messaging/locale/oslo.messaging-log-info.pot  |  25 -
.../locale/oslo.messaging-log-warning.pot  |  39 --
oslo.messaging/locale/oslo.messaging.pot   |  24 -
.../ru/LC_MESSAGES/oslo.messaging-log-error.po |  29 -
oslo_messaging/_cmd/__init__.py|   1 -
oslo_messaging/_cmd/zmq_broker.py  |  12 +-
oslo_messaging/_drivers/__init__.py|   1 -
oslo_messaging/_drivers/amqp.py|   2 -
oslo_messaging/_drivers/amqpdriver.py  |  30 +-

[openstack-dev] [release][all] release countdown for week R-8, Feb 8-12

2016-02-03 Thread Doug Hellmann
Focus
-

We have 2 more weeks before the final releases for non-client
libraries for this cycle, and 3 weeks before the final releases for
client libraries. Project teams should be focusing on wrapping up
new feature work in all libraries.

We have 3 more weeks before the Mitaka-3 milestone and overall
feature freeze.

Release Actions
---

We will be more strictly enforcing the library release freeze before
M3 in 3 weeks. Please review client libraries, integration libraries,
and any other libraries managed by your team and ensure that recent
changes have been released and the global requirements and constraints
lists are up to date with accurate minimum versions and exclusions.

Important Dates
---

Final release for  non-client libraries: Feb 24
Final release for client libraries: Mar 2
Mitaka 3: Feb 29-Mar 4 (includes feature freeze and soft string freeze)

Mitaka release schedule: 
http://docs.openstack.org/releases/schedules/mitaka.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 2:52 PM, Jeremy Stanley  wrote:

> On 2016-02-03 14:32:36 + (+), Sam Yaple wrote:
> [...]
> > Luckily, digging into it it appears cinder already has all the
> > infrastructure in place to handle what we had talked about in a
> > separate email thread Duncan. It is very possible Ekko can
> > leverage the existing features to do it's backup with no change
> > from Cinder.
> [...]
>
> If Cinder's backup facilities already do most of
> what you want from it and there's only a little bit of development
> work required to add the missing feature, why jump to implementing
> this feature in a completely separate project instead rather than
> improving Cinder's existing solution so that people who have been
> using that can benefit directly?
>

Backing up Cinder was never the initial goal, just a potential feature on
the roadmap. Nova is the main goal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread Kumari, Madhuri
Corey the one you are talking about has changed to coe-service-*.

Eli, IMO we should display proper error message. M-api service should only have 
read permission.

Regards,
Madhuri

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: Wednesday, February 3, 2016 6:50 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

The service-* commands aren't related to the magnum services (e.g. 
magnum-conductor). The service-* commands are for services on the bay that the 
user creates and deletes.

Corey

On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao 
> wrote:
hi
Whey I try to run magnum service-list to list all services (seems now we only 
have m-cond service), it m-cond is down(which means no conductor at all),
API won't response and will return a timeout error.

taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
ERROR: Timed out waiting for a reply to message ID 
fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)

And I debug more and compared with nova service-list, nova will give response 
and will tell the conductor is down.

and deeper I get this in magnum-api boot up:

# Enable object backporting via the conductor
base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

so in magnum_service api code

return objects.MagnumService.list(context, limit, marker, sort_key,
  sort_dir)

will require to use magnum-conductor to access DB, but no magnum-conductor at 
all, then we get a 500 error.
(nova-api doesn't specify indirection_api so nova-api can access DB)

My question is:

1) is this by designed that we don't allow magnum-api to access DB directly ?
2) if 1) is by designed, then `magnum service-list` won't work, and the error 
message should be improved such as "magnum service is down , please check 
magnum conductor is alive"

What do you think?

P.S. I tested comment this line:
# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()
magnum-api will response but failed to create bay(), which means api service 
have read access but can not write it at all since(all db write happened in 
conductor layer).



--

Best Regards, Eli(Li Yong)Qiao

Intel OTC China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-02-03 Thread gordon chung


On 03/02/2016 9:16 AM, Foley, Emma L wrote:
>   AFAICT there's no such thing out of the box but it should be fairly 
> straightforward to implement a StatsD writer using the collectd Python plugin.
>   Simon
>
>   [1] https://collectd.org/documentation/manpages/collectd-python.5.shtml
>
> I guess that’ll have to be the plan now: get a prototype in place and have a 
> look at how well it does.
> The first one is always the most difficult, so it should be fairly quick to 
> get this going.
>

nice, do you have resource to look at this? or maybe something to add to 
Gnocchi's potential backlog. existing plugin still seems useful to those 
who want to use custom/proprietary storage.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 3:58 PM, Duncan Thomas 
wrote:

> On 3 February 2016 at 17:52, Sam Yaple  wrote:
>
>
>> This is a very similiar method to what Ekko is doing. The json mapping in
>> Ekko is a manifest file which is a sqlite database. The major difference I
>> see is Ekko is doing backup trees. If you launch 1000 instances from the
>> same glance image, you don't need 1000 fulls, you need 1 full and 1000
>> incrementals. Doing that means you save a ton of space, time, bandwidth,
>> IO, but it also means n number of backups can reference the same chunk of
>> data and it makes deletion of that data much harder than you describe in
>> Cinder. When restoring a backup, you don't _need_ a new full, you need to
>> start your backups based on the last restore point and the same point about
>> saving applies. It also means that Ekko can provide "backups can scale with
>> OpenStack" in that sense. Your backups will only ever be your changed data.
>>
>> I recognize that isn't probably a huge concern for Cinder, with volumes
>> typically being just unique data and not duplicate data, but with nova I
>> would argue _most_ instances in an OpenStack deployment will be based on
>> the same small subset of images and thats alot of duplicate data to
>> consider backing up especially at scale.
>>
>>
>
> So this sounds great. If your backup formats are similar enough, it is
> worth considering putting a backup export function in that spits out a
> cinder-backup compatible JSON file (it's a dead simple format) and perhaps
> an import for the same. That would allow cinder backup and Ekko to exchange
> data where desired. I'm not sure if this is possible, but I'd certainly
> suggest looking at it.
>
> This is potentially possible. The issue I see would be around
compression/encryption of the different chunks (in ekko we refer to them as
segments). But we will probably be able to work this out in time.


> Thanks for keeping the dialog open, it has definitely been useful.
>
> I have enjoyed the exchange as well. I am a big fan of open-source and
community.

>
> --
> Duncan Thomas
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases for multiple L3 backends

2016-02-03 Thread Eichberger, German
+1 – Good discussion in this thread.

We once had the plan to go with Gantt (https://wiki.openstack.org/wiki/Gantt) 
rather than re-invent that wheel but… in any case we have a simple framework to 
start experimenting ;-)

German

From: Doug Wiegley 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, February 2, 2016 at 7:01 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends

The lbaas use case was something like having one flavor with hardware SSL 
offload and one that doesn’t, e.g. You can easily have multiple backends that 
can do both (in fact, you might even want to let the lower flavor provision 
onto the higher, if you have spare capacity on one and not the other.) And the 
initial “scheduler” in such cases was supposed to be a simple round robin or 
hash, to be revisted later, including the inevitable rescheduling problem, or 
oversubscription issue. It quickly becomes as the same hairy wart that nova has 
to deal with, and all are valid use cases.

doug


On Feb 2, 2016, at 6:43 PM, Kevin Benton 
> wrote:


So flavors are for routers with different behaviors that you want the user to 
be able to choose from (e.g. High performance, slow but free, packet logged, 
etc). Multiple drivers are for when you have multiple backends providing the 
same flavor (e.g. The high performance flavor has several drivers for various 
bare metal routers).

On Feb 2, 2016 18:22, "rzang" 
> wrote:
What advantage can we get from putting multiple drivers into one flavor over 
strictly limit one flavor one driver (or whatever it is called).

Thanks,
Rui

-- Original --
From:  "Kevin Benton";>;
Send time: Wednesday, Feb 3, 2016 8:55 AM
To: "OpenStack Development Mailing List (not for usage 
questions)">;
Subject:  Re: [openstack-dev] [neutron] - L3 flavors and issues with usecases 
for multiple L3 backends


Choosing from multiple drivers for the same flavor is scheduling. I didn't mean 
automatically selecting other flavors.

On Feb 2, 2016 17:53, "Eichberger, German" 
> wrote:
Not that you could call it scheduling. The intent was that the user could pick 
the best flavor for his task (e.g. a gold router as opposed to a silver one). 
The system then would “schedule” the driver configured for gold or silver. 
Rescheduling wasn’t really a consideration…

German

From: Doug Wiegley 
>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>>
Date: Monday, February 1, 2016 at 8:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>>
Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases 
for multiple L3 backends

Yes, scheduling was a big gnarly wart that was punted for the first pass. The 
intention was that any driver you put in a single flavor had equivalent 
capabilities/plumbed to the same networks/etc.

doug


On Feb 1, 2016, at 7:08 AM, Kevin Benton 
>>
 wrote:


Hi all,

I've been working on an implementation of the multiple L3 backends RFE[1] using 
the flavor framework and I've run into some snags with the use-cases.[2]

The first use cases are relatively straightforward where the user requests a 
specific flavor and that request gets dispatched to a driver associated with 
that flavor via a service profile. However, several of the use-cases are based 
around the idea that there is a single flavor with multiple drivers and a 
specific driver will need to be used depending on the placement of the router 
interfaces. i.e. a router cannot be bound to a driver until an interface is 
attached.

This creates some painful coordination problems amongst drivers. For example, 
say the first two networks that a user attaches a router to can be reached by 
all drivers because they use overlays so the first driver chosen by the 
framework works  

Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-02-03 Thread Sam Yaple
On Wed, Feb 3, 2016 at 4:53 PM, Preston L. Bannister 
wrote:

> On Wed, Feb 3, 2016 at 6:32 AM, Sam Yaple  wrote:
>
>> [snip]
>>
> Full backups are costly in terms of IO, storage, bandwidth and time. A
>> full backup being required in a backup plan is a big problem for backups
>> when we talk about volumes that are terabytes large.
>>
>
> As an incidental note...
>
> You have to collect full backups, periodically. To do otherwise assumes 
> *absolutely
> no failures* anywhere in the entire software/hardware stack -- ever --
> and no failures in storage over time. (Which collectively is a tad
> optimistic, at scale.) Whether due to a rare software bug, a marginal piece
> of hardware, or a stray cosmic ray - an occasional bad block will slip
> through.
>

A new full can be triggered at any time should there be concern of a
problem. (see my next point)

>
> More exactly, you need some means of doing occasional full end-to-end
> verification of stored backups. Periodic full backups are one
> safeguard. How you go about performing full verification, and how often is
> a subject for design and optimization. This is where things get a *bit*
> more complex. :)
>

Yes an end-to-end verification of the backup would be easy to implement,
but costly to run. But thats more on the user to decided those things. With
a proper scheduler this is less an issue for Ekko, and more a backup policy
issue.

>
> Or you just accept a higher error rate. (How high depends on the
> implementation.)
>

And its not a full loss, its just not a 100% valid backup. Luckily youve
only lost a single segment (a few thousand sectors) chances are the
critical stuff you want isn't there. That data can still be recovered.
And object-storage with replication makes it very, very hard to loss data
when properly maintained (look at S3 and the data its lost over time). We
have checksum/hash verification in place already so the underlying data
must be valid or we don't restore. But your points are well received.

>
> And "Yes", multi-terabyte volumes *are* a challenge.
>

And increasingly common...
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Virtual Mid-Cycle meeting next week

2016-02-03 Thread Nikhil Komawar
Hi,

The time allocation for both days Thursdays and Friday is 2 hours and
the agenda proposed already seems to be already handful for the time
allocated (please correct me if I am wrong).

Nevertheless, I have proposed another couple of topics in the agenda
that I would like be discussed briefly. I would like another 30 mins
slots both days or an hour slot on either one of the days to cover these
topics.


On 2/2/16 12:07 PM, Flavio Percoco wrote:
> On 29/01/16 09:33 -0430, Flavio Percoco wrote:
>> Greetings,
>>
>> As promissed (although I promissed it yday), here's the link to vote
>> for the
>> days you'd like the Glance Virtual Midcycle to happen. We'll be
>> meeting just for
>> 2 days and at maximum for 3 hours. The 2 days with more votes are the
>> ones that
>> will be picked. Since there's such a short notice, I'll be actively
>> pinging you
>> all and I'll close the vote on Monday Feb 1st.
>>
>> http://doodle.com/poll/eck5hr5d746fdxh6
>>
>> Thank you all for jumping in with such a short notice,
>> Flavio
>>
>> P.S: I'll be sending the details of the meeting out with the invitation.
>>
>> -- 
>> @flaper87
>> Flavio Percoco
>
>
> Hey Folks,
>
> So, Let's do this:
>
> I've started putting together an agenda for these 2 days here:
>
> https://etherpad.openstack.org/p/glance-mitaka-virtual-mid-cycle
>
> Please, chime in and comment on what topics you'd like to talk about.
>
> The virtual mid-cycle will be held on the follwing dates:
>
> Thursday 4th from 15:00 UTC to 17:00 UTC
>
> Thursday 5th from 15:00 UTC to 17:00 UTC
>
> The calls will happen on BlueJeans and it's open to everyone. Please,
> do reply
> off-list if you'd like to get a proper invite on your calendar.
> Otherwise, you
> can simply join the link below at the meeting time and meet us there.
>
> Bluejeans link: https://bluejeans.com/1759335191
>
> One more note. The virtual mid-cycle will be recorded and when you
> join, the
> recording will likely have been started already.
>
> Hope to see you all there!
> Flavio
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] IRC Meeting Thursday February 4th at 17:00UTC

2016-02-03 Thread Christopher Aedo
Join us tomorrow for our next weekly meeting, scheduled for February
4th at 17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

One thing on the agenda for the 2/4/2016 meeting is the topic of
implementing an API for the App Catalog, and whether we'll have a
strong commitment of the necessary resources to continue in the
direction agreed upon during the Tokyo summit.  If you have anything
to say on that subject please be sure to join us!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][all] Software design in openstack

2016-02-03 Thread Nick Yeates
I have been scouring OpenStack artifacts to find examples of what encourages 
good software design / patterns / architecture in the wider system and code. 
The info will be used in teaching university students. I suppose it would be 
good for new developers of the community too.

I found hacking.rst files, along with blueprints and bugs and code reviews, but 
cant piece together a full picture of how good architecture and design are 
encouraged via process and/or documents. 
  - Architecture descriptions (ex: http://www.aosabook.org/en/index.html 
 )?
  - Code standards?
  - Design rules of thumb?
I see the Design Summits, but have not yet found in-depth design 
recommendations or a process.

Does it come from Developers personal experience, or are there some sort of 
artifacts to point at? I am looking for both specific examples of design 
patterns, but more a meta of that. What encourages or describes good design in 
OpenStack?

Thanks,
-Nick Yeates
IRC: nyeates (freenode)__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Port Query Performance Test

2016-02-03 Thread Rick Jones

On 02/03/2016 05:32 AM, Vega Cai wrote:

Hi all,

I did a test about the performance of port query in Tricircle yesterday.
The result is attached.

Three observations in the test result:
(1) Neutron client costs much more time than curl, the reason may be
neutron client needs to apply for a new token in each run.


Is "needs" a little strong there?  When I have been doing things with 
Neutron CLI at least and needed to issue a lot of requests over a 
somewhat high latency path, I've used the likes of:


token=$(keystone token-get | awk '$2 == "id" {print$4}')
NEUTRON="neutron --os-token=$token --os-url=https://mutter

to avoid grabbing a token each time.  Might that be possible with what 
you are testing?


rick jones

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Logging and traceback at same time.

2016-02-03 Thread Joshua Harlow

Sean McGinnis wrote:

On Wed, Feb 03, 2016 at 02:46:28PM +0800, 王华 wrote:

You can use LOG.exception.


Yes, I highly recommend using LOG.exception in this case. That is
exactly what it's used for. LOG.exception is pretty much exactly like
LOG.error, but with the additional behavior that it will log out the
details of whatever exception is currently in scope.


Not pretty much, it is ;)

https://hg.python.org/cpython/file/b4cbecbc0781/Lib/logging/__init__.py#l1305 



'''
def exception(self, msg, *args, **kwargs):
kwargs['exc_info'] = True
self.error(msg, *args, **kwargs)
'''

-Josh




Regards,
Wanghua

On Wed, Feb 3, 2016 at 2:28 PM, Khayam Gondal
wrote:


Is there a way to do logging the information and traceback at the same
time. Currently I am doing it like this.





LOG.error(_LE('Record already exists: %(exception)s '

  '\n %(traceback)'),

{'exception': e1},

{'traceback': traceback.print_stack()}).



Let me know if this is correct way?

Regards

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Vilobh Meshram
Hi Corey,

This is slowing down our merge rate and needs to be fixed IMHO.

What risk are you talking about when using newer version of etcd ? Is it
documented somewhere for the team to have a look ?

-Vilobh

On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien 
wrote:

> Hey team,
>
> I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which
> covers a bug with etcdctl, and I wanted opinions on how best to fix it.
>
> Should we update the image to include the latest version of etcd? Or,
> should we temporarily install the latest version as a part of notify-heat
> (see bug for patch)?
>
> I'm personally in favor of updating the image, but there is presumably
> some small risk with using a newer version of etcd.
>
> Thanks,
> Corey O'Brien
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Corey O'Brien
As long as configurations for 2.2 and 2.0 are compatible we shouldn't have
an issue I wouldn't think. I just don't know enough about etcd deployment
to be sure about that.

If we want to quickly improve the gate, I can patch the problematic areas
in the templates and then we can make a blueprint for upgrading to Atomic
23.

Corey

On Wed, Feb 3, 2016 at 1:47 PM Vilobh Meshram <
vilobhmeshram.openst...@gmail.com> wrote:

> Hi Corey,
>
> This is slowing down our merge rate and needs to be fixed IMHO.
>
> What risk are you talking about when using newer version of etcd ? Is it
> documented somewhere for the team to have a look ?
>
> -Vilobh
>
> On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien 
> wrote:
>
>> Hey team,
>>
>> I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which
>> covers a bug with etcdctl, and I wanted opinions on how best to fix it.
>>
>> Should we update the image to include the latest version of etcd? Or,
>> should we temporarily install the latest version as a part of notify-heat
>> (see bug for patch)?
>>
>> I'm personally in favor of updating the image, but there is presumably
>> some small risk with using a newer version of etcd.
>>
>> Thanks,
>> Corey O'Brien
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all] Software design in openstack

2016-02-03 Thread Joshua Harlow


Nick Yeates wrote:

I have been scouring OpenStack artifacts to find examples of what
encourages good software design / patterns / architecture in the wider
system and code. The info will be used in teaching university students.
I suppose it would be good for new developers of the community too.

I found hacking.rst files, along with blueprints and bugs and code
reviews, but cant piece together a full picture of how good architecture
and design are encouraged via process and/or documents.
- Architecture descriptions (ex: http://www.aosabook.org/en/index.html )?
- Code standards?
- Design rules of thumb?
I see the Design Summits, but have not yet found in-depth design
recommendations or a process.


Perhaps oslo is a good start? It starts to feel that good patterns begin 
either there or in projects, and then those good patterns start to move 
into a shared location (or library) and then get adopted by others.


As for a process, the spec process is part of it IMHO, organically it 
also happens by talking to people in the community and learning who the 
experienced folks are and what there thoughts are on specs, code (the 
review process) but that one (organic) is harder to pinpoint exactly 
when it happens.




Does it come from Developers personal experience, or are there some sort
of artifacts to point at? I am looking for both specific examples of
design patterns, but more a meta of that. What encourages or describes
good design in OpenStack?


As an oslo core, I can point u at artifacts, but it depends on having 
more information on what u want, because 'good design' and what 
encourages it or discourages it is highly relative to the persons 
definition of the word 'good' (which is connected itself to many things, 
experience, time in community... prior designs/code/systems built...).




Thanks,
-Nick Yeates
IRC: nyeates (freenode)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev