Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Rochelle Grober
At the Tokyo summit, there was a working session that addressed how to log the 
request-id chain.  The etherpad for that is [0]

A spec needs to be written and implementation details need some hashing out, 
but the approach should provide a way to track the originating request through 
each logged transition, even through forking.

The solution is essentially a triplet of {original RID, previous RID, current 
RID}. The first and last steps in the process would have only two fields.

So, it's a matter of getting the spec written and approved, then implementing it 
in Oslo and integrating with the new RID calls.
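
To make the triplet idea concrete, here is a minimal WSGI middleware sketch. It
is illustrative only, not the actual Oslo code; the header names and the
chain-logging behaviour are assumptions for the example.

# Illustrative sketch only -- not the actual oslo.middleware implementation.
# Header names and chain-logging behaviour are assumptions for the example.
import uuid

REQUEST_ID_HDR = "X-OpenStack-Request-Id"            # returned to the caller
ORIGINAL_ID_HDR = "X-OpenStack-Original-Request-Id"  # hypothetical header


class RequestIdChain(object):
    """Track the {original, previous, current} request-id triplet per call."""

    def __init__(self, app, logger):
        self.app = app
        self.logger = logger

    def __call__(self, environ, start_response):
        current = "req-" + str(uuid.uuid4())
        # 'previous' is whatever the calling service sent us, if anything.
        previous = environ.get("HTTP_X_OPENSTACK_REQUEST_ID")
        # 'original' is carried unchanged from the first hop in the chain.
        original = environ.get("HTTP_X_OPENSTACK_ORIGINAL_REQUEST_ID") or previous

        self.logger.info("request-id chain: original=%s previous=%s current=%s",
                         original, previous, current)

        def _start_response(status, headers, exc_info=None):
            headers = list(headers) + [(REQUEST_ID_HDR, current)]
            if original:
                headers.append((ORIGINAL_ID_HDR, original))
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)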

And, yeah, I should have done the spec long ago

--Rocky

[0] https://etherpad.openstack.org/p/Mitaka_Cross_Project_Logging

-Original Message-
From: Andrew Laski [mailto:and...@lascii.com] 
Sent: Wednesday, January 27, 2016 3:21 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of 
x-openstack-request-id



On Wed, Jan 27, 2016, at 05:47 AM, Kuvaja, Erno wrote:
> > -Original Message-
> > From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> > Sent: Wednesday, January 27, 2016 9:56 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make
> > use of x-openstack-request-id
> > 
> > 
> > 
> > On 1/27/2016 9:40 AM, Tan, Lin wrote:
> > > Thank you so much, Erno. This really helps me a lot!!
> > >
> > > Tan
> > >
> > > *From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
> > > *Sent:* Tuesday, January 26, 2016 8:34 PM
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > While the cross-project spec was being discussed, Glance already had an
> > > implementation of request IDs in place. At the time of the Glance
> > > implementation we assumed that one request ID was desired through the
> > > chain of services, and we implemented the request ID to be accepted as part
> > > of the request. This was mainly driven by wanting the same request ID through
> > > the chain between glance-api and glance-registry, but as the same code
> > > was used in both the api and registry services we got this functionality
> > > across Glance.
> > >
> > > The cross-project discussion turned this approach down and decided
> > > that only a new request ID will be returned. We did not want to maintain two
> > > different code bases to handle request IDs in glance-api and
> > > glance-registry, nor did we want to remove the functionality that allows
> > > request IDs to be passed to the service, as that was already merged into
> > > our API. Thus, if requests are passed to the services without a request ID
> > > defined, they behave the same way (apart from nova having a different
> > > header name), but with Glance the request maker has the liberty to specify
> > > the request ID they want to use (within configured length limits).
> > >
> > > Hopefully that clarifies it for you.
> > >
> > > -Erno
> > >
> > > *From:*Tan, Lin [mailto:lin@intel.com]
> > > *Sent:* 26 January 2016 01:26
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Thanks Kekane, I tested glance/neutron/keystone with
> > > ``x-openstack-request-id`` and found something interesting.
> > >
> > > I am able to pass ``x-openstack-request-id`` to glance and it will
> > > use the UUID as its request-id. But it failed with neutron and keystone.
> > >
> > > Here is my test:
> > >
> > > http://paste.openstack.org/show/484644/
> > >
> > > It looks like it is because keystone and neutron are using
> > > oslo_middleware:RequestId.factory, and in this part:
> > >
> > > https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35
> > >
> > > it will always generate a UUID and append it to the response as the
> > > ``x-openstack-request-id`` header.
> > >
> > > My question is: should we accept an externally passed request-id as the
> > > project's own request-id, or should each project have its own unique request-id?
> > >
> > > In other words, which is the correct way, glance or neutron/keystone?
> > > There must be something wrong with one of them.
> > >
> > > Thanks
> > >
> > > B.R
> > >
> > > Tan
> > >
> > > *From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> > > *Sent:* Wednesday, December 2, 2015 2:24 PM
> > > *To:* OpenStack Development Mailing List
> > > (openstack-dev@lists.openstack.org
> > > )
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in
> > > the API response header, but this request id is not available to the
> > > caller from the python client.
> > 

Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Sean M. Collins
On Wed, Jan 27, 2016 at 05:06:03PM EST, Assaf Muller wrote:
> >> RDO systemd init script for the L3 agent will send a signal 15 when
> >> 'systemctl restart neutron-l3-agent' is executed. I assume
> >> Debian/Ubuntu do the same. It is imperative that agent restarts do not
> >> cause data plane interruption. This has been the case for the L3 agent
> >
> > But wouldn't it really be wiser to use SIGHUP to communicate the intent
> > to restart a process?
> 
> Maybe. I just checked and on a Liberty based RDO installation, sending
> SIGHUP to a L3 agent doesn't actually do anything. Specifically it
> doesn't resync its routers (Which restarting it with signal 15 does).

See, but there must be something that is starting the neutron l3 agent
again, *after* sending it a SIGTERM (signal 15). Then the l3 agent does
a full resync since it's started back up, based on some state accounting
done in what appears to be the plugin. Nothing about signal 15 actually
does any restarting. It just terminates the process.

> 2016-01-27 20:45:35.075 14651 INFO neutron.agent.l3.agent [-] Agent has just 
> been revived. Doing a full sync.

https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L697

https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L679


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 4:52 PM, Sean M. Collins  wrote:
> On Wed, Jan 27, 2016 at 04:24:00PM EST, Assaf Muller wrote:
>> On Wed, Jan 27, 2016 at 4:10 PM, Sean M. Collins  wrote:
>> > Hi,
>> >
>> > I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661
>> >
>> > We have radvd processes that the l3 agent launches, and if the l3 agent
>> > is terminated these radvd processes continue to run. I think we should
>> > probably terminate them when the l3 agent is terminated, like if we are
>> > in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
>> > side but I'm waffling a bit on if it's the right thing to do or not[2].
>> >
>> > The only concern I have is if there are situations where the l3 agent
>> > terminates, but we don't want data plane disruption. For example, if
>> > something goes wrong and the L3 agent dies, if the OS will be sending a
>> > SIGABRT (which my WIP patch doesn't catch[3] and radvd would continue to 
>> > run) or if a
>> > SIGTERM is issued, or worse, an OOM event occurs (I think that's a
>> > SIGTERM too?) and you get an outage.
>>
>> RDO systemd init script for the L3 agent will send a signal 15 when
>> 'systemctl restart neutron-l3-agent' is executed. I assume
>> Debian/Ubuntu do the same. It is imperative that agent restarts do not
>> cause data plane interruption. This has been the case for the L3 agent
>
> But wouldn't it really be wiser to use SIGHUP to communicate the intent
> to restart a process?

Maybe. I just checked and on a Liberty based RDO installation, sending
SIGHUP to a L3 agent doesn't actually do anything. Specifically it
doesn't resync its routers (Which restarting it with signal 15 does).

>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] changes to keystone-core!

2016-01-27 Thread Steve Martinelli


Hello everyone!

We've been talking about this for a long while, and I am very pleased to
announce that at the midcycle we have made changes to keystone-core. The
project has grown and our review queue grows ever longer. Effective
immediately, we'd like to welcome the following new Guardians of the Gate
to keystone-core:

+ Dave Chen (davechen)
+ Samuel de Medeiros Queiroz (samueldmq)

Happy code reviewing!

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add TOSCA assets to the catalog

2016-01-27 Thread Fox, Kevin M
Our schema file is located here:
http://git.openstack.org/cgit/openstack/app-catalog/tree/openstack_catalog/web/static/assets.schema.yaml

We don't have a schema type defined for TOSCA assets yet. We should discuss the 
sorts of things that need to be listed so that the asset can be loaded into an 
OpenStack instance. For example, glance needs architecture, minimum 
requirements, container type information, etc in addition to the file. When the 
horizon plugin installs/runs it, we need to pass all the information along. 
Also, it would be helpful to discuss the mechanism for launching the assets if 
there is such a thing.

Thanks,
Kevin

From: Steve Gordon [sgor...@redhat.com]
Sent: Wednesday, January 27, 2016 1:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog] Add TOSCA assets to the catalog

- Original Message -
> From: "Sahdev P Zala" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
>
> Hello,
>
> I am looking at this blueprint
> https://blueprints.launchpad.net/app-catalog/+spec/add-tosca-assets and am
> confused about the first task, "define metadata for TOSCA assets". Can
> someone please provide an example of metadata from the previous work on Heat and
> Murano? I tried to find some old patch for reference but couldn't get one.
>
>
> TOSCA will provide a YAML template file and a package in CSAR form
> (.csar and .zip) to host on the catalog.
>
>
> Thanks!
>
> Regards,
> Sahdev Zala

I *believe* it's referring to the metadata for the asset type, if you look in 
the schema here:


http://git.openstack.org/cgit/openstack/app-catalog/tree/openstack_catalog/web/static/assets.schema.yaml

...you will find definitions for heat, glance, and murano assets indicating 
required properties etc.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] service type vs. project name for use in headers

2016-01-27 Thread Ravi, Goutham
What is the point of the guideline if we're not able to influence some of the 
biggest projects out there, which would keep growing with what they have?

Maybe we should add a note in each of those guidelines saying some examples 
exist where SERVICE_TYPE has been replaced by PROJECT_NAME for these headers; 
however, this "anomaly" exists only where projects have/support multiple 
controllers under different SERVICE_TYPEs. It should be explicit that the 
guidelines recommend SERVICE_TYPE (as Dean stated) and do not recommend 
PROJECT_NAME, and that the main purpose of including these names at all is 
to distinguish the headers, among all the OpenStack REST API calls, when they 
are being recorded for support purposes, etc.

--
Goutham



From: Dean Troyer >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, January 27, 2016 at 3:31 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [api] service type vs. project name for use in 
headers

On Wed, Jan 27, 2016 at 1:47 PM, michael mccune 
> wrote:
i am not convinced that we would ever need to have a standard on how these 
names are chosen for the header values, or if we would even need to have header 
names that could be deduced. for me, it would be much better for the projects to 
use an identifier that makes sense to them, *and* for each project to have good 
api documentation.

I think we would be better served in selecting these things by thinking about the 
API consumers first. We already have enough for them to wade through; the 
API-WG is making great gains in herding those particular cats, and I would hate to 
see us give back some of that here.

so, instead of using examples where we have header names like 
"OpenStack-Some-[SERVICE_TYPE]-Header", maybe we should suggest 
"OpenStack-Some-[SERVICE_TYPE or PROJECT_NAME]-Header" as our guideline.

I think the listed reviews have it right, only referencing service type. We 
have attempted to reduce the visible surface area of project names in a LOT of 
areas, and I do not think this is one that needs to be an exception to that.

Projects will do what they are going to do, sometimes in spite of guidelines.  
This does not mean that the guidelines need to bend to match that practice when 
it is at odds with larger concerns.

In this case, the use of service type as the primary identifier for endpoints 
and API services is well established, and is how the service catalog has and 
will always work.

dt

--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] Add TOSCA assets to the catalog

2016-01-27 Thread Sahdev P Zala
Great. Thank you so much Kevin and Steve!! 

I will try defining a schema for TOSCA and update it for review. 

Regards, 
Sahdev Zala





From:   "Fox, Kevin M" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   01/27/2016 05:26 PM
Subject:Re: [openstack-dev] Add TOSCA assets to the catalog



Our schema file is located here:
http://git.openstack.org/cgit/openstack/app-catalog/tree/openstack_catalog/web/static/assets.schema.yaml


We don't have a schema type defined for TOSCA assets yet. We should 
discuss the sorts of things that need to be listed so that the asset can 
be loaded into an OpenStack instance. For example, glance needs 
architecture, minimum requirements, container type information, etc in 
addition to the file. When the horizon plugin installs/runs it, we need to 
pass all the information along. Also, it would be helpful to discuss the 
mechanism for launching the assets if there is such a thing.

Thanks,
Kevin

From: Steve Gordon [sgor...@redhat.com]
Sent: Wednesday, January 27, 2016 1:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog] Add TOSCA assets to the catalog

- Original Message -
> From: "Sahdev P Zala" 
> To: "OpenStack Development Mailing List (not for usage questions)" 

>
> Hello,
>
> I am looking at this blueprint
> https://blueprints.launchpad.net/app-catalog/+spec/add-tosca-assets and
> confused about the first task, "define metadata for TOSCA assets"? Can
> someone please provide example of metadata for previous work on Heat and
> Murano? I tried to find some old patch for reference but couldn't get 
one.
>
>
> The TOSCA will provide YAML template file and package in a CSAR form
> (.csar and .zip) to host on catalog.
>
>
> Thanks!
>
> Regards,
> Sahdev Zala

I *believe* it's referring to the metadata for the asset type, if you look 
in the schema here:


http://git.openstack.org/cgit/openstack/app-catalog/tree/openstack_catalog/web/static/assets.schema.yaml


...you will find definitions for heat, glance, and murano assets 
indicating required properties etc.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Sam Yaple
On Wed, Jan 27, 2016 at 8:19 PM, Fausto Marzi 
wrote:

> Hi Sam,
>
> After our conversation, I have a few questions and considerations about Ekko,
> mainly on how it works and similar topics, and also to make our discussions
> available to the community:
>
> -  I understand you are placing a backup-agent on the compute
> node and executing actions that interact directly with the hypervisor. I'm
> thinking that while Ekko executes these actions, the Nova service has no
> visibility whatsoever into this. I do not think it is a good idea to execute
> actions directly on the hypervisor without interacting with the Nova API.
>
This is not an ideal situation, no. Nova should be aware of what we are
doing and when we are doing it. We are aware of this and plan on proposing
ways to be better integrated with Nova (and Cinder for that matter).

> -  In your assumptions, you said that Nova snapshot creation
> generates VM downtime. I don't think the assumption is correct, at least
> in Kilo, Liberty and Mitaka. The only downtime you may have related to the
> snapshot is when you merge the snapshot back into the original root image,
> and this is not our case here.
>
For Kilo and up, Nova does leverage the live snapshot, allowing for a
snapshot without placing the instance into a Paused state. That is correct.
Some of the same underlying functions are used for the IncrementalBackup
feature of QEMU as well. So you are right: it's not fair to say that
snapshots always cause downtime, as that hasn't been the case since Kilo.

> -  How would the restore work? If you do a restore of the VM and
> the record of that VM instance is not available in the Nova DB (i.e.
> restoring a VM on a newly installed OpenStack cloud, or in another
> region, or after a VM has been destroyed), what would happen? How do you
> manage the consistency of the data between the Nova DB and the VM status?
>
Restore has two pieces. The one we are definitely implementing is restoring
a backup image to a glance image. At that point anyone could start an
instance off of it. Additionally, it _could_ be restored directly back to
the instance in question by powering off the instance, restoring the
data directly back, and then starting the instance again.

> -  If you execute a backup of the VM image file without executing
> a backup of the related VM metadata information (in as short a time frame
> as possible), there are chances the backup can be inconsistent.
>
I don't see how VM metadata information has anything to do with a proper
backup of the data in this case.

> - How would the restore happen if, at that moment, Keystone or Swift is not
> available?
>
How does anything happen if Keystone isn't available? User can't auth so
nothing happens.

> -  Does the backup that Ekko executes generate a bootable image?
> If not, the image is not usable and the restore process will take longer to
> execute the steps to make the image bootable.
>
The backup Ekko would take would be a bit-for-bit copy of what is on the
underlying disk. If that is bootable, then it is bootable.

> -   I do not see any advantage in Ekko over using Nova API to
> snapshot -> Generate an image -> upload to Glance -> upload to Swift.
>
Snapshots are not backups. This is a very important point. Additionally the
process you describe is extremely expensive in terms of time, bandwidth,
and IO. For the sake of example, if you have 1TB of data on an instance and you
snapshot it you must upload 1TB to Glance/Swift. The following day you do
another snapshot and you must upload another 1TB, likely the majority of
the data is exactly the same. With Ekko (or any true backup) you should
only be uploading what has changed since the last backup.
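
As a toy illustration of that point (not Ekko's actual implementation; the block
size and hashing scheme are arbitrary assumptions), an incremental pass only has
to ship the blocks whose content differs from the previous backup:

# Toy illustration of block-based incremental backup -- not Ekko's real code.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; arbitrary choice for the example


def block_hashes(path):
    """Return a list of per-block digests for a disk image file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def changed_blocks(previous_hashes, current_path):
    """Yield (index, data) for blocks that differ from the previous backup."""
    with open(current_path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                yield index, block          # only these blocks get uploaded
            index += 1

A real CBT-based backup would get the changed-block list from the hypervisor
rather than re-reading the whole disk; the point is only that the upload is
proportional to the change set, not to the full disk size.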

> -  The Ekko approach is limited to Nova and KVM/QEMU, with a
> qemu-agent running on the VM. I think the scope is probably a bit limited.
> This is more a feature than a tool in itself, but the problem, I think, is
> already being solved more efficiently.
>
It is not limited to Nova, libvirt, or QEMU. It also does not _require_ the
qemu-agent. It can be used independently of OpenStack (though that is not the
end goal) and even with VMware or Hyper-V, since they both support CBT, which
is the main component we leverage.

> -  By executing all the actions related to backup (i.e.
> compression, incremental computation, upload, I/O and segmented upload to
> Swift), Ekko is adding a significant load to the compute nodes. All the
> work is done on the hypervisor and not taken into account by ceilometer (or
> similar), so it is, for example, not billable. I do not think this is a good
> idea, as distributing the load over multiple components helps OpenStack to
> scale, and by leveraging the existing API you integrate better with existing
> tools.
>
The backup-agent that is proposed to exist on the compute node does not
necessarily perform the upload from the compute node, since the data may
exist on a backend like Ceph or NFS. But in the case 

[openstack-dev] [app-catalog] App Catalog IRC Meeting CANCELLED this week

2016-01-27 Thread Christopher Aedo
Due to scheduling conflicts and a very light agenda, there will be no
Community App Catalog IRC meeting this week.

Our next meeting is scheduled for February 4th, the agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/app-catalog

One thing on the agenda for the 2/4/2016 meeting is the topic of
implementing an API for the App Catalog, and whether we'll have a
strong commitment of the necessary resources to continue in the
direction agreed upon during the Tokyo summit.  If you have anything
to say on that subject please be sure to join us NEXT week!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers

2016-01-27 Thread Neil Jerram
Is there any part of the cited wiki page that is still relevant? I've just been 
asked whether networking-calico (a backend project that I work on) should be 
listed there, and I think the answer is no because that page is out of date now.

If that's right, could/should it be deleted?

Thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers

2016-01-27 Thread Armando M.
I'll kill it, as this should be superseded by the in-tree devref version.

Thanks for pointing my attention to it.

On 27 January 2016 at 23:40, Neil Jerram  wrote:

> Is there any part of the cited wiki page that is still relevant? I've just
> been asked whether networking-calico (a backend project that I work on)
> should be listed there, and I think the answer is no because that page is
> out of date now.
>
> If that's right, could/should it be deleted?
>
> Thanks,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-01-27 Thread Andrew Woodward
Simon, you should use the deployment_tasks.yaml interface (which will
likely eventually move to '*/tasks.yaml' to mimic library). This uses the
same task system as granular deploy. You can set task ordering between
known task and role names; in the case that they are not registered, they
will simply be ignored.

The result will be that the engine will parse out the precise location for
tasks to run in the graph (you can run outside of the post-deployment with
them). In most cases, you will not need to specify precise ordering between
the plugins. I know there is the odd case where two components need to
modify the same parts; there are a couple of ways we can work this out, but
it will ultimately come down to a case-by-case basis until we solidify the
config-db workflow.
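
As a loose illustration of the "unknown constraints are simply ignored"
behaviour (generic Python, not Fuel's actual engine; the task names are
borrowed from Simon's example):

# Sketch of graph-based task ordering where constraints that reference
# unregistered tasks are simply dropped -- not Fuel's real deployment engine.
from graphlib import TopologicalSorter  # Python 3.9+

registered = {"task_X", "task_Z"}            # plugin B not installed
constraints = [                              # (task, must_run_after)
    ("task_Y", "task_X"),                    # ignored: task_Y unknown
    ("task_Z", "task_Y"),                    # ignored: task_Y unknown
    ("task_Z", "task_X"),                    # kept
]

graph = TopologicalSorter()
for task in registered:
    graph.add(task)
for task, after in constraints:
    if task in registered and after in registered:
        graph.add(task, after)

print(list(graph.static_order()))            # e.g. ['task_X', 'task_Z']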

On Wed, Jan 27, 2016 at 5:45 AM Simon Pasquier 
wrote:

> Hi,
>
> I see that tasks.yaml is going to be deprecated in the future MOS versions
> [1]. I've got one question regarding the ordering of tasks between
> different plugins.
> With tasks.yaml, it was possible to coordinate the execution of tasks
> between plugins without prior knowledge of which plugins were installed [2].
> For example, let's say we have 2 plugins: A and B. The plugins may or may
> not be installed in the same environment and the tasks execution should be:
> 1. Run task X for plugin A (if installed).
> 2. Run task Y for plugin B (if installed).
> 3. Run task Z for plugin A (if installed).
>
> Right now, we can set task priorities like:
>
> # tasks.yaml for plugin A
> - role: ['*']
>   stage: post_deployment/1000
>   type: puppet
>   parameters:
> puppet_manifest: puppet/manifests/task_X.pp
> puppet_modules: puppet/modules
>
> - role: ['*']
>   stage: post_deployment/3000
>   type: puppet
>   parameters:
> puppet_manifest: puppet/manifests/task_Z.pp
> puppet_modules: puppet/modules
>
> # tasks.yaml for plugin B
> - role: ['*']
>   stage: post_deployment/2000
>   type: puppet
>   parameters:
> puppet_manifest: puppet/manifests/task_Y.pp
> puppet_modules: puppet/modules
>
> How would it be handled without tasks.yaml?
>
> Regards,
> Simon
>
> [1] https://review.openstack.org/#/c/271417/
> [2] https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Tasks ordering between plugins

2016-01-27 Thread Alex Schultz
On Jan 27, 2016 4:58 PM, "Andrew Woodward"  wrote:
>
> Simon, you should use the deployment_tasks.yaml interface (which will
likely eventually move to '*/tasks.yaml' (to mimic library) This uses the
same task system as granular deploy. you can set task ordering between
known tasks and roles names, in the case that they are not registered they
will simply be ignored.
>
> The result will be that the engine will parse out the precise location
for tasks to run in the graph (you can run outside of the post-deployment
with them). In most cases, you will not need to specify precise ordering
between the plugins. I know there is the odd case that two components need
to modify the same parts, there are a couple of ways we can work this out,
but it ultimately will come down to a case-by case until we solidify the
config-db workflow

Kind of along this topic, I've actually run into difficulties when trying to
wedge a plugin's tasks at the absolute end of a deployment using
deployment_tasks.yaml. I was only able to get it working reliably by using
the tasks.yaml method. I feel that forcing plugin developers to know what
possible tasks could be executed for all cases puts too much on them, since
we don't really supply good ways to get the task lists. It seems like you
basically need to be a Fuel dev to understand it. Before we get rid of
tasks.yaml, can we provide a mechanism that plugin devs could leverage to
have tasks execute at specific points in the deploy process? Basically,
provide the stage concept from tasks.yaml with documented task anchors? For
my case it would have been nice to be able to pin a task to run after
post-deployment end.

-Alex

>
> On Wed, Jan 27, 2016 at 5:45 AM Simon Pasquier 
wrote:
>>
>> Hi,
>>
>> I see that tasks.yaml is going to be deprecated in the future MOS
versions [1]. I've got one question regarding the ordering of tasks between
different plugins.
>> With tasks.yaml, it was possible to coordinate the execution of tasks
between plugins without prior knowledge of which plugins were installed [2].
>> For example, lets say we have 2 plugins: A and B. The plugins may or may
not be installed in the same environment and the tasks execution should be:
>> 1. Run task X for plugin A (if installed).
>> 2. Run task Y for plugin B (if installed).
>> 3. Run task Z for plugin A (if installed).
>>
>> Right now, we can set task priorities like:
>>
>> # tasks.yaml for plugin A
>> - role: ['*']
>>   stage: post_deployment/1000
>>   type: puppet
>>   parameters:
>> puppet_manifest: puppet/manifests/task_X.pp
>> puppet_modules: puppet/modules
>>
>> - role: ['*']
>>   stage: post_deployment/3000
>>   type: puppet
>>   parameters:
>> puppet_manifest: puppet/manifests/task_Z.pp
>> puppet_modules: puppet/modules
>>
>> # tasks.yaml for plugin B
>> - role: ['*']
>>   stage: post_deployment/2000
>>   type: puppet
>>   parameters:
>> puppet_manifest: puppet/manifests/task_Y.pp
>> puppet_modules: puppet/modules
>>
>> How would it be handled without tasks.yaml?
>>
>> Regards,
>> Simon
>>
>> [1] https://review.openstack.org/#/c/271417/
>> [2] https://wiki.openstack.org/wiki/Fuel/Plugins#Plugins_deployment_order
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][barbican]TLS container could not be found

2016-01-27 Thread Jiahao Liang
Hi community,

I was going through
https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer
with devstack. I was stuck at a point when I tried to create a listener within a
loadbalancer with this command:

neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443
--protocol TERMINATED_HTTPS --name listener1
--default-tls-container=$(barbican secret container list | awk '/
tls_container / {print $2}')

But the command failed with output:

TLS container 
http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e34
could not be found


When I run:

barbican secret container list

I was able to see the corresponding container in the list and the status is
active.
(Sorry, the format is a little bit ugly.)
Container href: http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e34
  Name: tls_container | Created: 2016-01-28 04:58:42+00:00 | Status: ACTIVE | Type: certificate
  Secrets:
    private_key=http://192.168.100.149:9311/v1/secrets/1bbe33fc-ecd2-43e5-82ce-34007b9f6bfd
    certificate=http://192.168.100.149:9311/v1/secrets/6d0211c6-8515-4e55-b1cf-587324a79abe
  Consumers: None

Container href: http://192.168.100.149:9311/v1/containers/31045466-bf7b-426f-9ba8-135c260418ee
  Name: tls_container2 | Created: 2016-01-28 04:59:05+00:00 | Status: ACTIVE | Type: certificate
  Secrets:
    private_key=http://192.168.100.149:9311/v1/secrets/dba18cbc-9bfe-499e-931e-90574843ca10
    certificate=http://192.168.100.149:9311/v1/secrets/23e11441-d119-4b24-a288-9ddc963cb698
  Consumers: None


Also, if I did a GET from a RESTful client with the correct X-Auth-Token
to the URL:
http://192.168.100.149:9311/v1/containers/d8b25d56-4fc5-406d-8b2d-5a85de2a1e3,
I was able to receive the JSON information of the TLS container.
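
(For reference, the GET check described above amounts to roughly the following
sketch; the token value is a placeholder and the URL is the container href from
the listing.)

# Sketch of the manual check described above: fetch the container directly
# from barbican with a valid keystone token. Token value is a placeholder.
import requests

TOKEN = "<valid keystone token>"
CONTAINER_URL = ("http://192.168.100.149:9311/v1/containers/"
                 "d8b25d56-4fc5-406d-8b2d-5a85de2a1e34")

resp = requests.get(CONTAINER_URL, headers={"X-Auth-Token": TOKEN})
print(resp.status_code)       # 200 when the container is visible to this token
print(resp.json())            # JSON description of the TLS container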


Could anybody give some advice on how to fix this problem?

Thank you in advance!

Best,
Jiahao Liang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday January 28th at 9:00 UTC

2016-01-27 Thread GHANSHYAM MANN
Hello everyone,

This is a reminder that the weekly OpenStack QA team IRC meeting will be on
Thursday, Jan 28th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_January_28st_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
next meeting will be at:

04:00 EST

18:00 JST

18:30 ACST

11:00 CEST

04:00 CDT

02:00 PDT



Regards

Ghanshyam Mann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 5:20 PM, Sean M. Collins  wrote:
> On Wed, Jan 27, 2016 at 05:06:03PM EST, Assaf Muller wrote:
>> >> RDO systemd init script for the L3 agent will send a signal 15 when
>> >> 'systemctl restart neutron-l3-agent' is executed. I assume
>> >> Debian/Ubuntu do the same. It is imperative that agent restarts do not
>> >> cause data plane interruption. This has been the case for the L3 agent
>> >
>> > But wouldn't it really be wiser to use SIGHUP to communicate the intent
>> > to restart a process?
>>
>> Maybe. I just checked and on a Liberty based RDO installation, sending
>> SIGHUP to a L3 agent doesn't actually do anything. Specifically it
>> doesn't resync its routers (Which restarting it with signal 15 does).
>
> See, but there must be something that is starting the neutron l3 agent
> again, *after* sending it a SIGTERM (signal 15).

That's why I wrote 'restarting it with signal 15'.

> Then the l3 agent does
> a full resync since it's started back up, based on some state accounting
> done in what appears to be the plugin. Nothing about signal 15 actually
> does any restarting. It just terminates the process.

Yup. The point stands: there's a difference between signal 15 then start,
and a SIGHUP. Currently, Neutron agents don't resync after a SIGHUP
(and I wouldn't expect them to; I'd just expect a SIGHUP to reload
configuration). Restarting an agent shouldn't stop any agent-spawned
processes like radvd or keepalived, or perform any cleanups of its
resources (namespaces, etc.), just like you wouldn't want the OVS agent
to destroy bridges and ports, and you wouldn't want a restart of
nova-compute to interfere with its qemu-kvm processes.
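
The distinction is roughly the following pattern (a generic sketch, not the
actual neutron agent code): SIGHUP reloads configuration in place, SIGTERM only
stops the process, and a full resync happens on the next start.

# Generic sketch of the signal semantics discussed above -- not neutron's code.
# SIGHUP: reload configuration in place, keep child processes (radvd, etc.).
# SIGTERM: stop the agent; spawned processes and namespaces are left alone,
# and a full resync happens only when the agent is started again.
import signal
import time


class Agent(object):
    def __init__(self):
        self.running = True
        self.config = self.load_config()

    def load_config(self):
        # Placeholder for reading the agent's config file.
        return {"resync_interval": 30}

    def handle_sighup(self, signum, frame):
        # Reload config only; do not touch radvd/keepalived or namespaces.
        self.config = self.load_config()

    def handle_sigterm(self, signum, frame):
        # Exit the main loop; leave the data plane untouched.
        self.running = False

    def run(self):
        signal.signal(signal.SIGHUP, self.handle_sighup)
        signal.signal(signal.SIGTERM, self.handle_sigterm)
        while self.running:
            time.sleep(1)   # real agent work (RPC loop, periodic tasks) here


if __name__ == "__main__":
    Agent().run()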

>
>> 2016-01-27 20:45:35.075 14651 INFO neutron.agent.l3.agent [-] Agent has just 
>> been revived. Doing a full sync.
>
> https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L697
>
> https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L679
>
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] spec-lite process for tripleo

2016-01-27 Thread Jason Rist
On 01/27/2016 09:21 AM, Derek Higgins wrote:
> Hi All,
> 
> We briefly discussed feature tracking in this week's tripleo meeting. I
> would like to provide a way for downstream consumers (and ourselves) to
> track new features as they get implemented. The main things that came
> out of the discussion is that people liked the spec-lite process that
> the glance team are using.
> 
> I'm proposing we would start to use the same process, essentially small
> features that don't warrant a blueprint would instead have a wishlist
> bug opened against them and get marked with the spec-lite tag. This bug
> could then be referenced in the commit messages. For larger features
> blueprints can still be used. I think the process documented by
> glance[1] is a good model to follow so go read that and see what you think
> 
> The general feeling at the meeting was +1 to doing this[2] so I hope we
> can soon start enforcing it, assuming people are still happy to proceed?
> 
> thanks,
> Derek.
> 
> [1]
> http://docs.openstack.org/developer/glance/contributing/blueprints.html#glance-spec-lite
> 
> [2]
> http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-01-26-14.02.log.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I guess my only thought would be to make the bug/rfe fairly descriptive
so we don't have to go tracking down whoever reported it for more
details.  Maybe just some light rules about age and responsiveness so we
can quickly retire those bugs/rfes that people aren't really paying
attention to.

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Infrastructure Integration
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-27 Thread Brandon Logan
I could see it being interesting, but that would have to be something
vetted by other drivers and appliances because they may not support
that.

On Mon, 2016-01-25 at 21:37 +, Fox, Kevin M wrote:
> We are using a neutron v1 lb that has external to the cloud members in a lb 
> used by a particular tenant in production. It is working well. Hoping to do 
> the same thing once we get to Octavia+LBaaSv2.
> 
> Being able to tweak the routes of the load balancer would be an interesting 
> feature, though I don't think I'd ever need to. Maybe that should be an 
> extension? I'm guessing a lot of lb plugins won't be able to support it at 
> all.
> 
> Thanks,
> Kevin
> 
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Monday, January 25, 2016 1:03 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> Any additional thoughts and opinions people want to share on this.  I
> don't have a horse in this race as long as we don't make dangerous
> assumptions about what the user wants.  So I am fine with making
> subnet_id optional.
> 
> Michael, how strong would your opposition for this be?
> 
> Thanks,
> Brandon
> 
> On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> > Michael-- I think you're assuming that adding an external subnet ID
> > means that the load balancing service will route requests to out an
> > interface with a route to said external subnet. However, the model we
> > have is actually too simple to convey this information to the load
> > balancing service. This is because while we know the member's IP and a
> > subnet to which the load balancing service should connect to
> > theoretically talk to said IP, we don't have any kind of actual
> > routing information for the IP address (like, say a default route for
> > the subnet).
> >
> >
> > Consider this not far-fetched example: Suppose a tenant wants to add a
> > back-end member which is reachable only over a VPN, the gateway for
> > which lives on a tenant internal subnet. If we had a more feature-rich
> > model to work with here, the tenant could specify the member IP, the
> > subnet containing the VPN gateway and the gateway's IP address. In
> > theory the load balancing service could add local routing rules to
> > make sure that communication to that member happens on the tenant
> > subnet and gets routed to the VPN gateway.
> >
> >
> > If we want to support this use case, then we'd probably need to add an
> > optional gateway IP parameter to the member object. (And I'd still be
> > in favor of assuming the subnet_id on the member is optional, and that
> > default routing should be used if not specified.)
> >
> >
> > Let me see if I can break down several use cases we could support with
> > this model. Let's assume the member model contains (among other
> > things) the following attributes:
> >
> >
> > ip_address (member IP, required)
> > subnet_id (member or gateway subnet, optional)
> > gateway_ip (VPN or other layer-3 gateway that should be used to access
> > the member_ip. optional)
> >
> >
> > Expected behaviors:
> >
> >
> > Scenario 1:
> > ip_address specified, subnet_id and gateway_ip are None:  Load
> > balancing service assumes member IP address is reachable through
> > default routing. Appropriate for members that are not part of the
> > local cloud that are accessible from the internet.
> >
> >
> >
> > Scenario 2:
> > ip_address and subnet_id specified, gateway_ip is None: Load balancing
> > service assumes it needs an interface on subnet_id to talk directly to
> > the member IP address. Appropriate for members that live on tenant
> > networks. member_ip should exist within the subnet specified by
> > subnet_id. This is the only scenario supported under the current model
> > if we make subnet_id a required field and don't add a gateway_ip.
> >
> >
> > Scenario 3:
> > ip_address, subnet_id and gateway_ip are all specified:  Load
> > balancing service assumes it needs an interface on subnet_id to talk
> > to the gateway_ip. Load balancing service should add local routing
> > rule (ie. to the host and / or local network namespace context of the
> > load balancing service itself, not necessarily to Neutron or anything)
> > to route any packets destined for member_ip to the gateway_ip.
> > gateway_ip should exist within the subnet specified by subnet_id.
> > Appropriate for members that are on the other side of a VPN links, or
> > reachable via other local routing within a tenant network or local
> > cloud.
> >
> >
> > Scenario 4:
> > ip_address and gateway_ip are specified, subnet_id is None: This is an
> > invalid configuration.
> >
> >
> > So what do y'all think of this? Am I smoking crack with how this
> > should work?
> >
> >
> > For what it's worth, I think the "member is on the other side of a
> > VPN" scenario is not one our customers are champing at the bit to
> > have, so 

[openstack-dev] [neutron][dvr]

2016-01-27 Thread Sudhakar Gariganti
Hi all,

One basic question related to the DVR topic *dvr_update*. In the OVS Neutron
agent, I see that the *dvr_update* topic is being added to the consumer
list irrespective of whether DVR is enabled or not. Because of this, even though
I have disabled DVR in my environment, I still see the agent subscribe and
listen on the dvr_update topic.

Is there any reason for enabling the DVR topic by default, unlike l2_pop,
which makes the agent subscribe to the l2_pop topic only when l2_pop is
enabled?

Thanks,
Sudhakar.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race conditions)?

2016-01-27 Thread Cheng, Yingxin
Thank you Nikola! I'm very interested in this.


According to my current understanding, a complete functional test for the nova 
scheduler should include nova-api, the scheduler service, the part of the conductor 
service which forwards scheduler decisions to compute services, and the parts of the 
compute service including claims, claim aborts and compute node resource 
consumption inside the resource tracker.

The inputs of this series of tests are the initial resource view, existing 
resource consumption from fake instances, and the incoming schedule requests with 
flavors.

The outputs are the statistics of elapsed time in every scheduling phase, the 
statistics of the requests' lifecycles, and the sanity of the final resource view 
with booted fake instances.

Extra features should also be taken into consideration, including, but not 
limited to, image properties, host aggregates, availability zones, compute 
capabilities, server groups, compute service status, forced hosts, metrics, etc.
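
As a rough sketch of what such an isolated test could look like (toy code with
hypothetical fake classes, not the existing nova functional test
infrastructure):

# Toy sketch of an isolated scheduler test: fake hosts, fake requests, and a
# sanity check on the final resource view. All classes here are hypothetical.
class FakeHost(object):
    def __init__(self, name, vcpus, ram_mb):
        self.name, self.free_vcpus, self.free_ram_mb = name, vcpus, ram_mb

    def claim(self, flavor):
        self.free_vcpus -= flavor["vcpus"]
        self.free_ram_mb -= flavor["ram_mb"]


def schedule(hosts, flavor):
    """Pick the first host with enough free resources (stand-in for filters)."""
    for host in hosts:
        if host.free_vcpus >= flavor["vcpus"] and host.free_ram_mb >= flavor["ram_mb"]:
            host.claim(flavor)      # stand-in for the compute-side claim
            return host
    raise RuntimeError("NoValidHost")


def test_no_overcommit():
    hosts = [FakeHost("node-%d" % i, vcpus=8, ram_mb=16384) for i in range(10)]
    flavor = {"vcpus": 2, "ram_mb": 4096}
    for _ in range(40):             # 40 requests exactly fill the 10 fake hosts
        schedule(hosts, flavor)
    # Sanity of the final resource view: nothing went negative.
    assert all(h.free_vcpus >= 0 and h.free_ram_mb >= 0 for h in hosts)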

Please correct me if anything is wrong; I also want to know the existing 
decisions/ideas from the mid-cycle sprint.


I'll start by investigating the existing functional test infrastructure; this 
could be much quicker if anyone (maybe Sean Dague) can provide help with an 
introduction to the existing features. I've also seen others showing interest in 
this area -- Chris Dent (cdent). It would be great to work with other 
experienced contributors in the community.



Regards,
-Yingxin


> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: Wednesday, January 27, 2016 9:58 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Cheng, Yingxin
> Subject: Re: [openstack-dev] [nova] Better tests for nova scheduler(esp. race
> conditions)?
> 
> Top posting since better scheduler testing just got brought up during the
> midcycle meetup, so it might be useful to re-kindle this thread.
> 
> Sean (Dague) brought up that there is some infrastructure already that could
> help us do what you propose bellow, but work may be needed to make it viable
> for proper reasource accounting tests.
> 
> Yingxin - in case you are still interested in doing some of this stuff, we can
> discuss here or on IRC.
> 
> Thanks,
> Nikola
> 
> On 12/15/2015 03:33 AM, Cheng, Yingxin wrote:
> >
> >> -Original Message-
> >> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> >> Sent: Monday, December 14, 2015 11:11 PM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [nova] Better tests for nova
> >> scheduler(esp. race conditions)?
> >>
> >> On 12/14/2015 08:20 AM, Cheng, Yingxin wrote:
> >>> Hi All,
> >>>
> >>>
> >>>
> >>> When I was looking at bugs related to race conditions of scheduler
> >>> [1-3], it feels like nova scheduler lacks sanity checks of schedule
> >>> decisions according to different situations. We cannot even make
> >>> sure that some fixes successfully mitigate race conditions to an
> >>> acceptable scale. For example, there is no easy way to test whether
> >>> server-group race conditions still exists after a fix for bug[1], or
> >>> to make sure that after scheduling there will be no violations of
> >>> allocation ratios reported by bug[2], or to test that the retry rate
> >>> is acceptable in various corner cases proposed by bug[3]. And there
> >>> will be much more in this list.
> >>>
> >>>
> >>>
> >>> So I'm asking whether there is a plan to add those tests in the
> >>> future, or is there a design exist to simplify writing and executing
> >>> those kinds of tests? I'm thinking of using fake databases and fake
> >>> interfaces to isolate the entire scheduler service, so that we can
> >>> easily build up a disposable environment with all kinds of fake
> >>> resources and fake compute nodes to test scheduler behaviors. It is
> >>> even a good way to test whether scheduler is capable to scale to 10k
> >>> nodes without setting up 10k real compute nodes.
> >>>
> >>
> >> This would be a useful effort - however do not assume that this is
> >> going to be an easy task. Even in the paragraph above, you fail to
> >> take into account that in order to test the scheduling you also need
> >> to run all compute services since claims work like a kind of 2 phase
> >> commit where a scheduling decision gets checked on the destination
> >> compute host (through Claims logic), which involves locking in each compute
> process.
> >>
> >
> > Yes, the final goal is to test the entire scheduling process including 2PC.
> > As scheduler is still in the process to be decoupled, some parts such
> > as RT and retry mechanism are highly coupled with nova, thus IMO it is
> > not a good idea to include them in this stage. Thus I'll try to
> > isolate filter-scheduler as the first step, hope to be supported by 
> > community.
> >
> >
> >>>
> >>>
> >>> I'm also interested in the bp[4] to reduce scheduler race conditions
> >>> in green-thread level. I think it is a good start point in solving

Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Steven Dake (stdake)


On 1/27/16, 12:06 PM, "Flavio Percoco"  wrote:

>On 27/01/16 12:16 -0500, Emilien Macchi wrote:
>>
>>
>>On 01/27/2016 10:51 AM, Jay Pipes wrote:
>>> On 01/27/2016 12:53 PM, gordon chung wrote:
> It makes for a crappy user experience. Crappier than the crappy user
> experience that OpenStack API users already have because we have
>done a
> crappy job shepherding projects in order to make sure there isn't
> overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
> directly at you).
 ... yes, Ceilometer can easily handle your events and meters and store
 them in either Elasticsearch or Gnocchi for visualisations. you just
 need to create a new definition in our mapping files[1][2]. you will
 definitely want to coordinate the naming of your messages. ie.
 event_type == backup. and event_type ==
 backup..
>>>
>>> This isn't at all what I was referring to, actually. I was referring to
>>> my belief that we (the API WG, the TC, whatever...) have failed to
>>> properly prevent almost complete and total overlap of the Ceilometer
>>>[1]
>>> and Monasca [2] REST APIs.
>>>
>>> They are virtually identical in purpose, but in frustrating
>>> slightly-inconsistent ways. and this means that users of the "OpenStack
>>> APIs" have absolutely no idea what the "OpenStack Telemetry API"
>>>really is.
>>>
>>> Both APIs have /alarms as a top-level resource endpoint. One of them
>>> refers to the alarm notification with /alarms, while the other refers
>>>to
>>> the alarm definition with /alarms.
>>>
>>> One API has /meters as a top-level resource endpoint. The other uses
>>> /metrics to mean the exact same thing.
>>>
>>> One API has /samples as a top-level resource endpoint. The other uses
>>> /metrics/measurements to mean the exact same thing.
>>>
>>> One API returns a list JSON object for list results. The other returns
>>>a
>>> dict JSON object with a "links" key and an "elements" key.
>>>
>>> And the list goes on... all producing a horrible non-unified,
>>> overly-complicated and redundant experience for our API users.
>>>
>>
>>I agree with you here Jay, Monasca is a great example of failure in
>>having consistency across OpenStack projects.
>>It's a different topic but maybe a retrospective of what happened could
>>help our community to not reproduce the same mistakes again.
>>
>>Please do not repeat this failure for other projects.
>>Do not duplicate efforts: if Ekko has a similar mission statement, maybe
>>we should avoid creating a new project and contribute to Freezer?
>>(I'm probably missing some technical bits so tell me if I'm wrong)
>
>FWIW, the current governance model does not prevent competition. That's
>not to
>be understood as we encourage it, but rather that there could be services
>with
>some level of overlap that are still worth being separate.
>
>What Jay is referring to is that regardless the projects do similar
>things, the
>same or totally different things, we should strive to have different
>APIs. The
>API shouldn't overlap in terms of endpoints and the way they are exposed.
>
>With all that said, I'd like to encourage collaboration over competition
>and I'm
>sure both teams will find a way to make this work.
>
>Cheers,
>Flavio
>
>
>-- 
>@flaper87
>Flavio Percoco

Flavio,

Of course you know this, but for the broader community that may not be
aware, the exact governance repo line item is as follows:
"

* Where it makes sense, the project cooperates with existing projects
rather than gratuitously competing or reinventing the wheel

"

Now that line could be interpreted in many ways, but when Kolla went
through incubation with at least 5 other competitors in the deployment
space, the issue of competition came up and the TC argued that competition
was a good thing on the outer-layer services (such as deployment) and a
bad thing for the inner layer services (such as nova).  The fact that APIs
may be duplicated in some way is not an optimal situation, but if that is
what the TC wishes, the governance repository for new projects should be
updated to indicate the guidelines.

Sam and the EKKO core team's work is creative, original, and completely
different from Freezer. The only thing they have in common is that they both
do backups. They fundamentally operate in different ways.

The TC set precedent in this first competition-induced review, which is
worth a good read for other projects thinking of competing with existing
projects, of which there are already plenty in OpenStack.

https://review.openstack.org/206789


My parsing of the Technical Committee precedent set there is that if the
project is technically different in significant ways, then it's A-OK for
big-tent inclusion down the road and definitely suitable for a new project
development effort.

Sam and the EKKO core team's work is creative, original, and completely
different from Freezer. The only thing they have in common is that they both
do backups. They fundamentally operate in 

Re: [openstack-dev] [keystone] changes to keystone-core!

2016-01-27 Thread Chen, Wei D
Hi,

 

Many thanks to everyone for your patient guidance and for mentoring me over the 
past years! It's your help that made me grow from a
newbie!

 

As a core project in OpenStack, keystone has grown to include several sub-projects. 
I strongly believe that we will continue to provide a
stable, great identity service, and I will do my best to make it a model of high 
quality and a great project to contribute to!

 

 

 

Best Regards,

Dave Chen

 

From: Steve Martinelli [mailto:steve...@ca.ibm.com] 
Sent: Thursday, January 28, 2016 7:13 AM
To: openstack-dev
Subject: [openstack-dev] [keystone] changes to keystone-core!

 

Hello everyone!

We've been talking about this for a long while, and I am very pleased to 
announce that at the midcycle we have made changes to
keystone-core. The project has grown and our review queue grows ever longer. 
Effective immediately, we'd like to welcome the
following new Guardians of the Gate to keystone-core:

+ Dave Chen (davechen)
+ Samuel de Medeiros Queiroz (samueldmq)

Happy code reviewing!

Steve Martinelli
OpenStack Keystone Project Team Lead



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-27 Thread Samuel Bercovici
If we take the approach of "download configuration for all v1 out of OpenStack, 
delete all v1 configuration and then, after LBaaS v1 is removed and LBaaS v2 is 
installed, use the data to recreate the items", this should be compatible with 
all drivers.
Not sure if such a procedure will be accepted though.


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Thursday, January 28, 2016 6:49 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
optional on member create?

I could see it being interesting, but that would have to be something vetted by 
other drivers and appliances because they may not support that.

On Mon, 2016-01-25 at 21:37 +, Fox, Kevin M wrote:
> We are using a neutron v1 lb that has external to the cloud members in a lb 
> used by a particular tenant in production. It is working well. Hoping to do 
> the same thing once we get to Octavia+LBaaSv2.
> 
> Being able to tweak the routes of the load balancer would be an interesting 
> feature, though I don't think I'd ever need to. Maybe that should be an 
> extension? I'm guessing a lot of lb plugins won't be able to support it at 
> all.
> 
> Thanks,
> Kevin
> 
> 
> From: Brandon Logan [brandon.lo...@rackspace.com]
> Sent: Monday, January 25, 2016 1:03 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
> optional on member create?
> 
> Any additional thoughts and opinions people want to share on this.  I 
> don't have a horse in this race as long as we don't make dangerous 
> assumptions about what the user wants.  So I am fine with making 
> subnet_id optional.
> 
> Michael, how strong would your opposition for this be?
> 
> Thanks,
> Brandon
> 
> On Tue, 2016-01-19 at 20:49 -0800, Stephen Balukoff wrote:
> > Michael-- I think you're assuming that adding an external subnet ID 
> > means that the load balancing service will route requests out an 
> > interface with a route to said external subnet. However, the model 
> > we have is actually too simple to convey this information to the 
> > load balancing service. This is because while we know the member's 
> > IP and a subnet to which the load balancing service should connect 
> > to theoretically talk to said IP, we don't have any kind of actual 
> > routing information for the IP address (like, say a default route 
> > for the subnet).
> >
> >
> > Consider this not far-fetched example: Suppose a tenant wants to add 
> > a back-end member which is reachable only over a VPN, the gateway 
> > for which lives on a tenant internal subnet. If we had a more 
> > feature-rich model to work with here, the tenant could specify the 
> > member IP, the subnet containing the VPN gateway and the gateway's 
> > IP address. In theory the load balancing service could add local 
> > routing rules to make sure that communication to that member happens 
> > on the tenant subnet and gets routed to the VPN gateway.
> >
> >
> > If we want to support this use case, then we'd probably need to add 
> > an optional gateway IP parameter to the member object. (And I'd 
> > still be in favor of assuming the subnet_id on the member is 
> > optional, and that default routing should be used if not specified.)
> >
> >
> > Let me see if I can break down several use cases we could support 
> > with this model. Let's assume the member model contains (among other
> > things) the following attributes:
> >
> >
> > ip_address (member IP, required)
> > subnet_id (member or gateway subnet, optional) gateway_ip (VPN or 
> > other layer-3 gateway that should be used to access the member_ip. 
> > optional)
> >
> >
> > Expected behaviors:
> >
> >
> > Scenario 1:
> > ip_address specified, subnet_id and gateway_ip are None:  Load 
> > balancing service assumes member IP address is reachable through 
> > default routing. Appropriate for members that are not part of the 
> > local cloud that are accessible from the internet.
> >
> >
> >
> > Scenario 2:
> > ip_address and subnet_id specified, gateway_ip is None: Load 
> > balancing service assumes it needs an interface on subnet_id to talk 
> > directly to the member IP address. Appropriate for members that live 
> > on tenant networks. member_ip should exist within the subnet 
> > specified by subnet_id. This is the only scenario supported under 
> > the current model if we make subnet_id a required field and don't add a 
> > gateway_ip.
> >
> >
> > Scenario 3:
> > ip_address, subnet_id and gateway_ip are all specified:  Load 
> > balancing service assumes it needs an interface on subnet_id to talk 
> > to the gateway_ip. Load balancing service should add local routing 
> > rule (ie. to the host and / or local network namespace context of 
> > the load balancing service itself, not necessarily to Neutron or 
> > anything) to route any packets destined for member_ip to the 
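
To make Stephen's three scenarios concrete, here is a small illustrative
Python sketch of how a load balancing driver might resolve the member fields;
the function name and return values are made up for illustration and are not
any driver's actual code:

def resolve_member_routing(ip_address, subnet_id=None, gateway_ip=None):
    """Illustrative only: map the member fields to a routing decision."""
    if subnet_id is None and gateway_ip is None:
        # Scenario 1: no subnet/gateway given, rely on default routing
        # (e.g. an internet-reachable member outside the cloud).
        return {'mode': 'default-routing', 'target': ip_address}
    if gateway_ip is None:
        # Scenario 2: plug an interface into subnet_id and reach the
        # member directly; ip_address is expected to live in that subnet.
        return {'mode': 'direct', 'subnet': subnet_id, 'target': ip_address}
    # Scenario 3: plug into subnet_id, then add a local routing rule (on
    # the LB host/namespace, not in Neutron) sending traffic for
    # ip_address via gateway_ip (e.g. a VPN gateway on a tenant subnet).
    return {'mode': 'via-gateway', 'subnet': subnet_id,
            'gateway': gateway_ip, 'target': ip_address}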

Re: [openstack-dev] [keystone] changes to keystone-core!

2016-01-27 Thread Iury Gregory
Congratulations Samuel and Dave =)

2016-01-27 20:13 GMT-03:00 Steve Martinelli :

> Hello everyone!
>
> We've been talking about this for a long while, and I am very pleased to
> announce that at the midcycle we have made changes to keystone-core. The
> project has grown and our review queue grows ever longer. Effective
> immediately, we'd like to welcome the following new Guardians of the Gate
> to keystone-core:
>
> + Dave Chen (davechen)
> + Samuel de Medeiros Queiroz (samueldmq)
>
> Happy code reviewing!
>
> Steve Martinelli
> OpenStack Keystone Project Team Lead
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

~


Att[]'s
Iury Gregory Melo Ferreira
Master student in Computer Science at UFCG
E-mail: iurygreg...@gmail.com
~
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Tzu-Mainn Chen
> Okay, so I initially thought we weren't making much progress on this
> discussion, but after some more thought and reading of the existing PoC,
> we're (maybe?) less far apart than I initially thought.
> 
> I think there are kind of three different designs being discussed.
> 
> 1) Rewrite a bunch of stuff into MistrYAML, with the idea that users
> could edit our workflows.  I think this is what I've been most
> strenuously objecting to, and for the most part my previous arguments
> pertain to this model.
> 
> 2) However, I think there's another thing going on/planned with at least
> some of the actions.  It sounds like some of our workflows are going to
> essentially be a single action that just passes the REST params into our
> Python code.  This sort of API as a Service would be more palatable to
> me, as it doesn't really split our implementation between YAML and
> Python (the YAML is pretty much only defining the REST API in this
> model), but it still gives us a quick and easy REST interface to the
> existing code.  It also keeps a certain amount of separation between
> Mistral and the TripleO code in case we decide some day that we need a
> proper API service and need to swap out the Mistral frontend for a
> different one.  This should also be the easiest to implement since it
> doesn't involve rewriting anything - we're mostly just moving the
> existing code into Mistral actions and creating some pretty trivial
> Mistral workflows.
> 
> 3) The thing I _want_ to see, which is a regular Python-based API
> service.  Again, you can kind of see my arguments around why I think we
> should do this elsewhere in the thread.  It's also worth noting that
> there is already an initial implementation of this proposed to
> tripleo-common, so it's not like we'd be starting from zero here either.
> 
> I'm still not crazy about 2, but if it lets me stop spending excessive
> amounts of time on this topic it might be worth it. :-)
> 

I'm kinda with Ben here; I'm strongly for 3), but 2) is okay-ish - with a
few caveats.  This thread has raised a lot of interesting points that, if
clarified, might help me feel more comfortable about 2), so I'm hoping
that Dan/Steve, you'd be willing to help me understand a few things:

a) One argument against the TripleO API is that the Tuskar API tied us
far too strongly to one way of doing things.  However, the Mistral
solution would create a set of workflows that essentially have the same
interface as the TripleO API, correct?  If so, would we support those
workflows the same as if they were an API, with extensive documentation
and guaranteeing stability from version to version of TripleO?

b) For simple features that we might expose through the Mistral API as
one-step workflows calling a single function (getting parameters for a
deployment, say): when we expose these features through the CLI, would we
also enforce the CLI going through Mistral to access those features rather
than calling that single function?

c) Is any consideration given to the fact that multiple OpenStack projects
have created their own REST APIs, to the point that it seems like more of
a known technology than using Mistral to front everything?  Or are we
going to argue that other projects should also switch to using Mistral?

d) If we proceed down the Mistral path and run into issues, is there a
point when we'd be willing to back away?


Mainn

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Playing Tricircle with Devstack

2016-01-27 Thread Yipei Niu
Hi Joe,

This error occurred when installing devstack on node2.

Best regards,
Yipei

On Wed, Jan 27, 2016 at 3:13 PM, Yipei Niu  wrote:

>
> -- Forwarded message --
> From: Yipei Niu 
> Date: Tue, Jan 26, 2016 at 8:42 PM
> Subject: Re: [tricircle] Playing Tricircle with Devstack
> To: openstack-dev@lists.openstack.org
>
>
> Hi Zhiyuan,
>
> Your solution works, but I encountered another error. When executing
> command
>
> "openstack volume type create --property volume_backend_name=lvmdriver-1
> lvmdriver-1",
>
> it returns
>
> "Unable to establish connection to
> http://192.168.56.101:19997/v2/c4f6ad92427b49f9a59810e88fbe4c11/types;.
>
>
> Then I execute the command with debug option, it returns
>
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py",
> line 113, in run
> ret_val = super(OpenStackShell, self).run(argv)
>   File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 255, in
> run
> result = self.run_subcommand(remainder)
>   File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 367, in
> run_subcommand
> self.prepare_to_run_command(cmd)
>   File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py",
> line 352, in prepare_to_run_command
> self.client_manager.auth_ref
>   File
> "/usr/local/lib/python2.7/dist-packages/openstackclient/common/clientmanager.py",
> line 189, in auth_ref
> self.setup_auth()
>   File
> "/usr/local/lib/python2.7/dist-packages/openstackclient/common/clientmanager.py",
> line 128, in setup_auth
> auth.check_valid_auth_options(self._cli_options, self.auth_plugin_name)
>   File
> "/usr/local/lib/python2.7/dist-packages/openstackclient/api/auth.py", line
> 172, in check_valid_auth_options
> raise exc.CommandError('Missing parameter(s): \n%s' % msg)
> CommandError: Missing parameter(s):
> Set a username with --os-username, OS_USERNAME, or auth.username
> Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
> Set a scope, such as a project or domain, set a project scope with
> --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope
> with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name
>
> These parameters have been set before, so why does the error happen?
>
> Best regards,
> Yipei
>
>
> On Tue, Jan 26, 2016 at 10:40 AM, Yipei Niu  wrote:
>
>> Hi Joe, Zhiyuan,
>>
>> I found that such an error may be caused by "export OS_REGION_NAME=Pod2".
>> When I source "userrc_early" without "export OS_REGION_NAME=Pod2" on node2,
>> the command "openstack project show admin -f value -c id" returns the
>> same result as it does on node1. How can I deal with it so that I can
>> proceed?
>>
>> Best regards,
>> Yipei
>>
>> On Mon, Jan 25, 2016 at 4:13 PM, Yipei Niu  wrote:
>>
>>> There weren't any problems when installing devstack on node1. However,
>>> when installing devstack on node2, I encountered an error and the trace is
>>> as follows:
>>>
>>> 2016-01-25 07:40:47.068 | + echo -e Starting Keystone
>>> 2016-01-25 07:40:47.069 | + '[' 192.168.56.101 == 192.168.56.102 ']'
>>> 2016-01-25 07:40:47.070 | + is_service_enabled tls-proxy
>>> 2016-01-25 07:40:47.091 | + return 1
>>> 2016-01-25 07:40:47.091 | + cat
>>> 2016-01-25 07:40:47.093 | + source /home/stack/devstack/userrc_early
>>> 2016-01-25 07:40:47.095 | ++ export OS_IDENTITY_API_VERSION=3
>>> 2016-01-25 07:40:47.095 | ++ OS_IDENTITY_API_VERSION=3
>>> 2016-01-25 07:40:47.095 | ++ export OS_AUTH_URL=
>>> http://192.168.56.101:35357
>>> 2016-01-25 07:40:47.095 | ++ OS_AUTH_URL=http://192.168.56.101:35357
>>> 2016-01-25 07:40:47.095 | ++ export OS_USERNAME=admin
>>> 2016-01-25 07:40:47.095 | ++ OS_USERNAME=admin
>>> 2016-01-25 07:40:47.095 | ++ export OS_USER_DOMAIN_ID=default
>>> 2016-01-25 07:40:47.095 | ++ OS_USER_DOMAIN_ID=default
>>> 2016-01-25 07:40:47.096 | ++ export OS_PASSWORD=nypnyp0316
>>> 2016-01-25 07:40:47.096 | ++ OS_PASSWORD=nypnyp0316
>>> 2016-01-25 07:40:47.096 | ++ export OS_PROJECT_NAME=admin
>>> 2016-01-25 07:40:47.097 | ++ OS_PROJECT_NAME=admin
>>> 2016-01-25 07:40:47.098 | ++ export OS_PROJECT_DOMAIN_ID=default
>>> 2016-01-25 07:40:47.099 | ++ OS_PROJECT_DOMAIN_ID=default
>>> 2016-01-25 07:40:47.100 | ++ export OS_REGION_NAME=Pod2
>>> 2016-01-25 07:40:47.101 | ++ OS_REGION_NAME=Pod2
>>> 2016-01-25 07:40:47.102 | + create_keystone_accounts
>>> 2016-01-25 07:40:47.105 | + local admin_tenant
>>> 2016-01-25 07:40:47.111 | ++ openstack project show admin -f value -c id
>>> 2016-01-25 07:40:48.408 | Could not find resource admin
>>> 2016-01-25 07:40:48.452 | + admin_tenant=
>>> 2016-01-25 07:40:48.453 | + exit_trap
>>> 2016-01-25 07:40:48.454 | + local r=1
>>> 2016-01-25 07:40:48.456 | ++ jobs -p
>>> 2016-01-25 07:40:48.461 | + jobs=
>>> 2016-01-25 07:40:48.464 | + [[ -n '' ]]
>>> 2016-01-25 07:40:48.464 | + kill_spinner
>>> 2016-01-25 07:40:48.464 | + '[' '!' -z '' ']'
>>> 

Re: [openstack-dev] [Fuel][Bugs] Time sync problem when testing.

2016-01-27 Thread Stanislaw Bogatkin
Yes, I have created a custom ISO with debug output. It didn't help, so
another one with strace was created.
On Jan 27, 2016 00:56, "Alex Schultz"  wrote:

> On Tue, Jan 26, 2016 at 2:16 PM, Stanislaw Bogatkin
>  wrote:
> > When there is too high strata, ntpdate can understand this and always
> write
> > this into its log. In our case there are just no log - ntpdate send first
> > packet, get an answer - that's all. So, fudging won't save us, as I
> think.
> > Also, it's a really bad approach to fudge a server which doesn't have a
> real
> > clock onboard.
>
> Do you have a debug output of the ntpdate somewhere? I'm not finding
> it in the bugs or in some of the snapshots for the failures. I did
> find one snapshot with the -v change that didn't have any response
> information so maybe it's the other problem where there is some
> network connectivity isn't working correctly or the responses are
> getting dropped somewhere?
>
> -Alex
>
> >
> > On Tue, Jan 26, 2016 at 10:41 PM, Alex Schultz 
> > wrote:
> >>
> >> On Tue, Jan 26, 2016 at 11:42 AM, Stanislaw Bogatkin
> >>  wrote:
> >> > Hi guys,
> >> >
> >> > for some time we have a bug [0] with ntpdate. It doesn't reproduced
> 100%
> >> > of
> >> > time, but breaks our BVT and swarm tests. There is no exact point
> where
> >> > problem root located. To better understand this, some verbosity to
> >> > ntpdate
> >> > output was added but in logs we can see only that packet exchange
> >> > between
> >> > ntpdate and server was started and was never completed.
> >> >
> >>
> >> So when I've hit this in my local environments there is usually one or
> >> two possible causes for this. 1) lack of network connectivity so ntp
> >> server never responds or 2) the stratum is too high.  My assumption is
> >> that we're running into #2 because of our revert-resume in testing.
> >> When we resume, the ntp server on the master may take a while to
> >> become stable. This sync in the deployment uses the fuel master for
> >> synchronization so if the stratum is too high, it will fail with this
> >> lovely useless error.  My assumption on what is happening is that
> >> because we aren't using a set of internal ntp servers but rather
> >> relying on the standard ntp.org pools.  So when the master is being
> >> resumed it's struggling to find a good enough set of servers so it
> >> takes a while to sync. This then causes these deployment tasks to fail
> >> because the master has not yet stabilized (might also be geolocation
> >> related).  We could either address this by fudging the stratum on the
> >> master server in the configs or possibly introducing our own more
> >> stable local ntp servers. I have a feeling fudging the stratum might
> >> be better when we only use the master in our ntp configuration.
> >>
> >> > As this bug is blocker, I propose to merge [1] to better understanding
> >> > what's going on. I created custom ISO with this patchset and tried to
> >> > run
> >> > about 10 BVT tests on this ISO. Absolutely with no luck. So, if we
> will
> >> > merge this, we would catch the problem much faster and understand root
> >> > cause.
> >> >
> >>
> >> I think we should merge the increased logging patch anyway because
> >> it'll be useful in troubleshooting but we also might want to look into
> >> getting an ntp peers list added into the snapshot.
> >>
> >> > I appreciate your answers, folks.
> >> >
> >> >
> >> > [0] https://bugs.launchpad.net/fuel/+bug/1533082
> >> > [1] https://review.openstack.org/#/c/271219/
> >> > --
> >> > with best regards,
> >> > Stan.
> >> >
> >>
> >> Thanks,
> >> -Alex
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> > with best regards,
> > Stan.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI CI is down

2016-01-27 Thread He, Yongli
Fixes for this bug and another related blocker bug, 
https://bugs.launchpad.net/nova/+bug/1536509, have been merged recently; the 
PCI CI is now back online.

Regards
Yongli He

-Original Message-
From: Nikola Đipanov [mailto:ndipa...@redhat.com] 
Sent: Wednesday, January 20, 2016 10:41 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [Nova] PCI CI is down

Hey Nova,

It seems a bug [1] sneaked in that made the PCI CI jobs fail 100% of the 
time, so they got turned off.

The fix for the bug should be making its way through the queue soon, but it was 
hinted on the review that there may be further problems. I'd like to help fix 
these issues ASAP as the regression seems fairly fresh, but debugging is hard 
since the CI is offline (instead of just non-voting), so I can't really access 
any logs.

It'd be great if someone from the team helping out with the Intel PCI CI would 
bring it back online as a non-voting job so that we can have feedback which 
will surely help us fix it more quickly.

Cheers,
Nikola

[1] https://bugs.launchpad.net/nova/+bug/1535367

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Does Kuryr support multi-tenant

2016-01-27 Thread Liping Mao (limao)
Hi Vikas,

> >The question is what you mean by multi-tenancy, if you mean that different 
> >tenants each control their own bare-metal

> >server then Kuryr already support this. (by tenant credential configuration)
>
>I understand kuryr can configure with tenant credential, but we still need 
> neutron-openvswitch-agent on
> the bare-metal server, it need admin account…


> Vikas-- If kuryr is configured with admin credentials same credentials will 
> be passed to neutron client APIs and thus eventually to openvswitch agent.
> Can you please elaborate "need admin account"?

Let me try to make myself clear:
AFAIK, in the case where Docker runs on a bare metal server, we need to install 
kuryr and neutron-openvswitch-agent on the bare metal server.
We can configure a tenant account in this kuryr, and I think all the neutron 
resources created on this server will then belong to this tenant (not the admin 
tenant).
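
For illustration, a minimal sketch of such a tenant-scoped Neutron client,
assuming the classic python-neutronclient constructor keywords (all values
below are placeholders, not a real deployment):

from neutronclient.v2_0 import client

# A tenant-scoped Neutron client: resources created through it belong to
# this tenant, not to the admin tenant. All values are placeholders.
neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://keystone.example.com:5000/v2.0')

# For example, a network created for containers would be owned by 'demo':
# net = neutron.create_network({'network': {'name': 'kuryr-demo-net'}})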
But for neutron-openvswitch-agent, we still need to configure an admin account 
in keystone_authtoken:

[keystone_authtoken]
# auth_host = 127.0.0.1
# auth_port = 35357
# auth_protocol = http
# admin_tenant_name = %SERVICE_TENANT_NAME%
# admin_user = %SERVICE_USER%
# admin_password = %SERVICE_PASSWORD%

And since the tenant can log in to the bare metal server directly, it is not 
good to configure this kind of credential on that server.

Thanks.


Regards,
Liping Mao

From: Vikas Choudhary
Reply-To: OpenStack List
Date: Wednesday, January 27, 2016, 10:57 AM
To: OpenStack List
Subject: Re: [openstack-dev] [kuryr] Does Kuryr support multi-tenant


On 26 Jan 2016 13:30, "Liping Mao (limao)" 
> wrote:
>
> Hi Gal,
>
> Thanks for your answer.
>
> >The question is what you mean by multi-tenancy, if you mean that different 
> >tenants each control their own bare-metal
> >server then Kuryr already support this. (by tenant credential configuration)
>
>I understand kuryr can configure with tenant credential, but we still need 
> neutron-openvswitch-agent on
> the bare-metal server, it need admin account…


Vikas-- If kuryr is configured with admin credentials same credentials will be 
passed to neutron client APIs and thus eventually to openvswitch agent.
Can you please elaborate "need admin account"?

Thanks
Vikas

> Thanks.
>
> Regards,
> Liping Mao
>
> From: Gal Sagie >
> Reply-To: OpenStack List 
> >
> Date: Tuesday, January 26, 2016, 12:47 PM
>
> To: OpenStack List 
> >
> Subject: Re: [openstack-dev] [kuryr] Does Kuryr support multi-tenant
>
> Hi Liping Mao,
>
> The question is what you mean by multi-tenancy, if you mean that different 
> tenants each control their own bare-metal
> server then Kuryr already support this. (by tenant credential configuration)
>
> If what i think you mean, and thats running multi tenants on the same 
> bare-metal then the problem
> here is that Docker and Kubernetes doesnt support something like that either 
> (mostly for security reasons) and
> the networking is just part of it (Which is what Kuryr focus on).
> For this, you usually pick with what Magnum offer and thats running 
> containers inside tenant VMs.
>
> However, there are some interesting technologies and open source projects 
> which enable
> something like that and we are evaluating them, its definitely a long term 
> goal for us.
>
>
>
> On Tue, Jan 26, 2016 at 5:06 AM, Liping Mao (limao) 
> > wrote:
>>
>> Thanks Mohammad for your clear explanation.
>> Do we have any way or roadmap or idea to support kuryr in multi-tenant in 
>> bare metal servers now?
>>
>> Thanks.
>>
>> Regards,
>> Liping Mao
>>
>>
>> From: Mohammad Banikazemi >
>> Reply-To: OpenStack List 
>> >
>> Date: Tuesday, January 26, 2016, 2:35 AM
>> To: OpenStack List 
>> >
>> Subject: Re: [openstack-dev] [kuryr] Does Kuryr support multi-tenant
>>
>> Considering that the underlying container technology is not multi-tenant (as 
>> of now), your observation is correct in that all neutron resources are made 
>> for a single tenant. Until Docker supports multi tenancy, we can possibly 
>> use network options and/or wrappers for docker/swarm clients to achieve some 
>> kind of multi tenancy support. Having said that, I should add that as of now 
>> we do not have such a feature in Kuryr.
>>
>> Best,
>>
>> Mohammad
>>
>>
>> "Liping Mao (limao)" ---01/25/2016 06:39:44 AM---Hi Kuryr guys, I'm a new 
>> bee in kuryr, and using devstack to try kuryr 

Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Dan Prince
On Wed, 2016-01-27 at 09:36 -0500, Dan Prince wrote:
> On Wed, 2016-01-27 at 14:32 +0100, Jiri Tomasek wrote:
> > On 01/26/2016 09:05 PM, Ben Nemec wrote:
> > > On 01/25/2016 04:36 PM, Dan Prince wrote:
> > > > On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:
> > > > > On 01/22/2016 06:19 PM, Dan Prince wrote:
> > > > > > On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:
> > > > > > > So I haven't weighed in on this yet, in part because I
> > > > > > > was
> > > > > > > on
> > > > > > > vacation
> > > > > > > when it was first proposed and missed a lot of the
> > > > > > > initial
> > > > > > > discussion,
> > > > > > > and also because I wanted to take some time to order my
> > > > > > > thoughts
> > > > > > > on
> > > > > > > it.
> > > > > > >   Also because my initial reaction...was not conducive to
> > > > > > > calm and
> > > > > > > rational discussion. ;-)
> > > > > > > 
> > > > > > > The tldr is that I don't like it.  To explain why, I'm
> > > > > > > going to
> > > > > > > make
> > > > > > > a
> > > > > > > list (everyone loves lists, right? Top $NUMBER reasons we
> > > > > > > should
> > > > > > > stop
> > > > > > > expecting other people to write our API for us):
> > > > > > > 
> > > > > > > 1) We've been down this road before.  Except last time it
> > > > > > > was
> > > > > > > with
> > > > > > > Heat.
> > > > > > >   I'm being somewhat tongue-in-cheek here, but expecting
> > > > > > > a
> > > > > > > general
> > > > > > > service to provide us a user-friendly API for our
> > > > > > > specific
> > > > > > > use
> > > > > > > case
> > > > > > > just
> > > > > > > doesn't make sense to me.
> > > > > > We've been down this road with Heat yes. But we are
> > > > > > currently
> > > > > > using
> > > > > > Heat for some things that we arguable should be (a
> > > > > > workflows
> > > > > > tool
> > > > > > might
> > > > > > help offload some stuff out of Heat). Also we haven't
> > > > > > implemented
> > > > > > custom Heat resources for TripleO either. There are mixed
> > > > > > opinions
> > > > > > on
> > > > > > this but plugging in your code to a generic API is quite
> > > > > > nice
> > > > > > sometimes.
> > > > > > 
> > > > > > That is the beauty of Mistral I think. Unlike Heat it
> > > > > > actually
> > > > > > encourages you to customize it with custom Python actions.
> > > > > > Anything
> > > > > > we
> > > > > > want in tripleo-common can become our own Mistral action
> > > > > > (these get
> > > > > > registered with stevedore entry points so we'd own the
> > > > > > code)
> > > > > > and
> > > > > > the
> > > > > > YAML workflows just tie them together via tasks.
> > > > > > 
> > > > > > We don't have to go off and build our own proxy deployment
> > > > > > workflow
> > > > > > API. The structure to do just about anything we need
> > > > > > already
> > > > > > exists
> > > > > > so
> > > > > > why not go and use it?
> > > > > > 
> > > > > > > 2) The TripleO API is not a workflow API.  I also largely
> > > > > > > missed
> > > > > > > this
> > > > > > > discussion, but the TripleO API is a _Deployment_
> > > > > > > API.  In
> > > > > > > some
> > > > > > > cases
> > > > > > > there also happens to be a workflow going on behind the
> > > > > > > scenes,
> > > > > > > but
> > > > > > > honestly that's not something I want our users to have to
> > > > > > > care
> > > > > > > about.
> > > > > > Agree that users don't have to care about this.
> > > > > > 
> > > > > > Users can get as involved as they want here. Most users I
> > > > > > think
> > > > > > will
> > > > > > use python-tripleoclient to drive the deployment or the new
> > > > > > UI.
> > > > > > They
> > > > > > don't have to interact with Mistral directly unless they
> > > > > > really
> > > > > > want
> > > > > > to. So whether we choose to build our own API or use a
> > > > > > generic one
> > > > > > I
> > > > > > think this point is mute.
> > > > > Okay, I think this is a very fundamental point, and I believe
> > > > > it gets
> > > > > right to the heart of my objection to the proposed change.
> > > > > 
> > > > > When I hear you say that users will use tripleoclient to talk
> > > > > to
> > > > > Mistral, it raises a big flag.  Then I look at something like
> > > > > https://github.com/dprince/python-tripleoclient/commit/77ffd2fa7b1642b9f05713ca30b8a27ec4b322b7
> > > > > and the flag gets bigger.
> > > > > 
> > > > > The thing is that there's a whole bunch of business logic
> > > > > currently
> > > > > sitting in the client that shouldn't/can't be there.  There
> > > > > are
> > > > > historical reasons for it, but the important thing is that
> > > > > the
> > > > > current
> > > > > client architecture is terribly flawed.  Business logic
> > > > > should
> > > > > never
> > > > > live in the client like it does today.
> > > > Totally agree here. In fact I have removed business logic from
> > > > python-
> > > > tripleoclient in this patch and moved it into a Mistral action.
> 
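
Since the thread keeps coming back to wrapping existing tripleo-common code in
custom Mistral actions registered through stevedore entry points, here is a
minimal hedged sketch of that pattern; the class name, module path, entry point
name, and the exact Mistral base-class import are illustrative assumptions, not
actual TripleO code:

from mistral.actions import base


class DeployPlanAction(base.Action):
    """Hypothetical action wrapping existing tripleo-common Python logic."""

    def __init__(self, plan_name):
        self.plan_name = plan_name

    def run(self):
        # Call into the existing Python code here instead of re-writing
        # that logic in workflow YAML; the workflow only wires inputs.
        return {'plan': self.plan_name, 'status': 'deployment started'}

    def test(self):
        # Mistral uses test() for dry runs; keep it side-effect free.
        return {'plan': self.plan_name, 'status': 'dry run'}

# The action would be registered via a stevedore entry point in setup.cfg,
# for example (illustrative names only):
#
#   [entry_points]
#   mistral.actions =
#       tripleo.deploy_plan = tripleo_common.actions.deploy:DeployPlanAction

In this shape the YAML workflow does little more than pass its inputs to the
action, which is essentially the "option 2" being discussed above.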

[openstack-dev] [neutron] no neutron lib meeting

2016-01-27 Thread Doug Wiegley
Multiple cancels, and I'll be on a plane.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] spec-lite process for tripleo

2016-01-27 Thread Derek Higgins

Hi All,

We briefly discussed feature tracking in this week's tripleo meeting. I 
would like to provide a way for downstream consumers (and ourselves) to 
track new features as they get implemented. The main thing that came 
out of the discussion is that people liked the spec-lite process that 
the glance team is using.


I'm proposing we start to use the same process: essentially, small 
features that don't warrant a blueprint would instead have a wishlist 
bug opened against them and marked with the spec-lite tag. This bug 
could then be referenced in the commit messages. For larger features 
blueprints can still be used. I think the process documented by 
glance[1] is a good model to follow, so go read that and see what you think.


The general feeling at the meeting was +1 to doing this[2], so I hope we 
can soon start enforcing it, assuming people are still happy to proceed?


thanks,
Derek.

[1] 
http://docs.openstack.org/developer/glance/contributing/blueprints.html#glance-spec-lite
[2] 
http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-01-26-14.02.log.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Tzu-Mainn Chen
> On Wed, 2016-01-27 at 09:36 -0500, Dan Prince wrote:
> > On Wed, 2016-01-27 at 14:32 +0100, Jiri Tomasek wrote:
> > > On 01/26/2016 09:05 PM, Ben Nemec wrote:
> > > > On 01/25/2016 04:36 PM, Dan Prince wrote:
> > > > > On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:
> > > > > > On 01/22/2016 06:19 PM, Dan Prince wrote:
> > > > > > > On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:
> > > > > > > > So I haven't weighed in on this yet, in part because I
> > > > > > > > was
> > > > > > > > on
> > > > > > > > vacation
> > > > > > > > when it was first proposed and missed a lot of the
> > > > > > > > initial
> > > > > > > > discussion,
> > > > > > > > and also because I wanted to take some time to order my
> > > > > > > > thoughts
> > > > > > > > on
> > > > > > > > it.
> > > > > > > >   Also because my initial reaction...was not conducive to
> > > > > > > > calm and
> > > > > > > > rational discussion. ;-)
> > > > > > > > 
> > > > > > > > The tldr is that I don't like it.  To explain why, I'm
> > > > > > > > going to
> > > > > > > > make
> > > > > > > > a
> > > > > > > > list (everyone loves lists, right? Top $NUMBER reasons we
> > > > > > > > should
> > > > > > > > stop
> > > > > > > > expecting other people to write our API for us):
> > > > > > > > 
> > > > > > > > 1) We've been down this road before.  Except last time it
> > > > > > > > was
> > > > > > > > with
> > > > > > > > Heat.
> > > > > > > >   I'm being somewhat tongue-in-cheek here, but expecting
> > > > > > > > a
> > > > > > > > general
> > > > > > > > service to provide us a user-friendly API for our
> > > > > > > > specific
> > > > > > > > use
> > > > > > > > case
> > > > > > > > just
> > > > > > > > doesn't make sense to me.
> > > > > > > We've been down this road with Heat yes. But we are
> > > > > > > currently
> > > > > > > using
> > > > > > > Heat for some things that we arguable should be (a
> > > > > > > workflows
> > > > > > > tool
> > > > > > > might
> > > > > > > help offload some stuff out of Heat). Also we haven't
> > > > > > > implemented
> > > > > > > custom Heat resources for TripleO either. There are mixed
> > > > > > > opinions
> > > > > > > on
> > > > > > > this but plugging in your code to a generic API is quite
> > > > > > > nice
> > > > > > > sometimes.
> > > > > > > 
> > > > > > > That is the beauty of Mistral I think. Unlike Heat it
> > > > > > > actually
> > > > > > > encourages you to customize it with custom Python actions.
> > > > > > > Anything
> > > > > > > we
> > > > > > > want in tripleo-common can become our own Mistral action
> > > > > > > (these get
> > > > > > > registered with stevedore entry points so we'd own the
> > > > > > > code)
> > > > > > > and
> > > > > > > the
> > > > > > > YAML workflows just tie them together via tasks.
> > > > > > > 
> > > > > > > We don't have to go off and build our own proxy deployment
> > > > > > > workflow
> > > > > > > API. The structure to do just about anything we need
> > > > > > > already
> > > > > > > exists
> > > > > > > so
> > > > > > > why not go and use it?
> > > > > > > 
> > > > > > > > 2) The TripleO API is not a workflow API.  I also largely
> > > > > > > > missed
> > > > > > > > this
> > > > > > > > discussion, but the TripleO API is a _Deployment_
> > > > > > > > API.  In
> > > > > > > > some
> > > > > > > > cases
> > > > > > > > there also happens to be a workflow going on behind the
> > > > > > > > scenes,
> > > > > > > > but
> > > > > > > > honestly that's not something I want our users to have to
> > > > > > > > care
> > > > > > > > about.
> > > > > > > Agree that users don't have to care about this.
> > > > > > > 
> > > > > > > Users can get as involved as they want here. Most users I
> > > > > > > think
> > > > > > > will
> > > > > > > use python-tripleoclient to drive the deployment or the new
> > > > > > > UI.
> > > > > > > They
> > > > > > > don't have to interact with Mistral directly unless they
> > > > > > > really
> > > > > > > want
> > > > > > > to. So whether we choose to build our own API or use a
> > > > > > > generic one
> > > > > > > I
> > > > > > > think this point is mute.
> > > > > > Okay, I think this is a very fundamental point, and I believe
> > > > > > it gets
> > > > > > right to the heart of my objection to the proposed change.
> > > > > > 
> > > > > > When I hear you say that users will use tripleoclient to talk
> > > > > > to
> > > > > > Mistral, it raises a big flag.  Then I look at something like
> > > > > > https://github.com/dprince/python-tripleoclient/commit/77ffd2fa7b1642b9f05713ca30b8a27ec4b322b7
> > > > > > and the flag gets bigger.
> > > > > > 
> > > > > > The thing is that there's a whole bunch of business logic
> > > > > > currently
> > > > > > sitting in the client that shouldn't/can't be there.  There
> > > > > > are
> > > > > > historical reasons for it, but the important thing is that
> > > > > > the
> > > > > > current
> > > > > > client architecture is terribly flawed. 

[openstack-dev] [release][neutron] networking-sfc release 1.0.0 (independent)

2016-01-27 Thread doug
We are overjoyed to announce the release of:

networking-sfc 1.0.0: API's and implementations to support Service
Function Chaining in Neutron.

This release is part of the independent release series.

With source available at:

http://git.openstack.org/cgit/openstack/networking-sfc

With package available at:

https://pypi.python.org/pypi/networking-sfc

Please report issues through launchpad:

http://bugs.launchpad.net/networking-sfc

For more details, please see below.


Changes in networking-sfc ..1.0.0
-


Diffstat (except docs and test files)
-

README.rst |  19 --
devstack/README.md |   2 +-
networking_sfc/db/flowclassifier_db.py | 112 ++--
.../db/migration/alembic_migrations/env.py |   2 -
.../liberty/contract/48072cb59133_initial.py   |  33 
.../versions/liberty/expand/24fc7241aa5_initial.py |  33 
.../liberty/expand/5a475fc853e6_ovs_data_model.py  |  87 +
.../9768e6a66c9_flowclassifier_data_model.py   |  69 
.../liberty/expand/c3e178d4a985_sfc_data_model.py  | 119 +
.../mitaka/contract/48072cb59133_initial.py|  33 
.../versions/mitaka/expand/24fc7241aa5_initial.py  |  33 
.../mitaka/expand/5a475fc853e6_ovs_data_model.py   |  87 -
.../9768e6a66c9_flowclassifier_data_model.py   |  69 
.../mitaka/expand/c3e178d4a985_sfc_data_model.py   | 119 -
networking_sfc/db/migration/models/__init__.py |   0
networking_sfc/db/migration/models/head.py |  23 ---
networking_sfc/db/sfc_db.py|  74 +++-
networking_sfc/extensions/sfc.py   |   5 -
networking_sfc/services/sfc/agent/agent.py |  71 ++--
requirements.txt   |  54 +++---
test-requirements.txt  |  24 +--
tools/tox_install.sh   |   2 +-
23 files changed, 514 insertions(+), 750 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 8d6030c..4c3f762 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4 +4 @@
-pbr>=1.6 # Apache-2.0
+pbr>=1.6
@@ -6,4 +6,4 @@ pbr>=1.6 # Apache-2.0
-Paste # MIT
-PasteDeploy>=1.5.0 # MIT
-Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7' # MIT
-Routes!=2.0,>=1.12.3;python_version!='2.7' # MIT
+Paste
+PasteDeploy>=1.5.0
+Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'
+Routes!=2.0,>=1.12.3;python_version!='2.7'
@@ -11,5 +11,5 @@ debtcollector>=0.3.0 # Apache-2.0
-eventlet>=0.17.4 # MIT
-pecan>=1.0.0 # BSD
-greenlet>=0.3.2 # MIT
-httplib2>=0.7.5 # MIT
-requests!=2.9.0,>=2.8.1 # Apache-2.0
+eventlet>=0.17.4
+pecan>=1.0.0
+greenlet>=0.3.2
+httplib2>=0.7.5
+requests!=2.9.0,>=2.8.1
@@ -17,3 +17,3 @@ Jinja2>=2.8 # BSD License (3 clause)
-keystonemiddleware>=4.0.0 # Apache-2.0
-netaddr!=0.7.16,>=0.7.12 # BSD
-python-neutronclient>=2.6.0 # Apache-2.0
+keystonemiddleware>=4.0.0
+netaddr!=0.7.16,>=0.7.12
+python-neutronclient>=2.6.0
@@ -21,6 +21,6 @@ retrying!=1.3.0,>=1.2.3 # Apache-2.0
-ryu!=3.29,>=3.23.2 # Apache-2.0
-SQLAlchemy<1.1.0,>=1.0.10 # MIT
-WebOb>=1.2.3 # MIT
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
-alembic>=0.8.0 # MIT
-six>=1.9.0 # MIT
+ryu>=3.23.2 # Apache-2.0
+SQLAlchemy<1.1.0,>=0.9.9
+WebOb>=1.2.3
+python-keystoneclient!=1.8.0,>=1.6.0
+alembic>=0.8.0
+six>=1.9.0
@@ -29 +29 @@ oslo.concurrency>=2.3.0 # Apache-2.0
-oslo.config>=3.2.0 # Apache-2.0
+oslo.config>=2.7.0 # Apache-2.0
@@ -33 +33 @@ oslo.i18n>=1.5.0 # Apache-2.0
-oslo.log>=1.14.0 # Apache-2.0
+oslo.log>=1.12.0 # Apache-2.0
@@ -40,2 +40,2 @@ oslo.service>=1.0.0 # Apache-2.0
-oslo.utils>=3.4.0 # Apache-2.0
-oslo.versionedobjects>=0.13.0 # Apache-2.0
+oslo.utils>=3.2.0 # Apache-2.0
+oslo.versionedobjects>=0.13.0
@@ -43 +43 @@ oslo.versionedobjects>=0.13.0 # Apache-2.0
-python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
+python-novaclient!=2.33.0,>=2.29.0
@@ -46,2 +46,2 @@ python-novaclient!=2.33.0,>=2.29.0 # Apache-2.0
-pywin32;sys_platform=='win32' # PSF
-wmi;sys_platform=='win32' # MIT
+pywin32;sys_platform=='win32'
+wmi;sys_platform=='win32'
@@ -52 +52 @@ wmi;sys_platform=='win32' # MIT
-# -e git+https://git.openstack.org/openstack/neutron#egg=neutron
+# -e git+https://git.openstack.org/openstack/neutron@stable/liberty#egg=neutron
diff --git a/test-requirements.txt b/test-requirements.txt
index 1eca301..52a6177 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7,4 +7,4 @@ cliff>=1.15.0 # Apache-2.0
-coverage>=3.6 # Apache-2.0
-fixtures>=1.3.1 # Apache-2.0/BSD
-mock>=1.2 # BSD
-python-subunit>=0.0.18 # Apache-2.0/BSD
+coverage>=3.6
+fixtures>=1.3.1
+mock>=1.2
+python-subunit>=0.0.18
@@ -12 +12 @@ requests-mock>=0.7.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
@@ -14,4 +14,4 @@ oslosphinx!=3.4.0,>=2.5.0 # 

[openstack-dev] [all] [Ironic] New project: Ironic Staging Drivers

2016-01-27 Thread Lucas Alvares Gomes
Hi,

I would like to quickly announce the creation of a new project called
Ironic Staging Drivers.


What the Ironic Staging Drivers project is?
-

As context, at the Tokyo design summit it was decided that drivers in the
Ironic tree will require 3rd Party CI testing to assert that they are
maintained and have the high quality expected by our users. But not all
driver maintainers have the means to afford setting up a 3rd Party CI for
their drivers; for example, we have people (myself included) who maintain
a driver that is only used in some very constrained environments to
develop/test Ironic itself. So, this project is about having a place to
hold these useful drivers that will soon be out of the Ironic tree or that
never made it to that tree.

The project's main focus is to provide a common place where drivers with no
3rd Party CI can be developed, documented, unit-tested and shared, solving
the "hundreds of different download sites" problem that we may face if each
driver ends up having its own repository. But we also want to make Ironic
better by exercising and validating the maturity of Ironic's driver
interface.


What the Ironic Staging Drivers project is not?
---

It's important to note that the Ironic Staging Drivers *is not* a project
under Ironic's governance, meaning that the Ironic core group *is not*
responsible for the code in this project. But it's not unrealistic to
expect that some of the maintainers of this project are also people holding
core review status in the Ironic project.

The project is not a place to dump code and run away hoping that someone
else will take care of it for you. Maintainers of a driver in this project
will be asked to "babysit" their own drivers, fix their bugs and document
it.


Where we are at the moment?


The project literally just started; the skeleton code to allow us to run
the unit tests, generate the documentation and so on is being worked out as
I write this email.

The initial core team is composed of myself and Dmitry Tantsur, but I
welcome anyone interested in helping with the take-off of the project to
talk to us about joining the core team immediately. If not, we can review
the team again very soon to add fresh developers.

We still need to come up with some guides about how one can submit their
driver to the project, what's required, etc. So we need help/opinions here
too (and let's keep it as light on bureaucracy as possible :-) )

The code can be found at https://github.com/openstack/ironic-staging-drivers

Cheers,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] Switching to external fixtures for integration Noop tests

2016-01-27 Thread Bogdan Dobrelya
On 26.01.2016 22:18, Kyrylo Galanov wrote:
> Hello Bogdan,
> 
> I hope I am not out of the context here. Why do we separate fixtures for
> Noop tests from the repo?
> I could understand it if the whole noop test block were carried out to a
> separate repo.
> 

I believe fixtures are normally downloaded by the rake spec_prep task.
Developers avoid shipping fixtures with tests.

The astute.yaml data fixtures are supposed to be external to the
fuel-library as that data comes from the Nailgun backend and corresponds
to all known deploy paths.

Later, the generated puppet catalogs (see [0]) shall be put into the
fixtures repo as well, as they will contain hundreds of thousands of
auto-generated lines and are tightly related to the astute.yaml fixtures.

While the Noop tests framework itself indeed may be moved to another
separate repo (later), we should keep our integration tests [1] in the
fuel-library repository, which is "under test" by those tests.

[0] https://blueprints.launchpad.net/fuel/+spec/deployment-data-dryrun
[1]
https://git.openstack.org/cgit/openstack/fuel-library/tree/tests/noop/spec/hosts

> On Tue, Jan 26, 2016 at 1:54 PM, Bogdan Dobrelya  > wrote:
> 
> We are going to switch [0] to external astute.yaml fixtures for Noop
> tests and remove them from the fuel-library repo as well.
> Please make sure all new changes to astute.yaml fixtures will be
> submitted now to the new location. Related mail thread [1].
> 
> [0]
> 
> https://review.openstack.org/#/c/272480/1/doc/noop-guide/source/noop_fixtures.rst
> [1]
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082888.html
> 
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Jay Pipes

On 01/27/2016 12:53 PM, gordon chung wrote:

It makes for a crappy user experience. Crappier than the crappy user
experience that OpenStack API users already have because we have done a
crappy job shepherding projects in order to make sure there isn't
overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
directly at you).

... yes, Ceilometer can easily handle your events and meters and store
them in either Elasticsearch or Gnocchi for visualisations. you just
need to create a new definition in our mapping files[1][2]. you will
definitely want to coordinate the naming of your messages. ie.
event_type == backup. and event_type == backup..


This isn't at all what I was referring to, actually. I was referring to 
my belief that we (the API WG, the TC, whatever...) have failed to 
properly prevent almost complete and total overlap of the Ceilometer [1] 
and Monasca [2] REST APIs.


They are virtually identical in purpose, but in frustrating 
slightly-inconsistent ways. and this means that users of the "OpenStack 
APIs" have absolutely no idea what the "OpenStack Telemetry API" really is.


Both APIs have /alarms as a top-level resource endpoint. One of them 
refers to the alarm notification with /alarms, while the other refers to 
the alarm definition with /alarms.


One API has /meters as a top-level resource endpoint. The other uses 
/metrics to mean the exact same thing.


One API has /samples as a top-level resource endpoint. The other uses 
/metrics/measurements to mean the exact same thing.


One API returns a list JSON object for list results. The other returns a 
dict JSON object with a "links" key and an "elements" key.


And the list goes on... all producing a horrible non-unified, 
overly-complicated and redundant experience for our API users.


Best,
-jay

[1] http://developer.openstack.org/api-ref-telemetry-v2.html
[2] 
https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] do not account compute resource of instances in state SHELVED_OFFLOADED

2016-01-27 Thread Sascha Vogt
Hi Andrew,

On 27.01.2016 at 10:38, Andrew Laski wrote:
> 1. This allows for a poor experience where a user would not be able to
> turn on and use an instance that they already have due to overquota. 
> This is a change from the current behavior where they just can't create
> resources, now something they have is unusable.
That is a valid point, though I think if it's configurable it is up to
the operator to use that feature.

> 2. I anticipate a further ask for a separate quota for the number of
> offloaded resources being used to prevent just continually spinning up
> and shelving instances with no limit.  Because while disk/ram/cpu
> resources are not being consumed by an offloaded instance network and
> volume resources remain consumed and storage is required is Glance for
> the offloaded disk.  And adding this additional quota adds more
> complexity to shelving which is already overly complex and not well
> understood.
I think an off-loaded / shelved resource should still count against the
quotas being used (instances, allocated floating IPs, disk space, etc.),
just not the resources which are no longer consumed (CPU and RAM).

In addition, I think it would make sense to introduce quotas for Glance
and ephemeral disk size, and a shelved instance could (should) still
count against those quotas.
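
To illustrate the accounting rule I have in mind, a tiny Python sketch (the
field names and state string are made-up placeholders, not Nova's actual data
model):

def quota_usage(instances):
    """Illustrative only: shelved-offloaded counts against everything
    except CPU and RAM."""
    usage = {'instances': 0, 'cores': 0, 'ram_mb': 0, 'disk_gb': 0}
    for inst in instances:
        usage['instances'] += 1              # still counts as an instance
        usage['disk_gb'] += inst['disk_gb']  # disk/image space is still held
        if inst['vm_state'] != 'shelved_offloaded':
            usage['cores'] += inst['vcpus']   # only non-offloaded consume CPU
            usage['ram_mb'] += inst['ram_mb']  # ... and RAM on a hypervisor
    return usage

# quota_usage([
#     {'vm_state': 'active', 'vcpus': 2, 'ram_mb': 4096, 'disk_gb': 40},
#     {'vm_state': 'shelved_offloaded', 'vcpus': 2, 'ram_mb': 4096, 'disk_gb': 40},
# ])
# -> {'instances': 2, 'cores': 2, 'ram_mb': 4096, 'disk_gb': 80}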

Greetings
-Sascha-


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Bugs] Time sync problem when testing.

2016-01-27 Thread Stanislaw Bogatkin
I've got a proposal from the mos-linux team to switch to sntp instead of
ntpdate due to ntpdate's deprecation. It looks nice enough to me, but we
already had similar problems with sntp before switching to ntpdate.
Does anyone vote against switching to sntp?

On Wed, Jan 27, 2016 at 2:25 PM, Maksim Malchuk 
wrote:

> I think we shouldn't depend on the other services like Syslog and logger
> trying to catch the problem and it is better to create the logs ourselves.
>
>
> On Wed, Jan 27, 2016 at 1:49 PM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> >But you've used 'logger -t ntpdate' - this is can fail again and logs
>> can be empty again.
>> What do you mean by 'fail again'? Piping to logger uses standard blocking
>> I/O - logger gets all the output it can reach, so it gets all the output
>> strace will produce. If ntpdate hangs for some reason, we should see it in
>> the strace output. If ntpdate exits, we will see that too.
>>
>> On Wed, Jan 27, 2016 at 12:57 PM, Maksim Malchuk 
>> wrote:
>>
>>> But you've used 'logger -t ntpdate' - this is can fail again and logs
>>> can be empty again.
>>> My opinion we should use output redirection to the log-file directly.
>>>
>>>
>>> On Wed, Jan 27, 2016 at 11:21 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Yes, I have created custom iso with debug output. It didn't help, so
 another one with strace was created.
 On Jan 27, 2016 00:56, "Alex Schultz"  wrote:

> On Tue, Jan 26, 2016 at 2:16 PM, Stanislaw Bogatkin
>  wrote:
> > When there is too high strata, ntpdate can understand this and
> always write
> > this into its log. In our case there are just no log - ntpdate send
> first
> > packet, get an answer - that's all. So, fudging won't save us, as I
> think.
> > Also, it's a really bad approach to fudge a server which doesn't
> have a real
> > clock onboard.
>
> Do you have a debug output of the ntpdate somewhere? I'm not finding
> it in the bugs or in some of the snapshots for the failures. I did
> find one snapshot with the -v change that didn't have any response
> information so maybe it's the other problem where there is some
> network connectivity isn't working correctly or the responses are
> getting dropped somewhere?
>
> -Alex
>
> >
> > On Tue, Jan 26, 2016 at 10:41 PM, Alex Schultz <
> aschu...@mirantis.com>
> > wrote:
> >>
> >> On Tue, Jan 26, 2016 at 11:42 AM, Stanislaw Bogatkin
> >>  wrote:
> >> > Hi guys,
> >> >
> >> > for some time we have a bug [0] with ntpdate. It doesn't
> reproduced 100%
> >> > of
> >> > time, but breaks our BVT and swarm tests. There is no exact point
> where
> >> > problem root located. To better understand this, some verbosity to
> >> > ntpdate
> >> > output was added but in logs we can see only that packet exchange
> >> > between
> >> > ntpdate and server was started and was never completed.
> >> >
> >>
> >> So when I've hit this in my local environments there is usually one
> or
> >> two possible causes for this. 1) lack of network connectivity so ntp
> >> server never responds or 2) the stratum is too high.  My assumption
> is
> >> that we're running into #2 because of our revert-resume in testing.
> >> When we resume, the ntp server on the master may take a while to
> >> become stable. This sync in the deployment uses the fuel master for
> >> synchronization so if the stratum is too high, it will fail with
> this
> >> lovely useless error.  My assumption on what is happening is that
> >> because we aren't using a set of internal ntp servers but rather
> >> relying on the standard ntp.org pools.  So when the master is being
> >> resumed it's struggling to find a good enough set of servers so it
> >> takes a while to sync. This then causes these deployment tasks to
> fail
> >> because the master has not yet stabilized (might also be geolocation
> >> related).  We could either address this by fudging the stratum on
> the
> >> master server in the configs or possibly introducing our own more
> >> stable local ntp servers. I have a feeling fudging the stratum might
> >> be better when we only use the master in our ntp configuration.
> >>
> >> > As this bug is blocker, I propose to merge [1] to better
> understanding
> >> > what's going on. I created custom ISO with this patchset and
> tried to
> >> > run
> >> > about 10 BVT tests on this ISO. Absolutely with no luck. So, if
> we will
> >> > merge this, we would catch the problem much faster and understand
> root
> >> > cause.
> >> >
> >>
> >> I think we should merge the increased logging patch anyway 

Re: [openstack-dev] [nova][neutron] Update on Neutron's rolling upgrade support?

2016-01-27 Thread Jay Pipes

Thanks very much for the update, Ihar, much appreciated!

On 01/26/2016 01:19 PM, Ihar Hrachyshka wrote:

Jay Pipes  wrote:


Hey Sean,

Tomorrow morning UTC, we'll be discussing Neutron topics at the Nova
mid-cycle. Wondering if you might give us a quick status update on
where we are with the Neutron devstack grenade jobs and what your work
has uncovered on the Neutron side that might be possible to discuss in
our midcycle meetup?


Lemme give you some update on the matter.

We experienced some grenade partial job failures in multinode when
creating initial long standing resources:

https://bugs.launchpad.net/neutron/+bug/1527675

That was identified as an MTU issue, to be fixed by a set of patches:

https://review.openstack.org/#/q/topic:multinode-neutron-mtu

The only needed piece not in yet is: https://review.openstack.org/267847
(we need someone from infra to give second +2)

With that, we get past the resource creation phase and run the actual tests,
with 3 failures. You can see the latest results using the following fake
patch: https://review.openstack.org/265759

Failure logs:
http://logs.openstack.org/59/265759/15/experimental/gate-grenade-dsvm-neutron-multinode/6c1c97b/console.html


All three tests fail when ssh-ing into an instance using br-ex. This may
be another MTU issue, now on the br-ex side. We have not identified specific
fixes to merge for that yet.

Hope that helps,

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fuel 9.0 (Mitaka) deployment with Ubuntu UCA packages

2016-01-27 Thread Matthew Mosesohn
Hi Fuelers and Stackers,

I am pleased to announce that it is now possible to deploy Mitaka using
Fuel as a deployment tool. I am taking advantage of Alex Schultz's
plugin, fuel-plugin-upstream[0], along with a series of patches
currently on review[1]. I have not had a chance to do destructive
tests yet, but OSTF health checks are (nearly all) passing. Tests show
that we can complete deployment with either ceph or swift
successfully.

The positive side of all of this experience shows that we can deploy
both Liberty and Mitaka (b1) on the same manifests.

One item to note is that the Nova EC2 API has gone away completely in
Mitaka, and the haproxy configuration has been updated to compensate for this.

Finally, I should add that our current custom automated BVT script
can't install a plugin, so I've written 2 patches[2] to hack in
fuel-plugin-upstream's tasks.

The only failure during deployment is that OSTF reported the nova metadata
and nova osapi_compute services as down. Other tests pass just fine.
For those interested, I've attached a link to the logs from the
deployment[3].


This achievement moves us closer to the goal of enabling Fuel to
deploy OpenStack using non-Mirantis packages.


[0] https://github.com/mwhahaha/fuel-plugin-upstream
[1] Mitaka support patches (in order):
https://review.openstack.org/#/c/267697/11
https://review.openstack.org/#/c/268147/13
https://review.openstack.org/#/c/269564/5
https://review.openstack.org/#/c/268214/11
https://review.openstack.org/#/c/268082/9
https://review.openstack.org/#/c/267448/7
https://review.openstack.org/#/c/272557/2

[2] https://review.openstack.org/#/c/269752/ https://review.openstack.org/269749

[3] 
https://drive.google.com/file/d/0B0UMyn5tu8EUdkUxMUs3Z0FxbGs/view?usp=sharing
(.tar.xz format)

Best Regards,
Matthew Mosesohn

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer] New project: collectd-ceilometer-plugin

2016-01-27 Thread Foley, Emma L
So, metrics are grouped by the type of resource they use, and each metric has 
to be listed.
Grouping isn't a problem, but creating an exhaustive list might be, since there 
are 100+ plugins [1] in collectd which can provide statistics, although not all 
of these are useful, and some require extensive configuration. The plugins each 
provide multiple metrics, and each metric can be duplicated for a number of 
instances, examples: [2].

Collectd data is minimal: timestamp and volume, so there's little room to find 
interesting metadata.
It would be nice to see this support integrated, but it might be very tedious 
to list all the metric names and group them by resource type without some form 
of automation. Do the resource definitions support wildcards? Collectd can 
provide A LOT of metrics.

Regards,
Emma

[1] https://collectd.org/wiki/index.php/Table_of_Plugins 
[2] https://collectd.org/wiki/index.php/Naming_schema 


Original Message-
From: gordon chung [mailto:g...@live.ca] 
Sent: Monday, January 25, 2016 5:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [telemetry][ceilometer] New project: 
collectd-ceilometer-plugin

you can consider the ceilometer database (and api) as an open-ended model designed 
to capture the full fidelity of a datapoint (something useful for auditing, 
post processing). alternatively, gnocchi is a strongly typed model which 
captures only required data.

in the case of ceilometer -> gnocchi, the measurement data ceilometer collects 
is sent to gnocchi and mapped to specific resource types[1]. 
here we define all the resources and the metric mappings available. with 
regards to collectd, i'm just wondering what additional metrics are added and 
possibly any interesting metadata?

[1]
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/gnocchi_resources.yaml

On 25/01/2016 9:21 AM, Foley, Emma L wrote:
> I'm not overly familiar with Gnocchi, so I can't answer that off the bat, but 
> I would be looking for answers to the following questions:
> What changes need to be made to gnocchi to accommodate regular data from 
> ceilometer?
> Is there anything additional in Gnocchi's data model that is not part of 
> Ceilometer?
>
> Regards,
> Emma
>
>
> -Original Message-
> From: gord chung [mailto:g...@live.ca]
> Sent: Friday, January 22, 2016 2:41 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [telemetry][ceilometer] New project: 
> collectd-ceilometer-plugin
>
> nice! thanks Emma!
>
> i'm wondering if there's an additional metrics/resources we should add to 
> gnocchi to accommodate the data?
>
> On 22/01/2016 6:11 AM, Foley, Emma L wrote:
>> Hi folks,
>>
>> A new plug-in for making collectd[1] stats available to Ceilometer [2] is 
>> ready for use.
>>
>> The collectd-ceilometer-plugin makes system statistics from collectd 
>> available to Ceilometer.
>> These additional statistics make it easier to detect faults and identify 
>> performance bottlenecks (among other uses).
>>
>> Regards,
>> Emma
>>
>> [1] https://collectd.org/
>> [2] http://github.com/openstack/collectd-ceilometer-plugin
>>
>> --
>> Intel Research and Development Ireland Limited Registered in Ireland 
>> Registered Office: Collinstown Industrial Park, Leixlip, County 
>> Kildare Registered Number: 308263
>>
>>
>> _
>> _  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> gord
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-01-27 Thread pn kk
Hi,

Thanks for the responses. Putting it in a small example

from taskflow.patterns import linear_flow as lf
from taskflow import task


def flow_factory(tmp):
    return lf.Flow('resume from backend example').add(
        TestTask(name='first', test=tmp),
        InterruptTask(name='boom'),
        TestTask(name='second', test="second task"))


class TestTask(task.Task):
    def __init__(self, name, test, provides=None, **kwargs):
        self.test = test
        super(TestTask, self).__init__(name, provides, **kwargs)

    def execute(self, *args, **kwargs):
        print('executing %s' % self)
        return 'ok'


class InterruptTask(task.Task):
    def execute(self, *args, **kwargs):
        # DO NOT TRY THIS AT HOME
        engine.suspend()

I was searching for a way in which I can reload the flow after a crash
without passing the parameter "tmp" shown above.
It looks like "load_from_factory" gives this provision.

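(For completeness, the "book" and "backend" referenced below were created
along the lines of the TaskFlow persistence docs. This is only a rough
sketch: the connection string is an example, and the LogBook class lives in
taskflow.persistence.logbook in older releases rather than
taskflow.persistence.models.)

import contextlib

import taskflow.engines
from taskflow.persistence import backends
from taskflow.persistence import models

# fetch a persistence backend pointed at mysql and make sure its tables exist
backend = backends.fetch({'connection': 'mysql://user:secret@host/taskflow'})
with contextlib.closing(backend.get_connection()) as conn:
    conn.upgrade()

# a logbook groups the flow details we want to be able to resume later
book = models.LogBook('resume from backend example')
with contextlib.closing(backend.get_connection()) as conn:
    conn.save_logbook(book)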

engine = taskflow.engines.load_from_factory(
    flow_factory=flow_factory, factory_kwargs={"tmp": "test_data"},
    book=book, backend=backend)
engine.run()

Now it suspends after running the interrupt task, and I can reload the flow
from the saved factory method without passing the parameter again.

for flow_detail_2 in book:
    engine2 = taskflow.engines.load_from_detail(flow_detail_2,
                                                backend=backend)
    engine2.run()

Let me know if this is ok or is there a better approach to achieve this?

-Thanks


On Wed, Jan 27, 2016 at 12:03 AM, Joshua Harlow 
wrote:

> Hi there,
>
> Michał is correct, it should be saved.
>
> Do u have a small example of what u are trying to do because that will
> help determine if what u are doing will be saved or whether it will not be.
>
> Or even possibly explaining what is being done would be fine to (more
> data/info for me to reason about what should be stored in your case).
>
> Thanks,
>
> Josh
>
>
> Michał Dulko wrote:
>
>> On 01/26/2016 10:23 AM, pn kk wrote:
>>
>>> Hi,
>>>
>>> I use taskflow for job management and now trying to persist the state
>>> of flows/tasks in mysql to recover incase of process crashes.
>>>
>>> I could see the state and the task results stored in the database.
>>>
>>> Now I am looking for some way to store the input parameters of the tasks.
>>>
>>> Please share your inputs to achieve this.
>>>
>>> -Thanks
>>>
>>> I've played with that some time ago and if I recall correctly input
>> parameters should be available in the flow's storage, which means these
>> are also saved to the DB. Take a look on resume_workflows method on my
>> old PoC [1] (hopefully TaskFlow haven't changed much since then).
>>
>> [1] https://review.openstack.org/#/c/152200/4/cinder/scheduler/manager.py
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Jiri Tomasek

On 01/27/2016 03:36 PM, Dan Prince wrote:

On Wed, 2016-01-27 at 14:32 +0100, Jiri Tomasek wrote:

On 01/26/2016 09:05 PM, Ben Nemec wrote:

On 01/25/2016 04:36 PM, Dan Prince wrote:

On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:

On 01/22/2016 06:19 PM, Dan Prince wrote:

On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:

So I haven't weighed in on this yet, in part because I was on vacation
when it was first proposed and missed a lot of the initial discussion,
and also because I wanted to take some time to order my thoughts on it.
Also because my initial reaction...was not conducive to calm and
rational discussion. ;-)

The tldr is that I don't like it.  To explain why, I'm going to make a
list (everyone loves lists, right? Top $NUMBER reasons we should stop
expecting other people to write our API for us):

1) We've been down this road before.  Except last time it was with Heat.
I'm being somewhat tongue-in-cheek here, but expecting a general
service to provide us a user-friendly API for our specific use case
just doesn't make sense to me.

We've been down this road with Heat yes. But we are currently using Heat
for some things that we arguably shouldn't be (a workflows tool might
help offload some stuff out of Heat). Also we haven't implemented custom
Heat resources for TripleO either. There are mixed opinions on this but
plugging in your code to a generic API is quite nice sometimes.

That is the beauty of Mistral I think. Unlike Heat it actually
encourages you to customize it with custom Python actions. Anything we
want in tripleo-common can become our own Mistral action (these get
registered with stevedore entry points so we'd own the code) and the
YAML workflows just tie them together via tasks.

We don't have to go off and build our own proxy deployment workflow
API. The structure to do just about anything we need already exists so
why not go and use it?

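(To illustrate the "custom action registered with stevedore entry points"
idea above: a rough sketch only, with made-up names; in practice the class
would subclass Mistral's action base class, and the 'mistral.actions'
entry-point namespace is assumed here.)

from stevedore import extension


class UploadEnvironmentAction(object):
    """Hypothetical tripleo-common action, for illustration only."""

    def run(self):
        return 'uploaded'

# The owning package registers the action in its setup.cfg, e.g.:
# [entry_points]
# mistral.actions =
#     tripleo.upload_environment = tripleo_common.actions:UploadEnvironmentAction
#
# and the workflow service discovers everything in that namespace via stevedore:
mgr = extension.ExtensionManager(namespace='mistral.actions')
for ext in mgr:
    print(ext.name, ext.plugin)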

2) The TripleO API is not a workflow API.  I also largely missed this
discussion, but the TripleO API is a _Deployment_ API.  In some cases
there also happens to be a workflow going on behind the scenes, but
honestly that's not something I want our users to have to care about.

Agree that users don't have to care about this.

Users can get as involved as they want here. Most users I think will
use python-tripleoclient to drive the deployment or the new UI.  They
don't have to interact with Mistral directly unless they really want
to. So whether we choose to build our own API or use a generic one I
think this point is moot.

Okay, I think this is a very fundamental point, and I believe it gets
right to the heart of my objection to the proposed change.

When I hear you say that users will use tripleoclient to talk to
Mistral, it raises a big flag.  Then I look at something like
https://github.com/dprince/python-tripleoclient/commit/77ffd2fa7b1642b9f05713ca30b8a27ec4b322b7
and the flag gets bigger.

The thing is that there's a whole bunch of business logic currently
sitting in the client that shouldn't/can't be there.  There are
historical reasons for it, but the important thing is that the current
client architecture is terribly flawed.  Business logic should never
live in the client like it does today.

Totally agree here. In fact I have removed business logic from
python-tripleoclient in this patch and moved it into a Mistral action,
which can then be used via a stable API from anywhere.


Looking at that change, I see a bunch of business logic around taking
our configuration and passing it to Mistral.  In order for us to do
something like that and have a sustainable GUI, that code _has_ to live
behind an API that the GUI and CLI alike can call.  If we ask the GUI
to re-implement that code, then we're doomed to divergence between the
CLI and GUI code and we'll most likely end up back where we are with a
GUI that can't deploy half of our features because they were
implemented solely with the CLI in mind and made assumptions the GUI
can't meet.

The latest feedback I've gotten from working with the UI developers on
this was that we should have a workflow to create the environment. That
would get called via the Mistral API via python-tripleoclient and any
sort of UI you could imagine and would essentially give us a stable
environment interface.

Anything that requires tripleoclient means !GUI though.  I know the
current GUI still has a bunch of dependencies on the CLI, but that
seems like something we need to fix, not a pattern to repeat.  I still
think any sentence containing "call Mistral via tripleoclient" is
indicative of a problem in the design.

I am not sure I understand the argument here.

Regardless of which API we use (Mistral API or TripleO API), the GUI is
going to call the API and tripleoclient (CLI) is going to call the API
(through mistralclient - impl. detail).

The GUI can't and does not call the API through tripleoclient. This is
why the work on extracting the common business logic to tripleo-common
happened. So tripleo-common is 

Re: [openstack-dev] OpenStack installer

2016-01-27 Thread Fox, Kevin M
As someone that has upgraded rdo clouds on multiple occasions, I concur. It's 
way easier to upgrade a whole cloud (or even pieces of a cloud) with properly 
isolated containers instead of rpms. That's the case where containers really 
shine, and you don't experience it until you try to upgrade an existing cloud, 
or try to run multiple services from different releases on the same controller.

Thanks,
Kevin


From: Michał Jastrzębski [inc...@gmail.com]
Sent: Wednesday, January 27, 2016 6:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] OpenStack installer

Disclaimer: I'm from Kolla.

Well, you will see these benefits when you try to perform an upgrade :)
that's the tricky part about containers - they become really useful
after 6 months. Without them you'll end up having to upgrade every
service at the very same time, and that's disruptive at best,
disastrous at worst. Containers give you the benefit of separation
between services' dependencies, so you won't even run into version
conflicts. Another benefit is that pre-built images give you repeatability
of deployments, which is something you don't appreciate until you lose
it. Last but not least, cleanups and rollbacks are a breeze, and they
don't leave any trash on the base system - another benefit that is easily
underestimated.

feel free to find us in #kolla if you have any questions.

Cheers,
inc0

On 27 January 2016 at 02:49, Gyorgy Szombathelyi
 wrote:
>>
>> Hi Gyorgy,
>>
> Hi Kevin,
>
>> I'll definitely give this a look and thanks for sharing. I would like to ask
>> however why you found OpenStack-Anisble overly complex so much so that
>> you've taken on the complexity of developing a new installer all together? 
>> I'd
>> love to understand the issues you ran into and see what we can do in
>> upstream OpenStack-Ansible to overcome them for the greater community.
>> Being that OpenStack-Ansible is no longer a Rackspace project but a
>> community effort governed by the OpenStack Foundation I'd been keen on
>> seeing how we can simplify the deployment offerings we're currently
>> working on today in an effort foster greater developer interactions so that
>> we can work together on building the best deployer and operator
>> experience.
>>
> Basically there were two major points:
>
> - containers: we don't need them. For us, there were no real benefits to
> using them, just added unnecessary complexity. Instead of having 1 mgmt
> address per controller, there were a dozen, installation times were huge
> (>2 hours) with creating and updating each controller, and the generated
> inventory was fragile (any time I wanted to change something in the
> generated inventory, I had a high chance of breaking it). When I learned
> how to install without containers, another problem came in: every service
> listens on 0.0.0.0, so haproxy can't bind to the service ports.
>
> - packages: we wanted to avoid mixing pip and vendor packages. Linux's great
> power was always the package management system. We don't have the capacity
> to choose the right revision from git. Also, a .deb package comes with
> goodies, like the init scripts, proper system users, directories, upgrade
> possibility and so on. Bugs can be reported against .debs.
>
> And some minor points:
> - Need root rights to start. I don't really understand why it is needed.
> - I think the role plays are unnecessarily fragmented into files. Ansible was
>   designed with simplicity in mind; now keystone, for example, has 29 files,
>   lots of them with 1 task. I could not understand what the
> - The 'must have tags' are also against Ansible's philosophy. No one should
>   need to start a play with a tag (tagging should be an exception, not the
>   rule). Running a role doesn't take more than 10-20 secs if it is already
>   completed, so tagging is just unnecessary bloat. If you need to start
>   something in the middle of a play, then that play is not right.
>
> So those were the reasons why we started our project; I hope you can
> understand it. We don't want to compete, it just serves us better.
>
>> All that said, thanks for sharing the release and if I can help in any way 
>> please
>> reach out.
>>
> Thanks, maybe we can work together in the future.
>
>> --
>>
>> Kevin Carter
>> IRC: cloudnull
>>
> Br,
> György
>
>>
>> 
>> From: Gyorgy Szombathelyi 
>> Sent: Tuesday, January 26, 2016 4:32 AM
>> To: 'openstack-dev@lists.openstack.org'
>> Subject: [openstack-dev] OpenStack installer
>>
>> Hello!
>>
>> I just want to announce a new installer for OpenStack:
>> https://github.com/DoclerLabs/openstack
>> It is GPLv3, uses Ansible (currently 1.9.x,  2.0.0.2 has some bugs which has 
>> to
>> be resolved), has lots of components integrated (of course there are missing
>> ones).
>> Goal was simplicity and also 

Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Emilien Macchi


On 01/27/2016 10:51 AM, Jay Pipes wrote:
> On 01/27/2016 12:53 PM, gordon chung wrote:
>>> It makes for a crappy user experience. Crappier than the crappy user
>>> experience that OpenStack API users already have because we have done a
>>> crappy job shepherding projects in order to make sure there isn't
>>> overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
>>> directly at you).
>> ... yes, Ceilometer can easily handle your events and meters and store
>> them in either Elasticsearch or Gnocchi for visualisations. you just
>> need to create a new definition in our mapping files[1][2]. you will
>> definitely want to coordinate the naming of your messages. ie.
>> event_type == backup. and event_type ==
>> backup..
> 
> This isn't at all what I was referring to, actually. I was referring to
> my belief that we (the API WG, the TC, whatever...) have failed to
> properly prevent almost complete and total overlap of the Ceilometer [1]
> and Monasca [2] REST APIs.
> 
> They are virtually identical in purpose, but in frustrating
> slightly-inconsistent ways. and this means that users of the "OpenStack
> APIs" have absolutely no idea what the "OpenStack Telemetry API" really is.
> 
> Both APIs have /alarms as a top-level resource endpoint. One of them
> refers to the alarm notification with /alarms, while the other refers to
> the alarm definition with /alarms.
> 
> One API has /meters as a top-level resource endpoint. The other uses
> /metrics to mean the exact same thing.
> 
> One API has /samples as a top-level resource endpoint. The other uses
> /metrics/measurements to mean the exact same thing.
> 
> One API returns a list JSON object for list results. The other returns a
> dict JSON object with a "links" key and an "elements" key.
> 
> And the list goes on... all producing a horrible non-unified,
> overly-complicated and redundant experience for our API users.
> 

I agree with you here Jay, Monasca is a great example of failure in
having consistency across OpenStack projects.
It's a different topic but maybe a retrospective of what happened could
help our community to not reproduce the same mistakes again.

Please do not repeat this failure for other projects.
Do not duplicate efforts: if Ekko has a similar mission statement, maybe
we should avoid creating a new project and contribute to Freezer?
(I'm probably missing some technical bits so tell me if I'm wrong)

As an operator, I don't want to see 2 OpenStack projects solving the same
issue.
As a developer, I don't want to implement the same feature in 2
different projects.

If we have (again) 2 projects with the same mission statement, I think
we'll waste time & resources, and eventually isolate people working on
their own projects.

I'm sure we don't want that.
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][tempest] RuntimeError: no suitable implementation for this system thrown by monotonic.py

2016-01-27 Thread Bob Hansen

This appears to have gone away this morning. I ran clean.sh and removed
monotonic.

The next run of stack.sh installed version 0.6 of monotonic and I no longer
see this exception.
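
For anyone else hitting this, a quick sanity check (just a sketch; it relies
only on the monotonic module already shown in the traceback below) is:

# On affected systems the import itself raises:
#   RuntimeError: no suitable implementation for this system
import monotonic
print(monotonic.__file__)      # which copy is actually being used
print(monotonic.monotonic())   # prints a float if a clock source was found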

Bob Hansen
z/VM OpenStack Enablement



From:   Bob Hansen/Endicott/IBM@IBMUS
To: "openstack-dev" 
Date:   01/26/2016 05:21 PM
Subject:[openstack-dev] [devstack][tempest] RuntimeError: no suitable
implementation for this system thrown by monotonic.py



I get this when running tempest now on a devstack I installed today. I did
not have this issue on a devstack installation I did a few days ago. This
is on ubuntu 14.04 LTS.

Everything else on this system seems to be working just fine. Only
run_tempest throws this exception.

./run_tempest.sh -s; RuntimeError: no suitable implementation for this
system.

A deeper look finds this in key.log

2016-01-26 20:07:53.991616 10461 INFO keystone.common.wsgi
[req-a184a559-91d4-4f87-b36c-5b5c1c088a4c - - - - -] POST
http://127.0.0.1:5000/v2.0/tokens
2016-01-26 20:07:58.000373 mod_wsgi (pid=10460): Target WSGI script
'/usr/local/bin/keystone-wsgi-public' cannot be loaded as Python module.
2016-01-26 20:07:58.008776 mod_wsgi (pid=10460): Exception occurred
processing WSGI script '/usr/local/bin/keystone-wsgi-public'.
2016-01-26 20:07:58.009147 Traceback (most recent call last):
2016-01-26 20:07:58.009308 File "/usr/local/bin/keystone-wsgi-public", line
6, in 
2016-01-26 20:07:58.019625 from keystone.server.wsgi import
initialize_public_application
2016-01-26 20:07:58.019725 File
"/opt/stack/keystone/keystone/server/wsgi.py", line 28, in 
2016-01-26 20:07:58.029498 from keystone.common import config
2016-01-26 20:07:58.029561 File
"/opt/stack/keystone/keystone/common/config.py", line 18, in 
2016-01-26 20:07:58.090786 from oslo_cache import core as cache
2016-01-26 20:07:58.090953 File
"/usr/local/lib/python2.7/dist-packages/oslo_cache/__init__.py", line 14,
in 
2016-01-26 20:07:58.095327 from oslo_cache.core import * # noqa
2016-01-26 20:07:58.095423 File
"/usr/local/lib/python2.7/dist-packages/oslo_cache/core.py", line 42, in

2016-01-26 20:07:58.097721 from oslo_log import log
2016-01-26 20:07:58.097776 File
"/usr/local/lib/python2.7/dist-packages/oslo_log/log.py", line 50, in

2016-01-26 20:07:58.100863 from oslo_log import formatters
2016-01-26 20:07:58.100931 File
"/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 27,
in 
2016-01-26 20:07:58.102743 from oslo_serialization import jsonutils
2016-01-26 20:07:58.102807 File
"/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py",
line 60, in 
2016-01-26 20:07:58.124702 from oslo_utils import timeutils
2016-01-26 20:07:58.124862 File
"/usr/local/lib/python2.7/dist-packages/oslo_utils/timeutils.py", line 27,
in 
2016-01-26 20:07:58.128004 from monotonic import monotonic as now # noqa
2016-01-26 20:07:58.128047 File
"/usr/local/lib/python2.7/dist-packages/monotonic.py", line 131, in

2016-01-26 20:07:58.152613 raise RuntimeError('no suitable implementation
for this system')
2016-01-26 20:07:58.154446 RuntimeError: no suitable implementation for
this system
2016-01-26 20:09:49.247986 10464 INFO keystone.common.wsgi
[req-7c00064c-318d-419a-8aaa-0536e74db473 - - - - -] GET
http://127.0.0.1:35357/
2016-01-26 20:09:49.339477 10468 DEBUG keystone.middleware.auth
[req-4851e134-1f0c-45b0-959c-881a2b1f5fd8 - - - - -] There is either no
auth token in the request or the certificate issuer is not trusted. No auth
context will be set.
process_request /opt/stack/keystone/keystone/middleware/auth.py:171

A peek in monotonic.py finds where the exception is.

The version of monotonic is:

---
Metadata-Version: 2.0
Name: monotonic
Version: 0.5
Summary: An implementation of time.monotonic() for Python 2 & < 3.3
Home-page: https://github.com/atdt/monotonic
Author: Ori Livneh
Author-email: o...@wikimedia.org
License: Apache
Location: /usr/local/lib/python2.7/dist-packages

Any suggestions on how to get around this one?
Bug?

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread gordon chung


On 27/01/2016 10:51 AM, Jay Pipes wrote:
> On 01/27/2016 12:53 PM, gordon chung wrote:
>>> It makes for a crappy user experience. Crappier than the crappy user
>>> experience that OpenStack API users already have because we have done a
>>> crappy job shepherding projects in order to make sure there isn't
>>> overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
>>> directly at you).
>> ... yes, Ceilometer can easily handle your events and meters and store
>> them in either Elasticsearch or Gnocchi for visualisations. you just
>> need to create a new definition in our mapping files[1][2]. you will
>> definitely want to coordinate the naming of your messages. ie.
>> event_type == backup. and event_type == 
>> backup..
>
> This isn't at all what I was referring to, actually. I was referring 
> to my belief that we (the API WG, the TC, whatever...) have failed to 
> properly prevent almost complete and total overlap of the Ceilometer 
> [1] and Monasca [2] REST APIs.
>
> They are virtually identical in purpose, but in frustrating 
> slightly-inconsistent ways. and this means that users of the 
> "OpenStack APIs" have absolutely no idea what the "OpenStack Telemetry 
> API" really is.
>
> Both APIs have /alarms as a top-level resource endpoint. One of them 
> refers to the alarm notification with /alarms, while the other refers 
> to the alarm definition with /alarms.
>
> One API has /meters as a top-level resource endpoint. The other uses 
> /metrics to mean the exact same thing.
>
> One API has /samples as a top-level resource endpoint. The other uses 
> /metrics/measurements to mean the exact same thing.
>
> One API returns a list JSON object for list results. The other returns 
> a dict JSON object with a "links" key and an "elements" key.
>
> And the list goes on... all producing a horrible non-unified, 
> overly-complicated and redundant experience for our API users.
>
> Best,
> -jay
>
> [1] http://developer.openstack.org/api-ref-telemetry-v2.html
> [2] 
> https://github.com/openstack/monasca-api/blob/master/docs/monasca-api-spec.md
>
> __ 
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

... i'm aware, thus the leading dots. as i saw no suggestions in your 
message -- just a statement -- i chose to provide some 'hopefully' 
constructive comments rather than make assumptions about what you were 
hinting at. obviously, no one is able to foresee the existence of a 
project built internally within another company, let alone the api of 
said project, so i'm not sure what the proposed resolution is.

as the scope is different between ekko and freezer (same for monasca and 
telemetry according to governance voting[1]) what is the issue with 
having overlaps? bringing in CADF[2], if you define your taxonomy 
correctly, sharing a common base is fine as long as there exists enough 
granularity in how you define your resources that differentiates a 
'freezer' resource and an 'ekko' resource. that said, if they are truly 
the same, then you should probably be debating why you have two of the 
same thing instead of api.

[1] https://review.openstack.org/#/c/213183/
[2] 
https://www.dmtf.org/sites/default/files/standards/documents/DSP0262_1.0.0.pdf

-- 
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Allow YAML config

2016-01-27 Thread Alexis Lee
Vladislav Kuzmin said on Tue, Dec 15, 2015 at 04:26:32PM +0300:
> I want to specify all my options in a yaml file, because it is much more
> readable. But I must use an ini file, because oslo.log uses
> logging.config.fileConfig for reading the config file (
> https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L216)
> Why can we not use a yaml file? Can I propose a solution for that?

Vlad's patch is here:
https://review.openstack.org/#/c/259000/

He's covered logging config, I've also put one up for regular config:
https://review.openstack.org/#/c/273117/

Eventually this could help get rid of MultiStrOpt.
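
For the curious, the underlying idea (a minimal sketch, not what either patch
actually does; 'logging.yaml' is just a hypothetical file holding a standard
dictConfig schema) is that YAML parses straight into the dict that
logging.config.dictConfig already accepts, whereas fileConfig only
understands ini:

import logging.config

import yaml  # PyYAML, assumed to be available

with open('logging.yaml') as f:
    logging.config.dictConfig(yaml.safe_load(f))

logging.getLogger(__name__).info('configured from YAML')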


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support

2016-01-27 Thread Jain, Vivek
+1 on sharing/outlining the migration script from v1 to v2. It will help a lot 
of teams.

Thanks,
Vivek







On 1/26/16, 6:58 PM, "Kevin Carter"  wrote:

>I know that Neutron LBaaS V1 is still available in Liberty and functional, and 
>at this point I assume it's in Mitaka (simply judging by the code, not the actual 
>functionality). From a production standpoint I think it's safe to say we can 
>keep supporting the V1 implementation for a while; however, we'll be stuck once 
>V1 is deprecated should there not be a proper migration path for old and new 
>LBs at that time. 
>
>I'd also echo the request from Kevin for a share of some of the migration 
>scripts that have been made such that we can all benefit from the prior art 
>that has already been created. @Eichberger If it's not possible to share the 
>"proprietary" scripts outright, maybe we could get an outline of the process / 
>whitepaper on what's been done so we can work on getting the needful 
>migrations baked into Octavia proper? (/me speaking as someone with no 
>experience in Octavia nor the breadth of work that I may be asking for, however 
>I am interested in making things better for deployers, operators, developers)
>
>--
>
>Kevin Carter
>IRC: cloudnull
>
>
>
>From: Fox, Kevin M 
>Sent: Tuesday, January 26, 2016 5:38 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support
>
>That's very very unfortunate. :/ Lbaas team, (or any other team), please never 
>do this again. :/
>
>so does liberty/mitaka at least support using the old v1? it would be nice to 
>have a different flag day to upgrade the load balancers than the upgrade day 
>to get from kilo to release next...
>
>Any chance you can share your migration scripts? I'm guessing we're not the 
>only two clouds that need to migrate things.
>
>hmm Would it be possible to rename the tables to something else and tweak 
>a few lines of code so they could run in parallel? Or is there a deeper 
>incompatibility than just the same table schema being interpreted differently?
>
>Thanks,
>Kevin
>
>From: Eichberger, German [german.eichber...@hpe.com]
>Sent: Tuesday, January 26, 2016 1:39 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support
>
>Hi,
>
>As Brandon pointed out you can’t run V1 and V2 at the same time because they 
>share the same database tables and interpret columns differently. Hence, at 
>HPE we have some proprietary script which takes the V1 database tables and 
>migrates them to the V2 format. After that the v2 agent based driver will pick 
>it up and create those load balancers.
>
>To migrate the agent based driver to Octavia we are thinking of self-migration, 
>since people can use the same (ansible) scripts and point them at Octavia.
>
>Thanks,
>German
>
>
>
>On 1/26/16, 12:40 PM, "Fox, Kevin M"  wrote:
>
>>I assumed they couldn't run on the same host, but would work on different 
>>hosts. maybe I was wrong?
>>
>>I've got a production cloud that's heavily using v1. Having a flag day where 
>>we upgrade all from v1 to v2 might be possible, but will be quite painful. If 
>>they can be made to co'exist, that would be substantially better.
>>
>>Thanks,
>>Kevin
>>
>>From: Brandon Logan [brandon.lo...@rackspace.com]
>>Sent: Tuesday, January 26, 2016 12:19 PM
>>To: openstack-dev@lists.openstack.org
>>Subject: Re: [openstack-dev] [openstack-ansible] LBaaSv2 / Octavia support
>>
>>Oh lbaas versioning was a big deal in the beginning.  Versioning an
>>advanced service is a whole other topic and exposed many "interesting"
>>issues with the neutron extension and service plugin framework.
>>
>>The reason v1 and v2 cannot be run together is mainly to get over an
>>issue we had with the 2 different agents which would have caused a much
>>larger refactor.  The v1 OR v2 requirement was basically a hack to get
>>around that.  Now that Octavia is the reference implementation and the
>>default, relaxing this restriction shouldn't cause any problems really.
>>Although, I don't want to 100% guarantee that because it's been a while
>>since I was in that world.
>>
>>If that were relaxed, the v2 agent and v1 agent could still be run at
>>the same time which is something to think about.  Come to think about
>>it, we might want to revisit whether the v2 and v1 agent running
>>together is something that can be easily fixed because many things have
>>improved since then AND my knowledge has obviously improved a lot since
>>that time.
>>
>>Glad yall brought this up.
>>
>>Thanks,
>>Brandon
>>
>>
>>On Tue, 2016-01-26 at 14:07 -0600, Major Hayden wrote:
>>> On 01/26/2016 02:01 PM, Fox, Kevin M wrote:
>>> > I believe lbaas v1 and v2 are different then every other openstack 

Re: [openstack-dev] glance community image visibility blue print

2016-01-27 Thread Flavio Percoco

On 27/01/16 11:02 -0800, Su Zhang wrote:



Hello everyone,

I created another BP for Newton release regarding glance community image
visibility based on Louis Taylor's original BP. I will continue working on this
feature. Before I upload our implementation, it would be wonderful if anyone
in the glance community could review and approve the BP. It can be found at
https://review.openstack.org/#/c/271019/



Hey Su,

Thanks a lot for your proposal. The team will review.

Meanwhile, allow me to ask you to avoid sending review requests to this mailing
list. Each team reviews patches and specs based on their own schedule.

Thanks again,
Flavio


Look forward to receiving your reviews.

Thanks,

Su Zhang
Symantec Corporation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Flavio Percoco

On 27/01/16 12:16 -0500, Emilien Macchi wrote:



On 01/27/2016 10:51 AM, Jay Pipes wrote:

On 01/27/2016 12:53 PM, gordon chung wrote:

It makes for a crappy user experience. Crappier than the crappy user
experience that OpenStack API users already have because we have done a
crappy job shepherding projects in order to make sure there isn't
overlap between their APIs (yes, Ceilometer and Monasca, I'm looking
directly at you).

... yes, Ceilometer can easily handle your events and meters and store
them in either Elasticsearch or Gnocchi for visualisations. you just
need to create a new definition in our mapping files[1][2]. you will
definitely want to coordinate the naming of your messages. ie.
event_type == backup. and event_type ==
backup..


This isn't at all what I was referring to, actually. I was referring to
my belief that we (the API WG, the TC, whatever...) have failed to
properly prevent almost complete and total overlap of the Ceilometer [1]
and Monasca [2] REST APIs.

They are virtually identical in purpose, but in frustrating
slightly-inconsistent ways. and this means that users of the "OpenStack
APIs" have absolutely no idea what the "OpenStack Telemetry API" really is.

Both APIs have /alarms as a top-level resource endpoint. One of them
refers to the alarm notification with /alarms, while the other refers to
the alarm definition with /alarms.

One API has /meters as a top-level resource endpoint. The other uses
/metrics to mean the exact same thing.

One API has /samples as a top-level resource endpoint. The other uses
/metrics/measurements to mean the exact same thing.

One API returns a list JSON object for list results. The other returns a
dict JSON object with a "links" key and an "elements" key.

And the list goes on... all producing a horrible non-unified,
overly-complicated and redundant experience for our API users.



I agree with you here Jay, Monasca is a great example of failure in
having consistency across OpenStack projects.
It's a different topic but maybe a retrospective of what happened could
help our community to not reproduce the same mistakes again.

Please do not repeat this failure for other projects.
Do not duplicate efforts: if Ekko has a similar mission statement, maybe
we should avoid creating a new project and contribute to Freezer?
(I'm probably missing some technical bits so tell me if I'm wrong)


FWIW, the current governance model does not prevent competition. That's not to
be understood as us encouraging it, but rather that there could be services with
some level of overlap that are still worth keeping separate.

What Jay is referring to is that regardless of whether the projects do similar
things, the same thing or totally different things, we should strive to have
distinct APIs. The APIs shouldn't overlap in terms of endpoints and the way they
are exposed.

With all that said, I'd like to encourage collaboration over competition and I'm
sure both teams will find a way to make this work.

Cheers,
Flavio


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-01-27 Thread Joshua Harlow

pn kk wrote:

Hi,

Thanks for the responses. Putting it in a small example

def flow_factory(tmp):
 return lf.Flow('resume from backend example').add(
 TestTask(name='first', test=tmp),
 InterruptTask(name='boom'),
 TestTask(name='second', test="second task"))


class TestTask(task.Task):
 def __init__(self, name, provides=None, test, **kwargs):
 self.test=test
 super(TestTask, self).__init__(name, provides, **kwargs)
 def execute(self, *args, **kwargs):
 print('executing %s' % self)
 return 'ok'

class InterruptTask(task.Task):
 def execute(self, *args, **kwargs):
 # DO NOT TRY THIS AT HOME
 engine.suspend()

I was searching for a way in which I can reload the flow after crash
without passing the parameter "tmp" shown above
Looks like "load_from_factory" gives this provision.


Thanks for the example, ya, this is one such way to do this that u have 
found, there are a few other ways, but that is the main one that people 
should be using.





engine = taskflow.engines.load_from_factory(flow_factory=flow_factory,
factory_kwargs={"tmp":"test_data"}, book=book, backend=backend)
engine.run()

Now it suspends after running interrupt task, I can now reload the flow
from the saved factory method without passing parameter again.
for flow_detail_2 in book:
 engine2 = taskflow.engines.load_from_detail(flow_detail_2,
backend=backend)

engine2.run()

Let me know if this is ok or is there a better approach to achieve this?


There are a few other ways, but this one should be the currently 
recommended one.


An idea, do u want to maybe update (submit a review to update) the docs, 
if u didn't find this very easy to figure out so that others can more 
easily figure it out. I'm sure that would be appreciated by all.




-Thanks


On Wed, Jan 27, 2016 at 12:03 AM, Joshua Harlow > wrote:

Hi there,

Michał is correct, it should be saved.

Do u have a small example of what u are trying to do because that
will help determine if what u are doing will be saved or whether it
will not be.

Or even possibly explaining what is being done would be fine to
(more data/info for me to reason about what should be stored in your
case).

Thanks,

Josh


Michał Dulko wrote:

On 01/26/2016 10:23 AM, pn kk wrote:

Hi,

I use taskflow for job management and now trying to persist
the state
of flows/tasks in mysql to recover incase of process crashes.

I could see the state and the task results stored in the
database.

Now I am looking for some way to store the input parameters
of the tasks.

Please share your inputs to achieve this.

-Thanks

I've played with that some time ago and if I recall correctly input
parameters should be available in the flow's storage, which
means these
are also saved to the DB. Take a look on resume_workflows method
on my
old PoC [1] (hopefully TaskFlow haven't changed much since then).

[1]
https://review.openstack.org/#/c/152200/4/cinder/scheduler/manager.py


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] glance community image visibility blue print

2016-01-27 Thread Su Zhang


Hello everyone,

I created another BP for Newton release regarding glance community image
visibility based on Louis Taylor's original BP. I will continue working on this
feature. Before I upload our implementation, it would be wonderful if anyone
in the glance community could review and approve the BP. It can be found at
https://review.openstack.org/#/c/271019/


Look forward to receiving your reviews.

Thanks,

Su Zhang
Symantec Corporation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Impact of HPE Helion Public Cloud Sunset on OpenStack Infrastructure

2016-01-27 Thread Cody A.W. Somerville
The HPE Helion Public Cloud is one of several OpenStack public clouds that
generously donate compute, network, and storage resources to power the
OpenStack Developer Infrastructure. As you may know, HPE is sunsetting the
HPE Helion Public Cloud on January 31st 2016[1]. Use of HPE Helion Public
Cloud resources by the OpenStack Infrastructure system will be discontinued
this Friday, January 29th. This will have an impact on the number of
compute nodes available to run gate tests and as such developers may
experience longer wait times.

Efforts are underway to launch an OpenStack cloud managed by the OpenStack
infrastructure team with hardware generously donated by Hewlett-Packard
Enterprise. Among other outcomes, the intention is for this private cloud
to provide a similar level of capacity for use by the infrastructure system
in lieu of the HPE Helion Public Cloud. The infrastructure team is holding
an in-person sprint to focus on this initiative next month in Fort Collins
CO February 22nd - 25th[2][3]. For more information about the "infra-cloud"
project, please see
http://docs.openstack.org/infra/system-config/infra-cloud.html

If your organization is interested in donating public cloud resources,
please see
http://docs.openstack.org/infra/system-config/contribute-cloud.html

Last, but not least, a huge thank you to HPE for their support and
continued commitment to the OpenStack developer infrastructure project.

[1]
http://community.hpe.com/t5/Grounded-in-the-Cloud/A-new-model-to-deliver-public-cloud/ba-p/6804409#.VqgUMF5VKlM
[2]
http://lists.openstack.org/pipermail/openstack-infra/2015-December/003554.html
[3] https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint

-- 
Cody A.W. Somerville
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Stepping down from Puppet Core

2016-01-27 Thread Mathieu Gagné
Hi,

I would like to ask to be removed from the core reviewers team on the
Puppet for OpenStack project.

My day to day tasks and focus no longer revolve solely around Puppet and
I lack dedicated time to contribute to the project.

In the past months, I stopped actively reviewing changes compared to
what I used to at the beginning when the project was moved to
StackForge. Community code of conduct suggests I step down
considerately. [1]

I'm very proud of what the project managed to achieve in the past
months. It would be a disservice to the community to pretend I'm still
able or have time to review changes. A lot changed since and I can no
longer keep up or pretend I can review changes pedantically or
efficiently as I used to.

Today is time to formalize and face the past months reality by
announcing my wish to be removed from the core reviewers team.

I will be available to answer questions or move ownership of anything I
still have under my name.

Wishing you the best.

Mathieu

[1] http://www.openstack.org/legal/community-code-of-conduct/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] service type vs. project name for use in headers

2016-01-27 Thread Dean Troyer
On Wed, Jan 27, 2016 at 1:47 PM, michael mccune  wrote:

> i am not convinced that we would ever need to have a standard on how these
> names are chosen for the header values, or if we would even need to have
> header names that could be deduced. for me, it would be much better for the
> projects to use an identifier that makes sense to them, *and* for each project
> to have good api documentation.
>

I think we would be better served in selecting these things by thinking about
the API consumers first.  We already have enough for them to wade through;
the API-WG is making great gains in herding those particular cats, and I would
hate to see us give back some of that here.


> so, instead of using examples where we have header names like
> "OpenStack-Some-[SERVICE_TYPE]-Header", maybe we should suggest
> "OpenStack-Some-[SERVICE_TYPE or PROJECT_NAME]-Header" as our guideline.
>

I think the listed reviews have it right, only referencing service type.
We have attempted to reduce the visible surface area of project names in a
LOT of areas, I do not think this is one that needs to be an exception to
that.

Projects will do what they are going to do, sometimes in spite of
guidelines.  This does not mean that the guidelines need to bend to match
that practice when it is at odds with larger concerns.

In this case, the use of service type as the primary identifier for
endpoints and API services is well established, and is how the service
catalog has and will always work.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from Puppet Core

2016-01-27 Thread Emilien Macchi


On 01/27/2016 03:13 PM, Mathieu Gagné wrote:
> Hi,
> 
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
> 
> My day to day tasks and focus no longer revolve solely around Puppet and
> I lack dedicated time to contribute to the project.
> 
> In the past months, I stopped actively reviewing changes compared to
> what I used to at the beginning when the project was moved to
> StackForge. Community code of conduct suggests I step down
> considerately. [1]
> 
> I'm very proud of what the project managed to achieve in the past
> months. It would be a disservice to the community to pretend I'm still
> able or have time to review changes. A lot changed since and I can no
> longer keep up or pretend I can review changes pedantically or
> efficiently as I used to.

> Today is time to formalize and face the past months reality by
> announcing my wish to be removed from the core reviewers team.
> 
> I will be available to answer questions or move ownership of anything I
> still have under my name.
> 
> Wishing you the best.

Mathieu, I would like to personally thank you for your mentoring in my
early Puppet times ;-)

You took on any work conscientiously, you helped to set up a lot of
conventions, and you were painstaking in your reviews and code.
That's something very precious in a team and I'm very happy you were
on-board all those years.

I wish you all the best in your day2day work, and feel free to kick our
asses in Gerrit if you see something wrong ;-)

Merci,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] documenting configuration option segregation between services and agents

2016-01-27 Thread Dustin Lundquist
We should expand the services_and_agents devref to describe how and why
configuration options should be segregated between services and agents. I
stumbled into this recently while trying to remove a confusing duplicate
configuration option [1][2][3]. The present separation appears to be
'tribal knowledge', and not consistently enforced. So I'll take a shot at
explaining the status quo as I understand it and hopefully some seasoned
contributors can fill in the gaps.

=BEGIN PROPOSED DEVREF SECTION=
Configuration Options
-

In addition to database access, configuration options are segregated
between neutron-server and agents. Both services and agents may load the
main neutron.conf, since this file should contain the Oslo messaging
configuration for internal Neutron RPCs and may contain host-specific
configuration such as file paths. In addition, neutron.conf contains the
database, keystone and nova credentials and endpoints strictly for use by
neutron-server.

In addition, neutron-server may load a plugin-specific configuration file,
yet the agents should not. As the plugin configuration is primarily site-wide
options and the plugin provides the persistence layer for Neutron, agents
should be instructed to act upon these values via RPC.

Each individual agent may have its own configuration file. This file should
be loaded after the main neutron.conf file, so the agent configuration
takes precedence. The agent-specific configuration may contain options
which vary between hosts in a Neutron deployment, such as the
external_network_bridge for an L3 agent. If any agent requires access to
additional external services beyond the Neutron RPC, those endpoints should
be defined in the agent-specific configuration file (e.g. nova metadata for
the metadata agent).


==END PROPOSED DEVREF SECTION==
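
As an aside (not part of the proposed text above): the precedence described
here is just standard oslo.config behaviour, where an option set in a later
--config-file overrides the same option from an earlier one. A minimal,
purely illustrative sketch of what an agent effectively does (the paths are
examples only):

    from oslo_config import cfg

    # an agent parses the shared file first and its own file second, so any
    # option defined in both is taken from the agent-specific file
    conf = cfg.ConfigOpts()
    conf(['--config-file', '/etc/neutron/neutron.conf',
          '--config-file', '/etc/neutron/l3_agent.ini'],
         project='neutron')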

Disclaimers: this description is informed by my own experiences reading
existing documentation and examining example configurations, including
various devstack deployments. I've tried to use RFC-style wording: should,
may, etc. I'm relatively confused on this subject, and my goal in writing
this is to obtain some clarity myself and share it with others in the form
of documentation.


[1] https://review.openstack.org/262621
[2] https://bugs.launchpad.net/neutron/+bug/1523614
[3] https://review.openstack.org/268153
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Impact of HPE Helion Public Cloud Sunset on OpenStack Infrastructure

2016-01-27 Thread Flavio Percoco

On 27/01/16 14:26 -0500, Cody A.W. Somerville wrote:

The HPE Helion Public Cloud is one of several OpenStack public clouds that
generously donate compute, network, and storage resources to power the
OpenStack Developer Infrastructure. As you may know, HPE is sunsetting the HPE
Helion Public Cloud on January 31st 2016[1]. Use of HPE Helion Public Cloud
resources by the OpenStack Infrastructure system will be discontinued this
Friday, January 29th. This will have an impact on the number of compute nodes
available to run gate tests and as such developers may experience longer wait
times.

Efforts are underway to launch an OpenStack cloud managed by the OpenStack
infrastructure team with hardware generously donated by Hewlett-Packard
Enterprise. Among other outcomes, the intention is for this private cloud to
provide a similar level of capacity for use by the infrastructure system in
lieu of the HPE Helion Public Cloud. The infrastructure team is holding an
in-person sprint to focus on this initiative next month in Fort Collins CO
February 22nd - 25th[2][3]. For more information about the "infra-cloud"
project, please see http://docs.openstack.org/infra/system-config/
infra-cloud.html



All the above (except the HPE cloud being shut down) is awesome! Thanks for the
hard work.


If your organization is interested in donating public cloud resources, please
see http://docs.openstack.org/infra/system-config/contribute-cloud.html

Last, but not least, a huge thank you to HPE for their support and continued
commitment to the OpenStack developer infrastructure project.



+1^gazillion

Thanks a lot to everyone at HPE who worked hard on keeping the cloud up and for
donating hardware. I wish you all the best in whatever new adventure HPE is
going to jump into.

Flavio


[1] http://community.hpe.com/t5/Grounded-in-the-Cloud/
A-new-model-to-deliver-public-cloud/ba-p/6804409#.VqgUMF5VKlM
[2] http://lists.openstack.org/pipermail/openstack-infra/2015-December/
003554.html
[3] https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint

--
Cody A.W. Somerville



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] service type vs. project name for use in headers

2016-01-27 Thread michael mccune

hi all,

there have been a few reviews recently where the issue of service type 
versus project name has come up for use in the headers. as usual this 
conversation can get quite murky, as there are several good examples 
where service type alone is not sufficient (for example if a service 
exposes several api controllers), and as has been pointed out, project 
name can also be problematic (for example, projects can change name).


i'm curious if we could come to a consensus regarding the use of service 
type *or* project name for headers. i propose leaving the ultimate 
decision up to the projects involved to choose the most appropriate 
identifier for their custom headers.


i am not convinced that we would ever need to have a standard on how 
these names are chosen for the header values, or if we would even need 
to have header names that could be deduced. for me, it would be much 
better for the projects to use an identifier that makes sense to them, 
*and* for each project to have good api documentation.


so, instead of using examples where we have header names like 
"OpenStack-Some-[SERVICE_TYPE]-Header", maybe we should suggest 
"OpenStack-Some-[SERVICE_TYPE or PROJECT_NAME]-Header" as our guideline.

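to make that concrete, a purely illustrative instance of the two styles for 
some hypothetical header (neither name is taken from an actual review) would 
be something like:

    OpenStack-Some-Compute-Header   (service type)
    OpenStack-Some-Nova-Header      (project name)
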

for reference, here are the current reviews that are circling around 
this issue:


https://review.openstack.org/#/c/243429
https://review.openstack.org/#/c/273158
https://review.openstack.org/#/c/243414

and one that has already been merged:

https://review.openstack.org/#/c/196918

thoughts?

regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Sam Yaple
On Wed, Jan 27, 2016 at 7:06 PM, Flavio Percoco  wrote:
>
> FWIW, the current governance model does not prevent competition. That's
> not to
> be understood as we encourage it but rather than there could be services
> with
> some level of overlap that are still worth being separate.
>
> What Jay is referring to is that regardless the projects do similar
> things, the
> same or totally different things, we should strive to have different APIs.
> The
> API shouldn't overlap in terms of endpoints and the way they are exposed.
>
> With all that said, I'd like to encourage collaboration over competition
> and I'm
> sure both teams will find a way to make this work.
>
> Cheers,
> Flavio


And to come full circle on this thread, I will point out once again there
is no competition between Ekko and Freezer at this time. Freezer is
file-level backup where Ekko is block-level backup. Anyone familiar with
backups knows these are drastically different types of backups. Those using
block-level backups typically won't be using file-level backups and
vice-versa. That said, even if there is no convergence of Freezer and Ekko
 they can still live side-by-side without any conflict at all.

As of now, Ekko and Freezer teams have started a dialogue and we will
continue to collaborate rather than compete in every way that is reasonable
for both projects.

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Ekko -- Scalable block-based backup for OpenStack

2016-01-27 Thread Fausto Marzi
Hi Sam,

After our conversation, I have a few questions and considerations about Ekko,
mainly on how it works and similar topics, and to make our discussions
available to the community:

-  I understand you are placing a backup-agent on the compute node
and executing actions that interact directly with the hypervisor. I'm thinking
that while Ekko executes these actions, the Nova service has no visibility
whatsoever of this. I do not think it is a good idea to execute actions
directly on the hypervisor without interacting with the Nova API.

-  In your assumptions, you said that Nova snapshot creation
generates VM downtime. I don't think that assumption is correct, at least
in Kilo, Liberty and Mitaka. The only downtime you may have related to the
snapshot is when you merge the snapshot back into the original root image,
and that is not our case here.

-  How would the restore work? If you do a restore of the VM and
the record of that VM instance is not available in the Nova DB (i.e.
restoring a VM on a newly installed OpenStack cloud, or in another region,
or after a VM has been destroyed), what would happen? How do you manage the
consistency of the data between the Nova DB and the VM status?

-  If you execute a backup of the VM image file without executing a
backup of the related VM metadata information (in the shortest time frame
possible), there is a chance the backup will be inconsistent.

- How would the restore happen if, at that moment, Keystone or Swift is not
available?

-  Does the backup that Ekko executes generate a bootable image? If
not, the image is not usable and the restore process will take longer in
order to execute the steps needed to make the image bootable.

-   I do not see any advantage in Ekko over using the Nova API to
snapshot -> generate an image -> upload to Glance -> upload to Swift.

-  The Ekko approach is limited to Nova with KVM/QEMU and a
qemu-agent running on the VM. I think the scope is probably a bit limited.
This is more a feature than a tool in itself, and I think the problem is
already being solved more efficiently.

-  By executing all the actions related to backup (i.e.
compression, incremental computation, upload, I/O and segmented upload to
Swift), Ekko adds a significant load to the compute nodes. All the work
is done on the hypervisor and not taken into account by Ceilometer (or
similar), so for example it is not billable. I do not think this is a good
idea, as distributing the load over multiple components helps OpenStack to
scale, and by leveraging the existing APIs you integrate better with
existing tools.

-  There’s no documentation whatsoever provided with Ekko. I had to
read the source code, have conversations directly with you and invest
significant time on it. I think provide some documentation is helpful, as
the doc link in the openstack/ekko repo return 404 Not Found.

Please let me know what your thoughts are on this.

Thanks,
Fausto


On Wed, Jan 27, 2016 at 1:55 PM, Sam Yaple  wrote:

> On Wed, Jan 27, 2016 at 7:06 PM, Flavio Percoco  wrote:
>>
>> FWIW, the current governance model does not prevent competition. That's
>> not to
>> be understood as we encourage it but rather than there could be services
>> with
>> some level of overlap that are still worth being separate.
>>
>> What Jay is referring to is that regardless the projects do similar
>> things, the
>> same or totally different things, we should strive to have different
>> APIs. The
>> API shouldn't overlap in terms of endpoints and the way they are exposed.
>>
>> With all that said, I'd like to encourage collaboration over competition
>> and I'm
>> sure both teams will find a way to make this work.
>>
>> Cheers,
>> Flavio
>
>
> And to come full circle on this thread, I will point out once again there
> is no competition between Ekko and Freezer at this time. Freezer is
> file-level backup where Ekko is block-level backup. Anyone familiar with
> backups knows these are drastically different types of backups. Those using
> block-level backups typically won't be using file-level backups and
> vice-versa. That said, even if there is no convergence of Freezer and Ekko
>  they can still live side-by-side without any conflict at all.
>
> As of now, Ekko and Freezer teams have started a dialogue and we will
> continue to collaborate rather than compete in every way that is reasonable
> for both projects.
>
> Sam Yaple
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [puppet] Stepping down from Puppet Core

2016-01-27 Thread Matt Fischer
Mathieu,

Thank you for all the work you've done over the past few years in this
community. You've accomplished a lot, and you've also done a lot to help
answer questions and mentor new folks.

On Wed, Jan 27, 2016 at 1:13 PM, Mathieu Gagné  wrote:

> Hi,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> My day to day tasks and focus no longer revolve solely around Puppet and
> I lack dedicated time to contribute to the project.
>
> In the past months, I stopped actively reviewing changes compared to
> what I used to at the beginning when the project was moved to
> StackForge. Community code of conduct suggests I step down
> considerately. [1]
>
> I'm very proud of what the project managed to achieve in the past
> months. It would be a disservice to the community to pretend I'm still
> able or have time to review changes. A lot changed since and I can no
> longer keep up or pretend I can review changes pedantically or
> efficiently as I used to.
>
> Today is time to formalize and face the past months reality by
> announcing my wish to be removed from the core reviewers team.
>
> I will be available to answer questions or move ownership of anything I
> still have under my name.
>
> Wishing you the best.
>
> Mathieu
>
> [1] http://www.openstack.org/legal/community-code-of-conduct/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] service type vs. project name for use in headers

2016-01-27 Thread Ryan Brown

On 01/27/2016 03:31 PM, Dean Troyer wrote:

On Wed, Jan 27, 2016 at 1:47 PM, michael mccune > wrote:

i am not convinced that we would ever need to have a standard on how
these names are chosen for the header values, or if we would even
need to have header names that could be deduced. for me, it would be
much better for the projects use an identifier that makes sense to
them, *and* for each project to have good api documentation.


I think we would be better served in selecting these things by thinking
about the API consumers first. We already have enough for them to wade
through; the API-WG is making great gains in herding those particular
cats, and I would hate to see us give back some of that here.

so, instead of using examples where we have header names like
"OpenStack-Some-[SERVICE_TYPE]-Header", maybe we should suggest
"OpenStack-Some-[SERVICE_TYPE or PROJECT_NAME]-Header" as our guideline.


I think the listed reviews have it right, only referencing service
type.  We have attempted to reduce the visible surface area of project
names in a LOT of areas; I do not think this is one that needs to be an
exception to that.


+1, I prefer service type over project name. Among other benefits, it 
leaves room for multiple implementations without being totally baffling 
to consumers.



Projects will do what they are going to do, sometimes in spite of
guidelines.  This does not mean that the guidelines need to bend to
match that practice when it is at odds with larger concerns.

In this case, the use of service type as the primary identifier for
endpoints and API services is well established, and is how the service
catalog has always worked and always will.

dt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] enable voting for integration jobs

2016-01-27 Thread Emilien Macchi
Hi,

Puppet OpenStack Integration jobs [1] have been around for some months now.
They seem pretty stable when rubygems or EPEL is not down ;-)

I'm sure they will fail from time to time, like other jobs, but 'recheck'
will do the job if we need to kick off another run.

We might want to enable the jobs in the gate queue for some reasons:
* avoid regressions when patches do not pass the jobs; having them
voting will make sure we don't break anything.
* see job results in OpenStack Health [2]
* gate our modules in a shared change queue so depends-on will
automatically work in the gate across shared change queues.

We know we regularly have timeout issues when cloning modules or pulling
gems, but I don't think that should be a blocker. OpenStack Infra is
already working on getting a Gem mirror [3], so it's on the roadmap.


Feedback is welcome here,

[1] https://github.com/openstack/puppet-openstack-integration
[2] http://status.openstack.org/openstack-health/#/
[3] https://review.openstack.org/253616
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Sean M. Collins
Hi,

I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661

We have radvd processes that the l3 agent launches, and if the l3 agent
is terminated these radvd processes continue to run. I think we should
probably terminate them when the l3 agent is terminated, like if we are
in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
side but I'm waffling a bit on if it's the right thing to do or not[2].

The only concern I have is situations where the l3 agent terminates but we
don't want data plane disruption. For example, if something goes wrong and
the L3 agent dies: if the OS sends a SIGABRT (which my WIP patch doesn't
catch[3], so radvd would continue to run), or if a SIGTERM is issued, or
worse, an OOM event occurs (though I believe the OOM killer actually sends
SIGKILL, which can't be caught) and you get an outage.

[1]: 
https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L767

[2]: https://review.openstack.org/269560

[3]: https://review.openstack.org/273228
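
To illustrate the kind of cleanup I have in mind, here is a rough, purely
illustrative sketch in plain Python (not the actual agent code or my patch):

    import signal
    import subprocess
    import sys

    # remember the helper daemons (e.g. radvd) we spawn so the parent can
    # reap them when it is asked to stop
    children = []

    def spawn(cmd):
        proc = subprocess.Popen(cmd)
        children.append(proc)
        return proc

    def cleanup(signum, frame):
        for proc in children:
            if proc.poll() is None:  # still running
                proc.terminate()
        sys.exit(0)

    # SIGTERM/SIGINT can be caught and trigger the cleanup; SIGKILL never can
    signal.signal(signal.SIGTERM, cleanup)
    signal.signal(signal.SIGINT, cleanup)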
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-01-27 Thread Arun SAG
Hi Flavio,

On Wed, Jan 27, 2016 at 4:50 AM, Flavio Percoco  wrote:
> [snip]
> However, as a community, I think we should send a clear message and protect 
> our users and, in this case, the best way
> is to avoid adding this format as supported.
>

To address some of the concerns I have added a security impact
statement to the spec:

1. Ironic doesn't unpack the OS tarball; it will be unpacked on the
target node in a ramdisk using the tar utility (tar -avxf).
2. The moment you allow an untrusted OS image to be deployed, the
expected security is none. An adversary
doesn't need to manipulate the extraction of the tarball to gain
access in that case.
3. In Docker the vulnerability is high because a vulnerable container
can infect the host system.
4. I understand the concerns with the conversion APIs, and they are
valid. Please feel free to not support tar as a conversion target.




-- 
Arun S A G
http://zer0c00l.in/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] Add TOSCA assets to the catalog

2016-01-27 Thread Sahdev P Zala
Hello, 

I am looking at this blueprint 
https://blueprints.launchpad.net/app-catalog/+spec/add-tosca-assets and am 
confused about the first task, "define metadata for TOSCA assets". Can 
someone please provide an example of the metadata from the previous work on 
Heat and Murano? I tried to find an old patch for reference but couldn't 
find one. 


TOSCA will provide a YAML template file and a package in CSAR form 
(.csar and .zip) to host on the catalog. 


Thanks!

Regards, 
Sahdev Zala


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] Add TOSCA assets to the catalog

2016-01-27 Thread Steve Gordon
- Original Message -
> From: "Sahdev P Zala" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Hello,
> 
> I am looking at this blueprint
> https://blueprints.launchpad.net/app-catalog/+spec/add-tosca-assets and
> confused about the first task, "define metadata for TOSCA assets"? Can
> someone please provide example of metadata for previous work on Heat and
> Murano? I tried to find some old patch for reference but couldn't get one.
> 
> 
> The TOSCA will provide YAML template file and package in a CSAR form
> (.csar and .zip) to host on catalog.
> 
> 
> Thanks!
> 
> Regards,
> Sahdev Zala

I *believe* it's referring to the metadata for the asset type; if you look in 
the schema here:


http://git.openstack.org/cgit/openstack/app-catalog/tree/openstack_catalog/web/static/assets.schema.yaml

...you will find definitions for heat, glance, and murano assets indicating 
required properties etc.

-Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from Puppet Core

2016-01-27 Thread Colleen Murphy
On Wed, Jan 27, 2016 at 12:35 PM, Emilien Macchi  wrote:

>
>
> On 01/27/2016 03:13 PM, Mathieu Gagné wrote:
> > Hi,
> >
> > I would like to ask to be removed from the core reviewers team on the
> > Puppet for OpenStack project.
> >
> > My day to day tasks and focus no longer revolve solely around Puppet and
> > I lack dedicated time to contribute to the project.
> >
> > In the past months, I stopped actively reviewing changes compared to
> > what I used to at the beginning when the project was moved to
> > StackForge. Community code of conduct suggests I step down
> > considerately. [1]
> >
> > I'm very proud of what the project managed to achieve in the past
> > months. It would be a disservice to the community to pretend I'm still
> > able or have time to review changes. A lot changed since and I can no
> > longer keep up or pretend I can review changes pedantically or
> > efficiently as I used to.
>
> > Today is time to formalize and face the past months reality by
> > announcing my wish to be removed from the core reviewers team.
> >
> > I will be available to answer questions or move ownership of anything I
> > still have under my name.
> >
> > Wishing you the best.
>
> Mathieu, I would like to personally thank you for your mentoring in my
> early Puppet times ;-)
>
I would echo this as well :)

>
> You took on every piece of work conscientiously, you helped to set up a lot
> of conventions, and you were painstaking in your reviews and code.
> That's something very precious in a team and I'm very happy you were
> on-board all those years.
>
> I wish you all the best in your day2day work, and feel free to kick our
> asses in Gerrit if you see something wrong ;-)
>
> Merci,
> --
> Emilien Macchi
>
Thank you for all your work on this project. I know that a large part of
why this project has been so successful is because of your guidance.

Colleen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 4:10 PM, Sean M. Collins  wrote:
> Hi,
>
> I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661
>
> We have radvd processes that the l3 agent launches, and if the l3 agent
> is terminated these radvd processes continue to run. I think we should
> probably terminate them when the l3 agent is terminated, like if we are
> in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
> side but I'm waffling a bit on if it's the right thing to do or not[2].
>
> The only concern I have is if there are situations where the l3 agent
> terminates, but we don't want data plane disruption. For example, if
> something goes wrong and the L3 agent dies, if the OS will be sending a
> SIGABRT (which my WIP patch doesn't catch[3] and radvd would continue to run) 
> or if a
> SIGTERM is issued, or worse, an OOM event occurs (I think thats a
> SIGTERM too?) and you get an outage.

RDO systemd init script for the L3 agent will send a signal 15 when
'systemctl restart neutron-l3-agent' is executed. I assume
Debian/Ubuntu do the same. It is imperative that agent restarts do not
cause data plane interruption. This has been the case for the L3 agent
for a while, and recently for the OVS agent. There's a difference
between an uninstallation (unstack.sh) and an agent restart/upgrade;
let's keep it that way :)

>
> [1]: 
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L767
>
> [2]: https://review.openstack.org/269560
>
> [3]: https://review.openstack.org/273228
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] spec-lite process for tripleo

2016-01-27 Thread Dan Prince
On Wed, 2016-01-27 at 16:21 +, Derek Higgins wrote:
> Hi All,
> 
> We briefly discussed feature tracking in this weeks tripleo meeting.
> I 
> would like to provide a way for downstream consumers (and ourselves)
> to 
> track new features as they get implemented. The main things that
> came 
> out of the discussion is that people liked the spec-lite process
> that 
> the glance team are using.
> 
> I'm proposing we would start to use the same process, essentially
> small 
> features that don't warrant a blueprint would instead have a
> wishlist 
> bug opened against them and get marked with the spec-lite tag. This
> bug 
> could then be referenced in the commit messages. For larger features 
> blueprints can still be used. I think the process documented by 
> glance[1] is a good model to follow so go read that and see what you
> think
> 
> The general feeling at the meeting was +1 to doing this[2] so I hope
> we 
> can soon start enforcing it, assuming people are still happy to
> proceed?

+1 from me

> 
> thanks,
> Derek.
> 
> [1] 
> http://docs.openstack.org/developer/glance/contributing/blueprints.ht
> ml#glance-spec-lite
> [2] 
> http://eavesdrop.openstack.org/meetings/tripleo/2016/tripleo.2016-01-
> 26-14.02.log.html
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Addressing issue of keysone token expiry during long running operations

2016-01-27 Thread Paul Carlton

Jamie

At the Nova mid-cycle, John Garbutt suggested I reach out to you 
again to progress this issue.


Thanks

On 05/01/16 10:05, Carlton, Paul (Cloud Services) wrote:

Jamie

John Garbutt suggested I follow up on this issue with you.  I understand
you may be leading the effort to address the issue of token expiry during
a long-running operation.  Nova encounters this scenario during image
snapshots and live migrations.

Is there a keystone blueprint for this issue?

Thanks




--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, 
Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Sean M. Collins
On Wed, Jan 27, 2016 at 04:24:00PM EST, Assaf Muller wrote:
> On Wed, Jan 27, 2016 at 4:10 PM, Sean M. Collins  wrote:
> > Hi,
> >
> > I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661
> >
> > We have radvd processes that the l3 agent launches, and if the l3 agent
> > is terminated these radvd processes continue to run. I think we should
> > probably terminate them when the l3 agent is terminated, like if we are
> > in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
> > side but I'm waffling a bit on if it's the right thing to do or not[2].
> >
> > The only concern I have is if there are situations where the l3 agent
> > terminates, but we don't want data plane disruption. For example, if
> > something goes wrong and the L3 agent dies, if the OS will be sending a
> > SIGABRT (which my WIP patch doesn't catch[3] and radvd would continue to 
> > run) or if a
> > SIGTERM is issued, or worse, an OOM event occurs (I think thats a
> > SIGTERM too?) and you get an outage.
> 
> RDO systemd init script for the L3 agent will send a signal 15 when
> 'systemctl restart neutron-l3-agent' is executed. I assume
> Debian/Ubuntu do the same. It is imperative that agent restarts do not
> cause data plane interruption. This has been the case for the L3 agent

But wouldn't it really be wiser to use SIGHUP to communicate the intent
to restart a process? 
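
i.e. something along these lines (purely illustrative, not actual agent
code), so that a reload/restart request is distinguishable from a real
shutdown:

    import signal

    def reload_config(signum, frame):
        # SIGHUP: re-read configuration, leave radvd and friends running
        pass  # placeholder for the agent's reload logic

    def shutdown(signum, frame):
        # SIGTERM: the agent is really going away, so reap its children too
        pass  # placeholder for the cleanup logic

    signal.signal(signal.SIGHUP, reload_config)
    signal.signal(signal.SIGTERM, shutdown)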

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack installer

2016-01-27 Thread Kevin Carter
Hello Gyorgy,

Few more responses inline:

On 01/27/2016 02:51 AM, Gyorgy Szombathelyi wrote:
>>
>> Hi Gyorgy,
>>
> Hi Kevin,
>
>> I'll definitely give this a look and thanks for sharing. I would like to ask
>> however why you found OpenStack-Anisble overly complex so much so that
>> you've taken on the complexity of developing a new installer all together? 
>> I'd
>> love to understand the issues you ran into and see what we can do in
>> upstream OpenStack-Ansible to overcome them for the greater community.
>> Being that OpenStack-Ansible is no longer a Rackspace project but a
>> community effort governed by the OpenStack Foundation I'd been keen on
>> seeing how we can simplify the deployment offerings we're currently
>> working on today in an effort foster greater developer interactions so that
>> we can work together on building the best deployer and operator
>> experience.
>>
> Basically there were two major points:
>
> - containers: we don't need it. For us, that was no real benefits to use 
> them, but just
> added unnecessary complexity. Instead of having 1 mgmt address of a 
> controller, it had
> a dozen, installation times were huge (>2 hours) with creating and updating 
> each controller,the
I can see the benefit of both a containerized and a non-containerized 
stack. This is one of the reasons we made the OSA deployment 
solution capable of doing a deployment without containers. It's really 
as simple as setting the variable "is_metal=true". While I understand 
the desire to reduce deployment times, I've found deployments a whole 
lot more flexible and stable when isolating services, especially as it 
pertains to upgrades.

> generated inventory was fragile (any time I wanted to change something in the 
> generated
> inventory, I had a high chance to break it). When I learned how to install 
> without containers,
This is true; the generated inventory can be frustrating when you're 
getting used to setting things up. I've not found it fragile when 
running prod though. Was there something you ran into on that front 
which caused you instability, or were these all learning pains?

> another problem came in: every service listens on 0.0.0.0, so haproxy can't 
> bind to the service ports.
>
As a best practice when moving clouds to production I'd NOT recommend 
running your load balancer on the same hosts as your service 
infrastructure. One terrible limitation with that kind of setup, 
especially without containers or service namespaces, is the problem that 
arises when a connection goes into a sleep-wait state while a VIP is 
failing over. This will cause imminent downtime for potentially long 
periods of time and can break things like DB replication, messaging, 
etc. This is not something you have to be aware of while you're tooling 
around, but when a deployment goes into production it's something you 
should be aware of. Fencing with Pacemaker and other things can help but 
they also bring in other issues. Having an external LB is really the way 
to go, which is why HAProxy on a controller without containers is not 
recommended. HAProxy on a VM or a standalone node works great! It's worth 
noting that in the OSA stack the bind addresses, which default to 0.0.0.0, 
can be arbitrarily set using a template override for a given service.

> - packages: we wanted to avoid mixing pip and vendor packages. Linux great 
> power was
> always the package management system. We don't have the capacity to choose 
> the right
> revision from git. Also a .deb package come with goodies, like the init 
> scripts, proper system
> users, directories, upgrade possibility and so on. Bugs can be reported 
> against .debs.
>
I apologize but I couldn't disagree with this more. We have all of the system 
goodies you'd expect running OpenStack on an Ubuntu system, like init 
scripts, proper system users, directories, etc.; we even have 
upgradability between major and minor versions. Did you find something 
that didn't work? Within the OSA project we're choosing the various 
versions from git for the deployer by default and basing every tag off of 
the stable branches as provided by the various services, so it's not like 
you had much to worry about in that regard. As for the ability to create 
bugs, I fail to see how creating a bug report on a deb from a third party 
would be more beneficial and have a faster turnaround than creating a 
bug report within a given service project, thereby interacting with its 
developers and maintainers. By going to source we're able to fix general 
bugs, CVEs, and anything else within hours, not days or weeks. Also I 
question the upgradability of the general OpenStack package ecosystem. 
As a deployer who has come from that space and knows what kinds of 
shenanigans go on in there, using both debs and rpms, I've found that 
running OpenStack clouds of various sizes for long periods of time 
becomes very difficult as packages, package dependencies, patches the 
third party is carrying, and other things change 

Re: [openstack-dev] [TripleO] changes in irc alerts

2016-01-27 Thread James Slagle
On Tue, Jan 26, 2016 at 7:15 PM, Derek Higgins  wrote:

> Hi All,
>
> For the last few months we've been alerting the #tripleo irc channel when
> a card is open on the tripleo trello org, in the urgent list.
>
> When used I think it served a good purpose to alert people to the fact
> that deploying master is currently broken, but it hasn't been used as much
> as I hoped(not to mention the duplication of sometimes needing a LB bug
> anyways). As most people are more accustomed to creating LP bugs when
> things are broken and to avoid duplication perhaps it would have been
> better to use LaunchPad to drive the alerts instead.
>
> I've changed the bot that was looking at trello to now instead look for
> bugs on launchpad (hourly), it will alert the #tripleo channel if it finds
> a bug that matches
>
> is filed against the tripleo project  AND
> has a Importance or "Critical"AND
> has the tag "alert" applied to it
>
> I brought this up in todays meeting and people were +1 on the idea, do the
> rules above work for people? if not I can change them to something more
> suitable.
>

WFM, I just filed a new critical bug[1] and added the tag, so we can see
if it works :)


[1] https://bugs.launchpad.net/tripleo/+bug/1538761

thanks,
> Derek.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-- James Slagle
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Matt Riedemann



On 1/26/2016 5:55 AM, Avishay Traeger wrote:

OK great, thanks!  I added a suggestion to the etherpad as well, and
found this link helpful: https://review.openstack.org/#/c/266095/

On Tue, Jan 26, 2016 at 1:37 AM, D'Angelo, Scott > wrote:

There is currently no simple way to clean up Cinder attachments if
the Nova node (or the instance) has gone away. We’ve put this topic
on the agenda for the Cinder mid-cycle this week:

https://etherpad.openstack.org/p/mitaka-cinder-midcycle L#113



*From:*Avishay Traeger [mailto:avis...@stratoscale.com
]
*Sent:* Monday, January 25, 2016 7:21 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* [openstack-dev] [Nova][Cinder] Cleanly detaching volumes
from failed nodes


Hi all,

I was wondering if there was any way to cleanly detach volumes from
failed nodes.  In the case where the node is up nova-compute will
call Cinder's terminate_connection API with a "connector" that
includes information about the node - e.g., hostname, IP, iSCSI
initiator name, FC WWPNs, etc.

If the node has died, this information is no longer available, and
so the attachment cannot be cleaned up properly.  Is there any way
to handle this today?  If not, does it make sense to save the
connector elsewhere (e.g., DB) for cases like these?


Thanks,

Avishay



-- 

*Avishay Traeger, PhD*

/System Architect/


Mobile: +972 54 447 1475 

E-mail: avis...@stratoscale.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
*Avishay Traeger, PhD*
/System Architect/

Mobile:+972 54 447 1475
E-mail: avis...@stratoscale.com 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I've replied on https://review.openstack.org/#/c/266095/ and the related 
cinder change https://review.openstack.org/#/c/272899/ which are adding 
a new key to the volume connector dict being passed around between nova 
and cinder, which is not ideal.


I'd really like to see us start modeling the volume connector with 
versioned objects so we can (1) tell what's actually in this mystery 
connector dict in the nova virt driver interface and (2) handle version 
compat with adding new keys to it.
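
To make that concrete, the connector today is basically an untyped dict
along the lines of the fields mentioned above (keys and values here are
purely illustrative), and the versioned-object idea would look roughly
like this hypothetical sketch:

    from oslo_versionedobjects import base, fields

    # illustrative only: roughly what the plain connector dict carries today
    connector = {
        'host': 'compute-01',
        'ip': '192.0.2.10',
        'initiator': 'iqn.1993-08.org.debian:01:abcdef',
        'wwpns': ['500143802426baf4'],
    }

    # hypothetical o.vo equivalent, so a new key means a version bump
    # instead of another mystery entry in the dict
    class VolumeConnector(base.VersionedObject):
        VERSION = '1.0'
        fields = {
            'host': fields.StringField(),
            'ip': fields.StringField(),
            'initiator': fields.StringField(nullable=True),
            'wwpns': fields.ListOfStringsField(nullable=True),
        }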


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Avishay Traeger
On Wed, Jan 27, 2016 at 1:01 PM, Matt Riedemann 
wrote:


> I've replied on https://review.openstack.org/#/c/266095/ and the related
> cinder change https://review.openstack.org/#/c/272899/ which are adding a
> new key to the volume connector dict being passed around between nova and
> cinder, which is not ideal.
>
> I'd really like to see us start modeling the volume connector with
> versioned objects so we can (1) tell what's actually in this mystery
> connector dict in the nova virt driver interface and (2) handle version
> compat with adding new keys to it.
>

I agree with you.  Actually, I think it would be more correct to have
Cinder store it, and not pass it at all to terminate_connection().


-- 
*Avishay Traeger, PhD*
*System Architect*

Mobile: +972 54 447 1475
E-mail: avis...@stratoscale.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Bugs] Time sync problem when testing.

2016-01-27 Thread Maksim Malchuk
I think we shouldn't depend on other services like syslog and logger
to catch the problem; it is better to create the logs ourselves.


On Wed, Jan 27, 2016 at 1:49 PM, Stanislaw Bogatkin 
wrote:

> >But you've used 'logger -t ntpdate' - this is can fail again and logs can
> be empty again.
> What do you mean by 'fail again'? Piping to logger uses standard blocking
> I/O - logger gets
> all the output it can reach, so it gets all the output strace will produce. If
> ntpdate hangs for some
> reason we should see it in the strace output. If ntpdate exits we will
> see this too.
>
> On Wed, Jan 27, 2016 at 12:57 PM, Maksim Malchuk 
> wrote:
>
>> But you've used 'logger -t ntpdate' - this is can fail again and logs can
>> be empty again.
>> My opinion we should use output redirection to the log-file directly.
>>
>>
>> On Wed, Jan 27, 2016 at 11:21 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Yes, I have created custom iso with debug output. It didn't help, so
>>> another one with strace was created.
>>> On Jan 27, 2016 00:56, "Alex Schultz"  wrote:
>>>
 On Tue, Jan 26, 2016 at 2:16 PM, Stanislaw Bogatkin
  wrote:
 > When there is too high strata, ntpdate can understand this and always
 write
 > this into its log. In our case there are just no log - ntpdate send
 first
 > packet, get an answer - that's all. So, fudging won't save us, as I
 think.
 > Also, it's a really bad approach to fudge a server which doesn't have
 a real
 > clock onboard.

 Do you have a debug output of the ntpdate somewhere? I'm not finding
 it in the bugs or in some of the snapshots for the failures. I did
 find one snapshot with the -v change that didn't have any response
 information so maybe it's the other problem where there is some
 network connectivity isn't working correctly or the responses are
 getting dropped somewhere?

 -Alex

 >
 > On Tue, Jan 26, 2016 at 10:41 PM, Alex Schultz 
 > wrote:
 >>
 >> On Tue, Jan 26, 2016 at 11:42 AM, Stanislaw Bogatkin
 >>  wrote:
 >> > Hi guys,
 >> >
 >> > for some time we have a bug [0] with ntpdate. It doesn't
 reproduced 100%
 >> > of
 >> > time, but breaks our BVT and swarm tests. There is no exact point
 where
 >> > problem root located. To better understand this, some verbosity to
 >> > ntpdate
 >> > output was added but in logs we can see only that packet exchange
 >> > between
 >> > ntpdate and server was started and was never completed.
 >> >
 >>
 >> So when I've hit this in my local environments there is usually one
 or
 >> two possible causes for this. 1) lack of network connectivity so ntp
 >> server never responds or 2) the stratum is too high.  My assumption
 is
 >> that we're running into #2 because of our revert-resume in testing.
 >> When we resume, the ntp server on the master may take a while to
 >> become stable. This sync in the deployment uses the fuel master for
 >> synchronization so if the stratum is too high, it will fail with this
 >> lovely useless error.  My assumption on what is happening is that
 >> because we aren't using a set of internal ntp servers but rather
 >> relying on the standard ntp.org pools.  So when the master is being
 >> resumed it's struggling to find a good enough set of servers so it
 >> takes a while to sync. This then causes these deployment tasks to
 fail
 >> because the master has not yet stabilized (might also be geolocation
 >> related).  We could either address this by fudging the stratum on the
 >> master server in the configs or possibly introducing our own more
 >> stable local ntp servers. I have a feeling fudging the stratum might
 >> be better when we only use the master in our ntp configuration.
 >>
 >> > As this bug is blocker, I propose to merge [1] to better
 understanding
 >> > what's going on. I created custom ISO with this patchset and tried
 to
 >> > run
 >> > about 10 BVT tests on this ISO. Absolutely with no luck. So, if we
 will
 >> > merge this, we would catch the problem much faster and understand
 root
 >> > cause.
 >> >
 >>
 >> I think we should merge the increased logging patch anyway because
 >> it'll be useful in troubleshooting but we also might want to look
 into
 >> getting an ntp peers list added into the snapshot.
 >>
 >> > I appreciate your answers, folks.
 >> >
 >> >
 >> > [0] https://bugs.launchpad.net/fuel/+bug/1533082
 >> > [1] https://review.openstack.org/#/c/271219/
 >> > --
 >> > with best regards,
 >> > Stan.
 >> >
 >>
 >> Thanks,
 >> -Alex
 >>
 >>
 

Re: [openstack-dev] [fuel][plugins] Detached components plugin update requirement

2016-01-27 Thread Simon Pasquier
I see no follow-up to Swann's question so let me elaborate why this issue
is important for the LMA plugins.

First I need to explain what was our release schedule for the LMA plugins
during the MOS 7.0 cycle:
- New features were done on the master branch which was only compatible
with MOS 7.0.
- We maintained the stable/0.7 branches of the LMA plugins to remain
compatible with both MOS 6.1 and 7.0. The work was very lightweight like
backporting a few fixes from the master branch (for instance the
metadata.yaml update).

This workflow allows several things for us:
- Ship a point release of the LMA toolchain based on the stable(/0.7)
branch soon after MOS (7.0) is released. This lets users deploy LMA with MOS
7 without waiting for the new LMA version that's released a few months
after MOS 7.
- Use a well-known version of the LMA toolchain with the MOS version under
development for troubleshooting, performance analysis, longevity testing,
... This one is of great interest for the QA team. If we were to use the
master branch of the LMA plugins, it would dramatically decrease the
stability of the whole.
- Make sure that the LMA toolchain can be deployed with plugins that don't
support the latest MOS version: for instance, we're going to release our
master branch (compatible only with MOS 8) right after MOS GA but other
plugins won't ship a new version before MOS 9 so we need to keep supporting
MOS 7.

Looking at the originating bug description [1], I'm not sure I fully
understand what problem the change is trying to fix and why it's been
backported to stable/8.0. But IMO, the change puts too much burden on
plugin developers. Maintaining separate branches of our plugins for every
MOS version is the last thing I want to do.

Regards,
Simon

[1] https://bugs.launchpad.net/fuel/+bug/1508486

On Thu, Jan 21, 2016 at 10:23 AM, Bartlomiej Piotrowski <
bpiotrow...@mirantis.com> wrote:

> Breakage of anything is probably the last thing I intended to achieve with
> that patch. Maybe I misunderstand how tasks dependencies works, let me
> describe *explicit* dependencies I did in tasks.yaml:
>
> hiera requires deploy_start
> hiera is required for setup_repositories
> setup_repositories is required for fuel_pkgs
> setup_repositories requires hiera
> fuel_pkgs requires setup_repositories
> fuel_pkgs is required globals
>
> Coming from packaging realm, there is clear transitive dependency for
> anything that pulls globals task, i.e. if task foo depends on globals, the
> latter pulls fuel_pkgs, which brings setup_repositories in. I'm in favor of
> reverting both patches (master and stable/8.0) if it's going to break
> backwards compatibility, but I really see bigger problem in the way we
> handle task dependencies.
>
> Bartłomiej
>
> On Thu, Jan 21, 2016 at 9:51 AM, Swann Croiset 
> wrote:
>
>> Sergii,
>> I'm also curious, what about plugins which intend to be compatible with
>> both MOS 7 and MOS 8?
>> I've in mind the LMA plugins stable/0.8
>>
>> BR
>>
>> --
>> Swann
>>
>> On Wed, Jan 20, 2016 at 8:34 PM, Sergii Golovatiuk <
>> sgolovat...@mirantis.com> wrote:
>>
>>> Plugin master branch won't be compatible with older versions. Though the
>>> plugin developer may create stable branch to have compatibility with older
>>> versions.
>>>
>>>
>>> --
>>> Best regards,
>>> Sergii Golovatiuk,
>>> Skype #golserge
>>> IRC #holser
>>>
>>> On Wed, Jan 20, 2016 at 6:41 PM, Dmitry Mescheryakov <
>>> dmescherya...@mirantis.com> wrote:
>>>
 Sergii,

 I am curious - does it mean that the plugins will stop working with
 older versions of Fuel?

 Thanks,

 Dmitry

 2016-01-20 19:58 GMT+03:00 Sergii Golovatiuk 
 :

> Hi,
>
> Recently I merged the change to master and 8.0 that moves one task
> from Nailgun to Library [1]. Actually, it replaces [2] to allow operator
> more flexibility with repository management.  However, it affects the
> detached components as they will require one more task to add as written 
> at
> [3]. Please adapt your plugin accordingly.
>
> [1]
> https://review.openstack.org/#/q/I1b83e3bfaebecdb8455d5697e320f24fb4941536
> [2]
> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L149-L190
> [3] https://review.openstack.org/#/c/270232/1/deployment_tasks.yaml
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [oslo] Sachi King for oslo core

2016-01-27 Thread Sachi King
Thanks for the vote of confidence all, I look forward to expanding
what I'm working on.

Cheers,
Sachi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Glance Core team additions/removals

2016-01-27 Thread Mikhail Fedosin
I'm always happy to add new cores :)

+1 for Kairat. He has been working really hard, and the huge number of
high-quality reviews and commits is good proof of this. Welcome Kairat!

+1 for Brian. He is one of the most experienced technical specialists and he's
a well-known member of the community. So, I was wondering why he was not a
core already (better late than never).

+1 for leaving Fei Long in the core team. Please! Moreover, we could change the
IRC meeting time to one more comfortable for his tz - 3 AM is overkill.

Alex is now in Murano and unfortunately he won't be able to participate in
Glance initiatives in the near future. But I hope he will be back!

On Wed, Jan 27, 2016 at 10:38 AM, 见习骑士 <491745...@qq.com> wrote:

> Hi All,
>
> +1 for Kairat. He has really done a lot of work both in reviews and
> commits.
> Personally say, he helped me a lot with my patches as well. Thanks!
>
> +1 for Brian. As I know, he has done great job in glance-specs review,
> especially the image-import-refactor which is the most important goal in
> Mitaka.
>
> +1 for Fei Long. We have discussed a lot about DB-based quota blueprint
> which is maybe a big work to do. In my sight, he still want to help us to
> make Glance better.
> And as he said, he will balance his time. Hope he could stay.
>
> Cheers,
> WangXiyuan
>
> -- Original --
> *From: * "Fei Long Wang";;
> *Date: * Wed, Jan 27, 2016 05:34 AM
> *To: * "openstack-dev";
> *Subject: * Re: [openstack-dev] [glance] Glance Core team
> additions/removals
>
>
>
> Hi team,
>
> As you know, I'm working on Zaqar team as PTL in Mitaka and it's a
> time-consuming work. So my focus shifted for a bit but I'm now trying to
> balance my time. I'd rather stay in the team if there is still space :)
> Thanks.
>
> My recent code review:
> http://stackalytics.com/report/contribution/glance-group/30
> Glance DB-based quota: https://review.openstack.org/27
>
>
> On 27/01/16 03:41, Flavio Percoco wrote:
> >
> > Greetings,
> >
> > I'd like us to have one more core cleanup for this cycle:
> >
> > Additions:
> >
> > - Kairat Kushaev
> > - Brian Rosmaita
> >
> > Both have done amazing reviews either on specs or code and I think they
> both
> > would be an awesome addition to the Glance team.
> >
> > Removals:
> >
> > - Alexander Tivelkov
> > - Fei Long Wang
> >
> > Fei Long and Alexander are both part of the OpenStack community.
> However, their
> > focus and time has shifted from Glance and, as it stands right now, it
> would
> > make sense to have them both removed from the core team. This is not
> related to
> > their reviews per-se but just prioritization. I'd like to thank both,
> Alexander
> > and Fei Long, for their amazing contributions to the team. If you guys
> want to
> > come back to Glance, please, do ask. I'm sure the team will be happy to
> have you
> > on board again.
> >
> > To all other members of the community. Please, provide your feedback.
> Unless
> > someone objects, the above will be effective next Tuesday.
> >
> > Cheers,
> > Flavio
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> - --
> Cheers & Best regards,
> Fei Long Wang (???)
> -
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> -
> --
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Andrew Laski


On Wed, Jan 27, 2016, at 05:47 AM, Kuvaja, Erno wrote:
> > -Original Message-
> > From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> > Sent: Wednesday, January 27, 2016 9:56 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make
> > use of x-openstack-request-id
> > 
> > 
> > 
> > On 1/27/2016 9:40 AM, Tan, Lin wrote:
> > > Thank you so much, Erno. This really helps me a lot!!
> > >
> > > Tan
> > >
> > > *From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
> > > *Sent:* Tuesday, January 26, 2016 8:34 PM
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > While the cross project spec was discussed Glance already had
> > > implementation of request ids in place. At the time of the Glance
> > > implementation we assumed that one request id is desired through the
> > > chain of services and we implemented the req id to be accepted as part
> > > of the request. This was mainly driven to have same request id through
> > > the chain between glance-api and glance-registry but as the same code
> > > was used in both api and registry services we got this functionality
> > > across glance.
> > >
> > > The cross project discussion turned this approach down and decided
> > > that only new req id will be returned. We did not want to utilize 2
> > > different code bases to handle req ids in glance-api and
> > > glance-registry, nor we wanted to remove the functionality to allow
> > > the req ids being passed to the service as that was already merged to
> > > our API. Thus if requests are passed without a req id defined to the
> > > services they behave (apart from nova having different header name)
> > > same way, but with glance the request maker has the liberty to specify
> > > request id they want to use (within configured length limits).
> > >
> > > Hopefully that clarifies it for you.
> > >
> > > -Erno
> > >
> > > *From:*Tan, Lin [mailto:lin@intel.com]
> > > *Sent:* 26 January 2016 01:26
> > > *To:* OpenStack Development Mailing List (not for usage questions)
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Thanks Kebane, I test glance/neutron/keystone with
> > > ``x-openstack-request-id`` and find something interesting.
> > >
> > > I am able to pass ``x-openstack-request-id``  to glance and it will
> > > use the UUID as its request-id. But it failed with neutron and keystone.
> > >
> > > Here is my test:
> > >
> > > http://paste.openstack.org/show/484644/
> > >
> > > It looks like because keystone and neutron are using
> > > oslo_middleware:RequestId.factory and in this part:
> > >
> > >
> > https://github.com/openstack/oslo.middleware/blob/master/oslo_middlew
> > a
> > > re/request_id.py#L35
> > >
> > > It will always generate an UUID and append to response as
> > > ``x-openstack-request-id`` header.
> > >
> > > My question is should we accept an external passed request-id as the
> > > project's own request-id or having its unique request-id?
> > >
> > > In other words, which one is correct way, glance or neutron/keystone?
> > > There must be something wrong with one of them.
> > >
> > > Thanks
> > >
> > > B.R
> > >
> > > Tan
> > >
> > > *From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> > > *Sent:* Wednesday, December 2, 2015 2:24 PM
> > > *To:* OpenStack Development Mailing List
> > > (openstack-dev@lists.openstack.org
> > > )
> > > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > > make use of x-openstack-request-id
> > >
> > > Hi Tan,
> > >
> > > Most of the OpenStack RESTful API returns `X-Openstack-Request-Id` in
> > > the API response header but this request id is not available to the
> > > caller from the python client.
> > >
> > > When you use the --debug option from the command prompt using the
> > > client, you can see `X-Openstack-Request-Id` on the console but it is
> > > not logged anywhere.
> > >
> > > Currently a cross-project specs [1] is submitted and approved for
> > > returning X-Openstack-Request-Id to the caller and the implementation
> > > for the same is in progress.
> > >
> > > Please go through the specs for detail information which will help you
> > > to understand more about request-ids and current work about the same.
> > >
> > > Please feel free to revert back anytime for your doubts.
> > >
> > > [1]
> > > https://github.com/openstack/openstack-
> > specs/blob/master/specs/return-
> > > request-id.rst
> > >
> > > Thanks,
> > >
> > > Abhishek Kekane
> > >
> > > Hi guys
> > >
> > >  I recently play around with 'x-openstack-request-id' header
> > > but have a dumb question about how it works. At the beginning, I thought
> > > an action across different services should use a same 

Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Matt Riedemann



On 1/27/2016 11:22 AM, Avishay Traeger wrote:

On Wed, Jan 27, 2016 at 1:01 PM, Matt Riedemann
> wrote:


I've replied on https://review.openstack.org/#/c/266095/ and the
related cinder change https://review.openstack.org/#/c/272899/ which
are adding a new key to the volume connector dict being passed
around between nova and cinder, which is not ideal.

I'd really like to see us start modeling the volume connector with
versioned objects so we can (1) tell what's actually in this mystery
connector dict in the nova virt driver interface and (2) handle
version compat with adding new keys to it.


I agree with you.  Actually, I think it would be more correct to have
Cinder store it, and not pass it at all to terminate_connection().


--
*Avishay Traeger, PhD*
/System Architect/

Mobile:+972 54 447 1475
E-mail: avis...@stratoscale.com 



Web  | Blog
 | Twitter
 | Google+

 |
Linkedin 






That would be ideal but I don't know if cinder is storing this 
information in the database like nova is in the nova 
block_device_mappings.connection_info column.
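As a rough illustration of the versioned-object modelling suggested above, a
minimal oslo.versionedobjects sketch could look like the following. The class
and field names here are assumptions for the example only, not the actual
connector schema:

    # Hypothetical sketch; field names are assumptions, not the real connector.
    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields as ovo_fields


    @ovo_base.VersionedObjectRegistry.register
    class VolumeConnector(ovo_base.VersionedObject):
        # Bump the minor version whenever a new key is added, so old and new
        # services can negotiate what they both understand.
        VERSION = '1.0'

        fields = {
            'host': ovo_fields.StringField(),
            'initiator': ovo_fields.StringField(nullable=True),
            'ip': ovo_fields.StringField(nullable=True),
            'multipath': ovo_fields.BooleanField(default=False),
        }

Serialized with obj_to_primitive(), such an object would document exactly what
is in the "mystery dict" and give a place to hang version compatibility rules.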


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Kuvaja, Erno
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Wednesday, January 27, 2016 9:56 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][glance][cinder][neutron]How to make
> use of x-openstack-request-id
> 
> 
> 
> On 1/27/2016 9:40 AM, Tan, Lin wrote:
> > Thank you so much, Erno. This really helps me a lot!!
> >
> > Tan
> >
> > *From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
> > *Sent:* Tuesday, January 26, 2016 8:34 PM
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Hi Tan,
> >
> > While the cross project spec was discussed Glance already had
> > implementation of request ids in place. At the time of the Glance
> > implementation we assumed that one request id is desired through the
> > chain of services and we implemented the req id to be accepted as part
> > of the request. This was mainly driven to have same request id through
> > the chain between glance-api and glance-registry but as the same code
> > was used in both api and registry services we got this functionality
> > across glance.
> >
> > The cross project discussion turned this approach down and decided
> > that only new req id will be returned. We did not want to utilize 2
> > different code bases to handle req ids in glance-api and
> > glance-registry, nor we wanted to remove the functionality to allow
> > the req ids being passed to the service as that was already merged to
> > our API. Thus if requests are passed without a req id defined to the
> > services they behave (apart from nova having different header name)
> > same way, but with glance the request maker has the liberty to specify
> > request id they want to use (within configured length limits).
> >
> > Hopefully that clarifies it for you.
> >
> > -Erno
> >
> > *From:*Tan, Lin [mailto:lin@intel.com]
> > *Sent:* 26 January 2016 01:26
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Thanks Kebane, I test glance/neutron/keystone with
> > ``x-openstack-request-id`` and find something interesting.
> >
> > I am able to pass ``x-openstack-request-id``  to glance and it will
> > use the UUID as its request-id. But it failed with neutron and keystone.
> >
> > Here is my test:
> >
> > http://paste.openstack.org/show/484644/
> >
> > It looks like because keystone and neutron are using
> > oslo_middleware:RequestId.factory and in this part:
> >
> >
> https://github.com/openstack/oslo.middleware/blob/master/oslo_middlew
> a
> > re/request_id.py#L35
> >
> > It will always generate an UUID and append to response as
> > ``x-openstack-request-id`` header.
> >
> > My question is should we accept an external passed request-id as the
> > project's own request-id or having its unique request-id?
> >
> > In other words, which one is correct way, glance or neutron/keystone?
> > There must be something wrong with one of them.
> >
> > Thanks
> >
> > B.R
> >
> > Tan
> >
> > *From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
> > *Sent:* Wednesday, December 2, 2015 2:24 PM
> > *To:* OpenStack Development Mailing List
> > (openstack-dev@lists.openstack.org
> > )
> > *Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
> > make use of x-openstack-request-id
> >
> > Hi Tan,
> >
> > Most of the OpenStack RESTful API returns `X-Openstack-Request-Id` in
> > the API response header but this request id is not available to the
> > caller from the python client.
> >
> > When you use the --debug option from the command prompt using the
> > client, you can see `X-Openstack-Request-Id` on the console but it is
> > not logged anywhere.
> >
> > Currently a cross-project specs [1] is submitted and approved for
> > returning X-Openstack-Request-Id to the caller and the implementation
> > for the same is in progress.
> >
> > Please go through the specs for detail information which will help you
> > to understand more about request-ids and current work about the same.
> >
> > Please feel free to revert back anytime for your doubts.
> >
> > [1]
> > https://github.com/openstack/openstack-
> specs/blob/master/specs/return-
> > request-id.rst
> >
> > Thanks,
> >
> > Abhishek Kekane
> >
> > Hi guys
> >
> >  I recently play around with 'x-openstack-request-id' header
> > but have a dumb question about how it works. At the beginning, I thought
> > an action across different services should use a same request-id but
> > it looks like this is not the true.
> >
> > First I read the spec:
> > https://blueprints.launchpad.net/nova/+spec/cross-service-request-id
> > which said "This ID and the request ID of the other service will be
> > logged at service boundaries". and I see cinder/neutron/glance will
> > attach 

Re: [openstack-dev] OpenStack installer

2016-01-27 Thread Gyorgy Szombathelyi

> On 01/26/2016 11:32 AM, Gyorgy Szombathelyi wrote:
> > Hello!
> >
> > I just want to announce a new installer for OpenStack:
> > https://github.com/DoclerLabs/openstack
> > It is GPLv3, uses Ansible (currently 1.9.x; 2.0.0.2 has some bugs which have
> > to be resolved), and has lots of components integrated (of course there are
> > missing ones).
> > The goal was simplicity and also operating the cloud, not just installing it.
> > We started with Rackspace's openstack-ansible, but found it a bit complex
> > with the containers. Also it didn't include all the components we required, so
> > we started this project.
> > Feel free to give it a try! The documentation is sparse, but it'll improve 
> > with
> time.
> > (Hope you don't consider it as an advertisement, we don't want to sell this,
> just wanted to share our development).
> >
> > Br,
> > György
> >
> 
> Hi,
> 
Hi Michael,

> What do you mean by "complex with the containers"? Is the mere fact of
> containers usage a complex thing for you or the problem is in some
> implementation details around it?
> 
I don't see the benefit of containerizing every OpenStack component. Installing 
them from 
packages and running them on a physical host is the way Linux systems have worked 
for
years, and it still works for us.

> And did you have a chance to check the Kolla project? It uses Ansible too, but
> the difference is that Kolla uses Docker containers and openstack-ansible
> uses "raw" LXC.
I read about it, but I didn't check it personally. My personal concern about 
Dockerizing
OpenStack is that the whole infrastructure then depends on a Docker daemon. But 
maybe
I am wrong; I'm not an expert in this field.

> 
> Cheers,
> Michal
> 
Br,
György


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Feature suggestion - API for creating VM without powering it up

2016-01-27 Thread Rui Chen
Looks like we can use user_data and cloud-init to do this stuff.

Add the following content to user_data.txt and launch the instance like
this: nova boot --user-data user_data.txt ...;
the instance will shut down after boot is finished. (Note that cloud-config
user data like this is expected to start with the "#cloud-config" marker.)

#cloud-config
power_state:
  mode: poweroff
  message: Bye Bye

You can find more details in the cloud-init documentation [1].

[1]:
https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config.txt

2016-01-22 3:32 GMT+08:00 Fox, Kevin M :

> The nova instance user spec has a use case.
> https://review.openstack.org/#/c/93/
>
> Thanks,
> Kevin
> 
> From: Matt Riedemann [mrie...@linux.vnet.ibm.com]
> Sent: Thursday, January 21, 2016 7:32 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Feature suggestion - API for creating
> VM without powering it up
>
> On 1/20/2016 10:57 AM, Shoham Peller wrote:
> > Hi,
> >
> > I would like to suggest a feature in nova to allow creating a VM,
> > without powering it up.
> >
> > If the user will be able to create a stopped VM, it will allow for
> > better flexibility and user automation.
> >
> > I can personally say such a feature would make my work with nova much more
> > comfortable - currently we shut down each VM manually as we're
> > creating it.
> > What do you think?
> >
> > Regards,
> > Shoham Peller
> >
> >
> >
> >
>
> What is your use case?
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [tricircle] Playing Tricircle with Devstack

2016-01-27 Thread joehuang
Hi, Yipei,

The issue is still caused by the change in DevStack.  When the command 
"openstack volume type create --property volume_backend_name=lvmdriver-1 
lvmdriver-1" is executed in DevStack, the exported region name (i.e, RegionOne) 
is used, so the request was sent to RegionOne, but the volume type creation 
should be done in Pod2 instead, i.e, when this command is executed, the 
exported region name should be Pod2.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Wednesday, January 27, 2016 4:19 PM
To: openstack-dev@lists.openstack.org
Cc: joehuang; Zhiyuan Cai
Subject: Re: [tricircle] Playing Tricircle with Devstack

Hi Joe,

This error occurred when installing devstack on node2.

Best regards,
Yipei

On Wed, Jan 27, 2016 at 3:13 PM, Yipei Niu 
> wrote:

-- Forwarded message --
From: Yipei Niu >
Date: Tue, Jan 26, 2016 at 8:42 PM
Subject: Re: [tricircle] Playing Tricircle with Devstack
To: openstack-dev@lists.openstack.org

Hi Zhiyuan,

Your solution works, but I encountered another error. When executing the command

"openstack volume type create --property volume_backend_name=lvmdriver-1 
lvmdriver-1",

it returns

"Unable to establish connection to 
http://192.168.56.101:19997/v2/c4f6ad92427b49f9a59810e88fbe4c11/types".


Then I execute the command with the debug option, and it returns

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py", line 
113, in run
ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 255, in run
result = self.run_subcommand(remainder)
  File "/usr/local/lib/python2.7/dist-packages/cliff/app.py", line 367, in 
run_subcommand
self.prepare_to_run_command(cmd)
  File "/usr/local/lib/python2.7/dist-packages/openstackclient/shell.py", line 
352, in prepare_to_run_command
self.client_manager.auth_ref
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/clientmanager.py",
 line 189, in auth_ref
self.setup_auth()
  File 
"/usr/local/lib/python2.7/dist-packages/openstackclient/common/clientmanager.py",
 line 128, in setup_auth
auth.check_valid_auth_options(self._cli_options, self.auth_plugin_name)
  File "/usr/local/lib/python2.7/dist-packages/openstackclient/api/auth.py", 
line 172, in check_valid_auth_options
raise exc.CommandError('Missing parameter(s): \n%s' % msg)
CommandError: Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with 
--os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope 
with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

These parameters have been set before, so why does the error happen?

Best regards,
Yipei


On Tue, Jan 26, 2016 at 10:40 AM, Yipei Niu 
> wrote:
Hi Joe, Zhiyuan,

I found that such an error may be caused by "export OS_REGION_NAME=Pod2". When 
I source "userrc_early" without "export OS_REGION_NAME=Pod2" on node2, the 
command "openstack project show admin -f value -c id" returns the same result 
as it does on node1. How can I deal with it so that I can proceed?

Best regards,
Yipei

On Mon, Jan 25, 2016 at 4:13 PM, Yipei Niu 
> wrote:
There weren't any problems when installing devstack on node1. However, when 
installing devstack on node2, I encountered an error and the trace is as follows:

2016-01-25 07:40:47.068 | + echo -e Starting Keystone
2016-01-25 07:40:47.069 | + '[' 192.168.56.101 == 192.168.56.102 ']'
2016-01-25 07:40:47.070 | + is_service_enabled tls-proxy
2016-01-25 07:40:47.091 | + return 1
2016-01-25 07:40:47.091 | + cat
2016-01-25 07:40:47.093 | + source /home/stack/devstack/userrc_early
2016-01-25 07:40:47.095 | ++ export OS_IDENTITY_API_VERSION=3
2016-01-25 07:40:47.095 | ++ OS_IDENTITY_API_VERSION=3
2016-01-25 07:40:47.095 | ++ export OS_AUTH_URL=http://192.168.56.101:35357
2016-01-25 07:40:47.095 | ++ OS_AUTH_URL=http://192.168.56.101:35357
2016-01-25 07:40:47.095 | ++ export OS_USERNAME=admin
2016-01-25 07:40:47.095 | ++ OS_USERNAME=admin
2016-01-25 07:40:47.095 | ++ export OS_USER_DOMAIN_ID=default
2016-01-25 07:40:47.095 | ++ OS_USER_DOMAIN_ID=default
2016-01-25 07:40:47.096 | ++ export OS_PASSWORD=nypnyp0316
2016-01-25 07:40:47.096 | ++ OS_PASSWORD=nypnyp0316
2016-01-25 07:40:47.096 | ++ export OS_PROJECT_NAME=admin
2016-01-25 07:40:47.097 | ++ OS_PROJECT_NAME=admin
2016-01-25 07:40:47.098 | ++ export OS_PROJECT_DOMAIN_ID=default
2016-01-25 07:40:47.099 | ++ OS_PROJECT_DOMAIN_ID=default
2016-01-25 07:40:47.100 | ++ export OS_REGION_NAME=Pod2
2016-01-25 07:40:47.101 | ++ OS_REGION_NAME=Pod2
2016-01-25 

Re: [openstack-dev] [ceilometer] :Regarding wild card configuration in pipeline.yaml

2016-01-27 Thread Raghunath D
Hi Gord,

Could you please kindly suggest how to proceed further on the below issue as 
we are somewhat blocked in our development activity due to the wildcard issue.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting




-Raghunath D/HYD/TCS wrote: - 
To: openstack-dev@lists.openstack.org
From: Raghunath D/HYD/TCS
Date: 01/21/2016 01:06PM
Cc: "Srikanth Vavilapalli" 
Subject: Re: [openstack-dev] [ceilometer] :Regarding wild card configuration in 
pipeline.yaml


Hi ,
 
 Just to reframe my query:
  I have a meter subscription m1.* for publisher p1 and I need a subset of m1.* 
 notifications, e.g. m1.xyz.*, for publisher p2.
If we add p2 to the already existing sink along with p1, p2 will get other 
notifications along with m1.xyz.* which are not needed for p2.
 
To avoid this we had the following entry in the pipeline:

sources:
    - name: m1meter
      meters:
          - "m1.*"
          - "!m1.xyz.*"
      sinks:
          - m1sink
    - name: m2meter
      meters:
          - "m1.xyz.*"
      sinks:
          - m2sink
sinks:
    - name: m1sink
      publishers:
          - p1
    - name: m2sink
      publishers:
          - p1
          - p2

From the reply mail it seems there is no strict restriction preventing support 
for this. Could you please let me know how we should handle such cases in ceilometer?
If we modify the pipeline module of ceilometer, does it affect any 
other parts of the ceilometer framework?
 
 
Thanks and Regards
Raghunath.
 
 
Copied reply mail content from 
http://osdir.com/ml/openstack-dev/2016-01/msg01346.html for reference; for 
some reason I am not getting the reply mail in my mailbox.
-Copied Mail Start 
here-
hi,

i don't completely recall why we don't allow wildcarded exclude and include 
meters. it's probably because there's ambiguity of ordering of wildcard which 
can lead to different filter results.

someone can correct me, but i don't think there's a strict requirement that 
stops us from supporting both at once, just that it's not there.

as it stands now. you'll need to explicitly list out the meters you want (or 
don't want) sent to each pipeline.

cheers,
gord.
-Mail End 
Here-
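For illustration only, a minimal sketch (not ceilometer's actual pipeline code,
which rejects this combination today) of one way mixed include/exclude wildcards
could be resolved, applying excludes after includes:

    # Illustrative sketch only -- not the actual ceilometer pipeline code.
    import fnmatch


    def meter_matches(meter_name, patterns):
        """Match a meter against a list mixing wildcarded includes and
        excludes, e.g. ['m1.*', '!m1.xyz.*']. Excludes are applied after
        includes, which is one way to resolve the ordering ambiguity
        mentioned in the reply above."""
        includes = [p for p in patterns if not p.startswith('!')]
        excludes = [p[1:] for p in patterns if p.startswith('!')]
        included = any(fnmatch.fnmatch(meter_name, p) for p in includes)
        excluded = any(fnmatch.fnmatch(meter_name, p) for p in excludes)
        return included and not excluded


    print(meter_matches('m1.abc', ['m1.*', '!m1.xyz.*']))      # True
    print(meter_matches('m1.xyz.foo', ['m1.*', '!m1.xyz.*']))  # False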

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting




-Raghunath D/HYD/TCS wrote: - 
To: openstack-dev@lists.openstack.org
From: Raghunath D/HYD/TCS
Date: 01/20/2016 07:09PM
Cc: "Srikanth Vavilapalli" 
Subject: [openstack-dev] [ceilometer] :Regarding wild card configuration in 
pipeline.yaml


Hi ,
 
We have one use-case for which we are using ceilometer for getting 
notifications.
 
We have the meters m1.* and m1.xyz.*, and publishers (kafka/udp) p1 and p2.
i.  m1.* notifications/meter data should be sent to p1 and p2
ii. m1.xyz.* notifications/meter data should be sent to p1.

We can correlate m1.* as network.* and m1.xyz.* as network.incoming.*
The pipeline.yaml is updated as:
--
sources:
    - name: m1meter
      meters:
          - "m1.*"
          - "m1.xyz.*"
      sinks:
          - m1sink

sinks:
    - name: m1sink
      publishers:
          - p1
          - p2


With the above configuration p1 also receives notifications other than m1.xyz.*
which are not subscribed to by p1. To avoid this duplication, pipeline.yaml is 
updated as:
-
sources:
    - name: m1meter
      meters:
          - "m1.*"
          - "!m1.xyz.*"
      sinks:
          - m1sink
    - name: m2meter
      meters:
          - "m1.xyz.*"
      sinks:
          - m2sink
sinks:
    - name: m1sink
      publishers:
          - p1
          - p2
    - name: m2sink
      publishers:
          - p1

  
This configuration fails the source rule checking with the reason "both 
included and
excluded meters specified".

Info/Help needed:
  Do we have any way in the ceilometer framework to achieve this, or could you 
provide 
us some suggestions how 

Re: [openstack-dev] [nova] do not account compute resource of instances in state SHELVED_OFFLOADED

2016-01-27 Thread Andrew Laski


On Tue, Jan 26, 2016, at 07:46 AM, Christian Berendt wrote:
> After offloading a shelved instance the freed compute resources are 
> still accounted for.
> 
> I think it makes sense to make this behavior configurable. We often have 
> the request to not account for the freed compute resources after an 
> instance was offloaded, to be able to spawn new instances or to 
> unshelve offloaded instances even if the assigned compute resource quota 
> has been reached.
> 
> Because the instances are shelved and offloaded they do not occupy 
> compute resources, and it is often safe to allow the freed compute 
> resources to be used for other instances. Of course it has to be checked 
> whether there are enough free compute resources when trying to unshelve 
> an offloaded instance.
> 
> What do you think about this use case? If it makes sense to you I want to 
> propose to write a spec for this feature.

I would really prefer if we could standardize on either counting the
resources against quota or not.  However this seems unlikely since
deployers offer shelving functionality for different reasons.  I think
it's worth proposing the spec and getting operator feedback on it.

My two primary concerns with freeing up quota resources for offloaded
instances are:

1. This allows for a poor experience where a user would not be able to
turn on and use an instance that they already have due to overquota. 
This is a change from the current behavior where they just can't create
resources, now something they have is unusable.

2. I anticipate a further ask for a separate quota for the number of
offloaded resources being used to prevent just continually spinning up
and shelving instances with no limit.  Because while disk/ram/cpu
resources are not being consumed by an offloaded instance network and
volume resources remain consumed and storage is required is Glance for
the offloaded disk.  And adding this additional quota adds more
complexity to shelving which is already overly complex and not well
understood.


> 
> Christian.
> 
> -- 
> Christian Berendt
> Cloud Solution Architect
> Mail: bere...@b1-systems.de
> 
> B1 Systems GmbH
> Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
> GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537
> 



[openstack-dev] [midonet] Re: Classifiers for MN's Service Chaining API

2016-01-27 Thread Ivan Kelly
> the API has L2Insertions that each have a position. The current API
> therefore only expects a single list of L2Insertions on a single VM. It
> seems wrong to attach classifiers to the L2Insertion object... on the other
> hand, since the model assumes the Service Function leaves the traffic
> unmodified, it might be consistent with the model. But I'm not sure whether
> the positions should still be unique and I think it would be hard to reason
> about. That suggests we probably need a new object type, let's call it
> InsertionChain for now, that can group a set of Classifiers with a list of
> L2Insertion objects (each with unique position). InsertionChains themselves
> probably need a position field and a single packet might satisfy classifiers
> in multiple InsertionChains, in which case I believe the sensible behavior
> is to traverse multiple lists of L2Insertions (as opposed to just those of
> the InsertionChain with earliest position).
One idea that was floated in the past by Guillermo (in another context
but applicable here), is to add marking to our rules. With this we
could have a classification chain that gets evaluated before the
l2insertion chain. This marking chain would classify the packet as it
traversed the simulation, like
if proto is tcp and port is 22 mark 0xdeadbeef

Then the l2insertion chain could have one set of rules for when mark
0xdeadbeef is present, another for when another mark is present,
another for when there's no mark.

As a bonus, we could then also route based on marks, so active-active
vpn becomes possible.

> how much would the underlying implementation (translation to Redirect Rules)
> have to change? From playing with the code last year, my feeling was that
> the translation was complex and brittle even without classifiers (which adds
> the possibility of satisfying multiple separate chains of insertions). My
> gut feeling is that any re-write/enhancement should include dropping the
> Redirect Rules in favor of modeling L2Insertion directly in MidoNet Agent's
> simulation. As we discussed before, this would allow the simulation to
> pre-compute ALL the steps in the chain and pre-install the corresponding
> flows in all the right peer Agents.
If we're going to modify this code, I would suggest making it so that
we run the full simulation for all steps. I think this would greatly
simplify the translation, since we wouldn't have to worry about trying
to reinsert packets coming back from the service function into the
correct point in the simulation. Once we have that, I'm not sure how
necessary or useful it would be to directly model l2insertion in the
simulation.

-Ivan



Re: [openstack-dev] [Fuel][Bugs] Time sync problem when testing.

2016-01-27 Thread Maksim Malchuk
But you've used 'logger -t ntpdate' - this can fail again and the logs can
be empty again.
In my opinion we should use output redirection to the log file directly.


On Wed, Jan 27, 2016 at 11:21 AM, Stanislaw Bogatkin  wrote:

> Yes, I have created custom iso with debug output. It didn't help, so
> another one with strace was created.
> On Jan 27, 2016 00:56, "Alex Schultz"  wrote:
>
>> On Tue, Jan 26, 2016 at 2:16 PM, Stanislaw Bogatkin
>>  wrote:
>> > When there is too high strata, ntpdate can understand this and always
>> write
>> > this into its log. In our case there are just no log - ntpdate send
>> first
>> > packet, get an answer - that's all. So, fudging won't save us, as I
>> think.
>> > Also, it's a really bad approach to fudge a server which doesn't have a
>> real
>> > clock onboard.
>>
>> Do you have a debug output of the ntpdate somewhere? I'm not finding
>> it in the bugs or in some of the snapshots for the failures. I did
>> find one snapshot with the -v change that didn't have any response
>> information so maybe it's the other problem where there is some
>> network connectivity isn't working correctly or the responses are
>> getting dropped somewhere?
>>
>> -Alex
>>
>> >
>> > On Tue, Jan 26, 2016 at 10:41 PM, Alex Schultz 
>> > wrote:
>> >>
>> >> On Tue, Jan 26, 2016 at 11:42 AM, Stanislaw Bogatkin
>> >>  wrote:
>> >> > Hi guys,
>> >> >
>> >> > for some time we have a bug [0] with ntpdate. It doesn't reproduced
>> 100%
>> >> > of
>> >> > time, but breaks our BVT and swarm tests. There is no exact point
>> where
>> >> > problem root located. To better understand this, some verbosity to
>> >> > ntpdate
>> >> > output was added but in logs we can see only that packet exchange
>> >> > between
>> >> > ntpdate and server was started and was never completed.
>> >> >
>> >>
>> >> So when I've hit this in my local environments there is usually one or
>> >> two possible causes for this. 1) lack of network connectivity so ntp
>> >> server never responds or 2) the stratum is too high.  My assumption is
>> >> that we're running into #2 because of our revert-resume in testing.
>> >> When we resume, the ntp server on the master may take a while to
>> >> become stable. This sync in the deployment uses the fuel master for
>> >> synchronization so if the stratum is too high, it will fail with this
>> >> lovely useless error.  My assumption on what is happening is that
>> >> because we aren't using a set of internal ntp servers but rather
>> >> relying on the standard ntp.org pools.  So when the master is being
>> >> resumed it's struggling to find a good enough set of servers so it
>> >> takes a while to sync. This then causes these deployment tasks to fail
>> >> because the master has not yet stabilized (might also be geolocation
>> >> related).  We could either address this by fudging the stratum on the
>> >> master server in the configs or possibly introducing our own more
>> >> stable local ntp servers. I have a feeling fudging the stratum might
>> >> be better when we only use the master in our ntp configuration.
>> >>
>> >> > As this bug is blocker, I propose to merge [1] to better
>> understanding
>> >> > what's going on. I created custom ISO with this patchset and tried to
>> >> > run
>> >> > about 10 BVT tests on this ISO. Absolutely with no luck. So, if we
>> will
>> >> > merge this, we would catch the problem much faster and understand
>> root
>> >> > cause.
>> >> >
>> >>
>> >> I think we should merge the increased logging patch anyway because
>> >> it'll be useful in troubleshooting but we also might want to look into
>> >> getting an ntp peers list added into the snapshot.
>> >>
>> >> > I appreciate your answers, folks.
>> >> >
>> >> >
>> >> > [0] https://bugs.launchpad.net/fuel/+bug/1533082
>> >> > [1] https://review.openstack.org/#/c/271219/
>> >> > --
>> >> > with best regards,
>> >> > Stan.
>> >> >
>> >>
>> >> Thanks,
>> >> -Alex
>> >>
>> >>
>> >
>> >
>> >
>> >
>> > --
>> > with best regards,
>> > Stan.
>> >
>> >
>> >
>>

Re: [openstack-dev] [nova][glance][cinder][neutron]How to make use of x-openstack-request-id

2016-01-27 Thread Matt Riedemann



On 1/27/2016 9:40 AM, Tan, Lin wrote:

Thank you so much, Erno. This really helps me a lot!!

Tan

*From:*Kuvaja, Erno [mailto:kuv...@hpe.com]
*Sent:* Tuesday, January 26, 2016 8:34 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
make use of x-openstack-request-id

Hi Tan,

While the cross-project spec was being discussed, Glance already had an
implementation of request ids in place. At the time of the Glance
implementation we assumed that one request id is desired through the
chain of services, so we implemented the req id to be accepted as part
of the request. This was mainly driven by wanting the same request id
through the chain between glance-api and glance-registry, but as the same
code was used in both the api and registry services we got this
functionality across glance.

The cross-project discussion turned this approach down and decided that
only a new req id will be returned. We did not want to maintain 2 different
code bases to handle req ids in glance-api and glance-registry, nor did we
want to remove the functionality allowing req ids to be passed to the
service, as that was already merged to our API. Thus if requests are
passed to the services without a req id defined, they behave the same way
(apart from nova having a different header name), but with glance the
request maker has the liberty to specify the request id they want to use
(within configured length limits).

Hopefully that clarifies it for you.

-Erno

*From:*Tan, Lin [mailto:lin@intel.com]
*Sent:* 26 January 2016 01:26
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
make use of x-openstack-request-id

Thanks Kebane, I test glance/neutron/keystone with
``x-openstack-request-id`` and find something interesting.

I am able to pass ``x-openstack-request-id``  to glance and it will use
the UUID as its request-id. But it failed with neutron and keystone.

Here is my test:

http://paste.openstack.org/show/484644/

It looks like this is because keystone and neutron are using
oslo_middleware:RequestId.factory, and in this part:

https://github.com/openstack/oslo.middleware/blob/master/oslo_middleware/request_id.py#L35

it will always generate a UUID and append it to the response as the
``x-openstack-request-id`` header.
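
For illustration, a minimal WSGI middleware sketch contrasting the two
behaviours discussed in this thread: always generating a fresh id (roughly
what the oslo.middleware code linked above does) versus honouring an id
supplied by the caller (the glance behaviour). This is a simplified sketch,
not the actual oslo.middleware or glance code:

    # Simplified sketch only -- not the actual oslo.middleware implementation.
    import uuid

    HEADER = 'X-Openstack-Request-Id'
    ENV_HEADER_KEY = 'HTTP_X_OPENSTACK_REQUEST_ID'  # incoming header in the WSGI environ


    class RequestIdMiddleware(object):
        """Attach a request id to every response.

        With honour_incoming=True an id supplied by the caller is reused
        (glance-style); otherwise a new one is always generated (the
        oslo.middleware behaviour described above).
        """

        def __init__(self, app, honour_incoming=False):
            self.app = app
            self.honour_incoming = honour_incoming

        def __call__(self, environ, start_response):
            incoming = environ.get(ENV_HEADER_KEY)
            if self.honour_incoming and incoming:
                request_id = incoming
            else:
                request_id = 'req-' + str(uuid.uuid4())
            environ['openstack.request_id'] = request_id

            def _start_response(status, headers, exc_info=None):
                headers.append((HEADER, request_id))
                return start_response(status, headers, exc_info)

            return self.app(environ, _start_response)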

My question is: should we accept an externally passed request-id as the
project's own request-id, or should each project have its own unique request-id?

In other words, which one is the correct way, glance or neutron/keystone?
There must be something wrong with one of them.

Thanks

B.R

Tan

*From:*Kekane, Abhishek [mailto:abhishek.kek...@nttdata.com]
*Sent:* Wednesday, December 2, 2015 2:24 PM
*To:* OpenStack Development Mailing List
(openstack-dev@lists.openstack.org
)
*Subject:* Re: [openstack-dev] [nova][glance][cinder][neutron]How to
make use of x-openstack-request-id

Hi Tan,

Most of the OpenStack RESTful APIs return `X-Openstack-Request-Id` in
the API response header, but this request id is not available to the
caller from the python client.

When you use the --debug option from the command prompt using the
client, you can see `X-Openstack-Request-Id` on the console but it is not
logged anywhere.

Currently a cross-project spec [1] has been submitted and approved for
returning X-Openstack-Request-Id to the caller, and the implementation
for it is in progress.

Please go through the spec for detailed information, which will help you
understand more about request-ids and the current work on them.

Please feel free to reach out anytime with your doubts.

[1]
https://github.com/openstack/openstack-specs/blob/master/specs/return-request-id.rst

Thanks,

Abhishek Kekane

Hi guys

 I recently played around with the 'x-openstack-request-id' header but
have a dumb question about how it works. At the beginning, I thought an
action across different services should use the same request-id, but it
looks like this is not true.

First I read the spec:
https://blueprints.launchpad.net/nova/+spec/cross-service-request-id
which said "This ID and the request ID of the other service will be
logged at service boundaries", and I see cinder/neutron/glance will
attach their context's request-id as the value of the "x-openstack-request-id"
header to their responses while nova uses X-Compute-Request-Id. This is easy
to understand. So it looks like each service should generate its own
request-id and attach it to its response, that's all.

But then I see glance reads 'X-Openstack-Request-ID' to generate the
request-id while cinder/neutron/nova read 'openstack.request_id' when
used with keystone. It is trying to reuse the request-id from keystone.

This totally confused me. It would be great if you could correct me or
point me to some reference. Thanks a lot

Best Regards,

Tan



Re: [openstack-dev] [Fuel][Bugs] Time sync problem when testing.

2016-01-27 Thread Stanislaw Bogatkin
>But you've used 'logger -t ntpdate' - this can fail again and the logs can
be empty again.
What do you mean by 'fail again'? Piping to logger uses standard blocking
I/O - logger gets
all the output it can reach, so it gets all the output strace will produce. If
ntpdate hangs for some
reason - we should see it in the strace output. If ntpdate exits - we will
see this too.

On Wed, Jan 27, 2016 at 12:57 PM, Maksim Malchuk 
wrote:

> But you've used 'logger -t ntpdate' - this is can fail again and logs can
> be empty again.
> My opinion we should use output redirection to the log-file directly.
>
>
> On Wed, Jan 27, 2016 at 11:21 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Yes, I have created custom iso with debug output. It didn't help, so
>> another one with strace was created.
>> On Jan 27, 2016 00:56, "Alex Schultz"  wrote:
>>
>>> On Tue, Jan 26, 2016 at 2:16 PM, Stanislaw Bogatkin
>>>  wrote:
>>> > When there is too high strata, ntpdate can understand this and always
>>> write
>>> > this into its log. In our case there are just no log - ntpdate send
>>> first
>>> > packet, get an answer - that's all. So, fudging won't save us, as I
>>> think.
>>> > Also, it's a really bad approach to fudge a server which doesn't have
>>> a real
>>> > clock onboard.
>>>
>>> Do you have a debug output of the ntpdate somewhere? I'm not finding
>>> it in the bugs or in some of the snapshots for the failures. I did
>>> find one snapshot with the -v change that didn't have any response
>>> information so maybe it's the other problem where there is some
>>> network connectivity isn't working correctly or the responses are
>>> getting dropped somewhere?
>>>
>>> -Alex
>>>
>>> >
>>> > On Tue, Jan 26, 2016 at 10:41 PM, Alex Schultz 
>>> > wrote:
>>> >>
>>> >> On Tue, Jan 26, 2016 at 11:42 AM, Stanislaw Bogatkin
>>> >>  wrote:
>>> >> > Hi guys,
>>> >> >
>>> >> > for some time we have a bug [0] with ntpdate. It doesn't reproduced
>>> 100%
>>> >> > of
>>> >> > time, but breaks our BVT and swarm tests. There is no exact point
>>> where
>>> >> > problem root located. To better understand this, some verbosity to
>>> >> > ntpdate
>>> >> > output was added but in logs we can see only that packet exchange
>>> >> > between
>>> >> > ntpdate and server was started and was never completed.
>>> >> >
>>> >>
>>> >> So when I've hit this in my local environments there is usually one or
>>> >> two possible causes for this. 1) lack of network connectivity so ntp
>>> >> server never responds or 2) the stratum is too high.  My assumption is
>>> >> that we're running into #2 because of our revert-resume in testing.
>>> >> When we resume, the ntp server on the master may take a while to
>>> >> become stable. This sync in the deployment uses the fuel master for
>>> >> synchronization so if the stratum is too high, it will fail with this
>>> >> lovely useless error.  My assumption on what is happening is that
>>> >> because we aren't using a set of internal ntp servers but rather
>>> >> relying on the standard ntp.org pools.  So when the master is being
>>> >> resumed it's struggling to find a good enough set of servers so it
>>> >> takes a while to sync. This then causes these deployment tasks to fail
>>> >> because the master has not yet stabilized (might also be geolocation
>>> >> related).  We could either address this by fudging the stratum on the
>>> >> master server in the configs or possibly introducing our own more
>>> >> stable local ntp servers. I have a feeling fudging the stratum might
>>> >> be better when we only use the master in our ntp configuration.
>>> >>
>>> >> > As this bug is blocker, I propose to merge [1] to better
>>> understanding
>>> >> > what's going on. I created custom ISO with this patchset and tried
>>> to
>>> >> > run
>>> >> > about 10 BVT tests on this ISO. Absolutely with no luck. So, if we
>>> will
>>> >> > merge this, we would catch the problem much faster and understand
>>> root
>>> >> > cause.
>>> >> >
>>> >>
>>> >> I think we should merge the increased logging patch anyway because
>>> >> it'll be useful in troubleshooting but we also might want to look into
>>> >> getting an ntp peers list added into the snapshot.
>>> >>
>>> >> > I appreciate your answers, folks.
>>> >> >
>>> >> >
>>> >> > [0] https://bugs.launchpad.net/fuel/+bug/1533082
>>> >> > [1] https://review.openstack.org/#/c/271219/
>>> >> > --
>>> >> > with best regards,
>>> >> > Stan.
>>> >> >
>>> >>
>>> >> Thanks,
>>> >> -Alex
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > with best regards,
>>> > Stan.
>>> >
>>> >

Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-27 Thread Kuvaja, Erno
> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Monday, January 25, 2016 3:07 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on 
> the
> idea to move it forward
> 
> On 20/01/16 13:23 -0430, Flavio Percoco wrote:
> >Thoughts? Feedback?
> 
> Hey Folks,
> 
> Thanks a lot for the feedback. Great comments and proposals in the many
> replies.
> I've gone through the whole thread and collected the most common
> feedback.
> Here's the summary:
> 
> - The general idea of planning some sort of stabilization for a project is 
> good
>   but considering a cycle for it is terrible. It'd be easier if development
>   cycles would be shorter but the 6-month based development cycles don't
> allow
>   for planning this properly.
> 
> - Therefore, milestones are more likely to be good for this but there has to
> be
>   a good plan. What will happen with on-going features? How does a project
>   decide what to merge or not? Is it really going to help with reviews/bugs
>   backlog? Or would this just increase the backlog?
> 
> - We shouldn't need any governance resolution/reference for this. Perhaps a
>   chapter/section on the project team guide should do it.
> 
> - As with other changes in the community, it'd be awesome to get feedback from a
>   project doing this before we start encouraging other projects to do the
> same.
> 
> 
> I'll work on adding something to the project team guide that covers the
> above points.
> 
> did I miss something? Anything else that we should add and or consider?
> 

Sorry for jumping the gun this late, but I have been thinking about this since 
your first e-mail and one thing bothers me. Don't we have a stabilization cycle 
for each release starting right from the release?

In my understanding this is exactly what the stable releases' Support Phase I 
is: accepting bug fixes but no new features. After 6 months the release is moved 
to Phase II, where only critical and security fixes are accepted; I think this 
is a good example of a stabilization cycle, and the output is considered solid.

All concerns considered, I think the big problem really is to get people 
working on these cycles. Perhaps we should encourage more active maintenance on 
our stable branches and then see what expertise and knowledge we can bring from 
that to our development branches.

While I'm not a huge believer in constant agile development, this is one of 
those things that needs to be lived with, and I think stable branches are our 
best bet for stabilization work (specifically when that work needs to land on 
master first). For long-term refactoring I'd like to see us using more feature 
branches so we can keep doing the work without releasing it before it's done.

My 2 Euro cents,
Erno

> Cheers,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Dan Prince
On Tue, 2016-01-26 at 14:05 -0600, Ben Nemec wrote:
> On 01/25/2016 04:36 PM, Dan Prince wrote:
> > On Mon, 2016-01-25 at 15:31 -0600, Ben Nemec wrote:
> > > On 01/22/2016 06:19 PM, Dan Prince wrote:
> > > > On Fri, 2016-01-22 at 11:24 -0600, Ben Nemec wrote:
> > > > > So I haven't weighed in on this yet, in part because I was on
> > > > > vacation
> > > > > when it was first proposed and missed a lot of the initial
> > > > > discussion,
> > > > > and also because I wanted to take some time to order my
> > > > > thoughts
> > > > > on
> > > > > it.
> > > > >  Also because my initial reaction...was not conducive to calm
> > > > > and
> > > > > rational discussion. ;-)
> > > > > 
> > > > > The tldr is that I don't like it.  To explain why, I'm going
> > > > > to
> > > > > make
> > > > > a
> > > > > list (everyone loves lists, right? Top $NUMBER reasons we
> > > > > should
> > > > > stop
> > > > > expecting other people to write our API for us):
> > > > > 
> > > > > 1) We've been down this road before.  Except last time it was
> > > > > with
> > > > > Heat.
> > > > >  I'm being somewhat tongue-in-cheek here, but expecting a
> > > > > general
> > > > > service to provide us a user-friendly API for our specific
> > > > > use
> > > > > case
> > > > > just
> > > > > doesn't make sense to me.
> > > > 
> > > > We've been down this road with Heat yes. But we are currently
> > > > using
> > > > Heat for some things that we arguably shouldn't be (a workflows
> > > > tool
> > > > might
> > > > help offload some stuff out of Heat). Also we haven't
> > > > implemented
> > > > custom Heat resources for TripleO either. There are mixed
> > > > opinions
> > > > on
> > > > this but plugging in your code to a generic API is quite nice
> > > > sometimes.
> > > > 
> > > > That is the beauty of Mistral I think. Unlike Heat it actually
> > > > encourages you to customize it with custom Python actions.
> > > > Anything
> > > > we
> > > > want in tripleo-common can become our own Mistral action (these
> > > > get
> > > > registered with stevedore entry points so we'd own the code)
> > > > and
> > > > the
> > > > YAML workflows just tie them together via tasks.
> > > > 
> > > > We don't have to go off and build our own proxy deployment
> > > > workflow
> > > > API. The structure to do just about anything we need already
> > > > exists
> > > > so
> > > > why not go and use it?
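As a rough sketch of the pattern described above (hypothetical names; the
real tripleo-common actions differ), a custom Mistral action is just a Python
class exposed through a stevedore entry point and then referenced from a YAML
workflow task:

    # Hypothetical example; class and entry point names are made up.
    from mistral.actions import base


    class DeployPlanAction(base.Action):
        def __init__(self, plan_name):
            self.plan_name = plan_name

        def run(self):
            # The tripleo-common logic we want to own would live here.
            return {'plan': self.plan_name, 'status': 'deploying'}

        def test(self):
            # Dry-run result used when the workflow is run in test mode.
            return {'plan': self.plan_name, 'status': 'dry-run'}

    # Registered via the setup.cfg of the package providing the action:
    # [entry_points]
    # mistral.actions =
    #     tripleo.deploy_plan = tripleo_common.actions.deploy:DeployPlanAction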
> > > > 
> > > > > 
> > > > > 2) The TripleO API is not a workflow API.  I also largely
> > > > > missed
> > > > > this
> > > > > discussion, but the TripleO API is a _Deployment_ API.  In
> > > > > some
> > > > > cases
> > > > > there also happens to be a workflow going on behind the
> > > > > scenes,
> > > > > but
> > > > > honestly that's not something I want our users to have to
> > > > > care
> > > > > about.
> > > > 
> > > > Agree that users don't have to care about this.
> > > > 
> > > > Users can get as involved as they want here. Most users I think
> > > > will
> > > > use python-tripleoclient to drive the deployment or the new UI.
> > > > They
> > > > don't have to interact with Mistral directly unless they really
> > > > want
> > > > to. So whether we choose to build our own API or use a generic
> > > > one
> > > > I
> > > > think this point is moot.
> > > 
> > > Okay, I think this is a very fundamental point, and I believe it
> > > gets
> > > right to the heart of my objection to the proposed change.
> > > 
> > > When I hear you say that users will use tripleoclient to talk to
> > > Mistral, it raises a big flag.  Then I look at something like
> > > https://github.com/dprince/python-tripleoclient/commit/77ffd2fa7b
> > > 1642
> > > b9f05713ca30b8a27ec4b322b7
> > > and the flag gets bigger.
> > > 
> > > The thing is that there's a whole bunch of business logic
> > > currently
> > > sitting in the client that shouldn't/can't be there.  There are
> > > historical reasons for it, but the important thing is that the
> > > current
> > > client architecture is terribly flawed.  Business logic should
> > > never
> > > live in the client like it does today.
> > 
> > Totally agree here. In fact I have removed business logic from
> > python-
> > tripleoclient in this patch and moved it into a Mistral action.
> > Which
> > can then be used via a stable API from anywhere.
> > 
> > > 
> > > Looking at that change, I see a bunch of business logic around
> > > taking
> > > our configuration and passing it to Mistral.  In order for us to
> > > do
> > > something like that and have a sustainable GUI, that code _has_
> > > to
> > > live
> > > behind an API that the GUI and CLI alike can call.  If we ask the
> > > GUI
> > > to
> > > re-implement that code, then we're doomed to divergence between
> > > the
> > > CLI
> > > and GUI code and we'll most likely end up back where we are with
> > > a
> > > GUI
> > > that can't deploy half of our features because they were
> > > implemented
> > > solely with the CLI in mind and made assumptions the GUI can't
> > > meet.
> > 
> > The 

Re: [openstack-dev] [Nova][Cinder] Cleanly detaching volumes from failed nodes

2016-01-27 Thread Duncan Thomas
On 27 January 2016 at 06:40, Matt Riedemann 
wrote:

> On 1/27/2016 11:22 AM, Avishay Traeger wrote:
>
>>
>> I agree with you.  Actually, I think it would be more correct to have
>> Cinder store it, and not pass it at all to terminate_connection().
>>
>>
> That would be ideal but I don't know if cinder is storing this information
> in the database like nova is in the nova
> block_device_mappings.connection_info column.
>


This is being discussed for cinder, since it is useful for implementing
force detach / cleanup in cinder

-- 
Duncan Thomas


Re: [openstack-dev] [oslo] Sachi King for oslo core

2016-01-27 Thread Morgan Fainberg
Yay Sachi!
On Jan 27, 2016 05:01, "Sachi King"  wrote:

> Thanks for the vote of confidence all, I look forward to expanding
> what I'm working on.
>
> Cheers,
> Sachi
>
>


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-27 Thread Dougal Matthews
On 26 January 2016 at 16:01, Ben Nemec  wrote:

> On 01/26/2016 03:46 AM, Steven Hardy wrote:
> > On Mon, Jan 25, 2016 at 05:45:30PM -0600, Ben Nemec wrote:
> >> On 01/25/2016 03:56 PM, Steven Hardy wrote:
> >>> On Fri, Jan 22, 2016 at 11:24:20AM -0600, Ben Nemec wrote:
>  So I haven't weighed in on this yet, in part because I was on vacation
>  when it was first proposed and missed a lot of the initial discussion,
>  and also because I wanted to take some time to order my thoughts on
> it.
>   Also because my initial reaction...was not conducive to calm and
>  rational discussion. ;-)
> 
>  The tldr is that I don't like it.  To explain why, I'm going to make a
>  list (everyone loves lists, right? Top $NUMBER reasons we should stop
>  expecting other people to write our API for us):
> 
>  1) We've been down this road before.  Except last time it was with
> Heat.
>   I'm being somewhat tongue-in-cheek here, but expecting a general
>  service to provide us a user-friendly API for our specific use case
> just
>  doesn't make sense to me.
> >>>
> >>> Actually, we've been down this road before with Tuskar, and discovered
> that
> >>> designing and maintaining a bespoke API for TripleO is really hard.
> >>
> >> My takeaway from Tuskar was that designing an API that none of the
> >> developers on the project use is doomed to fail.  Tuskar also suffered
> >> from a lack of some features in Heat that the new API is explicitly
> >> depending on in an attempt to avoid many of the problems Tuskar had.
> >>
> >> Problem #1 is still developer apathy IMHO though.
> >
> > I think the main issue is developer capacity - we're a small community
> and
> > I for one am worried about the effort involved with building and
> > maintaining a bespoke API - thus this whole discussion is essentially
> about
> > finding a quicker and easier way to meet the needs of those needing an
> API.
> >
> > In terms of apathy, I think as a developer I don't need an abstraction
> > between me, my templates and heat.  Some advanced operators will feel
> > likewise, others won't.  What I would find useful sometimes is a general
> > purpose workflow engine, which is where I think the more pluggable
> mistral
> > based solution may have advantages in terms of developer and advanced
> > operator uptake.
>
> The API is for end users, not developers.  tripleo-incubator was easily
> hackable for developers and power users.  It was unusable for everyone
> else.
>

Doesn't it depend on what you mean by end users? I would argue that
typically the CLI and UI will be for end users. The API is for end users
that are also developers. I don't imagine we are going to suggest
non-developer end users directly use the API.


>
> >>> I somewhat agree that heat as an API is insufficient, but that doesn't
> >>> necessarily imply you have to have a TripleO specific abstraction, just
> >>> that *an* abstraction is required.
> >>>
>  2) The TripleO API is not a workflow API.  I also largely missed this
>  discussion, but the TripleO API is a _Deployment_ API.  In some cases
>  there also happens to be a workflow going on behind the scenes, but
>  honestly that's not something I want our users to have to care about.
> >>>
> >>> How would you differentiate between "deployment" in a generic sense in
> >>> contrast to a generic workflow?
> >>>
> >>> Every deployment I can think of involves a series of steps, involving
> some
> >>> choices and interactions with other services.  That *is* a workflow?
> >>
> >> Well, I mean if we want to take this to extremes then pretty much every
> >> API is a workflow API.  You make a REST call, a "workflow" happens in
> >> the service, and you get back a result.
> >>
> >> Let me turn this around: Would you implement Heat's API on Mistral?  All
> >> that happens when I call Heat is that a series of OpenStack calls are
> >> made from heat-engine, after all.  Or is that a gross oversimplification
> >> of what's happening?  I could argue that the same is true of this
> >> discussion. :-)
> >
> > As Hugh has mentioned the main thing Heat does is actually manage
> > dependencies.  It processes the templates, builds a graph, then walks the
> > graph running a "workflow" to create/update/delete/etc each resource.
> >
> > I could imagine a future where we interface to some external workflow
> tool to
> > e.g do each resource action (e.g create a nova server, poll until it's
> active),
> > however that's actually a pretty high overhead approach, and it'd
> probably
> > be better to move towards better use of notifications instead (e.g less
> > internal workflow)
> >
>  3) It ties us 100% to a given implementation.  If Mistral proves to
> be a
>  poor choice for some reason, or insufficient for a particular use
> case,
>  we have no alternative.  If we have an API and decide to change our
>  implementation, nobody has to know or care.  
