Re: [openstack-dev] [Kuryr] IPAM issue with multiple docker networks having same cidr subnets

2016-05-27 Thread Vikas Choudhary
Thanks Toni. After giving it some thought, the addressSpace approach makes
sense to me as well. We can map Neutron addressScopes to Docker
addressSpaces.

I have drafted a blueprint here [1]. I think I now have enough clarity on
the approach and will be pushing a spec for this soon.



Thanks
Vikas

[1] https://blueprints.launchpad.net/kuryr/+spec/address-scopes-spaces
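
For illustration, a minimal sketch of the poolID tagging/filtering idea
described in the quoted message below. The name-based tagging convention,
the client setup and the helper names are assumptions for illustration
only, not the actual Kuryr implementation:

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0')

    POOL_PREFIX = 'kuryrPool-'

    def tag_subnet_with_pool(subnet_id, pool_id):
        # Record the libnetwork pool id where we can find it again
        # (here: encoded in the subnet name).
        neutron.update_subnet(subnet_id,
                              {'subnet': {'name': POOL_PREFIX + pool_id}})

    def find_subnet_for_address_request(pool_id, cidr):
        # Two docker networks may share the same CIDR, so filtering by
        # CIDR alone is ambiguous; the recorded pool id disambiguates.
        subnets = neutron.list_subnets(cidr=cidr)['subnets']
        wanted = POOL_PREFIX + pool_id
        return next((s for s in subnets if s['name'] == wanted), None)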

On Sat, May 28, 2016 at 1:54 AM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Thu, May 26, 2016 at 9:48 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi All,
>>
>> Recently, Banix observed and brought this issue [1] to our attention.
>>
>> To solve this, I can think of two approaches:
>> 1. Modifying the libnetwork APIs to also pass the PoolID at network creation.
>>  OR
>> 2. Enhancing the /network Docker API to also return the PoolID details.
>>
>> The problem with the first approach is that it changes the libnetwork
>> interface, which is common to all remote drivers, so the chances of
>> breakage are high. I therefore preferred the second one.
>>
>> Here is the patch I pushed to docker [2].
>>
>> Once this is merged, we can easily fix this issue by tagging poolID to
>> neutron networks and filtering subnets at address request time based on
>> this information.
>>
>> Any thoughts/suggestions?
>>
>
> I think following the address scope proposal at [2] is the best course of
> action. Thanks for taking
> it up with Docker upstream!
>
>
>>
>>
>> Thanks
>> Vikas
>>
>> [1] https://bugs.launchpad.net/kuryr/+bug/1585572
>> [2] https://github.com/docker/docker/issues/23025
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-27 Thread Adam Young

On 05/27/2016 02:30 PM, Raildo Mascena wrote:
In addition, I'm one of the folks working on the v3-only
gates. The main case we are looking for is when the functional
job is working and the v3-only one is not, so for anything related to
these jobs you can just ping me on irc. :)




Thanks, will do.  Just want to make sure that we treat failing tests 
like a fire alarm:  do not get used to seeing red on the tests, even if 
they are non-voting.  It's a sign of a deeper problem.


Yeah...a bit of a hot button topic for me.




Cheers,

Raildo

On Thu, May 26, 2016 at 6:27 PM Rodrigo Duarte 
> wrote:


The function-nv one was depending on a first test being merged =)

The v3 depends directly on it, the difference is that it passes a
flag to deactivate v2.0 in devstack.

On Thu, May 26, 2016 at 5:48 PM, Steve Martinelli
> wrote:

On Thu, May 26, 2016 at 12:59 PM, Adam Young
> wrote:

On 05/26/2016 11:36 AM, Morgan Fainberg wrote:



On Thu, May 26, 2016 at 7:55 AM, Adam Young
> wrote:

Some mix of these three tests is almost always failing:

gate-keystone-dsvm-functional-nv FAILURE in 20m 04s
(non-voting)
gate-keystone-dsvm-functional-v3-only-nv FAILURE in
32m 45s (non-voting)
gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in
1h 07m 53s (non-voting)


Are we going to keep them running and failing, or
boot them?  If we are going to keep them, who is
going to commit to fixing them?

We should not live with broken windows.



The uwsgi check should be moved to a proper run utilizing
mod_proxy_uwsgi.

Who wants to own this?  I am not fielding demands for
uwsgi support myself, and kind of think it is just a
novelty, thus would not mind seeing it go away.  If
someone really cares, please make yourself known.


Brant has a patch (https://review.openstack.org/#/c/291817/)
that adds support in devstack to use uwsgi and mod_proxy_http.
This is blocked until infra moves to Ubuntu Xenial. Once this
merges we can propose a patch that swaps out the uwsgi job for
uwsgi + mod_proxy_http.





The v3 only one is a WIP that a few folks are working on

Fair enough.


The function-nv one was passing somewhere. I think that
one is close.


Yeah, it seems to be intermittent.


These two are actively being worked on.






-- 
Rodrigo Duarte Sousa

Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

[openstack-dev] [infra] [smaug] gate fullstack is shown as NOT_REGISTERED

2016-05-27 Thread xiangxinyong
Hi teams,

We have merged the smaug fullstack job into project-config with this patch [1].


But gate-smaug-dsvm-fullstack-nv is shown as NOT_REGISTERED:
gate-smaug-dsvm-fullstack-nv    NOT_REGISTERED (non-voting)
It can also be seen in this patch [2].


Could someone help us get the fullstack job running in smaug?
Thanks very much.




Best Regards,
xiangxinyong


[1] https://review.openstack.org/#/c/317566/
[2] https://review.openstack.org/#/c/319158/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [craton] meeting is published

2016-05-27 Thread sean roberts
http://eavesdrop.openstack.org/#Craton_Team_Meeting


~ sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-27 Thread Fox, Kevin M
Hi Zane,

I've been working on the k8s side of the equation right now...

See these two PR's:
https://github.com/kubernetes/kubernetes/pull/25391
https://github.com/kubernetes/kubernetes/pull/25624

I'm still hopeful these can make k8s 1.3 as experimental plugins. There is 
keystone username/password auth support in 1.2 & 1.3, but it is unsuitable for 
heat usage. It also does not support authorization at all.

After these patches are in, heat, horizon, and higgins should be able to use 
the k8s api. I believe they should be complete enough for testing now though, 
if you want to build it yourself.

There also will need to be a small patch to magnum to set the right flags to 
bind the deployed k8s to the local cloud if you want to use magnum to deploy.

After the patches are in, I was thinking about taking a stab at a heat resource 
for deployments, but if you can get to it before I can, that would be great 
too. :)

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Friday, May 27, 2016 3:30 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap 
analysis: Heat as a k8s orchestrator


[openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-27 Thread Zane Bitter
I spent a bit of time exploring the idea of using Heat as an external 
orchestration layer on top of Kubernetes - specifically in the case of 
TripleO controller nodes but I think it could be more generally useful 
too - but eventually came to the conclusion it doesn't work yet, and 
probably won't for a while. Nevertheless, I think it's helpful to 
document a bit to help other people avoid going down the same path, and 
also to help us focus on working toward the point where it _is_ 
possible, since I think there are other contexts where it would be 
useful too.


We tend to refer to Kubernetes as a "Container Orchestration Engine" but 
it does not actually do any orchestration, unless you count just 
starting everything at roughly the same time as 'orchestration'. Which I 
wouldn't. You generally handle any orchestration requirements between 
services within the containers themselves, possibly using external 
services like etcd to co-ordinate. (The Kubernetes project refers to this
as "choreography", and explicitly disclaim any attempt at orchestration.)


What Kubernetes *does* do is more like an actively-managed version of 
Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief recap: 
SoftwareDeploymentGroup is a type of ResourceGroup; you give it a map of 
resource names to server UUIDs and it creates a SoftwareDeployment for 
each server. You have to generate the list of servers somehow to give it 
(the easiest way is to obtain it from the output of another 
ResourceGroup containing the servers). If e.g. a server goes down you 
have to detect that externally, and trigger a Heat update that removes 
it from the templates, redeploys a replacement server, and regenerates 
the server list before a replacement SoftwareDeployment is created. In 
contrast, Kubernetes is running on a cluster of servers, can use rules
to determine where to run containers, and can very quickly redeploy 
without external intervention in response to a server or container 
falling over. (It also does rolling updates, which Heat can also do 
albeit in a somewhat hacky way when it comes to SoftwareDeployments - 
which we're planning to fix.)


So this seems like an opportunity: if the dependencies between services 
could be encoded in Heat templates rather than baked into the containers 
then we could use Heat as the orchestration layer following the 
dependency-based style I outlined in [1]. (TripleO is already moving in 
this direction with the way that composable-roles uses 
SoftwareDeploymentGroups.) One caveat is that fully using this style 
likely rules out for all practical purposes the current Pacemaker-based 
HA solution. We'd need to move to a lighter-weight HA solution, but I 
know that TripleO is considering that anyway.


What's more though, assuming this could be made to work for a Kubernetes 
cluster, a couple of remappings in the Heat environment file should get 
you an otherwise-equivalent single-node non-HA deployment basically for 
free. That's particularly exciting to me because there are definitely 
deployments of TripleO that need HA clustering and deployments that 
don't and which wouldn't want to pay the complexity cost of running 
Kubernetes when they don't make any real use of it.


So you'd have a Heat resource type for the controller cluster that maps 
to either an OS::Nova::Server or (the equivalent of) an OS::Magnum::Bay, 
and a bunch of software deployments that map to either a 
OS::Heat::SoftwareDeployment that calls (I assume) docker-compose 
directly or a Kubernetes Pod resource to be named later.
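
Expressed as the dict form of an environment file (alias names invented
here, and the Pod type is just a placeholder for the resource discussed
next), those remappings might look something like:

    # Single-node, non-HA: the cluster alias is an ordinary server and
    # each deployment alias is an ordinary software deployment.
    single_node_env = {
        'resource_registry': {
            'OS::TripleO::ControllerCluster': 'OS::Nova::Server',
            'OS::TripleO::ControllerDeployment': 'OS::Heat::SoftwareDeployment',
        },
    }

    # Kubernetes-backed: the same templates, with the aliases remapped.
    k8s_env = {
        'resource_registry': {
            'OS::TripleO::ControllerCluster': 'OS::Magnum::Bay',
            # No such resource type exists in Heat today; see below.
            'OS::TripleO::ControllerDeployment': 'OS::Kubernetes::Pod',
        },
    }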


The first obstacle is that we'd need that Kubernetes Pod resource in 
Heat. Currently there is no such resource type, and the OpenStack API 
that would be expected to provide that API (Magnum's /container 
endpoint) is being deprecated, so that's not a long-term solution.[2] 
Some folks from the Magnum community may or may not be working on a 
separate project (which may or may not be called Higgins) to do that. 
It'd be some time away though.


An alternative, though not a good one, would be to create a Kubernetes 
resource type in Heat that has the credentials passed in somehow. I'm 
very against that though. Heat is just not good at handling credentials 
other than Keystone ones. We haven't ever created a resource type like 
this before, except for the Docker one in /contrib that serves as a 
prime example of what *not* to do. And if it doesn't make sense to wrap 
an OpenStack API around this then IMO it isn't going to make any more 
sense to wrap a Heat resource around it.


A third option might be a SoftwareDeployment, possibly on one of the 
controller nodes themselves, that calls the k8s client. (We could create 
a software deployment hook to make this easy.) That would suffer from 
all of the same issues that TripleO currently has about having to choose 
a server on which to deploy though.


The secondary obstacle is networking. TripleO has some pretty 
complicated networking 

[openstack-dev] [Neutron] Stadium Evolution - next steps

2016-05-27 Thread Armando M.
Hi Neutrinos,

I wanted to give an update on [1]. Based on the feedback received so far I
think it is time to get on the next stage of this transition and execute on
some of the things identified in the proposal, as well as provide more
detailed information to some of the folks involved in the projects affected
by this proposal.

More precisely, I will be going over [2] to revise the content and make
sure it is in line with the spec proposal. I will also start consolidating
Neutron's API into neutron-lib and shepherd the transition.

Please do not hesitate to reach out with any questions.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/312199/
[2] http://docs.openstack.org/developer/neutron/#neutron-stadium
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-27 Thread Joshua Harlow
I get this idea, I just want to bring up the option that if u only start 
off with a basic vision, then u basically get that as a result, vs IMHO 
where u start off with a bigger/greater vision and work on getting there.


I'd personally rather work on a project that has an advanced vision vs
one that has 'just do the basics' but that's just my 2 cents...


Hongbin Lu wrote:

I agree with you and Qiming. The Higgins project should start with basic
functionalities and revisit advanced features later.

Best regards,

Hongbin

*From:*Yanyan Hu [mailto:huyanya...@gmail.com]
*Sent:* May-24-16 11:06 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [higgins] Continued discussion from the
last team meeting

Hi Hongbin, thanks a lot for the summary! The following are my thoughts
on the two questions left:

About container composition, it is a really useful and important feature
for enduser. But based on my understanding, user can actually achieve
the same goal by leveraging other high level OpenStack services, e.g.
defining a Heat template with Higgins container resources and
app/service (softwareconfig/softwaredeployment resources) running inside
containers. In future we can implement related functionality inside
Higgins to better support this kind of use cases natively. But in
current stage, I suggest we focus on container primitive and its basic
operations.

For container host management, I agree we should expose related API
interfaces to operator(admin). Ideally, Higgins should be able to manage
all container hosts(baremetal and VM) automatically, but manual
intervention could be necessary in many practical use cases. But I
suggest to hide these API interfaces from endusers since it's not their
responsibility to manage those hosts.

Thanks.

2016-05-25 4:55 GMT+08:00 Hongbin Lu >:

Hi all,

At the last team meeting, we tried to define the scope of the Higgins
project. In general, we agreed to focus on the following features as an
initial start:

·Build a container abstraction and use docker as the first implementation.

·Focus on basic container operations (i.e. CRUD), and leave advanced
operations (i.e. keep container alive, rolling upgrade, etc.) to users
or other projects/services.

·Start with non-nested container use cases (e.g. containers on physical
hosts), and revisit nested container use cases (e.g. containers on VMs)
later.

The items below need further discussion, so I started this ML thread to discuss them.

1.Container composition: implement a docker compose like feature

2.Container host management: abstract container host

For #1, it seems we broadly agreed that this is a useful feature. The
argument is about where this feature belongs. Some people think this
feature belongs to other projects, such as Heat, and others think it
belongs to Higgins so we should implement it. For #2, we were mainly
debating two things: where the container hosts come from (provisioned by
Nova or provided by operators); should we expose host management APIs to
end-users? Thoughts?

Best regards,

Hongbin






--

Best regards,

Yanyan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread Emilien Macchi
On Fri, May 27, 2016 at 3:08 PM, James Slagle  wrote:
> On Fri, May 27, 2016 at 2:37 PM, Emilien Macchi  wrote:
>> On Fri, May 27, 2016 at 2:03 PM, James Slagle  wrote:
>>> I've been working on various patches to TripleO to make it possible
>>> for the baremetal provisioning part of the workflow to be optional. In
>>> such a scenario, TripleO wouldn't use Nova or Ironic to boot any
>>> baremetal nodes. Instead it would rely on the nodes to be already
>>> installed with an OS and powered on. We then use Heat to drive the
>>> deployment of OpenStack on those nodes...that part of the process is
>>> largely unchanged.
>>>
>>> One of the things this would allow TripleO to do is make use of CI
>>> jobs using nodes just from the regular cloud providers in nodepool
>>> instead of having to use our own TripleO cloud
>>> (tripleo-test-cloud-rh1) to run all our jobs.
>>>
>>> I'm at a point where I can start working on patches to try and set
>>> this up, but I wanted to provide this context so folks were aware of
>>> the background.
>>>
>>> We'd probably start with our simplest configuration of a job with at
>>> least 3 nodes (undercloud/controller/compute), and using CentOS
>>> images. It looks like right now all multinode jobs are 2 nodes only
>>> and use Ubuntu. My hope is that I/we can make some progress in
>>> different multinode configurations and collaborate on any setup
>>> scripts or ansible playbooks in a generally useful way. I know there
>>> was interest in different multinode setups from the various deployment
>>> teams at the cross project session in Austin.
>>>
>>> If there are any pitfalls or if there are any concerns about TripleO
>>> going in this direction, I thought we could discuss those here. Thanks
>>> for any feedback.
>>
>> It is more a question than a concern:
>> are we still going to test baremetal introspection with Ironic
>> somewhere in OpenStack?
>>
>> I like the way it goes but I'm wondering if the things that we're not
>> going to test anymore will still be tested somewhere else (maybe in
>> Ironic / Nova CI jobs) or maybe it's already the case and then stop me
>> here.
>>
>
> I should have clarified: we're not moving away from still having our
> own cloud running the TripleO jobs we have today.

Thanks for this clarification!

> This is about adding new jobs to test a different way of deploying via
> TripleO. Since we'd be able to use nodepool nodes directly to do that,
> I'm proposing to do it that way.
>
> If it pans out, I'd expect us to have a variety of jobs running with
> different permutations so that we can have as much coverage as
> possible.

I like it. I see different use cases where we don't need to test
baremetal provisioning. One of them is openstack/puppet-* testing
(except for puppet-nova and puppet-ironic, maybe) where we just want to
run puppet on an undercloud / overcloud and see if services are
working. With the new jobs, I would propose thinking about removing the old
jobs and using the new multi-node jobs; it will help us save time and
resources and test what we actually need to test.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread Joshua Harlow
Seems like u could just use
http://docs.openstack.org/developer/taskflow/jobs.html (it appears that
you may not be?); the failed job would then be worked on by
a different job consumer.


Have u looked at those? It almost appears that u are using celery as a 
job distribution system (similar to the jobs.html link mentioned above)? 
Is that somewhat correct (I haven't seen anyone try this, wondering how 
u are using it and the choices that directed u to that, aka, am curious)?


-Josh

pnkk wrote:

To be specific, we hit this issue when the node running our service is
rebooted.
Our solution is designed in a way that each and every job is a celery
task and inside celery task, we create taskflow flow.

We enabled late_acks in celery(uses rabbitmq as message broker), so if
our service/node goes down, other healthy service can pick the job and
completes it.
This works fine, but we just hit this rare case where the node was
rebooted just when taskflow is updating something to the database.

In this case, it raises an exception and the job is marked as failed. Since
it is complete (with failure), the message is removed from RabbitMQ and
another worker will not be able to process it.
Can taskflow handle such I/O errors gracefully, or should the application try
to catch this exception? If the application has to handle it, what would
happen to the particular database transaction that failed just when
the node was rebooted? Who will retry this transaction?

Thanks,
Kanthi

On Fri, May 27, 2016 at 5:39 PM, pnkk > wrote:

Hi,

When taskflow engine is executing a job, the execution failed due to
IO error(traceback pasted below).

2016-05-25 19:45:21.717 7119 ERROR
taskflow.engines.action_engine.engine 127.0.1.1 [-]  Engine
execution has failed, something bad must of happened (last 10
machine transitions were [('SCHEDULING', 'WAITING'), ('WAITING',
'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING',
'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING', 'SCHEDULING'),
('SCHEDULING', 'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING',
'GAME_OVER'), ('GAME_OVER', 'FAILURE')])
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine Traceback (most recent call last):
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",
line 269, in run_iter
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine
failure.Failure.reraise_if_any(memory.failures)
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
line 336, in reraise_if_any
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine failures[0].reraise()
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
line 343, in reraise
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine six.reraise(*self._exc_info)
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
line 94, in schedule
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine
futures.add(scheduler.schedule(atom))
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
line 67, in schedule
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine return
self._task_action.schedule_execution(task)
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
line 99, in schedule_execution
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine self.change_state(task,
states.RUNNING, progress=0.0)
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine   File

"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
line 67, in change_state
2016-05-25 19:45:21.717 7119 TRACE
taskflow.engines.action_engine.engine
self._storage.set_atom_state(task.name , state)
2016-05-25 19:45:21.717 

Re: [openstack-dev] [Kuryr] IPAM issue with multiple docker networks having same cidr subnets

2016-05-27 Thread Antoni Segura Puimedon
On Thu, May 26, 2016 at 9:48 PM, Vikas Choudhary  wrote:

> Hi All,
>
> Recently, Banix observed and brought this issue [1] to our attention.
>
> To solve this, I can think of two approaches:
> 1. Modifying the libnetwork APIs to also pass the PoolID at network creation.
>  OR
> 2. Enhancing the /network Docker API to also return the PoolID details.
>
> The problem with the first approach is that it changes the libnetwork
> interface, which is common to all remote drivers, so the chances of
> breakage are high. I therefore preferred the second one.
>
> Here is the patch I pushed to docker [2].
>
> Once this is merged, we can easily fix this issue by tagging poolID to
> neutron networks and filtering subnets at address request time based on
> this information.
>
> Any thoughts/suggestions?
>

I think following the address scope proposal at [2] is the best course of
action. Thanks for taking
it up with Docker upstream!


>
>
> Thanks
> Vikas
>
> [1] https://bugs.launchpad.net/kuryr/+bug/1585572
> [2] https://github.com/docker/docker/issues/23025
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-05-27 Thread Rodrigo Barbieri
Hello Marios,

The Data Service is needed for Share Migration feature in manila since the
Mitaka release.

There has not been any work done yet towards adding it to puppet. Since its
introduction in Mitaka, it has been made compatible only with devstack so
far.

I have not invested time thinking about how it should fit in a HA
environment at this stage, this is a service that currently sports a single
instance, but we have plans to make it more scalable in the future.
What I have briefly thought about is the idea where there would be a
scheduler that decides whether to send the data job to m-dat1, m-dat2 or
m-dat3 and so on, based on information that indicates how busy each Data
Service instance is.

For this moment, active/passive makes sense in the context that manila
expects only a single instance of m-dat. But active/active would allow the
service to be load balanced through HAProxy and could partially accomplish
what we have plans to achieve in the future.

I hope I have addressed your question. The absence of m-dat means that the
Share Migration feature will not work.


Regards,

On Fri, May 27, 2016 at 10:10 AM, Marios Andreou  wrote:

> Hi all, I explicitly cc'd a few folks I thought might be interested for
> visibility, sorry for spam if you're not. This email is about getting
> manila landed into tripleo asap, and the current obstacles to that (at
> least those visible to me):
>
> The current review [1] isn't going to land as is, regardless of the
> outcome/discussion of any of the following points because all the
> services are going to "composable controller services". How do people
> feel about me merging my review at [2] into its parent review (which is
> the current manilla review at [1]). My review just takes what is in  [1]
> (caveats below) and makes it 'composable', and includes a dependency on
> [3] which is the puppet-tripleo side for the 'composable manila'.
>
>---> Proposal merge the 'composable manila' tripleo-heat-templates
> review @ [2] into the parent review @ [1]. The review at [2] will be
> abandoned. We will continue to try and land [1] in its new 'composable
> manila' form.
>
> WRT the 'caveats' mentioned above and why I haven't just ported
> what is in the current manila review @ [1] into the composable one @
> [2]... there are two main things I've changed, both of which on
> guidance/discussion on the reviews.
>
> The first is addition of manila-data (wasn't in the original/current
> review at [1]). The second a change to the pacemaker constraints, which
> I've corrected to make manila-data and manila-share pacemaker a/p but
> everything else systemd managed, based on ongoing discussion at [3].
>
> So IMO to move forward I need clarity on both those points. For
> manila-data, my concern is whether it is already available where we need it. I
> looked at puppet-manila [4] and couldn't quickly find much (any) mention
> of manila-data. We need it there if we are to configure anything for it
> via puppet. The other unkown/concern here is does manila-data get
> delivered with the manila package (I recall manila-share possibly, at
> least one of them, had a stand-alone package) otherwise we'll need to
> add it to the image. But mainly my question here is, can we live without
> it? I mean can we deploy sans manila-data or does it just not make sense
> (sorry for silly question). The motivation is if we can let's land and
> iterate to add it.
>
>Q. Can we live w/out manila-data so we can land and iterate (esp. if
> we need to land things into puppet-manila or anywhere else it is yet to
> be landed)
>
> For the pacemaker constraints I'm mainly just waiting for confirmation
> of our current understanding.. manila-share and manila-data are a/p
> pacemaker managed, everything else systemd.
>
> thanks for any info, I will follow up and update the reviews accordingly
> based on any comments,
>
> marios
>
> [1] "Enable Manila integration" https://review.openstack.org/#/c/188137/
> [2] "Composable manila tripleo-heat-templates side"
> https://review.openstack.org/#/c/315658/
> [3] "Adds the puppet-tripleo manifests for manila"
> https://review.openstack.org/#/c/313527/
> [4] "openstack/puppet-manila" https://github.com/openstack/puppet-manila
>



-- 
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Contributor
Federal University of São Carlos
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Team meeting - cancelled

2016-05-27 Thread Armando M.
Neutrinos,

Because of holidays in US/UK, it's probably safer to cancel the meeting for
the week starting Monday 30th.

We are approaching N-1, and we'll cut the release sometime next week.
Please be aware of release deadlines [1], if you have cross-project items
you are working on.

Cheers,
Armando

[1] http://releases.openstack.org/newton/schedule.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread James Slagle
On Fri, May 27, 2016 at 2:37 PM, Emilien Macchi  wrote:
> On Fri, May 27, 2016 at 2:03 PM, James Slagle  wrote:
>> I've been working on various patches to TripleO to make it possible
>> for the baremetal provisioning part of the workflow to be optional. In
>> such a scenario, TripleO wouldn't use Nova or Ironic to boot any
>> baremetal nodes. Instead it would rely on the nodes to be already
>> installed with an OS and powered on. We then use Heat to drive the
>> deployment of OpenStack on those nodes...that part of the process is
>> largely unchanged.
>>
>> One of the things this would allow TripleO to do is make use of CI
>> jobs using nodes just from the regular cloud providers in nodepool
>> instead of having to use our own TripleO cloud
>> (tripleo-test-cloud-rh1) to run all our jobs.
>>
>> I'm at a point where I can start working on patches to try and set
>> this up, but I wanted to provide this context so folks were aware of
>> the background.
>>
>> We'd probably start with our simplest configuration of a job with at
>> least 3 nodes (undercloud/controller/compute), and using CentOS
>> images. It looks like right now all multinode jobs are 2 nodes only
>> and use Ubuntu. My hope is that I/we can make some progress in
>> different multinode configurations and collaborate on any setup
>> scripts or ansible playbooks in a generally useful way. I know there
>> was interest in different multinode setups from the various deployment
>> teams at the cross project session in Austin.
>>
>> If there are any pitfalls or if there are any concerns about TripleO
>> going in this direction, I thought we could discuss those here. Thanks
>> for any feedback.
>
> It is more a question than a concern:
> are we still going to test baremetal introspection with Ironic
> somewhere in OpenStack?
>
> I like the way it goes but I'm wondering if the things that we're not
> going to test anymore will still be tested somewhere else (maybe in
> Ironic / Nova CI jobs) or maybe it's already the case and then stop me
> here.
>

I should have clarified: we're not moving away from still having our
own cloud running the TripleO jobs we have today.

This is about adding new jobs to test a different way of deploying via
TripleO. Since we'd be able to use nodepool nodes directly to do that,
I'm proposing to do it that way.

If it pans out, I'd expect us to have a variety of jobs running with
different permutations so that we can have as much coverage as
possible.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Andrew Laski


On Fri, May 27, 2016, at 11:25 AM, Matthew Treinish wrote:
> On Fri, May 27, 2016 at 05:52:51PM +0300, Vasyl Saienko wrote:
> > Lucas, Andrew
> > 
> > Thanks for fast response.
> > 
> > On Fri, May 27, 2016 at 4:53 PM, Andrew Laski  wrote:
> > 
> > >
> > >
> > > On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> > > > Hi,
> > > >
> > > > Thanks for bringing this up Vasyl!
> > > >
> > > > > At the moment Nova with ironic virt_driver consider instance as
> > > deleted,
> > > > > while on Ironic side server goes to cleaning which can take a while. 
> > > > > As
> > > > > result current implementation of Nova tempest tests doesn't work for
> > > case
> > > > > when Ironic is enabled.
> > >
> > > What is the actual failure? Is it a capacity issue because nodes do not
> > > become available again quickly enough?
> > >
> > >
> > The actual failure is that the tempest community doesn't want to accept
> > option 1.
> > https://review.openstack.org/315422/
> > And I'm not sure that it is the right way.
> 
> No Andrew is right, this is a resource limitation in the gate. The
> failures
> you're hitting are caused by resource constraints in the gate and not
> having
> enough available nodes to run all the tests because deleted nodes are
> still
> cleaning (or doing another operation) and aren't available to nova for
> booting
> another guest.
> 
> I -2d that patch because it's a workaround for the fundamental issue here
> and
> not actually an appropriate change for Tempest. What you've implemented
> in that
> patch is the equivalent of talking to libvirt or some other hypervisor
> directly
> to find out if something is actually deleted. It's a layer violation,
> there is
> never a reason that should be necessary especially in a test of the nova
> api.
> 
> > 
> > > >
> > > > > There are two possible options how to fix it:
> > > > >
> > > > >  Update Nova tempest test scenarios for Ironic case to wait when
> > > cleaning is
> > > > > finished and Ironic node goes to 'available' state.
> > > > >
> > > > > Mark instance as deleted in Nova only after cleaning is finished on
> > > Ironic
> > > > > side.
> > > > >
> > > > > I'm personally incline to 2 option. From user side successful instance
> > > > > termination means that no instance data is available any more, and
> > > nobody
> > > > > can access/restore that data. Current implementation breaks this rule.
> > > > > Instance is marked as successfully deleted while in fact it may be not
> > > > > cleaned, it may fail to clean and user will not know anything about 
> > > > > it.
> > > > >
> > 
> > >
> > > > I don't really like option #2, cleaning can take several hours
> > > > depending on the configuration of the node. I think that it would be a
> > > > really bad experience if the user of the cloud had to wait a really
> > > > long time before his resources are available again once he deletes an
> > > > instance. The idea of marking the instance as deleted in Nova quickly
> > > > is aligned with our idea of making bare metal deployments
> > > > look-and-feel like VMs for the end user. And also (one of) the
> > > > reason(s) why we do have a separated state in Ironic for DELETING and
> > > > CLEANING.
> > >
> > 
> > The resources will be available only if there are other available baremetal
> > nodes in the cloud.
> > User doesn't have ability to track for status of available resources
> > without admin access.
> > 
> > 
> > > I agree. From a user perspective once they've issued a delete their
> > > instance should be gone. Any delay in that actually happening is purely
> > > an internal implementation detail that they should not care about.
> > >
> 
> Delete is an async operation in Nova. There is never any immediacy here
> it
> always takes an indeterminate amount of time between it being issued by
> the user
> and the server actually going away. The disconnect here is that when
> running
> with the ironic driver the server disappears from Nova but the resources
> aren't
> freed back when that happens until the cleaning is done. I'm pretty sure
> this is
> different from all the other Nova drivers. 
> 
> I don't really have a horse in this race so whatever ends up being
> decided for
> the behavior here is fine. But, I think we need to be clear with what the
> behavior here is and want we actually want. Personally, I don't see an
> issue
> with the node being in the deleting task_state for a long time because
> that's
> what is really happening while it's deleting. To me a delete is only
> finished
> when the resource is actually gone and it's consumed resources return to
> the
> pool.

I wouldn't argue against an instance hanging around in a deleting state
for a long time. However at this time quota usage is not reduced until
the instance is considered to have been deleted. I think those would
need to be decoupled in order to leave instances in a deleting state. A
user should not need to wait hours to get their quota back just because

Re: [openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread Emilien Macchi
On Fri, May 27, 2016 at 2:03 PM, James Slagle  wrote:
> I've been working on various patches to TripleO to make it possible
> for the baremetal provisioning part of the workflow to be optional. In
> such a scenario, TripleO wouldn't use Nova or Ironic to boot any
> baremetal nodes. Instead it would rely on the nodes to be already
> installed with an OS and powered on. We then use Heat to drive the
> deployment of OpenStack on those nodes...that part of the process is
> largely unchanged.
>
> One of the things this would allow TripleO to do is make use of CI
> jobs using nodes just from the regular cloud providers in nodepool
> instead of having to use our own TripleO cloud
> (tripleo-test-cloud-rh1) to run all our jobs.
>
> I'm at a point where I can start working on patches to try and set
> this up, but I wanted to provide this context so folks were aware of
> the background.
>
> We'd probably start with our simplest configuration of a job with at
> least 3 nodes (undercloud/controller/compute), and using CentOS
> images. It looks like right now all multinode jobs are 2 nodes only
> and use Ubuntu. My hope is that I/we can make some progress in
> different multinode configurations and collaborate on any setup
> scripts or ansible playbooks in a generally useful way. I know there
> was interest in different multinode setups from the various deployment
> teams at the cross project session in Austin.
>
> If there are any pitfalls or if there are any concerns about TripleO
> going in this direction, I thought we could discuss those here. Thanks
> for any feedback.

It is more a question than a concern:
are we still going to test baremetal introspection with Ironic
somewhere in OpenStack?

I like the way it goes but I'm wondering if the things that we're not
going to test anymore will still be tested somewhere else (maybe in
Ironic / Nova CI jobs) or maybe it's already the case and then stop me
here.

> --
> -- James Slagle
> --

-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Who is going to fix the broken non-voting tests?

2016-05-27 Thread Raildo Mascena
In addition, I'm one of the folks working on the v3-only gates. The main
case we are looking for is when the functional job is working and the
v3-only one is not, so for anything related to these jobs you can just ping
me on irc. :)

Cheers,

Raildo

On Thu, May 26, 2016 at 6:27 PM Rodrigo Duarte 
wrote:

> The function-nv one was depending on a first test being merged =)
>
> The v3 depends directly on it, the difference is that it passes a flag to
> deactivate v2.0 in devstack.
>
> On Thu, May 26, 2016 at 5:48 PM, Steve Martinelli 
> wrote:
>
>> On Thu, May 26, 2016 at 12:59 PM, Adam Young  wrote:
>>
>>> On 05/26/2016 11:36 AM, Morgan Fainberg wrote:
>>>
>>>
>>>
>>> On Thu, May 26, 2016 at 7:55 AM, Adam Young  wrote:
>>>
 Some mix of these three tests is almost always failing:

 gate-keystone-dsvm-functional-nv FAILURE in 20m 04s (non-voting)
 gate-keystone-dsvm-functional-v3-only-nv FAILURE in 32m 45s (non-voting)
 gate-tempest-dsvm-keystone-uwsgi-full-nv FAILURE in 1h 07m 53s
 (non-voting)


 Are we going to keep them running and failing, or boot them?  If we are
 going to keep them, who is going to commit to fixing them?

 We should not live with broken windows.



>>> The uwsgi check should be moved to a proper run utilizing
>>> mod_proxy_uwsgi.
>>>
>>> Who wants to own this?  I am not fielding demands for uwsgi support
>>> myself, and kind of think it is just a novelty, thus would not mind seeing it
>>> going away.  If someone really cares, please make yourself known.
>>>
>>
>> Brant has a patch (https://review.openstack.org/#/c/291817/) that adds
>> support in devstack to use uwsgi and mod_proxy_http. This is blocked until
>> infra moves to Ubuntu Xenial. Once this merges we can propose a patch that
>> swaps out the uwsgi job for uwsgi + mod_proxy_http.
>>
>>
>>>
>>>
>>> The v3 only one is a WIP that a few folks are working on
>>>
>>> Fair enough.
>>>
>>> The function-nv one was passing somewhere. I think that one is close.
>>>
>>>
>>> Yeah, it seems to be intermittent.
>>>
>>>
>> These two are actively being worked on.
>>
>>
>>>
>>>
>>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][infra][deployment] Adding multinode CI jobs for TripleO in nodepool

2016-05-27 Thread James Slagle
I've been working on various patches to TripleO to make it possible
for the baremetal provisioning part of the workflow to be optional. In
such a scenario, TripleO wouldn't use Nova or Ironic to boot any
baremetal nodes. Instead it would rely on the nodes to be already
installed with an OS and powered on. We then use Heat to drive the
deployment of OpenStack on those nodes...that part of the process is
largely unchanged.

One of the things this would allow TripleO to do is make use of CI
jobs using nodes just from the regular cloud providers in nodepool
instead of having to use our own TripleO cloud
(tripleo-test-cloud-rh1) to run all our jobs.

I'm at a point where I can start working on patches to try and set
this up, but I wanted to provide this context so folks were aware of
the background.

We'd probably start with our simplest configuration of a job with at
least 3 nodes (undercloud/controller/compute), and using CentOS
images. It looks like right now all multinode jobs are 2 nodes only
and use Ubuntu. My hope is that I/we can make some progress in
different multinode configurations and collaborate on any setup
scripts or ansible playbooks in a generally useful way. I know there
was interest in different multinode setups from the various deployment
teams at the cross project session in Austin.

If there are any pitfalls or if there are any concerns about TripleO
going in this direction, I thought we could discuss those here. Thanks
for any feedback.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] Spec for congress.conf

2016-05-27 Thread Bryan Sullivan
Masahito,
 
Sorry, I'm not quite clear on the guidance. Sounds like you're saying all 
options will be defaulted by Oslo.config if not set in the congress.conf file. 
That's OK, if I understood. 
 
It's clear to me that some will be deployment-specific.
 
But what I am asking is where is the spec for:
- what congress.conf fields are supported i.e. defined for possible setting in 
a release
- which fields are mandatory to be set (or Congress will simply not work)
- which fields are not mandatory, but must be set for some specific purpose, 
which right now is unclear
 
I'm hoping the answer isn't "go look at the code"! That won't work for 
end-users, who want to use Congress, not to decipher the
meaning/importance of specific fields from the code.
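
To make the question concrete, here is the distinction in oslo.config
terms; a minimal sketch with made-up option names, not actual
congress.conf fields. What I need is the list of fields in each bucket:

    from oslo_config import cfg

    opts = [
        # Optional: if congress.conf does not set it, the default is used.
        cfg.StrOpt('bind_host', default='0.0.0.0',
                   help='IP address the API server listens on.'),
        # Mandatory: with required=True the service refuses to start
        # (RequiredOptError) until the deployer sets a value.
        cfg.StrOpt('example_required_setting', required=True,
                   help='An example of a value the deployer must supply.'),
    ]

    cfg.CONF.register_opts(opts)
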
Thanks,
Bryan Sullivan
 
> From: muroi.masah...@lab.ntt.co.jp
> Date: Fri, 27 May 2016 15:40:31 +0900
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [congress] Spec for congress.conf
> 
> Hi Bryan,
> 
> Oslo.config, which Congress uses to manage configuration, sets each field
> to its default value if you don't specify a value in congress.conf. In
> that sense, every option is effectively optional.
> 
> In my experience, config values that differ per deployment, like IP
> addresses and so on, have to be configured explicitly, but others only
> need to be set when you want Congress to run with non-default behavior.
> 
> best regard,
> Masahito
> 
> On 2016/05/27 3:36, SULLIVAN, BRYAN L wrote:
> > Hi Congress team,
> >
> >
> >
> > Quick question for anyone. Is there a spec for fields in congress.conf
> > file? As of Liberty this has to be tox-generated but I need to know
> > which conf values are required vs optional. The generated sample output
> > doesn't clarify that. This is for the Puppet Module and JuJu Charm I am
> > developing with the help of RedHat and Canonical in OPNFV. I should have
> > Congress installed by default (for the RDO and JuJu installers) in the
> > OPNFV Colorado release in the next couple of weeks, and the
> > congress.conf file settings are an open question. The Puppet module will
> > also be used to create a Fuel plugin for installation.
> >
> >
> >
> > Thanks,
> >
> > Bryan Sullivan | AT
> >
> >
> >
> >
> >
> 
> 
> -- 
> 室井 雅仁(Masahito MUROI)
> Software Innovation Center, NTT
> Tel: +81-422-59-4539
> 
> 
> 
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] orchestration and db_sync

2016-05-27 Thread Sheel Rana Insaan
Hi Ryan,

>- Create the service's users and add a password into the database
>- Sync the service with the database
>- Start the service
If I am right, these are one-time activities during the installation stage;
one does not need to create the "service users" again and again at each
service start.


>Or maybe the service runs a db_sync every time it starts?
No, service start does not run db_sync.
I think only the service status is updated in the DB during service start
(I am not talking specifically about keystone, just general behavior).

What is the requirement for running db_sync each time the service starts?

Best Regards,
Sheel Rana

On Fri, May 27, 2016 at 9:38 PM, Ryan Hallisey  wrote:

> Hi all,
>
> When orchestrating an openstack service from nothing, there are a few
> steps that
> need to occur before you have a running service assuming the database
> already exists.
>
> - Create the service's users and add a password into the databse
> - Sync the service with the database
> - Start the service
>
> I was wondering if for some services they could be aware of whether or not
> they need
> to sync with the database at startup.  Or maybe the service runs a db_sync
> every time
> is starts?  I figured I would start a thread about this because Keystone
> has some
> flexibility when running N+1 in a cluster of N. If Keystone could have that
> that ability maybe Keystone could db_sync each time it starts without
> harming the
> cluster?
>
> It may be wishful thinking, but I'm curious to hear more thought about the
> topic.
>
> Thanks,
> Ryan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Centralize Configuration: ignore service list for newton

2016-05-27 Thread Markus Zoeller
On 20.05.2016 11:33, John Garbutt wrote:
> Hi,
> 
> The current config template includes a list of "Services which consume this":
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/centralize-config-options.html#quality-view
> 
> I propose we drop this list from the template.
> 
> I am worried this is going to be hard to maintain, and hard to review
> / check. As such, its of limited use to most deployers in its current
> form.
> 
> I have been thinking about a possible future replacement. Two separate
> sample configuration files, one for the Compute node, and one for
> non-compute nodes (i.e. "controller" nodes). The reason for this
> split, is our move towards removing sensitive credentials from compute
> nodes, etc. Over time, we could prove the split in gate testing, where
> we look for conf options accessed by computes that shouldn't be, and
> v.v.
> 
> 
> Having said that, for newton, I propose we concentrate on:
> * completing the move of all the conf options (almost there)
> * (skip tidy up of deprecated options)
> * tidying up the main description of each conf option
> * tidy up the Opt group and Opt types, i.e. int min/max, str choices, etc
> ** move options to use stevedore, where needed
> * deprecating ones that are dumb / unused
> * identifying "required" options (those you have to set)
> * add config group descriptions
> * note any surprising dependencies or value meanings (-1 vs 0 etc)
> * ensure the docs and sample files are complete and correct
> 
> I am thinking we could copy API ref and add a comment at the top of
> each file (expecting a separate patch for each step):
> * fix_opt_registration_consistency (see sfinucan's tread)
> * fix_opt_description_indentation
> * check_deprecation_status
> * check_opt_group_and_type
> * fix_opt_description


I pushed [1] which introduced the flags from above. I reordered them
from most to least important, which is IMO:

# needs:fix_opt_description
# needs:check_deprecation_status
# needs:check_opt_group_and_type
# needs:fix_opt_description_indentation
# needs:fix_opt_registration_consistency
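
As a reference point for the check_opt_group_and_type pass, the rough target
shape of an option would be something like the following sketch (the names and
values are invented, not real nova options):

    from oslo_config import cfg

    example_opts = [
        cfg.IntOpt('build_retries',
                   default=3,
                   min=0, max=10,
                   help='Invented example: how often a failed build is '
                        'retried. 0 disables retries.'),
        cfg.StrOpt('image_handler',
                   default='glance',
                   choices=['glance', 'local'],
                   deprecated_for_removal=True,
                   help='Invented example of a deprecated option with an '
                        'explicit set of valid choices.'),
    ]

i.e. the constraints live on the Opt itself (min/max, choices, deprecation)
instead of being checked ad hoc somewhere in the code.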


> Does that sound like a good plan? If so, I can write this up in a wiki page.
> 
> 
> Thanks,
> John
> 
> PS
> I also have concerns around the related config options bits and
> possible values bit, but thats a different thread. Lets focus on the
> main body of the description for now.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

References:
[1] https://review.openstack.org/#/c/322255/1

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] orchestration and db_sync

2016-05-27 Thread Ryan Hallisey
Hi all,

When orchestrating an OpenStack service from nothing, there are a few steps that
need to occur before you have a running service, assuming the database already 
exists.

- Create the service's users and add a password into the database
- Sync the service with the database
- Start the service

I was wondering whether some services could be aware of whether or not they
need to sync with the database at startup, or maybe the service runs a db_sync
every time it starts?  I figured I would start a thread about this because
Keystone has some flexibility when running N+1 in a cluster of N. If Keystone
had that ability, maybe Keystone could db_sync each time it starts without
harming the cluster?

It may be wishful thinking, but I'm curious to hear more thought about the 
topic.

Thanks,
Ryan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Vladyslav Drok
On Fri, May 27, 2016 at 5:52 PM, Vasyl Saienko 
wrote:

> Lucas, Andrew
>
> Thanks for fast response.
>
> On Fri, May 27, 2016 at 4:53 PM, Andrew Laski  wrote:
>
>>
>>
>> On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
>> > Hi,
>> >
>> > Thanks for bringing this up Vasyl!
>> >
>> > > At the moment Nova with ironic virt_driver consider instance as
>> deleted,
>> > > while on Ironic side server goes to cleaning which can take a while.
>> As
>> > > result current implementation of Nova tempest tests doesn't work for
>> case
>> > > when Ironic is enabled.
>>
>> What is the actual failure? Is it a capacity issue because nodes do not
>> become available again quickly enough?
>>
>>
> The actual failure is that temepest community doesn't want to accept 1
> option.
> https://review.openstack.org/315422/
> And I'm not sure that it is the right way.
>

The reason this was added was to make the tempest smoke tests (run as part of
grenade) pass on a limited number of nodes (which was 3 initially). Now
we have 7 nodes created in the gate, so we might be OK running grenade, but
we can't increase concurrency to anything more than 1 in this case. Maybe
we should run our own tests, not smoke, as part of grenade?


>
> > >
>> > > There are two possible options how to fix it:
>> > >
>> > >  Update Nova tempest test scenarios for Ironic case to wait when
>> cleaning is
>> > > finished and Ironic node goes to 'available' state.
>> > >
>> > > Mark instance as deleted in Nova only after cleaning is finished on
>> Ironic
>> > > side.
>> > >
>> > > I'm personally incline to 2 option. From user side successful instance
>> > > termination means that no instance data is available any more, and
>> nobody
>> > > can access/restore that data. Current implementation breaks this rule.
>> > > Instance is marked as successfully deleted while in fact it may be not
>> > > cleaned, it may fail to clean and user will not know anything about
>> it.
>> > >
>
> >
>> > I don't really like option #2, cleaning can take several hours
>> > depending on the configuration of the node. I think that it would be a
>> > really bad experience if the user of the cloud had to wait a really
>> > long time before his resources are available again once he deletes an
>> > instance. The idea of marking the instance as deleted in Nova quickly
>> > is aligned with our idea of making bare metal deployments
>> > look-and-feel like VMs for the end user. And also (one of) the
>> > reason(s) why we do have a separated state in Ironic for DELETING and
>> > CLEANING.
>>
>
> The resources will be available only if there are other available
> baremetal nodes in the cloud.
> User doesn't have ability to track for status of available resources
> without admin access.
>
>
>> I agree. From a user perspective once they've issued a delete their
>> instance should be gone. Any delay in that actually happening is purely
>> an internal implementation detail that they should not care about.
>>
>> >
>> > I think we should go with #1, but instead of erasing the whole disk
>> > for real maybe we should have a "fake" clean step that runs quickly
>> > for tests purposes only?
>> >
>>
>
> At the gates we just waiting for bootstrap and callback from node when
> cleaning starts. All heavy operations are postponed. We can disable
> automated_clean, which means it is not tested.
>
>
>> > Cheers,
>> > Lucas
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Matthew Treinish
On Fri, May 27, 2016 at 05:52:51PM +0300, Vasyl Saienko wrote:
> Lucas, Andrew
> 
> Thanks for fast response.
> 
> On Fri, May 27, 2016 at 4:53 PM, Andrew Laski  wrote:
> 
> >
> >
> > On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> > > Hi,
> > >
> > > Thanks for bringing this up Vasyl!
> > >
> > > > At the moment Nova with ironic virt_driver consider instance as
> > deleted,
> > > > while on Ironic side server goes to cleaning which can take a while. As
> > > > result current implementation of Nova tempest tests doesn't work for
> > case
> > > > when Ironic is enabled.
> >
> > What is the actual failure? Is it a capacity issue because nodes do not
> > become available again quickly enough?
> >
> >
> The actual failure is that temepest community doesn't want to accept 1
> option.
> https://review.openstack.org/315422/
> And I'm not sure that it is the right way.

No, Andrew is right: this is a resource limitation in the gate. The failures
you're hitting are caused by resource constraints in the gate and not having
enough available nodes to run all the tests because deleted nodes are still
cleaning (or doing another operation) and aren't available to nova for booting
another guest.

I -2d that patch because it's a workaround for the fundamental issue here and
not actually an appropriate change for Tempest. What you've implemented in that
patch is the equivalent of talking to libvirt or some other hypervisor directly
to find out if something is actually deleted. It's a layer violation; there is
never a reason that should be necessary, especially in a test of the Nova API.

> 
> > >
> > > > There are two possible options how to fix it:
> > > >
> > > >  Update Nova tempest test scenarios for Ironic case to wait when
> > cleaning is
> > > > finished and Ironic node goes to 'available' state.
> > > >
> > > > Mark instance as deleted in Nova only after cleaning is finished on
> > Ironic
> > > > side.
> > > >
> > > > I'm personally incline to 2 option. From user side successful instance
> > > > termination means that no instance data is available any more, and
> > nobody
> > > > can access/restore that data. Current implementation breaks this rule.
> > > > Instance is marked as successfully deleted while in fact it may be not
> > > > cleaned, it may fail to clean and user will not know anything about it.
> > > >
> 
> >
> > > I don't really like option #2, cleaning can take several hours
> > > depending on the configuration of the node. I think that it would be a
> > > really bad experience if the user of the cloud had to wait a really
> > > long time before his resources are available again once he deletes an
> > > instance. The idea of marking the instance as deleted in Nova quickly
> > > is aligned with our idea of making bare metal deployments
> > > look-and-feel like VMs for the end user. And also (one of) the
> > > reason(s) why we do have a separated state in Ironic for DELETING and
> > > CLEANING.
> >
> 
> The resources will be available only if there are other available baremetal
> nodes in the cloud.
> User doesn't have ability to track for status of available resources
> without admin access.
> 
> 
> > I agree. From a user perspective once they've issued a delete their
> > instance should be gone. Any delay in that actually happening is purely
> > an internal implementation detail that they should not care about.
> >

Delete is an async operation in Nova. There is never any immediacy here; it
always takes an indeterminate amount of time between it being issued by the user
and the server actually going away. The disconnect here is that when running
with the ironic driver the server disappears from Nova, but the resources aren't
freed back up when that happens until the cleaning is done. I'm pretty sure this
is different from all the other Nova drivers.

I don't really have a horse in this race so whatever ends up being decided for
the behavior here is fine. But, I think we need to be clear about what the
behavior here is and what we actually want. Personally, I don't see an issue
with the node being in the deleting task_state for a long time because that's
what is really happening while it's deleting. To me a delete is only finished
when the resource is actually gone and it's consumed resources return to the
pool.

> > >
> > > I think we should go with #1, but instead of erasing the whole disk
> > > for real maybe we should have a "fake" clean step that runs quickly
> > > for tests purposes only?
> > >

Disabling the cleaning step (or having a fake one that does nothing) for the
gate would get around the failures at least. It would make things work again
because the nodes would be available right after Nova deletes them.
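
For reference, a no-op step would be tiny. The sketch below is hypothetical
(it is not an existing ironic interface); it just shows the shape such a step
could take using the standard clean_step decorator:

    from ironic.drivers import base

    class NoopClean(object):
        # Hypothetical: a method like this, added to one of the node's
        # driver interfaces, is picked up by the conductor because of the
        # decorator, but does no actual work.

        @base.clean_step(priority=10)
        def erase_devices(self, task):
            # "Finish" immediately so CLEANING completes right away;
            # only ever meant for gate/test environments.
            pass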

-Matt Treinish

> >
> 
> At the gates we just waiting for bootstrap and callback from node when
> cleaning starts. All heavy operations are postponed. We can disable
> automated_clean, which means it is not tested.
> 


signature.asc
Description: PGP signature

Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Vasyl Saienko
Lucas, Andrew

Thanks for fast response.

On Fri, May 27, 2016 at 4:53 PM, Andrew Laski  wrote:

>
>
> On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> > Hi,
> >
> > Thanks for bringing this up Vasyl!
> >
> > > At the moment Nova with ironic virt_driver consider instance as
> deleted,
> > > while on Ironic side server goes to cleaning which can take a while. As
> > > result current implementation of Nova tempest tests doesn't work for
> case
> > > when Ironic is enabled.
>
> What is the actual failure? Is it a capacity issue because nodes do not
> become available again quickly enough?
>
>
The actual failure is that the tempest community doesn't want to accept
option 1:
https://review.openstack.org/315422/
And I'm not sure that it is the right way.

> >
> > > There are two possible options how to fix it:
> > >
> > >  Update Nova tempest test scenarios for Ironic case to wait when
> cleaning is
> > > finished and Ironic node goes to 'available' state.
> > >
> > > Mark instance as deleted in Nova only after cleaning is finished on
> Ironic
> > > side.
> > >
> > > I'm personally incline to 2 option. From user side successful instance
> > > termination means that no instance data is available any more, and
> nobody
> > > can access/restore that data. Current implementation breaks this rule.
> > > Instance is marked as successfully deleted while in fact it may be not
> > > cleaned, it may fail to clean and user will not know anything about it.
> > >

>
> > I don't really like option #2, cleaning can take several hours
> > depending on the configuration of the node. I think that it would be a
> > really bad experience if the user of the cloud had to wait a really
> > long time before his resources are available again once he deletes an
> > instance. The idea of marking the instance as deleted in Nova quickly
> > is aligned with our idea of making bare metal deployments
> > look-and-feel like VMs for the end user. And also (one of) the
> > reason(s) why we do have a separated state in Ironic for DELETING and
> > CLEANING.
>

The resources will be available only if there are other available baremetal
nodes in the cloud.
The user doesn't have the ability to track the status of available resources
without admin access.


> I agree. From a user perspective once they've issued a delete their
> instance should be gone. Any delay in that actually happening is purely
> an internal implementation detail that they should not care about.
>
> >
> > I think we should go with #1, but instead of erasing the whole disk
> > for real maybe we should have a "fake" clean step that runs quickly
> > for tests purposes only?
> >
>

At the gates we are just waiting for the bootstrap and a callback from the node
when cleaning starts. All heavy operations are postponed. We can disable
automated_clean, but that means it is not tested.


> > Cheers,
> > Lucas
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-27 Thread Duarte Cardoso, Igor
Hi all,

As Uri mentioned, there is a concrete proposal for an enhanced SFC API (with 
support for open, IETF SFC Encapsulation [1] and NSH [2]) that was publicly 
shared last month as a neutron spec at [3].

Service Function Chaining (SFC) is the effort of normalizing and standardizing 
nomenclature, definitions and protocols that will guide how SFC will be done, 
and is led by the IETF SFC WG [4]. If implementing  SFC in OpenStack as 
standards-compliant as possible is a negative thing, please let me know 
straight away.

Now, it’s only natural that additional SFC implementation work for OpenStack 
should land on the already established networking-sfc project. As far as I 
understand, the project has been hard at work, focusing on completing a first 
phase of development that would enable linear port chains to be instantiated 
(which finished around the beginning of this year). The project has 
demonstrated its value for some of today’s use cases and I definitely 
congratulate the whole team on the success.

In an attempt to create the least disruption possible in the existing project 
and sticking to OpenStack best practices, I have been trying to engage with the 
team to consider and review the SFC Encapsulation proposal [5, 6, 7 and part of 
3], which I’d be absolutely delighted to develop myself.

The discussion around this thread and sibling threads seems to have been about 
supporting NSH in networking-sfc. The team has already said they will support 
NSH [7, 8, 9] as an attribute of the existing API parameter chain_parameters. 
But there’s a blurred line here: what exactly does it take to support NSH/SFC 
Encapsulation in networking-sfc? Let me quote Uri:

“I hear (…) that we have an agreement that the SFC abstraction (…) is use of 
NSH approach. This includes internal representation of the chain, support of 
metadata etc.”

Supporting NSH is not simply about enabling the dataplane protocol. If we 
considered that it was, then it would be acceptable to say that yeah OVS must 
support NSH before (or maybe ODL when there’s a finalized driver – and then 
it’s ODL’s responsibility to setup a compatible OVS deployment). But NSH is an 
SFC Encapsulation protocol and approach, and the only [protocol] being worked 
on by the WG.

The networking-sfc API requires changes to properly support NSH/SFC 
Encapsulation:

· Enable NSH dataplane support (planned by networking-sfc – I have no 
concerns with this);

· Support of metadata (less critical since it is doesn’t really change 
how we look at the chaining topology, so less disruptive and could be 
implemented later in the future)

· Support correct SFC abstraction/representation of chains and paths 
(highly critical since this is how IETF SFC compatible chains can be built – 
which justifies why NSH includes a Service Path Header [2])

And this is what we mean by supporting NSH in networking-sfc. If only the last 
point can be supported today, then NSH/SFC Encapsulation is already on the 
right track.
And the best part is that it would only require a small change in the API – the 
ability to link different port-chains together as part of a scope (a Service 
Function Chain, or a graph).
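
To make that concrete, the request body I have in mind would be on the order of
the following sketch (resource and field names are purely hypothetical, not an
agreed API):

    POST /v1.0/sfc/service_graphs
    {
        "service_graph": {
            "name": "example-graph",
            "port_chains": {
                "<uuid-of-port-chain-1>": ["<uuid-of-port-chain-2>"],
                "<uuid-of-port-chain-2>": ["<uuid-of-port-chain-3>"]
            }
        }
    }

i.e. each port-chain simply names the chain(s) that traffic should be handed to
next; nothing about the existing port-chain resource itself has to change.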

Interestingly, such an API enhancement would also work with existing upstream 
OVS, if networking-sfc is running in “legacy” mode (i.e. without actually 
encapsulating traffic into any protocol, like today – so it’s fine). It would 
not be possible to guarantee that traffic from a chain is kept inside that same 
chain, because that depends on what happens inside a Service Function. But 
assuming the functions don’t change traffic in any way, the inter-linking of 
port-chains would still work – in the same way that creating multiple, unlinked 
port-chains works together today when specifying their flow classifiers’ source 
neutron ports as the previous port-chain’s last neutron ports.

Also, in anticipation: SFC “graph” != VNFFG [10].

Let me know your thoughts and questions.

[1] https://www.rfc-editor.org/rfc/rfc7665.txt
[2] https://www.ietf.org/id/draft-ietf-sfc-nsh-05.txt
[3] https://review.openstack.org/#/c/308453/
[4] https://datatracker.ietf.org/wg/sfc/charter/
[5] https://etherpad.openstack.org/p/networking-sfc-and-sfc-encapsulation
[6] 
https://wiki.openstack.org/wiki/Neutron/ServiceChainUseCases#SFC_Encapsulation
[7] 
http://eavesdrop.openstack.org/meetings/service_chaining/2016/service_chaining.2016-05-19-17.02.log.html
[8] 
http://eavesdrop.openstack.org/meetings/service_chaining/2016/service_chaining.2016-01-21-17.00.log.html
[9] 
http://eavesdrop.openstack.org/meetings/service_chaining/2016/service_chaining.2016-05-26-17.00.log.html
[10] 
http://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_nfv-man001v010101p.pdf

Best regards,
Igor.

From: Elzur, Uri [mailto:uri.el...@intel.com]
Sent: Wednesday, May 25, 2016 8:38 PM
To: OpenStack Development Mailing List (not for usage questions) 


Re: [openstack-dev] [horizon] Horizon in devstack is broken, rechecks are futile

2016-05-27 Thread Brant Knudson
On Fri, May 27, 2016 at 8:39 AM, Timur Sufiev  wrote:

> The root cause of Horizon issue has been identified and fixed at
> https://review.openstack.org/#/c/321639/
> The next steps are to release new version of django-openstack-auth library
> (which the above fix belongs to), update global-requirements (if it's not
> automatic, I'm not very into the details of release managing of openstack
> components), update horizon requirements from global requirements, and then
> merge the final patch https://review.openstack.org/#/c/321640/ - this
> time into horizon repo. Once all that is done, gate should be unblocked.
>
> Optimistic ETA is by tonight.
>
> On Wed, May 25, 2016 at 10:57 PM Timur Sufiev 
> wrote:
>
>> Dear Horizon contributors,
>>
>> The test job dsvm-integration fails for a reason for the last ~24 hours,
>> please do not recheck your patches if you see that almost all integration
>> tests fail (and only these tests) - it won't help. The fix for
>> django_openstack_auth issue which has been uncovered by the recent devstack
>> change (see https://bugs.launchpad.net/horizon/+bug/1585682) is being
>> worked on. Stay tuned, there will be another notification when rechecks
>> will become meaningful again.
>>
>
>
Thanks for working on this. It will help us eventually get to a devstack
where keystone and potentially the rest of the API servers are listening on
paths rather than on ports. I had to fix a similar issue in tempest.

To request a release, send a review to update
http://git.openstack.org/cgit/openstack/releases/tree/deliverables/mitaka/django-openstack-auth.yaml
with the new library version and commit hash. You'll have to create a new
yaml file for newton since there hasn't been a release there yet.
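
For anyone who hasn't done one before, the newton deliverable file ends up
looking roughly like this (version and hash below are placeholders, not the
values to use):

    launchpad: django-openstack-auth
    releases:
      - version: 2.4.0
        projects:
          - repo: openstack/django-openstack-auth
            hash: <sha of the commit to release>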

-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Andrew Laski


On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> Hi,
> 
> Thanks for bringing this up Vasyl!
> 
> > At the moment Nova with ironic virt_driver consider instance as deleted,
> > while on Ironic side server goes to cleaning which can take a while. As
> > result current implementation of Nova tempest tests doesn't work for case
> > when Ironic is enabled.

What is the actual failure? Is it a capacity issue because nodes do not
become available again quickly enough?

> >
> > There are two possible options how to fix it:
> >
> >  Update Nova tempest test scenarios for Ironic case to wait when cleaning is
> > finished and Ironic node goes to 'available' state.
> >
> > Mark instance as deleted in Nova only after cleaning is finished on Ironic
> > side.
> >
> > I'm personally incline to 2 option. From user side successful instance
> > termination means that no instance data is available any more, and nobody
> > can access/restore that data. Current implementation breaks this rule.
> > Instance is marked as successfully deleted while in fact it may be not
> > cleaned, it may fail to clean and user will not know anything about it.
> >
> 
> I don't really like option #2, cleaning can take several hours
> depending on the configuration of the node. I think that it would be a
> really bad experience if the user of the cloud had to wait a really
> long time before his resources are available again once he deletes an
> instance. The idea of marking the instance as deleted in Nova quickly
> is aligned with our idea of making bare metal deployments
> look-and-feel like VMs for the end user. And also (one of) the
> reason(s) why we do have a separated state in Ironic for DELETING and
> CLEANING.

I agree. From a user perspective once they've issued a delete their
instance should be gone. Any delay in that actually happening is purely
an internal implementation detail that they should not care about.

> 
> I think we should go with #1, but instead of erasing the whole disk
> for real maybe we should have a "fake" clean step that runs quickly
> for tests purposes only?
> 
> Cheers,
> Lucas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Horizon in devstack is broken, rechecks are futile

2016-05-27 Thread Timur Sufiev
The root cause of the Horizon issue has been identified and fixed at
https://review.openstack.org/#/c/321639/
The next steps are to release a new version of the django-openstack-auth library
(which the above fix belongs to), update global-requirements (if it's not
automatic; I'm not very into the details of release management for openstack
components), update horizon's requirements from global requirements, and then
merge the final patch https://review.openstack.org/#/c/321640/ - this time
into the horizon repo. Once all that is done, the gate should be unblocked.

Optimistic ETA is by tonight.

On Wed, May 25, 2016 at 10:57 PM Timur Sufiev  wrote:

> Dear Horizon contributors,
>
> The test job dsvm-integration fails for a reason for the last ~24 hours,
> please do not recheck your patches if you see that almost all integration
> tests fail (and only these tests) - it won't help. The fix for
> django_openstack_auth issue which has been uncovered by the recent devstack
> change (see https://bugs.launchpad.net/horizon/+bug/1585682) is being
> worked on. Stay tuned, there will be another notification when rechecks
> will become meaningful again.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Lucas Alvares Gomes
Hi,

Thanks for bringing this up Vasyl!

> At the moment Nova with ironic virt_driver consider instance as deleted,
> while on Ironic side server goes to cleaning which can take a while. As
> result current implementation of Nova tempest tests doesn't work for case
> when Ironic is enabled.
>
> There are two possible options how to fix it:
>
>  Update Nova tempest test scenarios for Ironic case to wait when cleaning is
> finished and Ironic node goes to 'available' state.
>
> Mark instance as deleted in Nova only after cleaning is finished on Ironic
> side.
>
> I'm personally incline to 2 option. From user side successful instance
> termination means that no instance data is available any more, and nobody
> can access/restore that data. Current implementation breaks this rule.
> Instance is marked as successfully deleted while in fact it may be not
> cleaned, it may fail to clean and user will not know anything about it.
>

I don't really like option #2; cleaning can take several hours
depending on the configuration of the node. I think that it would be a
really bad experience if the user of the cloud had to wait a really
long time before their resources are available again once they delete an
instance. The idea of marking the instance as deleted in Nova quickly
is aligned with our idea of making bare metal deployments
look-and-feel like VMs for the end user. It is also (one of) the
reason(s) why we have a separate state in Ironic for DELETING and
CLEANING.

I think we should go with #1, but instead of erasing the whole disk
for real maybe we should have a "fake" clean step that runs quickly
for tests purposes only?

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][manila] Moving forward with landing manila in tripleo

2016-05-27 Thread Marios Andreou
Hi all, I explicitly cc'd a few folks I thought might be interested for
visibility, sorry for spam if you're not. This email is about getting
manila landed into tripleo asap, and the current obstacles to that (at
least those visible to me):

The current review [1] isn't going to land as is, regardless of the
outcome/discussion of any of the following points because all the
services are going to "composable controller services". How do people
feel about me merging my review at [2] into its parent review (which is
the current manila review at [1])? My review just takes what is in [1]
(caveats below) and makes it 'composable', and includes a dependency on
[3], which is the puppet-tripleo side for the 'composable manila'.

   ---> Proposal merge the 'composable manila' tripleo-heat-templates
review @ [2] into the parent review @ [1]. The review at [2] will be
abandoned. We will continue to try and land [1] in its new 'composable
manila' form.

WRT the 'caveats' mentioned above and why I haven't just ported
what is in the current manila review @ [1] into the composable one @
[2]... there are two main things I've changed, both of which on
guidance/discussion on the reviews.

The first is addition of manila-data (wasn't in the original/current
review at [1]). The second a change to the pacemaker constraints, which
I've corrected to make manila-data and manila-share pacemaker a/p but
everything else systemd managed, based on ongoing discussion at [3].

So IMO to move forward I need clarity on both those points. For
manila-data my concern is whether it is already available where we need it. I
looked at puppet-manila [4] and couldn't quickly find much (any) mention
of manila-data. We need it there if we are to configure anything for it
via puppet. The other unknown/concern here is whether manila-data gets
delivered with the manila package (I recall manila-share, or possibly at
least one of them, had a stand-alone package); otherwise we'll need to
add it to the image. But mainly my question here is: can we live without
it? I mean, can we deploy sans manila-data or does it just not make sense
(sorry for the silly question)? The motivation is that if we can, let's
land and iterate to add it.

   Q. Can we live w/out manila-data so we can land and iterate (esp. if
we need to land things into puppet-manila or anywhere else it is yet to
be landed)

For the pacemaker constraints I'm mainly just waiting for confirmation
of our current understanding.. manila-share and manila-data are a/p
pacemaker managed, everything else systemd.

thanks for any info, I will follow up and update the reviews accordingly
based on any comments,

marios

[1] "Enable Manila integration" https://review.openstack.org/#/c/188137/
[2] "Composable manila tripleo-heat-templates side"
https://review.openstack.org/#/c/315658/
[3] "Adds the puppet-tripleo manifests for manila"
https://review.openstack.org/#/c/313527/
[4] "openstack/puppet-manila" https://github.com/openstack/puppet-manila

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Vasyl Saienko
Hello Community!


At the moment Nova with the ironic virt_driver considers an instance as
deleted while on the Ironic side the server goes to cleaning, which can take a
while. As a result, the current implementation of the Nova tempest tests
doesn't work for the case when Ironic is enabled.

There are two possible options how to fix it:

   1.  Update Nova tempest test scenarios for Ironic case to wait when
   cleaning is finished and Ironic node goes to 'available' state.

   2. Mark instance as deleted in Nova only after cleaning is finished on
   Ironic side.


I'm personally inclined to option 2. From the user's side, successful instance
termination means that no instance data is available any more, and nobody
can access/restore that data. The current implementation breaks this rule:
the instance is marked as successfully deleted while in fact it may not be
cleaned; it may fail to clean and the user will not know anything about it.


Sincerely,

Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Mistral workflows for node assignment

2016-05-27 Thread Honza Pokorny
Hello folks,

I would love to get your thoughts on a new way of assigning roles to
nodes.

Since nova flavors are created during the installation of the
undercloud, they don't correspond to the actual hardware specifications
of the available nodes.  So, once a node is assigned, we should update
the flavor with new hardware specs based on that node.  This is a
two-step operation: update the node and update the flavor.

Now that we have multiple steps and multiple APIs involved in node
assignment, it seems that we should turn this procedure into a mistral
workflow.  I have created a patch that does just that and you're welcome
to review it and submit your feedback:

https://review.openstack.org/320459

The patch introduces two workflows: assign_node and assign_nodes.  The
latter is just a loop of the former.  It works like this:

Given a node_id and a role_name (compute, swift-storage, etc):

1.  Retrieve the node's details using ironic
2.  Create a JSON patch object to update the node's capabilities
3.  Update the node with the new capabilities
4.  Get all nodes for that role and determine the lowest common specs
5.  Recreate flavor with the common specs
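
As an illustration of steps 2 and 3, the patch sent to ironic is a standard
JSON-patch list along these lines (the capability string is only an example;
the workflow builds it from the requested role):

    [
        {
            "op": "add",
            "path": "/properties/capabilities",
            "value": "profile:compute,boot_option:local"
        }
    ]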

The reason for recreating the flavor instead of updating it in place is
because mistral and the nova client don't expose the "set_keys" API yet.
If the above patch receives favorable comments, we can work on getting
those APIs exposed in order to simplify the code in tripleo-common.

Honza Pokorny

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
To be specific, we hit this issue when the node running our service is
rebooted.
Our solution is designed in a way that each and every job is a celery task,
and inside the celery task we create a taskflow flow.

We enabled late_acks in celery (which uses rabbitmq as the message broker), so
if our service/node goes down, another healthy service can pick up the job and
complete it.
This works fine, but we just hit this rare case where the node was rebooted
just when taskflow was updating something in the database.

In this case, it raises an exception and the job is marked as failed. Since it
is complete (with failure), the message is removed from rabbitmq and another
worker will not be able to process it.
Can taskflow handle such I/O errors gracefully, or should the application try
to catch this exception? If the application has to handle it, what would happen
to that particular database transaction which failed just when the node was
rebooted? Who will retry this transaction?
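
In case it helps, the shape of what we are doing is roughly the sketch below
(names are invented and the code is trimmed; the open question is where the
except/retry responsibility should live):

    from celery import Celery
    from taskflow import engines

    from myapp.flows import build_service_flow   # hypothetical flow factory

    app = Celery('myapp', broker='amqp://guest@localhost//')

    @app.task(bind=True, acks_late=True, max_retries=3)
    def run_job(self, job_args):
        flow = build_service_flow(job_args)
        engine = engines.load(flow, engine='serial')
        try:
            engine.run()
        except IOError as exc:
            # Transient DB failure while taskflow was persisting state:
            # let celery re-deliver/retry instead of marking the job failed.
            raise self.retry(exc=exc)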

Thanks,
Kanthi

On Fri, May 27, 2016 at 5:39 PM, pnkk  wrote:

> Hi,
>
> When taskflow engine is executing a job, the execution failed due to IO
> error(traceback pasted below).
>
> 2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine
> 127.0.1.1 [-]  Engine execution has failed, something bad must of happened
> (last 10 machine transitions were [('SCHEDULING', 'WAITING'), ('WAITING',
> 'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING', 'WAITING'),
> ('WAITING', 'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING',
> 'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING', 'GAME_OVER'),
> ('GAME_OVER', 'FAILURE')])
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> Traceback (most recent call last):
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",
> line 269, in run_iter
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   failure.Failure.reraise_if_any(memory.failures)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
> line 336, in reraise_if_any
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   failures[0].reraise()
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
> line 343, in reraise
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   six.reraise(*self._exc_info)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
> line 94, in schedule
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   futures.add(scheduler.schedule(atom))
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
> line 67, in schedule
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   return self._task_action.schedule_execution(task)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
> line 99, in schedule_execution
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   self.change_state(task, states.RUNNING, progress=0.0)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
> line 67, in change_state
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   self._storage.set_atom_state(task.name, state)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/fasteners/lock.py",
> line 85, in wrapper
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   return f(self, *args, **kwargs)
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
> File
> "/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/storage.py",
> line 486, in set_atom_state
> 2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
>   self._with_connection(self._save_atom_detail, source, clone)
> 2016-05-25 19:45:21.717 7119 TRACE 

Re: [openstack-dev] [puppet] Watcher module for Puppet

2016-05-27 Thread Emilien Macchi
On Tue, May 10, 2016 at 10:25 AM, Daniel Pawlik
 wrote:
> Hello,
> I'm working on implementation of a new puppet module for Openstack Watcher
> (https://launchpad.net/watcher).
> I'm already creating this module and I would like to share it when it will
> be done.
>
> Could someone tell me how can I proceed to join puppet team's workflow ?
>
>
> By the way, the Watcher team plans to be in the big tent by the newton-1 milestone.
>
> Regards,
> Daniel Pawlik

Now the repository is in place [1], who is going to work on the module?
We have some documentation here:
http://docs.openstack.org/developer/puppet-openstack-guide/new-module.html#in-practice

/join #puppet-openstack if you need help, as usual.

[1] https://github.com/openstack/puppet-watcher
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sahara Job Binaries Storage

2016-05-27 Thread Trevor McKay
Hi Jerico,

  we talked about it at Summit in one of the design sessions, but afaik
there is no blueprint or spec yet. I don't see why it can't happen in
Newton, however.

Best,

Trevor

On Thu, 2016-05-26 at 16:14 +1000, Jerico Revote wrote:
> Hi Trevor,
> 
> Just revisiting this,
> has there been any progress to deprecate sahara jobs -> internal db mechanism?
> and/or the config option to disable internal db storage?
>  
> Regards,
> 
> Jerico
> 
> 
> 
> > On 18 Mar 2016, at 12:55 AM, Trevor McKay  wrote:
> > 
> > Hi Jerico,
> > 
> >  Internal db storage for job binaries was added at
> > the start of EDP as an alternative for sites that do
> > not have swift running. Since then, we've also added
> > integration with manila so that job binaries can be
> > stored in manila shares.
> > 
> >  You are correct, storing lots of binaries in the
> > sahara db could make the database grow very large.
> > Swift or manila should be used for production, internal
> > storage is a good option for development/test.
> > 
> >  There is currently no way to disable internal storage.
> > We can took a look at adding such an option -- in fact
> > we have talked informally about the possibility of
> > deprecating internal db storage since swift and manila
> > are both mature at this point. We should discuss that
> > at the upcoming summit.
> > 
> > Best,
> > 
> > Trevor
> > 
> > On Thu, 2016-03-17 at 10:27 +1100, Jerico Revote wrote:
> >> Hello,
> >> 
> >> 
> >> When deploying Sahara, Sahara docos suggests to
> >> increase max_allowed_packet to 256MB,
> >> for internal database storing of job binaries.
> >> There could be hundreds of job binaries to be uploaded/created into
> >> Sahara,
> >> which would then cause the database to grow as well.
> >> Does anyone using Sahara encountered database sizing issues using
> >> internal db storage?
> >> 
> >> 
> >> It looks like swift is the more logical place for storing job
> >> binaries 
> >> (in our case we have a global swift cluster), and this is also
> >> available to the user.
> >> Is there a way to only enable the swift way for storing job binaries?
> >> 
> >> Thanks,
> >> 
> >> 
> >> Jerico
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] NeutronDbObject concurrency issues

2016-05-27 Thread Jay Pipes

On 05/24/2016 01:54 AM, Gary Kotton wrote:

Hi,

We have used tooz to enable concurrency. Zookeeper and Redis worked
well. I think that it is certainly something that we need to consider.
The challenge becomes a deployment.


I'm not following. What does tooz, ZK or Redis have to do with 
concurrency of NeutronDbObject and oslo.versionedobject interfaces?


Best,
-jay


*From: *Damon Wang 
*Reply-To: *OpenStack List 
*Date: *Tuesday, May 24, 2016 at 5:58 AM
*To: *OpenStack List 
*Subject: *Re: [openstack-dev] [neutron][ovo] NeutronDbObject
concurrency issues

Hi,

I want to add an option which handle by another project Tooz.

https://github.com/openstack/tooz


with redis or some other drivers, it seems pretty a good choice.

Any comments?

Wei Wang

2016-05-17 6:53 GMT+08:00 Ilya Chukhnakov >:

On 16 May 2016, at 20:01, Michał Dulko > wrote:


It's not directly related, but this reminds me of tests done by
geguileo
[1] some time ago that were comparing different methods of
preventing DB
race conditions in concurrent environment. Maybe you'll also
find them
useful as you'll probably need to do something like conditional
update
to increment a revision number.

[1] https://github.com/Akrog/test-cinder-atomic-states



Thanks for the link. The SQLA revisions are similar to the
'solutions/update_with_where', but they use the dedicated column for that [2].
And as long as it is properly configured, it happens 'automagically' (SQLA will
take care of adding the proper 'where' to the 'update').

[2] http://docs.sqlalchemy.org/en/latest/orm/versioning.html
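
For anyone skimming the thread, the mechanism in [2] is just a mapper-level
version column; a minimal illustration (toy model, not a neutron one):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Widget(Base):
        __tablename__ = 'widgets'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))
        version_id = Column(Integer, nullable=False)
        __mapper_args__ = {'version_id_col': version_id}

    # Every UPDATE/DELETE emitted by the ORM gets
    # "... WHERE version_id = <value read earlier>" appended and the counter
    # bumped, so a concurrent writer makes the flush raise StaleDataError
    # instead of silently overwriting.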




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TaskFlow] TaskFlow persistence: Job failure retry

2016-05-27 Thread pnkk
Hi,

While the taskflow engine was executing a job, the execution failed due to an IO
error (traceback pasted below).

2016-05-25 19:45:21.717 7119 ERROR taskflow.engines.action_engine.engine
127.0.1.1 [-]  Engine execution has failed, something bad must of happened
(last 10 machine transitions were [('SCHEDULING', 'WAITING'), ('WAITING',
'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING', 'WAITING'),
('WAITING', 'ANALYZING'), ('ANALYZING', 'SCHEDULING'), ('SCHEDULING',
'WAITING'), ('WAITING', 'ANALYZING'), ('ANALYZING', 'GAME_OVER'),
('GAME_OVER', 'FAILURE')])
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
Traceback (most recent call last):
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py",
line 269, in run_iter
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  failure.Failure.reraise_if_any(memory.failures)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
line 336, in reraise_if_any
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  failures[0].reraise()
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/types/failure.py",
line 343, in reraise
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  six.reraise(*self._exc_info)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
line 94, in schedule
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  futures.add(scheduler.schedule(atom))
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/scheduler.py",
line 67, in schedule
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  return self._task_action.schedule_execution(task)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
line 99, in schedule_execution
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  self.change_state(task, states.RUNNING, progress=0.0)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/engines/action_engine/actions/task.py",
line 67, in change_state
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  self._storage.set_atom_state(task.name, state)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/fasteners/lock.py",
line 85, in wrapper
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  return f(self, *args, **kwargs)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/storage.py",
line 486, in set_atom_state
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  self._with_connection(self._save_atom_detail, source, clone)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/storage.py",
line 341, in _with_connection
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  return functor(conn, *args, **kwargs)
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/storage.py",
line 471, in _save_atom_detail
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  original_atom_detail.update(conn.update_atom_details(atom_detail))
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File
"/opt/nso/nso-1.1223-default/nfvo-0.8.0.dev1438/.venv/local/lib/python2.7/site-packages/taskflow/persistence/backends/impl_sqlalchemy.py",
line 427, in update_atom_details
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
  row = conn.execute(q).first()
2016-05-25 19:45:21.717 7119 TRACE taskflow.engines.action_engine.engine
File

[openstack-dev] [nova] #help: bug skimming for 1 week (R-18)

2016-05-27 Thread Markus Zoeller
Nova needs one or more volunteers for the bug skimming duty [1] for the
coming week (R-18). Unfortunately no one signed up yet.  Let me know if
you wanna help in this area.

References:
[1] https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle development sprint

2016-05-27 Thread Thierry Carrez

Rossella Sblendido wrote:

On 05/26/2016 10:47 PM, Henry Gessau wrote:

I am happy to announce that the location logistics for the Neutron mid-cycle
have been finalized. The mid-cycle will take place in Cork, Ireland on August
15-17. I have updated the wiki [1] where you will find a link to an etherpad
with all the details. There you can add yourself if you plan to attend, and
make updates to topics that you would like to work on.


Thanks for organizing this! I am happy to see a sprint in Europe :)
Unfortunately the 15th is bank holidays in some European countries and
at least in Italy most people organize their holidays around those days.
I will try to change my plans and do my best to attend.


For reference, Assumption (Aug 15) is a nationwide public holiday in the 
following countries in Europe:


Andorra, Austria, Belgium, Croatia, Cyprus, France, Greece, Italy, 
Lithuania, Luxembourg, Republic of Macedonia, Malta, Republic of 
Moldova, Monaco, Poland (Polish Army Day), Portugal, Romania, Slovenia, 
and Spain.


Beyond people generally organizing summer vacation around that date, 
it's also peak-season for European travel, which can make flight prices 
go up :)


But then, no date is perfect.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Abandoned old code reviews

2016-05-27 Thread Markus Zoeller
On 27.05.2016 11:36, Michael Still wrote:
> Hi,
> 
> I've spent some time today abandoning old reviews from the Nova queue.
> Specifically, anything which hadn't been updated before February this year
> has been abandoned with a message like this:
> 
> "This patch has been idle for a long time, so I am abandoning it to keep
> the review queue sane. If you're interested in still working on this patch,
> then please unabandon it and upload a new patchset."
> 
> Why do this? Abandoning the reviews means that Nova reviewers can focus on
> things where the author is still actively working on the code.
> Additionally, it makes it clearer which bugs are currently being worked.

Thanks! My script, which searches for stale "in progress" bug reports [1], is
finding a lot more now.
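
(Rough illustration only, not the linked script [1]: the general launchpadlib
pattern for finding "In Progress" nova bugs that have not been touched for a
while. The project name, age threshold and printed fields are placeholders
chosen for the example.)

from datetime import datetime, timedelta, timezone

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('stale-bug-check', 'production', version='devel')
project = lp.projects['nova']
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

for task in project.searchTasks(status='In Progress'):
    bug = task.bug
    if bug.date_last_updated < cutoff:
        # the bug has seen no activity for roughly 30 days
        print(bug.id, bug.date_last_updated, bug.title)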

> Additionally, unabandoning a review is a fairly cheap operation, so please
> let me know if I need to do that anywhere.
> 
> We should probably abandon more patches than those before February, but I
> got bored at this point. I'll probably abandon more later.
> 
> Cheers,
> Michael
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

References:
[1]
https://github.com/markuszoeller/openstack/blob/master/scripts/launchpad/bugs_dashboard.py#L276

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Abandoned old code reviews

2016-05-27 Thread Michael Still
I've always done it manually by eyeballing the review, but the script is
tempting.

Thanks,
Michael
On 27 May 2016 8:42 PM, "Sean Dague"  wrote:

> On 05/27/2016 05:36 AM, Michael Still wrote:
> > Hi,
> >
> > I've spent some time today abandoning old reviews from the Nova queue.
> > Specifically, anything which hadn't been updated before February this
> > year has been abandoned with a message like this:
> >
> > "This patch has been idle for a long time, so I am abandoning it to keep
> > the review queue sane. If you're interested in still working on this
> > patch, then please unabandon it and upload a new patchset."
> >
> > Why do this? Abandoning the reviews means that Nova reviewers can focus
> > on things where the author is still actively working on the code.
> > Additionally, it makes it clearer which bugs are currently being worked.
> >
> > Additionally, unabandoning a review is a fairly cheap operation, so
> > please let me know if I need to do that anywhere.
> >
> > We should probably abandon more patches than those before February, but
> > I got bored at this point. I'll probably abandon more later.
> >
> > Cheers,
> > Michael
>
> We have a script in tree that can be run by any core team member -
>
> https://github.com/openstack/nova/blob/c69afd454b41e2e8fc3496ff56b986342f547064/tools/abandon_old_reviews.sh#L2
>
>
> It tries to describe the policy, which is basically things with no
> activity in the last 4 weeks, and has a -2 or a Jenkins -1 on it.
>
> The biggest issue here is the procedural -2s that don't tend to lift
> right away after release (which is probably a mistake, we should only
> really use procedural -2s during freeze windows). Feel free to modify
> accordingly.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-27 Thread Britt Houser (bhouser)
I admit I'm not as knowledgeable about the Kolla codebase as I'd like to be, so 
most of what you're saying is going over my head.  I think mainly I don't 
understand the problem statement.  It looks like you're pulling all the "hard 
coded" things out of the Dockerfiles and making them user-replaceable?  So 
the Dockerfiles just become a list of required steps, and the user can change 
how each step is implemented?  Would this also unify the Dockerfiles so there 
wouldn't be huge if statements between CentOS and Ubuntu?

Thx,
Britt



On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:

>
>
>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)"  wrote:
>
>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake) 
>>wrote:
>>> Hey folks,
>>>
>>> While Swapnil has been busy churning the dockerfile.j2 files to all
>>>match
>>> the same style, and we also had summit where we declared we would solve
>>>the
>>> plugin problem, I have decided to begin work on a DSL prototype.
>>>
>>> Here are the problems I want to solve in order of importance by this
>>>work:
>>>
>>> Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
>>> Provide a programmatic way to manage Dockerfile construction rather than a
>>> manual (with vi or emacs or the like) mechanism
>>> Allow complete overrides of every facet of Dockerfile construction, most
>>> especially repositories per container (rather than in the base
>>>container) to
>>> permit the use case of dependencies from one version with dependencies
>>>in
>>> another version of a different service
>>> Get out of the business of maintaining 100+ dockerfiles but instead
>>>maintain
>>> one master file which defines the data that needs to be used to
>>>construct
>>> Dockerfiles
>>> Permit different types of optimizations or Dockerfile building by
>>>changing
>>> around the parser implementation - to allow layering of each operation,
>>>or
>>> alternatively to merge layers as we do today
>>>
>>> I don't believe we can proceed with both binary and source plugins
>>>given our
>>> current implementation of Dockerfiles in any sane way.
>>>
>>> I further don't believe it is possible to customize repositories &
>>>installed
>>> files per container, which I receive increasing requests for offline.
>>>
>>> To that end, I've created a very very rough prototype which builds the
>>>base
>>> container as well as a mariadb container.  The mariadb container builds
>>>and
>>> I suspect would work.
>>>
>>> An example of the DSL usage is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>>>
>>> A very poorly written parser is here:
>>> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>>>
>>> I played around with INI as a format, to take advantage of oslo.config
>>>and
>>> kolla-build.conf, but that didn't work out.  YML is the way to go.
>>>
>>> I'd appreciate reviews on the YML implementation especially.
>>>
>>> How I see this work progressing is as follows:
>>>
>>> A yml file describing all docker containers for all distros is placed in
>>> kolla/docker
>>> The build tool adds an option --use-yml which uses the YML file
>>> A parser (such as load.py above) is integrated into build.py to lay
>>>down the
>>> Dockerfiles
>>> Wait 4-6 weeks for people to find bugs and complain
>>> Make the --use-yml the default for 4-6 weeks
>>> Once we feel confident in the yml implementation, remove all
>>>Dockerfile.j2
>>> files
>>> Remove --use-yml option
>>> Remove all jinja2-isms from build.py
>>>
>>> This is similar to the work that took place to convert from raw
>>>Dockerfiles
>>> to Dockerfile.j2 files.  We are just reusing that pattern.  Hopefully
>>>this
>>> will be the last major refactor of the dockerfiles unless someone has
>>>some
>>> significant complaints about the approach.
>>>
>>> Regards
>>> -steve
>>>
>>>
>>> 
>>>_
>>>_
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>The DSL template to generate the Dockerfiles seems way better than the
>>jinja templates in terms of extensibility, which is currently a major
>>bottleneck in the plugin implementation. I am +2+W on this plan of
>>action; let's test it for the next 4-6 weeks and see from there.
>>
>>Swapnil
>>
>
>Agree.
>
>Customization and plugins are the trigger for the work.  I was thinking of
>the following:
>
>Elemental.yml (ships with Kolla)
>Elemental-merge.yml (operator provides in /etc/kolla, this file is yaml
>merged with elemental.yml)
>Elemental-override.yml (operator provides in /etc/kolla, this file
>overrides any YAML sections defined)
>
>I think merging and overriding the yaml files should be pretty easy,
>compared to jinja2, where I don't even know where to begin in a way that
>the operator doesn't have to have deep knowledge of Kolla's internal
>implementation.

Re: [openstack-dev] [Nova] Abandoned old code reviews

2016-05-27 Thread Sean Dague
On 05/27/2016 05:36 AM, Michael Still wrote:
> Hi,
> 
> I've spent some time today abandoning old reviews from the Nova queue.
> Specifically, anything which hadn't been updated before February this
> year has been abandoned with a message like this:
> 
> "This patch has been idle for a long time, so I am abandoning it to keep
> the review queue sane. If you're interested in still working on this
> patch, then please unabandon it and upload a new patchset."
> 
> Why do this? Abandoning the reviews means that Nova reviewers can focus
> on things where the author is still actively working on the code.
> Additionally, it makes it clearer which bugs are currently being worked.
> 
> Additionally, unabandoning a review is a fairly cheap operation, so
> please let me know if I need to do that anywhere.
> 
> We should probably abandon more patches than those before February, but
> I got bored at this point. I'll probably abandon more later.
> 
> Cheers,
> Michael

We have a script in tree that can be run by any core team member -
https://github.com/openstack/nova/blob/c69afd454b41e2e8fc3496ff56b986342f547064/tools/abandon_old_reviews.sh#L2


It tries to describe the policy, which is basically things with no
activity in the last 4 weeks, and has a -2 or a Jenkins -1 on it.
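
A rough sketch (not the in-tree script itself) of the kind of Gerrit query that
policy translates to, assuming SSH access to review.openstack.org; the port,
query string and output handling below are illustrative only:

import json
import subprocess

QUERY = ('project:openstack/nova status:open age:4w '
         '(label:Code-Review<=-2 OR label:Verified<=-1,jenkins)')

out = subprocess.check_output(
    ['ssh', '-p', '29418', 'review.openstack.org',
     'gerrit', 'query', '--format=JSON', QUERY])

for line in out.decode('utf-8').splitlines():
    change = json.loads(line)
    if 'id' in change:  # the last line is a stats record without an 'id'
        print(change.get('number'), change.get('subject'))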

The biggest issue here is the procedural -2s that don't tend to lift
right away after release (which is probably a mistake, we should only
really use procedural -2s during freeze windows). Feel free to modify
accordingly.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Abandoned old code reviews

2016-05-27 Thread Michael Still
Hi,

I've spent some time today abandoning old reviews from the Nova queue.
Specifically, anything which hadn't been updated before February this year
has been abandoned with a message like this:

"This patch has been idle for a long time, so I am abandoning it to keep
the review queue sane. If you're interested in still working on this patch,
then please unabandon it and upload a new patchset."

Why do this? Abandoning the reviews means that Nova reviewers can focus on
things where the author is still actively working on the code.
Additionally, it makes it clearer which bugs are currently being worked.

Additionally, unabandoning a review is a fairly cheap operation, so please
let me know if I need to do that anywhere.

We should probably abandon more patches than those before February, but I
got bored at this point. I'll probably abandon more later.

Cheers,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Docker failed to add an existed network with GW interface already created

2016-05-27 Thread Liping Mao (limao)
Thanks Vikas for your help, it’s clear and helpful to me.

Regards,
Liping Mao

From: Vikas Choudhary
Reply-To: OpenStack List
Date: Friday, May 27, 2016, 4:32 PM
To: OpenStack List
Subject: Re: [openstack-dev] [Kuryr] Docker failed to add an existed network 
with GW interface already created

Hi Liping Mao,

Please find my response inline. If it is still not clear or I am wrong somewhere, 
please let me know.

Regards
Vikas

On Fri, May 27, 2016 at 10:37 AM, 毛立平 wrote:
Hi Irena,

Thanks for your comments.

Currently, kuryr will create the gw port with owner kuryr:container, but this GW 
obviously can't work.
In the current code, as per my understanding, we are not creating a gw port. If there 
are no pre-existing subnets and a request for the gw address is received, this 'if' 
condition will not be met and no gw port will be created.

As the bug says, the issue occurs if there are pre-existing subnets and this 'if' 
condition is met. In this case it creates a port for the gw address, which is not 
expected.
To fix this we can add logic that checks whether it is a gw address request (using 
"request_type") before this 'if' condition and, if it is, verify that the requested 
address is the same as the subnet gateway address. If it is not the same, there are 
two choices:
1) Return the actual subnet gw to libnetwork in the response (Antoni's suggestion),
   OR
2) Raise an exception, as the docker user's request cannot be met.

It can be modified to create the gw port with owner network:router_interface, but 
it seems the CNM module
does not have an action that can be mapped to attaching a GW to a vRouter.

Is there any reason why we would just create a neutron port but not use 
it (attach it to a vRouter)?

So I still think we can leave this to neutron router-interface-add / 
router-interface-delete.
What do you think?

Regards,
Liping Mao


At 2016-05-26 20:03:24, "Irena Berezovsky" wrote:
Hi Liping Mao,


On Thu, May 26, 2016 at 12:31 PM, Liping Mao (limao) wrote:
Hi Vikas, Antoni and Kuryr team,

When I use kuryr, I noticed that kuryr fails to add an existing
network whose gateway interface was already created by neutron [1][2].

The bug is that kuryr will create a neutron port for the gw
port in ipam_request_address.

I think kuryr should not actually create the neutron gw port at all,
because the CNM module does not have a concept that maps to the Neutron vRouter.
Until now, users have had to use the neutron api to attach the GW port of a
private network to a vRouter. So I think Kuryr should not
actually create the GW port.

I think it is possible to define via the kuryr configuration file whether kuryr should 
create the gw port or not. Kuryr already does this for the DHCP port.
What do you think? Thanks for any comments.


[1] https://bugs.launchpad.net/kuryr/+bug/1584286
[2] https://review.openstack.org/#/c/319524/4



Regards,
Liping Mao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle development sprint

2016-05-27 Thread Rossella Sblendido


On 05/26/2016 10:47 PM, Henry Gessau wrote:
> I am happy to announce that the location logistics for the Neutron mid-cycle
> have been finalized. The mid-cycle will take place in Cork, Ireland on August
> 15-17. I have updated the wiki [1] where you will find a link to an etherpad
> with all the details. There you can add yourself if you plan to attend, and
> make updates to topics that you would like to work on.

Thanks for organizing this! I am happy to see a sprint in Europe :)
Unfortunately the 15th is a bank holiday in some European countries and
at least in Italy most people organize their holidays around those days.
I will try to change my plans and do my best to attend.

cheers,

Rossella

> 
> 
> [1] https://wiki.openstack.org/wiki/Sprints#Newton_sprints
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][neutron] bonding?

2016-05-27 Thread Moshe Levi
Hi Jim,

Neutron supports resource tagging [1], which currently covers only the network 
resource_type.
With a simple change in neutron you could also allow resource tagging for the port 
resource_type [2].

This would allow you to tag ports to indicate that they are in the same group;
maybe that can work better than the ironic port group concept (a rough sketch of
the tagging call follows the references below).


[1] - 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/tag-instances.html
[2] - 
https://github.com/openstack/neutron/blob/master/neutron/extensions/tag.py#L38-L41
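
For illustration, this is roughly what the tag call looks like over the REST API
(endpoint, token and IDs are placeholders); today the extension accepts it for
networks, and the change in [2] is what would open it up for ports:

import requests

NEUTRON = 'http://controller:9696/v2.0'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}

def tag_resource(resource, resource_id, tag):
    # PUT /v2.0/{resource}/{resource_id}/tags/{tag} adds a single tag
    url = '%s/%s/%s/tags/%s' % (NEUTRON, resource, resource_id, tag)
    return requests.put(url, headers=HEADERS)

# works with the current extension:
tag_resource('networks', 'NET_UUID', 'bond-group-1')
# would mark ports as members of the same bond group once ports are allowed:
tag_resource('ports', 'PORT_UUID', 'bond-group-1')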


From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, May 24, 2016 10:19 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic][neutron] bonding?



On 24 May 2016 at 04:51, Jim Rollenhagen wrote:
Hi,

There's rumors floating around about Neutron having a bonding model in
the near future. Are there any solid plans for that?

Who spreads these rumors :)?

To the best of my knowledge I have not seen any RFE proposed recently along 
these lines.


For context, as part of the multitenant networking work, ironic has a
portgroup concept proposed, where operators can configure bonding for
NICs in a baremetal machine. There are ML2 drivers that support this
model and will configure a bond.

Some folks have concerns about landing this code if Neutron is going to
support bonding as a first-class citizen. So before we delay any
further, I'd like to find out if there's any truth to this, and what the
timeline for that might look like.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-27 Thread Thierry Carrez

Ed Leafe wrote:

On May 25, 2016, at 7:25 AM, Denis Makogon  wrote:


Correct me if I'm wrong, but none of the messages above mentioned supporting 
Go extensions for Python (C extensions were mentioned a couple of times). Starting 
with Go 1.5 it is possible to develop extensions for Python [1] (a lib that helps to 
develop such extensions [2])


No, you’re not wrong at all.

This is much more in the original spirit for dealing with the inevitable issues 
where Python just doesn’t cut it performance-wise. The idea was to do 
everything in Python, and where there was a bottleneck, write a C module for 
that function and integrate it using ctypes.
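
(For readers unfamiliar with that pattern, a minimal sketch, assuming a
hypothetical shared library libminidns.so that exports one hot function; the
rest of the service stays in Python.)

import ctypes

_lib = ctypes.CDLL('libminidns.so')  # hypothetical optimized native module
_lib.encode_packet.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
_lib.encode_packet.restype = ctypes.c_int

def encode_packet(payload):
    # thin Python wrapper; only the bottleneck lives in native code
    return _lib.encode_packet(payload, len(payload))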

So could someone from the Designate team do the following: isolate the part(s) 
of the process where Go kicks Python’s butt, create small Go packages to handle 
them, and then use gopy to integrate it? I think there would be little or no 
controversy with this approach, as it’s much less disruptive to the overall 
community.


Yes, this is a variant on the "external dependency" approach that would 
address most of the community fragmentation concerns by keeping the 
optimized parts small and Python-driven.


I could see that working for Designate's MiniDNS (and other partial 
optimizations), but I'm not sure that would work in the Hummingbird 
case, where all the node is rewritten in Go. If we mandated that 
approach, that would probably mean a lot of rework there...


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Docker failed to add an existed network with GW interface already created

2016-05-27 Thread Vikas Choudhary
Hi Liping Mao,

Please find my response inline. If it is still not clear or I am wrong somewhere,
please let me know.

Regards
Vikas

On Fri, May 27, 2016 at 10:37 AM, 毛立平  wrote:

> Hi Irena,
>
> Thanks for your comments.
>
> Currently, kuryr will create the gw port with owner kuryr:container, but this
> GW obviously can't work.
>
In the current code, as per my understanding, we are not creating a gw port. If
there are no pre-existing subnets and a request for the gw address is received,
this 'if' condition will not be met and no gw port will be created.

As the bug says, the issue occurs if there are pre-existing subnets and
this 'if' condition is met. In this case it creates a port for the gw
address, which is not expected.
To fix this we can add logic that checks whether it is a gw address request
(using "request_type") before this 'if' condition and, if it is, verify that
the requested address is the same as the subnet gateway address. If it is not
the same, there are two choices (a rough sketch follows below):
1) Return the actual subnet gw to libnetwork in the response (Antoni's
suggestion),
   OR
2) Raise an exception, as the docker user's request cannot be met.
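
A rough sketch of that check (names here are hypothetical, not the actual kuryr
driver code); libnetwork marks gateway requests with a dedicated request type,
and the handler can compare the requested address with the existing subnet's
gateway before any port is created:

GATEWAY_REQUEST = 'com.docker.network.gateway'  # assumed request-type marker

def handle_address_request(request_type, requested_address, subnet):
    if request_type == GATEWAY_REQUEST:
        gateway_ip = subnet.get('gateway_ip')
        if requested_address and requested_address != gateway_ip:
            # choice 2: the docker user's request cannot be met
            raise Exception('Requested gateway %s does not match subnet '
                            'gateway %s' % (requested_address, gateway_ip))
        # choice 1: return the real subnet gateway to libnetwork,
        # without creating a neutron port for it
        return gateway_ip
    # normal case: fall through to the existing port-creation path
    return None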


> It can be modified to create the gw port with owner network:router_interface,
> but it seems the CNM module
> does not have an action that can be mapped to attaching a GW to a vRouter.
>
> Is there any reason why we would just create a neutron port but not use
> it (attach it to a vRouter)?
>
> So I still think we can leave this to neutron router-interface-add /
> router-interface-delete.
> What do you think?
>
> Regards,
> Liping Mao
>
>
> At 2016-05-26 20:03:24, "Irena Berezovsky"  wrote:
>
> Hi Liping Mao,
>
>
> On Thu, May 26, 2016 at 12:31 PM, Liping Mao (limao) 
> wrote:
>
>> Hi Vikas, Antoni and Kuryr team,
>>
>> When I use kuryr, I noticed that kuryr fails to add an existing
>> network whose gateway interface was already created by neutron [1][2].
>>
>> The bug is that kuryr will create a neutron port for the gw
>> port in ipam_request_address.
>>
>> I think kuryr should not actually create the neutron gw port at all,
>> because the CNM module does not have a concept that maps to the Neutron vRouter.
>> Until now, users have had to use the neutron api to attach the GW port of a
>> private network to a vRouter. So I think Kuryr should not
>> actually create the GW port.
>>
>> I think it is possible to define via the kuryr configuration file whether kuryr
> should create the gw port or not. Kuryr already does this for the DHCP port.
>
>> What do you think? Thanks for any comments.
>>
>>
>> [1] https://bugs.launchpad.net/kuryr/+bug/1584286
>> [2] https://review.openstack.org/#/c/319524/4
>>
>>
>>
>> Regards,
>> Liping Mao
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Continued discussion from the last team meeting

2016-05-27 Thread Haiwei Xu
Hi all,

+1 for starting with  basic functionalities and hiding 'host' from end users.

Regards,
xuhaiwei

-Original Message-
From: Hongbin Lu [mailto:hongbin...@huawei.com] 
Sent: Friday, May 27, 2016 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Continued discussion from the last team 
meeting

I agree with you and Qiming. The Higgins project should start with basic 
functionalities and revisit advanced features later.

 

Best regards,

Hongbin

 

From: Yanyan Hu [mailto:huyanya...@gmail.com] 
Sent: May-24-16 11:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [higgins] Continued discussion from the last team 
meeting

 

Hi Hongbin, thanks a lot for the summary! The following are my thoughts on 
the two open questions:

About container composition: it is a really useful and important feature for 
end users. But based on my understanding, users can actually achieve the same goal 
by leveraging other high-level OpenStack services, e.g. defining a Heat 
template with Higgins container resources and the app/service 
(softwareconfig/softwaredeployment resources) running inside the containers. In 
the future we can implement related functionality inside Higgins to better support 
this kind of use case natively, but at the current stage I suggest we focus on the 
container primitive and its basic operations.

 

For container host management, I agree we should expose the related API interfaces 
to the operator (admin). Ideally, Higgins should be able to manage all container 
hosts (baremetal and VM) automatically, but manual intervention could be 
necessary in many practical use cases. However, I suggest hiding these API 
interfaces from end users since it is not their responsibility to manage those 
hosts.

Thanks.

 

2016-05-25 4:55 GMT+08:00 Hongbin Lu :

Hi all,

 

At the last team meeting, we tried to define the scope of the Higgins project. 
In general, we agreed to focus on the following features as an initial start:

- Build a container abstraction and use docker as the first 
implementation (a rough sketch of such an abstraction follows this list).

- Focus on basic container operations (i.e. CRUD), and leave advanced 
operations (i.e. keep container alive, rolling upgrade, etc.) to users or other 
projects/services.

- Start with non-nested container use cases (e.g. containers on 
physical hosts), and revisit nested container use cases (e.g. containers on 
VMs) later.
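
(Illustration only, not Higgins code: a minimal container abstraction over
docker-py 1.x covering the basic CRUD operations from the first bullet; the
class and method names are made up for the example.)

import docker


class DockerContainerDriver(object):

    def __init__(self, docker_url='unix://var/run/docker.sock'):
        self.client = docker.Client(base_url=docker_url)

    def create(self, name, image, command=None):
        # create + start is the "C" in CRUD for a running container
        container = self.client.create_container(
            image=image, name=name, command=command)
        self.client.start(container=container['Id'])
        return container['Id']

    def list(self):
        return self.client.containers(all=True)

    def show(self, container_id):
        return self.client.inspect_container(container_id)

    def delete(self, container_id, force=False):
        self.client.stop(container_id)
        self.client.remove_container(container_id, force=force)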

The items below need further discussion, so I started this ML thread to discuss them.

1. Container composition: implement a docker-compose-like feature

2. Container host management: abstract the container host

For #1, it seems we broadly agreed that this is a useful feature. The argument 
is where this feature belongs. Some people think this feature belongs to 
other projects, such as Heat, and others think it belongs to Higgins so we 
should implement it. For #2, we were mainly debating two things: where the 
container hosts come from (provisioned by Nova or provided by operators); 
should we expose host management APIs to end-users? Thoughts?

 

Best regards,

Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

 

Yanyan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins][devstack][swift] Service name conflict in devstack

2016-05-27 Thread taget

Hi Rana,

Thanks for your quick response, I really appreciate your help.

On 2016-05-27 14:45, Sheel Rana Insaan wrote:

Dear Eli Qiao,

>@Higgins team
>I workaround it by naming them as "higgi-api" and "higgi-cond" (seems 
no magic in "-i")

>Also, the api-port number as "9517", any comments?

We should update the regex.
s- should be updated to ^s- @ 
https://github.com/openstack-dev/devstack/blob/master/lib/swift#L163


 "[[ ,${ENABLED_SERVICES}=~,"s-"]]" should be updated to [[ 
,${ENABLED_SERVICES}=~,^s-]]"
(this regex is still not tested, but regex update will work, this or 
that way)


I will update this for swift soon in devstack after discussing with 
swift team.


So, you can continue with higgins- as service name.

Please let me know in case I missed something else you wanted to point out.

Best Regards,
Sheel Rana

On Fri, May 27, 2016 at 11:15 AM, taget wrote:


hi team,

I am working on adding a devstack plugin for Higgins and have met some
troubles.

I named the services higgins-api and higgins-cond in the
plugin, but I found that devstack tries to install the swift service,
and the reason is [1]: the swift plugin greps for 's-' in
the service names, which may not be so good for other new projects
that have 's-' in their service names.

Can we improve the swift plugin to use the full service name?
Is there any doc on how to name a new service, or a list of
OpenStack service names?

@Higgins team
I workaround it by naming them as "higgi-api" and "higgi-cond"
(seems no magic in "-i")
Also, the api-port number as "9517", any comments?


[1]
https://github.com/openstack-dev/devstack/blob/master/lib/swift#L163


-- 
Best Regards, Eli Qiao (乔立勇)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best Regards, Eli Qiao (乔立勇)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins][devstack][swift] Service name conflict in devstack

2016-05-27 Thread Sheel Rana Insaan
Dear Eli Qiao,

>@Higgins team
>I workaround it by naming them as "higgi-api" and "higgi-cond" (seems no
magic in "-i")
>Also, the api-port number as "9517", any comments?

We should update the regex.
s- should be updated to ^s- @
https://github.com/openstack-dev/devstack/blob/master/lib/swift#L163

 "[[ ,${ENABLED_SERVICES} =~ ,"s-" ]]" should be updated to [[ ,
${ENABLED_SERVICES} =~ ,^s- ]]"
(this regex is still not tested, but regex update will work, this or that
way)

I will update this for swift soon in devstack after discussing with swift
team.

So, you can continue with higgins- as service name.

Please let me know in case I missed something else you wanted to point out.

Best Regards,
Sheel Rana

On Fri, May 27, 2016 at 11:15 AM, taget  wrote:

> hi team,
>
> I am working on adding a devstack plugin for Higgins and have met some troubles.
>
> I named the services higgins-api and higgins-cond in the plugin, but
> I found that devstack tries to install the swift service,
> and the reason is [1]: the swift plugin greps for 's-' in the
> service names, which may not be so good for other new projects that have 's-'
> in their service names.
>
> Can we improve the swift plugin to use the full service name?
> Is there any doc on how to name a new service, or a list of OpenStack
> service names?
>
> @Higgins team
> I workaround it by naming them as "higgi-api" and "higgi-cond" (seems no
> magic in "-i")
> Also, the api-port number as "9517", any comments?
>
>
> [1] https://github.com/openstack-dev/devstack/blob/master/lib/swift#L163
>
>
> --
> Best Regards, Eli Qiao (乔立勇)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] Spec for congress.conf

2016-05-27 Thread Masahito MUROI

Hi Bryan,

oslo.config, which Congress uses to manage its configuration, sets each field to 
its default value if you don't specify a value in congress.conf. In that sense, 
all config options are optional rather than required.


In my experience, config values that differ between deployments, like the IP 
address and so on, have to be configured, but the others only need to be set when 
you want Congress to run with different behaviors. A minimal illustration of how 
oslo.config defaults work is sketched below.
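
(The option names here are made up for the example, they are not actual
Congress options.)

from oslo_config import cfg

opts = [
    cfg.StrOpt('bind_host', default='0.0.0.0',
               help='IP address to listen on; typically deployment specific.'),
    cfg.IntOpt('bind_port', default=1789,
               help='Port to listen on.'),
]

CONF = cfg.CONF
CONF.register_opts(opts, group='api')

# With nothing set in congress.conf the defaults above are used, so strictly
# speaking no option is required; you only override what differs per
# deployment (e.g. bind_host).
CONF([], project='congress')
print(CONF.api.bind_host, CONF.api.bind_port)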


best regard,
Masahito

On 2016/05/27 3:36, SULLIVAN, BRYAN L wrote:

Hi Congress team,



Quick question for anyone. Is there a spec for fields in congress.conf
file? As of Liberty this has to be tox-generated but I need to know
which conf values are required vs optional. The generated sample output
doesn't clarify that. This is for the Puppet Module and JuJu Charm I am
developing with the help of RedHat and Canonical in OPNFV. I should have
Congress installed by default (for the RDO and JuJu installers) in the
OPNFV Colorado release in the next couple of weeks, and the
congress.conf file settings are an open question. The Puppet module will
also be used to create a Fuel plugin for installation.



Thanks,

Bryan Sullivan | AT&T





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-05-27 Thread Steven Dake (stdake)


On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)"  wrote:

>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake) 
>wrote:
>> Hey folks,
>>
>> While Swapnil has been busy churning the dockerfile.j2 files to all
>>match
>> the same style, and we also had summit where we declared we would solve
>>the
>> plugin problem, I have decided to begin work on a DSL prototype.
>>
>> Here are the problems I want to solve in order of importance by this
>>work:
>>
>> Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
>> Provide a programmatic way to manage Dockerfile construction rather than a
>> manual (with vi or emacs or the like) mechanism
>> Allow complete overrides of every facet of Dockerfile construction, most
>> especially repositories per container (rather than in the base
>>container) to
>> permit the use case of dependencies from one version with dependencies
>>in
>> another version of a different service
>> Get out of the business of maintaining 100+ dockerfiles but instead
>>maintain
>> one master file which defines the data that needs to be used to
>>construct
>> Dockerfiles
>> Permit different types of optimizations or Dockerfile building by
>>changing
>> around the parser implementation ­ to allow layering of each operation,
>>or
>> alternatively to merge layers as we do today
>>
>> I don't believe we can proceed with both binary and source plugins
>>given our
>> current implementation of Dockerfiles in any sane way.
>>
>> I further don't believe it is possible to customize repositories &
>>installed
>> files per container, which I receive increasing requests for offline.
>>
>> To that end, I've created a very very rough prototype which builds the
>>base
>> container as well as a mariadb container.  The mariadb container builds
>>and
>> I suspect would work.
>>
>> An example of the DSL usage is here:
>> https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml
>>
>> A very poorly written parser is here:
>> https://review.openstack.org/#/c/321468/4/dockerdsl/load.py
>>
>> I played around with INI as a format, to take advantage of oslo.config
>>and
>> kolla-build.conf, but that didn't work out.  YML is the way to go.
>>
>> I'd appreciate reviews on the YML implementation especially.
>>
>> How I see this work progressing is as follows:
>>
>> A yml file describing all docker containers for all distros is placed in
>> kolla/docker
>> The build tool adds an option --use-yml which uses the YML file
>> A parser (such as load.py above) is integrated into build.py to lay
>>down the
>> Dockerfiles
>> Wait 4-6 weeks for people to find bugs and complain
>> Make the --use-yml the default for 4-6 weeks
>> Once we feel confident in the yml implementation, remove all
>>Dockerfile.j2
>> files
>> Remove --use-yml option
>> Remove all jinja2-isms from build.py
>>
>> This is similar to the work that took place to convert from raw
>>Dockerfiles
>> to Dockerfile.j2 files.  We are just reusing that pattern.  Hopefully
>>this
>> will be the last major refactor of the dockerfiles unless someone has
>>some
>> significant complaints about the approach.
>>
>> Regards
>> -steve
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>The DSL template to generate the Dockerfiles seems way better than the
>jinja templates in terms of extensibility, which is currently a major
>bottleneck in the plugin implementation. I am +2+W on this plan of
>action; let's test it for the next 4-6 weeks and see from there.
>
>Swapnil
>

Agree.

Customization and plugins are the trigger for the work.  I was thinking of
the following:

Elemental.yml (ships with Kolla)
Elemental-merge.yml (operator provides in /etc/kolla, this file is yaml
merged with elemental.yml)
Elemental-override.yml (operator provides in /etc/kolla, this file
overrides any YAML sections defined)

I think merging and overriding the yaml files should be pretty easy,
compared to jinja2, where I don't even know where to begin in a way that
the operator doesn't have to have deep knowledge of Kolla's internal
implementation.
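
To make that concrete, a rough sketch of the merge step for the three files
named above; the merge semantics (recursive dict merge for elemental-merge.yml,
whole top-level section replacement for elemental-override.yml) are illustrative
only, not a settled design:

import copy

import yaml


def deep_merge(base, extra):
    # recursively merge 'extra' into a copy of 'base'
    result = copy.deepcopy(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result


def load_elemental():
    with open('docker/elemental.yml') as f:
        data = yaml.safe_load(f)
    for path, replace_sections in (
            ('/etc/kolla/elemental-merge.yml', False),
            ('/etc/kolla/elemental-override.yml', True)):
        try:
            with open(path) as f:
                operator_data = yaml.safe_load(f) or {}
        except IOError:
            continue
        if replace_sections:
            data.update(operator_data)  # replace matching top-level sections
        else:
            data = deep_merge(data, operator_data)
    return data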

Regards
-steve
  
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev