Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Joshua Harlow


+1 to 4.13.1


I'll get a release review up once
https://review.openstack.org/#/c/363828/ merges (seems to be on its way
to merging).


https://review.openstack.org/#/c/364063/

Enjoy!

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Converged infrastructure

2016-08-31 Thread Blair Bethwaite
Following on from Edmund's issues... People talking about doing this
typically seem to cite cgroups as the way to avoid CPU and memory
related contention - has anyone been successful in e.g. setting up
cgroups on a nova qemu+kvm hypervisor to limit how much of the machine
nova uses?

On 1 September 2016 at 04:15, Edmund Rhudy (BLOOMBERG/ 120 PARK)
 wrote:
> We currently run converged at Bloomberg with Ceph (all SSD) and I strongly
> dislike it. OSDs and VMs battle for CPU time and memory, VMs steal memory
> that would go to the HV pagecache, and it puts a real dent in any plans to
> be able to deploy hypervisors (mostly) statelessly. Ceph on our largest
> compute cluster spews an endless litany of deep-scrub-related HEALTH_WARNs
> because of memory steal from the VMs depleting available pagecache memory.
> We're going to increase the OS memory reservation in nova.conf to try to
> alleviate some of the worst of the memory steal, but it's been one hack
> after another to keep it going. I hope to be able to re-architect our design
> at some point to de-converge Ceph from the compute nodes so that the two
> sides can evolve separately once more.
>
> From: matt.jar...@datacentred.co.uk
> Subject: Re:[Openstack-operators] Converged infrastructure
>
> Time once again to dredge this topic up and see what the wider operators
> community thinks this time :) There were a fair number of summit submissions
> for Barcelona talking about converged and hyper-converged infrastructure; it
> seems to be the topic du jour from vendors at the minute, despite feeling
> like we've been round this before with Nebula, Piston Cloud etc.
>
> Like a lot of others we run Ceph, and we absolutely don't converge our
> storage and compute nodes for a variety of performance and management
> related reasons. In our experience, the hardware and tuning characteristics
> of both types of nodes are pretty different, in any kind of recovery
> scenarios Ceph eats memory, and it feels like creating a SPOF.
>
> Having said that, with pure SSD clusters becoming more common, some of those
> issues may well be mitigated, so is anyone doing this in production now ? If
> so, what does your hardware platform look like, and are there issues with
> these kinds of architectures ?
>
> Matt
>
> DataCentred Limited registered in England and Wales no. 05611763
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Joshua Harlow






We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a
foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT
`foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')

For some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`
(`id`))')

This impacts oslo.db's "exception re-handling" functionality, which tries
to classify this exception as a DBNonExistentConstraint exception.  It
also breaks oslo.db's test suite locally, but in a downstream project it
would only impact its ability to intercept this exception appropriately.
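
To illustrate the matching problem, here is a minimal, hedged sketch (not
oslo.db's actual filter code) of a regex that classifies the 1452 message
text in both forms, i.e. with or without the stray SQLSTATE prefix:

import re

# Optional 5-digit SQLSTATE ("23000") that PyMySQL 0.7.7 started prepending
# to the error text, followed by the usual MySQL 1452 message.
_FK_FAIL_RE = re.compile(
    r"^(?:\d{5})?Cannot add or update a child row: "
    r"a foreign key constraint fails")

def is_fk_constraint_failure(message):
    """Classify the textual part of a (1452, ...) IntegrityError."""
    return bool(_FK_FAIL_RE.match(message))

old_style = (u"Cannot add or update a child row: a foreign key constraint "
             u"fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT `foo_fkey` "
             u"FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))")
new_style = u"23000" + old_style  # the PyMySQL 0.7.7 behaviour shown above

assert is_fk_constraint_failure(old_style)
assert is_fk_constraint_failure(new_style)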

now that "23000" there looks like a bug.  The above gerrit proposes to
work around it.  However, if we didn't push out the above gerrit, we'd
instead have to change requirements:

https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.

Unless we fix the bug in the next pymysql release, it's not either/or: both
will be needed, plus a minimal oslo.db version bump.

I suggest we:
- block 0.7.7 to unblock upper-constraints updates;
- land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all
stable branches;
- release new oslo.db releases for L-N;
- at least for N, bump minimal version of the library in
global-requirements.txt;
- sync the bump to all consuming projects;
- later, maybe unblock 0.7.7.

In the meantime, interested parties may work with pymysql folks to get the
issue fixed. It may take a while, so I would not make this step part of our
short term plan.

Now, I understand that this does not really sound ideal, but I assume we
are not in requirements freeze yet (the deadline for that is tomorrow), and
this plan will solve the issue for users of all versions of pymysql.

Even if we were frozen, this seems like the sort of thing we'd want to
deal with through a patch release.

I've already created the stable/newton branch for oslo.db, so we'll need
to backport the fix to have a 4.13.1 release.


+1 to 4.13.1


I'll get a release review up once 
https://review.openstack.org/#/c/363828/ merges (seems to be on its way 
to merging).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

joehuang wrote:

I just pointed out the issues with RPC, which is used between the API cell and
child cells if we deploy child cells in edge clouds. Since this thread is
about massively distributed clouds, the RPC issues inside the current
Nova/Cinder/Neutron are not the main focus (they could be another important
and interesting topic), for example how to guarantee the reliability of RPC
messages:


+1 although I'd like to also discuss this, but so be it, perhaps a 
different topic :)




> Cells is a good enhancement for Nova scalability, but there are some issues
> in deployment Cells for massively distributed edge clouds:
>
> 1) using RPC for inter-data center communication will bring the difficulty
> in inter-dc troubleshooting and maintenance, and some critical issue in
> operation. No CLI or restful API or other tools to manage a child cell
> directly. If the link between the API cell and child cells is broken, then
> the child cell in the remote edge cloud is unmanageable, no matter locally
> or remotely.
>
> 2). The challenge in security management for inter-site RPC communication.
> Please refer to the slides[1] for the challenge 3: Securing OpenStack over
> the Internet, Over 500 pin holes had to be opened in the firewall to allow
> this to work – Includes ports for VNC and SSH for CLIs. Using RPC in cells
> for edge cloud will face same security challenges.
>
> 3) only nova supports cells. But not only Nova needs to support edge clouds,
> Neutron, Cinder should be taken into account too. How about Neutron to
> support service function chaining in edge clouds? Using RPC? how to address
> challenges mentioned above? And Cinder?
>
> 4). Using RPC to do the production integration for hundreds of edge cloud is
> quite challenge idea, it's basic requirements that these edge clouds may
> be bought from multi-vendor, hardware/software or both.
> That means using cells in production for massively distributed edge clouds
> is quite bad idea. If Cells provide RESTful interface between API cell and
> child cell, it's much more acceptable, but it's still not enough, similar
> in Cinder, Neutron. Or just deploy lightweight OpenStack instance in each
> edge cloud, for example, one rack. The question is how to manage the large
> number of OpenStack instance and provision service.
>
> [1] https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf


That's also my suggestion: collect all candidate proposals, then discuss
these proposals and compare their pros and cons at the Barcelona summit.

I propose to use the Nova/Cinder/Neutron RESTful APIs for inter-site
communication for edge clouds, and to provide the Nova/Cinder/Neutron API as
the umbrella for all edge clouds. This is the pattern of Tricircle:
https://github.com/openstack/tricircle/



What is the REST API for tricircle?

When looking at the github I see:

''Documentation: TBD''

Getting a feel for its REST API would really be helpful in determining how 
much of a proxy/request router it is vs being an actual API. I don't 
really want/like a proxy/request router (if that wasn't obvious, ha).


Looking at say:

https://github.com/openstack/tricircle/blob/master/tricircle/nova_apigw/controllers/server.py

That doesn't inspire me so much, since that appears to be more of a 
fork/join across many different clients, creating a nova-like API 
out of the joined results of those clients (which feels sort of ummm, 
wrong). This is where I start to wonder about what the right API is 
here, and trying to map 1 `create_server` top-level API onto M child 
calls feels a little off (because that mapping will likely never be 
correct due to the nature of the child clouds, i.e. you have to assume a 
very strictly homogeneous nature to even get close to this working).
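
To make concrete what I mean by fork/join, here's a rough sketch (purely
illustrative, not tricircle's actual code) of one top-level call fanned out
to per-region clients and joined into one result:

from concurrent.futures import ThreadPoolExecutor

def create_server_everywhere(region_clients, server_spec):
    # region_clients: mapping of region name -> a novaclient-like object;
    # server_spec: kwargs such as name/image/flavor for servers.create().
    with ThreadPoolExecutor(max_workers=max(len(region_clients), 1)) as pool:
        futures = {region: pool.submit(client.servers.create, **server_spec)
                   for region, client in region_clients.items()}
    # "join": collapse the M child results into a single aggregate answer.
    return {region: future.result() for region, future in futures.items()}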


Were there other alternative ways of doing this that were discussed?

Perhaps even a new API that doesn't try to 1:1 map onto child calls, 
something along the lines of making an API that more directly suits what 
this project is trying to do (vs trying to completely hide that there are M 
child calls being made underneath).


I get the idea of becoming an uber-openstack-API and trying to unify X 
other openstacks under that uber-API, but it just feels like the wrong way 
to tackle this.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] what permission is required to create a Keystone trust

2016-08-31 Thread Matt Jia
Hi,

I am experimenting with the Keystone Trusts feature using a script which
creates a trust between two users.

import keystoneclient.v3 as keystoneclient
#import swiftclient.client as swiftclient


auth_url_v3 = 'http://xxxt.com:5000/v3/'


demo = keystoneclient.Client(auth_url=auth_url_v3,
 username='demo',
 password='openstack',
 project='demo')
import pdb; pdb.set_trace()
alt_demo = keystoneclient.Client(auth_url=auth_url_v3,
 username='alt_demo',
 password='openstack',
 project='alt_demo')

trust = demo.trusts.create(trustor_user=demo.user_id,
                           trustee_user=alt_demo.user_id,
                           project=demo.tenant_id)

When I run this script, I get this error:

Traceback (most recent call last):
  File "test_os_trust_1.py", line 20, in 
project=demo.tenant_id)
  File "/usr/lib/python2.7/site-packages/keystoneclient/v3/contrib/trusts.py",
line 75, in create
**kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 72,
in func
return f(*args, **new_kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 328,
in create
self.key)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 151,
in _create
return self._post(url, body, response_key, return_raw, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/base.py", line 165,
in _post
resp, body = self.client.post(url, body=body, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 635, in post
return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 621, in _cs_request
return self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/httpclient.py",
line 596, in request
resp = super(HTTPClient, self).request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/baseclient.py",
line 21, in request
return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line
318, in inner
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line
354, in request
raise exceptions.from_response(resp, method, url)
keystoneclient.openstack.common.apiclient.exceptions.Forbidden: You are not
authorized to perform the requested action. (HTTP 403) (Request-ID:
req-6898b073-d467-4f2a-acc0-c4c0ca15970a)

Can anyone explain what sort of permission is required for the demo user to
create a trust?

Cheers, Matt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Upstream - Barcelona

2016-08-31 Thread Adam Lawson
I seem to have missed the thread where upstream opps were being
announced and/or opened. Who do I contact to get in on this? I had table
duty last year and couldn't do it.

//adam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
Some evaluation aspects were added to the etherpad 
https://etherpad.openstack.org/p/massively-distributed_WG_description for 
massively distributed edge clouds, so we can evaluate each proposal. Your 
comments on these considerations are welcome:

- Security management over the WAN: how to manage inter-site communication and 
the edge clouds securely.
- Fail-safe: each edge cloud should be able to run independently; a crash of 
one edge cloud should not impact the running and operation of other edge clouds.
- Maintainability: each edge cloud's installation/upgrade/patching should be 
manageable independently; we shouldn't have to upgrade all edge clouds at the 
same time.
- Manageability: no islands even if some links are broken.
- Easy integration: need to support easy integration of multiple vendors for 
hundreds or thousands of edge clouds.
- Consistency: eventually consistent information (stable status) should be 
achieved for the distributed system.

I also prepared a skeleton for the candidate proposals discussion: 
https://etherpad.openstack.org/p/massively-distributed_WG_candidate_proposals_ocata,
 and linked it into the etherpad mentioned above.

Considering that Tricircle is moving to divide into two projects, 
TricircleNetworking and TricircleGateway 
(https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E),
I listed these two sub-projects in the etherpad; the two projects can work 
together or separately.

Best Regards
Chaoyi Huang(joehuang)


From: lebre.adr...@free.fr [lebre.adr...@free.fr]
Sent: 01 September 2016 1:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow in particular pointers towards 
articles/ETSI specifications/use-cases.

Comments/remarks welcome.
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is 
probably a bit too ambitious for one summit, because point 3, ''Gaps in 
OpenStack'', looks to me like a major action that will probably last more than 
just one summit, but I think you gave the right directions!

- Original message -
> From: "joehuang"
> To: "OpenStack Development Mailing List (not for usage questions)"
> Sent: Wednesday, 31 August 2016 08:48:01
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
>
> Hello, Joshua,
>
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even be at the thousands level.
>
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network, for example, to support
> database cluster distributed in several clouds, the inter-connection
> for data replication is needed.
>
> There are different thoughts, proposals or projects to tackle the
> challenge, architecture level discussion is necessary to see if
> these design and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros and cons, and see in
> which scenarios a proposal works and in which scenarios it can't
> work very well.
>
> So I suggest having at least two successive dedicated design summit
> sessions to discuss this f2f; all thoughts, proposals or
> projects to tackle this kind of problem domain could be collected
> now. The topics to be discussed could be as follows:
>
> 0. Scenario
> 1, Use cases
> 2, Requirements in detail
> 3, Gaps in OpenStack
> 4, Proposal to be discussed
>
> Architecture level proposal discussion
> 1, Proposals
> 2, Pros and cons comparison
> 3, Challenges
> 4, next step
>
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
>
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deployment Cells for massively distributed edge
> > clouds:
> >
> > 1) using RPC for inter-data center communication will bring the
> > difficulty in inter-dc troubleshooting and maintenance, and some
> > critical issue in operation. No CLI or restful API or other tools
> > to manage a child cell directly. If the link between the API cell
> > and child cells is broken, then the child cell in the remote edge
> > cloud is unmanageable, no matter locally or remotely.
> >
> > 2). The challenge in 

Re: [openstack-dev] [kolla] important instructions for Newton Milestone #3 Release today 8/31/2016 @ ~23:30 UTC

2016-08-31 Thread Vikram Hosakote (vhosakot)
Great work kolla and kolla-kubernetes communities!

Big thanks to the openstack-infra team as well :)

Regards,
Vikram Hosakote
IRC:  vhosakot

From: "Steven Dake (stdake)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, August 31, 2016 at 6:37 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] important instructions for Newton Milestone #3 
Release today 8/31/2016 @ ~23:30 UTC

Hey folks,

Milestone 3 will be submitted for tagging to the release team today around my 
end of work day.  All milestone 3 blueprints and bugs will be moved to rc1 in 
the case they don't make the August 31st(today) deadline.

We require fernet in rc1, so if there is anything that can be done to 
accelerate Shuan's work there, please chip in.  I'd like this to be our highest 
priority blueprint merge.  The earlier it merges (when functional) the more 
time we have to test the changes.  Please iterate on this review and review 
daily until merged.

We have made tremendous progress in milestone 3.  We ended up carrying over 
some blueprints as FFEs to rc1 which are all in review state right now and 
nearly complete.

The extension for  features concludes September 15th, 2016 when rc1 is tagged.  
If features don't merge by that time, they will be retargeted for Ocata.  When 
we submit the rc1 tag, master will branch.  After rc1, we will require bug 
backports from master to newton (and mitaka and liberty if appropriate).

We have a large bug backlog.  If folks could tackle that, it would be 
appreciated.  I will be spending most of my time doing that sort of work and 
would appreciate everyone on the team to contribute.  Tomorrow afternoon I will 
have all the rc1 bugs prioritized as seems fitting.

Please do not workflow+1 any blueprint work in the kolla repo until rc1 has 
been tagged.  Master of kolla is frozen for new features not already listed in 
the rc1 milestone.  Master of kolla-kubernetes is open for new features as we 
have not made a stable deliverable out of this repository (a 1.0.0 release).  
As a result, no branch will be made of the kolla-kubernetes repository (I 
think..).  If a branch is made, I'll request it be deleted.

If you have a bug that needs fixing and it doesn't need a backport, just use 
TrivialFix to speed up the process.  If it needs a backport, please use a bug 
id.  After rc1, all patches will need backports so everything should have a bug 
id.  I will provide further guidance after rc1.

A big shout out goes to our tremendous community that has pulled off 3 
milestones on schedule and in functional working order for the Kolla repository 
while maintaining 2 branches and releasing 4 z streams on a 45 day schedule.  
Fantastic work everyone!

Kolla-kubernetes also deserves a shout out - we have a functional compute-kit 
Kubernetes underlay that deploys Kolla containers using mostly native 
Kubernetes functionality.  We are headed towards a fully Kubernetes 
implementation.  The deployment lacks the broad feature set of kolla-ansible 
but uses the JSON API to our containers and is able to spin up nova virtual 
machines with full network (OVS) connectivity - which is huge!

Cheers!
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-31 Thread Yujun Zhang
Hi, Ifat,

The static configuration contains definitions of `entities` and *their*
`relationships`, while the scenario templates contain a definition section
which includes `entities` and `relationships` *between them*. An outline of
these two formats is given below.

static configuration

- entities
  - {entity}
  - {entity}

for each entity

- name:
  id:
  relationship:
    - {relationship}
    - {relationship}

scenario templates

- definitions
  - entities
    - {entity}
    - {entity}
  - relationships
    - {relationship}
    - {relationship}

Though serving different purpose, they both

1. describe entities and relationships
2. use a dedicated key (id/template_id) to reference the items
3. include a source entity and target entity in relationship

The main differences between the two are


- scenario *defines rules* (entity and relationship matching); graph
  update is triggered when entities are added by the datasource.
- static configuration *defines rules* and also *adds entities* to the graph

The rule definitions are common to these two modules. We may define the
static configuration using the same format as the scenario template, and then
simulate an entity discovery from the same file.

By reusing the template parsing engine and workflow, we may reduce the work
in maintenance and bring in new features more easily.
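
To make the idea concrete, here is a minimal sketch (assuming PyYAML;
illustrative only, not actual vitrage code, and the file name is hypothetical)
of reading both parts from one file in the template format and feeding the
entities into a simulated discovery:

import yaml

def load_definitions(path):
    # One file in the scenario-template format supplies both the entities to
    # "discover" and the relationship rules to evaluate.
    with open(path) as f:
        doc = yaml.safe_load(f)
    definitions = doc.get('definitions', {})
    return definitions.get('entities', []), definitions.get('relationships', [])

entities, relationships = load_definitions('static_datasource.yaml')
for entity in entities:
    print('simulated discovery of entity', entity)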

We may discuss it further if anything is unclear.

On Tue, Aug 30, 2016 at 11:07 PM Afek, Ifat (Nokia - IL) <
ifat.a...@nokia.com> wrote:

> Hi Yujun,
>
> From: Yujun Zhang
> Date: Monday, 29 August 2016 at 11:59
>
> entities:
>   - type: switch
>     name: switch-1
>     id: switch-1 # should be same as name
>     state: available
>     relationships:
>       - type: nova.host
>         name: host-1
>         id: host-1 # should be same as name
>         is_source: true # entity is `source` in this relationship
>         relation_type: attached
>       - type: switch
>         name: switch-2
>         id: switch-2 # should be same as name
>         is_source: false # entity is `target` in this relationship
>         relation_type: backup
>
>
> I think that’s the idea, instead of making this assumption in the code.
>
> But I wonder why the static physical configuration file use a different
> format from vitrage template definitions[1]
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/vitrage-template-format.rst
>
>
> What do you mean? The purpose of the templates is to describe the
> condition-action behaviour, whereas the purpose of the static configuration
> is to define resources to be added to the vitrage graph. Can you please explain
> how you would make the formats more similar?
>
> Best Regards,
> Ifat.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Ed Leafe
On Aug 31, 2016, at 12:30 PM, Chris Dent  wrote:

> So to summarize Jay's to do list (please and thank you very much):
> 
> * Look at https://review.openstack.org/#/c/363209/ and decide if it
>  is good enough to get rolling or needs to be completely altered.
> * If the latter, alter it.

I took Jay’s stuff in 363209 and added back the logic to delete existing 
allocations. The tests are all passing for me locally, so now we just have to 
verify that this is indeed the behavior we need.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-31 Thread Vincent.Chao
Hi Neutrons,

I ran into this situation once in the Liberty release.
Here is the thing.
When create_port_chain() is called
(@networking-sfc/networking_sfc/services/sfc/drivers/ovs/driver.py),
it goes down the following code path:
   -> _thread_update_path_nodes()
   -> _update_path_node_flowrules()
   -> _update_path_node_port_flowrules()
   -> _build_portchain_flowrule_body()
   -> _update_path_node_next_hops()
   -> _get_port_subnet_gw_info_by_port_id()
   -> _get_port_subnet_gw_info() raises exc.SfcNoSubnetGateway
If you didn't give the network a router, it raises SfcNoSubnetGateway.
Then, back in plugin.py, create_port_chain() catches the exception as
sfc_exc.SfcDriverError.
In that exception handler, delete_port_chain() is called.
But due to a synchronization problem between the DB and the ovs-bridge, the
delete fails.
I hope this info can help anyone who is using the Liberty version.
Next time, don't forget to attach a router before creating a port chain.

I don't see this code path in the master branch.
It may be better in Mitaka.
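
For anyone following along, here is a rough, self-contained sketch of that
flow (illustrative only, not the actual networking-sfc code), showing why the
user ends up seeing "delete_port_chain failed." on create:

import logging

LOG = logging.getLogger(__name__)

class SfcNoSubnetGateway(Exception):
    """Raised by the driver path above when the subnet has no router/gateway."""

class SfcDriverError(Exception):
    """Driver-level error surfaced to the plugin."""

def _driver_create_port_chain(port_chain):
    # Stand-in for the driver call chain listed above.
    raise SfcNoSubnetGateway("no gateway found for subnet")

def _delete_port_chain(port_chain):
    # Stand-in rollback; in the scenario described it fails because the DB
    # and the ovs-bridge are out of sync.
    raise RuntimeError("DB and ovs-bridge out of sync")

def create_port_chain(port_chain):
    try:
        _driver_create_port_chain(port_chain)
    except SfcNoSubnetGateway as e:
        try:
            _delete_port_chain(port_chain)          # rollback attempt
        except Exception:
            LOG.error("delete_port_chain failed.")  # the message the user sees
        raise SfcDriverError(str(e))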

Thanks
Vincent



2016-08-31 2:19 GMT+08:00 Cathy Zhang :

> Hi Alioune,
>
>
>
> It is weird that when you create a port chain, you get a “chain delete
> failed” error message.
>
> We never had this problem. Chain deletion is only involved when you do
> “delete chain” or “update chain”.
>
> Not sure which networking code file combination you are using or whether
> it is because your system is not properly cleaned up or not properly
> installed.
>
> We are going to release the networking-sfc mitaka version soon.
>
> I would suggest that you wait a little bit and then use the official
> released mitaka version and reinstall the feature on your system.
>
>
>
> Thanks,
>
> Cathy
>
>
>
> *From:* Alioune [mailto:baliou...@gmail.com]
> *Sent:* Tuesday, August 30, 2016 8:03 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Cathy Zhang; Mohan Kumar; Henry Fourie
> *Subject:* Re: [openstack-dev][neutron][networking-sfc] Unable to create
> openstack SFC
>
>
>
> Hi,
>
> Have you received my previous email ?
>
>
>
> Regards,
>
>
>
> On 15 August 2016 at 13:39, Alioune  wrote:
>
> Hi all,
>
> I'm trying to launch Openstack SFC as explained in[1] by creating 2 SFs, 1
> Web Server (DST) and the DHCP namespace as the SRC.
>
> I've installed OVS (Open vSwitch) 2.3.90 with Linux kernel 3.13.0-62 and
> the neutron L2-agent runs correctly.
>
> I followed the process by creating classifier, port pairs and port_group
> but I got a wrong message "delete_port_chain failed." when creating
> port_chain [2]
>
> I tried to create the neutron ports with and without the option
> "--no-security-groups" then tcpdpump on SFs tap interfaces but the ICMP
> packets don't go through the SFs.
>
>
>
> Can anyone advise how to fix that?
>
> What's your channel on IRC ?
>
>
>
> Regards,
>
>
>
>
>
> [1] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
>
> [2]
>
> vagrant@ubuntu:~/openstack_sfc$ ./08-os_create_port_chain.sh
>
> delete_port_chain failed.
>
> vagrant@ubuntu:~/openstack_sfc$ cat 08-os_create_port_chain.sh
>
> #!/bin/bash
>
>
>
> neutron port-chain-create --port-pair-group PG1 --port-pair-group PG2
> --flow-classifier FC1 PC1
>
>
>
> [3] Output OVS Flows
>
>
>
> vagrant@ubuntu:~$ sudo ovs-ofctl dump-flows br-tun -O OpenFlow13
>
> OFPST_FLOW reply (OF1.3) (xid=0x2):
>
>  cookie=0xbc2e9105125301dc, duration=9615.385s, table=0, n_packets=146,
> n_bytes=11534, priority=1,in_port=1 actions=resubmit(,2)
>
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=0, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.382s, table=2, n_packets=5,
> n_bytes=490, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,20)
>
>  cookie=0xbc2e9105125301dc, duration=9615.381s, table=2, n_packets=141,
> n_bytes=11044, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
> actions=resubmit(,22)
>
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=3, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.380s, table=4, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=8617.106s, table=4, n_packets=0,
> n_bytes=0, priority=1,tun_id=0x40e actions=push_vlan:0x8100,set_
> field:4097->vlan_vid,resubmit(,10)
>
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=6, n_packets=0,
> n_bytes=0, priority=0 actions=drop
>
>  cookie=0xbc2e9105125301dc, duration=9615.379s, table=10, n_packets=0,
> n_bytes=0, priority=1 actions=learn(table=20,hard_
> timeout=300,priority=1,cookie=0xbc2e9105125301dc,NXM_OF_
> VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0-
> >NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],
> output:NXM_OF_IN_PORT[]),output:1
>
>  cookie=0xbc2e9105125301dc, duration=9615.378s, table=20, n_packets=5,
> n_bytes=490, priority=0 actions=resubmit(,22)
>
>  

Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Erdősi Péter

On 2016-09-01 4:08, Satish Patel wrote:

If 9.0.1 is mitaka and as per doc it should support LBaasS v2 then why
it doesn't working for me.
The Mitaka release supports LBaaS v2 with stock images, but without 
Horizon GUI support (meaning the neutron driver/service plugin/API work).
Put another way: you can create LBaaS v2 load balancers with the CLI tool 
(described in the link too), and you can also set up the GUI part from the 
git repository...


I don't know whether Mitaka will make a Horizon package with the LBaaS v2 
GUI, or whether that will only come with Newton... :(


Regards:
 Peter

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread joehuang
I just pointed out the issues with RPC, which is used between the API cell and 
child cells if we deploy child cells in edge clouds. Since this thread is about 
massively distributed clouds, the RPC issues inside the current 
Nova/Cinder/Neutron are not the main focus (they could be another important and 
interesting topic), for example how to guarantee the reliability of RPC 
messages:

> Cells is a good enhancement for Nova scalability, but there are some issues
>  in deployment Cells for massively distributed edge clouds:
>
> 1) using RPC for inter-data center communication will bring the difficulty
> in inter-dc troubleshooting and maintenance, and some critical issue in
> operation.  No CLI or restful API or other tools to manage a child cell
> directly. If the link between the API cell and child cells is broken, then
> the child cell in the remote edge cloud is unmanageable, no matter locally
> or remotely.
>
> 2). The challenge in security management for inter-site RPC communication.
> Please refer to the slides[1] for the challenge 3: Securing OpenStack over
> the Internet, Over 500 pin holes had to be opened in the firewall to allow
> this to work – Includes ports for VNC and SSH for CLIs. Using RPC in cells
> for edge cloud will face same security challenges.
>
> 3)only nova supports cells. But not only Nova needs to support edge clouds,
> Neutron, Cinder should be taken into account too. How about Neutron to
> support service function chaining in edge clouds? Using RPC? how to address
> challenges mentioned above? And Cinder?
>
> 4). Using RPC to do the production integration for hundreds of edge cloud is
> quite challenge idea, it's basic requirements that these edge clouds may
> be bought from multi-vendor, hardware/software or both.
> That means using cells in production for massively distributed edge clouds
> is quite bad idea. If Cells provide RESTful interface between API cell and
> child cell, it's much more acceptable, but it's still not enough, similar
> in Cinder, Neutron. Or just deploy lightweight OpenStack instance in each
> edge cloud, for example, one rack. The question is how to manage the large
> number of OpenStack instance and provision service.
>
> [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf

That's also my suggestion: collect all candidate proposals, then discuss these 
proposals and compare their pros and cons at the Barcelona summit.

I propose to use the Nova/Cinder/Neutron RESTful APIs for inter-site communication 
for edge clouds, and to provide the Nova/Cinder/Neutron API as the umbrella for all 
edge clouds. This is the pattern of Tricircle: 
https://github.com/openstack/tricircle/

If there are other proposals, please don't hesitate to share them and let's compare.

Best Regards
Chaoyi Huang(joehuang)


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 01 September 2016 2:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 31 August 2016 at 18:54, Joshua Harlow wrote:
Duncan Thomas wrote:
On 31 August 2016 at 11:57, Bogdan Dobrelya wrote:

I agree that the RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or maybe it just has to be abandoned and replaced by a more
cloud-friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
buy-in are very low.

+2 to that, there are a bunch of technologies that could replace the 
rabbit+rpc combo, e.g. gRPC, then there is http2 and thrift and ... so a writeup IMHO 
would help at least clear the waters a little bit, and explain the blocker of 
the current RPC design pattern (which is multidimensional, because most people 
are probably thinking RPC == rabbit when it's actually more than that now, i.e. 
zeromq and amqp1.0 and ...) and try to centralize on a better replacement.


Is anybody who dislikes the current pattern(s) and implementation(s) 
volunteering to start this documentation? I really am not aware of the issues, 
and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Satish Patel
If 9.0.1 is Mitaka, and per the doc it should support LBaaS v2, then why
isn't it working for me? I have the latest RDO running and here is my
version (do I need to hack something in a file?):

[root@controller-1 ~]# rpm -qa | grep openstack-dashboard
openstack-dashboard-9.0.1-1.el7.noarch

On Wed, Aug 31, 2016 at 1:03 PM, Erdősi Péter  wrote:
> On 2016-08-31 16:39, Turbo Fredriksson wrote:
>>
>> Technically, that's not Mitaka! That's using Horizon from Newton.
>
> How/where did you get that mate? :)
>
> [xyz(cc1:2)] <~> sudo dpkg --list |grep dashboard
> ii  openstack-dashboard 2:9.0.1-0ubuntu2~cloud0   all
> Django web interface for OpenStack
>
> Version 9.0.1 is Mitaka horizon, and we patched the lbaasv2 gui as a local
> module ;)
> As my colleague mentioned before, you can find information here:
> http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>
> Let me quote a sentence from this page: "The Dashboard panels for managing
> LBaaS v2 are available starting with the Mitaka release."
>
> If you open the git repository from the link above, you can see two branches
> (master for newton, and stable/mitaka)
>
> Please take the time to check the information before spreading something
> that is not accurate...
>
>
> Thanks:
>  Peter
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [heat] Support for an "undo" operation for migrating back resource_properties_data

2016-08-31 Thread Steve Baker

On 01/09/16 12:57, Crag Wolfe wrote:

I'm working on a migrate utility for
https://review.openstack.org/#/c/363415 . Quick summary: that means
moving resource.properties_data and event.properties_data to a new
table, resource_properties_data. Migrating to the new model is easy. The
questions come up with the inverse operation.

1) Would we even want to support undoing a migrate? I lean towards "no"
but if the answer is "yes," the next question comes up:
No, OpenStack hasn't supported data migration downgrades for a while 
now. Migration failures are ideally fixed by failing forward. As a last 
resort rollbacks can be performed by restoring the database from backup.

2)

(redacted)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/31/2016 07:56 AM, Michael Still wrote:
There is a quick sketch of what a service account might look like at 
https://review.openstack.org/#/c/363606/ -- I need to do some more 
fiddling to get the new option group working, but I could do that if 
we wanted to try and get this into Newton.


So, I don't think we need it.  I think that creating an identity for the 
new node *in order* to register it with an IdP is backwards: register 
it, and use the identity from the IdP via Federation.

Anything authenticated should be done from the metadata server or from 
Nova itself, based on the token used to launch the workflow.




Michael

On Wed, Aug 31, 2016 at 7:54 AM, Matt Riedemann 
> wrote:


On 8/30/2016 4:36 PM, Michael Still wrote:

Sorry for being slow on this one, I've been pulled into some internal
things at work.

So... Talking to Matt Riedemann just now, it seems like we should
continue to pass through the user authentication details when we have
them to the plugin. The problem is what to do in the case where we do
not (which is mostly going to be when the instance itself makes a
metadata request).

I think what you're saying though is that the middleware wont let any
requests through if they have no auth details? Is that correct?

Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young wrote:

On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/ added a new dynamic
metadata handler to nova. The basic jist is that rather than serving
metadata statically, it can be done dyamically, so that certain values
aren't provided until they are needed, mostly for security purposes
(like credentials to enroll in an AD domain). The metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs of the
instance, image, etc. to ensure a stable API. What this means though
is that the REST service may need to make calls into nova or glance to
get information, like looking up the image metadata in glance.

Currently the dynamic metadata handler _can_ generate auth headers if
an authenticated request is made to it, but consider that a common use
case is fetching metadata from within an instance using something like:

% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) both
authenticated and unauthenticated requests are accepted such that IF
an authenticated request comes in, those credentials can be used,
otherwise fall back to something else?

Only if they are on different URLs, I think. It's auth_token middleware
for all services but Keystone. For Keystone, the rules are similar, but
the implementation is a little different.

Ok. I'm fine with the unauthenticated path if the service we can
just create a separate service user for it.

2. If an unauthenticated request comes in, how best to obtain a 

Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-31 Thread Adam Young

On 08/30/2016 05:36 PM, Michael Still wrote:
Sorry for being slow on this one, I've been pulled into some internal 
things at work.


So... Talking to Matt Riedemann just now, it seems like we should 
continue to pass through the user authentication details when we have 
them to the plugin. The problem is what to do in the case where we do 
not (which is mostly going to be when the instance itself makes a 
metadata request).


I think what you're saying though is that the middleware wont let any 
requests through if they have no auth details? Is that correct?



Yes, that is correct.


Michael




On Fri, Aug 26, 2016 at 12:46 PM, Adam Young > wrote:


On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/
 added a new
dynamic
metadata handler to nova. The basic jist is that
rather than serving
metadata statically, it can be done dyamically, so
that certain values
aren't provided until they are needed, mostly for
security purposes
(like credentials to enroll in an AD domain). The
metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs
of the
instance, image, etc. to ensure a stable API. What
this means though
is that the REST service may need to make calls into
nova or glance to
get information, like looking up the image metadata in
glance.

Currently the dynamic metadata handler _can_ generate
auth headers if
an authenticated request is made to it, but consider
that a common use
case is fetching metadata from within an instance
using something like:

% curl
http://169.254.169.254/openstack/2016-10-06/vendor_data2.json


This will come into the nova metadata service
unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative
newbie) both
authenticated and unauthenticated requests are
accepted such that IF
an authenticated request comes it, those credentials
can be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's auth_token middleware
for all services but Keystone.  For Keystone, the rules are similar, but
the implementation is a little different.


Ok. I'm fine with the unauthenticated path if the service we
can just create a separate service user for it.

2. If an unauthenticated request comes in, how best to
obtain a token
to use? Is it best to create a service user for the
REST services
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to
Keystone, we
could use the X509 Tokenless approach, but if the call
comes from the
new server, you won't have a cert by the time you need to
make the call,
will you?


Not sure which cert you're referring too but yeah, the
metadata service is unauthenticated. The requests can come in
from the instance which has no credentials (via
http://169.254.169.254/).

Shared service users are probably your best bet.  We can
limit the roles
that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information
and nova for instance information. The REST call passes in
various UUIDs for these so they need to be dereferenced. There
is no guarantee that these would be called in all cases but it
is a possibility.

rob


I guess if config_drive is True then this isn't really
a problem as
the metadata will be there in the instance already.

thanks

rob


__


OpenStack Development Mailing List (not for usage
questions)
Unsubscribe:

Re: [openstack-dev] [nova] cells v2 next steps

2016-08-31 Thread Matt Riedemann

On 8/31/2016 1:44 PM, Matt Riedemann wrote:

Just to recap a call with Laski, Sean and Dan, the goal for the next 24
hours with cells v2 is to get this nova change landed:

https://review.openstack.org/#/c/356138/

That depends on a set of grenade changes:

https://review.openstack.org/#/q/topic:setup_cell0_before_migrations

There are similar devstack changes to those:

https://review.openstack.org/#/q/topic:cell0_db

cell0 is optional in newton, so we don't want to add a required change
in grenade that forces an upgrade to newton to require cell0.

And since cell0 is optional in newton, we don't want devstack in newton
running with cell0 in all jobs.

So the plan is for Dan (or someone) to add a flag to devstack, mirrored
in grenade, that will be used to conditionally create the cell0 database
and run the simple_cell_setup command.

Then I'm going to set that flag in devstack-gate and from select jobs in
project-config, so one of the grenade jobs (either single node or
multi-node grenade), and then the placement-api job which is non-voting
in the nova check queue and is our new dumping ground for running
optional things, like the placement service and cell0.



FYI, this is the change I'm using to test the grenade/devstack series:

https://review.openstack.org/#/c/363971/

That's similar to what's proposed in the job updates in project-config:

https://review.openstack.org/#/c/363937/

We have a dependency chain going on now where the top devstack change 
depends on a nova change that depends on the top grenade change, so it's 
all kind of self-testing.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-sfc] need help on requesting release for networking-sfc

2016-08-31 Thread Armando M.
On 31 August 2016 at 17:31, Cathy Zhang  wrote:

> CC OpenStack alias.
>
>
>
> *From:* Cathy Zhang
> *Sent:* Wednesday, August 31, 2016 5:19 PM
> *To:* Armando Migliaccio; Ihar Hrachyshka; Cathy Zhang
> *Subject:* need help on requesting release for networking-sfc
>
>
>
> Hi Armando/Ihar,
>
>
>
> I would like to submit a request for a networking-sfc release. I did this for
> the previous branch release by submitting a bug request in Launchpad, and I
> see that other subprojects, such as L2GW, did this in Launchpad for the Mitaka
> release too.
>
> But the Neutron stadium link http://docs.openstack.org/
> developer/neutron/stadium/sub_project_guidelines.html#sub-
> project-release-process states that “A sub-project owner proposes a patch
> to openstack/releases repository with the intended git hash. The Neutron
> release liaison should be added in Gerrit to the list of reviewers for the
> patch”.
>
>
>
> Could you advise which way I should go or should I do both?
>

Consider the developer documentation to be the most up-to-date process, so please
go ahead with a patch against the openstack/releases repo.


>
>
> Thanks,
>
> Cathy
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]How to address TCs concerns in Tricircle big-tent application

2016-08-31 Thread joehuang
Hello, Monty,

Thank you very much for your guidance and encouragement; let's move in this 
direction.

Best regards
Chaoyi Huang (joehuang)

From: Monty Taylor [mord...@inaugust.com]
Sent: 01 September 2016 0:37
To: joehuang; openstack-dev
Subject: Re: [openstack-dev][tricircle]How to address TCs concerns in Tricircle 
big-tent application

On 08/31/2016 02:16 AM, joehuang wrote:
> Hello, team,
>
> During last weekly meeting, we discussed how to address TCs concerns in
> Tricircle big-tent application. After the weekly meeting, the proposal
> was co-prepared by our
> contributors: 
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
>
> The more doable way is to divide Tricircle into two independent and
> decoupled projects; only the project which deals with networking
> automation will try to become a big-tent project, and the Nova/Cinder
> API-GW will be removed from the scope of the big-tent project application
> and put into another project:
>
> *TricircleNetworking:* Dedicated to cross-Neutron networking automation
> in multi-region OpenStack deployments, running without or with
> TricircleGateway. It will try to become a big-tent project in the current
> application of https://review.openstack.org/#/c/338796/.

Great idea.

> *TricircleGateway:* Dedicated to providing an API gateway for those who need
> a single Nova/Cinder API endpoint in a multi-region OpenStack deployment,
> running without or with TricircleNetworking. It will live as a non-big-tent,
> non-official-OpenStack project, just like Tricircle's status today, and will
> not pursue big-tent status; only if consensus can be achieved in the OpenStack
> community, including the Arch WG and TCs, will we then decide how to get it on
> board in OpenStack. A new repository will need to be requested for this project.
>
>
> And considering the removal of some overlapping implementation in the Nova/Cinder
> API-GW for global objects like flavor and volume type: we can configure one
> region as the master region, and all global objects like flavor, volume type,
> server group, etc. will be managed in the master Nova/Cinder service. In the
> Nova API-GW/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, to get rid of any API
> overlapping-implementation.
>
> More information, you can refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E,
>
> your thoughts are welcome, and let's have more discussion in this weekly
> meeting.

I think this is a great approach Joe.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Support for an "undo" operation for migrating back resource_properties_data

2016-08-31 Thread Crag Wolfe
I'm working on a migrate utility for
https://review.openstack.org/#/c/363415 . Quick summary: that means
moving resource.properties_data and event.properties_data to a new
table, resource_properties_data. Migrating to the new model is easy. The
questions come up with the inverse operation.

1) Would we even want to support undoing a migrate? I lean towards "no"
but if the answer is "yes," the next question comes up:

2) We need to indicate somewhere which resource_properties_data rows
were migrated to begin with. The reason being, we shouldn't support
trying to migrate recent (non-legacy) resource_properties_data data
backwards into the legacy columns in the resource and event tables.
There are a couple of ways to do that: a) add an is_legacy column to
resource_properties_data; b) add another table which stores the ids of the
events and resources that have been migrated; or c) for the super
paranoid, same as b) but also store an extra copy of the original data
(partially motivated by the unfortunate situation we have of
events.properties_data being a PickleType and something going wrong with
the conversion to a Json column, not that I see that happening). c) also
opens up another can of worms with encrypt/decrypt operations. I lean
towards "b" here (well, after null set ;-).

Thanks,
--Crag

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] need help on requesting release for networking-sfc

2016-08-31 Thread Cathy Zhang
CC OpenStack alias.

From: Cathy Zhang
Sent: Wednesday, August 31, 2016 5:19 PM
To: Armando Migliaccio; Ihar Hrachyshka; Cathy Zhang
Subject: need help on requesting release for networking-sfc

Hi Armando/Ihar,

I would like to submit a request for a networking-sfc release. I did this for 
the previous branch release by submitting a bug request in Launchpad, and I see 
that other subprojects, such as L2GW, did this in Launchpad for the Mitaka 
release too.
But the Neutron stadium link 
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
 states that "A sub-project owner proposes a patch to openstack/releases 
repository with the intended git hash. The Neutron release liaison should be 
added in Gerrit to the list of reviewers for the patch".

Could you advise which way I should go or should I do both?

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE request for Ceph RGW integration

2016-08-31 Thread Giulio Fidente

On 08/30/2016 10:50 PM, Steven Hardy wrote:

On Tue, Aug 30, 2016 at 03:25:30PM -0400, Emilien Macchi wrote:

Here's my 2 cents:

The patch in puppet-ceph has been here for long time now and it still
doesn't work (recent update of today, puppet-ceph is not idempotent
when deploying RGW service. It must be fixed in order to get
successful deployment).
Puppet CI is still not gating on Ceph RGW (scenario004 still in
progress and really low progress to make it working recently).


This does sound concerning, Giulio, can you provide any feedback on work
in-progress or planned to improve this?


we invested quite some time today testing and updating the patches as needed

I've got a successful deployment where, by just adding the Member role 
to my user, I could use the regular swiftclient to operate against RadosGW
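
For anyone wanting to repeat that check, a rough smoke test with
python-swiftclient looks something like this (the auth url, credentials and
tenant below are placeholders for whatever your overcloud provides):

from swiftclient.client import Connection

# Placeholders: point these at the overcloud keystone and a user that has
# the Member role assigned.
conn = Connection(
    authurl='http://192.0.2.1:5000/v2.0',
    user='demo',
    key='secret',
    tenant_name='demo',
    auth_version='2',
)

conn.put_container('rgw-smoke-test')
conn.put_object('rgw-smoke-test', 'hello.txt', contents=b'hello radosgw')
headers, body = conn.get_object('rgw-smoke-test', 'hello.txt')
print(body)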


This is by pulling in:

https://review.openstack.org/#/c/347956/
https://review.openstack.org/#/c/363164/

https://review.openstack.org/#/c/334081/ (and its dependencies)

https://review.openstack.org/#/c/289027/

Emilien can you re-evaluate the status of the puppet-ceph and 
puppet-tripleo submissions?



My opinion is that we should not push to have it in Newton. The work for it
was not pushed hard during the cycle, and I see zero reason to push
for it now that the cycle is ending.


Agreed, this might not have been pushed much during the cycle as other 
priorities needed attention too, but it seems to be an interesting 
feature for those deploying Ceph and it is in decent state; also, as per 
Steven's comment below, it'll be optional in TripleO and we'll continue to 
deploy Swift by default, so it's not going to have a great impact on 
other existing work



I agree this is being proposed too late, but given it will be disabled by
default that does mitigate the risk somewhat.

Giulio - can you confirm this will just be a new service template and
puppet profile, and that it's not likely to require rework outside of the
composable services interface?  If so I'm inclined to say OK even if we
know the puppet module needs work.


No rework of the composable services interface will be needed. The tht 
submission, in addition to adding the new service template, adds an 
output to the endpoint map for the new service; the puppet submission 
adds a new role


https://review.openstack.org/#/c/289027/

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 18:58:31 + (+), Jeremy Stanley wrote:
> On 2016-08-31 17:59:43 +0100 (+0100), Matthew Booth wrote:
> > On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:
> [...]
> > > Also we have naming conventions for third-party CI accounts that
> > > suggest they should end in " CI" so you could match on that.
> > 
> > Yeah, all except 'Jenkins' :)
> [...]
> 
> Right, that was mainly because there were more than a few people who
> expressed a desire to be able to receive E-mail messages on comments
> from the "Jenkins" account but not from third-party CI systems.
> 
> > All the CIs I get gerrit spam from are on that list except
> > Jenkins. Do I have to enable something specifically to exclude
> > them?
> [...]
> 
> No, as I understand it, since we set capability.emailReviewers="deny
> group Third-Party CI" in the global Gerrit configuration it should
> avoid sending E-mail for any of their comments.
> https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
> I guess we should troubleshoot that.

I spoke with Khai about it a bit in IRC, and he suggests the
description in the docs is quite literal. Basically it avoids
sending you those messages if you are only a reviewer on the change
or a watcher of the project, but if you're a change owner/author or
have starred the change you probably still receive them.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] don't wait to the last minute

2016-08-31 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-08-31 16:07:13 -0400:
> Folks, we've had more than the usual number of validation errors
> and -1s for version number choice on patches in openstack/releases
> this week. Please don't wait to the last minute to submit your
> milestone 3 tag request, and keep an eye on responses in case you
> need to rework it.
> 
> Being present in #openstack-release is a good way to ensure you're
> aware of any review issues.
> 
> Doug
> 

You may also find it useful to run "tox -e validate" on your patch
before you submit it (commit the patch locally, then run the
validator).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Ubuntu Cloud Image - Forbidden access! Glance fails with error 500.

2016-08-31 Thread Nikhil Komawar
Thanks Thiago for including OpenStackers. You do point out an
interesting deployment scenario, which I'm more than inclined to
comment on as a well-wisher of the community and of OpenStack users.
Please see notes inline.


On 8/31/16 6:24 PM, Martinx - ジェームズ wrote:
> But I need to rely on the upstream URLs for two reasons:
>
> 1- During Glance provision, I can't download the images, the images
> MUST be downloaded by Glance itself, by demand (that's why I always
> use --location and that's why I'm still using Glance v1;

It's a bit unfortunate that you use it this way, as Glance v1 was never
designed to be used directly by users; it was designed as an internal
service for Nova, Cinder, etc. to use (service-to-service
communication).

>
> 2- By relying on a remote URL, I don't need to re-add the images every
> single time that upstream updates its image, Glance will always
> download the latest directly from upstream.

This is a bad idea (very very ... very bad idea). The very reason that
OpenStack relies on storing images in glance is so that the user can
know what image they are going to consume. The image locations feature has
been designed to be admin-only for the same reason (with the
assumption that v1 was supposed to be an internal-only API). I strongly
urge you to use the image locations feature with that context.

Having such a setup will result in an unknown state: the uuid of
the image stays the same, but if some random (possibly even malicious) image is
stored at that http url, then at any given point in time your cloud is not
secure. Also, there are other reasons why such a setup shouldn't be
made: the references Nova uses to determine which image a VM was
booted from are stored against the uuid of that image. If the remote url
is subject to change at any time, the shared understanding between Nova, Glance
and the user about that image will be wrong, because the image bytes may have
mutated since the last check.

>
> BTW, I've sent those messages to both lists (Ubuntu / OpenStack)
> because this interests Ubuntu and since Glance is failing with Error
> 500, OpenStack guys might be interested as well.

Glance has no control over the remote urls, so it fails to interpret
the inaccessible location. In such a scenario the appropriate error is
indeed 500, as glance assumes that the deployment will have resilient
access to that image location.


Hope that helps.
>
> Cheers!
> Thiago
>
> On 29 August 2016 at 05:37,  > wrote:
>
>
> To prevent this kind of thing recurring you can upload the image bytes
> into Glance rather than relying on the third party url always being
> available, eg:
>
> curl
> http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
> | glance image-create --name "Ubuntu 16.04.1 LTS - Xenial Xerus -
> 64-bit - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2
>
>
> On Sun, 28 Aug 2016, Kaustubh Kelkar wrote:
>
> Broken link?
>
> https://cloud-images.ubuntu.com/xenial/
> 
>
> -Kaustubh
>
> From: Martinx - ジェームズ
> Sent: Saturday, August 27, 23:06
> Subject: [Openstack] Ubuntu Cloud Image - Forbidden access!
> Glance fails with error 500.
> To: ubuntu-server, Ubuntu user technical support, not for
> general discussions, openstack@lists.openstack.org
> 
>
> Guys,
>
> It is impossible to download Ubuntu Cloud Image right now:
>
> http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>
> Returns: Forbidden!
>
> wget
> http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>
> --2016-08-28 02:50:36--
> http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>
> Resolving uec-images.ubuntu.com (uec-images.ubuntu.com)... 91.189.88.140
>
> Connecting to uec-images.ubuntu.com

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread James Bottomley
On Tue, 2016-08-30 at 03:08 +, joehuang wrote:
> Hello, Jay,
> 
> Sorry, I don't know why my mail-agent(Microsoft Outlook Web App) did 
> not carry the thread message-id information in the reply.  I'll check 
> and avoid to create a new thread for reply in existing thread.

It's a common problem with outlook.  Microsoft created their own
threading standards for email which are adopted by no-one.  Whenever
you get these headers in your email:

Thread-topic: 
Thread-index: 

And not these:

In-reply-to:
References: 

It usually means exchange has decided the other end is a microsoft
entity and it doesn't need to use the internet standard reply types. 

Unfortunately, this isn't fixable in outlook because Exchange (the MTA)
not outlook (the MUA) does the threading.  There are some thoughts
floating around the internet on how to fix exchange; if you're lucky
and you have exchange 2003, this might fix it:

https://support.microsoft.com/en-us/kb/908027

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Guest VM IP configuration script

2016-08-31 Thread Kevin Benton
The neutron DHCP agent does not issue leases for ports that don't exist in
the Neutron DB. There was a time when it would issue a DHCPNAK to other
DHCP traffic[1], but that's been fixed for quite some time now. Perhaps
that was the bad behavior in Juno that you observed?

1. http://lists.openstack.org/pipermail/openstack-dev/2015-May/064725.html

On Tue, Aug 30, 2016 at 7:44 AM, Satish Patel  wrote:

> Robert,
>
> I didn't find any related configuration which blacklists mac addresses on
> Mitaka. Also I didn't find any document stating that the DHCP agent only
> gives IP addresses to instance mac addresses.
>
> Can you point me to any doc or any kind of material?
>
> On Fri, Aug 26, 2016 at 4:07 PM, Van Leeuwen, Robert
>  wrote:
> > Are you sure it was DHCP misbehaving?
> > Because it could also have been that it tried to takeover the gateway IP.
> > That would certainly mess with connectivity on the network.
> >
> > Just mentioning because you gave the example --router:external while I
> think it should be --router:external True
> >
> > Also if it is dhcp misbehaving you might be able to fix it with the
> dnsmasq_config_file option in the dhcp agent. You can probably blacklist
> everything that does not start with the OpenStack MAC range. (Base_mac
> setting)
> >
> > I currently don't have a setup to reproduce this so I cannot be 100%
> sure about the details or if this works ;-)
> >
> > Cheers,
> > Robert van Leeuwen
> >
> >
> >> On 26 Aug 2016, at 18:58, Satish Patel  wrote:
> >>
> >> Robert,
> >>
> >> I remember in the JUNO release, when I did a flat network with my existing
> >> provider LAN, DHCP started giving IPs to my existing LAN clients
> >> and people started yelling that their network was down :(
> >>
> >> Following networking i configured.
> >>
> >> #neutron net-create network1 --provider:network_type flat
> >> --provider:physical_network extnet  --router:external --shared
> >>
> >> #neutron subnet-create --name subnet1 --enable_dhcp=True
> >> --allocation-pool=start=10.0.3.160,end=10.0.3.166 --gateway=10.0.0.1
> >> network1 10.0.0.0/21
> >>
> >> After realizing issue i have changed  --enable_dhcp=False
> >>
> >> On Fri, Aug 26, 2016 at 2:35 AM, Van Leeuwen, Robert
> >>  wrote:
>  When I was trying to use DHCP in OpenStack I found the OpenStack DHCP
>  started providing IP addresses to my existing LAN machines (we are using
>  flat VLAN with neutron); that is why I disabled OpenStack DHCP. Is it
>  common or am I doing something wrong?
> >>>
> >>> I do not think this should happen.
> >>> It has been a while (Folsom) since I touched a setup with mixed “LAN”
> and OpenStack DHCP but IIRC it works like this:
> >>>
> >>> AFAIK the leases file neutron uses is very specific and will only
> reply to the mac-addresses that are in the dnsmasq config.
> >>> Looking at the dnsmasq process it is set to static:
> >>> From the man page:
> >>> The optional  keyword may be static which tells dnsmasq to
> enable DHCP for the network specified, but not to dynamically allocate IP
> addresses, only hosts which have static addresses given via dhcp-host or
> from /etc/ethers will be served.
> >>>
> >>> Usually the problem is the other way around:
> >>> The existing DHCP in the “lan” bites with what OpenStack does. (so an
> OpenStack instance gets an IP from the lan DHCP)
> >>> This can be prevented by blacklisting the MAC address range your
> instances get in your lan dhcp (Blacklist MAC starting with fa:16:3e )
> >>>
> >>> Cheers,
> >>> Robert van Leeuwen
> >>>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ubuntu Cloud Image - Forbidden access! Glance fails with error 500.

2016-08-31 Thread Clint Byrum
Excerpts from Martinx - ジェームズ's message of 2016-08-31 18:24:07 -0400:
> But I need to rely on the upstream URLs for two reasons:
> 
> 1- During Glance provision, I can't download the images, the images MUST be
> downloaded by Glance itself, by demand (that's why I always use --location
> and that's why I'm still using Glance v1;
> 
> 2- By relying on a remote URL, I don't need to re-add the images every
> single time that upstream updates its image, Glance will always download
> the latest directly from upstream.
> 

This is somewhat ludicrous. You've made your cloud dependent on a free
internet service with zero security.

If anything, set up your own local HTTP server and set the location to
_that_. But then it starts to look a little ridiculous, when you could
just use the glance file store and upload to glance.
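
For what it's worth, a minimal sketch of the upload-the-bytes approach with
python-glanceclient (v1, to match the CLI used in this thread; the endpoint
and token below are placeholders) would be something like:

from glanceclient import Client

glance = Client('1', 'http://glance.example.com:9292', token='ADMIN_TOKEN')

with open('ubuntu-16.04-server-cloudimg-amd64-disk1.img', 'rb') as image_data:
    image = glance.images.create(
        name='Ubuntu 16.04.1 LTS - Xenial Xerus - 64-bit - Cloud Based Image',
        disk_format='qcow2',
        container_format='bare',
        is_public=True,
        # the bytes land in the glance store, so no remote --location involved
        data=image_data,
    )
print(image.id)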

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ubuntu Cloud Image - Forbidden access! Glance fails with error 500.

2016-08-31 Thread Martinx - ジェームズ
But I need to rely on the upstream URLs for two reasons:

1- During Glance provision, I can't download the images, the images MUST be
downloaded by Glance itself, by demand (that's why I always use --location
and that's why I'm still using Glance v1;

2- By relying on a remote URL, I don't need to re-add the images every
single time that upstream updates its image, Glance will always download
the latest directly from upstream.

BTW, I've sent those messages to both lists (Ubuntu / OpenStack) because
this interests Ubuntu and since Glance is failing with Error 500, OpenStack
guys might be interested as well.

Cheers!
Thiago

On 29 August 2016 at 05:37,  wrote:

>
> To prevent this kind of thing recurring you can upload the image bytes
> into Glance rather than relying on the third party url always being
> available, eg:
>
> curl http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img | glance image-create --name "Ubuntu
> 16.04.1 LTS - Xenial Xerus - 64-bit - Cloud Based Image" --is-public true --container-format bare --disk-format qcow2
>
>
> On Sun, 28 Aug 2016, Kaustubh Kelkar wrote:
>
> Broken link?
>>
>> https://cloud-images.ubuntu.com/xenial/
>>
>> -Kaustubh
>>
>> From: Martinx - ジェームズ
>> Sent: Saturday, August 27, 23:06
>> Subject: [Openstack] Ubuntu Cloud Image - Forbidden access! Glance
>> fails with error 500.
>> To: ubuntu-server, Ubuntu user technical support, not for general
>> discussions, openstack@lists.openstack.org
>>
>> Guys,
>>
>> It is impossible to download Ubuntu Cloud Image right now:
>>
>> http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>>
>> Returns: Forbidden!
>>
>> wget http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>>
>> --2016-08-28 02:50:36--  http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>>
>> Resolving uec-images.ubuntu.com (uec-images.ubuntu.com)... 91.189.88.140
>>
>> Connecting to uec-images.ubuntu.com (uec-images.ubuntu.com)|91.189.88.140|:80... connected.
>>
>> HTTP request sent, awaiting response... 403 Forbidden
>>
>> 2016-08-28 02:50:36 ERROR 403: Forbidden.
>>
>> 
>>
>> This broke my OpenStack deployment, because Glance tries to download it
>> and then it fails (error 500 on Glance).
>>
>> ---
>>
>> http://paste.openstack.org/show/564302/
>>
>> ---
>>
>> Here is how I'm adding Ubuntu images to my OpenStack Mitaka Cloud:
>>
>> ---
>>
>> glance image-create --location http://uec-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
>> --name "Ubuntu 16.04.1 LTS - Xenial Xerus - 64-bit - Cloud Based Image"
>> --is-public true --container-format bare --disk-format qcow2
>>
>> ---
>>
>> Cheers!
>>
>> Thiago
>>
>>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [kolla] important instructions for Newton Milestone #3 Release today 8/31/2016 @ ~23:30 UTC

2016-08-31 Thread Steven Dake (stdake)
Hey folks,

Milestone 3 will be submitted for tagging to the release team today around the 
end of my work day.  All milestone 3 blueprints and bugs will be moved to rc1 in 
case they don't make the August 31st (today) deadline.

We require fernet in rc1, so if there is anything that can be done to 
accelerate Shuan's work there, please chip in.  I'd like this to be our highest 
priority blueprint merge.  The earlier it merges (when functional) the more 
time we have to test the changes.  Please iterate on this review and review 
daily until merged.

We have made tremendous progress in milestone 3.  We ended up carrying over 
some blueprints as FFEs to rc1 which are all in review state right now and 
nearly complete.

The extension for features concludes September 15th, 2016, when rc1 is tagged.  
If features don't merge by that time, they will be retargeted for Ocata.  When 
we submit the rc1 tag, master will branch.  After rc1, we will require bug 
backports from master to newton (and mitaka and liberty if appropriate).

We have a large bug backlog.  If folks could tackle that, it would be 
appreciated.  I will be spending most of my time doing that sort of work and 
would appreciate everyone on the team to contribute.  Tomorrow afternoon I will 
have all the rc1 bugs prioritized as seems fitting.

Please do not workflow+1 any blueprint work in the kolla repo until rc1 has 
been tagged.  Master of kolla is frozen for new features not already listed in 
the rc1 milestone.  Master of kolla-kubernetes is open for new features as we 
have not made a stable deliverable out of this repository (a 1.0.0 release).  
As a result, no branch will be made of the kolla-kubernetes repository (I 
think..).  If a branch is made, I'll request it be deleted.

If you have a bug that needs fixing and it doesn't need a backport, just use 
TrivialFix to speed up the process.  If it needs a backport, please use a bug 
id.  After rc1, all patches will need backports so everything should have a bug 
id.  I will provide further guidance after rc1.
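
For example (made-up change and made-up bug number), the two styles of commit
message could look like:

    Fix typo in the neutron bootstrap task

    TrivialFix

versus, for something that needs a backport:

    Fix typo in the neutron bootstrap task

    Closes-Bug: #1234567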

A big shout out goes to our tremendous community that has pulled off 3 
milestones on schedule and in functional working order for the Kolla repository 
while maintaining 2 branches and releasing 4 z streams on a 45 day schedule.  
Fantastic work everyone!

Kolla-kubernetes also deserves a shout out – we have a functional compute-kit 
kubernetes underlay that deploys Kolla containers using mostly native 
kubernetes functionality.  We are headed towards a fully Kubernetes 
implementation.  The deployment lacks the broad feature-set of kolla-ansible 
but uses the json API to our containers and is able to spin up nova virtual 
machines with full network (OVS) connectivity – which is huge!

Cheers!
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deprecated fields in upgrade.

2016-08-31 Thread Suresh Vinapamula
Hi,

What is the typical protocol/guideline followed in the community to handle
deprecated fields during the upgrade procedure?

Should the fields be removed by the user/admin before the upgrade is initiated,
or would the -manage db_sync, or migrate_flavor_data etc... or any
other command take care of that seamlessly?

For example, compute_port in the compute endpoint url is deprecated and removed
in the L version. But keystone-manage db_sync doesn't seem to take care of it while
upgrading from kilo, and kilo happened to have compute_port in the compute
endpoint url. I see a deprecation warning in juno also, and I didn't go
further down to check whether it was already taken care of in the upgrade procedure.

Is there a typical guideline on who handles deprecated fields during the
upgrade procedure? Should it be the user or the tool that does the version
upgrade of the data?

thanks
Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-31 Thread Monty Taylor
On 08/25/2016 04:14 PM, Sean Dague wrote:
> On 08/25/2016 01:13 PM, Steve Martinelli wrote:
>> The keystone team is pursuing a trigger-based approach to support
>> rolling, zero-downtime upgrades. The proposed operator experience is
>> documented here:
>>
>>   http://docs.openstack.org/developer/keystone/upgrading.html
>>
>> This differs from Nova and Neutron's approaches to solve for rolling
>> upgrades (which use oslo.versionedobjects), however Keystone is one of
>> the few services that doesn't need to manage communication between
>> multiple releases of multiple service components talking over the
>> message bus (which is the original use case for oslo.versionedobjects,
>> and for which it is aptly suited). Keystone simply scales horizontally
>> and every node talks directly to the database.
>>
>> Database triggers are obviously a new challenge for developers to write,
>> honestly challenging to debug (being side effects), and are made even
>> more difficult by having to hand write triggers for MySQL, PostgreSQL,
>> and SQLite independently (SQLAlchemy offers no assistance in this case),
>> as seen in this patch:
>>
>>   https://review.openstack.org/#/c/355618/
>>
>> However, implementing an application-layer solution with
>> oslo.versionedobjects is not an easy task either; refer to Neutron's
>> implementation:
>>
>>
>> https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db
>>
>>
>> Our primary concern at this point are how to effectively test the
>> triggers we write against our supported database systems, and their
>> various deployment variations. We might be able to easily drop SQLite
>> support (as it's only supported for our own test suite), but should we
>> expect variation in support and/or actual behavior of triggers across
>> the MySQLs, MariaDBs, Perconas, etc, of the world that would make it
>> necessary to test each of them independently? If you have operational
>> experience working with triggers at scale: are there landmines that we
>> need to be aware of? What is it going to take for us to say we support
>> *zero* downtime upgrades with confidence?
> 
> I would really hold off doing anything triggers related until there was
> sufficient testing for that, especially with potentially dirty data.
> 
> Triggers also really bring in a whole new DSL that people need to learn
> and understand, not just across this boundary, but in the future
> debugging issues. And it means that any errors happening here are now in
> a place outside of normal logging / recovery mechanisms.
> 
> There is a lot of value that in these hard problem spaces like zero down
> uptime we keep to common patterns between projects because there are
> limited folks with the domain knowledge, and splitting that even further
> makes it hard to make this more universal among projects.

I said this the other day in the IRC channel, and I'm going to say it
again here. I'm going to do it as bluntly as I can - please keeping in
mind that I respect all of the humans involved.

I think this is a monstrously terrible idea.

There are MANY reasons for this -but I'm going to limit myself to two.

OpenStack is One Project


Nova and Neutron have an approach for this. It may or may not be ideal -
but it exists right now. While it can be satisfying to discount the
existing approach and write a new one, I do not believe that is in the
best interests of OpenStack as a whole. To diverge in _keystone_ - which
is one of the few projects that must exist in every OpenStack install -
when there exists an approach in the two other most commonly deployed
projects - is such a terrible example of the problems inherent in
Conway's Law that it makes me want to push up a proposal to dissolve all
of the individual project teams and merge all of the repos into a single
repo.

Make the oslo libraries Nova and Neutron are using better. Work with the
Nova and Neutron teams on a consolidated approach. We need to be driving
more towards an OpenStack that behaves as if it wasn't written by
warring factions of developers who barely communicate.

Even if the idea was one I thought was good technically, the above would
still trump that. Work with Nova and Neutron. Be more similar.

PLEASE

BUT - I also don't think it's a good technical solution. That isn't
because triggers don't work in MySQL (they do) - but because we've spent
the last six years explicitly NOT writing raw SQL. We've chosen an
abstraction layer (SQLAlchemy) which does its job well.

IF this were going to be accompanied by a corresponding shift in
approach to not support any backends by MySQL and to start writing our
database interactions directly in SQL in ALL of our projects - I could
MAYBE be convinced. Even then I think doing it in triggers is the wrong
place to put logic.

"Database triggers are obviously a new challenge for developers to
write, honestly challenging to debug (being side effects), and are made
even more difficult by having to hand 

[openstack-dev] [Congress] python-client push bug?

2016-08-31 Thread Tim Hinrichs
Hi all,

As I was sanity checking the latest python-client, which we need to release
by tomorrow, I may have found a bug that we should fix before releasing.
If it's a bug in the server, that can be fixed later, but if it's a bug in
the client, we should get that fixed now.

https://bugs.launchpad.net/congress/+bug/1619065

Masahito: could you double-check that I'm running the right commands in the
client?

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread John Villalovos
On Wed, Aug 31, 2016 at 11:58 AM, Jeremy Stanley  wrote:
> No, as I understand it, since we set capability.emailReviewers="deny
> group Third-Party CI" in the global Gerrit configuration it should
> avoid sending E-mail for any of their comments.
> https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
> I guess we should troubleshoot that.


I also see emails from at least the Cisco CI:

Cisco CI has posted comments on this change.

Change subject: Allow suppressing ramdisk logs collection
..


Patch Set 2:

Build failed. For help on isolating this failure, please contact
cisco-openstack-neutron...@cisco.com. To re-run, post a
'cisco-ironic-recheck' comment.

- tempest-dsvm-ironic-pxe_iscsi_cimc
http://192.133.158.2:8080/job/tempest-dsvm-ironic-pxe_iscsi_cimc/3269
: FAILURE in 2h 02m 36s
- tempest-dsvm-ironic-pxe_ucs
http://192.133.158.2:8080/job/tempest-dsvm-ironic-pxe_ucs/2684 :
FAILURE in 1h 55m 32s

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Ian Wells's message of 2016-08-31 12:30:45 -0700:
> On 31 August 2016 at 10:12, Clint Byrum  wrote:
> 
> > Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> > wrote:
> > >
> > > > I agree that RPC design pattern, as it is implemented now, is a major
> > > > blocker for OpenStack in general. It requires a major redesign,
> > > > including handling of corner cases, on both sides, *especially* RPC
> > call
> > > > clients. Or may be it just have to be abandoned to be replaced by a
> > more
> > > > cloud friendly pattern.
> > >
> > >
> > > Is there a writeup anywhere on what these issues are? I've heard this
> > > sentiment expressed multiple times now, but without a writeup of the
> > issues
> > > and the design goals of the replacement, we're unlikely to make progress
> > on
> > > a replacement - even if somebody takes the heroic approach and writes a
> > > full replacement themselves, the odds of getting community buy-in are very
> > > low.
> >
> > Right, this is exactly the sort of thing I'd like to gather a group of
> > design-minded folks around in an Architecture WG. Oslo is busy with the
> > implementations we have now, but I'm sure many oslo contributors would
> > like to come up for air and talk about the design issues, and come up
> > with a current design, and some revisions to it, or a whole new one,
> > that can be used to put these summit hallway rumors to rest.
> >
> 
> I'd say the issue is comparatively easy to describe.  In a call sequence:
> 
> 1. A sends a message to B
> 2. B receives messages
> 3. B acts upon message
> 4. B responds to message
> 5. A receives response
> 6. A acts upon response
> 
> ... you can have a fault at any point in that message flow (consider
> crashes or program restarts).  If you ask for something to happen, you wait
> for a reply, and you don't get one, what does it mean?  The operation may
> have happened, with or without success, or it may not have gotten to the
> far end.  If you send the message, does that mean you'd like it to cause an
> action tomorrow?  A year from now?  Or perhaps you'd like it to just not
> happen?  Do you understand what Oslo promises you here, and do you think
> every person who ever wrote an RPC call in the whole OpenStack solution
> also understood it?
> 
> I have opinions about other patterns we could use, but I don't want to push
> my solutions here, I want to see if this is really as much of a problem as
> it looks and if people concur with my summary above.  However, the right
> approach is most definitely to create a new and more fitting set of oslo
> interfaces for communication patterns, and then to encourage people to move
> to the new ones from the old.  (Whether RabbitMQ is involved is neither
> here nor there, as this is really a question of Oslo APIs, not their
> implementation.)

I think it's about time we get some Architecture WG meetings started,
and put "Document RPC design" on the agenda.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Matt Riedemann

On 8/31/2016 4:17 PM, Dan Smith wrote:

Thanks Dan for your response. While I do run that before I start my
move to liberty, what I see is that it doesn't seem to flavor migrate
meta data for the VMs that are spawned after controller upgrade from
juno to kilo and before all computes upgraded from juno to kilo. The
current work around is to delete those VMs that are spawned after
controller upgrade and before all computes upgrade, and then initiate
liberty upgrade. Then it works fine.


I can't think of any reason why that would be, or why it would be a
problem. Instances created after the controllers are upgraded should not
have old-style flavor info, so they need not be touched by the migration
code.

Maybe filing a bug is in order describing what you see?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Also, are you running with the latest kilo patch update? There were some 
bug fixes backported after the release from what I remember.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Update on Nova scheduler poor performance with Ironic

2016-08-31 Thread Mathieu Gagné
Hi Marc,

Too sad we didn't see you at the OpenStack Ops meetup. =)

On Wed, Aug 31, 2016 at 4:46 PM, Marc Heckmann
 wrote:
>
> I admit that we're having  a hard time figuring out exactly which
> scheduler filters rely on the option though.
>

I suspect the following filters depend on the scheduler instance
tracking feature:
- DifferentHostFilter
- SameHostFilter
- ServerGroupAntiAffinityFilter
- ServerGroupAffinityFilter
- TypeAffinityFilter

I checked for filters relying on host_state.instances being defined.
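
(For the curious, a throwaway way to repeat that check from an environment
where nova is importable; this is just a helper script, nothing official:)

import importlib
import inspect
import pkgutil

import nova.scheduler.filters as filters_pkg

# print every scheduler filter module whose source touches host_state.instances
for _, name, _ in pkgutil.iter_modules(filters_pkg.__path__):
    module = importlib.import_module('nova.scheduler.filters.' + name)
    if 'host_state.instances' in inspect.getsource(module):
        print(name)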

I opened a bug related to the scheduler being too greedy when loading the
list of instances for tracking:
https://bugs.launchpad.net/nova/+bug/1619050

Change proposed to fix the bug:
https://review.openstack.org/#/c/363944/

--
Mathieu

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Matt Riedemann

On 8/31/2016 12:30 PM, Chris Dent wrote:



On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.


There was another hangout today where we caught up on where we are.
Some notes were added to the etherpad
https://etherpad.openstack.org/p/placement-next

There is code either merged or pending merge that allows the
resource tracker to ensure that resource providers exist and have
the correct inventory.

The major concern and blocker at this point is setting and deleting
allocations, for which the assistance of Jay is required. Some
details follow with a summary of Jay's todos at the bottom.

There are two patches, starting at
https://review.openstack.org/#/c/363209/

The first is a hack to get the object side handling for
AllocationList.create_all and delete_all. As noted in the comments
there we're not sure about the atomicity in create_all and need Jay
to determine if what's there can be made to work, or as suggested we
need a mondo SQL thing to get it right. If the latter, we need Jay
to write it :)

I'm going to carry on with those patches now and try to add some
generation handling back in to protect against inventory changing
out from under us while making allocations, but I'm not confident
of getting it to anything more than possibly adequate, and great would
be better.

During that I'm also going to try to adjust things so that we can
update an existing allocation, not just create them, as we've
determined that's required. set_all, not create_all, basically.

The other missing piece is the client side of setting and deleting
allocations, from the resource tracker. We'd like Jay to start this
too or if we're all lucky maybe it is started already?

And finally there's a question we didn't know how to answer: What
will the process be for healing instances that already exist before
the placement service is started, and thus have no allocations?

So to summarize Jay's to do list (please and thank you very much):

* Look at https://review.openstack.org/#/c/363209/ and decide if it
  is good enough to get rolling or needs to be completely altered.
* If the latter, alter it.
* Write the allocation client.
* Consult on healing instance allocations.


I think the healing thing is something we can deal with after feature 
freeze, right? I just don't want to become distracted by it.




Meanwhile several people are involved in related clean up patches in
both nova and devstack to smooth off rough edges while we pushed a
lot of code.

Thanks to everyone today for pushing so hard. We look pretty close to
getting the must haves happening.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
Sure, will file a bug with my observations.

On Wed, Aug 31, 2016 at 2:17 PM, Dan Smith  wrote:

> > Thanks Dan for your response. While I do run that before I start my
> > move to liberty, what I see is that it doesn't seem to flavor migrate
> > meta data for the VMs that are spawned after controller upgrade from
> > juno to kilo and before all computes upgraded from juno to kilo. The
> > current work around is to delete those VMs that are spawned after
> > controller upgrade and before all computes upgrade, and then initiate
> > liberty upgrade. Then it works fine.
>
> I can't think of any reason why that would be, or why it would be a
> problem. Instances created after the controllers are upgraded should not
> have old-style flavor info, so they need not be touched by the migration
> code.
>
> Maybe filing a bug is in order describing what you see?
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] FF is active now

2016-08-31 Thread Vitaly Gridnev
Dear team,

The N3 release is almost done, so from now on features can't be merged without
a feature freeze exception. Feature Freeze status can be found at [0], along
with several features that already have FFEs. An FFE can be requested by
asking on the openstack-dev mailing list.

[0] https://etherpad.openstack.org/p/sahara-review-priorities
[1] https://review.openstack.org/#/c/363932/

-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
>> While migrate_flavor_data seem to flavor migrate meta data of the VMs
>> that were spawned before upgrade procedure, it doesn't seem to flavor
>> migrate for the VMs that were spawned during the upgrade procedure more
>> specifically after openstack controller upgrade and before compute
>> upgrade. Am I missing something here or is it by intention?

>You can run the flavor migration as often as you need, and can certainly
>run it after your last compute is upgraded before you start to move into
>liberty.
>
>--Dan


Thanks Dan for your response. While I do run that before I start my
move to liberty, what I see is that it doesn't seem to flavor migrate
meta data for the VMs that are spawned after controller upgrade from
juno to kilo and before all computes upgraded from juno to kilo. The
current work around is to delete those VMs that are spawned after
controller upgrade and before all computes upgrade, and then initiate
liberty upgrade. Then it works fine.


Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Dan Smith
> Thanks Dan for your response. While I do run that before I start my
> move to liberty, what I see is that it doesn't seem to flavor migrate
> meta data for the VMs that are spawned after controller upgrade from
> juno to kilo and before all computes upgraded from juno to kilo. The
> current work around is to delete those VMs that are spawned after
> controller upgrade and before all computes upgrade, and then initiate
> liberty upgrade. Then it works fine.

I can't think of any reason why that would be, or why it would be a
problem. Instances created after the controllers are upgraded should not
have old-style flavor info, so they need not be touched by the migration
code.

Maybe filing a bug is in order describing what you see?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] unsubscribe

2016-08-31 Thread Steve Tegeler
remove

-Original Message-
From: openstack-operators-requ...@lists.openstack.org 
[mailto:openstack-operators-requ...@lists.openstack.org] 
Sent: Wednesday, August 31, 2016 5:00 AM
To: openstack-operators@lists.openstack.org
Subject: OpenStack-operators Digest, Vol 70, Issue 36

Send OpenStack-operators mailing list submissions to
openstack-operators@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-requ...@lists.openstack.org

You can reach the person managing the list at
openstack-operators-ow...@lists.openstack.org

When replying, please edit your Subject line so it is more specific than "Re: 
Contents of OpenStack-operators digest..."


Today's Topics:

   1. Re: NYC Ops Meetup - Ubuntu packaging session summary
  (Corey Bryant)
   2. [scientific][scientific-wg] Reminder: Scientific WG meeting
  Wednesday 0900 UTC (Stig Telfer)
   3. Re: Update on Nova scheduler poor performance with Ironic
  (David Medberry)
   4. [UX] Horizon Searchlight Usability Study -Call for
  Participants (Danielle Mundle)
   5. Re: Update on Nova scheduler poor performance with Ironic
  (Matt Riedemann)
   6. Re: Update on Nova scheduler poor performance with Ironic
  (Joshua Harlow)
   7. python and nice utf ? ? :) (Saverio Proto)


--

Message: 1
Date: Tue, 30 Aug 2016 08:50:55 -0400
From: Corey Bryant 
To: Saverio Proto 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] NYC Ops Meetup - Ubuntu packaging
session summary
Message-ID:

[openstack-dev] [glance] Reviews in queue for newton-3

2016-08-31 Thread Nikhil Komawar
Hi all,


I've proposed a release patch [1] where I am collecting all the
reviews that are in the queue and that you'd like in Newton 3. Please leave
a comment there and I will try to get to reviewing them soon. Based on
the progress, time available, freeze, etc., a determination about the
feasibility of them making it into Newton will be made; if the review link
is posted there in the next 4 hours, you can expect a note on it
indicating whether it will make it or not.

Thanks for your co-operation, and I appreciate all the help in setting up
the Newton release!


[1] https://review.openstack.org/#/c/363930/


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Update on Nova scheduler poor performance with Ironic

2016-08-31 Thread Marc Heckmann
Hi,

On Wed, 2016-08-31 at 13:46 -0400, Mathieu Gagné wrote:
> On Wed, Aug 31, 2016 at 1:33 AM, Joshua Harlow  > wrote:
> > 
> > > 
> > > 
> > > Enabling this option will make it so Nova scheduler loads
> > > instance
> > > info asynchronously at start up. Depending on the number of
> > > hypervisors and instances, it can take several minutes. (we are
> > > talking about 10-15 minutes with 600+ Ironic nodes, or ~1s per
> > > node in
> > > our case)
> > 
> > This feels like a classic thing that could just be made better by a
> > scatter/gather (in threads or other?) to the database or other
> > service. 1s
> > per node seems ummm, sorta bad and/or non-optimal (I wonder if this
> > is low
> > hanging fruit to improve this). I can travel around the world 7.5
> > times in
> > that amount of time (if I was a light beam, haha).
> This behavior was only triggered under the following conditions:
> - Nova Kilo
> - scheduler_tracks_instance_changes=False
> 
> So someone installing the latest Nova version won't have this issue.
> Furthermore, if you enable scheduler_tracks_instance_changes,
> instances will be loaded asynchronously by chunk when nova-scheduler
> starts. (10 compute nodes at a time) But Jim found that enabling this
> config causes OOM errors.

Somewhat of a thread hijack, but it's funny that this comes up now. We've
been getting OOMs on some of our Liberty controllers in the past couple of
weeks, in part because of Nova Scheduler memory usage (10GiB+ right at
startup).

We just now disabled "scheduler_tracks_instance_changes" and I confirm
that mem usage has become reasonable again.

I admit that we're having a hard time figuring out exactly which
scheduler filters rely on the option though.


 
> 
> So I investigated and found a very interesting bug presents if you
> run
> Nova in the Ironic context or anything where a single nova-compute
> process manages multiple or LOT of hypervisors. As explained
> previously, Nova loads the list of instances per compute node to help
> with placement decisions:
> https://github.com/openstack/nova/blob/kilo-eol/nova/scheduler/host_m
> anager.py#L590
> 
> Again, in Ironic context, a single nova-compute host manages ALL
> instances. This means this specific line found in _add_instance_info
> will load ALL instances managed by that single nova-compute host.
> What's even funnier is that _add_instance_info is called from
> get_all_host_states for every compute nodes (hypervisors), NOT
> nova-compute host. This means if you have 2000 hypervisors (Ironic
> nodes), this function will load 2000 instances per hypervisor found
> in
> get_all_host_states, ending with an overall process loading 2000^2
> rows from the database. Now I know why Jim Roll complained about OOM
> error. objects.InstanceList.get_by_host_and_node should be used
> instead, NOT objects.InstanceList.get_by_host. Will report this bug
> soon.
> 
> 
> > 
> > > 
> > > 
> > > There is a lot of side-effects to using it though. For example:
> > > - you can only run ONE nova-scheduler process since cache state
> > > won't
> > > be shared between processes and you don't want instances to be
> > > scheduled twice to the same node/hypervisor.
> > 
> > Out of curiosity, do you have only one scheduler process active and
> > passive
> > scheduler process(es) idle waiting to become active if the other
> > schedule
> > dies? (pretty simply done via something like
> > https://kazoo.readthedocs.io/en/latest/api/recipe/election.html) Or
> > do you
> > have some manual/other process that kicks off a new scheduler if
> > the 'main'
> > one dies?
> We use the HA feature of our virtualization infrastructure to handle
> failover. This is a compromise we are willing to accept for now. I
> agree that not everybody has access to this kind of feature in their
> infra.
> 
> 
> > 
> > > 
> > > 2) Run a single nova-compute service
> > > 
> > > I strongly suggest you DO NOT run multiple nova-compute services.
> > > If
> > > you do, you will have duplicated hypervisors loaded by the
> > > scheduler
> > > and you could end up with conflicting scheduling. You will also
> > > have
> > > twice as much hypervisors to load in the scheduler.
> > 
> > This seems scary (whenever I hear run a single of anything in a
> > *cloud*
> > platform, that makes me shiver). It'd be nice if we at least
> > recommended
> > people run https://kazoo.readthedocs.io/en/latest/api/recipe/electi
> > on.html
> > or have some active/passive automatic election process to handle
> > that single
> > thing dying (which they usually do, at odd times of the night).
> > Honestly I'd
> > (personally) really like to get to the bottom of how we as a group
> > of
> > developers ever got to the place where software was released
> > (and/or even
> > recommended to be used) in a *cloud* platform that ever required
> > only one of
> > anything to be ran (that's crazy bonkers, and yes there is history
> > here, but
> > damn, it just feels rotten as all hell, 

Re: [openstack-dev] [TripleO] FFE request for ec2-api integration

2016-08-31 Thread Emilien Macchi
On Wed, Aug 31, 2016 at 4:31 PM, Sven Anderson  wrote:
> Hi,
>
> I'm working on the integration of the puppet-ec2api module. It is a
> (probably) very straight forward task. The only thing that is a current
> impediment is that puppet CI is currently not deploying and running
> tempest on puppet-ec2api. I'm currently working on getting the ec2
> credentials created within puppet-tempest, which are needed to run
> tempest on ec2api. Once this is done, it should be very quick thing.
> Here the changes that are not yet ready/merged. The change for THT is
> still missing.
>
> https://review.openstack.org/#/c/357971
> https://review.openstack.org/#/c/356442
> https://review.openstack.org/#/c/336562
>
> I'd like to formally request an FFE for this.
>
> Thanks,
>
> Sven
>

I'll have the same kind of remark as I gave for Ceph RGW.

I haven't seen any effort to bring EC2API support into TripleO during the
Newton cycle, and I don't think it is the right choice to push the feature
at the end of the cycle, so close to release.
We're currently overloaded with all the FFEs and bugs we're trying to
land/fix, and I don't think adding more bits to the stack will help.
As a retrospective for next time, I suggest starting the work at the
beginning of the cycle so we stop pushing all we can at the end of
cycles.

My 2 cents again.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Summit planning etherpad

2016-08-31 Thread Rob Cresswell
Hi all,

Etherpad for planning summit sessions: 
https://etherpad.openstack.org/p/horizon-ocata-summit

Please note the sessions have been requested, not scheduled, so the actual 
number we get may not be the same.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] FFE request for ec2-api integration

2016-08-31 Thread Sven Anderson
Hi,

I'm working on the integration of the puppet-ec2api module. It is a
(probably) very straightforward task. The only current
impediment is that puppet CI is not yet deploying and running
tempest on puppet-ec2api. I'm currently working on getting the ec2
credentials created within puppet-tempest, which are needed to run
tempest on ec2api. Once this is done, it should be a very quick thing.
Here are the changes that are not yet ready/merged. The change for THT is
still missing.

https://review.openstack.org/#/c/357971
https://review.openstack.org/#/c/356442
https://review.openstack.org/#/c/336562

I'd like to formally request an FFE for this.

Thanks,

Sven

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] don't wait to the last minute

2016-08-31 Thread Doug Hellmann
Folks, we've had more than the usual number of validation errors
and -1s for version number choice on patches in openstack/releases
this week. Please don't wait to the last minute to submit your
milestone 3 tag request, and keep an eye on responses in case you
need to rework it.

Being present in #openstack-release is a good way to ensure you're
aware of any review issues.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Deprecating old CLI in python-fuelclient

2016-08-31 Thread Roman Prykhodchenko
Fuelers,

We are proud to announce that we finally managed to reach the point where the 
old CLI in python-fuelclient (aka the old Fuel Client) can be deprecated and 
replaced with fuel2, a cliff-based CLI. Support for the full set of commands 
that are required to operate Fuel has already been implemented in fuel2. After the 
deprecation plan is done, users will no longer be able to use the old commands 
unless they install an older version of python-fuelclient from PyPI.

I have published a specification [1] that describes in detail what changes are 
going to be made and how different users can live with those changes. The 
specification also contains a table that compares the old and new CLI, so the 
migration process will be as smooth as possible.


References:

1. https://review.openstack.org/#/c/361049


- romcheg


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible work load test for interop patch set

2016-08-31 Thread Kris G. Lindgren
I originally agreed with you, but then I thought about it more this way: it's 
a tool to test whether clouds are interop compatible (at least that heat works 
the same on the two clouds). While it is not technically a tool to manage OpenStack, 
it is still something that some Operators could want if they are looking 
at doing hybrid cloud. Or they may want to ensure that two of their own 
private clouds are interop compatible.

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Joseph Bajin 
Date: Wednesday, August 31, 2016 at 1:39 PM
To: "Yih Leong, Sun." 
Cc: OpenStack Operators , 
defcore-committee 
Subject: Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible work 
load test for interop patch set

It looks like this was merged, but no one really answered my questions about 
an "InterOp Challenge" code base going into the Operators repository.

--Joe

On Wed, Aug 31, 2016 at 12:23 PM, Yih Leong, Sun. 
> wrote:
Can someone from osops please review the following patch?
https://review.openstack.org/#/c/351799/

The patchset was last updated Aug 11th.
Thanks!



On Tue, Aug 16, 2016 at 7:17 PM, Joseph Bajin 
> wrote:
Sorry about that. I've been a little busy as of late, and was able to get 
around to taking a look.

I have a question about these.   What exactly is the Interop Challenge?  The 
OSOps repos are usually for code that can help Operators maintain and run their 
cloud.   These don't necessarily look like what we normally see submitted.

Can you expand on what the InterOp Challenge is and if it is something that 
Operators would use?

Thanks

Joe

On Tue, Aug 16, 2016 at 3:02 PM, Shamail 
> wrote:


> On Aug 16, 2016, at 1:44 PM, Christopher Aedo 
> > wrote:
>
> Tong Li, I think the best place to ask for a look would be the
> Operators mailing list
> (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators).
> I've cc'd that list here, though it looks like you've already got a +2
> on it at least.
+1

I had contacted JJ earlier and he told me that the best person to contact would 
be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to this 
message.
>
> -Christopher
>
>> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li 
>> > wrote:
>> The patch set has been submitted to github for awhile, can some one please
>> review the patch set here?
>>
>> https://review.openstack.org/#/c/354194/
>>
>> Thanks very much!
>>
>> Tong Li
>> IBM Open Technology
>> Building 501/B205
>> liton...@us.ibm.com
>>
>>
>> ___
>> Defcore-committee mailing list
>> defcore-commit...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
> ___
> Defcore-committee mailing list
> defcore-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
Defcore-committee mailing list
defcore-commit...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] work load test for docker swarm fix patch

2016-08-31 Thread Joseph Bajin
HI there,

That patch was merged earlier today..

The patch was rebased at 2pm EST. It was reviewed earlier this morning
again, and then +1'd on the 25th of August.

You should be good to go.   If you do have any questions like this, the
OSOps Working Group is out there to help as well.  We had another meeting
where only one person showed up (me), so this type of stuff is great to
work on and get informed about.



Thanks

Joe

On Wed, Aug 31, 2016 at 3:16 PM, Tong Li  wrote:

> Can someone from osops please review the following patch? It has been
> sitting there for a long time, and only 2 lines get removed. Please help out to get
> it merged so that users do not have to apply the patch set themselves to run it.
>
> https://review.openstack.org/#/c/356586/
>
> Tong Li
> IBM Open Technology
> Building 501/B205
> liton...@us.ibm.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Ian Wells
On 31 August 2016 at 10:12, Clint Byrum  wrote:

> Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> > On 31 August 2016 at 11:57, Bogdan Dobrelya 
> wrote:
> >
> > > I agree that RPC design pattern, as it is implemented now, is a major
> > > blocker for OpenStack in general. It requires a major redesign,
> > > including handling of corner cases, on both sides, *especially* RPC call
> > > clients. Or maybe it just has to be abandoned to be replaced by a more
> > > cloud-friendly pattern.
> >
> >
> > Is there a writeup anywhere on what these issues are? I've heard this
> > sentiment expressed multiple times now, but without a writeup of the
> issues
> > and the design goals of the replacement, we're unlikely to make progress
> on
> > a replacement - even if somebody takes the heroic approach and writes a
> > full replacement themselves, the odds of getting community buy-in are very
> > low.
>
> Right, this is exactly the sort of thing I'd like to gather a group of
> design-minded folks around in an Architecture WG. Oslo is busy with the
> implementations we have now, but I'm sure many oslo contributors would
> like to come up for air and talk about the design issues, and come up
> with a current design, and some revisions to it, or a whole new one,
> that can be used to put these summit hallway rumors to rest.
>

I'd say the issue is comparatively easy to describe.  In a call sequence:

1. A sends a message to B
2. B receives messages
3. B acts upon message
4. B responds to message
5. A receives response
6. A acts upon response

... you can have a fault at any point in that message flow (consider
crashes or program restarts).  If you ask for something to happen, you wait
for a reply, and you don't get one, what does it mean?  The operation may
have happened, with or without success, or it may not have gotten to the
far end.  If you send the message, does that mean you'd like it to cause an
action tomorrow?  A year from now?  Or perhaps you'd like it to just not
happen?  Do you understand what Oslo promises you here, and do you think
every person who ever wrote an RPC call in the whole OpenStack solution
also understood it?
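
To make the ambiguity concrete, here is a minimal sketch (assuming
oslo.messaging's RPCClient; the 'resize_instance' method, its argument and the
context dict are made up for illustration, not any real Nova RPC API):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='compute', version='4.0')
    client = oslo_messaging.RPCClient(transport, target, timeout=30)

    ctxt = {}  # placeholder request context
    try:
        # step 1: A sends a message to B and waits for the reply
        result = client.call(ctxt, 'resize_instance',
                             instance_uuid='11111111-2222-3333-4444-555555555555')
    except oslo_messaging.MessagingTimeout:
        # No reply within the timeout.  B may never have received the
        # message, may have acted on it and crashed before replying, or
        # may still be working and will act on it "sometime later".
        # Nothing at this point tells the caller which of those happened.
        pass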

I have opinions about other patterns we could use, but I don't want to push
my solutions here, I want to see if this is really as much of a problem as
it looks and if people concur with my summary above.  However, the right
approach is most definitely to create a new and more fitting set of oslo
interfaces for communication patterns, and then to encourage people to move
to the new ones from the old.  (Whether RabbitMQ is involved is neither
here nor there, as this is really a question of Oslo APIs, not their
implementation.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Dan Smith
> While migrate_flavor_data seems to flavor-migrate the metadata of the VMs
> that were spawned before the upgrade procedure, it doesn't seem to
> flavor-migrate the VMs that were spawned during the upgrade procedure, more
> specifically after the OpenStack controller upgrade and before the compute
> upgrade. Am I missing something here, or is it by intention?

You can run the flavor migration as often as you need, and can certainly
run it after your last compute is upgraded before you start to move into
Liberty.
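
In practice that just means re-running the command mentioned earlier in the
thread on a controller node until it reports nothing left to migrate; a sketch
(it is safe to run repeatedly, exact output varies by release):

    # after the last compute node has been upgraded, run again so that
    # instances created mid-upgrade also get their flavor data migrated
    nova-manage db migrate_flavor_data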

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The State of the NFS Driver ...

2016-08-31 Thread Erlon Cruz
Hi Jay,

Thanks for the update. I can take a look at the NFS job; it will need some
care, like configuring the slave to be an Ubuntu Xenial node and setting up
AppArmor, so that when you finish the cloning support we have an operational job.

Erlon

On Wed, Aug 31, 2016 at 11:50 AM, Jay S. Bryant <
jsbry...@electronicjungle.net> wrote:

> On 08/30/2016 08:50 PM, Matt Riedemann wrote:
>
>> On 8/30/2016 10:50 AM, Jay S. Bryant wrote:
>>
>>> All,
>>>
>>> I wanted to follow up on the e-mail thread [1] on Cloning support in the
>>> NFS driver.  The purpose of this e-mail is to provide the plan for the
>>> NFS driver going forward as I see it.
>>>
>>> First, I am aware that the driver has gone quite some time without care
>>> and feeding.  For a number of reasons, the Public Cloud team within IBM
>>> is currently dependent upon the NFS driver working properly for the
>>> cloud environment we are building.  Given our current dependence on the
>>> driver we are planning on picking up the driver and maintaining it.
>>>
>>> The first step in this process was getting the existing patch that adds
>>> snapshot support for NFS [2] rebased.  I did this work a couple of weeks
>>> ago and also got all the unit tests working for the unit test
>>> environment on the master branch.  I now see that it is in merge
>>> conflict again, I plan to continue to keep the patch up-to-date.
>>>
>>> Erlon has been investigating issues with attaching snapshots. It
>>> appears that this may be related to AppArmor running on the system where
>>> the VM is running and attachment is being attempted.  I am hoping to
>>> look into the other questions posed in the patch review in the next week
>>> or two.
>>>
>>> The next step is to create a dependent patch, upon the snapshot patch,
>>> to implement cloning.  I am planning to also undertake this work.  I am
>>> assuming that getting the cloning support in place shouldn't be too
>>> difficult once snapshots are working as it will be just a matter of
>>> using the support from the remotefs driver.
>>>
>>> The last piece of work we have in flight is working on adding QoS
>>> support to the NFS driver.  We have the following spec proposed to get
>>> that work started: [3]
>>>
>>> So, we are in the process of bringing the NFS driver up to good
>>> standing.  During this process we would greatly appreciate reviews and
>>> input from those of you who have previously worked on the driver in
>>> order to expedite integration of the necessary changes. I feel it is in
>>> the best interest of the community to get the driver updated and
>>> supported given that it is the 4th most used driver according to our
>>> user survey.  I think it would not look good to our users if it were to
>>> suddenly be removed.
>>>
>>> Thanks to all of your for your support in this effort!
>>>
>>> Jay
>>>
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-Augu
>>> st/102193.html
>>>
>>> [2] https://review.openstack.org/#/c/147186/
>>>
>>> [3] https://review.openstack.org/361456
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> IMO priority #1 is getting the NFS job passing consistently, who is
>> working on that? Last I checked it was failing a bunch because it was
>> running snapshot and clone tests, which obviously don't work since that
>> support isn't implemented in the driver. I think configuring tempest in the
>> devstack-plugin-nfs repo is fairly straightforward, someone just needs to
>> do it.
>>
>> But at least that gets you closer to a clean NFS job run which gets it
>> out of the experimental queue (possibly) and as a non-voting job in Cinder
>> so you can see if you're regressing anything (or if anything else regresses
>> it once you have clean CI runs).
>>
>> My 2 cents.
>>
>> Matt,
>
> This is good feedback.  I will put a story on our backlog for this
> and try to get that working ASAP.
>
> Jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] migrate_flavor_data doesn't flavor migrate meta data of VMs spawned during upgrade.

2016-08-31 Thread Suresh Vinapamula
Hi,

I am upgrading from Juno to Kilo and from that to Liberty.

I understand I need to run nova-manage db migrate_flavor_data before upgrading
from Kilo to Liberty, to let VMs that were spawned while the system was in
Juno flavor-migrate to Kilo.

Depending on the number of computes, complete upgrade can potentially be
spanned for longer duration, days if not months.

While migrate_flavor_data seems to flavor-migrate the metadata of the VMs that
were spawned before the upgrade procedure, it doesn't seem to flavor-migrate
the VMs that were spawned during the upgrade procedure, more specifically
after the OpenStack controller upgrade and before the compute upgrade.
Am I missing something here, or is it by intention?

Since the compute upgrade procedure could last for days, would it be
practical to block spawning workload VMs for that long a duration?
Otherwise, the next upgrade will fail, right?

thanks
Suresh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 17:59:43 +0100 (+0100), Matthew Booth wrote:
> On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:
[...]
> > Also we have naming conventions for third-party CI accounts that
> > suggest they should end in " CI" so you could match on that.
> 
> Yeah, all except 'Jenkins' :)
[...]

Right, that was mainly because there were more than a few people who
expressed a desire to be able to receive E-mail messages on comments
from the "Jenkins" account but not from third-party CI systems.

> All the CIs I get gerrit spam from are on that list except
> Jenkins. Do I have to enable something specifically to exclude
> them?
[...]

No, as I understand it, since we set capability.emailReviewers="deny
group Third-Party CI" in the global Gerrit configuration it should
avoid sending E-mail for any of their comments.
https://review.openstack.org/Documentation/access-control.html#capability_emailReviewers
I guess we should troubleshoot that.
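
For reference, that setting lives in the ACL of the All-Projects project; a
sketch of the relevant project.config stanza (section syntax per the Gerrit
docs linked above, and assuming the "Third-Party CI" group exists in Gerrit):

    [capability]
        emailReviewers = deny group Third-Party CI
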
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] cells v2 next steps

2016-08-31 Thread Matt Riedemann
Just to recap a call with Laski, Sean and Dan, the goal for the next 24 
hours with cells v2 is to get this nova change landed:


https://review.openstack.org/#/c/356138/

That depends on a set of grenade changes:

https://review.openstack.org/#/q/topic:setup_cell0_before_migrations

There are similar devstack changes to those:

https://review.openstack.org/#/q/topic:cell0_db

cell0 is optional in newton, so we don't want to add a required change 
in grenade that forces an upgrade to newton to require cell0.


And since cell0 is optional in newton, we don't want devstack in newton 
running with cell0 in all jobs.


So the plan is for Dan (or someone) to add a flag to devstack, mirrored 
in grenade, that will be used to conditionally create the cell0 database 
and run the simple_cell_setup command.


Then I'm going to set that flag in devstack-gate and from select jobs in 
project-config, so one of the grenade jobs (either single node or 
multi-node grenade), and then the placement-api job which is non-voting 
in the nova check queue and is our new dumping ground for running 
optional things, like the placement service and cell0.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Erdősi Péter

2016. 08. 31. 19:26 keltezéssel, Turbo Fredriksson írta:

as it said to do in that page, you overwrote the package
you installed (Mitaka) with something from devel (probably
Newton - next stable).

Look at the line above that python line. It say:

Install the Dashboard panel plug-in

I.e., overwrite what you already have.
Because my Mitaka came from a package, all the files are supposed to be under 
/usr/share and /usr/lib, right?


Here is what I did (after cloning the repo and switching branch):
[root(cc1:0)] <~/neutron-lbaas-dashboard> python setup.py install &> 
lbaasgui_install


Then I copied the lbaasgui_install content to pastebin: 
http://pastebin.com/pNwjRMU6


The only file which had been created under /usr/share is the module 
file (_1481_project_ng_loadbalancersv2_panel.py) after I copied it...


Can you point out any _overwritten_ file from the installed package? 
(I know that I have new files under /usr/local/ which are supposed to be used 
for the new Loadbalancer menu, but those do not replace anything in Mitaka 
Horizon!)


Regards:
 Peter


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Duncan Thomas
On 31 August 2016 at 18:54, Joshua Harlow  wrote:

> Duncan Thomas wrote:
>
>> On 31 August 2016 at 11:57, Bogdan Dobrelya > > wrote:
>>
>> I agree that RPC design pattern, as it is implemented now, is a major
>> blocker for OpenStack in general. It requires a major redesign,
>> including handling of corner cases, on both sides, *especially* RPC call
>> clients. Or maybe it just has to be abandoned to be replaced by a more
>> cloud-friendly pattern.
>>
>>
>>
>> Is there a writeup anywhere on what these issues are? I've heard this
>> sentiment expressed multiple times now, but without a writeup of the
>> issues and the design goals of the replacement, we're unlikely to make
>> progress on a replacement - even if somebody takes the heroic approach
>> and writes a full replacement themselves, the odds of getting community
>> buy-in are very low.
>>
>
> +2 to that, there are a bunch of technologies that could replace the
> rabbit+rpc, aka, gRPC, then there is http2 and thrift and ... so a writeup
> IMHO would help at least clear the waters a little bit, and explain the
> blocker of the current RPC design pattern (which is multidimensional
> because most people are probably thinking RPC == rabbit when it's actually
> more than that now, ie zeromq and amqp1.0 and ...) and try to centralize on
> a better replacement.
>
>
Is anybody who dislikes the current pattern(s) and implementation(s)
volunteering to start this documentation? I really am not aware of the
issues, and I'd like to begin to understand them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Update on Nova scheduler poor performance with Ironic

2016-08-31 Thread Mathieu Gagné
On Wed, Aug 31, 2016 at 1:33 AM, Joshua Harlow  wrote:
>>
>> Enabling this option will make it so Nova scheduler loads instance
>> info asynchronously at start up. Depending on the number of
>> hypervisors and instances, it can take several minutes. (we are
>> talking about 10-15 minutes with 600+ Ironic nodes, or ~1s per node in
>> our case)
>
>
> This feels like a classic thing that could just be made better by a
> scatter/gather (in threads or other?) to the database or other service. 1s
> per node seems ummm, sorta bad and/or non-optimal (I wonder if this is low
> hanging fruit to improve this). I can travel around the world 7.5 times in
> that amount of time (if I was a light beam, haha).

This behavior was only triggered under the following conditions:
- Nova Kilo
- scheduler_tracks_instance_changes=False

So someone installing the latest Nova version won't have this issue.
Furthermore, if you enable scheduler_tracks_instance_changes,
instances will be loaded asynchronously by chunk when nova-scheduler
starts. (10 compute nodes at a time) But Jim found that enabling this
config causes OOM errors.

So I investigated and found a very interesting bug that presents itself if you
run Nova in the Ironic context, or anything where a single nova-compute
process manages multiple or a LOT of hypervisors. As explained
previously, Nova loads the list of instances per compute node to help
with placement decisions:
https://github.com/openstack/nova/blob/kilo-eol/nova/scheduler/host_manager.py#L590

Again, in the Ironic context, a single nova-compute host manages ALL
instances. This means this specific line found in _add_instance_info
will load ALL instances managed by that single nova-compute host.
What's even funnier is that _add_instance_info is called from
get_all_host_states for every compute node (hypervisor), NOT per
nova-compute host. This means if you have 2000 hypervisors (Ironic
nodes), this function will load 2000 instances per hypervisor found in
get_all_host_states, ending with an overall process loading 2000^2
rows from the database. Now I know why Jim Roll complained about OOM
errors. objects.InstanceList.get_by_host_and_node should be used
instead, NOT objects.InstanceList.get_by_host. I will report this bug
soon.
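
Roughly, the difference looks like this (illustrative only, not the actual
patch; context, host_name and node_name are placeholders):

    from nova import objects

    # what the Kilo-era host manager effectively does, once per compute
    # node -- with Ironic, the "host" is the single nova-compute service,
    # so this pulls every instance that service manages, for every node:
    instances = objects.InstanceList.get_by_host(context, host_name)

    # scoping the query to the one hypervisor being processed avoids the
    # N^2 blow-up:
    instances = objects.InstanceList.get_by_host_and_node(
        context, host_name, node_name)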


>>
>> There is a lot of side-effects to using it though. For example:
>> - you can only run ONE nova-scheduler process since cache state won't
>> be shared between processes and you don't want instances to be
>> scheduled twice to the same node/hypervisor.
>
>
> Out of curiosity, do you have only one scheduler process active and passive
> scheduler process(es) idle waiting to become active if the other schedule
> dies? (pretty simply done via something like
> https://kazoo.readthedocs.io/en/latest/api/recipe/election.html) Or do you
> have some manual/other process that kicks off a new scheduler if the 'main'
> one dies?

We use the HA feature of our virtualization infrastructure to handle
failover. This is a compromise we are willing to accept for now. I
agree that not everybody has access to this kind of feature in their
infra.
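
For those who do want an active/passive scheduler without relying on
hypervisor HA, the kazoo election recipe Joshua mentions is pretty small; a
sketch (the ZooKeeper address and the start_scheduler() body are placeholders):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1.example.com:2181')
    zk.start()

    election = zk.Election('/nova-scheduler-election', identifier='scheduler-1')

    def start_scheduler():
        # placeholder: only reached once this contender wins the election;
        # start the real nova-scheduler service here and block until it exits
        pass

    # blocks as a passive standby until leadership is won, then calls the
    # function; if the leader dies, another contender takes over
    election.run(start_scheduler)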


>> 2) Run a single nova-compute service
>>
>> I strongly suggest you DO NOT run multiple nova-compute services. If
>> you do, you will have duplicated hypervisors loaded by the scheduler
>> and you could end up with conflicting scheduling. You will also have
>> twice as much hypervisors to load in the scheduler.
>
>
> This seems scary (whenever I hear run a single of anything in a *cloud*
> platform, that makes me shiver). It'd be nice if we at least recommended
> people run https://kazoo.readthedocs.io/en/latest/api/recipe/election.html
> or have some active/passive automatic election process to handle that single
> thing dying (which they usually do, at odd times of the night). Honestly I'd
> (personally) really like to get to the bottom of how we as a group of
> developers ever got to the place where software was released (and/or even
> recommended to be used) in a *cloud* platform that ever required only one of
> anything to be ran (that's crazy bonkers, and yes there is history here, but
> damn, it just feels rotten as all hell, for lack of better words).

Same as above. If the nova-compute process stops, customers won't lose
access to their baremetal nodes but won't be able to manage them (create,
start, stop). In our context, that's not something they do often. In
fact, we more often than not deliver the baremetal for them in their
projects/tenants and they pretty much never touch the API anyway.

Also there is this hash ring feature coming in the latest Nova version.
Meanwhile we are happy with the compromise.


>> 3) Increase service_down_time
>>
>> If you have a lot of nodes, you might have to increase this value
>> which is set to 60 seconds by default. This value is used by the
>> ComputeFilter filter to exclude nodes it hasn't heard from. If it
>> takes more than 60 seconds to list the nodes, you might guess
>> what will happen: the scheduler will 

Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread lebre . adrien
As promised, I just wrote a first draft at 
https://etherpad.openstack.org/p/massively-distributed_WG_description
I will try to add more content tomorrow, in particular pointers towards 
articles/ETSI specifications/use-cases.

Comments/remarks welcome. 
Ad_rien_

PS: Chaoyi, your proposal for f2f sessions in Barcelona sounds good. It is 
probably a bit too ambitious for one summit, because point 3, ''Gaps in 
OpenStack'', looks to me like a major action that will probably last more than just 
one summit, but I think you gave the right directions!

- Mail original -
> De: "joehuang" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Mercredi 31 Août 2016 08:48:01
> Objet: Re: [openstack-dev] [all][massively 
> distributed][architecture]Coordination between actions/WGs
> 
> Hello, Joshua,
> 
> According to Peter's message, "However that still leaves us with the
> need to manage a stack of servers in thousands of telephone
> exchanges, central offices or even cell-sites, running multiple work
> loads in a distributed fault tolerant manner", the number of edge
> clouds may even be at the thousands level.
> 
> These clouds may be disjoint, but some may need to provide
> inter-connection for the tenant's network, for example, to support
> database cluster distributed in several clouds, the inter-connection
> for data replication is needed.
> 
> There are different thoughts, proposals or projects to tackle the
> challenge, architecture level discussion is necessary to see if
> these design and proposals can fulfill the demands. If there are
> lots of proposals, it's good to compare the pros and cons, and see in
> which scenarios a proposal works and in which scenarios it can't
> work very well.
> 
> So I suggest to have at least two successive dedicated design summit
> sessions to discuss about that f2f, all  thoughts, proposals or
> projects to tackle these kind of problem domain could be collected
> now,  the topics to be discussed could be as follows :
> 
>0. Scenario
>1, Use cases
>2, Requirements  in detail
>3, Gaps in OpenStack
>4, Proposal to be discussed
> 
>   Architecture level proposal discussion
>1, Proposals
>2, Pros. and Cons. comparation
>3, Challenges
>4, next step
> 
> Best Regards
> Chaoyi Huang(joehuang)
> 
> From: Joshua Harlow [harlo...@fastmail.com]
> Sent: 31 August 2016 13:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively
> distributed][architecture]Coordination between actions/WGs
> 
> joehuang wrote:
> > Cells is a good enhancement for Nova scalability, but there are
> > some issues in deployment Cells for massively distributed edge
> > clouds:
> >
> > 1) using RPC for inter-data center communication will bring the
> > difficulty in inter-dc troubleshooting and maintenance, and some
> > critical issue in operation. No CLI or restful API or other tools
> > to manage a child cell directly. If the link between the API cell
> > and child cells is broken, then the child cell in the remote edge
> > cloud is unmanageable, no matter locally or remotely.
> >
> > 2). The challenge in security management for inter-site RPC
> > communication. Please refer to the slides[1] for the challenge 3:
> > Securing OpenStack over the Internet, Over 500 pin holes had to be
> > opened in the firewall to allow this to work – Includes ports for
> > VNC and SSH for CLIs. Using RPC in cells for edge cloud will face
> > same security challenges.
> >
> > 3)only nova supports cells. But not only Nova needs to support edge
> > clouds, Neutron, Cinder should be taken into account too. How
> > about Neutron to support service function chaining in edge clouds?
> > Using RPC? how to address challenges mentioned above? And Cinder?
> >
> > 4). Using RPC to do the production integration for hundreds of edge
> > cloud is quite challenge idea, it's basic requirements that these
> > edge clouds may be bought from multi-vendor, hardware/software or
> > both.
> >
> > That means using cells in production for massively distributed edge
> > clouds is quite bad idea. If Cells provide RESTful interface
> > between API cell and child cell, it's much more acceptable, but
> > it's still not enough, similar in Cinder, Neutron. Or just deploy
> > lightweight OpenStack instance in each edge cloud, for example,
> > one rack. The question is how to manage the large number of
> > OpenStack instance and provision service.
> >
> > [1]https://www.openstack.org/assets/presentation-media/OpenStack-2016-Austin-D-NFV-vM.pdf
> >
> > Best Regards
> > Chaoyi Huang(joehuang)
> >
> 
> Very interesting questions,
> 
> I'm starting to think that the API you want isn't really nova,
> neutron,
> or cinder at this point though. At some point it feels like the
> efforts
> you are spending in things like service chaining (there is a south
> 

Re: [openstack-dev] [nova] Next steps for resource providers work

2016-08-31 Thread Chris Dent



On 08/29/2016 12:40 PM, Matt Riedemann wrote:

I've been out for a week and not very involved in the resource providers
work, but after talking about the various changes up in the air at the
moment a bunch of us thought it would be helpful to lay out next steps
for the work we want to get done this week.


There was another hangout today where we caught up on where we are.
Some notes were added to the etherpad
https://etherpad.openstack.org/p/placement-next

There is code either merged or pending merge that allows the
resource tracker to ensure that resource providers exist and have
the correct inventory.

The major concern and blocker at this point is setting and deleting
allocations, for which the assistance of Jay is required. Some
details follow with a summary of Jay's todos at the bottom.

There are two patches, starting at
https://review.openstack.org/#/c/363209/

The first is a hack to get the object side handling for
AllocationList.create_all and delete_all. As noted in the comments
there we're not sure about the atomicity in create_all and need Jay
to determine if what's there can be made to work, or as suggested we
need a mondo SQL thing to get it right. If the latter, we need Jay
to write it :)

I'm going to carry on with those patches now and try to add some
generation handling back in to protect against inventory changing
out from under us while making allocations, but I'm not confident
of getting it anything more than possibly adequate, and great would
be better.

During that I'm also going to try to adjust things so that we can
update an existing allocation, not just create them, as we've
determined that's required. set_all, not create_all, basically.
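
For context, the "generation" protection being discussed is basically a
compare-and-swap on the resource provider row; a rough sketch in SQLAlchemy
Core terms (the table, variable and exception names here are made up for
illustration, not the actual schema or patch):

    # read inventory + generation, build the allocations, then, inside the
    # same transaction that writes the allocations, bump the generation only
    # if nobody else changed it in the meantime:
    result = conn.execute(
        rp_table.update()
        .where(rp_table.c.id == rp_id)
        .where(rp_table.c.generation == expected_generation)
        .values(generation=expected_generation + 1)
    )
    if result.rowcount != 1:
        # inventory (or another allocation) changed under us; the whole
        # allocation write is rolled back and the caller retries
        raise ConcurrentUpdateDetected()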

The other missing piece is the client side of setting and deleting
allocations, from the resource tracker. We'd like Jay to start this
too or if we're all lucky maybe it is started already?

And finally there's a question we didn't know how to answer: What
will the process be for healing instances that already exist before
the placement service is started, and thus have no allocations?

So to summarize Jay's to do list (please and thank you very much):

* Look at https://review.openstack.org/#/c/363209/ and decide if it
  is good enough to get rolling or needs to be completely altered.
* If the latter, alter it.
* Write the allocation client.
* Consult on healing instance allocations.

Meanwhile several people are involved in related clean up patches in
both nova and devstack to smooth off rough edges while we pushed a
lot of code.

Thanks to everyone today for pushing so hard. We look pretty close to
getting the must haves happening.

--
Chris Dent   ┬─┬ノ( º _ ºノ)   https://anticdent.org/
freenode: cdent   tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2016-08-31 12:42:23 +0300:
> On 31 August 2016 at 11:57, Bogdan Dobrelya  wrote:
> 
> > I agree that RPC design pattern, as it is implemented now, is a major
> > blocker for OpenStack in general. It requires a major redesign,
> > including handling of corner cases, on both sides, *especially* RPC call
> > clients. Or maybe it just has to be abandoned to be replaced by a more
> > cloud-friendly pattern.
> >
> 
> 
> Is there a writeup anywhere on what these issues are? I've heard this
> sentiment expressed multiple times now, but without a writeup of the issues
> and the design goals of the replacement, we're unlikely to make progress on
> a replacement - even if somebody takes the heroic approach and writes a
> full replacement themselves, the odds of getting community buy-in are very
> low.

Right, this is exactly the sort of thing I'd like to gather a group of
design-minded folks around in an Architecture WG. Oslo is busy with the
implementations we have now, but I'm sure many oslo contributors would
like to come up for air and talk about the design issues, and come up
with a current design, and some revisions to it, or a whole new one,
that can be used to put these summit hallway rumors to rest.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Erdősi Péter

2016. 08. 31. 16:39 keltezéssel, Turbo Fredriksson írta:

Technically, that's not Mitaka! That's using Horizon from Newton.

How/where did you get that, mate? :)

[xyz(cc1:2)] <~> sudo dpkg --list |grep dashboard
ii  openstack-dashboard 2:9.0.1-0ubuntu2~cloud0   
all  Django web interface for OpenStack


Version 9.0.1 is Mitaka Horizon, and we patched the LBaaSv2 GUI as a 
local module ;)
As my colleague mentioned before, you can find information here: 
http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html


Let me quote a sentence from this page: "The Dashboard panels for 
managing LBaaS v2 are available starting with the Mitaka release."


If you open the git repository from the link above, you can see two 
branches (master for Newton, and stable/mitaka).


Please take the time to check the information before spreading 
something which is not real...



Thanks:
 Peter

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Davanum Srinivas
On Wed, Aug 31, 2016 at 12:09 PM, Doug Hellmann  wrote:
> Excerpts from Ihar Hrachyshka's message of 2016-08-31 16:48:09 +0200:
>> Mike Bayer  wrote:
>>
>> > We need to decide how to handle this:
>> >
>> > https://review.openstack.org/#/c/362991/
>> >
>> >
>> > Basically, PyMySQL normally raises an error message like this:
>> >
>> > (pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a
>> > foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT
>> > `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')
>> >
>> > for some reason, PyMySQL 0.7.7 is now raising it like this:
>> >
>> > (pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child
>> > row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,
>> > CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`
>> > (`id`))')
>> >
>> > this impacts oslo.db's "exception re-handling" functionality which tries
>> > to classify this exception as a DBNonExistentConstraint exception.   It
>> > also breaks oslo.db's test suite locally, but in a downstream project
>> > would only impact its ability to intercept this exception appropriately.
>> >
>> > now that "23000" there looks like a bug.  The above gerrit proposes to
>> > work around it.  However, if we didn't push out the above gerrit, we'd
>> > instead have to change requirements:
>> >
>> > https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z
>> >
>> > It seems like at least one or the other would be needed for Newton.
>>
>> Unless we fix the bug in next pymysql, it’s not either/or but both will be
>> needed, and also minimal oslo.db version bump.
>>
>> I suggest we:
>> - block 0.7.7 to unblock upper-constraints updates;
>> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all
>> stable branches;
>> - release new oslo.db releases for L-N;
>> - at least for N, bump minimal version of the library in
>> global-requirements.txt;
>> - sync the bump to all consuming projects;
>> - later, maybe unblock 0.7.7.
>>
>> In the meantime, interested parties may work with pymysql folks to get the
>> issue fixed. It may take a while, so I would not make this step part of our
>> short term plan.
>>
>> Now, I understand that this does not really sound ideal, but I assume we
>> are not in requirements freeze yet (the deadline for that is tomorrow), and
>> this plan will solve the issue for users of all versions of pymysql.
>
> Even if we were frozen, this seems like the sort of thing we'd want to
> deal with through a patch release.
>
> I've already created the stable/newton branch for oslo.db, so we'll need
> to backport the fix to have a 4.13.1 release.

+1 to 4.13.1

Thanks,
Dims

>
> Doug
>
>>
>> Ihar
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [FFE] Horizon Profiler feature (and openstack/osprofiler integration)

2016-08-31 Thread Timur Sufiev
David,

I understand your concerns. Early Ocata instead of late Newton makes sense
to me. Thank you for a quick response.

On Wed, Aug 31, 2016 at 5:54 PM David Lyle  wrote:

> As a developer feature, I would vote for merging in early Ocata rather
> than as a FFE. Since the potential risk is to users and operators and
> they won't generally benefit from the feature, I don't see the upside
> outweighing the potential risk.  It's not a localized change either.
>
> That said, I think the profiler work will be extremely valuable in
> Ocata and beyond. Thanks for your continued efforts on bringing it to
> life.
>
> David
>
> On Wed, Aug 31, 2016 at 6:14 AM, Timur Sufiev 
> wrote:
> > Hello, folks!
> >
> > I'd like to ask for a feature-freeze exception for a Horizon Profiler
> > feature [1], that has been demoed long ago (during Portland midcycle Feb
> > 2016) and is finally ready. The actual request applies to the 3 patches
> [2]
> > that provide the bulk of Profiler functionality.
> >
> > It is a quite useful feature that is aimed mostly to developers, thus it
> is
> > constrained within Developer dashboard and disabled by default - so it
> > shouldn't have any impact on User-facing Horizon capabilities.
> >
> > [1]
> >
> https://blueprints.launchpad.net/horizon/+spec/openstack-profiler-at-developer-dashboard
> > [2]
> >
> https://review.openstack.org/#/q/topic:bp/openstack-profiler-at-developer-dashboard+status:open
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Turbo Fredriksson
On Aug 31, 2016, at 5:27 PM, Satish Patel wrote:

> I hate this :(  Openstack has tons of release and everyone is freaking
> opposite to other :(

Tell me about it!! And the documentation could stand an improvement
as well.

> Every year they have new release how people migrate old stuff from old
> to new, kinds scary..

And they keep changing the options (in the config files) all the time.
Instead of taking some time to think things through and figure out what
they actually want to do! I've seen terrible horror stories about upgrading
from one release to another :(

I do _NOT_ (!!) look forward to upgrading this in a couple of months! I
will most likely just skip the next version and, if/when I need to a
couple of years down the line, do a complete reinstall of everything :(.

It's simply not acceptable, and does not give confidence in Openstack!
--
Life sucks and then you die


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Matthew Booth
On Wed, Aug 31, 2016 at 5:31 PM, Jeremy Stanley  wrote:

> On 2016-08-31 16:13:16 +0200 (+0200), Jordan Pittier wrote:
> > Most(all?) messages from CI have the lines:
> >
> > "Patch Set X:
> > Build (succeeded|failed)."
> >
> > Not super robust, but that's a start.
>
> Also we have naming conventions for third-party CI accounts that
> suggest they should end in " CI" so you could match on that.
>

Yeah, all except 'Jenkins' :) This makes me a bit nervous because all mail
still comes from 'rev...@openstack.org', with the name component set to the
name of the CI. I was nervous of false positives, so I chose to name them
all in full.
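
(For anyone wanting the same without listing every CI by hand, one option is a
client-side filter on the display-name part of the From header, since the
address is always the Gerrit one. A Sieve sketch of that idea -- the folder
name is a placeholder, and Jenkins still needs its own rule since its account
name doesn't end in " CI":)

    require ["fileinto"];

    # third-party CI accounts follow the "Something CI <review@...>" naming
    # convention, so match on the display name rather than the address
    if header :matches "from" "* CI <*" {
        fileinto "openstack-ci";
    }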


>
> Further, we have configuration in Gerrit to not send E-mail for
> comments from accounts in
> https://review.openstack.org/#/admin/groups/270,members so if you
> are seeing E-mail from Gerrit for third-party CI system comments see
> whether they're in that group already (in which case let the Infra
> team know because we might have a bug to look into) or ask one of
> the members of
> https://review.openstack.org/#/admin/groups/440,members to add the
> stragglers (or even volunteer to become part of that coordinators
> group and help maintain the list).
>

This sounds interesting. All the CIs I get gerrit spam from are on that
list except Jenkins. Do I have to enable something specifically to exclude
them? I mashed all the links I could find seemingly related to gerrit
settings and I couldn't find anything which looked promising.

Thanks again,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]How to address TCs concerns in Tricircle big-tent application

2016-08-31 Thread Monty Taylor
On 08/31/2016 02:16 AM, joehuang wrote:
> Hello, team,
> 
> During last weekly meeting, we discussed how to address TCs concerns in
> Tricircle big-tent application. After the weekly meeting, the proposal
> was co-prepared by our
> contributors: 
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E
> 
> The more doable way is to divide Tricircle into two independent and
> decoupled projects, only one of the projects which deal with networking
> automation will try to become an big-tent project, And Nova/Cinder
> API-GW will be removed from the scope of big-tent project application,
> and put them into another project:
> 
> *TricircleNetworking:* Dedicated for cross Neutron networking automation
> in multi-region OpenStack deployment, run without or with
> TricircleGateway. Try to become big-tent project in the current
> application of https://review.openstack.org/#/c/338796/.

Great idea.

> *TricircleGateway:* Dedicated to provide API gateway for those who need
> single Nova/Cinder API endpoint in multi-region OpenStack deployment,
> run without or with TricircleNetworking. Live as non-big-tent,
> non-offical-openstack project, just like Tricircle toady’s status. And
> not pursue big-tent only if the consensus can be achieved in OpenStack
> community, including Arch WG and TCs, then decide how to get it on board
> in OpenStack. A new repository is needed to be applied for this project.
> 
> 
> And consider to remove some overlapping implementation in Nova/Cinder
> API-GW for global objects like flavor, volume type, we can configure one
> region as master region, all global objects like flavor, volume type,
> server group, etc will be managed in the master Nova/Cinder service. In
> Nova API-GW/Cinder API-GW, all requests for these global objects will be
> forwarded to the master Nova/Cinder, then to get rid of any API
> overlapping-implementation.
> 
> More information, you can refer to the proposal draft
> https://docs.google.com/presentation/d/1kpVo5rsL6p_rq9TvkuczjommJSsisDiKJiurbhaQg7E,
> 
> your thoughts are welcome, and let's have more discussion in this weekly
> meeting.

I think this is a great approach Joe.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jeremy Stanley
On 2016-08-31 16:13:16 +0200 (+0200), Jordan Pittier wrote:
> Most(all?) messages from CI have the lines:
> 
> "Patch Set X:
> Build (succeeded|failed)."
> 
> Not super robust, but that's a start.

Also we have naming conventions for third-party CI accounts that
suggest they should end in " CI" so you could match on that.

Further, we have configuration in Gerrit to not send E-mail for
comments from accounts in
https://review.openstack.org/#/admin/groups/270,members so if you
are seeing E-mail from Gerrit for third-party CI system comments see
whether they're in that group already (in which case let the Infra
team know because we might have a bug to look into) or ask one of
the members of
https://review.openstack.org/#/admin/groups/440,members to add the
stragglers (or even volunteer to become part of that coordinators
group and help maintain the list).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Satish Patel
I hate this :(  OpenStack has tons of releases and every one is freaking
opposite to the others :(

Every year they have a new release; how do people migrate old stuff from old
to new? Kinda scary..

On Wed, Aug 31, 2016 at 10:39 AM, Turbo Fredriksson  wrote:
> On Aug 31, 2016, at 3:18 PM, Bodor János wrote:
>
>> 2016. 08. 31. 15:51 keltezéssel, Turbo Fredriksson írta:
>>> On Aug 31, 2016, at 2:28 PM, Bodor János wrote:
>>>
 Here is a link:
 http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>>>
>>> That's only for LBaaSv1 unfortunately.
>> No, We have been using for two weeks with lbaasv2 without any errors.
>
>
> Technically, that's not Mitaka! That's using Horizon from Newton.
>
> Which is basically what I said - you need to upgrade beyond
> Mitaka to get LBaaSv2 to work (in Horizon).
> --
> Build a man a fire, and he will be warm for the night.
> Set a man on fire and he will be warm for the rest of his life.
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Horizon][FFE]Support a param to specify subnet or fixed IP when creating port

2016-08-31 Thread David Lyle
I agree this seems reasonable and a good addition for users with minimal risk.

David

On Tue, Aug 30, 2016 at 10:29 AM, Rob Cresswell (rcresswe)
 wrote:
> I’m happy to allow this personally, but wanted to get others' input and give 
> people the chance to object.
>
> My reasoning for allowing this:
> - It’s high level, doesn’t affect any base horizon lib features.
> - It is mature code, has multiple patch sets and a +2
>
> I’ll give it a few days to allow others a chance speak up, then we can move 
> forward.
>
> Rob
>
>> On 29 Aug 2016, at 07:17, Kenji Ishii  wrote:
>>
>> Hi, horizoners
>>
>> I'd like to request a feature freeze exception for the feature.
>> (This is a bug ticket, but the contents written in this ticket is a new 
>> feature.)
>> https://bugs.launchpad.net/horizon/+bug/1588663
>>
>> This is implemented by the following patch.
>> https://review.openstack.org/#/c/325104/
>>
>> It is useful to be able to create a port using a subnet or IP address 
>> which a user wants to use.
>> And this has already been reviewed by many reviewers, so I think the risk in this 
>> patch is very low.
>>
>> ---
>> Best regards,
>> Kenji Ishii
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible work load test for interop patch set

2016-08-31 Thread Yih Leong, Sun.
Can someone from osops please review the following patch?
https://review.openstack.org/#/c/351799/

The patchset was last updated Aug 11th.
Thanks!



On Tue, Aug 16, 2016 at 7:17 PM, Joseph Bajin  wrote:

> Sorry about that. I've been a little busy as of late, and was able to get
> around to taking a look.
>
> I have a question about these.   What exactly is the Interop Challenge?
> The OSOps repos are usually for code that can help Operators maintain and
> run their cloud.   These don't necessarily look like what we normally see
> submitted.
>
> Can you expand on what the InterOp Challenge is and if it is something
> that Operators would use?
>
> Thanks
>
> Joe
>
> On Tue, Aug 16, 2016 at 3:02 PM, Shamail  wrote:
>
>>
>>
>> > On Aug 16, 2016, at 1:44 PM, Christopher Aedo  wrote:
>> >
>> > Tong Li, I think the best place to ask for a look would be the
>> > Operators mailing list
>> > (http://lists.openstack.org/cgi-bin/mailman/listinfo/opensta
>> ck-operators).
>> > I've cc'd that list here, though it looks like you've already got a +2
>> > on it at least.
>> +1
>>
>> I had contacted JJ earlier and he told me that the best person to contact
>> would be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to
>> this message.
>> >
>> > -Christopher
>> >
>> >> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li  wrote:
>> >> The patch set has been submitted to github for awhile, can some one
>> please
>> >> review the patch set here?
>> >>
>> >> https://review.openstack.org/#/c/354194/
>> >>
>> >> Thanks very much!
>> >>
>> >> Tong Li
>> >> IBM Open Technology
>> >> Building 501/B205
>> >> liton...@us.ibm.com
>> >>
>> >>
>> >> ___
>> >> Defcore-committee mailing list
>> >> defcore-commit...@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>> >
>> > ___
>> > Defcore-committee mailing list
>> > defcore-commit...@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> Defcore-committee mailing list
> defcore-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2016-08-31 16:48:09 +0200:
> Mike Bayer  wrote:
> 
> > We need to decide how to handle this:
> >
> > https://review.openstack.org/#/c/362991/
> >
> >
> > Basically, PyMySQL normally raises an error message like this:
> >
> > (pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a  
> > foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT  
> > `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')
> >
> > for some reason, PyMySQL 0.7.7 is now raising it like this:
> >
> > (pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child  
> > row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,  
> > CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`  
> > (`id`))')
> >
> > this impacts oslo.db's "exception re-handling" functionality which tries  
> > to classify this exception as a DBNonExistentConstraint exception.   It  
> > also breaks oslo.db's test suite locally, but in a downstream project  
> > would only impact its ability to intercept this exception appropriately.
> >
> > now that "23000" there looks like a bug.  The above gerrit proposes to  
> > work around it.  However, if we didn't push out the above gerrit, we'd  
> > instead have to change requirements:
> >
> > https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z
> >
> > It seems like at least one or the other would be needed for Newton.
> 
> Unless we fix the bug in next pymysql, it’s not either/or but both will be  
> needed, and also minimal oslo.db version bump.
> 
> I suggest we:
> - block 0.7.7 to unblock upper-constraints updates;
> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all  
> stable branches;
> - release new oslo.db releases for L-N;
> - at least for N, bump minimal version of the library in  
> global-requirements.txt;
> - sync the bump to all consuming projects;
> - later, maybe unblock 0.7.7.
> 
> In the meantime, interested parties may work with pymysql folks to get the  
> issue fixed. It may take a while, so I would not make this step part of our  
> short term plan.
> 
> Now, I understand that this does not really sound ideal, but I assume we  
> are not in requirements freeze yet (the deadline for that is tomorrow), and  
> this plan will solve the issue for users of all versions of pymysql.

Even if we were frozen, this seems like the sort of thing we'd want to
deal with through a patch release.

I've already created the stable/newton branch for oslo.db, so we'll need
to backport the fix to have a 4.13.1 release.

Doug

> 
> Ihar
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-31 Thread Joshua Harlow

Duncan Thomas wrote:

On 31 August 2016 at 11:57, Bogdan Dobrelya > wrote:

I agree that the RPC design pattern, as it is implemented now, is a major
blocker for OpenStack in general. It requires a major redesign,
including handling of corner cases, on both sides, *especially* RPC call
clients. Or maybe it just has to be abandoned and replaced by a more
cloud-friendly pattern.



Is there a writeup anywhere on what these issues are? I've heard this
sentiment expressed multiple times now, but without a writeup of the
issues and the design goals of the replacement, we're unlikely to make
progress on a replacement - even if somebody takes the heroic approach
and writes a full replacement themselves, the odds of getting community
buy-in are very low.


+2 to that. There are a bunch of technologies that could replace 
rabbit+rpc (e.g. gRPC, and then there is http2 and thrift and ...), so a 
writeup IMHO would help at least clear the waters a little bit, and 
explain the blockers of the current RPC design pattern (which is 
multidimensional, because most people are probably thinking RPC == rabbit 
when it's actually more than that now, i.e. zeromq and amqp1.0 and ...), 
and try to converge on a better replacement.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Julien Danjou
On Wed, Aug 31 2016, Ihar Hrachyshka wrote:

> I suggest we:
> - block 0.7.7 to unblock upper-constraints updates;
> - land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all 
> stable
> branches;
> - release new oslo.db releases for L-N;
> - at least for N, bump minimal version of the library in
> global-requirements.txt;
> - sync the bump to all consuming projects;
> - later, maybe unblock 0.7.7.
>
> In the meantime, interested parties may work with pymysql folks to get the
> issue fixed. It may take a while, so I would not make this step part of our
> short term plan.
>
> Now, I understand that this does not really sound ideal, but I assume we are
> not in requirements freeze yet (the deadline for that is tomorrow), and this
> plan will solve the issue for users of all versions of pymysql.

This is also my plan and proposal. At least the Gnocchi gate is broken
now until oslo.db is fixed, so we're really eager to see things moving… :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Mike Bayer



On 08/31/2016 10:48 AM, Ihar Hrachyshka wrote:


Unless we fix the bug in next pymysql, it’s not either/or but both will
be needed, and also minimal oslo.db version bump.


upstream issue:

https://github.com/PyMySQL/PyMySQL/issues/507

PyMySQL tends to be very responsive to issues (plus I think I'm a 
committer anyway, so I suppose I could even commit a fix myself)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] Unable to start httpd service

2016-08-31 Thread Venkatesh Kotipalli
Hi Folks,

I'm installing OpenStack Kilo. We are following the link below:

http://docs.openstack.org/kilo/install-guide/install/yum/content/ch_preface.html

After installing the dashboard (Horizon), we are unable to start the httpd service.

We identified that if the keystone service is stopped and the httpd service
is started again, httpd runs.

Note: the keystone and httpd services will not run at the same time.

Please help me, guys. What is the issue?

Regards,
Venkatesh.k
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] [Puppet][Neutron] Mitaka ML2/OVS config file mismatch

2016-08-31 Thread Jonathan D. Proulx
Hi All,

I'm working on testing my jump from Kilo -> Mitaka

Using puppet on ubuntu 14.04 with cloud archive packages.

Puppet seems to be writing the ml2/ovs configs into
/etc/neutron/plugins/ml2/ml2_conf.ini

which is where they previously were, so I've spent a few days going
around on this thinking I'd missed adding a newly required bit or
something ...

But /etc/init/neutron-openvswitch-agent.conf doesn't reference this
file, it uses:

--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/openvswitch_agent.ini

I must have done something weird since lots of people must have used
puppet on mitaka by now.  Anyone know what?

Thanks,
-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [cinder] The State of the NFS Driver ...

2016-08-31 Thread Jay S. Bryant

On 08/30/2016 08:50 PM, Matt Riedemann wrote:

On 8/30/2016 10:50 AM, Jay S. Bryant wrote:

All,

I wanted to follow up on the e-mail thread [1] on Cloning support in the
NFS driver.  The purpose of this e-mail is to provide the plan for the
NFS driver going forward as I see it.

First, I am aware that the driver has gone quite some time without care
and feeding.  For a number of reasons, the Public Cloud team within IBM
is currently dependent upon the NFS driver working properly for the
cloud environment we are building.  Given our current dependence on the
driver we are planning on picking up the driver and maintaining it.

The first step in this process was getting the existing patch that adds
snapshot support for NFS [2] rebased.  I did this work a couple of weeks
ago and also got all the unit tests working for the unit test
environment on the master branch.  I now see that it is in merge
conflict again, I plan to continue to keep the patch up-to-date.

Erlon has been investigating issues with attaching snapshots. It
appears that this may be related to AppArmor running on the system where
the VM is running and attachment is being attempted.  I am hoping to
look into the other questions posed in the patch review in the next week
or two.

The next step is to create a dependent patch, upon the snapshot patch,
to implement cloning.  I am planning to also undertake this work.  I am
assuming that getting the cloning support in place shouldn't be too
difficult once snapshots are working as it will be just a matter of
using the support from the remotefs driver.

The last piece of work we have in flight is working on adding QoS
support to the NFS driver.  We have the following spec proposed to get
that work started: [3]

So, we are in the process of bringing the NFS driver up to good
standing.  During this process we would greatly appreciate reviews and
input from those of you who have previously worked on the driver in
order to expedite integration of the necessary changes. I feel it is in
the best interest of the community to get the driver updated and
supported given that it is the 4th most used driver according to our
user survey.  I think it would not look good to our users if it were to
suddenly be removed.

Thanks to all of your for your support in this effort!

Jay

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html 



[2] https://review.openstack.org/#/c/147186/

[3] https://review.openstack.org/361456


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



IMO priority #1 is getting the NFS job passing consistently - who is 
working on that? Last I checked it was failing a bunch because it was 
running snapshot and clone tests, which obviously don't work since 
that support isn't implemented in the driver. I think configuring 
tempest in the devstack-plugin-nfs repo is fairly straightforward; 
someone just needs to do it.


But at least that gets you closer to a clean NFS job run which gets it 
out of the experimental queue (possibly) and as a non-voting job in 
Cinder so you can see if you're regressing anything (or if anything 
else regresses it once you have clean CI runs).


My 2 cents.


Matt,

This is good feedback.  I will put a story on our backlog for this 
and try to get it working ASAP.


Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][ironic] bifrost 2.0.0 release (newton)

2016-08-31 Thread no-reply
We are chuffed to announce the release of:

bifrost 2.0.0: Deployment of physical machines using OpenStack Ironic
and Ansible

This release is part of the newton release series.

For more details, please see below.

2.0.0
^^^^^


New Features


* Allows to create VMs with custom names instead of using testvm or
  NODE_BASE and sequential prefixes. This can be achieved by passing
  the TEST_VM_NODE_NAMES env var.

* The ironic install role has been split into 3 phases. "install"
  phase installs all ironic packages and dependencies. "bootstrap"
  phase generates configs and initializes the ironic db. "start" phase
  starts all ironic services and dependencies. Each phase is run by
  default and can be skipped by defining skip_package_install,
  skip_bootstrap and skip_start respectively.

* Add support for kvm acceleration for the VMs created by bifrost-
  create-vm-nodes. The default domain type for the created VMs is qemu
  which uses tcg acceleration. In order to use kvm acceleration, users
  need to set VM_DOMAIN_TYPE to kvm.

* A new playbook was added to redeploy nodes. The playbook
  transitions each node's provision state to 'available', waiting for
  the nodes to reach that state.  Next, the playbook deploys the
  nodes, waiting for the nodes to reach provision state 'active'.  The
  playbook is redeploy-dynamic.yaml in the playbooks directory.


Upgrade Notes
*************

* A new test playbook, test-bifrost.yaml, has been added. This
  playbook merges the functionality of the existing test-bifrost-
  dynamic.yaml and test-bifrost-dhcp.yaml playbooks.

* Bifrost has changed to using TinyIPA as the default IPA image for
  testing. TinyIPA has a smaller footprint for downloads and memory
  utilization. Users can continue to utilize CoreOS or diskimage-
  builder based IPA images, however this was done to improve testing
  performance and reliability. If the pre-existing IPA image is
  removed, bifrost will automatically re-download the file upon being
  updated in an installation process.  Otherwise, the pre-existing IPA
  image will be utilized.


Deprecation Notes
*****************

* test-bifrost-dynamic.yaml and test-bifrost-dhcp.yaml have been
  superseded by test-bifrost.yaml and will be removed in the Ocata
  release.


Other Notes
***********

* A new install_dib variable has been introduced to the ironic
  install role to control installation of disk image builder and dib-
  utils. To maintain the previous behavior, install_dib will default
  to the value of create_image_via_dib.

Changes in bifrost 1.0.0..2.0.0
-------------------------------

8e99369 Update IPA info in troubleshooting.rst
92a2d28 Install the net-tools package in scripts/env-setup.sh
101551a Split --syntax-check and --list-tasks steps in test-bifrost.sh
28797b0 Specify node_network_info is a dict
2a6a7f7 Remove 'auth' fact initialization from bifrost-deploy-nodes-dynamic
a75b1a8 Update release notes for Newton
3775863 Use upper-constraints for all tox targets
356954b Fix /etc/hosts before starting the rabbitmq server
2b6cf52 Restore stable-2.0 as the default Ansible branch
4d59ba5 Add SUSE support in scripts/env-setup.sh
939c244 Allow to define vms with independent names
5506f5e Only set hostname on 127.0.0.1 if not present in /etc/hosts
6fa28b1 Introduce support for kvm acceleration
1141182 Fix release notes formatting issue
1d19d28 Change Vagrant VM to mirror memory/cpu in CI
292f3d6 Fix typo when querying the python version in scripts/env-setup.sh
8c947c5 Set OS_AUTH_TOKEN to dummy string, instead of empty space
84414b3 Initialize 'auth' to empty dict on bifrost_configdrives_dynamic
2b9ccfc Remove auth line to fallback on default(omit) behaviour
e3f8f86 Fix DHCP test scenario
d9dbbd3 Change Bifrost to TinyIPA as the default
dc732e6 Updated from global requirements
97b3998 Fix some spelling mistakes
353a1c0 Remove discover from test-requirements
7b109a6 Updated from global requirements
82d7650 Change none to noop network interface for bifrost
ba9e746 Disable flat network driver
2e0ff63 Unify test playbooks
ddb3965 Fix testing script permission settings for logs
32ea65d Increase timeout for downloading IPA ramdisk and kernel
155bb48 Updated from global requirements
ab0755b Correct name for test
022d05e split ironic install role into install,bootstrap,start phases
b8a97b8 Add redeploy-dynamic playbook
e7fc06a Make ansible installation directory configurable
3f21388 Updated from global requirements
1dc6d85 Make boolean usage consistent across playbooks
9d8e8de Unify testing scripts
10f3ba5 Make booleans in templates explicit
241180e Remove invalid directory_mode from ironic install
e1d20a6 Document that ssh_public_key_path must be set
995b779 Fix Bug #1583539 - rpm part
b208b84 introduce install_dib varible
7585b5c Install libssl-dev and libffi-dev
9b0106b Use constraints for all the things
d87b5cf Updated from global requirements
4e41737 Add pycrypto to requirements
3308b31 Add Ubuntu 16.04 defaults for ironic 

Re: [openstack-dev] [Horizon] [FFE] Horizon Profiler feature (and openstack/osprofiler integration)

2016-08-31 Thread David Lyle
As a developer feature, I would vote for merging in early Ocata rather
than as a FFE. Since the potential risk is to users and operators and
they won't generally benefit from the feature, I don't see the upside
outweighing the potential risk.  It's not a localized change either.

That said, I think the profiler work will be extremely valuable in
Ocata and beyond. Thanks for your continued efforts on bringing it to
life.

David

On Wed, Aug 31, 2016 at 6:14 AM, Timur Sufiev  wrote:
> Hello, folks!
>
> I'd like to ask for a feature-freeze exception for a Horizon Profiler
> feature [1], that has been demoed long ago (during Portland midcycle Feb
> 2016) and is finally ready. The actual request applies to the 3 patches [2]
> that provide the bulk of Profiler functionality.
>
> It is a quite useful feature that is aimed mostly to developers, thus it is
> constrained within Developer dashboard and disabled by default - so it
> shouldn't have any impact on User-facing Horizon capabilities.
>
> [1]
> https://blueprints.launchpad.net/horizon/+spec/openstack-profiler-at-developer-dashboard
> [2]
> https://review.openstack.org/#/q/topic:bp/openstack-profiler-at-developer-dashboard+status:open
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Ihar Hrachyshka

Mike Bayer  wrote:


We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: a  
foreign key constraint fails (`vaceciqnzs`.`resource_entity`, CONSTRAINT  
`foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` (`id`))')


for some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child  
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`,  
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo`  
(`id`))')


this impacts oslo.db's "exception re-handling" functionality which tries  
to classify this exception as a DBNonExistentConstraint exception.   It  
also breaks oslo.db's test suite locally, but in a downstream project  
would only impact its ability to intercept this exception appropriately.


now that "23000" there looks like a bug.  The above gerrit proposes to  
work around it.  However, if we didn't push out the above gerrit, we'd  
instead have to change requirements:


https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.


Unless we fix the bug in the next pymysql release, it's not either/or: both will be
needed, plus a minimal oslo.db version bump.


I suggest we:
- block 0.7.7 to unblock upper-constraints updates;
- land oslo.db fix to cope with pymysql 0.7.7+, in master as well as all  
stable branches;

- release new oslo.db releases for L-N;
- at least for N, bump minimal version of the library in  
global-requirements.txt;

- sync the bump to all consuming projects;
- later, maybe unblock 0.7.7.

In the meantime, interested parties may work with pymysql folks to get the  
issue fixed. It may take a while, so I would not make this step part of our  
short term plan.


Now, I understand that this does not really sound ideal, but I assume we  
are not in requirements freeze yet (the deadline for that is tomorrow), and  
this plan will solve the issue for users of all versions of pymysql.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Satish Patel
Which version are you using lbaasv2 with on RDO?  Can you explain your
setup, please? I need this feature badly to provide it to our customers :(

On Wed, Aug 31, 2016 at 10:18 AM, Bodor János  wrote:
>
>
> 2016. 08. 31. 15:51 keltezéssel, Turbo Fredriksson írta:
>>
>> On Aug 31, 2016, at 2:28 PM, Bodor János wrote:
>>
>>> Here is a link:
>>> http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>>
>>
>> That's only for LBaaSv1 unfortunately.
>
> No, We have been using for two weeks with lbaasv2 without any errors.
>
> Regards,
> Janos BODOR
>
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [oslo] pymysql change in error formatting has broken exception handing in oslo.db

2016-08-31 Thread Mike Bayer

We need to decide how to handle this:

https://review.openstack.org/#/c/362991/


Basically, PyMySQL normally raises an error message like this:

(pymysql.err.IntegrityError) (1452, u'Cannot add or update a child row: 
a foreign key constraint fails (`vaceciqnzs`.`resource_entity`, 
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` 
(`id`))')


for some reason, PyMySQL 0.7.7 is now raising it like this:

(pymysql.err.IntegrityError) (1452, u'23000Cannot add or update a child 
row: a foreign key constraint fails (`vaceciqnzs`.`resource_entity`, 
CONSTRAINT `foo_fkey` FOREIGN KEY (`foo_id`) REFERENCES `resource_foo` 
(`id`))')


this impacts oslo.db's "exception re-handling" functionality which tries 
to classify this exception as a DBNonExistentConstraint exception.   It 
also breaks oslo.db's test suite locally, but in a downstream project 
would only impact its ability to intercept this exception appropriately.


now that "23000" there looks like a bug.  The above gerrit proposes to 
work around it.  However, if we didn't push out the above gerrit, we'd 
instead have to change requirements:


https://review.openstack.org/#/q/I33d5ef8f35747d3b6d3bc0bd4972ce3b7fd60371,n,z

It seems like at least one or the other would be needed for Newton.
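
For illustration, a minimal sketch of message matching that tolerates the
optional SQLSTATE prefix. This is not the actual oslo.db filter (the real
handling lives in oslo_db.sqlalchemy.exc_filters and is more involved), and
the names below are made up:

import re

# Sketch only: accept an optional five-digit SQLSTATE (e.g. "23000") that
# PyMySQL 0.7.7 prepends to the driver-level message for errno 1452.
_FK_FAIL_RE = re.compile(
    r"^(?:\d{5})?Cannot add or update a child row: "
    r"a foreign key constraint fails"
)

def is_fk_violation(err):
    # PyMySQL errors carry (errno, message) in err.args.
    errno, message = err.args[0], err.args[1]
    return errno == 1452 and bool(_FK_FAIL_RE.match(message))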





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Turbo Fredriksson
On Aug 31, 2016, at 3:18 PM, Bodor János wrote:

> 2016. 08. 31. 15:51 keltezéssel, Turbo Fredriksson írta:
>> On Aug 31, 2016, at 2:28 PM, Bodor János wrote:
>> 
>>> Here is a link:
>>> http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>> 
>> That's only for LBaaSv1 unfortunately.
> No, We have been using for two weeks with lbaasv2 without any errors.


Technically, that's not Mitaka! That's using Horizon from Newton.

Which is basically what I said - you need to upgrade beyond
Mitaka to get LBaaSv2 to work (in Horizon).
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [neutron][devstack][all] Deprecation of external_network_bridge and CI impact

2016-08-31 Thread Sean M. Collins
Hi,

This probably should have been advertised more widely, before the merge
happened. I would like to apologize for an after-the-fact e-mail
explaining what may be going on for some jobs that are broken.

I recently merged a change to DevStack -
https://review.openstack.org/346282

It's a little cryptic since it's a revert-of-a-revert. However, if you
take a look at the original commit[1], you can get an idea of what is
going on.

Basically, we were relying on a setting in Neutron that has been
deprecated since Liberty[2]. Post 346282, we no longer use that
deprecated setting and instead create networks the "correct" way.

Some jobs that were relying on provider attributes for their networking
may be seeing some errors similar to what has happened to Shade[3].
Basically, Shade was trying to create a public network using the same
provider attributes as the network that, post 346282, we now create
during a DevStack run[4].

I know jroll is currently also trying to figure out how to unblock
Ironic's CI, since they too were using the provider networking API
extension. I imagine there may be some other jobs that are broken
(networking-generic-switch seems to be very sensitive), so please take a
look at the links and hopefully that will help.

[1]: https://review.openstack.org/#/c/343072/

[2]: https://bugs.launchpad.net/neutron/+bug/1511578

[3]: 
http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/console.html#_2016-08-30_18_56_58_838512

[4]: 
http://logs.openstack.org/01/362901/1/check/gate-shade-dsvm-functional-neutron/9698d83/logs/devstacklog.txt.gz#_2016-08-30_18_46_38_671
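
For jobs hit the same way as Shade in [3] and [4], the defensive pattern is
to check whether DevStack already created the network before trying to
create it again. A rough sketch using shade, assuming its get_network and
create_network helpers; the cloud name 'devstack-admin' is an assumption and
this is illustrative rather than the actual fix:

import shade

# Look for the public network DevStack now creates itself, and only
# create it when it is missing. 'devstack-admin' is an assumed
# clouds.yaml entry name.
cloud = shade.openstack_cloud(cloud='devstack-admin')

public = cloud.get_network('public')
if public is None:
    public = cloud.create_network('public', external=True)
print(public['id'])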
 

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Is this a bug in metadata proxy...

2016-08-31 Thread ZZelle
Hi,

Are you sure metadata_proxy_user==neutron?

neutron-metadata-proxy must be able to connect to the metadata-agent socket
and watch its log files, and the neutron user should be able to do both with
the usual file permissions.

Otherwise the metadata proxy is generally no longer able to:
- watch the log[1], so you should set metadata_proxy_watch_log=False
- connect to the metadata agent because of socket permissions, so you
should set the metadata_proxy_socket_mode option[2] in order to let the
metadata agent set the correct permissions on the metadata socket.

If you provide metadata_proxy_user/group in l3/dhcp-agent and
metadata-agent config then neutron should be able to deduce both
metadata_proxy_watch_log and metadata_proxy_socket_mode values.



[1] https://review.openstack.org/#/c/161494/
[2] https://review.openstack.org/#/c/165115/

Cédric/ZZelle

On Wed, Aug 31, 2016 at 2:16 PM, Paul Michali  wrote:

> Hi,
>
> I had seen something and was not sure if this was a subtle bug or not.
>
> I have a Liberty based openstack setup. The account that is setting up
> processes was user=neutron, group=neutron, however the metadata_agent.ini
> config file was set up for a different group. So there was a
> metadata_proxy_user=neutron, and metadata_proxy_group=foo config setting.
>
> This ini file was used by the metadata agent process, but it was not
> included in the DHCP agent process (not sure if I should have included the
> metadata_agent.ini in the startup of DHCP or should have added these two
> metadata proxy settings to neutron.conf, so that they were available to
> DHCP).
>
> In any case, here is what I saw happen...
>
> I created a subnet (not using a router in this setup). It looks like DHCP
> starts up the metadata agent proxy daemon) and the DHCP configuration is
> used, which does NOT include the metadata_proxy_user/group, so the current
> user's uid and gid are used (neutron/neutron) for the
> metadata_proxy_user/group settings.
>
> The proxy calls drop_privileges(), which because the group is different,
> the log file can no longer be accessed by the daemon. An OSError occurs
> with permission denied on the log file for this process, and the process
> exits without any indications.
>
> When I then try to use metadata services it fails (obviously). Looking, we
> see that the metadata service is running (but the proxy is not, and I don't
> see a way for an end user to check that - is there a way?).
>
> Looking in the proxy log, the initial startup messages are seen, showing
> all the configuration settings, and then there is nothing more. No
> indication that it is lowering privileges to run under some other
> user/group, that there was a fatal error, or that it is working and ready
> to process requests. Nothing more appears in the log, as it was working and
> there were no metadata proxy requests occurring.
>
> I was only able to figure it out, by first checking to see if the proxy
> was running, and then manually trying to start the proxy, using the command
> line in the log, under a debugger, to find out that there was a permission
> denied error.
>
> So, it is likely a misconfiguration error on the user's part, but it was
> really hard to figure that out.
>
> Should/could we somehow indicate if there is an error lowering privs?
>
> Is there a (user) way to tell if proxy is running?
>
> Is there some documentation indicating that the proxy user/group settings
> need to be available for both the metadata agent and for other agents that
> may spawn the proxy (DHCP, L3)?
>
> Regards,
>
> PCM
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Bodor János



2016. 08. 31. 15:51 keltezéssel, Turbo Fredriksson írta:

On Aug 31, 2016, at 2:28 PM, Bodor János wrote:


Here is a link:
http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html


That's only for LBaaSv1 unfortunately.

No, we have been using it with lbaasv2 for two weeks without any errors.

Regards,
Janos BODOR



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Help with ipv6 route configuration and problem to traverse virtual router.

2016-08-31 Thread Brian Haley

On 08/31/2016 07:00 AM, Jorge Luiz Correa wrote:


*Chain neutron-l3-agent-scope (1 references)*
 pkts bytes target prot opt in out source
destination
   78  4368 *DROP*   all  *  qr-1ee33f03-23  ::/0
::/0 mark match ! 0x400/0x

Packets pass in chain FORWARD -> neutron-filter-top ->
neutron-l3-agent-local ->
back to FORWARD -> neutron-l3-agent-FORWARD -> neutron-l3-agent-scope ->
DROP.


This looks similar to https://bugs.launchpad.net/neutron/+bug/1570122



Thank you Brian, this is the problem.

The IPv4 rules are very similar but they work. IPv6 is blocking for some reason.

Do you have the same mark/match rules with IPv4, they're just not getting 
hit?

Yes, IPv4 has this rule and it works fine. After adding a similar rule manually with
ip6tables, the traffic traverses the virtual router.


So is the ip6tables rule just wrong?  Feel free to add any info to the bug that 
might help fix this.


Thanks,

-Brian


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Cinder] The State of the NFS Driver ...

2016-08-31 Thread Sean McGinnis
Thanks for the write up Jay. This is useful.

Added [Cinder] tag to subject line...

On Tue, Aug 30, 2016 at 10:50:38AM -0500, Jay S. Bryant wrote:
> All,
> 
> I wanted to follow up on the e-mail thread [1] on Cloning support in
> the NFS driver.  The purpose of this e-mail is to provide the plan
> for the NFS driver going forward as I see it.
> 
> First, I am aware that the driver has gone quite some time without
> care and feeding.  For a number of reasons, the Public Cloud team
> within IBM is currently dependent upon the NFS driver working
> properly for the cloud environment we are building.  Given our
> current dependence on the driver we are planning on picking up the
> driver and maintaining it.
> 
> The first step in this process was getting the existing patch that
> adds snapshot support for NFS [2] rebased.  I did this work a couple
> of weeks ago and also got all the unit tests working for the unit
> test environment on the master branch.  I now see that it is in
> merge conflict again, I plan to continue to keep the patch
> up-to-date.
> 
> Erlon has been investigating issues with attaching snapshots.  It
> appears that this may be related to AppArmor running on the system
> where the VM is running and attachment is being attempted.  I am
> hoping to look into the other questions posed in the patch review in
> the next week or two.
> 
> The next step is to create a dependent patch, upon the snapshot
> patch, to implement cloning.  I am planning to also undertake this
> work.  I am assuming that getting the cloning support in place
> shouldn't be too difficult once snapshots are working as it will be
> just a matter of using the support from the remotefs driver.
> 
> The last piece of work we have in flight is working on adding QoS
> support to the NFS driver.  We have the following spec proposed to
> get that work started: [3]
> 
> So, we are in the process of bringing the NFS driver up to good
> standing.  During this process we would greatly appreciate reviews
> and input from those of you who have previously worked on the driver
> in order to expedite integration of the necessary changes. I feel it
> is in the best interest of the community to get the driver updated
> and supported given that it is the 4th most used driver according to
> our user survey.  I think it would not look good to our users if it
> were to suddenly be removed.
> 
> Thanks to all of your for your support in this effort!
> 
> Jay
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-August/102193.html
> 
> [2] https://review.openstack.org/#/c/147186/
> 
> [3] https://review.openstack.org/361456
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Jordan Pittier
On Wed, Aug 31, 2016 at 3:44 PM, Matthew Booth  wrote:
>
> Is there anything I missed? Or is it possible to unsubscribe from gerrit
> mail from bots? Or is there any other good way to achieve what I'm looking
> for which doesn't involve maintaining my own bot list? If not, would it be
> feasible to add something?
>

Most (all?) messages from CI have the lines:

"Patch Set X:
Build (succeeded|failed)."

Not super robust, but that's a start.
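
A rough sketch of that heuristic as a stand-alone filter helper (the names
and the exact pattern are illustrative; real CI comments vary):

import re
import sys

# Most CI comments contain "Patch Set N:" followed, after a blank line,
# by "Build succeeded." or "Build failed.".
CI_PATTERN = re.compile(r"^Patch Set \d+:\s+Build (succeeded|failed)\.",
                        re.MULTILINE)

def looks_like_ci_comment(body):
    return CI_PATTERN.search(body) is not None

if __name__ == "__main__":
    # Exit 0 for "CI mail", 1 otherwise, so the script can be called
    # from a procmail/maildrop-style mail filter.
    sys.exit(0 if looks_like_ci_comment(sys.stdin.read()) else 1)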

-- 
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Converged infrastructure

2016-08-31 Thread Jonathan D. Proulx
On Wed, Aug 31, 2016 at 01:01:56PM +0100, Matt Jarvis wrote:

:   Like a lot of others we run Ceph, and we absolutely don't converge our
:   storage and compute nodes for a variety of performance and management
:   related reasons. In our experience, the hardware and tuning
:   characteristics of both types of nodes are pretty different, in any
:   kind of recovery scenarios Ceph eats memory, and it feels like creating
:   a SPOF.

We do what you do for the reasons you mention :0

I guess I could spin an argument the other way and think of some ways
to make it go, but since we don't I won't wax theoretical about it.

-Jon

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Satish Patel
I did follow those instructions to configure lbaasv2, but I am using the RDO
version of Mitaka, so pulling the code from GitHub will break my stuff :(

On Wed, Aug 31, 2016 at 9:28 AM, Bodor János  wrote:
> Hy,
>
> Here is a link:
> http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>
> You should check the "Add LBaaS panels to Dashboard" section.
>
> Regards,
> Janos BODOR
>
>
> 2016. 08. 31. 13:34 keltezéssel, Satish Patel írta:
>
> Guy,
>
> Need help here. Anyone else who has same problem?
>
> --
> Sent from my iPhone
>
> On Aug 31, 2016, at 1:06 AM, Nasir Mahmood  wrote:
>
> Lbaas v2 is  not implemented yet,
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070066.html
>
> And
>
> https://blueprints.launchpad.net/horizon/+spec/lbaas-v2-panel
>
>
> On Aug 31, 2016 09:59, "Satish Patel"  wrote:
>>
>> Look like mitaka deprecated "enable_lb" option in new release, now it
>> auto-detect if lbaasv2 module loaded. I am seeing in load its
>> successfully loaded but still not seeing in horizon :(
>>
>> 2016-08-31 00:00:32.789 30820 INFO neutron.manager
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loading Plugin:
>> neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
>> 2016-08-31 00:00:33.189 30820 WARNING
>> neutron.services.provider_configuration
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] The configured
>> driver neutron_lbaas.agent_scheduler.ChanceScheduler has been moved,
>> automatically using neutron_lbaas.agent_scheduler.ChanceScheduler
>> instead. Please update your config files, as this automatic fixup will
>> be removed in a future release.
>> 2016-08-31 00:00:33.445 30820 WARNING neutron.api.extensions
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Extension
>> lbaas_agent_scheduler not supported by any of loaded plugins
>> 2016-08-31 00:00:33.446 30820 INFO neutron.api.extensions
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loaded extension:
>> lbaas_agent_schedulerv2
>> 2016-08-31 00:00:33.448 30820 WARNING neutron.api.extensions
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Extension lbaas
>> not supported by any of loaded plugins
>> 2016-08-31 00:00:33.450 30820 INFO neutron.api.extensions
>> [req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loaded extension:
>> lbaasv2
>> 2016-08-31 00:25:27.384 35800 INFO neutron.manager
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loading Plugin:
>> neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
>> 2016-08-31 00:25:27.756 35800 WARNING
>> neutron.services.provider_configuration
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] The configured
>> driver neutron_lbaas.agent_scheduler.ChanceScheduler has been moved,
>> automatically using neutron_lbaas.agent_scheduler.ChanceScheduler
>> instead. Please update your config files, as this automatic fixup will
>> be removed in a future release.
>> 2016-08-31 00:25:28.002 35800 WARNING neutron.api.extensions
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Extension
>> lbaas_agent_scheduler not supported by any of loaded plugins
>> 2016-08-31 00:25:28.003 35800 INFO neutron.api.extensions
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loaded extension:
>> lbaas_agent_schedulerv2
>> 2016-08-31 00:25:28.005 35800 WARNING neutron.api.extensions
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Extension lbaas
>> not supported by any of loaded plugins
>> 2016-08-31 00:25:28.007 35800 INFO neutron.api.extensions
>> [req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loaded extension:
>> lbaasv2
>>
>> On Wed, Aug 31, 2016 at 12:33 AM, Satish Patel 
>> wrote:
>> > I have added lbassv2 using following document:
>> >
>> > http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
>> >
>> > and enable: True in /etc/openstack-dashboard/local_settings ( Restart
>> > httpd) but still i am not able to see "Load Balancer" button in
>> > Network section, I am using mitaka Does it has different configuration
>> > to enable lbassv2 on horizon interface?
>> >
>> > OPENSTACK_NEUTRON_NETWORK = {
>> > 'enable_distributed_router': False,
>> > 'enable_firewall': False,
>> > 'enable_ha_router': False,
>> > 'enable_lb': True,
>> > 'enable_quotas': True,
>> > 'enable_security_group': True,
>> > 'enable_vpn': False,
>> > 'profile_support': None,
>> > }
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>


Re: [Openstack-operators] python and nice utf ö ü :)

2016-08-31 Thread Saverio Proto
Yes, it worked!
I pushed a new patchset, thank you!

https://review.openstack.org/361308

Saverio

2016-08-31 15:25 GMT+02:00 Matt Jarvis :
> I'm no python guru, but if you're getting unicode output from the python
> openstack tools then don't you want to prepend your strings with u in the
> print statements ?
>
> On 31 August 2016 at 14:03, Saverio Proto  wrote:
>>
>> Oh,
>> to stick with the subject of the email you can also call the instance
>> for example
>>
>> füöô
>>
>> and this will trigger the bug anyway :)
>>
>> Saverio
>>
>>
>>
>>
>> 2016-08-31 14:54 GMT+02:00 Saverio Proto :
>> > Hello Matt,
>> >
>> > I am sorry, I realize now I sent a very dumb email :) I will try to
>> > explain my self better.
>> >
>> > The script fails to print some resources. To reproduce the bug do the
>> > following,
>> >
>> > Create a instance and name it:
>> >
>> > آشپزی ایتالیایی
>> >
>> > then just call
>> >
>> > python user-info.py 
>> >
>> > (of course the username is the one that created the instance)
>> >
>> > this will print all the instances, including the one with the
>> > problematic name. Of course you will get an Error. It will raise an
>> > exception and will fail.
>> >
>> > the patch I proposed shows how to get a traceback that makes sense,
>> > you will get something like:
>> >
>> > UnicodeEncodeError: 'ascii' codec can't encode characters in position
>> > 0-3: ordinal not in range(128)
>> >
>> >
>> > Adding this two lines:
>> > reload(sys)
>> > sys.setdefaultencoding("utf-8")
>> >
>> > fixes the problem, I dont have anymore the exception
>> > UnicodeEncodeError and I see printed:
>> >
>> > Server: نواع-پاستاها-و-طرز-ط [3f26242c-440b-4a2e-b3ca-cb6c6c7ee8b2] -
>> > ACTIVE
>> >
>> > But on stackoverflow people say that these two lines I added are bad,
>> > so what should I do ? :)
>> >
>> > thank you !
>> >
>> > Saverio
>> >
>> >
>> >
>> > 2016-08-31 14:13 GMT+02:00 Matt Jarvis :
>> >> What was your problem to start with ?
>> >>
>> >> On 31 August 2016 at 12:56, Saverio Proto  wrote:
>> >>>
>> >>> Hello ops,
>> >>>
>> >>> this patch fixed my problem:
>> >>>
>> >>> https://review.openstack.org/#/c/361308/
>> >>>
>> >>> but it is an ugly hack according to:
>> >>>
>> >>>
>> >>>
>> >>> http://stackoverflow.com/questions/3828723/why-should-we-not-use-sys-setdefaultencodingutf-8-in-a-py-script
>> >>>
>> >>> anyone knows how to make it better ?
>> >>>
>> >>> Saverio
>> >>>
>> >>> ___
>> >>> OpenStack-operators mailing list
>> >>> OpenStack-operators@lists.openstack.org
>> >>>
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> >>
>> >>
>> >>
>> >> DataCentred Limited registered in England and Wales no. 05611763
>
>
>
> DataCentred Limited registered in England and Wales no. 05611763
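
A cleaner alternative to sys.setdefaultencoding() is to keep everything
unicode internally and encode explicitly at the point of output. A minimal
Python 2 sketch, assuming the script iterates over nova server objects (the
attribute names here are assumptions about user-info.py, not its actual code):

# -*- coding: utf-8 -*-

def format_server(server):
    # Build the whole line as unicode first.
    return u"Server: {0} [{1}] - {2}".format(server.name, server.id, server.status)

def print_server(server):
    # sys.stdout has no usable encoding when output is piped, so encode
    # explicitly instead of relying on the default ASCII codec.
    print(format_server(server).encode("utf-8"))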

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Turbo Fredriksson
On Aug 31, 2016, at 2:28 PM, Bodor János wrote:

> Here is a link:
> http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html


That's only for LBaaSv1 unfortunately.
-- 
I love deadlines. I love the whooshing noise they
make as they go by.
- Douglas Adams


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [infra] Reliable way to filter CI in gerrit spam?

2016-08-31 Thread Matthew Booth
I've just (re-)written an email filter which splits gerrit emails into
those from CI and those from real people. In general I'm almost never
interested in botmail, but they comprise about 80% of gerrit email.

Having looked carefully through some gerrit emails from real people and
CIs, I unfortunately can't find any features which distinguish the CI.
Consequently my filter is just a big enumeration of current known CIs. This
is kinda ugly, and will obviously get out of date.

Is there anything I missed? Or is it possible to unsubscribe from gerrit
mail from bots? Or is there any other good way to achieve what I'm looking
for which doesn't involve maintaining my own bot list? If not, would it be
feasible to add something?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Horizon missing loadbalance UI button

2016-08-31 Thread Bodor János

Hy,

Here is a link:
http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

You should check the "Add LBaaS panels to Dashboard" section.

Regards,
Janos BODOR


2016. 08. 31. 13:34 keltezéssel, Satish Patel írta:

Guy,

Need help here. Anyone else who has same problem?

--
Sent from my iPhone

On Aug 31, 2016, at 1:06 AM, Nasir Mahmood > wrote:



Lbaas v2 is  not implemented yet,

http://lists.openstack.org/pipermail/openstack-dev/2015-July/070066.html

And

https://blueprints.launchpad.net/horizon/+spec/lbaas-v2-panel


On Aug 31, 2016 09:59, "Satish Patel" > wrote:


Look like mitaka deprecated "enable_lb" option in new release, now it
auto-detect if lbaasv2 module loaded. I am seeing in load its
successfully loaded but still not seeing in horizon :(

2016-08-31 00:00:32.789 30820 INFO neutron.manager
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loading Plugin:
neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
2016-08-31 00:00:33.189 30820 WARNING
neutron.services.provider_configuration
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] The configured
driver neutron_lbaas.agent_scheduler.ChanceScheduler has been moved,
automatically using neutron_lbaas.agent_scheduler.ChanceScheduler
instead. Please update your config files, as this automatic fixup
will
be removed in a future release.
2016-08-31 00:00:33.445 30820 WARNING neutron.api.extensions
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Extension
lbaas_agent_scheduler not supported by any of loaded plugins
2016-08-31 00:00:33.446 30820 INFO neutron.api.extensions
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loaded
extension:
lbaas_agent_schedulerv2
2016-08-31 00:00:33.448 30820 WARNING neutron.api.extensions
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Extension lbaas
not supported by any of loaded plugins
2016-08-31 00:00:33.450 30820 INFO neutron.api.extensions
[req-3fc2a849-945a-4357-89f7-ac08218015a1 - - - - -] Loaded
extension:
lbaasv2
2016-08-31 00:25:27.384 35800 INFO neutron.manager
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loading Plugin:
neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
2016-08-31 00:25:27.756 35800 WARNING
neutron.services.provider_configuration
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] The configured
driver neutron_lbaas.agent_scheduler.ChanceScheduler has been moved,
automatically using neutron_lbaas.agent_scheduler.ChanceScheduler
instead. Please update your config files, as this automatic fixup
will
be removed in a future release.
2016-08-31 00:25:28.002 35800 WARNING neutron.api.extensions
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Extension
lbaas_agent_scheduler not supported by any of loaded plugins
2016-08-31 00:25:28.003 35800 INFO neutron.api.extensions
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loaded
extension:
lbaas_agent_schedulerv2
2016-08-31 00:25:28.005 35800 WARNING neutron.api.extensions
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Extension lbaas
not supported by any of loaded plugins
2016-08-31 00:25:28.007 35800 INFO neutron.api.extensions
[req-9546ee6a-36ab-44ef-a786-a3712ee02ff7 - - - - -] Loaded
extension:
lbaasv2

On Wed, Aug 31, 2016 at 12:33 AM, Satish Patel
> wrote:
> I have added lbassv2 using following document:
>
>
http://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

>
> and enable: True in /etc/openstack-dashboard/local_settings (
Restart
> httpd) but still i am not able to see "Load Balancer" button in
> Network section, I am using mitaka Does it has different
configuration
> to enable lbassv2 on horizon interface?
>
> OPENSTACK_NEUTRON_NETWORK = {
> 'enable_distributed_router': False,
> 'enable_firewall': False,
> 'enable_ha_router': False,
> 'enable_lb': True,
> 'enable_quotas': True,
> 'enable_security_group': True,
> 'enable_vpn': False,
> 'profile_support': None,
> }

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Post to : openstack@lists.openstack.org

Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack