Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
On Tue, Jun 21, 2016 at 7:04 PM, Sam Morrison wrote:
> On 22 Jun 2016, at 10:58 AM, Matt Fischer wrote:
>
> Have you setup token caching at the service level? Meaning a Memcache cluster that glance, Nova etc would talk to directly? That will really

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> On 22 Jun 2016, at 10:58 AM, Matt Fischer wrote:
>
> Have you setup token caching at the service level? Meaning a Memcache cluster that glance, Nova etc would talk to directly? That will really cut down the traffic.

Yeah we have that although the default cache

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
Have you set up token caching at the service level? Meaning a Memcache cluster that glance, Nova etc. would talk to directly? That will really cut down the traffic.
On Jun 21, 2016 5:55 PM, "Sam Morrison" wrote:
> On 22 Jun 2016, at 9:42 AM, Matt Fischer
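[The service-level token caching discussed in this thread is normally configured through keystonemiddleware in each service's own config file. A minimal sketch, assuming a three-node memcached cluster; the hostnames are placeholders:]

```ini
# nova.conf / glance-api.conf, etc. -- cache validated tokens in memcached
# so services do not round-trip to Keystone on every API request.
[keystone_authtoken]
memcached_servers = memcache1:11211,memcache2:11211,memcache3:11211
# How long a validated token may be served from the cache, in seconds.
token_cache_time = 300
```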

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Steve Martinelli
FWIW, we have refactored the revocation tree into a list; this should speed up the revocation process time significantly (https://review.openstack.org/#/c/311652/). There is no way to disable revocations, since that would open up a security hole.
On Tue, Jun 21, 2016 at 8:55 PM, Sam Morrison

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> On 22 Jun 2016, at 9:42 AM, Matt Fischer wrote:
>
> On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison wrote:
>> On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote:

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Matt Fischer
On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison wrote:
> On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote:
>
> I don't have a solution for you, but I will concur that adding revocations kills performance, especially as that tree grows. I'm curious what

Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Sam Morrison
> On 22 Jun 2016, at 1:45 AM, Matt Fischer wrote:
>
> I don't have a solution for you, but I will concur that adding revocations kills performance, especially as that tree grows. I'm curious what you guys are doing revocations on, anything other than logging out of

Re: [Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Abhishek Chanda
Sorry for being vague. Here are some more details: I installed the Mitaka packages from the openSUSE repo. Here are the config files:
magnum.conf: https://gist.github.com/achanda/3fca8914e225e430e8e6a86f321cb77d
api-paste.ini: https://gist.github.com/achanda/4415d5554156234c9ef5da0300e1487e
policy.json

Re: [Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Hongbin Lu
Hi Abhishek, I have no idea and need further information. Could you provide the following?
* How you installed Magnum (from source or package, manually or using any tool, etc.)?
* Which version of Magnum you installed (master, Mitaka, etc.)?
* Could you paste your Magnum

[Openstack-operators] [Magnum] Keystone error while creating a baymodel

2016-06-21 Thread Abhishek Chanda
Hi all, I am trying to run Magnum on 3 management nodes. I get the following error in api logs while trying to create a baymodel Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in callfunction result = f(self, *args, **kwargs) File

[Openstack-operators] Help the community recruit app developers!

2016-06-21 Thread Kruithof Jr, Pieter
Operators, Apologies for any cross postings. As part of a long-term commitment to enhance ease-of-use, the OpenStack UX project, with support of the OpenStack Foundation and the Technical Committee, is now building a community of application and software developers interested in providing

Re: [Openstack-operators] [Glance] Default policy in policy.json

2016-06-21 Thread Andrew Laski
On Tue, Jun 21, 2016, at 12:27 PM, Adam Young wrote:
> On 06/20/2016 10:09 PM, Michael Richardson wrote:
>> On Fri, 17 Jun 2016 16:27:54 +
>>> Also which would be preferred "role:admin" or "!"? Brian points out on [1] that "!" would in effect, notify the admins that a policy is

Re: [Openstack-operators] [Glance] Default policy in policy.json

2016-06-21 Thread Adam Young
On 06/20/2016 10:09 PM, Michael Richardson wrote:
> On Fri, 17 Jun 2016 16:27:54 +
>> Also which would be preferred "role:admin" or "!"? Brian points out on [1] that "!" would, in effect, notify the admins that a policy is not defined, as they would be unable to perform the action themselves.

[Openstack-operators] [neutron] Packet loss with DVR and IPv6

2016-06-21 Thread Tomas Vondra
Dear list, I've stumbled upon a weird condition in Neutron and couldn't find a bug filed for it. Even though it is happening on the Kilo release, it may still be relevant. The setup has 3 network nodes and 1 compute node currently hosting a virtual network (GRE-based). DVR is enabled. I have

[Openstack-operators] [Openstack-Operators] Keystone cache strategies

2016-06-21 Thread Jose Castro Leon
Hi all, While doing scale tests on our infrastructure, we observed an increase in the response times of our keystone servers. After further investigation we observed that we have a hot key in our cache configuration (this means that all keystone servers are checking this key quite frequently)
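[Keystone's own server-side caching layer, where a hot key like the one described would live, is driven by oslo.cache/dogpile.cache. A minimal configuration sketch; the memcached hostnames are placeholders:]

```ini
# keystone.conf -- enable Keystone's dogpile.cache-backed caching layer.
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = memcache1:11211,memcache2:11211
```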

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael, have a look at Openstack Manila and CephFS Cheers Saverio 2016-06-21 11:42 GMT+02:00 Michael Stang : > I think I have asked my question not correctly, it is not for the cinder > backend, I meant the shared storage for the instances which is

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Sampath, no I haven't read this one yet, thank you, I will go through it. Regards, Michael
> Sam P wrote on 21 June 2016 at 09:55:
>
> Hi,
> Hope you have already gone through this document... if not FYI

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hello Saverio, thank you, I will have a look at these documents. Michael
> Saverio Proto wrote on 21 June 2016 at 09:42:
>
> Hello Michael,
> a very widely adopted solution is to use Ceph with rbd volumes.

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Matt Jarvis
If you look at the user survey ( https://www.openstack.org/user-survey/survey-2016-q1/landing ) you can see what the current landscape looks like in terms of deployments. Ceph is by far the most commonly used storage backend for Cinder. On 21 June 2016 at 08:27, Michael Stang

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Sam P
Hi, Hope you have already gone through this document... if not, FYI: http://docs.openstack.org/ops-guide/arch_storage.html As Saverio said, Ceph is a widely adopted solution. For small clouds, we found that NFS is a much more affordable solution in terms of cost and complexity. --- Regards,
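[For the NFS option mentioned here, the usual pattern for shared instance storage is to mount one export at nova's instances path on every compute node. A sketch only; the server name and export path are assumptions:]

```
# /etc/fstab on each compute node -- shared instance storage over NFS,
# so all hypervisors see the same /var/lib/nova/instances (enables live migration).
nfsserver:/export/nova_instances  /var/lib/nova/instances  nfs4  defaults,_netdev  0 0
```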

Re: [Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Saverio Proto
Hello Michael, a very widely adopted solution is to use Ceph with rbd volumes. http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html http://docs.ceph.com/docs/master/rbd/rbd-openstack/ you find more options here under Volume drivers:
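[The Ceph RBD setup in the documents linked above boils down to a Cinder backend block along these lines. A sketch under assumptions: the pool name, cephx user, and libvirt secret UUID are placeholders you would substitute from your own Ceph deployment:]

```ini
# cinder.conf -- Ceph RBD as the Cinder volume backend.
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
# UUID of the libvirt secret holding the cephx key (placeholder value).
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```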

[Openstack-operators] Shared Storage for compute nodes

2016-06-21 Thread Michael Stang
Hi, I wonder what the recommendation is for shared storage for the compute nodes? At the moment we are using an iSCSI device served to all compute nodes with multipath; the filesystem is OCFS2. But this makes it a little inflexible in my opinion, because you have to decide how many