[Openstack-operators] [Neutron] User feedback track: end user and operator pain points - report

2016-05-03 Thread Carl Baldwin
Hi all,

We had a productive session with operators at the summit [1].  I
wanted to be sure to go over the notes while they were fresh in my
mind.  Some of the issues still need some discussion...

Probably the most contentious issue was that of creating HA routers
when there aren't enough agents to satisfy the minimum agents
requirement of HA [7].  We need to drive this discussion.  There were
some very convincing but conflicting points of view expressed in the
session.  I have expressed my point of view in the bug report.

There was a complaint that l3-agent-router-remove doesn't work if
there are only 2 l3 agents on the network.  Assaf thinks this was
fixed in Liberty.  Please file a bug if this is still a problem.

A request was made that, for upgrading, it would be nice if there were a
tool that took a flat config file and moved the deprecated options to
their new homes; apart from that, the last upgrade we did was entirely
automatable.  I did not see whether an RFE bug was filed with the
upgrades team to address this.

We had a discussion about IP protocol numbers in security groups.  We
think that any IP protocol number can be specified in the API but that
is not well-documented in the API docs [9].  We need a documentation
bug filed.  Support for additional protocol names has been added to
the client [10].
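For anyone who wants to try this before the docs are updated, here is a
sketch of creating a rule with a numeric protocol directly against the API
(protocol 112/VRRP, the endpoint, token and security group UUID below are
just placeholders):

  curl -s -X POST http://<neutron-endpoint>:9696/v2.0/security-group-rules \
    -H "X-Auth-Token: <token>" -H "Content-Type: application/json" \
    -d '{"security_group_rule": {"security_group_id": "<secgroup-uuid>",
         "direction": "ingress", "ethertype": "IPv4", "protocol": "112"}}'

The CLI equivalent should be "neutron security-group-rule-create --direction
ingress --protocol 112 <secgroup>", though whether the client accepts numeric
protocols may depend on the release you are running.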

There was discussion around routers and multiple mechanism drivers
[12].  My memory about this discussion is already fading.  Can anyone
fill in?

Some of the issues are gaining traction and have owners since the session...

  - Dougwig will explore using nginx as metadata proxy [2]
  - Kevin Benton is working on scalability of security groups changes [3].
    - However, the work can't be backported, so we may still need to
      find something for stable branches.
  - Several operators expressed interest in north / south DVR for IPv6
    tenant networks.
    - We now have an RFE [11] to push.

Some issues were resolved in near real time during the session but
might still need final approval...  Great job!

  - Cleaning up stale flows needs a +A to stable/liberty [4]
  - A new Neutron Mitaka release [5] with this fix [6] is in the works.
  - Consume service plugins queues in RPC workers was merged [8].

If there is something that I missed, please let me know.

Carl Baldwin

[1] https://www.openstack.org/summit/austin-2016/summit-schedule/events/9103
[2] https://bugs.launchpad.net/neutron/+bug/1524916
[3] https://bugs.launchpad.net/neutron/+bug/1576425
[4] https://review.openstack.org/#/c/300424/
[5] https://review.openstack.org/#/c/310931/
[6] https://git.openstack.org/cgit/openstack/neutron/commit/?id=90b9cd334b1b33df933bf1b61b38c6e087c431af
[7] https://bugs.launchpad.net/neutron/+bug/1555042
[8] https://review.openstack.org/#/c/238745/
[9] http://developer.openstack.org/api-ref-networking-v2-ext.html#security_groups
[10] https://review.openstack.org/#/c/307908/
[11] https://bugs.launchpad.net/neutron/+bug/1577488
[12] https://bugs.launchpad.net/neutron/+bug/1555384

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-05-03 Thread Fox, Kevin M
For a tenant, though, I may not want to have to write user-data to bind everything
I launch through horizon's nova workflow, heat, sahara, etc. Just having
one place to put the hook that is always called has some major advantages.

Thanks,
Kevin

From: Mathieu Gagné [mga...@calavera.ca]
Sent: Tuesday, May 03, 2016 3:25 PM
To: Fox, Kevin M
Cc: Michael Still; openstack-operators@lists.openstack.org; Sean Dague
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?

On Tue, May 3, 2016 at 5:51 PM, Fox, Kevin M  wrote:
>
> I think I see at least one use case for minimum 2 hooks...
>
> Cloud provider wants to inject some stuff.
>
> Cloud tenant wants their own hook called to inject stuff to point to the
> Config Management server in their own tenant.
>
> Maybe that's not vendor_data but tenant_data or something...
>

I think this could very well be addressed by cloud-init or any other
sane initialization service.
We just have to make sure instance identification and any other
reasonable information are made available to those tools so they can
be passed to their own Config Management server.
I see vendor_data as a non-intrusive way to pass additional data to
the instance without requiring the vendor/provider to inject an agent
or custom code within the customer's image.

--
Mathieu


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-05-03 Thread Mathieu Gagné
On Tue, May 3, 2016 at 5:51 PM, Fox, Kevin M  wrote:
>
> I think I see at least one use case for minimum 2 hooks...
>
> Cloud provider wants to inject some stuff.
>
> Cloud tenant wants their own hook called to inject stuff to point to the
> Config Management server in their own tenant.
>
> Maybe that's not vendor_data but tenant_data or something...
>

I think this could very well be addressed by cloud-init or any other
sane initialization service.
We just have to make sure instance identification and any other
reasonable information are made available to those tools so they can
be passed to their own Config Management server.
I see vendor_data as a non-intrusive way to pass additional data to
the instance without requiring the vendor/provider to inject an agent
or custom code within the customer's image.
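For reference, whatever the driver produces ends up exposed to the guest
through the metadata service (and config drive), so an instance-side tool
only needs something like the following to consume it (a sketch; the jq
filter is just for pretty-printing and is an assumption on my part):

  # from inside the instance
  curl -s http://169.254.169.254/openstack/latest/vendor_data.json | jq .

Recent cloud-init versions can also consume the same vendor data directly,
which is part of what makes the approach non-intrusive for the guest image.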

--
Mathieu

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova][scheduler] please review nova sched logging proposal

2016-05-03 Thread Chris Friesen

Hi all,

There's a proposal for improving the nova scheduler logs up at 
https://review.openstack.org/#/c/306647/


If you would like to be able to more easily determine why no valid host was 
found, please review the proposal and leave feedback.


Thanks,
Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-05-03 Thread Fox, Kevin M
Depends on what it's used for... I can see it potentially being used with Chef 
or Puppet, for calling hooks into AD to bind to a domain. etc. Probably at the 
same time. We use it with our keyserver (something similar to Barbican but 
created before Barbican was a thing) to relay trust info between Nova and the 
Keyserver through the Instance.

I've done some careful inheriting of our vendor_data plug-in to get all the 
features in one plugin, but I could see it being difficult for some folks when 
more features are added.
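For anyone unfamiliar with the hook being discussed, it is wired up in
nova.conf roughly like this (a sketch from memory of Mitaka-era options;
double-check the option group and defaults against your release):

  [DEFAULT]
  # class-loader override that gets subclassed for custom behaviour
  vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData
  # only consulted by the default static-file driver
  vendordata_jsonfile_path = /etc/nova/vendor_data.json

A custom driver simply replaces the class on the vendordata_driver line
with its own implementation.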

I think I see at least one use case for minimum 2 hooks...

Cloud provider wants to inject some stuff.

Cloud tenant wants their own hook called to inject stuff to point to the Config 
Management server in their own tenant.

Maybe that's not vendor_data but tenant_data or something...

Thanks,
Kevin

From: mikalst...@gmail.com [mikalst...@gmail.com] on behalf of Michael Still 
[mi...@stillhq.com]
Sent: Tuesday, May 03, 2016 2:37 PM
To: Fox, Kevin M
Cc: David Medberry; Ned Rhudy; openstack-operators@lists.openstack.org; Sean 
Dague
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?

Hey,

I just wanted to let people know that the review is progressing, but we have a 
question.

Do operators really need to call more than one external REST service to collect 
vendordata? We can implement that in nova, but it would be nice to reduce the 
complexity to only having one external REST service. If you needed to call more 
than one service you could of course write a REST service that aggregated REST 
services.

Does anyone in the operator community have strong feelings either way? Should 
nova be able to call more than one external vendordata REST service?

Thanks,
Michael




On Sat, Apr 30, 2016 at 4:11 AM, Michael Still <mi...@stillhq.com> wrote:
So, after a series of hallway track chats this week, I wrote this:

https://review.openstack.org/#/c/310904/

Which is a proposal for how to implement vendordata in a way which would 
(probably) be acceptable to nova, whilst also meeting the needs of operators. I 
should reinforce that because this week is so hectic nova core hasn't really 
talked about this yet, but I am pretty sure I understand and have addressed 
Sean's concerns.

I'd be curious as to if the proposed solution actually meets your needs.

Michael




On Mon, Apr 18, 2016 at 10:55 AM, Fox, Kevin M <kevin@pnnl.gov> wrote:
We've used it too to work around the lack of instance users in nova. Please 
keep it until a viable solution can be reached.

Thanks,
Kevin

From: David Medberry [openst...@medberry.net]
Sent: Monday, April 18, 2016 7:16 AM
To: Ned Rhudy
Cc: openstack-operators@lists.openstack.org

Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?

Hi Ned, Jay,

We use it also and I have to agree, it's onerous to require users to add that 
functionality back in. Where was this discussed?

On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <erh...@bloomberg.net> wrote:
Requiring users to remember to pass specific userdata through to their instance 
at every launch in order to replace functionality that currently works 
invisibly to them would be a step backwards. It's an alternative, yes, but it's 
an alternative that adds burden to our users and is not one we would pursue.

What is the rationale for desiring to remove this functionality?

From: jaypi...@gmail.com
Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in 
nova.conf?
On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
> I noticed while reading through Mitaka release notes that
> vendordata_driver has been deprecated in Mitaka
> (https://review.openstack.org/#/c/288107/) and is slated for removal at
> some point. This came as somewhat of a surprise to me - I searched
> openstack-dev for vendordata-related subject lines going back to January
> and found no discussion on the matter (IRC logs, while available on
> eavesdrop, are not trivially searchable without a little scripting to
> fetch them first, so I didn't check there yet).
>
> We at Bloomberg make heavy use of this particular feature to inject
> dynamically generated JSON into the metadata service of instances; the
> content of the JSON differs depending on the instance making the request
> to the metadata service. The functionality that adds the contents of a
> static JSON file, while remaining around, is not suitable for our use case.
>
> Please let me know if you use vendordata_driver so that I/we can present
> an organized case for why this option or equivalent functionality needs
> to remain around. The alternative is that we end up patching the
> vendordata driver directly in Nova when we move to Mitaka, which I'd
> like to avoid; as a matter of principle I

Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-05-03 Thread Michael Still
Hey,

I just wanted to let people know that the review is progressing, but we
have a question.

Do operators really need to call more than one external REST service to
collect vendordata? We can implement that in nova, but it would be nice to
reduce the complexity to only having one external REST service. If you
needed to call more than one service you could of course write a REST
service that aggregated REST services.

Does anyone in the operator community have strong feelings either way?
Should nova be able to call more than one external vendordata REST service?

Thanks,
Michael




On Sat, Apr 30, 2016 at 4:11 AM, Michael Still  wrote:

> So, after a series of hallway track chats this week, I wrote this:
>
> https://review.openstack.org/#/c/310904/
>
> Which is a proposal for how to implement vendordata in a way which would
> (probably) be acceptable to nova, whilst also meeting the needs of
> operators. I should reinforce that because this week is so hectic nova core
> hasn't really talked about this yet, but I am pretty sure I understand and
> have addressed Sean's concerns.
>
> I'd be curious as to if the proposed solution actually meets your needs.
>
> Michael
>
>
>
>
> On Mon, Apr 18, 2016 at 10:55 AM, Fox, Kevin M  wrote:
>
>> We've used it too to work around the lack of instance users in nova.
>> Please keep it until a viable solution can be reached.
>>
>> Thanks,
>> Kevin
>> --
>> *From:* David Medberry [openst...@medberry.net]
>> *Sent:* Monday, April 18, 2016 7:16 AM
>> *To:* Ned Rhudy
>> *Cc:* openstack-operators@lists.openstack.org
>>
>> *Subject:* Re: [Openstack-operators] Anyone else use vendordata_driver
>> in nova.conf?
>>
>> Hi Ned, Jay,
>>
>> We use it also and I have to agree, it's onerous to require users to add
>> that functionality back in. Where was this discussed?
>>
>> On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
>> erh...@bloomberg.net> wrote:
>>
>>> Requiring users to remember to pass specific userdata through to their
>>> instance at every launch in order to replace functionality that currently
>>> works invisibly to them would be a step backwards. It's an alternative,
>>> yes, but it's an alternative that adds burden to our users and is not one
>>> we would pursue.
>>>
>>> What is the rationale for desiring to remove this functionality?
>>>
>>> From: jaypi...@gmail.com
>>> Subject: Re: [Openstack-operators] Anyone else use vendordata_driver in
>>> nova.conf?
>>>
>>> On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
>>> > I noticed while reading through Mitaka release notes that
>>> > vendordata_driver has been deprecated in Mitaka
>>> > (https://review.openstack.org/#/c/288107/) and is slated for removal
>>> at
>>> > some point. This came as somewhat of a surprise to me - I searched
>>> > openstack-dev for vendordata-related subject lines going back to
>>> January
>>> > and found no discussion on the matter (IRC logs, while available on
>>> > eavesdrop, are not trivially searchable without a little scripting to
>>> > fetch them first, so I didn't check there yet).
>>> >
>>> > We at Bloomberg make heavy use of this particular feature to inject
>>> > dynamically generated JSON into the metadata service of instances; the
>>> > content of the JSON differs depending on the instance making the
>>> request
>>> > to the metadata service. The functionality that adds the contents of a
>>> > static JSON file, while remaining around, is not suitable for our use
>>> case.
>>> >
>>> > Please let me know if you use vendordata_driver so that I/we can
>>> present
>>> > an organized case for why this option or equivalent functionality needs
>>> > to remain around. The alternative is that we end up patching the
>>> > vendordata driver directly in Nova when we move to Mitaka, which I'd
>>> > like to avoid; as a matter of principle I would rather see more
>>> > classloader overrides, not fewer.
>>>
>>> Wouldn't an alternative be to use something like Chef, Puppet, Ansible,
>>> Saltstack, etc and their associated config variable storage services
>>> like Hiera or something similar to publish custom metadata? That way,
>>> all you need to pass to your instance (via userdata) is a URI or
>>> connection string and some auth details for your config storage service
>>> and the instance can grab whatever you need.
>>>
>>> Thoughts?
>>> -jay
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-

Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Sergio Cuellar Valdes
On 3 May 2016 at 10:01, Daniel P. Berrange  wrote:

> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondants
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation which is many many times slower
> to than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomoly.
>
> I can think of a few reasons
>
>  1. Respondants are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondants are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


Hi everybody,

I'm confused too about the use of KVM or QEMU. On the computes, the
file /etc/nova/nova-compute.conf has:

virt_type=kvm

The output of:

nova hypervisor-show  | grep hypervisor_type

is:

hypervisor_type   | QEMU

The virsh dumpxml of the instances shows:

<emulator>/usr/bin/qemu-system-x86_64</emulator>

But according to this document [1], it is using the QEMU emulator instead of
KVM, because it is not using /usr/bin/qemu-kvm.

So I really don't know if it's using KVM or QEMU.

[1] https://libvirt.org/drvqemu.html
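A quick way to check on the compute node which mode a running guest
actually got (a sketch; the instance name below is illustrative):

  # the domain type is what matters, not the emulator path
  virsh dumpxml instance-00000001 | grep '<domain type'
  #   <domain type='kvm' ...>   -> hardware-accelerated KVM
  #   <domain type='qemu' ...>  -> pure emulation

  # and confirm the host can do KVM at all
  egrep -c '(vmx|svm)' /proc/cpuinfo    # > 0 means HW virtualization is available
  ls -l /dev/kvm                        # should exist and be accessible

Note that with virt_type=kvm the emulator can still be
/usr/bin/qemu-system-x86_64 (as on Ubuntu), so the emulator path alone does
not tell you whether KVM acceleration is in use; the /usr/bin/qemu-kvm
binary mentioned in the libvirt document is a distribution packaging detail.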

Regards,
Sergio Cuéllar​


-- 
* Sergio Cuéllar │DevOps Engineer*
 KIO NETWORKS
 Mexico City Phone (52) 55 8503 2600 ext. 4335
 Mobile: 5544844298
 www.kionetworks.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Silence Dogood
what you should be looking for is hvm.

On Tue, May 3, 2016 at 3:20 PM, Maish Saidel-Keesing 
wrote:

> I would think that the problem is that OpenStack does not really report
> back that you are using KVM - it reports that you are using QEMU.
>
> Even when in nova.conf I have configured virt_type=kvm, when I run nova
> hypervisor-show XXX | grep hypervisor_type
>
> I am presented with the following
>
> | hypervisor_type   | QEMU
>
> Bug?
>
>
> On 03/05/16 18:01, Daniel P. Berrange wrote:
>
> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondants
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation which is many many times slower
> to than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomoly.
>
> I can think of a few reasons
>
>  1. Respondants are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondants are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel
>
>
> --
> Best Regards,
> Maish Saidel-Keesing
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Maish Saidel-Keesing
I would think that the problem is that OpenStack does not really report
back that you are using KVM - it reports that you are using QEMU.

Even when in nova.conf I have configured virt_type=kvm, when I run nova
hypervisor-show XXX | grep hypervisor_type

I am presented with the following

| hypervisor_type   | QEMU

Bug?


On 03/05/16 18:01, Daniel P. Berrange wrote:
> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondants
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation which is many many times slower
> to than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomoly.
>
> I can think of a few reasons
>
>  1. Respondants are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondants are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel

-- 
Best Regards,
Maish Saidel-Keesing
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Debug logging

2016-05-03 Thread Ronald Bradford
As Matt discussed, there is a push to better identify debug messages (and
the packages) that are important to operators.

In a subsequent session there was discussion about creating etherpads to make
it easier to identify DEBUG messages that are in use and important for
analysis, and, for ease of reference, any DEBUG messages that are regularly
discarded, e.g. in log analysis patterns.

The two etherpads have been set up to help better identify and classify
DEBUG log messages.

https://etherpad.openstack.org/p/ops-debug-messages-in-use

https://etherpad.openstack.org/p/ops-debug-messages-discarded
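As a side note for operators who only need extra verbosity from a couple of
modules, oslo.log lets you leave debug off globally and raise individual
loggers instead (a sketch; the module list below is just an example):

  [DEFAULT]
  debug = False
  default_log_levels = amqp=WARN,amqplib=WARN,oslo.messaging=INFO,nova.compute.manager=DEBUG,neutron.agent.l3=DEBUG

That kind of per-module tuning may also help narrow down which DEBUG
messages belong on the "in use" etherpad versus the "discarded" one.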

Regards

Ronald



On Mon, Apr 25, 2016 at 11:10 PM, Matt Jarvis  wrote:

> One of the discussion points which came up in the Logging session at the
> Ops Summit today was that a lot of operators are running with debug logging
> enabled. There was a request from development to understand more about why
> that might be needed, ie. under what circumstances are operators not
> getting enough logging information without debug logging. If you are
> running one of the core projects with debug logging enabled, are there
> specific scenarios under which that has been needed to solve operational
> issues ? It would be really helpful if we could get some wider input from
> the ops community around this.
>
> DataCentred Limited registered in England and Wales no. 05611763
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Jared Wilkinson
So forgive my lack of kvm/qemu knowledge but I couldn’t find anything on Google 
on this. If you deployed an instance of a different architecture than the 
physical CPU, wouldn’t qemu just emulate the processor (if you were in 
virt_type=kvm mode), or would libvirt throw some error?

Thanks,
Jared

Jared Wilkinson | Infrastructure Engineer – Systems
jwilkin...@ebsco.com | (W) 205/981-4018 | (M) 205/259-9802
5724 US Highway 280 East, Birmingham, AL 35242, USA







On 5/3/16, 10:01 AM, "Daniel P. Berrange"  wrote:

>Hello Operators,
>
>One of the things that constantly puzzles me when reading the user
>survey results wrt hypervisor is the high number of respondants
>claiming to be using QEMU (as distinct from KVM).
>
>As a reminder, in Nova saying virt_type=qemu causes Nova to use
>plain QEMU with pure CPU emulation which is many many times slower
>to than native CPU performance, while virt_type=kvm causes Nova to
>use QEMU with KVM hardware CPU acceleration which is close to native
>performance.
>
>IOW, virt_type=qemu is not something you'd ever really want to use
>unless you had no other options due to the terrible performance it
>would show. The only reasons to use QEMU are if you need non-native
>architecture support (ie running arm/ppc on x86_64 host), or if you
>can't do KVM due to hardware restrictions (ie ancient hardware, or
>running compute hosts inside virtual machines)
>
>Despite this, in the 2016 survey 10% claimed to be using QEMU in
>production & 3% in PoC and dev, in 2014 it was even higher at 15%
>in prod & 12% in PoC and 28% in dev.
>
>Personally my gut feeling says that QEMU usage ought to be in very
>low single figures, so I'm curious as to the apparent anomoly.
>
>I can think of a few reasons
>
> 1. Respondants are confused as to the difference between QEMU
>and KVM, so are saying QEMU, despite fact they are using KVM.
>
> 2. Respondants are confused as to the difference between QEMU
>and KVM, so have mistakenly configured their nova hosts to
>use QEMU instead of KVM and suffering poor performance without
>realizing their mistake.
>
> 3. There are more people than I expect who are running their
>cloud compute hosts inside virtual machines, and thus are
>unable to use KVM.
>
> 4. There are more people than I expect who are providing cloud
>hosting for non-native architectures. eg ability to run an
>arm7/ppc guest image on an x86_64 host and so genuinely must
>use QEMU
>
>If items 1 / 2 are the cause, then by implication the user survey
>is likely under-reporting the (already huge) scale of the KVM usage.
>
>I can see 3. being a likely explanation for high usage of QEMU in a
>dev or PoC scenario, but it feels unlikely for a production deployment.
>
>While 4 is technically possible, Nova doesn't really do a very good
>job at mixed guest arch hosting - I'm pretty sure there are broken
>pieces waiting to bite people who try it.
>
>Does anyone have any thoughts on this topic ?
>
>Indeed, is there anyone here who genuinely use virt_type=qemu in a
>production deployment of OpenStack who might have other reasons that
>I've missed ?
>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o- http://virt-manager.org :|
>|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
>___
>OpenStack-operators mailing list
>OpenStack-operators@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps Meeting - Tomorrow 2016-05-04 1900 UTC

2016-05-03 Thread Joseph Bajin
Everyone,

The OSOps group will be having their next meeting tomorrow May 4th, 2016 at
1900 UTC.
It will be hosted in the #openstack-meeting-4 room.

The agenda has been added to the etherpad and wiki.  You can find that
here [1].  The primary goal is to follow up on the discussions that we had
at the Summit and start working on the tasks outlined in the action items.
We are open to anyone that wants to participate.

See you tomorrow!

--Joe



[1] https://etherpad.openstack.org/p/osops-irc-meeting-20160504
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Matt Jarvis
I would suspect that quite a lot fall into 1.

On 3 May 2016 at 16:33, David Medberry  wrote:

> The only reason I can think of is that they are doing nested VMs and don't
> have the right nesting flag enabled in their base flag.
>
> On Tue, May 3, 2016 at 9:01 AM, Daniel P. Berrange 
> wrote:
>
>> Hello Operators,
>>
>> One of the things that constantly puzzles me when reading the user
>> survey results wrt hypervisor is the high number of respondants
>> claiming to be using QEMU (as distinct from KVM).
>>
>> As a reminder, in Nova saying virt_type=qemu causes Nova to use
>> plain QEMU with pure CPU emulation which is many many times slower
>> to than native CPU performance, while virt_type=kvm causes Nova to
>> use QEMU with KVM hardware CPU acceleration which is close to native
>> performance.
>>
>> IOW, virt_type=qemu is not something you'd ever really want to use
>> unless you had no other options due to the terrible performance it
>> would show. The only reasons to use QEMU are if you need non-native
>> architecture support (ie running arm/ppc on x86_64 host), or if you
>> can't do KVM due to hardware restrictions (ie ancient hardware, or
>> running compute hosts inside virtual machines)
>>
>> Despite this, in the 2016 survey 10% claimed to be using QEMU in
>> production & 3% in PoC and dev, in 2014 it was even higher at 15%
>> in prod & 12% in PoC and 28% in dev.
>>
>> Personally my gut feeling says that QEMU usage ought to be in very
>> low single figures, so I'm curious as to the apparent anomoly.
>>
>> I can think of a few reasons
>>
>>  1. Respondants are confused as to the difference between QEMU
>> and KVM, so are saying QEMU, despite fact they are using KVM.
>>
>>  2. Respondants are confused as to the difference between QEMU
>> and KVM, so have mistakenly configured their nova hosts to
>> use QEMU instead of KVM and suffering poor performance without
>> realizing their mistake.
>>
>>  3. There are more people than I expect who are running their
>> cloud compute hosts inside virtual machines, and thus are
>> unable to use KVM.
>>
>>  4. There are more people than I expect who are providing cloud
>> hosting for non-native architectures. eg ability to run an
>> arm7/ppc guest image on an x86_64 host and so genuinely must
>> use QEMU
>>
>> If items 1 / 2 are the cause, then by implication the user survey
>> is likely under-reporting the (already huge) scale of the KVM usage.
>>
>> I can see 3. being a likely explanation for high usage of QEMU in a
>> dev or PoC scenario, but it feels unlikely for a production deployment.
>>
>> While 4 is technically possible, Nova doesn't really do a very good
>> job at mixed guest arch hosting - I'm pretty sure there are broken
>> pieces waiting to bite people who try it.
>>
>> Does anyone have any thoughts on this topic ?
>>
>> Indeed, is there anyone here who genuinely use virt_type=qemu in a
>> production deployment of OpenStack who might have other reasons that
>> I've missed ?
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-
>> http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org  -o-
>> http://virt-manager.org :|
>> |: http://autobuild.org   -o-
>> http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org   -o-
>> http://live.gnome.org/gtk-vnc :|
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>

-- 
DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Matt Riedemann

On 5/3/2016 10:01 AM, Daniel P. Berrange wrote:

Hello Operators,

One of the things that constantly puzzles me when reading the user
survey results wrt hypervisor is the high number of respondants
claiming to be using QEMU (as distinct from KVM).

As a reminder, in Nova saying virt_type=qemu causes Nova to use
plain QEMU with pure CPU emulation which is many many times slower
to than native CPU performance, while virt_type=kvm causes Nova to
use QEMU with KVM hardware CPU acceleration which is close to native
performance.

IOW, virt_type=qemu is not something you'd ever really want to use
unless you had no other options due to the terrible performance it
would show. The only reasons to use QEMU are if you need non-native
architecture support (ie running arm/ppc on x86_64 host), or if you
can't do KVM due to hardware restrictions (ie ancient hardware, or
running compute hosts inside virtual machines)

Despite this, in the 2016 survey 10% claimed to be using QEMU in
production & 3% in PoC and dev, in 2014 it was even higher at 15%
in prod & 12% in PoC and 28% in dev.

Personally my gut feeling says that QEMU usage ought to be in very
low single figures, so I'm curious as to the apparent anomoly.

I can think of a few reasons

 1. Respondants are confused as to the difference between QEMU
and KVM, so are saying QEMU, despite fact they are using KVM.

 2. Respondants are confused as to the difference between QEMU
and KVM, so have mistakenly configured their nova hosts to
use QEMU instead of KVM and suffering poor performance without
realizing their mistake.

 3. There are more people than I expect who are running their
cloud compute hosts inside virtual machines, and thus are
unable to use KVM.

 4. There are more people than I expect who are providing cloud
hosting for non-native architectures. eg ability to run an
arm7/ppc guest image on an x86_64 host and so genuinely must
use QEMU

If items 1 / 2 are the cause, then by implication the user survey
is likely under-reporting the (already huge) scale of the KVM usage.

I can see 3. being a likely explanation for high usage of QEMU in a
dev or PoC scenario, but it feels unlikely for a production deployment.

While 4 is technically possible, Nova doesn't really do a very good
job at mixed guest arch hosting - I'm pretty sure there are broken
pieces waiting to bite people who try it.

Does anyone have any thoughts on this topic ?

Indeed, is there anyone here who genuinely use virt_type=qemu in a
production deployment of OpenStack who might have other reasons that
I've missed ?

Regards,
Daniel



Another thought is that deployment tools are just copying what devstack 
does, or what shows up in the configs in our dsvm gate jobs, and those 
are using qemu, so they assume that's what should be used since that's 
what we gate on.


We should be clearer in our help text for the virt_type config option 
about using kvm vs. qemu. Today it just says:


# Libvirt domain type (string value)
# Allowed values: kvm, lxc, qemu, uml, xen, parallels
#virt_type = kvm

It'd be good to point out the performance impacts and limitations of kvm 
vs qemu in that help text. There might already be a patch up for review 
that makes this better.
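Something along these lines, for example (just a sketch of possible wording,
not an actual patch):

  # Libvirt domain type (string value)
  # Allowed values: kvm, lxc, qemu, uml, xen, parallels
  # kvm uses hardware-assisted virtualization and should be preferred on any
  # host with VT-x/AMD-V support; qemu is pure software emulation, many times
  # slower, and only needed when KVM is unavailable (e.g. compute hosts that
  # are themselves VMs without nested virt, or non-native guest architectures).
  #virt_type = kvm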


--

Thanks,

Matt Riedemann


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread David Medberry
The only reason I can think of is that they are doing nested VMs and don't
have the right nesting flag enabled on the base hypervisor.
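For reference, on an Intel host the nesting flag can be checked and enabled
like this (a sketch; AMD hosts use the kvm_amd module and its own 'nested'
parameter instead):

  cat /sys/module/kvm_intel/parameters/nested     # N means nesting is off
  # enable it (requires reloading the module, i.e. no guests running)
  modprobe -r kvm_intel && modprobe kvm_intel nested=1
  # make it persistent across reboots
  echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf

Without nesting, a compute host that is itself a VM has no usable /dev/kvm,
so virt_type=kvm cannot work and such deployments end up configured with
virt_type=qemu.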

On Tue, May 3, 2016 at 9:01 AM, Daniel P. Berrange 
wrote:

> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondants
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation which is many many times slower
> to than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomoly.
>
> I can think of a few reasons
>
>  1. Respondants are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondants are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Daniel P. Berrange
Hello Operators,

One of the things that constantly puzzles me when reading the user
survey results wrt hypervisor is the high number of respondents
claiming to be using QEMU (as distinct from KVM).

As a reminder, in Nova saying virt_type=qemu causes Nova to use
plain QEMU with pure CPU emulation, which is many, many times slower
than native CPU performance, while virt_type=kvm causes Nova to
use QEMU with KVM hardware CPU acceleration which is close to native
performance.

IOW, virt_type=qemu is not something you'd ever really want to use
unless you had no other options due to the terrible performance it
would show. The only reasons to use QEMU are if you need non-native
architecture support (ie running arm/ppc on x86_64 host), or if you
can't do KVM due to hardware restrictions (ie ancient hardware, or
running compute hosts inside virtual machines)

Despite this, in the 2016 survey 10% claimed to be using QEMU in
production & 3% in PoC and dev, in 2014 it was even higher at 15%
in prod & 12% in PoC and 28% in dev.

Personally, my gut feeling says that QEMU usage ought to be in very
low single figures, so I'm curious as to the apparent anomaly.

I can think of a few reasons

 1. Respondents are confused as to the difference between QEMU
and KVM, so are saying QEMU, despite the fact that they are using KVM.

 2. Respondents are confused as to the difference between QEMU
and KVM, so have mistakenly configured their nova hosts to
use QEMU instead of KVM and suffering poor performance without
realizing their mistake.

 3. There are more people than I expect who are running their
cloud compute hosts inside virtual machines, and thus are
unable to use KVM.

 4. There are more people than I expect who are providing cloud
hosting for non-native architectures. eg ability to run an
arm7/ppc guest image on an x86_64 host and so genuinely must
use QEMU

If items 1 / 2 are the cause, then by implication the user survey
is likely under-reporting the (already huge) scale of the KVM usage.

I can see 3. being a likely explanation for high usage of QEMU in a
dev or PoC scenario, but it feels unlikely for a production deployment.

While 4 is technically possible, Nova doesn't really do a very good
job at mixed guest arch hosting - I'm pretty sure there are broken
pieces waiting to bite people who try it.

Does anyone have any thoughts on this topic ?

Indeed, is there anyone here who genuinely uses virt_type=qemu in a
production deployment of OpenStack who might have other reasons that
I've missed ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-operators] [fuel] fuel-mirror

2016-05-03 Thread Drakopoulos, Dionisis (Nokia - GR/Athens)
Hello world!
Using FUEL release 8 and in specific fuel-mirror to create a local Ubuntu & 
OpenStack repository.

Is there a feature which can validate that all mandatory packages have been 
downloaded successfully and that a new IaaS can be commissioned without any 
issue?

Or any other solution - suggestion which will accomplish the same goal?
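To make the goal concrete, the kind of check being asked for could be
approximated today from a throwaway node or container pointed at the local
mirror (a sketch; the mirror URL, release name and package list are
placeholders):

  echo "deb http://<local-mirror>/ubuntu trusty main universe" > /etc/apt/sources.list.d/local-mirror.list
  apt-get update
  apt-get install --dry-run <list of mandatory packages>

apt-get update fails on missing or corrupt index files, and the dry-run
fails on any package or dependency that did not make it into the mirror.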

Thank you in advance!

Sent from my Windows Phone
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [puppet] complete this 1 minute survey if you're Puppet OpenStack user

2016-05-03 Thread Emilien Macchi
http://goo.gl/forms/7VYibKHx1c

We would like to gather feedback of what our users are running, so we
can improve our CI and update the versions of Puppet / Ruby /
Operating Systems that we're gating.

Thanks a lot for your time,
-- 
Emilien Macchi

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration

2016-05-03 Thread Wijngaarden, Pieter van
Hi Saverio,



Yes, in the end I was able to get it working! The issue was related to my proxy 
server pipeline config (filter:authtoken). I did not find pointers to updated 
documentation though.



When I had updated the [filter:authtoken] configuration in 
/etc/swift/proxy-server.conf, everything worked. In my case the values auth_uri 
and auth_url were not configured correctly:



[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://<keystone host>:5443
auth_url = http://<keystone host>:35357
auth_plugin = password
project_name = service
project_domain_id = default
user_domain_id = default
username = swift
password = X



I don’t know why that meant that regular token validation worked, but 
cross-tenant did not



(unfortunately it’s a test cluster so I don’t have history on what it was 
before I changed it :( )



What works for me now (using python-swiftclient) is the following. I hope that 
the text formatting survives in the email:



1.   A user with complete ownership over the account (say account X)
executes

     a.  swift post <container> --read-acl '<project_id>:<user_id>'
     b.  or
     c.  swift post <container> --read-acl '<project_id>:*'

2.   A user in the <other project> account can now list the container and
get objects in the container by doing:

     a.  swift list <container> --os-storage-url <storage URL of account X> --os-auth-token <token>
     b.  or
     c.  swift download <container> <object> --os-storage-url <storage URL of account X> --os-auth-token <token>



Note that you can review the full storage URL for an account by doing swift 
stat -v.



In this case, the user in step 2 is not able to do anything else in account X 
besides do object listing in the container and get its objects, which is what I 
was aiming for. What does not work for me is if I set the read-acl to
'<project_id>' only, even though that should work according to the
documentation. If you want to allow all users in another project read access to
a container, use '<project_id>:*' as the read-ACL.
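To make the steps above concrete with made-up values (everything below is
illustrative; substitute your own project ID, container name, storage URL
and token):

  # owner of account X shares a container with all of project Y
  swift post shared-data --read-acl 'd78f2a2e7c3f4a10a2b1c9e5f6a7b8c9:*'

  # any user in project Y, pointing explicitly at account X's storage URL
  # (the owner can read that URL from 'swift stat -v', as noted above)
  swift list shared-data \
    --os-storage-url https://swift.example.com:8080/v1/AUTH_<account X project id> \
    --os-auth-token $OS_AUTH_TOKEN

The read ACL is keyed on project Y's ID, while the storage URL identifies
account X, which is why both pieces are needed.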



I hope this helps!



With kind regards,

Pieter van Wijngaarden





-Original Message-
From: Saverio Proto [mailto:ziopr...@gmail.com]
Sent: dinsdag 3 mei 2016 12:44
To: Wijngaarden, Pieter van 
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Swift ACL's together with Keystone (v3) 
integration



Hello Pieter,



I did run into the same problem today. Did you find pointers to more updated 
documentation ? Were you able to configure the cross tenant read ACL ?



thank you



Saverio





2016-04-20 13:48 GMT+02:00 Wijngaarden, Pieter van <pieter.van.wijngaar...@philips.com>:

> Hi all,

>

> I’m playing around with a Swift cluster (Liberty) and cannot get the

> Swift ACL’s to work. My objective is to give users from one project

> (and thus Swift account?) selective access to specific containers in another 
> project.

>

> According to

> http://docs.openstack.org/developer/swift/middleware.html#keystoneauth

> , the swift/keystoneauth plugin should support cross-tenant (now

> cross-project) ACL’s by setting the read-acl of a container to something like:

>

> swift post <container> --read-acl '<project_id>:<user_id>'

>

> Using a project name instead of a UUID should be supported if all

> projects are in the default domain.

>

> But if I set this for a user in a different project / different swift

> account, it doesn’t seem to work. The last reference to Swift

> container ACL’s from the archives is somewhere in 2011..

>

> I have found a few Swift ACL examples / tutorials online, but they are

> all outdated or appear to use special / proprietary middleware. Does

> anybody have (or can anybody create) an example that is up-to-date for

> OpenStack Liberty or later, and shows container ACL’s together with

> Keystone integration?

>

> What I would like to do:

> - I have a bunch of users and projects in Keystone, and thus a bunch

> of (automatically created) Swift accounts

> - I would like to allow one specific user in a project (say project X)

> to access a container from a different project (Y)

> - And/or, I would like to allow all users in project X to access one

> specific container in project Y.

> Both these options should include listing the objects in the

> container, but exclude listing all containers in the other account.

>

> I hope there is someone who can help, thanks a lot in advance!

>

> With kind regards,

> Pieter van Wijngaarden

> System Architect

> Digital Pathology Solutions

> Philips Healthcare

>

> Veenpluis 4-6, Building QY-2.006, 5684 PC Best

> Tel: +31 6 2958 6736, Email: 
> pieter.van.wijngaar...@philips.com

>

>

>

>

>   

> The information contained in this message may be confidential and

> legally protected under applicable law. The message is intended solely

> for the addressee(s). If you are not the intended recipient, you are

> hereby notified that any use, forwarding, dissemination, or

> reproduction of this message is strictly prohibited and may be

> unlawful. If you are not the intende

Re: [Openstack-operators] Swift ACL's together with Keystone (v3) integration

2016-05-03 Thread Saverio Proto
Hello Pieter,

I did run into the same problem today. Did you find pointers to more
updated documentation ? Were you able to configure the cross tenant
read ACL ?

thank you

Saverio


2016-04-20 13:48 GMT+02:00 Wijngaarden, Pieter van <pieter.van.wijngaar...@philips.com>:
> Hi all,
>
> I’m playing around with a Swift cluster (Liberty) and cannot get the Swift
> ACL’s to work. My objective is to give users from one project (and thus
> Swift account?) selective access to specific containers in another project.
>
> According to
> http://docs.openstack.org/developer/swift/middleware.html#keystoneauth, the
> swift/keystoneauth plugin should support cross-tenant (now cross-project)
> ACL’s by setting the read-acl of a container to something like:
>
> swift post <container> --read-acl '<project_id>:<user_id>'
>
> Using a project name instead of a UUID should be supported if all projects
> are in the default domain.
>
> But if I set this for a user in a different project / different swift
> account, it doesn’t seem to work. The last reference to Swift container
> ACL’s from the archives is somewhere in 2011..
>
> I have found a few Swift ACL examples / tutorials online, but they are all
> outdated or appear to use special / proprietary middleware. Does anybody
> have (or can anybody create) an example that is up-to-date for OpenStack
> Liberty or later, and shows container ACL’s together with Keystone
> integration?
>
> What I would like to do:
> - I have a bunch of users and projects in Keystone, and thus a bunch of
> (automatically created) Swift accounts
> - I would like to allow one specific user in a project (say project X) to
> access a container from a different project (Y)
> - And/or, I would like to allow all users in project X to access one
> specific container in project Y.
> Both these options should include listing the objects in the container, but
> exclude listing all containers in the other account.
>
> I hope there is someone who can help, thanks a lot in advance!
>
> With kind regards,
> Pieter van Wijngaarden
> System Architect
> Digital Pathology Solutions
> Philips Healthcare
>
> Veenpluis 4-6, Building QY-2.006, 5684 PC Best
> Tel: +31 6 2958 6736, Email: pieter.van.wijngaar...@philips.com
>
>
>
>
>   
> The information contained in this message may be confidential and legally
> protected under applicable law. The message is intended solely for the
> addressee(s). If you are not the intended recipient, you are hereby notified
> that any use, forwarding, dissemination, or reproduction of this message is
> strictly prohibited and may be unlawful. If you are not the intended
> recipient, please contact the sender by return e-mail and destroy all copies
> of the original message.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific][accounting] Resource management

2016-05-03 Thread Stig Telfer
Thanks Tim, this is a great read and sets out CERN’s experience and use cases 
for enhanced accounting very well.

Best wishes,
Stig


> On 2 May 2016, at 18:02, Tim Bell  wrote:
> 
> 
> Following the discussions last week, I have put down a blog on how CERN does 
> its resource management for the accounting team on the Scientific Working 
> group. The areas we looked at were
>   • Lustre-as-a-Service in Manila
>   • Bare metal management
>   • Accounting
>   • User Stories and Reference Architectures
> The details are at 
> http://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html.
>  I list 5 needs to support these use cases. 
> 
> Need #1 : CPU performance based allocation and scheduling
> Need #2 : Nested Quotas
> Need #3 : Spot Market
> Need #4 : Reducing quota below utilization
> Need #5 : Components without quota
> 
> Given that the aim for the user stories input is that we consolidate a common 
> set of requirements, I’d welcome feedback on the Needs to see if these are 
> common across the scientific community and the relative priorities (mine are 
> unsorted).
> 
> Tim
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators