[openstack-dev] [Tricircle] Original Blueprint

2016-04-06 Thread Shinobu Kinjo
Hi Chaoyi,

In the blueprint you described for the PoC, there is information about tables. [1]
It seems to be out of date.

Do you think that information needs to be kept up to date in your blueprint?
If it's not necessary, it would be better to move it to a
different sheet or add a comment like "Won't be updated", I think.

[1] 
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g

Cheers,
Shinobu

-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/trove failed

2016-04-06 Thread Tony Breeds
On Wed, Apr 06, 2016 at 10:08:44AM +, Amrith Kumar wrote:
> Stable team, the reason for this failure of py27 in kilo is 
> 
>   https://bugs.launchpad.net/trove/+bug/1437179
> 
> The bug was fixed in liberty in commit 
> I92eaa1c98f5a58ce124210f2b6a2136dfc573a29 and therefore, at this time, only 
> impacts Kilo.
> 
> The issue is that the flavors are coming back in a non-deterministic order.
> The fix was to take this into account while comparing the expected output to
> the actual output. Please advise whether you would like this testing change
> to be backported to Kilo; it is a clean cherry-pick of the change listed
> above.
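
For illustration, a minimal sketch of the kind of order-insensitive
comparison the fix describes (names here are made up, not the actual
Trove patch):

    def assert_same_flavors(expected, actual):
        # flavors can come back in any order, so compare sorted by id
        by_id = lambda flavor: flavor['id']
        assert sorted(expected, key=by_id) == sorted(actual, key=by_id)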

By the letter of the stable policy this isn't appropriate BUT ... 

It's a small, self-contained fix that corrects an intermittent test failure.

I'd accept a backport, and I don't think it would endanger the
stable:follows-policy tag :)

Thanks again for staying on top of the trove periodic-stable queue.

Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday April 7th at 9:00 UTC

2016-04-06 Thread GHANSHYAM MANN
Hello everyone,


This is a reminder that the weekly OpenStack QA team IRC meeting will be
Thursday, April 7th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_April_7th_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
next meeting will be at:

04:00 EST
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT

Regards
Ghanshyam Mann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tricircle] Asynchronous Job Management Patches

2016-04-06 Thread joehuang
Hi, Zhiyuan,

You can also add some reviewers to the patch directly.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Vega Cai [mailto:luckyveg...@gmail.com]
Sent: Thursday, April 07, 2016 9:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Tricircle] Asynchronous Job Management Patches

Hi, I have submitted the second patch for asynchronous job management, please 
help to review. Here is the link: https://review.openstack.org/#/c/302110/

The first patch has been merged. Link for the first patch: 
https://review.openstack.org/#/c/295729/

BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-06 Thread Jeremy Stanley
On 2016-04-06 18:33:06 +0300 (+0300), Igor Belikov wrote:
[...]
> I suppose there are security issues when we talk about running
> custom code on bare metal slaves, but I'm not sure I understand
> the difference from running custom code on a virtual machine if
> bare metal nodes are isolated, don't contain any sensitive data
> and follow a regular redeployment procedure.
[...]

With a virtual machine, you can delete it and create a new one.
Nothing remains behind.

With a physical machine, arbitrary code running in the scope of a
test with root access can do _nasty_ things like backdoor your
server firmware with shims that even masquerade as the firmware
updater and persist through redeployments that include firmware
refreshes.

Physical servers persist, and are therefore vulnerable in this
scenario in ways which virtual servers are not.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Hello

2016-04-06 Thread Zhipeng Huang
Hi Lige,

You could also check out shipengfei's blog
http://shipengfei92.cn/play_tricircle_with_virtualbox on how to play with
devstack to set up a Tricircle env on your laptop : )

On Tue, Apr 5, 2016 at 11:54 AM, joehuang  wrote:

> Hi, Lige,
>
>
>
> Welcome to Tricircle. You can start from
> https://wiki.openstack.org/wiki/Tricircle; the todo-list is in
> https://etherpad.openstack.org/p/TricircleToDo, and last week we just
> discussed cross-OpenStack L2 networking:
> https://etherpad.openstack.org/p/TricircleCrossPodL2Networking.
>
>
>
> You can have a look and pick an interesting topic to work on.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> *From:* 李戈 [mailto:lgmcglm...@126.com]
> *Sent:* Tuesday, April 05, 2016 11:41 AM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [tricircle] Hello
>
>
>
> Hello Team,
>   I am lige, an OpenStack coder at China UnionPay, and glad to join our team.
>
>thx.
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Adam Young

On 04/06/2016 05:41 PM, Fox, Kevin M wrote:

> -1 for man in the middle susceptible solutions. This also doesn't solve all the
> issues listed in the spec, such as suspended nodes, snapshotted nodes, etc.

Self-signed MITM? That is only an issue if you are not trusting the
initial setup. I want a real CA, but we can't force everyone to do that.

Securing the message bus is essential anyway. Let's not piecemeal a
solution here.





Nova has several back channel mechanisms at its disposal. We should use one or 
more of them to solve the problem properly instead of opening a security hole 
in our solution to a security problem.

Such as:
  * The nova console is one mechanism that could be utilized as a secure back 
channel.
  * The vm based instances could add a virtual serial port as a back channel.
  * Some bare metal bmc's support virtual cd's which could be loaded with fresh 
credentials upon request.
  * The metadata server is reliable in certain situations.

I'm sure there are more options too.

The instance user spec covers a lot of that stuff.

I'm ok if we want to refactor the instance user spec to cover creating phase 1 
credentials that are intended to be used for things other then getting a 
keystone token. It could be used to register/reregister with ipa, chef, puppet, 
etc. We just need to reword the spec to cover that use case too.

I'm also not tied to the implementation listed; it just needs to meet the 
requirements.

Thanks,
Kevin


From: Adam Young [ayo...@redhat.com]
Sent: Wednesday, April 06, 2016 2:09 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Minimal secure identification of a new VM

On 04/06/2016 05:42 AM, Daniel P. Berrange wrote:

On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:

We have a use case where we want to register a newly spawned Virtual machine
with an identity provider.

Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a
secure way to do this.  Injecting files does not provide a path that cannot
be seen by other VMs or machines in the system.

For our use case, a short lived One-Time-Password is sufficient, but for
others, I think asymmetric key generation makes more sense.

Is the following possible:

1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
infrastructure (somehow) that it has done so.

There's no currently secure channel for the guest to push information
to Nova.

We need to secure the message queue from the compute node to conductor.
This is very achievable:

1.  Each compute node gets its own rabbit user
2.  Messages from compute node to Conductor are validated as to what
node sent them

We should enable TLS on the network as well, or passwords can be
sniffed.  Self-signed is crappy, but probably sufficient for a baseline
deployment. Does not defend against MITM.  Puppet based deployments can
mitigate.
X509 client cert is a better auth mechanism than password, but not
essential.




   The best we have is the metadata service, but we'd need to
secure that with https, because the metadata server cannot be assumed
to be running on the same host as the VM & so the channel is not protected
against MITM attacks.

Also currently the metadata server is readonly with the guest pulling
information from it - it doesn't currently allow guests to push information
into it. This is nice because the metadata servers could theoretically be
locked down to prevent many interactions with the rest of nova - it should
only need read-only access to info about the guests it is serving. If we
turn the metadata server into a bi-directional service which can update
information about guests, then it opens it up as a more attractive avenue
of attack for a guest OS trying to breach the host infra. This is a fairly
general concern with any approach where the guest has to have the ability
to push information back into Nova.


2.  Nova Compute reads the public Key off the device and sends it to
conductor, which would then associate the public key with the server?

3.  A third party system could then validate the association of the public
key and the server, and build a work flow based on some signed document from
the VM?

Regards,
Daniel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [tacker] Refactoring heat-driver

2016-04-06 Thread Sridhar Ramaswamy
Now that the Mitaka release is out, this is a good time to consider refactoring
Tacker's heat-driver. This driver was one of the big modules we inherited,
and it has become even more bloated with recent enhancements.

I've captured some ideas on how to shuffle things out of this component in
the etherpad [1]. Thoughts ?

- Sridhar

[1] https://etherpad.openstack.org/p/tacker-newton-heatdriver-refactoring
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tricircle] Asynchronous Job Management Patches

2016-04-06 Thread Vega Cai
Hi, I have submitted the second patch for asynchronous job management,
please help to review. Here is the link:
https://review.openstack.org/#/c/302110/

The first patch has been merged. Link for the first patch:
https://review.openstack.org/#/c/295729/

BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-06 Thread Fei K Chen

Hi all,

We agree this is an interesting direction to explore in OpenStack,
and in fact we already offer FPGAs/GPUs as resources in our public
research cloud called SuperVessel (https://ptopenlab.com). With
modifications to community OpenStack, nearly all the features mentioned by
Roman Dobosz are supported, such as FPGA board detection, scheduling,
image management, and a user API layer. Here are some materials for
reference [1][2].

Unlike CPUs and GPUs, FPGA accelerators are currently developed without a
common protocol above PCIe. Combined with the huge variety of FPGA chips
and boards, this makes them hard to manage. Passing the FPGA through to
the application stack as a generic PCIe device is a straightforward way
to expose FPGA acceleration in the cloud, but it is far from easy to use:
it does not help with accelerator detection, reload, reconfiguration, or
deployment in the cloud.

The good news is that the FPGA community is actively promoting high-level
programming languages, such as OpenCL. These hide nearly all hardware
details from developers, making the "accelerator" portable across
different hardware. It is reasonable to expect the FPGA to become popular
acceleration hardware, and equally reasonable for the OpenStack community
to begin discussing support for FPGA resource management.


[1]
http://openpowerfoundation.org/blogs/fpga-acceleration-in-a-power8-cloud/
[2] http://dl.acm.org/citation.cfm?id=2597929



Best regards,
CHEN Fei





From:   Qiming Teng 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: Fei K Chen/China/IBM@IBMCN, Yong Hua Lin/China/IBM@IBMCN
Date:   2016/04/06 12:25
Subject:Re: [openstack-dev] [Nova] FPGA as a resource



Emm... finally this is brought up. We at IBM have already done some
work on FPGA/GPU resource management [1]. Let me bring the SMEs into
this discussion and see if together we can work out a concrete roadmap
to land this upstream.

Fei and Yonghua, this is indeed a very interesting topic for us.


[1] SuperVessel Cloud: https://ptopenlab.com/

Regards,
  Qiming

On Tue, Apr 05, 2016 at 02:27:30PM +0200, Roman Dobosz wrote:
> Hey all,
>
> At yesterday's scheduler meeting I raised the idea of bringing the
> FPGA into OpenStack as a resource, which could then be exposed
> to the VMs.
>
> The use cases and motivations for doing this are pretty broad -
> having such a chip ready on the computes can benefit both consumers of
> the technology and data center administrators. The possible uses of
> the hardware are very broad - the only limitations are human
> imagination and hardware capability - since it can be used to
> accelerate algorithms from compression and cryptography, through
> pattern recognition and transcoding, to voice/video analysis and
> processing and everything in between. Using an FPGA to perform data
> processing may significantly reduce CPU utilization, time, and power
> consumption, which is a benefit on its own.
>
> On the OpenStack side, unlike utilizing CPU or memory, an FPGA has to
> be programmed before a specific algorithm can actually be used. So in
> a simplified scenario, it might go like this:
>
> * The user selects a VM image which supports acceleration,
> * The scheduler selects an appropriate compute host with an FPGA available,
> * The compute host gets the request, programs the IP into the FPGA and
>   then boots the VM with the accelerator attached.
> * When the VM is removed, it may optionally erase the FPGA.
>
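
As a rough, standalone sketch of the bitstream-to-slot matching this
scenario implies (every name below is illustrative; this is not Nova
code):

    # Match a requested bitstream against a host's free FPGA slots.
    def pick_slot(requested_bitstream, free_slots):
        for slot in free_slots:
            if requested_bitstream in slot.get('compatible_bitstreams', []):
                return slot
        return None  # no compatible slot; the host would be filtered out

    slots = [{'id': 0, 'compatible_bitstreams': ['crypto-v1']},
             {'id': 1, 'compatible_bitstreams': ['transcode-v2']}]
    assert pick_slot('crypto-v1', slots)['id'] == 0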
> As you can see, it seems not complicated at this point. However, it
> becomes more complex due to the following things we also have to take
> into consideration:
>
> * recent FPGAs are divided into regions (or slots), and each of them
>   can be programmed separately
> * slots may or may not fit the same bitstream (the program the FPGA
>   is fed, the IP)
> * there are several products around (Altera, Xilinx, others) whose
>   bitstreams are incompatible, even between products of the same company
> * libraries which abstract the hardware layer, like AAL[1], and their
>   versions
> * for some products, there is a need to track usage of the memory
>   located on the PCI boards
> * some FPGAs can be exposed using SR-IOV while others cannot, which
>   implies different usage abilities
>
> In other words, it may be necessary to incorporate further actions:
>
> * properly discover an FPGA and its capabilities
> * schedule the right bitstream onto a matching unoccupied FPGA
>   device/slot
> * actually program the FPGA
> * provide libraries into the VM which are necessary for interaction
>   between the user program and the exposed FPGA (or AAL) (this may be
>   optional, since the user can upload a complete image with everything
>   in place)
> * keep bitstream images in some kind of service (Glance?) with some
>   way of identifying which image matches which FPGA
>
> All of that makes 

[openstack-dev] [Fuel] CI status after Mitaka branching

2016-04-06 Thread Aleksandra Fedorova
Hi, everyone,

here is the current CI status for Fuel:

* there are now regular 10.0 ISO builds which track the master branch; 9.0
builds have been switched to stable/mitaka

https://ci.fuel-infra.org/view/ISO/

* UCA deployment scenario has been added to regular Build Verification
Tests for both Mitaka (9.0) and master (10.0) branches.

https://ci.fuel-infra.org/view/ISO/job/10.0-community.main.ubuntu.uca_neutron_ha/

* master.* deployment tests use 10.0 ISO now

Don't forget to rebase patches on top of version bumps to pass fuel-library CI.

* there are new mitaka.* jobs which run deployment tests for
stable/mitaka branch

https://ci.fuel-infra.org/view/mitaka/

Known Issues:

* verify-fuel-web-on-fuel-ui is failing; a fix is on review:
https://review.openstack.org/#/c/302328/

* a fix for the noop fixtures is on review:
https://review.openstack.org/#/c/302255/

* the nightly builds table at ci.fuel-infra.org doesn't show 10.0 status.
  This is a work in progress; please refer to the /ISO/ view.

-- 
Aleksandra Fedorova
CI Team Lead
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api] advanced search criteria

2016-04-06 Thread Hirofumi Ichihara



On 2016/04/05 22:23, Ihar Hrachyshka wrote:

Hirofumi Ichihara  wrote:


Hi Ihar,

On 2016/04/05 7:57, Ihar Hrachyshka wrote:

Hi all,

in neutron, we have a bunch of configuration options to control 
advanced filtering features for API, f.e. allow_sorting, 
allow_pagination, allow_bulk, etc. Those options have default False 
values.
I saw the allow_bulk option defaults to True in
https://github.com/openstack/neutron/blob/master/neutron/common/config.py#L66

Well, I don't think anyone sets that option to False.


Yes, indeed only allow_sorting and allow_pagination are disabled by 
default.




In the base API controller class, we have support for both native 
sorting/pagination/bulk operations [implemented by the plugin 
itself], as well as a generic implementation for plugins without 
native support. But if corresponding allow_* options are left with 
their default False values, those advanced search/filtering criteria 
just don’t work, no matter whether the plugin support native 
filters, or not.


It seems weird to me that our API behaves differently depending on 
configuration options, and that we have those useful features 
disabled by default.


My immediate interest is to add native support for 
sorting/pagination for QoS service plugin; I have a patch for that, 
and I planned to add some API tests to validate that the features 
work, but I hit failures because those features are not enabled for 
the -api job.


Some questions:
- can we enable those features in -api job?
- is there any reason to keep default values for allow_* as False, 
and if not, can we switch to True?
- why do we even need to control those features with configuration 
options? can we deprecate and remove them?
I agree we should deprecate and remove the options, but I think that we
need more tests if we enable them by default.

It looks like there are very few tests (UT only).


That’s a good suggestion. I started a patch to enable those two 
options, plus add first tests for the feature:


https://review.openstack.org/#/c/301634/
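
For reference, the options under discussion are toggled in neutron.conf;
enabling the advanced criteria looks like this (an illustrative snippet;
allow_sorting and allow_pagination default to False, allow_bulk to True):

    [DEFAULT]
    allow_sorting = True
    allow_pagination = True
    allow_bulk = True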

For now it covers only for networks. I wonder how we envision the 
coverage. Do we want to have test cases per resource? Any ideas on how 
to make the code more generic to avoid code duplication? For example, 
I could move those test cases into a base class that would require 
some specialization for each resource that we want to cover 
(get/create methods, primary key, …).
The patch is reasonable to me as a first step. Second, I agree with making
it more generic. I think that we should have tests per resource, but we
can do that in future work.




Also, do we maybe want to split the patch into two pieces:
- first one adding tests [plus enabling those features for API job];
- second one changing the default value for the options.

+1

Thanks,
Hirofumi



Ihar

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread reedip banerjee
Hi Graham,
"Service" is a pretty common word and is used by several components. But
the distinguishing point between Keystone, Designate, Nova, et al. is
the component (in this case, dns) to which the subcommand belongs.

Also, the commands below make sense:
>openstack dns service list
>openstack dns service show


You can continue with these options IMHO.
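
For context, this is roughly how such a namespaced subcommand is wired
into an OSC plugin through entry points in setup.cfg (the module and
class names below are illustrative, not the actual designateclient code):

    [entry_points]
    openstack.cli.extension =
        dns = designateclient.osc.plugin
    openstack.dns.v2 =
        dns_service_list = designateclient.v2.cli.service:ListServiceStatuses
        dns_service_show = designateclient.v2.cli.service:ShowServiceStatus

OSC then maps the entry point name "dns_service_list" to the command
"openstack dns service list".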



On Wed, Apr 6, 2016 at 8:23 PM, Morgan Fainberg 
wrote:

>
>
> On Wed, Apr 6, 2016 at 7:44 AM, Sheel Rana Insaan  > wrote:
>
>> Hey Graham,
>>
>> I just added service for block storage, we have named these
>> openstack volume service list/enable/disable.
>>
>> The same protocol was used for nova as well previously.
>>
>> Hope this will help.
>>
>> Regards,
>> Sheel Rana
>> On Apr 6, 2016 7:54 PM, "Hayes, Graham"  wrote:
>>
>>> On 06/04/2016 15:20, Qiming Teng wrote:
>>> > On Wed, Apr 06, 2016 at 01:59:29PM +, Hayes, Graham wrote:
>>> >> Designate is adding support for viewing the status of the various
>>> >> services that are running.
>>> >>
>>> >> We have added support to our openstack client plugin, but were looking
>>> >> for guidance / advices on what the actual commands should be.
>>> >>
>>> >> We have implemented it in [1] as "dns service list" and
>>> >> "dns service show" - but this is name-spacing the command.
>>> > do you mean?
>>> >
>>> > openstack dns service list
>>> > openstack dns service show
>>>
>>> sorry, yes - I just included the sub commands.
>>>
>>> >
>>> >> Is there an alternative? "service" is already taken by keystone, and
>>> if
>>> >> we take "service-status" (or other generic term) it will most likely
>>> >> conflict when nova / cinder / heat / others add support of their
>>> service
>>> >> listings to OSC.
>>> >>
>>> >> What is the protocol here? First to grab it wins?
>>> >>
>>> >> Thanks
>>> >>
>>> >> - Graham
>>> >>
>>> >> 1 - https://review.openstack.org/284103
>>> >>
>>>
>>
> I think the offered options make a lot of sense:
>
> openstack dns service list
> openstack dns service show
>
>
> I would encourage continued use of the namespacing like this for future
> subcommands where possible (as it seems cinder and nova are already on
> track to do).
>
> --Morgan
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks and Regards,
Reedip Banerjee
IRC: reedip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-06 Thread Paul Belanger
On Wed, Apr 06, 2016 at 06:33:06PM +0300, Igor Belikov wrote:
> Hey Stackers,
> 
> In Fuel we use bare metal testing for deployment tests. This is essentially a 
> core component of Fuel CI and as much as we like having it around we’d rather 
> spend time and resources integrating with upstream instead of growing and 
> polishing third-party testing solutions.
> 
> On one of the previous Infra team meetings we discussed the possibility of 
> bringing testing on bare metal nodes to openstack-infra[1]. This is not a new 
> topic, a similar question was brought up by Magnum some time ago[2] and there 
> might have been other times this was discussed. We use bare metal testing for Fuel, I 
> assume that Magnum still wants to use it, TripleO would probably also fit in 
> the picture in some way (though I’m not familiar with current scheme of 
> TripleO CI) - hope this is enough to consider implementation of generic way 
> to use baremetal nodes in CI.
> 
> The most obvious way to do this seems to be using existing OpenStack service 
> for bare metal provisioning - Ironic. Ironic fits pretty well in existing 
> Infra workflow, Ironic usage (in form of Rackspace's OnMetal) was previously 
> discussed in Magnum thread[2] with the main technical issue being inability 
> to use custom glance images to boot instances. AFAIK the situation didn't 
> change much with OnMetal, but Ironic perfectly supports booting from glance 
> images created by diskimage-builder - which is exactly the way Nodepool 
> currently works for virtual machines.
> 
> With the work currently going on InfraCloud there's a possibility to properly 
> design and implement bare metal testing, Zuul v3 spec[3] also brings a number 
> of relevant changes to Nodepool. So, summing up some points of possible 
> implementation:
> * Multiple pools of bare metal nodes under Ironic management are available as 
> a part of InfraCloud
> * Ironic acts as an additional hypervisor for Nova, providing the ability to 
> use bare metal nodes by booting an instance with a specific flavor
> * Nodepool manages booting bare metal instances using the images generated 
> with diskimage-builder and stored in Glance
> * Nodepool also manages redeployment of bare metal nodes - redeploying a 
> glance image on a bare metal node takes only a few minutes, but time may 
> depend on a set of cleaning steps used to redeploy a node
> * Bare metal instances are exposed to Jenkins (or a different worker in case 
> of Zuul v3) by Nodepool 
> 
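For reference, the image-building step could look something like this with
diskimage-builder's "baremetal" element, which produces the kernel,
ramdisk, and image files Ironic expects (the output name and element list
here are illustrative):

    disk-image-create -o baremetal-slave ubuntu baremetal dhcp-all-interfaces
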
> I suppose there are security issues when we talk about running custom code on 
> bare metal slaves, but I'm not sure I understand the difference from running 
> custom code on a virtual machine if bare metal nodes are isolated, don't 
> contain any sensitive data and follow a regular redeployment procedure.
> 
> I'd like to add that we're ready to start donating hardware from the Fuel CI 
> pool (2 pools in different locations, to be accurate) to see this initiative 
> taking off.
> 
> Please, share your thoughts and opinions.
> 
Personally, I don't see this happening in the short term.  Currently infracloud
is down (moving data centers) and zuulv3 still has work that needs to be
completed.  While bare metal is a nice-to-have, I don't see us using infracloud
to do this right now. At our recent -infra midcycle, we talked about not wanting
infracloud to become a dominant cloud for nodepool.  Meaning, we'd be bringing
it up and down at specific intervals for tasks like upgrading to the current
release.

I agree that zuulv3 has a lot of potential (and super excited to see it come
online) but we also need to work on ansible playbooks to make all this happen.

TL;DR I see bare metal happening someday, but I'm not sure 2016 is in the cards.

My personal opinion.

[4] http://docs.openstack.org/infra/system-config/infra-cloud.html
> [1]http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-29-19.03.log.html
> [2]http://lists.openstack.org/pipermail/openstack-infra/2015-September/003138.html
> [3]http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Morgan Fainberg
On Wed, Apr 6, 2016 at 6:29 PM, David Stanek  wrote:

>
> On Wed, Apr 6, 2016 at 3:26 PM Boris Pavlovic 
> wrote:
>
>>
>> 2) This will reduce scope of Keystone, which means 2 things
>> 2.1) Smaller code base that has less issues and is simpler for testing
>> 2.2) Keystone team would be able to concentrate more on fixing
>> perf/scalability issues of authorization, which is crucial at the moment
>> for large clouds.
>>
>
> I'm not sure that this is entirely true. If we truly just split up the
> project, meaning we don't remove functionality, then we'd have the same
> number of bugs and work. It would just be split across two projects.
>
> I think the current momentum to get out of the authn business is still our
> best bet. As Steve mentioned this is ongoing work.
>
> -- David
>
>
What everyone else said... but add in the need to either pass the
AuthN over to the Assignment/AuthZ API or bake it in (via an Apache
module?), and we are basically where we are now.

Steve alluded to splitting out the authentication bit (but not into a new
service); the idea there is to make it so AuthN is not part of the CRUD
interface of the server. All that being said, AuthN and AuthZ are going to
be hard to split into two separate services, and with the exception of the
unfounded "scope" benefit, we can already handle most of what you've
proposed with zero changes to Keystone.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] meeting topics for 4/7/2016 networking-sfc project IRC meeting

2016-04-06 Thread Cathy Zhang
Hi everyone,

Here are some topics I have in mind for tomorrow's meeting discussion. Feel 
free to add more.


1.   Source port specification in the FC

2.   Networking-sfc SFC driver for OVN

3.   Networking-sfc SFC driver for ODL

4.   Networking-sfc integration with ONOS completion status update

5.   Tacker Driver for networking-sfc

6.   Consistent repository rule for networking-sfc related drivers: 
northbound Tacker driver and southbound ONOS driver, ODL driver, OVN driver

7.   Generate the Data path chain path ID

8.   Dynamic service chain update without service interruption

9.   Existing Bug scrub

Thanks,
Cathy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread David Stanek
On Wed, Apr 6, 2016 at 3:26 PM Boris Pavlovic 
wrote:

>
> 2) This will reduce scope of Keystone, which means 2 things
> 2.1) Smaller code base that has less issues and is simpler for testing
> 2.2) Keystone team would be able to concentrate more on fixing
> perf/scalability issues of authorization, which is crucial at the moment
> for large clouds.
>

I'm not sure that this is entirely true. If we truly just split up the
project, meaning we don't remove functionality, then we'd have the same
number of bugs and work. It would just be split across two projects.

I think the current momentum to get out of the authn business is still our
best bet. As Steve mentioned this is ongoing work.

-- David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Fox, Kevin M
-1 for man in the middle susceptible solutions. This also doesn't solve all the 
issues listed in the spec, such as suspended nodes, snapshotted nodes, etc.

Nova has several back channel mechanisms at its disposal. We should use one or 
more of them to solve the problem properly instead of opening a security hole 
in our solution to a security problem.

Such as:
 * The nova console is one mechanism that could be utilized as a secure back 
channel.
 * The vm based instances could add a virtual serial port as a back channel.
 * Some bare metal bmc's support virtual cd's which could be loaded with fresh 
credentials upon request.
 * The metadata server is reliable in certain situations.

I'm sure there are more options too.

The instance user spec covers a lot of that stuff.

I'm ok if we want to refactor the instance user spec to cover creating phase 1 
credentials that are intended to be used for things other than getting a 
keystone token. It could be used to register/reregister with ipa, chef, puppet, 
etc. We just need to reword the spec to cover that use case too.

I'm also not tied to the implementation listed; it just needs to meet the 
requirements.

Thanks,
Kevin


From: Adam Young [ayo...@redhat.com]
Sent: Wednesday, April 06, 2016 2:09 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Minimal secure identification of a new VM

On 04/06/2016 05:42 AM, Daniel P. Berrange wrote:
> On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
>> We have a use case where we want to register a newly spawned Virtual machine
>> with an identity provider.
>>
>> Heat also has a need to provide some form of Identity for a new VM.
>>
>>
>> Looking at the set of utilities right now, there does not seem to be a
>> secure way to do this.  Injecting files does not provide a path that cannot
>> be seen by other VMs or machines in the system.
>>
>> For our use case, a short lived One-Time-Password is sufficient, but for
>> others, I think asymmetric key generation makes more sense.
>>
>> Is the following possible:
>>
>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
>> infrastructure (somehow) that it has done so.
> There's no currently secure channel for the guest to push information
> to Nova.
We need to secure the message queue from the compute node to conductor.
This is very achievable:

1.  Each compute node gets its own rabbit user
2.  Messages from compute node to Conductor are validated as to what
node sent them

We should enable TLS on the network as well, or passwords can be
sniffed.  Self-signed is crappy, but probably sufficient for a baseline
deployment. Does not defend against MITM.  Puppet based deployments can
mitigate.
X509 client cert is a better auth mechanism than password, but not
essential.



>   The best we have is the metadata service, but we'd need to
> secure that with https, because the metadata server cannot be assumed
> to be running on the same host as the VM & so the channel is not protected
> against MITM attacks.
>
> Also currently the metadata server is readonly with the guest pulling
> information from it - it doesn't currently allow guests to push information
> into it. This is nice because the metadata servers could theoretically be
> locked down to prevent many interactions with the rest of nova - it should
> only need read-only access to info about the guests it is serving. If we
> turn the metadata server into a bi-directional service which can update
> information about guests, then it opens it up as a more attractive avenue
> of attack for a guest OS trying to breach the host infra. This is a fairly
> general concern with any approach where the guest has to have the ability
> to push information back into Nova.
>
>> 2.  Nova Compute reads the public Key off the device and sends it to
>> conductor, which would then associate the public key with the server?
>>
>> 3.  A third party system could then validate the association of the public
>> key and the server, and build a work flow based on some signed document from
>> the VM?
> Regards,
> Daniel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security][Barbican] BYOK

2016-04-06 Thread Douglas Mendizábal

Hi Rob,

The Barbican team is dedicating a Fishbowl session to BYOK for the summit:

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9155

- Doug


On 4/6/16 5:12 AM, Clark, Robert Graham wrote:
> Hi All,
> 
> We’ve had lots of discussion about BYOK and most of it has lead to
> “lets discuss it at the summit”.
> 
> I’ve got some time for this in the security schedule, I’m checking
> – is there some other place where this is already tabled to be
> discussed?
> 
> -Rob
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Adam Young

On 04/06/2016 04:56 PM, Dolph Mathews wrote:
For some historical perspective, that's basically how v2 was designed. 
The "public" service (port 5000) did nothing but the auth flow. The 
"admin" service (port 35357) was identity management.


Unfortunately, there are (perhaps uncommon) authentication flows 
where, for example, you need to 1) authenticate for an unscoped token, 
2) retrieve a list of tenants where you have some authorization, 3) 
re-authenticate for a scoped token. There was a lot of end-user 
confusion over what port was used for what operations (Q: "Why is my 
curl request returning 404? I'm doing what the docs say to do!" A: 
"You're calling the wrong port."). More and more calls straddled the 
line between the two APIs, blurring their distinction.


The approach we took in v3 was to consolidate the APIs into a single, 
functional superset, and use RBAC to distinguish between use cases in 
a more flexible manner.


On Wed, Apr 6, 2016 at 2:26 PM, Boris Pavlovic > wrote:


Hi stackers,

I would like to suggest a very simple idea: splitting the
authentication part of Keystone out into a separate project.

Such a change has 2 positive outcomes:
1) It will be quite simple to create a scalable, high-performance
authentication service based on very mature projects like
Kerberos[1] and OpenLDAP[2].


You can basically do this today if you just focus on implementing 
drivers for the few bits of keystone you need, and disable the rest.


We should deprecate the userid/password in the token body and use the 
BasicAuth mechanism in its place.  Then password auth could be a 
federated call like anything else.  We could do that logic in middleware 
instead of an Apache module.
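
To make the contrast concrete: today a password authentication against
Keystone v3 carries the secret in the token request body, e.g.

    curl -s -H "Content-Type: application/json" \
      -d '{"auth": {"identity": {"methods": ["password"],
            "password": {"user": {"name": "demo",
              "domain": {"id": "default"}, "password": "secret"}}}}}' \
      http://keystone:5000/v3/auth/tokens

whereas the proposal above would let a client use the standard HTTP
mechanism instead, along the lines of

    curl -s -u demo:secret http://keystone:5000/v3/auth/tokens

(The second form is a sketch of the proposal, not something Keystone
supports today.)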


A comparable middleware/Apache module could also be used in other 
services, allowing the identity inside of Keystone to be used with 
remote services.


Ideally, we would get out of the business of distributing tokens 
altogether, and use the standard mechanism for authentication that the 
web has when talking to the services directly.  Keystone then reduces to 
a service catalog lookup for end users.






2) This will reduce scope of Keystone, which means 2 things
2.1) Smaller code base that has less issues and is simpler for testing
2.2) Keystone team would be able to concentrate more on fixing
perf/scalability issues of authorization, which is crucial at the
moment for large clouds.


(2.2) is particularly untrue, because this will cause at least 2 
releases worth of refactoring work for everyone, and another 6 
releases justifying to deployers why their newfound headaches are 
worthwhile. Perhaps after burning those ~4 years of productivity, we'd 
be able to get back to "fixing perf/scalability issues of authorization."



Thoughts?

[1] http://web.mit.edu/kerberos/
[2] http://ldapcon.org/2011/downloads/hummel-slides.pdf

Best regards,
Boris Pavlovic

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Steve Martinelli

This has been our hidden agenda for many releases (minus the project
split). The other projects that you mention are much better at
handling authentication, and many enterprises already have these in place.
We have been trying to get out of the identity management (and
consequently, the authentication) space for a while. That's why we have
been focusing on federated identity and removing write operations to LDAP.

Enter the admin, service users, and SQL-backed users. Many existing
deployments store users in an SQL-based backend. We pushed back on adding
features for this use case for a while, but there are enough folks out
there that want to do this, which is why we are approving a spec to enforce
password lifecycle in the N release. So the new project/repo would have to
handle this case as well.

Architecturally, I can see why you would want to split things up; it is a
logical break. But I also see a few arguments against a split: 1) we
already support Kerberos and OpenLDAP (and other auth services); 2) I don't
think we have trouble with scope / not enough contribution; and 3)
inertia: adopting new services takes a long time (see the v2 to v3
transition), and this would add to that pile.

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   Boris Pavlovic 
To: OpenStack Development Mailing List

Date:   2016/04/06 03:27 PM
Subject:[openstack-dev] [tc][ptl][keystone] Proposal to split
authentication part out of Keystone to separated project



Hi stackers,

I would like to suggest a very simple idea: splitting the authentication
part of Keystone out into a separate project.

Such a change has 2 positive outcomes:
1) It will be quite simple to create a scalable, high-performance
authentication service based on very mature projects like Kerberos[1] and
OpenLDAP[2].

2) This will reduce scope of Keystone, which means 2 things
2.1) Smaller code base that has less issues and is simpler for testing
2.2) Keystone team would be able to concentrate more on fixing
perf/scalability issues of authorization, which is crucial at the moment
for large clouds.

Thoughts?

[1] http://web.mit.edu/kerberos/
[2] http://ldapcon.org/2011/downloads/hummel-slides.pdf

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Adam Young

On 04/06/2016 05:42 AM, Daniel P. Berrange wrote:

On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:

We have a use case where we want to register a newly spawned Virtual machine
with an identity provider.

Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a
secure way to do this.  Injecting files does not provide a path that cannot
be seen by other VMs or machines in the system.

For our use case, a short lived One-Time-Password is sufficient, but for
others, I think asymmetric key generation makes more sense.

Is the following possible:

1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
infrastructure (somehow) that it has done so.

There's no currently secure channel for the guest to push information
to Nova.
We need to secure the message queue from the compute node to conductor.  
This is very achievable:


1.  Each compute node gets its own rabbit user
2.  Messages from compute node to Conductor are validated as to what 
node sent them


We should enable TLS on the network as well, or passwords can be 
sniffed.  Self-signed is crappy, but probably sufficient for a baseline 
deployment. Does not defend against MITM.  Puppet based deployments can 
mitigate.
X509 client cert is a better auth mechanism than password, but not 
essential.
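
A sketch of what that might look like in each compute node's nova.conf
(the per-node rabbit user is the proposal here, not current practice;
paths and names are illustrative):

    [oslo_messaging_rabbit]
    # one rabbit user per compute node, so messages can be attributed
    rabbit_userid = compute-01
    rabbit_password = <per-node secret>
    # TLS so credentials can't be sniffed on the wire
    rabbit_use_ssl = true
    kombu_ssl_ca_certs = /etc/pki/CA/cacert.pem
    # optional client certs: stronger auth than passwords, not essential
    kombu_ssl_certfile = /etc/pki/compute-01/cert.pem
    kombu_ssl_keyfile = /etc/pki/compute-01/key.pem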





  The best we have is the metadata service, but we'd need to
secure that with https, because the metadata server cannot be assumed
to be running on the same host as the VM & so the channel is not protected
against MITM attacks.

Also currently the metadata server is readonly with the guest pulling
information from it - it doesn't currently allow guests to push information
into it. This is nice because the metadata servers could theoretically be
locked down to prevent many interactions with the rest of nova - it should
only need read-only access to info about the guests it is serving. If we
turn the metadata server into a bi-directional service which can update
information about guests, then it opens it up as a more attractive avenue
of attack for a guest OS trying to breach the host infra. This is a fairly
general concern with any approach where the guest has to have the ability
to push information back into Nova.


2.  Nova Compute reads the public Key off the device and sends it to
conductor, which would then associate the public key with the server?

3.  A third party system could then validate the association of the public
key and the server, and build a work flow based on some signed document from
the VM?

Regards,
Daniel



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Winstackers][Hyper-V] Newton Design Summit

2016-04-06 Thread Claudiu Belu
Hello everyone,

We are going to have a session at the upcoming Austin Design Summit about
the upcoming features in OpenStack for Hyper-V / Windows / other Microsoft
technologies.

We have started writing an agenda for the Winstackers work session:

https://etherpad.openstack.org/p/newton-winstackers-design-session

You are welcome to join in and add your use cases, questions, challenges,
and problems to the etherpad if you wish to discuss them during the
session. Knowing the expectations of the community will help us focus our
attention on the most desirable features.

At the moment, our main topics will be:

* Windows containers in Magnum.
* New Windows Server 2016 networking stack vs OVS on Windows vs 
networking-hyperv.
* Newton Nova Hyper-V features: Shielded VMs, Fibre Channel support, Hyper-V 
Cluster, etc.
* Performance improvements.

Hope to see you at the Summit!

Best regards,

Claudiu Belu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Adam Young

On 04/06/2016 10:44 AM, Dan Prince wrote:

On Tue, 2016-04-05 at 19:19 -0600, Rich Megginson wrote:

On 04/05/2016 07:06 PM, Dan Prince wrote:

On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:

I finally have enough understanding of what is going on with TripleO
to reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
  A. HTTPS for web services
  B. TLS for the message bus
  C. TLS for communication with the Database.
2. Identity for all Actors in the system:
 A.  API services
 B.  Message producers and consumers
 C.  Database consumers
 D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)

Of these, the CA is the most critical.  Without a centralized CA, we
have no reasonable way to do certificate management.

Would using Barbican to provide an API to manage the certificates make
more sense for our deployment tooling? This could be useful for both
undercloud and overcloud cases.

As for the rest of this, how invasive is the implementation of
FreeIPA? Is this something that we can layer on top of an existing
deployment, such that users wishing to use FreeIPA can opt in?


Now, I know a lot of people have an allergic reaction to some, maybe
all, of these technologies. They should not be required to be running
in a development or testbed setup.  But we need to make it possible to
secure an end deployment, and FreeIPA was designed explicitly for these
kinds of distributed applications.  Here is what I would like to
implement.

Assuming that the Undercloud is installed on a physical machine, we
want to treat the FreeIPA server as a managed service of the
undercloud that is then consumed by the rest of the overcloud. Right
now, there are conflicts for some ports (8080 is used by both Swift
and Dogtag) that prevent a drop-in run of the server on the undercloud
controller. Even if we could deconflict, there is a possible battle
between Keystone and the FreeIPA server on the undercloud. So, while I
would like to see the ability to run the FreeIPA server on the
Undercloud machine eventually, I think a more realistic deployment is
to build a separate virtual machine, parallel to the overcloud
controller, and install FreeIPA there. I've been able to modify
TripleO Quickstart to provision this VM.

I was also able to run FreeIPA in a container on the undercloud
machine, but this is, I think, not how we want to migrate to a
container-based strategy. It should be more deliberate.


While the ideal setup would be to install the IPA layer first and
create service users in there, this produces a different install path
between with-FreeIPA and without-FreeIPA. Thus, I suspect the right
approach is to run the overcloud deploy, then "harden" the deployment
with the FreeIPA steps.


The IdM team did just this last summer in preparing for the Tokyo
summit, using Ansible and Packstack.  The Rippowam project
(https://github.com/admiyo/rippowam) was able to fully lock down a
Packstack-based install.  I'd like to reuse as much of Rippowam as
possible, but called from Heat templates as part of an overcloud
deploy.  I do not really want to reimplement Rippowam in Puppet.

As we are using Puppet for our configuration, I think this is
currently a requirement. There are many good Puppet examples out there
for various servers, and a quick Google search showed some IPA modules
are available as well.

I think most TripleO users are quite happy using Puppet modules for
configuration, in that the Puppet OpenStack modules are quite mature
and well tested. Making a one-off exception for FreeIPA at this point
doesn't make sense to me.

What about calling an ansible playbook from a puppet module?

Given our current toolset in TripleO, the ability to manage all
service configurations with a common language outweighs any shortcuts
that calling Ansible from Puppet would give you, I think.

The best plan, I think, for IPA integration into the overcloud and
undercloud would be a puppet-freeipa module.
Puppet is fine. I have some feedback from the IPA side that
https://github.com/purpleidea/puppet-ipa/ works OK.  Work on it seems
to have tapered off last June, but we could revive it.





So, big question: is Heat->Ansible (instead of Puppet) for an overcloud
deployment an acceptable path?  We are talking Ansible 1.0 playbooks,
which should be relatively straightforward to port to 2.0 when the time
comes.

Thus, the sequence would be:

1. Run existing overcloud deploy steps.
2. Install IPA server on the allocated VM

Re: [openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Dolph Mathews
For some historical perspective, that's basically how v2 was designed. The
"public" service (port 5000) did nothing but the auth flow. The "admin"
service (port 35357) was identity management.

Unfortunately, there are (perhaps uncommon) authentication flows where, for
example, you need to 1) authenticate for an unscoped token, 2) retrieve a
list of tenants where you have some authorization, 3) re-authenticate for a
scoped token. There was a lot of end-user confusion over what port was used
for what operations (Q: "Why is my curl request returning 404? I'm doing
what the docs say to do!" A: "You're calling the wrong port."). More and
more calls straddled the line between the two APIs, blurring their
distinction.
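
The flow in question, roughly (v2 endpoints, token values elided):

    # 1) authenticate without a tenant -> unscoped token
    POST /v2.0/tokens
      {"auth": {"passwordCredentials":
        {"username": "demo", "password": "secret"}}}

    # 2) list tenants you have some authorization on
    GET /v2.0/tenants
      X-Auth-Token: <unscoped token>

    # 3) re-authenticate, exchanging the token for a scoped one
    POST /v2.0/tokens
      {"auth": {"token": {"id": "<unscoped token>"},
                "tenantName": "demo"}}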

The approach we took in v3 was to consolidate the APIs into a single,
functional superset, and use RBAC to distinguish between use cases in a
more flexible manner.

On Wed, Apr 6, 2016 at 2:26 PM, Boris Pavlovic wrote:

> Hi stackers,
>
> I would like to suggest a very simple idea: splitting the authentication
> part of Keystone out into a separate project.
>
> Such a change has 2 positive outcomes:
> 1) It will be quite simple to create a scalable, high-performance
> authentication service based on very mature projects like Kerberos[1]
> and OpenLDAP[2].
>

You can basically do this today if you just focus on implementing drivers
for the few bits of keystone you need, and disable the rest.
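
(Something along these lines in keystone.conf, e.g. for the LDAP case; a
sketch assuming Mitaka-era options, and the exact values depend on the
deployment:

  [identity]
  # users and groups come from LDAP...
  driver = ldap

  [assignment]
  # ...while role assignments stay in SQL
  driver = sql

  [token]
  provider = fernet
)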


>
> 2) This will reduce the scope of Keystone, which means 2 things:
> 2.1) A smaller code base that has fewer issues and is simpler to test
> 2.2) The Keystone team would be able to concentrate more on fixing
> perf/scalability issues of authorization, which is crucial at the moment
> for large clouds.
>

(2.2) is particularly untrue, because this will cause at least 2 releases'
worth of refactoring work for everyone, and another 6 releases justifying
to deployers why their newfound headaches are worthwhile. Perhaps after
burning those ~4 years of productivity, we'd be able to get back to "fixing
perf/scalability issues of authorization."


>
> Thoughts?
>
> [1] http://web.mit.edu/kerberos/
> [2] http://ldapcon.org/2011/downloads/hummel-slides.pdf
>
> Best regards,
> Boris Pavlovic
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] - oAuth tab proposal

2016-04-06 Thread Adam Young

On 04/06/2016 03:20 PM, Brad Pokorny wrote:

The last I heard, oauth is likely to be deprecated in Keystone [1].

If you're interested in having it stay around, please let the Keystone 
team know. It would only make sense to add it to Horizon if it's going 
to stay.


[1] http://openstack.markmail.org/message/ihqbetack26g5gmg

Thanks,
Brad



We are looking to unify all of the delegation mechanisms: role
assignments, trusts, and OAuth.  It's going to be a topic at the Austin
summit.  A unified UI for these would be awesome.





From: "Rob Cresswell (rcresswe)" >
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" >

Date: Thursday, March 31, 2016 at 8:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>

Subject: Re: [openstack-dev] [horizon] - oAuth tab proposal

Could you put up a blueprint for discussion? We have a weekly meeting 
to review blueprints: 
https://wiki.openstack.org/wiki/Meetings/HorizonDrivers


The blueprint template is here: 
https://blueprints.launchpad.net/horizon/+spec/template


Thanks!

Rob

On 31 Mar 2016, at 10:57, Marcos Fermin Lobo wrote:


Hi all,

I would like to propose a new tab on the "Access and security" web page.

As you know, keystone offers an OAUTH plugin for authentication. This 
means that third party applications could access OpenStack cloud 
resources using OAUTH. Now, this is possible using the CLI but there 
is nothing (AFAIK) in Horizon.


I would propose a new tab on the "Access and security" web page to manage 
OAUTH credentials. As usual, this new tab would have a list of OAUTH 
credentials with buttons to approve and remove them.


Please see a simple mockup here: 
https://mferminl.web.cern.ch/mferminl/mockups/horizon-oauth-mockup.png


Comments, suggestions... are very welcome!

Cheers,
Marcos.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] - oAuth tab proposal

2016-04-06 Thread Steve Martinelli

There was feedback from the murano team asking for it to stick around:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090459.html

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   Brad Pokorny 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   2016/04/06 03:23 PM
Subject:Re: [openstack-dev] [horizon] - oAuth tab proposal



The last I heard, oauth is likely to be deprecated in Keystone [1].

If you're interested in having it stay around, please let the Keystone team
know. It would only make sense to add it to Horizon if it's going to stay.

[1] http://openstack.markmail.org/message/ihqbetack26g5gmg

Thanks,
Brad


From: "Rob Cresswell (rcresswe)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: Thursday, March 31, 2016 at 8:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [horizon] - oAuth tab proposal

Could you put up a blueprint for discussion? We have a weekly meeting to
review blueprints: https://wiki.openstack.org/wiki/Meetings/HorizonDrivers

The blueprint template is here:
https://blueprints.launchpad.net/horizon/+spec/template

Thanks!

Rob

  On 31 Mar 2016, at 10:57, Marcos Fermin Lobo <
  marcos.fermin.l...@cern.ch> wrote:

  Hi all,

  I would like to propose a new tab on the "Access and security" web page.

  As you know, keystone offers an OAUTH plugin for authentication. This
  means that third party applications could access OpenStack cloud
  resources using OAUTH. Now, this is possible using the CLI but there
  is nothing (AFAIK) in Horizon.

  I would propose a new tab on the "Access and security" web page to manage
  OAUTH credentials. As usual, this new tab would have a list of OAUTH
  credentials with buttons to approve and remove them.

  Please see a simple mockup here
  https://mferminl.web.cern.ch/mferminl/mockups/horizon-oauth-mockup.png


  Comments, suggestions... are very welcome!

  Cheers,
  Marcos.
  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org
  ?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Searchlight] Plans for Horizon cross-region view

2016-04-06 Thread McLellan, Steven
To add to that - Brad, if you're going to be in Austin I'm planning to
bring along a prototype, and I know that this use case is of interest to
others. If there is still sufficient interest, it's something we have as a
priority for Newton, and from the few hours I've spent so far setting it
up I think it could be really powerful.

On 4/5/16, 9:40 PM, "Tripp, Travis S"  wrote:

>Sorry the for delayed response on this message. Finishing out Mitaka has
>been quite time consuming!
>
>Cross region searching is a high priority item for Searchlight in Newton.
> Steve has begun work on the spec [1] with initial prototyping. We also
>are considering this as a likely candidate for the design summit.  Please
>take a look and help us work through the design!
>
>[1] https://review.openstack.org/#/c/301227/
>
>Thanks,
>Travis
>
>From: Brad Pokorny
>Reply-To: OpenStack List
>Date: Thursday, February 25, 2016 at 3:17 PM
>To: OpenStack List
>Subject: [openstack-dev] [Horizon][Searchlight] Plans for Horizon
>cross-region view
>
>The last info I've found on the ML about a cross-region view in Horizon
>is [1], which mentions making asynchronous calls to the APIs. Has anyone
>done further work on such a view?
>
>If not, I think it would make sense to only show the view if Searchlight
>is enabled. One of the Searchlight use cases is cross-region searching,
>and only using the searchlight APIs would cut down on the slowness of
>going directly to the service APIs for what would potentially be a lot of
>records. Thoughts?
>
>[1] http://openstack.markmail.org/message/huk5l73un7t255ox
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][ptl][keystone] Proposal to split authentication part out of Keystone to separated project

2016-04-06 Thread Boris Pavlovic
Hi stackers,

I would like to suggest a very simple idea: splitting the authentication
part of Keystone out into a separate project.

Such a change has 2 positive outcomes:
1) It will be quite simple to create a scalable, high-performance
authentication service based on very mature projects like Kerberos[1] and
OpenLDAP[2].

2) This will reduce the scope of Keystone, which means 2 things:
2.1) A smaller code base that has fewer issues and is simpler to test
2.2) The Keystone team would be able to concentrate more on fixing
perf/scalability issues of authorization, which is crucial at the moment
for large clouds.

Thoughts?

[1] http://web.mit.edu/kerberos/
[2] http://ldapcon.org/2011/downloads/hummel-slides.pdf

Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DIB][Bifrost] Avoid running DIB with simple-init until a new release (1.14.1) is made

2016-04-06 Thread Gregory Haynes
On Wed, Apr 6, 2016, at 11:19 AM, Gregory Haynes wrote:
> This is a notice for users of diskimage-builder's simple-init element (I
> added Bifrost because I believe that is the recommended usage there).
> 
> There is a bug in the latest release (1.14.0) of diskimage-builder which
> will delete ssh host keys on the image building host when using the
> simple-init element. The fix is proposed[1] and we are working on
> merging it ASAP then cutting a new release. If you have a CI type set up
> (possibly via nodepool) which uses the simple-init element then its
> probably a good idea to disable it temporarily and check that you still
> have ssh host keys.
> 
> I really hope this hasn't bit anyone other than infra (sorry infra), but
> if it has bit you then I'm sorry!
> 
> -Greg
> 
> 1: https://review.openstack.org/#/c/302373/
> 

The new release has just been uploaded to pypi. Sorry again for the
issues!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] - oAuth tab proposal

2016-04-06 Thread Brad Pokorny
The last I heard, oauth is likely to be deprecated in Keystone [1].

If you're interested in having it stay around, please let the Keystone team 
know. It would only make sense to add it to Horizon if it's going to stay.

[1] http://openstack.markmail.org/message/ihqbetack26g5gmg

Thanks,
Brad


From: "Rob Cresswell (rcresswe)" >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, March 31, 2016 at 8:31 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [horizon] - oAuth tab proposal

Could you put up a blueprint for discussion? We have a weekly meeting to review 
blueprints: https://wiki.openstack.org/wiki/Meetings/HorizonDrivers

The blueprint template is here: 
https://blueprints.launchpad.net/horizon/+spec/template

Thanks!

Rob

On 31 Mar 2016, at 10:57, Marcos Fermin Lobo wrote:

Hi all,

I would like to propose a new tab on the "Access and security" web page.

As you know, keystone offers an OAUTH plugin for authentication. This means 
that third party applications could access OpenStack cloud resources using 
OAUTH. Now, this is possible using the CLI but there is nothing (AFAIK) in 
Horizon.

I would propose a new tab on the "Access and security" web page to manage OAUTH 
credentials. As usual, this new tab would have a list of OAUTH credentials 
with buttons to approve and remove them.

Please see a simple mockup here 
https://mferminl.web.cern.ch/mferminl/mockups/horizon-oauth-mockup.png

Comments, suggestions... are very welcome!

Cheers,
Marcos.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] IRC Meeting Thursday April 7th

2016-04-06 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for April 7th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

We'll include status updates on the Glare PoC implementation for App
Catalog along with details of the OSC plugin that's got a great start
from Paul Van Eck.  We'll also talk about the session we'll have at
the summit, and hopefully sort out which other teams we'd like to get
time with.

Looking forward to seeing everyone there tomorrow!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Nikhil Komawar


On 4/6/16 2:43 PM, Matt Riedemann wrote:
>
>
> On 4/6/2016 1:17 PM, Nikhil Komawar wrote:
>>
>>
>> On 4/6/16 2:09 PM, Clint Byrum wrote:
>>> Excerpts from Nikhil Komawar's message of 2016-04-06 10:46:28 -0700:
 Need an inline clarification.

 On 4/6/16 10:58 AM, Flavio Percoco wrote:
> On 06/04/16 08:26 -0400, Sean Dague wrote:
>> On 04/06/2016 04:13 AM, Markus Zoeller wrote:
>>> +1 for deprecation and removal
>>>
>>> To be honest, when I started with Nova during Kilo, I didn't get
>>> why we have those passthrough APIs. They looked like convenience
>>> APIs.
>>> A short history lesson, why they got introduced, would be cool.
>>> I only
>>> found commit [1] which looks like they were there from the
>>> beginning.
>>>
>>> References:
>>> [1]
>>> https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33
>>>
>>>
>> The short history lesson is nova image API existed before glance.
>> Glance
>> was a spin out from Nova of that API. Doing so doesn't
>> immediately make
>> that API go away however. Especially as all these things live on
>> different ports with different end points. So the image API
>> remained as
>> a proxy (as did volumes, baremetal, and even to some extent
>> networks).
>>
>> It's not super clear how you deprecate and remove these things
>> without
>> breaking a lot of people, as a lot of the libraries implement the
>> nova
>> image resources -
>> https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb
>>
>>
> We can deprecate it without removing it. We make it work with v2 and
> start
> warning people that the API is not supported anymore. We don't fix
> bugs in that
> API but tell people to use the newer version.
>
> I think that should do it, unless I'm missing something.
> Flavio
>
 Is it a safe practice to not fix bugs on a publicly exposed API? What
 are the recommendations for such cases?

>>> I don't think you can make a blanket statement that no bugs will be
>>> fixed.
>>>
>>> There are going to be evolutions behind this API that make a small bug
>>> today into a big bug tomorrow. The idea is to push the user off the API
>>> when they try to do more with it, not when we forcibly explode their
>>> working code.
>>>
>>> "We don't break userspace". I know _we_ didn't say that about our
>>> project. But I like to think we understand the wisdom behind that,
>>> and can
>>> start at least pretending we believe in ourselves and our users enough
>>> to hold to it for some things, even if we don't really like some of the
>>> more dark and dingy corners of userspace that we have put out there.
>>
>> I see, so the idea is to treat such sensitive APIs (long in use,
>> important for some core operations, etc.) more subjectively and fix bugs
>> on a case-by-case basis. We may be going in a positive direction by
>> reducing support; workload-wise, I think we can set expectations for
>> developers of more process and less fixing.
>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> This thread has gotten longer and more complicated than I expected.
>
> At a very basic level, we aren't going to (knowingly) break the nova
> images API if using glance v2 on the backend, that's where a
> translation layer has to come in as part of the glance v2 adoption.
>
> But the nova images API is feature frozen, meaning we aren't going to
> make it handle glance v2-like requests. The same is true for how we
> don't have volume-type support in the nova volume create API.
>
> So now that we can all agree that we aren't removing the nova images
> API and we aren't going to break it for glance v2 adoption, we can
> also agree that we don't want people using it.
>
> One of the entry points to using it is via the CLI and python API
> bindings in python-novaclient. Hence why I'm proposing that we
> deprecate and eventually remove those. That's not a dependency for
> glance v2 adoption in nova, it's just a parallel thing we should have
> already done awhile back.
>
> OK, I think that's it.
>
Thanks for the clarification and giving us a complete outline of the
plan. It makes sense.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Trove weekly meeting minutes

2016-04-06 Thread Amrith Kumar
Minutes from today's weekly Trove meeting are at

http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-04-06-18.00.html

Detailed session schedule for summit is at 
http://bit.ly/trove-newton-design-summit

-amrith
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
I've treated m1 flavors as an abstraction layer for flavors. I create other 
flavors that match various other things, but leave the m1's around as a target 
for generic, "I need about x size" things. I tweak them slightly to match up 
with the compute nodes closer but never shrink them past what the default is, 
so they should work generically. There's no reason flavors can't be used that 
way, and provide some uniformity across clouds.

My point is, app developers aren't, because it's too hard. :/

Without stuff like instance users to deal with https certificate storage and 
other important missing features, the generic app developer use case for 
anything non-trivial just can't be handled today. So they tend to only exist on 
individual clouds.

We've seen that in the app catalog by not seeing many contributions. It's just 
too hard to write things generically enough to contribute. This is detrimental 
to OpenStack and really needs to be fixed, or other technology will come in to 
replace it.

Thanks,
Kevin

From: Christopher Aedo [d...@aedo.net]
Sent: Wednesday, April 06, 2016 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

On Wed, Apr 6, 2016 at 9:29 AM, Fox, Kevin M  wrote:
> It feels kind of like a defcore issue though. It's harder for app developers 
> to create stuff like heat templates intended for cross cloud that recommend a 
> size, m1.small, without a common reference.

For most deployments though, the flavor definition is a function of
their compute node design.  Trying to standardize (and force that
standard via defcore) would likely drive away the biggest consumers of
OpenStack.

In my opinion this just shines a light on something missing from heat
(or maybe it exists and I'm just unaware) - the ability to discover
flavor details and find one that matches the minimum should be all
that's necessary in this case.  I think in general though, choosing
heat as a cross-cloud compatible application packaging tool is always
going to lead to problems.  Otherwise I think we would have seen an
emergence of people sharing heat templates that deploy applications
and work across many different OpenStack clouds.

> We keep making it hard for app developers to target openstack, so they don't 
> join, and then don't complain about when openstack makes their life harder. 
> we need to encourage ease of development on top of the platform.

I absolutely feel this pain point, but I'm still wondering what
applications people *are* developing for OpenStack (and how they're
packaging and distributing them - opinions welcome![1])

[1]: http://lists.openstack.org/pipermail/user-committee/2016-April/000722.html

-Christopher

>
> Thanks,
> Kevin
> 
> From: Sean Dague [s...@dague.net]
> Sent: Wednesday, April 06, 2016 3:47 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova
>
> On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
>>
>>
>> Le 06/04/2016 06:44, Qiming Teng a écrit :
>>> Not an expert of Nova but I am really shocked by such a change. Because
>>> I'm not a Nova expert, I don't have a say on the *huge* efforts in
>>> maintaining some builtin/default flavors. As a user I don't care where
>>> the data have been stored, but I do care that they are gone. They are
>>> gone because they **WILL** be supported by devstack. They are gone with
>>> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
>>> thanks to the depends-on tag). They are gone in hope that all deployment
>>> tools will know this when they fail, or fortunately they read this email,
>>> or they were reviewing nova patches.
>>>
>>> It would be a little nicer to initiate a discussion on the mailinglist
>>> before such a change is introduced.
>>
>>
>> It was communicated accordingly to operators with no strong arguments :
>> http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html
>
> Not only with no strong arguments, but with a general - "yes please,
> that simplifies our life".
>
>> You can also see that https://review.openstack.org/#/c/300127/ is having
>> three items :
>>  - a DocImpact tag creating a Launchpad bug for documentation about that
>>  - a reno file meaning that our release notes will provide also some
>> comments about that
>>  - a Depends-On tag (like you said) on a devstack change meaning that
>> people using devstack won't see a modified behavior.
>>
>> Not sure what you need more.
>
> The default flavors were originally hardcoded in Nova (in the initial
> commit) -
> https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
>  and moved into the db 5 years ago to be a copy of the EC2 flavors at
> the time -
> 

Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Nikhil Komawar


On 4/6/16 2:09 PM, Clint Byrum wrote:
> Excerpts from Nikhil Komawar's message of 2016-04-06 10:46:28 -0700:
>> Need an inline clarification.
>>
>> On 4/6/16 10:58 AM, Flavio Percoco wrote:
>>> On 06/04/16 08:26 -0400, Sean Dague wrote:
 On 04/06/2016 04:13 AM, Markus Zoeller wrote:
> +1 for deprecation and removal
>
> To be honest, when I started with Nova during Kilo, I didn't get
> why we have those passthrough APIs. They looked like convenience APIs.
> A short history lesson, why they got introduced, would be cool. I only
> found commit [1] which looks like they were there from the beginning.
>
> References:
> [1]
> https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33
>
 The short history lesson is nova image API existed before glance. Glance
 was a spin out from Nova of that API. Doing so doesn't immediately make
 that API go away however. Especially as all these things live on
 different ports with different end points. So the image API remained as
 a proxy (as did volumes, baremetal, and even to some extent networks).

 It's not super clear how you deprecate and remove these things without
 breaking a lot of people, as a lot of the libraries implement the nova
 image resources -
 https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb

>>> We can deprecate it without removing it. We make it work with v2 and
>>> start
>>> warning people that the API is not supported anymore. We don't fix
>>> bugs in that
>>> API but tell people to use the newer version.
>>>
>>> I think that should do it, unless I'm missing something.
>>> Flavio
>>>
>> Is it a safe practice to not fix bugs on a publicly exposed API? What
>> are the recommendations for such cases?
>>
> I don't think you can make a blanket statement that no bugs will be
> fixed.
>
> There are going to be evolutions behind this API that make a small bug
> today into a big bug tomorrow. The idea is to push the user off the API
> when they try to do more with it, not when we forcibly explode their
> working code.
>
> "We don't break userspace". I know _we_ didn't say that about our
> project. But I like to think we understand the wisdom behind that, and can
> start at least pretending we believe in ourselves and our users enough
> to hold to it for some things, even if we don't really like some of the
> more dark and dingy corners of userspace that we have put out there.

I see, so the idea is to treat such sensitive APIs (long in use, important
for some core operations, etc.) more subjectively and fix bugs on a
case-by-case basis. We may be going in a positive direction by reducing
support; workload-wise, I think we can set expectations for developers of
more process and less fixing.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Christopher Aedo
On Wed, Apr 6, 2016 at 9:29 AM, Fox, Kevin M  wrote:
> It feels kind of like a defcore issue though. It's harder for app developers 
> to create stuff like heat templates intended for cross cloud that recommend a 
> size, m1.small, without a common reference.

For most deployments though, the flavor definition is a function of
their compute node design.  Trying to standardize (and force that
standard via defcore) would likely drive away the biggest consumers of
OpenStack.

In my opinion this just shines a light on something missing from heat
(or maybe it exists and I'm just unaware) - the ability to discover
flavor details and find one that matches the minimum should be all
that's necessary in this case.  I think in general though, choosing
heat as a cross-cloud compatible application packaging tool is always
going to lead to problems.  Otherwise I think we would have seen an
emergence of people sharing heat templates that deploy applications
and work across many different OpenStack clouds.

> We keep making it hard for app developers to target openstack, so they don't 
> join, and then don't complain about when openstack makes their life harder. 
> we need to encourage ease of development on top of the platform.

I absolutely feel this pain point, but I'm still wondering what
applications people *are* developing for OpenStack (and how they're
packaging and distributing them - opinions welcome![1])

[1]: http://lists.openstack.org/pipermail/user-committee/2016-April/000722.html

-Christopher

>
> Thanks,
> Kevin
> 
> From: Sean Dague [s...@dague.net]
> Sent: Wednesday, April 06, 2016 3:47 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova
>
> On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
>>
>>
>> Le 06/04/2016 06:44, Qiming Teng a écrit :
>>> Not an expert of Nova but I am really shocked by such a change. Because
>>> I'm not a Nova expert, I don't have a say on the *huge* efforts in
>>> maintaining some builtin/default flavors. As a user I don't care where
>>> the data have been stored, but I do care that they are gone. They are
>>> gone because they **WILL** be supported by devstack. They are gone with
>>> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
>>> thanks to the depends-on tag). They are gone in hope that all deployment
>>> tools will know this when they fail, or fortunately they read this email,
>>> or they were reviewing nova patches.
>>>
>>> It would be a little nicer to initiate a discussion on the mailinglist
>>> before such a change is introduced.
>>
>>
>> It was communicated accordingly to operators with no strong arguments :
>> http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html
>
> Not only with no strong arguments, but with a general - "yes please,
> that simplifies our life".
>
>> You can also see that https://review.openstack.org/#/c/300127/ is having
>> three items :
>>  - a DocImpact tag creating a Launchpad bug for documentation about that
>>  - a reno file meaning that our release notes will provide also some
>> comments about that
>>  - a Depends-On tag (like you said) on a devstack change meaning that
>> people using devstack won't see a modified behavior.
>>
>> Not sure what you need more.
>
> The default flavors were originally hardcoded in Nova (in the initial
> commit) -
> https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
>  and moved into the db 5 years ago to be a copy of the EC2 flavors at
> the time -
> https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
> Those flavors were meant to be examples, not the final story.
>
> All the public clouds delete these and do their own thing, as do I
> expect many of the products. Any assumption that software or users have
> that these will exist is a bad assumption.
>
> It is a big change, which is why it's being communicated on Mailing
> Lists in addition to in the release notes so that people have time to
> make any of their tooling not assume these flavors by name will be
> there, or to inject them yourself if you are sure you need them (as was
> done in the devstack case).
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Jim Meyer

> On Apr 6, 2016, at 11:14 AM, Sean Dague  wrote:
> 
> On 04/06/2016 01:28 PM, Dean Troyer wrote:
>> On Wed, Apr 6, 2016 at 12:10 PM, Tim Bell wrote:
>> 
>>I think Heat needs more of a query engine along the lines of “give me a
>>flavor with at least X cores and Y GB RAM” rather than hard coding
>>m1.large.
>>Core performance is another parameter that would be interesting to
>>select,
>>“give me a core with at least 5 bogomips”
>> 
>> 
>> I've played with a version of OSC's "server create" command that uses
>> --cpu, --ram, etc rather than --flavor to size the created VM.  It is a
>> tiny bit of client-side work to do this, Heat could easily do it too... 
>> The trick is to not get carried away with spec'ing every last detail.
> 
> Or even just put it in Nova.
> 
> GET /flavors/?min_ram=1G&min_cpu=2
> 
> I think would be an entirely reasonable add for the flavors GET call.
> It's an API add, so would need a spec, but it's fundamentally pretty
> easy and probably not very controversial.

Huge, happy, and huggable +1*. 

I see interesting scheduling fun that can come out of this, as well as big 
interoperability wins.

—j

*I don’t rate anything bigger than a +1, so I have to vary the size and 
materials to carry emphasis.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DIB][Bifrost] Avoid running DIB with simple-init until a new release (1.14.1) is made

2016-04-06 Thread Gregory Haynes
This is a notice for users of diskimage-builder's simple-init element (I
added Bifrost because I believe that is the recommended usage there).

There is a bug in the latest release (1.14.0) of diskimage-builder which
will delete ssh host keys on the image building host when using the
simple-init element. The fix is proposed[1] and we are working on
merging it ASAP then cutting a new release. If you have a CI type set up
(possibly via nodepool) which uses the simple-init element then its
probably a good idea to disable it temporarily and check that you still
still have ssh host keys.
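
(A quick way to check, assuming standard key locations:

  ls -l /etc/ssh/ssh_host_*_key*

and if that comes back empty, `ssh-keygen -A` will regenerate the missing
host keys.)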

I really hope this hasn't bit anyone other than infra (sorry infra), but
if it has bit you then I'm sorry!

-Greg

1: https://review.openstack.org/#/c/302373/

-- 
  Gregory Haynes
  g...@greghaynes.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Sean Dague
On 04/06/2016 01:28 PM, Dean Troyer wrote:
> On Wed, Apr 6, 2016 at 12:10 PM, Tim Bell wrote:
> 
> I think Heat needs more of a query engine along the lines of “give me a
> flavor with at least X cores and Y GB RAM” rather than hard coding
> m1.large.
> Core performance is another parameter that would be interesting to
> select,
> “give me a core with at least 5 bogomips”
> 
> 
> I've played with a version of OSC's "server create" command that uses
> --cpu, --ram, etc rather than --flavor to size the created VM.  It is a
> tiny bit of client-side work to do this, Heat could easily do it too... 
> The trick is to not get carried away with spec'ing every last detail.

Or even just put it in Nova.

GET /flavors/?min_ram=1G&min_cpu=2

I think would be an entirely reasonable add for the flavors GET call.
It's an API add, so would need a spec, but it's fundamentally pretty
easy and probably not very controversial.
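
Until then the client-side version is only a few lines anyway; a rough
sketch with python-novaclient (illustrative, not a polished
implementation):

  def find_flavor(nova, min_ram_mb, min_vcpus):
      """Return the smallest flavor meeting the minimums, or None."""
      matches = [f for f in nova.flavors.list()
                 if f.ram >= min_ram_mb and f.vcpus >= min_vcpus]
      # pick the "cheapest" match: smallest RAM first, then vCPUs
      matches.sort(key=lambda f: (f.ram, f.vcpus))
      return matches[0] if matches else None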

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Tim Bell

On 06/04/16 19:28, "Fox, Kevin M"  wrote:

>+1
>
>From: Neil Jerram [neil.jer...@metaswitch.com]
>Sent: Wednesday, April 06, 2016 10:15 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from 
>nova
>
>I hesitate to write this, even now, but I do think that OpenStack has a
>problem with casual incompatibilities, such as this appears to be.  But,
>frankly, I've been slapped down for expressing my opinion in the past
>(on the pointless 'tenant' to 'project' change), so I just quietly
>despaired when I saw that ops thread, rather than saying anything.
>
>I haven't researched this particular case in detail, so I could be
>misunderstanding its implications.  But in general my impression, from
>the conversations that occur when these topics are raised, is that many
>prominent OpenStack developers do not care enough about
>release-to-release compatibility.  The rule for incompatible changes
>should be "Just Don't", and I believe that if everyone internalized
>that, they could easily find alternative approaches without breaking
>compatibility.
>
>When an incompatible change like this is made, imagine the 1000s of
>operators and users around the world, with complex automation around
>OpenStack, who see their deployment or testing failing, spend a couple
>of hours debugging, and eventually discover 'oh, they removed m1.small'
>or 'oh, they changed the glance command line'.  Given that hassle and
>bad feeling, is the benefit that developers get from the incompatibility
>still worth it?

I have rarely seen the operator community so much in agreement as on the
impact of this change.
Over the past 4 years, there have been lots of changes which were debated
with major impacts on end users (EC2, nova-network, …). However, I do not 
believe 
that this is one of those:

This change

- does not break existing clouds
- has a simple 5 line shell script (sketched just after this list) to cover
the new cloud install and can be applied before opening the cloud to the end
users
- raises a fundamental compatibility question to be solved by the community
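
(That script is essentially the old defaults re-created by hand; a sketch
that roughly matches the sizes nova used to ship, to be adjusted to taste:

  nova flavor-create m1.tiny   1 512   1   1
  nova flavor-create m1.small  2 2048  20  1
  nova flavor-create m1.medium 3 4096  40  2
  nova flavor-create m1.large  4 8192  80  4
  nova flavor-create m1.xlarge 5 16384 160 8
)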

What I’d like to replace it with is a generic query along the lines of

give me a flavor with X GB RAM, Y cores, Z system disk and the metadata flags 
so I
get a GPU and ideally huge pages

There is a major difference from an option being dropped or major
functionality being deprecated, as I can hide this from my end users with a
few flavor definitions which make sense for my cloud.

Incompatible changes for existing production deployments, e.g. CLIs,  should be 
handled very
carefully. Cleaning up some past choices for new clouds with appropriate 
documentation
and workarounds to keep the old behaviour seems reasonable.

>
>I would guess there are many others like me, who generally don't say
>anything because they've already observed that the prevailing sentiment
>is not sufficiently on the side of compatibility.

We have a production cloud with 2,200 users who feel the pain of incompatible 
change (and pass that on to the support teams :-). I feel there is a strong 
distinction between incompatible change (i.e. you cannot hide this from your 
end users) vs change with a workaround (where you can do some work for some 
projects to emulate the prior environment, but new projects can be working 
with the future only, not accidentally selecting the legacy options).

I do feel that people should be able to raise their concerns; each environment 
is different and there is no single scenario. Thus, a debate such as this one 
is valuable to find the balance between the need to move forward versus the 
risks.

Tim

>
>Neil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Clint Byrum
Excerpts from Nikhil Komawar's message of 2016-04-06 10:46:28 -0700:
> Need an inline clarification.
> 
> On 4/6/16 10:58 AM, Flavio Percoco wrote:
> > On 06/04/16 08:26 -0400, Sean Dague wrote:
> >> On 04/06/2016 04:13 AM, Markus Zoeller wrote:
> >>> +1 for deprecation and removal
> >>>
> >>> To be honest, when I started with Nova during Kilo, I didn't get
> >>> why we have those passthrough APIs. They looked like convenience APIs.
> >>> A short history lesson, why they got introduced, would be cool. I only
> >>> found commit [1] which looks like they were there from the beginning.
> >>>
> >>> References:
> >>> [1]
> >>> https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33
> >>>
> >>
> >> The short history lesson is nova image API existed before glance. Glance
> >> was a spin out from Nova of that API. Doing so doesn't immediately make
> >> that API go away however. Especially as all these things live on
> >> different ports with different end points. So the image API remained as
> >> a proxy (as did volumes, baremetal, and even to some extent networks).
> >>
> >> It's not super clear how you deprecate and remove these things without
> >> breaking a lot of people, as a lot of the libraries implement the nova
> >> image resources -
> >> https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb
> >>
> >
> > We can deprecate it without removing it. We make it work with v2 and
> > start
> > warning people that the API is not supported anymore. We don't fix
> > bugs in that
> > API but tell people to use the newer version.
> >
> > I think that should do it, unless I'm missing something.
> > Flavio
> >
> 
> Is it a safe practice to not fix bugs on a publicly exposed API? What
> are the recommendations for such cases?
> 

I don't think you can make a blanket statement that no bugs will be
fixed.

There are going to be evolutions behind this API that make a small bug
today into a big bug tomorrow. The idea is to push the user off the API
when they try to do more with it, not when we forcibly explode their
working code.

"We don't break userspace". I know _we_ didn't say that about our
project. But I like to think we understand the wisdom behind that, and can
start at least pretending we believe in ourselves and our users enough
to hold to it for some things, even if we don't really like some of the
more dark and dingy corners of userspace that we have put out there.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-06-16 12:16 PM
> To: Hongbin Lu
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Containers lifecycle management
> 
> On 06/04/16 15:54 +, Hongbin Lu wrote:
> >
> >
> >> -Original Message-
> >> From: Flavio Percoco [mailto:fla...@redhat.com]
> >> Sent: April-06-16 9:14 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: [openstack-dev] [magnum] Containers lifecycle management
> >>
> >>
> >> Greetings,
> >>
> >> I'm fairly new to Magnum and I hope my comments below are accurate.
> >>
> >> After reading some docs, links and other references, I seem to
> >> understand the Magnum team has a debate on whether providing
> >> abstraction for containers lifecycle is something the project should
> >> do or not. There's a patch that attempts to remove PODs and some
> >> debates on whether `container-*` commands are actually useful or not.
> >
> >FYI, according to the latest decision [1][2], below is what it will be:
> >* The k8s abstractions (pod/service/replication controller) will be
> removed. Users will need to use native tool (i.e. kubectl) to consume
> the k8s service.
> >* The docker swarm abstraction (container) will be moved to a
> separated driver. In particular, there will be two drivers for
> operators to select. The first driver will have minimum functionality
> (i.e. provision/manage/delete the swarm cluster). The second driver
> will have additional APIs to manage container resources in the swarm
> bay.
> >
> >[1] https://wiki.openstack.org/wiki/Magnum/NativeAPI
> >[2] https://etherpad.openstack.org/p/magnum-native-api
> >
> >>
> >> Based on the above, I wanted to understand what would be the
> >> recommended way for services willing to consume magnum to run
> >> containers? I've been digging a bit into what would be required for
> >> Trove to consume Magnum and based on the above, it seems the answer
> >> is that it should support either docker, k8s or mesos instead.
> >>
> >> - Is the above correct?
> >
> >I think it is correct. At current stage, Trove needs to select a bay
> type (docker swarm, k8s or mesos). If the use case is to manage a
> single container, it is recommended to choose the docker swarm bay type.
> >
> >> - Is there a way to create a container, transparently, on whatever
> >> backend using
> >>   Magnum's API?
> >
> >At current stage, it is impossible. There is a blueprint [3] for
> proposing to unify the heterogeneity of different bay types, but we are
> in disagreement on whether Magnum should provide such functionality.
> You are welcome to contribute your use cases if you prefer to have it
> implemented.
> >
> >[3] https://blueprints.launchpad.net/magnum/+spec/unified-containers
> 
> Thanks for the clarifications Hongbin.
> 
> Would it make sense to have the containers abstraction do this for
> other bays too?

This is a controversial topic. The Magnum team has discussed it before, and
we are in disagreement. I have proposed to re-discuss it at the design
summit (requested topic #16).

[1] https://etherpad.openstack.org/p/magnum-newton-design-summit-topics

> 
> Flavio
> 
> --
> @flaper87
> Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Issues Integrating Policy Panel in Horizon

2016-04-06 Thread Bryan Sullivan
Hi Congress team,

I mentioned before that I was able to get Congress installed using a bash 
script that puts the service into an LXC container on the OPNFV Controller node 
(which has all the OpenStack services, some running bare metal and others in 
LXCs). This is per the JOID installer (MAAS/JuJu based) in OPNFV, as described 
at https://wiki.opnfv.org/display/copper/Joid, with the Congress install part 
being described at https://wiki.opnfv.org/display/copper/Congress+on+JOID 
(links to the bash scripts are there). The issue noted in the Congress team 
meeting earlier with this, was that the Policy tab for the OpenStack Dashboard 
depends upon plugin files being copied to the Horizon install folders. Thus if 
Congress is installed in a container, the installer (without modification) 
clearly can't just copy the files to the Horizon folder which is on a different 
container. So the Horizon integration doesn't occur. Congress overall is 
entirely functional, but the Horizon Policy tab is missing.

So I am trying to work around that for now by installing Congress directly into 
the Horizon container. The part I am stuck on is exactly what the process is 
for copying the Congress plugins and activating them. The only guide I see for 
this is in the plugin.sh script under "congress/devstack" in the Congress repo. 
In that script, "function _congress_setup_horizon" shows the plugin files being 
copied, plus some other code for which the purpose and runnable context is 
unclear. When I try to run this code in my install script, I have several 
issues:

1) the "Setup alias for django-admin which could be different depending on 
distro" does not work, as there are unresolved references. See the details at 
[1] below.
2) the OpenStack dashboard is left in a "server error" state, running but 
throwing error log lines per [2] below. The key to this may be the line 
"ImportError: No module named congressclient.v1"

If there are specific suggestions for how to get the plugins installed, it 
would be great to hear them and get them documented.

I am also unclear about what other dependencies there are, e.g. the reference 
to congressclient.v1. Does this mean that python-congressclient needs to be 
installed on the same server as Horizon? If so, why?
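
(Judging from the tracebacks below, one sequence that should address both
errors; an untested sketch, with paths assuming the Ubuntu-packaged
Horizon:

  # make the dashboard settings importable before django-admin runs
  export PYTHONPATH=/usr/share/openstack-dashboard:$PYTHONPATH
  export DJANGO_SETTINGS_MODULE=openstack_dashboard.settings
  # the Policy panels import congressclient.v1, so the client has to be
  # installed where Horizon runs
  pip install python-congressclient
  django-admin collectstatic --noinput
  django-admin compress --force
  service apache2 restart
)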

[1] Django error
++ django-admin collectstatic --noinput
Traceback (most recent call last):
  File "/usr/bin/django-admin", line 21, in 
management.execute_from_command_line()
  File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", 
line 385, in execute_from_command_line
utility.execute()
  File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", 
line 345, in execute
settings.INSTALLED_APPS
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 46, in 
__getattr__
self._setup(name)
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 42, in 
_setup
self._wrapped = Settings(settings_module)
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 98, in 
__init__
% (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on 
sys.path? Is there an import error in the settings file?): No module named 
openstack_dashboard.settings
++ DJANGO_SETTINGS_MODULE=openstack_dashboard.settings
++ django-admin compress --force
Traceback (most recent call last):
  File "bin/congress-server", line 33, in 
from congress.server import congress_server
  File "/home/ubuntu/git/congress/congress/server/congress_server.py", line 24, 
in 
from oslo_log import log as logging
ImportError: No module named oslo_log
Traceback (most recent call last):
  File "/usr/bin/django-admin", line 21, in 
management.execute_from_command_line()
  File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", 
line 385, in execute_from_command_line
utility.execute()
  File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", 
line 345, in execute
settings.INSTALLED_APPS
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 46, in 
__getattr__
self._setup(name)
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 42, in 
_setup
self._wrapped = Settings(settings_module)
  File "/usr/lib/python2.7/dist-packages/django/conf/__init__.py", line 98, in 
__init__
% (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'openstack_dashboard.settings' (Is it on 
sys.path? Is there an import error in the settings file?): No module named 
openstack_dashboard.settings

[2] Apache error
[Wed Apr 06 17:49:19.385710 2016] [:error] [pid 5061:tid 139941700216576] 
[remote 192.168.10.118:35264] mod_wsgi (pid=5061): Exception occurred 
processing WSGI script 
'/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
[Wed Apr 06 17:49:19.385753 2016] [:error] [pid 5061:tid 139941700216576] 
[remote 192.168.10.118:35264] Traceback (most recent call 

Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Nikhil Komawar
Need an inline clarification.

On 4/6/16 10:58 AM, Flavio Percoco wrote:
> On 06/04/16 08:26 -0400, Sean Dague wrote:
>> On 04/06/2016 04:13 AM, Markus Zoeller wrote:
>>> +1 for deprecation and removal
>>>
>>> To be honest, when I started with Nova during Kilo, I didn't get
>>> why we have those passthrough APIs. They looked like convenience APIs.
>>> A short history lesson, why they got introduced, would be cool. I only
>>> found commit [1] which looks like they were there from the beginning.
>>>
>>> References:
>>> [1]
>>> https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33
>>>
>>
>> The short history lesson is nova image API existed before glance. Glance
>> was a spin out from Nova of that API. Doing so doesn't immediately make
>> that API go away however. Especially as all these things live on
>> different ports with different end points. So the image API remained as
>> a proxy (as did volumes, baremetal, and even to some extent networks).
>>
>> It's not super clear how you deprecate and remove these things without
>> breaking a lot of people, as a lot of the libraries implement the nova
>> image resources -
>> https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb
>>
>
> We can deprecate it without removing it. We make it work with v2 and
> start
> warning people that the API is not supported anymore. We don't fix
> bugs in that
> API but tell people to use the newer version.
>
> I think that should do it, unless I'm missing something.
> Flavio
>

Is it a safe practice to not fix bugs on a publicly exposed API? What
are the recommendations for such cases?

>>
>> -Sean
>>
>> -- 
>> Sean Dague
>> http://dague.net
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Mathieu Gagné
On Tue, Apr 5, 2016 at 11:24 PM, Monty Taylor  wrote:
> On 04/05/2016 05:07 PM, Michael Still wrote:
>
>> self.glance = glance_client.Client('2', endpoint, token=token)
>
>
> There are next to zero cases where the thing you want to do is talk to
> glance using a token and an endpoint.

I used to have a use case for that one. I'm only mentioning it so you can
have a good laugh. =)

We have an internal development cloud where people can spawn a whole
private OpenStack infrastructure for testing and development purposes.
This means ~50 instances.
All instances communicate between them using example.org DNS, this
includes URLs found in the Keystone catalog.
Each stack has a private DNS server resolving those requests for
example.org, and our internal tool configures it once the stack is
provisioned.
This means each developer has their own example.org zone which resolves
differently. They can configure their own local resolvers to use the
one found in the stack if they wish.

Now the very funny part:

Developers can decide to destroy their stack as they wish, in brutal
ways. The side effect we saw is that this left orphan volumes on our
shared block storage backend, which also happens to have a maximum
number of volumes.
So to clean them up, we wrote a script that connects to each
developer's stack, tries to list volumes in Cinder, compares them with
the ones found on the block storage backend and deletes the orphan ones.
Since everybody has the same example.org DNS in their catalog, we
needed a way to tell python-cinderclient to not use the DNS found in
the catalog but the actual IP of the developer's Cinder instance.
That's the use case where we needed the endpoint argument. =)

Good news: we found an alternative solution where we override the
Python socket resolution methods and substitute the IP from there,
instead of using the endpoint argument.
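
For the curious, that kind of override is only a few lines of
monkey-patching. A minimal sketch, assuming Python clients that resolve
names through socket.getaddrinfo (the hostname and address below are made
up for illustration):

    import socket

    # Hostnames we want to pin to a specific stack's IP.
    # Both the name and the address here are illustrative only.
    PINNED = {'cinder.example.org': '10.1.2.3'}

    _orig_getaddrinfo = socket.getaddrinfo

    def pinned_getaddrinfo(host, port, *args, **kwargs):
        # Substitute the pinned IP before the real resolver runs;
        # everything else falls through to the original implementation.
        return _orig_getaddrinfo(PINNED.get(host, host), port, *args, **kwargs)

    socket.getaddrinfo = pinned_getaddrinfo

Patching socket.getaddrinfo covers requests/urllib3-based clients such as
python-cinderclient, since they resolve hostnames through it.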

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Dan Smith
> Ops != Cloud App developers. We've really made the latter's job hard,
> and often increase the difficulty for them. This pushes them
> away from OpenStack and eliminates a lot of potential users of
> OpenStack, meaning Ops have fewer users than they should. Let's not
> continue this trend. :/

The fact that these flavors were pre-canned is the reason some people
falsely believe that they should be able to specify m1.small on any
cloud and expect it to work. It would be like if we bundled cirros with
nova or glance and people assumed they could always boot a test instance
with a specifically-named cirros image. That would be crazy and is not
at all the stuff that openstack should be trying to standardize, IMHO.
The same applies to arbitrary flavor names.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
That's an interesting idea. Maybe an extension to the Heat param flavour type 
validator?

Thanks,
Kevin

From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, April 06, 2016 10:10 AM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

On 06/04/16 18:36, "Daniel P. Berrange"  wrote:

>On Wed, Apr 06, 2016 at 04:29:00PM +, Fox, Kevin M wrote:
>> It feels kind of like a defcore issue though. It's harder for app
>> developers to create stuff like heat templates intended for cross-cloud
>> use that recommend a size, m1.small, without a common reference.
>
>Even with Nova defining these default flavours, it didn't do anything
>to help solve this problem as all the public cloud operators were
>just deleting these flavours & creating their own. So it just gave
>people a false sense of standardization where none actually existed.
>

The problem is when the clouds move to m2.*, m3.* etc. and deprecate
old hardware on m1.*.

I think Heat needs more of a query engine along the lines of “give me a
flavor with at least X cores and Y GB RAM” rather than hard coding m1.large.
Core performance is another parameter that would be interesting to select,
e.g. “give me a core with at least 5 bogomips”.

I don’t see how flavor names could be standardised in the long term.

Tim

>
>Regards,
>Daniel
>--
>|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o- http://virt-manager.org :|
>|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
+1

From: Neil Jerram [neil.jer...@metaswitch.com]
Sent: Wednesday, April 06, 2016 10:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from 
nova

I hesitate to write this, even now, but I do think that OpenStack has a
problem with casual incompatibilities, such as this appears to be.  But,
frankly, I've been slapped down for expressing my opinion in the past
(on the pointless 'tenant' to 'project' change), so I just quietly
despaired when I saw that ops thread, rather than saying anything.

I haven't researched this particular case in detail, so I could be
misunderstanding its implications.  But in general my impression, from
the conversations that occur when these topics are raised, is that many
prominent OpenStack developers do not care enough about
release-to-release compatibility.  The rule for incompatible changes
should be "Just Don't", and I believe that if everyone internalized
that, they could easily find alternative approaches without breaking
compatibility.

When an incompatible change like this is made, imagine the 1000s of
operators and users around the world, with complex automation around
OpenStack, who see their deployment or testing failing, spend a couple
of hours debugging, and eventually discover 'oh, they removed m1.small'
or 'oh, they changed the glance command line'.  Given that hassle and
bad feeling, is the benefit that developers get from the incompatibility
still worth it?

I would guess there are many others like me, who generally don't say
anything because they've already observed that the prevailing sentiment
is not sufficiently on the side of compatibility.

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Dean Troyer
On Wed, Apr 6, 2016 at 12:10 PM, Tim Bell  wrote:

> I think Heat needs more of an query engine along the lines of “give me a
> flavor with at least X cores and Y GB RAM” rather than hard coding
> m1.large.
> Core performance is another parameter that would be interesting to select,
> “give me a core with at least 5 bogomips”
>

I've played with a version of OSC's "server create" command that uses
--cpu, --ram, etc. rather than --flavor to size the created VM.  It is a
tiny bit of client-side work to do this; Heat could easily do it too...
The trick is to not get carried away with spec'ing every last detail.
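
A rough sketch of that client-side matching with python-novaclient, not the
actual OSC patch (flavor objects expose vcpus, ram and disk attributes; the
"smallest match" policy below is just one reasonable choice):

    def pick_flavor(nova, min_vcpus=2, min_ram_mb=4096):
        """Return the smallest flavor meeting the given constraints.

        `nova` is an authenticated novaclient Client; this sketches the
        client-side matching described above.
        """
        candidates = [f for f in nova.flavors.list()
                      if f.vcpus >= min_vcpus and f.ram >= min_ram_mb]
        if not candidates:
            raise LookupError('no flavor with >= %d vCPUs and %d MB RAM'
                              % (min_vcpus, min_ram_mb))
        # "Smallest" here means least RAM, then fewest vCPUs.
        return min(candidates, key=lambda f: (f.ram, f.vcpus))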

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Dan Smith
> I haven't researched this particular case in detail, so I could be 
> misunderstanding its implications.  But in general my impression, 
> from the conversations that occur when these topics are raised, is 
> that many prominent OpenStack developers do not care enough about 
> release-to-release compatibility.  The rule for incompatible changes 
> should be "Just Don't", and I believe that if everyone internalized 
> that, they could easily find alternative approaches without breaking 
> compatibility.

I don't think this is an incompatible change. If you have those flavors,
they're not deleted. They're just not added to new deployments.

> When an incompatible change like this is made, imagine the 1000s of 
> operators and users around the world, with complex automation around 
> OpenStack, who see their deployment or testing failing, spend a 
> couple of hours debugging, and eventually discover 'oh, they removed 
> m1.small' or 'oh, they changed the glance command line'.

So they didn't read the release notes then? Agree that changing a
command line interface is pretty uncool, but these flavors are literally
data that is injected into your database for you. It's the only data
that fits that description. If we hadn't added these to the database for
you initially, you'd never have assumed that some arbitrary grouping of
resources would be named "m1.small" in a new deployment if you didn't
define it to be so.

> Given that hassle and bad feeling, is the benefit that developers
> get from the incompatibility still worth it?

I honestly can't understand why this is an incompatibility, so yes, I
still think it's imperative that we do this.

Out of curiosity, how is this any different from us changing defaults in
config options from release to release, or adding new features that
require you to do things before an upgrade and/or when doing a new
deployment?

--Dan
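
For operators who do want the old default flavors back after this change,
re-injecting them is a one-off script, since as noted above they are just
data in the database. A sketch with python-novaclient; the values below are
the long-standing defaults, worth double-checking against your release notes
before relying on them:

    # Historical default flavors: name -> (RAM MB, vCPUs, root disk GB).
    # These match the old out-of-the-box values; verify before use.
    DEFAULT_FLAVORS = {
        'm1.tiny':   (512,    1,   1),
        'm1.small':  (2048,   1,  20),
        'm1.medium': (4096,   2,  40),
        'm1.large':  (8192,   4,  80),
        'm1.xlarge': (16384,  8, 160),
    }

    def restore_default_flavors(nova):
        # `nova` is an admin-scoped novaclient Client.
        existing = {f.name for f in nova.flavors.list()}
        for name, (ram, vcpus, disk) in sorted(DEFAULT_FLAVORS.items()):
            if name not in existing:
                nova.flavors.create(name, ram, vcpus, disk)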

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
Ops != Cloud App developers. We've really made the latter's job hard, and 
often increase the difficulty for them. This pushes them away from OpenStack 
and eliminates a lot of potential users of OpenStack, meaning Ops have fewer 
users than they should. Let's not continue this trend. :/

Thanks,
Kevin

From: Dan Smith [d...@danplanet.com]
Sent: Wednesday, April 06, 2016 10:09 AM
To: OpenStack Development Mailing List (not for usage questions); Daniel P. 
Berrange
Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

> It was still more common than not, I think? So making it less common
> is probably a step in the wrong direction.

The responses on the operators mailing list were 100% positive for removal.

As Dan said, calling these a standard is really not reasonable. They're
just defaults, copied from AWS years ago so that people have something
ready to go out of the box.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Tim Bell

On 06/04/16 18:36, "Daniel P. Berrange"  wrote:

>On Wed, Apr 06, 2016 at 04:29:00PM +, Fox, Kevin M wrote:
>> It feels kind of like a defcore issue though. It's harder for app
>> developers to create stuff like heat templates intended for cross-cloud
>> use that recommend a size, m1.small, without a common reference.
>
>Even with Nova defining these default flavours, it didn't do anything
>to help solve this problem as all the public cloud operators were
>just deleting these flavours & creating their own. So it just gave
>people a false sense of standardization where none actually existed.
>

The problem is when the clouds move to m2.*, m3.* etc. and deprecate
old hardware on m1.*.

I think Heat needs more of a query engine along the lines of “give me a
flavor with at least X cores and Y GB RAM” rather than hard coding m1.large.
Core performance is another parameter that would be interesting to select,
e.g. “give me a core with at least 5 bogomips”.

I don’t see how flavor names could be standardised in the long term.

Tim

>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o- http://virt-manager.org :|
>|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Hayes, Graham
On 06/04/2016 17:38, Fox, Kevin M wrote:
> A lot of the problems are documented here in the problem description section:
> https://review.openstack.org/#/c/93/
>
> Thanks,
> Kevin

I am very much ++ on instance users.

> 
> From: Daniel P. Berrange [berra...@redhat.com]
> Sent: Wednesday, April 06, 2016 9:04 AM
> To: Hayes, Graham
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] Minimal secure identification of a new VM
>
> On Wed, Apr 06, 2016 at 04:03:18PM +, Hayes, Graham wrote:
>> On 06/04/2016 16:54, Gary Kotton wrote:
>>>
>>>
>>> On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:
>>>
 On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
> We have a use case where we want to register a newly spawned Virtual
> machine
> with an identity provider.
>
> Heat also has a need to provide some form of Identity for a new VM.
>
>
> Looking at the set of utilities right now, there does not seem to be a
> secure way to do this.  Injecting files does not provide a path that
> cannot
> be seen by other VMs or machines in the system.
>
> For our use case, a short lived One-Time-Password is sufficient, but for
> others, I think asymmetric key generation makes more sense.
>
> Is the following possible:
>
> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
> infrastructure (somehow) that it has done so.

 There's no currently secure channel for the guest to push information
 to Nova. The best we have is the metadata service, but we'd need to
 secure that with https, because the metadata server cannot be assumed
 to be running on the same host as the VM & so the channel is not protected
 against MITM attacks.
>>
>> I thought the metadata API traffic was taken off the network by the
>> compute node? Or is that just under the old nova-network?
>
> Nope, there's no guarantee that the metadata server will be on the
> local compute node - it might be co-located, but it equally might
> be anywhere else.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Board of Directors Meeting

2016-04-06 Thread Davanum Srinivas
Hi,

Reading the unofficial notes [1], I found one topic very interesting:
One Platform – How do we truly support containers and bare metal under
a common API with VMs? (Ironic, Nova, adjacent communities e.g.
Kubernetes, Apache Mesos etc)

Could anyone present at the meeting please expand on those few notes on
the etherpad? And how, if at all, is this feedback getting back to the
projects?

Thanks,
Dims

[1] https://etherpad.openstack.org/p/UnofficialBoardNotes-Mar29-2016

On Tue, Mar 29, 2016 at 6:43 PM, Jonathan Bryce  wrote:
> Hi everyone,
>
> Today the Board of Directors met in in person alongside the Linux Foundation 
> Collaboration Summit. It was a packed agenda with some great discussions on a 
> variety of topics.
>
> Meeting agenda: 
> https://wiki.openstack.org/wiki/Governance/Foundation/29Mar2016BoardMeeting
>
> First, the diversity working group reported some concern about a lack of 
> participation in meetings and activities, and roadblocks due to other 
> dependencies on the Foundation staff or community resources. It also seems 
> like there’s not been strong or centralized communication around all of the 
> diversity and mentoring activities already happening (for example, at the 
> Austin Summit 
> https://www.openstack.org/summit/austin-2016/mentoring-and-diversity/). We 
> agreed to find an internal champion on the Foundation staff to help support 
> the efforts and communicate activities that are already in flight—like 
> Upstream University, Outreachy internships, travel support program, etc.—that 
> support the goals and work streams of the diversity working group.
>
> Next the Board approved Rob Esker and Anni Lai as chair and vice chair 
> respectively of the New Member Committee. The Defcore Committee then 
> shared an ongoing discussion around taking more ownership of the 
> interoperability tests and asked the board for input on how to proceed.
>
> Allison Randall then presented some ideas around recognizing non-code 
> contributors as “Active Community Contributors.” The idea lines up well with 
> a new working group spun the User Committee recently spun up to recognize 
> non-ATC contributions (https://wiki.openstack.org/wiki/NonATCRecognition), 
> and the two groups will work together to help determine criteria and awards 
> for the different types of contributors that are incredibly valuable to our 
> community overall.
>
> Next we discussed future Summits, including breaking out project team design 
> sessions into a separate event, and focusing planning and working sessions at 
> the Summit on more strategic and cross-project conversations. We also 
> discussed future Summit locations as we’re working to lock down the Summits 
> for 2017, and should be able to share more specifics soon.
>
> Russell Bryant then presented an update to the OpenStack mission statement 
> proposed by the Technical Committee to specifically address interoperability 
> and end users. The Board approved the following updated mission statement 
> that the Technical Committee had previously approved:
>
> "To produce a ubiquitous Open Source Cloud Computing platform that is easy to 
> use, simple to implement, interoperable between deployments, works well at 
> all scales, and meets the needs of users and operators of both public and 
> private clouds."
>
> Finally, Mark Collier kicked off a forum for more strategic discussions about 
> the future of OpenStack. The Board had a lightning round of brainstorming 
> sessions to discuss interoperability & product strategy, NFV and networking, 
> one platform for containers, VMs & bare metal, and community health & 
> culture. The conversations will continue at the Austin Summit board meeting. 
> Look forward to seeing everyone there the last week of April!
>
> Thanks,
> Jonathan
> ___
> Foundation mailing list
> foundat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Dan Smith
> It was still more common than not, I think? So making it less common
> is probably a step in the wrong direction.

The responses on the operators mailing list were 100% positive for removal.

As Dan said, calling these a standard is really not reasonable. They're
just defaults, copied from AWS years ago so that people have something
ready to go out of the box.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
It was still more common than not, I think? So making it less common is probably 
a step in the wrong direction.

Thanks,
Kevin 

From: Daniel P. Berrange [berra...@redhat.com]
Sent: Wednesday, April 06, 2016 9:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

On Wed, Apr 06, 2016 at 04:29:00PM +, Fox, Kevin M wrote:
> It feels kind of like a defcore issue though. It's harder for app
> developers to create stuff like heat templates intended for cross-cloud
> use that recommend a size, m1.small, without a common reference.

Even with Nova defining these default flavours, it didn't do anything
to help solve this problem as all the public cloud operators were
just deleting these flavours & creating their own. So it just gave
people a false sense of standardization where none actually existed.


Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Rich Megginson

On 04/06/2016 10:38 AM, Hayes, Graham wrote:

On 06/04/2016 17:17, Rich Megginson wrote:

On 04/06/2016 02:55 AM, Hayes, Graham wrote:

On 06/04/16 03:09, Adam Young wrote:

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
  A. HTTPS for web services
  B. TLS for the message bus
  C. TLS for communication with the Database.
2. Identity for all Actors in the system:
 A.  API services
 B.  Message producers and consumers
 C.  Database consumers
 D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)





There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since to support FreeIPA, but it
requires that it is the point of truth for DNS information, as does
Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals

Let's table that for now. No reason they should not be able to
interoperate somehow.

Without work being done by FreeIPA (to enable the XFR interface on the
bind server), or by us (Designate) re-designing our DNS driver interface,
they will not be able to interoperate.

It's going to be very difficult for FreeIPA to support XFR for the
"main" zone (i.e. the zone in which records are actively
updated/maintained in LDAP and kept in sync globally).  It might be
possible to make it work for a child/sub zone that LDAP doesn't have to
pay much attention to, and let that zone be updated by Designate via
XFR.  I suppose Designate has the same problem with AD DNS integration?

Yes, we do. The MS DNS server has support for other secondary zones
that we could use - that is what we did in the pre Kilo driver.

(as a disclaimer, the msdns driver is known-broken, and unless there
is some resurgence of interest in it, it will be deleted soon.)


If you want to discuss this more, we can take the discussion to
freeipa-us...@redhat.com

Will spin up a thread there - thanks.


The ipa/nova join functionality allows new VM hosts to be automatically
registered with IPA, including the DNS records for floating IP
assignments, bypassing Designate.

Ah, I did not realise there was work done on that. There was quite a bit
of work done this cycle to tie nova + neutron + designate together by
adding a "dns_name" to neutron ports - that is what we focused on.


The work that was done for nova/ipa integration:
* is specific to ipa - it uses ipa specific apis, files, commands, etc.
* does a lot more than just DNS registration - it configures the system 
to allow ssh into it, to allow kerberos auth, HBAC including rules 
based on hostgroup, etc. - this is the demo I did for OpenStack Tokyo: 
http://richmegginson.livejournal.com/27573.html


Rob Crittenden, Juan Osorio Robles, and Adam Young have helped with this 
effort and have extended it since then.


It unfortunately relies on unsupported internal nova apis (hooks), and 
there will be a discussion in Austin about how to do this going forward.







2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
thus far has been Certificate management.  This provides a Dogtag server
for Certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an Identity Provider upon launch.  This gives that efforts
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned Centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.








Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Fox, Kevin M
A lot of the problems are documented here in the problem description section:
https://review.openstack.org/#/c/93/

Thanks,
Kevin

From: Daniel P. Berrange [berra...@redhat.com]
Sent: Wednesday, April 06, 2016 9:04 AM
To: Hayes, Graham
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Minimal secure identification of a new VM

On Wed, Apr 06, 2016 at 04:03:18PM +, Hayes, Graham wrote:
> On 06/04/2016 16:54, Gary Kotton wrote:
> >
> >
> > On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:
> >
> >> On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
> >>> We have a use case where we want to register a newly spawned Virtual
> >>> machine
> >>> with an identity provider.
> >>>
> >>> Heat also has a need to provide some form of Identity for a new VM.
> >>>
> >>>
> >>> Looking at the set of utilities right now, there does not seem to be a
> >>> secure way to do this.  Injecting files does not provide a path that
> >>> cannot
> >>> be seen by other VMs or machines in the system.
> >>>
> >>> For our use case, a short lived One-Time-Password is sufficient, but for
> >>> others, I think asymmetric key generation makes more sense.
> >>>
> >>> Is the following possible:
> >>>
> >>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
> >>> infrastructure (somehow) that it has done so.
> >>
> >> There's no currently secure channel for the guest to push information
> >> to Nova. The best we have is the metadata service, but we'd need to
> >> secure that with https, because the metadata server cannot be assumed
> >> to be running on the same host as the VM & so the channel is not protected
> >> against MITM attacks.
>
> I thought the metadata API traffic was taken off the network by the
> compute node? Or is that just under the old nova-network?

Nope, there's no guarantee that the metadata server will be on the
local compute node - it might be co-located, but it equally might
be anywhere else.

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Daniel P. Berrange
On Wed, Apr 06, 2016 at 04:29:00PM +, Fox, Kevin M wrote:
> It feels kind of like a defcore issue though. It's harder for app
> developers to create stuff like heat templates intended for cross-cloud
> use that recommend a size, m1.small, without a common reference.

Even with Nova defining these default flavours, it didn't do anything
to help solve this problem as all the public cloud operators were
just deleting these flavours & creating their own. So it just gave
people a false sense of standardization where none actually existed.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Hayes, Graham
On 06/04/2016 17:17, Rich Megginson wrote:
> On 04/06/2016 02:55 AM, Hayes, Graham wrote:
>> On 06/04/16 03:09, Adam Young wrote:
>>> On 04/05/2016 08:02 AM, Hayes, Graham wrote:
 On 02/04/2016 22:33, Adam Young wrote:
> I finally have enough understanding of what is going on with Tripleo to
> reasonably discuss how to implement solutions for some of the main
> security needs of a deployment.
>
>
> FreeIPA is an identity management solution that can provide support for:
>
> 1. TLS on all network communications:
>  A. HTTPS for web services
>  B. TLS for the message bus
>  C. TLS for communication with the Database.
> 2. Identity for all Actors in the system:
> A.  API services
> B.  Message producers and consumers
> C.  Database consumers
> D.  Keystone service users
> 3. Secure  DNS DNSSEC
> 4. Federation Support
> 5. SSH Access control to Hosts for both undercloud and overcloud
> 6. SUDO management
> 7. Single Sign On for Applications running in the overcloud.
>
>
> The main pieces of FreeIPA are
> 1. LDAP (the 389 Directory Server)
> 2. Kerberos
> 3. DNS (BIND)
> 4. Certificate Authority (CA) server (Dogtag)
> 5. WebUI/Web Service Management Interface (HTTPD)
>


> There are a couple ongoing efforts that will tie in with this:
>
> 1. Designate should be able to use the DNS from FreeIPA.  That was the
> original implementation.
 Designate cannot use FreeIPA - we haven't had a driver for it since
 Kilo.

 There have been various efforts since to support FreeIPA, but it
 requires that it is the point of truth for DNS information, as does
 Designate.

 If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
 then we would be fine, but unfortunately it does not.

 [1] Actually points out that the goal of FreeIPA's DNS integration
 "... is NOT to provide general-purpose DNS server. Features beyond
 easing FreeIPA deployment and maintenance are explicitly out of scope."

 1 - http://www.freeipa.org/page/DNS#Goals
>>>
>>> Let's table that for now. No reason they should not be able to
>>> interoperate somehow.
>> Without work being done by FreeIPA (to enable the XFR interface on the
>> bind server), or by us (Designate) re-designing our DNS driver interface,
>> they will not be able to interoperate.
>
> It's going to be very difficult for FreeIPA to support XFR for the
> "main" zone (i.e. the zone in which records are actively
> updated/maintained in LDAP and kept in sync globally).  It might be
> possible to make it work for a child/sub zone that LDAP doesn't have to
> pay much attention to, and let that zone be updated by Designate via
> XFR.  I suppose Designate has the same problem with AD DNS integration?

Yes, we do. The MS DNS server has support for other secondary zones
that we could use - that is what we did in the pre Kilo driver.

(as a disclaimer, the msdns driver is known-broken, and unless there
is some resurgence of interest in it, it will be deleted soon.)
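
For reference, the notify/XFR mechanism under discussion is easy to
exercise from Python with dnspython. A minimal sketch (the server address
and zone below are placeholders) of the kind of transfer a backend would
need to answer for Designate's driver interface to rely on it:

    import dns.query
    import dns.zone

    ZONE = 'example.org.'     # placeholder zone
    SERVER = '192.0.2.1'      # placeholder DNS server address

    def fetch_zone():
        # Perform a full zone transfer (AXFR) and build a zone object.
        zone = dns.zone.from_xfr(dns.query.xfr(SERVER, ZONE))
        for name, node in zone.nodes.items():
            print(name, node.to_text(name))
        return zone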

> If you want to discuss this more, we can take the discussion to
> freeipa-us...@redhat.com

Will spin up a thread there - thanks.

> The ipa/nova join functionality allows new VM hosts to be automatically
> registered with IPA, including the DNS records for floating IP
> assignments, bypassing Designate.

Ah, I did not realise there was work done on that. There was quite a bit
of work done this cycle to tie nova + neutron + designate together by
adding a "dns_name" to neutron ports - that is what we focused on.

>>
>>

> 2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
> thus far has been Certificate management.  This provides a Dogtag server
> for Certs.
>
> 3. Rob Crittenden has been working on auto-registration of virtual
> machines with an Identity Provider upon launch.  This gives that efforts
> an IdM to use.
>
> 4. Keystone can make use of the Identity store for administrative users
> in their own domain.
>
> 5. Many of the compliance audits have complained about cleartext
> passwords in config files. This removes most of them.  MySQL supports
> X509 based authentication today, and there is Kerberos support in the
> works, which should remove the last remaining cleartext Passwords.
>
> I mentioned Centralized SUDO and HBAC.  These are both tools that may be
> used by administrators if so desired on the install. I would recommend
> that they be used, but there is no requirement to do so.
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Fox, Kevin M
It feels kind of like a defcore issue though. It's harder for app developers to 
create stuff like heat templates intended for cross-cloud use that recommend a 
size, m1.small, without a common reference.

We keep making it hard for app developers to target OpenStack, so they don't 
join, and then they aren't around to complain when OpenStack makes their lives 
harder. We need to encourage ease of development on top of the platform.

Thanks,
Kevin

From: Sean Dague [s...@dague.net]
Sent: Wednesday, April 06, 2016 3:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
>
>
> Le 06/04/2016 06:44, Qiming Teng a écrit :
>> Not an expert of Nova but I am really shocked by such a change. Because
>> I'm not a Nova expert, I don't have a say on the *huge* efforts in
>> maintaining some builtin/default flavors. As a user I don't care where
>> the data have been stored, but I do care that they are gone. They are
>> gone because they **WILL** be supported by devstack. They are gone with
>> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
>> thanks to the depends-on tag). They are gone in hope that all deployment
>> tools will know this when they fail, or fortunately they read this email,
>> or they were reviewing nova patches.
>>
>> It would be a little nicer to initiate a discussion on the mailing list
>> before such a change is introduced.
>
>
> It was communicated to operators, with no strong objections :
> http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html

Not only were there no strong objections, but a general "yes please,
that simplifies our life".

> You can also see that https://review.openstack.org/#/c/300127/ has
> three items :
>  - a DocImpact tag creating a Launchpad bug for documentation about that
>  - a reno file meaning that our release notes will also provide some
> comments about that
>  - a Depends-On tag (like you said) on a devstack change meaning that
> people using devstack won't see a modified behavior.
>
> Not sure what more you need.

The default flavors were originally hardcoded in Nova (in the initial
commit) -
https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
 and moved into the db 5 years ago to be a copy of the EC2 flavors at
the time -
https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
Those flavors were meant to be examples, not the final story.

All the public clouds delete these and do their own thing, as do I
expect many of the products. Any assumption that software or users have
that these will exist is a bad assumption.

It is a big change, which is why it's being communicated on Mailing
Lists in addition to in the release notes so that people have time to
make any of their tooling not assume these flavors by name will be
there, or to inject them yourself if you are sure you need them (as was
done in the devstack case).

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Flavio Percoco

On 06/04/16 15:54 +, Hongbin Lu wrote:




-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: April-06-16 9:14 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Containers lifecycle management


Greetings,

I'm fairly new to Magnum and I hope my comments below are accurate.

After reading some docs, links and other references, I seem to
understand the Magnum team has a debate on whether providing
abstraction for containers lifecycle is something the project should do
or not. There's a patch that attempts to remove PODs and some debates
on whether `container-*` commands are actually useful or not.


FYI, according to the latest decision [1][2], below is what it will be:
* The k8s abstractions (pod/service/replication controller) will be removed. 
Users will need to use the native tool (i.e. kubectl) to consume the k8s service.
* The docker swarm abstraction (container) will be moved to a separate driver. 
In particular, there will be two drivers for operators to select from. The first 
driver will have minimal functionality (i.e. provision/manage/delete the swarm 
cluster). The second driver will have additional APIs to manage container 
resources in the swarm bay.

[1] https://wiki.openstack.org/wiki/Magnum/NativeAPI
[2] https://etherpad.openstack.org/p/magnum-native-api



Based on the above, I wanted to understand what would be the
recommended way for services willing to consume magnum to run
containers? I've been digging a bit into what would be required for
Trove to consume Magnum and based on the above, it seems the answer is
that it should support either docker, k8s or mesos instead.

- Is the above correct?


I think it is correct. At the current stage, Trove needs to select a bay type 
(docker swarm, k8s or mesos). If the use case is to manage a single container, 
it is recommended to choose the docker swarm bay type.


- Is there a way to create a container, transparently, on whatever
backend using
  Magnum's API?


At the current stage, it is impossible. There is a blueprint [3] proposing to 
unify the heterogeneity of different bay types, but we are in disagreement on 
whether Magnum should provide such functionality. You are welcome to contribute 
your use cases if you prefer to have it implemented.

[3] https://blueprints.launchpad.net/magnum/+spec/unified-containers


Thanks for the clarifications Hongbin.

Would it make sense to have the containers abstraction do this for other bays 
too?

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

2016-04-06 Thread Fox, Kevin M
Ok. I'll bite. :)

Security is like a castle: more walls provide more protection. Relying on a 
single outer wall tends to bite folks, because they assume the first wall 
won't ever be breached.

NAT is one type of wall. It is not to be used by itself, but it provides 
additional protection.

For example, I recently witnessed an organization misconfigure their firewall 
rules by accident, and all of their private servers were suddenly accessible 
from the internet. If those same machines had been on private NATed address 
space, the failure in the firewall would not have immediately exposed all of 
the private servers to unexpected attack; they would have been protected by 
the fact that the IPs weren't routable.

NAT is just another tool in the toolbox. It's not good or evil, but it is 
useful, so stop trying to kill it.

Thanks,
Kevin


From: Salvatore Orlando [salv.orla...@gmail.com]
Sent: Wednesday, April 06, 2016 1:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Floating IPs and Public IPs are not equivalent

Hey! This sounds like bike-shedding & yak-shaving... totally my thing!

It is true that the Neutron model currently kind of forces a two-level 
topology, with the external network being a sort of special case.
Regardless, this does not mean you cannot assign public IPs directly to your
instances - Neutron routers also work without NAT.

Shall we start a discussion on the evils of NAT now?
To me it is one of those things like landline telephones. You don't really need
them, you know how to do without them, but for some reason you keep using them
and perceiving them as a fundamental service.

As for the issue Kevin pointed out, that's a limitation of the current 
reference implementation that if overcome will probably simplify the Neutron 
control plane as well.

Salvatore

On 2 April 2016 at 00:05, Kevin Benton wrote:
The main barrier to this is that we need to stop using the 
'external_network_bridge = br-ex' option for the L3 agent and define a bridge 
mapping on the L2 agent. Otherwise the external network is treated as a special 
case and the VMs won't actually be able to get wired into the external network.
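
Concretely, that means something along these lines in the agent config
files (a sketch; the exact file names and the "public" physnet label vary
by deployment, so treat the specifics as assumptions):

    # l3_agent.ini: stop special-casing the external bridge
    [DEFAULT]
    external_network_bridge =

    # openvswitch_agent.ini: wire the external net like any provider network
    [ovs]
    bridge_mappings = public:br-ex

With the bridge mapping in place, the external network is handled by the
regular L2 agent plumbing, so VMs can attach to it directly.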

On Thu, Mar 31, 2016 at 12:58 PM, Sean Dague wrote:
On 03/31/2016 01:23 PM, Monty Taylor wrote:
> Just a friendly reminder to everyone - floating IPs are not synonymous
> with Public IPs in OpenStack.
>
> The most common (and growing, thank you to the beta of the new
> Dreamcompute cloud) configuration for Public Clouds is directly assign
> public IPs to VMs without requiring a user to create a floating IP.
>
> I have heard that the require-floating-ip model is very common for
> private clouds. While I find that even stranger, as the need to run NAT
> inside of another NAT is bizarre, it is what it is.
>
> Both models are common enough that pretty much anything that wants to
> consume OpenStack VMs needs to account for both possibilities.
>
> It would be really great if we could get the default config in devstack
> to be to have a shared direct-attached network that can also have a
> router attached to it and provider floating ips, since that scenario
> actually allows interacting with both models (and is actually the most
> common config across the OpenStack public clouds)

If someone has the pattern for what that config looks like,
especially if it could work on single interface machines, that would be
great.

The current defaults in devstack are mostly there for legacy reasons
(and because they work everywhere), and because of the activation energy
required to get to a new, robust, works-everywhere setup.

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Rich Megginson

On 04/06/2016 02:55 AM, Hayes, Graham wrote:

On 06/04/16 03:09, Adam Young wrote:

On 04/05/2016 08:02 AM, Hayes, Graham wrote:

On 02/04/2016 22:33, Adam Young wrote:

I finally have enough understanding of what is going on with Tripleo to
reasonably discuss how to implement solutions for some of the main
security needs of a deployment.


FreeIPA is an identity management solution that can provide support for:

1. TLS on all network communications:
A. HTTPS for web services
B. TLS for the message bus
C. TLS for communication with the Database.
2. Identity for all Actors in the system:
   A.  API services
   B.  Message producers and consumers
   C.  Database consumers
   D.  Keystone service users
3. Secure  DNS DNSSEC
4. Federation Support
5. SSH Access control to Hosts for both undercloud and overcloud
6. SUDO management
7. Single Sign On for Applications running in the overcloud.


The main pieces of FreeIPA are
1. LDAP (the 389 Directory Server)
2. Kerberos
3. DNS (BIND)
4. Certificate Authority (CA) server (Dogtag)
5. WebUI/Web Service Management Interface (HTTPD)





There are a couple ongoing efforts that will tie in with this:

1. Designate should be able to use the DNS from FreeIPA.  That was the
original implementation.

Designate cannot use FreeIPA - we haven't had a driver for it since
Kilo.

There have been various efforts since to support FreeIPA, but it
requires that it is the point of truth for DNS information, as does
Designate.

If FreeIPA supported the traditional Notify and Zone Transfer mechanisms
then we would be fine, but unfortunately it does not.

[1] Actually points out that the goal of FreeIPA's DNS integration
"... is NOT to provide general-purpose DNS server. Features beyond
easing FreeIPA deployment and maintenance are explicitly out of scope."

1 - http://www.freeipa.org/page/DNS#Goals


Let's table that for now. No reason they should not be able to
interoperate somehow.

Without work being done by FreeIPA (to enable the XFR interface on the
bind server), or by us (Designate) re-designing our DNS driver interface,
they will not be able to interoperate.


It's going to be very difficult for FreeIPA to support XFR for the 
"main" zone (i.e. the zone in which records are actively 
updated/maintained in LDAP and kept in sync globally).  It might be 
possible to make it work for a child/sub zone that LDAP doesn't have to 
pay much attention to, and let that zone be updated by Designate via 
XFR.  I suppose Designate has the same problem with AD DNS integration?  
If you want to discuss this more, we can take the discussion to 
freeipa-us...@redhat.com


The ipa/nova join functionality allows new VM hosts to be automatically 
registered with IPA, including the DNS records for floating IP 
assignments, bypassing Designate.








2.  Juan Antonio Osorio  has been working on TLS everywhere.  The issue
thus far has been Certificate management.  This provides a Dogtag server
for Certs.

3. Rob Crittenden has been working on auto-registration of virtual
machines with an Identity Provider upon launch.  This gives that efforts
an IdM to use.

4. Keystone can make use of the Identity store for administrative users
in their own domain.

5. Many of the compliance audits have complained about cleartext
passwords in config files. This removes most of them.  MySQL supports
X509 based authentication today, and there is Kerberos support in the
works, which should remove the last remaining cleartext Passwords.

I mentioned Centralized SUDO and HBAC.  These are both tools that may be
used by administrators if so desired on the install. I would recommend
that they be used, but there is no requirement to do so.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Hayes, Graham
On 06/04/2016 17:04, Daniel P. Berrange wrote:
> On Wed, Apr 06, 2016 at 04:03:18PM +, Hayes, Graham wrote:
>> On 06/04/2016 16:54, Gary Kotton wrote:
>>>
>>>
>>> On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:
>>>
 On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
> We have a use case where we want to register a newly spawned Virtual
> machine
> with an identity provider.
>
> Heat also has a need to provide some form of Identity for a new VM.
>
>
> Looking at the set of utilities right now, there does not seem to be a
> secure way to do this.  Injecting files does not provide a path that
> cannot
> be seen by other VMs or machines in the system.
>
> For our use case, a short lived One-Time-Password is sufficient, but for
> others, I think asymmetric key generation makes more sense.
>
> Is the following possible:
>
> >>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
> infrastructure (somehow) that it has done so.

 There's no currently secure channel for the guest to push information
 to Nova. The best we have is the metadata service, but we'd need to
 secure that with https, because the metadata server cannot be assumed
 to be running on the same host as the VM & so the channel is not protected
 against MITM attacks.
>>
>> I thought the metadata API traffic was taken off the network by the
>> compute node? Or is that just under the old nova-network?
>
> Nope, there's no guarantee that the metadata server will be on the
> local compute node - it might be co-located, but it equally might
> be anywhere else.
>

Sorry - I knew the actual HTTP server was elsewhere, but I thought the
network traffic was taken out of the tenant space at the compute node,
and then moved to the underlying cloud infrastructure networking.

If that network is MITM'd there could be bigger issues.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Daniel P. Berrange
On Wed, Apr 06, 2016 at 04:03:18PM +, Hayes, Graham wrote:
> On 06/04/2016 16:54, Gary Kotton wrote:
> >
> >
> > On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:
> >
> >> On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
> >>> We have a use case where we want to register a newly spawned Virtual
> >>> machine
> >>> with an identity provider.
> >>>
> >>> Heat also has a need to provide some form of Identity for a new VM.
> >>>
> >>>
> >>> Looking at the set of utilities right now, there does not seem to be a
> >>> secure way to do this.  Injecting files does not provide a path that
> >>> cannot
> >>> be seen by other VMs or machines in the system.
> >>>
> >>> For our use case, a short lived One-Time-Password is sufficient, but for
> >>> others, I think asymmetric key generation makes more sense.
> >>>
> >>> Is the following possible:
> >>>
> >>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
> >>> infrastructure (somehow) that it has done so.
> >>
> >> There's no currently secure channel for the guest to push information
> >> to Nova. The best we have is the metadata service, but we'd need to
> >> secure that with https, because the metadata server cannot be assumed
> >> to be running on the same host as the VM & so the channel is not protected
> >> against MITM attacks.
> 
> I thought the metadata API traffic was taken off the network by the
> compute node? Or is that just under the old nova-network?

Nope, there's no guarantee that the metadata server will be on the
local compute node - it might be co-located, but it equally might
be anywhere else.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Hayes, Graham
On 06/04/2016 16:54, Gary Kotton wrote:
>
>
> On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:
>
>> On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
>>> We have a use case where we want to register a newly spawned Virtual
>>> machine
>>> with an identity provider.
>>>
>>> Heat also has a need to provide some form of Identity for a new VM.
>>>
>>>
>>> Looking at the set of utilities right now, there does not seem to be a
>>> secure way to do this.  Injecting files does not provide a path that
>>> cannot
>>> be seen by other VMs or machines in the system.
>>>
>>> For our use case, a short lived One-Time-Password is sufficient, but for
>>> others, I think asymmetric key generation makes more sense.
>>>
>>> Is the following possible:
>>>
>>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
>>> infrastructure (somehow) that it has done so.
>>
>> There's no currently secure channel for the guest to push information
>> to Nova. The best we have is the metadata service, but we'd need to
>> secure that with https, because the metadata server cannot be assumed
>> to be running on the same host as the VM & so the channel is not protected
>> against MITM attacks.

I thought the metadata API traffic was taken off the network by the
compute node? Or is that just under the old nova-network?

>> Also currently the metadata server is readonly with the guest pulling
>> information from it - it doesn't currently allow guests to push
>> information
>> into it. This is nice because the metadata servers could theoretically be
>> locked down to prevent many interactions with the rest of nova - it should
>> only need read-only access to info about the guests it is serving. If we
>> turn the metadata server into a bi-directional service which can update
>> information about guests, then it opens it up as a more attractive avenue
>> of attack for a guest OS trying to breach the host infra. This is a fairly
>> general concern with any approach where the guest has to have the ability
>> to push information back into Nova.
>
> What about having metadata support HTTPS?

How do you get the CA cert on to the VM then?

It is more difficult than it seems.
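
To make the chicken-and-egg concrete, here is a minimal Python sketch of
the guest side (the plain-HTTP endpoint and path below are the real
metadata service; the HTTPS variant and the CA bundle path are
hypothetical, for illustration only):

import requests

# Today: plain HTTP - the guest has no way to authenticate the server.
md = requests.get(
    "http://169.254.169.254/openstack/latest/meta_data.json").json()

# Hypothetical HTTPS variant: verification only helps if the CA cert was
# already delivered to the guest over some *other* trusted channel.
md = requests.get(
    "https://169.254.169.254/openstack/latest/meta_data.json",
    verify="/etc/pki/metadata-ca.pem").json()  # assumed CA bundle path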



>>
>>> 2.  Nova Compute reads the public Key off the device and sends it to
>>> conductor, which would then associate the public key with the server?
>>>
>>> 3.  A third party system could then validate the association of the
>>> public
>>> key and the server, and build a work flow based on some signed document
>>> from
>>> the VM?
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org              -o-             http://virt-manager.org :|
>> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Fox, Kevin M
Yeah. I'm all for something like that.  The solution just needs to meet the 
requirements listed in https://review.openstack.org/93

That solution could also probably be reused for an ssh key. The security of 
openssh VMs + nova is pretty bad.

There should be some kind of way for the VM to post its ssh pubkey to nova, and 
then have a nova ssh command on the client that pulls the key out of the nova 
api and updates your known_hosts with it, to prevent all the man-in-the-middle 
potential we've lived with for a long time (a rough sketch of the client half 
follows).
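
Hypothetically - none of this exists in Nova today - assume the guest had 
published its host key into the server's Nova metadata under "ssh_host_key". 
The client half could then look like:

import os
import subprocess

def nova_ssh(nova, server_id, ip, user="cloud-user"):
    server = nova.servers.get(server_id)           # real novaclient call
    hostkey = server.metadata.get("ssh_host_key")  # assumed convention
    if not hostkey:
        raise RuntimeError("server has not published a host key")
    with open(os.path.expanduser("~/.ssh/known_hosts"), "a") as f:
        f.write("%s %s\n" % (ip, hostkey))  # trust the API, not first-use
    subprocess.check_call(["ssh", "%s@%s" % (user, ip)])

The missing piece is the secure guest-to-Nova channel that would populate 
that metadata in the first place.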

Thanks,
Kevin



From: Adam Young [ayo...@redhat.com]
Sent: Tuesday, April 05, 2016 7:02 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] FreeIPA integration

On 04/05/2016 11:42 AM, Fox, Kevin M wrote:
Yeah, and they just deprecated vendor data plugins too, which eliminates my 
other workaround. :/

We need to really discuss this problem at the summit and get a viable path 
forward. Its just getting worse. :/

Thanks,
Kevin

From: Juan Antonio Osorio [jaosor...@gmail.com]
Sent: Tuesday, April 05, 2016 5:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] FreeIPA integration



On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M wrote:
This sounds suspiciously like, "how do you get a secret to the instance to get 
a secret from the secret store" issue :)
Yeah, sounds pretty familiar. We were using the nova hooks mechanism for this 
purpose, but it was deprecated recently. So bummer :/

Nova instance user spec again?

Thanks,
Kevin

Yep, and we need a solution.  I think the right solution is a keypair generated 
on the instance, with the public key posted by the instance to the hypervisor 
and stored with the instance data in the database.  I wrote that to the mailing 
list earlier today.

A basic rule of a private key is that it never leaves the machine on which it 
is generated.  The rest falls out from there.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-06-16 9:14 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [magnum] Containers lifecycle management
> 
> 
> Greetings,
> 
> I'm fairly new to Magnum and I hope my comments below are accurate.
> 
> After reading some docs, links and other references, I seem to
> understand the Magnum team has a debate on whether providing
> abstraction for containers lifecycle is something the project should do
> or not. There's a patch that attempts to remove PODs and some debates
> on whether `container-*` commands are actually useful or not. 

FYI, according to the latest decision [1][2], below is what it will be:
* The k8s abstractions (pod/service/replication controller) will be removed. 
Users will need to use the native tool (i.e. kubectl) to consume the k8s service.
* The docker swarm abstraction (container) will be moved to a separate driver. 
In particular, there will be two drivers for operators to select from. The first 
driver will have minimal functionality (i.e. provision/manage/delete the swarm 
cluster). The second driver will have additional APIs to manage container 
resources in the swarm bay.

[1] https://wiki.openstack.org/wiki/Magnum/NativeAPI
[2] https://etherpad.openstack.org/p/magnum-native-api

> 
> Based on the above, I wanted to understand what would be the
> recommended way for services willing to consume magnum to run
> containers? I've been digging a bit into what would be required for
> Trove to consume Magnum and based on the above, it seems the answer is
> that it should support either docker, k8s or mesos instead.
> 
> - Is the above correct?

I think it is correct. At the current stage, Trove needs to select a bay type 
(docker swarm, k8s or mesos). If the use case is to manage a single container, 
it is recommended to choose the docker swarm bay type.
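
For illustration, a rough sketch with python-magnumclient (treat the
argument names as indicative rather than authoritative, and the bay model
UUID is a placeholder):

from magnumclient.v1 import client

magnum = client.Client(username="trove", api_key="secret",
                       project_name="service",
                       auth_url="http://keystone:5000/v2.0")

# Boot a one-node swarm bay from a pre-created swarm baymodel.
bay = magnum.bays.create(name="trove-swarm",
                         baymodel_id="<swarm-baymodel-uuid>",
                         node_count=1)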

> - Is there a way to create a container, transparently, on whatever
> backend using
>   Magnum's API?

At the current stage, it is not possible. There is a blueprint [3] proposing to 
unify the heterogeneity of different bay types, but we are in disagreement on 
whether Magnum should provide such functionality. You are welcome to contribute 
your use cases if you prefer to have it implemented.

[3] https://blueprints.launchpad.net/magnum/+spec/unified-containers

> 
> Sorry if I got something wrong,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Fox, Kevin M
Nova Instance user spec.
https://review.openstack.org/93

We really really need to solve this. it is affecting almost every project in 
one way or another.

Can we please get a summit session dedicated to the topic? Last summit we had 
only 10 minutes. :/

Thanks,
Kevin


From: Adam Young [ayo...@redhat.com]
Sent: Tuesday, April 05, 2016 3:00 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [nova] Minimal secure identification of a new VM

We have a use case where we want to register a newly spawned Virtual
machine with an identity provider.

Heat also has a need to provide some form of Identity for a new VM.


Looking at the set of utilities right now, there does not seem to be a
secure way to do this.  Injecting files does not provide a path that
cannot be seen by other VMs or machines in the system.

For our use case, a short lived One-Time-Password is sufficient, but for
others, I think asymmetric key generation makes more sense.

Is the following possible:

1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
infrastructure (somehow) that it has done so.

2.  Nova Compute reads the public Key off the device and sends it to
conductor, which would then associate the public key with the server?

3.  A third party system could then validate the association of the
public key and the server, and build a work flow based on some signed
document from the VM?
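
For step 1, a minimal sketch of the in-guest half, assuming a recent
version of the "cryptography" library is available in the image (how the
public key then reaches Nova is exactly the open question - printing it
is only a placeholder):

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The private key never leaves the machine on which it was generated.
with open("/etc/pki/instance-identity.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption()))

# Public half, which a third party could later verify signatures against.
pub = key.public_key().public_bytes(
    serialization.Encoding.OpenSSH,
    serialization.PublicFormat.OpenSSH)
print(pub.decode())  # placeholder for "notify the Nova infrastructure somehow"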





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Update on os-vif progress (port binding negotiation)

2016-04-06 Thread Sergey Belous
Hi all.

I want to share a status update on os-vif's functional test gate check job.

Currently, all the necessary changes are merged and there is a way to run all 
the tempest tests on devstack in our CI. The name of the job is 
gate-tempest-dsvm-nova-os-vif-nv; it's an experimental job that can be 
triggered on any patch to Nova with the "check experimental" command.


Best Regards,
Sergey Belous

> On 18 Feb 2016, at 23:06, Sergey Belous  wrote:
> 
> Thanks, Sean. I'll try to keep you and everybody informed about progress on 
> those.
> 
> 2016-02-18 20:20 GMT+03:00 Sean M. Collins:
> Jay Pipes wrote:
> > From our Mirantis team, I've asked Sergey Belous to handle any necessary
> > changes to devstack and project-config (for a functional test gate check
> > job).
> 
> I'll keep an eye out in my DevStack review queue for these patches and
> will make sure to review them promptly.
> 
> --
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> -- 
> Best Regards,
> Sergey Belous

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Minimal secure identification of a new VM

2016-04-06 Thread Gary Kotton


On 4/6/16, 12:42 PM, "Daniel P. Berrange"  wrote:

>On Tue, Apr 05, 2016 at 06:00:55PM -0400, Adam Young wrote:
>> We have a use case where we want to register a newly spawned Virtual
>>machine
>> with an identity provider.
>> 
>> Heat also has a need to provide some form of Identity for a new VM.
>> 
>> 
>> Looking at the set of utilities right now, there does not seem to be a
>> secure way to do this.  Injecting files does not provide a path that
>>cannot
>> be seen by other VMs or machines in the system.
>> 
>> For our use case, a short lived One-Time-Password is sufficient, but for
>> others, I think asymmetric key generation makes more sense.
>> 
>> Is the following possible:
>> 
>> 1.  In cloud-init, the VM generates a Keypair, then notifies the Nova
>> infrastructure (somehow) that it has done so.
>
>There's no currently secure channel for the guest to push information
>to Nova. The best we have is the metadata service, but we'd need to
>secure that with https, because the metadata server cannot be assumed
>to be running on the same host as the VM & so the channel is not protected
>against MITM attacks.
>
>Also currently the metadata server is readonly with the guest pulling
>information from it - it doesn't currently allow guests to push
>information
>into it. This is nice because the metadata servers could theoretically be
>locked down to prevent many interactions with the rest of nova - it should
>only need read-only access to info about the guests it is serving. If we
>turn the metadata server into a bi-directional service which can update
>information about guests, then it opens it up as a more attractive avenue
>of attack for a guest OS trying to breach the host infra. This is a fairly
>general concern with any approach where the guest has to have the ability
>to push information back into Nova.

What about having metadata support HTTPS?

>
>> 2.  Nova Compute reads the public Key off the device and sends it to
>> conductor, which would then associate the public key with the server?
>> 
>> 3.  A third party system could then validate the association of the
>>public
>> key and the server, and build a work flow based on some signed document
>>from
>> the VM?
>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org              -o-             http://virt-manager.org :|
>|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [Infra] Generic solution for bare metal testing

2016-04-06 Thread Igor Belikov
Hey Stackers,

In Fuel we use bare metal testing for deployment tests. This is essentially a 
core component of Fuel CI and as much as we like having it around we’d rather 
spend time and resources integrating with upstream instead of growing and 
polishing third-party testing solutions.

At one of the previous Infra team meetings we discussed the possibility of 
bringing testing on bare metal nodes to openstack-infra[1]. This is not a new 
topic; a similar question was brought up by Magnum some time ago[2], and there 
might be other times this was discussed. We use bare metal testing for Fuel, I 
assume that Magnum still wants to use it, and TripleO would probably also fit 
in the picture in some way (though I'm not familiar with the current scheme of 
TripleO CI) - I hope this is enough to consider implementing a generic way to 
use bare metal nodes in CI.

The most obvious way to do this seems to be using the existing OpenStack 
service for bare metal provisioning - Ironic. Ironic fits pretty well into the 
existing Infra workflow. Ironic usage (in the form of Rackspace's OnMetal) was 
previously discussed in the Magnum thread[2], with the main technical issue 
being the inability to use custom glance images to boot instances. AFAIK the 
situation didn't change much with OnMetal, but Ironic perfectly supports 
booting from glance images created by diskimage-builder - which is exactly the 
way Nodepool currently works for virtual machines.

With the work currently going on InfraCloud there's a possibility to properly 
design and implement bare metal testing, Zuul v3 spec[3] also brings a number 
of relevant changes to Nodepool. So, summing up some points of possible 
implementation:
* Multiple pools of bare metal nodes under Ironic management are available as a 
part of InfraCloud
* Ironic acts as an additional hypervisor for Nova, providing the ability to 
use bare metal nodes by booting an instance with a specific flavor (see the 
sketch after this list)
* Nodepool manages booting bare metal instances using the images generated with 
diskimage-builder and stored in Glance
* Nodepool also manages redeployment of bare metal nodes - redeploying a glance 
image on a bare metal node takes only a few minutes, but time may depend on a 
set of cleaning steps used to redeploy a node
* Bare metal instances are exposed to Jenkins (or a different worker in case of 
Zuul v3) by Nodepool 
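
To illustrate the second and third points: once Ironic is registered as a
Nova hypervisor, obtaining a bare metal node is just an ordinary server
boot against a bare metal flavor. A sketch with python-novaclient (names,
credentials and the image UUID are illustrative):

from novaclient import client

nova = client.Client("2", "nodepool", "secret", "ci",
                     "https://infracloud.example.org:5000/v2.0")

server = nova.servers.create(
    name="bare-metal-slave-001",
    image="9ad134a0-...",  # a diskimage-builder image uploaded to Glance
    flavor=nova.flavors.find(name="baremetal-general"))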

I suppose there are security issues when we talk about running custom code on 
bare metal slaves, but I'm not sure I understand the difference from running 
custom code on a virtual machine if bare metal nodes are isolated, don't 
contain any sensitive data and follow a regular redeployment procedure.

I'd like to add that we're ready to start donating hardware from the Fuel CI 
pool (2 pools in different locations, to be accurate) to see this initiative 
taking off.

Please, share your thoughts and opinions.

[1]http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-03-29-19.03.log.html
[2]http://lists.openstack.org/pipermail/openstack-infra/2015-September/003138.html
[3]http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Using reno

2016-04-06 Thread Steven Dake (stdake)
Hey folks,

Reno is in our master codebase and mitaka.  Every new feature should use reno 
at the conclusion of the patch set.  The full documentation is here:
http://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes

In short, to create a reno note, just run
pip install reno
From the kolla working directory
reno new feature-name (e.g. add-reno)

This will create a yaml file (and print it out) that needs to be filled in.  
The yaml file contains directions.  The release note doesn't have to be super 
detailed - I suspect we will tune our level of detail over time to what we 
think is appropriate.

You can see the output by running:
tox -e releasenotes
and opening in your favorite browser.

This is published automatically to here on commit:
http://docs.openstack.org/releasenotes/kolla/

Eventually this will be linked from the releases website here:

http://releases.openstack.org/

Currently we follow the independent release model, but that will be changing 
in Newton.  I have a patchset up to put our release notes on the 
releases.openstack.org website but it has not yet merged.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-06 Thread Zane Bitter

On 03/04/16 21:38, Dan Prince wrote:




On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:

As of the Liberty release, Magnum now supports provisioning Mesos
clusters, so TripleO wouldn't have to maintain the installer for
that
either. (The choice of Mesos is somewhat unfortunate in our case,
because Magnum's Kubernetes support is much more mature than its
Mesos
support, and because the reasons for the decision are about to be or
have already been overtaken by events - I've heard reports that the
features that Kubernetes was missing to allow it to be used for
controller nodes, and maybe even compute nodes, are now available.
Nonetheless, I expect the level of Magnum support for Mesos is
likely
workable.) This is where the TripleO strategy of using OpenStack to
deploy OpenStack can really pay dividends: because we use Ironic all
of
our servers are accessible through the Nova API, so in theory we can
just run Magnum out of the box.


The chances of me personally having time to prototype this are
slim-to-zero, but I think this is a path worth investigating.


Looking at Magnum more closely... At a high level I like the idea of
Magnum. And interestingly it could be a surprisingly good fit for
someone wanting containers on baremetal to consider using the TripleO
paving machine (instack-undercloud).

We would need, I think, to add a few services to instack to supply the
Magnum heat templates with the required APIs. Specifically:

  -barbican
  -neutron L3 agent
  -neutron Lbaas
  -Magnum (API, and conductor)

This isn't hard and would be a cool thing to have supported within
instack (although I wouldn't enable these services by default I
think... at least not for now).

So again, at a high level things look good. Taking a closer look at how
Magnum architects its network, things start to fall apart a bit I think.
From what I can tell, given the Magnum network architecture's usage of
the L3 agent and LBaaS, the undercloud itself would become much more
important. Depending on the networking vendor we would possibly need to
make the undercloud itself HA in order to ensure anything built on top
was also HA. Contrast this with the fact that you can deploy an
overcloud today that will continue to function should the undercloud
(momentarily) go down.


Yeah, we'd definitely need to be able to attach the controller cluster 
to the right networks in order for this to work, and an HA undercloud 
would need to be optional.


Can any Magnum folks reading comment on this?


Then there is the fact that Magnum would be calling Heat to create our
baremetal servers (Magnum creates the OS::Nova::Server resources... not
our own Heat templates). This is fine but we have a lot of value add in
our own templates.


Isn't that sorta the problem? ;)


We could actually write our own Heat templates and
plug them into magnum.conf via k8s_atomic_template_path= or
mesos_fedora_template_path= (doesn't exist yet but it could?). What
this means for our workflow and how end users would would configure
underlying parameters would need to be discussed. Would we still have
our own Heat templates that created OS::Magnum::Bay resources?


I assume so, yes.


Or would
we use totally separate stacks to generate these things? The former
causes a bit of a "Yo Dawg: I hear you like Heat, so I'm calling Heat
to call Magnum to call Heat to spin up your cloud".


That's an implementation detail of Magnum... I don't see why it would be 
an issue. Actually, using the same tool at two different levels of 
abstraction seems strictly better than using two different tools.


Props for the Xzibit reference though :D


Perhaps I'm off
here but we'd still want to expose many of the service level parameters
to end users via our workflows... and then use them to deploy
containers into the bays so something like this would need to happen I
think.

Aside from creating the bays we likely wouldn't use the /containers API
to spin up containers but would go directly at Mesos or Kubernetes
instead. The Magnum API just isn't leaky enough yet for us to get
access to all the container bits we'd need at the moment. Over time it
could get there... but I don't think it is there yet.


Yes, totally agree, this is what I was suggesting.


So all that to say maybe we should integrate it into instack-undercloud
as a baremetal containers side project. This would also make it easier
to develop and evolve Magnum baremetal capabilities if we really want
to pursue them. But I think we'd have an easier go of implementing our
containers architecture (with all the network isolation, HA
architecture, and underpinnings we desire) by managing our own
deployment of these things in the immediate future.

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Fuel] Merge Freeze for Mitaka branching

2016-04-06 Thread Aleksandra Fedorova
Hi, everyone,

we were delayed by an npm issue [0] in the gate, but we have now
successfully merged all version bumps [1] and have a stable master,
thanks to Sergey Kulanov who got it all fully tested in advance.

Merge Freeze is lifted.

Please note:

* To merge a change to the Mitaka release, you need to merge it to the
master branch first and then cherry-pick it to the stable/mitaka branch.

* Fuel CI deployment tests are being adjusted to the new mirrors schema
[2], so currently all master deployment tests are queued. We need about
1-2 hours to finish this work. We'll send a separate e-mail regarding
Fuel CI readiness once we are done.


[0] https://storyboard.openstack.org/#!/story/2000541
[1] https://review.openstack.org/#/q/topic:9.0-scf
[2] https://review.openstack.org/#/c/301018/

-- 
Aleksandra Fedorova
Fuel CI Team Lead
bookwar at #fuel-infra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Flavio Percoco

On 06/04/16 08:26 -0400, Sean Dague wrote:

On 04/06/2016 04:13 AM, Markus Zoeller wrote:

+1 for deprecation and removal

To be honest, when I started with Nova during Kilo, I didn't get
why we have those passthrough APIs. They looked like convenience APIs.
A short history lesson, why they got introduced, would be cool. I only
found commit [1] which looks like they were there from the beginning.

References:
[1]
https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33


The short history lesson is nova image API existed before glance. Glance
was a spin out from Nova of that API. Doing so doesn't immediately make
that API go away however. Especially as all these things live on
different ports with different end points. So the image API remained as
a proxy (as did volumes, baremetal, and even to some extent networks).

It's not super clear how you deprecate and remove these things without
breaking a lot of people, as a lot of the libraries implement the nova
image resources -
https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb


We can deprecate it without removing it. We make it work with v2 and start
warning people that the API is not supported anymore. We don't fix bugs in that
API but tell people to use the newer version.
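
For instance, the Glance-native path users would be pointed at looks roughly
like this with python-glanceclient and a keystoneauth session (endpoint and
credentials are illustrative):

from keystoneauth1 import loading, session
from glanceclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url="http://keystone:5000/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default")
glance = client.Client("2", session=session.Session(auth=auth))

for image in glance.images.list():  # replaces the `nova image-list` proxy
    print(image.id, image.name)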

I think that should do it, unless I'm missing something.
Flavio



-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Flavio Percoco


Greetings,

I'm fairly new to Magnum and I hope my comments below are accurate.

After reading some docs, links and other references, I seem to understand the
Magnum team has a debate on whether providing abstraction for containers
lifecycle is something the project should do or not. There's a patch that
attempts to remove PODs and some debates on whether `container-*` commands are
actually useful or not.

Based on the above, I wanted to understand what would be the recommended way for
services willing to consume magnum to run containers? I've been digging a bit
into what would be required for Trove to consume Magnum and based on the above,
it seems the answer is that it should support either docker, k8s or mesos
instead.

- Is the above correct?
- Is there a way to create a container, transparently, on whatever backend using
 Magnum's API?

Sorry if I got something wrong,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread Morgan Fainberg
On Wed, Apr 6, 2016 at 7:44 AM, Sheel Rana Insaan wrote:

> Hey Graham,
>
> I just added the service commands for block storage; we have named these
> openstack volume service list/enable/disable.
>
> The same convention was used for nova previously.
>
> Hope this will help.
>
> Regards,
> Sheel Rana
> On Apr 6, 2016 7:54 PM, "Hayes, Graham"  wrote:
>
>> On 06/04/2016 15:20, Qiming Teng wrote:
>> > On Wed, Apr 06, 2016 at 01:59:29PM +, Hayes, Graham wrote:
>> >> Designate is adding support for viewing the status of the various
>> >> services that are running.
>> >>
>> >> We have added support to our openstack client plugin, but were looking
>> for guidance / advice on what the actual commands should be.
>> >>
>> >> We have implemented it in [1] as "dns service list" and
>> >> "dns service show" - but this is name-spacing the command.
>> > do you mean?
>> >
>> > openstack dns service list
>> > openstack dns service show
>>
>> sorry, yes - I just included the sub commands.
>>
>> >
>> >> Is there an alternative? "service" is already taken by keystone, and if
>> >> we take "service-status" (or other generic term) it will most likely
>> >> conflict when nova / cinder / heat / others add support of their
>> service
>> >> listings to OSC.
>> >>
>> >> What is the protocol here? First to grab it wins?
>> >>
>> >> Thanks
>> >>
>> >> - Graham
>> >>
>> >> 1 - https://review.openstack.org/284103
>> >>
>>
>
I think the offered options make a lot of sense:

openstack dns service list
openstack dns service show


I would encourage continued use of the namespacing like this for future
subcommands where possible (as it seems cinder and nova are already on
track to do).

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FreeIPA integration

2016-04-06 Thread Dan Prince
On Tue, 2016-04-05 at 19:19 -0600, Rich Megginson wrote:
> On 04/05/2016 07:06 PM, Dan Prince wrote:
> > 
> > On Sat, 2016-04-02 at 17:28 -0400, Adam Young wrote:
> > > 
> > > I finally have enough understanding of what is going on with
> > > Tripleo
> > > to
> > > reasonably discuss how to implement solutions for some of the
> > > main
> > > security needs of a deployment.
> > > 
> > > 
> > > FreeIPA is an identity management solution that can provide
> > > support
> > > for:
> > > 
> > > 1. TLS on all network communications:
> > >  A. HTTPS for web services
> > >  B. TLS for the message bus
> > >  C. TLS for communication with the Database.
> > > 2. Identity for all Actors in the system:
> > > A.  API services
> > > B.  Message producers and consumers
> > > C.  Database consumers
> > > D.  Keystone service users
> > > 3. Secure  DNS DNSSEC
> > > 4. Federation Support
> > > 5. SSH Access control to Hosts for both undercloud and overcloud
> > > 6. SUDO management
> > > 7. Single Sign On for Applications running in the overcloud.
> > > 
> > > 
> > > The main pieces of FreeIPA are
> > > 1. LDAP (the 389 Directory Server)
> > > 2. Kerberos
> > > 3. DNS (BIND)
> > > 4. Certificate Authority (CA) server (Dogtag)
> > > 5. WebUI/Web Service Management Interface (HTTPD)
> > > 
> > > Of these, the CA is the most critical.  Without a centralized CA,
> > > we
> > > have no reasonable way to do certificate management.
> > Would using Barbican to provide an API to manage the certificates
> > make
> > more sense for our deployment tooling? This could be useful for
> > both
> > undercloud and overcloud cases.
> > 
> > As for the rest of this, how invasive is the implementation of
> > FreeIPA.? Is this something that we can layer on top of an existing
> > deployment such that users wishing to use FreeIPA can opt-in.
> > 
> > > 
> > > Now, I know a lot of people have an allergic reaction to some,
> > > maybe
> > > all, of these technologies. They should not be required to be
> > > running
> > > in
> > > a development or testbed setup.  But we need to make it possible
> > > to
> > > secure an end deployment, and FreeIPA was designed explicitly for
> > > these
> > > kinds of distributed applications.  Here is what I would like to
> > > implement.
> > > 
> > > Assuming that the Undercloud is installed on a physical machine,
> > > we
> > > want
> > > to treat the FreeIPA server as a managed service of the
> > > undercloud
> > > that
> > > is then consumed by the rest of the overcloud. Right now, there
> > > are
> > > conflicts for some ports (8080 used by both swift and Dogtag)
> > > that
> > > prevent a drop-in run of the server on the undercloud
> > > controller.  Even
> > > if we could deconflict, there is a possible battle between
> > > Keystone
> > > and
> > > the FreeIPA server on the undercloud.  So, while I would like to
> > > see
> > > the
> > > ability to run the FreeIPA server on the Undercloud machine
> > > eventually, I
> > > think a more realistic deployment is to build a separate virtual
> > > machine, parallel to the overcloud controller, and install
> > > FreeIPA
> > > there. I've been able to modify Tripleo Quickstart to provision
> > > this
> > > VM.
> > > 
> > > I was also able to run FreeIPA in a container on the undercloud
> > > machine,
> > > but this is, I think, not how we want to migrate to a container
> > > based
> > > strategy. It should be more deliberate.
> > > 
> > > 
> > > While the ideal setup would be to install the IPA layer first,
> > > and
> > > create service users in there, this produces a different install
> > > path
> > > between with-FreeIPA and without-FreeIPA. Thus, I suspect the
> > > right
> > > approach is to run the overcloud deploy, then "harden" the
> > > deployment
> > > with the FreeIPA steps.
> > > 
> > > 
> > > The IdM team did just this last summer in preparing for the Tokyo
> > > summit, using Ansible and Packstack.  The Rippowam project
> > > https://github.com/admiyo/rippowam was able to fully lock down a
> > > Packstack based install.  I'd like to reuse as much of Rippowam
> > > as
> > > possible, but called from Heat Templates as part of an overcloud
> > > deploy.  I do not really want to reimplement Rippowam in Puppet.
> > As we are using Puppet for our configuration I think this is
> > currently
> > a requirement. There are many good puppet examples out there of
> > various
> > servers and a quick google search showed some IPA modules are
> > available
> > as well.
> > 
> > I think most TripleO users are quite happy in using puppet modules
> > for
> > configuration in that the puppet openstack modules are quite mature
> > and
> > well tested. Making a one-off exception for FreeIPA at this point
> > doesn't make sense to me.
> What about calling an ansible playbook from a puppet module?

Given our current toolset in TripleO having the ability to manage all
service configurations with a common language overrides any short cuts
that calling 

Re: [openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread Sheel Rana Insaan
Hey Graham,

I just added the service commands for block storage; we have named these
openstack volume service list/enable/disable.

The same convention was used for nova previously.

Hope this will help.

Regards,
Sheel Rana
On Apr 6, 2016 7:54 PM, "Hayes, Graham"  wrote:

> On 06/04/2016 15:20, Qiming Teng wrote:
> > On Wed, Apr 06, 2016 at 01:59:29PM +, Hayes, Graham wrote:
> >> Designate is adding support for viewing the status of the various
> >> services that are running.
> >>
> >> We have added support to our openstack client plugin, but were looking
> >> for guidance / advice on what the actual commands should be.
> >>
> >> We have implemented it in [1] as "dns service list" and
> >> "dns service show" - but this is name-spacing the command.
> > do you mean?
> >
> > openstack dns service list
> > openstack dns service show
>
> sorry, yes - I just included the sub commands.
>
> >
> >> Is there an alternative? "service" is already taken by keystone, and if
> >> we take "service-status" (or other generic term) it will most likely
> >> conflict when nova / cinder / heat / others add support of their service
> >> listings to OSC.
> >>
> >> What is the protocol here? First to grab it wins?
> >>
> >> Thanks
> >>
> >> - Graham
> >>
> >> 1 - https://review.openstack.org/284103
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread Hayes, Graham
On 06/04/2016 15:20, Qiming Teng wrote:
> On Wed, Apr 06, 2016 at 01:59:29PM +, Hayes, Graham wrote:
>> Designate is adding support for viewing the status of the various
>> services that are running.
>>
>> We have added support to our openstack client plugin, but were looking
>> for guidance / advice on what the actual commands should be.
>>
>> We have implemented it in [1] as "dns service list" and
>> "dns service show" - but this is name-spacing the command.
> do you mean?
>
> openstack dns service list
> openstack dns service show

sorry, yes - I just included the sub commands.

>
>> Is there an alternative? "service" is already taken by keystone, and if
>> we take "service-status" (or other generic term) it will most likely
>> conflict when nova / cinder / heat / others add support of their service
>> listings to OSC.
>>
>> What is the protocol here? First to grab it wins?
>>
>> Thanks
>>
>> - Graham
>>
>> 1 - https://review.openstack.org/284103
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread Qiming Teng
On Wed, Apr 06, 2016 at 01:59:29PM +, Hayes, Graham wrote:
> Designate is adding support for viewing the status of the various
> services that are running.
> 
> We have added support to our openstack client plugin, but were looking
> for guidance / advice on what the actual commands should be.
> 
> We have implemented it in [1] as "dns service list" and
> "dns service show" - but this is name-spacing the command.
do you mean?

openstack dns service list
openstack dns service show

> Is there an alternative? "service" is already taken by keystone, and if
> we take "service-status" (or other generic term) it will most likely
> conflict when nova / cinder / heat / others add support of their service
> listings to OSC.
> 
> What is the protocol here? First to grab it wins?
> 
> Thanks
> 
> - Graham
> 
> 1 - https://review.openstack.org/284103
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton blueprints call for action

2016-04-06 Thread Brent Eagles
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi Armando,

On 05/04/16 01:13 AM, Armando M. wrote:
> Hi Neutrinos,
> 
> During today's team meeting [0], we went through the current
> milestone workload [1].
> 
> This is mostly made of Mitaka backlog items, amongst which we
> discussed two blueprints [2, 3]. These two efforts had their spec
> approved during the Mitaka timeframe, but code lagged behind, and
> hence got deferred [4].
> 
> I would like to understand if these need new owners (both assignees
> and approvers). Code submitted [5,6] has not been touched in a
> while, and whilst I appreciate people have been busy focussing on
> Mitaka (myself included), the Newton master branch has been open
> for a while.
> 
> With this email I would like to appeal to the people in CC to
> report back their interest in continuing working on these items in
> their respective capacities, and/or the wider community, in case
> new owners need to be identified.
> 
> I look forward to hearing back, hoping we can find the right
> resources to resume progress, and bring these important
> requirements to completion in time for Newton.
> 
> Many thanks, Armando

As it happens, I've been tasked to work on TripleO as a general
contributor with a networking focus. For better or worse, the database
related work was the only thing on my early radar and I will shepherd
that along as needed, but I won't be able to commit to the larger bits
of the remaining work.

While we could wait for summit to hand off, I think it would be better
for someone who is looking to take over ownership of all or some of
the pieces to sync up with Bence, myself, and Songming ASAP.

Cheers,

Brent

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJXBRpyAAoJEIXWptqvFlBWIHgQAL3e+HjvDXvziee1oLfkz/kT
DIghPQoqZg+oLJYmezoa4ixzNY53pE/EtkTxCtXrmEfbvwCqNWkgNWqTKm4nGe1J
Uv1HFpdrUtg9j7bS9bIPRKQKaWr9nkNUJZPL5fjIs467WWQP0e6YbigVgoJQRYXi
t/o5ZKgRKp8DOW+bqjXvQvM69WXq9iyH7KmjVfbJ2o3NeoFOmPTlXtAunbp33xj4
6MuFH4USJZS11x0IgIiaCZHJS+RWfDdxI+4ONCqQ1lYkrLp9wl8XNznQzum60wFU
jhjJcaRtfdbMHmRd72//QVeIlX9VA6b5q36a/adPxbKrD2XTd4pntJ86dnU0aQFJ
sriJRk3KlD0IMDMS+rRsKz7EyJJP+9b5zlWCzX0V+1zNlcB6eiowOmo3QUQrFBQT
O50KS9YC7ef0EMWE6kikxyK8AxZ1Hjcm3eM50mShU+eCI/JPgkHeRQX+Z16RBybj
xhEBgRIvLS7bH8c6vqjIgmLQ1zxQ3EPR440Zpi0rw3rChP/lugYYQpcjXDfVFWED
gwe+RQvevj4tJeVjXG662DMuzmjy/cM2nNLZm3AZsaASkR73/M+Qmy53Y22T+T2o
VVcbOsK2+1Y8JFAVZguUib9pQ/z8DgBKs4+rfWiV4mzBAGwVIxePiNDiQ1kQU/Z0
3kUfgrNS0CgmE/nmg05x
=7kzU
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate][osc] new sub commands - how should they be named?

2016-04-06 Thread Hayes, Graham
Designate is adding support for viewing the status of the various
services that are running.

We have added support to our openstack client plugin, but were looking
for guidance / advice on what the actual commands should be.

We have implemented it in [1] as "dns service list" and
"dns service show" - but this is name-spacing the command.

Is there an alternative? "service" is already taken by keystone, and if
we take "service-status" (or other generic term) it will most likely
conflict when nova / cinder / heat / others add support of their service
listings to OSC.

What is the protocol here? First to grab it wins?

Thanks

- Graham

1 - https://review.openstack.org/284103

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Matt Riedemann



On 4/6/2016 8:06 AM, Qiming Teng wrote:

Thanks for the explanation, Sean. I wasn't subscribed to the operators
mailing list so wasn't aware of the notice. With that, I'm still
surprised that there was no response to that email.

Anyway, if people believe the benefits outweigh the impacts caused,
just do it.

The only thing I can think of is tagging an email with some
eye-catching tags, e.g. [!!!] [Disruptive!] [Important!] ...
It is so easy to have such a big change go unnoticed.

Regards,
   Qiming

On Wed, Apr 06, 2016 at 06:47:12AM -0400, Sean Dague wrote:

On 04/06/2016 04:19 AM, Sylvain Bauza wrote:



On 06/04/2016 06:44, Qiming Teng wrote:

Not an expert of Nova but I am really shocked by such a change. Because
I'm not a Nova expert, I don't have a say on the *huge* efforts in
maintaining some builtin/default flavors. As a user I don't care where
the data have been stored, but I do care that they are gone. They are
gone because they **WILL** be supported by devstack. They are gone with
the workflow +1'ed **BEFORE** the devstack patch gets merged (many
thanks to the depends-on tag). They are gone in hope that all deployment
tools will know this when they fail, or fortunately they read this email,
or they were reviewing nova patches.

It would be a little nicer to initiate a discussion on the mailing list
before such a change is introduced.



It was communicated accordingly to operators, with no strong arguments:
http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html


Not only with no strong arguments, but with a general - "yes please,
that simplifies our life".


You can also see that https://review.openstack.org/#/c/300127/ is having
three items :
  - a DocImpact tag creating a Launchpad bug for documentation about that
  - a reno file meaning that our release notes will provide also some
comments about that
  - a Depends-On tag (like you said) on a devstack change meaning that
people using devstack won't see a modified behavior.

Not sure what more you need.


The default flavors were originally hardcoded in Nova (in the initial
commit) -
https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
  and moved into the db 5 years ago to be a copy of the EC2 flavors at
the time -
https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
Those flavors were meant to be examples, not the final story.

All the public clouds delete these and do their own thing, as do I
expect many of the products. Any assumption that software or users have
that these will exist is a bad assumption.

It is a big change, which is why it's being communicated on Mailing
Lists in addition to in the release notes so that people have time to
make any of their tooling not assume these flavors by name will be
there, or to inject them yourself if you are sure you need them (as was
done in the devstack case).

-Sean

--
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



There were several replies in the operators list thread and as Sean said 
they were generally favorable about this because they delete the default 
flavors anyway.
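
For anyone whose tooling does assume the old names, re-injecting them is a 
one-time task - a sketch with python-novaclient, using the classic values 
(name, RAM in MiB, vCPUs, disk in GiB; credentials are illustrative):

from novaclient import client

nova = client.Client("2", "admin", "secret", "admin",
                     "http://keystone:5000/v2.0")

DEFAULTS = [
    ("m1.tiny",     512, 1,   1),
    ("m1.small",   2048, 1,  20),
    ("m1.medium",  4096, 2,  40),
    ("m1.large",   8192, 4,  80),
    ("m1.xlarge", 16384, 8, 160),
]

for name, ram, vcpus, disk in DEFAULTS:
    nova.flavors.create(name, ram, vcpus, disk)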


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Qiming Teng
Thanks for the explanation, Sean. I wasn't subscribed to the operators
mailing list so wasn't aware of the notice. With that, I'm still
surprised that there was no response to that email.

Anyway, if people believe the benefits outweigh the impacts caused,
just do it.

The only thing I can think of is tagging an email with some
eye-catching tags, e.g. [!!!] [Disruptive!] [Important!] ...
It is so easy to have such a big change go unnoticed.

Regards,
  Qiming

On Wed, Apr 06, 2016 at 06:47:12AM -0400, Sean Dague wrote:
> On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
> > 
> > 
> > On 06/04/2016 06:44, Qiming Teng wrote:
> >> Not an expert of Nova but I am really shocked by such a change. Because
> >> I'm not a Nova expert, I don't have a say on the *huge* efforts in
> >> maintaining some builtin/default flavors. As a user I don't care where
> >> the data have been stored, but I do care that they are gone. They are
> >> gone because they **WILL** be supported by devstack. They are gone with
> >> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
> >> thanks to the depends-on tag). They are gone in hope that all deployment
> >> tools will know this when they fail, or fortunately they read this email,
> >> or they were reviewing nova patches.
> >>
> >> It would be a little nicer to initiate a discussion on the mailing list
> >> before such a change is introduced.
> > 
> > 
> > It was communicated accordingly to operators, with no strong arguments:
> > http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html
> 
> Not only with no strong arguments, but with a general - "yes please,
> that simplifies our life".
> 
> > You can also see that https://review.openstack.org/#/c/300127/ is having
> > three items :
> >  - a DocImpact tag creating a Launchpad bug for documentation about that
> >  - a reno file meaning that our release notes will provide also some
> > comments about that
> >  - a Depends-On tag (like you said) on a devstack change meaning that
> > people using devstack won't see a modified behavior.
> > 
> > Not sure what more you need.
> 
> The default flavors were originally hardcoded in Nova (in the initial
> commit) -
> https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
>  and moved into the db 5 years ago to be a copy of the EC2 flavors at
> the time -
> https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
> Those flavors were meant to be examples, not the final story.
> 
> All the public clouds delete these and do their own thing, as do I
> expect many of the products. Any assumption that software or users have
> that these will exist is a bad assumption.
> 
> It is a big change, which is why it's being communicated on Mailing
> Lists in addition to in the release notes so that people have time to
> make any of their tooling not assume these flavors by name will be
> there, or to inject them yourself if you are sure you need them (as was
> done in the devstack case).
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-06 Thread Rayson Ho
On Wed, Apr 6, 2016 at 6:37 AM, Daniel P. Berrange 
wrote:
> Most of the
> work to support FPGA will be internal to nova, to deal with modelling
> of assignable devices and their scheduling / allocation.

I think the EPA blueprint already covers some of the FPGA device discovery
& scheduling functionality needed.

https://wiki.openstack.org/wiki/Enhanced-platform-awareness-pcie
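
As a concrete anchor, the PCI-passthrough route that EPA builds on already
lets a flavor demand such a device once it is whitelisted and aliased on
the compute nodes - a sketch with python-novaclient (the alias name "fpga"
and the credentials are assumptions for illustration):

from novaclient import client

nova = client.Client("2", "admin", "secret", "admin",
                     "http://keystone:5000/v2.0")

flavor = nova.flavors.create("fpga.small", 4096, 2, 40)
flavor.set_keys({"pci_passthrough:alias": "fpga:1"})  # request one FPGA device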


However, I was reading the Intel ONP whitepapers earlier, and found that
Intel has dropped coverage of the EPA functionality in 2.1:

 ->
https://download.01.org/packet-processing/ONPS2.1/Intel_ONP_Release_2.1_Reference_Architecture_Guide_Rev1.0.pdf

EPA is mentioned in section "6.3 Enhanced Platform Awareness" in 2.0:

 ->
https://download.01.org/packet-processing/ONPS2.0/Intel_ONP_Release_2.0_Reference_Architecture_Guide_Rev1.0-1.pdf

Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html




>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] New resource tracker

2016-04-06 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 04 April 2016 18:47
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] New resource tracker
> 
> On 04/04/2016 01:21 PM, Rayson Ho wrote:
> > I found that even in the latest git tree, the resource_tracker is
> > still marked as "deprecated and will be removed in the 14.0.0 release"
> > in nova/conf/compute.py. With the Mitaka release coming up this week,
> > is it still true that the code will be removed?
> >
> > I googled and found this status update sent to the list (
> > http://lists.openstack.org/pipermail/openstack-dev/2016-February/08637
> > 1.html ), but I was wondering if the new resource tracker is
> > documented or should I just refer to the blueprints.
> 
> The resource tracker has not been deprecated. Only the extensible resource
> tracker code has been deprecated in Mitaka. The extensible resource tracker
> code allowed a deployer to override and extend the set of resources that
> were tracked by the resource tracker. It was determined this functionality
> was leading the scheduler and resource tracker code to a place where it was
> not possible to have consistency in how resources were tracked and thus it
> was deprecated and removed.
> 

See: https://review.openstack.org/#/c/300420/ 

Reviews welcome !

Paul


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] - Austin Design Summit

2016-04-06 Thread Gal Sagie
Hello everyone,

We have split our design summit sessions into topics (and sub topics) for
each session
Please review the topics and agenda here:

https://etherpad.openstack.org/p/kuryr-design-summit

This is still a tentative schedule/agenda, so if you have any
ideas/comments on it, or want to add more topics/points to the listed
subjects, please feel free to do so; we will hopefully finalize the
schedule one week before the summit.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Sean Dague
On 04/06/2016 04:13 AM, Markus Zoeller wrote:
> +1 for deprecation and removal
> 
> To be honest, when I started with Nova during Kilo, I didn't get
> why we have those passthrough APIs. They looked like convenience APIs.
> A short history lesson, why they got introduced, would be cool. I only
> found commit [1] which looks like they were there from the beginning.
> 
> References:
> [1] 
> https://github.com/openstack/python-novaclient/commit/7304ed80df265b3b11a0018a826ce2e38c052572#diff-56f10b3a40a197d5691da75c2b847d31R33

The short history lesson is that the nova image API existed before glance.
Glance was a spin-out from Nova of that API. Spinning it out doesn't
immediately make that API go away, however, especially as all these things
live on different ports with different endpoints. So the image API remained
as a proxy (as did volumes, baremetal, and even, to some extent, networks).

It's not super clear how you deprecate and remove these things without
breaking a lot of people, as a lot of the libraries implement the nova
image resources -
https://github.com/fog/fog-openstack/blob/master/lib/fog/openstack/compute.rb


-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance] Proposal to remove `nova image-*` commands from novaclient

2016-04-06 Thread Flavio Percoco

On 05/04/16 17:49 -0400, Nikhil Komawar wrote:


I think, in the interest of supporting the OSC effort, I am for
deprecating the CLI stuff (possibly glanceclient too -- BUT in a
different thread).

I believe removing the bindings/modules that possibly support OSC and
other libs might be a lot trickier. (The nova proxy stuff for glance may
be an exception, but I don't know all the libs that use it.)

And on Matt's question of glanceclient supporting image-meta stuff,
Glance, and in turn glanceclient, should be a superset of the Images API
that Nova and other services support. If that's not the case, then we
have a DefCore problem, but AFAIK it's not.

On the note of adding / removing support for the Glance v2 API and proxy
jazz in Nova: the last time I had a discussion with johnthetubaguy, we
agreed that the proxy API won't change for Nova (the changes needed for
the Glance v2 adoption would ensure the proxy API remains the same).
Also, the purpose of Glance v2 adoption (in Nova and everywhere else) is
to promote the "right" public-facing Glance API (which is in development
and supposed to be v2).

I'm glad we're chatting about deprecating the Nova proxy API proactively,
but I think we should not tie it to (or get confused that it's tied to)
Nova's adoption of the Glance v2 API.


Right!

There's been a lot of effort put into not breaking Nova's image proxy. I'd
love to see it burn to the ground till there's nothing left of it, but
removing it is probably going to cause more harm than good right now.

The changes that Mike proposed will make it possible to use Glance V2 and still
keep the image proxy as-is so that it can be deprecated following the right
deprecation path without blocking Nova's migration to V2.

So, to answer Matt's question directly: I'd remove those CLI commands only as
part of the migration to OSC but not as a motivation for the image proxy to go
away. Nova could add a warning on those CLI commands to let users know they
should use OSC for that and that they are talking to an old, likely broken, API.

Nova's adoption of Glance's V2 should not be tied to the CLI/Image Proxy
deprecation.

Flavio


Yours sincerely!

On 4/5/16 5:30 PM, Michael Still wrote:

On Wed, Apr 6, 2016 at 7:28 AM, Ian Cordasco wrote:

-----Original Message-----
From: Michael Still
Reply: OpenStack Development Mailing List (not for usage questions)
Date: April 5, 2016 at 16:11:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][glance] Proposal to remove
`nova image-*` commands from novaclient

> As a recent newcomer to using our client libraries, my only real
> objection to this plan is that our client libraries are a mess [1][2].
> The interfaces we expect users to use are quite different for basic
> things like initial auth between the various clients, and by
> introducing another library we insist people use we're going to force
> a lot of devs to eventually go through having to understand how those
> other people did that thing.
>
> I guess I could ease my concerns here if we could agree to some sort of
> standard for what auth in a client library looks like...
>
> Some examples of auth at the moment:
>
> self.glance = glance_client.Client('2', endpoint, token=token)
> self.ironic = ironic_client.get_client(1, ironic_url=endpoint,
> os_auth_token=token)
> self.nova = nova_client.Client('2', bypass_url=endpoint, auth_token=token)
>
> Note how we can't decide if the version number is a string or an int,
> and the argument names for the endpoint and token are different in all
> three. It's like we're _trying_ to make this hard.
>
> Michael
>
> 1: I guess I might be doing it wrong, but in that case I'll just mutter
> about the API docs instead.
> 2: I haven't looked at the unified openstack client library to see if
> it's less crazy.

What if we just recommend everyone use the openstacksdk
(https://pypi.python.org/pypi/openstacksdk)? We could add more
developer resources by deprecating our individual client libraries
to use that instead? It's consistent and well-designed and would
probably benefit from us actively helping with each service's portion.


So like I said, I haven't looked at it at all because I am middle
aged, stuck in my ways, hate freedom, and because I didn't think of it.

Does it include a command line interface that's not crazy as well?

If so, why are we maintaining duplicate sets of libraries / clients?
It seems like a lot of wasted effort.

Michael

--
Rackspace
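
For comparison, the consolidated auth Ian describes would look roughly
like this with the SDK's Connection object -- a minimal sketch, assuming
keystoneauth-style constructor arguments and placeholder credentials, not
a verified recipe:

    from openstack import connection

    # One auth object, one set of argument names, shared by every service.
    conn = connection.Connection(
        auth_url='https://keystone.example.com:5000/v3',  # placeholder
        project_name='demo',
        username='demo',
        password='secret',
        user_domain_name='Default',
        project_domain_name='Default',
    )

    # Per-service proxies hang off the same connection, so the glance /
    # ironic / nova differences shown above simply disappear.
    for image in conn.image.images():
        print(image.name)
    for server in conn.compute.servers():
        print(server.name)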

[openstack-dev] [Neutron][Dragonflow] - Austin Design Summit

2016-04-06 Thread Gal Sagie
Hello everyone,

We have started writing an agenda for our design summit sessions,
please feel free to raise ideas/comments/questions in the following
etherpad:

https://etherpad.openstack.org/p/dragonflow-design-summit

If you are a user/operator and would like to raise questions regarding
our road map and/or our current deployments and testing, please attend
our fishbowl session; we are more than happy to share all the information
we have (including the control/data plane performance testing we are
concluding soon).

Hope to see you all there!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][live-migration] Libvirt storage pools and persistent disk metadata specs

2016-04-06 Thread Matthew Booth
I've just submitted a new spec for the imagebackend work I was doing in
Mitaka, which is a prerequisite for the libvirt storage pools work:

https://review.openstack.org/#/c/302117/

I was just about to resubmit libvirt storage pools with my spec as a
dependency, but re-reading it perhaps we should discuss it briefly first.
I'm aware that the driving goal for this move is to enable data transfer
over the libvirt pipe. However, while we're messing with the on-disk layout
it might be worth thinking about the image cache.

The current implementation of the image cache is certainly problematic. It
has expanded far beyond its original brief, and it shows. The code has no
coherent structure, which makes following it very difficult even for those
familiar with it. Worst of all, though, the behaviour of the image cache is
distributed across several different modules, with no obvious links between
those modules for the unwary (aka tight coupling). The image cache relies
on locking for correct behaviour, but this locking is also distributed
across many modules. Verifying the correct behaviour of the image cache's
locking is hard enough to be impractical, which shows in the persistent
stream of bugs we see relating to backing files going missing. In short,
the image cache implementation needs an extreme overhaul anyway in the
light of its current usage.
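
To make the coupling concrete, correctness today depends on every one of
those modules taking the same named external lock before touching a cache
entry -- roughly the following shape, a minimal sketch using
oslo.concurrency in which the lock name, lock path, and function are
illustrative rather than the actual nova code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('image-cache-0123abcd', external=True,
                            lock_path='/var/lock/nova')
    def refresh_cache_entry():
        # Every path that reads, resizes or reaps this backing file must
        # take the same named lock, in every module, or the file can
        # disappear underneath a running instance.
        pass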

We also need to address the problem that the image cache doesn't store any
metadata about its images. We currently determine the file format of an
image cache entry by inspection. While we sanity check images when writing
them to the image cache, this is not a robust defence against format bugs
and vulnerabilities.

More than this, though, the design of the image cache no longer makes
sense. When the image cache was originally implemented, there was only
local file storage, but the libvirt driver also now supports LVM and ceph.
Over 60% of our users use ceph, so ceph is really important. The ceph
backend is often able to use images directly from glance if they're also in
ceph, but when they're not it continues to use this local file store, which
makes no sense.

When we move to libvirt storage pools (with my dependent change), we open
the possibility for users to have multiple local storage pools, and have
instance storage allocated between them according to policy defined in the
flavor. If we're using a common image cache for all storage pools, this
limits the differentiation between those storage pools. So if a user's
paying for instance storage on SSD, but the backing file is on spinning
rust, that's not great.

Logically, the image cache should be a property of the storage backend. So,
local file stores should cache images in the local file store. LVM should
cache images in LVM, which would also allow it to use writeable snapshots.
Ceph should cache images in the same pool its instance disks are in. This
means that the image cache is always directly usable by its storage pool,
which is its purpose.
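
Concretely, that suggests a shape where each backend owns its cache -- an
illustrative sketch only, with class and method names that are mine
rather than the spec's:

    import abc

    class StoragePool(abc.ABC):
        @abc.abstractmethod
        def cache_image(self, image_id):
            """Populate this pool's own cache; return a backing reference."""

    class FilePool(StoragePool):
        def cache_image(self, image_id):
            # The cache lives on the same local filesystem as the
            # instance disks, so an SSD pool caches on the SSD.
            return '/var/lib/nova/pools/ssd/_base/%s' % image_id

    class RbdPool(StoragePool):
        def cache_image(self, image_id):
            # The cache lives in the same ceph pool as the instance
            # disks, so COW clones work directly.
            return 'rbd:vms/_base_%s' % image_id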

The image cache also needs a robust design for operating on shared storage.
It currently doesn't have one, although the locking means that it's
hopefully probably only maybe slightly a little racy, with any luck,
perhaps.

This change may be too large to do right now, but we should understand that
changing it later will require a further migration of some description, and
bear that in mind. A local file/lvm/ceph storage pool with an external
image cache has a different implementation and layout to the same storage
pool with an integrated image cache. Data is stored in different places,
and is managed differently. If we don't do it now, it will be slightly
harder later.

Reading through the existing spec, I also notice that it mentions use of
the 'backingStore' element. This needs to come out of the spec, as we MUST
NOT use this. The problem is that it's introspected. I don't know if
libvirt stores this persistently while it's running, but it most certainly
recreates it after a restart or pool refresh. Introspecting file formats
and backing files when the user can influence it is a severe security hole,
so we can't do it. Instead, we need to use the metadata created and defined
by the spec I linked at the top.
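
The practical difference is whether the format is guessed from the file
or asserted from metadata we wrote ourselves -- a sketch with plain
qemu-img invocations, not nova's actual helpers:

    import subprocess

    # Unsafe when users influence the content: qemu-img probes the format,
    # so a "raw" disk crafted to look like qcow2 gets its backing file
    # honoured, and that backing path is attacker-controlled.
    subprocess.check_output(['qemu-img', 'info', 'disk.img'])

    # Safe: the format is asserted from our own stored metadata, never
    # introspected from the file itself.
    subprocess.check_output(['qemu-img', 'info', '-f', 'raw', 'disk.img'])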

There will also need to be a lot more subclassing than this spec
anticipates. Specifically, I expect most backends to require a subclass. In
particular, the libvirt api doesn't provide storage locking, so we will
have to implement that for each backend.

I don't want to spend too long on the spec. The only thing worthy of
discussion is the image cache, I guess.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FPGA as a resource

2016-04-06 Thread Zhipeng Huang
Yes, the project is quite new; we are still developing requirements on
the OPNFV side. We will update with a full description after the summit,
when we have all the inputs. Looking forward to having a chat with you at
the summit :)

I think you are right that Nomad does not solve the problem in this
thread per se; however, it would be the natural next step. Once Nova is
able to use FPGAs as a resource, you will need to be able to schedule
FPGA resources for the VM.

We also have a team based in Canada that specializes in FPGA-related
management and will participate in the Nomad development; we can have a
deeper-dive chat when we get the chance :)

On Wed, Apr 6, 2016 at 6:28 PM, Roman Dobosz  wrote:

> On Wed, 6 Apr 2016 16:12:16 +0800
> Zhipeng Huang  wrote:
>
> > You are actually touching on something we have been working on. There is
> a
> > team in OPNFV DPACC project has been working acceleration related topics,
> > including folks from CMCC, Intel, ARM, Freescale, Huawei. We found out
> that
> > in order to have acceleration working under NFV scenrios, other than Nova
> > and Neutron's support, we also need a standalone service that manage
> > accelerators itself.
> >
> > That means we want to treat accelerators, and FPGA being an important
> part
> > of it, as a first class resource citizen and we want to be able to do
> life
> > cycle management and scheduling on acceleration resources.
> >
> > Based upon that requirement we started a new project called Nomad [1] on
> > Jan this year, to serve as an OpenStack service for distributed
> > acceleration management.
> >
> > We've just started the project, and currently discussing the first BP
> [2].
> > We have a team working on IP-SEC based accelerator mgmt, and would love
> to
> > have more people to work on topics like FPGA.
> >
> > We also have a topic on introducing Nomad accepted in Austin Summit [3].
> >
> > You are more than welcomed to join the conversation : )
>
> Thanks! I'll try to attend.
>
> Nevertheless, I've briefly looked at the project Nomad page, and don't
> quite get how this might be related to the cases described in this
> thread - i.e. providing attachable, non-trivial devices as new
> resources in Nova.
>
> --
> Cheers,
> Roman Dobosz
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Newton blueprints call for action

2016-04-06 Thread Rossella Sblendido


On 04/05/2016 05:43 AM, Armando M. wrote:
> 
> With this email I would like to appeal to the people in CC to report
> back their interest in continuing working on these items in their
> respective capacities, and/or the wider community, in case new owners
> need to be identified.
> 
> I look forward to hearing back, hoping we can find the right resources
> to resume progress, and bring these important requirements to completion
> in time for Newton.

Count me in for the vlan-aware-vms. We now have a solid design; it's
only a matter of putting it into code. I will help any way I can; I
really want to see this feature in Newton.

cheers,

Rossella

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-06 Thread Mikhail Fedosin
Hello! Thanks for bringing this topic up.

First of all, as I mentioned before, great work was done in Mitaka, so
Glance v2 adoption in Nova is not a question of "if", and not even a
question of "when" (in Newton), but a question of "how".

There is a set of commits that do the trick:
1. Xen plugin
https://review.openstack.org/#/c/266933
Sean gave us several good suggestions on how we can improve it. In short:

   - Make this only add new glance method calls upload_vhd_glancev2 and
   download_vhd_glancev2 which do the v2 work
   - Don't refactor existing code into common code here; copy / paste /
   update instead. We want the final code to be optimized for v1 deletion,
   not for v1 fixing (this was done in previous patchsets, but then I made
   the refactor to reduce the amount of code)

2. 'Show' image info
https://review.openstack.org/#/c/228578
Another 'schema-based' handler is added there. It transforms glance v2
image output to the format adopted in nova.image.

We have to take into account that image properties in v1 are passed as
http headers, which makes them case-insensitive. In v2, image info is
passed as a json document, so 'MyProperty' and 'myproperty' are two
different properties. Thanks to Brian Rosmaita, who noticed it:
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087519.html

Also, in v1 a user can create custom properties like 'owner' or
'created_at', and they are stored in a special dictionary, 'properties'.
v2 images have a flat structure, which means that all custom properties
are located on the same level as base properties. This leads to the fact
that if a v1 image has a custom property whose name coincides with the
name of a base property, that property will be ignored in v2.
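
Roughly, the v1 -> v2 flattening has to do something like the following
-- an illustrative sketch in which the base-property set is abbreviated,
not glance's full schema:

    V2_BASE_PROPERTIES = {'id', 'owner', 'created_at', 'status', 'visibility'}

    def flatten_v1_image(v1_image):
        # Merge a v1 image dict and its custom 'properties' into a flat
        # v2-style dict, dropping custom keys that shadow base properties.
        v2_image = {k: v for k, v in v1_image.items() if k != 'properties'}
        for key, value in v1_image.get('properties', {}).items():
            if key in V2_BASE_PROPERTIES or key in v2_image:
                continue  # e.g. a custom 'owner' would shadow the base field
            v2_image[key] = value
        return v2_image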

3. Listing of images in the v2 way
https://review.openstack.org/#/c/238309
There I added additional handlers that transform v1 image filters into
v2 ones, along with sorting parameters.

The 'download' and 'delete' patches are included in #238309 since they
are trivial.

4. 'creating' and 'updating' images'
https://review.openstack.org/#/c/259097

What was added there:

   - a transformation to two-step image creation (creating the image
   record in the db + uploading the file)
   - a special handler for creating active images with size '0' and no
   image data
   - the ability to set a custom location for an image (the
   'show_multiple_locations' option must be enabled in the glance config
   for that)
   - a special handler to remove custom properties from an image: the
   purge_props flag in v1 vs. the props_to_remove list in v2

What else has to be done:

   - Splitting into two patches ('create' and 'update') is required to
   make review easier.
   - Matt suggested that it's better not to hardcode method logic for the
   v1 and v2 APIs. Rather, we should create a common base class which is
   subclassed with v1/v2-specific callback (abstract) methods, and then we
   could have a factory that, given the version, provides the client impl
   we're going to deal with (see the sketch below).
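
The shape being suggested is something like this -- a rough sketch whose
class and method names are illustrative, not the actual patch:

    import abc

    class ImageClientBase(abc.ABC):
        @abc.abstractmethod
        def show(self, context, image_id):
            """Return image metadata translated to nova's internal format."""

    class ImageV1Client(ImageClientBase):
        def show(self, context, image_id):
            raise NotImplementedError  # v1-specific translation goes here

    class ImageV2Client(ImageClientBase):
        def show(self, context, image_id):
            raise NotImplementedError  # v2-specific translation goes here

    def get_image_client(version):
        # Factory: the discovered major version picks the implementation.
        return {1: ImageV1Client, 2: ImageV2Client}[version]()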

5. Also we have a bug: https://bugs.launchpad.net/nova/+bug/1539698
Thanks to Samuel Matzek, who found it. There is a fix,
https://review.openstack.org/#/c/274203/ , but it has drawn conflicting
opinions. If you can suggest a better solution, I'll be happy :)


If you have any questions about how it was done feel free to send me emails
(mfedo...@mirantis.com) or ping me in IRC (mfedosin)

And finally I really want to thank you all for supporting this transition
to v2 - it's a big update for OpenStack and without community help it
cannot be done.

Best regards,
Mikhail Fedosin



On Wed, Apr 6, 2016 at 9:35 AM, Nikhil Komawar 
wrote:

> Inline comment.
>
> On 4/1/16 10:16 AM, Sean Dague wrote:
> > On 04/01/2016 10:08 AM, Monty Taylor wrote:
> >> On 04/01/2016 08:45 AM, Sean Dague wrote:
> >>> The glance v2 work is currently blocked as there is no active spec,
> >>> would be great if someone from the glance team could get that rolling
> >>> again.
> >>>
> >>> I started digging back through the patches in detail to figure out if
> >>> there are some infrastructure bits we could get in early regardless.
> >>>
> >>> #1 - new methods for glance xenserver plugin
> >>>
> >>> Let's take a simplified approach on this patch -
> >>> https://review.openstack.org/#/c/266933 and only change the
> >>> xenapi/etc/xapi.d/plugins/ content in the following ways.
> >>>
> >>> - add upload/download_vhd_glance2 methods. Don't add an api parameter.
> >>> Add these methods mostly via copy/paste as we're optimizing for
> deleting
> >>> v1 not for fixing v1.
> >>>
> >>> That will put some infrastructure in place so we can just call the v2
> >>> actions based on decision from higher up the stack.
> >>>
> >>> #2 - move discover major version back to glanceclient -
> >>>
> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108
> >>>
> >>>
> >>> I 

Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Shinobu Kinjo
On Wed, Apr 6, 2016 at 7:47 PM, Sean Dague  wrote:
> On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
>>
>>
>> Le 06/04/2016 06:44, Qiming Teng a écrit :
>>> Not an expert of Nova but I am really shocked by such a change. Because
>>> I'm not a Nova expert, I don't have a say on the *huge* efforts in
>>> maintaining some builtin/default flavors. As a user I don't care where
>>> the data have been stored, but I do care that they are gone. They are
>>> gone because they **WILL** be supported by devstack. They are gone with
>>> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
>>> thanks to the depends-on tag). They are gone in hope that all deployment
>>> tools will know this when they fail, or fortunately they read this email,
>>> or they were reviewing nova patches.
>>>
>>> It would be a little nicer to initiate a discussion on the mailinglist
>>> before such a change is introduced.
>>
>>
>> It was communicated accordingly to operators with no strong arguments :
>> http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html
>
> Not only with no strong arguments, but with a general - "yes please,
> that simplifies our life".
>
>> You can also see that https://review.openstack.org/#/c/300127/ is having
>> three items :
>>  - a DocImpact tag creating a Launchpad bug for documentation about that
>>  - a reno file meaning that our release notes will provide also some
>> comments about that
>>  - a Depends-On tag (like you said) on a devstack change meaning that
>> people using devstack won't see a modified behavior.
>>
>> Not sure what you need more.
>
> The default flavors were originally hardcoded in Nova (in the initial
> commit) -
> https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
>  and moved into the db 5 years ago to be a copy of the EC2 flavors at
> the time -
> https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
> Those flavors were meant to be examples, not the final story.
>
> All the public clouds delete these and do their own thing, as do I
> expect many of the products. Any assumption that software or users have
> that these will exist is a bad assumption.
>
> It is a big change, which is why it's being communicated on Mailing
> Lists in addition to in the release notes so that people have time to
> make any of their tooling not assume these flavors by name will be
> there, or to inject them yourself if you are sure you need them (as was
> done in the devstack case).

I'm clear. Thanks, Sean.

Cheers,
Shinobu

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Sean Dague
On 04/06/2016 04:19 AM, Sylvain Bauza wrote:
> 
> 
> Le 06/04/2016 06:44, Qiming Teng a écrit :
>> Not an expert of Nova but I am really shocked by such a change. Because
>> I'm not a Nova expert, I don't have a say on the *huge* efforts in
>> maintaining some builtin/default flavors. As a user I don't care where
>> the data have been stored, but I do care that they are gone. They are
>> gone because they **WILL** be supported by devstack. They are gone with
>> the workflow +1'ed **BEFORE** the devstack patch gets merged (many
>> thanks to the depends-on tag). They are gone in hope that all deployment
>> tools will know this when they fail, or fortunately they read this email,
>> or they were reviewing nova patches.
>>
>> It would be a little nicer to initiate a discussion on the mailinglist
>> before such a change is introduced.
> 
> 
> It was communicated accordingly to operators with no strong arguments :
> http://lists.openstack.org/pipermail/openstack-operators/2016-March/010045.html

Not only with no strong arguments, but with a general - "yes please,
that simplifies our life".

> You can also see that https://review.openstack.org/#/c/300127/ is having
> three items :
>  - a DocImpact tag creating a Launchpad bug for documentation about that
>  - a reno file meaning that our release notes will provide also some
> comments about that
>  - a Depends-On tag (like you said) on a devstack change meaning that
> people using devstack won't see a modified behavior.
> 
> Not sure what you need more.

The default flavors were originally hardcoded in Nova (in the initial
commit) -
https://github.com/openstack/nova/commit/bf6e6e718cdc7488e2da87b21e258ccc065fe499#diff-5ca8c06795ef481818ea1710fce91800R64
 and moved into the db 5 years ago to be a copy of the EC2 flavors at
the time -
https://github.com/openstack/nova/commit/563a77fd4aa80da9bddac5cf7f8f27ed2dedb39d.
Those flavors were meant to be examples, not the final story.

All the public clouds delete these and do their own thing, as do I
expect many of the products. Any assumption that software or users have
that these will exist is a bad assumption.

It is a big change, which is why it's being communicated on Mailing
Lists in addition to in the release notes so that people have time to
make any of their tooling not assume these flavors by name will be
there, or to inject them yourself if you are sure you need them (as was
done in the devstack case).
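
For anyone who is sure they need the old names, re-injecting them is a
one-off loop along these lines -- a sketch using novaclient's
flavors.create(), where 'nova' is an already-authenticated novaclient
instance and the values should be double-checked against your own release
before relying on them:

    # (name, id, ram MB, disk GB, vcpus) -- the long-standing defaults
    defaults = [
        ('m1.tiny',   '1',   512,   1, 1),
        ('m1.small',  '2',  2048,  20, 1),
        ('m1.medium', '3',  4096,  40, 2),
        ('m1.large',  '4',  8192,  80, 4),
        ('m1.xlarge', '5', 16384, 160, 8),
    ]
    for name, flavorid, ram, disk, vcpus in defaults:
        nova.flavors.create(name, ram, vcpus, disk, flavorid=flavorid)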

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >