Re: [openstack-dev] [ironic] Remember to follow RFE process

2016-03-02 Thread Haomeng, Wang
Thanks, Ruby, for pointing this out.

On Thu, Mar 3, 2016 at 3:25 PM, Haomeng, Wang  wrote:

> Hi Ruby,
>
> Yes, I just noticed that the RFE is in 'Wishlist' status now. Sorry for
> missing the bug status yesterday. We need to follow the process, so I will
> help revert the patch and get it back into review once the RFE is
> reviewed.
>
> -- Haomeng
>
>
>
> On Thu, Mar 3, 2016 at 3:07 AM, Ruby Loo  wrote:
>
>> Hi,
>>
>> Ironic'ers, please remember to follow the RFE process; especially the
>> cores.
>>
>> I noticed that a patch [1] got merged yesterday. The patch was associated
>> with an RFE [2] that hadn't been approved yet :-( What caught my eye was
>> that the commit message didn't describe the actual API change so I took a
>> quick look at the (RFE) bug and it wasn't documented there either.
>>
>> As a reminder, the RFE process is documented [3].
>>
>> Spec cores need to try to be more timely wrt specs (I admit, I am
>> guilty). And folks, especially cores, ought to take more care when
>> reviewing. Although I do feel like there are too many things that a
>> reviewer needs to keep in mind.
>>
>> Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
>> the patch itself. But I don't think I should have to, to know what the API
>> change is.)
>>
>> --ruby
>>
>>
>> [1] https://review.openstack.org/#/c/264005/
>> [2] https://bugs.launchpad.net/ironic/+bug/1530626
>> [3]
>> http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html#adding-new-features
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>


Re: [openstack-dev] [ironic] Remember to follow RFE process

2016-03-02 Thread Haomeng, Wang
Hi Ruby,

Yes, I just noticed that the RFE is in 'Wishlist' status now. Sorry for
missing the bug status yesterday. We need to follow the process, so I will
help revert the patch and get it back into review once the RFE is
reviewed.

-- Haomeng



On Thu, Mar 3, 2016 at 3:07 AM, Ruby Loo  wrote:

> Hi,
>
> Ironic'ers, please remember to follow the RFE process; especially the
> cores.
>
> I noticed that a patch [1] got merged yesterday. The patch was associated
> with an RFE [2] that hadn't been approved yet :-( What caught my eye was
> that the commit message didn't describe the actual API change so I took a
> quick look at the (RFE) bug and it wasn't documented there either.
>
> As a reminder, the RFE process is documented [3].
>
> Spec cores need to try to be more timely wrt specs (I admit, I am guilty).
> And folks, especially cores, ought to take more care when reviewing.
> Although I do feel like there are too many things that a reviewer needs to
> keep in mind.
>
> Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
> the patch itself. But I don't think I should have to, to know what the API
> change is.)
>
> --ruby
>
>
> [1] https://review.openstack.org/#/c/264005/
> [2] https://bugs.launchpad.net/ironic/+bug/1530626
> [3]
> http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html#adding-new-features
>
>
>


Re: [openstack-dev] [Neutron] VM could not get IP from dhcp server

2016-03-02 Thread Ptacek, MichalX
Hi Jingting,

just a few general hints (probably already checked):

-  security group rules in OpenStack (check both ingress and egress) – 
it's quite common that they have to be modified after deployment

-  iptables / firewall – check whether some packets are dropped

-  cross-check that the VLAN is properly configured on the physical interfaces

-  if you use tunnels, ensure that both IPs are on the bridges, not on the 
ported NICs
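
If it helps, the checks above can be run roughly like this — a sketch only: the interface name vxlan-100 is taken from the report below, everything else is deployment-specific, and most of these commands need root:

```shell
# Confirm the kernel opened the VXLAN UDP socket (Linux kernel default: 8472).
ss -u -a -n | grep 8472

# Does the vxlan interface carry the expected VNI and local/remote config?
ip -d link show vxlan-100

# Watch DHCP traffic on the tunnel interface while the VM retries.
tcpdump -n -i vxlan-100 'udp port 67 or udp port 68'

# Look for iptables/firewall DROP counters increasing meanwhile.
iptables -L -n -v | grep -i DROP
```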

Best regards,
Michal

From: 康敬亭 [mailto:jingt...@unitedstack.com]
Sent: Thursday, March 03, 2016 4:20 AM
To: openstack-dev 
Subject: [openstack-dev] [Neutron] VM could not get IP from dhcp server

Hi guys:

I have OpenStack Liberty (linuxbridge + vxlan) installed, and the VM could not 
get an IP from the DHCP server.

Troubleshooting:
Using tcpdump, we can capture the DHCP discover packet on the physical NIC on 
the network node, but can't see it on the vxlan port (vxlan-100) on the network 
node.
In the opposite direction, sending an ARP broadcast in the dhcp namespace, we 
can see the packet on the vxlan port (vxlan-100) but not on the physical NIC.

However, we find that port 8472 is being listened on. Meanwhile, all 
interfaces (vxlan-100, physical NIC) are up and running.

What could be the reason for the issue? Any comments are appreciated.

BR,
Jingting


[openstack-dev] [fuel] Fuel 9.0/Mitaka is now in Feature Freeze

2016-03-02 Thread Dmitry Borodaenko
Feature Freeze [0] for Fuel 9.0/Mitaka is now in effect. From this
moment and until stable/mitaka branch is created at Soft Code Freeze,
please do not merge feature related changes that have not received a
feature freeze exception.

[0] https://wiki.openstack.org/wiki/FeatureFreeze

We will discuss all outstanding feature freeze exception requests in our
weekly IRC meeting tomorrow [1]. If that discussion takes longer than
the 1 hour time slot we have booked on #openstack-meeting-alt, we'll
move the discussion to #fuel-dev and finish it there.

[1] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda

The list of exceptions requested so far is exceedingly long and it is
likely that most of these exceptions will be rejected. If you want your
exception to be approved, please have the following information ready
for the meeting:

1) Link to design spec in fuel-specs, spec review status;

2) Links to all outstanding commits for the feature;

3) Dependencies between your change and other features: what will be
broken or useless if your change is not merged, what else has to be
merged for your change to work;

4) Analysis of impact and risks mitigation plan: which components are
affected by the change, what can break, how can impact be verified, how
can the change be isolated;

5) Status of test coverage: what can be tested, what's covered by
automated tests, what's been tested so far (with links to test results).

-- 
Dmitry Borodaenko



Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-02 Thread Philipp Marek
Hi Preston,

 
> The benchmark scripts are in:
> 
>   https://github.com/pbannister/openstack-bootstrap
in case it might help, here are a few notes and hints about doing 
benchmarks for the DRBD block device driver:

http://blogs.linbit.com/p/897/benchmarking-drbd/

Perhaps there's something interesting for you.


> Found that if I repeatedly scanned the same 8GB volume from the physical
> host (with 1/4TB of memory), the entire volume was cached in (host) memory
> (very fast scan times).
If the iSCSI target (or QEMU, for direct access) is set up to use the buffer 
cache, yes.
Whether you really want that is open to discussion - it might be much more 
beneficial to move that RAM from the hypervisor to the VM, which should 
then be able to do more efficient caching of the filesystem contents that 
it operates on.


> Scanning the same volume from within the instance still gets the same
> ~450MB/s that I saw before. 
Hmmm, with iSCSI in between, that could be the TCP memcpy limitation.

> The "iostat" numbers from the instance show ~44 %iowait, and ~50 %idle.
> (Which to my reading might explain the ~50% loss of performance.) Why so
> much idle/latency?
> 
> The in-instance "dd" CPU use is ~12%. (Not very interesting.)
Because your "dd" testcase will be single-threaded, io-depth 1.
And that means synchronous access, each IO has to wait for the preceeding 
one to finish...
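
If it's useful to illustrate, here's a rough sketch of that effect using a throwaway temp file in place of the Cinder volume (no O_DIRECT, so the page cache will mute the gap you'd see on a real iSCSI-backed device): one sequential dd pass at queue depth 1 versus four concurrent dd readers over disjoint quarters, i.e. queue depth ~4.

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null

# Queue depth 1: a single reader; every read waits for the previous one.
time dd if="$f" of=/dev/null bs=1M 2>/dev/null

# Queue depth ~4: four readers over disjoint 16 MiB quarters of the file.
time ( for off in 0 16 32 48; do
         dd if="$f" of=/dev/null bs=1M count=16 skip=$off 2>/dev/null &
       done; wait )

rm -f "$f"
```

Tools like fio express this directly (--iodepth with an async ioengine) and are the cleaner way to benchmark block devices.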


> Not sure from where the (apparent) latency comes. The host iSCSI target?
> The QEMU iSCSI initiator? Onwards...
Thread scheduling, inter-CPU cache thrashing (if the iSCSI target is on 
a different physical CPU package/socket than the VM), ...


Benchmarking is a dark art.




[openstack-dev] [Neutron] VM could not get IP from dhcp server

2016-03-02 Thread 康敬亭
Hi guys:


I have OpenStack Liberty (linuxbridge + vxlan) installed, and the VM could not 
get an IP from the DHCP server.


Troubleshooting:
Using tcpdump, we can capture the DHCP discover packet on the physical NIC on 
the network node, but can't see it on the vxlan port (vxlan-100) on the network 
node.
In the opposite direction, sending an ARP broadcast in the dhcp namespace, we 
can see the packet on the vxlan port (vxlan-100) but not on the physical NIC.


However, we find that port 8472 is being listened on. Meanwhile, all 
interfaces (vxlan-100, physical NIC) are up and running.


What could be the reason for the issue? Any comments are appreciated.


BR,
Jingting


Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Matt Riedemann



On 3/2/2016 11:45 AM, Markus Zoeller wrote:

TL;DR: Of the ~600 Nova-specific config options:
 ~140 are at a central location with an improved help text
 ~220 are in open reviews (currently on hold)
 ~240 are still todo


Background
==
Nova has a lot of config options. Most of them weren't well
documented, and without looking at the code you probably wouldn't
understand what they do. That's fine for us developers, but the ops
had more problems with the interface we provide for them [1]. After
the Mitaka summit we came to the conclusion that this should be
improved, which is currently in progress with blueprint [2].


Current Status
==
After asking on the ML for help [3] the progress improved a lot.
The goal is clear now and we know how to achieve it. The organization
is done via [4], which also has a section of "odd config options".
This section is important for a later step when we want to deprecate
config options to get rid of unnecessary ones.

As we reached the Mitaka-3 milestone we decided to put the effort [5]
on hold to stabilize the project and focus the review effort on bug
fixes. When the Newton cycle opens, we can continue the work. The
current result can be seen in the sample "nova.conf" file generated
after each commit [6]. The appendix at the end of this post shows an
example.

All options we have will be treated that way and moved to a central
location at "nova/conf/". That central location now hosts the
interface to the ops, and it's easier to get an overview. The appendix
shows how the config options were spread at the beginning and where
they are located now.

I initially thought that we have around 800 config options in Nova,
but I learned meanwhile that we import a lot from other libs, for
example from "oslo.db", and expose them as Nova options. We have around
600 Nova-specific config options; ~140 are already treated as
described above and ~220 are in the pipeline of open reviews.
That leaves ~240 which are not looked at yet.


Outlook
===
The numbers of the beginning of this ML post make me believe that we
can finish the work in the upcoming Newton cycle. "Finished" means
here:
* all config options we provide to our ops have proper and usable docs
* we have an understanding which options don't make sense anymore
* we know which options should get stronger validation to reduce errors

I'm looking forward to it :)


Thanks
==
I'd like to thank all the people who are working on this and making
this possible. A special thanks goes to Ed Leafe, Esra Celik and
Stephen Finucane. They put a tremendous amount of work in it.


References:
===
[1]
http://lists.openstack.org/pipermail/openstack-operators/2016-January/009301.html
[2] https://blueprints.launchpad.net/nova/+spec/centralize-config-options
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081271.html
[4] https://etherpad.openstack.org/p/config-options
[5] Gerrit reviews for this topic:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options
[6] The sample config file which gets generated after each commit:
 http://docs.openstack.org/developer/nova/sample_config.html


Appendix


Example of the help text improvement
---
As an example, compare the previous documentation of the scheduler
option "scheduler_tracks_instance_changes".
Before we started:

 # Determines if the Scheduler tracks changes to instances to help
 # with its filtering decisions. (boolean value)
 #scheduler_tracks_instance_changes = true

After the improvement:

 # The scheduler may need information about the instances on a host
 # in order to evaluate its filters and weighers. The most common
 # need for this information is for the (anti-)affinity filters,
 # which need to choose a host based on the instances already running
 # on a host.
 #
 # If the configured filters and weighers do not need this information,
 # disabling this option will improve performance. It may also be
 # disabled when the tracking overhead proves too heavy, although
 # this will cause classes requiring host usage data to query the
 # database on each request instead.
 #
 # This option is only used by the FilterScheduler and its subclasses;
 # if you use a different scheduler, this option has no effect.
 #
 # * Services that use this:
 #
 # ``nova-scheduler``
 #
 # * Related options:
 #
 # None
 #  (boolean value)
 #scheduler_tracks_instance_changes = true


The spread of config options in the tree

We started with this in November 2015. It's the Nova project tree and
the numbers behind the package name are the numbers of config options
declared in that package (config options declared in sub-packages are
not accumulated).

 

Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Rochelle Grober
Don't quote me on this, but the tool that generates the dev docs is the same 
one the docs team uses to generate the configuration reference.

And they have been looped in on the upcoming improvements.

--Rocky

-Original Message-
From: Matthew Treinish [mailto:mtrein...@kortar.org] 
Sent: Wednesday, March 02, 2016 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] config options help text improvement: 
current status

On Thu, Mar 03, 2016 at 10:24:28AM +1100, Tony Breeds wrote:
> On Wed, Mar 02, 2016 at 06:11:47PM +, Tim Bell wrote:
>  
> > Great. Does this additional improved text also get into the configuration 
> > guide documentation somehow ? 
> 
> It's certainly part of tox -egenconfig, I don't know about docs.o.o

The sample config file is generated (doing basically the same thing as the tox
job) for nova's devref:

http://docs.openstack.org/developer/nova/sample_config.html

-Matt Treinish



Re: [openstack-dev] [nova] nova hooks - document & test or deprecate?

2016-03-02 Thread Adam Young

On 02/29/2016 01:49 PM, Andrew Laski wrote:


On Mon, Feb 29, 2016, at 01:18 PM, Dan Smith wrote:

Forgive my ignorance or for playing devil's advocate, but wouldn't the
main difference between notifications and hooks be that notifications
are asynchronous and hooks aren't?

The main difference is that notifications are external and intended to
be stable (especially with the versioned notifications effort). The
hooks are internal and depend wholly on internal data structures.


In the case of how RDO was using it,
they are adding things to the injected_files list before the instance is
created in the compute API.  You couldn't do that with notifications as
far as I know.

Nope, definitely not, but I see that as a good thing. Injecting files
like that is likely to be very fragile and I think mostly regarded as
substantially less desirable than the alternatives, regardless of how it
happens.

I think that Laski's point was that the most useful and least dangerous
thing that hooks can be used for is the use case that is much better
served by notifications.


I did the original proof-of-concept for this prior to the impl using the 
hooks, just by modifying the metadata.


http://adam.younglogic.com/2013/09/register-vm-freeipa/

That works for a CLI based approach, but not for auto-registering VMs 
created from the WebUI, and also only works if the user crafts the 
Metadata properly.  It was not a secure approach.


What we need is a way to be able to generate a secret and share that 
between the new VM and the enrolling server.  The call does not strictly 
have to be synchronous.  The enrolling server can wait if the VM is not 
up, and the VM can wait if the enrolling server does not have the secret 
when the VM is ready to enroll.


We had discussed having a separate service listen to notifications on 
the bus and inject the necessary data into the IdM server. The hooks 
were a much better solution.


I had seriously thought about using the Keystone token as the symmetric 
shared secret.  It is a horrible solution, but so are all the rest.


There is no security on the message bus at the moment, and until we get 
some, we can't put a secret on the bus.


So, don't deprecate until you have a solution.  All you will be doing is 
putting people in a tight spot where they will have to fork the code 
base, and that is downright antisocial.


Let's plan this out in the Newton Summit and have a plan moving forward.




Yep. My experience with them was things like updating an external cache
on create/delete or calling out to a DNS provider to remove a reverse
DNS entry on instance delete. Things that could easily be handled with
notifications, and use cases that I think we should continue to support
by improving notifications if necessary.



So, if file injection (and any other internals-mangling that other
people may use them for) is not a reason to keep hooks, and if
notifications are the proper way to trigger on things happening, then
there's no reason to keep hooks.

--Dan







[openstack-dev] [app-catalog] IRC Meeting Thursday March 3rd at 17:00UTC

2016-03-02 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for March 3rd at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to get
something on the agenda:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Looking forward to seeing everyone there tomorrow!

-Christopher



Re: [openstack-dev] [nova][cinder] Limits on volume read throughput?

2016-03-02 Thread Preston L. Bannister
First, my degree from school is in Physics. So I know something about
designing experiments. :)

The benchmark script runs "dd" 218 times, against different volumes (of
differing sizes), with differing "bs". Measures are collected both from the
physical host and from within the instance. Linux is told to drop
caches before the start.
The benchmark scripts are in:

  https://github.com/pbannister/openstack-bootstrap

(Very much a work in progress! Not complete or properly documented.)


Second, went through the exercise of collecting hints from the web as to
parameters for tuning iSCSI performance. (I did not expect changing Linux
TCP parameters to change the result for iSCSI over loopback, but measured
to be certain.) Followed all the hints, with no change in performance (as
expected).

Found that if I repeatedly scanned the same 8GB volume from the physical
host (with 1/4TB of memory), the entire volume was cached in (host) memory
(very fast scan times).

Scanning the same volume from within the instance still gets the same
~450MB/s that I saw before. The difference is that "iostat" in the host is
~93% idle. In the host, *iscsi_ttx* is using ~58% of a CPU (sound high?),
and *qemu-kvm* is using ~30% of a CPU. (The physical host is a fairly new
box - with 40(!) CPUs.)

The "iostat" numbers from the instance show ~44 %iowait, and ~50 %idle.
(Which to my reading might explain the ~50% loss of performance.) Why so
much idle/latency?

The in-instance "dd" CPU use is ~12%. (Not very interesting.)


Not sure from where the (apparent) latency comes. The host iSCSI target?
The QEMU iSCSI initiator? Onwards...





On Tue, Mar 1, 2016 at 5:13 PM, Rick Jones  wrote:

> On 03/01/2016 04:29 PM, Preston L. Bannister wrote:
>
> Running "dd" in the physical host against the Cinder-allocated volumes
>> nets ~1.2GB/s (roughly in line with expectations for the striped flash
>> volume).
>>
>> Running "dd" in an instance against the same volume (now attached to the
>> instance) got ~300MB/s, which was pathetic. (I was expecting 80-90% of
>> the raw host volume numbers, or better.) Upping read-ahead in the
>> instance via "hdparm" boosted throughput to ~450MB/s. Much better, but
>> still sad.
>>
>> In the second measure the volume data passes through iSCSI and then the
>> QEMU hypervisor. I expected to lose some performance, but not more than
>> half!
>>
>> Note that as this is an all-in-one OpenStack node, iSCSI is strictly
>> local and not crossing a network. (I did not want network latency or
>> throughput to be a concern with this first measure.)
>>
>
> Well, not crossing a physical network :)  You will be however likely
> crossing the loopback network on the node.
>

Well ... yes. I suspect the latency and bandwidth numbers for loopback are
rather better. :)

For the purposes of this experiment, I wanted to eliminate the physical
network limits as a consideration.



What sort of per-CPU utilizations do you see when running the test to the
> instance?  Also, out of curiosity, what block size are you using in dd?  I
> wonder how well that "maps" to what iSCSI will be doing.
>

First, this measure was collected via a script that tried a moderately
exhaustive number of variations. Yes, I had the same question. Kernel host
read-ahead is 6MB (automatically set). Did not see notable gain past
"bs=2M". (Was expecting a bigger gain for larger reads, but that's not what
the measures showed.)


[openstack-dev] [fuel] Newton PTL and CL elections

2016-03-02 Thread Dmitry Borodaenko
Team,

We're only two weeks away from the beginning of the Newton elections
period. Based on the Fuel 9.0/Mitaka release schedule [0], I propose the 
following dates for PTL and CL self-nomination and election periods:

PTL self-nomination: March 13-20
PTL election: March 21-27
CL self-nomination: March 28-April 3
CL election: April 4-10

Now that we have separated the fuel-ui repository from fuel-web [1], it's
going to be much easier to hold elections for a UI component lead, and I
propose that we do that in the Newton cycle. I proposed a team structure
change that removes the special case we have defined for Fuel UI, and
introduces fuel-ui as a proper component [2].

Please don't postpone this until the very last moment: start thinking
about nominating yourselves for these essential roles, and start working
on your self-nomination announcements.

[0] https://wiki.openstack.org/wiki/Fuel/9.0_Release_Schedule
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-February/087634.html
[2] https://review.openstack.org/287508

-- 
Dmitry Borodaenko



Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

2016-03-02 Thread Fox, Kevin M
no removal without an upgrade path. I've got v1 LB's and there still isn't a 
migration script to go from v1 to v2.

Thanks,
Kevin



From: Stephen Balukoff [sbaluk...@bluebox.net]
Sent: Wednesday, March 02, 2016 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

I am also on-board with removing LBaaS v1 as early as possible in the Newton 
cycle.

On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
> wrote:
Thank you all for your response.

In my opinion, given that UI/Heat support will make Mitaka and will have one 
cycle to mature, it makes sense to remove LBaaS v1 in Newton.
Do we want to discuss an upgrade process at the summit?

-Sam.


From: Bryan Jones [mailto:jone...@us.ibm.com]
Sent: Wednesday, March 02, 2016 5:54 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are we ready?

And as for the Heat support, the resources have made Mitaka, with additional 
functional tests on the way soon.

blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com


- Original message -
From: Justin Pomeroy 
>
To: openstack-dev@lists.openstack.org
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka.  See the blueprint and 
gerrit topic:

https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the 
latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are 
interested in maintaining it, there is a third option, which is that the code 
can be maintained in a separate repo, by a separate team (with or without the 
current core team’s blessing.)

No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which path forward 
is being discussed at today’s Octavia meeting, or feedback is of course 
welcomed here, in IRC, or anywhere.

Thanks,
doug

On Mar 2, 2016, at 7:06 AM, Samuel Bercovici 
> wrote:

Hi,

I have just noticed the following change: 
https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have the 
following before we replace LBaaS v1:
1.  Horizon Support – was not able to find any real activity on it
2.  HEAT Support – will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of LBaaS v1?

-Sam.
















--
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807

[openstack-dev] openstack swift as a cache proxy for nginx, swift proxy report 401 error when authenticate

2016-03-02 Thread Linpeimin
I am trying to find a way to use OpenStack Swift to cache static files for a web 
server such as nginx. Below are the request steps:

1. nginx is configured as a load-balancing proxy server and web server.

2. There are several swift nodes; suppose there are 2, swift-A and swift-B. 
swift-A is the control node and swift-B is the storage node.

3. The client sends a request to nginx for the URL http://domain.com/filename.txt.

4. nginx receives the request; it is a cache miss, so it needs to fetch 
the content from the Swift proxy server.

5. nginx sends a request to the swift proxy server for authentication. The 
URL looks like http://swift-proxy/auth-account, with the account information set 
in the headers. The response from the swift proxy server contains an auth token 
for that account if authentication succeeds.

6. nginx then puts this auth token in a new request header and sends the new 
request to the swift proxy server for the originally requested content. There 
could be a mapping from the client request URL to the swift proxy URL, for 
example /filename.txt --> /account/container/filename.txt, so the new request 
URL could be http://swift-proxy/account/container/filename.txt, plus the 
auth token.

7. The swift proxy server returns the content to nginx; nginx then caches 
the content and passes the response to the client.

I have searched for an answer on the internet and referenced this solution: 
https://forum.nginx.org/read.php?2,250458,250463#msg-250463

Then I changed my nginx configuration like this:

server {
listen   80;
server_name  localhost;
location / {
root   html;
index  index.html index.htm;
auth_request /auth/v1.0;
}
location /auth/v1.0 {
proxy_pass  http://192.168.1.1:8080;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
}
}
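One likely cause (an assumption on my part, not verified against this setup): the auth_request subrequest only carries the headers the browser originally sent, and a browser never supplies the X-Storage-User/X-Storage-Pass headers that tempauth expects — which is exactly what the working curl test below adds by hand. A sketch of a subrequest location that injects them (the service:swift/swift credentials are copied from the curl example and are assumptions):

```nginx
location = /auth/v1.0 {
    internal;
    proxy_pass http://192.168.1.1:8080/auth/v1.0;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    # Inject the tempauth credentials the browser cannot supply.
    # Substitute your own account and key.
    proxy_set_header X-Storage-User "service:swift";
    proxy_set_header X-Storage-Pass "swift";
}
```

Note that auth_request only honors the subrequest's status code; to reuse the returned X-Auth-Token in the later proxied content request you would additionally need something like auth_request_set $auth_token $upstream_http_x_auth_token.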

Port 80 is for nginx and port 8080 is for Swift; both work independently. But 
after I changed the configuration and entered 10.67.247.21 in the Chrome browser, it 
just did not work as I expected: the Swift proxy returns a 401 error, and the Swift proxy 
logs report this:

Mar  1 20:43:48 localhost journal: proxy-logging 192.168.1.1 192.168.1.1 
01/Mar/2016/20/43/48 GET /auth/v1.0 HTTP/1.0 401 - 
Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36
 - - 131 - txbfc24355780143568445c4ddf5d774e3 - 0.0003 -
Mar  1 20:43:48 localhost journal: tempauth - 192.168.1.1 01/Mar/2016/20/43/48 
GET /auth/v1.0 HTTP/1.0 401 - 
Mozilla/5.0%20%28Windows%20NT%206.1%3B%20WOW64%29%20AppleWebKit/537.36%20%28KHTML%2C%20like%20Gecko%29%20Chrome/28.0.1500.72%20Safari/537.36
 - - - - txbfc24355780143568445c4ddf5d774e3 - 0.0007



I don't know whether it matters that I use the Chrome browser to send requests to 
Swift; it looks like some unrecognized characters are included in the request that 
nginx sends to Swift. When I use the curl command to send a request, it works fine, 
like this:

[root@localhost ~]# curl -v -H 'X-Storage-User: service:swift' -H 
'X-Storage-Pass:swift ' http://192.168.1.1:8080/auth/v1.0
*   Trying 192.168.1.1...
* Connected to 192.168.1.1 (192.168.1.1) port 8080 (#0)
> GET /auth/v1.0 HTTP/1.1
> Host: 192.168.1.1:8080
> User-Agent: curl/7.47.1
> Accept: */*
> X-Storage-User: service:swift
> X-Storage-Pass:swift
>
< HTTP/1.1 200 OK
< X-Storage-Url: http://192.168.1.1:8080/v1/AUTH_service
< X-Auth-Token: AUTH_tk4f2eaa45b35c47b4ab0b955710cce6da
< Content-Type: text/html; charset=UTF-8
< X-Storage-Token: AUTH_tk4f2eaa45b35c47b4ab0b955710cce6da
< Content-Length: 0
< X-Trans-Id: tx3b90f2a8a3284f52951cc80ca41f104a
< Date: Tue, 01 Mar 2016 21:10:50 GMT
<
* Connection #0 to host 192.168.1.1 left intact


It seems Swift cannot recognize the request from my nginx, which is configured 
with an additional module named ngx_http_auth_request_module. Maybe nginx is 
not passing the right user and password to Swift. Or should I not use the Chrome 
browser to visit Swift through the nginx proxy?



Below is my swift proxy-server.conf:

[DEFAULT]

bind_port = 8080
bind_ip = 192.168.1.1

workers = 1

user = swift

log_facility = LOG_LOCAL1

eventlet_debug = true

[pipeline:main]

pipeline = catch_errors healthcheck proxy-logging cache tempurl ratelimit 
tempauth staticweb  proxy-logging proxy-server


[filter:catch_errors]

use = egg:swift#catch_errors
set log_name = cache_errors


[filter:healthcheck]

use = egg:swift#healthcheck
set log_name = healthcheck


[filter:proxy-logging]

use = egg:swift#proxy_logging
set log_name = proxy-logging

[filter:ratelimit]

use = egg:swift#ratelimit
set log_name = ratelimit


[filter:crossdomain]

use = egg:swift#crossdomain
set log_name = crossdomain


[filter:tempurl]

use = egg:swift#tempurl
set log_name = tempurl


[filter:tempauth]
use = egg:swift#tempauth
set log_name = tempauth

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-02 Thread Stephen Balukoff
I am also on-board with removing LBaaS v1 as early as possible in the
Newton cycle.

On Wed, Mar 2, 2016 at 9:44 AM, Samuel Bercovici 
wrote:

> Thank you all for your response.
>
>
>
> In my opinion given that UI/HEAT will make Mitaka and will have one cycle
> to mature, it makes sense to remove LBaaS v1 in Newton.
>
> Do we want to discuss an upgrade process at the summit?
>
>
>
> -Sam.
>
>
>
>
>
> *From:* Bryan Jones [mailto:jone...@us.ibm.com]
> *Sent:* Wednesday, March 02, 2016 5:54 PM
> *To:* openstack-dev@lists.openstack.org
>
> *Subject:* Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are
> weready?
>
>
>
> And as for the Heat support, the resources have made Mitaka, with
> additional functional tests on the way soon.
>
>
>
> blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
>
> gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
>
> *BRYAN M. JONES*
>
> *Software Engineer - OpenStack Development*
>
> *Phone: *1-507-253-2620
>
> *E-mail: *jone...@us.ibm.com
>
>
>
>
>
> - Original message -
> From: Justin Pomeroy 
> To: openstack-dev@lists.openstack.org
> Cc:
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we
> ready?
> Date: Wed, Mar 2, 2016 9:36 AM
>
> As for the horizon support, much of it will make Mitaka.  See the
> blueprint and gerrit topic:
>
> https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
> https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z
>
> - Justin
>
>
> On 3/2/16 9:22 AM, Doug Wiegley wrote:
>
> Hi,
>
>
>
> A few things:
>
>
>
> - It’s not proposed for removal in Mitaka. That patch is for Newton.
>
> - HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for
> the latter.)
>
> - I don’t view this as a “keep or delete” question. If sufficient folks
> are interested in maintaining it, there is a third option, which is that
> the code can be maintained in a separate repo, by a separate team (with or
> without the current core team’s blessing.)
>
>
>
> No decisions have been made yet, but we are on the cusp of some major
> maintenance changes, and two deprecation cycles have passed. Which path
> forward is being discussed at today’s Octavia meeting, or feedback is of
> course welcomed here, in IRC, or anywhere.
>
>
>
> Thanks,
>
> doug
>
>
>
> On Mar 2, 2016, at 7:06 AM, Samuel Bercovici  > wrote:
>
>
>
> Hi,
>
>
>
> I have just noticed the following change:
> https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
>
> Is this planned for Mitaka or for Newton?
>
>
>
> While LBaaS v2 is becoming the default, I think that we should have the
> following before we replace LBaaS v1:
>
> 1.  Horizon Support – was not able to find any real activity on it
>
> 2.  HEAT Support – will it be ready in Mitaka?
>
>
>
> Do you have any other items that are needed before we get rid of LBaaS v1?
>
>
>
> -Sam.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Dmitry Borodaenko
Thanks for the detailed explanation, very helpful!

Considering that this change is atomic and easily revertible, let's
proceed with it; the sooner we do, the more time we'll have
to confirm that there is no impact and to revert if necessary.

-- 
Dmitry Borodaenko

On Thu, Mar 03, 2016 at 03:40:22AM +0300, Aleksandra Fedorova wrote:
> Hi,
> 
> let me add some details about the change:
> 
> 1) There are two repositories used to build Fuel ISO: base OS
> repository [1], and mos repository [2], where we put Fuel dependencies
> and packages we rebuild due to certain version requirements.
> 
> The CentOS 7.2 feature is related to the upstream repo only. Packages
> like RabbitMQ, MCollective, Puppet, MySQL and PostgreSQL live in mos
> repository, which has higher priority than upstream.
> 
> I think we need to set up a separate discussion about our policy
> regarding these packages, but for now they are fixed and won't be
> updated by CentOS 7.2 switch.
> 
> 2) This change doesn't affect Fuel codebase.
> 
> The upstream mirror we use for ISO build is controlled by environment
> variable which is set via Jenkins [3] and can be changed anytime.
> 
> As we have daily snapshots of CentOS repository available at [4], in
> case of regression in upstream we can pin our builds to stable
> snapshot and work on the issue without blocking the main development
> flow.

Please make sure that the current snapshot of CentOS 7.1 is not rotated
away so that we don't lose the point we can revert to.

> 3) The "improve snapshotting" work item which is at the moment in
> progress, will prevent any possibility to "accidentally" migrate to
> CentOS 7.3, when it becomes available.
> Thus the only changes which we can fetch from upstream are changes
> which are published to updates/ component of CentOS 7.2 repo.
> 
> 
> As latest BVT on master is green
>https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/69/
> I think we should proceed with Jenkins reconfiguration [5] and switch
> to latest snapshots by default.
> 
> [1] currently http://vault.centos.org/7.1.1503/
> [2] 
> http://mirror.fuel-infra.org/mos-repos/centos/mos9.0-centos7-fuel/os/x86_64/
> [3] 
> https://github.com/fuel-infra/jenkins-jobs/blob/76b5cdf1828b7db1957f7967180d20be099b0c63/common/scripts/all.sh#L84
> [4] http://mirror.fuel-infra.org/pkgs/
> [5] https://review.fuel-infra.org/#/c/17712/
> 
> On Wed, Mar 2, 2016 at 9:22 PM, Mike Scherbakov
>  wrote:
> > It is not just about BVT. I'd suggest to monitor situation overall,
> > including failures of system tests [1]. If we see regressions there, or some
> > test cases will start flapping (what is even worse), then we'd have to
> > revert back to CentOS 7.1.
> >
> > [1] https://github.com/openstack/fuel-qa
> >
> > On Wed, Mar 2, 2016 at 10:16 AM Dmitry Borodaenko 
> > wrote:
> >>
> >> I agree with Mike's concerns, and propose to make these limitations (4
> >> weeks before FF for OS upgrades, 2 weeks for upgrades of key
> >> dependencies -- RabbitMQ, MCollective, Puppet, MySQL, PostgreSQL,
> >> anything else?) official for 10.0/Newton.
> >>
> >> For 9.0/Mitaka, it is too late to impose them, so we just have to be
> >> very careful and conservative with this upgrade. First of all, we need
> >> to have a green BVT before and after switching to the CentOS 7.2 repo
> >> snapshot, so while I approved the spec, we can't move forward with this
> >> until BVT is green again, and right now it's red:
> >>
> >> https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/
> >>
> >> If we get it back to green but it becomes red after the upgrade, you
> >> must switch back to CentOS 7.1 *immediately*. If you are able to stick
> >> to this plan, there is still time to complete the transition today
> >> without requiring an FFE.
> >>
> >> --
> >> Dmitry Borodaenko
> >>
> >>
> >> On Wed, Mar 02, 2016 at 05:53:53PM +, Mike Scherbakov wrote:
> >> > Formally, we can merge it today. Historically, every update of OS caused
> >> > us
> >> > instability for some time: from days to a couple of months.
> >> > Taking this into account and number of other exceptions requested,
> >> > overall
> >> > stability of code, my opinion would be to postpone this to 10.0.
> >> >
> >> > Also, I'd suggest to change the process, and have freeze date for all OS
> >> > updates no later than a month before official FF date. This will give us
> >> > time to stabilize, and ensure that base on which all new code is being
> >> > developed is stable when approaching FF.
> >> >
> >> > I'd also propose to have freeze for major upgrades of 3rd party packages
> >> > no
> >> > later than 2 weeks before FF, which Fuel depends heavily upon. For
> >> > instance, such will include RabbitMQ, MCollective, Puppet.
> >> >
> >> > On Wed, Mar 2, 2016 at 7:34 AM Igor Marnat  wrote:
> >> >
> >> > > Igor,
> >> > > couple of points from my side.
> >> > >
> >> > > CentOS 7.2 will be 

Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Aleksandra Fedorova
Hi,

let me add some details about the change:

1) There are two repositories used to build Fuel ISO: base OS
repository [1], and mos repository [2], where we put Fuel dependencies
and packages we rebuild due to certain version requirements.

The CentOS 7.2 feature is related to the upstream repo only. Packages
like RabbitMQ, MCollective, Puppet, MySQL and PostgreSQL live in mos
repository, which has higher priority than upstream.

I think we need to set up a separate discussion about our policy
regarding these packages, but for now they are fixed and won't be
updated by CentOS 7.2 switch.

2) This change doesn't affect Fuel codebase.

The upstream mirror we use for ISO build is controlled by environment
variable which is set via Jenkins [3] and can be changed anytime.

As we have daily snapshots of CentOS repository available at [4], in
case of regression in upstream we can pin our builds to stable
snapshot and work on the issue without blocking the main development
flow.

3) The "improve snapshotting" work item, which is currently in
progress, will prevent any possibility of "accidentally" migrating to
CentOS 7.3 when it becomes available.
Thus the only changes which we can fetch from upstream are changes
which are published to updates/ component of CentOS 7.2 repo.


As latest BVT on master is green
   https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/69/
I think we should proceed with Jenkins reconfiguration [5] and switch
to latest snapshots by default.

[1] currently http://vault.centos.org/7.1.1503/
[2] http://mirror.fuel-infra.org/mos-repos/centos/mos9.0-centos7-fuel/os/x86_64/
[3] 
https://github.com/fuel-infra/jenkins-jobs/blob/76b5cdf1828b7db1957f7967180d20be099b0c63/common/scripts/all.sh#L84
[4] http://mirror.fuel-infra.org/pkgs/
[5] https://review.fuel-infra.org/#/c/17712/

On Wed, Mar 2, 2016 at 9:22 PM, Mike Scherbakov
 wrote:
> It is not just about BVT. I'd suggest to monitor situation overall,
> including failures of system tests [1]. If we see regressions there, or some
> test cases will start flapping (what is even worse), then we'd have to
> revert back to CentOS 7.1.
>
> [1] https://github.com/openstack/fuel-qa
>
> On Wed, Mar 2, 2016 at 10:16 AM Dmitry Borodaenko 
> wrote:
>>
>> I agree with Mike's concerns, and propose to make these limitations (4
>> weeks before FF for OS upgrades, 2 weeks for upgrades of key
>> dependencies -- RabbitMQ, MCollective, Puppet, MySQL, PostgreSQL,
>> anything else?) official for 10.0/Newton.
>>
>> For 9.0/Mitaka, it is too late to impose them, so we just have to be
>> very careful and conservative with this upgrade. First of all, we need
>> to have a green BVT before and after switching to the CentOS 7.2 repo
>> snapshot, so while I approved the spec, we can't move forward with this
>> until BVT is green again, and right now it's red:
>>
>> https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/
>>
>> If we get it back to green but it becomes red after the upgrade, you
>> must switch back to CentOS 7.1 *immediately*. If you are able to stick
>> to this plan, there is still time to complete the transition today
>> without requiring an FFE.
>>
>> --
>> Dmitry Borodaenko
>>
>>
>> On Wed, Mar 02, 2016 at 05:53:53PM +, Mike Scherbakov wrote:
>> > Formally, we can merge it today. Historically, every update of OS caused
>> > us
> > instability for some time: from days to a couple of months.
>> > Taking this into account and number of other exceptions requested,
>> > overall
>> > stability of code, my opinion would be to postpone this to 10.0.
>> >
>> > Also, I'd suggest to change the process, and have freeze date for all OS
>> > updates no later than a month before official FF date. This will give us
>> > time to stabilize, and ensure that base on which all new code is being
>> > developed is stable when approaching FF.
>> >
>> > I'd also propose to have freeze for major upgrades of 3rd party packages
>> > no
>> > later than 2 weeks before FF, which Fuel depends heavily upon. For
>> > instance, such will include RabbitMQ, MCollective, Puppet.
>> >
>> > On Wed, Mar 2, 2016 at 7:34 AM Igor Marnat  wrote:
>> >
>> > > Igor,
>> > > couple of points from my side.
>> > >
>> > > CentOS 7.2 will be getting updates for several more months, and we
>> > > have
>> > > snapshots and all the mechanics in place to switch to the next version
>> > > when
>> > > needed.
>> > >
>> > > Speaking of getting this update into 9.0, we actually don't need FFE,
>> > > we
> > can merge remaining stuff today. It has enough reviews, so if you add
>> > > your
>> > > +1 today, we don't need FFE.
>> > >
>> > > https://review.openstack.org/#/c/280338/
>> > > https://review.fuel-infra.org/#/c/17400/
>> > >
>> > >
>> > >
>> > > Regards,
>> > > Igor Marnat
>> > >
>> > > On Wed, Mar 2, 2016 at 6:23 PM, Dmitry Teselkin
>> > > 
>> > > wrote:
>> > >
>> > >> Igor,
>> > >>
>> > 

Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Rick Jones

On 03/02/2016 02:46 PM, Mike Spreitzer wrote:

Kevin Benton  wrote on 03/02/2016 01:27:27 PM:

 > Does it at least also include the UUID, or is there no way to tell
 > from 'nova show'?

No direct way to tell, as far as I can see.


Yep.  Best I can find is:

neutron port-list -- --device_id 
then
neutron port-show 

Ironically enough, while nova show shows the security group name, 
neutron port-show shows the UUID.  Clearly an eschewing of foolish 
consistency :)


Drifting... it seems that nova list will sort by instance name 
ascending, and openstack server list will sort by instance name ... 
descending.  And openstack server show will emit a less formatted 
version of the security group name than nova show does.


happy benchmarking,

rick jones


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Matthew Treinish
On Thu, Mar 03, 2016 at 10:24:28AM +1100, Tony Breeds wrote:
> On Wed, Mar 02, 2016 at 06:11:47PM +, Tim Bell wrote:
>  
> > Great. Does this additional improved text also get into the configuration 
> > guide documentation somehow ? 
> 
> It's certainly part of tox -egenconfig, I don't know about docs.o.o

The sample config file is generated (doing basically the same thing as the tox
job) for nova's devref:

http://docs.openstack.org/developer/nova/sample_config.html

-Matt Treinish




Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Tony Breeds
On Wed, Mar 02, 2016 at 06:11:47PM +, Tim Bell wrote:
 
> Great. Does this additional improved text also get into the configuration 
> guide documentation somehow ? 

It's certainly part of tox -egenconfig, I don't know about docs.o.o

Tony.




[openstack-dev] [tosca-parser] [heat-translator] [heat] [tacker] Heat-Translator 0.4.0 PyPI release

2016-03-02 Thread Sahdev P Zala
Hello Everyone, 

On behalf of the Heat-Translator team, I am pleased to announce the 0.4.0 
PyPI release of heat-translator which can be downloaded from 
https://pypi.python.org/pypi/heat-translator

This release includes the following enhancements:

▪  Uses latest tosca-parser 0.4.0 release
▪  Introduced support for TOSCA Policy translation
▪  Introduced support for TOSCA NFV translation
▪  New test suite for OpenStackClient (OSC) plug-in
▪  Allows user to provide parameters at the deployment time 
by using Heat get_param function
▪  Dynamic handling of Nova server specific key_name property 
which is not part of TOSCA template, and TOSCA Compute specific 
capabilities properties for constraints based selection of flavor and 
image
▪  Enhanced interfaces translation with support for 
get_artifact function
▪  Bug fixes
▪  Doc updates

Thanks!

Regards, 
Sahdev Zala
PTL, Heat-Translator

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Mike Spreitzer
Kevin Benton  wrote on 03/02/2016 01:27:27 PM:

> Does it at least also include the UUID, or is there no way to tell 
> from 'nova show'?

No direct way to tell, as far as I can see.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Fox, Kevin M
Yeah, we've changed the default so that at very least you can ssh to the vm.

If all you provide is a completely locked or a completely open sg, users will 
choose the completely open one every time. :/

Putting in a few common cases might go a long way toward keeping things more 
secure by default.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Wednesday, March 02, 2016 1:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] - Changing the Neutron default security 
group rules

On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
[...]
> In my mind, the default security group is there so that as people
> are developing their security policy they can at least start with
> a default that offers a small amount of protection.

Well, not a small amount of protection. The instances boot
completely unreachable from the global Internet, so this is pretty
significant protection if you consider the most secure system is one
which isn't connected to anything. Unfortunately this is not, I
think, what most users want as an end state for most of their
instances. I simply wonder if there's a default which can be useful
to at least some majority, rather than having to make things equally
complex for everyone. Hard to identify, rife with opinion, and not a
solution I'm holding my breath for... but probably still more
attainable than world peace.

> Disabling that protection means I'd have to be dealing with a vast
> number of customers with instances that have been compromised
> because they didn't add to the security groups.

Sure, and that's I think how we've arrived at the default
indecision. It's easier to tell customers that they have to adjust
their firewall rules before they can do anything at all (and risk
some of them going elsewhere for an easier out-of-the-box
experience), than to bear the liability and reputation loss from
customers getting compromised because they assumed wrongly that they
shouldn't have to secure their systems "in the cloud." That said,
there _are_ providers whose default behavior is to not filter you.

In IRC I tried to draw comparisons to colocation, where my default
expectation is a routed network I can put my servers on with no risk
that the provider is surreptitiously blocking my traffic. If I want
packet filtering, I can bring a firewall into the colo and plug it
in, then configure it to my liking, but the default bare-bones
experience is a _less_ complex one (no firewall appliance) and if I
want separate filtering that's additional complexity I opt into by
choice.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-03-02 Thread Zane Bitter

On 02/03/16 05:50, Mathieu Velten wrote:

Hi all,

I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.

Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal by sticking to this model, however I couldn't
find a suitable Heat construct that would allow that.

Here are the details of my investigation :
- OS::Heat::ResourceGroup doesn't allow specifying a list as a variable
to iterate over, so we would need one ResourceGroup per AZ
- OS::Nova::ServerGroup only allows restriction at the hypervisor level
- OS::Heat::InstanceGroup has an AZs parameter, but it is marked as
unimplemented and is CFN-specific.
- OS::Nova::HostAggregate only seems to allow adding some metadata to
a group of hosts in a defined availability zone
- the repeat function only works inside the properties section of a
resource and can't be used at the resource level itself, hence
something like the following is not allowed:

resources:
   repeat:
 for_each:
   <%az%>: { get_param: availability_zones }
 template:
   rg-<%az%>:
 type: OS::Heat::ResourceGroup
 properties:
   count: 2
   resource_def:
 type: hot_single_server.yaml
 properties:
   availability_zone: <%az%>


The only possibility that I see is generating one ResourceGroup per AZ,
but that would induce some big changes in Magnum to handle
modification/generation of templates.

Any ideas ?


This is a long-standing missing feature in Heat. There are two 
blueprints for this (I'm not sure why):


https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

The latter had a spec with quite a lot of discussion:

https://review.openstack.org/#/c/105907

And even an attempted implementation:

https://review.openstack.org/#/c/116139/

which was making some progress but is long out of date and would need 
serious work to rebase. The good news is that some of the changes I made 
in Liberty like https://review.openstack.org/#/c/213555/ should 
hopefully make it simpler.


All of which is to say, if you want to help then I think it would be 
totally do-able to land support for this relatively early in Newton :)



Failing that, the only thing I can think of to try is something I am pretty 
sure won't work: a ResourceGroup with something like:


  availability_zone: {get_param: [AZ_map, "%i"]}

where AZ_map looks something like {"0": "az-1", "1": "az-2", "2": 
"az-1", ...} and you're using the member index to pick out the AZ to use 
from the parameter. I don't think that works (if "%i" is resolved after 
get_param then it won't, and I suspect that's the case) but it's worth a 
try if you need a solution in Mitaka.
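Spelled out, that untested workaround might look like the following HOT fragment (the image/flavor values and the three-entry map are placeholders; whether "%i" is substituted before get_param is resolved is exactly the open question):

```yaml
parameters:
  AZ_map:
    type: json
    default: {"0": "az-1", "1": "az-2", "2": "az-1"}

resources:
  servers:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: my-image    # placeholder
          flavor: m1.small   # placeholder
          # "%i" is the group's member index; the hope is that it is
          # substituted early enough to index into the map parameter.
          availability_zone: {get_param: [AZ_map, "%i"]}
```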


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][log] Ideas to log request-ids in cross-projects

2016-03-02 Thread Doug Hellmann
Excerpts from Kekane, Abhishek's message of 2016-03-01 06:17:15 +:
> Hi Devs,
> 
> Considering return request-id to caller specs [1] is implemented in
> python-*client, I would like to begin discussion on how these request-ids
> will be logged in cross-projects. In logging work-group meeting (11-Nov-2015)
> [2] there was a discussion about how to log request-id in the log messages.
> In the same meeting it was decided to write an oslo.log spec, but as of now no 
> spec has been submitted.
> 
> I would like to share our approach to log request-ids and seek suggestions
> for the same. We are planning to use request_utils module [3] which was
> earlier part of oslo-incubator but removed as no one was using it.
> 
> A typical use case is: Tempest asking Nova to perform some action and Nova
> calling Glance internally, then the linkages might look like this:
> 
> RequestID mapping in nova for nova and glance:
> -
> 
> INFO nova.utils [req-f0fb885b-18a2-4510-9e85-b9066b410ee4 admin admin] 
> Request ID Link: request.link 'req-f0fb885b-18a2-4510-9e85-b9066b410ee4' -> 
> Target='glance' TargetId=req-a1ac739c-c816-4f82-ad82-9a9b1a603f43

When is that message emitted? After glance returns a value? What logs
the message?

> 
> RequestID mapping in tempest for tempest and nova:
> -
> 
> INFO tempest.tests [req-a0df655b-18a2-4510-9e85-b9435dh8ye4 admin admin] 
> Request ID Link: request.link 'req-a0df655b-18a2-4510-9e85-b9435dh8ye4' -> 
> Target='nova' TargetId=req-f0fb885b-18a2-4510-9e85-b9066b410ee4
> 
> As there is a reference of nova's request-id in tempest and glance's
> request-id in nova, operator can easily trace the cause of failure.
> 
> Using request_utils module we can also mention the 'stage' parameter to
> divide the entire api cycle with stages, e.g. create server can be
> staged as start, get-image can be staged as download-image and active instance
> can be staged as end of the operation.

I think this is conflating the request stages and "linking" in a way
that isn't going to always apply.

It really seems like what we want is to just log the request id(s)
returned from each call made using a client. The format you proposed
includes all of that data, it's just a bit more verbose than I think we
really need.

Given that we want to log the information regardless of whether
there was an error, we need to put the logging inside the client
libraries themselves where we can always log before raising an
exception. As a bonus, this significantly reduces the number of
places we need to make code changes to log the request id chain.
The clients don't know the request id for the current context, but
that's OK because oslo.context does and apps using oslo.log will
get that for free (that's the repeated value in your example above).

So, we could, from the client, do something like:

  LOG.info('call to %(my_service_name)s.%(my_endpoint_name)s used request id 
%(response_request_id)s',
   extras={'my_service_name': 'nova', 'my_endpoint_name':
   'create_server', 'response_request_id': request_id})

That would produce something like:

  INFO tempest.tests [req-a0df655b-18a2-4510-9e85-b9435dh8ye4 admin admin] call 
to nova.create_server used request id req-f0fb885b-18a2-4510-9e85-b9066b410ee4

and in the JSON formatter output, you would have separate fields for
request_id (the current request) and response_request_id (the value just
returned to the client by the service).

I don't know if we want to use INFO or DEBUG level, so I've used INFO to
be consistent with your example.
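Concretely, a minimal sketch of such a client-side helper (names are illustrative, not an existing oslo API; the dict is passed as the interpolation argument so the ids appear in the rendered message, while a structured formatter could still emit them as separate fields):

```python
import logging

LOG = logging.getLogger(__name__)


def log_response_request_id(service_name, endpoint_name, response_request_id):
    """Log the request id a service returned for one client call.

    Sketch of the pattern described above; the field names mirror the
    example in this thread and are not an established oslo.log
    convention.
    """
    LOG.info('call to %(my_service_name)s.%(my_endpoint_name)s '
             'used request id %(response_request_id)s',
             {'my_service_name': service_name,
              'my_endpoint_name': endpoint_name,
              'response_request_id': response_request_id})
```

A client library would call this just before returning (or raising), so the request-id chain gets logged even on errors.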

> 
> Advantages:
> ---
> 
> With stages provided for API, it's easy for the operator to find out the 
> failure stage from entire API cycle.

I think the "stages" concept is better addressed by updating the
way we log to match the "unit of work" pattern described in our log
guidelines [1].

[1] 
http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html#log-messages-at-info-and-above-should-be-a-unit-of-work
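Interpreted that way, a completed "stage" becomes an interpolated field in a
single unit-of-work message rather than separate start/end markers. A small
sketch of what such a line looks like, using plain stdlib logging (the message
text and field names are illustrative):

```python
import logging

# Unit-of-work style: one INFO line describing the completed operation,
# with the variable parts passed as a mapping so structured formatters
# can keep them as separate fields.
record = logging.LogRecord(
    'nova.compute', logging.INFO, __file__, 0,
    'Booted instance %(uuid)s from image %(image)s in %(secs).1fs',
    {'uuid': 'inst-1', 'image': 'img-9', 'secs': 4.2}, None)
print(record.getMessage())  # → Booted instance inst-1 from image img-9 in 4.2s
```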

> 
> An example with 'stage' is,
> Tempest asking Nova to perform some action and Nova calling Glance internally,
> then the linkages might look like this:
> 
> INFO tempest.tests [req-a0df655b-18a2-4510-9e85-b9435dh8ye4 admin admin] 
> Request ID Link: request.link.start 'req-a0df655b-18a2-4510-9e85-b9435dh8ye4'
> 
> INFO nova.utils [req-f0fb885b-18a2-4510-9e85-b9066b410ee4 admin admin] 
> Request ID Link: request.link.image_download 
> 'req-f0fb885b-18a2-4510-9e85-b9066b410ee4' -> Target='glance' 
> TargetId=req-a1ac739c-c816-4f82-ad82-9a9b1a603f43
> 
> INFO tempest.tests [req-b0df857fb-18a2-4510-9e85-b9435dh8ye4 admin admin] 
> Request ID Link: request.link.end 'req-b0df857fb-18a2-4510-9e85-b9435dh8ye4'
> 
> Concern:
> 
> 
> As request_utils module is removed from oslo-incubator and this module is
> also getting deprecated, I have following options to add it back to OpenStack.
> 
> 

Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread James Denton
My opinion is that the current stance of ‘deny all’ is probably the safest bet 
for all parties (including users) at this point. It’s been that way for years 
now, and changing it would be a substantial shift that may result in little 
benefit. After all, you’re probably looking at most users removing the default 
rule(s) just to add something more restrictive that suits their organization’s 
security posture. If they aren’t, then it’s possible they’re introducing 
unnecessary risk. 

There should be some onus put on the provider and/or the user/project/tenant to 
develop a default security policy that meets their needs, even going so far as 
to make the configuration of their default security group the first thing they 
do once the project is created. Maybe some changes to the workflow in Horizon 
could help mitigate some issues users are experiencing with limited access to 
instances by allowing them to apply some rules at the time of instance creation 
rather than associating groups consisting of unknown rules. Or allowing changes 
to the default security group rules of a project when that project is created. 
There are some ways to enable providers/users to help themselves rather than a 
blanket default change across all environments. If I’m a user utilizing 
multiple OpenStack providers, I’m probably bringing my own security groups and 
rules with me anyway and am not relying on any provider defaults.
 

James

On 3/2/16, 3:47 PM, "Jeremy Stanley"  wrote:

>On 2016-03-02 21:25:25 + (+), Sean M. Collins wrote:
>> Jeremy Stanley wrote:
>> > On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
>> > [...]
>> > > In my mind, the default security group is there so that as people
>> > > are developing their security policy they can at least start with
>> > > a default that offers a small amount of protection.
>> > 
>> > Well, not a small amount of protection. The instances boot
>> > completely unreachable from the global Internet, so this is pretty
>> > significant protection if you consider the most secure system is one
>> > which isn't connected to anything.
>> 
>> This is only if you are booting on a v4 network, which has NAT enabled.
>> Many public providers, the network you attach to is publicly routed, and
>> with the move to IPv6 - this will become more common. Remember, NAT is
>> not a security device.
>
>I agree that address translation is a blight on the Internet, useful
>in some specific circumstances (such as virtual address load
>balancing) but otherwise an ugly workaround for dealing with address
>exhaustion and connecting conflicting address assignments. I'll be
>thrilled when its use trails off to the point that newcomers cease
>thinking that's what connectivity with the Internet is supposed to
>be like.
>
>What I was referring to in my last message was the default security
>group policy, which blocks all ingress traffic. My point was that
>dropping all inbound connections, while a pretty secure
>configuration, is unlikely to be the desired configuration for
>_most_ servers. The question is whether there's enough overlap in
>different desired filtering policies to come up with a better
>default than one everybody has to change because it's useful for
>basically nobody, or whether we can come up with easier solutions
>for picking between a canned set of default behaviors (per Monty's
>suggestion) which users can expect to find in every OpenStack
>environment and which provide consistent behaviors across all of
>them.
>-- 
>Jeremy Stanley
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] openstack health accounting problem

2016-03-02 Thread Andrea Frittoli
Thanks Sean for bringing this up.

It's a known pain point that we discussed back in Tokyo [0].

Failures in class level fixtures are difficult to handle consistently,
because there is no concept of success / failure at class level in the data
model.

If a failure happens during setup, no test is actually executed. Marking
all the tests in the class as skipped would provide false data. Failures in
setup are related to the creation / deletion of resources that are required
by most tests in the class, but not necessarily all of them. Because of that,
marking all the tests in the class as failed may also be inaccurate -
besides, certain tests might have been skipped anyway, and they would be
reported as failed for no good reason.

If a failure happens during cleanup, all tests already had a result, so
what to do with the teardown failure? An option would be to avoid failing
during teardown, and only log issues in cleanup as warnings.

Before we can fix this, we need to get an agreement on a way to treat these
failures, and be ready to accept the noise that might be added to the
data.

Personally I would prefer to:
- fix tests so that we never fail in a tearDownClass (that should be easy
to do)
- fix the test framework so that all tests are marked as failed upon
failure in setUpClass

The downside of this approach is that it may require changes in each test
framework that reports data in OpenStack Health.

andrea

[0] https://etherpad.openstack.org/p/mitaka-qa-testr-datastore-layering

On Wed, Mar 2, 2016 at 1:19 PM Sean Dague  wrote:

> I noticed in looking at the last 10 failures in the gate view -
> http://status.openstack.org//openstack-health/#/ that 2 of them were
> tearDown in test_volume_boot_pattern.
>
> However, it looks like these don't count against the
> test_volume_boot_pattern success rate. Which means the real fail rate of
> the boot from volumes tests is not accounted for.
>
> Can we get that folded in correctly? I do realize the complexities of
> tearDownClass accounting. But it seems like that should be done here
> otherwise we don't really realize how bad our hot spots are.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packstack] New upstream integration gate jobs

2016-03-02 Thread David Moreau Simard
Hi everyone !

Throughout the Mitaka cycle, we have been working hard towards getting
Packstack to test itself with a self-installed Tempest implementation
and I'm excited to announce that it's a great success !

This effectively allowed us not only to add three different
integration tests right within Packstack itself [1] but also to find
and resolve several issues thanks to these new tests.

We have been testing the three test scenarios for a while as
non-voting jobs and after some tweaking, they are ready for prime
time.
Thus, I'd like to draw your attention to the fact that the three test
scenarios will be made "voting" and "gating" shortly [2].

This means that each and every patch set will need to pass these tests
before merging into the repository.
We all know that gate jobs can be inconvenient or break due to reasons
beyond your control, but it is a necessary evil.

There are several benefits to these new gate jobs, both for Packstack
users and the RDO community:
- New patches will be tested against the wide range of supported
Packstack project implementations as defined in the test matrix [1],
ensuring a patch does not have unintended regressions or issues
- These patches will be tested using the "current-passed-ci" RDO trunk
repository packages for the ongoing cycle (master branches) and stable
releases from Mitaka onwards
- The three Packstack integration tests will be implemented in the RDO
CI test pipeline for stable package promotion with WeIRDO [3], so RDO
will use Packstack as a way to test RDO packages before declaring
packages stable

This is a first step of many to straighten Packstack up, improving
its stability, quality and standards.
As the fourth most popular OpenStack installer [4], we owe the
community no less than making sure it works well.

If you have any questions or comments, feel free to discuss on the
thread or reach out to us on #rdo on freenode.

Thanks !

[1]: https://github.com/openstack/packstack#packstack-integration-tests
[2]: https://review.openstack.org/#/c/287461/
[3]: https://github.com/redhat-openstack/weirdo
[4]: https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Jeremy Stanley
On 2016-03-02 21:25:25 + (+), Sean M. Collins wrote:
> Jeremy Stanley wrote:
> > On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
> > [...]
> > > In my mind, the default security group is there so that as people
> > > are developing their security policy they can at least start with
> > > a default that offers a small amount of protection.
> > 
> > Well, not a small amount of protection. The instances boot
> > completely unreachable from the global Internet, so this is pretty
> > significant protection if you consider the most secure system is one
> > which isn't connected to anything.
> 
> This is only if you are booting on a v4 network, which has NAT enabled.
> Many public providers, the network you attach to is publicly routed, and
> with the move to IPv6 - this will become more common. Remember, NAT is
> not a security device.

I agree that address translation is a blight on the Internet, useful
in some specific circumstances (such as virtual address load
balancing) but otherwise an ugly workaround for dealing with address
exhaustion and connecting conflicting address assignments. I'll be
thrilled when its use trails off to the point that newcomers cease
thinking that's what connectivity with the Internet is supposed to
be like.

What I was referring to in my last message was the default security
group policy, which blocks all ingress traffic. My point was that
dropping all inbound connections, while a pretty secure
configuration, is unlikely to be the desired configuration for
_most_ servers. The question is whether there's enough overlap in
different desired filtering policies to come up with a better
default than one everybody has to change because it's useful for
basically nobody, or whether we can come up with easier solutions
for picking between a canned set of default behaviors (per Monty's
suggestion) which users can expect to find in every OpenStack
environment and which provide consistent behaviors across all of
them.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Sean M. Collins
Clark Boylan wrote:
> On Wed, Mar 2, 2016, at 09:38 AM, Sean M. Collins wrote:
> > Kevin Benton wrote:
> > > * Neutron cannot be trusted to do what it says it's doing with the 
> > > security
> > > groups API so users want to orchestrate firewalls directly on their
> > > instances.
> > 
> > This one really rubs me the wrong way. Can we please get a better
> > description of the bug - instead of someone just saying that Neutron
> > doesn't work, therefore we don't want any filtering or security for our
> > instances using an API?
> 
> Sure. There are two ways this manifests. The first is that there have
> been bugs in security groups where traffic is passed despite being told
> not to pass that traffic. This has been treated as a bug in the past and
> corrected which is great so this particular instance of the issue is
> less worrysome.

So as Kevin stated, there does not appear to be any known bugs where
traffic is passed despite being disallowed. If this were the case, I
assure you, this would be treated as a serious issue and fixed quickly.
If you are experiencing this issue, please open a bug and help us
address it.

We can't make serious policy decisions based on rumors and hearsay about
how Neutron doesn't work correctly.

> The second is that I will explicitly tell neutron to
> pass traffic but for whatever reason that traffic ends up being blocked
> anyways. One concrete example of this is the infra team has had to stop
> using GRE because at least two of our clouds do not pass GRE traffic
> despite having explicit "pass all ipv4 and all ipv6 between all possible
> addresses rules".

Are we certain that Neutron is the culprit? If so, please, open a bug
and help us track this down.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Sean M. Collins
Jeremy Stanley wrote:
> On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
> [...]
> > In my mind, the default security group is there so that as people
> > are developing their security policy they can at least start with
> > a default that offers a small amount of protection.
> 
> Well, not a small amount of protection. The instances boot
> completely unreachable from the global Internet, so this is pretty
> significant protection if you consider the most secure system is one
> which isn't connected to anything. 

This is only if you are booting on a v4 network, which has NAT enabled.
Many public providers, the network you attach to is publicly routed, and
with the move to IPv6 - this will become more common. Remember, NAT is
not a security device.


-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Jeremy Stanley
On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
[...]
> In my mind, the default security group is there so that as people
> are developing their security policy they can at least start with
> a default that offers a small amount of protection.

Well, not a small amount of protection. The instances boot
completely unreachable from the global Internet, so this is pretty
significant protection if you consider the most secure system is one
which isn't connected to anything. Unfortunately this is not, I
think, what most users want as an end state for most of their
instances. I simply wonder if there's a default which can be useful
to at least some majority, rather than having to make things equally
complex for everyone. Hard to identify, rife with opinion, and not a
solution I'm holding my breath for... but probably still more
attainable than world peace.

> Disabling that protection means I'd have to be dealing with a vast
> number of customers with instances that have been compromised
> because they didn't add to the security groups.

Sure, and that's I think how we've arrived at the default
indecision. It's easier to tell customers that they have to adjust
their firewall rules before they can do anything at all (and risk
some of them going elsewhere for an easier out-of-the-box
experience), than to bear the liability and reputation loss from
customers getting compromised because they assumed wrongly that they
shouldn't have to secure their systems "in the cloud." That said,
there _are_ providers whose default behavior is to not filter you.

In IRC I tried to draw comparisons to colocation, where my default
expectation is a routed network I can put my servers on with no risk
that the provider is surreptitiously blocking my traffic. If I want
packet filtering, I can bring a firewall into the colo and plug it
in, then configure it to my liking, but the default bare-bones
experience is a _less_ complex one (no firewall appliance) and if I
want separate filtering that's additional complexity I opt into by
choice.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-02 Thread Vladimir Eremin
Hi MichalX, Sean,

Building from source is possible, but it will be more stable if you use the 
packaging system from the OS. Also, it would be really good if your module made 
changes to the OpenStack configuration files using puppet-nova and 
puppet-neutron, and if it could be split into compute/agent and scheduler 
parts.

I would be really glad to see a modular, reusable solution that could be 
integrated with our implementation in fuel-library [1].

[1]: 
https://review.openstack.org/#/q/topic:bp/support-dpdk+project:openstack/fuel-library,n,z

-- 
With best regards,
Vladimir Eremin,
Fuel Deployment Engineer,
Mirantis, Inc.



> On Mar 2, 2016, at 10:48 PM, Ptacek, MichalX  wrote:
> 
> Thanks Emilien, 
> It's becoming more clear to me what has to be done.
> Did I get it correctly that using bash code inside puppet module is "nish 
> nish" and will NOT be accepted by the community ?
> (even if we move the logic into own module like openstack/ovs-dpdk)
> Additionally building from the src or using own packages from such builds is 
> also not possible in such modules even despite its performance or other 
> functional benefits ?
> 
> best regards,
> Michal
> 
> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com] 
> Sent: Wednesday, March 02, 2016 6:51 PM
> To: Ptacek, MichalX ; 'OpenStack Development 
> Mailing List (not for usage questions)' ; 
> m...@mattfischer.com
> Cc: Mooney, Sean K ; Czesnowicz, Przemyslaw 
> 
> Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron
> 
> 
> 
> On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
>> Hi all,
>> 
>> 
>> 
>> we have puppet module for ovs deployments with dpdk support
>> 
>> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet
> 
> IMHO that's a bad idea to use networking-ovs-dpdk for the puppet module.
> You should initiate the work to create openstack/puppet-dpdk (not sure about 
> the name) or try to patch openstack/puppet-vswitch.
> 
> How puppet-vswitch would be different from puppet-dpdk?
> 
> I've looked at the code and you run bash scripts from Puppet.
> Really ? :-)
> 
>> and we would like to adapt it in a way that it can be used within 
>> upstream neutron module
>> 
>> e.g. to introduce class like this
>> 
>> neutron::agents::ml2::ovsdpdk
>> 
>> 
>> 
>> Current code works as follows:
>> 
>> -  Openstack with installed vanilla ovs is a kind of precondition
>> 
>> -  Ovsdpdk puppet module installation is triggered afterwards
>> and it replace vanilla ovs by ovsdpdk
>> 
>> (in order to have some flexibility and mostly due to performance 
>> reasons we are building ovs from src code)
>> 
>> https://github.com/openstack/networking-ovs-dpdk/blob/master/puppet/ov
>> sdpdk/files/build_ovs_dpdk.erb
>> 
>> -  As a part of deployments we have several shell scripts, which
>> are taking care of build and configuration stuff
>> 
>> 
>> 
>> I assume that some parts of our code can be easily rewritten to start 
>> using standard providers other parts might be rewritten to ruby …
>> 
>> We would like to introduce neutron::agents::ml2::ovsdpdk as adequate 
>> solution with existing neutron::agents::ml2::ovs and not just patching it.
>> 
> 
> What Puppet OpenStack group will let neutron::agents::ml2::ovsdpdk doing:
> 
> * configure what you like in /etc/neutron/*
> * install what you want that is part of OpenStack/Neutron* (upstream).
> 
> What Puppet OpenStack group WILL NOT let neutron::agents::ml2::ovsdpdk
> doing:
> 
> * install third party software (packages from some custom repositories, not 
> upstream).
> * build RPM/DEB from bash scripts
> * build anything from bash scripts
> * configure anything outside /etc/neutron/*
> 
>> 
>> Actually I have following questions:
>> 
>> Q1) Will it be acceptable if we move build logic before deployment and 
>> resulting rpm/deb will be installed instead of ovs package during 
>> deployment ?
> 
> You should engage efforts to have upstream packaging in Ubuntu/Debian and Red 
> Hat systems (RDO).
> 
>> Q2) Do we need to rewrite bash logic into ruby code ?
> 
> Drop bash scripts, and use upstream packages, like we do everywhere else.
> 
>> Q3) Do we need to raise separate blueprint, which has to be approved  
>> before starting adaptations ?
> 
> Feel free to submit a blueprint so our group can be involved in this 
> discussion, or maybe this thread is enough.
> --
> Emilien Macchi
> 
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> 
> 
> This e-mail and any attachments may contain confidential material for the sole
> use of the intended recipient(s). Any review or distribution by others is
> strictly prohibited. If you 

Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-02 Thread Emilien Macchi


On 03/02/2016 02:48 PM, Ptacek, MichalX wrote:
> Thanks Emilien, 
> It's becoming more clear to me what has to be done.
> Did I get it correctly that using bash code inside puppet module is "nish 
> nish" and will NOT be accepted by the community ?

It's really bad practice in my opinion.

> (even if we move the logic into own module like openstack/ovs-dpdk)
> Additionally building from the src or using own packages from such builds is 
> also not possible in such modules even despite its performance or other 
> functional benefits ?

We like things done upstream, if networking-ovs-dpdk is part of
OpenStack, let's package it (and its dependencies) upstream too.

Do we have any blocker on that?


> best regards,
> Michal
> 
> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com] 
> Sent: Wednesday, March 02, 2016 6:51 PM
> To: Ptacek, MichalX ; 'OpenStack Development 
> Mailing List (not for usage questions)' ; 
> m...@mattfischer.com
> Cc: Mooney, Sean K ; Czesnowicz, Przemyslaw 
> 
> Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron
> 
> 
> 
> On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
>> Hi all,
>>
>>  
>>
>> we have puppet module for ovs deployments with dpdk support
>>
>> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet
> 
> IMHO that's a bad idea to use networking-ovs-dpdk for the puppet module.
> You should initiate the work to create openstack/puppet-dpdk (not sure about 
> the name) or try to patch openstack/puppet-vswitch.
> 
> How puppet-vswitch would be different from puppet-dpdk?
> 
> I've looked at the code and you run bash scripts from Puppet.
> Really ? :-)
> 
>> and we would like to adapt it in a way that it can be used within 
>> upstream neutron module
>>
>> e.g. to introduce class like this
>>
>> neutron::agents::ml2::ovsdpdk
>>
>>  
>>
>> Current code works as follows:
>>
>> -  Openstack with installed vanilla ovs is a kind of precondition
>>
>> -  Ovsdpdk puppet module installation is triggered afterwards
>> and it replace vanilla ovs by ovsdpdk
>>
>> (in order to have some flexibility and mostly due to performance 
>> reasons we are building ovs from src code)
>>
>> https://github.com/openstack/networking-ovs-dpdk/blob/master/puppet/ov
>> sdpdk/files/build_ovs_dpdk.erb
>>
>> -  As a part of deployments we have several shell scripts, which
>> are taking care of build and configuration stuff
>>
>>  
>>
>> I assume that some parts of our code can be easily rewritten to start 
>> using standard providers other parts might be rewritten to ruby …
>>
>> We would like to introduce neutron::agents::ml2::ovsdpdk as adequate 
>> solution with existing neutron::agents::ml2::ovs and not just patching it.
>>
> 
> What Puppet OpenStack group will let neutron::agents::ml2::ovsdpdk doing:
> 
> * configure what you like in /etc/neutron/*
> * install what you want that is part of OpenStack/Neutron* (upstream).
> 
> What Puppet OpenStack group WILL NOT let neutron::agents::ml2::ovsdpdk
> doing:
> 
> * install third party software (packages from some custom repositories, not 
> upstream).
> * build RPM/DEB from bash scripts
> * build anything from bash scripts
> * configure anything outside /etc/neutron/*
> 
>>
>> Actually I have following questions:
>>
>> Q1) Will it be acceptable if we move build logic before deployment and 
>> resulting rpm/deb will be installed instead of ovs package during 
>> deployment ?
> 
> You should engage efforts to have upstream packaging in Ubuntu/Debian and Red 
> Hat systems (RDO).
> 
>> Q2) Do we need to rewrite bash logic into ruby code ?
> 
> Drop bash scripts, and use upstream packages, like we do everywhere else.
> 
>> Q3) Do we need to raise separate blueprint, which has to be approved  
>> before starting adaptations ?
> 
> Feel free to submit a blueprint so our group can be involved in this 
> discussion, or maybe this thread is enough.
> --
> Emilien Macchi
> 
> --
> Intel Research and Development Ireland Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> 
> 
> This e-mail and any attachments may contain confidential material for the sole
> use of the intended recipient(s). Any review or distribution by others is
> strictly prohibited. If you are not the intended recipient, please contact the
> sender and delete all copies.
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Monty Taylor

On 03/02/2016 01:53 PM, Andrew Laski wrote:

On Wed, Mar 2, 2016, at 02:36 PM, Gregory Haynes wrote:

Clearly, some operators and users disagree with the opinion that 'by
default security groups should closed off' given that we have several
large public providers who have changed these defaults (despite there
being no documented way to do so), and we have users in this thread
expressing that opinion. Given that, I am not sure there is any value
behind us expressing we have different opinions on what defaults
should be (let alone enforcing them by not allowing them to be
configured) unless there are some technical reasons beyond 'this is
not what my policy is, what my customers wants', etc. I also
understand the goal of trying to make clouds more similar for better
interoperability (and I think that is extremely important), but the
reality is we have created a situation where clouds are already not
identical here in an even worse, undocumented way because we are
enforcing a certain set of opinions here.

To me this is an extremely clear indication that at a minimum the
defaults should be configurable since discussion around them seems to
devolve into different opinions on security policies, and there is no
way we should be in the business of dictating that.


+1. While I happen to agree with closed by default there are many others
who feel differently, and there are cloud deployment scenarios where it
may not be the reasonable default.
It seems to me that visibility should be the primary focus. Make it easy
for users to know what they're getting, and make it clear that it's
something they should check rather than assume it's set a certain way.


++ And make it easy for them to choose the other thing.

(try writing an idempotent ansible playbook that tries to make your 
security group look exactly like you want it not knowing in advance what 
security group rules this provider happens to want to give you that you 
didn't think to explicitly look for.)
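The reconciliation Monty describes reduces to a set difference between desired
and actual rules; the hard part is that "actual" contains provider-injected
defaults you never declared. A minimal sketch, with rules modeled as tuples
purely for illustration (a real playbook would map these to API objects):

```python
def rules_to_change(desired, actual):
    """Return (to_add, to_remove) so that applying both makes the live
    security group match the desired set exactly."""
    desired, actual = set(desired), set(actual)
    return sorted(desired - actual), sorted(actual - desired)


# (direction, protocol, port_min, port_max, remote_cidr) -- illustrative
desired = {('ingress', 'tcp', 22, 22, '203.0.113.0/24')}
actual = {('ingress', 'tcp', 22, 22, '203.0.113.0/24'),
          ('egress', 'udp', 0, 65535, '0.0.0.0/0')}  # provider default

to_add, to_remove = rules_to_change(desired, actual)
print(to_add)     # → []
print(to_remove)  # → [('egress', 'udp', 0, 65535, '0.0.0.0/0')]
```

Without knowing the provider's defaults in advance, convergence requires
enumerating the live rules first and deleting the surplus, which is exactly
the extra lookup the parenthetical above complains about.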



Cheers,
Greg





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] 3.rd Party CI requirements for compliance

2016-03-02 Thread Sean McGinnis
On Wed, Mar 02, 2016 at 06:14:30PM +, Indra Harijono wrote:
> Hi,
> 
> I am new in this forum and openstack dev. so please my sincere apology if I 
> submitted stupid (redundant) questions.
> I am writing this to clarify cinder compliance requirements (and 3.rd Party 
> CI Testing).
> We are developing storage appliance and would like to run cinder on it.
> We don't directly modify API but change the underlying (volume provisioning) 
> mechanisms.
> 
> -  Do we need to set up 3.rd party CI for the compliance?
> 
> -  If yes, do other needs full product level documentation (such as 
> specs etc.)?
> 
> -  How long would it be necessary to provide such 3.rd party CI 
> system for others, or does the CI setup mean
> 
> to be permanently used to check compliance each time openstack code is 
> modified?
> Any comments, suggestions and feedback are highly appreciated.
> 
> Thanks,
> Indra

If I'm understanding this correctly, you are not looking to provide a
driver or modify the cinder code in any way. You are just looking at
delivering a Cinder "appliance" that can be used in an OpenStack cloud.
Is that correct?

If that is the case, we don't really have any kind of certification
process for that type of thing. You could certainly use our third party
CI testing process for Cinder volume drivers to show that everything is
in compliance with expected behavior, but that would be up to you. There
is documentation on [1]. You can also ask questions on the
#openstack-cinder IRC channel.

If you go that route, I would recommend running it permanently. The big
value with setting up CI is being able to quickly identify breaking
changes. Passing the tests just a few times wouldn't necessarily mean
everything is working correctly down the road.

Thanks,
Sean (smcginnis)

[1] https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Monty Taylor

On 03/02/2016 01:04 PM, Xav Paice wrote:



On 3 March 2016 at 07:52, Sean Dague wrote:

On 03/02/2016 01:46 PM, Armando M. wrote:

> IMO, I think that's a loophole that should be closed. We should all
> strive to make OpenStack clouds behave alike.

>

It might be a loophole. But it's also data. People are doing that thing
for a reason based on customer feedback. If the general norms are that
this is allowed, and OpenStack clouds do the opposite, they will be
considered broken compared to the norms.


Not the customers I've spoken with.

I'm OK with doing something different to RAX, in fact some people might
consider RAX to be broken when they see it different from us.  I'm not
keen on any cloud being considered 'broken' because of an implementation
detail, and every cloud has some level of differences.  Just take a look
at os_client_config to see some of the differences (and the ways to make
that easier).


++ to looking at os_client_config :)

I do a bad job of trying to communicate this every time, but I'm going 
to try one more time ...


What I really want out of OpenStack clouds is this:

- A 'public' or 'northbound' network that I can choose to boot my 
computer on. If I choose to boot my computer on this network, I will get 
directly attached to that network. My computer will be able to know its 
IP address.


- The option to create one or more 'private' networks that I can attach 
my computers to.


- The option to create one or more floating ips that allocate an IP from 
the 'public' or 'northbound' network but associate them with the compute 
of my choice via NAT.


- A default security group that does 'sensible' things like blocking a 
bunch of traffic.


- Either a second default security group that is wide-open or the option 
to opt-out of having a security group associated with my computer at all.


I want all of the OpenStack clouds to do all of those things.

Why?

Because those things encompass the set of use cases that people actually 
use OpenStack for.


If you don't have the ability to direct attach IPs to your computer, 
there are a set of things that either don't work, or that work very 
poorly. If you are running one of those things, you know it, and you 
know what you need. If you are running one of those things, NAT is not a 
feature, it's a bug.


If you don't have floating IPs/NAT, then you do not fit easily into the 
"most of my things on private except I want to poke a little bit of 
public on to one of them" model which is prevalent in the "cloud native" 
thinking.


Amazingly enough, though, OpenStack can actually TODAY support all of 
the above, we just have people who believe that some end users should 
not be allowed to want to run the types of applications in the cloud 
that they want (or need) to run.


There are some problems - most notably UI.

If you have two networks defined in a cloud, you have to specify which 
network you want to boot on - and you have to do this via network id. 
That's very un-user-friendly. There is no way for a user of a cloud to 
say "hey, I really never want to boot things directly on the public 
network". A user CAN delete a private network that was created for them 
if they don't want it - so that direction is friendly. However, a user 
cannot choose to boot a computer directly on a public network if the 
deployer has not allowed them to.
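The name-to-ID lookup users are forced to do by hand today can be sketched as follows. This is purely illustrative: the helper function and the network records are made up for this sketch, not part of any OpenStack client library, and the UUIDs are fake.

```python
# Sketch: resolving a human-friendly network name to the UUID that
# "nova boot --nic net-id=<uuid>" requires. The user-facing handle is
# the name, but the API wants the ID, which is the UI gap described
# above. Data below is invented for illustration.

def resolve_net_id(networks, name):
    """Return the ID of the single network with the given name."""
    matches = [n["id"] for n in networks if n["name"] == name]
    if len(matches) != 1:
        raise ValueError("expected exactly one network named %r, got %d"
                         % (name, len(matches)))
    return matches[0]

nets = [
    {"id": "11111111-2222-3333-4444-555555555555", "name": "public",
     "shared": True},
    {"id": "66666666-7777-8888-9999-000000000000", "name": "private",
     "shared": False},
]

print(resolve_net_id(nets, "private"))
```

Nothing here expresses the intent "never boot me directly on the public network"; that preference has to live in the user's head or tooling, which is the point being made.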


So we have work to do in our command line and REST API in terms of 
dealing with multi-networks and expressions of user intent. However, we 
can't even begin that work as long as we persist with the idea that 
there is only one "right" networking model.


I'd really like to see devstack nodes start to boot with a shared public 
AND a private AND floating ips. That way we can both test all of the 
possible combinations in a single cloud, and as developers we can 
experience the pain that exists currently for customers in multi-network 
clouds.


Also, with the security groups - why not have a "default" and an "open" 
security group by default?


And finally - how about some sort of user-settable preferences somewhere? 
And how about the ability to use those to have users opt in to various 
networking schemes on a project-by-project basis? (Because I'm sure we 
all already agree that every customer should get a domain and domain 
admin in that domain and be able to create users and projects to their 
heart's desire since we all love our users)


If we have a cloud with a shared public and optional private networks, 
then it's conceivable that at project creation time a user could say 
"hey, make me a project and let that project see the shared public 
network" - or "hey, make me a project and do not let that project see 
the shared public network" - that way default boot commands in each 
project would attach to the right thing - and only in the honestly quite 
strange situation where you actually want both and want direct routing 

Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Andrew Laski



On Wed, Mar 2, 2016, at 02:36 PM, Gregory Haynes wrote:
> Clearly, some operators and users disagree with the opinion that 'by
> default security groups should closed off' given that we have several
> large public providers who have changed these defaults (despite there
> being no documented way to do so), and we have users in this thread
> expressing that opinion. Given that, I am not sure there is any value
> behind us expressing we have different opinions on what defaults
> should be (let alone enforcing them by not allowing them to be
> configured) unless there are some technical reasons beyond 'this is
> not what my policy is, what my customers wants', etc. I also
> understand the goal of trying to make clouds more similar for better
> interoperability (and I think that is extremely important), but the
> reality is we have created a situation where clouds are already not
> identical here in an even worse, undocumented way because we are
> enforcing a certain set of opinions here.
>
> To me this is an extremely clear indication that at a minimum the
> defaults should be configurable since discussion around them seems to
> devolve into different opinions on security policies, and there is no
> way we should be in the business of dictating that.
>

+1. While I happen to agree with closed by default there are many others
who feel differently, and there are cloud deployment scenarios where it
may not be the reasonable default.

It seems to me that visibility should be the primary focus. Make it easy
for users to know what they're getting, and make it clear that it's
something they should check rather than assume it's set a certain way.


> Cheers, Greg


Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-02 Thread Ptacek, MichalX
Thanks Emilien, 
It's becoming more clear to me what has to be done.
Did I get it correctly that using bash code inside puppet module is "nish nish" 
and will NOT be accepted by the community ?
(even if we move the logic into own module like openstack/ovs-dpdk)
Additionally building from the src or using own packages from such builds is 
also not possible in such modules even despite its performance or other 
functional benefits ?

best regards,
Michal

-Original Message-
From: Emilien Macchi [mailto:emil...@redhat.com] 
Sent: Wednesday, March 02, 2016 6:51 PM
To: Ptacek, MichalX ; 'OpenStack Development Mailing 
List (not for usage questions)' ; 
m...@mattfischer.com
Cc: Mooney, Sean K ; Czesnowicz, Przemyslaw 

Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron



On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
> Hi all,
> 
>  
> 
> we have puppet module for ovs deployments with dpdk support
> 
> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet

IMHO that's a bad idea to use networking-ovs-dpdk for the puppet module.
You should initiate the work to create openstack/puppet-dpdk (not sure about 
the name) or try to patch openstack/puppet-vswitch.

How puppet-vswitch would be different from puppet-dpdk?

I've looked at the code and you run bash scripts from Puppet.
Really ? :-)

> and we would like to adapt it in a way that it can be used within 
> upstream neutron module
> 
> e.g. to introduce class like this
> 
> neutron::agents::ml2::ovsdpdk
> 
>  
> 
> Current code works as follows:
> 
> -  Openstack with installed vanilla ovs is a kind of precondition
> 
> -  Ovsdpdk puppet module installation is triggered afterwards
> and it replace vanilla ovs by ovsdpdk
> 
> (in order to have some flexibility and mostly due to performance 
> reasons we are building ovs from src code)
> 
> https://github.com/openstack/networking-ovs-dpdk/blob/master/puppet/ov
> sdpdk/files/build_ovs_dpdk.erb
> 
> -  As a part of deployments we have several shell scripts, which
> are taking care of build and configuration stuff
> 
>  
> 
> I assume that some parts of our code can be easily rewritten to start 
> using standard providers other parts might be rewritten to ruby …
> 
> We would like to introduce neutron::agents::ml2::ovsdpdk as adequate 
> solution with existing neutron::agents::ml2::ovs and not just patching it.
> 

What the Puppet OpenStack group will let neutron::agents::ml2::ovsdpdk do:

* configure what you like in /etc/neutron/*
* install what you want that is part of OpenStack/Neutron* (upstream).

What the Puppet OpenStack group WILL NOT let neutron::agents::ml2::ovsdpdk do:

* install third party software (packages from some custom repositories, not 
upstream).
* build RPM/DEB from bash scripts
* build anything from bash scripts
* configure anything outside /etc/neutron/*

> 
> Actually I have following questions:
> 
> Q1) Will it be acceptable if we move build logic before deployment and 
> resulting rpm/deb will be installed instead of ovs package during 
> deployment ?

You should engage efforts to have upstream packaging in Ubuntu/Debian and Red 
Hat systems (RDO).

> Q2) Do we need to rewrite bash logic into ruby code ?

Drop bash scripts, and use upstream packages, like we do everywhere else.

> Q3) Do we need to raise separate blueprint, which has to be approved  
> before starting adaptations ?

Feel free to submit a blueprint so our group can be involved in this 
discussion, or maybe this thread is enough.
--
Emilien Macchi

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] 3.rd Party CI requirements for compliance

2016-03-02 Thread Mike Perez
On 18:14 Mar 02, Indra Harijono wrote:
> Hi,
> 
> I am new in this forum and openstack dev. so please my sincere apology if I 
> submitted stupid (redundant) questions.
> I am writing this to clarify cinder compliance requirements (and 3.rd Party 
> CI Testing).
> We are developing storage appliance and would like to run cinder on it.
> We don't directly modify API but change the underlying (volume provisioning) 
> mechanisms.
> 
> -  Do we need to set up 3.rd party CI for the compliance?
> 
> -  If yes, do others need full product-level documentation (such as 
> specs etc.)?
> 
> -  How long would it be necessary to provide such 3.rd party CI 
> system for others, or does the CI setup mean
> 
> to be permanently used to check compliance each time openstack code is 
> modified?
> Any comments, suggestions and feedback are highly appreciated.

The requirements and documentation for starting one are provided here:

https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers

Also you can join the third party ci help meeting after going through that
document:

https://wiki.openstack.org/wiki/Meetings/ThirdParty

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] Documenting configuration options lifespan

2016-03-02 Thread Doug Hellmann
Excerpts from Ronald Bradford's message of 2016-03-02 13:40:42 -0500:
> After evaluation of oslo-config-generator and one of it's common uses by
> operators in configuration option evaluation with upgrades, I am proposing
> adding some meta data for all configuration options to provide better
> applicable documentation as projects continue to evolve.
> 
> I can see an easier and system generated means to provide information such
> as "What's New", "What's Changed", "What's Deprecated" for project
> configurations in each new release, in system documentation and release
> notes.  This will greatly simplify the review of newer available releases
> and improve the experience of upgraded deployments.
> 
> For each configuration option I'm proposing we can identify "released",
> "changed", "deprecated", "removal" release info, e.g. K,L,M,N etc.
> 
> Initial seeding of existing configuration options is that no information is
> needed. i.e. no upfront investment to start.
> Possible work items moving forward would include:
> 
> * When an option is changed, or marked as deprecated, during normal reviews
> it should then be identified accordingly with these new attributes.
> * The "changed" attribute would only be applicable moving forward with a
> developer change in default value, ranges etc (changing help text or
> re-ordering is not a change).

Would "changed" hold a list of changes, to provide a history? Or would
it just describe the difference since the last release?

> * Any new options get the "released" attribute.

And that would be set to the version in which the new attribute was
added? Maybe "added_in" is a better name?

> * Initial work to fill in the "deprecated" and "removal" information (i.e.
> for a small number of existing options per project) would add strong value
> to generated documentation.

Amen.

Doug

> * Additional work to add the initial "released" information can be left to
> an intro contributor task.  Information of an options existence in a
> project can be automated via analysis of branches to provide details of the
> seed info needed.
> 
> As for implementation, the use of a named tuple attribute for
> oslo_config.cfg.Opt [1] is one approach.  Determining how to take advantage
> of debtcollector and versionutils should be considered.
> 
> [1]
> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/cfg.py#n636
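A rough sketch of how the proposed lifecycle attributes could be modeled and queried, using a plain namedtuple stand-in rather than oslo_config.cfg.Opt, since the actual attachment point is still under discussion in this thread. The option names and release letters below are invented for illustration.

```python
# Sketch of per-option lifecycle metadata ("released", "changed",
# "deprecated", "removal" are the attribute names proposed in this
# thread). "changed" is a list, per Doug's question about keeping a
# history rather than only the last delta.
from collections import namedtuple

OptLifecycle = namedtuple(
    "OptLifecycle",
    ["name", "released", "changed", "deprecated", "removal"])

opts = [
    OptLifecycle("api_workers", released="K", changed=["M"],
                 deprecated=None, removal=None),
    OptLifecycle("use_qpid", released="J", changed=[],
                 deprecated="L", removal="M"),
]

def whats_deprecated(opts, release):
    """Names of options first marked deprecated in the given release."""
    return [o.name for o in opts if o.deprecated == release]

def whats_new(opts, release):
    """Names of options added in the given release."""
    return [o.name for o in opts if o.released == release]

print(whats_deprecated(opts, "L"))  # ['use_qpid']
```

With data like this, the "What's New" / "What's Deprecated" sections of release notes become simple queries instead of manual diffing of generated sample configs.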

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Gregory Haynes
Clearly, some operators and users disagree with the opinion that 'by
default security groups should closed off' given that we have several
large public providers who have changed these defaults (despite there
being no documented way to do so), and we have users in this thread
expressing that opinion. Given that, I am not sure there is any value
behind us expressing we have different opinions on what defaults should
be (let alone enforcing them by not allowing them to be configured)
unless there are some technical reasons beyond 'this is not what my
policy is, what my customers wants', etc. I also understand the goal of
trying to make clouds more similar for better interoperability (and I
think that is extremely important), but the reality is we have created
a situation where clouds are already not identical here in an even
worse, undocumented way because we are enforcing a certain set of
opinions here.

To me this is an extremely clear indication that at a minimum the
defaults should be configurable since discussion around them seems to
devolve into different opinions on security policies, and there is no
way we should be in the business of dictating that.

Cheers, Greg
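One way the configurability argued for here could look, sketched as plain Python rather than an actual Neutron option. The policy names, option name, and rule shapes are illustrative only; nothing like this exists in Neutron's configuration today.

```python
# Sketch: an operator-set policy name selects which rules seed a
# tenant's default security group. "closed" matches today's behavior;
# "open" matches what providers like RAX effectively ship.

DEFAULT_POLICIES = {
    "closed": [],  # no ingress rules at all (current upstream default)
    "open": [
        {"direction": "ingress", "ethertype": "IPv4",
         "remote_ip_prefix": "0.0.0.0/0", "protocol": None},
        {"direction": "ingress", "ethertype": "IPv6",
         "remote_ip_prefix": "::/0", "protocol": None},
    ],
}

def seed_rules(policy="closed"):
    """Rules to create in a newly provisioned default security group."""
    try:
        return list(DEFAULT_POLICIES[policy])
    except KeyError:
        raise ValueError(
            "unknown default_security_group_policy: %s" % policy)

print(len(seed_rules("open")))  # 2
```

The point of the sketch is that the mechanism is tiny; the hard part, as this thread shows, is agreeing whether the choice belongs to operators at all and how users discover which policy a given cloud applies.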
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Remember to follow RFE process

2016-03-02 Thread Ruby Loo
Hi,

Ironic'ers, please remember to follow the RFE process; especially the cores.

I noticed that a patch [1] got merged yesterday. The patch was associated
with an RFE [2] that hadn't been approved yet :-( What caught my eye was
that the commit message didn't describe the actual API change so I took a
quick look at the (RFE) bug and it wasn't documented there either.

As a reminder, the RFE process is documented [3].

Spec cores need to try to be more timely wrt specs (I admit, I am guilty).
And folks, especially cores, ought to take more care when reviewing.
Although I do feel like there are too many things that a reviewer needs to
keep in mind.

Should we revert the patch [1] for now? (Disclaimer. I haven't looked at
the patch itself. But I don't think I should have to, to know what the API
change is.)

--ruby


[1] https://review.openstack.org/#/c/264005/
[2] https://bugs.launchpad.net/ironic/+bug/1530626
[3]
http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html#adding-new-features
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Kevin Benton
No, there haven't been vulnerabilities where the rules you expressed in the
API were not rendered as requested (unless there was a denial of service in
which case the whole dataplane would fail to wire). The issues were people
being able to escape their own anti-spoofing filtering so they could do IP
spoofing and ARP poisoning. VM-local firewalls would not have helped in
this case.
On Mar 2, 2016 10:50 AM, "Clark Boylan" wrote:

> On Wed, Mar 2, 2016, at 09:38 AM, Sean M. Collins wrote:
> > Kevin Benton wrote:
> > > * Neutron cannot be trusted to do what it says it's doing with the
> security
> > > groups API so users want to orchestrate firewalls directly on their
> > > instances.
> >
> > This one really rubs me the wrong way. Can we please get a better
> > description of the bug - instead of someone just saying that Neutron
> > doesn't work, therefore we don't want any filtering or security for our
> > instances using an API?
>
> Sure. There are two ways this manifests. The first is that there have
> been bugs in security groups where traffic is passed despite being told
> not to pass that traffic. This has been treated as a bug in the past and
> corrected which is great so this particular instance of the issue is
> less worrysome. The second is that I will explicitly tell neutron to
> pass traffic but for whatever reason that traffic ends up being blocked
> anyways. One concrete example of this is the infra team has had to stop
> using GRE because at least two of our clouds do not pass GRE traffic
> despite having explicit "pass all ipv4 and all ipv6 between all possible
> addresses rules".
>
> Security groups need to do what I have told them to do and when they
> don't it is almost impossible as a cloud user to debug them.
>
> Clark
>
>


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Sean Dague
On 03/02/2016 01:46 PM, Armando M. wrote:

> IMO, I think that's a loophole that should be closed. We should all
> strive to make OpenStack clouds behave alike.

It might be a loophole. But it's also data. People are doing that thing
for a reason based on customer feedback. If the general norms are that
this is allowed, and OpenStack clouds do the opposite, they will be
considered broken compared to the norms.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Xav Paice
From one operator's standpoint, some comments below.

I can't imagine having to tell my customer base that we've just changed the
'default' security group from not allowing anything inbound, to allowing
everything.  That would mean they would all have to strip the default group
from all their instances (and modify their automated deployment tooling).

In my mind, the default security group is there so that as people are
developing their security policy they can at least start with a default
that offers a small amount of protection.  Disabling that protection means
I'd have to be dealing with a vast number of customers with instances that
have been compromised because they didn't add to the security groups.

On 3 March 2016 at 06:38, Sean M. Collins wrote:

> Kevin Benton wrote:
> > * Instances without ingress are useless so a bunch of API calls are
> > required to make them useful.
>
> This is not true in all cases. There are plenty of workloads that only
> require outbound connectivity. Workloads where data is fetched,
> computed, then transmitted elsewhere for storage.
>
>
+1


> > * It violates the end-to-end principle of the Internet to have a
> middle-box
> > meddling with traffic (the compute node in this case).
>
> Again, this is someone's *opinion* - but it is not an opinion
> universally shared.
>

+1 entire companies are built on the premise that people want firewalls


>
> > Second, would it be acceptable to make this operator configurable? This
> > would mean users could receive different default filtering as they moved
> > between clouds.
>

If the default is to open it up, I'd want to change that regardless of
other clouds.


>
> It is my belief that an application that is going to be run in a cloud
> environment, it is not enough to just upload your disk image and expect
> that to be the only thing that is needed to run an app in the cloud. You
> will also need to bring your security policy into the cloud as well -
> Who can access? How can they access? Which parts of the app can talk to
> sensitive parts of the app like the database servers?
>
>

This is entirely reasonable for anything remotely 'production', and I've
not heard a single customer be even slightly surprised at this need.  I
have, however, tripped up a number of times forgetting to allow inbound ssh
and thinking I've misconfigured networking.

As people spin up dev instances for making apps, they would often think of
security later down the path.  If they have some degree of protection from
a default setting, that allows them to work with a little peace of mind.



> I think that the default security group should be left as is - and users
> should be trained that they should bring/create security groups with the
> appropriate rules for their need.
>

+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Removing old puppetlabs/* forge OS modules

2016-03-02 Thread Emilien Macchi


On 03/02/2016 01:46 PM, Hunter Haugen wrote:
> Several years ago, the at-the-time Stackforge puppet modules were
> published under the forge.puppetlabs.com/puppetlabs namespace. Then those
> modules were migrated to forge.puppetlabs.com/stackforge for a while.
> When they became an official OpenStack project they were migrated to
> forge.puppetlabs.com/openstack where they are currently being published.
> 
> After each migration, the older releases were left available for anyone
> who needed to continue using them before migrating. Now, the older
> versions generally just cause confusion among new users, so I am going
> to remove the following modules from the forge on March 8th:
> 
> * https://forge.puppetlabs.com/puppetlabs/keystone
> * https://forge.puppetlabs.com/puppetlabs/glance
> * https://forge.puppetlabs.com/puppetlabs/cinder
> * https://forge.puppetlabs.com/puppetlabs/horizon
> * https://forge.puppetlabs.com/puppetlabs/swift
> * https://forge.puppetlabs.com/puppetlabs/ceilometer
> * https://forge.puppetlabs.com/puppetlabs/heat
> * https://forge.puppetlabs.com/puppetlabs/tempest
> * https://forge.puppetlabs.com/puppetlabs/nova
> * https://forge.puppetlabs.com/puppetlabs/vswitch
> * https://forge.puppetlabs.com/puppetlabs/neutron
> * https://forge.puppetlabs.com/puppetlabs/quantum
> 
> And as a bonus for related-but-no-longer-relevant modules:
> 
> * https://forge.puppetlabs.com/puppetlabs/grizzly
> * https://forge.puppetlabs.com/puppetlabs/havana
> 

++

Thanks for taking care of that, Hunter!
-- 
Emilien Macchi





Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Armando M.
On 1 March 2016 at 14:52, Kevin Benton wrote:

> Hi,
>
> I know this has come up in the past, but some folks in the infra channel
> brought up the topic of changing the default security groups to allow all
> traffic.
>
> They had a few reasons for this that I will try to summarize here:
> * Ports 'just work' out of the box so there is no troubleshooting to
> eventually find out that ingress is blocked by default.
>

What troubleshooting? If users were educated enough they would know that's
the behavior to expect.


> * Instances without ingress are useless so a bunch of API calls are
> required to make them useful.
>

There are not. Besides, you're just solving a problem to create another:
a bunch of API calls to close ingress traffic!


> * Some cloud providers allow all traffic by default (e.g. Digital Ocean,
> RAX).
>

IMO, I think that's a loophole that should be closed. We should all strive
to make OpenStack clouds behave alike.


> * It violates the end-to-end principle of the Internet to have a
> middle-box meddling with traffic (the compute node in this case).
> * Neutron cannot be trusted to do what it says it's doing with the
> security groups API so users want to orchestrate firewalls directly on
> their instances.
>

On what basis are you justifying these last two claims?


>
>
> So this ultimately brings up two big questions. First, can we agree on a
> set of defaults that is different than the one we have now; and, if so, how
> could we possibly manage upgrades where this will completely change the
> default filtering for users using the API?
>
> Second, would it be acceptable to make this operator configurable? This
> would mean users could receive different default filtering as they moved
> between clouds.
>

A user can customize his/her own security groups rules ahead of booting
instances, for all instances. Why wouldn't that suffice to address your
needs?


>
>
> Cheers,
> Kevin Benton
>


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread ZZelle
Hi,

I understand that it's more user-friendly to enable all traffic to VMs by
default, but it seems clearly insecure to do so (including ssh from the
internet!!!), as it increases the VM's exposure surface on the internet and
reduces its security.
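For comparison, the explicit opt-in a user makes under the closed default is small. Below is a sketch of the request body for a POST to Neutron's /v2.0/security-group-rules that allows only inbound SSH; the field names follow the Neutron security group API, while the helper function and the group ID are placeholders invented for this example.

```python
# Sketch: the rule a user adds to explicitly allow inbound SSH (IPv4)
# to instances in a given security group, instead of the group
# defaulting to allow-all. Field names per the Neutron security group
# API; the group ID is a placeholder.

def ssh_ingress_rule(security_group_id):
    """Build the request body for one TCP/22 ingress rule."""
    return {
        "security_group_rule": {
            "security_group_id": security_group_id,
            "direction": "ingress",
            "ethertype": "IPv4",
            "protocol": "tcp",
            "port_range_min": 22,
            "port_range_max": 22,
            "remote_ip_prefix": "0.0.0.0/0",
        }
    }

rule = ssh_ingress_rule("PLACEHOLDER-GROUP-ID")
print(rule["security_group_rule"]["direction"])  # ingress
```

One rule per service a workload actually exposes keeps the surface deliberate; the allow-all default being proposed removes exactly this step, and with it the deliberateness.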



Cédric/ZZelle




On Tue, Mar 1, 2016 at 11:52 PM, Kevin Benton wrote:

> Hi,
>
> I know this has come up in the past, but some folks in the infra channel
> brought up the topic of changing the default security groups to allow all
> traffic.
>
> They had a few reasons for this that I will try to summarize here:
> * Ports 'just work' out of the box so there is no troubleshooting to
> eventually find out that ingress is blocked by default.
> * Instances without ingress are useless so a bunch of API calls are
> required to make them useful.
> * Some cloud providers allow all traffic by default (e.g. Digital Ocean,
> RAX).
> * It violates the end-to-end principle of the Internet to have a
> middle-box meddling with traffic (the compute node in this case).
> * Neutron cannot be trusted to do what it says it's doing with the
> security groups API so users want to orchestrate firewalls directly on
> their instances.
>
>
> So this ultimately brings up two big questions. First, can we agree on a
> set of defaults that is different than the one we have now; and, if so, how
> could we possibly manage upgrades where this will completely change the
> default filtering for users using the API?
>
> Second, would it be acceptable to make this operator configurable? This
> would mean users could receive different default filtering as they moved
> between clouds.
>
>
> Cheers,
> Kevin Benton
>


[openstack-dev] [puppet] Removing old puppetlabs/* forge OS modules

2016-03-02 Thread Hunter Haugen
Several years ago, the at-the-time Stackforge puppet modules were published
under the forge.puppetlabs.com/puppetlabs namespace. Then those modules
were migrated to forge.puppetlabs.com/stackforge for a while. When they
became an official OpenStack project they were migrated to
forge.puppetlabs.com/openstack where they are currently being published.

After each migration, the older releases were left available for anyone who
needed to continue using them before migrating. Now, the older versions
generally just cause confusion among new users, so I am going to remove the
following modules from the forge on March 8th:

* https://forge.puppetlabs.com/puppetlabs/keystone
* https://forge.puppetlabs.com/puppetlabs/glance
* https://forge.puppetlabs.com/puppetlabs/cinder
* https://forge.puppetlabs.com/puppetlabs/horizon
* https://forge.puppetlabs.com/puppetlabs/swift
* https://forge.puppetlabs.com/puppetlabs/ceilometer
* https://forge.puppetlabs.com/puppetlabs/heat
* https://forge.puppetlabs.com/puppetlabs/tempest
* https://forge.puppetlabs.com/puppetlabs/nova
* https://forge.puppetlabs.com/puppetlabs/vswitch
* https://forge.puppetlabs.com/puppetlabs/neutron
* https://forge.puppetlabs.com/puppetlabs/quantum

And as a bonus for related-but-no-longer-relevant modules:

* https://forge.puppetlabs.com/puppetlabs/grizzly
* https://forge.puppetlabs.com/puppetlabs/havana


-Hunter
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Clark Boylan
On Wed, Mar 2, 2016, at 09:38 AM, Sean M. Collins wrote:
> Kevin Benton wrote:
> > * Neutron cannot be trusted to do what it says it's doing with the security
> > groups API so users want to orchestrate firewalls directly on their
> > instances.
> 
> This one really rubs me the wrong way. Can we please get a better
> description of the bug - instead of someone just saying that Neutron
> doesn't work, therefore we don't want any filtering or security for our
> instances using an API?

Sure. There are two ways this manifests. The first is that there have
been bugs in security groups where traffic was passed despite Neutron being
told not to pass that traffic. This has been treated as a bug in the past
and corrected, which is great, so this particular instance of the issue is
less worrisome. The second is that I will explicitly tell Neutron to
pass traffic, but for whatever reason that traffic ends up being blocked
anyway. One concrete example of this is that the infra team has had to stop
using GRE because at least two of our clouds do not pass GRE traffic
despite having explicit "pass all IPv4 and all IPv6 between all possible
addresses" rules.

Security groups need to do what I have told them to do, and when they
don't, it is almost impossible for a cloud user to debug them.

Clark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][all] Documenting configuration options lifespan

2016-03-02 Thread Ronald Bradford
After evaluating oslo-config-generator and one of its common uses by
operators (evaluating configuration options during upgrades), I am proposing
to add metadata to all configuration options in order to provide better,
more applicable documentation as projects continue to evolve.

I can see an easier, system-generated means to provide information such
as "What's New", "What's Changed", and "What's Deprecated" for project
configurations in each new release, in system documentation, and in release
notes.  This would greatly simplify the review of newly available releases
and improve the experience of upgraded deployments.

For each configuration option, I'm proposing that we identify "released",
"changed", "deprecated", and "removal" release info, e.g. K, L, M, N, etc.

The initial seeding for existing configuration options is that no
information is needed, i.e. there is no upfront investment to start.
Possible work items moving forward would include:

* When an option is changed or marked as deprecated during normal reviews,
it should be identified accordingly with these new attributes.
* The "changed" attribute would only apply going forward, to a developer
change in default value, ranges, etc. (changing help text or re-ordering is
not a change).
* Any new option gets the "released" attribute.
* Initial work to fill in the "deprecated" and "removal" information (i.e.
for a small number of existing options per project) would add strong value
to generated documentation.
* Additional work to add the initial "released" information can be left as
an introductory contributor task.  Information about an option's existence
in a project can be automated via analysis of branches to provide the
seed info needed.

As for implementation, the use of a named tuple attribute for
oslo_config.cfg.Opt [1] is one approach.  Determining how to take advantage
of debtcollector and versionutils should be considered.

[1]
http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/cfg.py#n636
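As a rough illustration of the named-tuple approach mentioned above, the lifecycle metadata could look something like the sketch below. This is an assumption about the eventual shape, not a finalized design; the field names mirror the "released"/"changed"/"deprecated"/"removal" attributes proposed in this thread and are hypothetical.

```python
from collections import namedtuple

# Hypothetical lifecycle metadata for a config option; illustrative only.
OptLifecycle = namedtuple(
    "OptLifecycle", ["released", "changed", "deprecated", "removal"])

# Most existing options start with no information (no upfront cost).
unseeded = OptLifecycle(released=None, changed=None,
                        deprecated=None, removal=None)

# An option introduced in Kilo, changed in Liberty, deprecated in
# Mitaka, and scheduled for removal in Newton.
example = OptLifecycle(released="K", changed="L",
                       deprecated="M", removal="N")

def lifecycle_notes(name, lc):
    """Generate "What's New" / "What's Deprecated" style notes."""
    notes = []
    if lc.released:
        notes.append("%s: new in %s" % (name, lc.released))
    if lc.deprecated:
        notes.append("%s: deprecated in %s, removal planned for %s"
                     % (name, lc.deprecated, lc.removal or "?"))
    return notes

print(lifecycle_notes("scheduler_tracks_instance_changes", example))
```

With metadata like this in place, the "What's New"/"What's Changed" sections of release notes could be generated mechanically rather than maintained by hand.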
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-03-02 Thread Matt Riedemann



On 3/1/2016 11:36 PM, John Griffith wrote:



On Tue, Mar 1, 2016 at 3:48 PM, Murray, Paul (HP Cloud) > wrote:


> -Original Message-
> From: D'Angelo, Scott
>
> Matt, changing Nova to store the connector info at volume attach time does
> help. Where the gap will remain is after Nova evacuation or live
> migration,

This will happen with shelve as well, I think. Volumes are not detached in
shelve, IIRC.

 > when that info will need to be updated in Cinder. We need to
change the
 > Cinder API to have some mechanism to allow this.
 > We'd also like Cinder to store the appropriate info to allow a
force-detach for
 > the cases where Nova cannot make the call to Cinder.
 > Ongoing work for this and related issues is tracked and discussed
here:
 > https://etherpad.openstack.org/p/cinder-nova-api-changes
 >
 > Scott D'Angelo (scottda)
 > 
 > From: Matt Riedemann [mrie...@linux.vnet.ibm.com
]
 > Sent: Monday, February 29, 2016 7:48 AM
 > To: openstack-dev@lists.openstack.org

 > Subject: Re: [openstack-dev] [nova][cinder] volumes stuck detaching
 > attaching and force detach
 >
 > On 2/22/2016 4:08 PM, Walter A. Boring IV wrote:
 > > On 02/22/2016 11:24 AM, John Garbutt wrote:
 > >> Hi,
 > >>
 > >> Just came up on IRC, when nova-compute gets killed half way
through a
 > >> volume attach (i.e. no graceful shutdown), things get stuck in
a bad
 > >> state, like volumes stuck in the attaching state.
 > >>
 > >> This looks like a new addition to this conversation:
 > >> http://lists.openstack.org/pipermail/openstack-dev/2015-
 > December/0826
 > >> 83.html
 > >>
 > >> And brings us back to this discussion:
 > >>
https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
 > >>
 > >> What if we move our attention towards automatically recovering
from
 > >> the above issue? I am wondering if we can look at making our
usually
 > >> recovery code deal with the above situation:
 > >>
 > https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24
 > >> c79f4bf615/nova/compute/manager.py#L934
 > >>
 > >>
 > >> Did we get the Cinder APIs in place that enable the
force-detach? I
 > >> think we did and it was this one?
 > >>
https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force
 > >> -detach-needs-cinderclient-api
 > >>
 > >>
 > >> I think diablo_rojo might be able to help dig for any bugs we have
 > >> related to this. I just wanted to get this idea out there before I
 > >> head out.
 > >>
 > >> Thanks,
 > >> John
 > >>
 > >>
 > __
 > ___
 > >> _
 > >>
 > >> OpenStack Development Mailing List (not for usage questions)
 > >> Unsubscribe:
 > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

 > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > >> .
 > >>
 > > The problem is a little more complicated.
 > >
 > > In order for cinder backends to be able to do a force detach
 > > correctly, the Cinder driver needs to have the correct 'connector'
 > > dictionary passed in to terminate_connection.  That connector
 > > dictionary is the collection of initiator side information
which is gleaned
 > here:
 > >
https://github.com/openstack/os-brick/blob/master/os_brick/initiator/c
 > > onnector.py#L99-L144
 > >
 > >
 > > The plan was to save that connector information in the Cinder
 > > volume_attachment table.  When a force detach is called, Cinder has
 > > the existing connector saved if Nova doesn't have it.  The
problem was
 > > live migration.  When you migrate to the destination n-cpu
host, the
 > > connector that Cinder had is now out of date.  There is no API in
 > > Cinder today to allow updating an existing attachment.
 > >
 > > So, the plan at the Mitaka summit was to add this new API, but it
 > > required microversions to land, which we still don't have in
Cinder's
 > > API today.
 > >
 > >
 > > Walt
 > >
 > >
 > __
 > 
 > >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Mike Spreitzer
"Sean M. Collins"  wrote on 03/02/2016 01:16:52 PM:

> Meaning your users are creating new security groups and naming them
> "default" - so you have the "default" default (heh) and then the one
> that they created named default?
> 
> Are security group names in Nova-Net unique? I seem to recall that being
> a difference between Nova-Net and Neutron, where security group names
> are not unique in Neutron - hence the problem above.

I have seen this happen in a variety of use cases.  It does not bother me 
that security group names are scoped to tenants; other kinds of names also 
lack various kinds of uniqueness.  I am really just raising a tiny, 
peripheral rant here.  When I go in as admin to look at a problem, `nova 
show` identifies a Compute Instance's security group in a useless way.  If 
only I could make `nova show` tell me the UUID instead of, or in addition 
to, the security group's name then I would be happy.

Thanks,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-03-02 Thread Boris Pavlovic
Hi,

It's still not clear for me, why we can't just add Rally jobs with
scenarios related to specific project.
It will work quite fast and it will cover CLI (instantly)  with good
integration/functional testing.


Best regards,
Boris Pavlovic

On Wed, Mar 2, 2016 at 4:52 AM, Sean Dague  wrote:

> On 03/02/2016 07:34 AM, Ivan Kolodyazhny wrote:
> > Sean,
> >
> > I've mentioned above that the current tempest job runs ~1429 tests and only
> > about 10 of them use cinderclient. It takes a lot of time without any
> > benefit for cinder, e.g. tests like tempest.api.network.* verify
> > Neutron, not python-cinderclient.
>
> We can say that about a lot of things in that stack. For better or
> worse, that's where our testing is. It's a full stack with the same set of
> tests run against all of these components. The tempest.api.network
> tests are quite quick. The biggest time hitters in the runs are the scenario
> tests, many of which are volume driven.
>
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_reboot[compute,id-7b6860c2-afa3-4846-9522-adeb38dfbe08,network]
>  193.523
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,id-557cd2c2-4eb8-4dce-98be-f86765ff311b,image,smoke,volume]
> 150.766
> 2016-02-12 19:07:46.277 |
>
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern[compute,id-557cd2c2-4eb8-4dce-98be-f86765ff311b,image,smoke,volume]
>   136.834
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_cross_tenant_traffic[compute,id-e79f879e-debb-440c-a7e4-efeda05b6848,network]
>107.045
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network]
> 101.252
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network]
>   99.041
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops[compute,id-f323b3ba-82f8-4db7-8ea6-6a895869ec49,network,smoke]
> 96.954
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance[compute,id-c1b6318c-b9da-490b-9c67-9339b627271f,image,network,volume]
> 95.120
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario[compute,id-bdbb5441-9204-419d-a225-b4fdbfb1a1a8,image,network,volume]
>86.165
> 2016-02-12 19:07:46.278 |
>
> tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern[compute,id-608e604b-1d63-4a82-8e3e-91bc665c90b4,image,network]
>   85.422
>
>
> If you would like to pitch in on an optimization strategy for all the
> components, that would be great. But this needs to be thought about in
> those terms. It would be great to stop testing 2 versions of cinder API
> in every run, for instance. That would be super helpful to everyone as
> those Boot from volume tests take over 2 minutes each.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Kevin Benton
Does it at least also include the UUID, or is there no way to tell from
'nova show'?
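One practical workaround is to resolve the groups through the instance's Neutron ports, since the port resource reports security groups by UUID. The sketch below illustrates the idea with inlined sample data shaped like Neutron API responses; in a real deployment you would fetch the ports and groups via the Neutron API or client rather than hard-coding them.

```python
# Sketch: disambiguating same-named security groups by UUID via port data.
# All IDs and names here are illustrative sample data, not real resources.

ports = [
    {"device_id": "server-1", "security_groups": ["uuid-aaa"]},
    {"device_id": "server-2", "security_groups": ["uuid-bbb"]},
]

security_groups = {
    # Two different groups, both named "default" (names are not unique
    # in Neutron, which is the problem described in this thread).
    "uuid-aaa": {"name": "default", "tenant_id": "tenant-1"},
    "uuid-bbb": {"name": "default", "tenant_id": "tenant-2"},
}

def groups_for_server(server_id):
    """Return (uuid, name, tenant) tuples for a server's security groups."""
    result = []
    for port in ports:
        if port["device_id"] != server_id:
            continue
        for sg_id in port["security_groups"]:
            sg = security_groups[sg_id]
            result.append((sg_id, sg["name"], sg["tenant_id"]))
    return result

print(groups_for_server("server-1"))  # [('uuid-aaa', 'default', 'tenant-1')]
```

An admin can then tell the two "default" groups apart by UUID and tenant, which is exactly the information `nova show` currently hides.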

On Wed, Mar 2, 2016 at 10:01 AM, Mike Spreitzer  wrote:

> "Sean M. Collins"  wrote on 03/02/2016 12:38:29 PM:
>
> > I think that the default security group should be left as is - and users
> > should be trained that they should bring/create security groups with the
> > appropriate rules for their need.
>
> Could we at least make it less difficult to figure out which security
> group is attached to a Nova instance?  Right now `nova show` says only that
> the security group is named "default" and guess what --- they are *all*
> named default!  An admin looking at this is lost.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Mike Scherbakov
It is not just about BVT. I'd suggest monitoring the situation overall,
including failures of the system tests [1]. If we see regressions there, or
some test cases start flapping (which is even worse), then we'd have to
revert back to CentOS 7.1.

[1] https://github.com/openstack/fuel-qa

On Wed, Mar 2, 2016 at 10:16 AM Dmitry Borodaenko 
wrote:

> I agree with Mike's concerns, and propose to make these limitations (4
> weeks before FF for OS upgrades, 2 weeks for upgrades of key
> dependencies -- RabbitMQ, MCollective, Puppet, MySQL, PostgreSQL,
> anything else?) official for 10.0/Newton.
>
> For 9.0/Mitaka, it is too late to impose them, so we just have to be
> very careful and conservative with this upgrade. First of all, we need
> to have a green BVT before and after switching to the CentOS 7.2 repo
> snapshot, so while I approved the spec, we can't move forward with this
> until BVT is green again, and right now it's red:
>
> https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/
>
> If we get it back to green but it becomes red after the upgrade, you
> must switch back to CentOS 7.1 *immediately*. If you are able to stick
> to this plan, there is still time to complete the transition today
> without requiring an FFE.
>
> --
> Dmitry Borodaenko
>
>
> On Wed, Mar 02, 2016 at 05:53:53PM +, Mike Scherbakov wrote:
> > Formally, we can merge it today. Historically, every OS update has caused
> > us instability for some time: from days to a couple of months.
> > Taking this into account and number of other exceptions requested,
> overall
> > stability of code, my opinion would be to postpone this to 10.0.
> >
> > Also, I'd suggest to change the process, and have freeze date for all OS
> > updates no later than a month before official FF date. This will give us
> > time to stabilize, and ensure that base on which all new code is being
> > developed is stable when approaching FF.
> >
> > I'd also propose to have freeze for major upgrades of 3rd party packages
> no
> > later than 2 weeks before FF, which Fuel depends heavily upon. For
> > instance, such will include RabbitMQ, MCollective, Puppet.
> >
> > On Wed, Mar 2, 2016 at 7:34 AM Igor Marnat  wrote:
> >
> > > Igor,
> > > couple of points from my side.
> > >
> > > CentOS 7.2 will be getting updates for several more months, and we have
> > > snapshots and all the mechanics in place to switch to the next version
> when
> > > needed.
> > >
> > > Speaking of getting this update into 9.0, we actually don't need an FFE;
> > > we can merge the remaining stuff today. It has enough reviews, so if you
> > > add your +1 today, we don't need an FFE.
> > >
> > > https://review.openstack.org/#/c/280338/
> > > https://review.fuel-infra.org/#/c/17400/
> > >
> > >
> > >
> > > Regards,
> > > Igor Marnat
> > >
> > > On Wed, Mar 2, 2016 at 6:23 PM, Dmitry Teselkin <
> dtesel...@mirantis.com>
> > > wrote:
> > >
> > >> Igor,
> > >>
> > >> Your statement about updates for 7.2 isn't correct - it will receive
> > >> updates,  because it's the latest release ATM. There is *no* pinning
> inside
> > >> ISO, and the only place where it was 8.0 were docker containers just
> > >> because we had to workaround some issues. But there are no docker
> > >> containers in 9.0, so that's not the case.
> > >> The proposed solution to switch to CentOS-7.2 in fact is based on
> > >> selecting the right snapshot with packages. There is no pinning in
> ISO (it
> > >> was in earlier versions of the spec but was removed).
> > >>
> > >> On Wed, Mar 2, 2016 at 6:11 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> > >> wrote:
> > >>
> > >>> Dmitry, Igor,
> > >>>
> > >>> > Very important thing is that CentOS 7.1 which master node is based
> now
> > >>> > don't get updates any longer.
> > >>>
> > >>> If you are using a "fixed" release you must be ready for the fact that
> > >>> you won't get any updates. So with CentOS 7.2 the problem is still the
> > >>> same.
> > >>>
> > >>> However, let's wait for the Fuel PTL's decision. I only shared my POV:
> > >>> this is not a critical feature, and taking into account the risks of
> > >>> regression, I'd prefer not to accept it in 9.0.
> > >>>
> > >>> Regards,
> > >>> Igor
> > >>>
> > >>> On Wed, Mar 2, 2016 at 4:42 PM, Igor Marnat 
> > >>> wrote:
> > >>> > Igor,
> > >>> > please note that this is pretty much not like update of master node
> > >>> which we
> > >>> > had in 8.0. This is minor _update_ of CentOS from 7.1 to 7.2 which
> team
> > >>> > tested for more than 2 months already.
> > >>> >
> > >>> > We don't expect it to require any additional efforts from core or
> qa
> > >>> team.
> > >>> >
> > >>> > Very important thing is that CentOS 7.1 which master node is based
> now
> > >>> don't
> > >>> > get updates any longer. Updates are only provided for CentOS 7.2.
> > >>> >
> > >>> > So we'll have to switch CentOS 7.1 to CentOS 7.2 anyways.
> > >>> >
> > >>> > We can do it now for more or less free, later in release 

Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Doug Hellmann
Excerpts from Markus Zoeller's message of 2016-03-02 18:45:45 +0100:

[a lot snipped]

> Appendix
> 
> 
> Example of the help text improvement
> ---
> As an example, compare the previous documentation of the scheduler 
> option "scheduler_tracks_instance_changes". 
> Before we started:
> 
> # Determines if the Scheduler tracks changes to instances to help 
> # with its filtering decisions. (boolean value)
> #scheduler_tracks_instance_changes = true
> 
> After the improvement:
> 
> # The scheduler may need information about the instances on a host 
> # in order to evaluate its filters and weighers. The most common 
> # need for this information is for the (anti-)affinity filters, 
> # which need to choose a host based on the instances already running
> # on a host.
> #
> # If the configured filters and weighers do not need this information,
> # disabling this option will improve performance. It may also be 
> # disabled when the tracking overhead proves too heavy, although 
> # this will cause classes requiring host usage data to query the 
> # database on each request instead.
> #
> # This option is only used by the FilterScheduler and its subclasses;
> # if you use a different scheduler, this option has no effect.
> #
> # * Services that use this:
> #
> # ``nova-scheduler``
> #
> # * Related options:
> #
> # None
> #  (boolean value)
> #scheduler_tracks_instance_changes = true

If, in the course of adding this information, you think it would be
useful for oslo.config or the config generator to provide a way to
expose or derive the information, let me know. We're going to be doing
some more work on the config generator to enable some automation in the
config reference guide maintained by the docs team, and this looks like
the sort of thing that might be useful to include in a discoverable way
(not just within the comment text for the options).
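One way to make that information discoverable, sketched below, is to keep the help text and its surrounding sections ("Services that use this", "Related options") as structured metadata and render the sample-config comment block from it. The `services` and `related` fields here are hypothetical, not an existing oslo.config API; this is only an illustration of the idea under discussion.

```python
import textwrap

def render_option(name, default, opt_type, help_text,
                  services=(), related=()):
    """Render structured option metadata in sample-config comment style."""
    lines = []
    for para in help_text.strip().split("\n\n"):
        wrapped = textwrap.fill(" ".join(para.split()), width=70)
        lines.extend("# " + w for w in wrapped.splitlines())
        lines.append("#")
    lines.append("# * Services that use this:")
    lines.append("#")
    if services:
        lines.extend("# ``%s``" % svc for svc in services)
    else:
        lines.append("# None")
    lines.append("#")
    lines.append("# * Related options:")
    lines.append("#")
    if related:
        lines.extend("# %s" % rel for rel in related)
    else:
        lines.append("# None")
    lines.append("#  (%s value)" % opt_type)
    lines.append("#%s = %s" % (name, default))
    return "\n".join(lines)

print(render_option(
    "scheduler_tracks_instance_changes", "true", "boolean",
    "The scheduler may need information about the instances on a host "
    "in order to evaluate its filters and weighers.",
    services=["nova-scheduler"]))
```

With the fields stored separately, the same metadata could feed both the generated sample config and the docs team's configuration reference, instead of living only inside free-form comment text.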

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Release of M3 milestone and FFE for rc-1

2016-03-02 Thread Sergey Kraynev
Hi all.

I want to inform all, that mitaka-3 milestone was recently released:
https://review.openstack.org/#/c/284198/

So now we are going to prepare mitaka-rc1.
This milestone has one Feature Freeze Exception:
https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport

For this BP we still have an unmerged functional test:
https://review.openstack.org/#/c/237608/

and patch for release notes:

https://review.openstack.org/#/c/287271/

Also BP:
https://blueprints.launchpad.net/heat/+spec/sfc-heat

was moved to newton-1, because each patch has some review comments and I'd
like to minimize the risk of running out of time before rc-1.

-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Sean M. Collins
Mike Spreitzer wrote:
> Could we at least make it less difficult to figure out which security 
> group is attached to a Nova instance?  Right now `nova show` says only 
> that the security group is named "default" and guess what --- they are 
> *all* named default!  An admin looking at this is lost.

Meaning your users are creating new security groups and naming them
"default" - so you have the "default" default (heh) and then the one
that they created named default?

Are security group names in Nova-Net unique? I seem to recall that being
a difference between Nova-Net and Neutron, where security group names
are not unique in Neutron - hence the problem above.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] 3.rd Party CI requirements for compliance

2016-03-02 Thread Indra Harijono
Hi,

I am new to this forum and OpenStack development, so please accept my
sincere apologies if I am submitting redundant questions.
I am writing to clarify the Cinder compliance requirements (and 3rd-party
CI testing).
We are developing a storage appliance and would like to run Cinder on it.
We don't directly modify the API, but we change the underlying (volume
provisioning) mechanisms.

-  Do we need to set up 3rd-party CI for compliance?

-  If yes, does it need full product-level documentation (such as
specs, etc.)?

-  How long is it necessary to provide such a 3rd-party CI system for
others, or is the CI setup meant to be used permanently to check compliance
each time OpenStack code is modified?

Any comments, suggestions, and feedback are highly appreciated.

Thanks,
Indra


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Dmitry Borodaenko
I agree with Mike's concerns, and propose to make these limitations (4
weeks before FF for OS upgrades, 2 weeks for upgrades of key
dependencies -- RabbitMQ, MCollective, Puppet, MySQL, PostgreSQL,
anything else?) official for 10.0/Newton. 

For 9.0/Mitaka, it is too late to impose them, so we just have to be
very careful and conservative with this upgrade. First of all, we need
to have a green BVT before and after switching to the CentOS 7.2 repo
snapshot, so while I approved the spec, we can't move forward with this
until BVT is green again, and right now it's red:

https://ci.fuel-infra.org/job/9.0.fuel_community.ubuntu.bvt_2/

If we get it back to green but it becomes red after the upgrade, you
must switch back to CentOS 7.1 *immediately*. If you are able to stick
to this plan, there is still time to complete the transition today
without requiring an FFE.

-- 
Dmitry Borodaenko


On Wed, Mar 02, 2016 at 05:53:53PM +, Mike Scherbakov wrote:
> Formally, we can merge it today. Historically, every OS update has caused us
> instability for some time: from days to a couple of months.
> Taking this into account and number of other exceptions requested, overall
> stability of code, my opinion would be to postpone this to 10.0.
> 
> Also, I'd suggest to change the process, and have freeze date for all OS
> updates no later than a month before official FF date. This will give us
> time to stabilize, and ensure that base on which all new code is being
> developed is stable when approaching FF.
> 
> I'd also propose to have freeze for major upgrades of 3rd party packages no
> later than 2 weeks before FF, which Fuel depends heavily upon. For
> instance, such will include RabbitMQ, MCollective, Puppet.
> 
> On Wed, Mar 2, 2016 at 7:34 AM Igor Marnat  wrote:
> 
> > Igor,
> > couple of points from my side.
> >
> > CentOS 7.2 will be getting updates for several more months, and we have
> > snapshots and all the mechanics in place to switch to the next version when
> > needed.
> >
> > Speaking of getting this update into 9.0, we actually don't need an FFE;
> > we can merge the remaining stuff today. It has enough reviews, so if you
> > add your +1 today, we don't need an FFE.
> >
> > https://review.openstack.org/#/c/280338/
> > https://review.fuel-infra.org/#/c/17400/
> >
> >
> >
> > Regards,
> > Igor Marnat
> >
> > On Wed, Mar 2, 2016 at 6:23 PM, Dmitry Teselkin 
> > wrote:
> >
> >> Igor,
> >>
> >> Your statement about updates for 7.2 isn't correct - it will receive
> >> updates,  because it's the latest release ATM. There is *no* pinning inside
> >> ISO, and the only place where it was 8.0 were docker containers just
> >> because we had to workaround some issues. But there are no docker
> >> containers in 9.0, so that's not the case.
> >> The proposed solution to switch to CentOS-7.2 in fact is based on
> >> selecting the right snapshot with packages. There is no pinning in ISO (it
> >> was in earlier versions of the spec but was removed).
> >>
> >> On Wed, Mar 2, 2016 at 6:11 PM, Igor Kalnitsky 
> >> wrote:
> >>
> >>> Dmitry, Igor,
> >>>
> >>> > Very important thing is that CentOS 7.1 which master node is based now
> >>> > don't get updates any longer.
> >>>
> >>> If you are using a "fixed" release you must be ready for the fact that
> >>> you won't get any updates. So with CentOS 7.2 the problem is still the
> >>> same.
> >>>
> >>> However, let's wait for the Fuel PTL's decision. I only shared my POV:
> >>> this is not a critical feature, and taking into account the risks of
> >>> regression, I'd prefer not to accept it in 9.0.
> >>>
> >>> Regards,
> >>> Igor
> >>>
> >>> On Wed, Mar 2, 2016 at 4:42 PM, Igor Marnat 
> >>> wrote:
> >>> > Igor,
> >>> > please note that this is pretty much not like update of master node
> >>> which we
> >>> > had in 8.0. This is minor _update_ of CentOS from 7.1 to 7.2 which team
> >>> > tested for more than 2 months already.
> >>> >
> >>> > We don't expect it to require any additional efforts from core or qa
> >>> team.
> >>> >
> >>> > Very important thing is that CentOS 7.1 which master node is based now
> >>> don't
> >>> > get updates any longer. Updates are only provided for CentOS 7.2.
> >>> >
> >>> > So we'll have to switch CentOS 7.1 to CentOS 7.2 anyways.
> >>> >
> >>> > We can do it now for more or less free, later in release cycle for
> >>> higher
> >>> > risk and QA efforts and after the release for 2x price because of
> >>> additional
> >>> > QA cycle we'll need to pass through.
> >>> >
> >>> >
> >>> >
> >>> > Regards,
> >>> > Igor Marnat
> >>> >
> >>> > On Wed, Mar 2, 2016 at 2:57 PM, Dmitry Teselkin <
> >>> dtesel...@mirantis.com>
> >>> > wrote:
> >>> >>
> >>> >> Hi Igor,
> >>> >>
> >>> >> Postponing this till Fuel 10 means we have to elaborate a plan to do
> >>> such
> >>> >> upgrade for Fuel 9 after the release - the underlying system will not
> >>> get
> >>> >> updated on it's own, and the security issues will 

Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Tim Bell

Great. Does this additional improved text also get into the configuration guide
documentation somehow?



Tim

On 02/03/16 18:45, "Markus Zoeller"  wrote:

>TL;DR: Of the ~600 Nova-specific config options:
>~140 at a central location with an improved help text
>~220 options in open reviews (currently on hold)
>~240 options todo
>
>
>Background
>==
>Nova has a lot of config options. Most of them weren't well
>documented and without looking in the code you probably don't
>understand what they do. That's fine for us developers but the ops
>had more problems with the interface we provide for them [1]. After
>the Mitaka summit we came to the conclusion that this should be 
>improved, which is currently in progress with blueprint [2].
>
>
>Current Status
>==
>After asking on the ML for help [3] the progress improved a lot. 
>The goal is clear now and we know how to achieve it. The organization 
>is done via [4] which also has a section of "odd config options". 
>This section is important for a later step when we want to deprecate 
>config options to get rid of unnecessary ones. 
>
>As we reached the Mitaka-3 milestone we decided to put the effort [5] 
>on hold to stabilize the project and focus the review effort on bug 
>fixes. When the Newton cycle opens, we can continue the work. The 
>current result can be seen in the sample "nova.conf" file generated 
>after each commit [6]. The appendix at the end of this post shows an
>example.
>
>All options we have will be treated that way and moved to a central
>location at "nova/conf/". That's the central location which now hosts
>the interface to the ops. It's easier to get an overview now.
>The appendix shows how the config options were spread at the beginning
>and how they are located now.
>
>I initially thought that we have around 800 config options in Nova
>but I learned meanwhile that we import a lot from other libs, for 
>example from "oslo.db" and expose them as Nova options. We have around
>600 Nova specific config options, and ~140 are already treated as
>described above and ca. 220 are in the pipeline of open reviews,
>which leaves ~240 that have not been looked at yet.
>
>
>Outlook
>===
>The numbers at the beginning of this post make me believe that we
>can finish the work in the upcoming Newton cycle. "Finished" means
>here: 
>* all config options we provide to our ops have proper and usable docs
>* we have an understanding which options don't make sense anymore
>* we know which options should get stronger validation to reduce errors
>
>I'm looking forward to it :)
>
>
>Thanks
>==
>I'd like to thank all the people who are working on this and making
>this possible. A special thanks goes to Ed Leafe, Esra Celik and
>Stephen Finucane. They put a tremendous amount of work in it.
>
>
>References:
>===
>[1] 
>http://lists.openstack.org/pipermail/openstack-operators/2016-January/009301.html
>[2] https://blueprints.launchpad.net/nova/+spec/centralize-config-options
>[3] 
>http://lists.openstack.org/pipermail/openstack-dev/2015-December/081271.html
>[4] https://etherpad.openstack.org/p/config-options
>[5] Gerrit reviews for this topic: 
>https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options
>[6] The sample config file which gets generated after each commit:
>http://docs.openstack.org/developer/nova/sample_config.html
>
>
>Appendix
>
>
>Example of the help text improvement
>---
>As an example, compare the previous documentation of the scheduler 
>option "scheduler_tracks_instance_changes". 
>Before we started:
>
># Determines if the Scheduler tracks changes to instances to help 
># with its filtering decisions. (boolean value)
>#scheduler_tracks_instance_changes = true
>
>After the improvement:
>
># The scheduler may need information about the instances on a host 
># in order to evaluate its filters and weighers. The most common 
># need for this information is for the (anti-)affinity filters, 
># which need to choose a host based on the instances already running
># on a host.
>#
># If the configured filters and weighers do not need this information,
># disabling this option will improve performance. It may also be 
># disabled when the tracking overhead proves too heavy, although 
># this will cause classes requiring host usage data to query the 
># database on each request instead.
>#
># This option is only used by the FilterScheduler and its subclasses;
># if you use a different scheduler, this option has no effect.
>#
># * Services that use this:
>#
># ``nova-scheduler``
>#
># * Related options:
>#
># None
>#  (boolean value)
>#scheduler_tracks_instance_changes = true
>
>
>The spread of config options in the tree

Re: [openstack-dev] [nova] Non-Admin user can show deleted instances using changes-since parameter when calling list API

2016-03-02 Thread Matt Riedemann



On 3/2/2016 3:02 AM, Zhenyu Zheng wrote:

Hi, Nova,

I'm working on adding "changes-since" parameter support to the
python-novaclient "list" CLI.

I realized that non-admin users can list all deleted instances using the
"changes-since" parameter. This is reasonable on some level, as a delete
is an update to an instance. But we have a limitation that, when listing
instances, the "deleted" parameter is only allowed for admin users.

This leads to an inconsistency in the rules for showing deleted instances:
we limit the listing of deleted instances to admins only, but non-admins
can get the same information using "changes-since".

Should we fix this?

https://bugs.launchpad.net/nova/+bug/1552071
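To make the overlap concrete, here is a toy model of the two filters. This is an illustrative simplification, not Nova's actual code; the function and field names are made up. The explicit "deleted" filter is policy-checked, while "changes-since" matches on updated_at and deliberately includes deleted rows.

```python
def list_servers(servers, is_admin, deleted=False, changes_since=None):
    # Toy model of the listing semantics discussed in this thread.
    if deleted and not is_admin:
        # The explicit 'deleted' filter is admin-only.
        raise PermissionError("'deleted' is only allowed for admin users")
    result = []
    for server in servers:
        if changes_since is not None:
            # changes-since matches on updated_at and deliberately keeps
            # deleted rows (a delete is itself a change) -- this is how a
            # non-admin ends up seeing deleted instances.
            if server['updated_at'] >= changes_since:
                result.append(server)
        elif not server['deleted']:
            result.append(server)
    return result
```

A non-admin passing changes_since thus sees the same deleted rows that the explicit deleted=True path would have refused.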

Thanks,

Kevin Zheng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Unless I'm missing some use case, I think that listing instances for 
non-admins should be restricted to the instances they own, regardless of 
whether or not they are deleted, period.


As for listing deleted instances as an admin, that was broken with the 
2.16 microversion and there is a fix here:


https://review.openstack.org/#/c/283820/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash Mitaka

2016-03-02 Thread Markus Zoeller
"Wang, Shane"  wrote on 02/05/2016 04:42:21 AM:

> From: "Wang, Shane" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 02/05/2016 04:43 AM
> Subject: Re: [openstack-dev] [bug-smash] Global OpenStack Bug Smash 
Mitaka
> 
> Hi all,
> 
> After discussing with TC members and other community guys, we thought 
> March 2-4 might not be a good timing for bug smash. So we decided to 
> change the dates to be March 7-9 (Monday-Wednesday) in R4.
> Please join our efforts to fix bugs for OpenStack.
> 
> Thanks.

Hi Shane,

I'm the bug list maintainer of Nova; is it possible for me to propose
a list of bugs which need fixes?
Nova (and surely other projects too) would also benefit from:
* a cleanup of inconsistencies in bug reports in Launchpad
* triaging new bugs in Launchpad
* reviews of pushed bug fixes in Gerrit
Basically the steps from [1]. As we're heading into the RC phase in a
few weeks, it would be beneficial to have a lot of eyes on that.

References:
[1] https://wiki.openstack.org/wiki/BugTriage

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Mike Spreitzer
"Sean M. Collins"  wrote on 03/02/2016 12:38:29 PM:

> I think that the default security group should be left as is - and users
> should be trained that they should bring/create security groups with the
> appropriate rules for their need.

Could we at least make it less difficult to figure out which security 
group is attached to a Nova instance?  Right now `nova show` says only 
that the security group is named "default" and guess what --- they are 
*all* named default!  An admin looking at this is lost.
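Until the tooling improves, an admin can at least disambiguate by tenant, since every project gets its own group named "default". The helper below is only a sketch over data already returned by the Neutron v2 API; treat the exact field names ('name', 'tenant_id', 'id') as assumptions about the payload shape.

```python
def resolve_default_group(instance, security_groups):
    # Pick the 'default' security group that belongs to the same tenant
    # as the instance; every project has a group with this name, so the
    # tenant id is what actually disambiguates them.
    for group in security_groups:
        if (group['name'] == 'default'
                and group['tenant_id'] == instance['tenant_id']):
            return group['id']
    return None
```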


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-02 Thread Jeremy Stanley
On 2016-02-29 15:03:19 -0800 (-0800), James Bottomley wrote:
[...]
> it sounds like an expectation that people who aren't gamers
> would submit more than one patch and, indeed, become part of the
> developer base. I wanted to explain why there's a significant set
> of people who legitimately only submit a single patch and who
> won't really ever become developers.

Some rough curve-fitting was performed based off per-contributor
patch counts over a cycle and the observation was that single-patch
contributors were a significant positive deviation. The model
suggested that we'd have somewhere around 10% fewer active
contributors in a cycle if we calculated the single-patch
contributor projection based on the counts of contributors with an
increasing number of merged patches that cycle. This was used as
justification not to increase the minimum patch requirement for a
free summit pass, since the prediction was that would simply move
the 10% bump to whatever the minimum number of patches was to
qualify.
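The underlying data and fitting code were not published with this post, so the snippet below only illustrates the shape of such an analysis: fit a line to log(count) versus log(patches) for contributors with two or more merged patches, extrapolate to one patch, and read the surplus of observed single-patch contributors as the "bump". All numbers are made up.

```python
import math

def project_single_patch(counts):
    # Least-squares fit of log(count) = a + b*log(k) over k >= 2,
    # then extrapolate to k = 1 (where log k = 0, so count = e**a).
    ks = [k for k in sorted(counts) if k >= 2]
    xs = [math.log(k) for k in ks]
    ys = [math.log(counts[k]) for k in ks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return math.exp(a)

# Illustrative numbers only (not the real OpenStack data): counts for
# k >= 2 follow 1008 / k**2 exactly, while 1400 people sent one patch.
observed = {1: 1400, 2: 252, 3: 112, 4: 63}
predicted = project_single_patch(observed)
bump = observed[1] - predicted  # surplus single-patch contributors
```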

What I'd love to do, but have yet to find the time, is perform a
more detailed analysis and model incorporating a mapping of which
contributors exercised their free admission. This would provide a
far more accurate picture of whether people are really contributing
just one patch so they can save US$600 on conference admission, or
whether we have a disproportionate number of single-patch
contributors for some other more serious reason. Honestly, if the
rough model turns out to be true, then serving 90% in the way we
intended with only 10% freeloading seems fine to me (as long as
people who have no real intention of contributing stay out of design
sessions and let the rest of us get work done).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Cells meeting cancelled next week

2016-03-02 Thread Andrew Laski
Since we'll be past FF by then, work in progress will be slowing down. We
will still meet occasionally to discuss specs or prepare for the summit,
but not next week.

The next meeting will be March 16th at 1700 UTC.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Mike Scherbakov
Formally, we can merge it today. Historically, every OS update has caused us
instability for some time: from days to a couple of months.
Taking this into account, along with the number of other exceptions requested
and the overall stability of the code, my opinion would be to postpone this to
10.0.

Also, I'd suggest changing the process to have a freeze date for all OS
updates no later than a month before the official FF date. This would give us
time to stabilize, and ensure that the base on which all new code is being
developed is stable when approaching FF.

I'd also propose a freeze for major upgrades of third-party packages that
Fuel depends heavily upon, no later than 2 weeks before FF. For instance,
that would include RabbitMQ, MCollective, and Puppet.

On Wed, Mar 2, 2016 at 7:34 AM Igor Marnat  wrote:

> Igor,
> couple of points from my side.
>
> CentOS 7.2 will be getting updates for several more months, and we have
> snapshots and all the mechanics in place to switch to the next version when
> needed.
>
> Speaking of getting this update into 9.0, we actually don't need FFE, we
> can merge remaining staff today. It has enough reviews, so if you add your
> +1 today, we don't need FFE.
>
> https://review.openstack.org/#/c/280338/
> https://review.fuel-infra.org/#/c/17400/
>
>
>
> Regards,
> Igor Marnat
>
> On Wed, Mar 2, 2016 at 6:23 PM, Dmitry Teselkin 
> wrote:
>
>> Igor,
>>
>> Your statement about updates for 7.2 isn't correct - it will receive
>> updates, because it's the latest release ATM. There is *no* pinning inside
>> the ISO, and the only place where it was pinned in 8.0 was the docker
>> containers, just because we had to work around some issues. But there are
>> no docker containers in 9.0, so that's not the case.
>> The proposed solution to switch to CentOS-7.2 in fact is based on
>> selecting the right snapshot with packages. There is no pinning in the ISO (it
>> was in earlier versions of the spec but was removed).
>>
>> On Wed, Mar 2, 2016 at 6:11 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Dmitry, Igor,
>>>
>>> > Very important thing is that CentOS 7.1 which master node is based now
>>> > don't get updates any longer.
>>>
>>> If you are using a "fixed" release, you must be prepared not to get
>>> any updates. So with CentOS 7.2 the problem is still the same.
>>>
>>> However, let's wait for the Fuel PTL's decision. I only shared my POV:
>>> it's not a critical feature, and taking into account the risks of
>>> regression, I'd prefer not to accept it in 9.0.
>>>
>>> Regards,
>>> Igor
>>>
>>> On Wed, Mar 2, 2016 at 4:42 PM, Igor Marnat 
>>> wrote:
>>> > Igor,
>>> > please note that this is pretty much not like the update of the master
>>> > node which we had in 8.0. This is a minor _update_ of CentOS from 7.1
>>> > to 7.2 which the team has tested for more than 2 months already.
>>> >
>>> > We don't expect it to require any additional efforts from core or qa
>>> team.
>>> >
>>> > A very important thing is that CentOS 7.1, which the master node is
>>> > now based on, doesn't get updates any longer. Updates are only
>>> > provided for CentOS 7.2.
>>> >
>>> > So we'll have to switch from CentOS 7.1 to CentOS 7.2 anyway.
>>> >
>>> > We can do it now more or less for free, later in the release cycle at
>>> > higher risk and QA effort, or after the release at twice the price
>>> > because of the additional QA cycle we'll need to pass through.
>>> >
>>> >
>>> >
>>> > Regards,
>>> > Igor Marnat
>>> >
>>> > On Wed, Mar 2, 2016 at 2:57 PM, Dmitry Teselkin <
>>> dtesel...@mirantis.com>
>>> > wrote:
>>> >>
>>> >> Hi Igor,
>>> >>
>>> >> Postponing this till Fuel 10 means we have to elaborate a plan to do
>>> >> such an upgrade for Fuel 9 after the release - the underlying system
>>> >> will not get updated on its own, and the security issues will not
>>> >> close themselves. The problem here is that such an upgrade of a
>>> >> deployed master node also requires a lot of QA work.
>>> >>
>>> >> Since we are not going to update the packages we build on our own
>>> >> (they still target 7.1), switching the master node base to CentOS 7.2
>>> >> is not as dangerous as it seems.
>>> >>
>>> >> On Wed, Mar 2, 2016 at 1:54 PM, Igor Kalnitsky <
>>> ikalnit...@mirantis.com>
>>> >> wrote:
>>> >>>
>>> >>> Hey Dmitry,
>>> >>>
>>> >>> No offence, but I am rather against that exception. We have too many
>>> >>> things to do in Mitaka, and moving to CentOS 7.2 means
>>> >>>
>>> >>> * extra effort from core team
>>> >>> * extra effort from qa team
>>> >>>
>>> >>> Moreover, it might block development by introducing unpredictable
>>> >>> regressions. Remember 8.0? So I think it'd be better to reduce the
>>> >>> risk of regressions that affect so many developers by postponing
>>> >>> CentOS 7.2 till Fuel 10.
>>> >>>
>>> >>> Thanks,
>>> >>> Igor
>>> >>>
>>> >>>
>>> >>> On Mon, Feb 29, 2016 at 7:13 PM, Dmitry Teselkin <
>>> dtesel...@mirantis.com>
>>> >>> wrote:
>>> >>> > I'd like to ask for a feature freeze exception 

[openstack-dev] [ceilometer] Unable to get ceilometer events for instances running on demo project

2016-03-02 Thread Umar Yousaf
I have a single-node DevStack Liberty configuration working and I want
to record all the *ceilometer events* like compute.instance.start,
compute.instance.end, compute.instance.update etc. that occurred recently.
I am unable to get any events for instances running in the demo
project, i.e. when I try *ceilometer event-list* I end up with an empty
list, but I can fortunately get all the necessary events for instances
running in the admin project/tenant with the same command.
In addition to this, I want to get these through the Python client, so if
someone could provide me with the equivalent Python call, that would be
more than handy.
Thanks in advance :)
Regards,
Umar
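For the Python side: python-ceilometerclient exposes the same data through `events.list()`, which takes a v2 query (a list of field/op/value dicts). The sketch below shows one way this could look; the credentials and auth URL are placeholders for your own environment, so treat the whole snippet as an unverified example rather than a recipe. (The empty list for the demo project may also just be a policy restriction, which no client call will work around.)

```python
def build_event_query(event_type=None):
    # Build a Ceilometer v2 query: a list of field/op/value dicts.
    query = []
    if event_type:
        query.append({'field': 'event_type', 'op': 'eq',
                      'value': event_type})
    return query


def list_demo_events():
    # Requires python-ceilometerclient and a reachable cloud; the
    # credentials below are placeholders for your environment.
    from ceilometerclient import client
    cclient = client.get_client(
        2,
        os_username='demo',
        os_password='secret',
        os_tenant_name='demo',
        os_auth_url='http://127.0.0.1:5000/v2.0')
    return cclient.events.list(
        q=build_event_query('compute.instance.create.start'))
```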
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-02 Thread Emilien Macchi


On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
> Hi all,
> 
>  
> 
> we have puppet module for ovs deployments with dpdk support
> 
> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet

IMHO that's a bad idea to use networking-ovs-dpdk for the puppet module.
You should initiate the work to create openstack/puppet-dpdk (not sure
about the name) or try to patch openstack/puppet-vswitch.

How would puppet-vswitch be different from puppet-dpdk?

I've looked at the code and you run bash scripts from Puppet.
Really? :-)

> and we would like to adapt it in a way that it can be used within
> upstream neutron module
> 
> e.g. to introduce class like this
> 
> neutron::agents::ml2::ovsdpdk
> 
>  
> 
> Current code works as follows:
> 
> -  OpenStack with vanilla OVS installed is a precondition
> 
> -  The ovsdpdk puppet module installation is triggered afterwards,
> and it replaces vanilla OVS with ovsdpdk
> 
> (in order to have some flexibility and mostly due to performance reasons
> we are building ovs from src code)
> 
> https://github.com/openstack/networking-ovs-dpdk/blob/master/puppet/ovsdpdk/files/build_ovs_dpdk.erb
> 
> -  As a part of deployments we have several shell scripts, which
> are taking care of build and configuration stuff
> 
>  
> 
> I assume that some parts of our code can be easily rewritten to start
> using standard providers; other parts might be rewritten in Ruby …
> 
> We would like to introduce neutron::agents::ml2::ovsdpdk as an adequate
> solution alongside the existing neutron::agents::ml2::ovs, not just patch it.
> 

What the Puppet OpenStack group will let neutron::agents::ml2::ovsdpdk do:

* configure what you like in /etc/neutron/*
* install what you want that is part of OpenStack/Neutron* (upstream).

What the Puppet OpenStack group WILL NOT let neutron::agents::ml2::ovsdpdk
do:

* install third party software (packages from some custom repositories,
not upstream).
* build RPM/DEB from bash scripts
* build anything from bash scripts
* configure anything outside /etc/neutron/*

> 
> Actually I have following questions:
> 
> Q1) Would it be acceptable if we move the build logic before deployment,
> so that the resulting rpm/deb is installed instead of the ovs package
> during deployment?

You should engage efforts to have upstream packaging in Ubuntu/Debian
and Red Hat systems (RDO).

> Q2) Do we need to rewrite bash logic into ruby code ?

Drop bash scripts, and use upstream packages, like we do everywhere else.

> Q3) Do we need to raise a separate blueprint, which has to be approved
> before starting the adaptations?

Feel free to submit a blueprint so our group can be involved in this
discussion, or maybe this thread is enough.
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Markus Zoeller
TL;DR: Of the ~600 Nova-specific config options:
~140 at a central location with an improved help text
~220 options in open reviews (currently on hold)
~240 options todo


Background
==
Nova has a lot of config options. Most of them weren't well
documented and without looking in the code you probably don't
understand what they do. That's fine for us developers but the ops
had more problems with the interface we provide for them [1]. After
the Mitaka summit we came to the conclusion that this should be 
improved, which is currently in progress with blueprint [2].


Current Status
==
After asking on the ML for help [3] the progress improved a lot. 
The goal is clear now and we know how to achieve it. The organization 
is done via [4] which also has a section of "odd config options". 
This section is important for a later step when we want to deprecate 
config options to get rid of unnecessary ones. 

As we reached the Mitaka-3 milestone we decided to put the effort [5] 
on hold to stabilize the project and focus the review effort on bug 
fixes. When the Newton cycle opens, we can continue the work. The 
current result can be seen in the sample "nova.conf" file generated 
after each commit [6]. The appendix at the end of this post shows an
example.

All options we have will be treated that way and moved to a central
location at "nova/conf/". That's the central location which now hosts
the interface to the ops. It's easier to get an overview now.
The appendix shows how the config options were spread at the beginning
and how they are located now.

I initially thought that we have around 800 config options in Nova
but I learned meanwhile that we import a lot from other libs, for 
example from "oslo.db" and expose them as Nova options. We have around
600 Nova specific config options, and ~140 are already treated as
described above and ca. 220 are in the pipeline of open reviews,
which leaves ~240 that have not been looked at yet.


Outlook
===
The numbers at the beginning of this post make me believe that we
can finish the work in the upcoming Newton cycle. "Finished" means
here: 
* all config options we provide to our ops have proper and usable docs
* we have an understanding which options don't make sense anymore
* we know which options should get stronger validation to reduce errors

I'm looking forward to it :)


Thanks
==
I'd like to thank all the people who are working on this and making
this possible. A special thanks goes to Ed Leafe, Esra Celik and
Stephen Finucane. They put a tremendous amount of work in it.


References:
===
[1] 
http://lists.openstack.org/pipermail/openstack-operators/2016-January/009301.html
[2] https://blueprints.launchpad.net/nova/+spec/centralize-config-options
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081271.html
[4] https://etherpad.openstack.org/p/config-options
[5] Gerrit reviews for this topic: 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options
[6] The sample config file which gets generated after each commit:
http://docs.openstack.org/developer/nova/sample_config.html


Appendix


Example of the help text improvement
---
As an example, compare the previous documentation of the scheduler 
option "scheduler_tracks_instance_changes". 
Before we started:

# Determines if the Scheduler tracks changes to instances to help 
# with its filtering decisions. (boolean value)
#scheduler_tracks_instance_changes = true

After the improvement:

# The scheduler may need information about the instances on a host 
# in order to evaluate its filters and weighers. The most common 
# need for this information is for the (anti-)affinity filters, 
# which need to choose a host based on the instances already running
# on a host.
#
# If the configured filters and weighers do not need this information,
# disabling this option will improve performance. It may also be 
# disabled when the tracking overhead proves too heavy, although 
# this will cause classes requiring host usage data to query the 
# database on each request instead.
#
# This option is only used by the FilterScheduler and its subclasses;
# if you use a different scheduler, this option has no effect.
#
# * Services that use this:
#
# ``nova-scheduler``
#
# * Related options:
#
# None
#  (boolean value)
#scheduler_tracks_instance_changes = true
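As an aside, the commented entry above is rendered by oslo-config-generator from the option's help string. The basic shape of that rendering can be sketched in a few lines; this is a simplified stand-in, not the generator's real implementation.

```python
import textwrap

def render_sample_opt(name, default, type_name, help_text):
    # Render one option in the commented sample-file style shown above:
    # wrapped help paragraphs separated by bare '#', a '(type value)'
    # tag, then the commented-out default value.
    lines = []
    for paragraph in help_text.strip().split('\n\n'):
        for line in textwrap.wrap(' '.join(paragraph.split()), width=68):
            lines.append('# ' + line)
        lines.append('#')
    lines[-1] = '#  ({} value)'.format(type_name)
    lines.append('#{} = {}'.format(name, str(default).lower()))
    return '\n'.join(lines)
```

Feeding it the long help text above would reproduce something close to the improved sample entry.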


The spread of config options in the tree

We started with this in November 2015. It's the Nova project tree and 
the numbers behind the package name are the numbers of config options
declared in that package (config options declared in sub-packages are
not accumulated).

Based on:
commit 201090b0bcb 
Date: Thu Nov 19 

Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-02 Thread Samuel Bercovici
Thank you all for your response.

In my opinion, given that the UI/Heat support will make Mitaka and will have
one cycle to mature, it makes sense to remove LBaaS v1 in Newton.
Do we want to discuss an upgrade process at the summit?

-Sam.


From: Bryan Jones [mailto:jone...@us.ibm.com]
Sent: Wednesday, March 02, 2016 5:54 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

And as for the Heat support, the resources have made Mitaka, with additional 
functional tests on the way soon.

blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com


- Original message -
From: Justin Pomeroy 
>
To: openstack-dev@lists.openstack.org
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka.  See the blueprint and 
gerrit topic:

https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the 
latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are 
interested in maintaining it, there is a third option, which is that the code 
can be maintained in a separate repo, by a separate team (with or without the 
current core team’s blessing.)

No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which path forward 
is being discussed at today’s Octavia meeting, or feedback is of course 
welcomed here, in IRC, or anywhere.

Thanks,
doug

On Mar 2, 2016, at 7:06 AM, Samuel Bercovici 
> wrote:

Hi,

I have just notices the following change: 
https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have the 
following before we replace LBaaS v1:
1.  Horizon Support – was not able to find any real activity on it
2.  HEAT Support – will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of LBaaS v1?

-Sam.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Sean M. Collins
Kevin Benton wrote:
> * Instances without ingress are useless so a bunch of API calls are
> required to make them useful.

This is not true in all cases. There are plenty of workloads that only
require outbound connectivity. Workloads where data is fetched,
computed, then transmitted elsewhere for storage.

> * It violates the end-to-end principle of the Internet to have a middle-box
> meddling with traffic (the compute node in this case).

Again, this is someone's *opinion* - but it is not an opinion
universally shared.

> * Neutron cannot be trusted to do what it says it's doing with the security
> groups API so users want to orchestrate firewalls directly on their
> instances.

This one really rubs me the wrong way. Can we please get a better
description of the bug - instead of someone just saying that Neutron
doesn't work, therefore we don't want any filtering or security for our
instances using an API?

> Second, would it be acceptable to make this operator configurable? This
> would mean users could receive different default filtering as they moved
> between clouds.

It is my belief that, for an application that is going to run in a cloud
environment, it is not enough to just upload your disk image and expect
that to be the only thing needed to run the app in the cloud. You
will also need to bring your security policy into the cloud as well -
Who can access? How can they access? Which parts of the app can talk to
sensitive parts of the app like the database servers?

I think that the default security group should be left as is - and users
should be trained that they should bring/create security groups with the
appropriate rules for their need.

If infra wants to opt out of the security group API and allow
everything, and then filter using the guest - then fine. That's their
prerogative. All they've done is change where their security policies
are implemented. Instead of a REST API they want to do it directly on
their guest.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

2016-03-02 Thread Sergey Lukjanov
Hi,

FFE not approved.

tl;dr

The spec for this feature isn't approved yet, so I can't grant an FFE for it,
because it would take much more time to get the spec aligned and then the
code written against it and finally merged. Regarding the support in all
plugins - I more or less agree with Vitaly that it's bad to have support for
the Data Sources in only a single plugin; we could start with that at the
beginning of a cycle, but I prefer not to ship a new feature limited to one
concrete plugin in a release.

Thanks.

On Wed, Mar 2, 2016 at 8:46 AM, Chen, Weiting 
wrote:

> Hi,
>
>
>
> Currently, there is no plan for other plugin support in this feature.
>
> We would like to put this feature on the table at first and see if it can
> bring more customers who are interested in Big Data on Cloud and expecting
> to integrate Hadoop with different storage type support.
>
> However, it’s just a beginning, and it should be worth a shot to bring it
> into Mitaka. Support for any other plugin also remains open and on-demand
> for the future.
>
>
>
> *From:* Vitaly Gridnev [mailto:vgrid...@mirantis.com]
> *Sent:* Wednesday, March 2, 2016 3:31 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [sahara]FFE Request for
> nfs-as-a-data-source
>
>
>
> Hi,
>
>
>
> From my point of view, if we are adding a new type of data source (or
> configurations for one), it should be supported in almost
> all plugins (at least vanilla, spark, ambari and cdh, I guess). The current
> implementation is nice, but it touches only the vanilla 2.7.1
> plugin, which seems strange to me. Are there plans to add support for other
> plugins? If yes, then I think this feature should be done in the Newton cycle
> to have a complete picture of the support. If no, I think it's ok to land
> this code in the RC with other improvements in validation.
>
>
>
> In conclusion, I would say that from my point of view we should
> collaborate actively to implement this support early in the Newton-1 cycle;
> that would be the best choice.
>
>
>
> Thanks.
>
>
>
> On Wed, Mar 2, 2016 at 4:23 AM, Chen, Weiting 
> wrote:
>
> Hi all,
>
>
>
> I would like to request a FFE for the feature “nfs-as-a-data-source”:
>
> BP: https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
>
> BP Review: https://review.openstack.org/#/c/210839/
>
> Sahara Code: https://review.openstack.org/#/c/218638/
>
> Sahara Image Elements Code: https://review.openstack.org/#/c/218637/
>
>
>
> Estimated completion time: The BP is complete, and the implementation
> is complete as well. All the code is under review, and since
> there are no big changes or modifications to the code, we expect it to take
> only one week to merge.
>
> The Benefits for this change: Provide NFS support in Sahara.
>
> The Risk: The risk would be low for this patch, since all the functions
> have been delivered.
>
>
>
> Thanks,
>
> Weiting(William) Chen
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best Regards,
>
> Vitaly Gridnev
>
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilosca][Neutron][Monasca]

2016-03-02 Thread Rubab Syed
On 1 Mar 2016 21:25, "Rubab Syed"  wrote:

> Hi all,
>
> I'm planning to write a plugin for Monasca that would enable router's
> traffic monitoring per subnet per tenant. For that purpose, I'm using
> Neutron l3 metering extension [1] that allows you to filter traffic based
> on CIDRs.
>
> My concerns:
>
> - Now given the fact that this extension can be used to create labels and
> rules for particular set of IPs and ceilometer can be used to meter the
> bandwidth based on this data and monasca publisher for ceilometer is also
> available, would that plugin be useful somehow? Where are we at ceilosca
> right now?
>
> - Even though ceilometer allows to meter bandwidth at l3 level, we still
> have to create explicit labels and rules for all subnets attached to a
> router. In a production environment where there could be multiple routers
> belonging to multiple tenants, isn't that a fair bit of work? I was wondering if
> I could automate the label and rule creation process. My script would
> automatically detect subnets and create rules per interface of router. It
> would help in ceilosca as well and can be used by the router plugin (given
> plugin is not redundant work). Comments?
>
>
> [1] https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth
>
>
> Thanks,
> Rubab
>
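The automation Rubab describes can be sketched as a pure helper that, given the subnets attached to a router, builds the metering-label and rule request bodies the l3 metering API expects. This is an illustrative sketch (the function name and payload wiring are assumptions, not existing code); a real script would discover the router's subnets and submit these bodies via python-neutronclient.

```python
def build_metering_payloads(tenant_id, subnets, direction="ingress"):
    """Build one metering-label body plus one rule body per subnet CIDR.

    `subnets` is a list of dicts with at least a 'cidr' key, as found in
    a Neutron subnet-list response. Returns (label_body, rule_bodies).
    """
    label_body = {
        "metering_label": {
            "name": "traffic-%s" % tenant_id,
            "description": "auto-generated per-subnet metering",
            "tenant_id": tenant_id,
        }
    }
    rule_bodies = [
        {
            "metering_label_rule": {
                # The label id is filled in after the label is created.
                "metering_label_id": None,
                "direction": direction,
                "excluded": False,
                "remote_ip_prefix": subnet["cidr"],
            }
        }
        for subnet in subnets
    ]
    return label_body, rule_bodies


if __name__ == "__main__":
    subnets = [{"cidr": "10.0.0.0/24"}, {"cidr": "10.0.1.0/24"}]
    label, rules = build_metering_payloads("tenant-a", subnets)
    print(label["metering_label"]["name"])   # traffic-tenant-a
    print(len(rules))                        # 2
```

A wrapper would create the label first, copy its id into each rule body's metering_label_id, and then submit the rules, one label per router/tenant.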
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

2016-03-02 Thread Chen, Weiting
Hi,

Currently, there is no plan for other plugin support in this feature.
We would like to put this feature on the table first and see if it can bring 
more customers who are interested in Big Data on Cloud and expect to 
integrate Hadoop with support for different storage types.
However, it's just a beginning, and it should be worth a shot to bring it into 
Mitaka. Support for any other plugin also remains open and on-demand in the 
future.

From: Vitaly Gridnev [mailto:vgrid...@mirantis.com]
Sent: Wednesday, March 2, 2016 3:31 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

Hi,

From my point of view, if we are adding a new type of data source (or 
configurations for one), it should be supported in almost all 
plugins (at least vanilla, spark, ambari and cdh, I guess). The current 
implementation is nice, but it touches only the vanilla 2.7.1 plugin, 
which seems strange to me. Are there plans to add support for other plugins? If yes, 
then I think this feature should be done in the Newton cycle to have a complete 
picture of the support. If no, I think it's ok to land this code in the RC with 
other improvements in validation.

In conclusion, I would say that from my point of view we should collaborate 
actively to implement this support early in the Newton-1 cycle; that would be 
the best choice.

Thanks.

On Wed, Mar 2, 2016 at 4:23 AM, Chen, Weiting 
> wrote:
Hi all,

I would like to request a FFE for the feature “nfs-as-a-data-source”:
BP: https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
BP Review: https://review.openstack.org/#/c/210839/
Sahara Code: https://review.openstack.org/#/c/218638/
Sahara Image Elements Code: https://review.openstack.org/#/c/218637/

Estimated completion time: The BP is complete, and the implementation is 
complete as well. All the code is under review, and since there are 
no big changes or modifications to the code, we expect it to take only one week 
to merge.
The Benefits for this change: Provide NFS support in Sahara.
The Risk: The risk would be low for this patch, since all the functions have 
been delivered.

Thanks,
Weiting(William) Chen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best Regards,
Vitaly Gridnev
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Kevin Benton
Yeah, the only thing this will do is change the default rules that are
generated for a user's default security group. They will still be visible
via the normal security groups API, and users will be able to modify them.
On Mar 2, 2016 08:22, "Dean Troyer"  wrote:

> On Wed, Mar 2, 2016 at 10:10 AM, Fawad Khaliq  wrote:
>
>> Neutron security groups APIs should already allow discovery of what
>> default gets created. This should work or are you suggesting something
>> else?
>>
>
> So the default here for an allow all would be to include a single rule to
> do that?  That's fine, I was concerned about a config/deploy option that
> resulted in no visible change to detect...
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] Nominating Matthew Mosesohn for Fuel Library Core

2016-03-02 Thread Aleksandr Didenko
+1 for Michael Polenchuk

On Wed, Mar 2, 2016 at 5:33 PM, Fedor Zhadaev  wrote:

> +1 for Michael :)
>
> ср, 2 мар 2016, 17:50 Matthew Mosesohn :
>
>> Hi all,
>>
>> Thank you for the nominations and +1s. I would like to propose Michael
>> Polenchuk to add as a maintainer to fuel-library to take my spot when
>> I leave the maintainers list.
>>
>> Best Regards,
>> Matthew Mosesohn
>>
>> On Fri, Feb 26, 2016 at 3:54 PM, Kyrylo Galanov 
>> wrote:
>> > Finally! +2 !
>> >
>> > On Fri, Feb 26, 2016 at 9:08 PM, Roman Vyalov 
>> wrote:
>> >>
>> >> +1
>> >>
>> >> On Fri, Feb 26, 2016 at 12:31 PM, Aleksey Kasatkin
>> >>  wrote:
>> >>>
>> >>> +1
>> >>>
>> >>>
>> >>> Aleksey Kasatkin
>> >>>
>> >>>
>> >>> On Thu, Feb 25, 2016 at 11:59 PM, Sergey Vasilenko
>> >>>  wrote:
>> 
>>  +1
>> 
>> 
>>  /sv
>> 
>> 
>> 
>> 
>> __
>>  OpenStack Development Mailing List (not for usage questions)
>>  Unsubscribe:
>>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> >>>
>> >>>
>> >>>
>> >>>
>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Kind Regards,
> Fedor Zhadaev
>
> skype: zhadaevfm
> IRC: fzhadaev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][python-manilaclient] Should we really be tagging "admin" CLIs?

2016-03-02 Thread Ravi, Goutham
Sure; I meant we shouldn't leave this in the client going into Newton.

Thanks,
Goutham

From: Rodrigo Barbieri 
>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, March 2, 2016 at 11:32 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [manila][python-manilaclient] Should we really be 
tagging "admin" CLIs?

+1.

But I do not think we should necessarily do this before FF.

On Wed, Mar 2, 2016 at 1:07 PM, Ravi, Goutham 
> wrote:
Hi Manila community,

This is regarding the "bug": 
https://bugs.launchpad.net/python-manilaclient/+bug/1457155 in the 
python-manilaclient.
A commit was made for this and it merged yesterday: 
https://github.com/openstack/python-manilaclient/commit/37f2e50bd433149b893d30a478947f3e17f928e9
 
(https://review.openstack.org/264110)

I disagree with the approach in this patch; I feel this bug is invalid. 
Deployers have a way to modify policies in "policy.json", as with any other 
OpenStack project. It would be extremely confusing to see "Admin Only" 
added to certain commands that we think will be "admin only" (as defined in the 
default "policy.json"). Essentially, ANY API we build can be exposed to the 
user (or some users) or to administrators, as determined by the deployer.

IMHO, since policies can change, we shouldn't hard-code "admin only" into 
the help text. Let the manila-api service respond to a request with a 403 if 
it deems fit; it can see the policy file and work with it. That's correct 
behavior, as is.

I feel we should revert this change in Mitaka before the feature freeze.

Thoughts?

Thanks,
Goutham


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Contributor
Federal University of São Carlos

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-02 Thread Jonathan D. Proulx
On Wed, Mar 02, 2016 at 04:11:48PM +, Alexis Lee wrote:
:Walter A. Boring IV said on Mon, Feb 22, 2016 at 11:47:16AM -0800:
:> I'm trying to follow this here.   If we want all of the projects in
:> the same location to hold a design summit, then all of the
:> contributors are still going to have to do international travel,
:> which is the primary cost for attendees.
:
:My understanding is that hotel cost tends to dwarf flight cost. Capital
:city hotels tend to be (much) more expensive than provincial ones, so
:moving to less glamorous locations could noticeably reduce the total
:outlay.
:
:EG 750 flight + 300/night hotel * 5 nights = 2250
:   750 flight + 100/night hotel * 5 nights = 1250
:
:(figures are approx)

Not sure how much cost optimization is reasonable to attempt.  It is
true that hotel costs in the current arrangement have been a multiple of
flight costs, for me at least.

It's also true that hotels in secondary cities tend to be cheaper (not
sure if they're 1/3 the price, though).

If we are going to consider detailed costs, we should also consider
that flights to secondary cities are more expensive.

A semi-random pricing comparison of Manchester vs. London from Boston,
USA (I picked the dates of the Austin summit since that's about how far
ahead I book travel):

BOS->LHR $900 (nonstop) + London hotel ($250 * 5)  = $2150
BOS->MAN $1200 (1 stop) + Manchester hotel ($120 * 5 ) = $1800

So it is cheaper, but $350 on a week's travel isn't a stay-or-go choice
here.  For different city pairs and times this will all move around,
so without more detailed comparisons this is still pretty sloppy, but
I don't think the cost difference is enough to be significant.

I think available facilities and the local OpenStack community should be
larger factors in location selection than this level of travel cost
optimization.

-Jon


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][python-manilaclient] Should we really be tagging "admin" CLIs?

2016-03-02 Thread Rodrigo Barbieri
+1.

But I do not think we should necessarily do this before FF.

On Wed, Mar 2, 2016 at 1:07 PM, Ravi, Goutham 
wrote:

> Hi Manila community,
>
> This is regarding the "bug":
> https://bugs.launchpad.net/python-manilaclient/+bug/1457155 in the
> python-manilaclient.
> A commit was made for this and it merged yesterday:
> https://github.com/openstack/python-manilaclient/commit/37f2e50bd433149b893d30a478947f3e17f928e9
> (https://review.openstack.org/264110)
>
> I disagree with the approach in this patch; I feel this bug is invalid.
> Deployers have a way to modify policies in "policy.json", as with any other
> OpenStack project. It would be extremely confusing to see "Admin Only"
> added to certain commands that *we* think will be "admin only" (as
> defined in the "default" policy.json). Essentially, ANY API we build can be
> exposed to the user (or some users) or to administrators, as determined by
> the deployer.
>
> IMHO, since policies can change, we shouldn't hard-code "admin
> only" into the help text. Let the manila-api service respond to a request
> with a 403 if it deems fit; it can see the policy file and work with it.
> That's correct behavior, as is.
>
> I feel we should revert this change in Mitaka before the feature freeze.
>
> Thoughts?
>
> Thanks,
> Goutham
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Barbieri
Computer Scientist
OpenStack Manila Contributor
Federal University of São Carlos
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Fuel-Library] Nominating Matthew Mosesohn for Fuel Library Core

2016-03-02 Thread Fedor Zhadaev
+1 for Michael :)

ср, 2 мар 2016, 17:50 Matthew Mosesohn :

> Hi all,
>
> Thank you for the nominations and +1s. I would like to propose Michael
> Polenchuk to add as a maintainer to fuel-library to take my spot when
> I leave the maintainers list.
>
> Best Regards,
> Matthew Mosesohn
>
> On Fri, Feb 26, 2016 at 3:54 PM, Kyrylo Galanov 
> wrote:
> > Finally! +2 !
> >
> > On Fri, Feb 26, 2016 at 9:08 PM, Roman Vyalov 
> wrote:
> >>
> >> +1
> >>
> >> On Fri, Feb 26, 2016 at 12:31 PM, Aleksey Kasatkin
> >>  wrote:
> >>>
> >>> +1
> >>>
> >>>
> >>> Aleksey Kasatkin
> >>>
> >>>
> >>> On Thu, Feb 25, 2016 at 11:59 PM, Sergey Vasilenko
> >>>  wrote:
> 
>  +1
> 
> 
>  /sv
> 
> 
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> >>>
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Kind Regards,
Fedor Zhadaev

skype: zhadaevfm
IRC: fzhadaev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][FFE] API handler for serialized graph

2016-03-02 Thread Dmitriy Shulyak
Thanks everyone, patch was merged.

On Tue, Mar 1, 2016 at 6:22 PM, Dmitriy Shulyak 
wrote:

> Hello folks,
>
> I am not sure that i will need FFE, but in case i wont be able to land
> this patch [0] tomorrow - i would like to ask for one in advance. I will
> need FFE for 2-3 days, depends mainly on fuel-web cores availability.
>
> Merging this patch has zero user impact, and i am also using it already
> for several days to test others things (works as expected), so it can be
> considered as risk-free.
>
> 0. https://review.openstack.org/#/c/284293/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Dean Troyer
On Wed, Mar 2, 2016 at 10:10 AM, Fawad Khaliq  wrote:

> Neutron security groups APIs should already allow discovery of what
> default gets created. This should work or are you suggesting something
> else?
>

So the default here for an allow all would be to include a single rule to
do that?  That's fine, I was concerned about a config/deploy option that
resulted in no visible change to detect...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

2016-03-02 Thread Chen, Weiting
Hi,

Support for this feature differs between Sahara and Manila.
The feature puts the NetApp Hadoop NFS Connector into the Hadoop cluster so 
that Hadoop can support the NFS protocol.
It can also work with the Manila NFS driver, since Manila only needs to expose 
the NFS address from the storage side.
The Hadoop cluster can then use this connector to communicate over the NFS 
protocol directly.
For example:
/bin/hadoop jar terasort nfs://input_file nfs://output_file
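For context, a connector like this is typically wired in through Hadoop's core-site.xml, mapping the nfs:// scheme to the connector's filesystem class. The property names and class below are assumptions drawn from the NetApp connector's documentation, not something stated in this thread; verify them against the connector docs before use.

```xml
<configuration>
  <!-- Hypothetical values; check the NetApp Hadoop NFS Connector docs -->
  <property>
    <name>fs.nfs.impl</name>
    <value>org.apache.hadoop.fs.nfs.NFSv3FileSystem</value>
  </property>
  <property>
    <!-- Example NFS endpoint; replace with the address Manila exposes -->
    <name>fs.defaultFS</name>
    <value>nfs://nfs-server.example.org:2049</value>
  </property>
</configuration>
```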

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Wednesday, March 2, 2016 9:33 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [sahara]FFE Request for nfs-as-a-data-source

On 03/01/2016 07:23 PM, Chen, Weiting wrote:
> Hi all,
>
> I would like to request a FFE for the feature "nfs-as-a-data-source":
>
> BP: https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source
>
> BP Review: https://review.openstack.org/#/c/210839/

Please forgive me for not having been involved with this at all...

Wouldn't it make more sense to use Manila for this? I mean, they've got vendor 
drivers already, and this code says "Setup NetApp ..." - I imagine there are 
other NFS providers you'd want to use, no?

Or ignore me - no worries - just a mailing list driveby

> Sahara Code: https://review.openstack.org/#/c/218638/
>
> Sahara Image Elements Code: https://review.openstack.org/#/c/218637/
>
> Estimated completion time: The BP is complete and the 
> implementation is complete as well. All the code is under 
> review, and since there are no big changes to the code, we 
> expect it to take only one week to merge.
>
> The Benefits for this change: Provide NFS support in Sahara.
>
> The Risk: The risk would be low for this patch, since all the 
> functions have been delivered.
>
> Thanks,
>
> Weiting(William) Chen
>
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] Switching to external fixtures for integration Noop tests

2016-03-02 Thread Bogdan Dobrelya
An update: the task is done. The noop tests framework
now has automatic docs builds on readthedocs [0] and a Fuel infra CI
gate that tests changes against fuel-library master by running the
existing integration noop rspecs.

So the CI is now mutual: changes to the fuel library are tested against
the noop tests fixtures and vice versa. This means no manual testing
required, if one wants either update integration rspecs or related
astute.yaml fixtures.

The next step is to refactor the test coverage to reduce duplication of test
cases by binding rspecs to their corresponding fixtures one-to-many or
many-to-one instead of all-to-all. Hopefully, this will reduce the
current set of ~6000 test cases to a more reasonable amount.

[0] http://fuel-noop-fixtures.readthedocs.org/en/latest/

On 17.02.2016 15:43, Bogdan Dobrelya wrote:
> Hello,
> an update inline!
> 
> On 27.01.2016 17:37, Bogdan Dobrelya wrote:
>> On 26.01.2016 22:18, Kyrylo Galanov wrote:
>>> Hello Bogdan,
>>>
>>> I hope I am not the one of the context. Why do we separate fixtures for
>>> Noop tests from the repo?
>>> I can understand if while noop test block was carried out to a separate
>>> repo.
>>>
>>
>> I believe fixtures normally are downloaded by the rake spec_prep.
>> Developers avoid to ship fixtures with tests.
>>
>> The astute.yaml data fixtures are supposed to be external to the
>> fuel-library as that data comes from the Nailgun backend and corresponds
>> to all known deploy paths.
>>
>> Later, the generated puppet catalogs (see [0]) shall be put to the
>> fixtures repo as well - as they will contain hundreds thousands of
>> auto-generate lines and are tightly related to the astute.yaml fixtures.
>>
>> While the Noop tests framework itself indeed may be moved to another
>> separate repo (later), we should keep our integration tests [1] in the
>> fuel-library repository, which is "under test" by those tests.
> 
> Dmitry Ilyin did a great job and reworked the Fuel-library Noop Tests
> Framework. He also provided docs to describe changes for developers.
> There is a patch [0] to move the astute.yaml fixtures, noop tests docs
> and the framework itself from the fuel-library to the fuel-noop-fixtures
> repo [1].
> 
> With the patch, full run for the Noop tests job shortens from 40 minutes
> to 5 (by 8 times!) as it supports multiple rspec processes running in
> parallel. It also provides advanced test reports. Please see details in
> the docs [2]. You can read as is or build locally with tox. Later, the
> docs will go to readthedocs.org as well.
> 
> Note, there is no impact for developers and all changes are backwards
> compatible to existing noop tests and Fuel jenkins CI jobs. Later we may
> start to add new features from the reworked framework to make things
> even better. So please take a look on the patch and new docs.
> 
> PS. Noop tests gate passed for the patch, though there is CI -1 as we
> disabled non related deployment gates by the "Fuel-CI: disable" tag.
> 
> [0] https://review.openstack.org/#/c/276816/
> [1] https://git.openstack.org/cgit/openstack/fuel-noop-fixtures
> [2] https://git.openstack.org/cgit/openstack/fuel-noop-fixtures/tree/doc
> 
>>
>> [0] https://blueprints.launchpad.net/fuel/+spec/deployment-data-dryrun
>> [1]
>> https://git.openstack.org/cgit/openstack/fuel-library/tree/tests/noop/spec/hosts
>>
>>> On Tue, Jan 26, 2016 at 1:54 PM, Bogdan Dobrelya >> > wrote:
>>>
>>> We are going to switch [0] to external astute.yaml fixtures for Noop
>>> tests and remove them from the fuel-library repo as well.
>>> Please make sure all new changes to astute.yaml fixtures will be
>>> submitted now to the new location. Related mail thread [1].
>>>
>>> [0]
>>> 
>>> https://review.openstack.org/#/c/272480/1/doc/noop-guide/source/noop_fixtures.rst
>>> [1]
>>> 
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082888.html
>>>
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [manila][python-manilaclient] Should we really be tagging "admin" CLIs?

2016-03-02 Thread Ravi, Goutham
Hi Manila community,

This is regarding the "bug": 
https://bugs.launchpad.net/python-manilaclient/+bug/1457155 in the 
python-manilaclient.
A commit was made for this and it merged yesterday: 
https://github.com/openstack/python-manilaclient/commit/37f2e50bd433149b893d30a478947f3e17f928e9
 
(https://review.openstack.org/264110)

I disagree with the approach in this patch; I feel this bug is invalid. 
Deployers have a way to modify policies in "policy.json", as with any other 
OpenStack project. It would be extremely confusing to see "Admin Only" 
added to certain commands that we think will be "admin only" (as defined in the 
default "policy.json"). Essentially, ANY API we build can be exposed to the 
user (or some users) or to administrators, as determined by the deployer.

IMHO, since policies can change, we shouldn't hard-code "admin only" into 
the help text. Let the manila-api service respond to a request with a 403 if 
it deems fit; it can see the policy file and work with it. That's correct 
behavior, as is.

I feel we should revert this change in Mitaka before the feature freeze.

Thoughts?

Thanks,
Goutham
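To illustrate the point, whether a command is admin-only is decided by the deployer's policy.json, not by the client. In the hypothetical fragment below (the rule names are illustrative, not Manila's exact defaults), an API that usually ships admin-only has been relaxed to owners as well, which would make a hard-coded "Admin only" help string in the client simply wrong:

```json
{
    "admin_api": "is_admin:True",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "share:force_delete": "rule:admin_or_owner"
}
```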

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Fawad Khaliq
On Wed, Mar 2, 2016 at 7:52 AM, Dean Troyer  wrote:

> On Tue, Mar 1, 2016 at 4:52 PM, Kevin Benton  wrote:
>
>> Second, would it be acceptable to make this operator configurable? This
>> would mean users could receive different default filtering as they moved
>> between clouds.
>>
>
> If you must do this, please make it discoverable by the user/instance via
> an API somewhere, even if just in metadata/config drive.   We don't do this
> now for far too many configuration settings (any?), this seems like a good
> place to start.
>

Neutron security groups APIs should already allow discovery of what
default gets created. This should work or are you suggesting something else?
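Discovery alone still leaves the client to interpret the rules. As a sketch of the check Dean is after, a helper (illustrative, not an existing Neutron call) can classify a default group's rule list as effectively "allow all" when it contains an ingress rule with no protocol, port, or remote restriction:

```python
def is_allow_all_ingress(rules, ethertype="IPv4"):
    """Return True if any ingress rule in `rules` matches all traffic.

    `rules` is the rule list from a security-group show/list response.
    A rule is wide open when it restricts neither protocol, nor ports,
    nor the remote side (no remote_ip_prefix and no remote_group_id).
    """
    for rule in rules:
        if rule.get("direction") != "ingress":
            continue
        if rule.get("ethertype", "IPv4") != ethertype:
            continue
        if (rule.get("protocol") is None
                and rule.get("port_range_min") is None
                and rule.get("port_range_max") is None
                and rule.get("remote_ip_prefix") is None
                and rule.get("remote_group_id") is None):
            return True
    return False


if __name__ == "__main__":
    # Neutron's usual default: ingress allowed only from the group itself
    stock_default = [{"direction": "ingress", "ethertype": "IPv4",
                      "protocol": None, "port_range_min": None,
                      "port_range_max": None, "remote_ip_prefix": None,
                      "remote_group_id": "self-group-id"}]
    # An operator-configured "allow everything" default
    open_default = [{"direction": "ingress", "ethertype": "IPv4",
                     "protocol": None, "port_range_min": None,
                     "port_range_max": None, "remote_ip_prefix": None,
                     "remote_group_id": None}]
    print(is_allow_all_ingress(stock_default))  # False
    print(is_allow_all_ingress(open_default))   # True
```

So a user or tool moving between clouds could fetch the default group once and tell the two operator configurations apart without guessing.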


> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-02 Thread John Griffith
There's actually a Launchpad category for this very thing, under the
'Importance' field ('Wishlist').

On Wed, Mar 2, 2016 at 6:27 AM,  wrote:

> Thank you Yatin!
>
>
>
> *From:* yatin kumbhare [mailto:yatinkumbh...@gmail.com]
> *Sent:* Tuesday, March 1, 2016 4:43 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] Openstack Cinder - Wishlist
>
>
>
> Hi Ashraf,
>
>
>
> you can find all such information over launchpad.
>
>
>
> https://bugs.launchpad.net/cinder
>
>
>
> Regards,
>
> Yatin
>
>
>
> On Tue, Mar 1, 2016 at 4:01 PM,  wrote:
>
> Hi,
>
>
>
> Would like to know if there is a feature wish list / enhancement request for
> OpenStack Cinder, i.e. a list of features that we would like to add to
> Cinder Block Storage but that haven't been taken up for development yet.
>
> We have a couple of developers who are interested in working on OpenStack
> Cinder, hence we would like to take a look at that wish list.
>
>
>
> Thanks ,
>
> Ashraf
>
> The information contained in this electronic message and any attachments
> to this message are intended for the exclusive use of the addressee(s) and
> may contain proprietary, confidential or privileged information. If you are
> not the intended recipient, you should not disseminate, distribute or copy
> this e-mail. Please notify the sender immediately and destroy all copies of
> this message and any attachments. WARNING: Computer viruses can be
> transmitted via email. The recipient should check this email and any
> attachments for the presence of viruses. The company accepts no liability
> for any damage caused by any virus transmitted by this email.
> www.wipro.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-03-02 Thread Alexis Lee
Walter A. Boring IV said on Mon, Feb 22, 2016 at 11:47:16AM -0800:
> I'm trying to follow this here.   If we want all of the projects in
> the same location to hold a design summit, then all of the
> contributors are still going to have to do international travel,
> which is the primary cost for attendees.

My understanding is that hotel cost tends to dwarf flight cost. Capital
city hotels tend to be (much) more expensive than provincial ones, so
moving to less glamorous locations could noticeably reduce the total
outlay.

E.g. 750 flight + 300/night hotel * 5 nights = 2250
     750 flight + 100/night hotel * 5 nights = 1250

(figures are approx)
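The same back-of-envelope comparison, worked out in code (all figures are the hypothetical round numbers from above, not real prices):

```python
# Back-of-envelope travel cost comparison (hypothetical round figures).
flight = 750   # international flight, same either way
nights = 5

capital_total = flight + 300 * nights      # pricier capital-city hotel
provincial_total = flight + 100 * nights   # cheaper provincial hotel

print(capital_total, provincial_total, capital_total - provincial_total)
# → 2250 1250 1000
```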


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Boris Pavlovic
Hi,

I will try to be short.

- A voting unit test coverage job is ready, and you can use it as-is
from the Rally source code:
   you need this file
https://github.com/openstack/rally/blob/master/tests/ci/cover.sh
   and this change in tox:
https://github.com/openstack/rally/blob/master/tox.ini#L51-L52

- Rally is in the gates, and it's easy to add jobs to any project. If you
have any problems with this,
  just ping me or someone from the Rally team (or write a comment in the
openstack-rally IRC channel)

- Rally was a performance tool; however, that has changed, and we are now
more like a common testing
  framework that allows various kinds of testing (perf, volume,
stress, functional, ...)

- In Rally we have been testing all plugins with relatively small concurrency
(for more than 1.5 years already),
  and I can say that we have faced a lot of concurrency issues (and are
still facing them).
  However, I can't guarantee that we catch 100% of cases
  (though catching most issues is better than nothing)
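The wiring for the coverage job mentioned in the first point is tiny. A sketch of what the tox environment looks like, based on the linked files at the time of writing (treat the environment name and script path as approximate):

```ini
# Sketch of a coverage tox environment in the style of Rally's tox.ini.
# The threshold check itself lives in tests/ci/cover.sh, which fails the
# job if coverage drops relative to master.
[testenv:cover]
commands = {toxinidir}/tests/ci/cover.sh {posargs}
```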



Best regards,
Boris Pavlovic

On Wed, Mar 2, 2016 at 7:30 AM, Michał Dulko  wrote:

> On 03/02/2016 04:11 PM, Gorka Eguileor wrote:
> > On 02/03, Ivan Kolodyazhny wrote:
> >> Eric,
> >>
> >> There are Gorka's patches [10] to remove API Races
> >>
> >>
> >> [10]
> >>
> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
> >>
> > I looked at Rally a long time ago so apologies if I'm totally off base
> > here, but it looked like it was a performance evaluation tool, which
> > means that it probably won't help to check for API Races (at least I
> > didn't see how when I looked).
> >
> > Many of the API races only happen if you simultaneously try the same
> > operation multiple times against the same resource or if there are
> > different operations that are trying to operate on the same resource.
> >
> > On the first case if Rally allowed it we could test it because we know
> > only 1 of the operations should succeed, but on the second case when we
> > are talking about preventing races from different operations there is no
> > way to know what the result should be, since the order in which those
> > operations are executed on each test run will determine which one will
> > fail and which one will succeed.
> >
> > I'm not trying to go against the general idea of adding rally tests, I
> > just think that they won't help in the case of the API races.
>
> You're probably right - Rally would need to cache API responses to
> parallel runs, predict the result of accepted requests (these which
> haven't received VolumeIsBusy) and then verify it. In case of API race
> conditions things explode inside the stack, and not on the API response
> level. The issue is that two requests, that shouldn't ever be accepted
> together, get positive API response.
>
> I cannot say it's impossible to implement a situation like that as Rally
> resource, but definitely it seems non-trivial to verify if result is
> correct.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are weready?

2016-03-02 Thread Bryan Jones
And as for the Heat support, the resources have made Mitaka, with additional functional tests on the way soon.
 
blueprint: https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport
gerrit topic: https://review.openstack.org/#/q/topic:bp/lbaasv2-suport
BRYAN M. JONES
Software Engineer - OpenStack Development
Phone: 1-507-253-2620
E-mail: jone...@us.ibm.com
 
 
- Original message -
From: Justin Pomeroy
To: openstack-dev@lists.openstack.org
Cc:
Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?
Date: Wed, Mar 2, 2016 9:36 AM

As for the horizon support, much of it will make Mitaka. See the blueprint and gerrit topic:
https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin
On 3/2/16 9:22 AM, Doug Wiegley wrote:
Hi,
 
A few things:
 
- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard for the latter.)
- I don’t view this as a “keep or delete” question. If sufficient folks are interested in maintaining it, there is a third option, which is that the code can be maintained in a separate repo, by a separate team (with or without the current core team’s blessing.)
 
No decisions have been made yet, but we are on the cusp of some major maintenance changes, and two deprecation cycles have passed. Which path forward is being discussed at today’s Octavia meeting, or feedback is of course welcomed here, in IRC, or anywhere.
 
Thanks,
doug
 
On Mar 2, 2016, at 7:06 AM, Samuel Bercovici  wrote: 

Hi,
 
I have just noticed the following change: https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.
Is this planned for Mitaka or for Newton?
 
While LBaaS v2 is becoming the default, I think that we should have the following before we replace LBaaS v1:
1.  Horizon Support – was not able to find any real activity on it
2.  HEAT Support – will it be ready in Mitaka?
 
Do you have any other items that are needed before we get rid of LBaaS v1?
 
-Sam.
 
 
 
 
 
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Maksim Malchuk for the fuel-virtualbox-core team

2016-03-02 Thread Roman Vyalov
+1

On Wed, Mar 2, 2016 at 5:47 PM, Sergey Kulanov 
wrote:

> Hey Fuelers,
>
> Since we've successfully moved [1] virtual-box scripts from fuel-main [2]
> to
> separate fuel-virtualbox [3] git repo, I propose to update
> fuel-virtualbox-core
> team [4] by adding Maksim Malchuk. Maksim is the main contributor to these
> scripts during Mitaka release cycle [5]
>
> Fuel Cores, please vote.
>
> [1].
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086560.html
> [2]. https://github.com/openstack/fuel-main
> [3]. https://github.com/openstack/fuel-virtualbox
> [4]. https://review.openstack.org/#/admin/groups/1299,members
> [5]. https://github.com/openstack/fuel-virtualbox/commits/master
>
> --
> Sergey
> DevOps Engineer
> IRC: SergK
> Skype: Sergey_kul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-02 Thread Dean Troyer
On Tue, Mar 1, 2016 at 4:52 PM, Kevin Benton  wrote:

> Second, would it be acceptable to make this operator configurable? This
> would mean users could receive different default filtering as they moved
> between clouds.
>

If you must do this, please make it discoverable by the user/instance via
an API somewhere, even if just in metadata/config drive. We don't do this
now for far too many configuration settings (any?); this seems like a good
place to start.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Igor Marnat
Igor,
couple of points from my side.

CentOS 7.2 will be getting updates for several more months, and we have
snapshots and all the mechanics in place to switch to the next version when
needed.

Speaking of getting this update into 9.0, we actually don't need an FFE; we
can merge the remaining stuff today. It has enough reviews, so if you add your
+1 today, we don't need an FFE.

https://review.openstack.org/#/c/280338/
https://review.fuel-infra.org/#/c/17400/



Regards,
Igor Marnat

On Wed, Mar 2, 2016 at 6:23 PM, Dmitry Teselkin 
wrote:

> Igor,
>
> Your statement about updates for 7.2 isn't correct - it will receive
> updates,  because it's the latest release ATM. There is *no* pinning inside
> ISO, and the only place where it was 8.0 were docker containers just
> because we had to workaround some issues. But there are no docker
> containers in 9.0, so that's not the case.
> The proposed solution to switch to CentOS-7.2 in fact is based on
> selecting the right snapshot with packages. There is no pinning in ISO (it
> was in earlier versions of the spec but was removed).
>
> On Wed, Mar 2, 2016 at 6:11 PM, Igor Kalnitsky 
> wrote:
>
>> Dmitry, Igor,
>>
>> > Very important thing is that CentOS 7.1 which master node is based now
>> > don't get updates any longer.
>>
>> If you are using "fixed" release you must be ready that you won't get
>> any updates. So with CentOS 7.2 the problem still the same.
>>
>> However, let's wait for Fuel PTL decision. I only shared my POV:
>> that's not a critical feature, and taking into account the risks of
>> regression - I'd prefer to do not accept it in 9.0.
>>
>> Regards,
>> Igor
>>
>> On Wed, Mar 2, 2016 at 4:42 PM, Igor Marnat  wrote:
>> > Igor,
>> > please note that this is pretty much not like update of master node
>> which we
>> > had in 8.0. This is minor _update_ of CentOS from 7.1 to 7.2 which team
>> > tested for more than 2 months already.
>> >
>> > We don't expect it to require any additional efforts from core or qa
>> team.
>> >
>> > Very important thing is that CentOS 7.1 which master node is based now
>> don't
>> > get updates any longer. Updates are only provided for CentOS 7.2.
>> >
>> > So we'll have to switch CentOS 7.1 to CentOS 7.2 anyways.
>> >
>> > We can do it now for more or less free, later in release cycle for
>> higher
>> > risk and QA efforts and after the release for 2x price because of
>> additional
>> > QA cycle we'll need to pass through.
>> >
>> >
>> >
>> > Regards,
>> > Igor Marnat
>> >
>> > On Wed, Mar 2, 2016 at 2:57 PM, Dmitry Teselkin > >
>> > wrote:
>> >>
>> >> Hi Igor,
>> >>
>> >> Postponing this till Fuel 10 means we have to elaborate a plan to do
>> such
>> >> upgrade for Fuel 9 after the release - the underlying system will not
>> get
>> >> updated on it's own, and the security issues will not close
>> themselves. The
>> >> problem here is that such upgrade of deployed master node requires a
>> lot of
>> >> QA work also.
>> >>
>> >> Since we are not going to update package we build on our own (they
>> still
>> >> targeted 7.1) switching master node base to CentOS-72 is not that
>> dangerous
>> >> as it seems.
>> >>
>> >> On Wed, Mar 2, 2016 at 1:54 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Hey Dmitry,
>> >>>
>> >>> No offence, but I rather against that exception. We have too many
>> >>> things to do in Mitaka, and moving to CentOS 7.2 means
>> >>>
>> >>> * extra effort from core team
>> >>> * extra effort from qa team
>> >>>
>> >>> Moreover, it might block development by introducing unpredictable
>> >>> regressions. Remember 8.0? So I think it'd be better to reduce risk of
>> >>> regressions that affects so many developers by postponing CentOS 7.2
>> >>> till Fuel 10.
>> >>>
>> >>> Thanks,
>> >>> Igor
>> >>>
>> >>>
>> >>> On Mon, Feb 29, 2016 at 7:13 PM, Dmitry Teselkin <
>> dtesel...@mirantis.com>
>> >>> wrote:
>> >>> > I'd like to ask for a feature freeze exception for switching to
>> >>> > CentOS-7.2
>> >>> > feature [0].
>> >>> >
>> >>> > CentOS-7.2 ISO's have been tested periodically since the beginning
>> of
>> >>> > the
>> >>> > year, and all major issues were addressed / fixed at the moment.
>> During
>> >>> > the
>> >>> > last weekend I've made 70 BVT runs to verify that the  solution [2]
>> for
>> >>> > the
>> >>> > last issue - e1000 transmit unit hangs works. And it works, 0 tests
>> of
>> >>> > 35
>> >>> > failed [3].
>> >>> >
>> >>> > Benefits of switching to CentOS-7.2 are quite obvious - we will
>> return
>> >>> > to
>> >>> > latest supported CentOS release, will fix a lot of bugs / security
>> >>> > issues
>> >>> > [4] and will make further updates easier.
>> >>> >
>> >>> > Risk of regression still exists, but it's quite low, 35 successful
>> BVTs
>> >>> > can't be wrong.
>> >>> >
>> >>> > To finish that feature the following should be done:
>> >>> > * review and merge e1000 workaround [2]
>> 

Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Michał Dulko
On 03/02/2016 04:11 PM, Gorka Eguileor wrote:
> On 02/03, Ivan Kolodyazhny wrote:
>> Eric,
>>
>> There are Gorka's patches [10] to remove API Races
>>
>>
>> [10]
>> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
>>
> I looked at Rally a long time ago so apologies if I'm totally off base
> here, but it looked like it was a performance evaluation tool, which
> means that it probably won't help to check for API Races (at least I
> didn't see how when I looked).
>
> Many of the API races only happen if you simultaneously try the same
> operation multiple times against the same resource or if there are
> different operations that are trying to operate on the same resource.
>
> On the first case if Rally allowed it we could test it because we know
> only 1 of the operations should succeed, but on the second case when we
> are talking about preventing races from different operations there is no
> way to know what the result should be, since the order in which those
> operations are executed on each test run will determine which one will
> fail and which one will succeed.
>
> I'm not trying to go against the general idea of adding rally tests, I
> just think that they won't help in the case of the API races.

You're probably right - Rally would need to cache API responses to
parallel runs, predict the result of accepted requests (those which
haven't received VolumeIsBusy) and then verify it. In the case of API race
conditions, things explode inside the stack, not at the API response
level. The issue is that two requests that should never be accepted
together both get a positive API response.

I cannot say it's impossible to implement a situation like that as a Rally
resource, but it definitely seems non-trivial to verify whether the result
is correct.
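The first case described above (the same operation issued concurrently, where exactly one call should succeed) can at least be expressed as a property check. A minimal, self-contained sketch of that pattern, using plain threading rather than Rally; `FakeVolumeAPI` is an invented stand-in for a Cinder-like endpoint, not real code from either project:

```python
import threading

class FakeVolumeAPI:
    """Invented stand-in for a Cinder-like API with an atomic status change."""
    def __init__(self):
        self._lock = threading.Lock()
        self.status = "available"

    def begin_delete(self):
        # Atomic test-and-set: only one caller may move the volume from
        # 'available' to 'deleting'; everyone else is told the volume is busy.
        with self._lock:
            if self.status != "available":
                return "busy"
            self.status = "deleting"
            return "accepted"

def run_concurrent_deletes(api, workers=8):
    """Fire the same operation concurrently and collect the outcomes."""
    results = []
    results_lock = threading.Lock()

    def attempt():
        outcome = api.begin_delete()
        with results_lock:
            results.append(outcome)

    threads = [threading.Thread(target=attempt) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

results = run_concurrent_deletes(FakeVolumeAPI())
# The race-free property under test: exactly one request is accepted.
assert results.count("accepted") == 1
assert results.count("busy") == len(results) - 1
```

The second case (different operations racing on the same resource) has no such fixed expected outcome, which is exactly the difficulty raised above.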

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are we ready?

2016-03-02 Thread Justin Pomeroy
As for the horizon support, much of it will make Mitaka.  See the 
blueprint and gerrit topic:


https://blueprints.launchpad.net/horizon/+spec/horizon-lbaas-v2-ui
https://review.openstack.org/#/q/topic:bp/horizon-lbaas-v2-ui,n,z

- Justin

On 3/2/16 9:22 AM, Doug Wiegley wrote:

Hi,

A few things:

- It’s not proposed for removal in Mitaka. That patch is for Newton.
- HEAT and Horizon are planned for Mitaka (see neutron-lbaas-dashboard 
for the latter.)
- I don’t view this as a “keep or delete” question. If sufficient 
folks are interested in maintaining it, there is a third option, which 
is that the code can be maintained in a separate repo, by a separate 
team (with or without the current core team’s blessing.)


No decisions have been made yet, but we are on the cusp of some major 
maintenance changes, and two deprecation cycles have passed. Which 
path forward is being discussed at today’s Octavia meeting, or 
feedback is of course welcomed here, in IRC, or anywhere.


Thanks,
doug

On Mar 2, 2016, at 7:06 AM, Samuel Bercovici wrote:

Hi,

I have just noticed the following change: https://review.openstack.org/#/c/286381 which aims to remove LBaaS v1.

Is this planned for Mitaka or for Newton?

While LBaaS v2 is becoming the default, I think that we should have
the following before we replace LBaaS v1:

1. Horizon Support – was not able to find any real activity on it
2. HEAT Support – will it be ready in Mitaka?

Do you have any other items that are needed before we get rid of
LBaaS v1?

-Sam.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Fuel][FFE] Enable UCA repositories for deployment

2016-03-02 Thread Matthew Mosesohn
Hi all,

I would like to request a feature freeze exception for "Deploy with
UCA packages" feature.

I anticipate 2 more days to get tests green and add some depth to the
existing test.

https://blueprints.launchpad.net/fuel/+spec/deploy-with-uca-packages

The impact to BVT stability is quite small because it only touches 1
task in OpenStack deployment, and by default it is not enabled.

Open reviews:
https://review.openstack.org/#/c/281762/
https://review.openstack.org/#/c/279556/
https://review.openstack.org/#/c/279542/
https://review.openstack.org/#/c/284584/

Best Regards,
Matthew Mosesohn

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exception Request - switching to CentOS-7.2

2016-03-02 Thread Dmitry Teselkin
Igor,

Your statement about updates for 7.2 isn't correct - it will receive
updates because it's the latest release ATM. There is *no* pinning inside
the ISO; the only place where there was pinning in 8.0 was the docker
containers, just because we had to work around some issues. But there are
no docker containers in 9.0, so that's not the case.
The proposed solution to switch to CentOS-7.2 is in fact based on
selecting the right snapshot of packages. There is no pinning in the ISO
(it was in earlier versions of the spec but was removed).

On Wed, Mar 2, 2016 at 6:11 PM, Igor Kalnitsky 
wrote:

> Dmitry, Igor,
>
> > Very important thing is that CentOS 7.1 which master node is based now
> > don't get updates any longer.
>
> If you are using "fixed" release you must be ready that you won't get
> any updates. So with CentOS 7.2 the problem still the same.
>
> However, let's wait for Fuel PTL decision. I only shared my POV:
> that's not a critical feature, and taking into account the risks of
> regression - I'd prefer to do not accept it in 9.0.
>
> Regards,
> Igor
>
> On Wed, Mar 2, 2016 at 4:42 PM, Igor Marnat  wrote:
> > Igor,
> > please note that this is pretty much not like update of master node
> which we
> > had in 8.0. This is minor _update_ of CentOS from 7.1 to 7.2 which team
> > tested for more than 2 months already.
> >
> > We don't expect it to require any additional efforts from core or qa
> team.
> >
> > Very important thing is that CentOS 7.1 which master node is based now
> don't
> > get updates any longer. Updates are only provided for CentOS 7.2.
> >
> > So we'll have to switch CentOS 7.1 to CentOS 7.2 anyways.
> >
> > We can do it now for more or less free, later in release cycle for higher
> > risk and QA efforts and after the release for 2x price because of
> additional
> > QA cycle we'll need to pass through.
> >
> >
> >
> > Regards,
> > Igor Marnat
> >
> > On Wed, Mar 2, 2016 at 2:57 PM, Dmitry Teselkin 
> > wrote:
> >>
> >> Hi Igor,
> >>
> >> Postponing this till Fuel 10 means we have to elaborate a plan to do
> such
> >> upgrade for Fuel 9 after the release - the underlying system will not
> get
> >> updated on it's own, and the security issues will not close themselves.
> The
> >> problem here is that such upgrade of deployed master node requires a
> lot of
> >> QA work also.
> >>
> >> Since we are not going to update package we build on our own (they still
> >> targeted 7.1) switching master node base to CentOS-72 is not that
> dangerous
> >> as it seems.
> >>
> >> On Wed, Mar 2, 2016 at 1:54 PM, Igor Kalnitsky  >
> >> wrote:
> >>>
> >>> Hey Dmitry,
> >>>
> >>> No offence, but I rather against that exception. We have too many
> >>> things to do in Mitaka, and moving to CentOS 7.2 means
> >>>
> >>> * extra effort from core team
> >>> * extra effort from qa team
> >>>
> >>> Moreover, it might block development by introducing unpredictable
> >>> regressions. Remember 8.0? So I think it'd be better to reduce risk of
> >>> regressions that affects so many developers by postponing CentOS 7.2
> >>> till Fuel 10.
> >>>
> >>> Thanks,
> >>> Igor
> >>>
> >>>
> >>> On Mon, Feb 29, 2016 at 7:13 PM, Dmitry Teselkin <
> dtesel...@mirantis.com>
> >>> wrote:
> >>> > I'd like to ask for a feature freeze exception for switching to
> >>> > CentOS-7.2
> >>> > feature [0].
> >>> >
> >>> > CentOS-7.2 ISO's have been tested periodically since the beginning of
> >>> > the
> >>> > year, and all major issues were addressed / fixed at the moment.
> During
> >>> > the
> >>> > last weekend I've made 70 BVT runs to verify that the  solution [2]
> for
> >>> > the
> >>> > last issue - e1000 transmit unit hangs works. And it works, 0 tests
> of
> >>> > 35
> >>> > failed [3].
> >>> >
> >>> > Benefits of switching to CentOS-7.2 are quite obvious - we will
> return
> >>> > to
> >>> > latest supported CentOS release, will fix a lot of bugs / security
> >>> > issues
> >>> > [4] and will make further updates easier.
> >>> >
> >>> > Risk of regression still exists, but it's quite low, 35 successful
> BVTs
> >>> > can't be wrong.
> >>> >
> >>> > To finish that feature the following should be done:
> >>> > * review and merge e1000 workaround [2]
> >>> > * review and merge spec [0]
> >>> > * review and merge request that switches build CI to CentOS-7.2 [5]
> >>> >
> >>> > I expect the last day it could be done is March, 4.
> >>> >
> >>> > [0] https://review.openstack.org/#/c/280338/
> >>> > [1] https://bugs.launchpad.net/fuel/+bug/1526544
> >>> > [2] https://review.openstack.org/#/c/285306/
> >>> > [3]
> https://etherpad.openstack.org/p/r.1c4cfee8185326d6922d6c9321404350
> >>> > [4]
> https://etherpad.openstack.org/p/r.a7fe0b575d891ed81206765fa5be6630
> >>> > [5] https://review.fuel-infra.org/#/c/17400/
> >>> >
> >>> >
> >>> > --
> >>> > Thanks,
> >>> > Dmitry Teselkin
> >>> > Mirantis
> >>> > http://www.mirantis.com
> >>> >
> >>> >
> >>> >
> 
