Re: [openstack-dev] [puppet] weekly meeting #51

2015-09-15 Thread Emilien Macchi


On 09/14/2015 01:12 PM, Emilien Macchi wrote:
> Hello,
> 
> Here's an initial agenda for our weekly meeting, tomorrow at 1500 UTC
> in #openstack-meeting-4:
> 
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150915
> 
> Tomorrow we will do our Sprint Retrospective; please share your
> experience on the etherpad [1].
> 
> Also, feel free to add any additional items you'd like to discuss.
> If our schedule allows it, we'll do bug triage during the meeting.
> 
> Regards,
> 
> [1] https://etherpad.openstack.org/p/puppet-liberty-sprint-retrospective
> 

We had our meeting; you can read the notes here:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-09-15-15.00.html

Thanks for attending, have a great week!
-- 
Emilien Macchi





Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread John Griffith
On Tue, Sep 15, 2015 at 8:53 AM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

> Hi,
>
> Let me see if i got this:
> - running 3 (multiple) c-vols won't automatically give you failover
>
Correct.


> - each c-vol is "master" of a certain number of volumes
>
Yes.


> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
By default no, but you can configure an HA setup of multiple c-vol
services.  There are a number of folks doing this in production and there's
probably better documentation on how to achieve it, but this gives a
decent enough start:
http://docs.openstack.org/high-availability-guide/content/s-cinder-api.html
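
For concreteness, a minimal sketch of the "same name" approach that comes
up below; the host value here is made up, and note the warnings later in
this thread about the races this kind of setup can introduce:

    # /etc/cinder/cinder.conf on every controller node running c-vol
    [DEFAULT]
    # same value on all nodes, so any c-vol can service the same volumes
    host = cinder-cluster-1

After changing it, restart cinder-volume on each node.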


>
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
> of A/A - so this means i need to look into Pacemaker and virtual-ips, or i
> should try first the "same name".
>
Yes, I gathered... and to do that you need to do something like name the
backends the same and use a VIP in front of them.


>
> Thanks,
>
> Eduard
>
> PS. @Michal: Where are volumes physically in case of your driver? <-
> similar to ceph, on a distributed object storage service (whose disks can
> be anywhere even on the same compute host)
>
>


Re: [openstack-dev] [neutron][L3][QA] DVR job failure rate and maintainability

2015-09-15 Thread Ryan Moats
I couldn't have said it better, Sean.

Ryan Moats

"Sean M. Collins"  wrote on 09/14/2015 05:01:03 PM:

> From: "Sean M. Collins" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/14/2015 05:01 PM
> Subject: [openstack-dev] [neutron][L3][QA] DVR job failure rate and
> maintainability
>
> [adding neutron tag to subject and resending]
>
> Hi,
>
> Carl Baldwin, Doug Wiegley, Matt Kassawara, Ryan Moats, and myself are
> at the QA sprint in Fort Collins. Earlier today there was a discussion
> about the failure rate about the DVR job, and the possible impact that
> it is having on the gate.
>
> Ryan has a good patch up that shows the failure rates over time:
>
> https://review.openstack.org/223201
>
> To view the graphs, you go over into your neutron git repo, and open the
> .html files that are present in doc/dashboards - which should open up
> your browser and display the Graphite query.
>
> Doug put up a patch to change the DVR job to be non-voting while we
> determine the cause of the recent spikes:
>
> https://review.openstack.org/223173
>
> There was a good discussion after pushing the patch, revolving around
> the need for Neutron to have DVR, to fit operational and reliability
> requirements, and help transition away from Nova-Network by providing
> one of many solutions similar to Nova's multihost feature.  I'm skipping
> over a huge amount of context about the Nova-Network and Neutron work,
> since that is a big and ongoing effort.
>
> DVR is an important feature to have, and we need to ensure that the job
> that tests DVR has a high pass rate.
>
> One thing that I think we need is to form a group of contributors that
> can help with the DVR feature in the immediate term to fix the current
> bugs, and longer term maintain the feature. It's a big task and I don't
> believe that a single person or company can or should do it by themselves.
>
> The L3 group is a good place to start, but I think that even within the
> L3 team we need a dedicated and diverse group of people who are interested
> in maintaining the DVR feature.
>
> Without this, I think the DVR feature will start to bit-rot and that
> will have a significant impact on our ability to recommend Neutron as a
> replacement for Nova-Network in the future.
>
> --
> Sean M. Collins
>
>
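
For anyone wanting to reproduce the failure-rate graphs Sean mentions, a
hedged sketch of pulling Ryan's patch locally and opening the dashboards
(assumes git-review is installed; the exact dashboard file names may
differ):

    cd neutron
    git review -d 223201                 # fetch change 223201 into a local branch
    for f in doc/dashboards/*.html; do   # open each dashboard in the browser
        xdg-open "$f"
    done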


[openstack-dev] [Policy][Group-based-policy]

2015-09-15 Thread Sagar Pradhan
Hello,

We were exploring group-based policy for a project, but we could not find
CLI and REST API documentation for GBP.
Do we have a separate REST API for GBP that can be called directly?
From the documentation it seems that we can only use CLI, Horizon and Heat.
Please point us to the CLI or REST API documentation for GBP.


Regards,
Sagar


Re: [openstack-dev] [TripleO] Current meeting timeslot

2015-09-15 Thread Derek Higgins

On 10/09/15 15:12, Derek Higgins wrote:

Hi All,

The current meeting slot for TripleO is every second Tuesday @ 1900 UTC.
Since that time slot was chosen, a lot of people have joined the team and
others have moved on, so I'd like to revisit the timeslot to see if we can
accommodate more people at the meeting (myself included).

Sticking with Tuesday, I see two other slots available that I think will
accommodate more of the people currently working on TripleO.

Here is the etherpad [1]; can you please add your name under the time
slots that would suit you, so we can get a good idea of how a change would
affect people.


Looks like moving the meeting to 1400 UTC will best accommodate
everybody, so I've proposed a patch to change our slot:

https://review.openstack.org/#/c/223538/

In case the etherpad disappears, here are the results:

Current Slot ( 1900 UTC, Tuesdays,  biweekly)
o Suits me fine - 2 votes
o May make it sometimes - 6 votes

Proposal 1 ( 1600 UTC, Tuesdays,  biweekly)
o Suits me fine - 7 votes
o May make it sometimes - 2 votes

Proposal 2 ( 1400 UTC, Tuesdays,  biweekly)
o Suits me fine - 9 votes
o May make it sometimes - 0 votes

I can't make any of these - 0 votes

thanks,
Derek.




thanks,
Derek.


[1] - https://etherpad.openstack.org/p/SocOjvLr6o



Re: [openstack-dev] [puppet] monasca,murano,mistral governance

2015-09-15 Thread Ivan Berezovskiy
Emilien,

The puppet-murano module has a bunch of patches from Alexey Deryugin on review
[0], which implement most of the Murano deployment work.
The Murano project was added to the OpenStack namespace not so long ago, which
is why I suggest leaving murano-core rights on puppet-murano as they are until
all these patches are merged.
In any case, the murano-core team doesn't merge any patches without OpenStack
Puppet team approval.

[0] -
https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z

2015-09-15 1:01 GMT+03:00 Matt Fischer :

> Emilien,
>
> I've discussed this with some of the Monasca puppet guys here who are
> doing most of the work. I think it probably makes sense to move to that
> model now, especially since the pace of development has slowed
> substantially. One earlier blocker to having it "big tent" was the lack of
> test coverage, so as long as we know that's a work in progress...  I'd also
> like to get Brad Kiein's thoughts on this, but he's out of town this week.
> I'll ask him to reply when he is back.
>
>
> On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi 
> wrote:
>
>> Hi,
>>
>> As a reminder, Puppet modules that are part of OpenStack are documented
>> here [1].
>>
>> I can see puppet-murano & puppet-mistral Gerrit permissions different
>> from other modules, because Mirantis helped to bootstrap the module a
>> few months ago.
>>
>> I think [2] the modules should be consistent in governance and only
>> Puppet OpenStack group should be able to merge patches for these modules.
>>
>> Same question for puppet-monasca: if Monasca team wants their module
>> under the big tent, I think they'll have to change Gerrit permissions to
>> only have Puppet OpenStack able to merge patches.
>>
>> [1]
>> http://governance.openstack.org/reference/projects/puppet-openstack.html
>> [2] https://review.openstack.org/223313
>>
>> Any feedback is welcome,
>> --
>> Emilien Macchi
>>
>>
>


-- 
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis 

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46


Re: [openstack-dev] [nova][neutron][SR-IOV] Hardware changes and shifting PCI addresses

2015-09-15 Thread Chris Friesen

On 09/15/2015 02:25 AM, Daniel P. Berrange wrote:


Taking a host offline for maintenance should be considered
equivalent to throwing away the existing host and deploying a new
host. There should be zero state carry-over from the OpenStack POV,
since both software and hardware changes can potentially
invalidate previous information used by the scheduler for deploying
on that host.  The idea of recovering a previously running guest
should be explicitly unsupported.


This isn't the way the nova code is currently written though.

By default, any instances that were running on that compute node are going to 
still be in the DB as running on that compute node but in the "stopped" state. 
If you then do a "nova start", they'll try to start up on that node again.


Heck, if you enable "resume_guests_state_on_host_boot" then nova will restart 
them automatically for you on startup.
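
For reference, that flag lives in nova.conf on the compute node; a minimal
sketch:

    [DEFAULT]
    # restart guests that were running on this host before it went down
    resume_guests_state_on_host_boot = True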


To robustly do what you're talking about would require someone (nova, the 
operator, etc.) to migrate all instances off of a compute node before taking it 
down (which is currently impossible for suspended instances), and then force a 
"nova evacuate" (or maybe "nova delete") for every instance that was on a 
compute node that went down.


Chris



Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Monty Taylor

On 09/15/2015 07:09 PM, stuart.mcla...@hp.com wrote:

I've been looking into the existing task-based-upload that Doug
mentions:
can anyone clarify the following?

On a default devstack install you can do this 'task' call:

http://paste.openstack.org/show/462919


Yup. That's the one.


as an alternative to the traditional image upload (the bytes are
streamed
from the URL).

It's not clear to me if this is just an interesting example of the kind
of operator specific thing you can configure tasks to do, or a real
attempt to define an alternative way to upload images.

The change which added it [1] calls it a 'sample'.

Is it just an example, or is it a second 'official' upload path?


It's how you have to upload images on Rackspace.


Ok, so Rackspace have a task called image_import. But it seems to take
different json input than the devstack version. (A Swift container/object
rather than a URL.)

That seems to suggest that tasks really are operator specific, that there
is no standard task based upload ... and it should be ok to try
again with a clean slate.


Yes - as long as we don't use the payload as a de facto undefined API to
avoid having specific things implemented in the API, I think we're fine.


Like, if it was:

glance import-image

and that presented an interface that had a status field ... I mean, 
that's a known OpenStack pattern - it's how nova boot works.


Amongst the badness with this is:

a) It's only implemented in one cloud and at that cloud with special code
b) The interface is "send some JSON to this endpoint, and we'll infer a
special sub-API from the JSON, which is neither a published nor a versioned API"



If you want to see the
full fun:

https://github.com/openstack-infra/shade/blob/master/shade/__init__.py#L1335-L1510


Which is "I want to upload an image to an OpenStack Cloud"

I've listed it on this slide in CLI format too:

http://inaugust.com/talks/product-management/index.html#/27

It should be noted that once you create the task, you need to poll the
task with task-show, and then the image id will be in the completed
task-show output.

Monty




[openstack-dev] os-cloud-config support for custom .ssh configs

2015-09-15 Thread Lennart Regebro
Bug https://bugzilla.redhat.com/show_bug.cgi?id=1252255 is about adding the
possibility of having an .ssh directory that is not in ~/.ssh.
Currently the blocker there is os-cloud-config, which just calls ssh, and ssh
will look in the user's home directory for .ssh/config no matter what.

To solve this we would need a way to add support for custom .ssh configs in
os-cloud-config, specifically in the _perform_pki_initialization() method, so
that you can specify an ssh config file, which otherwise will default to
~/.ssh/config.

Either we can keep using ~/.ssh/config, but perform the user expansion in the
Python code. That way it will pick up $HOME, which means you can just set
$HOME first. (There is a patch linked from the bug for python-tripleoclient to
allow that.)

Or we can pass in the path to the config file as a new parameter.

In both cases the change is quite trivial.
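
Either way, the net effect on the underlying invocation is the same; a
minimal shell sketch (host and paths are made up):

    # today: ssh resolves ~/.ssh/config from the home directory itself
    ssh heat-admin@192.0.2.1 true
    # proposed: resolve the path first (honouring $HOME, or taking a
    # parameter) and pass it explicitly
    ssh -F /custom/sshdir/config heat-admin@192.0.2.1 true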

Thoughts/opinions on this?

//Lennart



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
We run several clouds where there are multiple external networks. The "just
run it on THE public network" model doesn't work. :/

I also strongly recommend to users to put VMs on a private network and use
floating IPs/load balancers, for many reasons. For example, if you don't, the
IP that gets assigned to the VM helps it become a pet: you can't replace the
VM and get the same IP. Floating IPs and load balancers can help prevent
pets. It also prevents security issues with DNS and IPs. Also, for every
floating IP/LB I have, I usually have 3x or more the number of instances that
are on the private network. Sure, it's easy to put everything on the public
network, but it provides much better security if you only put what you must
on the public network. Consider the internet: would you want to expose every
device in your house directly on the internet? No. You put them on a private
network and poke holes just for the stuff that needs them. We should be
encouraging good security practices. If we encourage bad ones, it will bite
us later when OpenStack gets a reputation for being associated with
compromises.

I do consider making things as simple as possible very important. But that
is: make them as simple as possible, but no simpler. There's a danger here of
making things too simple.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, September 15, 2015 10:02 AM
To: openstack-dev
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 
'default' network model

Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> On 15 September 2015 at 08:04, Monty Taylor  wrote:
>
> > Hey all!
> >
> > If any of you have ever gotten drunk with me, you'll know I hate floating
> > IPs more than I hate being stabbed in the face with a very angry fish.
> >
> > However, that doesn't really matter. What should matter is "what is the
> > most sane thing we can do for our users"
> >
> > As you might have seen in the glance thread, I have a bunch of OpenStack
> > public cloud accounts. Since I wrote that email this morning, I've added
> > more - so we're up to 13.
> >
> > auro
> > citycloud
> > datacentred
> > dreamhost
> > elastx
> > entercloudsuite
> > hp
> > ovh
> > rackspace
> > runabove
> > ultimum
> > unitedstack
> > vexxhost
> >
> > Of those public clouds, 5 of them require you to use a floating IP to get
> > an outbound address, the others directly attach you to the public network.
> > Most of those 8 allow you to create a private network, to boot vms on the
> > private network, and ALSO to create a router with a gateway and put
> > floating IPs on your private ip'd machines if you choose.
> >
> > Which brings me to the suggestion I'd like to make.
> >
> > Instead of having our default in devstack and our default when we talk
> > about things be "you boot a VM and you put a floating IP on it" - which
> > solves one of the two usage models - how about:
> >
> > - Cloud has a shared: True, external:routable: True neutron network. I
> > don't care what it's called  ext-net, public, whatever. the "shared" part
> > is the key, that's the part that lets someone boot a vm on it directly.
> >
> > - Each person can then make a private network, router, gateway, etc. and
> > get floating-ips from the same public network if they prefer that model.
> >
> > Are there any good reasons to not push to get all of the public networks
> > marked as "shared"?
> >
>
> The reason is simple: not every cloud deployment is the same: private is
> different from public and even within the same cloud model, the network
> topology may vary greatly.
>
> Perhaps Neutron fails in the sense that it provides you with too much
> choice, and perhaps we have to standardize on the type of networking
> profile expected by a user of OpenStack public clouds before making changes
> that would fragment this landscape even further.
>
> If you are advocating for more flexibility without limiting the existing
> one, we're only making the problem worse.

As with the Glance image upload API discussion, this is an example
of an extremely common use case that is either complex for the end
user or for which they have to know something about the deployment
in order to do it at all. The usability of an OpenStack cloud running
neutron would be enhanced greatly if there was a simple, clear, way
for the user to get a new VM with a public IP on any cloud without
multiple steps on their part. There are a lot of ways to implement
that "under the hood" (what you call "networking profile" above)
but the users don't care about "under the hood" so we should provide
a way for them to ignore it. That's *not* the same as saying we
should only support one profile. Think about the API from the use
case perspective, and build it so if there are different deployment
configurations available, the right action can be taken based on
the deployment choices made without the user providing any hints.
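
For what it's worth, the shared external network Monty proposes above is
expressible today with two CLI calls; a hedged sketch with made-up names
and CIDR:

    neutron net-create public --shared --router:external=True
    neutron subnet-create public 203.0.113.0/24 --name public-subnet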

Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Duncan Thomas
Of the two, pacemaker is far, far safer from a cinder PoV - fewer races,
fewer problematic scenarios.
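
For reference, the VIP piece of that setup is a single Pacemaker resource;
a minimal sketch with made-up addresses:

    # a virtual IP that follows the active controller
    pcs resource create cinder-vip ocf:heartbeat:IPaddr2 \
        ip=192.0.2.50 cidr_netmask=24 op monitor interval=30s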

On 15 September 2015 at 17:59, D'Angelo, Scott 
wrote:

> Eduard, Gorka has done a great job of explaining some of the issues with
> Active-Active Cinder-volume services in his blog:
>
> http://gorka.eguileor.com/
>
>
>
> TL;DR: The hacks to use the same hostname or use Pacemaker + VIP are
> dangerous because of races, and are not recommended for Enterprise
> deployments.
>
>
>
> *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> *Sent:* Tuesday, September 15, 2015 8:54 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Cinder]Behavior when one cinder-volume
> service is down
>
>
>
> Hi,
>
>
>
> Let me see if i got this:
>
> - running 3 (multiple) c-vols won't automatically give you failover
>
> - each c-vol is "master" of a certain number of volumes
>
> -- if the c-vol is "down" then those volumes cannot be managed by another
> c-vol
>
>
>
> What i'm trying to achieve is making sure ANY volume is managed
> (manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
> of A/A - so this means i need to look into Pacemaker and virtual-ips, or i
> should try first the "same name".
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> PS. @Michal: Where are volumes physically in case of your driver? <-
> similar to ceph, on a distributed object storage service (whose disks can
> be anywhere even on the same compute host)
>
>


-- 
Duncan Thomas


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread D'Angelo, Scott
I'm just not sure that you can evacuate with the c-vol service for those volumes
down. Not without the unsafe HA active-active hacks.
In our public cloud, if the c-vol service for a backend/volumes is down, we get
woken up in the middle of the night and stay at it until we get c-vol back up.
That's the only way I know of to get access to volumes that are
associated with a c-vol service: get the service back up.

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, September 15, 2015 9:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is 
down

Thanks Scott,
But the question remains: if the "hacks" are not recommended, then how can I
perform Evacuate when the c-vol service of the volumes I need evacuated is
"down", but there are two more controller nodes with c-vol services running?

Thanks,

Eduard


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Kyle Mestery
That's exactly right, and we need to get this merged early in Mitaka. We'll
discuss this in a design summit session in Tokyo in fact to ensure it's
resourced correctly and continues to address the evolving needs in this
space.

On Tue, Sep 15, 2015 at 10:47 AM, Doug Wiegley  wrote:

> Hi all,
>
> One solution to this was a neutron spec that was added for a “get me a
> network” api, championed by Jay Pipes, which would auto-assign a public
> network on vm boot. It looks like it was resource starved in Liberty,
> though:
>
> https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
>
> Thanks,
> doug
>
>
> On Sep 15, 2015, at 9:27 AM, Mike Spreitzer  wrote:
>
> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
>
> > a) an update to python-novaclient to allow a named network to be passed
> > to satisfy the "you have more than one network" - the nics argument is
> > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three cheers
> for any campaign to fix that.
>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike


[openstack-dev] [TripleO] Remove Tuskar from tripleo-common and python-tripleoclient

2015-09-15 Thread Dougal Matthews
Hi all,

This is partly a heads up for everyone, but also seeking feedback on the
direction.

We are starting to move to a more general Heat workflow without the need for
Tuskar. The CLI is already in a position to do this as we can successfully
deploy without Tuskar.

Moving forward it will be much easier for us to progress if we don't need to
take Tuskar into account in tripleo-common. This will be particularly useful
when working on the overcloud deployment library and API spec [1].

Tuskar UI doesn't currently use tripleo-common (or tripleoclient) and thus
it
is safe to make this change from the UI's point of view.

I have started the process of doing this removal and posted three WIP
reviews
[2][3][4] to assess how much change was needed, I plan to tidy them up over
the next day or two. There is one for tripleo-common, python-tripleoclient
and tripleo-docs. The documentation one only removes references to Tuskar on
the CLI and doesn't remove Tuskar totally - so Tuskar UI is still covered
until it has a suitable replacement.

I don't anticipate any impact for CI as I understand that all the current CI
has migrated from deploying with Tuskar to deploying the templates directly
(Using `openstack overcloud deploy --templates` rather than --plan). I
believe it is safe to remove from python-tripleoclient as that repo is so
new. I am, however, unsure about the TripleO deprecation policy for
tripleo-common.

Thanks,
Dougal


[1]: https://review.openstack.org/219754
[2]: https://review.openstack.org/223527
[3]: https://review.openstack.org/223535
[4]: https://review.openstack.org/223605


Re: [openstack-dev] [oslo] Help with stable/juno branches / releases

2015-09-15 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2015-08-27 08:32:01 +1000:
> On Wed, Aug 26, 2015 at 03:11:56PM -0400, Doug Hellmann wrote:
> > Tony,
> > 
> > Thanks for digging into this!
> 
> No problem.  It seemed like such a simple thing :/
> 
> > I should be able to help, but right now we're ramping up for the L3
> > feature freeze and there are a lot of release-related activities going
> > on. Can this wait a few weeks for things to settle down again?
> 
> Hi Doug,
> Of course I'd rather not wait but I understand that I've uncovered a bit of
> a mess that is stable/juno :(

I've created the branches for oslo.utils and oslotest, as requested.
There are patches up for each to update the .gitreview file, which will
make it easier to land the patches to update whatever requirements
settings need to be adjusted.

Since these are managed libraries, you can request releases by
submitting patches to the openstack/releases repository (see the
README and ping me in #openstack-relmgr-office if you need a hand
the first time, I'll be happy to walk you through it).
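
For the first such request, a sketch of what a deliverable file looks
like, as I understand the openstack/releases layout; the version and hash
below are placeholders:

    # deliverables/juno/oslo.utils.yaml
    launchpad: oslo.utils
    releases:
      - version: 1.4.1
        projects:
          - repo: openstack/oslo.utils
            hash: 0123456789abcdef0123456789abcdef01234567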

Doug

> 
> Right now I need 3 releases for oslo packages and then releases for at least 5
> other projects from stable/juno (and that after I get the various reviews
> closed out) and it's quite possible that these releases will in turn generate
> more.
> 
> I have to admit I'm questioning if it's worth it.  Not because I think it's too
> hard, but it is substantial effort to put into juno which is (in theory) going
> to be EOL'd in 6 - 10 weeks.
> 
> I feel bad for asking that question as I've pulled in favors and people have
> agreed to $things that they're not entirely comfortable with so we can fix
> this.
> 
> Is it worth discussing this at next weeks cross-project meeting?
> 
> Yours Tony.



Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Eduard Matei
Hi,

Let me see if i got this:
- running 3 (multiple) c-vols won't automatically give you failover
- each c-vol is "master" of a certain number of volumes
-- if the c-vol is "down" then those volumes cannot be managed by another
c-vol

What i'm trying to achieve is making sure ANY volume is managed
(manageable) by WHICHEVER c-vol is running (and gets the call first) - sort
of A/A - so this means i need to look into Pacemaker and virtual-ips, or i
should try first the "same name".

Thanks,

Eduard

PS. @Michal: Where are volumes physically in case of your driver? <-
similar to ceph, on a distributed object storage service (whose disks can
be anywhere even on the same compute host)


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread D'Angelo, Scott
Eduard, Gorka has done a great job of explaining some of the issues with 
Active-Active Cinder-volume services in his blog:
http://gorka.eguileor.com/

TL;DR: The hacks to use the same hostname or use Pacemaker + VIP are dangerous 
because of races, and are not recommended for Enterprise deployments.

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, September 15, 2015 8:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is 
down

Hi,

Let me see if i got this:
- running 3 (multiple) c-vols won't automatically give you failover
- each c-vol is "master" of a certain number of volumes
-- if the c-vol is "down" then those volumes cannot be managed by another c-vol

What i'm trying to achieve is making sure ANY volume is managed (manageable) by 
WHICHEVER c-vol is running (and gets the call first) - sort of A/A - so this 
means i need to look into Pacemaker and virtual-ips, or i should try first the 
"same name".

Thanks,

Eduard

PS. @Michal: Where are volumes physically in case of your driver? <- similar to 
ceph, on a distributed object storage service (whose disks can be anywhere even 
on the same compute host)


Re: [openstack-dev] [openstack-ansible] Security hardening

2015-09-15 Thread Clark, Robert Graham
Very interesting discussion.

The Security project has a published security guide that I believe would
be a very appropriate home for this content; the current guide (for
reference) is here: http://docs.openstack.org/sec/

Contributions welcome, just like any other part of the OpenStack docs :)

-Rob

On 15/09/2015 16:05, "Jeff Keopp"  wrote:

>This is a very interesting proposal and one I believe is needed.  I'm
>currently looking at hardening the controller nodes from unwanted access
>and discovered that every time the controller node is booted/rebooted, it
>flushes the iptables and writes only those rules that neutron believes
>should be there.  This behavior would render this proposal ineffective
>once the node is rebooted.
>
>So I believe neutron needs to be fixed to not flush the iptables on each
>boot, but to write the iptables to /etc/sysconfig/iptables and then
>restore them as a normal linux box should do.  It should be a good citizen
>with other processes.
>
>A sysadmin should be allowed to use whatever iptables handlers they wish
>to implement security policies and not have an OpenStack process undo what
>they have set.
>
>I should mention this is on a system using a flat network topology and
>bare metal nodes.  No VMs.
>
>--
>Jeff Keopp | Sr. Software Engineer, ES Systems.
>380 Jackson Street | St. Paul, MN 55101 | USA  | www.cray.com
>
>
>
>
>
>-Original Message-
>From: Major Hayden 
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
>Date: Monday, September 14, 2015 at 11:34
>To: "openstack-dev@lists.openstack.org"
>
>Subject: Re: [openstack-dev] [openstack-ansible] Security hardening
>
>>On 09/14/2015 03:28 AM, Jesse Pretorius wrote:
>>> I agree with Clint that this is a good approach.
>>> 
>>> If there is an automated way that we can verify the security of an
>>>installation at a reasonable/standardised level then I think we should
>>>add a gate check for it too.
>>
>>Here's a rough draft of a spec.  Feel free to throw some darts.
>>
>>  https://review.openstack.org/#/c/222619/
>>
>>--
>>Major Hayden
>>
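
A minimal sketch of the persistence cycle Jeff describes above, assuming
RHEL-family paths:

    # capture the rules that should survive a reboot
    iptables-save > /etc/sysconfig/iptables
    # on boot, restore them instead of letting an agent flush and rewrite
    iptables-restore < /etc/sysconfig/iptables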


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Eduard Matei
Thanks Scott,
But the question remains: if the "hacks" are not recommended, then how can I
perform Evacuate when the c-vol service of the volumes I need evacuated is
"down", but there are two more controller nodes with c-vol services running?

Thanks,

Eduard


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 08:27, Mike Spreitzer  wrote:

> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
>
> > a) an update to python-novaclient to allow a named network to be passed
> > to satisfy the "you have more than one network" - the nics argument is
> > still useful for more complex things
>
> I am not using the latest, but rather Juno.  I find that in many places
> the Neutron CLI insists on a UUID when a name could be used.  Three cheers
> for any campaign to fix that.


The client is not particularly tied to a specific version of the server, so
we don't have a Juno version, or a Kilo version, etc. (even though they are
aligned, see [1] for more details).

Having said that, you can use names in place of UUIDs pretty much
anywhere. If your experience says otherwise, please consider filing a bug
against the client [2] and we'll get it fixed.

Thanks,
Armando

[1] https://launchpad.net/python-neutronclient/+series
[2] https://bugs.launchpad.net/python-neutronclient/+filebug


>
>
> And, yeah, creating VMs on a shared public network is good too.
>
> Thanks,
> mike
>


[openstack-dev] [nova] Liberty-RC1: Identification of release critical bugs (urgent)

2015-09-15 Thread Markus Zoeller
We will enter the release candidates period by next week [1], which
means we have to identify the bugs which will block us from creating
a release candidate. The upcoming nova meeting on Thursday (09/17)
will discuss those. The current plan is to release RC1 on next 
Tuesday (09/22).

Everyone, and especially the subteams, is hereby asked to identify
those bugs in their area of expertise. The bug tag to use is 
"liberty-rc-potential". If there is consensus that a bug with this
tag is blocking our release candidate, it will be targeted to the
"liberty-rc1" milestone. The nova meeting from last week brought
that already to our attention [2]. We can use the liberty tracking
etherpad as usual [3]. 

Regards,
Markus Zoeller (markus_z)

References:
[1] https://wiki.openstack.org/wiki/Liberty_Release_Schedule 
[2] 
http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-09-10-21.00.log.html
[3] https://etherpad.openstack.org/p/liberty-nova-priorities-tracking




Re: [openstack-dev] [Fuel] fuel-createmirror "command not found"

2015-09-15 Thread Dmitry Borodaenko
Hi Adam,

Can you provide a bit more details, e.g. specific error messages and
logs? We have a fairly detailed checklist of things to look for when
reporting bugs about Fuel:

https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Test_and_report_bugs

Did you check the bugs about fuel-createmirror that have been fixed
between Fuel 6.1 and 7.0? Could it be that your problem is already
fixed? Did you try a more recent version of the script? If you have a
problem we haven't seen before, please file a bug report, it's the best
way to make sure everyone can benefit from your findings and our fixes:

https://bugs.launchpad.net/fuel/+filebug

Thanks,
-- 
Dmitry Borodaenko

On Tue, Sep 15, 2015 at 09:43:20AM -0700, Adam Lawson wrote:
> Hi guys,
> Is there a trick to get the fuel-createmirror command to work? A customer
> fuel environment was at 6.0, upgraded to 6.1, tried to create a local mirror
> and failed. Not working from the master node.
> 
> 
> *Adam Lawson*
> 
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072



Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 11:28, Matt Riedemann 
wrote:

>
>
> On 9/15/2015 10:27 AM, Mike Spreitzer wrote:
>
>> Monty Taylor  wrote on 09/15/2015 11:04:07 AM:
>>
>>  > a) an update to python-novaclient to allow a named network to be passed
>>  > to satisfy the "you have more than one network" - the nics argument is
>>  > still useful for more complex things
>>
>> I am not using the latest, but rather Juno.  I find that in many places
>> the Neutron CLI insists on a UUID when a name could be used.  Three
>> cheers for any campaign to fix that.
>>
>
> It's my understanding that network names in neutron, like security groups,
> are not unique, that's why you have to specify a UUID.
>

Last time I checked, that's true of any resource in OpenStack.


>
>> And, yeah, creating VMs on a shared public network is good too.
>>
>> Thanks,
>> mike
>>
>>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mohammad Banikazemi


"Fox, Kevin M"  wrote on 09/15/2015 02:57:10 PM:

> From: "Fox, Kevin M" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/15/2015 02:59 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Most projects let you specify a name, and only force you to use a
> uuid IFF there is a conflict, leaving it up to the user to decide if
> they want the ease of use of names and being careful to name things,
> or having to use uuid's and not.

That is how Neutron works as well. If it doesn't in some cases, then those
are bugs that need to be filed and fixed.

mb@ubuntu14:~$ neutron net-create x1
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 02fd014d-3a84-463f-a158-317411528ff3 |
| mtu   | 0|
| name  | x1   |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1037 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | ce56abd5661f4140a5df98927a6f54d8 |
+---+--+
mb@ubuntu14:~$ neutron net-delete x1
Deleted network: x1

mb@ubuntu14:~$ neutron net-create x1
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| db95539c-1c33-4791-a87f-608872ed3e86 |
| mtu   | 0|
| name  | x1   |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1010 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | ce56abd5661f4140a5df98927a6f54d8 |
+---+--+
mb@ubuntu14:~$ neutron net-create x1
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| b2b3dd55-0f6f-46e7-aaef-c4a89a5d1ef9 |
| mtu   | 0|
| name  | x1   |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1071 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | ce56abd5661f4140a5df98927a6f54d8 |
+---+--+
mb@ubuntu14:~$ neutron net-delete x1
Multiple network matches found for name 'x1', use an ID to be more
specific.
mb@ubuntu14:~$ neutron net-delete db95539c-1c33-4791-a87f-608872ed3e86
Deleted network: db95539c-1c33-4791-a87f-608872ed3e86
mb@ubuntu14:~$ neutron net-delete x1
Deleted network: x1
mb@ubuntu14:~$


Best,

Mohammad



>
> Neutron also has the odd wrinkle in that if your a cloud admin, it
> always gives you all the resources back in a listing rather then
> just the current tenant with a flag saying all.
>
> This 

Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Armando M.
On 15 September 2015 at 10:02, Doug Hellmann  wrote:

> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor 
> wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate
> floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of
> OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've
> added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to
> get
> > > an outbound address, the others directly attach you to the public
> network.
> > > Most of those 8 allow you to create a private network, to boot vms on
> the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called  ext-net, public, whatever. the "shared"
> part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc.
> and
> > > get floating-ips from the same public network if they prefer that
> model.
> > >
> > > Are there any good reasons to not push to get all of the public
> networks
> > > marked as "shared"?
> > >
> >
> > The reason is simple: not every cloud deployment is the same: private is
> > different from public and even within the same cloud model, the network
> > topology may vary greatly.
> >
> > Perhaps Neutron fails in the sense that it provides you with too much
> > choice, and perhaps we have to standardize on the type of networking
> > profile expected by a user of OpenStack public clouds before making
> changes
> > that would fragment this landscape even further.
> >
> > If you are advocating for more flexibility without limiting the existing
> > one, we're only making the problem worse.
>
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part.


I agree on this last statement wholeheartedly, but we gotta be careful on
how we do it, because there are implications on scalability and security.

Today Neutron provides a few network deployment models [1,2,3,4,5]. You can
mix and match, with the only caveat that this stuff must be
pre-provisioned.

Now the way I understand Monty's request is that in certain deployments
you'd like automatic provisioning. We can look into that, as we have in
blueprint [6], but we must recognize that hint-less requests can be hard to
achieve because the way the network service is provided can vary from
system to system...a lot.

Defaults are useful, but wrong defaults are worse. A system can make an
educated guess as to the user's intention; in lieu of that, an operator can
force the choice for the user, but if that is hard too, then the only
choice is to defer to the user.

So this boils down to: in light of the possible ways of providing VM
connectivity, how can we make a choice on the user's behalf? Can we assume
that he/she always wants a publicly facing VM connected to the Internet? The
answer is 'no'.


> There are a lot of ways to implement
> that "under the hood" (what you call "networking profile" above)
> but the users don't care about "under the hood" so we should provide
> a way for them to ignore it. That's *not* the same as saying we
> should only support one profile. Think about the API from the use
> case perspective, and build it so if there are different deployment
> configurations available, the right action can be taken based on
> the deployment choices made without the user providing any hints.
>

[1]
http://docs.openstack.org/havana/install-guide/install/apt/content/section_use-cases-single-flat.html
[2]

Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread John Griffith
On Tue, Sep 15, 2015 at 11:38 AM, Eric Harney  wrote:

> On 09/15/2015 01:00 PM, Chris Friesen wrote:
> > I'm currently trying to work around an issue where activating LVM
> > snapshots created through cinder takes potentially a long time.
> > (Linearly related to the amount of data that differs between the
> > original volume and the snapshot.)  On one system I tested it took about
> > one minute per 25GB of data, so the worst-case boot delay can become
> > significant.
>
Sadly, the addition of the whole activate/deactivate step has been problematic
ever since it was introduced.  I'd like to better understand why it is
needed and why the delay is so long.


> >
> > According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
> > not intended to be kept around indefinitely, they were supposed to be
> > used only until the backup was taken and then deleted.  He recommends
>
Correct, and FWIW this has also been the recommendation from Cinder's
perspective for a long time as well.  Snapshots are NOT backups and
shouldn't be treated as such.


> > using thin provisioning for long-lived snapshots due to differences in
> > how the metadata is maintained.  (He also says he's heard reports of
> > volume activation taking half an hour, which is clearly crazy when
> > instances are waiting to access their volumes.)

>
> > Given the above, is there any reason why we couldn't make thin
> > provisioning the default?
>
I tried; it was rejected.  I think it's crazy not to fix things up and do
this at this point.


> >
>
>
> My intention is to move toward thin-provisioned LVM as the default -- it
> is definitely better suited to our use of LVM.  Previously this was less
> easy, since some older Ubuntu platforms didn't support it, but in
> Liberty we added the ability to specify lvm_type = "auto" [1] to use
> thin if it is supported on the platform.
>
> The other issue preventing using thin by default is that we default the
> max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
> the reference implementation, since it means that people who deploy
> Cinder LVM on smaller storage configurations can easily fill up their
> volume group and have things grind to halt.  I think we want something
> closer to the semantics of thick LVM for the default case.
>
> We haven't thought through a reasonable migration strategy for how to
> handle that.  I'm not sure we can change the default oversubscription
> ratio without breaking deployments using other drivers.  (Maybe I'm
> wrong about this?)
>
> If we sort out that issue, I don't see any reason we can't switch over
> in Mitaka.
>
> [1] https://review.openstack.org/#/c/104653/
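
For anyone wanting to try the Liberty behaviour Eric describes, a hedged
cinder.conf sketch (the backend section name is made up):

    [lvmdriver-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    # "auto" picks thin provisioning when the platform supports it
    lvm_type = auto
    # conservative ratio; the default of 20 is the concern raised above
    max_over_subscription_ratio = 1.0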


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mohammad Banikazemi


"Fox, Kevin M"  wrote on 09/15/2015 02:00:03 PM:

> From: "Fox, Kevin M" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/15/2015 02:02 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/
>
> I also strongly recommend to users to put vms on a private network
> and use floating ip's/load balancers.


Just curious to know how many floating IPs you have in each instance of
your OpenStack cloud.

Best,

Mohammad




For many reasons. Such as, if
> you don't, the ip that gets assigned to the vm helps it become a
> pet. you can't replace the vm and get the same IP. Floating IP's and
> load balancers can help prevent pets. It also prevents security
> issues with DNS and IP's. Also, for every floating ip/lb I have, I
> usually have 3x or more the number of instances that are on the
> private network. Sure its easy to put everything on the public
> network, but it provides much better security if you only put what
> you must on the public network. Consider the internet. would you
> want to expose every device in your house directly on the internet?
> No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices.
> If we encourage bad ones, then it will bite us later when OpenStack
> gets a reputation for being associated with compromises.
>
> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.
>
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Tuesday, September 15, 2015 10:02 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor  wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to get
> > > an outbound address, the others directly attach you to the public network.
> > > Most of those 8 allow you to create a private network, to boot vms on the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called  ext-net, public, whatever. the "shared" part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc. and
> > > get floating-ips from the same public network if they prefer that model.
> > >
> > > Are there any good reasons to not push to get all of the public networks
> > > marked as "shared"?
> > >
> >
> > The reason is simple: not every cloud deployment is the same: private is
> > different from public and even within the same cloud model, the network
> > topology may vary greatly.
> >
> > Perhaps Neutron fails in the sense that it provides you with too much
> > choice, and perhaps we have to standardize on the type of networking
> > profile expected by a user of OpenStack public clouds before making changes
> > that would fragment this landscape even further.
> >
> > If you are advocating for more flexibility without limiting the existing
> > one, we're only making the problem worse.
>
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
neutron would be enhanced greatly if there was a simple, clear, way
for the user to get a new VM with a public IP on any cloud without
multiple steps on their part.

Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Nikhil Komawar
Hi Doug,

And it would be good to lock in glance_store 0.9.1 too, if it applies to
this email. (That's on PyPI.)

On 9/14/15 9:26 AM, Kuvaja, Erno wrote:
> Hi Doug,
>
> Please find python-glanceclient 1.0.1 release request 
> https://review.openstack.org/#/c/222716/
>
> - Erno
>
>> -Original Message-
>> From: Doug Hellmann [mailto:d...@doughellmann.com]
>> Sent: Monday, September 14, 2015 1:46 PM
>> To: openstack-dev
>> Subject: [openstack-dev] [all][ptl][release] final liberty cycle client 
>> library
>> releases needed
>>
>> PTLs and release liaisons,
>>
>> In order to keep the rest of our schedule for the end-of-cycle release tasks,
>> we need to have final releases for all client libraries in the next day or 
>> two.
>>
>> If you have not already submitted your final release request for this cycle,
>> please do that as soon as possible.
>>
>> If you *have* already submitted your final release request for this cycle,
>> please reply to this email and let me know that you have so I can create your
>> stable/liberty branch.
>>
>> Thanks!
>> Doug
>>
>> __
>> 
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Inspector] Finishing Liberty (was: final liberty cycle client library releases needed)

2015-09-15 Thread Dmitry Tantsur
2015-09-15 17:36 GMT+02:00 Doug Hellmann :

> Excerpts from Dmitry Tantsur's message of 2015-09-15 17:02:52 +0200:
> > Hi folks!
> >
> > As you can see below, we have to make the final release of
> > python-ironic-inspector-client really soon. We have 2 big missing parts:
> >
> > 1. Introspection rules support.
> > I'm working on it: https://review.openstack.org/#/c/223096/
> > This required a substantial requirement, so that our client does not
> > become a complete mess: https://review.openstack.org/#/c/223490/
>
> At this point in the schedule, I'm not sure it's a good idea to be
> doing anything that's considered a "substantial" rewrite (what I
> assume you meant instead of a "substantial requirement").
>

Oh, right. I can't English any more, sorry :)


>
> What depends on python-ironic-inspector-client? Are all of the things
> that depend on it working for liberty right now? If so, that's your
> liberty release and the rewrite should be considered for mitaka.
>

The only thing that has an optional dependency on inspector client is
ironic. Their interaction is well covered by gate tests, so I'm pretty
confident we're not breaking what is working now.


>
> >
> > 2. Support for getting introspection data. John (trown) volunteered to
> > do this work.
> >
> > I'd like to ask the inspector team to pay close attention to these
> > patches, as the deadline for them is Friday (preferably European time).
>
> You should definitely not be trying to write anything new at this point.
> The feature freeze was *last* week. The releases for this week are meant
> to include bug fixes and any needed requirements updates.
>

Yeah, we (and especially I) should have done a much better job managing our
schedule this cycle...

Having said that, I'm a bit worried that by marking the last release as
stable/liberty, we'll exclude the majority of Liberty features from the
client, which might make this release somewhat useless for Liberty
downstream consumers. I'm worried about downstream people (me included)
having to maintain their own stable/liberty based on the next release.
What would you advise we do?

Thanks.


>
> >
> > Next, please have a look at the milestone page for ironic-inspector
> > itself: https://launchpad.net/ironic-inspector/+milestone/2.2.0
> > There are things that require review, and there are things without an
> > assignee. If you'd like to volunteer for something there, please assign
> > it to yourself. Our deadline is next Thursday, but it would be really
> > good to finish it earlier next week to dedicate some time to testing.
> >
> > Thanks all, I'm looking forward to this release :)
> >
> >
> >  Forwarded Message 
> > Subject: Re: [openstack-dev] [all][ptl][release] final liberty cycle
> > client library releases needed
> > Date: Tue, 15 Sep 2015 10:45:45 -0400
> > From: Doug Hellmann 
> > Reply-To: OpenStack Development Mailing List (not for usage questions)
> > 
> > To: openstack-dev 
> >
> > Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> > > On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > > > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> > > >> PTLs and release liaisons,
> > > >>
> > > >> In order to keep the rest of our schedule for the end-of-cycle
> release
> > > >> tasks, we need to have final releases for all client libraries in
> the
> > > >> next day or two.
> > > >>
> > > >> If you have not already submitted your final release request for
> this
> > > >> cycle, please do that as soon as possible.
> > > >>
> > > >> If you *have* already submitted your final release request for this
> > > >> cycle, please reply to this email and let me know that you have so
> I can
> > > >> create your stable/liberty branch.
> > > >>
> > > >> Thanks!
> > > >> Doug
> > > >
> > > > I forgot to mention that we also need the constraints file in
> > > > global-requirements updated for all of the releases, so we're
> actually
> > > > testing with them in the gate. Please take a minute to check the
> version
> > > > specified in openstack/requirements/upper-constraints.txt for your
> > > > libraries and submit a patch to update it to the latest release if
> > > > necessary. I'll do a review later in the week, too, but it's easier
> to
> > > > identify the causes of test failures if we have one patch at a time.
> > >
> > > Hi Doug!
> > >
> > > When is the last and final deadline for doing all this for
> > > not-so-important and non-release:managed projects like
> ironic-inspector?
> > > We still lack some Liberty features covered in
> > > python-ironic-inspector-client. Do we have time until end of week to
> > > finish them?
> >
> > We would like for the schedule to be the same for everyone. We need the
> > final versions for all libraries this week, so we can update
> > requirements constraints by early next week before the 

[openstack-dev] [Congress] PTL candidacy

2015-09-15 Thread Tim Hinrichs
Hi all,

I’m writing to announce my candidacy for Congress PTL for the Mitaka
cycle.  I’m excited at the prospect of continuing the development of our
community, our code base, and our integrations with other projects.

This past cycle has been exciting in that we saw several new, consistent
contributors, who actively pushed code, submitted reviews, wrote specs, and
participated in the mid-cycle meet-up.  Additionally, our integration with
the rest of the OpenStack ecosystem improved with our move to running
tempest tests in the gate instead of manually or with our own CI.  The code
base matured as well, as we rounded out some of the features we added near
the end of the Kilo cycle.  We also began making the most significant
architectural change in the project’s history, in an effort to meet our
high-availability and API throughput targets.

I’m looking forward to the Mitaka cycle.  My highest priority for the code
base is completing the architectural changes that we began in Liberty.
These changes are undoubtedly the right way forward for production use
cases, but it is equally important that we make Congress easy to use and
understand for both new developers and new end users.  I also plan to
further our integration with the OpenStack ecosystem by better utilizing
the plugin architectures that are available (e.g. devstack and tempest).  I
will also work to begin (or continue) dialogues with other projects that
might benefit from consuming Congress.  Finally I’m excited to continue
working with our newest project members, helping them toward becoming core
contributors.

See you all in Tokyo!
Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mohammad Banikazemi

Thanks Kevin for your answer. My question was different. You mentioned in
your email that you run several clouds. That's why I used the word
"instance" in my question to refer to each of those clouds. So let me put
the question in a different way: in the biggest cloud you run, how many
total floating IPs do you have? Just a ballpark number would be great: 10s,
100s, 1000s, more?

Thanks,

Mohammad

"Fox, Kevin M"  wrote on 09/15/2015 03:43:45 PM:

> From: "Fox, Kevin M" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/15/2015 03:49 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> I'm not quite sure how to read your question. I think it can be
> taken multiple ways. I'll guess at what you meant though. If I
> interpreted wrong, please ask again.
>
> For the instances that have floating ip's, usually either 1 or 2.
> One of our clouds has basically a public
> network directly on the internet, and a shared private network that
> crosses tenants but is not internet facing. We can place vm's on
> either network easily by just attaching floating ip's. The private
> shared network has more floating ip's assigned then the internet one
usually.
>
> As LBaaS is maturing, we're using it more and more, putting the
> floating ips on the LB instead of the instances, and putting a pool
> of instances behind it. So our instance counts are growing faster
> then our usage of floating IP's.
>
> Thanks,
> Kevin
>
> From: Mohammad Banikazemi [m...@us.ibm.com]
> Sent: Tuesday, September 15, 2015 12:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model

> "Fox, Kevin M"  wrote on 09/15/2015 02:00:03 PM:
>
> > From: "Fox, Kevin M" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 09/15/2015 02:02 PM
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > We run several clouds where there are multiple external networks.
> > the "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network
> > and use floating ip's/load balancers.
>
>
> Just curious to know how many floating IPs you have in each instance
> of your OpenStack cloud.
>
> Best,
>
> Mohammad
>
>
>
>
> For many reasons. Such as, if
> > you don't, the ip that gets assigned to the vm helps it become a
> > pet. you can't replace the vm and get the same IP. Floating IP's and
> > load balancers can help prevent pets. It also prevents security
> > issues with DNS and IP's. Also, for every floating ip/lb I have, I
> > usually have 3x or more the number of instances that are on the
> > private network. Sure its easy to put everything on the public
> > network, but it provides much better security if you only put what
> > you must on the public network. Consider the internet. would you
> > want to expose every device in your house directly on the internet?
> > No. you put them in a private network and poke holes just for the
> > stuff that does. we should be encouraging good security practices.
> > If we encourage bad ones, then it will bite us later when OpenStack
> > gets a reputation for being associated with compromises.
> >
> > I do consider making things as simple as possible very important.
> > but that is, make them as simple as possible, but no simpler.
> > There's danger here of making things too simple.
> >
> > Thanks,
> > Kevin
> > 
> > From: Doug Hellmann [d...@doughellmann.com]
> > Sent: Tuesday, September 15, 2015 10:02 AM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > > On 15 September 2015 at 08:04, Monty Taylor  wrote:
> > >
> > > > Hey all!
> > > >
> > > > If any of you have ever gotten drunk with me, you'll know I hate floating
> > > > IPs more than I hate being stabbed in the face with a very angry fish.
> > > >
> > > > However, that doesn't really matter. What should matter is "what is the
> > > > most sane thing we can do for our users"
> > > >
> > > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > > public cloud accounts. Since I wrote that email this morning, I've added
> > > > more - so we're up to 13.
> > > >
> > > > auro
> > > > citycloud
> > > > datacentred
> > > > dreamhost
> > > > elastx
> > > > entercloudsuite
> > > > hp
> > > > ovh
> > > > rackspace
> > > > runabove
> > > > ultimum
> > > > unitedstack
> > > > vexxhost
> > > >
> > > > Of those public clouds, 5 of them require you to use a floating IP 

Re: [openstack-dev] [oslo][oslo.config] Reloading configuration of service

2015-09-15 Thread Doug Hellmann
Excerpts from mhorban's message of 2015-09-15 19:38:58 +0300:
> Hi guys,
> 
> I would like to talk about reloading the configuration of a running service.
> Right now we have the ability to reload a service's config with the SIGHUP
> signal, but SIGHUP causes just a call to conf.reload_config_files().
> As a result the configuration is updated, but services don't know about it;
> there is no way to notify them.
> I've created review https://review.openstack.org/#/c/213062/ to allow
> executing service code on a config reload event.
> A possible usage can be seen in https://review.openstack.org/#/c/223668/.
> 
> Any ideas or suggestions?
> 

Rather than building hooks into oslo.config, why don't we build them
into the thing that is catching the signal? That way the app can do lots
of things in response to a signal, and one of them might be reloading
the configuration.

Doug
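
To make that concrete, here is a minimal sketch of the app-owned approach
(assuming Python and oslo.config's existing reload_config_files(); the
callback registry is illustrative application code, not an oslo API):

import signal

from oslo_config import cfg

CONF = cfg.CONF
_reload_callbacks = []  # hypothetical app-level registry, not part of oslo


def register_reload_callback(func):
    # Parts of the app register to be told when config has been re-read.
    _reload_callbacks.append(func)


def _handle_sighup(signum, frame):
    # The application owns the signal handler; oslo.config only re-reads
    # the files when asked. Notifying interested code is up to the app.
    CONF.reload_config_files()
    for func in _reload_callbacks:
        func(CONF)


signal.signal(signal.SIGHUP, _handle_sighup)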

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
I'm not quite sure how to read your question. I think it can be taken multiple 
ways. I'll guess at what you meant though. If I interpreted wrong, please ask 
again.

For the instances that have floating ip's, usually either 1 or 2. One of our
clouds has basically a public network directly on the internet, and a shared
private network that crosses tenants but is not internet facing. We can place
vm's on either network easily by just attaching floating ip's. The private
shared network usually has more floating ip's assigned than the internet one.

As LBaaS is maturing, we're using it more and more, putting the floating ips on
the LB instead of the instances, and putting a pool of instances behind it. So
our instance counts are growing faster than our usage of floating IP's.
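
As a sketch of that pattern (assuming the LBaaS v1 API via
python-neutronclient; the credentials and IDs below are placeholders, not
anything real):

# The floating IP, and any DNS name, stick to the load balancer VIP;
# the instances behind the pool can be replaced freely (no pets).
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

PRIVATE_SUBNET_ID = '<uuid-of-private-subnet>'  # placeholder
PUBLIC_NET_ID = '<uuid-of-external-network>'    # placeholder

pool = neutron.create_pool({'pool': {
    'name': 'web-pool', 'protocol': 'HTTP',
    'lb_method': 'ROUND_ROBIN', 'subnet_id': PRIVATE_SUBNET_ID}})['pool']

# Members come and go with the instances on the private network.
for addr in ('10.0.0.11', '10.0.0.12', '10.0.0.13'):
    neutron.create_member({'member': {
        'pool_id': pool['id'], 'address': addr, 'protocol_port': 80}})

vip = neutron.create_vip({'vip': {
    'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
    'subnet_id': PRIVATE_SUBNET_ID, 'pool_id': pool['id']}})['vip']

# One floating IP for the whole pool, attached to the VIP's port.
neutron.create_floatingip({'floatingip': {
    'floating_network_id': PUBLIC_NET_ID, 'port_id': vip['port_id']}})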

Thanks,
Kevin

From: Mohammad Banikazemi [m...@us.ibm.com]
Sent: Tuesday, September 15, 2015 12:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model


"Fox, Kevin M"  wrote on 09/15/2015 02:00:03 PM:

> From: "Fox, Kevin M" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/15/2015 02:02 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> We run several clouds where there are multiple external networks.
> the "just run it in on THE public network" doesn't work. :/
>
> I also strongly recommend to users to put vms on a private network
> and use floating ip's/load balancers.


Just curious to know how many floating IPs you have in each instance of your 
OpenStack cloud.

Best,

Mohammad




For many reasons. Such as, if
> you don't, the ip that gets assigned to the vm helps it become a
> pet. you can't replace the vm and get the same IP. Floating IP's and
> load balancers can help prevent pets. It also prevents security
> issues with DNS and IP's. Also, for every floating ip/lb I have, I
> usually have 3x or more the number of instances that are on the
> private network. Sure its easy to put everything on the public
> network, but it provides much better security if you only put what
> you must on the public network. Consider the internet. would you
> want to expose every device in your house directly on the internet?
> No. you put them in a private network and poke holes just for the
> stuff that does. we should be encouraging good security practices.
> If we encourage bad ones, then it will bite us later when OpenStack
> gets a reputation for being associated with compromises.
>
> I do consider making things as simple as possible very important.
> but that is, make them as simple as possible, but no simpler.
> There's danger here of making things too simple.
>
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Tuesday, September 15, 2015 10:02 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
> > On 15 September 2015 at 08:04, Monty Taylor  wrote:
> >
> > > Hey all!
> > >
> > > If any of you have ever gotten drunk with me, you'll know I hate floating
> > > IPs more than I hate being stabbed in the face with a very angry fish.
> > >
> > > However, that doesn't really matter. What should matter is "what is the
> > > most sane thing we can do for our users"
> > >
> > > As you might have seen in the glance thread, I have a bunch of OpenStack
> > > public cloud accounts. Since I wrote that email this morning, I've added
> > > more - so we're up to 13.
> > >
> > > auro
> > > citycloud
> > > datacentred
> > > dreamhost
> > > elastx
> > > entercloudsuite
> > > hp
> > > ovh
> > > rackspace
> > > runabove
> > > ultimum
> > > unitedstack
> > > vexxhost
> > >
> > > Of those public clouds, 5 of them require you to use a floating IP to get
> > > an outbound address, the others directly attach you to the public network.
> > > Most of those 8 allow you to create a private network, to boot vms on the
> > > private network, and ALSO to create a router with a gateway and put
> > > floating IPs on your private ip'd machines if you choose.
> > >
> > > Which brings me to the suggestion I'd like to make.
> > >
> > > Instead of having our default in devstack and our default when we talk
> > > about things be "you boot a VM and you put a floating IP on it" - which
> > > solves one of the two usage models - how about:
> > >
> > > - Cloud has a shared: True, external:routable: True neutron network. I
> > > don't care what it's called  ext-net, public, whatever. the "shared" part
> > > is the key, that's the part that lets someone boot a vm on it directly.
> > >
> > > - Each person can then make a private network, router, gateway, etc. and
> 

[openstack-dev] [TripleO] PTL candidacy

2015-09-15 Thread Dan Prince
Hi TripleO,

My name is Dan Prince and I'm running for the Mitaka TripleO PTL. I've
been working on TripleO since Grizzly and OpenStack since Bexar. I care
deeply about the project and would like to continue the vision of
deploying OpenStack w/ OpenStack.

TripleO has come a long way over the past few years. I like how the
early vision within the project set the stage for how you can deploy
OpenStack with OpenStack. I also like how we are continuing to
transform the OpenStack deployment landscape by using OpenStack, with
all its API goodness, alongside great technologies like Puppet and
Docker. Using the best with the best... that is why I like TripleO.

A couple of areas I'd like to see us focus on for Mitaka:

CI: The ability to land code upstream is critical right now. We need to
continue to refine our CI workflow so that it is both faster, more
reliable, and gives us more coverage.

Upgrades: Perhaps one of the most important areas of focus is around
upgrades. We've made some progress towards minor updates but we've got
plenty of work to be able to support minor updates and full upgrades.

Composability: Better composability within the Heat templates would
make the 3rd party integration even easier and also give us better and
more flexible role flexibility. Role flexibility in particular may
become more desirable as we take steps towards supporting a more
containerized deployment model.

Validations: The extensibility of our low level network infrastructure
is very flexible. But it isn't easy to configure. I would like to see
us continue adding validations at key points to make this easier.
Additionally validations are critical for integration with any sort of
external resource like a Ceph cluster or load balancers.

Features: New network features like IPv6 support and better support for
spine/leaf deployment topologies. We are also refining how Tuskar works
with Heat and continuing to refine Tuskar UI around a common library
that can better leverage the installation workflows OpenStack requires.

Lots of exciting stuff to work on and a great team of developers who
are driving many of these goals. As PTL I would be honored to help
organize and drive forward the efforts of the team where needed.

Thanks for your consideration.

Dan Prince

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Clark Boylan
On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
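
(For reference, here is roughly what such an existing public network looks
like when created, sketched with python-neutronclient under admin
credentials. Note that the attribute Neutron actually exposes is
'router:external'; provider options vary by deployment, so they are
omitted.)

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_network({'network': {
    'name': 'public',
    'shared': True,           # lets tenants boot VMs on it directly
    'router:external': True,  # still usable as a floating IP / gateway pool
}})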
> 
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the ip that gets assigned to the vm helps it become a pet. you
> can't replace the vm and get the same IP. Floating IP's and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure its
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. would you want to expose every device in your house
> directly on the internet? No. you put them in a private network and poke
> holes just for the stuff that does. we should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist, they exist because we are running out of IPv4
addresses and NAT is everyone's preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule concerns (basically don't make pets). This
works great and is how OpenStack's git mirrors work.

It is also easy to firewall public IPs using Neutron via security groups
(and possibly the firewall service? I have never used it and don't
know). All this to say: I think it is reasonable to use public shared
networks by default, particularly since IPv6 has no concept of a floating
IP in Neutron, so using them is just odd unless you really, really need
them; and you aren't actually any less secure.
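
As a quick sketch of that last point (python-neutronclient with placeholder
credentials): lock the public-facing VM down with a security group, then
punch holes only for the services that must be reachable.

from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

sg = neutron.create_security_group({'security_group': {
    'name': 'web-public',
    'description': 'HTTPS only'}})['security_group']

# The same hole for v4 and v6; no NAT or floating IP involved.
for ethertype in ('IPv4', 'IPv6'):
    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': sg['id'], 'direction': 'ingress',
        'ethertype': ethertype, 'protocol': 'tcp',
        'port_range_min': 443, 'port_range_max': 443}})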

Not to get too off topic, but I would love it if all the devices in my
home were publicly routable. I can use my firewall to punch holes for
them, NAT is not required. Unfortunately I still have issues with IPv6
at home. Maybe one day this will be a reality :)
> 
> I do consider making things as simple as possible very important. but
> that is, make them as simple as possible, but no simpler. There's danger
> here of making things too simple.
> 
> Thanks,
> Kevin
>

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Assaf Muller
On Tue, Sep 15, 2015 at 5:09 PM, Fox, Kevin M  wrote:

> Unfortunately, I haven't had enough chance to play with ipv6 yet.
>
> I still think ipv6 with floating ip's probably makes sense though.
>
> In ipv4, the floating ip's solve one particular problem:
>
> End Users want to be able to consume a service provided by a VM. They have
> two options:
> 1. contact the ip directly
> 2. use DNS.
>
> DNS is preferable, since humans don't remember ip's very well. IPv6 is
> much harder to remember than v4 too.
>
> DNS has its own issues, mostly, its usually not very quick to get a DNS
> entry updated.  At our site (and I'm sure, others), I'm afraid to say in
> some cases it takes as long as 24 hours to get updates to happen. Even if
> that was fixed, caching can bite you too.
>

I'm curious if you tried out Designate / DNSaaS.


>
> So, when you register a DNS record, the ip that it's pointing at kind of
> becomes a piece of state. If it can't be separated from a VM, that's a bad
> thing. You can move it from VM to VM and your VM is not a pet. But, if your
> IP is allocated to the VM specifically, as non-floating IP's are, you run
> into problems if your VM dies and you have to replace it. If you're unlucky,
> it dies, and someone else gets allocated the fixed ip, and now someone
> else's server is sitting on your DNS entry! So you are very unlikely to want
> to give up your VM, turning it into a pet.
>
> I'd expect v6 usage to have the same issues.
>
> The floating ip is great in that its an abstraction of a contactable
> address, separate from any VM it may currently be bound to.
>
> You allocate a floating ip. You can then register it with DNS, and another
> tenant cannot accidentally be assigned it. You can move it from vm to vm
> until you're done with it. You can unregister it from DNS, and then it is
> safe to return to others to use.
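
That lifecycle, sketched with python-neutronclient (credentials and IDs are
placeholders; this only illustrates the pattern described above):

from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

PUBLIC_NET_ID = '<uuid-of-external-network>'  # placeholder

# Allocate once; this is the stable address you register in DNS.
fip = neutron.create_floatingip(
    {'floatingip': {'floating_network_id': PUBLIC_NET_ID}})['floatingip']

def point_at(port_id):
    # Re-point the stable address at whichever VM port is current.
    neutron.update_floatingip(fip['id'],
                              {'floatingip': {'port_id': port_id}})

point_at('<port-of-vm-a>')  # the original VM
point_at('<port-of-vm-b>')  # the replacement; the DNS entry never changes

# When done: unregister from DNS first, then release the address.
neutron.delete_floatingip(fip['id'])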
>
> To me, the NAT aspect of it is a secondary thing. Its primary importance
> is in enabling things to be more cattleish and helping with dns security.
>
> Thanks,
> Kevin
>
>
>
>
>
>
> 
> From: Clark Boylan [cboy...@sapwetik.org]
> Sent: Tuesday, September 15, 2015 1:06 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> > We run several clouds where there are multiple external networks. the
> > "just run it in on THE public network" doesn't work. :/
> Maybe this would be better expressed as "just run it on an existing
> public network" then?
> >
> > I also strongly recommend to users to put vms on a private network and
> > use floating ip's/load balancers. For many reasons. Such as, if you
> > don't, the ip that gets assigned to the vm helps it become a pet. you
> > can't replace the vm and get the same IP. Floating IP's and load
> > balancers can help prevent pets. It also prevents security issues with
> > DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> > more the number of instances that are on the private network. Sure its
> > easy to put everything on the public network, but it provides much better
> > security if you only put what you must on the public network. Consider
> > the internet. would you want to expose every device in your house
> > directly on the internet? No. you put them in a private network and poke
> > holes just for the stuff that does. we should be encouraging good
> > security practices. If we encourage bad ones, then it will bite us later
> > when OpenStack gets a reputation for being associated with compromises.
> There are a few issues with this. Neutron IPv6 does not support floating
> IPs. So now you have to use two completely different concepts for
> networking on a single dual stacked VM. IPv4 goes on a private network
> and you attach a floating IP. IPv6 is publicly routable. If security and
> DNS and not making pets were really the driving force behind floating
> IPs we would see IPv6 support them too. These aren't the reasons
> floating IPs exist, they exist because we are running out of IPv4
> addresses and NAT is everyones preferred solution to that problem. But
> that doesn't make it a good default for a cloud; use them if you are
> affected by an IP shortage.
>
> Nothing prevents you from load balancing against public IPs to address
> the DNS and firewall rule concerns (basically don't make pets). This
> works great and is how OpenStack's git mirrors work.
>
> It is also easy to firewall public IPs using Neutron via security groups
> (and possibly the firewall service? I have never used it and don't
> know). All this to say I think it is reasonable to use public shared
> networks by default particularly since IPv6 does not have any concept of
> a floating IP in Neutron so using them is just odd unless you really
> really need them and you aren't actually any less secure.
>
> Not to get too off topic, but I would 

[openstack-dev] [all] New Gerrit translations change proposals from Zanata

2015-09-15 Thread Elizabeth K. Joseph
Hi everyone,

Daisy announced to the i18n team last week[0] that we've moved to
using Zanata for translations for the Liberty cycle. Everyone should
now be using Zanata at https://translate.openstack.org/ for
translations.

We're just now finishing up the infrastructure side of things with the
switch from having Transifex submit the translations proposals to
Gerrit to having Zanata do it.

The Gerrit topic for these change proposals for all projects with
translations has been changed from "transifex/translations" to
"zanata/translations". After a test with oslo.versionedobjects last
week[1], we're moving forward this Wednesday morning UTC time to have
the jobs run so that all translations changes proposed to Gerrit are
made by Zanata.

Please let us know if you run into any problems or concerns with the
changes being proposed to your project. The infra and i18n teams will
have a look and provide help as needed.

Thanks everyone, these are exciting times for the i18n team!

[0] 
http://lists.openstack.org/pipermail/openstack-i18n/2015-September/001331.html

[1] https://review.openstack.org/#/c/222712/ note that the topic: had
not yet been updated at the time of this change, but it has now

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-15 Thread Nikesh Kumar Mahalka
Thanks Mike,
It was really a good experience working with you in kilo and liberty.



Regards
Nikesh

On Tue, Sep 15, 2015 at 1:21 PM, Silvan Kaiser  wrote:

> Thanks Mike!
> That was really demanding work!
>
> 2015-09-15 9:27 GMT+02:00 陈莹 :
>
>> Thanks Mike. Thank you for doing a great job.
>>
>>
>> > From: sxmatch1...@gmail.com
>> > Date: Tue, 15 Sep 2015 10:05:22 +0800
>> > To: openstack-dev@lists.openstack.org
>> > Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
>>
>> >
>> > Thanks Mike ! Your help is very important to me to get started in
>> > cinder and we do a lot of proud work with your leadership.
>> >
>> > 2015-09-15 6:36 GMT+08:00 John Griffith :
>> > >
>> > >
>> > > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <
>> sean.mcgin...@gmx.com>
>> > > wrote:
>> > >>
>> > >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
>> > >> > Hello all,
>> > >> >
>> > >> > I will not be running for Cinder PTL this next cycle. Each cycle I
>> ran
>> > >> > was for a reason [1][2], and the Cinder team should feel proud of
>> our
>> > >> > accomplishments:
>> > >>
>> > >> Thanks for a couple of awesome cycles Mike!
>> > >>
>> > >>
>> __
>> > >> OpenStack Development Mailing List (not for usage questions)
>> > >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > > You did a fantastic job Mike, thank you very much for the hard work
>> and
>> > > dedication.
>> > >
>> > >
>> > >
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> >
>> >
>> > --
>> > Best Wishes For You!
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
>
> --
> *Quobyte* GmbH
> Hardenbergplatz 2 - 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
Ah. Instance of Cloud, not Nova Instance. Gotcha.

The biggest currently has about 100 addresses on the public net, and maybe
about a quarter of those are allocated to instances. The shared private
network has about 200, and around 30 or 40 are used. We have a lot of big
vm's on that cloud for HPC-like workloads, so there are only around a hundred
fifty instances at present. The majority are huge, taking up a whole node.
The rest are small, infrastructure related, and a lot are HA behind load
balancers. We're using host aggregates to keep the workloads separate. Of the
non-compute VM's, I'd say there's roughly a 2x ratio between vm's without
floating ip's and those with. That number's growing as we make things more HA.

Thanks,
Kevin

From: Mohammad Banikazemi [m...@us.ibm.com]
Sent: Tuesday, September 15, 2015 1:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model


Thanks Kevin for your answer. My question was different. You mentioned in your 
email that you run several clouds. That's why I used the word "instance" in my 
question to refer to each of those clouds. So let me put the question in a 
different way: in the biggest cloud you run, how many total floating IPs do you 
have. Just a ballpark number will be great. 10s, 100s, 1000s, more?

Thanks,

Mohammad

"Fox, Kevin M"  wrote on 09/15/2015 03:43:45 PM:

> From: "Fox, Kevin M" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/15/2015 03:49 PM
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model
>
> I'm not quite sure how to read your question. I think it can be
> taken multiple ways. I'll guess at what you meant though. If I
> interpreted wrong, please ask again.
>
> For the instances that have floating ip's, usually either 1 or 2.
> One of our clouds has basically a public
> network directly on the internet, and a shared private network that
> crosses tenants but is not internet facing. We can place vm's on
> either network easily by just attaching floating ip's. The private
> shared network has more floating ip's assigned then the internet one usually.
>
> As LBaaS is maturing, we're using it more and more, putting the
> floating ips on the LB instead of the instances, and putting a pool
> of instances behind it. So our instance counts are growing faster
> then our usage of floating IP's.
>
> Thanks,
> Kevin
>
> From: Mohammad Banikazemi [m...@us.ibm.com]
> Sent: Tuesday, September 15, 2015 12:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> 'default' network model

> "Fox, Kevin M"  wrote on 09/15/2015 02:00:03 PM:
>
> > From: "Fox, Kevin M" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: 09/15/2015 02:02 PM
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed
> > 'default' network model
> >
> > We run several clouds where there are multiple external networks.
> > the "just run it in on THE public network" doesn't work. :/
> >
> > I also strongly recommend to users to put vms on a private network
> > and use floating ip's/load balancers.
>
>
> Just curious to know how many floating IPs you have in each instance
> of your OpenStack cloud.
>
> Best,
>
> Mohammad
>
>
>
>
> For many reasons. Such as, if
> > you don't, the ip that gets assigned to the vm helps it become a
> > pet. you can't replace the vm and get the same IP. Floating IP's and
> > load balancers can help prevent pets. It also prevents security
> > issues with DNS and IP's. Also, for every floating ip/lb I have, I
> > usually have 3x or more the number of instances that are on the
> > private network. Sure its easy to put everything on the public
> > network, but it provides much better security if you only put what
> > you must on the public network. Consider the internet. would you
> > want to expose every device in your house directly on the internet?
> > No. you put them in a private network and poke holes just for the
> > stuff that does. we should be encouraging good security practices.
> > If we encourage bad ones, then it will bite us later when OpenStack
> > gets a reputation for being associated with compromises.
> >
> > I do consider making things as simple as possible very important.
> > but that is, make them as simple as possible, but no simpler.
> > There's danger here of making things too simple.
> >
> > Thanks,
> > Kevin
> > 
> > From: Doug Hellmann [d...@doughellmann.com]
> > Sent: Tuesday, September 15, 2015 10:02 AM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [nova][neutron][devstack] 

[openstack-dev] [Mistral] Mistral PTL Candidacy

2015-09-15 Thread Renat Akhmerov
Hi,

My name is Renat Akhmerov. I decided to run for Mistral PTL for the Mitaka
release cycle.

This is the first time I'm doing it officially, after Mistral was accepted
into the Big Tent. In fact, I've been driving the project for a little less
than 2 years by now, and I've put a lot of my energy into this initiative by
designing its architecture, coding (including the initial PoC versions),
reviewing nearly every single patch coming in, and presenting Mistral at
every conference that I could, including OpenStack summits. Of course, I
wasn't doing it alone, and I can't find enough words to express how thankful
I am to folks from Mirantis, StackStorm, Huawei, Alcatel-Lucent, HP, Ericsson
and other companies. You have all done great work and I'm proud to be a part
of such a great team.

Although a lot has been done, and we certainly have achievements in the form
of users who run Mistral in production, there's a lot more ahead. Below is
what I think we need to focus on during the Mitaka cycle.

HA and maturity

Making Mistral a truly stable and mature technology capable of running in HA
mode. I have to admit that so far we haven't been paying enough attention to
high-load testing and tuning, and my belief is that it's high time we started
doing it. Some of the issues are known to us, and we know how we should be
fixing them. Some have yet to be discovered. In my opinion, what we're
missing now is a comprehensive understanding of, believe it or not, how
Mistral works :) This may sound strange, but what I really mean is that we
need to know in very fine detail how every Mistral transaction works in
terms of potential race conditions, isolation level, concurrency model, etc.
In my strong opinion, this is a prerequisite for everything else. Having said
that, I am going to bring more expertise onto the project to fill this gap,
either by attracting the right people or by planning more time for the
current team members to work on that.

Apart from that, I find it very important to stop developing too many new
features in the workflow engine and do a proper refactoring of it. In my
strong opinion, the Mistral engine has started suffering from having more and
more functionality squeezed into it. That's generally normal, but I believe
we need to simplify the code base by cleaning it up wisely while at the same
time improving test coverage, accounting for all kinds of corner cases and
negative scenarios.

Use cases

This is probably the trickiest part of this project, and I believe I
personally should have done a much better job of clearly explaining Mistral's
value to the industry. I plan to change the situation drastically by
providing battle-proven scenarios where it's hard or nearly impossible to
avoid using Mistral. Recording screencasts and writing cookbooks is also part
of the plan.

UI

Thanks to the engineers from Huawei and Alcatel-Lucent, who did a good job in
Liberty of moving the Mistral UI to a much better state. Most of the basic
CRUD functionality is there, and this work keeps going on. However, I still
see a lot of ways to advance the Mistral UI and make it really remarkable.
For example, one specific thing that I'd really like to work on is workflow
graph visualisation (for both editing and monitoring running workflows).

I find this particularly important because having a good UI would help us
build an even larger community around the project, simply because it would be
easier to deliver a message of what the project's goal is.

What else

Other things that I'd like to pay attention to:
- Solving the guest VM access problem
- A more intelligent task scheduling mechanism accounting for workflow
  priorities (FIFO, but at the workflow level)
- A new REST API (not to be confused with the DSL, or workflow language), on
  which we've almost agreed within the team
- Significantly improving the CLI so that it becomes truly convenient and fun
  to use

Ultimately, my goal is to build a really useful and beautiful technology.

Renat Akhmerov
@ Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][fuel-library] modules managed with librarian round 2

2015-09-15 Thread Alex Schultz
Hello!

So after our first round of librarian changes for 7.0, it is time to start
switching to upstream for more changes.  We've had a few updates during the
fuel meetings over the last month[0][1].  I have begun to prepare
additional reviews to move modules.

The current modules available for migration are:

memcached - https://review.openstack.org/#/c/217383/
sysctl - https://review.openstack.org/#/c/221945/
staging - https://review.openstack.org/#/c/222350/
vcsrepo - https://review.openstack.org/#/c/222355/
postgresql - https://review.openstack.org/#/c/222368/


Just as an FYI, in addition to these modules I have started work on the
rsyslog module, which was a very old version of the module with only a few
minor customizations. Since we leverage the rsyslog module within our
openstack composition layer module, I have also taken some time to put
together a patch[2] with some unit tests for the openstack module in
fuel-library, since what was there has been disabled[3] for some time and
doesn't function. The patch[4] with the move to an upstream version of
rsyslog is out there as well if anyone is interested in taking a look. I'm
going to do some additional testing around these two patches, to ensure we
don't break any syslog functionality by switching, before they are merged.

Thanks,
-Alex

[0]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-08-27-16.00.html
[1]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-09-03-16.00.html
[2] https://review.openstack.org/#/c/223395/
[3]
https://github.com/stackforge/fuel-library/blob/master/utils/jenkins/modules.disable_rspec#L29
[4] https://review.openstack.org/#/c/222758/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mike Spreitzer
"Armando M."  wrote on 09/15/2015 03:50:24 PM:

> On 15 September 2015 at 10:02, Doug Hellmann  wrote:
> Excerpts from Armando M.'s message of 2015-09-15 09:30:35 -0700:
...
> As with the Glance image upload API discussion, this is an example
> of an extremely common use case that is either complex for the end
> user or for which they have to know something about the deployment
> in order to do it at all. The usability of an OpenStack cloud running
> neutron would be enhanced greatly if there was a simple, clear, way
> for the user to get a new VM with a public IP on any cloud without
> multiple steps on their part. 

<>

...
> 
> So this boils down to: in light of the possible ways of providing VM
> connectivity, how can we make a choice on the user's behalf? Can we 
> assume that he/she always want a publicly facing VM connected to 
> Internet? The answer is 'no'.

While it may be true that in some deployments there is no good way for the 
code to choose, I think that is not the end of the story here.  The 
motivation to do this is that in *some* deployments there *is* a good way 
for the code to figure out what to do.

Regards,
Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [docs][ptl] Docs PTL Candidacy

2015-09-15 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 15/09/15 20:06, Christian Berendt wrote:
> On 09/14/2015 04:31 AM, Lana Brindley wrote:
>> I'd love to have your support for the PTL role for Mitaka, and
>> I'm looking forward to continuing to grow the documentation
>> team.
> 
> You have my support. Thanks for your great work during the current
> cycle.
> 
> Christian.
> 

It's so lovely to wake up and see this. Thank you, Christian. And
thank you for all that you do, too :)

L

- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJV+IiBAAoJELppzVb4+KUyANIH/0nh3R5HskdjFsFpdxT6pXI5
PuQf0t8YiMXYUxNaLXQL4o11BVXlaHdI3AWOSq/YswIOSB5vrOUT0o17j1+RrJPx
MjOiuaDT7VOBjNAXv3q7qbFM2qBt+o9n2iVX5rosgTLEPFRj/hGsVFIc8xjhJnV+
PCSOs/ZvkOCtSJ2+pYDV9pd7eWJ9Lx7ts3sDapovZeSn4vEooLdrE9q5QxLUHLkb
KnzGe+oLgvlgKZDSCtdogCNKyogJzTVokzgfwm27oZXqq9o9pRHxsw4vAI/6aRWc
cM8hxihlBbNV+3/LhbSDAAEGhhA6TxzSKP3dLTnJ71F4kPkpJ0CDl7QYHlUkppg=
=hRD2
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Mike Spreitzer
Clark Boylan  wrote on 09/15/2015 04:06:26 PM:

> > I also strongly recommend to users to put vms on a private network and
> > use floating ip's/load balancers. For many reasons. Such as, if you
> > don't, the ip that gets assigned to the vm helps it become a pet. you
> > can't replace the vm and get the same IP. Floating IP's and load
> > balancers can help prevent pets. It also prevents security issues with
> > DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> > more the number of instances that are on the private network. Sure its
> > easy to put everything on the public network, but it provides much better
> > security if you only put what you must on the public network. Consider
> > the internet. would you want to expose every device in your house
> > directly on the internet? No. you put them in a private network and poke
> > holes just for the stuff that does. we should be encouraging good
> > security practices. If we encourage bad ones, then it will bite us later
> > when OpenStack gets a reputation for being associated with compromises.
> There are a few issues with this. Neutron IPv6 does not support floating
> IPs. So now you have to use two completely different concepts for
> networking on a single dual stacked VM. IPv4 goes on a private network
> and you attach a floating IP. IPv6 is publicly routable. If security and
> DNS and not making pets were really the driving force behind floating
> IPs we would see IPv6 support them too. These aren't the reasons
> floating IPs exist, they exist because we are running out of IPv4
> addresses and NAT is everyones preferred solution to that problem. But
> that doesn't make it a good default for a cloud; use them if you are
> affected by an IP shortage.
> 
> Nothing prevents you from load balancing against public IPs to address
> the DNS and firewall rule concerns (basically don't make pets). This
> works great and is how OpenStack's git mirrors work.
> 
> It is also easy to firewall public IPs using Neutron via security groups
> (and possibly the firewall service? I have never used it and don't
> know). All this to say I think it is reasonable to use public shared
> networks by default particularly since IPv6 does not have any concept of
> a floating IP in Neutron so using them is just odd unless you really
> really need them and you aren't actually any less secure.

I'm really glad to see the IPv6 front opened.

But I have to say that the analysis of options for securing public 
addresses omits one case that I think is important: using an external (to 
Neutron) "appliance".  In my environment this is more or less required. 
This reinforces the bifurcation of addresses that was mentioned: some VMs 
are private and do not need any service from the external appliance, while 
others have addresses that need the external appliance on the 
public/private path.

In fact, for this reason, I have taken to using two "external" networks
(from Neutron's point of view): one whose addresses are handled by the
external appliance and one whose addresses are not. Both address ranges are
on the same VLAN. This is FYI; some people have wondered why these things
might be done.

> Not to get too off topic, but I would love it if all the devices in my
> home were publicly routable. I can use my firewall to punch holes for
> them, NAT is not required. Unfortunately I still have issues with IPv6
> at home. Maybe one day this will be a reality :)

Frankly, given the propensity for bugs to be discovered, I am glad that
nothing in my home is accessible from the outside (aside from the device
that does the firewalling, and I worry about that too). Not that this is
really germane to what we want to do for internet-accessible
applications/services.

Regards,
Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2015-09-15 Thread Nikesh Kumar Mahalka
Thanks Sean, Vote +1.

On Tue, Sep 15, 2015 at 8:36 AM, hao wang  wrote:

> Thanks Sean, Vote +1.
>
> 2015-09-14 22:49 GMT+08:00 Sean McGinnis :
> > Hello everyone,
> >
> > I'm announcing my candidacy for Cinder PTL for the Mitaka release.
> >
> > The Cinder team has made great progress. We've not only grown the
> > number of supported backend drivers, but we've made significant
> > improvements to the core code and raised the quality of existing
> > and incoming code contributions. While there are still many things
> > that need more polish, we are headed in the right direction and
> > block storage is a strong, stable component to many OpenStack clouds.
> >
> > Mike and John have provided the leadership to get the project where
> > it is today. I would like to keep that momentum going.
> >
> > I've spent over a decade finding new and interesting ways to create
> > and delete volumes. I also work across many different product teams
> > and have had a lot of experience collaborating with groups to find
> > a balance between the work being done to best benefit all involved.
> >
> > I think I can use this experience to foster collaboration both within
> > the Cinder team as well as between Cinder and other related projects
> > that interact with storage services.
> >
> > Some topics I would like to see focused on for the Mitaka release
> > would be:
> >
> >  * Complete work of making the Cinder code Python3 compatible.
> >  * Complete conversion to objects.
> >  * Sort out object inheritance and appropriate use of ABC.
> >  * Continued stabilization of third party CI.
> >  * Make sure there is a good core feature set regardless of backend type.
> >  * Reevaluate our deadlines to make sure core feature work gets enough
> >time and allows drivers to implement support.
> >
> > While there are some things I think we need to do to move the project
> > forward, I am mostly open to the needs of the community as a whole
> > and making sure that what we are doing is benefiting OpenStack and
> > making it a simpler, easy to use, and ubiquitous platform for the
> > cloud.
> >
> > Thank you for your consideration!
> >
> > Sean McGinnis (smcginnis)
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Best Wishes For You!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2015-09-15 Thread Duncan Thomas
Voting is done by formal ballot just before the summit. All Cinder ATCs
will be invited to vote. Voting on the mailing list is just noise.

On 15 September 2015 at 23:40, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> Thanks Sean, Vote +1.
>
> On Tue, Sep 15, 2015 at 8:36 AM, hao wang  wrote:
>
>> Thanks Sean, Vote +1.
>>
>> 2015-09-14 22:49 GMT+08:00 Sean McGinnis :
>> > Hello everyone,
>> >
>> > I'm announcing my candidacy for Cinder PTL for the Mitaka release.
>> >
>> > The Cinder team has made great progress. We've not only grown the
>> > number of supported backend drivers, but we've made significant
>> > improvements to the core code and raised the quality of existing
>> > and incoming code contributions. While there are still many things
>> > that need more polish, we are headed in the right direction and
>> > block storage is a strong, stable component to many OpenStack clouds.
>> >
>> > Mike and John have provided the leadership to get the project where
>> > it is today. I would like to keep that momentum going.
>> >
>> > I've spent over a decade finding new and interesting ways to create
>> > and delete volumes. I also work across many different product teams
>> > and have had a lot of experience collaborating with groups to find
>> > a balance between the work being done to best benefit all involved.
>> >
>> > I think I can use this experience to foster collaboration both within
>> > the Cinder team as well as between Cinder and other related projects
>> > that interact with storage services.
>> >
>> > Some topics I would like to see focused on for the Mitaka release
>> > would be:
>> >
>> >  * Complete work of making the Cinder code Python3 compatible.
>> >  * Complete conversion to objects.
>> >  * Sort out object inheritance and appropriate use of ABC.
>> >  * Continued stabilization of third party CI.
>> >  * Make sure there is a good core feature set regardless of backend
>> type.
>> >  * Reevaluate our deadlines to make sure core feature work gets enough
>> >time and allows drivers to implement support.
>> >
>> > While there are some things I think we need to do to move the project
>> > forward, I am mostly open to the needs of the community as a whole
>> > and making sure that what we are doing is benefiting OpenStack and
>> > making it a simpler, easy to use, and ubiquitous platform for the
>> > cloud.
>> >
>> > Thank you for your consideration!
>> >
>> > Sean McGinnis (smcginnis)
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Best Wishes For You!
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] LVM snapshot performance issue -- why isn't thin provisioning the default?

2015-09-15 Thread yang, xing
Hi Eric,

Regarding the default max_over_subscription_ratio, I initially set the
default to 1 while working on oversubscription, and changed it to 2 after
getting review comments.  After it was merged, I got feedback that 2 is
too small and 20 is more appropriate, so I changed it to 20.  So it looks
like we can't find a default value that makes everyone happy.

If we can decide what is the best default value for LVM, we can change the
default max_over_subscription_ratio, but we should also allow other
drivers to specify a different config option if a different default value
is more appropriate for them.
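
For illustration, the knobs involved in cinder.conf today look roughly
like this (the values are only an example, not a recommendation):

  [lvmdriver-1]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  lvm_type = auto
  max_over_subscription_ratio = 20.0

A per-driver default would mean the LVM driver could ship a safer ratio
without changing what the other backends advertise.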

Thanks,
Xing


On 9/15/15, 1:38 PM, "Eric Harney"  wrote:

>On 09/15/2015 01:00 PM, Chris Friesen wrote:
>> I'm currently trying to work around an issue where activating LVM
>> snapshots created through cinder takes potentially a long time.
>> (Linearly related to the amount of data that differs between the
>> original volume and the snapshot.)  On one system I tested it took about
>> one minute per 25GB of data, so the worst-case boot delay can become
>> significant.
>> 
>> According to Zdenek Kabelac on the LVM mailing list, LVM snapshots were
>> not intended to be kept around indefinitely, they were supposed to be
>> used only until the backup was taken and then deleted.  He recommends
>> using thin provisioning for long-lived snapshots due to differences in
>> how the metadata is maintained.  (He also says he's heard reports of
>> volume activation taking half an hour, which is clearly crazy when
>> instances are waiting to access their volumes.)
>> 
>> Given the above, is there any reason why we couldn't make thin
>> provisioning the default?
>> 
>
>
>My intention is to move toward thin-provisioned LVM as the default -- it
>is definitely better suited to our use of LVM.  Previously this was less
>easy, since some older Ubuntu platforms didn't support it, but in
>Liberty we added the ability to specify lvm_type = "auto" [1] to use
>thin if it is supported on the platform.
>
>The other issue preventing using thin by default is that we default the
>max oversubscription ratio to 20.  IMO that isn't a safe thing to do for
>the reference implementation, since it means that people who deploy
>Cinder LVM on smaller storage configurations can easily fill up their
>volume group and have things grind to a halt.  I think we want something
>closer to the semantics of thick LVM for the default case.
>
>We haven't thought through a reasonable migration strategy for how to
>handle that.  I'm not sure we can change the default oversubscription
>ratio without breaking deployments using other drivers.  (Maybe I'm
>wrong about this?)
>
>If we sort out that issue, I don't see any reason we can't switch over
>in Mitaka.
>
>[1] https://review.openstack.org/#/c/104653/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' network model

2015-09-15 Thread Fox, Kevin M
Unfortunately, I haven't had enough chance to play with ipv6 yet.

I still think ipv6 with floating ip's probably makes sense though.

In ipv4, the floating ip's solve one particular problem:

End Users want to be able to consume a service provided by a VM. They have two 
options:
1. contact the ip directly
2. use DNS.

DNS is preferable, since humans don't remember IPs very well. IPv6 is much 
harder to remember than v4 too.

DNS has its own issues; mostly, it's usually not very quick to get a DNS entry 
updated.  At our site (and I'm sure, others), I'm afraid to say in some cases 
it takes as long as 24 hours to get updates to happen. Even if that was fixed, 
caching can bite you too.

So, when you register a DNS record, the IP that it's pointing at kind of 
becomes a piece of state. If it can't be separated from a VM, that's a bad 
thing. If you can move it from VM to VM, your VM is not a pet. But if your IP 
is allocated to the VM specifically, as non-floating IPs are, you run into 
problems when your VM dies and you have to replace it. If you're unlucky, it 
dies, someone else gets allocated the fixed IP, and now someone else's server 
is sitting on your DNS entry! So you are very unlikely to want to give up your 
VM, turning it into a pet.

I'd expect v6 usage to have the same issues.

The floating IP is great in that it's an abstraction of a contactable address, 
separate from any VM it may currently be bound to.

You allocate a floating IP. You can then register it with DNS, and another 
tenant cannot accidentally be assigned it. You can move it from VM to VM 
until you're done with it. You can unregister it from DNS, and then it is safe to 
return to others to use.

To me, the NAT aspect of it is a secondary thing. Its primary importance is in 
enabling things to be more cattleish and helping with DNS security.
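
To make that workflow concrete, it's something like this with the neutron
CLI (the network name and IDs here are made up):

  $ neutron floatingip-create ext-net        # reserve the stable address
  $ neutron floatingip-associate FIP_ID PORT_ID_OF_CURRENT_VM
  ... register the floating IP in DNS ...
  ... VM dies, replacement is booted ...
  $ neutron floatingip-associate FIP_ID PORT_ID_OF_NEW_VM

The DNS entry never has to change, and nobody else can be handed the
address while you hold it.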

Thanks,
Kevin







From: Clark Boylan [cboy...@sapwetik.org]
Sent: Tuesday, September 15, 2015 1:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron][devstack] New proposed 'default' 
network model

On Tue, Sep 15, 2015, at 11:00 AM, Fox, Kevin M wrote:
> We run several clouds where there are multiple external networks. the
> "just run it in on THE public network" doesn't work. :/
Maybe this would be better expressed as "just run it on an existing
public network" then?
>
> I also strongly recommend to users to put vms on a private network and
> use floating ip's/load balancers. For many reasons. Such as, if you
> don't, the IP that gets assigned to the VM helps it become a pet. You
> can't replace the VM and get the same IP. Floating IPs and load
> balancers can help prevent pets. It also prevents security issues with
> DNS and IP's. Also, for every floating ip/lb I have, I usually have 3x or
> more the number of instances that are on the private network. Sure its
> easy to put everything on the public network, but it provides much better
> security if you only put what you must on the public network. Consider
> the internet. Would you want to expose every device in your house
> directly on the internet? No, you put them in a private network and poke
> holes just for the stuff that does need it. We should be encouraging good
> security practices. If we encourage bad ones, then it will bite us later
> when OpenStack gets a reputation for being associated with compromises.
There are a few issues with this. Neutron IPv6 does not support floating
IPs. So now you have to use two completely different concepts for
networking on a single dual stacked VM. IPv4 goes on a private network
and you attach a floating IP. IPv6 is publicly routable. If security and
DNS and not making pets were really the driving force behind floating
IPs we would see IPv6 support them too. These aren't the reasons
floating IPs exist; they exist because we are running out of IPv4
addresses and NAT is everyone's preferred solution to that problem. But
that doesn't make it a good default for a cloud; use them if you are
affected by an IP shortage.

Nothing prevents you from load balancing against public IPs to address
the DNS and firewall rule concerns (basically don't make pets). This
works great and is how OpenStack's git mirrors work.

It is also easy to firewall public IPs using Neutron via security groups
(and possibly the firewall service? I have never used it and don't
know). All this to say I think it is reasonable to use public shared
networks by default, particularly since IPv6 does not have any concept of
a floating IP in Neutron; using them is just odd unless you really
really need them, and you aren't actually any less secure.
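
For example, locking a publicly routable instance down to just HTTPS is
one rule per address family (the group name is whatever you use):

  $ neutron security-group-rule-create --direction ingress --protocol tcp \
      --port-range-min 443 --port-range-max 443 mygroup
  $ neutron security-group-rule-create --ethertype IPv6 --direction ingress \
      --protocol tcp --port-range-min 443 --port-range-max 443 mygroup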

Not to get too off topic, but I would love it if all the devices in my
home were publicly routable. I can use my firewall to punch holes for
them, NAT is not required. Unfortunately I still have issues with IPv6
at home. Maybe one day this will be a reality :)
>
> I do consider making things as simple as possible very 

[openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Eduard Matei
Hi,

This all started when we were testing Evacuate with our storage driver.
We thought we found a bug (https://bugs.launchpad.net/cinder/+bug/1491276)
then Scott replied that we should be running cinder-volume service separate
from nova-compute.
For some internal reasons we can't do that - yet, but we have some
questions regarding the behavior of the service:

- on our original test setup we have 3 nodes (1 controller + compute +
cinder, 2 compute + cinder).
-- when we shutdown the second node and tried to evacuate, the call was
routed to cinder-volume of the halted node instead of going to other nodes
(there were still 2 cinder-volume services up) - WHY?
- on the new planned setup we will have 6 nodes (3 dedicated controller +
cinder-volume, 3 compute)
-- in this case which cinder-volume will manage which volume on which
compute node?
-- what if: one compute node and one controller go down - will the Evacuate
still work if one of the cinder-volume services is down? How can we tell -
for sure - that this setup will work in case ANY 1 controller and 1 compute
nodes go down?

Hypothetical:
- if 3 dedicated controller + cinder-volume nodes work can perform evacuate
when one of them is down (at the same time with one compute), WHY can't the
same 3 nodes perform evacuate when compute services is running on the same
nodes (so 1 cinder is down and 1 compute)
- if the answer to the above question is "They can't", then what is the purpose
of running 3 cinder-volume services if they can't handle one failure?
- and if the answer to the above question is "You only run one cinder-volume",
then how can it handle failure of the controller node?

Thanks,

Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Dmitry Tantsur

On 09/14/2015 04:18 PM, Doug Hellmann wrote:

Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:

PTLs and release liaisons,

In order to keep the rest of our schedule for the end-of-cycle release
tasks, we need to have final releases for all client libraries in the
next day or two.

If you have not already submitted your final release request for this
cycle, please do that as soon as possible.

If you *have* already submitted your final release request for this
cycle, please reply to this email and let me know that you have so I can
create your stable/liberty branch.

Thanks!
Doug


I forgot to mention that we also need the constraints file in
global-requirements updated for all of the releases, so we're actually
testing with them in the gate. Please take a minute to check the version
specified in openstack/requirements/upper-constraints.txt for your
libraries and submit a patch to update it to the latest release if
necessary. I'll do a review later in the week, too, but it's easier to
identify the causes of test failures if we have one patch at a time.


Hi Doug!

When is the last and final deadline for doing all this for 
not-so-important and non-release:managed projects like ironic-inspector? 
We still lack some Liberty features covered in 
python-ironic-inspector-client. Do we have time until the end of the week 
to finish them?


Sorry if you hear this question too often :)

Thanks!



Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread Dulko, Michal
> From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> Sent: Tuesday, September 15, 2015 4:04 PM
> 
> Hi,
> 
> This all started when we were testing Evacuate with our storage driver.
> We thought we found a bug
> (https://bugs.launchpad.net/cinder/+bug/1491276) then Scott replied that
> we should be running cinder-volume service separate from nova-compute.
> For some internal reasons we can't do that - yet, but we have some
> questions regarding the behavior of the service:
> 
> - on our original test setup we have 3 nodes (1 controller + compute + cinder,
> 2 compute + cinder).
> -- when we shutdown the second node and tried to evacuate, the call was
> routed to cinder-volume of the halted node instead of going to other nodes
> (there were still 2 cinder-volume services up) - WHY?

Cinder assumes that each c-vol can control only volumes which were scheduled 
onto it. As volume services are differentiated by hostname, a known workaround 
is to set the same value for the host option in cinder.conf on each of the 
c-vols. This will make the c-vols listen on the same queue. You may however 
encounter race conditions when running such a configuration in an 
Active/Active manner. The generally recommended approach is to use Pacemaker 
and run such c-vols in Active/Passive mode. Also expect that the scheduler's 
decision will generally be ignored, as all the nodes are listening on the 
same queue.
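
i.e. something like this on every node (the value itself is arbitrary, it 
just has to match everywhere):

  # cinder.conf
  [DEFAULT]
  host = cinder-cluster-1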

> - on the new planned setup we will have 6 nodes (3 dedicated controller +
> cinder-volume, 3 compute)
> -- in this case which cinder-volume will manage which volume on which
> compute node?

Same situation - a volume will be controlled by c-vol which created it.

> -- what if: one compute node and one controller go down - will the Evacuate
> still work if one of the cinder-volume services is down? How can we tell - for
> sure - that this setup will work in case ANY 1 controller and 1 compute nodes
> go down?

The best idea is, I think, to use c-vol + Pacemaker in an A/P manner. 
Pacemaker will make sure that on failure a new c-vol is spun up. Where do 
volumes live physically in the case of your driver? Is it like the LVM driver 
(the volume lies on the node which is running c-vol) or Ceph (Ceph takes care 
of where the volume lands physically; c-vol is just a proxy)?
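
As a rough sketch of the Pacemaker side (the exact resource agent and unit 
name depend on your distro; this assumes a systemd unit called 
openstack-cinder-volume):

  $ pcs resource create cinder-volume systemd:openstack-cinder-volume

so that exactly one node runs the service and Pacemaker brings it up 
elsewhere on failure.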

> 
> Hypothetical:
> - if 3 dedicated controller + cinder-volume nodes work can perform evacuate
> when one of them is down (at the same time with one compute), WHY can't
> the same 3 nodes perform evacuate when compute services is running on
> the same nodes (so 1 cinder is down and 1 compute)

I think I've explained that.

> - if the answer to above question is "They can't " then what is the purpose of
> running 3 cinder-volume services if they can't handle one failure?

Running 3 c-vols is beneficial if you have multiple backends or use the LVM driver.

> - and if the answer to above question is "You only run one cinder-volume"
> then how can it handle failure of controller node?

I've explained that too. There are efforts in the community to make it possible 
to run c-vol in A/A, but I don't think there's an ETA yet.

> 
> Thanks,
> 
> Eduard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Denis Dmitriev for fuel-qa(devops) core

2015-09-15 Thread Tatyana Leontovich
+1

Regards,
Tatyana

On Tue, Sep 15, 2015 at 12:16 PM, Alexander Kostrikov <
akostri...@mirantis.com> wrote:

> +1
>
> On Mon, Sep 14, 2015 at 10:19 PM, Anastasia Urlapova <
> aurlap...@mirantis.com> wrote:
>
>> Folks,
>> I would like to nominate Denis Dmitriev[1] for fuel-qa/fuel-devops core.
>>
>> Denis spent three months in the Fuel BugFix team; his velocity was between
>> 150-200% per week. Thanks to his efforts we have overcome those old issues
>> with time sync and Ceph's clock skew. Denis's ideas constantly help us to
>> improve our functional system suite.
>>
>> Fuelers, please vote for Denis!
>>
>> Nastya.
>>
>> [1]
>> http://stackalytics.com/?user_id=ddmitriev=all_type=all=fuel-qa
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostri...@mirantis.com 
>
> *www.mirantis.com *
> *www.mirantis.ru *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][versionedobjects][ceilometer] explain the benefits of ceilometer+versionedobjects

2015-09-15 Thread gord chung



On 03/09/2015 4:02 PM, Dan Smith wrote:

- do we need to migrate the db to some handle different set of
>attributes and what happens for nosql dbs?

No, Nova made no schema changes as a result of moving to objects.

so i don't really understand how this part works. if i have two 
collectors -- one collector writes v1 schema, one writes v2 schema -- 
how do they both write to the same db without anything changing in the 
db? presumably the db would only be configured to know how to store only 
one version?


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ceilometer M Midcycle

2015-09-15 Thread Jason Myers

Hello Everyone,
We are setting up a few polls to determine the possibility of 
meeting face to face for a ceilometer midcycle in Dublin, IE. We'd like 
to gather for three days to discuss all the work we are currently doing; 
however, we have access to space for 5 so you could also use that space 
for co working outside of the meeting dates.  We have two date polls: 
one for Nov 30-Dec 18 at http://doodle.com/poll/hmukqwzvq7b54cef, and 
one for Jan 11-22 at http://doodle.com/poll/kbkmk5v2vass249i.  You can 
vote for any of the days in there that work for you.  If we don't get 
enough interest in either poll, we will do a virtual midcycle like we 
did last year.  Please vote for your favorite days in the two polls if 
you are interested in attending in person. If we don't get many votes, 
we'll circulate another poll for the virtual dates.


Cheers,
Jason Myers
--
Sent from Postbox 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Doug Hellmann
Excerpts from Kuvaja, Erno's message of 2015-09-15 09:43:26 +:
> > -Original Message-
> > From: Doug Hellmann [mailto:d...@doughellmann.com]
> > Sent: Monday, September 14, 2015 5:40 PM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > 
> > Excerpts from Kuvaja, Erno's message of 2015-09-14 15:02:59 +:
> > > > -Original Message-
> > > > From: Flavio Percoco [mailto:fla...@redhat.com]
> > > > Sent: Monday, September 14, 2015 1:41 PM
> > > > To: OpenStack Development Mailing List (not for usage questions)
> > > > Subject: Re: [openstack-dev] [glance] proposed priorities for Mitaka
> > > >
> > > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > > >
> > > > >I. DefCore
> > > > >
> > > > >The primary issue that attracted my attention was the fact that
> > > > >DefCore cannot currently include an image upload API in its
> > > > >interoperability test suite, and therefore we do not have a way to
> > > > >ensure interoperability between clouds for users or for trademark
> > > > >use. The DefCore process has been long, and at times confusing,
> > > > >even to those of us following it sort of closely. It's not entirely
> > > > >surprising that some projects haven't been following the whole
> > > > >time, or aren't aware of exactly what the whole thing means. I have
> > > > >proposed a cross-project summit session for the Mitaka summit to
> > > > >address this need for communication more broadly, but I'll try to
> > summarize a bit here.
> > > >
> > >
> > > Looking at how different OpenStack-based public clouds limit or fully
> > > prevent their users from uploading images to their deployments, I'm not
> > > convinced the Image Upload should be included in this definition.
> > 
> > The problem with that approach is that it means end consumers of those
> > clouds cannot write common tools that include image uploads, which is a
> > frequently used/desired feature. What makes that feature so special that we
> > don't care about it for interoperability?
> > 
> 
> I'm not sure it really is so special, API- or technical-wise; it's just the 
> one that was lifted onto the pedestal in this discussion.

OK. I'm concerned that my message of "we need an interoperable image
upload API" is sometimes being met with various versions of "that's not
possible." I think that's wrong, and we should fix it. I also think it's
possible to make the API consistent and still support background tasks,
image scanning, and other things deployers want.

> 
> > > >
> > > > The task upload process you're referring to is the one that uses the
> > > > `import` task, which allows you to download an image from an
> > > > external source, asynchronously, and import it in Glance. This is
> > > > the old `copy-from` behavior that was moved into a task.
> > > >
> > > > The "fun" thing about this - and I'm sure other folks in the Glance
> > > > community will disagree - is that I don't consider tasks to be a
> > > > public API. That is to say, I would expect tasks to be an internal
> > > > API used by cloud admins to perform some actions (based on its
> > > > current implementation). Eventually, some of these tasks could be
> > > > triggered from the external API but as background operations that
> > > > are triggered by the well-known public ones and not through the task
> > API.
> > > >
> > > > Ultimately, I believe end-users of the cloud simply shouldn't care
> > > > about what tasks are or aren't and more importantly, as you
> > > > mentioned later in the email, tasks make clouds not interoperable.
> > > > I'd be pissed if my public image service would ask me to learn about 
> > > > tasks
> > to be able to use the service.
> > >
> > > I'd like to bring another argument here. I think our Public Images API 
> > > should
> > behave consistently regardless if there is tasks enabled in the deployment 
> > or
> > not and with what plugins. This meaning that _if_ we expect glance upload
> > work over the POST API and that endpoint is available in the deployment I
> > would expect a) my image hash to match with the one the cloud returns b)
> > I'd assume all or none of the clouds rejecting my image if it gets flagged 
> > by
> > Vendor X virus definitions and c) it being bootable across the clouds taken 
> > it's
> > in supported format. On the other hand if I get told by the vendor that I 
> > need
> > to use cloud specific task that accepts only ova compliant image packages 
> > and
> > that the image will be checked before acceptance, my expectations are quite
> > different and I would expect all that happening outside of the standard API
> > as it's not consistent behavior.
> > 
> > I'm not sure what you're arguing. Is it not possible to have a background
> > process import an image without modifying it?
> 
> Absolutely not my point, sorry. What I was trying to say here is that I'd 
> rather have multiple ways of getting images into the cloud I use if that 
> means I can predict exactly how those 

Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-09-14 17:06:44 -0700:
> Excerpts from Doug Hellmann's message of 2015-09-14 13:46:16 -0700:
> > Excerpts from Clint Byrum's message of 2015-09-14 13:25:43 -0700:
> > > Excerpts from Doug Hellmann's message of 2015-09-14 12:51:24 -0700:
> > > > Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> > > > > On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> > > > > >
> > > > > >After having some conversations with folks at the Ops Midcycle a
> > > > > >few weeks ago, and observing some of the more recent email threads
> > > > > >related to glance, glance-store, the client, and the API, I spent
> > > > > >last week contacting a few of you individually to learn more about
> > > > > >some of the issues confronting the Glance team. I had some very
> > > > > >frank, but I think constructive, conversations with all of you about
> > > > > >the issues as you see them. As promised, this is the public email
> > > > > >thread to discuss what I found, and to see if we can agree on what
> > > > > >the Glance team should be focusing on going into the Mitaka summit
> > > > > >and development cycle and how the rest of the community can support
> > > > > >you in those efforts.
> > > > > >
> > > > > >I apologize for the length of this email, but there's a lot to go
> > > > > >over. I've identified 2 high priority items that I think are critical
> > > > > >for the team to be focusing on starting right away in order to use
> > > > > >the upcoming summit time effectively. I will also describe several
> > > > > >other issues that need to be addressed but that are less immediately
> > > > > >critical. First the high priority items:
> > > > > >
> > > > > >1. Resolve the situation preventing the DefCore committee from
> > > > > >   including image upload capabilities in the tests used for 
> > > > > > trademark
> > > > > >   and interoperability validation.
> > > > > >
> > > > > >2. Follow through on the original commitment of the project to
> > > > > >   provide an image API by completing the integration work with
> > > > > >   nova and cinder to ensure V2 API adoption.
> > > > > 
> > > > > Hi Doug,
> > > > > 
> > > > > First and foremost, I'd like to thank you for taking the time to dig
> > > > > into these issues, and for reaching out to the community seeking for
> > > > > information and a better understanding of what the real issues are. I
> > > > > can imagine how much time you had to dedicate on this and I'm glad you
> > > > > did.
> > > > > 
> > > > > Now, to your email, I very much agree with the priorities you
> > > > > mentioned above and I'd like for, whomever will win Glance's PTL
> > > > > election, to bring focus back on that.
> > > > > 
> > > > > Please, find some comments in-line for each point:
> > > > > 
> > > > > >
> > > > > >I. DefCore
> > > > > >
> > > > > >The primary issue that attracted my attention was the fact that
> > > > > >DefCore cannot currently include an image upload API in its
> > > > > >interoperability test suite, and therefore we do not have a way to
> > > > > >ensure interoperability between clouds for users or for trademark
> > > > > >use. The DefCore process has been long, and at times confusing,
> > > > > >even to those of us following it sort of closely. It's not entirely
> > > > > >surprising that some projects haven't been following the whole time,
> > > > > >or aren't aware of exactly what the whole thing means. I have
> > > > > >proposed a cross-project summit session for the Mitaka summit to
> > > > > >address this need for communication more broadly, but I'll try to
> > > > > >summarize a bit here.
> > > > > 
> > > > > +1
> > > > > 
> > > > > I think it's quite sad that some projects, especially those considered
> > > > > to be part of the `starter-kit:compute`[0], don't follow closely
> > > > > what's going on in DefCore. I personally consider this a task PTLs
> > > > > should incorporate in their role duties. I'm glad you proposed such
> > > > > session, I hope it'll help raising awareness of this effort and it'll
> > > > > help moving things forward on that front.
> > > > 
> > > > Until fairly recently a lot of the discussion was around process
> > > > and priorities for the DefCore committee. Now that those things are
> > > > settled, and we have some approved policies, it's time to engage
> > > > more fully.  I'll be working during Mitaka to improve the two-way
> > > > communication.
> > > > 
> > > > > 
> > > > > >
> > > > > >DefCore is using automated tests, combined with business policies,
> > > > > >to build a set of criteria for allowing trademark use. One of the
> > > > > >goals of that process is to ensure that all OpenStack deployments
> > > > > >are interoperable, so that users who write programs that talk to
> > > > > >one cloud can use the same program with another cloud easily. This
> > > > > >is a *REST API* level of compatibility. We cannot insert 
> > > > > >cloud-specific
> > > > > >behavior into our client 

Re: [openstack-dev] [openstack-ansible] PTL Non-Candidacy

2015-09-15 Thread Major Hayden
On 09/14/2015 04:02 PM, Kevin Carter wrote:
> TL;DR - I'm sending this out to announce that I won't be running for PTL of 
> the OpenStack-Ansible project in the upcoming cycle. Although I won't be 
> running for PTL, with community support, I intend to remain an active 
> contributor just with more time spent more cross project and in other 
> upstream communities.

I've only been working on the project for a short while, but I really 
appreciate your hard work and consideration!

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Current meeting timeslot

2015-09-15 Thread Derek Higgins



On 15/09/15 12:38, Derek Higgins wrote:

On 10/09/15 15:12, Derek Higgins wrote:

Hi All,

The current meeting slot for TripleO is every second Tuesday @ 1900 UTC,
since that time slot was chosen a lot of people have joined the team and
others have moved on, I like to revisit the timeslot to see if we can
accommodate more people at the meeting (myself included).

Sticking with Tuesday I see two other slots available that I think will
accommodate more people currently working on TripleO,

Here is the etherpad[1], can you please add your name under the time
slots that would suit you so we can get a good idea how a change would
effect people


Looks like moving the meeting to 1400 UTC will best accommodate
everybody, I've proposed a patch to change our slot

https://review.openstack.org/#/c/223538/


This has merged, so as of next Tuesday the TripleO meeting will be at 
1400 UTC


Hope to see ye there



In case the etherpad disappears here was the results

Current Slot ( 1900 UTC, Tuesdays,  biweekly)
o Suits me fine - 2 votes
o May make it sometimes - 6 votes

Proposal 1 ( 1600 UTC, Tuesdays,  biweekly)
o Suits me fine - 7 votes
o May make it sometimes - 2 votes

Proposal 2 ( 1400 UTC, Tuesdays,  biweekly)
o Suits me fine - 9 votes
o May make it sometimes - 0 votes

I can't make any of these - 0 votes

thanks,
Derek.




thanks,
Derek.


[1] - https://etherpad.openstack.org/p/SocOjvLr6o

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread stuart . mclaren


After having some conversations with folks at the Ops Midcycle a
few weeks ago, and observing some of the more recent email threads
related to glance, glance-store, the client, and the API, I spent
last week contacting a few of you individually to learn more about
some of the issues confronting the Glance team. I had some very
frank, but I think constructive, conversations with all of you about
the issues as you see them. As promised, this is the public email
thread to discuss what I found, and to see if we can agree on what
the Glance team should be focusing on going into the Mitaka summit
and development cycle and how the rest of the community can support
you in those efforts.


Doug, thanks for reaching out here.

I've been looking into the existing task-based-upload that Doug mentions:
can anyone clarify the following?

On a default devstack install you can do this 'task' call:

http://paste.openstack.org/show/462919

as an alternative to the traditional image upload (the bytes are streamed
from the URL).
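
For anyone who can't open the paste, the request is roughly of this shape
(reconstructed from memory, details may differ):

  POST /v2/tasks
  {"type": "import",
   "input": {"import_from": "http://example.com/cirros.qcow2",
             "import_from_format": "qcow2",
             "image_properties": {"disk_format": "qcow2",
                                  "container_format": "bare"}}}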

It's not clear to me if this is just an interesting example of the kind
of operator specific thing you can configure tasks to do, or a real
attempt to define an alternative way to upload images.

The change which added it [1] calls it a 'sample'.

Is it just an example, or is it a second 'official' upload path?

-Stuart

[1] https://review.openstack.org/#/c/44355

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] monasca,murano,mistral governance

2015-09-15 Thread Emilien Macchi


On 09/15/2015 07:39 AM, Ivan Berezovskiy wrote:
> Emilien,
> 
> The puppet-murano module has a bunch of patches from Alexey Deryugin on
> review [0], which implement most of the Murano deployment stuff.
> The Murano project was added to the OpenStack namespace not so long ago;
> that's why I suggest keeping murano-core rights on puppet-murano as they
> are till all these patches are merged.
> Anyway, murano-core team doesn't merge any patches without OpenStack
> Puppet team approvals.

[repeating what I said on IRC so it's official and public]

I don't think the Murano team needs to be core on a Puppet module.
All OpenStack modules are managed by one group; this is how we have worked
until now and I don't think we want to change that.
Project teams (Keystone, Nova, Neutron, etc.) already use -1/+1 to review
Puppet code when they want to share feedback, and those reviews are very
valuable; we actually need them.
I don't see why we would make an exception for Murano. I would like the
Murano team to continue to give their valuable feedback by -1/+1 on patches,
but it's the Puppet OpenStack team's duty to decide whether the code merges
or not.

This collaboration is important and we need your experience to create
new modules, but please understand how Puppet OpenStack governance works
now.

Thanks,


> [0]
> - 
> https://review.openstack.org/#/q/status:open+project:openstack/puppet-murano+owner:%22Alexey+Deryugin+%253Caderyugin%2540mirantis.com%253E%22,n,z
> 
> 2015-09-15 1:01 GMT+03:00 Matt Fischer  >:
> 
> Emilien,
> 
> I've discussed this with some of the Monasca puppet guys here who
> are doing most of the work. I think it probably makes sense to move
> to that model now, especially since the pace of development has
> slowed substantially. One blocker before to having it "big tent" was
> the lack of test coverage, so as long as we know that's a work in
> progress...  I'd also like to get Brad Kiein's thoughts on this, but
> he's out of town this week. I'll ask him to reply when he is back.
> 
> 
> On Mon, Sep 14, 2015 at 3:44 PM, Emilien Macchi  > wrote:
> 
> Hi,
> 
> As a reminder, Puppet modules that are part of OpenStack are
> documented
> here [1].
> 
> I can see puppet-murano & puppet-mistral Gerrit permissions
> different
> from other modules, because Mirantis helped to bootstrap the
> module a
> few months ago.
> 
> I think [2] the modules should be consistent in governance and only
> Puppet OpenStack group should be able to merge patches for these
> modules.
> 
> Same question for puppet-monasca: if Monasca team wants their module
> under the big tent, I think they'll have to change Gerrit
> permissions to
> only have Puppet OpenStack able to merge patches.
> 
> [1]
> 
> http://governance.openstack.org/reference/projects/puppet-openstack.html
> [2] https://review.openstack.org/223313
> 
> Any feedback is welcome,
> --
> Emilien Macchi
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Thanks, Ivan Berezovskiy
> MOS Puppet Team Lead
> at Mirantis 
> 
> slack: iberezovskiy
> skype: bouhforever
> phone: + 7-960-343-42-46
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] PTL Candidacy

2015-09-15 Thread Rossella Sblendido
Hi everyone,

I decided to run for the Neutron PTL position for the Mitaka release
cycle.

I have been contributing to Neutron since Havana and I am a member of
the control plane core review team. During these years I have touched
most parts of the Neutron code and in the last cycle my main focus has
been to restructure the OVS agent, adding the ability to use events
provided by ovsdb client and making it more resilient to failures.

Mentoring people and spreading knowledge about Neutron have been high
priorities for me in these years. I am a regular speaker at OpenStack
events (local meetups, Openstack days and summits), where my talks have
had high attendance and good feedback.
I have been a mentor for the Outreachy program [1] and the OpenStack
upstream training.

If I become PTL these are my main objectives for Mitaka:

* Make the community even more welcoming.
Neutron is still facing a big challenge in terms of review bandwidth.
Good features can't get merged because of this limit. Even if the
introduction of the Lieutenant system [2] has improved scalability, we
still need to create a better way to share knowledge.
In addition to improving the existing documentation and tutorials, I'd
like to create a team of mentors that can help new contributors (the
quality of the proposed patches should increase so that the review time
needed to merge them decreases) and train new reviewers (more good
people, more bandwidth).

* Keep working hard to increase the stability.
The introduction of functional tests and full stack tests was great, we
just need to keep going in that direction. Moreover I'd love to produce
and store some benchmark data so that we can also keep track of the
performance of Neutron during time.

* Continue getting feedback from operators.
I think it's very important to hear the opinions of the actual Neutron
users and to understand their concerns.

* Paying down technical debt
Introduce oslo versioned objects to improve RPC and keep refactoring
Neutron code to make it more readable and understandable.

Before proposing my candidacy I made sure I have the full support of my
employer for this role.

Neutron has a great team of contributors and great leaders, it would be
an honor for me to be able to coordinate our efforts and push Neutron
forward.


Thanks for reading so far and for considering this candidacy,

Rossella (rossella_s)

[1] https://wiki.openstack.org/wiki/Outreachy
[2]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#core-review-hierarchy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] proposed priorities for Mitaka

2015-09-15 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2015-09-15 10:54:04 +0200:
> On 14/09/15 15:51 -0400, Doug Hellmann wrote:
> >Excerpts from Flavio Percoco's message of 2015-09-14 14:41:00 +0200:
> >> On 14/09/15 08:10 -0400, Doug Hellmann wrote:
> 
> >> This is definitely unfortunate. I believe a good step forward for this
> >> discussion would be to create a list of issues related to uploading
> >> images and see how those issues can be addressed. The result from that
> >> work might be that it's not recommended to make that endpoint public
> >> but again, without going through the issues, it'll be hard to
> >> understand how we can improve this situation. I expect most of this
> >> issues to have a security impact.
> >
> >A report like that would be good to have. Can someone on the Glance team
> >volunteer to put it together?
> 
> Here's an attempt from someone that uses clouds but doesn't run any:
> 
> - Image authenticity (we recently landed code that allows for having
>   signed images)
> - Quota management: Glance's quota management is very basic and it
>   allows for setting quota at a per-user level[1]
> - Bandwidth requirements to upload images
> - (add more here)

That seems like a good start. You could add a desire to optionally
take advantage of advanced object store services like Swift and
Ceph.

> >> The mistake here could be that the library should've been refactored
> >> *before* adopting it in Glance.
> >
> >The fact that there is disagreement over the intent of the library makes
> >me think the plan for creating it wasn't sufficiently circulated or
> >detailed.
> 
> There wasn't much disagreement when it was created. Some folks think
> the use-cases for the library don't exist anymore and some folks that
> participated in this effort are not part of OpenStack anymore.

OK. There is definite desire outside of the Glance team to have
*some* library for talking to the image store directly. The evidence
for that is the specs in nova related to a "seam" library, and the
requests by some Cinder driver authors to have something similar.
From what I can tell, everyone else thought that's what glance-store
was going to be, but it's not quite what is needed.  It seems like
the use cases need to be revisited so the requirements can be
documented properly and then we can figure out what steps to take
with the existing code.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Behavior when one cinder-volume service is down

2015-09-15 Thread John Griffith
On Tue, Sep 15, 2015 at 8:04 AM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

> Hi,
>
> This all started when we were testing Evacuate with our storage driver.
> We thought we found a bug (https://bugs.launchpad.net/cinder/+bug/1491276)
> then Scott replied that we should be running cinder-volume service separate
> from nova-compute.
> For some internal reasons we can't do that - yet, but we have some
> questions regarding the behavior of the service:
>
> - on our original test setup we have 3 nodes (1 controller + compute +
> cinder, 2 compute + cinder).
> -- when we shutdown the second node and tried to evacuate, the call was
> routed to cinder-volume of the halted node instead of going to other nodes
> (there were still 2 cinder-volume services up) - WHY?
> - on the new planned setup we will have 6 nodes (3 dedicated controller +
> cinder-volume, 3 compute)
> -- in this case which cinder-volume will manage which volume on which
> compute node?
> -- what if: one compute node and one controller go down - will the
> Evacuate still work if one of the cinder-volume services is down? How can
> we tell - for sure - that this setup will work in case ANY 1 controller and
> 1 compute nodes go down?
>
> Hypothetical:
> - if 3 dedicated controller + cinder-volume nodes work can perform
> evacuate when one of them is down (at the same time with one compute), WHY
> can't the same 3 nodes perform evacuate when compute services is running on
> the same nodes (so 1 cinder is down and 1 compute)
> - if the answer to above question is "They can't " then what is the
> purpose of running 3 cinder-volume services if they can't handle one
> failure?
> - and if the answer to above question is "You only run one cinder-volume"
> then how can it handle failure of controller node?
>
> Thanks,
>
> Eduard
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ​Not sure I follow all your permutations and things here.  But... one
common misconception about multiple c-vol services: the act of just
"deploying multiple c-vols" doesn't mean you get any sort of HA or
failover.  The default/base case for multiple c-vols is actually just for
scale out and that's it.

If you want to actually do things like have them fail over you have to look
at configuring the c-vol services with virtual-ips and using the same name
for each service etc.  In other words, do a true HA deployment.

Maybe I'm not following here but it sounds like maybe you have the wrong
expectations around what multiple c-vol services buys you.  ​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] IRC hangout

2015-09-15 Thread Jeremy Stanley
On 2015-09-14 17:44:14 + (+), Shiv Haris wrote:
> What is the IRC channel where congress folks hangout. I  tried
> #openstack-congress on freenode but is seems not correct.

https://wiki.openstack.org/wiki/IRC has it listed as #congress
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl][release] final liberty cycle client library releases needed

2015-09-15 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2015-09-15 16:16:00 +0200:
> On 09/14/2015 04:18 PM, Doug Hellmann wrote:
> > Excerpts from Doug Hellmann's message of 2015-09-14 08:46:02 -0400:
> >> PTLs and release liaisons,
> >>
> >> In order to keep the rest of our schedule for the end-of-cycle release
> >> tasks, we need to have final releases for all client libraries in the
> >> next day or two.
> >>
> >> If you have not already submitted your final release request for this
> >> cycle, please do that as soon as possible.
> >>
> >> If you *have* already submitted your final release request for this
> >> cycle, please reply to this email and let me know that you have so I can
> >> create your stable/liberty branch.
> >>
> >> Thanks!
> >> Doug
> >
> > I forgot to mention that we also need the constraints file in
> > global-requirements updated for all of the releases, so we're actually
> > testing with them in the gate. Please take a minute to check the version
> > specified in openstack/requirements/upper-constraints.txt for your
> > libraries and submit a patch to update it to the latest release if
> > necessary. I'll do a review later in the week, too, but it's easier to
> > identify the causes of test failures if we have one patch at a time.
> 
> Hi Doug!
> 
> When is the last and final deadline for doing all this for 
> not-so-important and non-release:managed projects like ironic-inspector? 
> We still lack some Liberty features covered in 
> python-ironic-inspector-client. Do we have time until end of week to 
> finish them?

We would like for the schedule to be the same for everyone. We need the
final versions for all libraries this week, so we can update
requirements constraints by early next week before the RC1.

https://wiki.openstack.org/wiki/Liberty_Release_Schedule
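
For reference, the constraints bump itself is a one-line change to
openstack/requirements/upper-constraints.txt, along the lines of (version
numbers here are invented):

  -python-ironic-inspector-client===1.0.1
  +python-ironic-inspector-client===1.2.0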

Doug

> 
> Sorry if you hear this question too often :)
> 
> Thanks!
> 
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [plugin] Release tagging

2015-09-15 Thread John Griffith
On Mon, Sep 14, 2015 at 7:44 PM, John Griffith 
wrote:

>
>
> On Mon, Sep 14, 2015 at 7:27 PM, Davanum Srinivas 
> wrote:
>
>> John,
>>
>> per ACL in project-config:
>>
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/stackforge/fuel-plugin-solidfire-cinder.config#n9
>>
>> you are already in that group:
>> https://review.openstack.org/#/admin/groups/956,members
>>
>> The release team would be in charge *if* that line looked like:
>> pushSignedTag = group library-release
>>
>> as documented in:
>> http://docs.openstack.org/infra/manual/creators.html
>>
>> So there's something else wrong... what error did you get?
>>
>> -- Dims
>>
>>
>> On Mon, Sep 14, 2015 at 8:51 PM, John Griffith 
>> wrote:
>>
>>> Hey All,
>>>
>>> I was trying to tag a release for v 1.0.1 on [1] today and noticed I
>>> don't have permissions to do so.  Is there, a release team/process for this?
>>>
>>> [1] https://github.com/stackforge/fuel-plugin-solidfire-cinder
>>>
>>> Thanks,
>>> John
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ​Hmm...  could be that I'm just making bad assumptions and trying to do
> this as I've done with other projects over the years?
>
> Here's what I tried and the error I received:
>
> jgriffith@railbender:~/git/fuel-plugin-solidfire-cinder$ git push --tags
> gerrit
> Counting objects: 1, done.
> Writing objects: 100% (1/1), 168 bytes | 0 bytes/s, done.
> Total 1 (delta 0), reused 0 (delta 0)
> remote: Processing changes: refs: 1, done
> To ssh://
> john-griff...@review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
>  ! [remote rejected] v1.0.1 -> v1.0.1 (prohibited by Gerrit)
> error: failed to push some refs to 'ssh://
> john-griff...@review.openstack.org:29418/stackforge/fuel-plugin-solidfire-cinder.git
> '
> jgriffith@railbender:~/git/fuel-plugin-solidfire-cinder$
> So clearly I can't create the remote; but not sure what I need to do to
> make this happen?​
>
> ​Ahh... it appears my key had expired.  Updating now, thanks everyone.​
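
For anyone hitting the same thing, the usual sequence once the key and
ACLs are in order is (tag name from my case):

  $ git tag -s v1.0.1 -m "v1.0.1 release"
  $ git push gerrit v1.0.1

pushSignedTag expects the tag to actually be signed, so remember the -s.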
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptl] Troubleshooting cross-project communications

2015-09-15 Thread Anne Gentle
Hi all,

What can we do to make the cross-project meeting more helpful and useful
for cross-project communications? I started with a proposal to move it to a
different time, which morphed into an idea to alternate times. But, knowing
that we need to layer communications I wonder if we should troubleshoot
cross-project communications further? These are the current ways
cross-project communications happen:

1. The weekly meeting in IRC
2. The cross-project specs and reviewing those
3. Direct connections between team members
4. Cross-project talks at the Summits

What are some of the problems with each layer?

1. weekly meeting: time zones, global reach, size of cross-project concerns
due to multiple projects being affected, another meeting for PTLs to attend
and pay attention to
2. specs: don't seem to get much attention unless they're brought up at
weekly meeting, finding owners for the work needing to be done in a spec is
difficult since each project team has its own priorities
3. direct communications: decisions from these comms are difficult to then
communicate more widely, it's difficult to get time with busy PTLs
4. Summits: only happens twice a year, decisions made then need to be
widely communicated

I'm sure there are more details and problems I'm missing -- feel free to
fill in as needed.

Lastly, what suggestions do you have for solving problems with any of these
layers?

Thanks,
Anne

-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

