Re: [openstack-dev] [nova][cinder] How will nova advertise that volume multi-attach is supported?

2016-01-14 Thread Matt Riedemann



On 1/14/2016 9:42 AM, Dan Smith wrote:

It is however not ideal when a deployment is set up such that
multiattach will always fail because a hypervisor is in use which
doesn't support it.  An immediate solution would be to add a policy so a
deployer could disallow it that way which would provide immediate
feedback to a user that they can't do it.  A longer term solution would
be to add capabilities to flavors and have flavors act as a proxy
between the user and various hypervisor capabilities available in the
deployment.  Or we can focus on providing better async feedback through
instance-actions, and other discussed async api changes.


Presumably a deployer doesn't enable volumes to be set as multi-attach
on the cinder side if their nova doesn't support it at all, right? I
would expect that is the gating policy element for something global.


There is no policy in cinder to disallow creating multiattach-able 
volumes [1]. It's just a property on the volume, and somewhere in cinder 
the volume drivers either support the capability or they don't.


From a very quick look at the cinder code, the scheduler has a 
capabilities filter for multiattach, so if you try to create a 
multiattach volume and don't have any hosts (volume backends) that 
support it, you'd fail to create the volume with NoValidHost.


But LVM supports it, so if you have an LVM backend you can create the 
multiattach volume; that doesn't mean you can use it in nova. So it 
seems like you'd also need the same kind of capabilities filter in the 
nova scheduler, and that capability from the compute host would come 
from the virt driver, of which only libvirt is going to support it 
at first.
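
To illustrate (a rough, hypothetical sketch only - the class name, the 
method signature and the 'supports_multiattach' capability name are made 
up here, not the actual nova filter API):

    # Hypothetical sketch of a nova scheduler capabilities filter for
    # multiattach; the real filter API and capability name may differ.
    class MultiAttachFilter(object):
        """Reject hosts whose virt driver doesn't support multiattach."""

        def host_passes(self, host_state, request_spec):
            if not request_spec.get('multiattach'):
                return True  # nothing to check for regular volumes
            # The virt driver (libvirt at first) would report this.
            return host_state.get('supports_multiattach', False)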




Now, if multiple novas share a common cinder, then I guess it gets a
little more confusing...

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[1] 
https://github.com/openstack/cinder/blob/master/cinder/api/v2/volumes.py#L407


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Thomas Goirand
On 01/14/2016 11:35 PM, Yuriy Taraday wrote:
> 
> 
> On Thu, Jan 14, 2016 at 5:48 PM Jeremy Stanley wrote:
> 
> On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
> [...]
> > Is there any plan to add Python 3.5 to infra?
> 
> I expect we'll end up with it shortly after Ubuntu 16.04 LTS
> releases in a few months (does anybody know for sure what its
> default Python 3 is slated to be?).
> 
> 
> It's 3.5.1 already in Xenial: http://packages.ubuntu.com/xenial/python3 

Though 3.5 isn't the default Py3 yet there. Or is it?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] nova-network removal

2016-01-14 Thread Vitaly Kramskikh
Folks,

We have a request on review which prohibits creating new envs with
nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
from HCF, and I think this is too late for such a change. What do you
think? Should we proceed and remove nova-network support in 8.0, which has
been deprecated since 7.0?

-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [defcore] Determine latest set of tests for a given release

2016-01-14 Thread Hugh Saunders
Hi All,
What's the most reliable way to determine the latest set of required defcore
tests for a given release?

For example, I would currently use 2015.07/2015.07.required.txt, but I don't
want to have to update that URL each time there is a defcore release.

I could parse 20*.json in the root of the defcore repo, but that seems
brittle.

Thanks.

--
Hugh Saunders




-- 
--
Hugh Saunders
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [defcore] Determine latest set of tests for a given release

2016-01-14 Thread Mark Voelker
[+defcore-committee]

This depends a little on what your objective is.  If you’re looking for the 
tests that a product must pass today if it wants an OpenStack Powered 
logo/trademark agreement, you’ll want to look at either of the two most 
recently-approved DefCore Guidelines (currently 2015.05 and 2015.07, though the 
Board will be voting on 2016.01 by the end of the month).  If you just want to 
find out what Guidelines might have covered a product built on an arbitrary 
OpenStack release in the past, you’ll need to go straight to the JSON.  The two 
most recently approved Guidelines are generally listed on the Foundation’s 
interop page if that’s helpful:

http://www.openstack.org/interop/

If you’re looking for more programmatic methods, the .json files are the 
authoritative data source.  In particular you’ll want to check these keys:

  "status": "approved”,   # can be draft, review, approved or superseded [see 
2015B C6.3]

and:

  "releases": ["icehouse", "juno", "kilo”], # array of releases, lower case 
(generally three releases)

The schema for the JSON files is documented here:

http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/schema

The schema version used is also listed in the JSON files themselves with this 
key:

"schema": "1.4”,

The tests for a given Guideline are also in the 20xx.json files, and as a 
convenience there are also required/flagged lists in plaintext in each 
Guideline’s working directory, such as:

http://git.openstack.org/cgit/openstack/defcore/tree/2015.07
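
As a rough sketch (assuming the 20xx.json files follow the keys above; the 
exact traversal down to the individual tests depends on the schema version), 
programmatically picking the newest approved Guideline covering a given 
release could look something like:

    import glob
    import json

    def latest_approved_guideline(release, repo_root='.'):
        # Scan the 20xx.json files in a defcore repo checkout.
        latest = None
        for path in sorted(glob.glob(repo_root + '/20*.json')):
            with open(path) as f:
                guideline = json.load(f)
            if (guideline.get('status') == 'approved'
                    and release in guideline.get('releases', [])):
                latest = path  # sorted() means the last match is newest
        return latest

    print(latest_approved_guideline('kilo'))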

If you’re writing code to grok all that sort of info though, I suspect you 
could re-use (or at least take inspiration from) a lot of the code that’s 
already been written into RefStack, since it can already parse most or all of 
the above (see the Community Results section of refstack.openstack.org or its 
corresponding git repo).  Hope that helps!

At Your Service,

Mark T. Voelker



> On Jan 14, 2016, at 12:02 PM, Hugh Saunders  wrote:
> 
> Hi All, 
> Whats the most reliable way to determine the latest set of required defcore 
> tests for a given release? 
> 
> For example, I would currently use 2015.07/2015.07.required.txt but I don't 
> want to have to update that url each time there is a defcore release.
> 
> I could parse 20*.json in the root of the defcore repo, but that seems 
> brittle.
> 
> Thanks. 
> 
> --
> Hugh Saunders
> 
> 
> 
> 
> -- 
> --
> Hugh Saunders 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Nova midcycle list of attendees

2016-01-14 Thread Murray, Paul (HP Cloud)
I have created a list of attendees for the Nova midcycle here: 
https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprintAttendees

Obviously I can't put anyone's name on it for privacy reasons. If you are 
attending and would like to let others know when you will be around, you might 
like to add yourself. It would also help us with a few logistics.

Best regards,
Paul

Paul Murray
Technical Lead, HPE Cloud
Hewlett Packard Enterprise
+44 117 316 2527


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] "openstack-meeting-cp" == "openstack-meeting-5"?

2016-01-14 Thread Markus Zoeller
Tony Breeds  wrote on 01/13/2016 11:32:24 PM:

> From: Tony Breeds 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 01/13/2016 11:33 PM
> Subject: Re: [openstack-dev] [infra][all] "openstack-meeting-cp" == 
> "openstack-meeting-5"?
> 
> On Wed, Jan 13, 2016 at 12:11:27PM +0100, Thierry Carrez wrote:
> 
> > One possible solution here would be to check if the meetings currently
> > scheduled on your ideal slots are actually still using the spot, as
> > there are a non-trivial amount of dead meetings around. You can ping me
> > on IRC so that we do a quick check.
> 
> For the record I did that for all the fully booked slots in November last
> year when I suggested creating a 5th meeting room.  After that check, the
> data didn't indicate that we *really* needed a new meeting room.
> 
> Markus, I'll be in your TZ next week; I can help you work through your
> options then if we don't overlap on IRC in the meantime.
> 
> Yours Tony.

Thanks for the offer Tony, but I found an alternative without
losing any benefits. The ML post [1] shows the timeslots which
are available. Channel "openstack-meeting-3" is also available
for both slots. I will wait a few days for feedback and then I
will push the "irc-meeting" change to Gerrit.

[1] "[openstack-dev] [nova][bugs] nova-bugs-team IRC meeting"; 2016-01-13:
http://lists.openstack.org/pipermail/openstack-dev/2016-January/083966.html

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Maciej Kwiek
Thanks for your insight guys!

I agree with Oleg, I will see what I can do to make this work this way.

About hardlinks - wouldn't it be better to use symlinks? This way we don't
occupy more space than necessary, and we can link to files and directories
that are on a different block device than /var. Please see the review at [1]
for a proposed change that introduces symlinks.

This doesn't really give us much right now, because most of the logs are
fetched from the master node via ssh due to shotgun being run in the
mcollective container, but it's something! When we remove containers, this
will prove more useful.

Regards,
Maciej Kwiek

[1] https://review.openstack.org/#/c/266964/
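
As a very rough sketch of the pre-check Oleg suggests in points 1) and 2)
below (hedged - a real estimate would also have to account for compression
and for logs fetched from remote nodes; shutil.disk_usage is Python 3,
os.statvfs would be the Python 2 equivalent):

    import os
    import shutil

    def enough_space_for_snapshot(sources, dest='/var/www/nailgun/dump'):
        # Sum the on-disk sizes of everything we plan to dump...
        needed = 0
        for src in sources:
            for root, _dirs, files in os.walk(src):
                needed += sum(os.path.getsize(os.path.join(root, name))
                              for name in files)
        # ...and compare with the free space at the destination.
        return shutil.disk_usage(dest).free > needed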

On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh  wrote:

> I think we need to find a way to:
>
> 1) verify the size of snapshot without actually making it and compare to
> the available disk space beforehand.
> 2) refuse to create snapshot if space is insufficient and notify user
> (otherwise it breaks Admin node as we have seen)
> 3) provide a way to prioritize elements of the snapshot and exclude them
> based on the priorities or user choice.
>
> This will allow for better and safer UX with the snapshot.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek  wrote:
>
>> Hi!
>>
>> I need some advice on how to tackle this issue. There is a bug [1]
>> describing the problem with creating a diagnostic snapshot. The issue is
>> that /var/log has 100GB available, while /var (where diagnostic snapshot is
>> being generated - /var/www/nailgun/dump/fuel-snapshot according to [2]) has
>> 10GB available, so dumping the logs can be an issue when logs size exceed
>> free space in /var.
>>
>> There are several things we could do, but I am unsure on which course to
>> take. Should we
>> a) Allocate more disk space for /var/www (or for whole /var)?
>> b) Make the snapshot location share the diskspace of /var/log?
>> c) Something else? What?
>>
>> Please share your thoughts on this.
>>
>> Cheers,
>> Maciej Kwiek
>>
>> [1] https://bugs.launchpad.net/fuel/+bug/1529182
>> [2]
>> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Announce List

2016-01-14 Thread Davanum Srinivas
LOL Tom :)

-- Dims

On Thu, Jan 14, 2016 at 2:32 AM, Tom Fifield  wrote:
> On 14/01/16 15:22, Andreas Jaeger wrote:
>>
>> On 2016-01-14 08:13, Tom Fifield wrote:
>>>
>>> So, I'm prompted by another 20 oslo release emails to dredge up this
>>> thread :)
>>>
>>> There appears to be broad consensus that those shouldn't be going to the
>>> announce list ... what do we need to do to get that to change to posted
>>> to "-dev + batched inside the weekly -dev digest from thingee" as
>>> Thierry suggested?
>>
>>
>> So, those 20-odd oslo release emails all went to -dev; the release team
>> changed the logic, see also
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083749.html
>
>
> Apologies to all. Maybe it's time to visit the optometrist for me ... I
> haven't been once in my life yet; could be scary :)
>
>
>
>> Not sure about the "Batching inside the weekly digest from thingee",
>>
>> Andreas
>>
>>>
>>>
>>> Regards,
>>>
>>>
>>> Tom
>>>
>>> On 14/12/15 17:12, Tom Fifield wrote:

 ... and back to this thread after a few weeks :)

 The conclusions I saw were:
 * Audience for openstack-announce should be "users/non-dev"
 * Service project releases announcements are good
 * Client library release announcements good
 * Security announcements are good
 * Internal library (particularly oslo) release announcements don't fit

 Open Questions:
 * Where do Internal library release announcements go? [-dev or new
 -release list or batched inside the weekly newsletter]
 * Do SDK releases fit on -announce?


 Regards,


 Tom


 On 20/11/15 12:00, Tom Fifield wrote:
>
> Hi all,
>
> I'd like to get your thoughts about the OpenStack-Announce list.
>
> We describe the list as:
>
> """
> Subscribe to this list to receive important announcements from the
> OpenStack Release Team and OpenStack Security Team.
>
> This is a low-traffic, read-only list.
> """
>
> Up until July 2015, it was used for the following:
> * Community Weekly Newsletter
> * Stable branch release notifications
> * Major (i.e. Six-monthly) release notifications
> * Important security advisories
>
> and had on average 5-10 messages per month.
>
> After July 2015, the following was added:
> * Release notifications for clients and libraries (one email per
> library, includes contributor-focused projects)
>
> resulting in an average of 70-80 messages per month.
>
>
> Personally, I no longer consider this volume "low traffic" :)
>
> In addition, I have been recently receiving feedback that users have
> been unsubscribing from or deleting without reading the list's posts.
>
> That isn't good news, given this is supposed to be the place where we
> can make very important announcements and have them read.
>
> One simple suggestion might be to batch the week's client/library
> release notifications into a single email. Another might be to look at
> the audience for the list, what kind of notifications they want, and
> chose the announcements differently.
>
> What do you think we should do to ensure the announce list remains
> useful?
>
>
>
> Regards,
>
>
> Tom
>
>
> __
>
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __


 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [keystone][security] New BP for anti brute force in keystone

2016-01-14 Thread Julien Danjou
On Wed, Jan 13 2016, Morgan Fainberg wrote:

> A standard method of rate limiting for OpenStack services would be a good
> thing to figure out.

Apache used as a daemon for WSGI applications (e.g. like we do by default
in devstack) has supported rate limiting for decades – see mod_ratelimit
for example.
So this is not a problem we really want to solve in OpenStack – unless we're
really getting bored or victims of the NIH syndrome.

Now, that does not mean that other protection methods (as suggested in the
original blueprint proposal) should not be implemented, but this one
shouldn't be reinvented for sure.

Cheers,
-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Neutron][Nova][devstack] Keystone v3 with "old" clients

2016-01-14 Thread Michal Rostecki

On 01/12/2016 02:10 PM, Smigiel, Dariusz wrote:

Hello,
I'm trying to gather all the info necessary to migrate to keystone v3 in 
Neutron.
When I started looking through possible problems with clients, it 
occurred that the 'neutron' and 'nova' clients do not want to operate with 
Keystone v3.
For the keystone client, it's explicitly written that this version is 
deprecated and not supported, so it's not working with Keystone API v3. But 
for nova and neutron, there is nothing.
I didn't see any place where I can find info that "old" clients shouldn't be 
used with Keystone API v3.

Am I doing something wrong?

http://paste.openstack.org/show/483568/



Hi,

Looks like you're missing OS_PROJECT_DOMAIN_ID and OS_USER_DOMAIN_ID env 
variables, needed for Keystone v3.


Unfortunately, I don't see them in devstack's openrc[1]. Maybe it's a 
good moment to add them here.


[1] https://github.com/openstack-dev/devstack/blob/master/openrc
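
For reference, this is roughly what those variables map to when building a 
v3 auth with keystoneauth (a sketch - the URL, credentials and the "default" 
domain are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # user_domain_id/project_domain_id correspond to OS_USER_DOMAIN_ID
    # and OS_PROJECT_DOMAIN_ID; without them v3 auth cannot resolve the
    # user and project by name.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin',
                       password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    print(sess.get_token())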

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] move to stateless design in master branch.

2016-01-14 Thread joehuang
Hello,

As the stateless design in the experiment branch has received quite positive 
feedback, the stateless design has been moved from the experiment branch to 
the master branch.

You can try it through Devstack: https://github.com/openstack/tricircle

If you find a bug, please feel free to report it at 
https://bugs.launchpad.net/tricircle

You can learn about the source code via the BP and spec: 
https://blueprints.launchpad.net/tricircle/+spec/implement-stateless

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Julien Danjou
On Wed, Jan 13 2016, Thomas Goirand wrote:

> In other words, any Python 3.5 problem in Olso, clients and so on will
> be considered a Debian RC bug and shall be addressed ASAP there.

I know there are problems with a few of the Python 3 supported OpenStack
projects. Since we can't gate on Python 3.5 (yet), it's a bit
problematic to support that version without regressions.

Is there any plan to add Python 3.5 to infra?

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Neutron][Nova][devstack] Keystone v3 with "old" clients

2016-01-14 Thread Akihiro Motoki
devstack creates /etc/openstack/clouds.yaml (the os-client-config
configuration file), which specifies the use of keystone v3.
neutronclient supports os-client-config and keystoneauth, which handle
the differences between Keystone API versions.
Note that clouds.yaml is a very convenient way to use the OpenStack CLI [1].

As Michal commented, you can also use OS_PROJECT_DOMAIN_xx and OS_USER_DOMAIN_xx
for keystone v3 API.

[1] 
http://docs.openstack.org/developer/python-neutronclient/usage/cli.html#using-with-os-client-config
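
A rough sketch of reading that file programmatically with os-client-config
(the 'devstack' cloud name is a placeholder that would have to exist in
clouds.yaml):

    import os_client_config

    # Read clouds.yaml from the default search paths and pick one cloud.
    config = os_client_config.OpenStackConfig()
    cloud = config.get_one_cloud(cloud='devstack')
    print(cloud.config['auth'])  # shows the v3 auth_url and domain settings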

Akihiro

2016-01-14 18:13 GMT+09:00 Michal Rostecki :
> On 01/12/2016 02:10 PM, Smigiel, Dariusz wrote:
>>
>> Hello,
>> I'm trying to gather all the info necessary to migrate to keystone v3 in
>> Neutron.
>> When I've started to looking through possible problems with clients, it
>> occurred that 'neutron' and 'nova' clients do not want to operate with
>> Keystone v3.
>> For keystone client, it's explicit written, that this version is
>> deprecated and not supported, so it's not working with Keystone API v3. But
>> for nova and neutron, there is nothing.
>> I didn't see any place where I can find info, that "old" clients shouldn't
>> be used with Keystone API v3.
>>
>> Am I doing something wrong?
>>
>> http://paste.openstack.org/show/483568/
>>
>
> Hi,
>
> Looks like you're missing OS_PROJECT_DOMAIN_ID and OS_USER_DOMAIN_ID env
> variables, needed for Keystone v3.
>
> Unfortunately, I don't see them in devstack's openrc[1]. Maybe it's a good
> moment to add them here.
>
> [1] https://github.com/openstack-dev/devstack/blob/master/openrc
>
> Cheers,
> Michal
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-14 Thread Michael Still
I think Tony would be a valuable addition to the team.

+1

Michael
On 14 Jan 2016 7:59 AM, "Matt Riedemann"  wrote:

> I'm formally proposing that the nova-stable-maint team [1] adds Tony
> Breeds to the core team.
>
> I don't have a way to track review status on stable branches, but there
> are review numbers from gerrit for stable/liberty [2] and stable/kilo [3].
>
> I know that Tony does a lot of stable branch reviews and knows the
> backport policy well, and he's also helped out numerous times over the last
> year or so with fixing stable branch QA / CI issues (think gate wedge
> failures in stable/juno over the last 6 months). So I think Tony would be a
> great addition to the team.
>
> So for those on the team already, please reply with a +1 or -1 vote.
>
> [1] https://review.openstack.org/#/admin/groups/540,members
> [2]
> https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/liberty+project:openstack/nova
> [3]
> https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/kilo+project:openstack/nova
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3] Wrong fail over of HA-Router

2016-01-14 Thread Lubosz Kosnik
Here is some information about that bug: it's a known issue in 
Keepalived v1.2.7.

https://bugs.launchpad.net/neutron/+bug/1497272

Regards,
Lubosz

On 01/14/2016 02:27 AM, Ikuo Kumagai wrote:

Hi All

Could you give me some advice for our problem?

We use Neutron L3-HA.
When I associate/disassociate a floating IP, the HA keepalived state 
becomes MASTER/MASTER, because at that time the sending of VRRP packets 
from MASTER to BACKUP stops for 40 seconds.


I checked the Keepalived logs; there is a 40-second delay between the 
log line "Initializing ipvs 2.6" and the log line "Opening file ~".

An example is below.
--
Jan 14 09:51:22 stg-anlk-ctrl003 Keepalived_vrrp[1989701]: 
Initializing ipvs 2.6
Jan 14 09:52:02 stg-anlk-ctrl003 Keepalived_vrrp[1989701]: Opening 
file 
'/var/lib/neutron/ha_confs/666d9e40-a95c-44a1-a876-bf44ca281f3e/keepalived.conf'.

--

During this time, the VRRP packets from MASTER to BACKUP stop, so the 
BACKUP changes to MASTER, and then the original MASTER starts sending VRRP 
packets again.


The versions in our environment are below:
 - Keepalived v1.2.7 (08/14,2013)
 - Linux stg-anlk-ctrl003 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 
22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux



We adjusted ha_vrrp_advert_int to avoid this, but I would like to know 
why the delay is always 40 seconds.

If you know anything about that please let me know.

Regards
Ikuo Kumagai



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Igor Kalnitsky
Hey Maciej -

> About hardlinks - wouldn't it be better to use symlinks?
> This way we don't occupy more space than necessary

AFAIK, hardlinks won't occupy much space. They are the links, after all. :)

As for symlinks, I'm afraid shotgun (and fabric underneath) won't
resolve them, and the links will get into the snapshot as-is. That means
if the content they point to is not in the snapshot, they are
simply useless. Needs to be checked, though.

- Igor

On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek  wrote:
> Thanks for your insight guys!
>
> I agree with Oleg, I will see what I can do to make this work this way.
>
> About hardlinks - wouldn't it be better to use symlinks? This way we don't
> occupy more space than necessary, and we can link to files and directories
> that are in other block device than /var. Please see [1] review for a
> proposed change that introduces symlinks.
>
> This doesn't really give us much right now, because most of the logs are
> fetched from master node via ssh due to shotgun being run in mcollective
> container, but it's something! When we remove containers, this will prove
> more useful.
>
> Regards,
> Maciej Kwiek
>
> [1] https://review.openstack.org/#/c/266964/
>
> On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh  wrote:
>>
>> I think we need to find a way to:
>>
>> 1) verify the size of snapshot without actually making it and compare to
>> the available disk space beforehand.
>> 2) refuse to create snapshot if space is insufficient and notify user
>> (otherwise it breaks Admin node as we have seen)
>> 3) provide a way to prioritize elements of the snapshot and exclude them
>> based on the priorities or user choice.
>>
>> This will allow for better and safer UX with the snapshot.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>> On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek  wrote:
>>>
>>> Hi!
>>>
>>> I need some advice on how to tackle this issue. There is a bug [1]
>>> describing the problem with creating a diagnostic snapshot. The issue is
>>> that /var/log has 100GB available, while /var (where diagnostic snapshot is
>>> being generated - /var/www/nailgun/dump/fuel-snapshot according to [2]) has
>>> 10GB available, so dumping the logs can be an issue when logs size exceed
>>> free space in /var.
>>>
>>> There are several things we could do, but I am unsure on which course to
>>> take. Should we
>>> a) Allocate more disk space for /var/www (or for whole /var)?
>>> b) Make the snapshot location share the diskspace of /var/log?
>>> c) Something else? What?
>>>
>>> Please share your thoughts on this.
>>>
>>> Cheers,
>>> Maciej Kwiek
>>>
>>> [1] https://bugs.launchpad.net/fuel/+bug/1529182
>>> [2]
>>> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][os-vif] os-vif core review team membership

2016-01-14 Thread Maxime Leroy
On Wed, Jan 13, 2016 at 10:59 AM, Daniel P. Berrange
 wrote:
> On Tue, Jan 12, 2016 at 10:28:49PM +, Mooney, Sean K wrote:
>> > -Original Message-
>> > From: Moshe Levi [mailto:mosh...@mellanox.com]
>> > Sent: Tuesday, January 12, 2016 4:23 PM
>> > To: Russell Bryant; Daniel P. Berrange; openstack-
>> > d...@lists.openstack.org
>> > Cc: Jay Pipes; Mooney, Sean K; Sahid Orentino Ferdjaoui; Maxime Leroy
>> > Subject: RE: [nova][neutron][os-vif] os-vif core review team membership
>> >
>> >
>> >
>> > > -Original Message-
>> > > From: Russell Bryant [mailto:rbry...@redhat.com]
>> > > Sent: Tuesday, January 12, 2016 5:24 PM
>> > > To: Daniel P. Berrange ; openstack-
>> > > d...@lists.openstack.org
>> > > Cc: Jay Pipes ; Sean Mooney
>> > > ; Moshe Levi ; Sahid
>> > > Orentino Ferdjaoui ; Maxime Leroy
>> > > 
>> > > Subject: Re: [nova][neutron][os-vif] os-vif core review team
>> > > membership
>> > >
>> > > On 01/12/2016 10:15 AM, Daniel P. Berrange wrote:
>> > > > So far myself & Jay Pipes have been working on the initial os-vif
>> > > > prototype and setting up infrastructure for the project. Obviously
>> > > > we need more then just 2 people on a core team, and after looking at
>> > > > those who've expressed interest in os-vif, we came up with a
>> > > > cross-section of contributors across the Nova, Neutron and NFV
>> > > > spaces to be the initial core team:
>> > > >
>> > > >   Jay Pipes
>> > > >   Daniel Berrange
>> > > >   Sean Mooney
>> > > >   Moshe Levi
>> > > >   Russell Bryant
>> > > >   Sahid Ferdjaoui
>> > > >   Maxime Leroy
>> > > >
>> > > > So unless anyone wishes to decline the offer, once infra actually
>> > > > add me to the os-vif-core team I'll be making these people os-vif
>> > > > core, so we can move forward with the work on the library...

Thanks, I'm happy to help on os-vif too.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Maciej Kwiek
Igor,

I meant that symlinks also give us the benefit of not using additional
space (just as hardlinks do) while being able to link to files from
different filesystems.

Also, as Bartłomiej pointed out, the `h` switch for tar should do the trick
[1].

Cheers,
Maciej

[1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
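
For completeness, if shotgun ever builds the archive with Python's tarfile
module instead of shelling out to tar, dereference=True is the analogue of
the `h` switch (a sketch; the paths are illustrative):

    import tarfile

    # Follow symlinks and archive the files they point to, not the links.
    with tarfile.open('/var/www/nailgun/dump/fuel-snapshot.tar.gz', 'w:gz',
                      dereference=True) as tar:
        tar.add('/var/log', arcname='logs')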

On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski <
bpiotrow...@mirantis.com> wrote:

> Igor,
>
> I took a glance on Maciej's patch and it adds a switch to tar command to
> make it follow symbolic links, so it looks good to me.
>
> Bartłomiej
>
> On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky 
> wrote:
>
>> Hey Maceij -
>>
>> > About hardlinks - wouldn't it be better to use symlinks?
>> > This way we don't occupy more space than necessary
>>
>> AFAIK, hardlinks won't occupy much space. They are the links, after all.
>> :)
>>
>> As for symlinks, I'm afraid shotgun (and fabric underneath) won't
>> resolve them and links are get to snapshot As Is. That means if there
>> will be no content in the snapshot they are pointing to, they are
>> simply useless. Needs to be checked, though.
>>
>> - Igor
>>
>> On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 
>> wrote:
>> > Thanks for your insight guys!
>> >
>> > I agree with Oleg, I will see what I can do to make this work this way.
>> >
>> > About hardlinks - wouldn't it be better to use symlinks? This way we
>> don't
>> > occupy more space than necessary, and we can link to files and
>> directories
>> > that are in other block device than /var. Please see [1] review for a
>> > proposed change that introduces symlinks.
>> >
>> > This doesn't really give us much right now, because most of the logs are
>> > fetched from master node via ssh due to shotgun being run in mcollective
>> > container, but it's something! When we remove containers, this will
>> prove
>> > more useful.
>> >
>> > Regards,
>> > Maciej Kwiek
>> >
>> > [1] https://review.openstack.org/#/c/266964/
>> >
>> > On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh 
>> wrote:
>> >>
>> >> I think we need to find a way to:
>> >>
>> >> 1) verify the size of snapshot without actually making it and compare
>> to
>> >> the available disk space beforehand.
>> >> 2) refuse to create snapshot if space is insufficient and notify user
>> >> (otherwise it breaks Admin node as we have seen)
>> >> 3) provide a way to prioritize elements of the snapshot and exclude
>> them
>> >> based on the priorities or user choice.
>> >>
>> >> This will allow for better and safer UX with the snapshot.
>> >>
>> >> --
>> >> Best regards,
>> >> Oleg Gelbukh
>> >>
>> >> On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek 
>> wrote:
>> >>>
>> >>> Hi!
>> >>>
>> >>> I need some advice on how to tackle this issue. There is a bug [1]
>> >>> describing the problem with creating a diagnostic snapshot. The issue
>> is
>> >>> that /var/log has 100GB available, while /var (where diagnostic
>> snapshot is
>> >>> being generated - /var/www/nailgun/dump/fuel-snapshot according to
>> [2]) has
>> >>> 10GB available, so dumping the logs can be an issue when logs size
>> exceed
>> >>> free space in /var.
>> >>>
>> >>> There are several things we could do, but I am unsure on which course
>> to
>> >>> take. Should we
>> >>> a) Allocate more disk space for /var/www (or for whole /var)?
>> >>> b) Make the snapshot location share the diskspace of /var/log?
>> >>> c) Something else? What?
>> >>>
>> >>> Please share your thoughts on this.
>> >>>
>> >>> Cheers,
>> >>> Maciej Kwiek
>> >>>
>> >>> [1] https://bugs.launchpad.net/fuel/+bug/1529182
>> >>> [2]
>> >>>
>> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717
>> >>>
>> >>>
>> >>>
>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> 

Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-14 Thread Eric LEMOINE
On Wed, Jan 13, 2016 at 1:15 PM, Steven Dake (stdake)  wrote:
> Hey folks,
>
> I'd like to have a mailing list discussion about logistics of the ELKSTACK
> solution that Alicja has sorted out vs the Heka implementation that Eric is
> proposing.
>
> My take on that is Eric wants to replace rsyslog and logstash with Heka.


See my other email on this point.  At this point, given the
requirements we have (get logs from services that only speak syslog
and write logs to local files), we cannot guarantee that Heka will
replace Rsyslog.  We are going to test the use of Heka's UdpInput
(with "net" set to "unixgram") et FileOutput plugins for that.  Stay
tuned!


> That seems fine, but I want to make certain this doesn't happen in a way
> that leaves Kolla completely non-functional as we finish up Mitaka.  Liberty
> is the first version of Kolla people will deploy, and Mitaka is the first
> version of Kolla people will upgrade to, so making sure that we don't
> completely bust diagnostics is critical (and I recognize diags as-is are a
> little weak).
>
> It sounds like from my reading of the previous thread on this topic, unless
> there is some intractable problem, our goal is to use Heka to replace
> resyslog and logstash.  I'd ask inc0 (who did the rsyslog work) and Alicja
> (who did the elkstack work) to understand that replacement often happens on
> work that has already been done, and its not a "waste of time" so to speak
> as an evolution of the system.
>
> Here are the deadlines:
> http://docs.openstack.org/releases/schedules/mitaka.html
>
> Let me help decode that for folks. March 4th is the final deadline to have a
> completely working solution based upon Heka if it's to enter Mitaka.


Understood.


>
> Unlike previous releases of Kolla, I want to hand off release management of
> Kolla to the release management team, and to do that, we need to show a
> track record of hitting our deadlines and not adding features past feature
> freeze (the m3 milestone on March 4th).  In the past releases of Kolla we as
> a team were super loose on this requirement – going forward I prefer us
> being super strict.  Handing off to release management is a sign of maturity
> and would have an overall positive impact, assuming we can get the software
> written in time :)
>
> Eric,
>
> I'd like a plan and commitment to either hit Mitaka 3, or the N cycle.  It
> must work well first on Ansible, and second on Mesos.  If it doesn't work at
> all on Mesos, I could live with that -  I think the Mesos implementation
> will really not be ready for prime time until the middle or completion of
> the N cycle.  We lead with Ansible, and I don't see that changing any time
> soon – as a result, I want our Ansible deployment to be rock solid and
> usable out of the gate.  I don't expect to "Market" Mitaka Mesos (with the
> OpenStack foundation's help) as "production ready" but rather as "tech
> preview" and something for folks to evaluate.


It is our intent to meet the March 4th deadline.



>
> Alicja,
>
> I think a parallel development effort with the ELKSTACK that your working on
> makes sense.  In case the Heka development fails entirely, or misses Mitaka
> 3, I don't want us left lacking a diagnostics solution for Mitaka.
> Diagnostics is my priority #2 for Kolla (#1 is upgrades).  Unfortunately
> what this means is you may end up wasting your time doing development that
> is replaced at the last minute in Mitaka 3, or later in the N cycle.  This
> is very common in software development (all the code I wrote for Magnum has
> been sadly replaced).  I know you can be a good team player here and take
> one for the team so to speak, but I'm asking you if you would take offense
> to this approach.


I'd like to moderate this a bit.  We want to build on Alicja's work,
and we will reuse everything that Alicja has done/will do on
Elasticsearch and Kibana, as this part of the stack will be the same.



>
> I'd like comments/questions/concerns on the above logistics approach
> discussed, and a commitment from Eric as to when he thinks all the code
> would land as one patch stream unit.
>
> I'd also like to see the code come in as one super big patch stream (think
> 30 patches in the stream) so the work can be evaluated and merged as one
> unit.  I could also live with 2-3 different patch streams with 10-15 patches
> per stream, just so we can eval as a unit.  This means lots of rebasing on
> your part Eric ;-)  It also means a commitment from the core reviewer team
> to test and review this critical change.  If there isn't a core reviewer on
> board with this approach, please speak up now.


Makes total sense to me.


Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Bartlomiej Piotrowski
Igor,

I took a glance on Maciej's patch and it adds a switch to tar command to
make it follow symbolic links, so it looks good to me.

Bartłomiej

On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky 
wrote:

> Hey Maceij -
>
> > About hardlinks - wouldn't it be better to use symlinks?
> > This way we don't occupy more space than necessary
>
> AFAIK, hardlinks won't occupy much space. They are the links, after all. :)
>
> As for symlinks, I'm afraid shotgun (and fabric underneath) won't
> resolve them and links are get to snapshot As Is. That means if there
> will be no content in the snapshot they are pointing to, they are
> simply useless. Needs to be checked, though.
>
> - Igor
>
> On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 
> wrote:
> > Thanks for your insight guys!
> >
> > I agree with Oleg, I will see what I can do to make this work this way.
> >
> > About hardlinks - wouldn't it be better to use symlinks? This way we
> don't
> > occupy more space than necessary, and we can link to files and
> directories
> > that are in other block device than /var. Please see [1] review for a
> > proposed change that introduces symlinks.
> >
> > This doesn't really give us much right now, because most of the logs are
> > fetched from master node via ssh due to shotgun being run in mcollective
> > container, but it's something! When we remove containers, this will prove
> > more useful.
> >
> > Regards,
> > Maciej Kwiek
> >
> > [1] https://review.openstack.org/#/c/266964/
> >
> > On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh 
> wrote:
> >>
> >> I think we need to find a way to:
> >>
> >> 1) verify the size of snapshot without actually making it and compare to
> >> the available disk space beforehand.
> >> 2) refuse to create snapshot if space is insufficient and notify user
> >> (otherwise it breaks Admin node as we have seen)
> >> 3) provide a way to prioritize elements of the snapshot and exclude them
> >> based on the priorities or user choice.
> >>
> >> This will allow for better and safer UX with the snapshot.
> >>
> >> --
> >> Best regards,
> >> Oleg Gelbukh
> >>
> >> On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek 
> wrote:
> >>>
> >>> Hi!
> >>>
> >>> I need some advice on how to tackle this issue. There is a bug [1]
> >>> describing the problem with creating a diagnostic snapshot. The issue
> is
> >>> that /var/log has 100GB available, while /var (where diagnostic
> snapshot is
> >>> being generated - /var/www/nailgun/dump/fuel-snapshot according to
> [2]) has
> >>> 10GB available, so dumping the logs can be an issue when logs size
> exceed
> >>> free space in /var.
> >>>
> >>> There are several things we could do, but I am unsure on which course
> to
> >>> take. Should we
> >>> a) Allocate more disk space for /var/www (or for whole /var)?
> >>> b) Make the snapshot location share the diskspace of /var/log?
> >>> c) Something else? What?
> >>>
> >>> Please share your thoughts on this.
> >>>
> >>> Cheers,
> >>> Maciej Kwiek
> >>>
> >>> [1] https://bugs.launchpad.net/fuel/+bug/1529182
> >>> [2]
> >>>
> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Can Heka solve all the deficiencies in the current rsyslog implementation: was Re: [kolla] Introduction of Heka in Kolla

2016-01-14 Thread Eric LEMOINE
On Wed, Jan 13, 2016 at 1:27 PM, Steven Dake (stdake)  wrote:
> Eric,


Hi Steven


>
> Apologies for top post, not really sure where in this thread to post this
> list of questions as its sort of a change in topic so I changed the
> subject line :)
>
> 1.
> Somewhere I read when researching this Heka topic, that Heka cannot log
> all details from /dev/log.  Some services like mariadb for example don't
> log to stdout as I think Heka requires to operate correctly.  Would you
> mind responding on the question "Would Heka be able to effectively log
> every piece of information coming off the system related to OpenStack (our
> infrastructure services like ceph/mariadb/etc as well as the OpenStack
> services)?


My first reaction to this is: if we have services, such as mariadb,
that can only send their logs to syslog, then let's continue using
Rsyslog.  And with Rsyslog we can easily store logs on the local
filesystem as well (your requirement #3 below).

That being said, it appears that Heka supports reading logs from
/dev/log.  This can be done using the UdpInput plugin with "net" set
to "unixgram".  See
 for the original
issue.  Heka also supports writing logs to files on the local
filesystem, through the FileOutput plugin.  We do not currently use
the UdpInput plugin, so we need to test it and see if it can work for
Kolla.  We will work on these tests, and report back to the list.



> 2.
> Also, I want to make sure we can fix up the backtrace defeciency.
> Currently rsyslog doesn't log backtraces in python code.  Perhaps Sam or
> inc0 know the reason behind it, but I want to make sure we can fix up this
> annoyance, because backtraces are mightily important.


I've had a look at my AIO Kolla.  And I do see Python tracebacks in
OpenStack log files created by Rsyslog (in
/var/lib/docker/volumes/rsyslog/_data/nova/nova-api.log for example).
They're just on a single line, with "#012" used as the separator [*].
So they are hard to read, but they are there.  I think that is
consistent with what SamYaple and inc0 said yesterday on IRC.

[*] This is related to Rsyslog's $EscapeControlCharactersOnReceive
setting. See 
.
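
Un-mangling those lines is trivial if needed; a quick sketch:

    # Restore multi-line tracebacks from rsyslog output where
    # $EscapeControlCharactersOnReceive turned each newline into "#012".
    def unescape_rsyslog(line):
        return line.replace('#012', '\n')

    print(unescape_rsyslog('Traceback (most recent call last):#012 ...'))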


> 3.
> Also I want to make sure each node ends up with log files in a data
> container (or data volume or whatever we just recently replaced the data
> containers with) for all the services for individual node diagnostics.
> This helps fill the gap of the Kibana visualization and Elasticsearch
> where we may not have a perfect diagnostic solution at the conclusion of
> Mitaka and in need of individual node inspection of the logs.  Can Heka be
> made to do this?  Our rsyslog implementation does today, and its a hard
> requirement for the moment.  If we need some special software to run in
> addition to Heka, I could live with that.


That "special software" could be Rsyslog :)  Seriously, Rsyslog
provides a solution for getting logs from services that only log to
syslog.  We can also easily configure Rsyslog to write logs on the
local filesystem, as done in Kolla already today.  And using Heka we
can easily make Python tracebacks look good in Kibana.

I would like to point out that our initial intent was not to remove
Rsyslog.  Our intent was to propose a scalable/decentralized log
processing architecture based on Heka running on each node, instead of
relying on a centralized Logstash instance.  Using Heka we eliminate
the need to deploy and manage a resilient Logstash/Redis cluster.  And
it is to be noted that Heka gives us a lot of flexibility.  In
particular, Heka makes it possible to collect logs from services that
don't speak syslog (RabbitMQ for example, whose logs are not currently
collected!).

As mentioned above Heka provides plugins that we could possibly
leverage to remove Rsyslog completely, but at this point we cannot
guarantee that they will do the job.  Our coming tests will tell.

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-14 Thread Ihar Hrachyshka

Matt Riedemann  wrote:

I'm formally proposing that the nova-stable-maint team [1] adds Tony  
Breeds to the core team.


I don't have a way to track review status on stable branches, but there  
are review numbers from gerrit for stable/liberty [2] and stable/kilo [3].


I know that Tony does a lot of stable branch reviews and knows the  
backport policy well, and he's also helped out numerous times over the  
last year or so with fixing stable branch QA / CI issues (think gate  
wedge failures in stable/juno over the last 6 months). So I think Tony  
would be a great addition to the team.


So for those on the team already, please reply with a +1 or -1 vote.


I am not part of the nova-stable-maint group, so I will only wonder why we are  
not also making Tony part of the *stable-maint-core* team that supervises all  
project stable teams and has core access to all projects' stable branches.  
It seems like Tony is often key to unwedging stable gate breakages, and it  
could be useful for him to have more power in this regard.


Note: I don’t suggest we don’t add him to the nova-stable-maint group  
though. There is still value in having Tony in it in addition to  
stable-maint-core, to indicate his advanced involvement in nova-specific  
stable affairs.




[1] https://review.openstack.org/#/admin/groups/540,members
[2]  
https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/liberty+project:openstack/nova
[3]  
https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/kilo+project:openstack/nova


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Igor Kalnitsky
> I took a glance on Maciej's patch and it adds a switch to tar command
> to make it follow symbolic links

Yeah, that should work. Except one thing - we previously had fqdn ->
ipaddr links in snapshots. So now they will be resolved into full
copies?

> I meant that symlinks also give us the benefit of not using additional
> space (just as hardlinks do) while being able to link to files from
> different filesystems.

I'm sorry, I got you wrong. :)

- Igor

On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek  wrote:
> Igor,
>
> I meant that symlinks also give us the benefit of not using additional space
> (just as hardlinks do) while being able to link to files from different
> filesystems.
>
> Also, as Barłomiej pointed out the `h` switch for tar should do the trick
> [1].
>
> Cheers,
> Maciej
>
> [1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
>
> On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski
>  wrote:
>>
>> Igor,
>>
>> I took a glance on Maciej's patch and it adds a switch to tar command to
>> make it follow symbolic links, so it looks good to me.
>>
>> Bartłomiej
>>
>> On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky 
>> wrote:
>>>
>>> Hey Maceij -
>>>
>>> > About hardlinks - wouldn't it be better to use symlinks?
>>> > This way we don't occupy more space than necessary
>>>
>>> AFAIK, hardlinks won't occupy much space. They are the links, after all.
>>> :)
>>>
>>> As for symlinks, I'm afraid shotgun (and fabric underneath) won't
>>> resolve them and links are get to snapshot As Is. That means if there
>>> will be no content in the snapshot they are pointing to, they are
>>> simply useless. Needs to be checked, though.
>>>
>>> - Igor
>>>
>>> On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 
>>> wrote:
>>> > Thanks for your insight guys!
>>> >
>>> > I agree with Oleg, I will see what I can do to make this work this way.
>>> >
>>> > About hardlinks - wouldn't it be better to use symlinks? This way we
>>> > don't
>>> > occupy more space than necessary, and we can link to files and
>>> > directories
>>> > that are in other block device than /var. Please see [1] review for a
>>> > proposed change that introduces symlinks.
>>> >
>>> > This doesn't really give us much right now, because most of the logs
>>> > are
>>> > fetched from master node via ssh due to shotgun being run in
>>> > mcollective
>>> > container, but it's something! When we remove containers, this will
>>> > prove
>>> > more useful.
>>> >
>>> > Regards,
>>> > Maciej Kwiek
>>> >
>>> > [1] https://review.openstack.org/#/c/266964/
>>> >
>>> > On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh 
>>> > wrote:
>>> >>
>>> >> I think we need to find a way to:
>>> >>
>>> >> 1) verify the size of snapshot without actually making it and compare
>>> >> to
>>> >> the available disk space beforehand.
>>> >> 2) refuse to create snapshot if space is insufficient and notify user
>>> >> (otherwise it breaks Admin node as we have seen)
>>> >> 3) provide a way to prioritize elements of the snapshot and exclude
>>> >> them
>>> >> based on the priorities or user choice.
>>> >>
>>> >> This will allow for better and safer UX with the snapshot.
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Oleg Gelbukh
>>> >>
>>> >> On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek 
>>> >> wrote:
>>> >>>
>>> >>> Hi!
>>> >>>
>>> >>> I need some advice on how to tackle this issue. There is a bug [1]
>>> >>> describing the problem with creating a diagnostic snapshot. The issue
>>> >>> is
>>> >>> that /var/log has 100GB available, while /var (where diagnostic
>>> >>> snapshot is
>>> >>> being generated - /var/www/nailgun/dump/fuel-snapshot according to
>>> >>> [2]) has
>>> >>> 10GB available, so dumping the logs can be an issue when logs size
>>> >>> exceed
>>> >>> free space in /var.
>>> >>>
>>> >>> There are several things we could do, but I am unsure on which course
>>> >>> to
>>> >>> take. Should we
>>> >>> a) Allocate more disk space for /var/www (or for whole /var)?
>>> >>> b) Make the snapshot location share the diskspace of /var/log?
>>> >>> c) Something else? What?
>>> >>>
>>> >>> Please share your thoughts on this.
>>> >>>
>>> >>> Cheers,
>>> >>> Maciej Kwiek
>>> >>>
>>> >>> [1] https://bugs.launchpad.net/fuel/+bug/1529182
>>> >>> [2]
>>> >>>
>>> >>> https://github.com/openstack/fuel-web/blob/2855a9ba925c146b4802ab3cd2185f1dce2d8a6a/nailgun/nailgun/settings.yaml#L717
>>> >>>
>>> >>>
>>> >>>
>>> >>> __
>>> >>> OpenStack Development Mailing List (not for usage questions)
>>> >>> Unsubscribe:
>>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> __
>>> >> OpenStack 

Re: [openstack-dev] [TripleO] Removing unused/deprecated template parameters?

2016-01-14 Thread Steven Hardy
On Wed, Jan 13, 2016 at 02:16:44PM -0500, Dan Prince wrote:
> On Tue, 2016-01-12 at 20:47 +, Steven Hardy wrote:
> > Hi all,
> > 
> > I've noticed that we have a fairly large number of unused parameters
> > in
> > t-h-t, some of which are marked deprecated, some aren't.
> > 
> > Since we moved tripleoclient to use parameter_defaults everywhere, I
> > think
> > it should be safe to remove these unused parameters, even in
> > overcloud.yaml.
> > 
> > See:
> > 
> > https://review.openstack.org/#/c/227057/
> > 
> > Since those changes, we can pass removed/deprecated parameters from the
> > client and they will be ignored, even if they're removed from the
> > template (unlike if you use "parameters", where a validation error
> > would occur).
> > 
> > I'd like to go ahead and clean these up (only on the master branch),
> > is
> > that reasonable?  We can document the change via a mitaka release
> > note?
> 
> This sounds fine to me.
> 
> > 
> > Ideally, we'd have user-visible warnings for a deprecation period,
> > but
> > there's no way to output such warnings atm via heat, so we'd need to
> > wire
> > them in via tripleoclient or tripleo-common, which seems a bit
> > backwards
> > given that we can just remove the parameters there instead.
> > 
> > Thoughts?
> 
> Adding some sort of deprecation mechanism to Heat proper, perhaps an
> async way to communicate back to the end users that the parameters
> being used are deprecated would be the nicest option I think.

Yeah, this was discussed at summit and is definitely something we should
look into.

> Given the lack of that, we could design something into tripleo-common to
> do this, but it would require parsing all of the parameters and
> environments before sending them off to Heat, which seems to duplicate
> things. Not my favorite place for the functionality to live, but it
> could be doable in tripleo-common I think, as an additional deployment
> workflow step.
> 
> Perhaps one thing that might make sense is to create a
> deprecated_params.txt file somewhere to track these for each release
> cycle? Maybe this lives in t-h-t? I'm not sure we get a lot of value
> out of maintaining this though unless we intend to test for deprecated
> parameters in some fashion and display them during the deployment
> workflow.

Another option would be to add deprecated parameters to a parameter group
in the template, then it becomes easy for any client/common code to output
a warning when these are passed in parameter_defaults?
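Something like this minimal sketch is all the client side would need (the
'deprecated' group label and the parameter names are illustrative, not an
agreed convention):

    import yaml

    template = yaml.safe_load("""
    parameter_groups:
      - label: deprecated
        description: Parameters that will be removed in a future release
        parameters: [NeutronL3HA, NeutronDVR]
    """)

    def deprecated_params(template):
        for group in template.get('parameter_groups', []):
            if group.get('label') == 'deprecated':
                return set(group.get('parameters', []))
        return set()

    user_input = {'NeutronL3HA': True, 'NtpServer': 'pool.ntp.org'}
    # Warn about any user-supplied parameter that sits in the group.
    for name in deprecated_params(template) & set(user_input):
        print('WARNING: parameter %s is deprecated' % name)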

Regardless, it sounds like we have a consensus that it's OK to remove the
currently unused parameters (because we're pretty confident the only
consumer of these is tripleoclient atm), but in future we need to put in
place a more robust deprecation method?

If that's correct, I'll go ahead and propose a series of patches which
removes the currently unused parameters.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Maciej Kwiek
Igor,

I will investigate this, thanks!

Artem,

I guess that if we have an untrusted user on the master node, they could
just put whatever they want into /var/log and have it end up in the
snapshot, without having to carefully time the attack against the tar run.

I want to use links for directories, as this saves me the trouble of
creating hardlinks for every single file in the directory. Although, with
how exclusion is currently implemented, it could end up deleting log files
from the original directories - I need to check this out.
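
For the record, a minimal sketch (paths are made up) of the failure mode I
want to rule out - deleting an "excluded" file through a symlinked
directory removes the original log:

    import os
    import tempfile

    src = tempfile.mkdtemp()   # stands in for /var/log/docker-logs/remote
    open(os.path.join(src, 'a.log'), 'w').close()

    dump = tempfile.mkdtemp()  # stands in for the snapshot staging dir
    link = os.path.join(dump, 'remote')
    os.symlink(src, link)

    # "Excluding" by deletion via the symlinked directory hits the original:
    os.remove(os.path.join(link, 'a.log'))
    print(os.listdir(src))     # [] - the original log file is gone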

About your PS: the whole /var/log on the master node (not in the container)
is currently downloaded; I think we shouldn't change this, as we plan to
drop containers in 9.0.

Cheers,
Maciej

On Thu, Jan 14, 2016 at 12:32 PM, Artem Panchenko 
wrote:

> Hi,
>
> using symlinks is a bit dangerous, here is a quote from the man you
> mentioned [0]:
>
> > The `--dereference' option is unsafe if an untrusted user can modify
> directories while tar is running.
>
> Hard links usage is much safer, because you can't use them for
> directories. But at the same time implementation in shotgun would be more
> complicated than with symlinks.
>
> Anyway, in order to determine what linking to use we need to decide where
> (/var/log or another partition) diagnostic snapshot will be stored.
>
> p.s.
>
> >This doesn't really give us much right now, because most of the logs are 
> >fetched from master node via ssh due to shotgun being run in mcollective 
> >container
>
>
> AFAIK '/var/log/docker-logs/' is available from mcollective container and
> mounted to /var/log/:
>
> [root@fuel-lab-cz5557 ~]# dockerctl shell mcollective mount -l | grep
> os-varlog
> /dev/mapper/os-varlog on /var/log type ext4
> (rw,relatime,stripe=128,data=ordered)
>
> In my experience, the '/var/log/docker-logs/remote' folder is the
> 'heaviest' thing in the snapshot.
>
> [0] http://www.gnu.org/software/tar/manual/html_node/dereference.html
>
> Thanks!
>
>
> On 14.01.16 13:00, Igor Kalnitsky wrote:
>
> I took a glance on Maciej's patch and it adds a switch to tar command
> to make it follow symbolic links
>
> Yeah, that should work. Except one thing - we previously had fqdn ->
> ipaddr links in snapshots. So now they will be resolved into full
> copies?
>
>
> I meant that symlinks also give us the benefit of not using additional
> space (just as hardlinks do) while being able to link to files from
> different filesystems.
>
> I'm sorry, I got you wrong. :)
>
> - Igor
>
> On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek  
>  wrote:
>
> Igor,
>
> I meant that symlinks also give us the benefit of not using additional space
> (just as hardlinks do) while being able to link to files from different
> filesystems.
>
> Also, as Bartłomiej pointed out, the `h` switch for tar should do the trick
> [1].
>
> Cheers,
> Maciej
>
> [1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
>
> On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej 
> Piotrowski  wrote:
>
> Igor,
>
> I took a glance on Maciej's patch and it adds a switch to tar command to
> make it follow symbolic links, so it looks good to me.
>
> Bartłomiej
>
> On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky  
> 
> wrote:
>
> Hey Maciej -
>
>
> About hardlinks - wouldn't it be better to use symlinks?
> This way we don't occupy more space than necessary
>
> AFAIK, hardlinks won't occupy much space. They are the links, after all.
> :)
>
> As for symlinks, I'm afraid shotgun (and fabric underneath) won't
> resolve them, and the links will get into the snapshot as-is. That means
> if the content they point to is not in the snapshot, they are
> simply useless. Needs to be checked, though.
>
> - Igor
>
> On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek  
> 
> wrote:
>
> Thanks for your insight guys!
>
> I agree with Oleg, I will see what I can do to make this work this way.
>
> About hardlinks - wouldn't it be better to use symlinks? This way we
> don't
> occupy more space than necessary, and we can link to files and
> directories
> that are in other block device than /var. Please see [1] review for a
> proposed change that introduces symlinks.
>
> This doesn't really give us much right now, because most of the logs
> are
> fetched from master node via ssh due to shotgun being run in
> mcollective
> container, but it's something! When we remove containers, this will
> prove
> more useful.
>
> Regards,
> Maciej Kwiek
>
> [1] https://review.openstack.org/#/c/266964/
>
> On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh  
> 
> wrote:
>
> I think we need to find a way to:
>
> 1) verify the size of snapshot without actually making it and compare
> to
> the available disk space beforehand.
> 2) refuse to create snapshot if space is insufficient and notify user
> (otherwise it breaks Admin node as 

Re: [openstack-dev] [Keystone][Neutron][Nova][devstack] Keystone v3 with "old" clients

2016-01-14 Thread Henrique Truta
Hi, did exporting the variables solve your problem? I'm working on
improving the support of v3 in devstack, like the openrc you've mentioned.
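
For reference, in Python the same thing looks like this with keystoneauth -
the two domain arguments are exactly what the OS_USER_DOMAIN_ID and
OS_PROJECT_DOMAIN_ID variables feed (URL and credentials are made up):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo',
                       password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    print(sess.get_token())

Without the domain bits, a v3 password auth request is incomplete, which is
what the clients trip over.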

On Thu, Jan 14, 2016 at 06:50, Akihiro Motoki 
wrote:

> devstack creates /etc/openstack/clouds.yaml (an os-client-config
> configuration file) which specifies to use keystone v3.
> neutronclient supports os-client-config and keystoneauth, which handle
> the differences between keystone API versions.
> Note that clouds.yaml is a very convenient way to use the OpenStack CLI [1].
>
> As Michal commented, you can also use OS_PROJECT_DOMAIN_xx and
> OS_USER_DOMAIN_xx
> for keystone v3 API.
>
> [1]
> http://docs.openstack.org/developer/python-neutronclient/usage/cli.html#using-with-os-client-config
>
> Akihiro
>
> 2016-01-14 18:13 GMT+09:00 Michal Rostecki :
> > On 01/12/2016 02:10 PM, Smigiel, Dariusz wrote:
> >>
> >> Hello,
> >> I'm trying to gather all the info necessary to migrate to keystone v3 in
> >> Neutron.
> >> When I've started to looking through possible problems with clients, it
> >> occurred that 'neutron' and 'nova' clients do not want to operate with
> >> Keystone v3.
> >> For keystone client, it's explicit written, that this version is
> >> deprecated and not supported, so it's not working with Keystone API v3.
> But
> >> for nova and neutron, there is nothing.
> >> I didn't see any place where I can find info, that "old" clients
> shouldn't
> >> be used with Keystone API v3.
> >>
> >> Am I doing something wrong?
> >>
> >> http://paste.openstack.org/show/483568/
> >>
> >
> > Hi,
> >
> > Looks like you're missing OS_PROJECT_DOMAIN_ID and OS_USER_DOMAIN_ID env
> > variables, needed for Keystone v3.
> >
> > Unfortunately, I don't see them in devstack's openrc[1]. Maybe it's a
> good
> > moment to add them here.
> >
> > [1] https://github.com/openstack-dev/devstack/blob/master/openrc
> >
> > Cheers,
> > Michal
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Further closing the holes that let gate breakage happen

2016-01-14 Thread Neil Jerram
On 14/01/16 12:01, Davanum Srinivas wrote:
> Neil,
>
> The global requirements upper-constraints.txt do not cover neutron
> unit test targets. So the unit tests pick up latest from pypi.

I'm afraid I don't understand how that's related to my question below. 
Could you explain further?

It seems you might be saying that upper-constraints.txt should have no
effect on Neutron UTs.  But my understanding from Carl's message is that
an upper-constraints.txt change caused a Neutron UT (running as part of
a gate job) to fail.  So I'm not sure how to understand your statement.

Thanks,
Neil


>
> -- Dims
>
>
>
> On Thu, Jan 14, 2016 at 6:51 AM, Neil Jerram  
> wrote:
>> On 13/01/16 19:27, Carl Baldwin wrote:
>>> Hi,
>>>
>>> I was looking at the most recent gate breakage in Neutron [1], fixed
>>> by [2].  This gate breakage was held off for some time by the
>>> upper-constraints.txt file.   This is great progress and I applaud it.
>>> I'll continue to cheer on this effort.
>>>
>>> Now to the next problem.   If my assessment of this gate failure is
>>> correct, the update to the upper-constraints file [3] was merged
>>> without running all of the tests across all of the projects that would
>>> be broken by bringing in this new constraint.  So, we still get
>>> breakage and it is still (IMO) too often.
>>>
>>> As I see it, there are a couple of options.
>>>
>>> 1) We run all tests under the upper-constraints control on all updates
>>> to the upper constraints file like [2].  This would probably mean each
>>> update has a very long list of tests and we would require that they
>>> all be fixed before the upper constraint update can be merged.  This
>>> seems like a difficult thing to coordinate all at once.
>>> 2) We handle upper-constraints much like we do the global requirements
>>> updates.  We have the master and a bot that proposes updates to it out
>>> to the individual projects.  This would create a situation where
>>> projects are out of sync with the master but I think if we froze the
>>> master early enough, we could have time to reconcile before release.
>>> 3) We continue to allow changes in the upper constraints to break
>>> individual projects.
>>>
>>> Are there options that I missed?  What is your opinion?  In my
>>> opinion, gate breakage happens a bit too often and the effect on the
>>> community is widespread.  I'd like to contain it even a little bit
>>> more.
>>>
>>> Carl
>>>
>>> [1] https://bugs.launchpad.net/neutron/+bug/1533638
>>> [2] https://review.openstack.org/#/c/266885/
>>> [3] https://review.openstack.org/#/c/266042/
>> I've only just started to learn about requirements and constraints, so I
>> may be misunderstanding.  However,
>> https://github.com/openstack/requirements/blob/master/README.rst says:
>>
>>> For upper-constraints.txt changes
>>>
>>> If the change was proposed by the OpenStack CI bot, then if the
>>> change has passed CI, only one reviewer is needed and they should +2
>>> +A without thinking about things.
>>>
>>> If the change was not proposed by the OpenStack CI bot, and does not
>>> include a global-requirements.txt change, then it should be rejected:
>>> the CI bot will generate an appropriate change itself. Ask in
>>> #openstack-infra if the bot needs to be run more quickly.
>> Doesn't that mean that [3] should have been rejected, and hence already
>> cover the recent situation?
>>
>> Neil
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] "Upstream Development" track at the Austin OpenStack Summit

2016-01-14 Thread Sean Dague
On 01/13/2016 05:58 AM, Thierry Carrez wrote:
> Hi everyone,
> 
> As you may have already noticed in the CFP[1], in Austin for the first
> time we'll have a conference track clearly targeted to upstream
> OpenStack developers (us all on this mailing-list). It will run on the
> Monday, /before/ the design summit tracks start, so it should actually
> be possible to attend them!
> 
> Presentations in this track will be specifically tailored to the people
> who write OpenStack source code. We should be able to learn about new
> development processes, get more information on tools that the
> infrastructure team gives us, discover new features in oslo libraries
> (or elsewhere) that your own OpenStack project could take advantage of,
> or share development best practices and other cool tricks.
> 
> So if you have a topic you feel is a good fit for that audience, feel
> free to submit[1] a talk for the "Upstream development" track! The
> deadline is February 1st. If you only have an idea of something that
> would make a great talk (but you aren't the best person to give it),
> then you can dump your idea on the brainstorming etherpad[2], it may
> inspire others.
> 
> [1] https://www.openstack.org/summit/austin-2016/call-for-speakers/
> [2] https://etherpad.openstack.org/p/austin-upstream-dev-track-ideas
> 
> Cheers!

It would be really nice for this track to also be a "pull topics" track.

So everyone should think about the 1 thing that they really wish they
knew more about in OpenStack, especially if it's something slightly
outside their normal experience in the community. The real goal here is
to cross educate each other on the project.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Further closing the holes that let gate breakage happen

2016-01-14 Thread Davanum Srinivas
Neil,

Apologies, you are right: test_bash_completion is a unit test, and the
neutron failure is in "gate-neutron-python27-constraints". So yes, that
was broken by the change to upper-constraints.txt.

https://review.openstack.org/#/c/266042/ is a valid request, as it has
both a global-requirements change and an upper-constraints change, even
though it was not proposed by the bot.

Thanks,
dims

On Thu, Jan 14, 2016 at 7:09 AM, Neil Jerram  wrote:
> On 14/01/16 12:01, Davanum Srinivas wrote:
>> Neil,
>>
>> The global requirements upper-constraints.txt do not cover neutron
>> unit test targets. So the unit tests pick up latest from pypi.
>
> I'm afraid I don't understand how that's related to my question below.
> Could you explain further?
>
> It seems you might be saying that upper-constraints.txt should have no
> effect on Neutron UTs.  But my understanding from Carl's message is that
> an upper-constraints.txt change caused a Neutron UT (running as part of
> a gate job) to fail.  So I'm not sure how to understand your statement.
>
> Thanks,
> Neil
>
>
>>
>> -- Dims
>>
>>
>>
>> On Thu, Jan 14, 2016 at 6:51 AM, Neil Jerram  
>> wrote:
>>> On 13/01/16 19:27, Carl Baldwin wrote:
 Hi,

 I was looking at the most recent gate breakage in Neutron [1], fixed
 by [2].  This gate breakage was held off for some time by the
 upper-constraints.txt file.   This is great progress and I applaud it.
 I'll continue to cheer on this effort.

 Now to the next problem.   If my assessment of this gate failure is
 correct, the update to the upper-constraints file [3] was merged
 without running all of the tests across all of the projects that would
 be broken by bringing in this new constraint.  So, we still get
 breakage and it is still (IMO) too often.

 As I see it, there are a couple of options.

 1) We run all tests under the upper-constraints control on all updates
 to the upper constraints file like [2].  This would probably mean each
 update has a very long list of tests and we would require that they
 all be fixed before the upper constraint update can be merged.  This
 seems like a difficult thing to coordinate all at once.
 2) We handle upper-constraints much like we do the global requirements
 updates.  We have the master and a bot that proposes updates to it out
 to the individual projects.  This would create a situation where
 projects are out of sync with the master but I think if we froze the
 master early enough, we could have time to reconcile before release.
 3) We continue to allow changes in the upper constraints to break
 individual projects.

 Are there options that I missed?  What is your opinion?  In my
 opinion, gate breakage happens a bit too often and the effect on the
 community is widespread.  I'd like to contain it even a little bit
 more.

 Carl

 [1] https://bugs.launchpad.net/neutron/+bug/1533638
 [2] https://review.openstack.org/#/c/266885/
 [3] https://review.openstack.org/#/c/266042/
>>> I've only just started to learn about requirements and constraints, so I
>>> may be misunderstanding.  However,
>>> https://github.com/openstack/requirements/blob/master/README.rst says:
>>>
 For upper-constraints.txt changes

 If the change was proposed by the OpenStack CI bot, then if the
 change has passed CI, only one reviewer is needed and they should +2
 +A without thinking about things.

 If the change was not proposed by the OpenStack CI bot, and does not
 include a global-requirements.txt change, then it should be rejected:
 the CI bot will generate an appropriate change itself. Ask in
 #openstack-infra if the bot needs to be run more quickly.
>>> Doesn't that mean that [3] should have been rejected, and hence already
>>> cover the recent situation?
>>>
>>> Neil
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Jakub Libosvar
On 01/14/2016 12:38 AM, Matthew Treinish wrote:
> On Wed, Jan 13, 2016 at 10:47:21PM +, Sean M. Collins wrote:
>> On Wed, Jan 13, 2016 at 03:57:37PM CST, Mooney, Sean K wrote:
>>> One of the ideas that I have been thinking about over the last month
>>> or two is: do we want to create a dedicated library file in devstack
>>> to support the compilation and installation of OVS?
>>
>> So, my suggestion is as follows: create a new devstack plugin that is
>> *specifically* geared towards just compiling OVS and installing it to
>> get the bits that you need.
> 
> +1, you do not want a monolithic plugin that does everything. Creating a
> separate small plugin to install ovs from source and the other pieces 
> necessary
> for doing that is the best path for enabling this.
If I understand correctly, this requires either creating a new git
repository that just provides a function to compile OVS, or adding it to
some already existing project like OVS itself.

Given that we will likely not even use it with devstack but only in
gate_hook.sh for the functional job, I don't see the point in creating a
new repository for one bash function or any "small plugin".

I know it was me who originally put the code in the Neutron devstack
plugin. So it seems like the thing we need to solve is "Where will it
live?". How about creating a separate script in tools?

> 
>> I'm just concerned about the feature creep
>> that is happening in the Neutron DevStack plugin ( which I didn't like in
>> the first place ) where now every little thing is getting proposed
>> against it.
>>
>> I'd prefer to see small, very specific DevStack plugins that have narrow
>> focus, and jobs that need them for specific things adding them to their
>> local.conf settings explicitly via enable_repo lines.
> 
> This is the intended way to use plugins. Dumping everything but the kitchen
> sink into a single plugin is just going to tightly couple too much and lead
> to an undebugable tangled ball of yarn. The idea with plugins is to have
> smaller plugins that can be used in conjunction to configure and enable
> various additional pieces.
> 
>>
>> The concern I have with compiling bleeding edge OVS and then running our
>> Neutron jobs is that yes, we get new features, but yes we also get
>> the newest bugs and the configuration matrix for Neutron now gets a new
>> dimension of 'packaged OVS versus git commit SHA'
>>
> 
> I don't think you ever want to have a gating job with OVS from source.
> There is way too much potential instability when building something like this
> from source to rely on it. An experimental job would probably be as far as I
> would go with something like this. Going any further than that is just asking
> for trouble.
An experimental job sounds like the best candidate, but it also brings
disadvantages: the new code stays prone to regressions, as tests for new
features will need to be skipped in the normal gate jobs.

In the functional tests we don't use any complex scenarios on OVS. They
mostly exercise interface drivers with simple steps like creating a bridge
and plugging an interface into it. I believe this is a pretty basic thing
that works even in an unstable development branch of OVS.

Thanks for ideas.
Kuba

> 
> -Matt Treinish
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Further closing the holes that let gate breakage happen

2016-01-14 Thread Neil Jerram
On 13/01/16 19:27, Carl Baldwin wrote:
> Hi,
>
> I was looking at the most recent gate breakage in Neutron [1], fixed
> by [2].  This gate breakage was held off for some time by the
> upper-constraints.txt file.   This is great progress and I applaud it.
> I'll continue to cheer on this effort.
>
> Now to the next problem.   If my assessment of this gate failure is
> correct, the update to the upper-constraints file [3] was merged
> without running all of the tests across all of the projects that would
> be broken by bringing in this new constraint.  So, we still get
> breakage and it is still (IMO) too often.
>
> As I see it, there are a couple of options.
>
> 1) We run all tests under the upper-constraints control on all updates
> to the upper constraints file like [2].  This would probably mean each
> update has a very long list of tests and we would require that they
> all be fixed before the upper constraint update can be merged.  This
> seems like a difficult thing to coordinate all at once.
> 2) We handle upper-constraints much like we do the global requirements
> updates.  We have the master and a bot that proposes updates to it out
> to the individual projects.  This would create a situation where
> projects are out of sync with the master but I think if we froze the
> master early enough, we could have time to reconcile before release.
> 3) We continue to allow changes in the upper constraints to break
> individual projects.
>
> Are there options that I missed?  What is your opinion?  In my
> opinion, gate breakage happens a bit too often and the effect on the
> community is widespread.  I'd like to contain it even a little bit
> more.
>
> Carl
>
> [1] https://bugs.launchpad.net/neutron/+bug/1533638
> [2] https://review.openstack.org/#/c/266885/
> [3] https://review.openstack.org/#/c/266042/

I've only just started to learn about requirements and constraints, so I
may be misunderstanding.  However,
https://github.com/openstack/requirements/blob/master/README.rst says:

> For upper-constraints.txt changes
>
> If the change was proposed by the OpenStack CI bot, then if the
> change has passed CI, only one reviewer is needed and they should +2
> +A without thinking about things.
>
> If the change was not proposed by the OpenStack CI bot, and does not
> include a global-requirements.txt change, then it should be rejected:
> the CI bot will generate an appropriate change itself. Ask in
> #openstack-infra if the bot needs to be run more quickly.

Doesn't that mean that [3] should have been rejected, and hence already
cover the recent situation?

Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Further closing the holes that let gate breakage happen

2016-01-14 Thread Davanum Srinivas
Neil,

The global requirements upper-constraints.txt do not cover neutron
unit test targets. So the unit tests pick up latest from pypi.

-- Dims



On Thu, Jan 14, 2016 at 6:51 AM, Neil Jerram  wrote:
> On 13/01/16 19:27, Carl Baldwin wrote:
>> Hi,
>>
>> I was looking at the most recent gate breakage in Neutron [1], fixed
>> by [2].  This gate breakage was held off for some time by the
>> upper-constraints.txt file.   This is great progress and I applaud it.
>> I'll continue to cheer on this effort.
>>
>> Now to the next problem.   If my assessment of this gate failure is
>> correct, the update to the upper-constraints file [3] was merged
>> without running all of the tests across all of the projects that would
>> be broken by bringing in this new constraint.  So, we still get
>> breakage and it is still (IMO) too often.
>>
>> As I see it, there are a couple of options.
>>
>> 1) We run all tests under the upper-constraints control on all updates
>> to the upper constraints file like [2].  This would probably mean each
>> update has a very long list of tests and we would require that they
>> all be fixed before the upper constraint update can be merged.  This
>> seems like a difficult thing to coordinate all at once.
>> 2) We handle upper-constraints much like we do the global requirements
>> updates.  We have the master and a bot that proposes updates to it out
>> to the individual projects.  This would create a situation where
>> projects are out of sync with the master but I think if we froze the
>> master early enough, we could have time to reconcile before release.
>> 3) We continue to allow changes in the upper constraints to break
>> individual projects.
>>
>> Are there options that I missed?  What is your opinion?  In my
>> opinion, gate breakage happens a bit too often and the effect on the
>> community is widespread.  I'd like to contain it even a little bit
>> more.
>>
>> Carl
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1533638
>> [2] https://review.openstack.org/#/c/266885/
>> [3] https://review.openstack.org/#/c/266042/
>
> I've only just started to learn about requirements and constraints, so I
> may be misunderstanding.  However,
> https://github.com/openstack/requirements/blob/master/README.rst says:
>
>> For upper-constraints.txt changes
>>
>> If the change was proposed by the OpenStack CI bot, then if the
>> change has passed CI, only one reviewer is needed and they should +2
>> +A without thinking about things.
>>
>> If the change was not proposed by the OpenStack CI bot, and does not
>> include a global-requirements.txt change, then it should be rejected:
>> the CI bot will generate an appropriate change itself. Ask in
>> #openstack-infra if the bot needs to be run more quickly.
>
> Doesn't that mean that [3] should have been rejected, and hence already
> cover the recent situation?
>
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gate failure

2016-01-14 Thread Ihar Hrachyshka

UPD: both fixes merged; it should now be safe to recheck neutron patches.

Doug Wiegley  wrote:

This fix failed to merge due to a new regression in how another job is
using dib. Here's a patch making that job non-voting until it gets debugged:


https://review.openstack.org/267223

The fail stack is now two deep.

Doug


On Jan 13, 2016, at 11:41 AM, Armando M.  wrote:


It's the usual time of the week where I submit the dreaded email

Please do not push anything in the queue until change [1] merges.

Cheers,
Armando

[1] https://review.openstack.org/#/c/266885/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-14 Thread Artem Panchenko

Hi,

using symlinks is a bit dangerous, here is a quote from the man you 
mentioned [0]:


> The `--dereference' option is unsafe if an untrusted user can modify
directories while `tar' is running.


Using hard links is much safer, because you can't create them for
directories. But at the same time the implementation in shotgun would be
more complicated than with symlinks.
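
For illustration, a rough Python sketch (paths are illustrative, this is
not shotgun's actual code) of what the hard-link variant has to do, and
why it only works within one filesystem:

    import os

    def hardlink_tree(src_root, dst_root):
        # Hard links can't point at directories, so the tree itself is
        # recreated and every file is linked individually.
        for root, _dirs, files in os.walk(src_root):
            rel = os.path.relpath(root, src_root)
            os.makedirs(os.path.join(dst_root, rel), exist_ok=True)
            for name in files:
                # os.link raises OSError(errno.EXDEV) if source and
                # destination live on different filesystems - the other
                # limitation compared to symlinks.
                os.link(os.path.join(root, name),
                        os.path.join(dst_root, rel, name))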


Anyway, in order to determine which kind of linking to use, we need to
decide where (/var/log or another partition) the diagnostic snapshot will
be stored.


p.s.


This doesn't really give us much right now, because most of the logs are 
fetched from master node via ssh due to shotgun being run in mcollective 
container


AFAIK '/var/log/docker-logs/' is available from mcollective container 
and mounted to /var/log/:


[root@fuel-lab-cz5557 ~]# dockerctl shell mcollective mount -l | grep 
os-varlog
/dev/mapper/os-varlog on /var/log type ext4 
(rw,relatime,stripe=128,data=ordered)


In my experience, the '/var/log/docker-logs/remote' folder is the
'heaviest' thing in the snapshot.


[0] http://www.gnu.org/software/tar/manual/html_node/dereference.html

Thanks!

On 14.01.16 13:00, Igor Kalnitsky wrote:

I took a glance on Maciej's patch and it adds a switch to tar command
to make it follow symbolic links

Yeah, that should work. Except one thing - we previously had fqdn ->
ipaddr links in snapshots. So now they will be resolved into full
copies?


I meant that symlinks also give us the benefit of not using additional
space (just as hardlinks do) while being able to link to files from
different filesystems.

I'm sorry, I got you wrong. :)

- Igor

On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek  wrote:

Igor,

I meant that symlinks also give us the benefit of not using additional space
(just as hardlinks do) while being able to link to files from different
filesystems.

Also, as Bartłomiej pointed out, the `h` switch for tar should do the trick
[1].

Cheers,
Maciej

[1] http://www.gnu.org/software/tar/manual/html_node/dereference.html

On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski
 wrote:

Igor,

I took a glance on Maciej's patch and it adds a switch to tar command to
make it follow symbolic links, so it looks good to me.

Bartłomiej

On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky 
wrote:

Hey Maciej -


About hardlinks - wouldn't it be better to use symlinks?
This way we don't occupy more space than necessary

AFAIK, hardlinks won't occupy much space. They are the links, after all.
:)

As for symlinks, I'm afraid shotgun (and fabric underneath) won't
resolve them, and the links will get into the snapshot as-is. That means
if the content they point to is not in the snapshot, they are
simply useless. Needs to be checked, though.

- Igor

On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 
wrote:

Thanks for your insight guys!

I agree with Oleg, I will see what I can do to make this work this way.

About hardlinks - wouldn't it be better to use symlinks? This way we
don't
occupy more space than necessary, and we can link to files and
directories
that are in other block device than /var. Please see [1] review for a
proposed change that introduces symlinks.

This doesn't really give us much right now, because most of the logs
are
fetched from master node via ssh due to shotgun being run in
mcollective
container, but it's something! When we remove containers, this will
prove
more useful.

Regards,
Maciej Kwiek

[1] https://review.openstack.org/#/c/266964/

On Tue, Jan 12, 2016 at 1:51 PM, Oleg Gelbukh 
wrote:

I think we need to find a way to:

1) verify the size of snapshot without actually making it and compare
to
the available disk space beforehand.
2) refuse to create snapshot if space is insufficient and notify user
(otherwise it breaks Admin node as we have seen)
3) provide a way to prioritize elements of the snapshot and exclude
them
based on the priorities or user choice.

This will allow for better and safer UX with the snapshot.

--
Best regards,
Oleg Gelbukh

On Tue, Jan 12, 2016 at 1:47 PM, Maciej Kwiek 
wrote:

Hi!

I need some advice on how to tackle this issue. There is a bug [1]
describing the problem with creating a diagnostic snapshot. The issue
is
that /var/log has 100GB available, while /var (where diagnostic
snapshot is
being generated - /var/www/nailgun/dump/fuel-snapshot according to
[2]) has
10GB available, so dumping the logs can be an issue when logs size
exceed
free space in /var.

There are several things we could do, but I am unsure on which course
to
take. Should we
a) Allocate more disk space for /var/www (or for whole /var)?
b) Make the snapshot location share the diskspace of /var/log?
c) Something else? What?

Please share your thoughts on this.

Cheers,
Maciej Kwiek

[1] https://bugs.launchpad.net/fuel/+bug/1529182
[2]


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-14 Thread Steven Hardy
On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> Hey all,
> 
> I realize now from the title of the other TripleO/Mistral thread [1] that
> the discussion there may have gotten confused.  I think using Mistral for
> TripleO processes that are obviously workflows - stack deployment, node
> registration - makes perfect sense.  That thread is exploring practicalities
> for doing that, and I think that's great work.
> 
> What I inappropriately started to address in that thread was a somewhat
> orthogonal point that Dan asked in his original email, namely:
> 
> "what it might look like if we were to use Mistral as a replacement for the
> TripleO API entirely"
> 
> I'd like to create this thread to talk about that; more of a 'should we'
> than 'can we'.  And to do that, I want to indulge in a thought exercise
> stemming from an IRC discussion with Dan and others.  All, please correct me
> if I've misstated anything.
> 
> The IRC discussion revolved around one use case: deploying a Heat stack
> directly from a Swift container.  With an updated patch, the Heat CLI can
> support this functionality natively.  Then we don't need a TripleO API; we
> can use Mistral to access that functionality, and we're done, with no need
> for additional code within TripleO.  And, as I understand it, that's the
> true motivation for using Mistral instead of a TripleO API: avoiding custom
> code within TripleO.
> 
> That's definitely a worthy goal... except from my perspective, the story
> doesn't quite end there.  A GUI needs additional functionality, which boils
> down to: understanding the Heat deployment templates in order to provide
> options for a user; and persisting those options within a Heat environment
> file.
> 
> Right away I think we hit a problem.  Where does the code for 'understanding
> options' go?  Much of that understanding comes from the capabilities map
> in tripleo-heat-templates [2]; it would make sense to me that responsibility
> for that would fall to a TripleO library.
> 
> Still, perhaps we can limit the amount of TripleO code.  So to give API
> access to 'getDeploymentOptions', we can create a Mistral workflow.
> 
>   Retrieve Heat templates from Swift -> Parse capabilities map
> 
> Which is fine-ish, except from an architectural perspective
> 'getDeploymentOptions' violates the abstraction layer between storage and
> business logic, a problem that is compounded because 'getDeploymentOptions'
> is not the only functionality that accesses the Heat templates and needs
> exposure through an API.  And, as has been discussed on a separate TripleO
> thread, we're not even sure Swift is sufficient for our needs; one possible
> consideration right now is allowing deployment from templates stored in
> multiple places, such as the file system or git.

Actually, that whole capabilities map thing is a workaround for a missing
feature in Heat, which I have proposed, but am having a hard time reaching
consensus on within the Heat community:

https://review.openstack.org/#/c/196656/

Given that is a large part of what's anticipated to be provided by the
proposed TripleO API, I'd welcome feedback and collaboration so we can move
that forward, vs solving only for TripleO.

> Are we going to have duplicate 'getDeploymentOptions' workflows for each
> storage mechanism?  If we consolidate the storage code within a TripleO
> library, do we really need a *workflow* to call a single function?  Is a
> thin TripleO API that contains no additional business logic really so bad
> at that point?

Actually, this is an argument for making the validation part of the
deployment a workflow - then the interface with the storage mechanism
becomes more easily pluggable vs baked into an opaque-to-operators API.

E.g, in the long term, imagine the capabilities feature exists in Heat, you
then have a pre-deployment workflow that looks something like:

1. Retrieve golden templates from a template store
2. Pass templates to Heat, get capabilities map which defines features user
must/may select.
3. Prompt user for input to select required capabilities
4. Pass user input to Heat, validate the configuration, get a mapping of
required options for the selected capabilities (nested validation)
5. Push the validated pieces ("plan" in TripleO API terminology) to a
template store

This is a pre-deployment validation workflow, and it's a superset of the
getDeploymentOptions feature you refer to.
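
To make that concrete, steps 1 and 4 might reduce to something like the
following sketch (endpoint URLs, token and environment are assumed to be
already obtained; container and object names are made up):

    from heatclient import client as heat_client
    from swiftclient import client as swift_client

    # 1. Retrieve the golden templates from the template store (Swift here).
    _headers, template = swift_client.get_object(
        swift_url, token, 'overcloud-templates', 'overcloud.yaml')

    # 4. Ask Heat to validate the assembled pieces before pushing the plan.
    heat = heat_client.Client('1', heat_url, token=token)
    heat.stacks.validate(template=template, environment=user_environment)

The point being: each step is a plain service call, so the glue is workflow
rather than new API surface.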

Historically, TripleO has had a major gap wrt workflow, meaning that we've
always implemented it either via shell scripts (tripleo-incubator) or
python code (tripleo-common/tripleo-client, potentially TripleO API).

So I think what Dan is exploring is, how do we avoid reimplementing a
workflow engine, when a project exists which already does that.

> My gut reaction is to say that proposing Mistral in place of a TripleO API
> is to look at the engineering concerns from the wrong direction.  The
> Mistral alternative comes from a desire to limit 

Re: [openstack-dev] [fuel] RabbitMQ in dedicated network

2016-01-14 Thread Bogdan Dobrelya
On 28.12.2015 10:12, Bogdan Dobrelya wrote:
> On 23.12.2015 18:50, Matthew Mosesohn wrote:
>> I agree. As far as I remember, rabbit needs fqdns to work and map
>> correctly. I think it means we should disable the ability to move the
>> internal messaging network role in order to fix this bug until we can
>> add extra dns entries per network role (or at least addr)
> 
> For DNS resolve, we could use SRV [0] records perhaps.
> Although, nodes rely on /etc/hosts instead, AFAIK.
> 
> So we could as well do net-template-based FQDNs instead, like
> messaging-node*-domain.local 1.2.3.4
> corosync-node*-domain.local 5.6.7.8
> database-node*-domain.local 9.10.11.12
> 
> and rely on *these* FQDNS instead.
> 
> [0] https://en.wikipedia.org/wiki/SRV_record


The original idea of the "fqdn_prefix" OCF RA parameter [0] looks way
simpler. It would also allow instantiating multiple rabbit clusters
constructed from prefix-based instances of rabbit nodes.

The complexity with the DNS alias approach is that we have 1) a node name
in the CIB (the corosync cluster's crm_node -n), 2) a node name as we want
it in the rabbit cluster (with some prefix), and 3) the actual clustered
rabbit node name in the mnesia DB, constructed from name (2) with the
rabbit@<...> prefix added.

So, as we often compare those in the RA logic and assume (1) == (2), the
prefix-based solution would be simpler. Otherwise we would have to
introduce some translate_name() function, which translates name (1) into
name (2), for example via DNS resolving, and fix all of the type (1), (2),
(3) name comparisons in the OCF RA. That would end up in many more changes
to test and maintain.

[0] https://review.openstack.org/#/c/262535/8
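
To illustrate the difference (names and the prefix are illustrative):

    import socket

    RABBIT_PREFIX = 'messaging-'      # the proposed fqdn_prefix parameter

    def cib_node_name():
        # (1) the node name corosync/pacemaker knows (crm_node -n)
        return socket.gethostname()

    def rabbit_cluster_name(name):
        # (2) prefix-based: derived from (1) with pure string ops,
        # no resolving involved
        return RABBIT_PREFIX + name

    def mnesia_node_name(name):
        # (3) what actually ends up in mnesia
        return 'rabbit@' + rabbit_cluster_name(name)

    # The DNS-alias alternative would replace (2) with something like
    # this translate_name(), plus fixes for every (1)/(2)/(3) comparison
    # in the RA:
    def translate_name(name):
        addr = socket.gethostbyname(name)     # resolves via admin net today
        return socket.gethostbyaddr(addr)[0]  # and hopes the reverse entry
                                              # points at the messaging net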

> 
>>
>> On Dec 23, 2015 8:42 PM, "Andrew Maksimov" > > wrote:
>>
>> Hi Kirill,
>>
>> I don't think we can give up on using fqdn node names for RabbitMQ
>> because we need to support TLS in the future. 
>>
>> Thanks,
>> Andrey Maximov
>> Fuel Project Manager
>>
>> On Wed, Dec 23, 2015 at 8:24 PM, Kyrylo Galanov
>> > wrote:
>>
>> Hello,
>>
>> I would like to start discussion regarding the issue we have
>> discovered recently [1].
>>
>> In a nutshell, if RabbitMQ is configured to run in separate
>> mgmt/messaging network it fails on building cluster.
>> While RabbitMQ is managed by Pacemaker and OCF script, the
>> cluster is built using FQDN. Apparently, FQDN resolves to admin
>> network which is different in this particular case.
>> As a result, RabbitMQ on secondary controller node fails to join
>> to primary controller node.
>>
>> I can suggest two ways to tackle the issue: one is pretty
>> simple, while other is not.
>>
>> The first way is to accept by design using admin network for
>> RabbitMQ internal communication between controller nodes.
>>
>> The second way is to dig into pacemaker
>> and RabbitMQ reconfiguration. Since it requires to refuse from
>> using common fqdn/node names, this approach can be argued.
>>
>>
>> --
>> [1] https://bugs.launchpad.net/fuel/+bug/1528707
>>
>> Best regards,
>> Kyrylo
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] Enabling Graphana GUI from Horizon

2016-01-14 Thread Pradip Mukhopadhyay
Thanks. I am now able to see the 'Monitoring' tab in the Horizon UI.

However, I am not able to launch the Monasca UI (Grafana). When trying to
launch it, I get a "Page not found" error:


Using the URLconf defined in openstack_dashboard.urls, Django tried these
URL patterns, in this order:

   1. ^$ [name='splash']
   2. ^api/
   3. ^home/$ [name='user_home']
   4. ^i18n/js/(?P<packages>\S+?)/$ [name='jsi18n']
   5. ^i18n/setlang/$ [name='set_language']
   6. ^i18n/
   7. ^jasmine-legacy/$ [name='jasmine_tests']
   8. ^jasmine/.*?$
   9. ^settings/
   10. ^monitoring/
   11. ^developer/
   12. ^admin/
   13. ^identity/
   14. ^project/
   15. ^auth/
   16. ^static\/(?P<path>.*)$
   17. ^media\/(?P<path>.*)$
   18. ^500/$

The current URL, grafana/index.html, didn't match any of these.


Any help would be good.
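
From the list above it looks like no URL pattern for grafana/ is registered
at all. Presumably something of this shape is missing (purely illustrative,
with a made-up GRAFANA_ROOT - this is not how monasca-ui actually wires it):

    from django.conf.urls import url
    from django.views.static import serve

    # Serve the built Grafana assets under /grafana/, assuming they were
    # copied to GRAFANA_ROOT during setup.
    GRAFANA_ROOT = '/opt/stack/grafana/dist'

    urlpatterns = [
        url(r'^grafana/(?P<path>.*)$', serve,
            {'document_root': GRAFANA_ROOT}),
    ]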


Thanks,
Pradip

On Tue, Jan 12, 2016 at 10:42 PM, Pradip Mukhopadhyay <
pradip.inte...@gmail.com> wrote:

> Thanks! So I can temporarily follow the step to enable it in my setup.
>
>
> --pradip
>
>
> On Tue, Jan 12, 2016 at 9:45 PM, Lin Hua Cheng 
> wrote:
>
>>
>> You would need to propose the new feature in the monasca-ui [1], which is
>> the horizon plugin for displaying monasca dashboard.
>>
>> -Lin
>>
>> [1] https://github.com/openstack/monasca-ui/
>>
>> On Tue, Jan 12, 2016 at 3:32 AM, Pradip Mukhopadhyay <
>> pradip.inte...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>>
>>> We're using the following fullsetup to install Monasca (python):
>>>
>>> https://github.com/openstack/monasca-api/tree/master/devstack
>>>
>>>
>>>
>>> Most likely we need to do something more to see the "Monitoring" tab in
>>> left hand side that takes us to Monasca graphana GUI.
>>>
>>>
>>> Can anyone please point me?
>>>
>>>
>>> We do see it when do a vagrant setup with mini-mon and devstack VMs.
>>>
>>>
>>> Any help is highly solicited.
>>>
>>>
>>> --pradip
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Russell Bryant
On 01/13/2016 11:51 PM, Tony Breeds wrote:
> The challenge for you guys is the kernel side of things but if I
> understood correctly you can get the kernel module from the ovs
> source tree and just compile it against the stock ubuntu kernel
> (assuming the kernel devel headers are available) is that right?

It's kernel and userspace.  There's multiple current development
efforts that involve changes to OpenStack, OVS userspace, and the
appropriate datapath (OVS kernel module or DPDK).

The consensus I'm picking up roughly is that for those working on the
features, testing with source builds seems to be working fine.  It's
just not something anyone wants to gate the main Neutron repo with.
That seems quite reasonable.  If the features aren't in proper
releases yet, I don't see gating as that important anyway.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] RabbitMQ in dedicated network

2016-01-14 Thread Vladimir Kuklin
+1 to Bogdan here.

On Thu, Jan 14, 2016 at 4:27 PM, Bogdan Dobrelya 
wrote:

> On 28.12.2015 10:12, Bogdan Dobrelya wrote:
> > On 23.12.2015 18:50, Matthew Mosesohn wrote:
> >> I agree. As far as I remember, rabbit needs fqdns to work and map
> >> correctly. I think it means we should disable the ability to move the
> >> internal messaging network role in order to fix this bug until we can
> >> add extra dns entries per network role (or at least addr)
> >
> > For DNS resolve, we could use SRV [0] records perhaps.
> > Although, nodes rely on /etc/hosts instead, AFAIK.
> >
> > So we could as well do net-template-based FQDNs instead, like
> > messaging-node*-domain.local 1.2.3.4
> > corosync-node*-domain.local 5.6.7.8
> > database-node*-domain.local 9.10.11.12
> >
> > and rely on *these* FQDNS instead.
> >
> > [0] https://en.wikipedia.org/wiki/SRV_record
>
>
> The original idea with the "fqdn_prefix" OCF RA parameter [0] appeared
> the way more simple. It would as well allow to instantiate multiple
> rabbit clusters constructed from prefix-based instances of rabbit nodes.
>
> The complexity with DNS alias is what we have 1) a node name in CIB
> (corosync cluster's crm_node -n), 2) a node name as we want it in rabbit
> cluster (with some prefix), 3) the actual clustered rabbit node name in
> the mnesia DB constructed from the name (2) with the rabbit@<...> prefix
> added.
>
> So, as we often compare those in the RA logic and assume (1) == (2), the
> prefix-based solution would be more simple. Otherwise we shall introduce
> some translate_name() function, which translates the name (1) into the
> name (2), for example using DNS resolving, and fix all of the type (1),
> (2), (3) names comparsions in the OCF RA. Which would end up in much
> more changes to test and maintain.
>
> [0] https://review.openstack.org/#/c/262535/8
>
> >
> >>
> >> On Dec 23, 2015 8:42 PM, "Andrew Maksimov"  >> > wrote:
> >>
> >> Hi Kirill,
> >>
> >> I don't think we can give up on using fqdn node names for RabbitMQ
> >> because we need to support TLS in the future.
> >>
> >> Thanks,
> >> Andrey Maximov
> >> Fuel Project Manager
> >>
> >> On Wed, Dec 23, 2015 at 8:24 PM, Kyrylo Galanov
> >> > wrote:
> >>
> >> Hello,
> >>
> >> I would like to start discussion regarding the issue we have
> >> discovered recently [1].
> >>
> >> In a nutshell, if RabbitMQ is configured to run in separate
> >> mgmt/messaging network it fails on building cluster.
> >> While RabbitMQ is managed by Pacemaker and OCF script, the
> >> cluster is built using FQDN. Apparently, FQDN resolves to admin
> >> network which is different in this particular case.
> >> As a result, RabbitMQ on secondary controller node fails to join
> >> to primary controller node.
> >>
> >> I can suggest two ways to tackle the issue: one is pretty
> >> simple, while other is not.
> >>
> >> The first way is to accept by design using admin network for
> >> RabbitMQ internal communication between controller nodes.
> >>
> >> The second way is to dig into pacemaker
> >> and RabbitMQ reconfiguration. Since it requires to refuse from
> >> using common fqdn/node names, this approach can be argued.
> >>
> >>
> >> --
> >> [1] https://bugs.launchpad.net/fuel/+bug/1528707
> >>
> >> Best regards,
> >> Kyrylo
> >>
> >>
>  __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
>  __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

[openstack-dev] [glance] FFE for "Multi-tenant Swift store service token support"

2016-01-14 Thread stuart . mclaren

Hi,

I'd like to request an exception for this spec:

 https://review.openstack.org/#/c/170564/

-Stuart

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-14 Thread Thierry Carrez

Michael Still wrote:

I think Tony would be a valuable addition to the team.


I think Tony would be a valuable addition to /any/ team.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] allow a ranking mechanism for glance-api to order image locations

2016-01-14 Thread Flavio Percoco

On 14/01/16 11:07 +1100, Jake Yip wrote:

Hi all,

I've recently run across a constraint in glance-api while working with image
locations. In essence, there is no way to customize ordering of image-locations
other than the default location strategies, namely location_order and
store_type [0]. It seems like a more generic method of ordering image locations
is needed, IMHO.
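
(For reference, the two existing strategies are selected in glance-api.conf
along these lines; a sketch using the current option names:)

    [DEFAULT]
    # 'location_order' keeps the stored order; 'store_type' ranks by backend
    location_strategy = store_type

    [store_type_location_strategy]
    # most-preferred backends first
    store_type_preference = rbd,swift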

Some background - We are in a multi-cell environment and each cell has its own
glance-api server. All images are stored in a global swift cluster. We would
like glance to be able to fetch images from a local store, so that we can do
COW for backends like RBD.

Unfortunately, none of the current location strategies works for us, as we
might have multiple cells sharing the same backend. I've opened a bug /
wishlist describing this issue [1]. I have also implemented code that allows us
to achieve that based on image location metadata.

I am wondering anyone else have solved this before? I would like to hear your
opinions on how we can achieve this, and whether ranking it by metadata is the
way to go.

The current wishlist is now tracked as a spec-lite. Is this ok?


Yes, this sounds like a good example of a spec lite. We'll now proceed to review
it, triage it and give a go if everything looks fine.

As far as the feature goes, I think it is fine to add other location strategies.
I'll follow-up on the bug.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Should the playbook stop on certain tasks?

2016-01-14 Thread Major Hayden

On 01/13/2016 02:59 PM, Clark, Robert Graham wrote:
> I’m pretty new to openstack-ansible-security but based on my use cases,
> which are as much about using this for verification as they are for
> building secure boxes, my preference would be 3) Use an Ansible callback
> plugin to catch these and print them at the end of the playbook run

I'm leaning in that direction as well, but I'm not sure if there's a way to 
wedge this type of functionality into a role.  It can be done easily with a 
playbook, but I'm not sure if we can add this to a role by itself.

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Neutron][Nova][devstack] Keystone v3 with "old" clients

2016-01-14 Thread Smigiel, Dariusz
Hey,
Thanks for responses.

Michal your suggestion helped solve my problem.
Akihiro, thanks for this info. Didn't know about os-client but it seems to be 
very powerful. Great tool!

Henrique, yes. Right now I have no problems with keystone v3.
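
For the archives, the exports that made the difference look roughly like
this (values are illustrative for a devstack setup):

    # Keystone v3 needs domain scoping on top of the usual credentials
    export OS_IDENTITY_API_VERSION=3
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_PROJECT_DOMAIN_ID=default
    export OS_USER_DOMAIN_ID=default
    export OS_PROJECT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=secret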

Best regards,
Dariusz (dasm) Smigiel
Intel Technology Poland


> -Original Message-
> From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
> Sent: Thursday, January 14, 2016 1:10 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Keystone][Neutron][Nova][devstack]
> Keystone v3 with "old" clients
> 
> Hi, Did exporting the variables solve your problem? I'm working on improving
> the support of v3 in devstack, like the openrc you've mentioned.
> 
> 
> On Thu, Jan 14, 2016 at 06:50, Akihiro Motoki wrote:
> 
> 
>   devstack creates /etc/openstack/clouds.yaml (os-client-config
>   configuraiton files) which specifies to use keystone v3.
>   neutronclient supports os-client-config and keystoneauth which
> handles
>   the difference of keystone API.
>   Note that clouds.yaml is very convenient way to use OpenStack CLI
> [1].
> 
>   As Michal commented, you can also use OS_PROJECT_DOMAIN_xx
> and OS_USER_DOMAIN_xx
>   for keystone v3 API.
> 
>   [1] http://docs.openstack.org/developer/python-
> neutronclient/usage/cli.html#using-with-os-client-config
> 
>   Akihiro
> 
>   2016-01-14 18:13 GMT+09:00 Michal Rostecki
>  >:
>   > On 01/12/2016 02:10 PM, Smigiel, Dariusz wrote:
>   >>
>   >> Hello,
>   >> I'm trying to gather all the info necessary to migrate to keystone
> v3 in
>   >> Neutron.
>   >> When I've started to looking through possible problems with
> clients, it
>   >> occurred that 'neutron' and 'nova' clients do not want to operate
> with
>   >> Keystone v3.
>   >> For keystone client, it's explicit written, that this version is
>   >> deprecated and not supported, so it's not working with Keystone
> API v3. But
>   >> for nova and neutron, there is nothing.
>   >> I didn't see any place where I can find info, that "old" clients
> shouldn't
>   >> be used with Keystone API v3.
>   >>
>   >> Am I doing something wrong?
>   >>
>   >> http://paste.openstack.org/show/483568/
>   >>
>   >
>   > Hi,
>   >
>   > Looks like you're missing OS_PROJECT_DOMAIN_ID and
> OS_USER_DOMAIN_ID env
>   > variables, needed for Keystone v3.
>   >
>   > Unfortunately, I don't see them in devstack's openrc[1]. Maybe it's
> a good
>   > moment to add them here.
>   >
>   > [1] https://github.com/openstack-
> dev/devstack/blob/master/openrc
>   >
>   > Cheers,
>   > Michal
>   >
>   >
>   >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] How will nova advertise that volume multi-attach is supported?

2016-01-14 Thread Matt Riedemann



On 1/14/2016 9:42 AM, Dan Smith wrote:

It is however not ideal when a deployment is set up such that
multiattach will always fail because a hypervisor is in use which
doesn't support it.  An immediate solution would be to add a policy so a
deployer could disallow it that way which would provide immediate
feedback to a user that they can't do it.  A longer term solution would
be to add capabilities to flavors and have flavors act as a proxy
between the user and various hypervisor capabilities available in the
deployment.  Or we can focus on providing better async feedback through
instance-actions, and other discussed async api changes.


Presumably a deployer doesn't enable volumes to be set as multi-attach
on the cinder side if their nova doesn't support it at all, right? I
would expect that is the gating policy element for something global.


Is there a policy in cinder for that though? /me looks



Now, if multiple novas share a common cinder, then I guess it gets a
little more confusing...

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Ihar Hrachyshka

Sean M. Collins  wrote:


On Wed, Jan 13, 2016 at 03:57:37PM CST, Mooney, Sean K wrote:
One of the ideas that I have been thinking about over the last month or
two is: do we want to create a dedicated library file in devstack to
support compilation and installation of ovs.


So, my suggestion is as follows: create a new devstack plugin that is
*specifically* geared towards just compiling OVS and installing it to
get the bits that you need. I'm just concerned about the feature creep
that is happening in the Neutron DevStack plugin ( which I didn't like in
the first place ) where now every little thing is getting proposed
against it.


Currently, devstack plugin has code for:
* qos
* sr-iov
* l2 agent extensions
* flavors

I think most of those could indeed live in a separate plugin (except  
probably l2 agent extensions that seems like a common feature for different  
agent types).


I wonder whether we can extend devstack plugin interface to support  
completely separate *per-feature* plugins, but in the *same* repo. In that  
case, we would have best of both worlds: code separation, the need for  
explicit enable_plugin call to enable a specific feature; and at the same  
time, no tiny git repos to maintain 10s of lines of bash code in each.




I'd prefer to see small, very specific DevStack plugins that have narrow
focus, and jobs that need them for specific things adding them to their
local.conf settings explicitly via enable_repo lines.

The concern I have with compiling bleeding edge OVS and then running our
Neutron jobs is that yes, we get new features, but yes we also get
the newest bugs and the configuration matrix for Neutron now gets a new
dimension of 'packaged OVS versus git commit SHA'
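
Jobs that want compiled OVS would then opt in explicitly, along these lines
(the plugin repo name here is hypothetical):

    # local.conf
    [[local|localrc]]
    enable_plugin devstack-plugin-ovs https://git.openstack.org/openstack/devstack-plugin-ovs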

--
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Bareon][Fuel] Dynamic allocation algorithm

2016-01-14 Thread Evgeniy L
Hi,

In addition I've generated several examples in order to show how current
prototype allocates the volumes [0].

Thanks,

[0] http://bareon-allocator.readthedocs.org/en/latest/examples.html


On Wed, Jan 13, 2016 at 2:14 PM, Evgeniy L  wrote:

> Hi Artur,
>
> You are correct, we probably may consider using bytes instead of megabytes.
>
> Regarding to question "ssd vs hdd", user can describe which space is better
> to allocate on ssd and which is better on hdd, the mechanism is completely
> data driven,
> it can be done using "best_with_disks" [0], in fact it covers much more
> cases,
> since user can build sets of disks where space can be allocated based on
> any
> parameter of HW which discovery can provide.
>
> Thanks,
>
> [0]
> http://bareon-allocator.readthedocs.org/en/latest/architecture.html#best-with-disks
>
> On Wed, Jan 13, 2016 at 1:56 PM, Artur Svechnikov <
> asvechni...@mirantis.com> wrote:
>
>> Hi.
>>
>> Very good documentation. For Integer Solution you can use bytes instead
>> of megabytes. Hence N bytes will be unallocated in the worst case.
>>
>> I didn't find solution for problem:
>>
>>- Don’t allocate a single volume on ssd and hdd
>>
>>
>> Best regards,
>> Svechnikov Artur
>>
>> On Tue, Jan 12, 2016 at 9:37 PM, Evgeniy L  wrote:
>>
>>> Hi,
>>>
>>> For the last several weeks I've been working on algorithm (and prototype)
>>> for dynamic allocation of volumes on disks.
>>>
>>> I have some results [0] and would like to ask you to review it and
>>> provide
>>> some feedback.
>>>
>>> Our plan is to implement it as an external driver for Bareon [1].
>>>
>>> Thanks,
>>>
>>> [0] http://bareon-allocator.readthedocs.org/en/latest/architecture.html
>>> [1] https://wiki.openstack.org/wiki/Bareon
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Thomas Goirand
On 01/14/2016 10:47 PM, Jeremy Stanley wrote:
> On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
> [...]
>> Is there any plan to add Python 3.5 to infra?
> 
> I expect we'll end up with it shortly after Ubuntu 16.04 LTS
> releases in a few months (does anybody know for sure what its
> default Python 3 is slated to be?). Otherwise if a debian-sid
> nodepool image shows up we could certainly try running Py3K jobs on
> that instead.

In Tokyo, during the Py3 session, Chuck volunteered to make a backport
of Python 3.5 to Trusty. Though if we're asking in this thread, probably
this means it's not done (yet).

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-14 Thread Kyle Kelley
When using the native Docker tooling, the Docker daemon controls the UUID of 
the container, not Magnum.

-- Kyle


From: Ryan Brown 
Sent: Wednesday, January 13, 2016 3:16 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under
/bays

On 01/13/2016 04:42 AM, Jamie Hannaford wrote:
> I've recently been gathering feedback about the Magnum API and one of
> the things that people commented on​ was the global /containers
> endpoints. One person highlighted the danger of UUID collisions:
>
>
> """
>
> It takes a container ID which is intended to be unique within that
> individual cluster. Perhaps this doesn't matter, considering the surface
> for hash collisions. You're running a 1% risk of collision on the
> shorthand container IDs:
>
>
> In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))
>
> In [15]: n(.01, 0x1000000000000)
> Out[15]: 2378620.6298183016
>
>
> (this comes from the Birthday Attack -
> https://en.wikipedia.org/wiki/Birthday_attack)
> 
>
>
> The main reason I questioned this is that we're not in control of how
> the hashes are created whereas each Docker node or Swarm cluster will
> pick a new ID under collisions. We don't have that guarantee when
> aggregating across.
>
>
> The use case that was outlined appears to be aggregation and reporting.
> That can be done in a different manner than programmatic access to
> single containers.​
>
> """
>
>
> Representing a resource without reference to its parent resource also
> goes against the convention of many other OpenStack APIs.
>
>
> Nesting a container resource under its parent bay would mitigate both of
> these issues:
>
>
> /bays/{uuid}/containers/{uuid}​
>
>
> I'd like to get feedback from folks in the Magnum team and see if
> anybody has differing opinions about this.
>
>
> Jamie

I'm not a member of the Magnum community, but I am on the API working
group, so my opinions come from a slightly different perspective.

Nesting resources is not a "bad" thing, and as long as containers will
always be in bays (from what I understand of the Magnum architecture,
this is indeed true) then nesting them makes sense.

Of course, it's a big change and will have to be communicated to users &
client libraries, probably via a version bump.

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] ceilometer meter-list output is empty

2016-01-14 Thread gord chung

hi,

please ensure you are running latest code (as you seem to want master)

you are running into 
https://review.openstack.org/#/q/I4765b3b9627983a245aa5521a85ad89e83ab8551


On 14/01/2016 12:10 AM, lichen.hangzhou wrote:

Hi,

I have installed a devstack environment with ceilometer enabled:
enable_plugin ceilometer 
https://git.openstack.org/openstack/ceilometer


I  am trying to run some ceilometer commands to test whether my 
ceilometer is working well and get to know ceilometer as well.
But, no matter how many instances I created and how long I wait, the 
out put for command "ceilometer meter-list" & "ceilometer sample-list" 
are empty.


ceilometer meter-list
+--+--+--+-+-++
| Name | Type | Unit | Resource ID | User ID | Project ID |
+--+--+--+-+-++
+--+--+--+-+-++

By checking devstack log based on my active instance id, I get :

ceilometer-acentral.log:
2016-01-14-170750:2794:2016-01-14 20:58:17.750 20284 
ERROR ceilometer.hardware.discovery 
[req-51230835-1a74-4846-be5e-c032e24ad54f admin - - - -] Couldn't 
obtain IP address of instance d0a781a2-5adc-48c3-8976-f55f6613b68b


  ceilometer-acompute.log:
2016-01-14 18:48:41.030 20885 WARNING 
ceilometer.compute.pollsters.memory 
[req-218d31b1-d0c5-4b7f-99b8-57a8b4a6fc48 admin - - - -] Cannot 
inspect data of MemoryUsagePollster for 
d0a781a2-5adc-48c3-8976-f55f6613b68b, non-fatal reason: Failed to 
inspect memory usage of instance 

Re: [openstack-dev] [Oslo] Improving deprecated options identification and documentation

2016-01-14 Thread Jay Pipes

On 01/14/2016 11:45 AM, Ronald Bradford wrote:

Presently the oslo.config Opt class has the attributes
deprecated_for_removal and deprecated_reason [1]

I would like to propose that we use deprecated_reason (at a minimum) to
detail in what release an option was deprecated in, and what release it
is then removed in.


You mean what release it *will* be removed in, right? Clearly, once it's 
removed, there won't be any indication it ever existed ;)



I see examples of deprecated_for_removal=True but no information on why
or when.  i.e. Ideally I'd like to move to an implied situation of
  if deprecated_for_removal=True then deprecated_reason is mandatory.


+1
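
A sketch of what that would look like with the current oslo.config API (the
option is borrowed from oslo.log, and the wording is illustrative):

    from oslo_config import cfg

    opts = [
        cfg.BoolOpt('use_syslog_rfc_format',
                    default=True,
                    deprecated_for_removal=True,
                    deprecated_reason='Deprecated in Mitaka, to be removed '
                                      'in the N release; the RFC5424 format '
                                      'becomes the only behaviour.'),
    ]

    cfg.CONF.register_opts(opts)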


A great example is an already documented help message in oslo.log
configuration option use_syslog_rfc_format that at least provides a
guideline.  [2] shows a proposed review to take this low road approach.
An image of what the change actually looks like in documentation using
this approach [3].  This also needs  #267151 that fixes an issue where
deprecated options are not producing a warning message in docs.

The high road would be to have a discussion about if there is a better
way to mark and manage deprecated options. For example, if there was a
deprecated_release and a removal_release attribute then a level of
tooling could make this easier.  I would be wary in considering this, as
it adds complexity (is it needed), and just how many options are
deprecated.  I'd appreciate thoughts and feedback.


Any improvement in this regard I think would enhance the user experience 
considerably, thank you Ronald for tackling this area. I'd also suggest 
cc'ing (or sending a separate ML post) to the openstack-operators@ ML to 
gather feedback from ops folks.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-14 Thread Kyle Kelley
This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?


> In Rest API, you can set the “uuid” field in the json request body (this is 
> not supported in CLI, but it is an easy add).​


In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?


> For the idea of nesting container resource, I prefer not to do that if there 
> are alternatives or it can be worked around. IMO, it sets a limitation that a
> container must have a bay, which might not be the case in future. For 
> example, we might add a feature that creating a container will automatically 
> create a bay. If a container must have a bay on creation, such feature is 
> impossible.


If that's *really* a feature you need and are fully involved in designing for, 
this seems like a case where creating a container via these endpoints would 
create a bay and return the full resource+subresource.


Personally, I think these COE endpoints need to not be in the main spec, to 
reduce the surface area until these are put into further use.





From: Hongbin Lu 
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Hi Jamie,

I would like to clarify several things.

First, a container uuid is intended to be unique globally (not within 
an individual cluster). If you create a container with a duplicated uuid, the
creation will fail regardless of its bay. Second, you are in control of the 
uuid of the container that you are going to create. In Rest API, you can set 
the “uuid” field in the json request body (this is not supported in CLI, but it 
is an easy add). If a uuid is provided, Magnum will use it as the uuid of the 
container (instead of generating a new uuid).

For the idea of nesting container resource, I prefer not to do that if there 
are alternatives or it can be worked around. IMO, it sets a limitation that a
container must have a bay, which might not be the case in future. For example, 
we might add a feature that creating a container will automatically create a 
bay. If a container must have a bay on creation, such feature is impossible.

Best regards,
Hongbin

From: Jamie Hannaford [mailto:jamie.hannaf...@rackspace.com]
Sent: January-13-16 4:43 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Nesting /containers resource under /bays


I've recently been gathering feedback about the Magnum API and one of the 
things that people commented on​ was the global /containers endpoints. One 
person highlighted the danger of UUID collisions:



"""

It takes a container ID which is intended to be unique within that individual 
cluster. Perhaps this doesn't matter, considering the surface for hash 
collisions. You're running a 1% risk of collision on the shorthand container 
IDs:



In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))
In [15]: n(.01, 0x1000000000000)
Out[15]: 2378620.6298183016



(this comes from the Birthday Attack - 
https://en.wikipedia.org/wiki/Birthday_attack)



The main reason I questioned this is that we're not in control of how the 
hashes are created whereas each Docker node or Swarm cluster will pick a new ID 
under collisions. We don't have that guarantee when aggregating across.



The use case that was outlined appears to be aggregation and reporting. That 
can be done in a different manner than programmatic access to single 
containers.​

"""



Representing a resource without reference to its parent resource also goes 
against the convention of many other OpenStack APIs.



Nesting a container resource under its parent bay would mitigate both of these 
issues:



/bays/{uuid}/containers/{uuid}​



I'd like to get feedback from folks in the Magnum team and see if anybody has 
differing opinions about this.



Jamie







Re: [openstack-dev] [Nova] Nova midcycle list of attendees

2016-01-14 Thread Anita Kuno
On 01/14/2016 12:38 PM, Murray, Paul (HP Cloud) wrote:
> I have created a list of attendees for the Nova midcycle here: 
> https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprintAttendees
> 
> Obviously I can't put anyone's name on it for privacy reasons.

What privacy reasons? Every other project lists attendees either on a
wikipage or an etherpad.

I don't know that nova has a privacy clause that is different from any
other project. I don't recall a privacy clause when I registered to attend.

Thanks Paul,
Anita.

> If are attending and you would like to let others know when you will be 
> around you might like to add yourself. It would also help us with a few 
> logistics too.
> 
> Best regards,
> Paul
> 
> Paul Murray
> Technical Lead, HPE Cloud
> Hewlett Packard Enterprise
> +44 117 316 2527
> 
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] Automating some aspects of catalog maintenance

2016-01-14 Thread Christopher Aedo
While we are looking forward to implementing an API based on Glare, I
think it would be nice to have a few aspects of catalog maintenance be
automated.  For instance, discovering and removing/tagging assets with
dead links, updating the hash for assets that change frequently or
exposing when an entry was last modified.

Initially I thought the best approach would be to create a very simple
API service using Flask on top of a DB.  This would provide output
identical to the current "v1" API.  But of course that "simple" idea
starts to look too complicated for something that would eventually be
abandoned wholesale.  Someone on the infra team suggested a dead-link
checker that would run as a periodic job similar to other proposal-bot
jobs, so I took a first pass at that [1].

As expected that resulted in a VERY large initial change[2] due to
"normalizing" the existing human-edited assets.yaml file.  I think the
feedback that this is un-reviewable without some external tools is
reasonable (though it's possible to verify the 86 assets are
unmolested, only slightly reformatted).  One thing that would help
would be forcing all entries to meet a specific format which would not
need adjustment by proposal-bot.  But even that change would require a
near-complete rewrite of the assets file, so I don't think it would
help in this case.

I'm generally in favor of this approach because it keeps all the
information on the assets in one place (the assets.yaml file) which
makes it easy for humans to read and understand.

An alternate proposed direction is to merge machine-generated
information with the human-generated assets.yaml during the creation
of the JSON file[3] that is used by the website and Horizon plugin.
The start of that work is this script to discover last-modified times
for assets based on git history[4].

While I think the approach of merging machine-generated and
human-generated files could work, it feels a lot like creating a
relational database out of yaml files glued together with a bash
script.  If it works though, maybe it's the best short term approach?

Ultimately my goal is to make sure the assets in the catalog are kept
up to date without introducing a great deal of administrative overhead
or obfuscating how the display version of the catalog is created.  How
are other projects handling concerns like this?  Would love to hear
feedback on how you've seen something like this handled - thanks!

[1]: https://review.openstack.org/#/c/264978/
[2]: https://review.openstack.org/#/c/266218/
[3]: https://apps.openstack.org/api/v1/assets
[4]: https://review.openstack.org/#/c/267087/

-Christopher
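
For the curious, the periodic job in [1] boils down to something like the
sketch below (the assets.yaml path and schema shown here are assumptions):

    import requests
    import yaml

    def find_dead_links(path='openstack_catalog/web/static/assets.yaml'):
        # Flag catalog entries whose URL no longer resolves.
        with open(path) as f:
            assets = yaml.safe_load(f)['assets']
        dead = []
        for asset in assets:
            url = asset.get('attributes', {}).get('url')
            if not url:
                continue
            try:
                resp = requests.head(url, allow_redirects=True, timeout=10)
                if resp.status_code >= 400:
                    dead.append((asset['name'], url, resp.status_code))
            except requests.RequestException as exc:
                dead.append((asset['name'], url, str(exc)))
        return dead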

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving deprecated options identification and documentation

2016-01-14 Thread Ronald Bradford
On Thu, Jan 14, 2016 at 11:58 AM, Jay Pipes  wrote:
> On 01/14/2016 11:45 AM, Ronald Bradford wrote:
>>
>> Presently the oslo.config Opt class has the attributes
>> deprecated_for_removal and deprecated_reason [1]
>>
>> I would like to propose that we use deprecated_reason (at a minimum) to
>> detail in what release an option was deprecated in, and what release it
>> is then removed in.
>
>
> You mean what release it *will* be removed in, right? Clearly, once it's
> removed, there won't be any indication it ever existed ;)
>

Yes, in what release it is to be removed, e.g. Mitaka.  So when is
that release cycle, i.e. now once removed there is no record.


>
> Any improvement in this regard I think would enhance the user experience
> considerably, thank you Ronald for tackling this area. I'd also suggest
> cc'ing (or sending a separate ML post) to the openstack-operators@ ML to
> gather feedback from ops folks.
>


will do!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 1/14/2016

2016-01-14 Thread Christopher Aedo
This morning we had a quick status update regarding some automation
within the catalog followed by a good chat about adding Mistral
workflow templates to the catalog.  We're going to work out what
metadata the assets will need to include and start discussing the
changes we'll need to make to the website to accommodate additional
asset types.

We also touched on adding TOSCA bits to the catalog, as the effort to
modify the web site to support additional asset types will benefit
those wishing to include TOSCA stuff as well.  Further discussion on
that topic has been added to next weeks agenda.

=
#openstack-meeting-3: app-catalog
=
Meeting started by docaedo at 17:00:34 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2016/app_catalog.2016-01-14-17.00.log.html
.
Meeting summary
---
* rollcall  (docaedo, 17:00:50)
  * LINK: https://wiki.openstack.org/wiki/Meetings/app-catalog
(docaedo, 17:02:59)
* Updates  (docaedo, 17:03:11)
  * LINK: https://review.openstack.org/266218  (docaedo, 17:04:19)
  * LINK: https://review.openstack.org/267087  (docaedo, 17:08:07)
* Adding Mistral workflow templates to the catalog  (docaedo, 17:11:08)
* Open discussion  (docaedo, 17:43:34)

Meeting ended at 17:59:24 UTC.

People present (lines said)
---
* docaedo (82)
* rakhmerov (51)
* spzala (23)
* toddjohn (12)
* kzaitsev_mb (7)
* ativelkov (3)
* openstack (3)

Generated by `MeetBot`_ 0.1.4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [defcore] Determine latest set of tests for a given release

2016-01-14 Thread Hugh Saunders
Hi Mark,
Thanks for your useful and timely response.

I did look at the refstack client, but it seems that it requires a test
list url as input, so I was wondering how to programmatically get the
appropriate test list url. It sounds like there isn't a single url that
points to an index of defcore release json files, so I'd need to parse all
the .json files in the root of the repo, then find the latest file that has
status approved and the release I'm interested in.

Thanks again.
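
A sketch of that parse-all-the-json-files approach (the 'status' and
'releases' field names are assumptions based on the description above):

    import glob
    import json
    import os

    def latest_guideline(release, repo='.'):
        # Keep approved guideline files covering the given release and
        # return the newest by file name (files are date/number named).
        matches = []
        for path in sorted(glob.glob(os.path.join(repo, '*.json'))):
            with open(path) as f:
                data = json.load(f)
            if data.get('status') == 'approved' and release in data.get('releases', []):
                matches.append(path)
        return matches[-1] if matches else None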

--
Hugh Saunders
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] gate-neutron-dsvm-api failures

2016-01-14 Thread John Davidge (jodavidg)
On 1/14/16, 3:04 PM, "Brian Haley"  wrote:


>On 01/14/2016 05:42 PM, John Davidge (jodavidg) wrote:
>> The 
>>neutron.tests.api.admin.test_floating_ips_admin_actions.FloatingIPAdminTe
>>stJSON
>> test has been consistently failing for patch
>> https://review.openstack.org/#/c/258754/ and I don't see how they can be
>> related. This patch has been trying to merge for a month.
>>
>> This test seems to be experiencing a lot of failures recently:
>>
>> 
>>http://status.openstack.org//elastic-recheck/data/uncategorized.html#gate
>>-neutron-dsvm-api
>>
>> Has it been diagnosed? Could somebody more familiar with the test take
>>a look
>> please?
>
>John,
>
>That test was just recently changed too:
>
>https://review.openstack.org/#/c/265016/2/neutron/tests/api/admin/test_flo
>ating_ips_admin_actions.py
>
>So perhaps that change didn't actually fix things.

It's failing with tempest_lib.exceptions.ServerFault rather than
tempest_lib.exceptions.Conflict, so that change won't have helped in this
case.

John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Call for testing: 2015.1.3 candidate tarballs

2016-01-14 Thread Dave Walker
Hi all,

We are scheduled to release the next point release of stable Kilo
(2015.1.3).  The current candidates for release are: Ceilometer,
Cinder, Glance, Heat, Horizon, Ironic, Keystone, Neutron,
Neutron-lbaas, Neutron-vpnaas, Nova and Sahara releases on Thursday
Jan 21st 2016.

The current open changes on Gerrit have been frozen to help testing of
the branches prior to release.  Please help test the current
candidates as described below:

Ceilometer
 - Commits since previous release: 19
 - Last built: Sat, 09 Jan 2016 00:58:01 GMT
 - http://tarballs.openstack.org/ceilometer/ceilometer-stable-kilo.tar.gz

Cinder
 - Commits since previous release: 19
 - Last built: Tue, 05 Jan 2016 19:45:36 GMT
 - http://tarballs.openstack.org/cinder/cinder-stable-kilo.tar.gz

Glance
 - Commits since previous release: 7
 - Last built: Wed, 06 Jan 2016 20:46:58 GMT
 - http://tarballs.openstack.org/glance/glance-stable-kilo.tar.gz

Heat
 - Commits since previous release: 32
 - Last built: Tue, 12 Jan 2016 06:14:01 GMT
 - http://tarballs.openstack.org/heat/heat-stable-kilo.tar.gz

Horizon
 - Commits since previous release: 13
 - Last built: Thu, 14 Jan 2016 12:59:37 GMT
 - http://tarballs.openstack.org/horizon/horizon-stable-kilo.tar.gz

Ironic
 - Commits since previous release: 3
 - Last built: Thu, 03 Dec 2015 07:45:38 GMT
 - http://tarballs.openstack.org/ironic/ironic-stable-kilo.tar.gz

Keystone
 - Commits since previous release: 19
 - Last built: Thu, 14 Jan 2016 04:03:08 GMT
 - http://tarballs.openstack.org/keystone/keystone-stable-kilo.tar.gz

Neutron
 - Commits since previous release: 75
 - Last built: Wed, 13 Jan 2016 08:41:11 GMT
 - http://tarballs.openstack.org/neutron/neutron-stable-kilo.tar.gz

Neutron-lbaas
 - Commits since previous release: 6
 - Last built: Thu, 31 Dec 2015 13:21:20 GMT
 - http://tarballs.openstack.org/neutron-lbaas/neutron-lbaas-stable-kilo.tar.gz

Neutron-vpnaas
 - Commits since previous release: 5
 - Last built: Wed, 02 Dec 2015 16:34:11 GMT
 - 
http://tarballs.openstack.org/neutron-vpnaas/neutron-vpnaas-stable-kilo.tar.gz

Nova
 - Commits since previous release: 4
 - Last built: Fri, 18 Dec 2015 03:42:49 GMT
 - http://tarballs.openstack.org/nova/nova-stable-kilo.tar.gz

Sahara
 - Commits since previous release: 3
 - Last built: Wed, 02 Dec 2015 11:15:06 GMT
 - http://tarballs.openstack.org/sahara/sahara-stable-kilo.tar.gz

If you have any questions please direct them to this thread or ping me
(Daviey) on #openstack-stable.

Thanks

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Can Heka solve all the deficiencies in the current rsyslog implementation: was Re: [kolla] Introduction of Heka in Kolla

2016-01-14 Thread Steven Dake (stdake)
Eric,

Comments inline.

On 1/14/16, 3:31 AM, "Eric LEMOINE"  wrote:

>On Wed, Jan 13, 2016 at 1:27 PM, Steven Dake (stdake) 
>wrote:
>> Eric,
>
>
>Hi Steven


Feel free to call me Steve

>
>
>>
>> Apologies for top post, not really sure where in this thread to post
>>this
>> list of questions as its sort of a change in topic so I changed the
>> subject line :)
>>
>> 1.
>> Somewhere I read when researching this Heka topic, that Heka cannot log
>> all details from /dev/log.  Some services like mariadb for example don't
>> log to stdout as I think Heka requires to operate correctly.  Would you
>> mind responding on the question "Would Heka be able to effectively log
>> every piece of information coming off the system related to OpenStack
>>(our
>> infrastructure services like ceph/mariadb/etc as well as the OpenStack
>> services)?
>
>
>My first reaction to this is: if we have services, such as mariadb,
>that can only send their logs to syslog then let's continue using
>Rsyslog.  And with Rsyslog we can easily store logs on the local
>filesystem as well (your requirement #3 below).

Rsyslog may be the best tool for this job.  I'll be looking to your POC to
get a feel for the validity of that claim :)

>
>That being said, it appears that Heka supports reading logs from
>/dev/log.  This can be done using the UdpInput plugin with "net" set
>to "unixgram".  See
> for the original
>issue.  Heka also supports writing logs to files on the local
>filesystem, through the FileOutput plugin.  We do not currently use
>the UdpInput plugin, so we need to test it and see if it can work for
>Kolla.  We will work on these tests, and report back to the list.
>
>
>
>> 2.
>> Also, I want to make sure we can fix up the backtrace defeciency.
>> Currently rsyslog doesn't log backtraces in python code.  Perhaps Sam or
>> inc0 know the reason behind it, but I want to make sure we can fix up
>>this
>> annoyance, because backtraces are mightily important.
>
>
>I've had a look on my AIO Kolla.  And I do see Python tracebacks in
>OpenStack log files created by Rsyslog (in
>/var/lib/docker/volumes/rsyslog/_data/nova/nova-api.log for example).
>They're just on a single line, with "#012" used as the separator [*].
>So they are hard to read, but they are there.  I think that is
>consistent with what SamYaple and inc0 said yesterday on IRC.
>
>[*] This is related to Rsyslog's $EscapeControlCharactersOnReceive
>setting. See 
>escapecontrolcharactersonreceive.html>.
>

Apparently the one-lining of the tracebacks is a new feature because of a
bug fix in oslo.log that dims fixed for us.  So my original comments about
missing backtraces was dated information from December.
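
(For anyone who wants the multi-line tracebacks back in the flat files, the
knob Eric refers to below is a one-liner in rsyslog.conf:)

    # keep newlines instead of escaping them to #012; note that multi-line
    # messages can confuse naive line-oriented parsers
    $EscapeControlCharactersOnReceive off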

>
>> 3.
>> Also I want to make sure each node ends up with log files in a data
>> container (or data volume or whatever we just recently replaced the data
>> containers with) for all the services for individual node diagnostics.
>> This helps fill the gap of the Kibana visualization and Elasticsearch
>> where we may not have a perfect diagnostic solution at the conclusion of
>> Mitaka and in need of individual node inspection of the logs.  Can Heka
>>be
>> made to do this?  Our rsyslog implementation does today, and its a hard
>> requirement for the moment.  If we need some special software to run in
>> addition to Heka, I could live with that.
>
>
>That "special software" could be Rsyslog :)  Seriously, Rsyslog
>provides a solution for getting logs from services that only log to
>syslog.  We can also easily configure Rsyslog to write logs on the
>local filesystem, as done in Kolla already today.  And using Heka we
>can easily make Python tracebacks look good in Kibana.
>
>I would like to point out that our initial intent was not to remove
>Rsyslog.  Our intent was to propose a scalable/decentralized log
>processing architecture based on Heka running on each node, instead of
>relying on a centralized Logstash instance.  Using Heka we eliminate
>the need to deploy and manage a resilient Logstash/Redis cluster.  And
>it is to be noted that Heka gives us a lot of flexibility.  In
>particular, Heka makes it possible to collect logs from services that
>don't speak syslog (RabbitMQ for example, whose logs are not currently
>collected!).


FWIW the concern that logstash isn't scalable is unsubstantiated.  IMNSHO
the reason to use Heka is it does two things we need with one component
while removing JVMs from each compute node.  This is my main attraction to
the adoption of Heka.

>
>As mentioned above Heka provides plugins that we could possibly
>leverage to remove Rsyslog completely, but at this point we cannot
>guarantee that they will do the job.  Our coming tests will tell.

If the plugins work in a way in which we can remove rsyslog completely, I
would prefer that outcome.  Less dependencies in software systems is always a
good thing.
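
For reference, a first cut of the Heka pieces under discussion might look
like the sketch below (TOML; whether UdpInput in unixgram mode behaves well
against /dev/log is exactly what the PoC needs to confirm):

    # heka.toml -- read the local syslog socket, mirror everything to a file
    [syslog_input]
    type = "UdpInput"
    net = "unixgram"
    address = "/dev/log"

    [file_output]
    type = "FileOutput"
    message_matcher = "TRUE"
    path = "/var/log/heka/all.log"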

Re: [openstack-dev] [Nova] Nova midcycle list of attendees

2016-01-14 Thread Augustina Ragwitz
Another issue is some people may not want to share where they are 
staying or their arrival/departure details. I also appreciated Paul's 
tactfulness here :) Thanks Paul for putting that page up! I particularly 
appreciate the transportation column as I've heard transit in that area 
is a bit sparse. Now I know who to bug for rides!!


Augustina

On 2016-01-14 15:39, Carl Baldwin wrote:

On Jan 14, 2016 3:43 PM, "Anita Kuno"  wrote:
 >
 > On 01/14/2016 12:38 PM, Murray, Paul (HP Cloud) wrote:
 > > I have created a list of attendees for the Nova midcycle here:
https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprintAttendees [1]
 > >
 > > Obviously I can't put anyone's name on it for privacy reasons.
 >
 > What privacy reasons? Every other project lists attendees either on
a
 > wikipage or an etherpad.
 >
 > I don't know that nova has a privacy clause that is different from
any
 > other project. I don't recall a privacy clause when I registered to
attend.

I took this to mean that Paul did not want to publish the list of
attendees himself when it had not been public already.  In my
experience, every other mid-cycle ether pad or wiki has done
registration publicly to begin with but this one initially took
registration privately and they didn't seem to want to take the whole
list public without consent.  I appreciated Paul's consideration here
and gladly added myself.

Carl

Links:
--
[1] https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprintAttendees

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] gate-neutron-dsvm-api failures

2016-01-14 Thread Brian Haley

On 01/14/2016 05:42 PM, John Davidge (jodavidg) wrote:

The 
neutron.tests.api.admin.test_floating_ips_admin_actions.FloatingIPAdminTestJSON
test has been consistently failing for patch
https://review.openstack.org/#/c/258754/ and I don’t see how they can be
related. This patch has been trying to merge for a month.

This test seems to be experiencing a lot of failures recently:

http://status.openstack.org//elastic-recheck/data/uncategorized.html#gate-neutron-dsvm-api

Has it been diagnosed? Could somebody more familiar with the test take a look
please?


John,

That test was just recently changed too:

https://review.openstack.org/#/c/265016/2/neutron/tests/api/admin/test_floating_ips_admin_actions.py

So perhaps that change didn't actually fix things.

-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-14 Thread Matt Riedemann

We should get this series in for nova in the kilo 2015.1.3 release:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/kilo+topic:bug/1524274

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Sean M. Collins
On Thu, Jan 14, 2016 at 10:00:03AM CST, Ihar Hrachyshka wrote:
> Currently, devstack plugin has code for:
> * qos
> * sr-iov
> * l2 agent extensions
> * flavors

Right - and from the start on the review that introduced the Neutron devstack
plugin in the neutron tree, as well as on IRC, I advocated that the QoS work
should have been a small, narrowly defined DevStack plugin. Instead, we
now find ourselves where the kitchen sink has been thrown into this
thing.

Can we please take this all out of Neutron and put all these different pieces
into their own separate DevStack plugins?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving deprecated options identification and documentation

2016-01-14 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2016-01-14 11:58:13 -0500:
> On 01/14/2016 11:45 AM, Ronald Bradford wrote:
> > Presently the oslo.config Opt class has the attributes
> > deprecated_for_removal and deprecated_reason [1]
> >
> > I would like to propose that we use deprecated_reason (at a minimum) to
> > detail in what release an option was deprecated in, and what release it
> > is then removed in.
> 
> You mean what release it *will* be removed in, right? Clearly, once it's 
> removed, there won't be any indication it ever existed ;)
>
> > I see examples of deprecated_for_removal=True but no information on why
> > or when.  i.e. Ideally I'd like to move to an implied situation of
> >   if deprecated_for_removal=True then deprecated_reason is mandatory.
> 
> +1

+1

> 
> > A great example is an already documented help message in oslo.log
> > configuration option use_syslog_rfc_format that at least provides a
> > guideline.  [2] shows a proposed review to take this low road approach.
> > An image of what the change actually looks like in documentation using
> > this approach [3].  This also needs  #267151 that fixes an issue where
> > deprecated options are not producing a warning message in docs.
> >
> > The high road would be to have a discussion about if there is a better
> > way to mark and manage deprecated options. For example, if there was a
> > deprecated_release and a removal_release attribute then a level of
> > tooling could make this easier.  I would be wary in considering this, as
> > it adds complexity (is it needed), and just how many options are
> > deprecated.  I'd appreciate thoughts and feedback.
> 
> Any improvement in this regard I think would enhance the user experience 
> considerably, thank you Ronald for tackling this area. I'd also suggest 
> cc'ing (or sending a separate ML post) to the openstack-operators@ ML to 
> gather feedback from ops folks.

Good idea.

Maybe we could generate a generic warning when deprecated_for_removal is
True rather than a string? Then when the value is a string we could use
it as part of the deprecation warning when the option is being set in a
deployment.

Doug

> 
> Best,
> -jay
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 15 January

2016-01-14 Thread Lana Brindley

Hi everyone,

Welcome to 2016! A short newsletter this week as I get my feet back under me 
after my vacation. I hope you all had a wonderful holiday period, and got some 
well-earned down time. I'm looking forward to working hard with you all over 
the next few months to get Mitaka out into the wild :)

== Progress towards Mitaka ==

82 days to go!

324 bugs closed so far for this release.

RST Conversions
* All RST conversions are now complete! Well done to all the contributors and 
reviewers who made this happen so quickly this time around. We are now, with 
only a couple of exceptions, completely converted. Great job :) 

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

DocImpact
* After some discussion on the dev list, we're adjusting our approach to this 
problem. Watch this space.

== Speciality Teams ==

No updates this week, as I'm totally disorganised. Next week for sure!

== Core Team Changes ==

I would like to offer a warm welcome to Atsushi Sakai as our newest core team 
member. Welcome to the team :) 

We will resume regular reviews in February 2016.

== Doc team meeting ==

There's been some confusion over the meeting schedule, but as far as I can 
tell, these are the next meeting dates:

Next meetings:
US: Wednesday 20 January, 14:00 UTC
APAC: Wednesday 27 January, 00:30 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

--
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] gate-neutron-dsvm-api failures

2016-01-14 Thread Kevin Benton
Looks like this is exposing a legitimate bug:
https://bugs.launchpad.net/neutron/+bug/1534447

On Thu, Jan 14, 2016 at 5:23 PM, John Davidge (jodavidg)  wrote:

> On 1/14/16, 3:04 PM, "Brian Haley"  wrote:
>
>
> >On 01/14/2016 05:42 PM, John Davidge (jodavidg) wrote:
> >> The
> >>neutron.tests.api.admin.test_floating_ips_admin_actions.FloatingIPAdminTe
> >>stJSON
> >> test has been consistently failing for patch
> >> https://review.openstack.org/#/c/258754/ and I don't see how they can
> be
> >> related. This patch has been trying to merge for a month.
> >>
> >> This test seems to be experiencing a lot of failures recently:
> >>
> >>
> >>
> http://status.openstack.org//elastic-recheck/data/uncategorized.html#gate
> >>-neutron-dsvm-api
> >>
> >> Has it been diagnosed? Could somebody more familiar with the test take
> >>a look
> >> please?
> >
> >John,
> >
> >That test was just recently changed too:
> >
> >
> https://review.openstack.org/#/c/265016/2/neutron/tests/api/admin/test_flo
> >ating_ips_admin_actions.py
> >
> >So perhaps that change didn't actually fix things.
>
> It's failing with tempest_lib.exceptions.ServerFault rather than
> tempest_lib.exceptions.Conflict, so that change won't have helped in this
> case.
>
> John
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Get-validserver-state default policy

2016-01-14 Thread Juvonen, Tomi (Nokia - FI/Espoo)
This API change was agreed in the spec review to be "rule: admin_or_owner", but 
during code review "rule: admin_api" was also wanted.
Link to spec to see details what this is about 
(https://review.openstack.org/192246/):
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/get-valid-server-state.html

In my deployment where this is crucial information for the owner, this will 
certainly be "admin_or_owner". The question is now what is the general feeling 
about the default value in policy.json and should it just be as agreed in spec 
or should it be changed still.
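
For concreteness, the one-line policy.json override being discussed (the
policy key is the one from the spec's proposed implementation; confirm
against the review):

    {
        "os_compute_api:servers:show:host_status": "rule:admin_or_owner"
    }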

Br,
Tomi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Email as User Name on the Horizon login page

2016-01-14 Thread Adrian Turjak
I've run into a weird issue with the Liberty release of Horizon.

For our deployment we enforce emails as usernames, and thus for Horizon
we used to have "User Name" on the login page replaced with "Email".
This used to be a straightforward change in the html template file, and
with the introduction of themes we assumed it would be the same. When
one of our designers was migrating our custom CSS and html changes to
the new theme system they missed that change and I at first it was a
silly mistake.

Only on digging through the code myself I found that the "User Name" on
the login screen isn't in the html file at all, nor anywhere else
straightforward. The login page form is built on the fly with javascript
to facilitate different modes of authentication. While a bit annoying
that didn't seem too bad and I then assumed it might mean a javascript
change, but the more I dug, the more confused I became.

Where exactly is the login form defined? And where exactly is the "User
Name" text for the login form set?

I've tried all manner of stuff to change it with no luck and I feel like
I must have missed something obvious.

Cheers,
-Adrian Turjak

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Should we fix XML request issues?

2016-01-14 Thread Jethani, Ravishekar
Hi Devs,

I have come across a few 500 response issues while sending request
body as XML to cinder service. For example:

--
openstack@openstack-136:/opt/stack/cinder$ curl -i -X PUT -H "X-Auth-Token: 
79e6f8f529d2494b81dbd1a6ea5e077d"  -H "Accept: application/xml" 
"http://10.69.4.136:8776/v2/0fea9a45c8504875bcda9690a5625eab/volumes/921d806e-313f-47f5-9a1a-3ecffa0aa8ba/metadata"
 -H "Content-Type: application/xml" -d '<metadata>
  <meta key="name">v2</meta></metadata>'
HTTP/1.1 500 Internal Server Error
Content-Length: 215
Content-Type: application/xml; charset=UTF-8
X-Compute-Request-Id: req-177f7212-af65-4bad-84d0-aac0263d46eb
X-Openstack-Request-Id: req-177f7212-af65-4bad-84d0-aac0263d46eb
Date: Thu, 14 Jan 2016 07:28:06 GMT

http://docs.openstack.org/api/openstack-block-storage/2.0/content;>The
 server has either erred or is incapable of performing the requested 
operation.openstack@openstack-136:/opt/stack/cinder$


LOG:
2016-01-13 23:28:06.420 DEBUG eventlet.wsgi.server [-] (7587) accepted 
('10.69.4.136', 59959) from (pid=7587) server 
/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:826
2016-01-13 23:28:06.678 DEBUG oslo_policy._cache_handler 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] Reloading cached file /etc/cinder/policy.json 
from (pid=7587) read_cached_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38
2016-01-13 23:28:06.680 DEBUG oslo_policy.policy 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] Reloaded policy file: /etc/cinder/policy.json 
from (pid=7587) _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:493
2016-01-13 23:28:06.890 INFO cinder.api.openstack.wsgi 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] PUT 
http://10.69.4.136:8776/v2/0fea9a45c8504875bcda9690a5625eab/volumes/921d806e-313f-47f5-9a1a-3ecffa0aa8ba/metadata
2016-01-13 23:28:06.891 ERROR cinder.api.middleware.fault 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] Caught error:
2016-01-13 23:28:06.892 INFO cinder.api.middleware.fault 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] 
http://10.69.4.136:8776/v2/0fea9a45c8504875bcda9690a5625eab/volumes/921d806e-313f-47f5-9a1a-3ecffa0aa8ba/metadata
 returned with HTTP 500
2016-01-13 23:28:06.893 WARNING cinder.api.openstack.wsgi 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] Deprecated: XML support has been deprecated 
and will be removed in the N release.
2016-01-13 23:28:06.894 INFO eventlet.wsgi.server 
[req-177f7212-af65-4bad-84d0-aac0263d46eb e1079bbb3aa54660b2cacd9c8ca1e1d7 
0fea9a45c8504875bcda9690a5625eab] 10.69.4.136 "PUT 
/v2/0fea9a45c8504875bcda9690a5625eab/volumes/921d806e-313f-47f5-9a1a-3ecffa0aa8ba/metadata
 HTTP/1.1" status: 500  len: 487 time: 0.4734759


I can see that XML support has been marked as deprecated and will be
removed in the 'N' release. So is it still worth trying to fix these
issues during the Mitaka time frame?

Thanks.
Ravi Jethani

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] gate-neutron-dsvm-api failures

2016-01-14 Thread John Davidge (jodavidg)
The 
neutron.tests.api.admin.test_floating_ips_admin_actions.FloatingIPAdminTestJSON 
test has been consistently failing for patch 
https://review.openstack.org/#/c/258754/ and I don't see how they can be 
related. This patch has been trying to merge for a month.

This test seems to be experiencing a lot of failures recently:

http://status.openstack.org//elastic-recheck/data/uncategorized.html#gate-neutron-dsvm-api

Has it been diagnosed? Could somebody more familiar with the test take a look 
please?

Thanks,

John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova midcycle list of attendees

2016-01-14 Thread Carl Baldwin
On Jan 14, 2016 3:43 PM, "Anita Kuno"  wrote:
>
> On 01/14/2016 12:38 PM, Murray, Paul (HP Cloud) wrote:
> > I have created a list of attendees for the Nova midcycle here:
https://wiki.openstack.org/wiki/Sprints/NovaMitakaSprintAttendees
> >
> > Obviously I can't put anyone's name on it for privacy reasons.
>
> What privacy reasons? Every other project lists attendees either on a
> wikipage or an etherpad.
>
> I don't know that nova has a privacy clause that is different from any
> other project. I don't recall a privacy clause when I registered to
attend.

I took this to mean that Paul did not want to publish the list of attendees
himself when it had not been public already.  In my experience, every other
mid-cycle etherpad or wiki has done registration publicly to begin with,
but this one initially took registration privately and they didn't seem to
want to take the whole list public without consent.  I appreciated Paul's
consideration here and gladly added myself.

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-14 Thread Steven Dake (stdake)
Hi fellow core reviewers,

Harm joined Kolla early on with great enthusiasm and did a bang-up job for 
several months working on Kolla.  We voted unanimously to add him to the core 
team.  Over the last 6 months Harm hasn't really made much contribution to 
Kolla.  I have spoken to him about it in the past, and he indicated his work 
and other activities keep him from being able to execute the full job of a core 
reviewer and nothing environmental is changing in the near term that would 
improve things.

I faced a similar work/life balance problem when working on Magnum as a core 
reviewer and also serving as PTL for Kolla.  My answer there was to step down 
from the Magnum core reviewer team [1] because Kolla needed a PTL more than 
Magnum needed a core reviewer.  I would strongly prefer if folks don't have the 
time available to serve as a Kolla core reviewer, to step down as was done in 
the above example.  Folks that follow this path will always be welcome back as 
a core reviewer in the future once they become familiar with the code base, 
people, and the project.

The other alternative to stepping down is for the core reviewer team to vote to 
remove an individual from the core review team if that is deemed necessary.  
For future reference, if you as a core reviewer have concerns about a fellow 
core reviewer's performance, please contact the current PTL who will discuss 
the issue with you.

I propose removing Harm from the core review team.  Please vote:

+1 = remove Harm from the core review team
-1 = don't remove Harm from the core review team

Note folks that are voted off the core review team are always welcome to rejoin 
the core team in the future once they become familiar with the code base, 
people, and the project.  Harm is a great guy, and I hope in the future he has 
more time available to rejoin the Kolla core review team assuming this vote 
passes with simple majority.

It is important to explain, for folks that may be new to a core 
reviewer role (which many of our core reviewers are), why a core reviewer 
should have their +2/-2 voting rights removed when they become inactive or 
their activity drops below an acceptable threshold for extended or permanent 
periods.  This hasn't happened in Harm's case, but it is possible that a core 
reviewer could approve a patch that is incorrect because they lack sufficient 
context with the code base.  Our core reviewers are the most important part of 
ensuring quality software.  If the individual has lost context with the code 
base, their voting may be suspect, and more importantly the other core 
reviewers may not trust the individual's votes.  Trust is the cornerstone of a 
software review process, so we need to maximize trust on a technical level 
between our core team members.  That is why maintaining context with the code 
base is critical and why I am proposing a vote to remove Harm from the core 
reviewer team.

On a final note, folks should always know, joining the core review team is 
never "permanent".  Folks are free to move on if their interests take them into 
other areas or their availability becomes limited.  Core Reviewers can also be 
removed by majority vote.  If there is any core reviewer's performance you are 
concerned with, please contact the current PTL to first work on improving 
performance, or alternatively initiating a core reviewer removal voting process.

On a more personal note, I want to personally thank Harm for his many and 
significant contributions to Kolla and especially going above and beyond by 
taking on the responsibility of a core reviewer.  Harm's reviews were always 
very thorough and very high quality, and I really do hope in the future Harm 
will rejoin the Kolla core team.

Regards,
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Jeremy Stanley
On 2016-01-14 22:14:09 + (+), Sean M. Collins wrote:
[...]
> The problem we have is - most operators are using Ubuntu according to
> the user survey. Most likely they are using LTS releases. We already get
> flak for our pace of releases and our release lifecycle duration, so
> if we were to move off LTS for our gate, we would be pushing operators
> to move to more frequent upgrades for their base operating system. Maybe
> that's a discussion that needs to be had, but it will be contentious.

As a point of reference, the OpenStack Infrastructure team only uses
LTS distro releases to run production systems. We've also got a
modest sized OpenStack deployment on its way to production, again on
an LTS distro release. I agree that releasing server software which
is only well tested on "desktop" pace distro releases would be a
serious misstep for the project.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Long description of oslo.privsep

2016-01-14 Thread Thomas Goirand
Hi,

Luckily, I have written in the cookiecutter repo:

Please fill here a long description which must be at least 3 lines
wrapped on 80 cols, so that distribution package maintainers can use it
in their packages. Note that this is a hard requirement.

Because without it, we could see stuff like this:
https://pypi.python.org/pypi/oslo.privsep

Seriously, what shall I put as a long description for the package? Shall
I read the code to guess?

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Victor Stinner

Hi,

What are the issues? Is there a list of issues somewhere?

Victor

On 13/01/2016 03:07, Thomas Goirand wrote:

Hi,

Just a quick notice for those not following developments in Debian.
Python 3.5 is now the default Python 3 interpreter in Debian Unstable,
and when the transition will be finished [1], it's going to be the one
in Testing (aka: Stretch) as well. The current plan is to have Python
3.5 only in Stretch (ie: get rid of Python 3.4 completely).

Stretch is to be frozen in late 2016, so there's a good chance that
it's going to include Mitaka.

In other words, any Python 3.5 problem in Oslo, clients and so on will
be considered a Debian RC bug and shall be addressed ASAP there.

Cheers,

Thomas Goirand (zigo)

[1] https://release.debian.org/transitions/html/python3.5.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Get-validserver-state default policy

2016-01-14 Thread Jay Pipes

On 01/15/2016 01:50 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:

This API change was agreed in the spec review to be “rule:
admin_or_owner”, but during code review “rule: admin_api” was also wanted.
Link to the spec to see details of what this is about
(https://review.openstack.org/192246/):
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/get-valid-server-state.html
In my deployment where this is crucial information for the owner, this
will certainly be “admin_or_owner”. The question is now what is the
general feeling about the default value in policy.json: should it
stay as agreed in the spec, or should it be changed?


The host state is NOT something that a regular cloud user should be able 
to query, IMHO. Only admins should be able to see anything about the 
underlying compute hardware.


Exposing hardware information and statuses out through the REST API is a 
bad leak of implementation.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-14 Thread Michal Rostecki

On 01/14/2016 03:56 PM, Michał Jastrzębski wrote:

On 14 January 2016 at 04:46, Eric LEMOINE  wrote:

On Wed, Jan 13, 2016 at 1:15 PM, Steven Dake (stdake)  wrote:

Hey folks,

I'd like to have a mailing list discussion about logistics of the ELKSTACK
solution that Alicja has sorted out vs the Heka implementation that Eric is
proposing.

My take on that is Eric wants to replace rsyslog and logstash with Heka.



See my other email on this point.  At this point, given the
requirements we have (get logs from services that only speak syslog
and write logs to local files), we cannot guarantee that Heka will
replace Rsyslog.  We are going to test the use of Heka's UdpInput
(with "net" set to "unixgram") et FileOutput plugins for that.  Stay
tuned!


Yeah, also please try out different configs of rsyslog-only services.
Maybe MariaDB can be set up in a way compliant with Heka?



In the case of MariaDB I see that you can:

- do --skip-syslog, then all things go to the "error log"
- set the "error log" path by --log-error

So maybe logging to stdout/stderr may work here?

I'd suggest checking for similar options in RabbitMQ and other 
non-OpenStack components.


I can help with this, because in my opinion logging to stdout is the 
best default option for Mesos and Kubernetes - e.g. the Mesos UI shows an 
application's "sandbox", which generally is stdout/stderr. So if 
someone doesn't want to use Heka/ELK, then having everything in 
Mesos/Kubernetes would probably be a lesser evil than trying to make 
rsyslog work here.


Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Get-validserver-state default policy

2016-01-14 Thread Juvonen, Tomi (Nokia - FI/Espoo)
>-Original Message-
>From: EXT Jay Pipes [mailto:jaypi...@gmail.com] 
>Sent: Friday, January 15, 2016 9:25 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Nova] Get-validserver-state default policy
>
>On 01/15/2016 01:50 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
>> This API change was agreed in the spec review to be "rule:
>> admin_or_owner", but during code review "rule: admin_api" was also wanted.
>> Link to the spec to see details of what this is about
>> (https://review.openstack.org/192246/):
>> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/get-valid-server-state.html
>> In my deployment where this is crucial information for the owner, this
>> will certainly be "admin_or_owner". The question is now what is the
>> general feeling about the default value in policy.json: should it
>> stay as agreed in the spec, or should it be changed?
>
>The host state is NOT something that a regular cloud user should be able 
>to query, IMHO. Only admins should be able to see anything about the 
>underlying compute hardware.
>
>Exposing hardware information and statuses out through the REST API is a 
>bad leak of implementation.

Jay, yes, agreed in code review. The question just arose again as the code change 
was against the spec. I guess the spec can still be revisited. I have a small bit 
to add to the spec anyhow, so I can make it "rule: admin_api" at the same time :)

Br,
Tomi

>Best,
>-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Russell Bryant
On 01/14/2016 03:43 PM, Assaf Muller wrote:
> On Thu, Jan 14, 2016 at 9:28 AM, Russell Bryant  wrote:
>> On 01/13/2016 11:51 PM, Tony Breeds wrote:
>>> The challenge for you guys is the kernel side of things but if I
>>> understood correctly you can get the kernel module from the ovs
>>> source tree and just compile it against the stock ubuntu kernel
>>> (assuming the kernel devel headers are available) is that right?
>>
>> It's kernel and userspace.  There's multiple current development
>> efforts that involve changes to OpenStack, OVS userspace, and the
>> appropriate datapath (OVS kernel module or DPDK).
>>
>> The consensus I'm picking up roughly is that for those working on the
>> features, testing with source builds seems to be working fine.  It's
>> just not something anyone wants to gate the main Neutron repo with.
>> That seems quite reasonable.  If the features aren't in proper
>> releases yet, I don't see gating as that important anyway.
> 
> I want to have voting tests for new features. For the past year the
> OVS agent ARP responder feature has been without proper coverage, and
> now it's the upcoming OVS firewall driver. I think that as long as we
> compile from a specific OVS patch (and not a moving target), I don't
> see much of a difference between gating on OVS 2.0 and gating on, for
> example, the current tip of the OVS 2.5 branch (but continuing to
> gate on that patch, so when the OVS 2.5 branch gets backports we
> won't gate on those, and we'll be able to move to a new tip at our
> own pace). As long as we pick a patch to compile against and run the
> functional tests a few times and verify that it works, I think it's
> reasonable. We've been gating against OVS 2.0 for the past few years;
> that to me seems unreasonable. We're gating against an OVS version
> nobody is using in production anymore.

I would agree that still using OVS 2.0 doesn't make any sense.
-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Clark Boylan
On Thu, Jan 14, 2016, at 01:07 PM, Russell Bryant wrote:
> On 01/14/2016 03:43 PM, Assaf Muller wrote:
> > On Thu, Jan 14, 2016 at 9:28 AM, Russell Bryant  wrote:
> >> On 01/13/2016 11:51 PM, Tony Breeds wrote:
> >>> The challenge for you guys is the kernel side of things but if I
> >>> understood correctly you can get the kernel module from the ovs
> >>> source tree and just compile it against the stock ubuntu kernel
> >>> (assuming the kernel devel headers are available) is that right?
> >>
> >> It's kernel and userspace.  There's multiple current development
> >> efforts that involve changes to OpenStack, OVS userspace, and the
> >> appropriate datapath (OVS kernel module or DPDK).
> >>
> >> The consensus I'm picking up roughly is that for those working on the
> >> features, testing with source builds seems to be working fine.  It's
> >> just not something anyone wants to gate the main Neutron repo with.
> >> That seems quite reasonable.  If the features aren't in proper
> >> releases yet, I don't see gating as that important anyway.
> > 
> > I want to have voting tests for new features. For the past year the
> > OVS agent ARP responder feature has been without proper coverage, and
> > now it's the upcoming OVS firewall driver. I think that as long as we
> > compile from a specific OVS patch (and not a moving target), I don't
> > see much of a difference between gating on OVS 2.0 and gating on, for
> > example, the current tip of the OVS 2.5 branch (but continuing to
> > gate on that patch, so when the OVS 2.5 branch gets backports we
> > won't gate on those, and we'll be able to move to a new tip at our
> > own pace). As long as we pick a patch to compile against and run the
> > functional tests a few times and verify that it works, I think it's
> > reasonable. We've been gating against OVS 2.0 for the past few years;
> > that to me seems unreasonable. We're gating against an OVS version
> > nobody is using in production anymore.
> 
> I would agree that still using OVS 2.0 doesn't make any sense.
>
Forgive my ignorance, but if we are gating on OVS 2.0 that is because it
is what is shipped with the distros that we test on. Are we saying that
no one uses the distro provided OVS packages to run Neutron? If not what
are they using?

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-14 Thread Doug Hellmann
Focus
-

Next week is the second milestone for the Mitaka cycle. Major feature
work should be making good progress or be re-evaluated to see whether
it will really land this cycle.

Release Actions
---

Liaisons should submit tag requests to the openstack/releases
repository for all projects following the cycle-with-milestone
release model before the end of the day on Jan 21.

We're working on updating the documented responsibilities for release
liaisons. Please have a look at https://review.openstack.org/#/c/262003/
and leave comments if you have questions or concerns.

Important Dates
---

Mitaka 2: Jan 19-21

Deadline for Mitaka 2 tag: Jan 21

Mitaka release schedule: 
http://docs.openstack.org/releases/schedules/mitaka.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Sean M. Collins
On Thu, Jan 14, 2016 at 03:13:16PM CST, Clark Boylan wrote:
> Forgive my ignorance, but if we are gating on OVS 2.0 that is because it
> is what is shipped with the distros that we test on. Are we saying that
> no one uses the distro provided OVS packages to run Neutron? If not what
> are they using?

Right - this was my impression as well.

I know that at least in Ubuntu, they have repos like Cloud-Archive[1]
that are targeted towards OpenStack releases and more recent packages of
libraries. Perhaps we need to investigate if our Ubuntu images at the
gate should be using this repo or something like this to pick up more
recent packaged versions of components like Open vSwitch?
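
(If I remember the tooling right, enabling it is a one-liner on a node,
e.g. something like "sudo add-apt-repository cloud-archive:liberty"
followed by an apt-get update, though I haven't double-checked which
release pockets are published for trusty.)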


[1]: https://launchpad.net/cloud-archive

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Ben Pfaff
On Thu, Jan 14, 2016 at 09:30:03PM +, Sean M. Collins wrote:
> On Thu, Jan 14, 2016 at 03:13:16PM CST, Clark Boylan wrote:
> > Forgive my ignorance, but if we are gating on OVS 2.0 that is because it
> > is what is shipped with the distros that we test on. Are we saying that
> > no one uses the distro provided OVS packages to run Neutron? If not what
> > are they using?
> 
> Right - this was my impression as well.

Even Debian stable (jessie) has OVS 2.3; what distro has 2.0?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Assaf Muller
On Thu, Jan 14, 2016 at 9:28 AM, Russell Bryant  wrote:
> On 01/13/2016 11:51 PM, Tony Breeds wrote:
>> The challenge for you guys is the kernel side of things but if I
>> understood correctly you can get the kernel module from the ovs
>> source tree and just compile it against the stock ubuntu kernel
>> (assuming the kernel devel headers are available) is that right?
>
> It's kernel and userspace.  There's multiple current development
> efforts that involve changes to OpenStack, OVS userspace, and the
> appropriate datapath (OVS kernel module or DPDK).
>
> The consensus I'm picking up roughly is that for those working on the
> features, testing with source builds seems to be working fine.  It's
> just not something anyone wants to gate the main Neutron repo with.
> That seems quite reasonable.  If the features aren't in proper
> releases yet, I don't see gating as that important anyway.

I want to have voting tests for new features. For the past year the OVS agent
ARP responder feature has been without proper coverage, and now it's the
upcoming OVS firewall driver. I think that as long as we compile from a
specific OVS patch (and not a moving target), I don't see much of a difference
between gating on OVS 2.0 and gating on, for example, the current tip of the
OVS 2.5 branch (but continuing to gate on that patch, so when the OVS 2.5
branch gets backports we won't gate on those, and we'll be able to move to a
new tip at our own pace). As long as we pick a patch to compile against and
run the functional tests a few times and verify that it works, I think it's
reasonable. We've been gating against OVS 2.0 for the past few years; that to
me seems unreasonable. We're gating against an OVS version nobody is using in
production anymore.

>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-14 Thread Tzu-Mainn Chen


- Original Message -
> On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > Hey all,
> > 
> > I realize now from the title of the other TripleO/Mistral thread [1] that
> > the discussion there may have gotten confused.  I think using Mistral for
> > TripleO processes that are obviously workflows - stack deployment, node
> > registration - makes perfect sense.  That thread is exploring
> > practicalities
> > for doing that, and I think that's great work.
> > 
> > What I inappropriately started to address in that thread was a somewhat
> > orthogonal point that Dan asked in his original email, namely:
> > 
> > "what it might look like if we were to use Mistral as a replacement for the
> > TripleO API entirely"
> > 
> > I'd like to create this thread to talk about that; more of a 'should we'
> > than 'can we'.  And to do that, I want to indulge in a thought exercise
> > stemming from an IRC discussion with Dan and others.  All, please correct
> > me
> > if I've misstated anything.
> > 
> > The IRC discussion revolved around one use case: deploying a Heat stack
> > directly from a Swift container.  With an updated patch, the Heat CLI can
> > support this functionality natively.  Then we don't need a TripleO API; we
> > can use Mistral to access that functionality, and we're done, with no need
> > for additional code within TripleO.  And, as I understand it, that's the
> > true motivation for using Mistral instead of a TripleO API: avoiding custom
> > code within TripleO.
> > 
> > That's definitely a worthy goal... except from my perspective, the story
> > doesn't quite end there.  A GUI needs additional functionality, which boils
> > down to: understanding the Heat deployment templates in order to provide
> > options for a user; and persisting those options within a Heat environment
> > file.
> > 
> > Right away I think we hit a problem.  Where does the code for
> > 'understanding
> > options' go?  Much of that understanding comes from the capabilities map
> > in tripleo-heat-templates [2]; it would make sense to me that
> > responsibility
> > for that would fall to a TripleO library.
> > 
> > Still, perhaps we can limit the amount of TripleO code.  So to give API
> > access to 'getDeploymentOptions', we can create a Mistral workflow.
> > 
> >   Retrieve Heat templates from Swift -> Parse capabilities map
> > 
> > Which is fine-ish, except from an architectural perspective
> > 'getDeploymentOptions' violates the abstraction layer between storage and
> > business logic, a problem that is compounded because 'getDeploymentOptions'
> > is not the only functionality that accesses the Heat templates and needs
> > exposure through an API.  And, as has been discussed on a separate TripleO
> > thread, we're not even sure Swift is sufficient for our needs; one possible
> > consideration right now is allowing deployment from templates stored in
> > multiple places, such as the file system or git.
> 
> Actually, that whole capabilities map thing is a workaround for a missing
> feature in Heat, which I have proposed, but am having a hard time reaching
> consensus on within the Heat community:
> 
> https://review.openstack.org/#/c/196656/
> 
> Given that is a large part of what's anticipated to be provided by the
> proposed TripleO API, I'd welcome feedback and collaboration so we can move
> that forward, vs solving only for TripleO.
> 
> > Are we going to have duplicate 'getDeploymentOptions' workflows for each
> > storage mechanism?  If we consolidate the storage code within a TripleO
> > library, do we really need a *workflow* to call a single function?  Is a
> > thin TripleO API that contains no additional business logic really so bad
> > at that point?
> 
> Actually, this is an argument for making the validation part of the
> deployment a workflow - then the interface with the storage mechanism
> becomes more easily pluggable vs baked into an opaque-to-operators API.
> 
> E.g., in the long term, imagine the capabilities feature exists in Heat; you
> then have a pre-deployment workflow that looks something like:
> 
> 1. Retrieve golden templates from a template store
> 2. Pass templates to Heat, get capabilities map which defines features user
> must/may select.
> 3. Prompt user for input to select required capabilities
> 4. Pass user input to Heat, validate the configuration, get a mapping of
> required options for the selected capabilities (nested validation)
> 5. Push the validated pieces ("plan" in TripleO API terminology) to a
> template store
> 
> This is a pre-deployment validation workflow, and it's a superset of the
> getDeploymentOptions feature you refer to.
> 
> Historically, TripleO has had a major gap wrt workflow, meaning that we've
> always implemented it either via shell scripts (tripleo-incubator) or
> python code (tripleo-common/tripleo-client, potentially TripleO API).
> 
> So I think what Dan is exploring is, how do we avoid reimplementing a
> workflow engine, when a project 

[openstack-dev] [cross-project] Cross-Project Specs and Your Project

2016-01-14 Thread Mike Perez
Hello all!

We've been discussing cross-project spec liaisons on the mailing list [1] and
cross-project meeting [2][3] for a bit, and we're now stating the official
responsibilities [4] of the group.

The liaisons are reps of OpenStack projects who watch the cross-project spec
repo [5] for things that affect their project. This is to avoid projects
missing out on cross-project specs that are agreed by the community, and later
approved by the technical committee.

The responsibilities are detailed in the project team guide doc [4], so
please provide feedback. The role defaults to the PTL, so I have emailed each
PTL that's part of the governance projects.yaml to comment.

Please respond to the review [4] to give feedback, not this thread. This thread
is purely to promote attention. Thanks!


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/080869.html
[2] - 
http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-12-01-21.00.html
[3] - 
http://eavesdrop.openstack.org/meetings/crossproject/2016/crossproject.2016-01-12-21.02.html
[4] - https://review.openstack.org/#/c/266072
[5] - https://review.openstack.org/#/q/project:openstack/openstack-specs

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] New API Guidelines Ready for Cross Project Review

2016-01-14 Thread michael mccune

hi all,

The following API guideline is ready for cross project review. It will 
be merged on Jan. 21 if there is no further feedback.


1. Add description of pagination parameters
https://review.openstack.org/#/c/190743

regards,
mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-14 Thread Hongbin Lu
In short, the container IDs assigned by Magnum are independent of the container 
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a 
native API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the 
following:

1.   Generate a uuid (if not provided).

2.   Call the Docker Swarm API to create a container, with its hostname equal 
to the generated uuid.

3.   Persist container to DB with the generated uuid.

If users perform an operation on an existing container, they must provide the 
uuid (or the name) of the container (if name is provided, it will be used to 
look up the uuid). Magnum will do the following:

1.   Call the Docker Swarm API to list all containers.

2.   Find the container whose hostname is equal to the provided uuid, and 
record its “docker_id”, i.e. the ID assigned by the native tool.

3.   Call the Docker Swarm API with the “docker_id” to perform the operation.
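
As a rough illustration (this is not the actual Magnum code), the lookup in
steps 1-2 amounts to something like the following, using docker-py against
the Swarm API endpoint:

    # Untested sketch; magnum_uuid is the Magnum-generated uuid that was
    # used as the container hostname at creation time.
    import docker

    def docker_id_for(swarm_url, magnum_uuid):
        client = docker.Client(base_url=swarm_url)
        for container in client.containers(all=True):
            info = client.inspect_container(container['Id'])
            if info['Config']['Hostname'] == magnum_uuid:
                return container['Id']
        raise LookupError('no container with hostname %s' % magnum_uuid)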

Magnum doesn’t require all operations to be routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays


This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?



> In the REST API, you can set the “uuid” field in the JSON request body (this is 
> not supported in the CLI, but it is an easy add).



In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?



> For the idea of nesting container resource, I prefer not to do that if there 
> are alternatives or it can be worked around. IMO, it sets a limitation that a 
> container must have a bay, which might not be the case in the future. For 
> example, we might add a feature where creating a container will automatically 
> create a bay. If a container must have a bay on creation, such a feature is 
> impossible.



If that's *really* a feature you need and are fully involved in designing for, 
this seems like a case where creating a container via these endpoints would 
create a bay and return the full resource+subresource.



Personally, I think these COE endpoints need to not be in the main spec, to 
reduce the surface area until these are put into further use.








From: Hongbin Lu
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Hi Jamie,

I would like to clarify several things.

First, a container uuid is intended to be unique globally (not just within an 
individual cluster). If you create a container with a duplicated uuid, the 
creation will fail regardless of its bay. Second, you are in control of the 
uuid of the container that you are going to create. In the REST API, you can set 
the “uuid” field in the JSON request body (this is not supported in the CLI, but it 
is an easy add). If a uuid is provided, Magnum will use it as the uuid of the 
container (instead of generating a new uuid).

For the idea of nesting container resource, I prefer not to do that if there 
are alternatives or it can be worked around. IMO, it sets a limitation that a 
container must have a bay, which might not be the case in the future. For example, 
we might add a feature where creating a container will automatically create a 
bay. If a container must have a bay on creation, such a feature is impossible.

Best regards,
Hongbin

From: Jamie Hannaford [mailto:jamie.hannaf...@rackspace.com]
Sent: January-13-16 4:43 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Nesting /containers resource under /bays


I've recently been gathering feedback about the Magnum API and one of the 
things that people commented on was the global /containers endpoints. One 
person highlighted the danger of UUID collisions:



"""

It takes a container ID which is intended to be unique within that individual 
cluster. Perhaps this doesn't matter, considering the surface for hash 
collisions. You're running a 1% risk of collision on the shorthand container 
IDs:



In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))
In [15]: n(.01, 0x1000000000000)
Out[15]: 2378620.6298183016



(this comes from the Birthday Attack - 
https://en.wikipedia.org/wiki/Birthday_attack)



The main reason I questioned this is that we're not in 

[openstack-dev] you almost never need to change every repository

2016-01-14 Thread Doug Hellmann
We've started seeing a lot of changes that tweak the same thing across
many, many repositories. While it's good to have standardization, I
think it's safe to say that if you find yourself making the same change
to more than a few repositories at once, we should be looking for
another way to have that change applied.

I don't want to pick on individuals, because this isn't a new
situation and the specific examples in the review queue right now
aren't unique.  But as a general rule, if you think you need to
change any of the build configuration files of every single project
and you're not involved in a major initiative to roll out something
new like a python version upgrade or other project-wide change,
that's probably an indication that there is something else going
on and we should look for a work-around for the issue you're hitting.

So, if you start finding yourself making the same change over and over
in a lot of projects, please start a conversation here on the mailing
list before you go too far. It's quite likely that someone else has
already encountered and resolved the issue you've found.

Thanks,
Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Jeremy Stanley
On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
[...]
> Is there any plan to add Python 3.5 to infra?

I expect we'll end up with it shortly after Ubuntu 16.04 LTS
releases in a few months (does anybody know for sure what its
default Python 3 is slated to be?). Otherwise if a debian-sid
nodepool image shows up we could certainly try running Py3K jobs on
that instead.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][artifacts] FFE for Glare specs

2016-01-14 Thread Alexander Tivelkov
Hi,

Unfortunately I skipped the "spec freeze exception week" due to a long
holiday season, so I'd like to ask for the freeze exception for the
following two specs now:

*1. Add more implementation details to 'deprecate-v3-api'* [1]
*2. Glare Public API *[2]

Spec [1] is actually a patch adding more concrete details to the spec which
describes the removal of glance v3 API in favour of standalone glare v0.1
API ([3]), which was accepted for Mitaka and merged. So, it makes no sense
to me to accept [3] but postpone [1], which actually just adds more
details of the very same job.

The second spec ([2]) aims to stabilise the glare API by addressing DefCore
and API-WG comments on the current API. The discussions of this
API tend to take a long time, but the actual implementation is really quick
(since these are just changes in API routers with the same domain, DB,
and code underneath), and I believe that we will still be able to do this
work in Mitaka, even if the spec is approved much later in the cycle.
Also, we've agreed that for this type of work our FastTrack approach should
still be applied, which means much less review burden required.

Thanks for considering this.

[1] https://review.openstack.org/#/c/259427/
[2] https://review.openstack.org/#/c/254710/
[3] https://review.openstack.org/#/c/254163/
-- 
Regards,
Alexander Tivelkov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] How will nova advertise that volume multi-attach is supported?

2016-01-14 Thread Andrew Laski


On Wed, Jan 13, 2016, at 07:25 PM, Dan Smith wrote:
> > While I don't think it's strictly required by the api change guidelines [3]
> > I think the API interactions and behavior here feel different enough to 
> > warrant
> > having a microversion. Ideally there should have been some versioning in the
> > cinder api around the multiattach support but that ship has already sailed.
> > Treating the volume attach case in nova as the traditional single attach 
> > case
> > by default and having to specify a new microversion to enable using multiple
> > attach will at least make it more explicit to users which I think is a good
> > thing.
> 
> Right, I think the client explicitly saying "I know that there is this
> new thing called multi-attach" or "I should know but I didn't read the
> docs and irresponsibly claim to support this version anyway" is an
> important thing to have. While it doesn't (AFAIK) fall under the
> guidelines for signalling a change as you say, it is a big change
> regardless. There could certainly be clients that have the same
> attachment assumptions as nova currently has.
> 
> The problem is that we can't honor the pre-microversion semantics to
> older clients. Meaning, a client that claims to know nothing about
> multi-attach is going to make the assumptions it was making anyway, and
> we can't un-ring the bell for that client.
> 
> Still, I think it's useful to signal this change if for no other reason
> than it will hopefully catch the attention of careful client authors as
> they bump their maximum supported version declaration.
> 
> > I'm probably overlooking something major but shouldn't nova know if the virt
> > driver supports multiattach? If there are no computes with a compatible 
> > setup
> > why not just return an error and not even attempt the cast? I'm guessing 
> > all the
> > necessary info isn't in the DB which means there isn't a way to check this 
> > up
> > front.
> 
> We don't have that information, and as you hint above, we can have
> multiple virt drivers with varying levels of support in a single
> deployment. However, the inevitable result of "No Valid Host" is a
> little more correct in the case of the virt driver support situation.
> You asked us to do a thing, which was reasonable and supported by nova
> but ... during scheduling we failed to find any computes willing to
> honor the request. That could have been different ten minutes ago, and
> could certainly be different an hour from now. That fits NoValidHost
> properly I think.
> 
> If you've been told by cinder that your volume supports multi-attach,
> and nova is new enough to claim it supports it, returning 400 seems
> unfair and confusing to the user -- the operation should be valid.
> 
> So in summary:
> 
> - I think a microversion is not specifically required, but useful
> - I think a config or dynamic flag to change the API behavior is wrong
> - NoValidHost when no available hypervisors support it seems appropriate

I think NoValidHost is appropriate for now as well.

It is however not ideal when a deployment is set up such that
multiattach will always fail because a hypervisor is in use which
doesn't support it.  An immediate solution would be to add a policy so a
deployer could disallow it that way which would provide immediate
feedback to a user that they can't do it.  A longer term solution would
be to add capabilities to flavors and have flavors act as a proxy
between the user and various hypervisor capabilities available in the
deployment.  Or we can focus on providing better async feedback through
instance-actions, and other discussed async api changes.
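
As a sketch of that immediate policy option (the rule name here is
hypothetical; nothing like it exists in nova's policy.json today), a
deployer whose hypervisors can't do multi-attach would change the default
to "rule:admin_api" or "!" to fail fast:

    {
        "os_compute_api:servers:create:multiattach": "rule:admin_or_owner"
    }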



> 
> --Dan
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-14 Thread Michał Jastrzębski
On 14 January 2016 at 04:46, Eric LEMOINE  wrote:
> On Wed, Jan 13, 2016 at 1:15 PM, Steven Dake (stdake)  
> wrote:
>> Hey folks,
>>
>> I'd like to have a mailing list discussion about logistics of the ELKSTACK
>> solution that Alicja has sorted out vs the Heka implementation that Eric is
>> proposing.
>>
>> My take on that is Eric wants to replace rsyslog and logstash with Heka.
>
>
> See my other email on this point.  At this point, given the
> requirements we have (get logs from services that only speak syslog
> and write logs to local files), we cannot guarantee that Heka will
> replace Rsyslog.  We are going to test the use of Heka's UdpInput
> (with "net" set to "unixgram") et FileOutput plugins for that.  Stay
> tuned!

Yeah, also please try out different configs of rsyslog-only services.
Maybe MariaDB can be set up in a way compliant with Heka?

>> That seems fine, but I want to make certain this doesn't happen in a way
>> that leaves Kolla completely non-functional as we finish up Mitaka.  Liberty
>> is the first version of Kolla people will deploy, and Mitaka is the first
>> version of Kolla people will upgrade to, so making sure that we don't
>> completely bust diagnostics (and I recognize diags as-is are a little weak)
>> is critical.
>>
>> It sounds like from my reading of the previous thread on this topic, unless
>> there is some intractable problem, our goal is to use Heka to replace
>> rsyslog and logstash.  I'd ask inc0 (who did the rsyslog work) and Alicja
>> (who did the elkstack work) to understand that replacement often happens on
>> work that has already been done, and it's not a "waste of time" so to speak
>> as an evolution of the system.
>>
>> Here are the deadlines:
>> http://docs.openstack.org/releases/schedules/mitaka.html
>>
>> Let me help decode that for folks. March 4th is the final deadline to have a
>> completely working solution based upon Heka if it's to enter Mitaka.
>
>
> Understood.
>
>
>>
>> Unlike previous releases of Kolla, I want to hand off release management of
>> Kolla to the release management team, and to do that, we need to show a
>> track record of hitting our deadlines and not adding features past feature
>> freeze (the m3 milestone on March 4th).  In the past releases of Kolla we as
>> a team were super loose on this requirement – going forward I prefer us
>> being super strict.  Handing off to release management is a sign of maturity
>> and would have an overall positive impact, assuming we can get the software
>> written in time :)
>>
>> Eric,
>>
>> I'd like a plan and commitment to either hit Mitaka 3, or the N cycle.  It
>> must work well first on Ansible, and second on Mesos.  If it doesn't work at
>> all on Mesos, I could live with that -  I think the Mesos implementation
>> will really not be ready for prime time until the middle or completion of
>> the N cycle.  We lead with Ansible, and I don't see that changing any time
>> soon – as a result, I want our Ansible deployment to be rock solid and
>> usable out of the gate.  I don't expect to "Market" Mitaka Mesos (with the
>> OpenStack foundation's help) as "production ready" but rather as "tech
>> preview" and something for folks to evaluate.
>
>
> It is our intent to meet the March 4th deadline.
>
>
>
>>
>> Alicja,
>>
>> I think a parallel development effort with the ELKSTACK that you're working on
>> makes sense.  In case the Heka development fails entirely, or misses Mitaka
>> 3, I don't want us left lacking a diagnostics solution for Mitaka.
>> Diagnostics is my priority #2 for Kolla (#1 is upgrades).  Unfortunately
>> what this means is you may end up wasting your time doing development that
>> is replaced at the last minute in Mitaka 3, or later in the N cycle.  This
>> is very common in software development (all the code I wrote for Magnum has
>> been sadly replaced).  I know you can be a good team player here and take
>> one for the team so to speak, but I'm asking you if you would take offense
>> to this approach.
>
>
> I'd like to moderate this a bit.  We want to build on Alicja's work,
> and we will reuse everything that Alicja has done/will do on
> Elasticsearch and Kibana, as this part of the stack will be the same.
>
>
>
>>
>> I'd like comments/questions/concerns on the above logistics approach
>> discussed, and a commitment from Eric as to when he thinks all the code
>> would land as one patch stream unit.
>>
>> I'd also like to see the code come in as one super big patch stream (think
>> 30 patches in the stream) so the work can be evaluated and merged as one
>> unit.  I could also live with 2-3 different patch streams with 10-15 patches
>> per stream, just so we can eval as a unit.  This means lots of rebasing on
>> your part Eric ;-)  It also means a commitment from the core reviewer team
>> to test and review this critical change.  If there isn't a core reviewer on
>> board with this approach, please speak up now.

I'm on board:)

>
> Makes 

Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Brian Haley

On 1/14/16 9:28 AM, Russell Bryant wrote:


The consensus I'm picking up roughly is that for those working on the
features, testing with source builds seems to be working fine.  It's
just not something anyone wants to gate the main Neutron repo with.
That seems quite reasonable.  If the features aren't in proper
releases yet, I don't see gating as that important anyway.


Yes, I don't see it as being used for gating (yet), just for easily 
selecting a version of OVS I want to use for testing, like something 
that supports IPv6 tunnels, whether it's pre-built or built by the script.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][openstack] os-client-config release 1.14.0 (mitaka)

2016-01-14 Thread doug
We are chuffed to announce the release of:

os-client-config 1.14.0: OpenStack Client Configuation Library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

1.14.0
^^^^^^

Other Notes

* Started using reno for release notes.


Changes in os-client-config 1.13.1..1.14.0
------------------------------------------

a8532f6 Fix a precedence problem with auth arguments
7e54967 Return empty dict instead of None for lack of file
cd5f16c Pass version arg by name not position
f61a487 Use _get_client in make_client helper function
9835daf Add barbicanclient support
caae8ad Remove openstack-common.conf
cab0469 Add IBM Public Cloud
0b270f0 Replace assertEqual(None, *) with assertIsNone in tests
3b5673c Update auth urls and identity API versions
0bc9e33 Stop hardcoding compute in simple_client
1cd3e5b Update volume API default version from v1 to v2
c514b85 Debug log a deferred keystone exception, else we mask some useful diag
9688f8e Fix README.rst, add a check for it to fit PyPI rules
594e31a Use reno for release notes
f3678f0 add URLs for release announcement tools
7ee7156 Allow filtering clouds on command line

Diffstat (except docs and test files)
-------------------------------------

README.rst | 10 +++-
openstack-common.conf  |  6 -
os_client_config/__init__.py   | 11 +
os_client_config/cloud_config.py   | 10 
os_client_config/config.py | 24 ---
os_client_config/constructors.json |  1 +
os_client_config/defaults.json |  3 ++-
os_client_config/vendors/auro.json |  1 +
os_client_config/vendors/bluebox.json  |  1 +
os_client_config/vendors/catalyst.json |  1 +
os_client_config/vendors/citycloud.json|  1 +
os_client_config/vendors/conoha.json   |  5 ++--
os_client_config/vendors/datacentred.json  |  3 ++-
os_client_config/vendors/dreamhost.json|  3 ++-
os_client_config/vendors/elastx.json   |  3 ++-
os_client_config/vendors/entercloudsuite.json  |  4 +++-
os_client_config/vendors/hp.json   |  4 +++-
os_client_config/vendors/ibmcloud.json | 13 ++
os_client_config/vendors/internap.json |  3 ++-
os_client_config/vendors/ovh.json  |  3 ++-
os_client_config/vendors/rackspace.json|  1 +
os_client_config/vendors/runabove.json |  3 ++-
os_client_config/vendors/switchengines.json|  1 +
os_client_config/vendors/ultimum.json  |  4 +++-
os_client_config/vendors/unitedstack.json  |  1 +
.../notes/started-using-reno-242e2b0cd27f9480.yaml |  3 +++
test-requirements.txt  |  1 +
tox.ini|  7 +-
34 files changed, 161 insertions(+), 45 deletions(-)


Requirements updates
--------------------

diff --git a/test-requirements.txt b/test-requirements.txt
index 7053051..a50a202 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -18,0 +19 @@ oslotest>=1.5.1,<1.6.0  # Apache-2.0
+reno>=0.1.1  # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo] Improving deprecated options identification and documentation

2016-01-14 Thread Ronald Bradford
Presently the oslo.config Opt class has the attributes
deprecated_for_removal and deprecated_reason [1].

I would like to propose that we use deprecated_reason (at a minimum) to
detail the release in which an option was deprecated and the release in
which it will be removed. I see examples of deprecated_for_removal=True
with no information on why or when. Ideally I'd like to move to an implied
rule: if deprecated_for_removal=True, then deprecated_reason is mandatory.
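
As a minimal sketch of that low-road convention, using only the attributes
the Opt class already has (the option, release name, and wording below are
illustrative):

    from oslo_config import cfg

    opts = [
        cfg.BoolOpt('use_syslog_rfc_format',
                    default=True,
                    deprecated_for_removal=True,
                    # Record both the deprecation release and the planned
                    # removal in the reason, since there are no dedicated
                    # attributes for them yet.
                    deprecated_reason='Deprecated in Mitaka, will be '
                                      'removed in a future release.',
                    help='Enables or disables syslog rfc5424 format.'),
    ]

    cfg.CONF.register_opts(opts)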

A great example is the already documented help message for the oslo.log
configuration option use_syslog_rfc_format, which at least provides a
guideline. [2] is a proposed review taking this low-road approach, and [3]
is an image of what the change actually looks like in the generated
documentation. This also needs #267151, which fixes an issue where
deprecated options do not produce a warning message in the docs.

The high road would be to discuss whether there is a better way to mark and
manage deprecated options. For example, if there were deprecated_release
and removal_release attributes, a level of tooling could make this easier.
I would be wary of taking that route, as it adds complexity (is it
needed?), and it is worth asking just how many options are deprecated in
practice. I'd appreciate thoughts and feedback.
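
To make the tooling idea concrete, a rough sketch of the kind of check it
would enable, using only the existing Opt attributes (the deprecated_release
and removal_release attributes above are hypothetical and do not exist
today):

    from oslo_config import cfg

    def undocumented_deprecations(opts):
        # Flag options marked for removal that give no reason; this is
        # exactly the situation the proposal wants to eliminate.
        return [opt.name for opt in opts
                if opt.deprecated_for_removal and not opt.deprecated_reason]

    opts = [cfg.StrOpt('old_opt', deprecated_for_removal=True)]
    print(undocumented_deprecations(opts))  # ['old_opt']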

Regards

Ronald


[1] http://docs.openstack.org/developer/oslo.config/opts.html
[2] https://review.openstack.org/#/c/267176/
[3] http://postimg.org/image/vdkh3x46t/full/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Yuriy Taraday
On Thu, Jan 14, 2016 at 5:48 PM Jeremy Stanley  wrote:

> On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
> [...]
> > Is there any plan to add Python 3.5 to infra?
>
> I expect we'll end up with it shortly after Ubuntu 16.04 LTS
> releases in a few months (does anybody know for sure what its
> default Python 3 is slated to be?).
>

It's 3.5.1 already in Xenial: http://packages.ubuntu.com/xenial/python3
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Russell Bryant
On 01/14/2016 04:32 PM, Ben Pfaff wrote:
> On Thu, Jan 14, 2016 at 09:30:03PM +, Sean M. Collins wrote:
>> On Thu, Jan 14, 2016 at 03:13:16PM CST, Clark Boylan wrote:
>>> Forgive my ignorance, but if we are gating on OVS 2.0 that is because it
>>> is what is shipped with the distros that we test on. Are we saying that
>>> no one uses the distro provided OVS packages to run Neutron? If not what
>>> are they using?
>>
>> Right - this was my impression as well.
> 
> Even Debian stable (jessie) has OVS 2.3, what distro has 2.0?

Ubuntu 14.04 (latest LTS)

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

