Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Ryan Hallisey
Hello all,

Harm, really appreciate all your reviews.  You were always very thorough and 
insightful.  You'll always be welcome back to the core team in the future. 
Thanks for everything Harm! 

+1
-Ryan

- Original Message -
From: "Michal Rostecki" 
To: openstack-dev@lists.openstack.org
Sent: Friday, January 15, 2016 6:46:43 AM
Subject: Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from 
the core reviewer team

On 01/15/2016 01:12 AM, Steven Dake (stdake) wrote:
> Hi fellow core reviewers,
>
> Harm joined Kolla early on with great enthusiasm and did a bang-up job
> for several months working on Kolla.  We voted unanimously to add him to
> the core team.  Over the last 6 months Harm hasn't really made much
> contribution to Kolla.  I have spoken to him about it in the past, and
> he indicated his work and other activities keep him from being able to
> execute the full job of a core reviewer and nothing environmental is
> changing in the near term that would improve things.
>
> I faced a similar work/life balance problem when working on Magnum as a
> core reviewer and also serving as PTL for Kolla.  My answer there was to
> step down from the Magnum core reviewer team [1] because Kolla needed a
> PTL more than Magnum needed a core reviewer.  I would strongly prefer
> that folks who don't have the time available to serve as a Kolla core
> reviewer step down, as was done in the above example.  Folks that follow this
> path will always be welcome back as a core reviewer in the future once
> they become familiar with the code base, people, and the project.
>
> The other alternative to stepping down is for the core reviewer team to
> vote to remove an individual from the core review team if that is deemed
> necessary.  For future reference, if you as a core reviewer have
> concerns about a fellow core reviewer's performance, please contact the
> current PTL who will discuss the issue with you.
>
> I propose removing Harm from the core review team.  Please vote:
>
> +1 = remove Harm from the core review team
> -1 = don't remove Harm from the core review team
>
> Note folks that are voted off the core review team are always welcome to
> rejoin the core team in the future once they become familiar with the
> code base, people, and the project.  Harm is a great guy, and I hope in
> the future he has more time available to rejoin the Kolla core review
> team assuming this vote passes with simple majority.
>
> It is important to explain, for folks who may be new to a core
> reviewer role (as many of our core reviewers are), why a core
> reviewer should have their +2/-2 voting rights removed when they become
> inactive or their activity drops below an acceptable threshold for
> extended or permanent periods.  This hasn't happened in Harm's case, but
> it is possible that a core reviewer could approve a patch that is
> incorrect because they lack sufficient context with the code base.  Our
> core reviewers are the most important part of ensuring quality software.
>   If the individual has lost context with the code base, their voting
> may be suspect, and more importantly the other core reviewers may not
> trust the individual's votes.  Trust is the cornerstone of a software
> review process, so we need to maximize trust on a technical level
> between our core team members.  That is why maintaining context with the
> code base is critical and why I am proposing a vote to remove Harm from
> the core reviewer team.
>
> On a final note, folks should always know, joining the core review team
> is never "permanent".  Folks are free to move on if their interests take
> them into other areas or their availability becomes limited.  Core
> Reviewers can also be removed by majority vote.  If there is any core
> reviewer whose performance you are concerned about, please contact the
> current PTL to first work on improving performance, or alternatively
> to initiate a core reviewer removal voting process.
>
> On a more personal note, I want to personally thank Harm for his many
> and significant contributions to Kolla and especially going above and
> beyond by taking on the responsibility of a core reviewer.  Harm's
> reviews were always very thorough and very high quality, and I really do
> hope in the future Harm will rejoin the Kolla core team.
>
> Regards,
> -steve
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html
>

Hi all,

First of all, thank you Harm for your work on Kolla in the Liberty 
release, and especially for your good reviews and advice when I was 
beginning to contribute to this project.

For now, I'm +1 for removing Harm from the core team. But I also hope that 
in the future it will be possible for him to rejoin our team and be as 
active as before.

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-15 Thread Dave Walker
On 15 January 2016 at 10:01, Thierry Carrez  wrote:
> Ihar Hrachyshka wrote:
>>
>> +1. CVE fixes obviously should be granted an exception.
>
>
> +1
>

Agreed, I have already +2'd on Gerrit.  Can another core please do the same?

Thanks

--
Kind Regards,
Dave Walker



Re: [openstack-dev] [nova][neutron][upgrade][devstack] Grenade multinode partial upgrade

2016-01-15 Thread Ihar Hrachyshka

Sean M. Collins  wrote:

Nice find. I actually pushed a patch recently to advertise the MTU by  
default. I think this really shows that it should be enabled by default.


https://review.openstack.org/263486
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


UPD on multinode neutron grenade: it indeed seems like an MTU issue (multiple  
rechecks show we get past the ssh exchange check). We are currently  
working on a bunch of fixes that should get us in better shape for the job:


https://review.openstack.org/#/q/topic:multinode-neutron-mtu

It also seems like the same fixes may help, if not outright fix, the other  
multinode jobs that are currently non-voting and failing in the neutron  
gate. [Need more rechecks to be sure.]
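For anyone following along, the knob involved is roughly the following (a sketch only; `advertise_mtu` is the Liberty/Mitaka-era neutron option name and is an assumption here - check your release):

```ini
# neutron.conf - advertise the network MTU to instances (via DHCP/RA) so
# guests pick an MTU that fits the tenant network encapsulation overhead.
[DEFAULT]
advertise_mtu = True
```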


Changes already span neutron, devstack, devstack-gate repos. Hence adding  
[devstack] to the thread tags.


Reviews are obviously welcome.

Ihar



Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Chris Dent

On Fri, 15 Jan 2016, Thomas Goirand wrote:


Whatever we choose, I think we should ban having copyright holding text
within our source code. While licensing is a good idea, as it is
accurate, the copyright holding information isn't, and it's just misleading.


I think we should not add new copyright notifications in files.

I'd also be happy to see all the existing ones removed, but that may
be a bigger problem.


If I was the only person to choose, I'd say let's go for 1/, but
probably the managers of every company won't agree.


I think option one is correct.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Jan Klare
Hi Thomas,

Good thoughts on a very important topic. I am currently refactoring a lot of 
code inside the openstack-chef cookbooks and constantly have to ask myself 
whether I now have to add a copyright notice to a file, or whether I am even 
allowed to delete the original copyright in a file after changing 100% of the 
code in it. I do not want to remove any copyright here, but I think this 
scattering of copyrights is very annoying and leads to the issues you 
mentioned below. In my opinion, we should move all the copyrights from all 
files to one central file per repo (although I agree that 1) would be the best 
option, it might be impossible, since you would have to ask all the companies, 
or find the people responsible for or allowed to remove these copyrights). The 
big question for me is whether we are allowed to do so and finally remove the 
copyrights from the files they are currently in.

Cheers,
Jan

> On 15 Jan 2016, at 13:48, Thomas Goirand  wrote:
> 
> This isn't the first time I'm calling for it. Let's hope this time, I'll
> be heard.
> 
> Randomly, contributors put their company names into source code. When
> they do, this effectively declares that the copyright holder of a given
> source file is whatever is claimed, even though someone from another
> company may have patched it.
> 
> As a result, we have a huge mess. It's impossible for me, as a package
> maintainer, to accurately set the copyright holder names in the
> debian/copyright file, which is required by the Debian FTP masters.
> 
> I see 2 ways forward:
> 1/ Require everyone to give-up copyright holding, and give it to the
> OpenStack Foundation.
> 2/ Maintain a copyright-holder file in each project.
> 
> The latter is needed if we want to do things correctly. Leaving the
> possibility for everyone to just write (c) MyCompany LLC randomly in the
> source code doesn't cut it. Expecting that a package maintainer should
> second-guess copyright holding just by reading the email addresses of
> "git log" output doesn't work either.
> 
> Please remember that a copyright holder has nothing to do with the
> license, nor with the author of some code. So please do *not* take
> over this thread to discuss authorship or licensing.
> 
> Whatever we choose, I think we should ban having copyright holding text
> within our source code. While licensing is a good idea, as it is
> accurate, the copyright holding information isn't, and it's just misleading.
> 
> If I was the only person to choose, I'd say let's go for 1/, but
> probably the managers of every company won't agree.
> 
> Some thoughts anyone?
> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 




Re: [openstack-dev] [nova][stable] Freeze exception for kilo CVE-2015-7548 backports

2016-01-15 Thread Thierry Carrez

Matthew Booth wrote:

The following 3 patches fix CVE-2015-7548, "Unprivileged API user can
access host data using instance snapshot":

https://review.openstack.org/#/c/264819/
https://review.openstack.org/#/c/264820/
https://review.openstack.org/#/c/264821/

The OSSA is rated critical. The patches have now landed on master and
liberty after some delays in the gate. Given the importance of the fix I
suspect that most/all downstream distributions will have already patched
(certainly Red Hat has), but it would be good to have them in upstream
stable.


Matt already posted a thread about giving an exception to this series:

http://lists.openstack.org/pipermail/openstack-dev/2016-January/084161.html

Cheers,

--
Thierry Carrez (ttx)



Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Simon Pasquier
My 2 cents on RabbitMQ logging...

On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki 
wrote:

> I'd suggest to check for similar options in RabbitMQ and other
> non-OpenStack components.
>

AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make
RabbitMQ log to stdout [1].
BR,
Simon.
[1] http://www.superpumpup.com/docker-rabbitmq-stdout
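For reference, the approach in [1] boils down to pointing RabbitMQ's log files at standard output via environment variables (a sketch; the "-" semantics are from the RabbitMQ docs of that era and are an assumption here, not something verified against Kolla's images):

```
# Environment for the rabbitmq container: "-" means log to standard output,
# letting Docker (and thus any log collector) pick the logs up.
RABBITMQ_LOGS=-
RABBITMQ_SASL_LOGS=-
```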


Re: [openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Ryan Hallisey
Hello,

I think creating a separate kolla-mesos meeting is a good idea. My only 
concern is that it might fragment our community a little, but I think it's 
necessary for kolla-mesos to grow.  My other thought is: what is to become of 
the other kolla-* repos? That being not only kolla-mesos, but in the future 
kolla-ansible.  Maybe kolla meetings can be an incubator for those repos until 
they need to move out on their own?  Just a thought here.

+1
-Ryan

- Original Message -
From: "Michal Rostecki" 
To: openstack-dev@lists.openstack.org
Sent: Friday, January 15, 2016 6:38:39 AM
Subject: [openstack-dev] [kolla] kolla-mesos IRC meeting

Hi,

Currently we're discussing the kolla-mesos project in the kolla IRC 
meetings[1]. We have an idea of creating a separate meeting for 
kolla-mesos. I see the following reasons for that:

- kolla-mesos has some contributors who aren't able to attend the kolla 
meeting because of timezone reasons
- kolla meetings have had a lot of topics recently, leaving only a short 
time for discussing kolla-mesos things
- in most kolla meetings, we treated the whole of kolla-mesos as one 
topic, which is bad in terms of analyzing single problems inside this 
project

The things I would like to know from you are:
- whether you're +1 or -1 to the whole idea of having a separate meeting
- what is your preferred time for the meeting - please use this etherpad[2] 
(I already added the names of some of the most active contributors from 
whom I'd like to hear an opinion, so if you're interested - please "override 
color"; if not, remove the corresponding line)

About the time of the meeting and possible conflicts - I think that in case 
of conflicting times and an equal number of votes, the opinions of core 
reviewers and of people who are already contributing to the project 
(reviews and commits) will carry more weight. You can see the 
contributors here[3][4].

[1] https://wiki.openstack.org/wiki/Meetings/Kolla
[2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
[3] http://stackalytics.com/?module=kolla-mesos
[4] http://stackalytics.com/?module=kolla-mesos&metric=commits

Cheers,
Michal



Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Michal Rostecki

On 01/15/2016 11:14 AM, Simon Pasquier wrote:

My 2 cents on RabbitMQ logging...

On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki > wrote:

I'd suggest to check the similar options in RabbitMQ and other
non-OpenStack components.


AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make
RabbitMQ log to stdout [1].
BR,
Simon.
[1] http://www.superpumpup.com/docker-rabbitmq-stdout



That's OK for the Heka/Mesos/k8s approach.

Just out of curiosity,
@inc0: so we don't receive any logs from RabbitMQ in the current rsyslog 
approach?


Cheers,
Michal



Re: [openstack-dev] [Horizon] Email as User Name on the Horizon login page

2016-01-15 Thread Adrian Turjak

Thanks, as a hack that's not a bad fix.

We maintain our own Horizon repo anyway and rebase often, so it's a 
change we can carry if needed.


I'm just confused as to why these changes were made to work as they are. 
I mean, sure, having the HTML populate based on the form definition is nice 
in theory, but having that in a place you can't access or customize, 
because it's in a secondary library, feels like over-engineering.


What is the point of themes if we can't change something as basic as a 
form field label?


On 15/01/2016 9:54 p.m., Itxaka Serrano Garcia wrote:


Looks like the form comes from django_openstack_auth:
https://github.com/openstack/django_openstack_auth/blob/master/openstack_auth/forms.py#L53 




But to be honest, no idea how that can be overridden through the 
themes; not sure if it's even possible to override anything on that 
page without modifying django_openstack_auth directly :(


Maybe someone else has a better insight on this than me.


* Horrible Hack Incoming, read at your own discretion *

You can override the template here:
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_form_field.html#L51 



And change this line:

<label for="{{ field.auto_id }}">{{ field.label }}</label>

For this:

<label for="{{ field.auto_id }}">{% if field.label == "User Name" and not 
request.user.is_authenticated %}Email{% else %}{{ field.label }}{% 
endif %}</label>



Which will check if the label is "User Name" and the user is logged 
out and directly write "Email" as the field label.


I know it's horrible, and if you update horizon it will be overridden, 
but it probably works for the time being if you really need it ¯\_(ツ)_/¯


* Horrible Hack Finished *




Itxaka




On 01/15/2016 05:13 AM, Adrian Turjak wrote:

I've run into a weird issue with the Liberty release of Horizon.

For our deployment we enforce emails as usernames, and thus for Horizon
we used to have "User Name" on the login page replaced with "Email".
This used to be a straightforward change in the html template file, and
with the introduction of themes we assumed it would be the same. When
one of our designers was migrating our custom CSS and html changes to
the new theme system they missed that change, and at first I thought it
was a silly mistake.

Only on digging through the code myself did I find that the "User Name" on
the login screen isn't in the html file at all, nor anywhere else
straightforward. The login page form is built on the fly with javascript
to facilitate different modes of authentication. While a bit annoying,
that didn't seem too bad, and I then assumed it might mean a javascript
change - but the more I dug, the more confused I became.

Where exactly is the login form defined? And where exactly is the "User
Name" text for the login form set?

I've tried all manner of stuff to change it with no luck and I feel like
I must have missed something obvious.

Cheers,
-Adrian Turjak






[openstack-dev] [all] [api] API Working Group Refresher

2016-01-15 Thread Chris Dent


At yesterday's API Working Group meeting we decided it would be a
good idea to send out a refresher on the existence of the group,
its goals and activities. If you have interest in the improvement
and standardization of OpenStack APIs please take this as an
invitation to participate.

The group meets once a week in openstack-meeting-3 on Thursdays
alternating between 00:00 UTC and 16:00 UTC[0].

The primary goal of the group is to help OpenStack HTTP APIs adhere
to standards and be consistent in the context of OpenStack[1]. To that
end the main activities of members are:

* Creating, encouraging projects to create, and reviewing API
  guidelines:

  * https://review.openstack.org/#/q/project:openstack/api-wg
  * http://specs.openstack.org/openstack/api-wg/

  To be clear these are guidelines, not rules. The members of the
  API working group are not cops. There's an open question of
  whether the guidelines should codify existing behaviors or be more
  aspirational. If you have an opinion on such things, you might
  like to come along and share it.

* Providing guidance and sounding boards for changes which impact
  HTTP APIs:

  * 
https://review.openstack.org/#/q/status:open+AND+%28message:ApiImpact+OR+message:APIImpact%29,n,z

* Exploring existing APIs to get a sense of existing practices and
  tease out which are "best":

  * https://wiki.openstack.org/wiki/API_Working_Group/Current_Design

Ideally every OpenStack project that has an HTTP API should have a
designated cross project liaison who is willing to participate in the
API working group and operate as a bi-directional conduit. Talk to your
PTL if you want this responsibility and then show up at the meetings.

[0] https://wiki.openstack.org/wiki/Meetings/API-WG
[1] https://wiki.openstack.org/wiki/API_Working_Group#Purpose
--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Thomas Goirand
This isn't the first time I'm calling for it. Let's hope this time, I'll
be heard.

Randomly, contributors put their company names into source code. When
they do, this effectively declares that the copyright holder of a given
source file is whatever is claimed, even though someone from another
company may have patched it.

As a result, we have a huge mess. It's impossible for me, as a package
maintainer, to accurately set the copyright holder names in the
debian/copyright file, which is required by the Debian FTP masters.

I see 2 ways forward:
1/ Require everyone to give-up copyright holding, and give it to the
OpenStack Foundation.
2/ Maintain a copyright-holder file in each project.

The latter is needed if we want to do things correctly. Leaving the
possibility for everyone to just write (c) MyCompany LLC randomly in the
source code doesn't cut it. Expecting that a package maintainer should
second-guess copyright holding just by reading the email addresses of
"git log" output doesn't work either.
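To make that last point concrete, here is the kind of heuristic a packager is left with today - a self-contained sketch (the throwaway repo and the example.com address are invented purely for illustration):

```python
# Illustrative only: the kind of heuristic the post argues is insufficient --
# inferring copyright holders from committer email domains in "git log".
import collections
import subprocess
import tempfile

# Build a throwaway repo so the example runs anywhere (the example.com
# committer is invented for illustration).
repo = tempfile.mkdtemp()
subprocess.check_call(["git", "-C", repo, "init", "-q"])
subprocess.check_call(
    ["git", "-C", repo,
     "-c", "user.name=Dev", "-c", "user.email=dev@example.com",
     "commit", "-q", "--allow-empty", "-m", "demo"])

# Count commits per committer email domain -- a rough (and, as argued
# above, unreliable) proxy for which company might claim copyright.
emails = subprocess.check_output(
    ["git", "-C", repo, "log", "--format=%ae"]).decode().split()
domains = collections.Counter(e.rpartition("@")[2] for e in emails)
print(domains.most_common())
```

Email domains say who employed a committer at commit time, not who holds copyright on the resulting lines - which is exactly why a central copyright-holder file would help.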

Please remember that a copyright holder has nothing to do with the
license, nor with the author of some code. So please do *not* take
over this thread to discuss authorship or licensing.

Whatever we choose, I think we should ban having copyright holding text
within our source code. While licensing is a good idea, as it is
accurate, the copyright holding information isn't, and it's just misleading.

If I was the only person to choose, I'd say let's go for 1/, but
probably the managers of every company won't agree.

Some thoughts anyone?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-15 Thread Alan Pevec
2016-01-15 11:08 GMT+01:00 Dave Walker :
> On 15 January 2016 at 10:01, Thierry Carrez  wrote:
>> Ihar Hrachyshka wrote:
>>>
>>> +1. CVE fixes obviously should be granted an exception.
>>
>>
>> +1
>>
>
> Agreed, I have already +2'd on Gerrit.  Can another core please do the same?

Done and workflow started.

Cheers,
Alan



[openstack-dev] [all] [api] Reminder about the State of WSME

2016-01-15 Thread Chris Dent


I was checking the review queue for WSME earlier this week and
discovered a few new patches and some patches that are mostly
idling. This has inspired me to send out a reminder about the state
of WSME, at least as it relates to OpenStack.

Last summer, when it seemed like WSME was not being actively
maintained, Lucas Gomes and I were press-ganged into becoming core on
it. Since then we've made a couple of releases to handle the pending
bug fixes but our recommendation all along has been: If you're
considering using WSME for something new: don't.

Unless active contributors step up this will continue to be the
case.

Even if active contributors do step up I'd still not recommend WSME.
Its architecture and implementation for handling the fundamentals
of HTTP are not great; the patches that have been done to try to
address this are overly complex and increase the difficulty in
maintaining the code.

We've had some discussion in the past to try to reach a consensus
on an alternative but haven't reached any conclusions (though Flask gets
a lot of votes of confidence simply because it has such an active
community). At this point it seems best to leave the choice where it
probably should be anyway: in the hands of the main developers of
any project. Please just make sure that any app that is created can
be hosted by any reasonable WSGI host.
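That last constraint can be illustrated with a few lines of stdlib-only Python - a sketch of a framework-free app that any reasonable WSGI host can serve (the app and its response body are invented for the example):

```python
# A trivial WSGI application: one JSON-ish response, no framework at all.
# Anything shaped like this can be served by wsgiref, gunicorn, uwsgi, etc.
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    body = b'{"status": "ok"}'
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise it without a real server, using a synthetic WSGI environ.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

result = b"".join(app(environ, start_response))
print(captured["status"], result)
```

The point is only the shape of the callable: as long as a project's app is a plain WSGI callable at the edge, the choice of framework behind it stays an internal detail.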

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Michal Rostecki

Hi,

Currently we're discussing the kolla-mesos project in the kolla IRC 
meetings[1]. We have an idea of creating a separate meeting for 
kolla-mesos. I see the following reasons for that:

- kolla-mesos has some contributors who aren't able to attend the kolla 
meeting because of timezone reasons
- kolla meetings have had a lot of topics recently, leaving only a short 
time for discussing kolla-mesos things
- in most kolla meetings, we treated the whole of kolla-mesos as one 
topic, which is bad in terms of analyzing single problems inside this 
project


The things I would like to know from you are:
- whether you're +1 or -1 to the whole idea of having a separate meeting
- what is your preferred time for the meeting - please use this etherpad[2] 
(I already added the names of some of the most active contributors from 
whom I'd like to hear an opinion, so if you're interested - please "override 
color"; if not, remove the corresponding line)


About the time of the meeting and possible conflicts - I think that in case 
of conflicting times and an equal number of votes, the opinions of core 
reviewers and of people who are already contributing to the project 
(reviews and commits) will carry more weight. You can see the 
contributors here[3][4].


[1] https://wiki.openstack.org/wiki/Meetings/Kolla
[2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
[3] http://stackalytics.com/?module=kolla-mesos
[4] http://stackalytics.com/?module=kolla-mesos&metric=commits

Cheers,
Michal



Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-15 Thread John Garbutt
On 14 January 2016 at 10:45, Michael Still  wrote:
> I think Tony would be a valuable addition to the team.

+1

> +1

+1

John

> On 14 Jan 2016 7:59 AM, "Matt Riedemann"  wrote:
>>
>> I'm formally proposing that the nova-stable-maint team [1] adds Tony
>> Breeds to the core team.
>>
>> I don't have a way to track review status on stable branches, but there
>> are review numbers from gerrit for stable/liberty [2] and stable/kilo [3].
>>
>> I know that Tony does a lot of stable branch reviews and knows the
>> backport policy well, and he's also helped out numerous times over the last
>> year or so with fixing stable branch QA / CI issues (think gate wedge
>> failures in stable/juno over the last 6 months). So I think Tony would be a
>> great addition to the team.
>>
>> So for those on the team already, please reply with a +1 or -1 vote.
>>
>> [1] https://review.openstack.org/#/admin/groups/540,members
>> [2]
>> https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/liberty+project:openstack/nova
>> [3]
>> https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/kilo+project:openstack/nova
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>



Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Eric LEMOINE
On Fri, Jan 15, 2016 at 11:57 AM, Michal Rostecki
 wrote:
> On 01/15/2016 11:14 AM, Simon Pasquier wrote:
>>
>> My 2 cents on RabbitMQ logging...
>>
>> On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki > > wrote:
>>
>> I'd suggest to check the similar options in RabbitMQ and other
>> non-OpenStack components.
>>
>>
>> AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make
>> RabbitMQ log to stdout [1].
>> BR,
>> Simon.
>> [1] http://www.superpumpup.com/docker-rabbitmq-stdout
>>
>
> That's OK for the Heka/Mesos/k8s approach.
>
> Just out of curiosity,
> @inc0: so we don't receive any logs from RabbitMQ in the current rsyslog
> approach?


/var/lib/docker/volumes/rsyslog/_data is where logs are stored, and
you'll see that there is no file for RabbitMQ.  This is related to
RabbitMQ not logging to syslog.  So our impression is that Kolla
doesn't collect RabbitMQ logs at all today.  I guess this should be
fixed.



Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Michal Rostecki

On 01/15/2016 01:12 AM, Steven Dake (stdake) wrote:

Hi fellow core reviewers,

Harm joined Kolla early on with great enthusiasm and did a bang-up job
for several months working on Kolla.  We voted unanimously to add him to
the core team.  Over the last 6 months Harm hasn't really made much
contribution to Kolla.  I have spoken to him about it in the past, and
he indicated his work and other activities keep him from being able to
execute the full job of a core reviewer and nothing environmental is
changing in the near term that would improve things.

I faced a similar work/life balance problem when working on Magnum as a
core reviewer and also serving as PTL for Kolla.  My answer there was to
step down from the Magnum core reviewer team [1] because Kolla needed a
PTL more than Magnum needed a core reviewer.  I would strongly prefer 
that folks who don't have the time available to serve as a Kolla core 
reviewer step down, as was done in the above example.  Folks that follow this
path will always be welcome back as a core reviewer in the future once
they become familiar with the code base, people, and the project.

The other alternative to stepping down is for the core reviewer team to
vote to remove an individual from the core review team if that is deemed
necessary.  For future reference, if you as a core reviewer have
concerns about a fellow core reviewer's performance, please contact the
current PTL who will discuss the issue with you.

I propose removing Harm from the core review team.  Please vote:

+1 = remove Harm from the core review team
-1 = don't remove Harm from the core review team

Note folks that are voted off the core review team are always welcome to
rejoin the core team in the future once they become familiar with the
code base, people, and the project.  Harm is a great guy, and I hope in
the future he has more time available to rejoin the Kolla core review
team assuming this vote passes with simple majority.

It is important to explain, for folks that may be new to a core
reviewer role (as many of our core reviewers are), why a core
reviewer should have their +2/-2 voting rights removed when they become
inactive or their activity drops below an acceptable threshold for
extended or permanent periods.  This hasn't happened in Harm's case, but
it is possible that a core reviewer could approve a patch that is
incorrect because they lack sufficient context with the code base.  Our
core reviewers are the most important part of ensuring quality software.
  If the individual has lost context with the code base, their voting
may be suspect, and more importantly the other core reviewers may not
trust the individual's votes.  Trust is the cornerstone of a software
review process, so we need to maximize trust on a technical level
between our core team members.  That is why maintaining context with the
code base is critical and why I am proposing a vote to remove Harm from
the core reviewer team.

On a final note, folks should always know, joining the core review team
is never "permanent".  Folks are free to move on if their interests take
them into other areas or their availability becomes limited.  Core
Reviewers can also be removed by majority vote.  If there is any core
reviewer's performance you are concerned with, please contact the
current PTL to first work on improving performance, or alternatively
initiating a core reviewer removal voting process.

On a more personal note, I want to personally thank Harm for his many
and significant contributions to Kolla and especially going above and
beyond by taking on the responsibility of a core reviewer.  Harm's
reviews were always very thorough and very high quality, and I really do
hope in the future Harm will rejoin the Kolla core team.

Regards,
-steve

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html



Hi all,

First of all, thank you Harm for your work on Kolla in the Liberty
release, and especially for your good reviews and advice when I was
beginning my contribution to this project.


For now, I'm +1 for removing Harm from the core team. But I also hope that
in the future it will be possible for him to rejoin our team and be as
active as before.


Cheers,
Michal



Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Michał Jastrzębski
That's an unfortunate +1; it's always sad to lose one of our own. Hope
we'll see you back soon! Thanks for all the fis...reviews!

On 15 January 2016 at 06:59, Ryan Hallisey  wrote:
> Hello all,
>
> Harm, really appreciate all your reviews.  You were always very thorough and 
> insightful.  You'll always be welcome back to the core team in the future. 
> Thanks for everything Harm!
>
> +1
> -Ryan
>
> - Original Message -
> From: "Michal Rostecki" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, January 15, 2016 6:46:43 AM
> Subject: Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites 
> from the core reviewer team
>
> On 01/15/2016 01:12 AM, Steven Dake (stdake) wrote:
>> Hi fellow core reviewers,
>>
>> Harm joined Kolla early on with great enthusiasm and did a bang-up job
>> for several months working on Kolla.  We voted unanimously to add him to
>> the core team.  Over the last 6 months Harm hasn't really made much
>> contribution to Kolla.  I have spoken to him about it in the past, and
>> he indicated his work and other activities keep him from being able to
>> execute the full job of a core reviewer and nothing environmental is
>> changing in the near term that would improve things.
>>
>> I faced a similar work/life balance problem when working on Magnum as a
>> core reviewer and also serving as PTL for Kolla.  My answer there was to
>> step down from the Magnum core reviewer team [1] because Kolla needed a
>> PTL more than Magnum needed a core reviewer.  I would strongly prefer if
>> folks don't have the time available to serve as a Kolla core reviewer,
>> to step down as was done in the above example.  Folks that follow this
>> path will always be welcome back as a core reviewer in the future once
>> they become familiar with the code base, people, and the project.
>>
>> The other alternative to stepping down is for the core reviewer team to
>> vote to remove an individual from the core review team if that is deemed
>> necessary.  For future reference, if you as a core reviewer have
>> concerns about a fellow core reviewer's performance, please contact the
>> current PTL who will discuss the issue with you.
>>
>> I propose removing Harm from the core review team.  Please vote:
>>
>> +1 = remove Harm from the core review team
>> -1 = don't remove Harm from the core review team
>>
>> Note folks that are voted off the core review team are always welcome to
>> rejoin the core team in the future once they become familiar with the
>> code base, people, and the project.  Harm is a great guy, and I hope in
>> the future he has more time available to rejoin the Kolla core review
>> team assuming this vote passes with simple majority.
>>
>> It is important to explain why, for some folks that may be new to a core
>> reviewer role (which many of our core reviewers are), why a core
>> reviewer should have their +2/-2 voting rights removed when they become
>> inactive or their activity drops below an acceptable threshold for
>> extended or permanent periods.  This hasn't happened in Harm's case, but
>> it is possible that a core reviewer could approve a patch that is
>> incorrect because they lack sufficient context with the code base.  Our
>> core reviewers are the most important part of ensuring quality software.
>>   If the individual has lost context with the code base, their voting
>> may be suspect, and more importantly the other core reviewers may not
>> trust the individual's votes.  Trust is the cornerstone of a software
>> review process, so we need to maximize trust on a technical level
>> between our core team members.  That is why maintaining context with the
>> code base is critical and why I am proposing a vote to remove Harm from
>> the core reviewer team.
>>
>> On a final note, folks should always know, joining the core review team
>> is never "permanent".  Folks are free to move on if their interests take
>> them into other areas or their availability becomes limited.  Core
>> Reviewers can also be removed by majority vote.  If there is any core
>> reviewer's performance you are concerned with, please contact the
>> current PTL to first work on improving performance, or alternatively
>> initiating a core reviewer removal voting process.
>>
>> On a more personal note, I want to personally thank Harm for his many
>> and significant contributions to Kolla and especially going above and
>> beyond by taking on the responsibility of a core reviewer.  Harm's
>> reviews were always very thorough and very high quality, and I really do
>> hope in the future Harm will rejoin the Kolla core team.
>>
>> Regards,
>> -steve
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html
>>
>
> Hi all,
>
> First of all, thank you Harm for your work on Kolla in the Liberty
> release. And especially for your good reviews and advices when I was
> beginning my contribution to this project.
>
> For now, I'm +1 for 

Re: [openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-15 Thread Matt Riedemann



On 1/15/2016 4:13 AM, Alan Pevec wrote:

2016-01-15 11:08 GMT+01:00 Dave Walker :

On 15 January 2016 at 10:01, Thierry Carrez  wrote:

Ihar Hrachyshka wrote:


+1. CVE fixes obviously should be granted an exception.



+1



Agreed, I have already +2'd on Gerrit.  Can another core please do the same?


Done and workflow started.

Cheers,
Alan




Thanks everyone!

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-15 Thread Dan Prince
On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> 
> - Original Message -
> > On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > Hey all,
> > > 
> > > I realize now from the title of the other TripleO/Mistral thread
> > > [1] that
> > > the discussion there may have gotten confused.  I think using
> > > Mistral for
> > > TripleO processes that are obviously workflows - stack
> > > deployment, node
> > > registration - makes perfect sense.  That thread is exploring
> > > practicalities
> > > for doing that, and I think that's great work.
> > > 
> > > What I inappropriately started to address in that thread was a
> > > somewhat
> > > orthogonal point that Dan asked in his original email, namely:
> > > 
> > > "what it might look like if we were to use Mistral as a
> > > replacement for the
> > > TripleO API entirely"
> > > 
> > > I'd like to create this thread to talk about that; more of a
> > > 'should we'
> > > than 'can we'.  And to do that, I want to indulge in a thought
> > > exercise
> > > stemming from an IRC discussion with Dan and others.  All, please
> > > correct
> > > me
> > > if I've misstated anything.
> > > 
> > > The IRC discussion revolved around one use case: deploying a Heat
> > > stack
> > > directly from a Swift container.  With an updated patch, the Heat
> > > CLI can
> > > support this functionality natively.  Then we don't need a
> > > TripleO API; we
> > > can use Mistral to access that functionality, and we're done,
> > > with no need
> > > for additional code within TripleO.  And, as I understand it,
> > > that's the
> > > true motivation for using Mistral instead of a TripleO API:
> > > avoiding custom
> > > code within TripleO.
> > > 
> > > That's definitely a worthy goal... except from my perspective,
> > > the story
> > > doesn't quite end there.  A GUI needs additional functionality,
> > > which boils
> > > down to: understanding the Heat deployment templates in order to
> > > provide
> > > options for a user; and persisting those options within a Heat
> > > environment
> > > file.
> > > 
> > > Right away I think we hit a problem.  Where does the code for
> > > 'understanding
> > > options' go?  Much of that understanding comes from the
> > > capabilities map
> > > in tripleo-heat-templates [2]; it would make sense to me that
> > > responsibility
> > > for that would fall to a TripleO library.
> > > 
> > > Still, perhaps we can limit the amount of TripleO code.  So to
> > > give API
> > > access to 'getDeploymentOptions', we can create a Mistral
> > > workflow.
> > > 
> > >   Retrieve Heat templates from Swift -> Parse capabilities map
> > > 
> > > Which is fine-ish, except from an architectural perspective
> > > 'getDeploymentOptions' violates the abstraction layer between
> > > storage and
> > > business logic, a problem that is compounded because
> > > 'getDeploymentOptions'
> > > is not the only functionality that accesses the Heat templates
> > > and needs
> > > exposure through an API.  And, as has been discussed on a
> > > separate TripleO
> > > thread, we're not even sure Swift is sufficient for our needs;
> > > one possible
> > > consideration right now is allowing deployment from templates
> > > stored in
> > > multiple places, such as the file system or git.
> > 
> > Actually, that whole capabilities map thing is a workaround for a
> > missing
> > feature in Heat, which I have proposed, but am having a hard time
> > reaching
> > consensus on within the Heat community:
> > 
> > https://review.openstack.org/#/c/196656/
> > 
> > Given that is a large part of what's anticipated to be provided by
> > the
> > proposed TripleO API, I'd welcome feedback and collaboration so we
> > can move
> > that forward, vs solving only for TripleO.
> > 
> > > Are we going to have duplicate 'getDeploymentOptions' workflows
> > > for each
> > > storage mechanism?  If we consolidate the storage code within a
> > > TripleO
> > > library, do we really need a *workflow* to call a single
> > > function?  Is a
> > > thin TripleO API that contains no additional business logic
> > > really so bad
> > > at that point?
> > 
> > Actually, this is an argument for making the validation part of the
> > deployment a workflow - then the interface with the storage
> > mechanism
> > becomes more easily pluggable vs baked into an opaque-to-operators
> > API.
> > 
> > E.g, in the long term, imagine the capabilities feature exists in
> > Heat, you
> > then have a pre-deployment workflow that looks something like:
> > 
> > 1. Retrieve golden templates from a template store
> > 2. Pass templates to Heat, get capabilities map which defines
> > features user
> > must/may select.
> > 3. Prompt user for input to select required capabilites
> > 4. Pass user input to Heat, validate the configuration, get a
> > mapping of
> > required options for the selected capabilities (nested validation)
> > 5. Push the validated pieces ("plan" in TripleO API terminology) to
> > a
> > template 

Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Michał Jastrzębski
Yeah, that's true. We did all of the OpenStack services, but we didn't
implement the infra around it yet. I'd guess most services can log either
to stdout or to a file, and both sources should be accessible by Heka.
Also, I'd be surprised if Heka didn't have a syslog driver; that
should be one of the first:) Maybe it's worth writing one? I wanted an excuse
to write some Golang;)

Regards,
Michal

On 15 January 2016 at 06:42, Eric LEMOINE  wrote:
> On Fri, Jan 15, 2016 at 11:57 AM, Michal Rostecki
>  wrote:
>> On 01/15/2016 11:14 AM, Simon Pasquier wrote:
>>>
>>> My 2 cents on RabbitMQ logging...
>>>
>>> On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki >> > wrote:
>>>
>>> I'd suggest to check the similar options in RabbitMQ and other
>>> non-OpenStack components.
>>>
>>>
>>> AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make
>>> RabbitMQ log to stdout [1].
>>> BR,
>>> Simon.
>>> [1] http://www.superpumpup.com/docker-rabbitmq-stdout
>>>
>>
>> That's OK for Heka/Mesos/k8s approach.
>>
>> Just out of curiosity,
>> @inc0: so we don't receive any logs from RabbitMQ in the current rsyslog
>> approach?
>
>
> /var/lib/docker/volumes/rsyslog/_data is where logs are stored, and
> you'll see that there is no file for RabbitMQ.  This is related to
> RabbitMQ not logging to syslog.  So our impression is that Kolla
> doesn't at all collect RabbitMQ logs today.  I guess this should be
> fixed.
>



Re: [openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-15 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2016-01-14 16:20:54 -0500:
> Focus
> -
> 
> Next week is the second milestone for the Mitaka cycle. Major feature
> work should be making good progress or be re-evaluated to see whether
> it will really land this cycle.
> 
> Release Actions
> ---
> 
> Liaisons should submit tag requests to the openstack/releases
> repository for all projects following the cycle-with-milestone
> release model before the end of the day on Jan 21.
> 
> We're working on updating the documented responsibilities for release
> liaisons. Please have a look at https://review.openstack.org/#/c/262003/
> and leave comments if you have questions or concerns.
> 
> Important Dates
> ---
> 
> Mitaka 2: Jan 19-21
> 
> Deadline for Mitaka 2 tag: Jan 21
> 
> Mitaka release schedule:
> http://docs.openstack.org/releases/schedules/mitaka.html

One important reminder I left out: As Thierry described on this
list earlier [1], we will be freezing changes to the release model
tags for projects after the Mitaka 2 tags are in place. If you've
been considering submitting patches to change the release tags for
your project, please do that between now and next week.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-January/083726.html



Re: [openstack-dev] [Fuel] nova-network removal

2016-01-15 Thread Roman Prykhodchenko
I’d like to add that nova-network support was removed from python-fuelclient in 
8.0.

> On Jan 14, 2016, at 17:54, Vitaly Kramskikh wrote:
> 
> Folks,
> 
> We have a request on review which prohibits creating new envs with 
> nova-network: https://review.openstack.org/#/c/261229/ 
>  We're 3 weeks away from HCF, and I 
> think this is too late for such a change. What do you think? Should we 
> proceed and remove nova-network support in 8.0, which has been deprecated since 7.0?
> 
> --
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.





[openstack-dev] Flame creates incorrect HEAT template for provider network

2016-01-15 Thread Vikram Hosakote (vhosakot)
Hi OpenStack Flame developers,

I see that the Heat template generated by OpenStack Flame for a provider
network is incorrect and is missing fields.

For the provider network Provider_153, Flame creates the Heat template

  network_0:
properties:
  admin_state_up: true
  name: Provider_153
  shared: true
type: OS::Neutron::Net


instead of


  network_0:
properties:
  admin_state_up: true
  name: Provider_153
  shared: true
  provider:physical_network: physnet2  <- Missing
  provider:network_type: vlan  <- Missing
  provider:segmentation_id: 153<- Missing
type: OS::Neutron::Net


If I propose a fix for this bug, will I get an ATC code for the OpenStack summit in
Austin?  Also, I am not able to find the OpenStack Flame project in Launchpad
(https://launchpad.net/openstack). Where do I file this bug?

I cloned Flame from https://github.com/openstack/flame.

FYI, the neutron CLI used to create the provider network is

neutron net-create --provider:physical_network=physnet2 
--provider:network_type=vlan --provider:segmentation_id=153 --shared 
Provider_153
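One way a fix could look: when building the `OS::Neutron::Net` resource properties, copy any `provider:*` extension attributes that the Neutron API returns for the network. A hedged Python sketch; the function name and the shape of Flame's internals are hypothetical, only the attribute names come from the provider network extension:

```python
# Provider extension attributes as returned by the Neutron API for
# provider networks.
PROVIDER_ATTRS = (
    "provider:physical_network",
    "provider:network_type",
    "provider:segmentation_id",
)

def build_net_properties(network):
    # 'network' is a dict shaped like a Neutron API network response
    # (an assumption for this sketch).
    props = {
        "name": network["name"],
        "admin_state_up": network["admin_state_up"],
        "shared": network["shared"],
    }
    # Only emit provider fields that are actually set, so templates for
    # plain tenant networks are unchanged.
    for attr in PROVIDER_ATTRS:
        if network.get(attr) is not None:
            props[attr] = network[attr]
    return props
```

For Provider_153 above, this would carry physnet2/vlan/153 through into the generated template.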


Regards,
Vikram Hosakote
OpenStack Software Developer
Cisco Systems
vhosa...@cisco.com
Boxborough MA


Re: [openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Michał Jastrzębski
As long as you guys do your best to attend the normal Kolla meeting, that's
OK by me. It's a shame that we probably won't be able to attend because of
our tz, but that's OK, we can read the logs;) Just make sure to be part of
the Kolla community as well; don't let the separation Ryan mentioned
happen.

+1

On 15 January 2016 at 07:10, Ryan Hallisey  wrote:
> Hello,
>
> I think creating a separate kolla-mesos meeting is a good idea. My only issue
> is that I'm a little afraid it might separate our community somewhat, but I
> think it's necessary for kolla-mesos to grow.  My other thought is: what will
> become of the other kolla-* repos? That being not only kolla-mesos, but in
> the future kolla-ansible.  Maybe kolla meetings can be an incubator for those
> repos until they need to move out on their own?  Just a thought here.
>
> +1
> -Ryan
>
> - Original Message -
> From: "Michal Rostecki" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, January 15, 2016 6:38:39 AM
> Subject: [openstack-dev] [kolla] kolla-mesos IRC meeting
>
> Hi,
>
> Currently we're discussing stuff about kolla-mesos project on kolla IRC
> meetings[1]. We have an idea of creating the separate meeting for
> kolla-mesos. I see the following reasons for that:
>
> - kolla-mesos has some contributors which aren't able to attend kolla
> meeting because of timezone reasons
> - kolla meetings had a lot of topics recently and there was a short time
> for discussing kolla-mesos things
> - in the most of kolla meetings, we treated the whole kolla-mesos as one
> topic, which is bad in terms of analyzing single problems inside this
> project
>
> The things I would like to know from you is:
> - whether you're +1 or -1 to the whole idea of having separate meeting
> - what is your preferred time of meeting - please use this etherpad[2]
> (I already added there some names of most active contributors from who
> I'd like to hear an opinion, so if you're interested - please "override
> color"; if not, remove the corresponding line)
>
> About the time of meeting and possible conflicts - I think that in case
> of conflicting times and the equal number of votes, opinion of core
> reviewers and people who are already contributing to the project
> (reviews and commits) will be more important. You can see the
> contributors here[3][4].
>
> [1] https://wiki.openstack.org/wiki/Meetings/Kolla
> [2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
> [3] http://stackalytics.com/?module=kolla-mesos
> [4] http://stackalytics.com/?module=kolla-mesos&metric=commits
>
> Cheers,
> Michal
>
>



[openstack-dev] [openstack-ansible][security] Improving SSL/TLS in OpenStack-Ansible

2016-01-15 Thread Major Hayden
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hey folks,

I've attended some of the OpenStack Security Mid-Cycle meeting this week and 
Robert Clark was kind enough to give me a deep dive on the Anchor project[1].  
We had a good discussion around my original email thread[2] on improving 
SSL/TLS certificates within OpenStack-Ansible (OSA) and we went over my 
proposed spec[3] on the topic.

Jean-Philippe Evrard helped me assemble an etherpad[4] this morning where we 
brainstormed some problem statements, user stories, and potential solutions for 
improving the certificate experience in OSA.  It seems like an ephemeral PKI 
solution, like Anchor, might provide a better certificate experience for users 
while also making the revocation and issuance process easier.

I'd really like to get some feedback from the OpenStack community on our 
current brainstorming efforts.  We've enumerated a few use cases and user 
stories already, but we've probably missed some other important ones.  Feel 
free to stop by #openstack-ansible or join us in the etherpad.

Thanks!

[1] https://wiki.openstack.org/wiki/Security/Projects/Anchor
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077877.html
[3] https://review.openstack.org/#/c/243332/
[4] https://etherpad.openstack.org/p/openstack-ansible-tls-improvement

- --
Major Hayden
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWmP+9AAoJEHNwUeDBAR+xZpwP/Ana9JFTEGRvZSzKQHv/jQeY
KjUFTjXIBqijVysPpv4VIus8A8wiZNIUk2GMFy6IAA3XrBuAMXaRYmTvJZ6/gUq+
k57o3buH2pxlLiYJkK4DToPqzgYx2pjfUzO3IXPrmDS82JQrKp7xLvGgICe0lgtS
VCSjEDfXFRQuaKg5Uk99hzoZsuRVsiIpAAd97Q2h603FNzZk3bqleF1czrSQS/0i
vjLYQoCcUKYTy9dvqZ39dhh4ACtsaccKv0tF72v0rEn7y6eTJZ6ssAC1257Duzii
UffLA+t++BZ0SMeIhVGoI7kE+KoItEdzPMJ9V4i+/HZBbUQPmFik01vlfGsrAH9r
uygSnZyDJ2+jIx/eoLTM9QRjf4rqXjBbTlz9EpwQoo0nhJWV/EBrUNoFmRFTItr+
MkNwRty1HK4g28yqUI/iHiVu+GOU91M6EDlGqBO/lvMyy8886SPakZaNLfB4Mo2K
+LwvwIrRHBgQNC12FkG7nwOXnetRoaxYvw0hu5Zbm/yhQiIDe5LFu0REKNiJb6KG
kDSaCmKWNixHiOwCWYecRpkGqIJJfIasQ8DYaUm905WsxaDwisBG4lu3TEJSHKs/
SmoLmMFNaN9PhiaVlLSeuj+FwN4arTDBxAahASQoaMSDMCy/HURTaQSt7+FXn+wD
eEVF2pRXgeRQl31B5Dpe
=ukvd
-END PGP SIGNATURE-



Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-15 Thread Russell Bryant
On 01/14/2016 08:04 PM, Jeremy Stanley wrote:
> On 2016-01-14 22:14:09 + (+), Sean M. Collins wrote:
> [...]
>> The problem we have is - most operators are using Ubuntu according to
>> the user survey. Most likely they are using LTS releases. We already get
>> flak for our pace of releases and our release lifecycle duration, so
>> if we were to move off LTS for our gate, we would be pushing operators
>> to move to more frequent upgrades for their base operating system. Maybe
>> that's a discussion that needs to be had, but it will be contentious.
> 
> As a point of reference, the OpenStack Infrastructure team only uses
> LTS distro releases to run production systems. We've also got a
> modest sized OpenStack deployment on its way to production, again on
> an LTS distro release. I agree that releasing server software which
> is only well tested on "desktop" pace distro releases would be a
> serious misstep for the project.
> 

but are most people using that LTS release with cloud-archive enabled?

-- 
Russell Bryant



Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Jeremy Stanley
On 2016-01-15 23:09:34 +0800 (+0800), Thomas Goirand wrote:
> On 01/15/2016 09:57 PM, Jeremy Stanley wrote:
[...]
> > resulting in the summary at
> > https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers for
> > those who choose to learn from history rather than repeating it.
> 
> Well, this wiki entry doesn't even have a single line about copyright
> holding; it only talks about licensing and how/when to put a license
> header in source code. Or did I miss it by reading too fast?

What? That entire section I linked is _only_ about copyright
headers.
-- 
Jeremy Stanley



Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Jeremy Stanley
On 2016-01-15 07:31:09 -0600 (-0600), Dolph Mathews wrote:
> This is a topic for legal-discuss, not -dev.
> 
>   http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss

And not only that, but we've discussed it to death in years gone by
(my how short some memories are), resulting in the summary at
https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers for
those who choose to learn from history rather than repeating it.
-- 
Jeremy Stanley



Re: [openstack-dev] [Fuel] nova-network removal

2016-01-15 Thread Sheena Gregson
Although we are very close to HCF, I see no option but to continue removing
nova-network as I understand it is not currently functional or well-tested
for the Mitaka release.  We must either remove it or test it, and we want
to remove it anyway so that seems like the better path.



*Mike*, what do you think?



*From:* Roman Prykhodchenko [mailto:m...@romcheg.me]
*Sent:* Friday, January 15, 2016 8:04 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel] nova-network removal



I’d like to add that nova-network support was removed from
python-fuelclient in 8.0.



On Jan 14, 2016, at 17:54, Vitaly Kramskikh wrote:



Folks,

We have a request on review which prohibits creating new envs with
nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
from HCF, and I think this is too late for such a change. What do you
think? Should we proceed and remove nova-network support in 8.0, which has
been deprecated since 7.0?


-- 

Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.



Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-15 Thread Thomas Goirand
On 01/15/2016 03:45 PM, Victor Stinner wrote:
> Hi,
> 
> What are the issues? Is there a list of issues somewhere?
> 
> Victor

I reported a bunch of them, and so far, they seem fixed. Hopefully, I
wont see some new or old ones for the B2 release. I'll post links to
bugs in this thread if I encounter some.

Though I recently reported this one:
https://bugs.launchpad.net/python-neutronclient/+bug/1534008

Though this one seems just Py3 specific, not really Py3.5.

As always, your precious help is always so much welcome, Victor.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-15 Thread Sheena Gregson
I’ve also seen the request multiple times to be able to provide more
targeted snapshots which might also (partially) solve this problem as it
would require significantly less disk space to grab logs from a subset of
nodes for a specific window of time, instead of the more robust grab-all
solution we have now.



*From:* Maciej Kwiek [mailto:mkw...@mirantis.com]
*Sent:* Thursday, January 14, 2016 5:59 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is
broken due to lack of disk space



Igor,



I will investigate this, thanks!



Artem,



I guess that if we have an untrusted user on the master node, they could
simply put whatever they want in /var/log to have it included in the
snapshot, without having to time the attack carefully against the tar
execution.



I want to use links for directories; this saves me the trouble of creating
hardlinks for every single file in the directory. Although, with how
exclusion is currently implemented, it could cause log files to be deleted
from the original directories; I need to check this out.



About your PS: the whole /var/log on the master node (not in the container)
is currently downloaded. I think we shouldn't change this, as we plan to
drop containers in 9.0.



Cheers,

Maciej



On Thu, Jan 14, 2016 at 12:32 PM, Artem Panchenko 
wrote:

Hi,

using symlinks is a bit dangerous, here is a quote from the man you
mentioned [0]:

> The `--dereference' option is unsafe if an untrusted user can modify
directories while tar is running.

Using hard links is much safer, because you can't use them for directories.
But at the same time the implementation in shotgun would be more complicated
than with symlinks.
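
To make the hard-link/symlink trade-off concrete, here is a small Python
sketch (illustrative only, not shotgun code) showing that a hard link shares
the target's inode while a symlink is a separate object holding only a path:

```python
import os
import tempfile

# Illustrative sketch, not shotgun code: a hard link shares the target
# file's inode (so it costs no extra space but cannot cross filesystems
# or point at a directory), while a symlink is its own tiny inode that
# merely stores a path (so tar only follows it with --dereference).
staging = tempfile.mkdtemp()
log = os.path.join(staging, "app.log")
with open(log, "w") as f:
    f.write("x" * 1024)

hard = os.path.join(staging, "app.log.hard")
soft = os.path.join(staging, "app.log.soft")
os.link(log, hard)     # hard link: the inode's link count becomes 2
os.symlink(log, soft)  # symlink: a new inode that stores only the path

assert os.stat(hard).st_ino == os.stat(log).st_ino   # same inode
assert os.stat(log).st_nlink == 2                    # two names, one file
assert os.lstat(soft).st_ino != os.stat(log).st_ino  # distinct inode
```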

Anyway, in order to determine what linking to use we need to decide where
(/var/log or another partition) diagnostic snapshot will be stored.

p.s.

>This doesn't really give us much right now, because most of the logs are 
>fetched from master node via ssh due to shotgun being run in mcollective 
>container



AFAIK '/var/log/docker-logs/' is available from mcollective container and
mounted to /var/log/:

[root@fuel-lab-cz5557 ~]# dockerctl shell mcollective mount -l | grep
os-varlog
/dev/mapper/os-varlog on /var/log type ext4
(rw,relatime,stripe=128,data=ordered)

From my experience, the '/var/log/docker-logs/remote' folder is the
'heaviest' thing in the snapshot.

[0] http://www.gnu.org/software/tar/manual/html_node/dereference.html

Thanks!



On 14.01.16 13:00, Igor Kalnitsky wrote:

> I took a glance on Maciej's patch and it adds a switch to tar command
> to make it follow symbolic links

Yeah, that should work. Except one thing - we previously had fqdn ->
ipaddr links in snapshots. So now they will be resolved into full
copy?



> I meant that symlinks also give us the benefit of not using additional
> space (just as hardlinks do) while being able to link to files from
> different filesystems.

I'm sorry, I got you wrong. :)



- Igor



On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek 
 wrote:

Igor,



I meant that symlinks also give us the benefit of not using additional space

(just as hardlinks do) while being able to link to files from different

filesystems.



Also, as Bartłomiej pointed out, the `h` switch for tar should do the trick
[1].



Cheers,

Maciej



[1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
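
As a quick illustration of what the `h` (`--dereference`) switch does,
Python's tarfile module exposes the same behaviour via a `dereference` flag:
with it enabled, the archive stores the linked file's content rather than
the symlink itself. This is a sketch only, not the shotgun implementation:

```python
import os
import tarfile
import tempfile

# Sketch of tar's -h/--dereference behaviour using Python's tarfile.
d = tempfile.mkdtemp()
target = os.path.join(d, "real.log")
with open(target, "w") as f:
    f.write("hello\n")
link = os.path.join(d, "link.log")
os.symlink(target, link)

archive = os.path.join(d, "snapshot.tar")
with tarfile.open(archive, "w", dereference=True) as tar:
    tar.add(link, arcname="link.log")  # stored as a regular file

with tarfile.open(archive) as tar:
    member = tar.getmember("link.log")
    # The symlink was followed: the member is a plain file with content.
    assert member.isfile() and not member.issym()
    assert tar.extractfile(member).read() == b"hello\n"
```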



On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski

  wrote:

Igor,



I took a glance on Maciej's patch and it adds a switch to tar command to

make it follow symbolic links, so it looks good to me.



Bartłomiej



On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky
 

wrote:

Hey Maciej -



> About hardlinks - wouldn't it be better to use symlinks?
> This way we don't occupy more space than necessary

AFAIK, hardlinks won't occupy much space. They are the links, after all.
:)



As for symlinks, I'm afraid shotgun (and fabric underneath) won't
resolve them, and the links get into the snapshot as-is. That means if
the content they point to is not in the snapshot, they are simply
useless. Needs to be checked, though.



- Igor



On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 


wrote:

Thanks for your insight guys!



I agree with Oleg, I will see what I can do to make this work this way.



About hardlinks - wouldn't it be better to use symlinks? This way we

don't

occupy more space than necessary, and we can link to files and

directories

that are in other block device than /var. Please see [1] review for a

proposed change that introduces symlinks.



This doesn't really give us much right now, because most of the logs

are

fetched from master node via ssh due to shotgun being run in

mcollective

container, but it's something! When we remove 

Re: [openstack-dev] [cinder] Should we fix XML request issues?

2016-01-15 Thread Doug Hellmann
Excerpts from Michał Dulko's message of 2016-01-15 09:08:54 +0100:
> On 01/15/2016 07:14 AM, Jethani, Ravishekar wrote:
> > Hi Devs,
> >
> > I have come across a few 500 response issues while sending request
> > body as XML to cinder service. For example:
> >
> > 
> >
> > I can see that XML support has been marked as deprecated and will be
> > removed in the 'N' release. So is it still worth trying to fix these
> > issues during the Mitaka time frame?
> >
> > Thanks.
> > Ravi Jethani
> 
> One of the reasons the XML API was deprecated is the fact that it wasn't
> getting much CI testing and, as Doug Hellmann once mentioned, "if
> something isn't tested then it isn't working".

I may have said it, but I was hardly the first to do so! :-)

Doug

> 
> I'm okay with fixing it (if someone really needs that feature), but we
> don't have any means to prevent further regressions, so it may not be
> worth it.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Thomas Goirand
On 01/15/2016 09:57 PM, Jeremy Stanley wrote:
> On 2016-01-15 07:31:09 -0600 (-0600), Dolph Mathews wrote:
>> This is a topic for legal-discuss, not -dev.
>>
>>   http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss
> 
> And not only that

It is important that everyone writing code reads the thread, otherwise
we will see no progress.

> but we've discussed it to death in years gone by
> (my how short some memories are)

Probably, but with no progress. I've just had an issue with wrong
copyright holders in oslo.privsep, which is very new. So,
unfortunately, I have to bring the topic back to the table. :(

> resulting in the summary at
> https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers for
> those who choose to learn from history rather than repeating it.

Well, this wiki entry doesn't have a single line about copyright
holding; it only talks about licensing and how/when to put a license
header in source code. Or did I miss it by reading too fast?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Jan Klare

> On 15 Jan 2016, at 14:57, Jeremy Stanley  wrote:
> 
> On 2016-01-15 07:31:09 -0600 (-0600), Dolph Mathews wrote:
>> This is a topic for legal-discuss, not -dev.
>> 
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss
> 
> And not only that, but we've discussed it to death in years gone by
> (my how short some memories are), resulting in the summary at
> https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers for
> those who choose to learn from history rather than repeating it.
> -- 
> Jeremy Stanley

You are completely right, but maybe this mail was also pointing to the current 
situation we are in (which is ugly) and to this sentence:

"
We do not yet have guidance for when to add or remove a copyright header in 
source files.
"

from the wiki you mentioned above. Nevertheless, we should move this discussion 
to another mailing list, as stated before, and try to fix the "not yet", or at 
least move forward a bit.

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Should we fix XML request issues?

2016-01-15 Thread Sean McGinnis
On Fri, Jan 15, 2016 at 09:08:54AM +0100, Michał Dulko wrote:
> On 01/15/2016 07:14 AM, Jethani, Ravishekar wrote:
> > Hi Devs,
> >
> > I have come across a few 500 response issues while sending request
> > body as XML to cinder service. For example:
> >
> > 
> >
> > I can see that XML support has been marked as deprecated and will be
> > removed in the 'N' release. So is it still worth trying to fix these
> > issues during the Mitaka time frame?
> >
> > Thanks.
> > Ravi Jethani
> 
> One of the reasons the XML API was deprecated is the fact that it wasn't
> getting much CI testing and, as Doug Hellmann once mentioned, "if
> something isn't tested then it isn't working".
> 
> I'm okay with fixing it (if someone really needs that feature), but we
> don't have any means to prevent further regressions, so it may not be
> worth it.

I'm in the same boat. I think we have other, more important things to
focus on right now, but if someone wants to submit a patch to fix
these - and do their own testing to validate the fix - I'm fine with
letting that through.

But given that this code will all be removed shortly, I would really
rather see that effort go the other way: updating the API consumers'
code to switch to JSON.



> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Dolph Mathews
This is a topic for legal-discuss, not -dev.

  http://lists.openstack.org/cgi-bin/mailman/listinfo/legal-discuss

On Friday, January 15, 2016, Thomas Goirand  wrote:

> This isn't the first time I'm calling for it. Let's hope this time, I'll
> be heard.
>
> Randomly, contributors put their company names into source code. When
> they do, this effectively claims that a given source file's copyright
> holder is whoever is named, even though someone from another company
> may have patched it.
>
> As a result, we have a huge mess. It's impossible for me, as a package
> maintainer, to accurately set the copyright holder names in the
> debian/copyright file, which is required by the Debian FTP masters.
>
> I see 2 ways forward:
> 1/ Require everyone to give-up copyright holding, and give it to the
> OpenStack Foundation.
> 2/ Maintain a copyright-holder file in each project.
>
> The latter is needed if we want to do things correctly. Leaving the
> possibility for everyone to just write (c) MyCompany LLC randomly in the
> source code doesn't cut it. Expecting that a package maintainer should
> second-guess copyright holding just by reading the email addresses in the
> "git log" output doesn't work either.
>
> Please remember that a copyright holder has nothing to do with the
> license, neither with the author of some code. So please do *not* take
> over this thread, and discuss authorship or licensing.
>
> Whatever we choose, I think we should ban copyright holding text
> within our source code. While the licensing text is a good idea, as it
> is accurate, the copyright holding information isn't, and it's just
> misleading.
>
> If I were the only person to choose, I'd say let's go for 1/, but
> the managers of every company probably won't agree.
>
> Some thoughts anyone?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] re-introducing twisted to global-requirements

2016-01-15 Thread Jim Rollenhagen
On Thu, Jan 14, 2016 at 11:00:16AM +1300, Robert Collins wrote:
> On 8 January 2016 at 08:09, Jim Rollenhagen  wrote:
> > Hi all,
> >
> > A change to global-requirements[1] introduces mimic, which is an http
> > server that can mock various APIs, including nova and ironic, including
> > control of error codes and timeouts. The ironic team plans to use this
> > for testing python-ironicclient without standing up a full ironic
> > environment.
> 
> Whee that was a bit of a thread. Sorry for kicking this off on IRC and
> not chiming in until now.
> 
> As I read it the following points were made:
>   - this is a new and different testing thing
>   - which has the same interface skew issues as regular mocks
>   - and devs may need to hack twisted to fix issues with it in future
>   - but it is being used to target local devs, not gate jobs [today]
> 
> Interface skew can be handled a couple of ways. I'm a big fan of a
> technique (that I *think* came out of Google) of having a test fake
> layer which is substitutable by real things, so you can always run it
> in 'slow mode' to make sure things still work with real components.
> It's my understanding that with the way mimic is used, this can be
> done: run the same tests with real e.g. devstack brought up
> components. I think that that is good insurance, particularly if this
> is being done periodically rather than
> when-someone-remembers-or-has-an-issue.

Indeed, this is mimic's intent, and the way it's been used elsewhere
(e.g. by autoscale QA folks).

> Another way to mitigate interface skew is to have the publisher of an
> interface also publish a fast test fake for folk to use. I am also a
> fan of doing that, but as we haven't done so yet, I think its ok to go
> with what we have. If we couldn't run the same things with a real
> implementation, I would be more worried - because mocks of moving
> interfaces are fragile (e.g. mocking out oslo.config is fragile vs
> mocking out os.unlink).

So this made me think a bit and poke Glyph - turns out out-of-tree
plugins are being worked on by mimic; might be worth investigating
putting the mimic plugin for ironic in ironic's tree itself.

> Personally, I'm fine hacking on Twisted code - I quite like it
> (particularly good Twisted code :)) - and since folk seem ok with that
> aspect, great - thank you for raising the discussion here.

Totally; thanks for chiming in :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Should we fix XML request issues?

2016-01-15 Thread Ivan Kolodyazhny
A 500 error is bad for any API. IMO, I'm OK with fixing it in Mitaka.
Deprecated means that it could be dropped soon; it doesn't mean that
it's not working at all.

BTW, the XML API has almost no tests, so I'm not surprised that it's broken.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Jan 15, 2016 at 10:08 AM, Michał Dulko 
wrote:

> On 01/15/2016 07:14 AM, Jethani, Ravishekar wrote:
> > Hi Devs,
> >
> > I have come across a few 500 response issues while sending request
> > body as XML to cinder service. For example:
> >
> > 
> >
> > I can see that XML support has been marked as deprecated and will be
> > removed in the 'N' release. So is it still worth trying to fix these
> > issues during the Mitaka time frame?
> >
> > Thanks.
> > Ravi Jethani
>
> One of the reasons the XML API was deprecated is the fact that it wasn't
> getting much CI testing and, as Doug Hellmann once mentioned, "if
> something isn't tested then it isn't working".
>
> I'm okay with fixing it (if someone really needs that feature), but we
> don't have any means to prevent further regressions, so it may not be
> worth it.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-15 Thread Igor Kalnitsky
Sheena -

What do you mean by *targeted*? Shotgun's designed to be a *targeted*
solution. If someone wants more *precise* targets - it's easy to
specify them in Nailgun's settings.yaml.

- Igor

On Fri, Jan 15, 2016 at 5:02 PM, Sheena Gregson  wrote:
> I’ve also seen the request multiple times to be able to provide more
> targeted snapshots which might also (partially) solve this problem as it
> would require significantly less disk space to grab logs from a subset of
> nodes for a specific window of time, instead of the more robust grab-all
> solution we have now.
>
>
>
> From: Maciej Kwiek [mailto:mkw...@mirantis.com]
> Sent: Thursday, January 14, 2016 5:59 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken
> due to lack of disk space
>
>
>
> Igor,
>
>
>
> I will investigate this, thanks!
>
>
>
> Artem,
>
>
>
> I guess that if we have an untrusted user on master node, he could just put
> something he wants to be in the snapshot in /var/log without having to time
> the attack carefully with tar execution.
>
>
>
> I want to use links for directories, this saves me the trouble of creating
> hardlinks for every single file in the directory. Although with how
> exclusion is currently implemented it can cause deleting log files from
> original directories, need to check this out.
>
>
>
> About your PS: whole /var/log on master node (not in container) is currently
> downloaded, I think we shouldn't change this as we plan to drop containers
> in 9.0.
>
>
>
> Cheers,
>
> Maciej
>
>
>
> On Thu, Jan 14, 2016 at 12:32 PM, Artem Panchenko 
> wrote:
>
> Hi,
>
> using symlinks is a bit dangerous, here is a quote from the man you
> mentioned [0]:
>
>> The `--dereference' option is unsafe if an untrusted user can modify
>> directories while tar is running.
>
> Hard links usage is much safer, because you can't use them for directories.
> But at the same time implementation in shotgun would be more complicated
> than with symlinks.
>
> Anyway, in order to determine what linking to use we need to decide where
> (/var/log or another partition) diagnostic snapshot will be stored.
>
> p.s.
>
>>This doesn't really give us much right now, because most of the logs are
>> fetched from master node via ssh due to shotgun being run in mcollective
>> container
>
>
>
> AFAIK '/var/log/docker-logs/' is available from mcollective container and
> mounted to /var/log/:
>
> [root@fuel-lab-cz5557 ~]# dockerctl shell mcollective mount -l | grep
> os-varlog
> /dev/mapper/os-varlog on /var/log type ext4
> (rw,relatime,stripe=128,data=ordered)
>
> From my experience '/var/log/docker-logs/remote' folder is most ' heavy'
> thing in snapshot.
>
> [0] http://www.gnu.org/software/tar/manual/html_node/dereference.html
>
> Thanks!
>
>
>
> On 14.01.16 13:00, Igor Kalnitsky wrote:
>
> I took a glance on Maciej's patch and it adds a switch to tar command
>
> to make it follow symbolic links
>
> Yeah, that should work. Except one thing - we previously had fqdn ->
>
> ipaddr links in snapshots. So now they will be resolved into full
>
> copy?
>
>
>
> I meant that symlinks also give us the benefit of not using additional
>
> space (just as hardlinks do) while being able to link to files from
>
> different filesystems.
>
> I'm sorry, I got you wrong. :)
>
>
>
> - Igor
>
>
>
> On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek  wrote:
>
> Igor,
>
>
>
> I meant that symlinks also give us the benefit of not using additional space
>
> (just as hardlinks do) while being able to link to files from different
>
> filesystems.
>
>
>
> Also, as Bartłomiej pointed out, the `h` switch for tar should do the trick
>
> [1].
>
>
>
> Cheers,
>
> Maciej
>
>
>
> [1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
>
>
>
> On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski
>
>  wrote:
>
> Igor,
>
>
>
> I took a glance on Maciej's patch and it adds a switch to tar command to
>
> make it follow symbolic links, so it looks good to me.
>
>
>
> Bartłomiej
>
>
>
> On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky 
>
> wrote:
>
> Hey Maciej -
>
>
>
> About hardlinks - wouldn't it be better to use symlinks?
>
> This way we don't occupy more space than necessary
>
> AFAIK, hardlinks won't occupy much space. They are the links, after all.
>
> :)
>
>
>
> As for symlinks, I'm afraid shotgun (and fabric underneath) won't
>
> resolve them and links are get to snapshot As Is. That means if there
>
> will be no content in the snapshot they are pointing to, they are
>
> simply useless. Needs to be checked, though.
>
>
>
> - Igor
>
>
>
> On Thu, Jan 14, 2016 at 10:31 AM, Maciej Kwiek 
>
> wrote:
>
> Thanks for your insight guys!
>
>
>
> I agree with Oleg, I will see what I can do to make this work this way.
>

Re: [openstack-dev] [nova][cinder] How will nova advertise that volume multi-attach is supported?

2016-01-15 Thread Ildikó Váncsa
Hi All,

I wonder whether we could provide an interface on the API where these kinds of
capabilities can be retrieved. I know we have a support matrix in the
documentation, which is good to have. I ask the question because here we have a
base functionality, which is attaching a volume that Cinder exports. The
multiattach feature is an extension to this, which is provided by Cinder, and
we wire it into Nova to provide this functionality for the instances. It's not
in question that the API behavior will change by this; it's more a matter of
which components allow the multiple attachments. In this sense the API
microversion does not provide much information on its own: it can only say
that if you have all the right drivers set up in your environment, then you
can use it. But do we have a way to check this?

Also, in order to be able to introduce multiattach in the N cycle, there are
two patches that we have to land for Mitaka [1] [2]. [1] prepares the detach
mechanism to send all the information to Cinder needed to identify the right
attachment; this means passing the attachment_id to Cinder. In the case of an
upgrade, when we can have old and new components in the system, it is
important that if a new component attaches a volume for the second time, the
detach called on the old one can still be executed properly. [2] is needed by
Cinder, as some of the drivers need the host information to track the
attachments of the same volume on the same host properly.

> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: January 14, 2016 17:11
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][cinder] How will nova advertise that 
> volume multi-attach is supported?
> 
> 
> 
> On 1/14/2016 9:42 AM, Dan Smith wrote:
> >> It is however not ideal when a deployment is set up such that
> >> multiattach will always fail because a hypervisor is in use which
> >> doesn't support it.  An immediate solution would be to add a policy
> >> so a deployer could disallow it that way which would provide
> >> immediate feedback to a user that they can't do it.  A longer term
> >> solution would be to add capabilities to flavors and have flavors act
> >> as a proxy between the user and various hypervisor capabilities
> >> available in the deployment.  Or we can focus on providing better
> >> async feedback through instance-actions, and other discussed async api 
> >> changes.
> >
> > Presumably a deployer doesn't enable volumes to be set as multi-attach
> > on the cinder side if their nova doesn't support it at all, right? I
> > would expect that is the gating policy element for something global.
> 
> There is no policy in cinder to disallow creating multiattach-able volumes 
> [1]. It's just a property on the volume and somewhere in
> cinder the volume drivers support the capability or not.
> 
>  From a very quick look at the cinder code, the scheduler has a capabilities 
> filter for multiattach so if you try to create a multiattach
> volume and don't have any hosts (volume backends) that support that, you'd 
> fail to create the volume with NoValidHost.
> 
> But lvm supports it, so if you have an lvm backend you can create the 
> multiattach volume, that doesn't mean you can use it in nova. So
> it seems like you'd also need the same kind of capabilities filter in the 
> nova scheduler for this and that capability from the compute
> host would come from the virt driver, of which only libvirt is going to 
> support it at first.

Do I understand correctly that you mean that we would specify at VM creation 
time that it should go to a compute host where the hypervisor supports 
multiattach?

Thanks,
/Ildikó

[1] https://review.openstack.org/#/c/193134/ 
[2] https://review.openstack.org/#/c/256273/ 
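
For illustration, the kind of capabilities filter discussed above could look
roughly like this in a scheduler. All names here (class, dict keys) are
hypothetical, not Nova's actual API:

```python
# Hypothetical sketch of a scheduler capabilities filter for
# multiattach, in the spirit of Cinder's existing one. The class and
# attribute names are illustrative only.
class MultiattachFilter:
    """Pass only hosts whose virt driver reports multiattach support."""

    def host_passes(self, host_state, request_spec):
        if not request_spec.get("multiattach", False):
            return True  # nothing special requested; any host will do
        caps = host_state.get("capabilities", {})
        return bool(caps.get("supports_multiattach", False))

f = MultiattachFilter()
hosts = [
    {"name": "compute1", "capabilities": {"supports_multiattach": True}},
    {"name": "compute2", "capabilities": {}},
]
spec = {"multiattach": True}
eligible = [h["name"] for h in hosts if f.host_passes(h, spec)]
assert eligible == ["compute1"]
```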

> 
> >
> > Now, if multiple novas share a common cinder, then I guess it gets a
> > little more confusing...
> >
> > --Dan
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/api/v2/volumes.py#L407
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Steven Dake (stdake)
I am -2 on a separate meeting.

If there are problems with the current approach to how the agenda is
managed as it relates to mesos topics, let's fix that.  If there are
timezone problems, let's try to fix those.  Let's fix the problems you point
out rather than create another meeting for our core team to have to try to
make.  Even if the core team is not actively writing code, they are
actively reviewing code (I hope :) and it is helpful for the core team to
act as one unit when it comes to decision making.  This includes
informational meetings like kolla-mesos.

I find that, as running the typical meetings goes, I typically have over half
the agenda time available for open topics unrelated to normal project
business (such as figuring out what will be discussed at the midcycle or
summit, as an example).  It would be a real shame to have two shorter
meetings when we can just fit all the Kolla-related topics into one
meeting.

Regards
-steve



On 1/15/16, 4:38 AM, "Michal Rostecki"  wrote:

>Hi,
>
>Currently we're discussing stuff about kolla-mesos project on kolla IRC
>meetings[1]. We have an idea of creating the separate meeting for
>kolla-mesos. I see the following reasons for that:
>
>- kolla-mesos has some contributors which aren't able to attend kolla
>meeting because of timezone reasons
>- kolla meetings had a lot of topics recently and there was a short time
>for discussing kolla-mesos things
>- in the most of kolla meetings, we treated the whole kolla-mesos as one
>topic, which is bad in terms of analyzing single problems inside this
>project
>
>The things I would like to know from you is:
>- whether you're +1 or -1 to the whole idea of having separate meeting
>- what is your preferred time of meeting - please use this etherpad[2]
>(I already added there some names of most active contributors from who
>I'd like to hear an opinion, so if you're interested - please "override
>color"; if not, remove the corresponding line)
>
>About the time of meeting and possible conflicts - I think that in case
>of conflicting times and the equal number of votes, opinion of core
>reviewers and people who are already contributing to the project
>(reviews and commits) will be more important. You can see the
>contributors here[3][4].
>
>[1] https://wiki.openstack.org/wiki/Meetings/Kolla
>[2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
>[3] http://stackalytics.com/?module=kolla-mesos
>[4] http://stackalytics.com/?module=kolla-mesos=commits
>
>Cheers,
>Michal
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Patrick Petit
On 15 Jan 2016 at 14:43:39, Michał Jastrzębski (inc...@gmail.com) wrote:
Yeah that's true. We did all of openstack systems but we didn't 
implement infra around yet. I'd guess most of services can log either 
to stdout or file, and both sources should be accessible by heka. 
Also, I'd be surprised if heka wouldn't have syslog driver? That 
should be one of first:) Maybe worth writing one? I wanted an excuse 
to write some golang;) 
Well, writing a Go plugin would require rebuilding Heka.

The beauty is that you don’t have to do that.

Just write a decoder plugin in Lua, as is already the case for rsyslog…

Cheers,

Patrick
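
For the curious, the kind of work such a decoder does can be sketched in a
few lines. Heka's real decoders are written in Lua, so this Python version is
purely illustrative of parsing an RFC 3164-style syslog line into structured
fields:

```python
import re

# Illustrative only: the kind of parsing a syslog decoder performs,
# turning a raw RFC 3164-style line into structured fields.
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<tag>[\w\-/\.]+):? ?(?P<msg>.*)$"
)

line = "<34>Jan 15 06:42:00 node-1 rabbitmq: connection accepted"
m = SYSLOG_RE.match(line)
assert m is not None
pri = int(m.group("pri"))
# PRI encodes facility and severity: pri = facility * 8 + severity.
facility, severity = pri // 8, pri % 8
assert (facility, severity) == (4, 2)
assert m.group("host") == "node-1"
assert m.group("tag") == "rabbitmq"
assert m.group("msg") == "connection accepted"
```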



Regards, 
Michal 

On 15 January 2016 at 06:42, Eric LEMOINE  wrote: 
> On Fri, Jan 15, 2016 at 11:57 AM, Michal Rostecki 
>  wrote: 
>> On 01/15/2016 11:14 AM, Simon Pasquier wrote: 
>>> 
>>> My 2 cents on RabbitMQ logging... 
>>> 
>>> On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki >> > wrote: 
>>> 
>>> I'd suggest to check the similar options in RabbitMQ and other 
>>> non-OpenStack components. 
>>> 
>>> 
>>> AFAICT RabbitMQ can't log to Syslog anyway. But you have option to make 
>>> RabbitMQ log to stdout [1]. 
>>> BR, 
>>> Simon. 
>>> [1] http://www.superpumpup.com/docker-rabbitmq-stdout 
>>> 
>> 
>> That's OK for Heka/Mesos/k8s approach. 
>> 
>> Just for the curiosity, 
>> @inc0: so we don't receive any logs from RabbitMQ in the current rsyslog 
>> approach? 
> 
> 
> /var/lib/docker/volumes/rsyslog/_data is where logs are stored, and 
> you'll see that there is no file for RabbitMQ. This is related to 
> RabbitMQ not logging to syslog. So our impression is that Kolla 
> doesn't collect RabbitMQ logs at all today. I guess this should be 
> fixed. 
> 
> __ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

__ 
OpenStack Development Mailing List (not for usage questions) 
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] MTU configuration pain

2016-01-15 Thread Sean M. Collins
MTU has been an ongoing issue in Neutron for _years_.

It's such a hassle that most people just throw up their hands and set
their physical infrastructure to jumbo frames. We even document it.

http://docs.openstack.org/juno/install-guide/install/apt-debian/content/neutron-network-node.html

> Ideally, you can prevent these problems by enabling jumbo frames on
> the physical network that contains your tenant virtual networks. Jumbo
> frames support MTUs up to approximately 9000 bytes which negates the
> impact of GRE overhead on virtual networks.
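The overhead arithmetic behind that advice can be sketched as follows. The per-protocol overhead figures below are commonly cited values, not constants taken from Neutron's source, so treat them as assumptions:

```python
# Rough sketch of tenant-network MTU arithmetic. The overhead figures are
# commonly cited values (outer Ethernet + outer IPv4 + protocol headers),
# not authoritative Neutron constants -- treat them as assumptions.
ENCAP_OVERHEAD = {
    "gre": 42,    # 14 (outer Ethernet) + 20 (outer IPv4) + 8 (GRE with key)
    "vxlan": 50,  # 14 (outer Ethernet) + 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN)
}

def guest_mtu(physical_mtu, encap):
    """Largest MTU a guest can use without fragmenting on the physical net."""
    return physical_mtu - ENCAP_OVERHEAD[encap]

if __name__ == "__main__":
    # With a standard 1500-byte physical MTU, tenant traffic loses the
    # encapsulation overhead; with 9000-byte jumbo frames the overhead
    # no longer matters for 1500-byte guests.
    print(guest_mtu(1500, "gre"))    # 1458
    print(guest_mtu(1500, "vxlan"))  # 1450
```

This is exactly why raising the physical MTU to ~9000 "negates the impact of GRE overhead": the result stays far above the 1500 bytes guests expect.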

We've pushed this onto operators and deployers. There's a lot of
code in provisioning projects to handle MTUs.

http://codesearch.openstack.org/?q=MTU&i=nope&files=&repos=

We have mentions of it in our architecture design guide

http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/arch-design/source/network-focus-architecture.rst#n150

I want to get Neutron to the point where it starts discovering this
information and automatically configuring itself, at least in the optimistic
cases. I understand that it can be complex and have corner cases, but the
issue we have today is that it is broken in some multinode jobs; not even
Neutron developers are configuring it correctly.

I also had this discussion on the DevStack side in 
https://review.openstack.org/#/c/112523/
where basically, sure we can fix it in DevStack and at the gate, but it
doesn't fix the problem for anyone who isn't using DevStack to deploy
their cloud.

Today we have a ton of MTU configuration options sprinkled throughout the
L3 agent, DHCP agent, L2 agents, and at least one extension to the
REST API for handling MTUs.

So yeah, a lot of knobs and not a lot of documentation on how to make
this thing work correctly. I'd like to try and simplify.


Further reading:

http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html

http://lists.openstack.org/pipermail/openstack/2013-October/001778.html

https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/

https://ask.openstack.org/en/question/12499/forcing-mtu-to-1400-via-etcneutrondnsmasq-neutronconf-per-daniels/

http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/

https://twitter.com/search?q=openstack%20neutron%20MTU

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Email as User Name on the Horizon login page

2016-01-15 Thread Lin Hua Cheng
It might be simpler to just update the label in the Python code. This is
where the form labels are defined.

You can update the label here:
https://github.com/openstack/django_openstack_auth/blob/stable/kilo/openstack_auth/forms.py#L51
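For illustration only, the subclass-and-relabel approach looks roughly like this. The classes below are minimal stand-ins, not the real django.forms or django_openstack_auth API; in a real deployment you would subclass openstack_auth.forms.Login and adjust the field's label:

```python
# Toy illustration of relabeling an inherited form field. CharField and
# Login here are stand-ins, NOT django.forms / openstack_auth classes --
# the point is only to show where the one-line change lives.
class CharField:
    def __init__(self, label):
        self.label = label

class Login:
    """Stand-in for the upstream login form with its default labels."""
    def __init__(self):
        self.fields = {
            "username": CharField(label="User Name"),
            "password": CharField(label="Password"),
        }

class EmailLogin(Login):
    """Deployment-specific form: present the username field as 'Email'."""
    def __init__(self):
        super().__init__()
        self.fields["username"].label = "Email"

print(EmailLogin().fields["username"].label)  # Email
```

The upside over the template hack below is that the change survives template updates and stays in one obvious place.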

-Lin

On Fri, Jan 15, 2016 at 12:54 AM, Itxaka Serrano Garcia 
wrote:

>
> Looks like the form comes from django_openstack_auth:
>
> https://github.com/openstack/django_openstack_auth/blob/master/openstack_auth/forms.py#L53
>
>
> But to be honest, no idea how that can be overridden through the themes;
> not sure if it's even possible to override anything on that page without
> modifying django_openstack_auth directly :(
>
> Maybe someone else has a better insight on this than me.
>
>
> * Horrible Hack Incoming, read at your own discretion *
>
> You can override the template here:
>
> https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_form_field.html#L51
>
> And change this line:
> {{ field.label }}
>
> For this:
> {% if field.label == "User Name" and not request.user.is_authenticated %}Email{% else %}{{ field.label }}{% endif %}
>
>
> Which will check if the label is "User Name" and the user is logged out
> and directly write "Email" as the field label.
>
> I know, it's horrible and if you update Horizon it will be overridden, but
> it probably works for the time being if you really need it ¯\_(ツ)_/¯
>
> * Horrible Hack Finished *
>
>
>
>
> Itxaka
>
>
>
>
>
> On 01/15/2016 05:13 AM, Adrian Turjak wrote:
>
>> I've run into a weird issue with the Liberty release of Horizon.
>>
>> For our deployment we enforce emails as usernames, and thus for Horizon
>> we used to have "User Name" on the login page replaced with "Email".
>> This used to be a straightforward change in the html template file, and
>> with the introduction of themes we assumed it would be the same. When
>> one of our designers was migrating our custom CSS and HTML changes to
>> the new theme system they missed that change, and at first I thought it
>> was a silly mistake.
>>
>> Only on digging through the code myself I found that the "User Name" on
>> the login screen isn't in the html file at all, nor anywhere else
>> straightforward. The login page form is built on the fly with javascript
>> to facilitate different modes of authentication. While a bit annoying,
>> that didn't seem too bad, and I assumed it might just mean a JavaScript
>> change; only the more I dug, the more confused I became.
>>
>> Where exactly is the login form defined? And where exactly is the "User
>> Name" text for the login form set?
>>
>> I've tried all manner of stuff to change it with no luck and I feel like
>> I must have missed something obvious.
>>
>> Cheers,
>> -Adrian Turjak
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-15 Thread Dan Smith
> I'm formally proposing that the nova-stable-maint team [1] adds Tony 
> Breeds to the core team.

My major complaint with Tony is that he talks funny. If he's willing to
work on fixing that, I'm +1.

:-P

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Plugin installation issue

2016-01-15 Thread Alexander Tivelkov
Hi Vahid,

Sorry for the delayed response.

I've run your plugin and I am facing an error during the loading of your
plugin: the loader complains that it is unable to find the module named
"heat_translator".
As far as I can see, your plugin attempts the following import statement:
*"from heat_translator import translator_shell"* and this is the exact
statement which fails.
Most probably this happens because the heat_translator project does not
expose its heat_translator.py file (the one defined in the root of the
module) to the outer world, thus when you install it with pip this module
cannot be found.
So, if you replace this import statement with something like
"*from translator import shell as translator_shell*" this should work,
since the "translator" module is properly exported by the heat-translator
package.
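The general defensive pattern behind that fix, trying the package's exported module path first and falling back to alternatives, can be sketched with stdlib machinery. The module names in the comment are illustrative, taken from the suggestion above:

```python
import importlib

def import_first(*candidates):
    """Return the first importable module from candidates.

    Useful when a dependency (like heat-translator here) renames or stops
    exporting a module between releases: the plugin can try the current
    exported path first and older paths as fallbacks.
    """
    errors = []
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            errors.append("%s: %s" % (name, exc))
    raise ImportError("none of the candidates were importable: " + "; ".join(errors))

# For the murano plugin the call would be roughly (names per the advice above):
#   translator_shell = import_first("translator.shell", "heat_translator")
# Demonstrated here with stdlib modules so the sketch is runnable anywhere:
mod = import_first("definitely_missing_module", "json")
print(mod.__name__)  # json
```

If every candidate fails, the combined error message makes the real root cause (which import paths were tried) visible instead of a bare "No module named ...".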

Hope this helps

On Mon, Jan 11, 2016 at 10:51 PM Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hello,
>
> I have been scratching my head at this issue for a couple of days without
> any success.
>
> Here's the issue:
> I try to install my plugin (can be found here
> as WIP) using this command:
>
> pip install -e contrib/plugins/murano_heat-translator_plugin/
>
> The installer runs and reports that "Successfully installed
> io.murano.plugins.oasis.tosca".
> However, when I try to import a package to the catalog I see (while
> debugging) that the plugin format I just installed is not among the
> plugin_loader formats that murano discovers (here
> 
> ).
>
> I run the same scenario for the Cloudify plugin and everything goes fine.
>
> So it appears there is something wrong with my package (I was able to
> install it before). But I don't know where to look for the root cause of
> this issue.
> Any pointers would be highly appreciated.
>
> Thanks.
> --Vahid
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Regards,
Alexander Tivelkov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] api working group proposed actions guideline needs input

2016-01-15 Thread Chris Dent


In this review:

https://review.openstack.org/#/c/234994/

there's a proposal that provides guidance on how to represent certain
types of actions against resources in an HTTP API. There's been a fair
bit of back and forth between me and the original author without
reaching a conclusion.

It would be great to get additional eyes on this spec so that we
could reach agreement. It is quite likely everyone involved is wrong
in some fashion or there are misunderstandings happening. If you're
working to implement actions in APIs, or just thinking about it,
pile on.

There's quite a lot of meat in the comments, discussing various
alternatives.

Thanks.

--
Chris Dent   (╯°□°)╯︵┻━┻           http://anticdent.org/
freenode: cdent                    tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] nova-network removal

2016-01-15 Thread Roman Alekseenkov
I agree with Sheena. Sounds like removing support for nova-network would be
the best option, even though it's late.

However, I'd like us to think about the impact on vCenter integration.
vCenter+nova-network was fully supported before. Since we are now
recommending DVS or NSX backends, I'd like the team to explicitly confirm
that those configurations have been tested.

Thanks,
Roman

On Fri, Jan 15, 2016 at 6:43 AM, Sheena Gregson 
wrote:

> Although we are very close to HCF, I see no option but to continue
> removing nova-network as I understand it is not currently functional or
> well-tested for the Mitaka release.  We must either remove it or test it,
> and we want to remove it anyway so that seems like the better path.
>
>
>
> *Mike*, what do you think?
>
>
>
> *From:* Roman Prykhodchenko [mailto:m...@romcheg.me]
> *Sent:* Friday, January 15, 2016 8:04 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel] nova-network removal
>
>
>
> I’d like to add that nova-network support was removed from
> python-fuelclient in 8.0.
>
>
>
> On 14 Jan 2016 at 17:54, Vitaly Kramskikh wrote:
>
>
>
> Folks,
>
> We have a request on review which prohibits creating new envs with
> nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
> from HCF, and I think this is too late for such a change. What do you
> think? Should we proceed and remove nova-network support in 8.0, which is
> deprecated since 7.0?
>
>
> --
>
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Should the playbook stop on certain tasks?

2016-01-15 Thread Darren J Moffat

On 01/13/16 15:10, Major Hayden wrote:

After presenting openstack-ansible-security at the Security Project Mid-Cycle 
meeting yesterday, the question came up around how to handle situations where 
automation might cause problems.

For example, the STIG requires[1] that all system accounts other than root are 
locked.  This could be dangerous on a running production system as Ubuntu has 
non-root accounts that are not locked.  At the moment, the playbook does a hard 
stop (using the fail module) when this check fails[2].  Although that can be 
skipped with --skip-tag, it can be a little annoying if you have automation 
that depends on the playbook running without stopping.

Is there a good alternative for this?  I've found a few options:

   1) Leave it as-is and do a hard stop on these tasks
   2) Print a warning to the console but let the playbook continue
   3) Use an Ansible callback plugin to catch these and print them at the end 
of the playbook run
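Option 2 could look roughly like this in Ansible. Task names, the check itself, and the STIG identifier are illustrative assumptions, not taken from the actual openstack-ansible-security playbook:

```yaml
# Sketch of "warn but continue" for the locked-accounts check.
# Names, the account filter, and V-number are illustrative assumptions.
- name: Gather non-root system accounts to review (V-38496)
  shell: "awk -F: '$3 < 1000 && $1 != \"root\" {print $1}' /etc/passwd"
  register: system_accounts
  changed_when: false

- name: Warn about system accounts instead of failing the play
  debug:
    msg: >
      WARNING: the STIG requires locking unlocked system accounts;
      review these before locking manually:
      {{ system_accounts.stdout_lines | join(', ') }}
  when: system_accounts.stdout_lines | length > 0
```

Swapping `debug:` back for `fail:` (or gating it on a `security_strict` variable) would let deployers choose between options 1 and 2 at run time.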


The STIG is just DISA's interpretation, and based on my experience 
helping get the Solaris 10 and Solaris 11 STIGs correct, it is often 
overly strict and sometimes poor advice for the general case.


In the case of Solaris, Ubuntu, Fedora requiring some of these system 
accounts to be locked would actually weaken system security because 
certain required functionality would break.


So I would strongly caution against taking the DISA STIG as an 
authoritative stance on OS security configuration.  A lot of it is very 
good and overlaps with CIS and vendor recommendations.  For Red Hat, 
Fedora, and Solaris I would recommend instead looking at the vendor-
delivered XCCDF profiles.


I think it would be much more valuable for us to focus on getting 
XCCDF/OVAL developed for OpenStack specific rules and leave the OS 
configuration/recommendations to the OS vendors.


--
Darren J Moffat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-15 Thread Jeremy Stanley
On 2016-01-15 09:51:57 -0500 (-0500), Russell Bryant wrote:
> but are most people using that LTS release with cloud-archive
> enabled?

I can't answer whether most Ubuntu users are adding it, though the
(several) times we've tried to enable UCA in our test platforms in
the past we've reported crippling bugs in newer versions of system
dependencies which have caused us to scurry to roll that back. While
it may be time to try our luck again and hope we fare better, we're
also only a few months from the next Ubuntu LTS (and as also
mentioned CentOS 7 is available for use right now).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Paul Bourke

+1, hopefully we'll see Harm again in the future.

Cheers,
-Paul

On 15/01/16 13:44, Michał Jastrzębski wrote:

That's an unfortunate +1, it's always sad to lose one of our own. Hope
we'll see you back soon! Thanks for all the fis...reviews!

On 15 January 2016 at 06:59, Ryan Hallisey  wrote:

Hello all,

Harm, really appreciate all your reviews.  You were always very thorough and 
insightful.  You'll always be welcome back to the core team in the future. 
Thanks for everything Harm!

+1
-Ryan

- Original Message -
From: "Michal Rostecki" 
To: openstack-dev@lists.openstack.org
Sent: Friday, January 15, 2016 6:46:43 AM
Subject: Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from 
the core reviewer team

On 01/15/2016 01:12 AM, Steven Dake (stdake) wrote:

Hi fellow core reviewers,

Harm joined Kolla early on with great enthusiasm and did a bang-up job
for several months working on Kolla.  We voted unanimously to add him to
the core team.  Over the last 6 months Harm hasn't really made much
contribution to Kolla.  I have spoken to him about it in the past, and
he indicated his work and other activities keep him from being able to
execute the full job of a core reviewer and nothing environmental is
changing in the near term that would improve things.

I faced a similar work/life balance problem when working on Magnum as a
core reviewer and also serving as PTL for Kolla.  My answer there was to
step down from the Magnum core reviewer team [1] because Kolla needed a
PTL more than Magnum needed a core reviewer.  I would strongly prefer if
folks don't have the time available to serve as a Kolla core reviewer,
to step down as was done in the above example.  Folks that follow this
path will always be welcome back as a core reviewer in the future once
they become familiar with the code base, people, and the project.

The other alternative to stepping down is for the core reviewer team to
vote to remove an individual from the core review team if that is deemed
necessary.  For future reference, if you as a core reviewer have
concerns about a fellow core reviewer's performance, please contact the
current PTL who will discuss the issue with you.

I propose removing Harm from the core review team.  Please vote:

+1 = remove Harm from the core review team
-1 = don't remove Harm from the core review team

Note folks that are voted off the core review team are always welcome to
rejoin the core team in the future once they become familiar with the
code base, people, and the project.  Harm is a great guy, and I hope in
the future he has more time available to rejoin the Kolla core review
team assuming this vote passes with simple majority.

It is important to explain why, for some folks that may be new to a core
reviewer role (which many of our core reviewers are), why a core
reviewer should have their +2/-2 voting rights removed when they become
inactive or their activity drops below an acceptable threshold for
extended or permanent periods.  This hasn't happened in Harm's case, but
it is possible that a core reviewer could approve a patch that is
incorrect because they lack sufficient context with the code base.  Our
core reviewers are the most important part of ensuring quality software.
   If the individual has lost context with the code base, their voting
may be suspect, and more importantly the other core reviewers may not
trust the individual's votes.  Trust is the cornerstone of a software
review process, so we need to maximize trust on a technical level
between our core team members.  That is why maintaining context with the
code base is critical and why I am proposing a vote to remove Harm from
the core reviewer team.

On a final note, folks should always know, joining the core review team
is never "permanent".  Folks are free to move on if their interests take
them into other areas or their availability becomes limited.  Core
Reviewers can also be removed by majority vote.  If there is any core
reviewer's performance you are concerned with, please contact the
current PTL to first work on improving performance, or alternatively
initiating a core reviewer removal voting process.

On a more personal note, I want to personally thank Harm for his many
and significant contributions to Kolla and especially going above and
beyond by taking on the responsibility of a core reviewer.  Harm's
reviews were always very thorough and very high quality, and I really do
hope in the future Harm will rejoin the Kolla core team.

Regards,
-steve

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html



Hi all,

First of all, thank you Harm for your work on Kolla in the Liberty
release. And especially for your good reviews and advices when I was
beginning my contribution to this project.

For now, I'm +1 for removing Harm from core team. But I also hope that
in future it will be possible for him to rejoin our team and be as
active as 

Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Jeff Peeler
+1, I think this has been a good upholding of openstack core team
policy. Always a nice reminder that any previous core member can be
fast tracked back into place if they decide to become active again.

On Fri, Jan 15, 2016 at 8:44 AM, Michał Jastrzębski  wrote:
> That's an unfortunate +1, it's always sad to lose one of our own. Hope
> we'll see you back soon! Thanks for all the fis...reviews!
>
> On 15 January 2016 at 06:59, Ryan Hallisey  wrote:
>> Hello all,
>>
>> Harm, really appreciate all your reviews.  You were always very thorough and 
>> insightful.  You'll always be welcome back to the core team in the future. 
>> Thanks for everything Harm!
>>
>> +1
>> -Ryan
>>
>> - Original Message -
>> From: "Michal Rostecki" 
>> To: openstack-dev@lists.openstack.org
>> Sent: Friday, January 15, 2016 6:46:43 AM
>> Subject: Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites 
>> from the core reviewer team
>>
>> On 01/15/2016 01:12 AM, Steven Dake (stdake) wrote:
>>> Hi fellow core reviewers,
>>>
>>> Harm joined Kolla early on with great enthusiasm and did a bang-up job
>>> for several months working on Kolla.  We voted unanimously to add him to
>>> the core team.  Over the last 6 months Harm hasn't really made much
>>> contribution to Kolla.  I have spoken to him about it in the past, and
>>> he indicated his work and other activities keep him from being able to
>>> execute the full job of a core reviewer and nothing environmental is
>>> changing in the near term that would improve things.
>>>
>>> I faced a similar work/life balance problem when working on Magnum as a
>>> core reviewer and also serving as PTL for Kolla.  My answer there was to
>>> step down from the Magnum core reviewer team [1] because Kolla needed a
>>> PTL more than Magnum needed a core reviewer.  I would strongly prefer if
>>> folks don't have the time available to serve as a Kolla core reviewer,
>>> to step down as was done in the above example.  Folks that follow this
>>> path will always be welcome back as a core reviewer in the future once
>>> they become familiar with the code base, people, and the project.
>>>
>>> The other alternative to stepping down is for the core reviewer team to
>>> vote to remove an individual from the core review team if that is deemed
>>> necessary.  For future reference, if you as a core reviewer have
>>> concerns about a fellow core reviewer's performance, please contact the
>>> current PTL who will discuss the issue with you.
>>>
>>> I propose removing Harm from the core review team.  Please vote:
>>>
>>> +1 = remove Harm from the core review team
>>> -1 = don't remove Harm from the core review team
>>>
>>> Note folks that are voted off the core review team are always welcome to
>>> rejoin the core team in the future once they become familiar with the
>>> code base, people, and the project.  Harm is a great guy, and I hope in
>>> the future he has more time available to rejoin the Kolla core review
>>> team assuming this vote passes with simple majority.
>>>
>>> It is important to explain why, for some folks that may be new to a core
>>> reviewer role (which many of our core reviewers are), why a core
>>> reviewer should have their +2/-2 voting rights removed when they become
>>> inactive or their activity drops below an acceptable threshold for
>>> extended or permanent periods.  This hasn't happened in Harm's case, but
>>> it is possible that a core reviewer could approve a patch that is
>>> incorrect because they lack sufficient context with the code base.  Our
>>> core reviewers are the most important part of ensuring quality software.
>>>   If the individual has lost context with the code base, their voting
>>> may be suspect, and more importantly the other core reviewers may not
>>> trust the individual's votes.  Trust is the cornerstone of a software
>>> review process, so we need to maximize trust on a technical level
>>> between our core team members.  That is why maintaining context with the
>>> code base is critical and why I am proposing a vote to remove Harm from
>>> the core reviewer team.
>>>
>>> On a final note, folks should always know, joining the core review team
>>> is never "permanent".  Folks are free to move on if their interests take
>>> them into other areas or their availability becomes limited.  Core
>>> Reviewers can also be removed by majority vote.  If there is any core
>>> reviewer's performance you are concerned with, please contact the
>>> current PTL to first work on improving performance, or alternatively
>>> initiating a core reviewer removal voting process.
>>>
>>> On a more personal note, I want to personally thank Harm for his many
>>> and significant contributions to Kolla and especially going above and
>>> beyond by taking on the responsibility of a core reviewer.  Harm's
>>> reviews were always very thorough and very high quality, and I really do
>>> hope in the future Harm will 

Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Daniel P. Berrange
On Fri, Jan 15, 2016 at 12:53:49PM +, Chris Dent wrote:
> On Fri, 15 Jan 2016, Thomas Goirand wrote:
> 
> >Whatever we choose, I think we should ban having copyright holding text
> >within our source code. While licensing is a good idea, as it is
> >accurate, the copyright holding information isn't and it's just misleading.
> 
> I think we should not add new copyright notifications in files.
> 
> I'd also be happy to see all the existing ones removed, but that may
> be a bigger problem.

Only the copyright holder who added the notice is permitted to
remove it, i.e. you can't unilaterally remove copyright notices
added by other copyright holders. See LICENSE term (4)(c).

While you could undertake an exercise to get agreement from
every copyright holder to remove their notices, it is honestly
not worth the work IMHO.

> >If I was the only person to choose, I'd say let's go for 1/, but
> >probably the managers of every company won't agree.
> 
> I think option one is correct.

Copyright assignment is never the correct answer.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Daniel P. Berrange
On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
> This isn't the first time I'm calling for it. Let's hope this time, I'll
> be heard.
> 
> Randomly, contributors put their company names into source code. When
> they do, then effectively, this tells that a given source file copyright
> holder is whatever is claimed, even though someone from another company
> may have patched it.
> 
> As a result, we have a huge mess. It's impossible for me, as a package
> maintainer, to accurately set the copyright holder names in the
> debian/copyright file, which is required by the Debian FTP masters.

I don't think OpenStack is in a different situation to the vast
majority of open source projects I've worked with or seen. Except
for those projects requiring copyright assignment to a single
entity, it is normal for source files to contain an unreliable
random splattering of Copyright notices. This hasn't seemed to
create a blocking problem for their maintenance in Debian. Looking
at the debian/copyright files I see most of them have just done a
grep for the 'Copyright' statements & included them as is - IOW just
ignored the fact that this is essentially worthless info and included
it regardless.

> I see 2 ways forward:
> 1/ Require everyone to give-up copyright holding, and give it to the
> OpenStack Foundation.
> 2/ Maintain a copyright-holder file in each project.

3/ Do nothing, just populate debian/copyright with the random
   set of 'Copyright' lines that happen to be in the source files,
   as appears to be common practice across many debian packages

   eg the kernel package


http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.16.7-ckt11-1+deb8u3_copyright

"Copyright: 1991-2012 Linus Torvalds and many others"

   if its good enough for the Debian kernel package, it should be
   good enough for openstack packages too IMHO.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Monty Taylor

On 01/15/2016 10:28 AM, Daniel P. Berrange wrote:

On Fri, Jan 15, 2016 at 12:53:49PM +, Chris Dent wrote:

On Fri, 15 Jan 2016, Thomas Goirand wrote:


Whatever we choose, I think we should ban having copyright holding text
within our source code. While licensing is a good idea, as it is
accurate, the copyright holding information isn't and it's just misleading.


I think we should not add new copyright notifications in files.

I'd also be happy to see all the existing ones removed, but that may
be a bigger problem.


Only the copyright holder who added the notice is permitted to
remove it. ie you can't unilaterally remove Copyright notices
added by other copyright holders. See LICENSE term (4)(c)

While you could undertake an exercise to get agreement from
every copyright holder to remove their notices, it is honestly
not worth the work IMHO.


++ - also, there are people/companies from which you'll never get it.


If I was the only person to choose, I'd say let's go for 1/, but
probably the managers of every company won't agree.


I think option one is correct.


Copyright assignment is never the correct answer.


+1


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Monty Taylor

On 01/15/2016 10:38 AM, Daniel P. Berrange wrote:

On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:

This isn't the first time I'm calling for it. Let's hope this time, I'll
be heard.

Contributors put their company names into source code at random. When
they do, this effectively claims that a given source file's copyright
holder is whoever is named, even though someone from another company
may have patched it.

As a result, we have a huge mess. It's impossible for me, as a package
maintainer, to accurately set the copyright holder names in the
debian/copyright file, which is required by the Debian FTP masters.


I don't think OpenStack is in a different situation from the vast
majority of open source projects I've worked with or seen. Except
for those projects requiring copyright assignment to a single
entity, it is normal for source files to contain an unreliable,
random splattering of copyright notices. This hasn't seemed to
create a blocking problem for their maintenance in Debian. Looking
at the debian/copyright files, I see most of them have just done a
grep for the 'Copyright' statements and included them as is - IOW,
they just ignored the fact that this is essentially worthless info
and included it regardless.


Agreed. debian/copyright should be a best effort - but it can only be as 
good as the input data available to the packager. Try getting an 
accurate debian/copyright file for the MySQL source tree at some point 
(and good luck).



I see 2 ways forward:
1/ Require everyone to give-up copyright holding, and give it to the
OpenStack Foundation.
2/ Maintain a copyright-holder file in each project.



3/ Do nothing; just populate debian/copyright with the random
set of 'Copyright' lines that happen to be in the source files,
as appears to be common practice across many Debian packages,

e.g. the kernel package:

 
http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.16.7-ckt11-1+deb8u3_copyright

 "Copyright: 1991-2012 Linus Torvalds and many others"

if it's good enough for the Debian kernel package, it should be
good enough for OpenStack packages too, IMHO.


I vote for 3
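For concreteness, option 3 for a typical OpenStack package would amount to a single coarse DEP-5 stanza along these lines; the package name, years, and holders below are illustrative placeholders, not an audited statement:

```text
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: nova
Source: https://git.openstack.org/cgit/openstack/nova

Files: *
Copyright: 2010-2016 OpenStack Foundation and many others
License: Apache-2.0
```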


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-15 Thread Jim Rollenhagen
On Wed, Jan 13, 2016 at 09:56:57PM +, Heck, Joseph wrote:
> Hey Jay! (yeah, I’m here and lurking in the corners, albeit with a
> different email at the moment)
> 
> Yep - RackHD was created by a company that was acquired by EMC to attack
> the lowest level of hardware automation. EMC was interested in pushing
> that into Open Source, and surprisingly I was completely game for that
> project :-) There’s all sorts of PR around that project that I won’t
> bother replaying here, but if anyone’s interested, I’d be happy to share
> more details.
> 
> There was immediate interest in how this could work with OpenStack, and
> a plugin/driver for Ironic was the obvious play. We considered a couple of
> different possible approaches, and decided to build something
> that would both show off the underlying hardware introspection which
> wasn’t obviously visible (or arguably perhaps relevant) from the Ironic
> style APIs (the horizon plugin) as well as be leveraged by Ironic to do
> hardware provisioning using those APIs.
> 
> Andre (who was key in doing this effort inside EMC) was interested in
> helping manage it and is bringing it here to introduce folks to the fact
> that we’ve done this work, and that it will be submitted into
> incubation with OpenStack. So yep - we totally want to contribute it to
> the Ironic efforts. Andre and Jim are coordinating on that effort (hi
> jroll, nice to meet you) and it was Jim that suggested that Andre let the
> community know here that we’ve started this effort.

Hi. :)

So, to be clear, I haven't been working with Andre on this, except to
help him figure out how to create an OpenStack project, and to suggest
he email this list.

From what I know, the things RackHD/shovel offer that Ironic (Inspector)
doesn't have is additional SEL monitoring, as well as the capability to
register a second ironic node as a failover for another node (I haven't
investigated how this actually works).

I do share Jay's concern - why are these separate projects, rather than
contributing to ironic (inspector) itself? Why would a user want to use
both ironic *and* RackHD?

From the RackHD docs:
"RackHD is focused on being the lowest level of automation that
interrogates agnostic hardware and provisions machines with operating
systems."

And the Ironic mission statement:
"To produce an OpenStack service and associated libraries capable of
managing and provisioning physical machines, and to do this in a
security-aware and fault-tolerant manner."

So, I'm not sure I see much difference in the goals, which makes me
wonder if ironic and RackHD are truly complementary (as shovel implies)
or if they are actually aiming to do the same thing.

I'd love to see RackHD folks contributing to Ironic. Would it be possible
for EMC to work on contributing RackHD features that Ironic lacks into
Ironic, rather than building a bridge between the two?

// jim

> 
> Anyway, I’m lurking here again - but Andre is doing the real lifting :-)
> 
> -joe
> 
> On 1/13/16, 1:22 PM, "Mooney, Sean K"  wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Jay Pipes [mailto:jaypi...@gmail.com]
> >> Sent: Wednesday, January 13, 2016 8:53 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] Shovel (RackHD/OpenStack)
> >> 
> >> On 01/13/2016 03:28 PM, Keedy, Andre wrote:
> >> > Hi All, I'm pleased to announce a new application called 'Shovel 'that
> >> > is now available in a public repository on GitHub
> >> > (https://github.com/keedya/Shovel).  Shovel is a server with a set of
> >> > APIs that wraps around RackHD/Ironic's existing APIs allowing users to
> >> > find Baremetal Compute nodes that are dynamically discovered by RackHD
> >> > and register them with Ironic. Shovel also uses the SEL pollers
> >> > service in RackHD to monitor compute nodes and logs errors from SEL
> >> > into the Ironic Database.  Shovel includes a graphical interface using
> >> Swagger UI.
> >> >
> >> > Also provided is a Shovel Horizon plugin to interface with the Shovel
> >> > service that is available in a public repository on GitHub
> >> > (https://github.com/keedya/shovel-horizon-plugin). The Plugin adds a
> >> > new Panel to the admin Dashboard called rackhd that displays a table
> >> > of all the Baremetal systems discovered by RackHD. It also allows the
> >> > user to see the node catalog in a nice table view, register/unregister
> >> > node in Ironic, display node SEL and enable/register a failover node.
> >> >
> >> > I invite you to take a look at Shovel and Shovel horizon plugin that
> >> > is available to the public on GitHub.
> >> 
> >> Would EMC be interested in contributing to the OpenStack Ironic project
> >> around hardware discovery and automated registration of hardware? It
> >> would be nice to have a single community pulling in the same direction.
> >> It looks to me that RackHD is only a few months old. Was there a
> >> particular reason that EMC decided to start a new 

Re: [openstack-dev] [Horizon] Email as User Name on the Horizon login page

2016-01-15 Thread Diana Whitten
Adrian,

Changing the label is also possible through overrides, but customizing any
label that goes through Localization might affect other languages.  Not
sure if this might be a problem for you.

If this isn't ideal, we can easily put in hooks to allow customization
purely through CSS, using pseudo selectors like ':before' or ':after' in
combination with 'content'.  We just need to wrap the contents of the label
with an inner span.

Thoughts?

- Diana
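For the Python-side route Lin points at below (changing the label where the form is defined), the pattern is a plain subclass-and-override. The sketch below is a dependency-free stand-in: `CharField` and `Login` here are hypothetical minimal classes, since the real `openstack_auth.forms.Login` is a Django form with more fields and validation.

```python
# Stand-in classes; a real change would subclass django.forms.Form /
# openstack_auth.forms.Login instead of these minimal fakes.
class CharField:
    def __init__(self, label, max_length=None):
        self.label = label
        self.max_length = max_length


class Login:
    """Minimal stand-in for openstack_auth.forms.Login."""
    username = CharField(label="User Name", max_length=255)
    password = CharField(label="Password")


class EmailLogin(Login):
    # Override only the username label; everything else is inherited.
    username = CharField(label="Email", max_length=255)


print(EmailLogin.username.label)  # -> Email
print(EmailLogin.password.label)  # -> Password
```

The same shadowing works for a real Django form subclass, which is why updating (or subclassing) the label in forms.py is the simplest non-template fix.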



On Fri, Jan 15, 2016 at 10:47 AM, Lin Hua Cheng  wrote:

> It might be simpler to just update the label on the python code. This is
> where the form label are defined.
>
> You can update the label here:
> https://github.com/openstack/django_openstack_auth/blob/stable/kilo/openstack_auth/forms.py#L51
>
> -Lin
>
>
> On Fri, Jan 15, 2016 at 12:54 AM, Itxaka Serrano Garcia  > wrote:
>
>>
>> Looks like the form comes from django_openstack_auth:
>>
>> https://github.com/openstack/django_openstack_auth/blob/master/openstack_auth/forms.py#L53
>>
>>
>> But to be honest, I have no idea how that can be overridden through the
>> themes; not sure if it's even possible to override anything on that page without
>> modifying django_openstack_auth directly :(
>>
>> Maybe someone else has a better insight on this than me.
>>
>>
>> * Horrible Hack Incoming, read at your own discretion *
>>
>> You can override the template here:
>>
>> https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_form_field.html#L51
>>
>> And change this line:
>> {{ field.label }}
>>
>> For this:
>> {% if field.label == "User Name" and not request.user.is_authenticated %}Email{% else %}{{ field.label }}{% endif %}
>>
>>
>> Which will check if the label is "User Name" and the user is logged out
>> and directly write "Email" as the field label.
>>
>> I know it's horrible, and if you update Horizon it will be overwritten,
>> but it probably works for the time being if you really need it ¯\_(ツ)_/¯
>>
>> * Horrible Hack Finished *
>>
>>
>>
>>
>> Itxaka
>>
>>
>>
>>
>>
>> On 01/15/2016 05:13 AM, Adrian Turjak wrote:
>>
>>> I've run into a weird issue with the Liberty release of Horizon.
>>>
>>> For our deployment we enforce emails as usernames, and thus for Horizon
>>> we used to have "User Name" on the login page replaced with "Email".
>>> This used to be a straightforward change in the html template file, and
>>> with the introduction of themes we assumed it would be the same. When
>>> one of our designers was migrating our custom CSS and html changes to
>>> the new theme system, they missed that change, and at first I assumed
>>> it was a silly mistake.
>>>
>>> Only on digging through the code myself I found that the "User Name" on
>>> the login screen isn't in the html file at all, nor anywhere else
>>> straightforward. The login page form is built on the fly with javascript
>>> to facilitate different modes of authentication. While a bit annoying,
>>> that didn't seem too bad, and I then assumed it might mean a javascript
>>> change; but the more I dug, the more confused I became.
>>>
>>> Where exactly is the login form defined? And where exactly is the "User
>>> Name" text for the login form set?
>>>
>>> I've tried all manner of stuff to change it with no luck and I feel like
>>> I must have missed something obvious.
>>>
>>> Cheers,
>>> -Adrian Turjak
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Steven Dake (stdake)
I counted 6 votes in favor of removal.  We have 10 people on our core team.  A 
majority has been met and I have removed Harm from the core reviewer team.

Harm,

Thanks again for your helpful reviews, and remember: you're always welcome back 
in the future if your availability changes.

For the record, the core reviewers that voted for removal were:
Steven Dake
Jeff Peeler
Paul Bourke
Michal Jastrzebski
Ryan Hallisey
Michal Rostecki

Regards,
-steve


From: Steven Dake >
Reply-To: openstack-dev 
>
Date: Thursday, January 14, 2016 at 5:12 PM
To: openstack-dev 
>
Subject: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the 
core reviewer team

Hi fellow core reviewers,

Harm joined Kolla early on with great enthusiasm and did a bang-up job for 
several months working on Kolla.  We voted unanimously to add him to the core 
team.  Over the last 6 months Harm hasn't really made much contribution to 
Kolla.  I have spoken to him about it in the past, and he indicated his work 
and other activities keep him from being able to execute the full job of a core 
reviewer and nothing environmental is changing in the near term that would 
improve things.

I faced a similar work/life balance problem when working on Magnum as a core 
reviewer and also serving as PTL for Kolla.  My answer there was to step down 
from the Magnum core reviewer team [1] because Kolla needed a PTL more than 
Magnum needed a core reviewer.  I would strongly prefer that folks who don't 
have the time available to serve as a Kolla core reviewer step down, as was done 
in the above example.  Folks that follow this path will always be welcome back as 
a core reviewer in the future once they become familiar with the code base, 
people, and the project.

The other alternative to stepping down is for the core reviewer team to vote to 
remove an individual from the core review team if that is deemed necessary.  
For future reference, if you as a core reviewer have concerns about a fellow 
core reviewer's performance, please contact the current PTL who will discuss 
the issue with you.

I propose removing Harm from the core review team.  Please vote:

+1 = remove Harm from the core review team
-1 = don't remove Harm from the core review team

Note folks that are voted off the core review team are always welcome to rejoin 
the core team in the future once they become familiar with the code base, 
people, and the project.  Harm is a great guy, and I hope in the future he has 
more time available to rejoin the Kolla core review team assuming this vote 
passes with simple majority.

It is important to explain, for folks that may be new to a core 
reviewer role (which many of our core reviewers are), why a core reviewer 
should have their +2/-2 voting rights removed when they become inactive or 
their activity drops below an acceptable threshold for extended or permanent 
periods.  This hasn't happened in Harm's case, but it is possible that a core 
reviewer could approve a patch that is incorrect because they lack sufficient 
context with the code base.  Our core reviewers are the most important part of 
ensuring quality software.  If the individual has lost context with the code 
base, their voting may be suspect, and more importantly the other core 
reviewers may not trust the individual's votes.  Trust is the cornerstone of a 
software review process, so we need to maximize trust on a technical level 
between our core team members.  That is why maintaining context with the code 
base is critical and why I am proposing a vote to remove Harm from the core 
reviewer team.

On a final note, folks should always know, joining the core review team is 
never "permanent".  Folks are free to move on if their interests take them into 
other areas or their availability becomes limited.  Core Reviewers can also be 
removed by majority vote.  If there is a core reviewer whose performance you are 
concerned about, please contact the current PTL to first work on improving 
performance, or alternatively to initiate a core reviewer removal voting process.

On a more personal note, I want to personally thank Harm for his many and 
significant contributions to Kolla and especially going above and beyond by 
taking on the responsibility of a core reviewer.  Harm's reviews were always 
very thorough and very high quality, and I really do hope in the future Harm 
will rejoin the Kolla core team.

Regards,
-steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077844.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] kolla-mesos IRC meeting

2016-01-15 Thread Angus Salkeld
On Fri, Jan 15, 2016 at 7:44 AM Steven Dake (stdake) 
wrote:

> I am -2 on a separate meeting.
>
> If there are problems with the current approach to how the agenda is
> managed as it relates to mesos topics, lets fix that.  If there are
> timezone problems, lets try to fix that.  Lets fix the problems you point
>

The current meeting is 16:30 UTC, which is 2:30am for me :-(
I suspect that I may be the only person in this situation, but basically I
can't attend meetings. So that kinda sucks...

-A


> out rather than create another meeting for our core team to have to try to
> make.  Even if the core team is not actively writing code, they are
> actively reviewing code (I hope:) and it is helpful for the core team to
> act as one unit when it comes to decision making.  This includes
> informational meetings like kolla-mesos.
>
> I find that, as running the typical meetings goes, I typically have over half
> the agenda time available for open topics unrelated to normal project
> business (such as figuring out what will be discussed at the midcycle or
> summit as an example).  It would be a real shame to have two shorter
> meetings when we can just jam the one topic Kolla related stuff into one
> meeting.
>
> Regards
> -steve
>
>
>
> On 1/15/16, 4:38 AM, "Michal Rostecki"  wrote:
>
> >Hi,
> >
> >Currently we're discussing stuff about kolla-mesos project on kolla IRC
> >meetings[1]. We have an idea of creating the separate meeting for
> >kolla-mesos. I see the following reasons for that:
> >
> >- kolla-mesos has some contributors which aren't able to attend kolla
> >meeting because of timezone reasons
> >- kolla meetings had a lot of topics recently and there was a short time
> >for discussing kolla-mesos things
> >- in the most of kolla meetings, we treated the whole kolla-mesos as one
> >topic, which is bad in terms of analyzing single problems inside this
> >project
> >
> >The things I would like to know from you is:
> >- whether you're +1 or -1 to the whole idea of having separate meeting
> >- what is your preferred time of meeting - please use this etherpad[2]
> >(I already added there some names of most active contributors from who
> >I'd like to hear an opinion, so if you're interested - please "override
> >color"; if not, remove the corresponding line)
> >
> >About the time of meeting and possible conflicts - I think that in case
> >of conflicting times and the equal number of votes, opinion of core
> >reviewers and people who are already contributing to the project
> >(reviews and commits) will be more important. You can see the
> >contributors here[3][4].
> >
> >[1] https://wiki.openstack.org/wiki/Meetings/Kolla
> >[2] https://etherpad.openstack.org/p/kolla-mesos-irc-meeting
> >[3] http://stackalytics.com/?module=kolla-mesos
> >[4] http://stackalytics.com/?module=kolla-mesos=commits
> >
> >Cheers,
> >Michal
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] N. Makhotkin availability next 2 weeks

2016-01-15 Thread Nikolay Makhotkin
Hi team!

I will be on vacation next 2 weeks right till 31 Jan. Please ping me via
email if I am needed for reviewing some patches.

-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog] Automating some aspects of catalog maintenance

2016-01-15 Thread Fox, Kevin M
The vast majority of projects are using a db to solve it. But that requires a 
db and a lot more API, which we will get to when we go to Glare. In the 
meantime, I suspect we're in fairly uncharted waters at the moment.

Thanks,
Kevin

From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, January 14, 2016 5:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [app-catalog] Automating some aspects of catalog   
maintenance

While we are looking forward to implementing an API based on Glare, I
think it would be nice to have a few aspects of catalog maintenance be
automated.  For instance: discovering and removing/tagging assets with
dead links, updating the hash for assets that change frequently, or
exposing when an entry was last modified.

Initially I thought the best approach would be to create a very simple
API service using Flask on top of a DB.  This would provide output
identical to the current "v1" API.  But of course that "simple" idea
starts to look too complicated for something that would eventually be
abandoned wholesale.  Someone on the infra team suggested a dead-link
checker that would run as a periodic job similar to other proposal-bot
jobs, so I took a first pass at that [1].
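A periodic dead-link job over the catalog could be sketched roughly as below. This is a hedged illustration only: the asset layout (`attributes.url`, a `tags` list) is an assumption about assets.yaml, and the actual proposal-bot job [1] may work differently.

```python
# Sketch of a dead-link check over catalog assets.  Assumes each asset
# dict carries an 'attributes.url' and an optional 'tags' list; the real
# assets.yaml schema may differ.
from urllib.request import Request, urlopen


def link_alive(url, timeout=10):
    """True if the URL answers a HEAD request with a 2xx/3xx status."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False


def tag_dead_links(assets, check=link_alive):
    """Add a 'dead_link' tag to assets whose URL no longer resolves."""
    for asset in assets:
        url = asset.get("attributes", {}).get("url")
        if url and not check(url):
            tags = asset.setdefault("tags", [])
            if "dead_link" not in tags:
                tags.append("dead_link")
    return assets


# Stubbed checker so the example run is deterministic (no network):
catalog = [
    {"name": "good", "attributes": {"url": "http://example.com/ok"}},
    {"name": "gone", "attributes": {"url": "http://example.com/404"}},
]
tag_dead_links(catalog, check=lambda u: u.endswith("/ok"))
print([a["name"] for a in catalog if "dead_link" in a.get("tags", [])])  # -> ['gone']
```

A real job would load/dump assets.yaml around this and propose the change via Gerrit, which is where the "un-reviewable diff" problem discussed above comes in.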

As expected that resulted in a VERY large initial change[2] due to
"normalizing" the existing human-edited assets.yaml file.  I think the
feedback that this is un-reviewable without some external tools is
reasonable (though it's possible to verify the 86 assets are
unmolested, only slightly reformatted).  One thing that would help
would be forcing all entries to meet a specific format which would not
need adjustment by proposal-bot.  But even that change would require a
near-complete rewrite of the assets file, so I don't think it would
help in this case.

I'm generally in favor of this approach because it keeps all the
information on the assets in one place (the assets.yaml file) which
makes it easy for humans to read and understand.

An alternate proposed direction is to merge machine-generated
information with the human-generated assets.yaml during the creation
of the JSON file[3] that is used by the website and Horizon plugin.
The start of that work is this script to discover last-modified times
for assets based on git history[4].
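The git-history approach can be sketched roughly like this, assuming the catalog repo is checked out locally; the actual script in [4] may differ in how it maps assets to history.

```python
# Sketch: last-modified timestamp for a file, taken from git history.
import subprocess


def last_modified(path, repo="."):
    """Unix timestamp of the most recent commit touching `path`,
    or None if the path has no history in the repo."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct", "--", path],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    stamp = out.stdout.strip()
    return int(stamp) if stamp else None
```

A merge step would then join these timestamps with the human-edited assets.yaml when generating the JSON for the website, which is the "glued together" concern raised below.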

While I think the approach of merging machine-generated and
human-generated files could work, it feels a lot like creating a
relational database out of yaml files glued together with a bash
script.  If it works though, maybe it's the best short term approach?

Ultimately my goal is to make sure the assets in the catalog are kept
up to date without introducing a great deal of administrative overhead
or obfuscating how the display version of the catalog is created.  How
are other projects handling concerns like this?  Would love to hear
feedback on how you've seen something like this handled - thanks!

[1]: https://review.openstack.org/#/c/264978/
[2]: https://review.openstack.org/#/c/266218/
[3]: https://apps.openstack.org/api/v1/assets
[4]: https://review.openstack.org/#/c/267087/

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Team meeting on Monday at 2100UTC

2016-01-15 Thread Armando M.
Hi folks,

A reminder of next week meeting. Let's see if we manage to have the first
one of the year!

Word of notice: Monday is a public holiday in the US.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-15 Thread Heck, Joseph
They’re definitely overlapping, but RackHD wasn’t created, and isn’t
meant, to be specific to OpenStack, but as a more general purpose need for
low level automation of hardware. Just like there’s Cobbler, Razor, and
Hanlon out there - RackHD isn’t aiming to be OpenStack, just agnostic, and
with some different features and functionality than what Ironic is trying
to do.

That said, we do think they could work well together, which was the point
of putting together this effort and submitting it to the community for
potential inclusion within OpenStack, in my mind in the same fashion that
you have a Cisco or Juniper driver set of Neutron (yes, I still think of
it as Quantum). I suggested that Andre reach out to you guys to see how
best to accommodate that.

We’ve started the process to set it up for incubation - is that still the
best route to take? Or, more specifically, to the general community as well
as to Ironic folks - in what way can we provide the means to make this
most available to the OpenStack Community at large?

I haven’t been following the community and the norms closely for a couple
years - Stackforge appears to have dissipated in favor of projects in
incubation to share things like this. Is that accurate?

-joe

On 1/15/16, 10:15 AM, "Jim Rollenhagen"  wrote:
>On Wed, Jan 13, 2016 at 09:56:57PM +, Heck, Joseph wrote:
>> Hey Jay! (yeah, I’m here and lurking in the corners, albeit with a
>> different email at the moment)
>> 
>> Yep - RackHD was created by a company that was acquired by EMC to attack
>> the lowest-level of hardware automation. EMC was interesting in pushing
>> that into Open Source, and surprisingly I was completely game for that
>> project :-) There’s all sorts of PR around that project that I won’t
>> bother replaying here, but if anyone’s interested, I’d be happy to share
>> more details.
>> 
>> There was immediately interest in how this could work with OpenStack,
>>and
>> as a plugin/driver to Ironic was the obvious play. We took a couple
>> different options of possible attacks, and decided to leverage something
>> that would both show off the underlying hardware introspection which
>> wasn’t obviously visible (or arguably perhaps relevant) from the Ironic
>> style APIs (the horizon plugin) as well as be leverage by Ironic to do
>> hardware provisioning using those APIs.
>> 
>> Andre (who was key in doing this effort inside EMC) was interested in
>> helping manage it and is bringing it here to introduce folks to the fact
>> that we’ve done this work, and that it will be submitted it into
>> incubation with OpenStack. So yep - we totally want to contribute it to
>> the Ironic efforts. Andre and Jim are coordinating on that effort (hi
>> jroll, nice to meet you) and it was Jim that suggested that Andre let
>>the
>> community know here that we’ve started this effort.
>
>Hi. :)
>
>So, to be clear, I haven't been working with Andre on this, except to
>help him figure out how to create an OpenStack project, and to suggest
>he email this list.
>
>From what I know, the things RackHD/shovel offer that Ironic (Inspector)
>doesn't have is additional SEL monitoring, as well as the capability to
>register a second ironic node as a failover for another node (I haven't
>investigated how this actually works).
>
>I do share Jay's concern - why are these separate projects, rather than
>contributing to ironic (inspector) itself? Why would a user want to use
>both ironic *and* RackHD?
>
>From the RackHD docs:
>"RackHD is focused on being the lowest level of automation that
>interrogates agnostic hardware and provisions machines with operating
>systems."
>
>And the Ironic mission statement:
>"To produce an OpenStack service and associated libraries capable of
>managing and provisioning physical machines, and to do this in a
>security-aware and fault-tolerant manner."
>
>So, I'm not sure I see much difference in the goals, which makes me
>wonder if ironic and RackHD are truly complementary (as shovel implies)
>or if they are actually aiming to do the same thing.
>
>I'd love to see RackHD folks contributing to Ironic. Would it possible
>for EMC to work on contributing RackHD features that Ironic lacks into
>Ironic, rather than building a bridge between the two?
>
>// jim
>
>> 
>> Anyway, I’m lurking here again - but Andre is doing to real lifting :-)
>> 
>> -joe
>> 
>> On 1/13/16, 1:22 PM, "Mooney, Sean K"  wrote:
>> 
>> >
>> >
>> >> -Original Message-
>> >> From: Jay Pipes [mailto:jaypi...@gmail.com]
>> >> Sent: Wednesday, January 13, 2016 8:53 PM
>> >> To: openstack-dev@lists.openstack.org
>> >> Subject: Re: [openstack-dev] Shovel (RackHD/OpenStack)
>> >> 
>> >> On 01/13/2016 03:28 PM, Keedy, Andre wrote:
>> >> > Hi All, I'm pleased to announce a new application called 'Shovel
>>'that
>> >> > is now available in a public repository on GitHub
>> >> > (https://github.com/keedya/Shovel).  Shovel is a server with a set
>>of
>> >> > APIs 

Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-15 Thread Jeremy Stanley
On 2016-01-15 18:44:42 + (+), Heck, Joseph wrote:
[...]
> I haven’t been following the community and the norms closely for a couple
> years - Stackforge appears to have dissipated in favor of projects in
> incubation to share things like this. Is that accurate?

The idea of "incubating" before becoming an official project
basically went away with the advent of the "Big Tent"[1]. Also the
_term_ "StackForge" isn't dead (yet[2] anyway), but unofficial^WStackForge
repos now just share the same Git repository namespace[3] with official
ones for ease of management and to
reduce the amount of disruptive renaming we used to have to
accommodate moving between namespaces.

[1] 
http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html
[2] https://review.openstack.org/265352
[3] 
http://governance.openstack.org/resolutions/20150615-stackforge-retirement.html
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] nova-network removal

2016-01-15 Thread Sheena Gregson
Adrian – can someone from the PI team please confirm what testing was
performed?



*From:* Roman Alekseenkov [mailto:ralekseen...@mirantis.com]
*Sent:* Friday, January 15, 2016 11:30 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel] nova-network removal



I agree with Sheena. Sounds like removing support for nova-network would be
the best option, even though it's late.



However, I'd like us to think about the impact on vCenter integration.
vCenter+nova-network was fully supported before. Since we are now
recommending DVS or NSX backends, I'd like the team to explicitly confirm
that those configurations have been tested.



Thanks,

Roman



On Fri, Jan 15, 2016 at 6:43 AM, Sheena Gregson 
wrote:

Although we are very close to HCF, I see no option but to continue removing
nova-network as I understand it is not currently functional or well-tested
for the Mitaka release.  We must either remove it or test it, and we want
to remove it anyway so that seems like the better path.



*Mike*, what do you think?



*From:* Roman Prykhodchenko [mailto:m...@romcheg.me]
*Sent:* Friday, January 15, 2016 8:04 AM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [Fuel] nova-network removal



I’d like to add that nova-network support was removed from
python-fuelclient in 8.0.



14 січ. 2016 р. о 17:54 Vitaly Kramskikh 
написав(ла):



Folks,

We have a request on review which prohibits creating new envs with
nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
from HCF, and I think this is too late for such a change. What do you
think? Should we proceed and remove nova-network support in 8.0, which is
deprecated since 7.0?


-- 

Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from the core reviewer team

2016-01-15 Thread Harm Weites
Hi guys,

As Steven noted, activity from my side has dropped significantly, and with +2 
comes a certain responsibility of at the very least keeping track of the 
codebase. Various reasons keep me from even doing that so this seems the 
logical outcome.

Thanks for your support, it’s been a great ride :)

see you in #kolla!

> Op 15 jan. 2016, om 20:10 heeft Steven Dake (stdake)  het 
> volgende geschreven:
> 
> I counted 6 votes in favor of removal.  We have 10 people on our core team.  
> A majority has been met and I have removed Harm from the core reviewer team.
> 
> Harm,
> 
> Thanks again for your helpful reviews and remember, you're always welcome 
> back in the future if your availability changes.
> 
> For the record, the core reviewers that voted for removal were:
> Steven Dake
> Jeff Peeler
> Paul Bourke
> Michal Jastrzebski
> Ryan Hallisey
> Michal Rostecki
> 
> Regards,
> -steve
> 
> 
> From: Steven Dake >
> Reply-To: openstack-dev  >
> Date: Thursday, January 14, 2016 at 5:12 PM
> To: openstack-dev  >
> Subject: [openstack-dev] [kolla] Please vote -> Removal of Harm Weites from 
> the core reviewer team
> 
> Hi fellow core reviewers,
> 
> Harm joined Kolla early on with great enthusiasm and did a bang-up job for 
> several months working on Kolla.  We voted unanimously to add him to the core 
> team.  Over the last 6 months Harm hasn't really made much contribution to 
> Kolla.  I have spoken to him about it in the past, and he indicated his work 
> and other activities keep him from being able to execute the full job of a 
> core reviewer and nothing environmental is changing in the near term that 
> would improve things.
> 
> I faced a similar work/life balance problem when working on Magnum as a core 
> reviewer and also serving as PTL for Kolla.  My answer there was to step down 
> from the Magnum core reviewer team [1] because Kolla needed a PTL more than 
> Magnum needed a core reviewer.  I would strongly prefer if folks don't have 
> the time available to serve as a Kolla core reviewer, to step down as was 
> done in the above example.  Folks that follow this path will always be 
> welcome back as a core reviewer in the future once they become familiar with 
> the code base, people, and the project.
> 
> The other alternative to stepping down is for the core reviewer team to vote 
> to remove an individual from the core review team if that is deemed 
> necessary.  For future reference, if you as a core reviewer have concerns 
> about a fellow core reviewer's performance, please contact the current PTL 
> who will discuss the issue with you.
> 
> I propose removing Harm from the core review team.  Please vote:
> 
> +1 = remove Harm from the core review team
> -1 = don't remove Harm from the core review team
> 
> Note folks that are voted off the core review team are always welcome to 
> rejoin the core team in the future once they become familiar with the code 
> base, people, and the project.  Harm is a great guy, and I hope in the future 
> he has more time available to rejoin the Kolla core review team assuming this 
> vote passes with simple majority.
> 
> It is important to explain why, for some folks that may be new to a core 
> reviewer role (which many of our core reviewers are), why a core reviewer 
> should have their +2/-2 voting rights removed when they become inactive or 
> their activity drops below an acceptable threshold for extended or permanent 
> periods.  This hasn't happened in Harm's case, but it is possible that a core 
> reviewer could approve a patch that is incorrect because they lack sufficient 
> context with the code base.  Our core reviewers are the most important part 
> of ensuring quality software.  If the individual has lost context with the 
> code base, their voting may be suspect, and more importantly the other core 
> reviewers may not trust the individual's votes.  Trust is the cornerstone of 
> a software review process, so we need to maximize trust on a technical level 
> between our core team members.  That is why maintaining context with the code 
> base is critical and why I am proposing a vote to remove Harm from the core 
> reviewer team.
> 
> On a final note, folks should always know, joining the core review team is 
> never "permanent".  Folks are free to move on if their interests take them 
> into other areas or their availability becomes limited.  Core Reviewers can 
> also be removed by majority vote.  If you are concerned with any core 
> reviewer's performance, please contact the current PTL to first work on 
> improving it, or alternatively to initiate a core reviewer removal voting 
> process.
> 
> On a more personal note, I want to personally thank Harm for his many and 
> significant contributions to 

Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-15 Thread Michael Still
We can give him elocution classes at the mid-cycle. It's like My Fair Lady,
but nerdier.

Michael
On 16 Jan 2016 4:11 AM, "Dan Smith"  wrote:

> > I'm formally proposing that the nova-stable-maint team [1] adds Tony
> > Breeds to the core team.
>
> My major complaint with Tony is that he talks funny. If he's willing to
> work on fixing that, I'm +1.
>
> :-P
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-15 Thread Heck, Joseph
Yep! Thanks Jim!

___
From: Jim Rollenhagen >
Sent: Friday, January 15, 2016 5:48 PM
Subject: Re: [openstack-dev] Shovel (RackHD/OpenStack)
To: OpenStack Development Mailing List (not for usage questions) 
>


On Fri, Jan 15, 2016 at 06:44:42PM +, Heck, Joseph wrote:
> They’re definitely overlapping, but RackHD wasn’t created, and isn’t
> meant, to be specific to OpenStack, but as a more general purpose need for
> low level automation of hardware. Just like there’s Cobbler, Razor, and
> Hanlon out there - RackHD isn’t aiming to be OpenStack, just agnostic, and
> with some different features and functionality than what Ironic is trying
> to do.

Right, I agree. I just don't see why a deployer would want to stand up
ironic (and shovel) if they already had RackHD running, or vice versa.
There's a pretty small feature gap, and it seems like features Ironic
would like to have but doesn't yet.

> That said, we do think they could work well together, which was the point
> of putting together this effort and submitting it to the community for
> potential inclusion within OpenStack, in my mind in the same fashion that
> you have a Cisco or Juniper driver set of Neutron (yes, I still think of
> it as Quantum). I suggested that Andre reach out to you guys to see how
> best to accommodate that.
>
> We’ve started the process to set it up for incubation - is that still the
> best route to take? Or more specifically to the general community as well
> to to Ironic folks - in what way can we provide the means to make this
> most available to the OpenStack Community at large?
>
> I haven’t been following the community and the norms closely for a couple
> years - Stackforge appears to have dissipated in favor of projects in
> incubation to share things like this. Is that accurate?

Right, like Jeremy said, you'd now just create an unofficial OpenStack
project, as described here:
http://docs.openstack.org/infra/manual/creators.html

Then you'd be able to use the OpenStack infrastructure, like CI systems
and Launchpad.

The governance step is the part that makes it "official", and you could
skip that for now. Later, if we do find that this is something that
should be managed by the Ironic team, we'd add it to the Ironic project
in the governance repo, making it an official OpenStack project. Note
that you could also start your own project team (similar to the ironic
project team) with its own mission statement and set of code
repositories. This is roughly the equivalent of the old "incubation"
thing OpenStack used to do.

Does that help?

// jim

>
> -joe
>
> On 1/15/16, 10:15 AM, "Jim Rollenhagen" 
> > wrote:
> >On Wed, Jan 13, 2016 at 09:56:57PM +, Heck, Joseph wrote:
> >> Hey Jay! (yeah, I’m here and lurking in the corners, albeit with a
> >> different email at the moment)
> >>
> >> Yep - RackHD was created by a company that was acquired by EMC to attack
> >> the lowest-level of hardware automation. EMC was interested in pushing
> >> that into Open Source, and surprisingly I was completely game for that
> >> project :-) There’s all sorts of PR around that project that I won’t
> >> bother replaying here, but if anyone’s interested, I’d be happy to share
> >> more details.
> >>
> >> There was immediately interest in how this could work with OpenStack,
> >>and
> >> as a plugin/driver to Ironic was the obvious play. We took a couple
> >> different options of possible attacks, and decided to leverage something
> >> that would both show off the underlying hardware introspection which
> >> wasn’t obviously visible (or arguably perhaps relevant) from the Ironic
> >> style APIs (the horizon plugin) as well as be leverage by Ironic to do
> >> hardware provisioning using those APIs.
> >>
> >> Andre (who was key in doing this effort inside EMC) was interested in
> >> helping manage it and is bringing it here to introduce folks to the fact
> >> that we’ve done this work, and that it will be submitted into
> >> incubation with OpenStack. So yep - we totally want to contribute it to
> >> the Ironic efforts. Andre and Jim are coordinating on that effort (hi
> >> jroll, nice to meet you) and it was Jim that suggested that Andre let
> >>the
> >> community know here that we’ve started this effort.
> >
> >Hi. :)
> >
> >So, to be clear, I haven't been working with Andre on this, except to
> >help him figure out how to create an OpenStack project, and to suggest
> >he email this list.
> >
> >From what I know, the things RackHD/shovel offer that Ironic (Inspector)
> >doesn't have is additional SEL monitoring, as well as the capability to
> >register a second ironic node as a failover for another node (I haven't
> >investigated how this actually works).
> >
> >I do share Jay's concern - why are these separate projects, rather 

Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Hongbin Lu
A reason is that the container abstraction brings containers into OpenStack: 
Keystone for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu 
> wrote:
Disagree.

If the container managing part is removed, Magnum is just a COE deployment 
tool. This is really a scope-mismatch IMO. The middle ground I can see is to 
have a flag that allows operators to turn off the container managing part. If 
it is turned off, COEs are not managed by Magnum and requests sent to the 
/container endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral 
[mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE and
  *   Would create design & implementation overhead and debt that could bite us 
in the long run seeing how all COE’s operate & are based off various different 
paradigms in terms of describing & managing containers, and this divergence 
will only continue to grow with time.
  *   Not to mention, the recreation of functionality around managing 
containers in Magnum seems redundant in nature as this is the very reason to 
want to use a COE in the first place – because it’s a more suited tool for the 
task
If there is low-hanging fruit in terms of common functionality across all 
COE’s, then those generic capabilities could be abstracted and integrated into 
Magnum, but these have to be carefully examined beforehand to ensure true 
parity exists for the capability across all COE’s.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace

From: Hongbin Lu >
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container 
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a 
native API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the 
following:
1.   Generate a uuid (if not provided).
2.   Call Docker Swarm API to create a container, with its hostname equal 
to the generated uuid.
3.   Persist container to DB with the generated uuid.

If users perform an operation on an existing container, they must provide the 
uuid (or the name) of the container (if the name is provided, it will be used to 
look up the uuid). Magnum will do the following:
1.   Call Docker Swarm API to list all containers.
2.   Find the container whose hostname is equal to the provided uuid, 
record its “docker_id”, which is the ID assigned by the native tool.
3.   Call Docker Swarm API with “docker_id” to perform the operation.

Magnum doesn’t assume all operations to be routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.
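For illustration, the hostname-based mapping described above could be sketched
roughly like this. This is a hypothetical toy sketch of the scheme, not
Magnum's actual code; FakeSwarm, magnum_create, and magnum_lookup are
invented stand-ins for the Swarm client and Magnum's conductor logic:

```python
import uuid


class FakeSwarm:
    """Hypothetical stand-in for a Docker Swarm API client."""

    def __init__(self):
        self.containers = []
        self.counter = 0

    def create(self, hostname):
        # Swarm (not Magnum) assigns the native docker ID.
        self.counter += 1
        docker_id = "d%d" % self.counter
        self.containers.append({"Id": docker_id, "Hostname": hostname})
        return docker_id

    def list_all(self):
        return list(self.containers)


def magnum_create(swarm, db, name=None, container_uuid=None):
    # 1. Generate a uuid if not provided.
    container_uuid = container_uuid or str(uuid.uuid4())
    # 2. Create the container with its hostname set to the Magnum uuid.
    swarm.create(hostname=container_uuid)
    # 3. Persist the container in Magnum's DB under that uuid.
    db[container_uuid] = {"name": name}
    return container_uuid


def magnum_lookup(swarm, container_uuid):
    # 1. List all containers; 2. match hostname against the Magnum uuid
    # to recover the docker_id assigned by the native tool.
    for container in swarm.list_all():
        if container["Hostname"] == container_uuid:
            return container["Id"]
    raise LookupError("no container with hostname %s" % container_uuid)
```

A container created through the native API would never get a matching
hostname entry, which is why it stays invisible to Magnum's endpoints.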

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?

> In Rest API, you can set the “uuid” field in the json request body (this 

Re: [openstack-dev] Long description of oslo.privsep

2016-01-15 Thread Davanum Srinivas
Haha, needs work :)

-- Dims

On Fri, Jan 15, 2016 at 10:53 PM, Thomas Goirand  wrote:
> On 01/16/2016 08:35 AM, Davanum Srinivas wrote:
>> Zigo,
>>
>> Seriously, chill please.
>
> I was trying to write it funnily. Sorry if it wasn't obvious! :)
>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-15 Thread Heck, Joseph
Thanks for the links Jeremy! I'm still reading through what exactly "bigtent" 
means, not sure I grok the placement for littler/ancillary things like this 
effort, but the links are hugely helpful!

- joe
_
From: Jeremy Stanley >
Sent: Friday, January 15, 2016 11:44 AM
Subject: Re: [openstack-dev] Shovel (RackHD/OpenStack)
To: OpenStack Development Mailing List (not for usage questions) 
>


On 2016-01-15 18:44:42 + (+), Heck, Joseph wrote:
[...]
> I haven’t been following the community and the norms closely for a couple
> years - Stackforge appears to have dissipated in favor of projects in
> incubation to share things like this. Is that accurate?

The idea of "incubating" before becoming an official project
basically went away with the advent of the "Big Tent"[1]. Also the
_term_ "StackForge" isn't dead (yet[2] anyway), but we have
unofficial^WStackForge repos just share the same Git repository
namespace[3] with official ones now for ease of management and to
reduce the amount of disruptive renaming we used to have to
accommodate moving between namespaces.

[1] 
http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html
[2] https://review.openstack.org/265352
[3] 
http://governance.openstack.org/resolutions/20150615-stackforge-retirement.html
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Thomas Goirand
On 01/15/2016 11:38 PM, Daniel P. Berrange wrote:
> On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
>> This isn't the first time I'm calling for it. Let's hope this time, I'll
>> be heard.
>>
>> Randomly, contributors put their company names into source code. When
>> they do, then effectively, this tells that a given source file copyright
>> holder is whatever is claimed, even though someone from another company
>> may have patched it.
>>
>> As a result, we have a huge mess. It's impossible for me, as a package
>> maintainer, to accurately set the copyright holder names in the
>> debian/copyright file, which is required by the Debian FTP masters.
> 
> I don't think OpenStack is in a different situation to the vast
> majority of open source projects I've worked with or seen. Except
> for those projects requiring copyright assignment to a single
> entity, it is normal for source files to contain an unreliable
> random splattering of Copyright notices. This hasn't seemed to
> create a blocking problem for their maintenance in Debian. Looking
> at the debian/copyright files I see most of them have just done a
> grep for the 'Copyright' statements & included as is - IOW just
> ignored the fact that this is essentially worthless info and included
> it regardless.

Correct, that's how I do things. And that's what I would like to fix.
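As a rough illustration of the grep-for-'Copyright' practice Daniel describes,
a packager's collection step might look like the sketch below. This is a
hypothetical example, not any project's actual tooling; the helper name and
the .py-only filter are assumptions for brevity:

```python
import os
import re

# Match lines like "# Copyright 2016 Foo Corp" or bare "Copyright ..." lines.
COPYRIGHT_RE = re.compile(r"^\s*#?\s*(Copyright\b.*)$")


def collect_copyright_lines(root):
    """Walk a source tree and gather the unique 'Copyright ...' header lines,
    i.e. the unreliable splattering that ends up in debian/copyright as-is."""
    holders = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            with open(os.path.join(dirpath, name), errors="replace") as f:
                for line in f:
                    match = COPYRIGHT_RE.match(line)
                    if match:
                        holders.add(match.group(1).strip())
    return sorted(holders)
```

The output is only as good as the headers themselves, which is exactly the
problem being discussed: the collected list says nothing about who actually
holds copyright on later patches to those files.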

>> I see 2 ways forward:
>> 1/ Require everyone to give-up copyright holding, and give it to the
>> OpenStack Foundation.
>> 2/ Maintain a copyright-holder file in each project.
> 
> 3/ Do nothing, just populate debian/copyright with the random
>set of 'Copyright' lines that happen to be the source files,
>as appears to be common practice across many debian packages
> 
>eg the kernel package
> 
> 
> http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.16.7-ckt11-1+deb8u3_copyright
> 
> "Copyright: 1991-2012 Linus Torvalds and many others"
> 
>if it's good enough for the Debian kernel package, it should be
>good enough for openstack packages too IMHO.

I've just asked this very point with the same example to the FTP
masters. Let's see what they say...

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Arkady_Kanevsky
Either 1 or 3.
2 does not solve anything.

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com]
Sent: Friday, January 15, 2016 10:01 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Proposal: copyright-holders file in each 
project, or copyright holding forced to the OpenStack Foundation

On 01/15/2016 10:38 AM, Daniel P. Berrange wrote:
> On Fri, Jan 15, 2016 at 08:48:21PM +0800, Thomas Goirand wrote:
>> This isn't the first time I'm calling for it. Let's hope this time,
>> I'll be heard.
>>
>> Randomly, contributors put their company names into source code. When
>> they do, then effectively, this tells that a given source file
>> copyright holder is whatever is claimed, even though someone from
>> another company may have patched it.
>>
>> As a result, we have a huge mess. It's impossible for me, as a
>> package maintainer, to accurately set the copyright holder names in
>> the debian/copyright file, which is required by the Debian FTP masters.
>
> I don't think OpenStack is in a different situation to the vast
> majority of open source projects I've worked with or seen. Except for
> those projects requiring copyright assignment to a single entity, it
> is normal for source files to contain an unreliable random splattering
> of Copyright notices. This hasn't seemed to create a blocking problem
> for their maintenance in Debian. Looking at the debian/copyright
> files I see most of them have just done a grep for the 'Copyright'
> statements & included as is - IOW just ignored the fact that this is
> essentially worthless info and included it regardless.

Agree. debian/copyright should be a best effort - but it can only be as good as 
the input data available to the packager. Try getting an accurate 
debian/copyright file for the MySQL source tree at some point.
(and good luck)

>> I see 2 ways forward:
>> 1/ Require everyone to give-up copyright holding, and give it to the
>> OpenStack Foundation.
>> 2/ Maintain a copyright-holder file in each project.

> 3/ Do nothing, just populate debian/copyright with the random
> set of 'Copyright' lines that happen to be the source files,
> as appears to be common practice across many debian packages
>
> eg the kernel package
>
>
> http://metadata.ftp-master.debian.org/changelogs/main/l/linux/linux_3.
> 16.7-ckt11-1+deb8u3_copyright
>
> "Copyright: 1991-2012 Linus Torvalds and many others"
>
> if it's good enough for the Debian kernel package, it should be
> good enough for openstack packages too IMHO.

I vote for 3


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Long description of oslo.privsep

2016-01-15 Thread Davanum Srinivas
Zigo,

Seriously, chill please. The library is nowhere near ready. It's not in
global requirements either at the moment. The code has quite a bit of time
to go before we can switch projects over from oslo.rootwrap to
oslo.privsep.

We have checklists that we go over before we let folks use it.
Right now it's on PyPI because we needed to kick the tires.

Thanks,
Dims

On Fri, Jan 15, 2016 at 7:00 PM, Joshua Harlow  wrote:
> Hopefully the following helps out here.
>
> https://review.openstack.org/#/c/268377/
>
> Gus or others hopefully can review that (and correct me if it's not the a
> good long description).
>
> -Josh
>
> Thomas Goirand wrote:
>>
>> Hi,
>>
>> Luckily I have written, in the cookiecutter repo:
>>
>> Please feel here a long description which must be at least 3 lines
>> wrapped on 80 cols, so that distribution package maintainers can use it
>> in their packages. Note that this is a hard requirement.
>>
>> Because without it, we could see stuff like this:
>> https://pypi.python.org/pypi/oslo.privsep
>>
>> Seriously, what shall I put as a long description for the package? Shall
>> I read the code to guess?
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Kyle Kelley
What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu  wrote:

> Disagree.
>
>
>
> If the container managing part is removed, Magnum is just a COE deployment
> tool. This is really a scope-mismatch IMO. The middle ground I can see is
> to have a flag that allows operators to turn off the container managing
> part. If it is turned off, COEs are not managed by Magnum and requests sent
> to the /container endpoint will return a reasonable error code. Thoughts?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Mike Metral [mailto:mike.met...@rackspace.com]
> *Sent:* January-15-16 6:24 PM
> *To:* openstack-dev@lists.openstack.org
>
> *Subject:* Re: [openstack-dev] [magnum] Nesting /containers resource
> under /bays
>
>
>
> I too believe that the /containers endpoint is obstructive to the overall
> goal of Magnum.
>
>
>
> IMO, Magnum’s scope should *only* be concerned with:
>
>1. Provisioning the underlying infrastructure required by the
>Container Orchestration Engine (COE) and
>2. Instantiating the COE itself on top of said infrastructure from
>step #1.
>
> Anything further regarding Magnum interfacing or interacting with
> containers starts to get into a gray area that could easily evolve into:
>
>- Potential race conditions between Magnum and the designated COE and
>- Would create design & implementation overhead and debt that could
>bite us in the long run seeing how all COE’s operate & are based off
>various different paradigms in terms of describing & managing containers,
>and this divergence will only continue to grow with time.
>- Not to mention, the recreation of functionality around managing
>containers in Magnum seems redundant in nature as this is the very reason
>to want to use a COE in the first place – because it’s a more suited tool
>for the task
>
> If there is low-hanging fruit in terms of common functionality across
> *all* COE’s, then those generic capabilities *could* be abstracted and
> integrated into Magnum, but these have to be carefully examined beforehand
> to ensure true parity exists for the capability across all COE’s.
>
>
>
> However, I still worry that going down this route toes the line that
> Magnum should and could be a part of the managing container story to some
> degree – which again should be the sole responsibility of the COE, not
> Magnum.
>
>
>
> I’m in favor of doing away with the /containers endpoint – continuing with
> it just looks like a snowball of scope-mismatch and management issues just
> waiting to happen.
>
>
>
> Mike Metral
>
> Product Architect – Private Cloud R - Rackspace
> --
>
> *From:* Hongbin Lu 
> *Sent:* Thursday, January 14, 2016 1:59 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Nesting /containers resource
> under /bays
>
>
>
> In short, the container IDs assigned by Magnum are independent of the
> container IDs assigned by the Docker daemon. Magnum does the ID mapping before
> making a native API call. In particular, here is how it works.
>
>
>
> If users create a container through the Magnum endpoint, Magnum will do the
> following:
>
> 1.   Generate a uuid (if not provided).
>
> 2.   Call Docker Swarm API to create a container, with its hostname
> equal to the generated uuid.
>
> 3.   Persist container to DB with the generated uuid.
>
>
>
> If users perform an operation on an existing container, they must provide
> the uuid (or the name) of the container (if the name is provided, it will be
> used to look up the uuid). Magnum will do the following:
>
> 1.   Call Docker Swarm API to list all containers.
>
> 2.   Find the container whose hostname is equal to the provided uuid,
> record its “docker_id”, which is the ID assigned by the native tool.
>
> 3.   Call Docker Swarm API with “docker_id” to perform the operation.
>
>
>
> Magnum doesn’t assume all operations to be routed through Magnum
> endpoints. Alternatively, users can directly call the native APIs. In this
> case, the created resources are not managed by Magnum and won’t be
> accessible through Magnum’s endpoints.
>
>
>
> Hope it is clear.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Kyle Kelley [mailto:kyle.kel...@rackspace.com
> ]
> *Sent:* January-14-16 11:39 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Nesting /containers resource
> under /bays
>
>
>
> This presumes a model where Magnum is in complete control of the IDs of
> individual containers. How does this work with the Docker daemon?
>
>
>
> > In Rest API, you can set the “uuid” field in the json request body
> (this is not supported in CLI, but it is an easy add).​
>
>
>
> In the Rest API for Magnum or Docker? Has Magnum completely broken away
> from exposing native tooling - are all 

Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Thomas Goirand
On 01/15/2016 11:26 PM, Jeremy Stanley wrote:
> On 2016-01-15 23:09:34 +0800 (+0800), Thomas Goirand wrote:
>> On 01/15/2016 09:57 PM, Jeremy Stanley wrote:
> [...]
>>> resulting in the summary at
>>> https://wiki.openstack.org/wiki/LegalIssuesFAQ#Copyright_Headers for
>>> those who choose to learn from history rather than repeating it.
>>
>> Well, this wiki entry doesn't even have a single line about copyright
>> holding, it only tells about licensing and how/when to put a license
>> header in source code. Or did I miss it when reading too fast?
> 
> What? That entire section I linked is _only_ about copyright
> headers.

holder != header


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-15 Thread Brandon Logan
I filed a bug [1] a while ago that subnet_id should be an optional
parameter for member creation.  Currently it is required.  Review [2]
makes it optional.

The original thinking was that if the load balancer is ever connected to
that same subnet, be it by another member on that subnet or the vip on
that subnet, then the user does not need to specify the subnet for a new
member if that new member is on one of those subnets.

At the midcycle we discussed it and we had an informal agreement that it
required too many assumptions on the part of the end user, neutron
lbaas, and the driver.

If anyone wants to voice their opinion on this matter, do so on the bug
report, review, or in response to this thread.  Otherwise, it'll
probably be abandoned at some point.

Thanks,
Brandon

[1] https://bugs.launchpad.net/neutron/+bug/1426248
[2] https://review.openstack.org/#/c/267935/


Re: [openstack-dev] Long description of oslo.privsep

2016-01-15 Thread Joshua Harlow

Hopefully the following helps out here.

https://review.openstack.org/#/c/268377/

Gus or others can hopefully review that (and correct me if it's not a
good long description).


-Josh

Thomas Goirand wrote:

Hi,

Luckily I have written, in the cookiecutter repo:

Please fill here a long description which must be at least 3 lines
wrapped on 80 cols, so that distribution package maintainers can use it
in their packages. Note that this is a hard requirement.

Because without it, we could see stuff like this:
https://pypi.python.org/pypi/oslo.privsep

Seriously, what shall I put as a long description for the package? Shall
I read the code to guess?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Hongbin Lu
Disagree.

If the container managing part is removed, Magnum is just a COE deployment 
tool. This is really a scope-mismatch IMO. The middle ground I can see is to 
have a flag that allows operators to turn off the container managing part. If 
it is turned off, COEs are not managed by Magnum and requests sent to the 
/container endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral [mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long 
run, seeing how all COE's operate on & are based off various different 
paradigms for describing & managing containers, a divergence that will only 
continue to grow with time.
  *   Not to mention, recreating functionality around managing containers in 
Magnum seems redundant, as this is the very reason to want to use a COE in the 
first place – because it's a more suited tool for the task
If there is low-hanging fruit in terms of common functionality across all 
COE’s, then those generic capabilities could be abstracted and integrated into 
Magnum, but these have to be carefully examined beforehand to ensure true 
parity exists for the capability across all COE’s.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace

From: Hongbin Lu >
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container 
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a 
native API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the 
following:
1.   Generate a uuid (if not provided).
2.   Call Docker Swarm API to create a container, with its hostname equal 
to the generated uuid.
3.   Persist container to DB with the generated uuid.

If users perform an operation on an existing container, they must provide the 
uuid (or the name) of the container (if a name is provided, it will be used to 
look up the uuid). Magnum will do the following:
1.   Call Docker Swarm API to list all containers.
2.   Find the container whose hostname is equal to the provided uuid, 
record its “docker_id”, the ID assigned by the native tool.
3.   Call Docker Swarm API with “docker_id” to perform the operation.

Magnum doesn’t assume that all operations are routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?

> In Rest API, you can set the “uuid” field in the json request body (this is 
> not supported in CLI, but it is an easy add).​

In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?

> For the idea of nesting the container resource, I prefer not to do that if there 
> are alternatives or it can be worked around. IMO, it sets a limitation that a 
> container must have a bay, which might not be the case in the future. For 
> example, we might add a feature where creating a container will automatically 
> create a bay. If a container must have a bay 

Re: [openstack-dev] [all] Proposal: copyright-holders file in each project, or copyright holding forced to the OpenStack Foundation

2016-01-15 Thread Thomas Goirand
On 01/15/2016 11:28 PM, Daniel P. Berrange wrote:
> On Fri, Jan 15, 2016 at 12:53:49PM +, Chris Dent wrote:
>> On Fri, 15 Jan 2016, Thomas Goirand wrote:
>>
>>> Whatever we choose, I think we should ban having copyright holding text
>>> within our source code. While licensing is a good idea, as it is
>>> accurate, the copyright holding information isn't, and it's just misleading.
>>
>> I think we should not add new copyright notifications in files.
>>
>> I'd also be happy to see all the existing ones removed, but that may
>> be a bigger problem.
> 
> Only the copyright holder who added the notice is permitted to
> remove it. ie you can't unilaterally remove Copyright notices
> added by other copyright holders. See LICENSE term (4)(c)
> 
> While you could undertake an exercise to get agreement from
> every copyright holder to remove their notices, it is honestly
> not worth the work IMHO.

Though we could ask copyright holders to declare giving it to the
foundation.

>>> If I was the only person to choose, I'd say let's go for 1/, but
>>> probably managers of every company wont agree.
>>
>> I think option one is correct.
> 
> Copyright assignment is never the correct answer.

We don't have to force it, we can politely ask... Also, if we have a
top-level file declaring copyright holding by the foundation, and nobody
writes it in individual files, it is as if they agreed.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] Long description of oslo.privsep

2016-01-15 Thread Thomas Goirand
On 01/16/2016 08:35 AM, Davanum Srinivas wrote:
> Zigo,
> 
> Seriously, chill please.

I was trying to write it funnily. Sorry if it wasn't obvious! :)

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] Long description of oslo.privsep

2016-01-15 Thread Michael Still
I have just approved that review, as it moves the ball in the right
direction.

Michael

On Sat, Jan 16, 2016 at 3:04 PM, Davanum Srinivas  wrote:

> Haha, needs work :)
>
> -- Dims
>
> On Fri, Jan 15, 2016 at 10:53 PM, Thomas Goirand  wrote:
> > On 01/16/2016 08:35 AM, Davanum Srinivas wrote:
> >> Zigo,
> >>
> >> Seriously, chill please.
> >
> > I was trying to write it funnily. Sorry if it wasn't obvious! :)
> >
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Mike Metral
I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.

Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long 
run, seeing how all COE's operate on & are based off various different 
paradigms for describing & managing containers, a divergence that will only 
continue to grow with time.
  *   Not to mention, recreating functionality around managing containers in 
Magnum seems redundant, as this is the very reason to want to use a COE in the 
first place – because it's a more suited tool for the task

If there is low-hanging fruit in terms of common functionality across all 
COE’s, then those generic capabilities could be abstracted and integrated into 
Magnum, but these have to be carefully examined beforehand to ensure true 
parity exists for the capability across all COE’s.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace

From: Hongbin Lu >
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container 
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a 
native API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the 
following:
1.   Generate a uuid (if not provided).
2.   Call Docker Swarm API to create a container, with its hostname equal 
to the generated uuid.
3.   Persist container to DB with the generated uuid.

If users perform an operation on an existing container, they must provide the 
uuid (or the name) of the container (if a name is provided, it will be used to 
look up the uuid). Magnum will do the following:
1.   Call Docker Swarm API to list all containers.
2.   Find the container whose hostname is equal to the provided uuid, 
record its “docker_id”, the ID assigned by the native tool.
3.   Call Docker Swarm API with “docker_id” to perform the operation.
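The lookup described in the three steps above can be sketched as follows. This is a minimal illustration only: the shape of the container list and the field names are assumptions for the example, not Magnum's actual data model or the real Docker Swarm response format.

```python
# Hedged sketch of the uuid -> docker_id mapping described above.
# "containers" stands in for the result of a Docker Swarm
# "list all containers" call; field names are illustrative.

def find_docker_id(containers, magnum_uuid):
    """Return the native Docker ID of the container whose hostname
    matches the Magnum-generated uuid, or None if no match exists."""
    for container in containers:
        if container.get("hostname") == magnum_uuid:
            return container.get("docker_id")
    return None

# Example: one container created through Magnum (hostname set to the
# Magnum uuid at creation time) and one created natively.
containers = [
    {"hostname": "8d2c1b0e", "docker_id": "abc123"},
    {"hostname": "native-host", "docker_id": "def456"},
]
print(find_docker_id(containers, "8d2c1b0e"))  # abc123
```

The natively created container is simply never matched, which is consistent with the point below that resources created outside Magnum are not managed by it.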

Magnum doesn’t assume that all operations are routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?



> In Rest API, you can set the “uuid” field in the json request body (this is 
> not supported in CLI, but it is an easy add).​



In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?



> For the idea of nesting the container resource, I prefer not to do that if there 
> are alternatives or it can be worked around. IMO, it sets a limitation that a 
> container must have a bay, which might not be the case in the future. For 
> example, we might add a feature where creating a container will automatically 
> create a bay. If a container must have a bay on creation, such a feature is 
> impossible.



If that's *really* a feature you need and are fully involved in designing for, 
this seems like a case where creating a container via these endpoints would 
create a bay and return the full resource+subresource.



Personally, I think these COE endpoints need to not be in the main spec, to 
reduce the surface area until these are put into further use.








From: Hongbin Lu >
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for 

[openstack-dev] OpenStack Developer Mailing List Digest January 9-15

2016-01-15 Thread Mike Perez
Perma link: 
http://www.openstack.org/blog/2016/01/openstack-developer-mailing-list-digest-20160115/

Success Bot Says

* stevemar: Latest python-neutronclient use keystoneauth, yay!
* devkulkarni: Devstack plugin for Solum finally working as expected.
* dulek: Initial tests show that our rolling upgrades stuff is working fine -
  I’m able to use Mitaka’s Cinder API service with Liberty’s cinder-scheduler
  and cinder volume services.
* Tell us yours via IRC with a message “#success [insert success]”.
* More: https://wiki.openstack.org/wiki/Successes

Cross-Project Specs & API Guidelines

* Add clouds.yaml support specification [1]
* Deprecate individual CLIs in favor of OSC [2]
* Add description of pagination parameters [3]

Release Models To Be Frozen Around Mitaka-2
===
* Deadline: Mitaka-2, January 21st
* Example, your release model is release:independent, and you want to switch to
  cycle-oriented models (e.g. release:cycle-with-intermediary or
  release:cycle-with-milestones). [4]
* To change your project, propose an openstack/governance change in
  reference/projects.yaml file [5].
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-January/083726.html

Vision and Changes For OpenStack API Guides
===
* New tool: fairy-slipper [6]
  - Migrate files from WADL to Swagger.
  - Serve up reference info.
* New build jobs to build API guides from project repos to
  developer.openstack.org
* It was discussed in the last cross-project meeting [7] to answer questions.
* There are a variety of specs [8][9] to go over this work.
* See what’s happening this month [10].
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-January/083670.html

"Upstream Development” Track At the Austin OpenStack Summit
===
* Call for speakers [11] for the OpenStack conference in Austin will have a new
  track targeted towards upstream OpenStack developers.
  - Learn about new development processes
  - Tools that the infrastructure team gives us
  - New OSLO library features (or elsewhere)
  - Best practices
* Probably Monday before the design summit tracks start.
* Have a topic that fits this audience? Submit it! [12]
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-January/083958.html

You Almost Never Need to Change Every Repository

* There have been a lot of patches that tweak the same thing across many, many
  repositories.
* Standardizations are great, but if you’re making the same change to more than
  a few repositories, we should be looking at another way to have that change
  applied.
* If you find yourself making the same change over and over in a lot of
  projects, start a conversation on the dev mailing list first.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-January/084133.html

Release Countdown For Week R-11, Jan 18-22
==
* Focus:
  - Next week is the second milestone for the Mitaka cycle.
  - Major feature work should be making good progress, or be re-evaluated to
    see if it really needs to land this cycle.
* Release Actions:
  - Liaisons should submit tag requests to the openstack/releases repository
for projects following the cycle-with-milestone before the end of the day
on Jan 21.
  - Release liaison responsibility update should be reviewed [13].
* Important Dates:
  - Mitaka 2: January 19-21
  - Deadline for Mitaka 2 tag: Jan 21
  - Release models to be frozen: Jan 21
- Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2016-January/084143.html


[1] - https://review.openstack.org/#/c/236712/
[2] - https://review.openstack.org/#/c/243348/
[3] - https://review.openstack.org/#/c/190743/16
[4] - 
http://governance.openstack.org/reference/tags/index.html#release-management-tags
[5] - 
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
[6] - http://git.openstack.org/cgit/openstack/fairy-slipper/tree/
[7] - 
http://eavesdrop.openstack.org/meetings/crossproject/2016/crossproject.2016-01-12-21.02.log.html#l-34
[8] - 
http://specs.openstack.org/openstack/docs-specs/specs/mitaka/app-guides-mitaka-vision.html
[9] - 
http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html
[10] - 
http://www.openstack.org/blog/2016/01/whats-next-for-application-developer-guides/
[11] - https://www.openstack.org/summit/austin-2016/call-for-speakers/
[12] - https://etherpad.openstack.org/p/austin-upstream-dev-track-ideas
[13] - https://review.openstack.org/#/c/262003/

-- 
Mike Perez


Re: [openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-15 Thread Ihar Hrachyshka

+1. CVE fixes obviously should be granted an exception.

Matt Riedemann  wrote:


We should get this series in for nova in the kilo 2015.1.3 release:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/kilo+topic:bug/1524274

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Horizon] Email as User Name on the Horizon login page

2016-01-15 Thread Itxaka Serrano Garcia


Looks like the form comes from django_openstack_auth:
https://github.com/openstack/django_openstack_auth/blob/master/openstack_auth/forms.py#L53


But to be honest, no idea how that can be overridden through the themes; 
not sure if it's even possible to override anything on that page without 
modifying django_openstack_auth directly :(


Maybe someone else has a better insight on this than me.


* Horrible Hack Incoming, read at your own discretion *

You can override the template here:
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_form_field.html#L51

And change this line:
{{ field.label }}


For this:
{% if field.label == "User Name" and not request.user.is_authenticated %}Email{% else %}{{ field.label }}{% endif %}



Which will check if the label is "User Name" and the user is logged out 
and directly write "Email" as the field label.


I know it's horrible, and if you update Horizon it will be overridden, but 
it probably works for the time being if you really need it ¯\_(ツ)_/¯


* Horrible Hack Finished *




Itxaka




On 01/15/2016 05:13 AM, Adrian Turjak wrote:

I've run into a weird issue with the Liberty release of Horizon.

For our deployment we enforce emails as usernames, and thus for Horizon
we used to have "User Name" on the login page replaced with "Email".
This used to be a straightforward change in the html template file, and
with the introduction of themes we assumed it would be the same. When
one of our designers was migrating our custom CSS and html changes to
the new theme system they missed that change and I at first it was a
silly mistake.

Only on digging through the code myself I found that the "User Name" on
the login screen isn't in the html file at all, nor anywhere else
straightforward. The login page form is built on the fly with javascript
to facilitate different modes of authentication. While a bit annoying, that
didn't seem too bad, and I then assumed it might mean a javascript change;
only, the more I dug, the more I became confused.

Where exactly is the login form defined? And where exactly is the "User
Name" text for the login form set?

I've tried all manner of stuff to change it with no luck and I feel like
I must have missed something obvious.

Cheers,
-Adrian Turjak



Re: [openstack-dev] [cinder] Should we fix XML request issues?

2016-01-15 Thread Michał Dulko
On 01/15/2016 07:14 AM, Jethani, Ravishekar wrote:
> Hi Devs,
>
> I have come across a few 500 response issues while sending the request
> body as XML to the cinder service. For example:
> 
>
> I can see that XML support has been marked as deprecated and will be
> removed in the 'N' release. So is it still worth trying to fix these
> issues during the Mitaka time frame?
>
> Thanks.
> Ravi Jethani

One of the reasons the XML API was deprecated is the fact that it wasn't
getting much CI testing and, as Doug Hellmann once mentioned, "if
something isn't tested then it isn't working".

I'm okay with fixing it (if someone really needs that feature), but we
don't have any means to prevent further regressions, so it may not be
worth it.



Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Eric LEMOINE
>
> In case of MariaDB I see that you can
>
> - do --skip-syslog, then all things go to "error log"
> - set "error log" path by --log-error
>
> So maybe setting stdout/stderr may work here?
>
> I'd suggest checking the similar options in RabbitMQ and other non-OpenStack
> components.
>
> I may help with this because, in my opinion, logging to stdout is the best
> default option for Mesos and Kubernetes - i.e. the Mesos UI shows an
> application's "sandbox", which generally is stdout/stderr. So if someone does
> not want to use Heka/ELK, then having everything in Mesos/Kubernetes would
> probably be the lesser evil compared to trying to get rsyslog to work here.


That makes sense to me Michal. Thanks!
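For MariaDB specifically, the two options quoted above could be combined roughly as follows. This is a hedged sketch only: the `[mysqld_safe]` placement and the stderr fallback behavior are assumptions to verify against your MariaDB version's documentation, not a tested Kolla configuration.

```ini
# Keep MariaDB error output out of syslog so that, when mysqld runs in
# the foreground (as container entrypoints typically do), errors fall
# through to stderr and land in `docker logs`, the Mesos sandbox, or
# `kubectl logs`.
[mysqld_safe]
skip-syslog
# Deliberately leave log-error unset: pointing it at a file with
# log-error=/var/log/mysql/error.log would divert errors away from
# stdout/stderr collection.
```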



[openstack-dev] [docs] Specialty teams wiki page moving to Contributor Guide

2016-01-15 Thread Olena Logvinova
Good Friday to everyone!

This is just a note that the Specialty Teams wiki page has been moved from
the wiki [1] to the Documentation Contributor Guide [2] as of now.

Thanks and happy weekend!

Olena

[1] https://wiki.openstack.org/wiki/Documentation/SpecialityTeams
[2] http://docs.openstack.org/contributor-guide/team-structure.html

-- 
Best regards,
Olena Logvinova,
Technical Writer | Mirantis, Kharkiv | 38, Lenin av., Kharkiv
ologvin...@mirantis.com


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-15 Thread Jens Rosenboom
2016-01-14 17:12 GMT+01:00 Thomas Goirand :
> On 01/14/2016 11:35 PM, Yuriy Taraday wrote:
>>
>>
>> On Thu, Jan 14, 2016 at 5:48 PM Jeremy Stanley > > wrote:
>>
>> On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
>> [...]
>> > Is there any plan to add Python 3.5 to infra?
>>
>> I expect we'll end up with it shortly after Ubuntu 16.04 LTS
>> releases in a few months (does anybody know for sure what its
>> default Python 3 is slated to be?).
>>
>>
>> It's 3.5.1 already in Xenial: http://packages.ubuntu.com/xenial/python3
>
> Though 3.5 isn't the default Py3 yet there. Or is it?

Looks like it is:

ubuntu@jr-xeni1:~$ apt-cache policy python3
python3:
  Installed: 3.5.1-1
  Candidate: 3.5.1-1
  Version table:
 *** 3.5.1-1 500
500 http://nova.clouds.archive.ubuntu.com/ubuntu xenial/main amd64 Packages
100 /var/lib/dpkg/status

On Wily the default is still 3.4.3.



[openstack-dev] [Fuel] How to auto allocate VIPs for roles in different network node groups?

2016-01-15 Thread Aleksandr Didenko
Hi,

We need to come up with some solution for a problem with VIP generation
(auto allocation), see the original bug [0].

The main problem here is: how do we know exactly which IPs to auto allocate
for VIPs when the needed roles are in different nodegroups (i.e. in different
IP networks)?
For example 'public_vip' for 'controller' roles.

Currently we have two possible solutions.

1) Fail early in the pre-deployment check (when the user hits "Deploy
changes") with an error about the inability to auto allocate VIPs for nodes
in different nodegroups (racks). So in order to run a deploy, the user has to
put all roles with the same VIPs in the same nodegroup (for example: all
controllers in the same nodegroup).

Pros:

   - VIPs are always correct: they are from the same network as the nodes
   that are going to use them, thus the user simply can't configure invalid
   VIPs for the cluster and break the deployment

Cons:

   - a hardcoded limitation that is impossible to bypass; it does not allow
   spreading roles with VIPs across multiple racks, even if that is properly
   handled by a Fuel plugin, i.e. made so by design
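The "fail early" check in option 1 could look roughly like the sketch below. The data shapes are illustrative assumptions for the example, not Fuel's actual node or network models.

```python
# Sketch of option 1: every role that owns a VIP must have all of its
# nodes in a single network nodegroup, otherwise the pre-deployment
# check fails. Data shapes are illustrative, not Fuel's real ones.

def check_vip_nodegroups(nodes, vip_roles):
    """nodes: list of {"roles": [...], "nodegroup": ...} dicts.
    Returns a list of error strings; an empty list means the check passes."""
    errors = []
    for role in vip_roles:
        groups = {n["nodegroup"] for n in nodes if role in n["roles"]}
        if len(groups) > 1:
            errors.append(
                "cannot auto allocate VIPs for role '%s': its nodes span "
                "nodegroups %s" % (role, sorted(groups)))
    return errors

nodes = [
    {"roles": ["controller"], "nodegroup": "default"},
    {"roles": ["controller"], "nodegroup": "rack-2"},
    {"roles": ["compute"], "nodegroup": "rack-2"},
]
# 'controller' spans two nodegroups, so this deploy would be blocked.
print(check_vip_nodegroups(nodes, ["controller"]))
```

Moving all controllers into one nodegroup empties the error list and lets the deploy proceed, which is exactly the constraint this option imposes on the user.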


2) Allow moving roles that use VIPs into different nodegroups, auto
allocate VIPs from the "default" nodegroup, and send an alert/notification
to the user that such a configuration may not work and it's up to the user
how to proceed (either fix the config or deploy at his/her own risk).

Pros:

   - relatively simple solution


   - impossible to break VIP serialization, because in the worst case we
   allocate VIPs from the default nodegroup

Cons:

   - the user can deploy an invalid environment that will fail during
   deployment or will not operate properly (for example, when the public_vip
   is not able to migrate to a controller in a different rack)


   - which nodegroup do we choose to allocate VIPs from? The default
   nodegroup? A random pick? In the case of a random pick, troubleshooting
   may become problematic


   - waste of IPs - an IP address from the network range will be implicitly
   allocated and marked as used, even if it's not used by the deployment (the
   plugin uses its own ones)


*Please also note that this solution is needed for 8.0 only.* In 9.0 we
have a new feature for manual VIP allocation [1]. So in 9.0, if we can't
auto allocate VIPs for some cluster configuration, we can simply ask the
user to manually set those problem VIPs or move the roles to the same
network node group (rack).

So, guys, please feel free to share your thoughts on this matter. Any input
is greatly appreciated.

Regards,
Alex

[0] https://bugs.launchpad.net/fuel/+bug/1524320
[1] https://blueprints.launchpad.net/fuel/+spec/allow-any-vip
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][stable] Freeze exception for kilo CVE-2015-7548 backports

2016-01-15 Thread Matthew Booth
The following 3 patches fix CVE-2015-7548 ("Unprivileged API user can access
host data using instance snapshot"):

https://review.openstack.org/#/c/264819/
https://review.openstack.org/#/c/264820/
https://review.openstack.org/#/c/264821/

The OSSA is rated critical. The patches have now landed on master and
liberty after some delays in the gate. Given the importance of the fix, I
suspect that most/all downstream distributions will have already patched
(certainly Red Hat has), but it would be good to have them in upstream
stable as well.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] How to auto allocate VIPs for roles in different network node groups?

2016-01-15 Thread Bogdan Dobrelya
On 15.01.2016 10:19, Aleksandr Didenko wrote:
> Hi,
> 
> We need to come up with some solution for a problem with VIP generation
> (auto allocation), see the original bug [0].
> 
> The main problem here is: how do we know what exactly IPs to auto
> allocate for VIPs when needed roles are in different nodegroups (i.e. in
> different IP networks)?
> For example 'public_vip' for 'controller' roles.
> 
> Currently we have two possible solutions.
> 
> 1) Fail early in pre-deployment check (when user hit "Deploy changes")
> with error about inability to auto allocate VIP for nodes in different
> nodegroups (racks). So in order to run deploy user has to put all roles
> with the same VIPs in the same nodegroups (for example: all controllers
> in the same nodegroup).
> 
> Pros:
> 
>   * VIPs are always correct, they are from the same network as nodes
> that are going to use them, thus user simply can't configure invalid
> VIPs for cluster and break deployment
> 
> Cons:
> 
>   * hardcoded limitation that is impossible to bypass, does not allow to
> spread roles with VIPs across multiple racks even if it's properly
> handled by Fuel Plugin, i.e. made so by design

That'd be no good at all.

> 
> 
> 2) Allow to move roles that use VIPs into different nodegroups, auto
> allocate VIPs from "default" nodegroup and send an alert/notification to
> user that such configuration may not work and it's up to user how to
> proceed (either fix config or deploy at his/her own risk).

It seems we have little choice then but to use option 2.

> 
> Pros:
> 
>   * relatively simple solution
> 
>   * impossible to break VIP serialization because in the worst case we
> allocate VIPs from default nodegroup
> 
> Cons:
> 
>   * user can deploy invalid environment that will fail during deployment
> or will not operate properly (for example when public_vip is not
> able to migrate to controller from different rack)
> 
>   * which nodegroup to choose to allocate VIPs? default nodegroup?
> random pick? in case of random pick troubleshooting may become
> problematic

Random choices aren't good IMHO, let's use defaults.

> 
>   * waste of IPs - IP address from the network range will be implicitly
> allocated and marked as used, even it's not used by deployment
> (plugin uses own ones)
> 
> 
> *Please also note that this solution is needed for 8.0 only.* In 9.0 we
> have new feature for manual VIPs allocation [1]. So in 9.0, if we can't
> auto allocate VIPs for some cluster configuration, we can simply ask
> user to manually set those problem VIPs or move roles to the same
> network node group (rack).
> 
> So, guys, please feel free to share your thoughts on this matter. Any
> input is greatly appreciated.
> 
> Regards,
> Alex
> 
> [0] https://bugs.launchpad.net/fuel/+bug/1524320
> [1] https://blueprints.launchpad.net/fuel/+spec/allow-any-vip
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] kilo 2015.1.3 freeze exception request for cve fixes

2016-01-15 Thread Thierry Carrez

Ihar Hrachyshka wrote:

+1. CVE fixes obviously should be granted an exception.


+1

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev