[openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-16 Thread KiYoun Sung
Hello,
Magnum team.

I installed OpenStack Newton and Magnum.
I installed Magnum from source (master branch).

I have two questions.

1.
After installation,
I created a Kubernetes cluster and it reached CREATE_COMPLETE,
and now I want to create a Kubernetes pod.

My create script is below.
--
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
--

I tried "kubectl create -f nginx.yaml",
but an error occurred.

Error message is below.
error validating "pod-nginx-with-label.yaml": error validating data:
unexpected type: object; if you choose to ignore these errors, turn
validation off with --validate=false

Why did this error occur?

2.
I want to access this Kubernetes cluster service (like nginx) running on the
OpenStack Magnum environment from the outside world.

I referred to this guide (
https://docs.openstack.org/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works),
but it didn't work.

Openstack: newton
Magnum: 4.1.1 (master branch)

How can I do this?
Do I have to install LBaaSv2?
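For reference, the usual way to expose a pod like the nginx one above is a
Service of type LoadBalancer, which is what the Magnum load-balancer guide
wires to Neutron LBaaS. A minimal sketch (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer   # asks the cloud provider (Neutron LBaaS) for a load balancer
  selector:
    app: nginx         # matches the pod's app: nginx label
  ports:
  - port: 80
    targetPort: 80
```

Without a working LBaaS endpoint the external IP will stay pending; a
NodePort service plus a floating IP on a node is a common fallback.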

Thank you.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Bug Smash for Pike Release

2017-05-16 Thread Fred Li
Hi all,

OpenStack Bug Smash for Pike is ongoing now. It will last from Wednesday to
Friday, May 17 to 19, in Suzhou, China.

Around 60 engineers are working on Nova, Cinder, Neutron, Keystone, Heat,
Telemetry, Ironic, Oslo, OSC, Kolla, Trove, Dragonflow, Karbor, Manila,
Zaqar, Tricircle, Cloudkitty, Cyborg, Mogan, etc.

Your reviews of the resulting patches in the coming days would be appreciated.

Please find the homepage of the bug smash in [1] and the list of bugs we
are working on in [2].

[1] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou
[2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou-Bug-List

Fred

On Wednesday, March 29, 2017, ChangBo Guo  wrote:

> I attended the bug smash twice before. It's really like what we did at the PTG,
> but just fixing bugs in the same room for 3 days.
> It would be appreciated if core reviewers could help review online.
>
> 2017-03-28 21:35 GMT+08:00 Sean McGinnis :
>
>> I can say from my experience being involved in the last event that these
>> can be very productive. It was great seeing a room full of devs just
>> focused on getting bugs fixed!
>>
>> I highly encourage anyone interested to attend. I would also recommend
>> cores for each project to pay some attention to getting these reviewed.
>> It can be a great way to build up momentum and really get a lot fixed in
>> a short amount of time.
>>
>> Sean
>>
>> On Tue, Mar 28, 2017 at 07:15:02AM +, Liyongle (Fred) wrote:
>> > Hi all,
>> >
>> > We are planning to have the Bug Smash for Pike release from Wednesday
>> to Friday, May 17 to 19 in Suzhou, China.
>> > After considering summit Boston (May 8 to 11) and Pike-2 milestone (Jun
>> 5 to 9), we finalized the schedule.
>> >
>> > Bug Smash China will probably cover Nova, Neutron, Cinder, Keystone,
>> Manila, Heat, Telemetry, Karbor, Tricircle, which finally depends on the
>> attendees.
>> >
>> > If you want to set up bug smash in your city, please share the
>> information at [1].
>> > If you are planning to join the 6th Bug Smash in China, please register
>> at [2].
>> >
>> > [1] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike
>> > [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou
>> >
>> > Fred (李永乐)
>> >
>>
>>
>
>
>
> --
> ChangBo Guo(gcb)
>


-- 
Regards
Fred Li (李永乐)


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Ed Leafe
On May 16, 2017, at 3:06 PM, Jeremy Stanley  wrote:
> 
>> It's pretty clear now some see drawbacks in reusing #openstack-dev, and
>> so far the only benefit expressed (beyond not having to post the config
>> change to make it happen) is that "everybody is already there". By that
>> rule, we should not create any new channel :)
> 
> That was not the only concern expressed. It also silos us away from
> the community who has elected us to represent them, potentially
> creating another IRC echo chamber.

Unless you somehow restrict access to the channel, it isn't much of a silo. I 
see TC members in many other channels, so it isn't as if there will be no 
interaction between TC members and the community that they serve.

I also think that a channel like #openstack-tc is more discoverable to people 
who might want to interact with the TC, as it follows the same naming 
convention as #openstack-nova, #openstack-ironic, etc.

-- Ed Leafe









[openstack-dev] [User] Achieving Resiliency at Scales of 1000+

2017-05-16 Thread Arkady.Kanevsky
Team,
We managed to have a productive discussion on resiliency for 1000+ nodes.
Many thanks to Adam Spiers for helping with it.
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+
There are several concrete actions, especially for current gate testing.
We will bring these up at the next User Committee meeting.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264



Re: [openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-05-16 Thread Emilien Macchi
Hey Raoul,

Thanks for putting this up in the ML. Replying inline:

On Tue, May 16, 2017 at 4:59 PM, Raoul Scarazzini  wrote:
> Hi everybody,
> as discussed in today's TripleO meeting [1] here's a brief recap of the
> tripleo-quickstart-utils topic.
>
> ### TL;DR ###
>
> We are trying to understand whether or not it would be good to put the contents
> of [2] somewhere else for wider exposure.
>
> ### Long version ###
>
> The tripleo-quickstart-utils project started after splitting the
> ha-validation stuff out of the tripleo-quickstart-extras repo [3],
> basically because the specificity of the topic was creating a loss of
> reviewers.
> Today this repository has three roles:
>
> 1 - validate-ha: to do ha specific tests depending on the version. This
> role relies on a micro bash framework named ha-test-suite available in
> the same repo, under the utils directory;

I've looked at 
https://github.com/redhat-openstack/tripleo-quickstart-utils/blob/master/roles/validate-ha/tasks/main.yml
and I see it's basically a set of tasks that validates that HA is
working well on the overcloud.
Aside from small things that might need adjusting (it calls bash scripts
from Ansible), I think this role would be a good fit for the
tripleo-validations project, which is "a collection of Ansible
playbooks to detect and report potential issues during TripleO
deployments".
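For illustration, one of those wrapped bash checks could become a native
Ansible task along these lines (a sketch only; the commands and match
patterns are assumptions, not taken from the actual role):

```yaml
# Illustrative sketch: an HA validation as native Ansible tasks instead of
# a wrapped bash script. Commands/patterns are assumptions.
- name: Collect Pacemaker cluster status
  command: pcs status
  register: pcs_status
  changed_when: false

- name: Fail if any Pacemaker resource is stopped
  fail:
    msg: "Some Pacemaker resources are not started"
  when: "'Stopped' in pcs_status.stdout"
```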

> 2 - stonith-config: to configure STONITH inside an HA env;

IMHO (and tell me if I'm wrong), this role is something you want to
apply at Day 1 during your deployment, right?
If that's the case, I think the playbooks could really live in THT
where we already have automation to deploy & configure Pacemaker with
Heat and Puppet.
Some tasks might be useful for the upgrade operations but we also have
upgrade_tasks that use Ansible, so possibly easily re-usable.

If it's more Day 2 operations, then we should investigate creating
a new repository for TripleO with some playbooks useful for Day 2, but
AFAIK we've managed to avoid that until now.

> 3 - instance-ha: to configure high availability for instances on the
> compute nodes;

Same as stonith. It sounds like some tasks are done during initial
deployment to enable instance HA and then during upgrade to disable /
enable configurations. I think it could also be done by THT like the
stonith configuration.

> Despite the name, this is not just a tripleo-quickstart-related
> project; it is also usable in every TripleO-deployed environment, and it is
> meant to support all the TripleO OpenStack versions from Kilo to Pike
> for all the roles it ships;

Great, it means we could easily re-use the bits, modulo some technical
adjustments.

> There's also documentation related to the Multi Virtual Undercloud project [4]
> that explains how to have more than one virtual undercloud on a physical
> machine to manage more environments from the same place.

I would suggest moving it to tripleo-docs, so we have a single place for documentation.

> That's basically the meaning of the word "utils" in the name of the repo.
>
> What I would like to understand is whether you see this as something useful
> that could be placed somewhere closer to the upstream TripleO project, to
> reach a wider audience for further contribution/evolution.

Versus: IIRC, everything in this repo could be moved to existing projects in
TripleO that are already productized, so it would take little effort.

> ###
>
> [1]
> http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-05-16-14.00.log.html
> [2] https://github.com/redhat-openstack/tripleo-quickstart-utils
> [3]
> https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/validate-ha
> [4]
> https://github.com/redhat-openstack/tripleo-quickstart-utils/tree/master/docs/multi-virtual-undercloud
>
> ###
>
> Thanks for your time,

Thanks for bringing this up!

> --
> Raoul Scarazzini
> ra...@redhat.com
>



-- 
Emilien Macchi



[openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2017.05.17 Agenda

2017-05-16 Thread Zhipeng Huang
Hi Team,

This is a kind reminder for our weekly meeting this week at
#openstack-cyborg EST 11:00am (UTC 15:00) on Wed.

The agenda for today's meeting is to try to finalize and freeze our current
specs and get to code development as soon as possible.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-16 Thread Qiming Teng
On Tue, May 16, 2017 at 05:13:16PM -0500, Monty Taylor wrote:
> Hey all!
> 
> I read the API docs A LOT. (thank you to all of you who have worked
> on writing them)
> 
> As I do, a gotcha I hit up against a non-zero amount of the time is mapping the
> descriptions of the response parameters to the form of the response
> itself. Most of the time there is a top-level parameter under which
> either an object or a list resides, but the description lists show
> the top level and the sub-parameters as siblings.
> 
> So I wrote a patch to os-api-ref taking a stab at providing a way to
> show things a little differently:
> 
> https://review.openstack.org/#/c/464255/
> 
> You can see the output here:
> 
> http://docs-draft.openstack.org/55/464255/5/check/gate-nova-api-ref-src/f02b170//api-ref/build/html/
> 
> If you go expand either the GET / or the GET /servers/details
> sections and go to look at their Response sections, you can see it
> in action.
> 
> We'd like some feedback on impact from humans who read the API docs
> decently regularly...
> 
> The questions:
> 
> - Does this help, hurt, no difference?

It helps.

> - servers[].name - servers is a list, containing objects with a name
> field. Good or bad?

Good.

> - servers[].addresses.$network-name - addresses is an object and the
> keys of the object are the name of the network in question.

This is a little bit confusing but still understandable.
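To make the notation concrete, here is a hypothetical fragment of a
GET /servers/detail response (all values invented) and how the two
notations index into it:

```python
# Hypothetical, minimal "GET /servers/detail" body illustrating the
# servers[].name and servers[].addresses.$network-name notation.
response = {
    "servers": [  # servers is a list of objects ...
        {
            # ... each with a "name" field: servers[].name
            "name": "web-1",
            "addresses": {
                # addresses is an object whose keys are network names:
                # servers[].addresses.$network-name
                "private": [{"addr": "10.0.0.4", "version": 4}],
            },
        }
    ]
}

print(response["servers"][0]["name"])                    # web-1
print(list(response["servers"][0]["addresses"].keys()))  # ['private']
```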

my $0.0002 

Qiming
> Thanks!
> Monty
 




Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean McGinnis
On Tue, May 16, 2017 at 11:13:05PM +0200, Thierry Carrez wrote:
> Sean Dague wrote:
> > On 05/16/2017 03:59 PM, Thierry Carrez wrote:
> >> Thierry Carrez wrote:
> >>> Here we have a clear topic, and TC members need to pay a certain level
> >>> of attention to whatever is said. Mixing it with other community
> >>> discussions (which I have to prioritize lower) just makes it harder to
> >>> pay the right level of attention to the channel. Basically I have
> >>> trouble seeing how we can repurpose a general discussion channel into a
> >>> specific group office-hours channel (different topics, different level
> >>> of attention to be paid). Asking people to use a ping list when they
> >>> *really* need to get TC members attention feels like a band-aid.
> >>
> >> To summarize, I fear that using a general development discussion channel
> >> as the TC discussion channel will turn *every* development discussion
> >> into a TC discussion. I don't think that's desirable.
> > 
> > Maybe we have different ideas of what we expect to be the kinds of
> > discussions and asks. What do you think would be in #openstack-tc that
> > would not be appropriate for #openstack-dev, and why?
> 
> It's the other way around. There are (hopefully more and more)
> discussions in #openstack-dev that are noise for anyone watching this
> channel in "TC-office-hours mode" (like a TC member who needs to pay
> attention to what is being said on that channel).
> 
> I prioritize channels based on the attention I need to pay to them. The
> channel used for TC questions / office hours would be at the top of my
> list, so I would rather avoid all the noise we can :)

Same for me. If there is a dedicated -tc channel, then when I see
activity in the channel I would know I need to pay attention.

If it is in -dev, I fear there would be frequent distractions jumping
between channels to see if the activity is relevant to me or not.

> 
> Looking at recent logs from #openstack-dev, I can see it would be a bit
> painful to sort out actionable TC stuff from long
> general development discussions and random are-you-around pings.
> 
> -- 
> Thierry Carrez (ttx)
> 



[openstack-dev] [keystone][nova][cinder][policy] policy meeting tomorrow

2017-05-16 Thread Lance Bragstad
Hey folks,

Sending out a reminder that we will have the policy meeting tomorrow [0].
The agenda [1] is already pretty full but we are going to need
cross-project involvement tomorrow considering the topics and impacts.

I'll be reviewing policy things in the morning, so if anyone has questions
or wants to hash things out beforehand, come find me.

Thanks,

Lance

[0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
[1] https://etherpad.openstack.org/p/keystone-policy-meeting


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Zane Bitter

On 16/05/17 01:06, Colleen Murphy wrote:

Additionally, I think OAuth - either extending the existing OAuth1.0
plugin or implementing OAuth2.0 - should probably be on the table.


I believe that OAuth is not a good fit for long-lived things like an
application needing to communicate with its own infrastructure. Tokens
are (a) tied to a user, and (b) they expire, neither of which we want. Any
use case where you can't just drop the user into a web browser and ask
for their password at any time seems to be, at a minimum, excruciatingly
painful and often impossible with OAuth, because that is the use case it
was designed for.


cheers,
Zane.



Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Zane Bitter

On 15/05/17 20:07, Adrian Turjak wrote:


On 16/05/17 01:09, Lance Bragstad wrote:



On Sun, May 14, 2017 at 11:59 AM, Monty Taylor  wrote:

On 05/11/2017 02:32 PM, Lance Bragstad wrote:

Hey all,

One of the Baremetal/VM sessions at the summit focused on what
we need
to do to make OpenStack more consumable for application
developers [0].
As a group we recognized the need for application specific
passwords or
API keys and nearly everyone (above 85% is my best guess) in
the session
thought it was an important thing to pursue. The API
key/application-specific password specification is up for
review [1].

The problem is that with all the recent churn in the keystone
project,
we don't really have the capacity to commit to this for the
cycle. As a
project, we're still working through what we've committed to
for Pike
before the OSIC fallout. It was suggested that we reach out to
the PWG
to see if this is something we can get some help on from a
keystone
development perspective. Let's use this thread to see if there
is anyway
we can better enable the community through API
keys/application-specific
passwords by seeing if anyone can contribute resources to this
effort.


In the session, I signed up to help get the spec across the finish
line. I'm also going to do my best to write up something
resembling a user story so that we're all on the same page about
what this is, what it isn't and what comes next.


Thanks Monty. If you have questions about the current proposal, Ron
might be lingering in IRC (rderose). David (dstanek) was also
documenting his perspective in another spec [0].


[0] https://review.openstack.org/#/c/440593/




Based on the specs that are currently up in Keystone-specs, I would
highly recommend not doing this per user.

The scenario I imagine is that you have a sysadmin at a company who created a
ton of these for various jobs and then leaves. The company then needs to
keep his user account around, or create tons of new API keys and then
disable his user once all the scripts he had keys for are replaced. Or,
more often than not, disable his user and then cry as everything breaks
because no one really knows why, no one fully documented it all, or no one
read the docs. Keeping them per project and unrelated to the user makes
more sense, as then someone else on your team can regenerate the secrets
for the specific keys as they want. Sure, we can advise them to use
generic user accounts within which to create these API keys, but that
implies password sharing, which is bad.


Yes, absolutely. Like other OpenStack resources, API keys need to belong 
to the project, not the user.



That said, I'm curious why we would make these a thing separate from
users. In reality, if you can create users, you can create API-specific
users. Would this be a different authentication mechanism? Why? Why not
just continue the work on better access control and let people create
users for this? Because let's be honest, isn't a user already an API key?


Essentially, with the exception that an API key only gets you 
authenticated to OpenStack, whereas a user's 'key' (password) probably 
also gets them into a lot of other things.


I want to be clear though: if that's the only benefit we get from API 
keys then we are *failing*. We must put fine-grained authorisation 
control in the end user's hands to actually solve the problem.



The issue (and Ron's spec mentions this) is a user having too much
access; how would this fix that when the issue is that we don't have
fine-grained policy in the first place? How does a new auth mechanism
fix that? Both specs mention roles, so I assume it really doesn't. If we
had fine-grained policy we could just create users specific to a service
with only the roles it needs, and the same problem would be solved without any
special API, new auth, or different 'user-lite' object model. It feels
like this is trying to solve an issue that is better solved by fixing
the existing problems.

I like the idea behind these specs, but... I'm curious what exactly they
are trying to solve. Not to mention if you wanted to automate anything
larger such as creating sub-projects and setting up a basic network for
each new developer to get access to your team, this wouldn't work unless
you could have your API key inherit to subprojects or something more
complex, at which point they may as well be users. Users already work
for all of this, why reinvent the wheel when really the issue isn't the
wheel itself, but the steering mechanism (access control/policy in this
case)?


This was my assumption to start with as well - create a separate domain 
backed by a DB, allow all users (not just admins) to create 'user' 
accounts in that 

Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Adam Spiers

Waines, Greg  wrote:

Sam,

Two other, higher-level points I wanted to discuss with you about Masakari.


First,
So I notice that you are doing monitoring, auto-recovery, and even host
maintenance type functionality as part of the Masakari architecture.

Are you open to some configurability (enabling/disabling) of these
capabilities?


I can't speak for Sampath or the Masakari developers, but the monitors
are standalone components.  Currently they can only send notifications
in a format which the masakari-api service can understand, but I guess
it wouldn't be hard to extend them to send notifications in other
formats if that made sense.


e.g. OPNFV guys would NOT want auto-recovery; they would prefer that fault
events get reported to Vitrage ... and eventually filter up to Aodh alarms
that get received by VNF Managers, which would be responsible for the
recovery.

e.g. some deployers of OpenStack might want to disable parts or all of your
monitoring, if using other mechanisms such as Zabbix or Nagios for the host
monitoring (say).


Yes, exactly!  This kind of configurability and flexibility which
would allow each cloud architect to choose which monitoring / alerting
/ recovery components suit their requirements best in a "mix'n'match"
fashion, is exactly what we are aiming for with our modular approach
to the design of compute plane HA.  If the various monitoring
components adopt a driver-based approach to alerting and/or the
ability to alert via a lowest common denominator format such as simple
HTTP POST of JSON blobs, then it should be possible for each cloud
deployer to integrate the monitors with whichever reporting dashboards
/ recovery workflow controllers best satisfy their requirements.
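A lowest-common-denominator notification of that kind might be sketched as
follows; the endpoint and all field names are purely illustrative, not an
actual Masakari or Vitrage schema:

```python
import json
from urllib import request

# Illustrative alert payload; field names are assumptions, not a real schema.
alert = {
    "type": "COMPUTE_HOST",
    "hostname": "compute-0",
    "event": "STOPPED",
    "generated_time": "2017-05-16T12:00:00Z",
}

def post_alert(endpoint, payload):
    """POST the alert as a JSON blob to whichever receiver is configured."""
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)  # caller handles errors/retries

# Example only; a hypothetical receiver URL would go here:
# post_alert("http://alert-receiver.example:8080/notifications", alert)
print(json.dumps(alert, sort_keys=True))
```

Any dashboard or workflow controller that can accept such a POST could then
be swapped in without changing the monitor.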


Second, are you open to configurably having fault events reported to
Vitrage ?


Again I can't speak on behalf of the Masakari project, but this sounds
like a great idea to me :)



Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Adrian Turjak


On 16/05/17 22:39, Sean Dague wrote:
> On 05/15/2017 10:00 PM, Adrian Turjak wrote:

>> I'm well aware of the policy work, and it is fantastic to see it
>> progressing! I can't wait to actually be able to play with that stuff!
>> We've been painstakingly tweaking the json policy files which is a giant
>> mess.
>>
>> I'm just concerned that this feels like a feature we don't really need
>> when really it's just a slight variant of a user with a new auth model
>> (that is really just another flavour of username/password). The sole
>> reason most of the other cloud services have API keys is because a user
>> can't talk to the API directly. OpenStack does not have that problem,
>> users are API keys. So I think what we really need to consider is what
>> exact benefit does API keys actually give us that won't be solved with
>> users and better policy?
> The benefit of API keys comes if they work the same across all deployments, so
> your applications can depend on them working. That means the application
> has to be able to:
>
> 1. provision an API Key with normal user credentials
> 2. set/reduce permissions with that with those same user credentials
> 3. operate with those credentials at the project level (so that when you
> leave, someone else in your dept can take over)
> 4. have all it's resources built in the same project that you are in, so
> API Key created resources could interact with manually created resources.
> 5. revoke at any time (and possibly bake in an expiration to begin with)
>
> #1 means these can't just be users. By the user survey 30% are using
> LDAP/AD, which means the authority to create a user isn't even cloud
> admin level, it's company AD level. It may literally be impossible to do.
>
> #2 means permissions can't be done with roles. Normal users can't create
> roles, and roles don't properly express permissions inherent in them
> either. Even if users started to be able to create roles, that would
> mean an incredible role explosion.
>
> #2 also means this interface can't use policy. Policy is an internal
> structure for operators setting allow points, and is a DSL that we
> *really* don't want to make everyone learn every bit of.
>
> #4 means this can't be done with special projects where users could
> create other users using the existing SQL split backend setup in
> keystone (even if they had AD). This is complicated to setup in the
> first place, but if API Key created servers aren't able to get to the
> network that was manually setup in a different tenant, the usefulness is
> limited.
>
>
> This is why the proposal out of the room going forward was some concrete
> steps:
>
> 1) Make a new top level construct of an APPKey that exists within a
> project, that all users can create in projects they are members of.
>
> This immediately solves #1. And even inheriting Member role becomes
> useful because of the revoke facility. There are now a set of
> credentials that are ephemeral enough to back into images / scripts,
> that aren't also getting into your health records or direct deposit at
> your company.
>
> 2) Provide a mechanism to reduce what these APPKeys can do. Policy &
> Roles is actually the wrong approach, those are operator constructs. API
> consuming things understand operations in terms of ("Region1",
> "compute", "/servers", "GET"). Something along those lines would be
> provided as the way to describe permissions from a user.
>
> The complaint is that this is a second way of describing permissions. It is.
> But the alternative, teaching our entire user base about policy name
> points, is ... far less appealing. We should be tailoring this to the
> audience we want to consume it.
>
>
> Yes, these are 2 distinct steps, but I think it's disingenuous to say the
> first step is pointless until the second one is done. The first step
> immediately enables a set of use cases that are completely blocked today.
>
>
>   -Sean
>
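To illustrate the quoted point about describing permissions in API-consumer
terms, ("Region1", "compute", "/servers", "GET") tuples could be matched
with something as small as this sketch (purely illustrative, not a proposed
Keystone interface):

```python
# Illustrative only: modeling app-key permissions as
# (region, service, path, method) tuples, per the quoted proposal.
from fnmatch import fnmatch

app_key_permissions = [
    ("Region1", "compute", "/servers*", "GET"),
    ("Region1", "compute", "/servers", "POST"),
]

def allowed(region, service, path, method, perms=app_key_permissions):
    """Return True if any permission tuple grants this request."""
    return any(
        region == r and service == s and method == m and fnmatch(path, p)
        for r, s, p, m in perms
    )

print(allowed("Region1", "compute", "/servers/detail", "GET"))  # True
print(allowed("Region1", "compute", "/servers/1", "DELETE"))    # False
```

The point of the sketch is only that users reason in request terms, not in
operator policy names.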
Thank you Sean! That is the answer I was after. I wanted a concrete
reason as to why we couldn't solve this in Keystone with users.

Whoever is working on the specs, please add something close to (emphasis
on the LDAP/AD/etc part):
"Users as an alternative aren't always viable because of the
inconsistencies in Keystone backends (LDAP/AD) and the (in)ability of
non-admin users to create additional users across different cloud
deployments and manage the roles of those created users."

I `may` have a possible solution to that problem... but it involves a
service external to Keystone to do non-admin user management, more on
that next week as I'll hopefully be ready to start announcing stuff.

Anyway that aside, I'm sold on API keys as a concept in this case
provided they are project owned rather than user owned, I just don't
think we should make them too unique, and we shouldn't be giving them a
unique policy system because that way madness lies.

Policy is already a complicated system; let's not have to maintain two
systems. Any policy system we make for API keys ought to 

Re: [openstack-dev] [vitrage] [nova] [HA] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Adam Spiers

Waines, Greg  wrote:

thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit for
Masakari.

Currently your instance monitoring looks like it is strictly black-box type
monitoring through libvirt events.
Is that correct?
i.e. you do not do any intrusive type monitoring of the instance through the
QEMU Guest Agent facility, correct?


That is correct:

https://github.com/openstack/masakari-monitors/blob/master/masakarimonitors/instancemonitor/instance.py


I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.


OK, so you are looking for something slightly different I guess, based
on this QEMU guest agent?

   https://wiki.libvirt.org/page/Qemu_guest_agent

That would require the agent to be installed in the images, which is
extra work but I imagine quite easily justifiable in some scenarios.
What failure modes do you have in mind for covering with this
approach - things like the guest kernel freezing, for instance?



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 19:56:31 + (+), Fox, Kevin M wrote:
[...]
> Lets provide the tools to make it as easy as possible to identify
> containers with issues, and allow upgrading the system to newer
> ones.
> 
> Which CVEs are on the system is somewhat less important than
> being able to get to newer versions installed easily. Right now,
> that's probably harder than it should be. If it's hard, people won't
> do it.
[...]

My point (which I've trimmed because I don't have the patience to
undo your top-posting at the moment) was that security expectations
for these images should be clearly documented and communicated,
that's all. I'm not sure what you were reading into it.
-- 
Jeremy Stanley




Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Monty Taylor

On 05/16/2017 02:44 PM, Sean Dague wrote:

On 05/16/2017 03:40 PM, Monty Taylor wrote:

On 05/16/2017 10:20 AM, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:

On Tue, 16 May 2017, Monty Taylor wrote:


FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll
with
that until someone has a better idea. I'm uncrazy about it for two
reasons:

a) the word "key" implies things to people that may or may not be
true here.
If we do stick with it - we need some REALLY crisp language about
what it is
and what it isn't.

b) Rackspace Public Cloud (and back in the day HP Public Cloud) have
a thing
called by this name. While what's written in the spec is quite
similar in
usage to that construct, I'm wary of re-using the name without the
semantics
actually being fully the same for risk of user confusion. "This uses
api-key... which one?" Sean's email uses "APPKey" instead of
"APIKey" - which
may be a better term. Maybe just "ApplicationAuthorization"?


"api key" is a fairly common and generic term for "this magical
thingie I can create to delegate my authority to some automation".
It's also sometimes called "token", perhaps that's better (that's
what GitHub uses, for example)? In either case the "api" bit is
pretty important because it is the thing used to talk to the API.

I really hope we can avoid creating yet more special language for
OpenStack. We've got an API. We want to send keys or tokens. Let's
just call them that.



+1


Fair. That's an excellent argument for "api key" - because I certainly
don't think we want to overload 'token'.


As someone who accidentally named "API Microversions", I fully cede
naming territory to others here.


I named "jeepyb" on _purpose_.

For those playing at home, that's a phoneticization of "GPB" which is an 
otherwise never-used acronym for "Gerrit Project Builder".


/me hides


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-16 Thread Monty Taylor

Hey all!

I read the API docs A LOT. (thank you to all of you who have worked on 
writing them)


As I do, a gotcha I hit a non-zero amount of the time is mapping the 
descriptions of the response parameters to the form of the response 
itself. Most of the time there is a top-level parameter under which 
either an object or a list resides, but the description lists present the 
top-level parameter and the sub-parameters as siblings.


So I wrote a patch to os-api-ref taking a stab at providing a way to 
show things a little differently:


https://review.openstack.org/#/c/464255/

You can see the output here:

http://docs-draft.openstack.org/55/464255/5/check/gate-nova-api-ref-src/f02b170//api-ref/build/html/

If you go expand either the GET / or the GET /servers/details sections 
and go to look at their Response sections, you can see it in action.


We'd like some feedback on impact from humans who read the API docs 
decently regularly...


The questions:

- Does this help, hurt, no difference?
- servers[].name - servers is a list, containing objects with a name 
field. Good or bad?
- servers[].addresses.$network-name - addresses is an object and the 
keys of the object are the name of the network in question.
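To make the questions concrete, here is a trimmed, hypothetical response
body showing what the proposed notation would point at (server names and
network names invented for illustration):

```python
# A trimmed, hypothetical GET /servers/detail response body.
response = {
    "servers": [                      # "servers" is a list of objects
        {
            "name": "web-0",          # described as servers[].name
            "addresses": {            # described as servers[].addresses
                "private-net": [      # the keys are network names, i.e.
                    {"addr": "10.0.0.4", "version": 4},
                ],                    # servers[].addresses.$network-name
            },
        },
    ],
}

# servers[].name: every element of the list carries the field.
names = [server["name"] for server in response["servers"]]

# servers[].addresses.$network-name: the key itself is data.
networks = list(response["servers"][0]["addresses"])

print(names)     # ['web-0']
print(networks)  # ['private-net']
```

The square brackets say "descend into each list element" and the $-prefixed
segment says "this key is variable data, not a fixed field name".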


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

Chris Friesen wrote:

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock



And always get a write lock.

It is a slightly different way of getting those locks (via a context
manager)
but the implementation underneath is a deque; so fairness should be
assured in
FIFO order...


That might work as a local patch, but doesn't help the more general case
of fair locking in OpenStack. The alternative to adding fair locks in
oslo would be to add fairness code to all the various OpenStack services
that use locking, which seems to miss the whole point of oslo.


Replace 'openstack community' with 'python community'? ;)



In the implementation above it might also be worth using one condition
variable per waiter, since that way you can wake up only the next waiter
in line rather than waking up everyone only to have all-but-one of them
go back to sleep right away.



Ah good idea, I'll see about doing/adding/changing that.

-Josh
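For what it's worth, the per-waiter wakeup idea above can be sketched with a
deque of per-waiter events: a release wakes exactly the next thread in line,
with no thundering herd. This is just an illustration of the idea, not the
fasteners or oslo implementation:

```python
import collections
import threading


class FairLock:
    """Minimal FIFO-fair mutex: each waiter parks on its own Event,
    so release() wakes only the next thread in line."""

    def __init__(self):
        self._mutex = threading.Lock()       # guards the waiter queue
        self._waiters = collections.deque()

    def acquire(self):
        me = threading.Event()
        with self._mutex:
            self._waiters.append(me)
            if len(self._waiters) == 1:      # lock was free; we own it
                me.set()
        me.wait()                            # park until we reach the head

    def release(self):
        with self._mutex:
            self._waiters.popleft()          # drop ourselves from the head
            if self._waiters:
                self._waiters[0].set()       # wake exactly the next waiter

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()
```

Because the deque preserves arrival order and only the head event is ever
set, starvation is impossible as long as every holder eventually releases.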

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 20

2017-05-16 Thread Chris Dent


I'm not fully recovered from summit yet, so this will not cover
everything. It will pick back up next week once I'm back in the
full flow.

# Notes from Summit

Elsewhere I'll create a more complete write up of the Foundation board
meeting that happened on the Sunday before summit, but some comments
from that feel relevant to the purpose of these reports:

When new people were being introduced around the table, TC members
did not include their company affiliations, board members did. This
is good.

Repeated (but not frequent) theme of different members of the board
expressing a need for requirements to be given to the TC to give to
"the devs", sometimes via the product working group. Sometimes this
felt a bit like "someone please give the devs a kick".

At the same time new board members were accepted without the new
members giving concrete statements about how many resources (human
or hardware) they were going to promise "upstream".

Cultural barriers to engagement for new participants are a huge deal.
Lots of employees at lots of companies do not have an easy way to
grok open source in general or the OpenStack way.

It's a complicated situation, the resolution of which likely requires
greater communication between the TC, the UC (and related working
groups) and the Board, but as usual, many people are over-tasked.

# Meeting Minutes

Log: 
http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-05-16-20.01.log.html

## Summit Followup

Some discussion on expanding the breadth of Forum sessions beyond
three at a time. Most people did not want to expand as double booking
can be a significant problem.

Some people (notably mriedem, nova PTL) said that the sessions
seemed too dev-centric and did not provide as much user and ops
feedback as was hoped. That is: the number of users and operators
present was not greatly increased and the set of users and operators
who spoke were the usual suspects.

Forum attendees felt obliged to be in forum rooms so missed out on
opportunities to witness conference presentations and see what the
real world is up to. This is unfortunate because, well, it's the
real world where real stuff is happening.

But, in general, it felt like a good improvement. For next time:
* recording the on-boarding sessions
* improving (making more visible) last-minute scheduling of unused
* greater preparation/agenda-making for some sessions

## Next Steps TC Vision

https://review.openstack.org/453262

There was a feedback session at summit regarding the vision:


https://www.openstack.org/videos/boston-2017/the-openstack-technical-committee-vision-for-2019-updates-stories-and-q-and-a

The feedback from that needs to be integrated with the feedback from
the recent survey, digested, and a new version of the review
created. Apparently a lot of people at the summit session weren't
aware of it, which says something about the info flow.

## Deprecating postgresql in OpenStack

https://review.openstack.org/427880

This is a move to document that postgresql is not actively tested in
the gate nor considered a good choice for deployment.

Since this is my opinionated view of events: I don't think this is
the right approach (because it is effectively changing a deployed
API and we don't allow that in other places), but there are many
legitimate reasons for the described approach, mostly to do with not
misleading people about the degree of support that upstream
OpenStack is providing for postgresql.

## Document voting process for formal-vote

https://review.openstack.org/#/c/463141/

With the change to be more mailing list oriented, changes are
needed to the TC voting process to make sure that resolutions are
visible to everyone and have sufficient time for review. This turns
out to be more complicated than it might initially sound: some
votes? all votes? count by business days or absolute days?

# Thoughts from the Meeting and Summit

It felt like we spent a lot of the meeting expressing strategies for
how to constrain options and actions in reaction to unavailable
resources, and not enough time on considering strategies for gaining
more resources. This while the board is expressing a relatively
healthy budget, welcoming one new platinum and two new gold
members, and not being fully cognizant of how OpenStack development
really works. The TC needs to be more active in expressing the needs
of the technical community to the board that represents the
companies that provide many resources. It's a two-way street.

The TC is currently doing a lot of work to improve its process:
potentially killing meetings, changing how pending resolutions are
communicated to the community. This is important stuff because it
will eventually help to increase participation and diversity of
participants, but presumably it needs to be alongside ensuring that
we (as an entire community) are making good stuff in a sustainable
fashion. Only a very small amount of the discussion at today's TC
meeting or at last week's board meeting 

Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 23:13:05 +0200 (+0200), Thierry Carrez wrote:
[...]
> Looking at recent logs from #openstack-dev, I can see it would be a bit
> painful to sort out what is actionable TC stuff from what is long
> general development discussions and random are-you-around pings.

I'm likely misunderstanding the specifics of what you expect to see
discussed in this case. It seems to me like exactly the same sort of
cross-project/inter-project communication I would expect to come up
in informal IRC discussions where the participants might want to
also get input from a handful of TC members. I would only be likely
to prioritize the channel during scheduled TC member office hours
and then treat it as potential mid-priority background where I'm
mostly monitoring for nick/keyword highlights the rest of the time.

Anything important should still surface in an ML thread afterward,
and eventually in Gerrit if the action to be taken involves a
motion/resolution or reference change.
-- 
Jeremy Stanley




Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Sean Dague wrote:
> On 05/16/2017 03:59 PM, Thierry Carrez wrote:
>> Thierry Carrez wrote:
>>> Here we have a clear topic, and TC members need to pay a certain level
>>> of attention to whatever is said. Mixing it with other community
>>> discussions (which I have to prioritize lower) just makes it harder to
>>> pay the right level of attention to the channel. Basically I have
>>> trouble to see how we can repurpose a general discussion channel into a
>>> specific group office-hours channel (different topics, different level
>>> of attention to be paid). Asking people to use a ping list when they
>>> *really* need to get TC members attention feels like a band-aid.
>>
>> To summarize, I fear that using a general development discussion channel
>> as the TC discussion channel will turn *every* development discussion
>> into a TC discussion. I don't think that's desirable.
> 
> Maybe we have different ideas of what we expect to be the kinds of
> discussions and asks. What do you think would be in #openstack-tc that
> would not be appropriate for #openstack-dev, and why?

It's the other way around. There are (hopefully more and more)
discussions in #openstack-dev that are noise for anyone watching this
channel in "TC-office-hours mode" (like a TC member who needs to pay
attention to what is being said on that channel).

I prioritize channels based on the attention I need to pay to them. The
channel used for TC questions / office hours would be at the top of my
list, so I would rather avoid all the noise we can :)

Looking at recent logs from #openstack-dev, I can see it would be a bit
painful to sort out what is actionable TC stuff from what is long
general development discussions and random are-you-around pings.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-05-16 Thread Raoul Scarazzini
Hi everybody,
as discussed in today's TripleO meeting [1] here's a brief recap of the
tripleo-quickstart-utils topic.

### TL;DR ###

We are trying to understand whether is good or not to put the contents
of [2] somewhere else for a wider exposure.

### Long version ###

The tripleo-quickstart-utils project started after splitting the
ha-validation stuff from the tripleo-quickstart-extras repo [3],
basically because the specificity of the topic was creating a lack of
reviewers.
Today this repository has three roles:

1 - validate-ha: to do ha specific tests depending on the version. This
role relies on a micro bash framework named ha-test-suite available in
the same repo, under the utils directory;

2 - stonith-config: to configure STONITH inside an HA env;

3 - instance-ha: to configure high availability for instances on the
compute nodes;

Despite the name, this is not just a tripleo-quickstart related
project: it is also usable on every TripleO-deployed environment, and is
meant to support all the TripleO OpenStack versions from Kilo to Pike
for all the roles it ships;

There's also documentation related to the Multi Virtual Undercloud project [4]
that explains how to have more than one virtual Undercloud on a physical
machine to manage more environments from the same place.

That's basically the meaning of the word "utils" in the name of the repo.

What I would like to understand is if you see this as something useful
that can be placed somewhere nearer to the upstream TripleO project, to
reach a wider audience for further contribution/evolution.

###

[1]
http://eavesdrop.openstack.org/meetings/tripleo/2017/tripleo.2017-05-16-14.00.log.html
[2] https://github.com/redhat-openstack/tripleo-quickstart-utils
[3]
https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/validate-ha
[4]
https://github.com/redhat-openstack/tripleo-quickstart-utils/tree/master/docs/multi-virtual-undercloud

###

Thanks for your time,

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 03:59 PM, Thierry Carrez wrote:
> Thierry Carrez wrote:
>> Here we have a clear topic, and TC members need to pay a certain level
>> of attention to whatever is said. Mixing it with other community
>> discussions (which I have to prioritize lower) just makes it harder to
>> pay the right level of attention to the channel. Basically I have
>> trouble to see how we can repurpose a general discussion channel into a
>> specific group office-hours channel (different topics, different level
>> of attention to be paid). Asking people to use a ping list when they
>> *really* need to get TC members attention feels like a band-aid.
> 
> To summarize, I fear that using a general development discussion channel
> as the TC discussion channel will turn *every* development discussion
> into a TC discussion. I don't think that's desirable.

Maybe we have different ideas of what we expect to be the kinds of
discussions and asks. What do you think would be in #openstack-tc that
would not be appropriate for #openstack-dev, and why?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 21:53:43 +0200 (+0200), Thierry Carrez wrote:
[...]
> I wouldn't say it's premature optimization, we create channels all the
> time. #openstack-dev is a general discussion channel, which is used for
> anything that doesn't fit anywhere else. If you look at recent logs,
> you'll see that it is used for general community pings, but also lengthy
> inter-project discussions.

Those sound entirely on-topic for #openstack-dev to me. Like others
on this thread I also worry that a TC-specific channel will seem
"exclusive" when really we're just members of the community having
discussions with other members of the community (elected or not).

> Here we have a clear topic, and TC members need to pay a certain level
> of attention to whatever is said. Mixing it with other community
> discussions (which I have to prioritize lower) just makes it harder to
> pay the right level of attention to the channel. Basically I have
> trouble to see how we can repurpose a general discussion channel into a
> specific group office-hours channel (different topics, different level
> of attention to be paid). Asking people to use a ping list when they
> *really* need to get TC members attention feels like a band-aid.

If we go with office hours as proposed I for one would pay close
attention to whatever's said on #openstack-dev during those
scheduled timeframes, highlighted or not. Having a common highlight
is merely a possible means of getting the attention of specific
people in a channel without them needing to pay close attention to
everything said in that channel.

> It's pretty clear now some see drawbacks in reusing #openstack-dev, and
> so far the only benefit expressed (beyond not having to post the config
> change to make it happen) is that "everybody is already there". By that
> rule, we should not create any new channel :)

That was not the only concern expressed. It also silos us away from
the community who has elected us to represent them, potentially
creating another IRC echo chamber.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Thierry Carrez wrote:
> Here we have a clear topic, and TC members need to pay a certain level
> of attention to whatever is said. Mixing it with other community
> discussions (which I have to prioritize lower) just makes it harder to
> pay the right level of attention to the channel. Basically I have
> trouble to see how we can repurpose a general discussion channel into a
> specific group office-hours channel (different topics, different level
> of attention to be paid). Asking people to use a ping list when they
> *really* need to get TC members attention feels like a band-aid.

To summarize, I fear that using a general development discussion channel
as the TC discussion channel will turn *every* development discussion
into a TC discussion. I don't think that's desirable.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
Security is a spectrum, not a boolean. I know some sites that have instituted 
super long/complex password requirements. The end result is usually humans just 
writing passwords down on stickies since they're too hard to remember, making 
security worse, not better. Humans are always the weakest link in the security 
system and must be taken into account.

There are people that will do things insecurely. If we can make it much easier 
for them to get access to much fresher stuff rather than build it themselves 
(which they won't), the community as a whole will be better off. There will be 
far fewer clouds out there with known bad CVEs baked in.

Let's provide the tools to make it as easy as possible to identify containers 
with issues, and allow upgrading the system to newer ones.

Which CVEs are on the system is somewhat less important than being able to get 
to newer versions installed easily. Right now, that's probably harder than it 
should be. If it's hard, people won't do it.

Fresh images are just a step in that process.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Tuesday, May 16, 2017 12:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 2017-05-16 11:46:14 -0700 (-0700), Michał Jastrzębski wrote:
[...]
> So CVE tracking might not be required by us. Since we still use
> distro packages under the hood, we can just use these.
[...]

I think the question is how I, as a semi-clueful downstream user of
your images, can tell whether the image I'm deploying has fixes for
some specific recently disclosed vulnerability. It sounds like your
answer is that I should compare the package manifest against the
versions listed on the distro's CVE tracker or similar service? That
should be prominently documented, perhaps in a highly visible FAQ
list.

> Since we'd rebuild daily, that alone would ensure timely update to
> our containers. What we can promise to potential users is that
> containers out there were built lately (24hrs)
[...]

As outlined elsewhere in the thread, there are a myriad of reasons
why this could end up not being the case from time to time so I can
only assume your definition of "promise" differs from mine (and
unfortunately, from most people who might be trying to decide
whether it's safe to rely on these images in a sensitive/production
environment).
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2017-05-16 09:38:34 -0400 (-0400), Davanum Srinivas wrote:
>> See $TITLE :)
> 
> Trying not to rehash other points, I'm in favor of using
> #openstack-dev for now until we see it's not working out. Creating a
> new channel for this purpose before we've even undertaken the
> experiment seems like a social form of premature optimization.

I wouldn't say it's premature optimization, we create channels all the
time. #openstack-dev is a general discussion channel, which is used for
anything that doesn't fit anywhere else. If you look at recent logs,
you'll see that it is used for general community pings, but also lengthy
inter-project discussions.

Here we have a clear topic, and TC members need to pay a certain level
of attention to whatever is said. Mixing it with other community
discussions (which I have to prioritize lower) just makes it harder to
pay the right level of attention to the channel. Basically I have
trouble to see how we can repurpose a general discussion channel into a
specific group office-hours channel (different topics, different level
of attention to be paid). Asking people to use a ping list when they
*really* need to get TC members attention feels like a band-aid.

It's pretty clear now some see drawbacks in reusing #openstack-dev, and
so far the only benefit expressed (beyond not having to post the config
change to make it happen) is that "everybody is already there". By that
rule, we should not create any new channel :)

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 12:36, Jeremy Stanley  wrote:
> On 2017-05-16 11:46:14 -0700 (-0700), Michał Jastrzębski wrote:
> [...]
>> So CVE tracking might not be required by us. Since we still use
>> distro packages under the hood, we can just use these.
> [...]
>
> I think the question is how I, as a semi-clueful downstream user of
> your images, can tell whether the image I'm deploying has fixes for
> some specific recently disclosed vulnerability. It sounds like your
> answer is that I should compare the package manifest against the
> versions listed on the distro's CVE tracker or similar service? That
> should be prominently documented, perhaps in a highly visible FAQ
> list.

One thing we've been working on prior to the summit was a manifest of
versions - I think we can provide a single file with all the versions of
the packages in a container, plus a record of the CI jobs that led the
container there: all the information a semi-careful downstream user
can use to determine what they're getting. I'm
all for that kind of feature.
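To illustrate the idea (a sketch, not the actual Kolla tooling): if each
image shipped a "package version" manifest captured at build time, e.g.
from dpkg-query, a downstream user could diff two builds to get the list
of packages worth checking against the distro's CVE tracker. The manifest
contents below are invented:

```python
def parse_manifest(text):
    """Parse 'package version' lines, e.g. the output of
    dpkg-query -W captured when the image was built."""
    manifest = {}
    for line in text.strip().splitlines():
        package, version = line.split(None, 1)
        manifest[package] = version
    return manifest


def changed_packages(old_text, new_text):
    """Packages whose version differs between two image manifests:
    the list a user would check against the distro's CVE tracker."""
    old = parse_manifest(old_text)
    new = parse_manifest(new_text)
    return {pkg: (old.get(pkg), ver)
            for pkg, ver in new.items()
            if old.get(pkg) != ver}


# Hypothetical manifests from two daily builds:
yesterday = "openssl 1.1.0e-1\nbash 4.4-5\n"
today = "openssl 1.1.0f-3\nbash 4.4-5\n"
print(changed_packages(yesterday, today))
# -> {'openssl': ('1.1.0e-1', '1.1.0f-3')}
```

With the manifest published alongside each image, this comparison needs no
access to the image itself, only to the two text files.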

>> Since we'd rebuild daily, that alone would ensure timely update to
>> our containers. What we can promise to potential users is that
>> containers out there were built lately (24hrs)
> [...]
>
> As outlined elsewhere in the thread, there are a myriad of reasons
> why this could end up not being the case from time to time so I can
> only assume your definition of "promise" differs from mine (and
> unfortunately, from most people who might be trying to decide
> whether it's safe to rely on these images in a sensitive/production
> environment).

By "promise" I mean clear documentation of where containers came from
and what did they pass. After that, take it or leave it.

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Sean Dague
On 05/16/2017 03:40 PM, Monty Taylor wrote:
> On 05/16/2017 10:20 AM, Doug Hellmann wrote:
>> Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:
>>> On Tue, 16 May 2017, Monty Taylor wrote:
>>>
 FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll
 with
 that until someone has a better idea. I'm uncrazy about it for two
 reasons:

 a) the word "key" implies things to people that may or may not be
 true here.
 If we do stick with it - we need some REALLY crisp language about
 what it is
 and what it isn't.

 b) Rackspace Public Cloud (and back in the day HP Public Cloud) have
 a thing
 called by this name. While what's written in the spec is quite
 similar in
 usage to that construct, I'm wary of re-using the name without the
 semantics
 actually being fully the same for risk of user confusion. "This uses
 api-key... which one?" Sean's email uses "APPKey" instead of
 "APIKey" - which
 may be a better term. Maybe just "ApplicationAuthorization"?
>>>
>>> "api key" is a fairly common and generic term for "this magical
>>> thingie I can create to delegate my authority to some automation".
>>> It's also sometimes called "token", perhaps that's better (that's
>>> what GitHub uses, for example)? In either case the "api" bit is
>>> pretty important because it is the thing used to talk to the API.
>>>
>>> I really hope we can avoid creating yet more special language for
>>> OpenStack. We've got an API. We want to send keys or tokens. Let's
>>> just call them that.
>>>
>>
>> +1
> 
> Fair. That's an excellent argument for "api key" - because I certainly
> don't think we want to overload 'token'.

As someone who accidentally named "API Microversions", I fully cede
naming territory to others here.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Monty Taylor

On 05/16/2017 10:20 AM, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:

On Tue, 16 May 2017, Monty Taylor wrote:


FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with
that until someone has a better idea. I'm uncrazy about it for two reasons:

a) the word "key" implies things to people that may or may not be true here.
If we do stick with it - we need some REALLY crisp language about what it is
and what it isn't.

b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a thing
called by this name. While what's written in the spec is quite similar in
usage to that construct, I'm wary of re-using the name without the semantics
actually being fully the same for risk of user confusion. "This uses
api-key... which one?" Sean's email uses "APPKey" instead of "APIKey" - which
may be a better term. Maybe just "ApplicationAuthorization"?


"api key" is a fairly common and generic term for "this magical
thingie I can create to delegate my authority to some automation".
It's also sometimes called "token", perhaps that's better (that's
what GitHub uses, for example)? In either case the "api" bit is
pretty important because it is the thing used to talk to the API.

I really hope we can avoid creating yet more special language for
OpenStack. We've got an API. We want to send keys or tokens. Let's
just call them that.



+1


Fair. That's an excellent argument for "api key" - because I certainly 
don't think we want to overload 'token'.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 11:46:14 -0700 (-0700), Michał Jastrzębski wrote:
[...]
> So CVE tracking might not be required by us. Since we still use
> distro packages under the hood, we can just use these.
[...]

I think the question is how I, as a semi-clueful downstream user of
your images, can tell whether the image I'm deploying has fixes for
some specific recently disclosed vulnerability. It sounds like your
answer is that I should compare the package manifest against the
versions listed on the distro's CVE tracker or similar service? That
should be prominently documented, perhaps in a highly visible FAQ
list.
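The manifest-comparison workflow described above could be sketched roughly as follows. This is illustrative only: the package names, versions, and advisory data are invented, and real dpkg/rpm version comparison is considerably more involved than this crude numeric split.

```python
import re


def parse_version(version):
    """Crude version key: compare numeric components only.

    Real dpkg/rpm comparison also handles epochs, letters and tildes;
    this is just enough for the illustration below.
    """
    return tuple(int(part) for part in re.findall(r"\d+", version))


def vulnerable_packages(manifest, advisories):
    """Report packages in an image that predate an advisory's fix.

    manifest:   {package: installed_version}, e.g. extracted from a
                published image via `dpkg-query -W`.
    advisories: {package: first_fixed_version}, e.g. scraped from a
                distro CVE tracker.
    """
    hits = []
    for package, fixed in advisories.items():
        installed = manifest.get(package)
        if installed is not None and parse_version(installed) < parse_version(fixed):
            hits.append(package)
    return sorted(hits)


# Example: the image's openssl predates the fixed version; curl is not
# installed in the image at all, so its advisory is irrelevant.
manifest = {"openssl": "1.1.0e-1", "bash": "4.4-5"}
advisories = {"openssl": "1.1.0e-2", "curl": "7.52.1-3"}
print(vulnerable_packages(manifest, advisories))  # ['openssl']
```

In practice a downstream user would generate `manifest` from the image itself and `advisories` from the distro's security tracker, which is exactly why that mapping deserves a prominent FAQ entry.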

> Since we'd rebuild daily, that alone would ensure timely update to
> our containers. What we can promise to potential users is that
> containers out there were built lately (24hrs)
[...]

As outlined elsewhere in the thread, there are a myriad of reasons
why this could end up not being the case from time to time so I can
only assume your definition of "promise" differs from mine (and
unfortunately, from most people who might be trying to decide
whether it's safe to rely on these images in a sensitive/production
environment).
-- 
Jeremy Stanley




Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Chris Friesen

On 05/16/2017 10:45 AM, Joshua Harlow wrote:

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock


And always get a write lock.

It is a slightly different way of getting those locks (via a context manager)
but the implementation underneath is a deque; so fairness should be assured in
FIFO order...


That might work as a local patch, but doesn't help the more general case of fair 
locking in OpenStack.  The alternative to adding fair locks in oslo would be to 
add fairness code to all the various OpenStack services that use locking, which 
seems to miss the whole point of oslo.


In the implementation above it might also be worth using one condition variable 
per waiter, since that way you can wake up only the next waiter in line rather 
than waking up everyone only to have all-but-one of them go back to sleep right 
away.
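That per-waiter wakeup scheme can be sketched in plain Python (illustrative only — this is neither oslo nor fasteners code). Each arriving thread queues its own event, and release() hands the lock directly to the oldest waiter instead of broadcasting to everyone:

```python
import collections
import threading
import time


class FairLock:
    """FIFO-fair mutex: release wakes only the next waiter in line."""

    def __init__(self):
        self._mutex = threading.Lock()        # protects internal state
        self._held = False
        self._waiters = collections.deque()   # one Event per waiting thread

    def acquire(self):
        with self._mutex:
            if not self._held:
                self._held = True
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()  # set() is called exactly when it is our turn

    def release(self):
        with self._mutex:
            if self._waiters:
                # Hand ownership directly to the oldest waiter; _held
                # stays True because the lock never becomes free.
                self._waiters.popleft().set()
            else:
                self._held = False

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc_info):
        self.release()


# Demo: main thread holds the lock, workers queue up behind it in
# arrival order, and release() wakes them one at a time, in FIFO order.
lock = FairLock()
order = []

def worker(i):
    with lock:
        order.append(i)

lock.acquire()                 # force the workers to queue
threads = []
for i in range(3):
    t = threading.Thread(target=worker, args=(i,))
    t.start()
    time.sleep(0.05)           # stagger starts so arrival order is known
    threads.append(t)
lock.release()                 # hands off to worker 0, then 1, then 2
for t in threads:
    t.join()
print(order)  # [0, 1, 2]
```

A one-Condition-per-waiter variant works the same way; an Event is simply the cheapest primitive that lets release() wake exactly one chosen thread rather than notifying all waiters.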


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Dean Troyer
On Tue, May 16, 2017 at 1:02 PM, Jeremy Stanley  wrote:
> rooters these days). Something like "tc-members" can be used to
> address a question specifically to those on the TC who happen to be
> around and paying attention and also gives people looking at the
> logs a useful string to grep/search/whatever. I've gone ahead and
> configured my client to highlight on that now.

Awesome idea... this may also be as close to an in-band tag (ML
subject-like tag) as we're going to get short of #topic in IRC. (/me
avoids discussion of the effectiveness of such a feature).

dt

P.S. I used 'tc-member' to catch both singular and plural forms.

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Dean Troyer
On Tue, May 16, 2017 at 10:17 AM, Sean McGinnis  wrote:
> If we just use -dev, there is a high chance there will be a lot of cross-
> talk during discussions. There would also be a lot of effort to grep
> through the full day of activity to find things relevant to TC
> discussions. If we have a dedicated channel for this, it makes it very
> easy for anyone to know where to go to get a clean, easy to read capture
> of all relevant discussions. I think that will be important with the
> lack of a captured and summarized meeting to look at.

Right now the filtering aspect is swaying me to a dedicated channel.
There is really no way to extract topics out of IRC logs, and for many
discussions we do not need to be able to do that.  Here we are talking
about the background information for decisions that will be summarized
and recorded elsewhere.

I would really like to not add a new channel, and I do like the increased
traffic in -dev as of the last few months, but right now I think -tc
may be warranted.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
We can put warnings all over it and if folks choose to ignore them, then it's 
they who took the risk and get to keep the pieces when it breaks. Some folks 
are crazy enough to run devstack in production. But does that mean we should 
just abandon devstack? No. of course not. I don't think we should hold back 
OpenStack from the benefits this provides just because a few folks might 
possibly do something bad with it. We do what we can to allow folks to 
recognize bad ideas. But if they want to do it anyway, that's the freedom of 
open source. They can. They get to deal with the fallout though, not us. If we 
always catered to the lowest common denominator we would never get anywhere.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, May 16, 2017 11:49 AM
To: openstack-dev
Subject: Re: [openstack-dev]
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
do we want to be publishing binary container images?

Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> >> So another consideration. Do you think whole rule of "not building
> >> binaries" should be reconsidered? We are kind of new use case here. We
> >> aren't distro but we are packagers (kind of). I don't think putting us
> >> on equal footing as Red Hat, Canonical or other companies is correct
> >> here.
> >>
> >> K8s is something we want to work with, and what we are discussing is
> >> central to how k8s is used. K8s community creates this culture of
> >> "organic packages" built by anyone, most of companies/projects already
> >> have semi-official container images and I think expectations on
> >> quality of these are well...none? You get what you're given and if you
> >> don't agree, there is always way to reproduce this yourself.
> >>
> >> [Another huge snip]
> >>
> >
> > I wanted to have the discussion, but my position for now is that
> > we should continue as we have been and not change the policy.
> >
> > I don't have a problem with any individual or group of individuals
> > publishing their own organic packages. The issue I have is with
> > making sure it is clear those *are* "organic" and not officially
> > supported by the broader community. One way to do that is to say
> > they need to be built somewhere other than on our shared infrastructure.
> > There may be other ways, though, so I'm looking for input on that.
>
> What I was trying to say here is, current discussion aside, maybe we
> should revise this "not supported by broader community" rule. They may
> very well be supported to a certain point. Support is not just yes or
> no, it's all the levels in between. I think we can afford *some* level
> of official support, even if that some level means best effort made by
> community. If Kolla community, not an individual like myself, would
> like to support these images best to our ability, why aren't we
> allowed? As long as we are crystal clear what is scope of our support,
> why can't we do it? I think we've already proven that it's going to be
> tremendously useful for a lot of people, even in a shape we discuss
> today, that is "best effort, you still need to validate it for
> yourself"...

Right, I understood that. So far I haven't heard anything to change
my mind, though.

I think you're underestimating the amount of risk you're taking on
for yourselves and by extension the rest of the community, and
introducing to potential consumers of the images, by promising to
support production deployments with a small team of people without
the economic structure in place to sustain the work.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
And bandwidth can be conserved by only uploading images that actually changed 
in non-trivial ways (packages were updated, not just a logfile with a new 
timestamp)
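One cheap way to implement that "changed in non-trivial ways" check is to digest only the meaningful inputs of an image and push when that digest differs from the last published one. This is a sketch under assumptions: the names are invented, and a real pipeline would likely derive the package list from the build itself.

```python
import hashlib


def manifest_digest(packages):
    """Digest over the sorted (name, version) package list only, so
    timestamps, log files and other build noise do not affect it."""
    canonical = "\n".join(f"{name} {version}" for name, version in sorted(packages))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def should_push(packages, last_published_digest):
    """Push only when the meaningful content of the image changed."""
    return manifest_digest(packages) != last_published_digest


previous = manifest_digest([("nova", "15.0.2"), ("openssl", "1.1.0e-1")])

# Rebuild with identical packages (in any order): skip the upload.
print(should_push([("openssl", "1.1.0e-1"), ("nova", "15.0.2")], previous))  # False

# A package was updated: push the new image.
print(should_push([("nova", "15.0.3"), ("openssl", "1.1.0e-1")], previous))  # True
```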

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, May 16, 2017 11:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 16 May 2017 at 11:33, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
>> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >> >>
>> >> >> Flavio Percoco wrote:
>> >> >>>
>> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >> >>> projects
>> >> >>> in any kind of built form. This was also one of the concerns I raised
>> >> >>> when
>> >> >>> working on the proposal to support other programming languages. The
>> >> >>> problem of
>> >> >>> releasing built images goes beyond the infrastructure requirements. 
>> >> >>> It's
>> >> >>> the
>> >> >>> message and the guarantees implied with the built product itself that 
>> >> >>> are
>> >> >>> the
>> >> >>> concern here. And I tend to agree with Doug that this might be a 
>> >> >>> problem
>> >> >>> for us
>> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >> >>> point is
>> >> >>> not enough. Kolla is not the only project producing container images 
>> >> >>> and
>> >> >>> we need
>> >> >>> to be consistent in the way we release these images.
>> >> >>>
>> >> >>> Nothing prevents people for building their own images and uploading 
>> >> >>> them
>> >> >>> to
>> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
>> >> >>> problem.
>> >> >>
>> >> >>
>> >> >> I totally subscribe to the concerns around publishing binaries (under
>> >> >> any form), and the expectations in terms of security maintenance that 
>> >> >> it
>> >> >> would set on the publisher. At the same time, we need to have images
>> >> >> available, for convenience and testing. So what is the best way to
>> >> >> achieve that without setting strong security maintenance expectations
>> >> >> for the OpenStack community ? We have several options:
>> >> >>
>> >> >> 1/ Have third-parties publish images
>> >> >> It is the current situation. The issue is that the Kolla team (and
>> >> >> likely others) would rather automate the process and use OpenStack
>> >> >> infrastructure for it.
>> >> >>
>> >> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> >> This would allow to automate the process, but it would be a bit weird 
>> >> >> to
>> >> >> use common infra resources to publish in a private repo.
>> >> >>
>> >> >> 3/ Publish transient (per-commit or daily) images
>> >> >> A "daily build" (especially if you replace it every day) would set
>> >> >> relatively-limited expectations in terms of maintenance. It would end 
>> >> >> up
>> >> >> picking up security updates in upstream layers, even if not 
>> >> >> immediately.
>> >> >>
>> >> >> 4/ Publish images and own them
>> >> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> >> those images and publish them officially.
>> >> >>
>> >> >> Personally I think (4) is not realistic. I think we could make (3) 
>> >> >> work,
>> >> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >> >
>> >> >
>> >> > Agreed #4 is a bit unrealistic.
>> >> >
>> >> > Not sure I understand the difference between #2 and #3. Is it just the
>> >> > cadence?
>> >> >
>> >> > I'd prefer for these builds to have a daily cadence because it sets the
>> >> > expectations w.r.t maintenance right: "These images are daily builds 
>> >> > and not
>> >> > certified releases. For stable builds you're better off building it
>> >> > yourself"
>> >>
>> >> And daily builds are exactly what I wanted in the first place:) We
>> >> probably will keep publishing release packages too, but we can be so
>> >> called 3rd party. I also agree [4] is completely unrealistic and I
>> >> would be against putting such heavy burden of responsibility on any
>> >> community, including Kolla.
>> >>
>> >> While daily cadence will send message that it's not stable, truth will
>> >> be that it will be more stable than what people would normally build
>> >> locally (again, it passes more gates), but I'm totally fine in not
>> >> saying that and let people decide how they want to use it.
>> >>
>> >> So, can we move on with implementation?
>> >
>> > I don't want the images published to docker hub. Are they still useful
>> > to you if they aren't published?
>>

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:49, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
>> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
>> >> So another consideration. Do you think whole rule of "not building
>> >> binaries" should be reconsidered? We are kind of new use case here. We
>> >> aren't distro but we are packagers (kind of). I don't think putting us
>> >> on equal footing as Red Hat, Canonical or other companies is correct
>> >> here.
>> >>
>> >> K8s is something we want to work with, and what we are discussing is
>> >> central to how k8s is used. K8s community creates this culture of
>> >> "organic packages" built by anyone, most of companies/projects already
>> >> have semi-official container images and I think expectations on
>> >> quality of these are well...none? You get what you're given and if you
>> >> don't agree, there is always way to reproduce this yourself.
>> >>
>> >> [Another huge snip]
>> >>
>> >
>> > I wanted to have the discussion, but my position for now is that
>> > we should continue as we have been and not change the policy.
>> >
>> > I don't have a problem with any individual or group of individuals
>> > publishing their own organic packages. The issue I have is with
>> > making sure it is clear those *are* "organic" and not officially
>> > supported by the broader community. One way to do that is to say
>> > they need to be built somewhere other than on our shared infrastructure.
>> > There may be other ways, though, so I'm looking for input on that.
>>
>> What I was trying to say here is, current discussion aside, maybe we
>> should revise this "not supported by broader community" rule. They may
>> very well be supported to a certain point. Support is not just yes or
>> no, it's all the levels in between. I think we can afford *some* level
>> of official support, even if that some level means best effort made by
>> community. If Kolla community, not an individual like myself, would
>> like to support these images best to our ability, why aren't we
>> allowed? As long as we are crystal clear what is scope of our support,
>> why can't we do it? I think we've already proven that it's going to be
>> tremendously useful for a lot of people, even in a shape we discuss
>> today, that is "best effort, you still need to validate it for
>> yourself"...
>
> Right, I understood that. So far I haven't heard anything to change
> my mind, though.
>
> I think you're underestimating the amount of risk you're taking on
> for yourselves and by extension the rest of the community, and
> introducing to potential consumers of the images, by promising to
> support production deployments with a small team of people without
> the economic structure in place to sustain the work.

Again, we tell people what it is and what it is not. I think "support" is a
loaded term here. Instead, we can create detailed documentation
explaining the lifecycle and the testing a certain container had to
pass before it lands on Dockerhub, and maybe add a link to the particular
set of jobs that container passed. The only thing we can offer is an
automated and transparent publishing process. On top of that? You are on
your own. But even within these boundaries, a lot of people could have a
better experience of running OpenStack...

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean Dague
On 05/16/2017 02:39 PM, Doug Hellmann wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
>> On 16 May 2017 at 09:40, Clint Byrum  wrote:
>>>
>>> What's at stake isn't so much "how do we get the bits to the users" but
>>> "how do we only get bits to users that they need". If you build and push
>>> daily, do you expect all of your users to also _pull_ daily? Redeploy
>>> all their containers? How do you detect that there's new CVE-fixing
>>> stuff in a daily build?
>>>
>>> This is really the realm of distributors that have full-time security
>>> teams tracking issues and providing support to paying customers.
>>>
>>> So I think this is a fine idea, however, it needs to include a commitment
>>> for a full-time paid security team who weighs in on every change to
>>> the manifest. Otherwise we're just lobbing time bombs into our users'
>>> data-centers.
>>
>> One thing I struggle with is...well...how does *not having* built
>> containers help with that? If your company has a full-time security
>> team, they can check our containers prior to deployment. If your
>> company doesn't, then building locally will be subject to same risks
>> as downloading from dockerhub. Difference is, dockerhub containers
>> were tested in our CI to the extent that our CI allows. No matter whether
>> or not you have your own security team, local CI, staging env, that
>> will be just a little bit of testing on top of that which you get for
>> free, and I think that's value enough for users to push for this.
> 
> The benefit of not building images ourselves is that we are clearly
> communicating that the responsibility for maintaining the images
> falls on whoever *does* build them. There can be no question in any
> user's mind that the community somehow needs to maintain the content
> of the images for them, just because we're publishing new images
> at some regular cadence.

+1. It is really easy to think that saying "don't use this in
production" prevents people from using it in production. See: User
Survey 2017 and the number of folks reporting DevStack as their
production deployment tool.

We need to not only manage artifacts, but expectations. And with all the
confusion of projects in the openstack git namespace being officially
blessed openstack projects over the past few years, I can't imagine
people not thinking that openstack infra generated content in dockerhub
is officially supported content.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> >> So another consideration. Do you think whole rule of "not building
> >> binaries" should be reconsidered? We are kind of new use case here. We
> >> aren't distro but we are packagers (kind of). I don't think putting us
> >> on equal footing as Red Hat, Canonical or other companies is correct
> >> here.
> >>
> >> K8s is something we want to work with, and what we are discussing is
> >> central to how k8s is used. K8s community creates this culture of
> >> "organic packages" built by anyone, most of companies/projects already
> >> have semi-official container images and I think expectations on
> >> quality of these are well...none? You get what you're given and if you
> >> don't agree, there is always way to reproduce this yourself.
> >>
> >> [Another huge snip]
> >>
> >
> > I wanted to have the discussion, but my position for now is that
> > we should continue as we have been and not change the policy.
> >
> > I don't have a problem with any individual or group of individuals
> > publishing their own organic packages. The issue I have is with
> > making sure it is clear those *are* "organic" and not officially
> > supported by the broader community. One way to do that is to say
> > they need to be built somewhere other than on our shared infrastructure.
> > There may be other ways, though, so I'm looking for input on that.
> 
> What I was trying to say here is, current discussion aside, maybe we
> should revise this "not supported by broader community" rule. They may
> very well be supported to a certain point. Support is not just yes or
> no, it's all the levels in between. I think we can afford *some* level
> of official support, even if that some level means best effort made by
> community. If Kolla community, not an individual like myself, would
> like to support these images best to our ability, why aren't we
> allowed? As long as we are crystal clear what is scope of our support,
> why can't we do it? I think we've already proven that it's going to be
> tremendously useful for a lot of people, even in a shape we discuss
> today, that is "best effort, you still need to validate it for
> yourself"...

Right, I understood that. So far I haven't heard anything to change
my mind, though.

I think you're underestimating the amount of risk you're taking on
for yourselves and by extension the rest of the community, and
introducing to potential consumers of the images, by promising to
support production deployments with a small team of people without
the economic structure in place to sustain the work.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:33, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
>> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >> >>
>> >> >> Flavio Percoco wrote:
>> >> >>>
>> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >> >>> projects
>> >> >>> in any kind of built form. This was also one of the concerns I raised
>> >> >>> when
>> >> >>> working on the proposal to support other programming languages. The
>> >> >>> problem of
>> >> >>> releasing built images goes beyond the infrastructure requirements. 
>> >> >>> It's
>> >> >>> the
>> >> >>> message and the guarantees implied with the built product itself that 
>> >> >>> are
>> >> >>> the
>> >> >>> concern here. And I tend to agree with Doug that this might be a 
>> >> >>> problem
>> >> >>> for us
>> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >> >>> point is
>> >> >>> not enough. Kolla is not the only project producing container images 
>> >> >>> and
>> >> >>> we need
>> >> >>> to be consistent in the way we release these images.
>> >> >>>
>> >> >>> Nothing prevents people for building their own images and uploading 
>> >> >>> them
>> >> >>> to
>> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
>> >> >>> problem.
>> >> >>
>> >> >>
>> >> >> I totally subscribe to the concerns around publishing binaries (under
>> >> >> any form), and the expectations in terms of security maintenance that 
>> >> >> it
>> >> >> would set on the publisher. At the same time, we need to have images
>> >> >> available, for convenience and testing. So what is the best way to
>> >> >> achieve that without setting strong security maintenance expectations
>> >> >> for the OpenStack community ? We have several options:
>> >> >>
>> >> >> 1/ Have third-parties publish images
>> >> >> It is the current situation. The issue is that the Kolla team (and
>> >> >> likely others) would rather automate the process and use OpenStack
>> >> >> infrastructure for it.
>> >> >>
>> >> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> >> This would allow to automate the process, but it would be a bit weird 
>> >> >> to
>> >> >> use common infra resources to publish in a private repo.
>> >> >>
>> >> >> 3/ Publish transient (per-commit or daily) images
>> >> >> A "daily build" (especially if you replace it every day) would set
>> >> >> relatively-limited expectations in terms of maintenance. It would end 
>> >> >> up
>> >> >> picking up security updates in upstream layers, even if not 
>> >> >> immediately.
>> >> >>
>> >> >> 4/ Publish images and own them
>> >> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> >> those images and publish them officially.
>> >> >>
>> >> >> Personally I think (4) is not realistic. I think we could make (3) 
>> >> >> work,
>> >> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >> >
>> >> >
>> >> > Agreed #4 is a bit unrealistic.
>> >> >
>> >> > Not sure I understand the difference between #2 and #3. Is it just the
>> >> > cadence?
>> >> >
>> >> > I'd prefer for these builds to have a daily cadence because it sets the
>> >> > expectations w.r.t maintenance right: "These images are daily builds 
>> >> > and not
>> >> > certified releases. For stable builds you're better off building it
>> >> > yourself"
>> >>
>> >> And daily builds are exactly what I wanted in the first place:) We
>> >> probably will keep publishing release packages too, but we can be so
>> >> called 3rd party. I also agree [4] is completely unrealistic and I
>> >> would be against putting such heavy burden of responsibility on any
>> >> community, including Kolla.
>> >>
>> >> While daily cadence will send message that it's not stable, truth will
>> >> be that it will be more stable than what people would normally build
>> >> locally (again, it passes more gates), but I'm totally fine in not
>> >> saying that and let people decide how they want to use it.
>> >>
>> >> So, can we move on with implementation?
>> >
>> > I don't want the images published to docker hub. Are they still useful
>> > to you if they aren't published?
>>
>> What do you mean? We need images available...whether it's dockerhub,
>> infra-hosted registry or any other way to have them, we need to be
>> able to have images that are available and fresh without building.
>> Dockerhub/quay.io is least problems for infra team/resources.
>
> There are 2 separate concerns.
>
> The first concern is whether this is a good idea at all, from a
> policy perspective. Do we have the people to maintain the images,
> track CVEs, etc.? Do we have the response time to update or remove
> bad images? 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Joshua Harlow

My guess is same with octavia.

https://github.com/openstack/octavia/tree/master/diskimage-create#diskimage-builder-script-for-creating-octavia-amphora-images

-Josh

Fox, Kevin M wrote:

+1. Ironic and Trove have the same issues as well. Lowering the bar in order to 
kick the tires will help OpenStack a lot in adoption.

From: Sean Dague [s...@dague.net]
Sent: Tuesday, May 16, 2017 6:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 05/16/2017 09:24 AM, Doug Hellmann wrote:




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
> On 16 May 2017 at 09:40, Clint Byrum  wrote:
> >
> > What's at stake isn't so much "how do we get the bits to the users" but
> > "how do we only get bits to users that they need". If you build and push
> > daily, do you expect all of your users to also _pull_ daily? Redeploy
> > all their containers? How do you detect that there's new CVE-fixing
> > stuff in a daily build?
> >
> > This is really the realm of distributors that have full-time security
> > teams tracking issues and providing support to paying customers.
> >
> > So I think this is a fine idea, however, it needs to include a commitment
> > for a full-time paid security team who weighs in on every change to
> > the manifest. Otherwise we're just lobbing time bombs into our users'
> > data-centers.
> 
> One thing I struggle with is...well...how does *not having* built
> containers help with that? If your company has a full-time security
> team, they can check our containers prior to deployment. If your
> company doesn't, then building locally will be subject to the same
> risks as downloading from Dockerhub. The difference is, Dockerhub
> containers were tested in our CI, to the extent that our CI allows.
> No matter whether or not you have your own security team, local CI,
> or staging env, that CI testing is a little extra on top, which you
> get for free, and I think that's value enough for users to push for this.

The benefit of not building images ourselves is that we clearly
communicate that the responsibility for maintaining the images
falls on whoever *does* build them. No user can then be under the
impression that the community somehow needs to maintain the content
of the images for them just because we're publishing new images
at some regular cadence.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
I'm not sure I follow the problem. The containers being built don't pull from 
infra's mirrors, so they can be secure containers. Once built, during 
deploy/testing, they don't install anything, so shouldn't have any issues there 
either.

Am I misunderstanding?

Thanks,
Kevin


From: Sam Yaple [sam...@yaple.net]
Sent: Tuesday, May 16, 2017 7:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

I would like to bring up a subject that hasn't really been discussed in this 
thread yet, forgive me if I missed an email mentioning this.

What I personally would like to see is a publishing infrastructure to allow 
pushing built images to an internal infra mirror/repo/registry for consumption 
of internal infra jobs (deployment tools like kolla-ansible and 
openstack-ansible). The images built from infra mirrors with security turned 
off are perfect for testing internally to infra.

If you build images properly in infra, then you will have an image that is not 
security checked (no gpg verification of packages) and completely unverifiable. 
These are absolutely not images we want to push to DockerHub/quay for obvious 
reasons. Security and verification being chief among them. They are absolutely 
not images that should ever be run in production and are only suited for 
testing. These are the only types of images that can come out of infra.

Thanks,
SamYaple

On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski wrote:
On 16 May 2017 at 06:22, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> Flavio Percoco wrote:
>> > From a release perspective, as Doug mentioned, we've avoided releasing 
>> > projects
>> > in any kind of built form. This was also one of the concerns I raised when
>> > working on the proposal to support other programming languages. The 
>> > problem of
>> > releasing built images goes beyond the infrastructure requirements. It's 
>> > the
>> > message and the guarantees implied with the built product itself that are 
>> > the
>> > concern here. And I tend to agree with Doug that this might be a problem 
>> > for us
>> > as a community. Unfortunately, putting your name, Michal, as contact point 
>> > is
>> > not enough. Kolla is not the only project producing container images and 
>> > we need
>> > to be consistent in the way we release these images.
>> >
>> > Nothing prevents people from building their own images and uploading them to
>> > dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>>
>
> At the forum we talked about putting test images on a "private"
> repository hosted on openstack.org somewhere. I think 
> that's option
> 3 from your list?
>
> Paul may be able to shed more light on the details of the technology
> (maybe it's just an Apache-served repo, rather than a full blown
> instance of Docker's service, for example).

Issues with that are

1. Apache-served is harder to use because we want to follow the Docker API
and we'd have to reimplement it
2. Running a registry is a single command
3. If we host it in infra, in case someone actually uses it (there
will be people like that), it could potentially eat up a lot of
network traffic
4. With local caching of images (working already) in nodepools we
lose 
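For reference, the "single command" in item 2 refers to Docker's official registry image; a minimal sketch of running a local registry and pushing to it follows (illustrative image names and ports, not infra's actual setup):

```shell
# Start a local Docker registry using Docker's official registry:2 image
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag a locally built image for the local registry and push it
# (kolla/centos-binary-nova-api is an illustrative image name)
docker tag kolla/centos-binary-nova-api:4.0.0 \
    localhost:5000/kolla/centos-binary-nova-api:4.0.0
docker push localhost:5000/kolla/centos-binary-nova-api:4.0.0

# Consumers then pull from the same endpoint
docker pull localhost:5000/kolla/centos-binary-nova-api:4.0.0
```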

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
>> So another consideration. Do you think the whole rule of "not building
>> binaries" should be reconsidered? We are kind of a new use case here. We
>> aren't a distro, but we are packagers (of a kind). I don't think putting
>> us on an equal footing with Red Hat, Canonical or other companies is
>> correct here.
>>
>> K8s is something we want to work with, and what we are discussing is
>> central to how k8s is used. The K8s community creates this culture of
>> "organic packages" built by anyone; most companies/projects already
>> have semi-official container images, and I think expectations on the
>> quality of these are, well...none? You get what you're given, and if you
>> don't agree, there is always a way to reproduce this yourself.
>>
>> [Another huge snip]
>>
>
> I wanted to have the discussion, but my position for now is that
> we should continue as we have been and not change the policy.
>
> I don't have a problem with any individual or group of individuals
> publishing their own organic packages. The issue I have is with
> making sure it is clear those *are* "organic" and not officially
> supported by the broader community. One way to do that is to say
> they need to be built somewhere other than on our shared infrastructure.
> There may be other ways, though, so I'm looking for input on that.

What I was trying to say here is, current discussion aside, maybe we
should revise this "not supported by the broader community" rule. They may
very well be supported up to a certain point. Support is not just yes or
no; there are all the levels in between. I think we can afford *some*
level of official support, even if that level means best effort by the
community. If the Kolla community, not an individual like myself, would
like to support these images to the best of our ability, why aren't we
allowed to? As long as we are crystal clear about the scope of our
support, why can't we do it? I think we've already proven that it's going
to be tremendously useful for a lot of people, even in the shape we
discuss today, that is "best effort, you still need to validate it for
yourself"...

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >> >>
> >> >> Flavio Percoco wrote:
> >> >>>
> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >> >>> projects
> >> >>> in any kind of built form. This was also one of the concerns I raised
> >> >>> when
> >> >>> working on the proposal to support other programming languages. The
> >> >>> problem of
> >> >>> releasing built images goes beyond the infrastructure requirements. 
> >> >>> It's
> >> >>> the
> >> >>> message and the guarantees implied with the built product itself that 
> >> >>> are
> >> >>> the
> >> >>> concern here. And I tend to agree with Doug that this might be a 
> >> >>> problem
> >> >>> for us
> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >> >>> point is
> >> >>> not enough. Kolla is not the only project producing container images 
> >> >>> and
> >> >>> we need
> >> >>> to be consistent in the way we release these images.
> >> >>>
> >> >>> Nothing prevents people from building their own images and uploading 
> >> >>> them
> >> >>> to
> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
> >> >>> problem.
> >> >>
> >> >>
> >> >> I totally subscribe to the concerns around publishing binaries (under
> >> >> any form), and the expectations in terms of security maintenance that it
> >> >> would set on the publisher. At the same time, we need to have images
> >> >> available, for convenience and testing. So what is the best way to
> >> >> achieve that without setting strong security maintenance expectations
> >> >> for the OpenStack community ? We have several options:
> >> >>
> >> >> 1/ Have third-parties publish images
> >> >> It is the current situation. The issue is that the Kolla team (and
> >> >> likely others) would rather automate the process and use OpenStack
> >> >> infrastructure for it.
> >> >>
> >> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> >> This would allow to automate the process, but it would be a bit weird to
> >> >> use common infra resources to publish in a private repo.
> >> >>
> >> >> 3/ Publish transient (per-commit or daily) images
> >> >> A "daily build" (especially if you replace it every day) would set
> >> >> relatively-limited expectations in terms of maintenance. It would end up
> >> >> picking up security updates in upstream layers, even if not immediately.
> >> >>
> >> >> 4/ Publish images and own them
> >> >> Staff release / VMT / stable team in a way that lets us properly own
> >> >> those images and publish them officially.
> >> >>
> >> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> >> and I prefer it to (2). If all else fails, we should keep (1).
> >> >
> >> >
> >> > Agreed #4 is a bit unrealistic.
> >> >
> >> > Not sure I understand the difference between #2 and #3. Is it just the
> >> > cadence?
> >> >
> >> > I'd prefer for these builds to have a daily cadence because it sets the
> >> > expectations w.r.t maintenance right: "These images are daily builds and 
> >> > not
> >> > certified releases. For stable builds you're better off building it
> >> > yourself"
> >>
> >> And daily builds are exactly what I wanted in the first place:) We
> >> probably will keep publishing release packages too, but we can be so
> >> called 3rd party. I also agree [4] is completely unrealistic and I
> >> would be against putting such heavy burden of responsibility on any
> >> community, including Kolla.
> >>
> >> While a daily cadence will send the message that it's not stable, the
> >> truth is that it will be more stable than what people would normally
> >> build locally (again, it passes more gates), but I'm totally fine with
> >> not saying that and letting people decide how they want to use it.
> >>
> >> So, can we move on with implementation?
> >
> > I don't want the images published to docker hub. Are they still useful
> > to you if they aren't published?
> 
> What do you mean? We need images available...whether it's dockerhub,
> infra-hosted registry or any other way to have them, we need to be
> able to have images that are available and fresh without building.
> Dockerhub/quay.io is least problems for infra team/resources.

There are 2 separate concerns.

The first concern is whether this is a good idea at all, from a
policy perspective. Do we have the people to maintain the images,
track CVEs, etc.? Do we have the response time to update or remove
bad images? Can we, as a community, actually staff the support to
an appropriate level? Or, can we clearly communicate that we do not
support the images for production use and effectively avoid having
someone start to rely on them?

The 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-16 17:41:28 +:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
> > Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
> > > If you build images properly in infra, then you will have an image that is
> > > not security checked (no gpg verification of packages) and completely
> > > unverifiable. These are absolutely not images we want to push to
> > > DockerHub/quay for obvious reasons. Security and verification being chief
> > > among them. They are absolutely not images that should ever be run in
> > > production and are only suited for testing. These are the only types of
> > > images that can come out of infra.
> > 
> > This sounds like an implementation detail of option 3? I think not
> > signing the images does help indicate that they're not meant to be used
> > in production environments.
> [...]
> 
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.
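For context, the "disable secure APT" toggle Jeremy describes is typically a one-line apt configuration drop-in along these lines (an illustrative snippet; the exact file and setting used in the CI images may differ):

```
# /etc/apt/apt.conf.d/99unauthenticated  (illustrative path)
APT::Get::AllowUnauthenticated "true";
Acquire::AllowInsecureRepositories "true";
```

With such a drop-in in place, apt installs packages from unsigned indices without verification, which is exactly why images built against those mirrors are unsuitable for production.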

Thanks for clarifying; that makes more sense.



Re: [openstack-dev] [networking-sfc] pep8 failing

2017-05-16 Thread Ihar Hrachyshka
Make sure you have the latest neutron-lib in your tree: neutron-lib==1.6.0

On Tue, May 16, 2017 at 3:05 AM, Vikash Kumar wrote:
> Hi Team,
>
>   pep8 is failing on master. Translation hint helpers were removed from
> LOG messages. Was this done on purpose? Let me know if it was not; I will
> change it.
>
> ./networking_sfc/db/flowclassifier_db.py:342:13: N531  Log messages require
> translation hints!
> LOG.info("Deleting a non-existing flow classifier.")
> ^
> ./networking_sfc/db/sfc_db.py:383:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port chain.")
> ^
> ./networking_sfc/db/sfc_db.py:526:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port pair.")
> ^
> ./networking_sfc/db/sfc_db.py:658:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port pair group.")
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:38:9: N531  Log
> messages require translation hints!
> LOG.info("Configured Flow Classifier drivers: %s", names)
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:44:9: N531  Log
> messages require translation hints!
> LOG.info("Loaded Flow Classifier drivers: %s",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:80:9: N531  Log
> messages require translation hints!
> LOG.info("Registered Flow Classifier drivers: %s",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:87:13: N531  Log
> messages require translation hints!
> LOG.info("Initializing Flow Classifier driver '%s'",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:107:17: N531  Log
> messages require translation hints!
> LOG.error(
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:63:17: N531  Log messages
> require translation hints!
> LOG.error("Create flow classifier failed, "
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:87:17: N531  Log messages
> require translation hints!
> LOG.error("Update flow classifier failed, "
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:102:17: N531  Log
> messages require translation hints!
> LOG.error("Delete flow classifier failed, "
> ^
> ./networking_sfc/services/sfc/driver_manager.py:38:9: N531  Log messages
> require translation hints!
> LOG.info("Configured SFC drivers: %s", names)
> ^
> ./networking_sfc/services/sfc/driver_manager.py:43:9: N531  Log messages
> require translation hints!
> LOG.info("Loaded SFC drivers: %s", self.names())
> ^
> ./networking_sfc/services/sfc/driver_manager.py:78:9: N531  Log messages
> require translation hints!
> LOG.info("Registered SFC drivers: %s",
> ^
> ./networking_sfc/services/sfc/driver_manager.py:85:13: N531  Log messages
> require translation hints!
> LOG.info("Initializing SFC driver '%s'", driver.name)
> ^
> ./networking_sfc/services/sfc/driver_manager.py:104:17: N531  Log messages
> require translation hints!
> LOG.error(
> ^
> ./networking_sfc/services/sfc/plugin.py:57:17: N531  Log messages require
> translation hints!
> LOG.error("Create port chain failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:82:17: N531  Log messages require
> translation hints!
> LOG.error("Update port chain failed, port_chain '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:97:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port chain failed, portchain '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:122:17: N531  Log messages require
> translation hints!
> LOG.error("Create port pair failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:144:17: N531  Log messages require
> translation hints!
> LOG.error("Update port pair failed, port_pair '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:159:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port pair failed, port_pair '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:185:17: N531  Log messages require
> translation hints!
> LOG.error("Create port pair group failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:213:17: N531  Log messages require
> translation hints!
> LOG.error("Update port pair group failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:229:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port pair 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> So another consideration. Do you think the whole rule of "not building
> binaries" should be reconsidered? We are kind of a new use case here. We
> aren't a distro, but we are packagers (of a kind). I don't think putting
> us on an equal footing with Red Hat, Canonical or other companies is
> correct here.
> 
> K8s is something we want to work with, and what we are discussing is
> central to how k8s is used. The K8s community creates this culture of
> "organic packages" built by anyone; most companies/projects already
> have semi-official container images, and I think expectations on the
> quality of these are, well...none? You get what you're given, and if you
> don't agree, there is always a way to reproduce this yourself.
> 
> [Another huge snip]
> 

I wanted to have the discussion, but my position for now is that
we should continue as we have been and not change the policy.

I don't have a problem with any individual or group of individuals
publishing their own organic packages. The issue I have is with
making sure it is clear those *are* "organic" and not officially
supported by the broader community. One way to do that is to say
they need to be built somewhere other than on our shared infrastructure.
There may be other ways, though, so I'm looking for input on that.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
+1. Ironic and Trove have the same issues as well. Lowering the bar for 
kicking the tires will help OpenStack adoption a lot.

From: Sean Dague [s...@dague.net]
Sent: Tuesday, May 16, 2017 6:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 05/16/2017 09:24 AM, Doug Hellmann wrote:
> Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
>> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
>>> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>>>
 On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

 Today we do publish build instructions, that's what Kolla is. We also
 publish built containers already, just we do it manually on release
 today. If we decide to block it, I assume we should stop doing that
 too? That will hurt users who use this piece of Kolla, and I'd hate
 to hurt our users:(
>>>
>>> Well, that's the question. Today we have teams publishing those
>>> images themselves, right? And the proposal is to have infra do it?
>>> That change could be construed to imply that there is more of a
>>> relationship with the images and the rest of the community (remember,
>>> folks outside of the main community activities do not always make
>>> the same distinctions we do about teams). So, before we go ahead
>>> with that, I want to make sure that we all have a chance to discuss
>>> the policy change and its implications.
>>
>> Sorry for hijacking the thread, but we have a similar scenario for example in
>> Sahara. It is about full VM images containing Hadoop/Spark/other_big_data
>> stuff, and not containers, but it's looks really the same.
>> So far ready-made images have been published under 
>> http://sahara-files.mirantis.com/images/upstream/, but we are looking to 
>> have them hosted on
>> openstack.org, just like other artifacts.
>>
>> We asked about this few days ago on openstack-infra@, but no answer so far
>> (the Summit didn't help):
>>
>> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
>>
>> I think that the answer to the question raised in this thread is definitely
>> going to be relevant for our use case.
>>
>> Ciao
>
> Thanks for raising this. I think the same concerns apply to VM images.

Agreed.

-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
Sorry, I'm a bit late to the discussion.

Over the time I've maintained OpenStack clouds, I've seen integration
issues pop up many times between distro packages and OpenStack packages.
The released OpenStack packages are usually fine; changes in the distros
are what break Kolla builds over time.

In the past Kolla has dealt with this by having users show up, say "the
*stuff* doesn't build", and then someone on IRC scrambles to figure out
why it broke. This is not a good place to be in. We even suffer through it
ourselves with the gates: an upstream releases something, everything
breaks, and then we all spend our time debugging the same problem rather
than some of us debugging that and the rest making forward progress.

So, we're starting to break up the gate jobs into periodic gates that test 
known good kolla stuff against newer upstream stuff to see if it works. if it 
does, it caches it for the regular gate tests for new patch sets. If it fails, 
then someone has time to look at it without the rest of the gates being broken. 
As part of this process we also produce known working, up to date, openstack 
stable release containers, since we need them for upgrade gate testing.

So, basically to have proper gates in kolla, we have to do all the work of 
building up to date stable/trunk containers that are tested with the gate suite 
of tests.

So, the real question is: can we go the last 2 feet and push them to Docker
Hub rather than doing it manually, like we do today, where they tend to rot?
There is a fair amount of benefit to some users and very little additional cost 
to openstack. I see very little reason not to.

We should still recommend users build the containers themselves. But for
many use cases, such as testing the waters, an operator might just want the
easiest way to see the thing work (pull updated containers from the hub),
prove out that it's worth doing with Kolla, and then either build their own
containers for production or, better yet, pay a distro for support.

We want to make it as easy as possible to try out OpenStack. This is one of
the biggest and most reported problems with OpenStack. Saying "step 1: you
must always build all the containers yourself" is not part of solving that
problem.

Thanks,
Kevin


From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, May 16, 2017 6:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>Flavio Percoco wrote:
>> From a release perspective, as Doug mentioned, we've avoided releasing 
>> projects
>> in any kind of built form. This was also one of the concerns I raised when
>> working on the proposal to support other programming languages. The problem 
>> of
>> releasing built images goes beyond the infrastructure requirements. It's the
>> message and the guarantees implied with the built product itself that are the
>> concern here. And I tend to agree with Doug that this might be a problem for 
>> us
>> as a community. Unfortunately, putting your name, Michal, as contact point is
>> not enough. Kolla is not the only project producing container images and we 
>> need
>> to be consistent in the way we release these images.
>>
>> Nothing prevents people from building their own images and uploading them to
>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>
>I totally subscribe to the concerns around publishing binaries (under
>any form), and the expectations in terms of security maintenance that it
>would set on the publisher. At the same time, we need to have images
>available, for convenience and testing. So what is the best way to
>achieve that without setting strong security maintenance expectations
>for the OpenStack community ? We have several options:
>
>1/ Have third-parties publish images
>It is the current situation. The issue is that the Kolla team (and
>likely others) would rather automate the process and use OpenStack
>infrastructure for it.
>
>2/ Have third-parties publish images, but through OpenStack infra
>This would allow to automate the process, but it would be a bit weird to
>use common infra resources to publish in a private repo.
>
>3/ Publish transient (per-commit or daily) images
>A "daily build" (especially if you replace it every day) would set
>relatively-limited expectations in terms of maintenance. It would end up
>picking up security updates in upstream layers, even if not immediately.
>
>4/ Publish images and own them
>Staff release / VMT / stable team in a way that lets us properly own
>those images and publish them officially.
>
>Personally I think (4) is not realistic. I think we could make (3) work,
>and I prefer it to (2). If all else fails, we should keep (1).

Agreed #4 is a bit unrealistic.

Not sure I understand the 

[openstack-dev] [all][release] sphinx 1.6.1 behavior changes triggering job failures

2017-05-16 Thread Doug Hellmann
We now have 2 separate bugs related to changes in today's Sphinx 1.6.1
release causing our doc jobs to fail in different ways.

https://bugs.launchpad.net/pbr/+bug/1691129 describes a traceback
produced when building the developer documentation through pbr.

https://bugs.launchpad.net/reno/+bug/1691224 describes a change where
Sphinx now treats log messages at WARNING or ERROR level as reasons to
abort the build when strict mode is enabled.

I have a patch up to the global requirements list to block 1.6.1 for
builds following g-r and constraints:
https://review.openstack.org/#/c/465135/

Many of our doc builds do not use constraints, so if your doc build
fails you will want to apply the same change locally.
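Applied locally, that block is a one-line exclusion in a project's doc/test
requirements (illustrative fragment; the exact version floor varies by project,
the `!=1.6.1` exclusion is the point):

```
# test-requirements.txt: allow Sphinx, but skip the release that breaks doc builds
sphinx>=1.5.1,!=1.6.1  # BSD
```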

There's a patch in review for the reno issue. It would be great if
someone had time to look into a fix for pbr to make it work with
older and newer Sphinx.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Joshua Harlow

Fine with me,

I'd personally rather get down to say 2 'great' drivers for RPC,

And say 1 (or 2?) for notifications.

So ya, wfm.

-Josh

Mehdi Abaakouk wrote:

+1 too, I haven't seen its contributors in a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco









Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 09:38:34 -0400 (-0400), Davanum Srinivas wrote:
> See $TITLE :)

Trying not to rehash other points, I'm in favor of using
#openstack-dev for now until we see it's not working out. Creating a
new channel for this purpose before we've even undertaken the
experiment seems like a social form of premature optimization.

If the concern is that it's hard to get the attention of (I hate to
say "ping" since contextlessly highlighting people in channel to
find out whether they're around is especially annoying to me at
least) members of the OpenStack Technical Committee, the Infra
team's root sysadmins already solved this issue by all configuring
their clients to highlight on a specific keyword (in that case,
"infra-root" mentioned in channel gets the attention of most of our
rooters these days). Something like "tc-members" can be used to
address a question specifically to those on the TC who happen to be
around and paying attention and also gives people looking at the
logs a useful string to grep/search/whatever. I've gone ahead and
configured my client to highlight on that now.
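For reference, that kind of keyword highlight is a one-line client setting
(commands vary by client; these are the stock irssi and weechat forms):

```
# irssi: highlight whenever the keyword appears in channel
/hilight tc-members

# weechat equivalent
/set weechat.look.highlight "tc-members"
```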

As for losing context when a discussion transfers from informal
temperature taking, brainstorming and bikeshedding in IRC to a less
synchronous thread on the ML, simply making sure to include a URL to
the point in the channel log where the discussion began ought to be
sufficient (and should be encouraged _regardless_ of which channel
that was).
-- 
Jeremy Stanley




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 10:41, Jeremy Stanley  wrote:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
>> Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
>> > If you build images properly in infra, then you will have an image that is
>> > not security checked (no gpg verification of packages) and completely
>> > unverifiable. These are absolutely not images we want to push to
>> > DockerHub/quay for obvious reasons. Security and verification being chief
>> > among them. They are absolutely not images that should ever be run in
>> > production and are only suited for testing. These are the only types of
>> > images that can come out of infra.
>>
>> This sounds like an implementation detail of option 3? I think not
>> signing the images does help indicate that they're not meant to be used
>> in production environments.
> [...]
>
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.
> --
> Jeremy Stanley

This is a valid concern, but also one that is particularly easy to solve.
If we decide to use nightly builds (or midday in Hawaii? Any timezone
with the least traffic would do), we can skip the infra mirrors. In fact,
that approach would help us in a different sense as well. Since these
builds wouldn't be bound to any particular patchset, we could test them
extensively: voting deploy gates for both kolla-ansible and
kolla-kubernetes. I was reluctant to have deploy gates voting inside
Kolla, but that would allow us to do it. Net uplink consumption from
infra would also go down, as we won't need to publish registry tarballs
on every commit; we'll do it once a day at the most convenient hour.




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
[...]
> > If you build images properly in infra, then you will have an image that is
> > not security checked (no gpg verification of packages) and completely
> > unverifiable. These are absolutely not images we want to push to
> > DockerHub/quay for obvious reasons. Security and verification being chief
> > among them. They are absolutely not images that should ever be run in
> > production and are only suited for testing. These are the only types of
> > images that can come out of infra.
> 
> This sounds like an implementation detail of option 3? I think not
> signing the images does help indicate that they're not meant to be used
> in production environments.
[...]

I'm pretty sure Sam wasn't talking about whether or not the images
which get built are signed, but whether or not the package manager
used when building the images vets the distro packages it retrieves
(the Ubuntu package mirror we maintain in our CI doesn't have
"secure APT" signatures available for its indices so we disable that
security measure by default in the CI system to allow us to use
those mirrors). Point being, if images are built in the upstream CI
with packages from our Ubuntu package mirror then they are (at least
at present) not suitable for production use from a security
perspective for this particular reason even in absence of the other
concerns expressed.
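As an illustrative aside, the CI default described above amounts to an apt
configuration along these lines (the option names are real apt settings, but
the exact infra configuration may differ):

```
# /etc/apt/apt.conf.d/99unauthenticated (illustrative)
# "secure APT" index verification disabled so unsigned mirror indices work
APT::Get::AllowUnauthenticated "true";
Acquire::AllowInsecureRepositories "true";
```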
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread John Dickinson


On 14 May 2017, at 4:04, Sean Dague wrote:

> One of the things that came up in a logging Forum session is how much effort 
> operators are having to put into reconstructing flows for things like server 
> boot when they go wrong, as every time we jump a service barrier the 
> request-id is reset to something new. The back and forth between Nova / 
> Neutron and Nova / Glance would be definitely well served by this. Especially 
> if this is something that's easy to query in elastic search.
>
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from random 
> users. We're going to assume that's still a concern by some. However, since 
> the last time that came up, we've introduced the concept of "service users", 
> which are a set of higher priv services that we are using to wrap user 
> requests between services so that long running request chains (like image 
> snapshot) can complete. We trust these service users enough to keep on trucking 
> even after the user token has expired during these long-running operations. We 
> could use this same trust path for request-id chaining.
>
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, reset 
> the request_id to the local generated one. We'll log both the global and 
> local request ids. All of these changes happen in oslo.middleware, 
> oslo.context, oslo.log, and most projects won't need anything to get this 
> infrastructure.
>
> The python clients, and callers, will then need to be augmented to pass the 
> request-id in on requests. Servers will effectively decide when they want to 
> opt into calling other services this way.
>
> This only ends up logging the top line global request id as well as the last 
> leaf for each call. This does mean that full tree construction will take more 
> work if you are bouncing through 3 or more servers, but it's a step which I 
> think can be completed this cycle.
>
> I've got some more detailed notes, but before going through the process of 
> putting this into an oslo spec I wanted more general feedback on it so that 
> any objections we didn't think about yet can be raised before going through 
> the detailed design.
>
>   -Sean
>
> -- 
> Sean Dague
> http://dague.net
>


I'm not sure the best place to respond (mailing list or gerrit), so
I'll write this up and post it to both places.

I think the idea behind this proposal is great. It has the potential
to bring a lot of benefit to users who are tracing a request across
many different services, in part by making it easy to search in an
indexing system like ELK.

The current proposal has some elements that won't work with the way
Swift currently solves this problem. This is mostly due to the
proposed uuid-ish check for validation. However, the Swift solution
has a few aspects that I believe would be very helpful for the entire
community.
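For concreteness, the "strongly validated to the format (req-$uuid)" check from
Sean's proposal could look something like this sketch (function and header
names are illustrative, not actual oslo.middleware code). Note that Swift's
`tx...` transaction ids would fail such a check, which is exactly the
incompatibility at issue:

```python
import re
import uuid

# req- followed by a canonical lowercase uuid: 8-4-4-4-12 hex groups
REQUEST_ID_RE = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def inbound_request_id(headers, local_id=None):
    """Accept an inbound X-OpenStack-Request-ID only if it is req-$uuid."""
    candidate = headers.get('X-OpenStack-Request-ID', '')
    if REQUEST_ID_RE.match(candidate):
        return candidate
    # otherwise fall back to a locally generated request id
    return local_id or 'req-' + str(uuid.uuid4())
```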

NB: Swift returns both an `X-OpenStack-Request-ID` and an `X-Trans-ID`
header in every response. The `X-Trans-ID` was implemented before the
OpenStack request ID was proposed, and so we've kept the `X-Trans-ID` so
as not to break existing clients. The value of `X-OpenStack-Request-ID`
in any response from Swift is simply a mirror of the `X-Trans-ID` value.

The request id in Swift is made up of a few parts:

X-Openstack-Request-Id: txbea0071df2b0465082501-00591b3077saio-extraextra


In the code, this is generated from:

'tx%s-%010x%s' % (uuid.uuid4().hex[:21], time.time(), quote(trans_id_suffix))

...meaning that there are three parts to the request id. Let's take
each in turn.

The first part always starts with 'tx' (originally from the
"transaction id") and then is the first 21 hex characters of a uuid4.
The truncation is to limit the overall length of the value.

The second part is the hex value of the current time, padded to 10
characters.

Finally, the third part is the quoted suffix, and it defaults to the
empty string. The suffix itself can be made of two parts. The first is
configured in the Swift proxy server itself (ie the service that does
the logging) via the `trans_id_suffix` config. This allows an operator
to set a different suffix for each API endpoint or each region or each
cluster in order to help distinguish them in logs. For example, if a
deployment with multiple clusters uses centralized log aggregation, a
different trans_id_suffix value for each 
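Putting the three parts together, a minimal py3 sketch of the generation
scheme (the real Swift implementation differs in details; the explicit
`int(time.time())` here is for py3 `%x` formatting):

```python
import time
import uuid
from urllib.parse import quote

def generate_trans_id(suffix=''):
    # 'tx' + first 21 hex chars of a uuid4, then '-', a 10-hex-digit
    # timestamp, then the (url-quoted) operator-configured suffix
    return 'tx%s-%010x%s' % (uuid.uuid4().hex[:21],
                             int(time.time()),
                             quote(suffix))
```

With an empty suffix the id is always 34 characters: 23 for the `tx` prefix
plus truncated uuid, 1 for the dash, 10 for the hex timestamp.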

Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Davanum Srinivas wrote:
> On Tue, May 16, 2017 at 11:52 AM, Michał Jastrzębski  wrote:
>> On 16 May 2017 at 08:32, Doug Hellmann  wrote:
>>> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
 My preference would be to have an #openstack-tc channel.

 One thing I like about the dedicated meeting time was if I was not able to
 attend, or when I was just a casual observer, it was easy to catch up on
 what was discussed because it was all in one place and did not have any
 non TC conversations interlaced.

 If we just use -dev, there is a high chance there will be a lot of cross-
 talk during discussions. There would also be a lot of effort to grep
 through the full day of activity to find things relevant to TC
 discussions. If we have a dedicated channel for this, it makes it very
 easy for anyone to know where to go to get a clean, easy to read capture
 of all relevant discussions. I think that will be important with the
 lack of a captured and summarized meeting to look at.
>>>
>>> I definitely understand this desire. I think, though, that any
>>> significant conversations should be made discoverable via an email
>>> thread summarizing them. That honors the spirit of moving our
>>> "decision making" to asynchronous communication tools.

I also prefer we opt for an #openstack-tc channel. A channel is defined
by the topic of its discussions, and #openstack-dev is a catch-all, a
default channel. Reusing it for topical discussions will force everyone
to filter through random discussions in order to get to the
really-TC-related ones. Yes, it's not used much. But that isn't a good
reason to recycle it.

>> To both this and Dims's concerns, I actually think we need some place
>> to just come and ask "guys, is this fine?". If answer would be "let's
>> talk on ML because it's important", that's cool, but on the other hand
>> sometimes a simple "yes" would suffice. Not all conversations with the TC
>> require a mailing thread, but I'd love to have some "semi-official" TC
>> space where I can drop question, quickly discuss cross-project issues
>> and such.
> 
> Michal,
> 
> Let's try using the ping list on #openstack-dev channel:
> cdent dhellmann dims dtroyer emilienm flaper87 fungi johnthetubaguy
> mordred sdague smcginnis stevemar ttx
> 
> The IRC nicks are here:
> https://governance.openstack.org/tc/#current-members

If you need a ping list to reuse the default channel as a TC "office
hours" channel, that kind of proves that a dedicated channel would be
more appropriate :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Anita Kuno

On 2017-05-16 11:46 AM, Sean Dague wrote:

On 05/16/2017 11:17 AM, Sean McGinnis wrote:

On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:

Folks,

See $TITLE :)

Thanks,
Dims


My preference would be to have an #openstack-tc channel.

One thing I like about the dedicated meeting time was if I was not able to
attend, or when I was just a casual observer, it was easy to catch up on
what was discussed because it was all in one place and did not have any
non TC conversations interlaced.

If we just use -dev, there is a high chance there will be a lot of cross-
talk during discussions. There would also be a lot of effort to grep
through the full day of activity to find things relevant to TC
discussions. If we have a dedicated channel for this, it makes it very
easy for anyone to know where to go to get a clean, easy to read capture
of all relevant discussions. I think that will be important with the
lack of a captured and summarized meeting to look at.

The thing is, IRC should never be a summary or long term storage medium.
IRC is a discussion medium. It is a hallway track. It's where ideas
bounce around lots are left on the floor, there are lots of
misstatements as people explore things. It's not store and forward
messaging, it's realtime chat.

If we want digestible summaries with context, that's never IRC, and we
shouldn't expect people to look to IRC for that. It's source material at
best. I'm not sure of any IRC conversation that's ever been clean, easy
to read, and captures the entire context within it without jumping to
assumptions of shared background that the conversation participants
already have.

Summaries with context need to emerge from here for people to be able to
follow along (out to email or web), and work their way back into the
conversations.

-Sean


I'll disagree on this point.

I do agree IRC is a discussion medium. I further agree that any 
agreements decided upon need to be further disseminated via other media. 
However, I disagree the only value for those trying to catch up with 
items that took place in the past lies in a digestible summary. The 
conversation of how that agreement was arrived at holds great value.


Feel free to disregard what I have to say, because I'm not really 
involved right now. But I would like to feel that should occasion arise 
I could step back in, do my homework reading past conversations and have 
a reasonable understanding of the current state of things.


For me OpenStack is about people, foibles and mistakes included. I think 
there is a huge value in seeing how a conversation develops and how an 
agreement came into being, sometimes this is far more valuable to me 
than the agreement itself. Agreements and policies are constantly 
changing, but the process of discussion and how we reach this agreement 
is often more important both to me and as a demonstration to others of 
how to interact effectively than the final agreement, which will likely 
change next release or two.


If you are going to do away with tc meetings and I can't find the 
backstory in an IRC tc meeting log then at least let me find the 
backstory in a channel somewhere.


I am in favour of using #openstack-dev for this purpose. I appreciate 
Sean McGinnis's point about having the conversation focused, but I don't think 
you would get that even with a dedicated #openstack-tc channel. 
Either channel would include side conversations and unrelated chat; I 
don't see any way for that not to happen. So for me I would go with using 
what we already have, also keeping in mind Sean and Doug's previous points 
that we are already fractured enough; it sure would be nice to see some 
good use of already existing public spaces.


Thank you,
Anita.



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 09:40, Clint Byrum  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>> > Container images introduce some extra complexity, over the basic
>> > operating system style packages mentioned above. Due to the way
>> > they are constructed, they are likely to include content we don't
>> > produce ourselves (either in the form of base layers or via including
>> > build tools or other things needed when assembling the full image).
>> > That extra content means there would need to be more tracking of
>> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> > as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>>
>
> What's at stake isn't so much "how do we get the bits to the users" but
> "how do we only get bits to users that they need". If you build and push
> daily, do you expect all of your users to also _pull_ daily? Redeploy
> all their containers? How do you detect that there's new CVE-fixing
> stuff in a daily build?
>
> This is really the realm of distributors that have full-time security
> teams tracking issues and providing support to paying customers.
>
> So I think this is a fine idea, however, it needs to include a commitment
> for a full-time paid security team who weighs in on every change to
> the manifest. Otherwise we're just lobbing time bombs into our users'
> data-centers.

One thing I struggle with is...well...how does *not having* built
containers help with that? If your company has a full-time security
team, they can check our containers prior to deployment. If your
company doesn't, then building locally will be subject to the same risks
as downloading from Dockerhub. The difference is, the Dockerhub
containers were tested in our CI to the extent that our CI allows. Whether
or not you have your own security team, local CI, or staging env, that
will be just a little bit of testing on top of what you get for free,
and I think that's value enough for users to push for this.




Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock

And always get a write lock.

It is a slightly different way of getting those locks (via a context 
manager) but the implementation underneath is a deque; so fairness 
should be assured in FIFO order...


https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L139

and

https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L220

-Josh

Chris Friesen wrote:

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread? The implementations I've seen for a
ticket lock (in C at least) usually have the unlock routine wake up the
next pending locker.

Chris
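The ticket-lock pattern Chris mentions can be sketched in a few lines of
Python: each acquirer takes a ticket under the mutex, and each release wakes
the waiters so the holder of the next ticket proceeds. No dispatcher thread
is needed (illustrative sketch, not oslo.concurrency code):

```python
import threading

class TicketLock:
    """FIFO-fair lock: acquirers are served strictly in ticket order."""

    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # next ticket to hand out
        self._now_serving = 0   # ticket currently allowed to hold the lock

    def acquire(self):
        with self._cond:
            my_ticket = self._next_ticket
            self._next_ticket += 1
            while self._now_serving != my_ticket:
                self._cond.wait()

    def release(self):
        with self._cond:
            self._now_serving += 1
            # wake everyone; only the next ticket holder passes its check
            self._cond.notify_all()
```

The `notify_all()` is the simple-but-wasteful variant; keeping one condition
variable per waiter would wake only the next locker, at the cost of the
queue bookkeeping Clint describes.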



Re: [openstack-dev] [nova] [devstack] [deployment] nova-api and meta-api under uwsgi

2017-05-16 Thread Matt Riedemann

On 5/16/2017 11:11 AM, Chris Dent wrote:


(This is a followup to
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116267.html
but I don't have that around anymore to make a proper response to.)

In a devstack change:

https://review.openstack.org/#/c/457715/

nova-api and nova-metadata will be changed to run as WSGI
applications with a uwsgi server by default. This helps to enable
a few recent goals:

* everything under systemd in devstack
* minimizing custom ports for HTTP in devstack
* is part of a series of changes[1] which gets the compute api working
  under WSGI, including some devref for wsgi use:
  https://docs.openstack.org/developer/nova/wsgi.html
* helps enforce the idea that any WSGI server is okay

This last point is important consideration for deployers: Although
devstack will (once the change merges) default to using a
combination of apache2, mod_proxy_uwsgi, and uwsgi there is zero
requirement that deployments replicate that arrangement. The new
'nova-api-wsgi' and 'nova-metadata-wsgi' scripts provide a
module-level 'application' that can be run by any WSGI compliant
server.

In those contexts things like the path-prefix of an application and
the port used (if any) to host the application are entirely in the
domain of the web server's config, not the application. This is
a good thing, but it does mean that any deployment automation needs
to make some decisions about how to manipulate the web server's
configuration.
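As a sketch of that split: a module-level WSGI 'application' callable is all
the server needs, and port and path prefix live entirely in server config.
This is an illustrative stand-in, not the actual nova entry point:

```python
# Any WSGI-compliant server (uwsgi, mod_wsgi, gunicorn, ...) can host this
# module by pointing at its 'application' attribute; the server's own config
# decides the port and the path prefix (e.g. /compute).
def application(environ, start_response):
    body = b'{"status": "ok"}'
    start_response('200 OK',
                   [('Content-Type', 'application/json'),
                    ('Content-Length', str(len(body)))])
    return [body]
```

For local testing, `python -m wsgiref.simple_server`-style hosting or
`uwsgi --wsgi-file` both work against the same callable, which is the point:
the application carries no port or prefix of its own.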

Some other details which might be relevant:

In the devstack change the compute service is registered to run on a
default port of either 80 or 443 at '/compute' and _not_ on a custom
port.

The metadata API, however, continues to run as its own service on
its own port. In fact, it runs using solely uwsgi, without apache2
being involved at all.

Please follow up if there any questions.


[1] https://review.openstack.org/#/c/457283/
https://review.openstack.org/#/c/459413/
https://review.openstack.org/#/c/461289/






Thanks for taking the lead on this work Chris (with credit to sdague as 
well for the assist).


--

Thanks,

Matt



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
So another consideration. Do you think the whole rule of "not building
binaries" should be reconsidered? We are kind of a new use case here. We
aren't a distro, but we are packagers (kind of). I don't think putting us
on equal footing with Red Hat, Canonical, or other companies is correct
here.

K8s is something we want to work with, and what we are discussing is
central to how k8s is used. The k8s community creates this culture of
"organic packages" built by anyone; most companies/projects already
have semi-official container images, and I think expectations on the
quality of these are, well...none? You get what you're given, and if you
don't agree, there is always a way to reproduce it yourself.

[Another huge snip]



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Clint Byrum
Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> > Container images introduce some extra complexity, over the basic
> > operating system style packages mentioned above. Due to the way
> > they are constructed, they are likely to include content we don't
> > produce ourselves (either in the form of base layers or via including
> > build tools or other things needed when assembling the full image).
> > That extra content means there would need to be more tracking of
> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> > as needed.
> 
> We can do this by building daily, which was the plan in fact. If we
> build every day you have at most 24hrs old packages, CVEs and things
> like that on non-openstack packages are still maintained by distro
> maintainers.
> 

What's at stake isn't so much "how do we get the bits to the users" but
"how do we only get bits to users that they need". If you build and push
daily, do you expect all of your users to also _pull_ daily? Redeploy
all their containers? How do you detect that there's new CVE-fixing
stuff in a daily build?

This is really the realm of distributors that have full-time security
teams tracking issues and providing support to paying customers.

So I think this is a fine idea, however, it needs to include a commitment
for a full-time paid security team who weighs in on every change to
the manifest. Otherwise we're just lobbing time bombs into our users'
data-centers.



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Davanum Srinivas
On Tue, May 16, 2017 at 11:52 AM, Michał Jastrzębski  wrote:
> On 16 May 2017 at 08:32, Doug Hellmann  wrote:
>> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>>> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>>> > Folks,
>>> >
>>> > See $TITLE :)
>>> >
>>> > Thanks,
>>> > Dims
>>> >
>>>
>>> My preference would be to have an #openstack-tc channel.
>>>
>>> One thing I like about the dedicated meeting time was if I was not able to
>>> attend, or when I was just a casual observer, it was easy to catch up on
>>> what was discussed because it was all in one place and did not have any
>>> non TC conversations interlaced.
>>>
>>> If we just use -dev, there is a high chance there will be a lot of cross-
>>> talk during discussions. There would also be a lot of effort to grep
>>> through the full day of activity to find things relevant to TC
>>> discussions. If we have a dedicated channel for this, it makes it very
>>> easy for anyone to know where to go to get a clean, easy to read capture
>>> of all relevant discussions. I think that will be important with the
>>> lack of a captured and summarized meeting to look at.
>>>
>>> Sean
>>>
>>
>> I definitely understand this desire. I think, though, that any
>> significant conversations should be made discoverable via an email
>> thread summarizing them. That honors the spirit of moving our
>> "decision making" to asynchronous communication tools.
>
> To both this and Dims's concerns, I actually think we need some place
> to just come and ask "guys, is this fine?". If the answer is "let's
> talk on the ML because it's important", that's cool, but on the other
> hand sometimes a simple "yes" would suffice. Not all conversations with
> the TC require a mailing thread, but I'd love to have some
> "semi-official" TC space where I can drop a question, quickly discuss
> cross-project issues, and such.

Michal,

Let's try using the ping list on #openstack-dev channel:
cdent dhellmann dims dtroyer emilienm flaper87 fungi johnthetubaguy
mordred sdague smcginnis stevemar ttx

The IRC nicks are here:
https://governance.openstack.org/tc/#current-members

Looks like the foundation page needs refreshing, will ping folks about it.
https://www.openstack.org/foundation/tech-committee/

Thanks,
Dims

>> Doug



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [nova] [devstack] [deployment] nova-api and meta-api under uwsgi

2017-05-16 Thread Chris Dent


(This is a followup to
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116267.html
but I don't have that around anymore to make a proper response to.)

In a devstack change:

https://review.openstack.org/#/c/457715/

nova-api and nova-metadata will be changed to run as WSGI
applications with a uwsgi server by default. This helps to enable
a few recent goals:

* everything under systemd in devstack
* minimizing custom ports for HTTP in devstack
* is part of a series of changes[1] which gets the compute api working
  under WSGI, including some devref for wsgi use:
  https://docs.openstack.org/developer/nova/wsgi.html
* helps enforce the idea that any WSGI server is okay

This last point is an important consideration for deployers: although
devstack will (once the change merges) default to using a
combination of apache2, mod_proxy_uwsgi, and uwsgi there is zero
requirement that deployments replicate that arrangement. The new
'nova-api-wsgi' and 'nova-metadata-wsgi' scripts provide a
module-level 'application' that can be run by any WSGI compliant
server.

In those contexts things like the path-prefix of an application and
the port used (if any) to host the application are entirely in the
domain of the web server's config, not the application. This is
a good thing, but it does mean that any deployment automation needs
to make some decisions about how to manipulate the web server's
configuration.
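As a rough sketch of that contract (not code from the nova tree — the
callable below merely stands in for the `application` the real
nova-api-wsgi module exports, and the host/port are arbitrary):

```python
from wsgiref.simple_server import make_server

# Stand-in for the module-level 'application' a *-wsgi script exports;
# any WSGI-compliant server only needs a callable that accepts
# (environ, start_response).
def application(environ, start_response):
    body = b"compute API placeholder"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# The path prefix ('/compute') and listening port belong to the web
# server's configuration, not the application: wsgiref here, uwsgi or
# apache2 + mod_proxy_uwsgi in devstack, all hosting the same callable.
server = make_server("127.0.0.1", 0, application)  # port 0: any free port
server.server_close()
```

The same callable could be mounted anywhere the server's config chooses,
which is exactly why deployment automation has to make those decisions.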

Some other details which might be relevant:

In the devstack change the compute service is registered to run on a
default port of either 80 or 443 at '/compute' and _not_ on a custom
port.

The metadata API, however, continues to run as its own service on
its own port. In fact, it runs solely under uwsgi, without apache2
being involved at all.

Please follow up if there are any questions.


[1] https://review.openstack.org/#/c/457283/
https://review.openstack.org/#/c/459413/
https://review.openstack.org/#/c/461289/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ironic] Ironic-UI review requirements - single core reviews

2017-05-16 Thread Dmitry Tantsur

On 05/15/2017 09:10 PM, Julia Kreger wrote:

All,

In our new reality, in order to maximize velocity, I propose that we
loosen the review requirements for ironic-ui to allow faster
iteration. To this end, I suggest we move ironic-ui to using a single
core reviewer for code approval, along the same lines as Horizon[0].


Ok, Horizon example makes me feel a bit better about this :)



Our new reality is a fairly grim one, but there is always hope. We
have several distinct active core reviewers. The problem is available
time to review, and then getting any two reviewers to be on the same page,
at the same time, with the same patch set. Reducing the requirements
will help us iterate faster and reduce the time a revision waits for
approval to land, which should ultimately help everyone contributing.

If there are no objections from my fellow ironic folk, then I propose
we move to this for ironic-ui immediately.


I'm fine with that. As I mentioned to you, it's clearly more important to be
able to move forward than to make sure we never let a sub-perfect patch
slip through. Especially for leaf projects like UI.




Thanks,

-Julia

[0]: 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/113029.html







Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:32, Doug Hellmann  wrote:
> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>> > Folks,
>> >
>> > See $TITLE :)
>> >
>> > Thanks,
>> > Dims
>> >
>>
>> My preference would be to have an #openstack-tc channel.
>>
>> One thing I like about the dedicated meeting time was if I was not able to
>> attend, or when I was just a casual observer, it was easy to catch up on
>> what was discussed because it was all in one place and did not have any
>> non TC conversations interlaced.
>>
>> If we just use -dev, there is a high chance there will be a lot of cross-
>> talk during discussions. There would also be a lot of effort to grep
>> through the full day of activity to find things relevant to TC
>> discussions. If we have a dedicated channel for this, it makes it very
>> easy for anyone to know where to go to get a clean, easy to read capture
>> of all relevant discussions. I think that will be important with the
>> lack of a captured and summarized meeting to look at.
>>
>> Sean
>>
>
> I definitely understand this desire. I think, though, that any
> significant conversations should be made discoverable via an email
> thread summarizing them. That honors the spirit of moving our
> "decision making" to asynchronous communication tools.

To both this and Dims's concerns, I actually think we need some place
to just come and ask "guys, is this fine?". If the answer is "let's
talk on the ML because it's important", that's cool, but on the other
hand sometimes a simple "yes" would suffice. Not all conversations with
the TC require a mailing thread, but I'd love to have some
"semi-official" TC space where I can drop a question, quickly discuss
cross-project issues, and such.

> Doug


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Sean Dague
On 05/16/2017 11:28 AM, Eric Fried wrote:
>> The idea is that a regular user calling into a service should not
>> be able to set the request id, but outgoing calls from that service
>> to other services as part of the same request would.
> 
> Yeah, so can anyone explain to me why this is a real problem?  If a
> regular user wanted to be a d*ck and inject a bogus (or worse, I
> imagine, duplicated) request-id, can any actual harm come out of it?  Or
> does it just cause confusion to the guy reading the logs later?
> 
> (I'm assuming, of course, that the format will still be validated
> strictly (req-$UUID) to preclude code injection kind of stuff.)

Honestly, I don't know. I know it was once a concern. I'm totally happy
to remove the trust checking knowing we could add it back in later if
required.

Maybe reach out to some public cloud providers to know if they have any
issues with it?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:30, Emilien Macchi  wrote:
> On Tue, May 16, 2017 at 11:12 AM, Doug Hellmann  wrote:
>> Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
>>> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
>>> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
>>> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>>> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
>>> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
>>> >> >> things:
>>> >> >>
>>> >> >> 1) Can folks install just one or two services for their specific 
>>> >> >> scenario?
>>> >> >
>>> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
>>> >> >of ansible know-how, but entirely possible. Kolla-k8s is built to
>>> >> >allow maximum flexibility in that space.
>>> >> >
>>> >> >> 2) Can the container images from kolla be run on bare docker daemon?
>>> >> >
>>> >> >Yes, but they need to either override our default CMD (kolla_start) or
>>> >> >provide ENVs required by it, not a huge deal
>>> >> >
>>> >> >> 3) Can someone take the kolla container images from say dockerhub and
>>> >> >> use it without the Kolla framework?
>>> >> >
>>> >> >Yes, there is no such thing as kolla framework really. Our images
>>> >> >follow stable ABI and they can be deployed by any deploy mechanism
>>> >> >that will follow it. We have several users who wrote their own deploy
>>> >> >mechanism from scratch.
>>> >> >
>>> >> >Containers are just blobs with binaries in it. Little things that we
>>> >> >add are kolla_start script to allow our config file management and
>>> >> >some custom startup scripts for things like mariadb to help with
>>> >> >bootstrapping, both are entirely optional.
>>> >>
>>> >> Just as a bonus example, TripleO is currently using kolla images. They 
>>> >> used to
>>> >> be vanilla and they are not anymore but only because TripleO depends on 
>>> >> puppet
>>> >> being in the image, which has nothing to do with kolla.
>>> >>
>>> >> Flavio
>>> >>
>>> >
>>> >When you say "using kolla images," what do you mean? In upstream
>>> >CI tests? On contributors' dev/test systems? Production deployments?
>>>
>>> All of them. Note that TripleO now builds its own "kolla images" (it uses
>>> the kolla Dockerfiles and kolla-build) because of the dependency on
>>> puppet. When I said TripleO uses kolla images, it was intended to answer
>>> Dims's question on whether these images (or Dockerfiles) can be consumed
>>> by other projects.
>>>
>>> Flavio
>>>
>>
>> Ah, OK. So TripleO is using the build instructions for kolla images, but
>> not the binary images being produced today?
>
> Exactly. We have to add Puppet packaging into the list of things we
> want in the binary, that's why we don't consume the binary directly.

And frankly, if we get this thing agreed on, I don't see why TripleO
couldn't publish their images too. If we build technical infra in
Kolla, everyone else can benefit from it.

>> Doug
>
>
>
> --
> Emilien Macchi


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 11:17 AM, Sean McGinnis wrote:
> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>> Folks,
>>
>> See $TITLE :)
>>
>> Thanks,
>> Dims
>>
> 
> My preference would be to have an #openstack-tc channel.
> 
> One thing I like about the dedicated meeting time was if I was not able to
> attend, or when I was just a casual observer, it was easy to catch up on
> what was discussed because it was all in one place and did not have any
> non TC conversations interlaced.
> 
> If we just use -dev, there is a high chance there will be a lot of cross-
> talk during discussions. There would also be a lot of effort to grep
> through the full day of activity to find things relevant to TC
> discussions. If we have a dedicated channel for this, it makes it very
> easy for anyone to know where to go to get a clean, easy to read capture
> of all relevant discussions. I think that will be important with the
> lack of a captured and summarized meeting to look at.

The thing is, IRC should never be a summary or long term storage medium.
IRC is a discussion medium. It is a hallway track. It's where ideas
bounce around, lots are left on the floor, and there are lots of
misstatements as people explore things. It's not store and forward
messaging, it's realtime chat.

If we want digestible summaries with context, that's never IRC, and we
shouldn't expect people to look to IRC for that. It's source material at
best. I'm not sure of any IRC conversation that's ever been clean, easy
to read, and captures the entire context within it without jumping to
assumptions of shared background that the conversation participants
already have.

Summaries with context need to emerge from here for people to be able to
follow along (out to email or web), and work their way back into the
conversations.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Emilien Macchi
On Tue, May 16, 2017 at 11:12 AM, Doug Hellmann  wrote:
> Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
>> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
>> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
>> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
>> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
>> >> >> things:
>> >> >>
>> >> >> 1) Can folks install just one or two services for their specific 
>> >> >> scenario?
>> >> >
>> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
>> >> >of ansible know-how, but entirely possible. Kolla-k8s is built to
>> >> >allow maximum flexibility in that space.
>> >> >
>> >> >> 2) Can the container images from kolla be run on bare docker daemon?
>> >> >
>> >> >Yes, but they need to either override our default CMD (kolla_start) or
>> >> >provide ENVs required by it, not a huge deal
>> >> >
>> >> >> 3) Can someone take the kolla container images from say dockerhub and
>> >> >> use it without the Kolla framework?
>> >> >
>> >> >Yes, there is no such thing as kolla framework really. Our images
>> >> >follow stable ABI and they can be deployed by any deploy mechanism
>> >> >that will follow it. We have several users who wrote their own deploy
>> >> >mechanism from scratch.
>> >> >
>> >> >Containers are just blobs with binaries in it. Little things that we
>> >> >add are kolla_start script to allow our config file management and
>> >> >some custom startup scripts for things like mariadb to help with
>> >> >bootstrapping, both are entirely optional.
>> >>
>> >> Just as a bonus example, TripleO is currently using kolla images. They 
>> >> used to
>> >> be vanilla and they are not anymore but only because TripleO depends on 
>> >> puppet
>> >> being in the image, which has nothing to do with kolla.
>> >>
>> >> Flavio
>> >>
>> >
>> >When you say "using kolla images," what do you mean? In upstream
>> >CI tests? On contributors' dev/test systems? Production deployments?
>>
>> All of them. Note that TripleO now builds its own "kolla images" (it uses
>> the kolla Dockerfiles and kolla-build) because of the dependency on
>> puppet. When I said TripleO uses kolla images, it was intended to answer
>> Dims's question on whether these images (or Dockerfiles) can be consumed
>> by other projects.
>>
>> Flavio
>>
>
> Ah, OK. So TripleO is using the build instructions for kolla images, but
> not the binary images being produced today?

Exactly. We have to add Puppet packaging into the list of things we
want in the binary, that's why we don't consume the binary directly.

> Doug



-- 
Emilien Macchi



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
> > Folks,
> > 
> > See $TITLE :)
> > 
> > Thanks,
> > Dims
> > 
> 
> My preference would be to have an #openstack-tc channel.
> 
> One thing I like about the dedicated meeting time was if I was not able to
> attend, or when I was just a casual observer, it was easy to catch up on
> what was discussed because it was all in one place and did not have any
> non TC conversations interlaced.
> 
> If we just use -dev, there is a high chance there will be a lot of cross-
> talk during discussions. There would also be a lot of effort to grep
> through the full day of activity to find things relevant to TC
> discussions. If we have a dedicated channel for this, it makes it very
> easy for anyone to know where to go to get a clean, easy to read capture
> of all relevant discussions. I think that will be important with the
> lack of a captured and summarized meeting to look at.
> 
> Sean
> 

I definitely understand this desire. I think, though, that any
significant conversations should be made discoverable via an email
thread summarizing them. That honors the spirit of moving our
"decision making" to asynchronous communication tools.

Doug



Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Eric Fried
> The idea is that a regular user calling into a service should not
> be able to set the request id, but outgoing calls from that service
> to other services as part of the same request would.

Yeah, so can anyone explain to me why this is a real problem?  If a
regular user wanted to be a d*ck and inject a bogus (or worse, I
imagine, duplicated) request-id, can any actual harm come out of it?  Or
does it just cause confusion to the guy reading the logs later?

(I'm assuming, of course, that the format will still be validated
strictly (req-$UUID) to preclude code injection kind of stuff.)
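For what it's worth, strict req-$UUID validation of the kind assumed
here could be sketched as below; the regex and helper names are purely
illustrative, not the actual oslo middleware code:

```python
import re
import uuid

# Illustrative strict check for the req-$UUID format: a lowercase-hex
# UUID4-style string prefixed with "req-". Anything else (including
# injection attempts) is rejected outright.
REQUEST_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def is_valid_request_id(value):
    """Accept only strictly formatted inbound request ids."""
    return bool(REQUEST_ID_RE.match(value))

def new_request_id():
    """Generate a request id in the same req-$UUID form."""
    return "req-%s" % uuid.uuid4()
```

With a check like this in place, a forged request id can at worst be a
duplicated-but-well-formed value, which narrows the question to log
confusion rather than code injection.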

Thanks,
Eric (efried)


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Davanum Srinivas
Michał,

My fear is that anything said on that channel will come down as "TC
told us to do / not do such-and-such a thing"

-- Dims

On Tue, May 16, 2017 at 11:00 AM, Michał Jastrzębski  wrote:
> On 16 May 2017 at 07:49, Sean Dague  wrote:
>> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>>> Folks,
>>>
>>> See $TITLE :)
>>>
>>> Thanks,
>>> Dims
>>
>> I'd rather avoid #openstack-tc and just use #openstack-dev.
>> #openstack-dev is pretty low used environment (compared to like
>> #openstack-infra or #openstack-nova). I've personally been trying to
>> make it my go to way to hit up members of other teams whenever instead
>> of diving into project specific channels, because typically it means we
>> can get a broader conversation around the item in question.
>>
>> Our fragmentation of shared understanding on many issues is definitely
>> exacerbated by many project channels, and the assumption that people
>> need to watch 20+ different channels, with different context, to stay up
>> on things.
>>
>> I would love us to have the problem that too many interesting topics are
>> being discussed in #openstack-dev that we feel the need to parallelize
>> them with a different channel. But I would say we should wait until
>> that's actually a problem.
>>
>> -Sean
>
> I, on the flip side, would be all for #openstack-tc. First,
> #openstack-dev is not an obvious place to look for TC members;
> #openstack-tc would be a channel to talk about TC-related stuff, which
> in large part would be something significant and worth coming back to,
> so having this "filtered" feed just for cross-community discussions
> would make digging through logs much easier.
>
>> --
>> Sean Dague
>> http://dague.net



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean McGinnis
On Tue, May 16, 2017 at 02:08:07PM +0200, Thierry Carrez wrote:
> 
> I totally subscribe to the concerns around publishing binaries (under
> any form), and the expectations in terms of security maintenance that it
> would set on the publisher. At the same time, we need to have images
> available, for convenience and testing. So what is the best way to
> achieve that without setting strong security maintenance expectations
> for the OpenStack community ? We have several options:
> 
> 1/ Have third-parties publish images
> It is the current situation. The issue is that the Kolla team (and
> likely others) would rather automate the process and use OpenStack
> infrastructure for it.
> 
> 2/ Have third-parties publish images, but through OpenStack infra
> This would allow to automate the process, but it would be a bit weird to
> use common infra resources to publish in a private repo.
> 
> 3/ Publish transient (per-commit or daily) images
> A "daily build" (especially if you replace it every day) would set
> relatively-limited expectations in terms of maintenance. It would end up
> picking up security updates in upstream layers, even if not immediately.
> 

I share the concerns around implying support for any of these. But I
also think they could be incredibly useful, and if we don't do it,
there is even more of a chance of multiple "bad" images being published
by others.

I agree having an automated daily image published should give a
reasonable expectation that there is no long-term maintenance for
these images.

> 4/ Publish images and own them
> Staff release / VMT / stable team in a way that lets us properly own
> those images and publish them officially.
> 
> Personally I think (4) is not realistic. I think we could make (3) work,
> and I prefer it to (2). If all else fails, we should keep (1).
> 
> -- 
> Thierry Carrez (ttx)
> 



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >>
>> >> Flavio Percoco wrote:
>> >>>
>> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >>> projects
>> >>> in any kind of built form. This was also one of the concerns I raised
>> >>> when
>> >>> working on the proposal to support other programming languages. The
>> >>> problem of
>> >>> releasing built images goes beyond the infrastructure requirements. It's
>> >>> the
>> >>> message and the guarantees implied with the built product itself that are
>> >>> the
>> >>> concern here. And I tend to agree with Doug that this might be a problem
>> >>> for us
>> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >>> point is
>> >>> not enough. Kolla is not the only project producing container images and
>> >>> we need
>> >>> to be consistent in the way we release these images.
>> >>>
>> >>> Nothing prevents people for building their own images and uploading them
>> >>> to
>> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>> >>
>> >>
>> >> I totally subscribe to the concerns around publishing binaries (under
>> >> any form), and the expectations in terms of security maintenance that it
>> >> would set on the publisher. At the same time, we need to have images
>> >> available, for convenience and testing. So what is the best way to
>> >> achieve that without setting strong security maintenance expectations
>> >> for the OpenStack community ? We have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit weird to
>> >> use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would end up
>> >> picking up security updates in upstream layers, even if not immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3) work,
>> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >
>> >
>> > Agreed #4 is a bit unrealistic.
>> >
>> > Not sure I understand the difference between #2 and #3. Is it just the
>> > cadence?
>> >
>> > I'd prefer for these builds to have a daily cadence because it sets the
>> > expectations w.r.t maintenance right: "These images are daily builds and 
>> > not
>> > certified releases. For stable builds you're better off building it
>> > yourself"
>>
>> And daily builds are exactly what I wanted in the first place:) We
>> probably will keep publishing release packages too, but we can be a
>> so-called 3rd party. I also agree [4] is completely unrealistic and I
>> would be against putting such heavy burden of responsibility on any
>> community, including Kolla.
>>
>> While a daily cadence will send the message that it's not stable, the
>> truth is that it will be more stable than what people would normally
>> build locally (again, it passes more gates), but I'm totally fine with
>> not saying that and letting people decide how they want to use it.
>>
>> So, can we move on with implementation?
>
> I don't want the images published to docker hub. Are they still useful
> to you if they aren't published?

What do you mean? We need images available... whether it's dockerhub,
an infra-hosted registry, or any other way to host them, we need
images that are available and fresh without building. Dockerhub/quay.io
poses the fewest problems for the infra team/resources.

> Doug
>
>>
>> Thanks!
>> Michal
>>
>> >
>> > Flavio
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>> >
>>


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:
> On Tue, 16 May 2017, Monty Taylor wrote:
> 
> > FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with
> > that until someone has a better idea. I'm uncrazy about it for two reasons:
> >
> > a) the word "key" implies things to people that may or may not be true
> > here. If we do stick with it - we need some REALLY crisp language about
> > what it is and what it isn't.
> >
> > b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a
> > thing called by this name. While what's written in the spec is quite
> > similar in usage to that construct, I'm wary of re-using the name without
> > the semantics actually being fully the same for risk of user confusion.
> > "This uses api-key... which one?" Sean's email uses "APPKey" instead of
> > "APIKey" - which may be a better term. Maybe just "ApplicationAuthorization"?
> 
> "api key" is a fairly common and generic term for "this magical
> thingie I can create to delegate my authority to some automation".
> It's also sometimes called "token", perhaps that's better (that's
> what GitHub uses, for example)? In either case the "api" bit is
> pretty important because it is the thing used to talk to the API.
> 
> I really hope we can avoid creating yet more special language for
> OpenStack. We've got an API. We want to send keys or tokens. Let's
> just call them that.
> 

+1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Dnyaneshwar Pawar
Hi Steve,

Thanks for your reply.

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

I referred below template. 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/controller-role.yaml

resources:

  Controller:
    type: OS::TripleO::ControllerServer
    metadata:
      os-collect-config:


OS::Heat::SoftwareDeployment is referred to instead of
OS::Heat::SoftwareDeployments in the following places.

1. 
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/pdf/partner_integration/Red_Hat_OpenStack_Platform-11-Partner_Integration-en-US.pdf
   Section 2.1.4. TripleO and TripleO Heat Templates (page #13 in pdf)
   Section 5.4. CUSTOMIZING CONFIGURATION BEFORE OVERCLOUD CONFIGURATION 
(Page #32 in pdf)
2. http://hardysteven.blogspot.in/2015/05/heat-softwareconfig-resources.html
   Section: Heat SoftwareConfig resources
   Section: SoftwareDeployment HOT template definition
3. 
http://hardysteven.blogspot.in/2015/05/tripleo-heat-templates-part-2-node.html
Section: Initial deployment flow, step by step


Thanks and Regards,
Dnyaneshwar


On 5/16/17, 4:40 PM, "Steven Hardy"  wrote:

On Tue, May 16, 2017 at 04:33:33AM +, Dnyaneshwar Pawar wrote:
> Hi TripleO team,
> 
> I am trying to apply custom configuration to an existing overcloud
> (using the openstack overcloud deploy command).
> Though there is no error, the configuration is not applied to the
> overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/

In your paste you have the resource_registry like this:

OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

The problem is OS::TripleO::ControllerServer isn't a resource type we use,
i.e. it's not a valid hook to enable additional node configuration.

Instead try something like this:

OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml

Which will run the script on all nodes, as documented here:


https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

Also in your template the type: OS::Heat::SoftwareDeployment should be
either type: OS::Heat::SoftwareDeployments (as in the docs) or type:
OS::Heat::SoftwareDeploymentGroup (the newer name for SoftwareDeployments,
we should switch the docs to that..).
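To make that concrete, here is a minimal sketch of a NodeExtraConfigPost template using SoftwareDeploymentGroup (the resource names and the script body here are illustrative only; see the extra_config docs linked above for the canonical example):

```yaml
heat_template_version: ocata

parameters:
  # TripleO passes the list of server IDs in automatically when this
  # template is registered as OS::TripleO::NodeExtraConfigPost.
  servers:
    type: json

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "extra config ran" > /tmp/extra_config_done

  ExtraDeployments:
    # SoftwareDeploymentGroup (formerly SoftwareDeployments) applies the
    # config to every server in the list.
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
```

Registered via the resource_registry entry above, this runs on every node after the main deployment steps.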

Hope that helps!

-- 
Steve Hardy
Red Hat Engineering, Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean McGinnis
On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
> Folks,
> 
> See $TITLE :)
> 
> Thanks,
> Dims
> 

My preference would be to have an #openstack-tc channel.

One thing I like about the dedicated meeting time was if I was not able to
attend, or when I was just a casual observer, it was easy to catch up on
what was discussed because it was all in one place and did not have any
non TC conversations interlaced.

If we just use -dev, there is a high chance there will be a lot of cross-
talk during discussions. There would also be a lot of effort to grep
through the full day of activity to find things relevant to TC
discussions. If we have a dedicated channel for this, it makes it very
easy for anyone to know where to go to get a clean, easy to read capture
of all relevant discussions. I think that will be important with the
lack of a captured and summarized meeting to look at.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> I would like to bring up a subject that hasn't really been discussed in
> this thread yet, forgive me if I missed an email mentioning this.
> 
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security
> turned off are perfect for testing internally to infra.
> 
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.
> 
> Thanks,
> SamYaple

This sounds like an implementation detail of option 3? I think not
signing the images does help indicate that they're not meant to be used
in production environments.

Is some sort of self-hosted solution a reasonable compromise between
building images in test jobs (which I understand makes them take
extra time) and publishing images to public registries (which is
the thing I object to)?

If self-hosting is reasonable, then we can work out which tool to
use to do it as a second question.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >>
> >> Flavio Percoco wrote:
> >>>
> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >>> projects
> >>> in any kind of built form. This was also one of the concerns I raised
> >>> when
> >>> working on the proposal to support other programming languages. The
> >>> problem of
> >>> releasing built images goes beyond the infrastructure requirements. It's
> >>> the
> >>> message and the guarantees implied with the built product itself that are
> >>> the
> >>> concern here. And I tend to agree with Doug that this might be a problem
> >>> for us
> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >>> point is
> >>> not enough. Kolla is not the only project producing container images and
> >>> we need
> >>> to be consistent in the way we release these images.
> >>>
> >>> Nothing prevents people for building their own images and uploading them
> >>> to
> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
> >>
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >
> >
> > Agreed #4 is a bit unrealistic.
> >
> > Not sure I understand the difference between #2 and #3. Is it just the
> > cadence?
> >
> > I'd prefer for these builds to have a daily cadence because it sets the
> > expectations w.r.t maintenance right: "These images are daily builds and not
> > certified releases. For stable builds you're better off building it
> > yourself"
> 
> And daily builds are exactly what I wanted in the first place:) We
> probably will keep publishing release packages too, but we can be so
> called 3rd party. I also agree [4] is completely unrealistic and I
> would be against putting such heavy burden of responsibility on any
> community, including Kolla.
> 
> While a daily cadence will send the message that it's not stable, the
> truth is that it will be more stable than what people would normally
> build locally (again, it passes more gates), but I'm totally fine with
> not saying that and letting people decide how they want to use it.
> 
> So, can we move on with implementation?

I don't want the images published to docker hub. Are they still useful
to you if they aren't published?

Doug

> 
> Thanks!
> Michal
> 
> >
> > Flavio
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
> >> >> things:
> >> >>
> >> >> 1) Can folks install just one or two services for their specific 
> >> >> scenario?
> >> >
> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >> >of ansible know-how, but it's entirely possible. Kolla-k8s is built to
> >> >allow maximum flexibility in that space.
> >> >
> >> >> 2) Can the container images from kolla be run on bare docker daemon?
> >> >
> >> >Yes, but they need to either override our default CMD (kolla_start) or
> >> >provide the ENVs required by it, not a huge deal
> >> >
> >> >> 3) Can someone take the kolla container images from say dockerhub and
> >> >> use it without the Kolla framework?
> >> >
> >> >Yes, there is no such thing as kolla framework really. Our images
> >> >follow stable ABI and they can be deployed by any deploy mechanism
> >> >that will follow it. We have several users who wrote their own deploy
> >> >mechanism from scratch.
> >> >
> >> >Containers are just blobs with binaries in it. Little things that we
> >> >add are kolla_start script to allow our config file management and
> >> >some custom startup scripts for things like mariadb to help with
> >> >bootstrapping, both are entirely optional.
> >>
> >> Just as a bonus example, TripleO is currently using kolla images. They 
> >> used to
> >> be vanilla and they are not anymore but only because TripleO depends on 
> >> puppet
> >> being in the image, which has nothing to do with kolla.
> >>
> >> Flavio
> >>
> >
> >When you say "using kolla images," what do you mean? In upstream
> >CI tests? On contributors' dev/test systems? Production deployments?
> 
> All of them. Note that TripleO now builds its own "kolla images" (it uses
> the kolla Dockerfiles and kolla-build) because of the dependency on puppet.
> When I said TripleO uses kolla images, it was intended to answer Dims'
> question on whether these images (or Dockerfiles) can be consumed by
> other projects.
> 
> Flavio
> 

Ah, OK. So TripleO is using the build instructions for kolla images, but
not the binary images being produced today?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Project updates - OpenStack Summit

2017-05-16 Thread Emilien Macchi
If you missed the TripleO project updates presentation, feel free to
watch the recording:
https://www.openstack.org/videos/boston-2017/project-update-triple0

and the slides:
https://docs.google.com/presentation/d/1knOesCs3HTqKvIl9iUZciUtE006ff9I3zhxCtbLZz4c

If you have any question or feedback regarding our roadmap, feel free
to use this thread to discuss about it on the public forum.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-16 15:28:11 +0100:
> On Sun, 14 May 2017, Sean Dague wrote:
> 
> > So, the basic idea is, services will optionally take an inbound 
> > X-OpenStack-Request-ID which will be strongly validated to the format 
> > (req-$uuid). They will continue to always generate one as well. When the 
> > context is built (which is typically about 3 more steps down the paste 
> > pipeline), we'll check that the service user was involved, and if not, 
> > reset 
> > the request_id to the local generated one. We'll log both the global and 
> > local request ids. All of these changes happen in oslo.middleware, 
> > oslo.context, oslo.log, and most projects won't need anything to get this 
> > infrastructure.
> 
> I may not be understanding this paragraph, but this sounds like you
> are saying: accept a valid and authentic incoming request id, but
> only use it in ongoing requests if the service user was involved in
> those requests.
> 
> If that's correct, I'd suggest not doing that because it confuses
> traceability of a series of things. Instead, always use the request
> id if it is valid and authentic.
> 
> But maybe you mean "if the request id could not be proven authentic,
> don't use it"?
> 

The idea is that a regular user calling into a service should not
be able to set the request id, but outgoing calls from that service
to other services as part of the same request would.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Sean Dague
On 05/16/2017 10:28 AM, Chris Dent wrote:
> On Sun, 14 May 2017, Sean Dague wrote:
> 
>> So, the basic idea is, services will optionally take an inbound
>> X-OpenStack-Request-ID which will be strongly validated to the format
>> (req-$uuid). They will continue to always generate one as well. When
>> the context is built (which is typically about 3 more steps down the
>> paste pipeline), we'll check that the service user was involved, and
>> if not, reset the request_id to the local generated one. We'll log
>> both the global and local request ids. All of these changes happen in
>> oslo.middleware, oslo.context, oslo.log, and most projects won't need
>> anything to get this infrastructure.
> 
> I may not be understanding this paragraph, but this sounds like you
> are saying: accept a valid and authentic incoming request id, but
> only use it in ongoing requests if the service user was involved in
> those requests.
> 
> If that's correct, I'd suggest not doing that because it confuses
> traceability of a series of things. Instead, always use the request
> id if it is valid and authentic.
> 
> But maybe you mean "if the request id could not be proven authentic,
> don't use it"?

It is a little more clear in the detailed spec, the issue is that the
place where this is generated is before we have enough ability to know
if we should be allowed to use it (it's actually before keystone auth).
I put some annotations of paste pipelines inline to help explain.

We either assume success, or assume failure, and fix later. We don't
actually have a functional logger using the request-id until we've got
keystone auth (bootstrapping is fun!) so assuming success, and reverting
if auth says no, actually should cause less confusion (and require less
code) than the other way around.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-16 10:49:54 -0400:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
> > Folks,
> > 
> > See $TITLE :)
> > 
> > Thanks,
> > Dims
> 
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is pretty low used environment (compared to like
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go to way to hit up members of other teams whenever instead
> of diving into project specific channels, because typically it means we
> can get a broader conversation around the item in question.
> 
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
> 
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
> 
> -Sean
> 

+1, let's start with just the -dev channel and see if volume becomes
an issue.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Michał Jastrzębski wrote:
> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> I'd prefer for these builds to have a daily cadence because it sets the
>> expectations w.r.t maintenance right: "These images are daily builds and not
>> certified releases. For stable builds you're better off building it
>> yourself"
> 
> And daily builds are exactly what I wanted in the first place:) We
> probably will keep publishing release packages too, but we can be so
> called 3rd party. I also agree [4] is completely unrealistic and I
> would be against putting such heavy burden of responsibility on any
> community, including Kolla.
> 
> While a daily cadence will send the message that it's not stable, the
> truth is that it will be more stable than what people would normally
> build locally (again, it passes more gates), but I'm totally fine with
> not saying that and letting people decide how they want to use it.
> 
> So, can we move on with implementation?

I'm just listing options to help frame the discussion. I still think we
need a global answer on this (for container images and VMs) so I think
it would be great to have a clear TC resolution (picking one of those
options) before moving on with implementation.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Flavio Percoco wrote:
> On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
> 
> Agreed #4 is a bit unrealistic.
> 
> Not sure I understand the difference between #2 and #3. Is it just the
> cadence?

In #3 the infrastructure ends up publishing to an official
"openstack-daily" repository. In #2 the infrastructure job ends up
publishing to some "flavios-garage" repository.

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:49, Sean Dague  wrote:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>> Folks,
>>
>> See $TITLE :)
>>
>> Thanks,
>> Dims
>
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is pretty low used environment (compared to like
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go to way to hit up members of other teams whenever instead
> of diving into project specific channels, because typically it means we
> can get a broader conversation around the item in question.
>
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
>
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
>
> -Sean

I, on the flip side, would be all for #openstack-tc. First,
#openstack-dev is not an obvious place to look for TC members.
#openstack-tc would be a channel for TC-related discussion, which in
large part would be significant and worth coming back to, so having this
"filtered" feed just for cross-community discussions would make digging
through logs much easier.

> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
> Folks,
> 
> See $TITLE :)
> 
> Thanks,
> Dims

I'd rather avoid #openstack-tc and just use #openstack-dev.
#openstack-dev is pretty low used environment (compared to like
#openstack-infra or #openstack-nova). I've personally been trying to
make it my go to way to hit up members of other teams whenever instead
of diving into project specific channels, because typically it means we
can get a broader conversation around the item in question.

Our fragmentation of shared understanding on many issues is definitely
exacerbated by many project channels, and the assumption that people
need to watch 20+ different channels, with different context, to stay up
on things.

I would love us to have the problem that too many interesting topics are
being discussed in #openstack-dev that we feel the need to parallelize
them with a different channel. But I would say we should wait until
that's actually a problem.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Chris Friesen

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread?  The implementations I've seen for a ticket 
lock (in C at least) usually have the unlock routine wake up the next pending 
locker.
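
That direct hand-off can be sketched in a few lines of Python (a toy illustration, not oslo.concurrency code): waiters queue up in FIFO order, and release() transfers ownership straight to the oldest waiter, so no extra dispatcher thread is needed.

```python
import collections
import threading


class FairLock(object):
    """A FIFO ("ticket") lock: waiters acquire in arrival order.

    The mutex protecting the queue is only held briefly, and release()
    hands the lock directly to the oldest waiter -- no dispatcher thread.
    """

    def __init__(self):
        self._mutex = threading.Lock()
        self._waiters = collections.deque()
        self._locked = False

    def acquire(self):
        with self._mutex:
            if not self._locked and not self._waiters:
                self._locked = True
                return
            event = threading.Event()
            self._waiters.append(event)
        event.wait()  # woken by release() in FIFO order; we now own the lock

    def release(self):
        with self._mutex:
            if self._waiters:
                # Hand ownership straight to the next waiter; _locked
                # stays True because the lock is never actually free.
                self._waiters.popleft().set()
            else:
                self._locked = False
```

Because ownership is transferred inside release(), a late-arriving thread can never barge ahead of an already-queued waiter.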


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Masayuki Igawa
+1!

-- 
  Masayuki Igawa
  masay...@igawa.me



On Tue, May 16, 2017, at 05:22 PM, Andrea Frittoli wrote:
> Hello team,
> 
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
> 
> Over the past two cycles Fanglei has been steadily contributing to
> Tempest and its community. She's done a great deal of work in making
> Tempest code cleaner, easier to read, maintain and debug, fixing bugs
> and removing cruft. Both her code as well as her reviews demonstrate a
> very good understanding of Tempest internals and of the project future
> direction. I believe Fanglei will make an excellent addition to the team.
> 
> As per the usual, if the current Tempest core team members would please
> vote +1 or -1 (veto) to the nomination when you get a chance. We'll keep
> the polls open for 5 days or until everyone has voted.
> 
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
> 
> Thank you,
> 
> Andrea (andreaf)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-16 Thread Lance Haig

On 15.05.17 19:01, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.
I think we can perhaps come up with some standard scenarios that we want 
to showcase, and then we can work on getting this set up.


I might suggest that you look at the repo that my colleague Florin and I 
setup for our library and training material.

https://github.com/heat-extras

In the lib repo we have a test directory that tests each library 
template; it might be an idea as to how to achieve test coverage of the 
different resources.
We currently just run yamllint testing with the script in there, but I am 
sure we can add other tests as needed.



The backwards compatibility is not always correct, as I have seen when 
developing our library of templates on Liberty and then trying to 
deploy it on Mitaka, for example.


Yeah, I guess it's true that there are sometimes deprecated resource 
interfaces that get removed on upgrade to a new OpenStack version, and 
that is independent of the HOT version.


What if instead of a directory per release, we just had a 'deprecated' 
directory where we move stuff that is going away (e.g. anything 
relying on OS::Glance::Image), and then deleted them when it 
disappeared from any supported release (e.g. LBaaSv1 must be close if 
it isn't gone already).


I agree in general this would be good. How would we deal with users who 
are running older versions of OpenStack?
Most of the customers I support have Liberty and newer, so I would 
perhaps like to have these available as tested.
The challenge for us is that the newer the OpenStack version, the more 
features are available, e.g. conditionals.
To support that in a backwards-compatible fashion is going to be tough, I 
think. Unless I am missing something.
As we've proven, maintaining these templates has been a challenge given 
the available resources, so I guess I'm still in favor of not 
duplicating a bunch of templates, e.g. perhaps we could focus on a 
target of CI testing templates on the current stable release as a 
first step?


I'd rather do CI against Heat master, I think, but yeah that sounds 
like the first step. Note that if we're doing CI on old stuff then 
we'd need to do heat-templates stable branches rather than 
directory-per-release.


With my suggestion above, we could just not check anything in the 
'deprecated' directory maybe?

I agree in part.
If we are using the heat examples to test the functionality of the 
master branch, then that would be a good idea.
If we want to provide usable templates for users to reference and use, 
then I would suggest we test against stable.


I am sure we could find a way to do both.
I would suggest that we first get reliable CI/CD running on the current 
templates and fix what we can in there.

Then we can look at what would be a good way forward.

I am just brain dumping so any other ideas would also be good.


As you guys mentioned in our discussions, the Networking example I 
quoted is not something you guys can deal with, as the source project 
affects this.


Unless we can use this exercise to test these and fix them then I am
happier.

My vision would be to have a set of templates and examples that are 
tested regularly against a running OpenStack deployment so that we can 
make sure the combinations still run. I am sure we can agree on a way 
to do this with CI/CD so that we test the feature set.


Agreed, getting the approach to testing agreed seems like the first 
step - FYI we do already have automated scenario tests in the main 
heat tree that consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario

So, in theory, getting a similar test running on heat-templates should 
be fairly simple, but getting all the existing templates working is 
likely to be a bigger challenge.


Even if we just ran the 'template validate' command on them to check 
that all of the resource types & properties still exist, that would be 
pretty helpful. It'd catch all of the times when we break backwards 
compatibility, so we can decide to either fix it or deprecate/remove 
the obsolete template. (Note that you still need all of the services 
installed, or at least endpoints in the catalog, for the validation to 
work.)
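
A minimal sketch of such a validation pass, as a plain loop over a 
templates directory (the `hot/` directory name is an assumption; the 
`openstack orchestration template validate` command is the OSC 
equivalent of `heat template-validate`, and a real gate job would wrap 
this in a proper test runner):

```python
import subprocess
from pathlib import Path

def validate_cmd(template):
    # Build the CLI invocation that asks Heat to validate one template;
    # this checks resource types/properties without creating a stack.
    return ["openstack", "orchestration", "template", "validate",
            "-t", str(template)]

def validate_all(root):
    """Run validation for every .yaml template under root and
    return the (template, stderr) pairs that failed."""
    failures = []
    for template in sorted(Path(root).rglob("*.yaml")):
        proc = subprocess.run(validate_cmd(template),
                              capture_output=True, text=True)
        if proc.returncode != 0:
            failures.append((template, proc.stderr.strip()))
    return failures
```

Templates parked in a 'deprecated' directory could simply be excluded 
from the walk.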


Actually creating all of the stuff would be nice, but it'll likely be 
difficult (just keeping up-to-date OS images to boot from is a giant 

Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Chris Dent

On Sun, 14 May 2017, Sean Dague wrote:

So, the basic idea is, services will optionally take an inbound 
X-OpenStack-Request-ID which will be strongly validated to the format 
(req-$uuid). They will continue to always generate one as well. When the 
context is built (which is typically about 3 more steps down the paste 
pipeline), we'll check that the service user was involved, and if not, reset 
the request_id to the local generated one. We'll log both the global and 
local request ids. All of these changes happen in oslo.middleware, 
oslo.context, oslo.log, and most projects won't need anything to get this 
infrastructure.


I may not be understanding this paragraph, but this sounds like you
are saying: accept a valid and authentic incoming request id, but
only use it in ongoing requests if the service user was involved in
those requests.

If that's correct, I'd suggest not doing that because it confuses
traceability of a series of things. Instead, always use the request
id if it is valid and authentic.

But maybe you mean "if the request id could not be proven authentic,
don't use it"?
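
For what it's worth, the strong format check described above can be 
tiny; a sketch of the decision logic (function and variable names here 
are illustrative, not the actual oslo.context/oslo.middleware code):

```python
import re

# "req-" followed by a canonical lowercase UUID (hex groups 8-4-4-4-12),
# i.e. the req-$uuid format the proposal validates against.
REQUEST_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}$")

def pick_request_id(inbound, local_id, service_user_involved):
    """Return the request id to log and propagate.

    Falls back to the locally generated id when the inbound one is
    malformed or did not arrive via a trusted service user.
    """
    if inbound and REQUEST_ID_RE.match(inbound) and service_user_involved:
        return inbound
    return local_id
```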

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Flavio Percoco

On 16/05/17 09:38 -0400, Davanum Srinivas wrote:

Folks,

See $TITLE :)

Thanks,
Dims


Just to give more context to other folks:

This came up when we were discussing how we can move forward with the idea of
replacing the TC meetings. One concern is that by not having meetings, it might
be hard to contact the TC other than sending emails to the ML, which is not
ideal for all cases.

One option is to just use #openstack-dev and the other one is, well, create the
#openstack-tc channel.

I'm ok with whatever option, really. However, using #openstack-dev would help
with the following:

* We're all there and, more importantly, most of the community is there already.
 Whenever discussions happen, they'll be logged, and they encourage other
 ppl to join, ask questions, or just lurk.

* #openstack-tc gives the sense of a "special" group, which we're not. Yes,
  we've responsibilities. Yes, we are a team BUT we serve the community and I
  don't want anyone feeling that "#openstack-tc" is just where the "TC" hangs
  out.

* New members in the community are more likely to join #openstack-dev first.

There are some drawbacks, though:

* Whenever there are ad-hoc conversations that need to happen, using
#openstack-dev might end up "locking" the channel until those conversations are
finished. Not explicitly but it'll happen. A dedicated channel would allow for
reducing the noise during these discussions.


Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:11, Sam Yaple  wrote:
> I would like to bring up a subject that hasn't really been discussed in this
> thread yet, forgive me if I missed an email mentioning this.
>
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security turned
> off are perfect for testing internally to infra.
>
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.

So I guess we need a new feature :) so we can test gpg packages...

> Thanks,
> SamYaple
>
> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
> wrote:
>>
>> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
>> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> >> Flavio Percoco wrote:
>> >> > From a release perspective, as Doug mentioned, we've avoided
>> >> > releasing projects
>> >> > in any kind of built form. This was also one of the concerns I raised
>> >> > when
>> >> > working on the proposal to support other programming languages. The
>> >> > problem of
>> >> > releasing built images goes beyond the infrastructure requirements.
>> >> > It's the
>> >> > message and the guarantees implied with the built product itself that
>> >> > are the
>> >> > concern here. And I tend to agree with Doug that this might be a
>> >> > problem for us
>> >> > as a community. Unfortunately, putting your name, Michal, as contact
>> >> > point is
>> >> > not enough. Kolla is not the only project producing container images
>> >> > and we need
>> >> > to be consistent in the way we release these images.
>> >> >
>> >> > Nothing prevents people for building their own images and uploading
>> >> > them to
>> >> > dockerhub. Having this as part of the OpenStack's pipeline is a
>> >> > problem.
>> >>
>> >> I totally subscribe to the concerns around publishing binaries (under
>> >> any form), and the expectations in terms of security maintenance that
>> >> it
>> >> would set on the publisher. At the same time, we need to have images
>> >> available, for convenience and testing. So what is the best way to
>> >> achieve that without setting strong security maintenance expectations
>> >> for the OpenStack community ? We have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit weird
>> >> to
>> >> use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would end
>> >> up
>> >> picking up security updates in upstream layers, even if not
>> >> immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3)
>> >> work,
>> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >>
>> >
>> > At the forum we talked about putting test images on a "private"
>> > repository hosted on openstack.org somewhere. I think that's option
>> > 3 from your list?
>> >
>> > Paul may be able to shed more light on the details of the technology
>> > (maybe it's just an Apache-served repo, rather than a full blown
>> > instance of Docker's service, for example).
>>
>> Issues with that are:
>>
>> 1. Apache-served is harder to use because we want to follow the docker API
>> and we'd have to reimplement it
>> 2. Running a registry is a single command
>> 3. If we host it in infra, in case someone actually uses it (there
>> will be people like that), that will eat up a lot of network traffic
>> potentially
>> 4. With local caching of images (working already) in nodepools we
>> lose the complexity of mirroring registries across nodepools
>>
>> So bottom line, having dockerhub/quay.io is simply easier.
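
For context, the "single command" registry mentioned above is typically 
`docker run -d -p 5000:5000 registry:2`, and a job can probe it over the 
Docker Registry HTTP API v2 before pushing. A sketch of such a liveness 
probe (host/port defaults are assumptions):

```python
import urllib.error
import urllib.request

def registry_url(host, port=5000):
    # Version-check endpoint of the Docker Registry HTTP API v2.
    return "http://%s:%d/v2/" % (host, port)

def registry_alive(host, port=5000, timeout=5):
    """True if a v2 registry answers at host:port.

    A 401 also counts: an auth-protected registry still responds on /v2/.
    """
    try:
        with urllib.request.urlopen(registry_url(host, port),
                                    timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as exc:
        return exc.code == 401
    except OSError:
        return False
```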
>>
>> > Doug
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Edward Leafe
On May 15, 2017, at 9:00 PM, Flavio Percoco  wrote:

> [huge snip]

Thank you! We don’t need 50K of repeated text in every response.

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Lance Bragstad
On Tue, May 16, 2017 at 8:54 AM, Monty Taylor  wrote:

> On 05/16/2017 05:39 AM, Sean Dague wrote:
>
>> On 05/15/2017 10:00 PM, Adrian Turjak wrote:
>>
>>>
>>>
>>> On 16/05/17 13:29, Lance Bragstad wrote:
>>>


 On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak wrote:

>>> Based on the specs that are currently up in Keystone-specs, I
 would highly recommend not doing this per user.

 The scenario I imagine is you have a sysadmin at a company who
 created a ton of these for various jobs and then leaves. The
 company then needs to keep his user account around, or create tons
 of new API keys, and then disable his user once all the scripts he
 had keys for are replaced. Or more often then not, disable his
 user and then cry as everything breaks and no one really knows why
 or no one fully documented it all, or didn't read the docs.
 Keeping them per project and unrelated to the user makes more
 sense, as then someone else on your team can regenerate the
 secrets for the specific Keys as they want. Sure we can advise
 them to use generic user accounts within which to create these API
 keys but that implies password sharing which is bad.


 That said, I'm curious why we would make these as a thing separate
 to users. In reality, if you can create users, you can create API
 specific users. Would this be a different authentication
 mechanism? Why? Why not just continue the work on better access
 control and let people create users for this. Because lets be
 honest, isn't a user already an API key? The issue (and the Ron's
 spec mentions this) is a user having too much access, how would
 this fix that when the issue is that we don't have fine grained
 policy in the first place? How does a new auth mechanism fix that?
 Both specs mention roles so I assume it really doesn't. If we had
 fine grained policy we could just create users specific to a
 service with only the roles it needs, and the same problem is
 solved without any special API, new auth, or different 'user-lite'
 object model. It feels like this is trying to solve an issue that
 is better solved by fixing the existing problems.

 I like the idea behind these specs, but... I'm curious what
 exactly they are trying to solve. Not to mention if you wanted to
 automate anything larger such as creating sub-projects and setting
 up a basic network for each new developer to get access to your
 team, this wouldn't work unless you could have your API key
 inherit to subprojects or something more complex, at which point
 they may as well be users. Users already work for all of this, why
 reinvent the wheel when really the issue isn't the wheel itself,
 but the steering mechanism (access control/policy in this case)?


 All valid points, but IMO the discussions around API keys didn't set
 out to fix deep-rooted issues with policy. We have several specs in
 flights across projects to help mitigate the real issues with policy
 [0] [1] [2] [3] [4].

 I see an API key implementation as something that provides a cleaner
 fit and finish once we've addressed the policy bits. It's also a
 familiar concept for application developers, which was the use case
 the session was targeting.

 I probably should have laid out the related policy work before jumping
 into API keys. We've already committed a bunch of keystone resource to
 policy improvements this cycle, but I'm hoping we can work API keys
 and policy improvements in parallel.

 [0] https://review.openstack.org/#/c/460344/
 [1] https://review.openstack.org/#/c/462733/
 [2] https://review.openstack.org/#/c/464763/
 [3] https://review.openstack.org/#/c/433037/
 [4] https://review.openstack.org/#/c/427872/

>>> I'm well aware of the policy work, and it is fantastic to see it
>>> progressing! I can't wait to actually be able to play with that stuff!
>>> We've been painstakingly tweaking the json policy files, which is a giant
>>> mess.
>>>
>>> I'm just concerned that this feels like a feature we don't really need
>>> when really it's just a slight variant of a user with a new auth model
>>> (that is really just another flavour of username/password). The sole
>>> reason most of the other cloud services have API keys is because a user
>>> can't talk to the API directly. OpenStack does not have that problem,
>>> users are API keys. So I think what we really need to consider is what
>>> exact benefit does API keys actually give us that won't be solved with
>>> users and better policy?
>>>
>>
>> The benefits of API key are 

Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Chris Dent

On Tue, 16 May 2017, Monty Taylor wrote:

FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with 
that until someone has a better idea. I'm uncrazy about it for two reasons:


a) the word "key" implies things to people that may or may not be true here. 
If we do stick with it - we need some REALLY crisp language about what it is 
and what it isn't.


b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a thing 
called by this name. While what's written in the spec is quite similar in 
usage to that construct, I'm wary of re-using the name without the semantics 
actually being fully the same for risk of user confusion. "This uses 
api-key... which one?" Sean's email uses "APPKey" instead of "APIKey" - which 
may be a better term. Maybe just "ApplicationAuthorization"?


"api key" is a fairly common and generic term for "this magical
thingie I can create to delegate my authority to some automation".
It's also sometimes called "token", perhaps that's better (that's
what GitHub uses, for example)? In either case the "api" bit is
pretty important because it is the thing used to talk to the API.

I really hope we can avoid creating yet more special language for
OpenStack. We've got an API. We want to send keys or tokens. Let's
just call them that.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


  1   2   >