Re: [openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-06 Thread TIANTIAN
1) OS::Neutron::Port does not seem to recognize security groups by name

-- 
https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/port.py#L303

https://github.com/openstack/heat/blob/stable/kilo/heat/engine/clients/os/neutron.py#L111
we can recognize the group by name there.
2) OS::Neutron::SecurityGroup has no attributes so it can not return a security 
group ID  
-- 
https://github.com/openstack/heat/blob/stable/kilo/heat/engine/resources/openstack/neutron/neutron.py#L133
we can get the resource ID (the security group ID) with the 
'get_resource' function.
So what do you want to achieve, and what problems are you seeing?
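For illustration, a HOT fragment wiring the two resources together would look like the following, expressed here as the equivalent Python/JSON structure (resource names are made up for the example):

```python
# Minimal HOT-style template fragment (as a Python dict for illustration;
# resource names are hypothetical) showing how a port can reference a
# security group created in the same stack via get_resource, which
# resolves to the security group's ID at runtime.
template = {
    "heat_template_version": "2014-10-16",
    "resources": {
        "web_secgroup": {
            "type": "OS::Neutron::SecurityGroup",
            "properties": {
                "rules": [
                    {"protocol": "tcp",
                     "port_range_min": 22,
                     "port_range_max": 22},
                ],
            },
        },
        "web_port": {
            "type": "OS::Neutron::Port",
            "properties": {
                "network": "private",
                # get_resource yields the created group's ID, so no
                # separate attribute lookup is needed.
                "security_groups": [{"get_resource": "web_secgroup"}],
            },
        },
    },
}

ref = template["resources"]["web_port"]["properties"]["security_groups"][0]
assert ref == {"get_resource": "web_secgroup"}
```

The same structure serialized as YAML is what would go in the template file.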


At 2015-08-07 11:10:37, "jason witkowski"  wrote:

Hey All,


I am having issues on the Kilo branch creating an auto-scaling template that 
builds a security group and then adds instances to it.  I have tried every 
method I could think of with no success.  My issues are these:


1) OS::Neutron::Port does not seem to recognize security groups by name  

2) OS::Neutron::SecurityGroup has no attributes so it can not return a security 
group ID


These issues combined find me struggling to automate the building of a security 
group and instances in one heat stack.  I have read and looked at every example 
online and they all seem to use either the name of the security group or the 
get_resource function to return the security group object itself.  Neither of 
these works for me.


Here are my heat template files:


autoscaling.yaml - http://paste.openstack.org/show/412143/

redirector.yaml - http://paste.openstack.org/show/412144/

env.yaml - http://paste.openstack.org/show/412145/



Heat Client: 0.4.1

Heat-Manage: 2015.1.1


Any help would be greatly appreciated.


Best Regards,


Jason
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 7 August 2015

2015-08-06 Thread Lana Brindley

Hi everyone,

Welcome to August! Today we can finally announce that the Cloud Admin
Guide is now completely converted to RST! I have been doing a fair bit
of 'behind the scenes' work this week, focusing mainly on Training
Guides and our docs licensing. We also welcome another new core team
member this week, and for those of you in the APAC region, get ready for
the Docs Swarm in Brisbane, where we'll be working on restructuring the
Architecture Design Guide.

== Progress towards Liberty ==

68 days to go!

* RST conversion:
** Install Guide: Conversion is nearly done, sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration
** Cloud Admin Guide: is complete! The new version will be available on
docs.openstack.org very soon.
** HA Guide: is also nearly done. Get in touch with Meg or Matt:
https://wiki.openstack.org/wiki/Documentation/HA_Guide_Update
** Security Guide: Conversion is now underway, sign up here:
https://etherpad.openstack.org/p/sec-guide-rst

* User Guides information architecture overhaul
** Waiting on the RST conversion of the Cloud Admin Guide to be complete

* Greater focus on helping out devs with docs in their repo
** Work has stalled on the Ironic docs, we need to pick this up again.
Contact me if you want to know more, or are willing to help out.

* Improve how we communicate with and support our corporate contributors
** I have been brainstorming ideas with Foundation, watch this space!

* Improve communication with Docs Liaisons
** I'm very pleased to see liaisons getting more involved in our bugs
and reviews. Keep up the good work!

* Clearing out old bugs
** Sadly, no action on the spotlight bugs this week. Perhaps we're all
worn out from the RST conversions? I'll keep the current three bugs for
this week, to give everyone a little more time.

== RST Migration ==

With the Cloud Admin Guide complete, we are now working on the Install
Guide, HA Guide, and the Security Guide. If you would like to assist,
please get in touch with the appropriate speciality team:

* Install Guide:
** Contact Karin Levenstein 
** Sign up here:
https://wiki.openstack.org/wiki/Documentation/Migrate#Installation_Guide_Migration

* HA Guide
** Contact Meg McRoberts  or Matt Griffin

** Blueprint:
https://blueprints.launchpad.net/openstack-manuals/+spec/improve-ha-guide

* Security Guide
** Contact Nathaniel Dillon 
** Info: https://etherpad.openstack.org/p/sec-guide-rst

For books that are now being converted, don't forget that any change you
make to the XML must also be made to the RST version until conversion is
complete. Our lovely team of cores will be keeping an eye out to make
sure loose changes to XML don't pass the gate, but try to help them out
by pointing out both patches in your reviews.

== Training Guides ==

I've been working with the Training Guides group and the docs core team
to determine the best way to move forward with the Training Guides
project. At this stage, we're planning on breaking the project up into a
few distinct parts, and bringing Training Guides back into the
documentation group as a speciality team. If you have any opinions or
ideas on this, feel free to contact me so I can make sure we're
considering all the options.

== APAC Docs Swarm ==

We're less than a week away from the APAC doc swarm! This time we'll be
working on the Architecture Design Guide. It's to be held at the Red Hat
office in Brisbane, on 13-14 August. Check out
http://openstack-swarm.rhcloud.com/ for all the info and to RSVP.

== Core Team Changes ==

This month, we welcome KATO Tomoyuki on to our docs core team. Thanks
for all your hard work Tomoyuki-san, and welcome to the team!

== Doc team meeting ==

The APAC meeting was held this week. Read the minutes here:
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-08-05

The next meetings are:
US: Wednesday 12 August, 14:00:00 UTC
APAC: Wednesday 19 August, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

== Spotlight bugs for this week ==

Let's give these three a little more oxygen:

https://bugs.launchpad.net/openstack-manuals/+bug/1257018 VPNaaS isn't
documented in cloud admin

https://bugs.launchpad.net/openstack-manuals/+bug/1257656 VMware: add
support for VM diagnostics

https://bugs.launchpad.net/openstack-manuals/+bug/1261969 Document nova
server package

--

Remember, if you have content you would like to add to this newsletter,
or you would like to be added to the distribution list, please email me
directly at openst...@lanabrindley.com, or visit:
https://wiki.openstack.org/w/index.php?title=Documentation/WhatsUpDoc

Keep on doc'ing!

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

[openstack-dev] [third-party-ci] Running custom code before tests

2015-08-06 Thread Eduard Matei
Hi,

I managed to get the jobs triggered. I read
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers but I can't
figure out where to put the code for pre_test_hook so I can set up my
backend.

Thanks,

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*


Re: [openstack-dev] [Cinder] A possible solution for HA Active-Active

2015-08-06 Thread Andrew Beekhof

> On 5 Aug 2015, at 1:34 am, Joshua Harlow  wrote:
> 
> Philipp Marek wrote:
>>> If we end up using a DLM then we have to detect when the connection to
>>> the DLM is lost on a node and stop all ongoing operations to prevent
>>> data corruption.
>>> 
>>> It may not be trivial to do, but we will have to do it in any solution
>>> we use, even on my last proposal that only uses the DB in Volume Manager
>>> we would still need to stop all operations if we lose connection to the
>>> DB.
>> 
>> Well, is it already decided that Pacemaker would be chosen to provide HA in
>> Openstack? There's been a talk "Pacemaker: the PID 1 of Openstack" IIRC.
>> 
>> I know that Pacemaker's been pushed aside in an earlier ML post, but IMO
>> there's already *so much* been done for HA in Pacemaker that Openstack
>> should just use it.
>> 
>> All HA nodes need to participate in a Pacemaker cluster - and if one node
>> loses connection, all services will get stopped automatically (by
>> Pacemaker) - or the node gets fenced.
>> 
>> 
>> No need to invent some sloppy scripts to do exactly the tasks (badly!) that
>> the Linux HA Stack has been providing for quite a few years.
>> 
>> 
>> Yes, Pacemaker needs learning - but not more than any other involved
>> project, and there are already quite a few here, which have to be known to
>> any operator or developer already.
>> 
>> 
>> (BTW, LINBIT sells training for the Linux HA Cluster Stack - and yes,
>>  I work for them ;)
> 
> So just a piece of information, but yahoo (the company I work for, with vms 
> in the tens of thousands, baremetal in the much more than that...) hasn't 
> used pacemaker, and in all honesty this is the first project (openstack) that 
> I have heard that needs such a solution. I feel that we really should be 
> building our services better so that they can be A-A vs having to depend on 
> another piece of software to get around our 'sloppiness' (for lack of a 
> better word).

HA is a deceptively hard problem.
There is really no need for every project to attempt to solve it on their own.
Having everyone consuming/calculating a different membership list is a very 
good way to go insane.

Aside from the usual bugs, the HA space lends itself to making simplifying 
assumptions early on, only to trap you with them down the road.
It's even worse if you're trying to bolt it on after-the-fact...

Perhaps try to think of pacemaker as a distributed finite state machine instead 
of a cluster manager.
That is part of the value we bring to projects like galera and rabbitmq.

Sure they are A-A, and once they’re up they can survive many failures, but 
bringing them up can be non-trivial.
We also provide the additional context (e.g. quorum and fencing) that allows 
more kinds of failures to be safely recovered from.
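The quorum point can be made concrete with a toy majority check, the kind of context a cluster manager supplies so that a partitioned minority stops serving rather than diverging (a sketch, not pacemaker's actual API):

```python
def has_quorum(visible_nodes, cluster_size):
    """True only if this partition sees a strict majority of the cluster."""
    return len(visible_nodes) > cluster_size // 2

# A 5-node cluster splits 3/2: only the 3-node side may keep serving;
# the 2-node side must stop (or be fenced) to avoid split-brain.
assert has_quorum({"a", "b", "c"}, 5) is True
assert has_quorum({"d", "e"}, 5) is False
```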

Something to think about perhaps.

— Andrew

> 
> Nothing against pacemaker personally... IMHO it just doesn't feel like we are 
> doing this right if we need such a product in the first place.
> 




Re: [openstack-dev] [neutron][qos][ml2] extensions swallow exceptions

2015-08-06 Thread Ihar Hrachyshka

On 08/04/2015 08:41 PM, Robert Kukura wrote:
> The process_[create|update]_() extension driver methods
> are intended to validate user input. Exceptions raised by these
> need to be returned to users so they know what they did wrong.

I am not sure they should be returned as-is, since those exceptions
may map to InternalError in the server unless inherited properly.
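To illustrate the inheritance point with a toy example (class names are made up, not Neutron's actual hierarchy): only exceptions deriving from a known user-facing base are returned to the caller, and everything else collapses to a 500.

```python
class UserVisibleError(Exception):
    """Base class whose subclasses are safe to return to API users."""
    http_code = 400

class InvalidQosPolicy(UserVisibleError):
    """A validation failure the user can act on."""
    http_code = 422

def to_api_response(exc):
    # Unrecognized exceptions must not leak internals: map them to 500.
    if isinstance(exc, UserVisibleError):
        return exc.http_code, str(exc)
    return 500, "Internal Server Error"

assert to_api_response(InvalidQosPolicy("bad policy")) == (422, "bad policy")
assert to_api_response(RuntimeError("db down"))[0] == 500
```

An extension-driver exception that does not inherit from the recognized base would therefore surface as InternalError, which is the concern above.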

Anyway, I think the patch in question reached some level of maturity
that would need attention from ml2 and api people:

https://review.openstack.org/#/c/202061

Bob and others, please take a look.

Ihar



[openstack-dev] [Glance] glance_store and glance

2015-08-06 Thread Nikhil Komawar
Hi,

During the mid-cycle we had another proposal that wanted to put back the
glance_store library back into the Glance repo and not leave it is as a
separate repo/project.

The questions outstanding are: what are the use cases that want it as a
separate library?

The original use cases that supported a separate lib have not had much
progress or adoption yet. There have been complaints about overhead of
maintaining it as a separate lib and version tracking without much gain.
The proposals for refactoring the library are also a worrisome topic
in terms of the stability of the codebase.

The original use cases from my memory are:
1. Other projects consuming glance_store -- this has become less likely
to be useful.
2. another upload path for users for the convenience of tasks -- not
preferable as we don't want to expose this library to users.
3. ease of addition of newer drivers for the developers -- drivers have
only been removed since.
4. cleaner api / more methods that support backend store capabilities --
a separate library is not necessarily needed; a smoother refactor is
possible within the Glance codebase.

Also, the authN/Z complexities and ACL restrictions on the back-end
stores can become security loopholes if the library and Glance
evolve separately.

In order to move forward smoothly on this topic in Liberty, I hereby
request input from all concerned developer parties. The decision to keep
this as a separate library will remain in effect if we do not come to a
resolution within 2 weeks from now. However, if there aren't any
significant use cases, we may consider porting it back.

Please find some corresponding discussion from the latest Glance weekly
meeting:
http://eavesdrop.openstack.org/meetings/glance/2015/glance.2015-08-06-14.03.log.html#l-21

-- 

Thanks,
Nikhil




Re: [openstack-dev] How to use the log server in CI ?

2015-08-06 Thread Tang Chen

Hi Asselin,

On 08/06/2015 09:44 PM, Asselin, Ramy wrote:


Hi Tang,

First, I recommend you use os-loganalyze because it significantly 
increases the value of the log files by making them easier to consume.


I'm not sure what the issue you encountered is. The link you provided 
is for using swift, but that is the newer alternative to the old-fashioned 
files-on-disk approach, and not a requirement.




True. And I'm not using swift right now.

That said, you'll find the rules in one of the files located here: 
/etc/apache2/sites-enabled/


It is created by this template [1]. As you can see, there's no htmlify 
directory because it's an alias that invokes os-loganalyze.




Yes, I saw the source. Thank you very much for clarifying that.
It is OK now.

But one more thing I want to confirm.

install_log_server.sh should be run on the log server, right?
Then $DOMAIN is configured to the domain name of the log server itself, 
and nothing about the jenkins master is configured.

So how does the jenkins master find the log server?
In other words, how does the jenkins master know the domain name of the log 
server?


Thanks.


Ramy

[1] 
http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n85


*From:*Tang Chen [mailto:tangc...@cn.fujitsu.com]
*Sent:* Thursday, August 06, 2015 5:07 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] How to use the log server in CI ?

Hi Joshua,

Thanks for the reply.

On 08/06/2015 07:45 PM, Joshua Hesketh wrote:

Hi Tang,

For OpenStack's set up, os-loganalyze sits at /htmlify/ and is
used to add markup and filter log lines when viewing in a browser.


On my box, I don't have a /htmlify/ directory, and I don't think I 
installed os-loganalyze at all.
But when I accessed the log site, the URL was rewritten to prepend 
/htmlify/.




For your own set up you don't need to use this and could simply
serve anything straight off your disk. It should be safe to remove
the apache matching rules in order to do so.


Sorry, how do I remove the apache matching rules? From where?

Thanks. :)



Hope that helps.

Cheers,

Josh

On Thu, Aug 6, 2015 at 6:50 PM, Tang Chen <tangc...@cn.fujitsu.com> wrote:

Hi Abhishek,

After I set up a log server, requests ending in .txt.gz,
console.html or console.html.gz have their URL rewritten to prepend
/htmlify/.
But actually the log file is on my local machine.

Is this done by os-loganalyze?  Is this included in
install_log_server.sh? (I don't think so.)
Could I disable it and access my log file locally?

I found this URL for reference.

http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/

Thanks. :)








[openstack-dev] [Heat] OS::Neutron::Port fails to set security group by name, no way to retrieve group ID from Neutron::SecurityGroup

2015-08-06 Thread jason witkowski
Hey All,

I am having issues on the Kilo branch creating an auto-scaling template
that builds a security group and then adds instances to it.  I have tried
every method I could think of with no success.  My issues are
these:

1) OS::Neutron::Port does not seem to recognize security groups by name
2) OS::Neutron::SecurityGroup has no attributes so it can not return a
security group ID

These issues combined find me struggling to automate the building of a
security group and instances in one heat stack.  I have read and looked at
every example online and they all seem to use either the name of the
security group or the get_resource function to return the security group
object itself.  Neither of these works for me.

Here are my heat template files:

autoscaling.yaml - http://paste.openstack.org/show/412143/
redirector.yaml - http://paste.openstack.org/show/412144/
env.yaml - http://paste.openstack.org/show/412145/

Heat Client: 0.4.1
Heat-Manage: 2015.1.1

Any help would be greatly appreciated.

Best Regards,

Jason


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Adam Young

On 08/06/2015 04:56 AM, David Chadwick wrote:


On 05/08/2015 19:28, Thai Q Tran wrote:

I agree with Lance. Quite honestly, the list of IdPs does not belong
in horizon's settings. Just throwing out some ideas: why not white-list
the IdPs you want public in keystone's settings, and have an API call
for that?

that was the conclusion reached many months ago the last time this was
discussed.

regards

David


Posted a spec for review here.  It needs a corresponding API change.

https://review.openstack.org/#/c/209941/




  
  


 - Original message -
 From: Lance Bragstad 
 To: "OpenStack Development Mailing List (not for usage questions)"
 
 Cc:
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 Date: Wed, Aug 5, 2015 11:19 AM
  
  
  
 On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli
 <steve...@ca.ibm.com> wrote:

 Some folks said that they'd prefer not to list all associated
 IdPs, which I can understand.

 Actually, I like Jamie's suggestion of just making horizon a bit
 smarter, and expecting the values in the horizon settings
 (idp+protocol).

  
 This *might* lead to a more complicated user experience, unless we

 deduce the protocol for the IdP selected (but that would defeat the
 point?). Also, wouldn't we have to make changes to Horizon every
 time we add an IdP? This might be case by case, but if you're
 consistently adding Identity Providers, then your ops team might not
 be too happy reconfiguring Horizon all the time.
  




 Thanks,

 Steve Martinelli
 OpenStack Keystone Core


 From: Dolph Mathews <dolph.math...@gmail.com>
 To: "OpenStack Development Mailing List (not for usage
 questions)" <openstack-dev@lists.openstack.org>
 Date: 2015/08/05 01:38 PM
 Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

 





 On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick
 <d.w.chadw...@kent.ac.uk> wrote:




 On 04/08/2015 18:59, Steve Martinelli wrote:
 > Right, but that API is/should be protected. If we want to
 list IdPs
 > *before* authenticating a user, we either need: 1) a new
 API for listing
 > public IdPs or 2) a new policy that doesn't protect that API.

 Hi Steve

 yes this was my understanding of the discussion that took place many
 months ago. I had assumed (wrongly) that something had been done about
 it, but I guess from your message that we are no further forward on this.
 Actually, 2) above might be better reworded as: a new policy/engine that
 allows public access to be a bona fide policy rule.


 The existing policy simply seems wrong. Why protect the list of
 IdPs?
  



 regards

 David

 >
 > Thanks,
 >
 > Steve Martinelli
 > OpenStack Keystone Core
 >
 >
>  > From: Lance Bragstad <lbragstad@gmail.com>
>  > To: "OpenStack Development Mailing List (not for usage questions)"
>  > <openstack-dev@lists.openstack.org>
 > Date: 2015/08/04 01:49 PM
 > Subject: Re: [openstack-dev] [Keystone] [Horizon]
 Federated Login
 >
 >
 

 >
 >
 >
 >
 >
>  > On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish <drf...@us.ibm.com>
>  > wrote:
 >
 > Hi David,
 >
 > This is a cool looking UI. I've made a minor comment
 on it in InVision.
 >
 > I'm curious if this is an implementable idea - does
 keystone support
 > large

Re: [openstack-dev] [tricircle] local pluggable cascaded service

2015-08-06 Thread joehuang
Hi, Zhiyuan,

Good idea. I am also trying to find whether there is other value that could be 
added by a new bottom cascade service.

The most negative part of a bottom cascade service is that it will impact the 
OpenStack distribution and deployment.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Vega Cai [mailto:luckyveg...@gmail.com]
Sent: Monday, August 03, 2015 3:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] local pluggable cascaded service

Agree that we use polling mode to synchronize resource status, but it's a bit 
weird to deploy a localized service for just polling. Maybe we can define a 
status synchronization task to handle such work so it can be done by the cascade 
service itself. If multiple cascade services are deployed, to avoid conflict we 
can use leader election (provided by ZooKeeper and similar tools) to pick a 
leader cascade service. The leader cascade service is responsible for 
periodically creating the status synchronization task, and the other members 
finish the task.
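A sketch of that division of labor (`try_acquire_leadership` is a stand-in for a real ZooKeeper election recipe; all names are made up): only the current leader enqueues the periodic synchronization task, and any member may consume it.

```python
import queue

def leader_tick(node_id, try_acquire_leadership, task_queue):
    """One periodic cycle: only the elected leader enqueues the sync task."""
    if try_acquire_leadership(node_id):   # e.g. a ZooKeeper election recipe
        task_queue.put({"task": "status_sync", "created_by": node_id})

tasks = queue.Queue()
elect = lambda node_id: node_id == "cascade-1"  # stand-in: cascade-1 leads

# Every cascade service ticks, but only the leader creates the task.
for node in ("cascade-1", "cascade-2", "cascade-3"):
    leader_tick(node, elect, tasks)

assert tasks.qsize() == 1
assert tasks.get()["created_by"] == "cascade-1"
```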

BR
Zhiyuan

On 3 August 2015 at 14:48, joehuang <joehu...@huawei.com> wrote:
Hi,

In the PoC, the status synchronization is done by the proxy node running a 
periodic task to poll recently changed statuses, for example VM status, volume 
status, and port status. One proxy node is responsible for one cascaded 
OpenStack instance, and is configured with one user able to query the status. 
It works, although not perfectly, and it is controllable.

During the last weekly meeting, Gampel mentioned having a local pluggable 
cascaded service for "port status" and other site-localized information 
collection, which would push to the top-layer cascade service. I would like to 
know more of your thoughts on the "push" method.


1.  We cannot push every status change to the top layer. If one site goes down 
and then restarts all its services, every object's status will change frequently 
in a very short time; if we "push" on every status change, given the large 
number of objects in one site, the burst of API calls to the top layer will be 
uncontrollable.



2.  The local cascaded service would have to listen on the message bus to track 
each status change event for all objects, working like the Ceilometer agent to 
capture the status change events. Not every status change sends a notification 
to the message bus, so code would have to be added to each service. It's also 
complex to implement.


To my understanding, the more viable way is to deploy a localized service, but 
still use the polling method to get the status, and send it to the upper layer 
in batch mode to reduce the number of API calls to the top layer.


In the top layer, a task processes the status changes to handle bursts of 
status refreshes.
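A sketch of one such batched polling cycle (all function names are hypothetical): poll the cascaded site, diff against the last-seen state, and push only the accumulated changes in a single upper-layer call.

```python
def poll_and_push(fetch_statuses, push_batch, last_seen):
    """One polling cycle: collect only changed statuses, push them in bulk.

    fetch_statuses() -> {object_id: status} from the cascaded site;
    push_batch(changes) makes a single API call to the top layer.
    """
    current = fetch_statuses()
    changes = {oid: st for oid, st in current.items()
               if last_seen.get(oid) != st}
    if changes:            # one call per cycle, however many objects changed
        push_batch(changes)
    return current         # becomes last_seen for the next cycle

pushed = []
seen = poll_and_push(lambda: {"vm1": "ACTIVE", "vm2": "ERROR"},
                     pushed.append, {"vm1": "ACTIVE"})
assert pushed == [{"vm2": "ERROR"}]        # only the changed status was sent
assert seen == {"vm1": "ACTIVE", "vm2": "ERROR"}
```

Even during a burst (a site restart flipping many statuses), the top layer receives one batched call per polling interval instead of one call per object.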

Best Regards
Chaoyi Huang ( joehuang )




Re: [openstack-dev] [Neutron] VPNaaS and DVR compatibility

2015-08-06 Thread shihanzhang
I have the same question, and I have filed a bug on Launchpad: 
https://bugs.launchpad.net/neutron/+bug/1476469. 
Who can help to clarify it?
Thanks,
Hanzhang, shi 






At 2015-08-05 00:33:05, "Sergey Kolekonov"  wrote:

Hi,


I'd like to clarify a situation around VPNaaS and DVR compatibility in Neutron.
In non-DVR case VMs use a network node to access each other and external 
network.
So with VPNaaS enabled we just have additional setup steps performed on network 
nodes to establish VPN connection between VMs.
With DVR enabled two VMs from different networks (or even clouds) should still 
reach each other through network nodes, but if floating IPs are assigned, this 
doesn't work.
So my question is: is it expected and if yes are there any plans to add full 
support for VPNaaS on DVR-enabled clusters?


Thank you.
--

Regards,
Sergey Kolekonov


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Jamie Lennox
- Original Message -

> From: "Dolph Mathews" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Sent: Friday, August 7, 2015 9:09:25 AM
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login

> On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad < lbrags...@gmail.com >
> wrote:

> > On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews < dolph.math...@gmail.com >
> > wrote:
> 

> > > On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox < jamielen...@redhat.com >
> > > wrote:
> > 
> 

> > > > - Original Message -
> > > 
> > 
> 
> > > > > From: "David Lyle" < dkly...@gmail.com >
> > > 
> > 
> 
> > > > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > > > openstack-dev@lists.openstack.org >
> > > 
> > 
> 
> > > > > Sent: Thursday, August 6, 2015 5:52:40 AM
> > > 
> > 
> 
> > > > > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > > Forcing Horizon to duplicate Keystone settings just makes everything
> > > > > much
> > > 
> > 
> 
> > > > > harder to configure and much more fragile. Exposing whitelisted, or
> > > > > all,
> > > 
> > 
> 
> > > > > IdPs makes much more sense.
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > > On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews <
> > > > > dolph.math...@gmail.com
> > > > > >
> > > 
> > 
> 
> > > > > wrote:
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > > On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli <
> > > > > steve...@ca.ibm.com
> > > > > >
> > > 
> > 
> 
> > > > > wrote:
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > >
> > > 
> > 
> 
> > > > > Some folks said that they'd prefer not to list all associated idps,
> > > > > which
> > > > > i
> > > 
> > 
> 
> > > > > can understand.
> > > 
> > 
> 
> > > > > Why?
> > > 
> > 
> 

> > > > So the case i heard and i think is fairly reasonable is providing
> > > > corporate
> > > > logins to a public cloud. Taking the canonical coke/pepsi example if
> > > > i'm
> > > > coke, i get asked to login to this public cloud i then have to scroll
> > > > though
> > > > all the providers to find the COKE.COM domain and i can see for example
> > > > that
> > > > PEPSI.COM is also providing logins to this cloud. Ignoring the
> > > > corporate
> > > > privacy implications this list has the potential to get long. Think
> > > > about
> > > > for example how you can do a corporate login to gmail, you certainly
> > > > don't
> > > > pick from a list of auth providers for gmail - there would be
> > > > thousands.
> > > 
> > 
> 

> > > > My understanding of the usage then would be that coke would have been
> > > > provided a (possibly branded) dedicated horizon that backed onto a
> > > > public
> > > > cloud and that i could then from horizon say that it's only allowed
> > > > access
> > > > to the COKE.COM domain (because the UX for inputting a domain at login
> > > > is
> > > > not great so per customer dashboards i think make sense) and that for
> > > > this
> > > > instance of horizon i want to show the 3 or 4 login providers that
> > > > COKE.COM
> > > > is going to allow.
> > > 
> > 
> 

> > > > Anyway you want to list or whitelist that in keystone is going to
> > > > involve
> > > > some form of IdP tagging system where we have to say which set of idps
> > > > we
> > > > want in this case and i don't think we should.
> > > 
> > 
> 

> > > That all makes sense, and I was admittedly only thinking of the private
> > > cloud use case. So, I'd like to discuss the public and private use cases
> > > separately:
> > >
> > > In a public cloud, is there a real use case for revealing *any* IdPs
> > > publicly? If not, the entire list should be made "private" using
> > > policy.json, which we already support today.
> >
> > The user would be required to know the id of the IdP in which they want to
> > federate with, right?

> As a federated end user in a public cloud, I'd be happy to have a custom URL
> / bookmark for my IdP / domain (like http://customer-x.cloud.example.com/ or
> http://cloud.example.com/customer-x ) that I need to know to kickoff the
> correct federated handshake with my IdP using a single button press
> ("Login").

I always envisioned the subdomain method. I would say no to listing IdPs, but 
it's not as simple as making the list private: you would still need to provide 
at least one IdP option manually in that horizon's local_settings, at which 
point you should just turn off listing entirely, because you know the list call 
will always return a 403. I'm also not sure how this would be managed today, 
because we have a single WebSSO entry point, so you can't really specify the 
IdP you want from the login page - you're expected to have your own discovery 
page, hence the spec https://review.openstack.org/#/c/199339/ 

> > > In a private cloud, is there a real use case for fine-grained

[openstack-dev] [murano] periodic jobs reminder

2015-08-06 Thread Kirill Zaitsev
hi

This letter is just a heads up, that murano now has periodic jobs, that report 
any failures in stable branches to 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint 
mailing list.

So if you’re interested — you should subscribe to the mailing list (and 
possibly add some filters for murano =))
Thanks for your attention =)

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] formatting security patches

2015-08-06 Thread John Dickinson
I suspect that many people do not know that the life of a security patch to an 
OpenStack project looks different than normal patches. Gerrit is public, so 
patches for private security bugs can't be proposed or reviewed there. Instead, 
they need to be proposed to and reviewed in the comments of the Launchpad bug 
report.

What we want to avoid is someone filing a security bug and then proposing the 
patch to gerrit for review.

https://security.openstack.org/#how-to-propose-and-review-a-security-patch has 
been created to show how to create and apply a security patch. (I've summarized 
it below)

When you, the patch author, want to propose a patch, you should export it and 
attach it to the Launchpad bug review as a comment. How do you export the 
patch? Like this:

# check out the committed patch locally, then do this
git format-patch --stdout HEAD~1 >path/to/local/file.patch

Now you have a local file you can attach to comments, email around, or whatever 
you want. It contains not only the patch diff, but the author, timestamp, and 
other metadata needed for someone to apply it locally to their own repo.

Now, if you, as a patch reviewer, want to test out a patch, download it from 
the Launchpad bug report and run the following:

git am path/to/local/file.patch
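The two commands above can be exercised end to end. Here's a hedged sketch of the full round trip using two throwaway repos (all paths, names, and commit messages below are invented for the demo):

```shell
# Sketch: export a patch with format-patch, apply it with am.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Author side: commit the fix locally, then export it with full metadata.
git init -q src && cd src
git -c user.name=author -c user.email=a@example.com commit -q --allow-empty -m base
echo fix > fix.txt && git add fix.txt
git -c user.name=author -c user.email=a@example.com commit -q -m "Fix security bug"
git format-patch --stdout HEAD~1 > ../fix.patch

# Reviewer side: download the attachment from the bug, then apply it.
cd .. && git init -q dst && cd dst
git -c user.name=reviewer -c user.email=r@example.com commit -q --allow-empty -m base
git -c user.name=reviewer -c user.email=r@example.com am -q ../fix.patch
git log -1 --format='%an %s'
```

Note that `git am` records the original author and subject from the patch file, while the committer is the reviewer who applied it.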

signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Dolph Mathews
On Thu, Aug 6, 2015 at 6:09 PM, Dolph Mathews 
wrote:

>
> On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad 
> wrote:
>
>>
>>
>> On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews 
>> wrote:
>>
>>>
>>> On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox 
>>> wrote:
>>>


 - Original Message -
 > From: "David Lyle" 
 > To: "OpenStack Development Mailing List (not for usage questions)" <
 openstack-dev@lists.openstack.org>
 > Sent: Thursday, August 6, 2015 5:52:40 AM
 > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
 >
 > Forcing Horizon to duplicate Keystone settings just makes everything
 much
 > harder to configure and much more fragile. Exposing whitelisted, or
 all,
 > IdPs makes much more sense.
 >
 > On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews <
 dolph.math...@gmail.com >
 > wrote:
 >
 >
 >
 > On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli <
 steve...@ca.ibm.com >
 > wrote:
 >
 >
 >
 >
 >
 > Some folks said that they'd prefer not to list all associated idps,
 which i
 > can understand.
 > Why?

 So the case i heard and i think is fairly reasonable is providing
 corporate logins to a public cloud. Taking the canonical coke/pepsi example
 if i'm coke, i get asked to login to this public cloud i then have to
 scroll though all the providers to find the COKE.COM domain and i can
 see for example that PEPSI.COM is also providing logins to this cloud.
 Ignoring the corporate privacy implications this list has the potential to
 get long. Think about for example how you can do a corporate login to
 gmail, you certainly don't pick from a list of auth providers for gmail -
 there would be thousands.

 My understanding of the usage then would be that coke would have been
 provided a (possibly branded) dedicated horizon that backed onto a public
 cloud and that i could then from horizon say that it's only allowed access
 to the COKE.COM domain (because the UX for inputting a domain at login
 is not great so per customer dashboards i think make sense) and that for
 this instance of horizon i want to show the 3 or 4 login providers that
 COKE.COM is going to allow.

 Anyway you want to list or whitelist that in keystone is going to
 involve some form of IdP tagging system where we have to say which set of
 idps we want in this case and i don't think we should.

>>>
>>> That all makes sense, and I was admittedly only thinking of the private
>>> cloud use case. So, I'd like to discuss the public and private use cases
>>> separately:
>>>
>>> In a public cloud, is there a real use case for revealing *any* IdPs
>>> publicly? If not, the entire list should be made "private" using
>>> policy.json, which we already support today.
>>>
>>
>> The user would be required to know the id of the IdP in which they want
>> to federate with, right?
>>
>>
>
> As a federated end user in a public cloud, I'd be happy to have a custom
> URL / bookmark for my IdP / domain (like
> http://customer-x.cloud.example.com/ or
> http://cloud.example.com/customer-x) that I need to know to kickoff the
> correct federated handshake with my IdP using a single button press
> ("Login").
>

The benefit of the first example is that I can easily setup DNS to redirect
cloud.customer-x.com to customer-x.cloud.example.com, where example.com is
my public cloud provider. The benefit of the second example is that it's
completely trivial for the public cloud provider to implement.


>
>
>>
>>> In a private cloud, is there a real use case for fine-grained
>>> public/private attributes per IdP? (The stated use case was for a public
>>> cloud.) It seems the default behavior should be that horizon fetches the
>>> entire list from keystone.
>>>
>>>

 @David - when you add a new IdP to the university network are you
 having to provide a new mapping each time? I know the CERN answer to this
 with websso was to essentially group many IdPs behind the same keystone idp
 because they will all produce the same assertion values and consume the
 same mapping.

 Maybe the answer here is to provide the option in
 django_openstack_auth, a plugin (again) of fetch from keystone, fixed list
 in settings or let it point at a custom text file/url that is maintained by
 the deployer. Honestly if you're adding and removing idps this frequently i
 don't mind making the deployer maintain some of this information out of
 scope of keystone.


 Jamie

 >
 >
 >
 >
 >
 > Actually, I like jamie's suggestion of just making horizon a bit
 smarter, and
 > expecting the values in the horizon settings (idp+protocol)
 > But, it's already in keystone.
 >
 >
 >
 >
 >
 >
 >
 > Thanks,
 >
 > Steve Martinelli
 > OpenStack

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Dolph Mathews
On Thu, Aug 6, 2015 at 11:25 AM, Lance Bragstad  wrote:

>
>
> On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews 
> wrote:
>
>>
>> On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox 
>> wrote:
>>
>>>
>>>
>>> - Original Message -
>>> > From: "David Lyle" 
>>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev@lists.openstack.org>
>>> > Sent: Thursday, August 6, 2015 5:52:40 AM
>>> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>>> >
>>> > Forcing Horizon to duplicate Keystone settings just makes everything
>>> much
>>> > harder to configure and much more fragile. Exposing whitelisted, or
>>> all,
>>> > IdPs makes much more sense.
>>> >
>>> > On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews <
>>> dolph.math...@gmail.com >
>>> > wrote:
>>> >
>>> >
>>> >
>>> > On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli < steve...@ca.ibm.com
>>> >
>>> > wrote:
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Some folks said that they'd prefer not to list all associated idps,
>>> which i
>>> > can understand.
>>> > Why?
>>>
>>> So the case i heard and i think is fairly reasonable is providing
>>> corporate logins to a public cloud. Taking the canonical coke/pepsi example
>>> if i'm coke, i get asked to login to this public cloud i then have to
>>> scroll though all the providers to find the COKE.COM domain and i can
>>> see for example that PEPSI.COM is also providing logins to this cloud.
>>> Ignoring the corporate privacy implications this list has the potential to
>>> get long. Think about for example how you can do a corporate login to
>>> gmail, you certainly don't pick from a list of auth providers for gmail -
>>> there would be thousands.
>>>
>>> My understanding of the usage then would be that coke would have been
>>> provided a (possibly branded) dedicated horizon that backed onto a public
>>> cloud and that i could then from horizon say that it's only allowed access
>>> to the COKE.COM domain (because the UX for inputting a domain at login
>>> is not great so per customer dashboards i think make sense) and that for
>>> this instance of horizon i want to show the 3 or 4 login providers that
>>> COKE.COM is going to allow.
>>>
>>> Anyway you want to list or whitelist that in keystone is going to
>>> involve some form of IdP tagging system where we have to say which set of
>>> idps we want in this case and i don't think we should.
>>>
>>
>> That all makes sense, and I was admittedly only thinking of the private
>> cloud use case. So, I'd like to discuss the public and private use cases
>> separately:
>>
>> In a public cloud, is there a real use case for revealing *any* IdPs
>> publicly? If not, the entire list should be made "private" using
>> policy.json, which we already support today.
>>
>
> The user would be required to know the id of the IdP in which they want to
> federate with, right?
>
>

As a federated end user in a public cloud, I'd be happy to have a custom
URL / bookmark for my IdP / domain (like
http://customer-x.cloud.example.com/ or http://cloud.example.com/customer-x)
that I need to know to kickoff the correct federated handshake with my IdP
using a single button press ("Login").


>
>> In a private cloud, is there a real use case for fine-grained
>> public/private attributes per IdP? (The stated use case was for a public
>> cloud.) It seems the default behavior should be that horizon fetches the
>> entire list from keystone.
>>
>>
>>>
>>> @David - when you add a new IdP to the university network are you having
>>> to provide a new mapping each time? I know the CERN answer to this with
>>> websso was to essentially group many IdPs behind the same keystone idp
>>> because they will all produce the same assertion values and consume the
>>> same mapping.
>>>
>>> Maybe the answer here is to provide the option in django_openstack_auth,
>>> a plugin (again) of fetch from keystone, fixed list in settings or let it
>>> point at a custom text file/url that is maintained by the deployer.
>>> Honestly if you're adding and removing idps this frequently i don't mind
>>> making the deployer maintain some of this information out of scope of
>>> keystone.
>>>
>>>
>>> Jamie
>>>
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Actually, I like jamie's suggestion of just making horizon a bit
>>> smarter, and
>>> > expecting the values in the horizon settings (idp+protocol)
>>> > But, it's already in keystone.
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > Thanks,
>>> >
>>> > Steve Martinelli
>>> > OpenStack Keystone Core
>>> >
>>> > Dolph Mathews ---2015/08/05 01:38:09 PM---On Wed, Aug 5, 2015 at 5:39
>>> AM,
>>> > David Chadwick < d.w.chadw...@kent.ac.uk > wrote:
>>> >
>>> > From: Dolph Mathews < dolph.math...@gmail.com >
>>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>>> > openstack-dev@lists.openstack.org >
>>> > Date: 2015/08/05 01:38 PM
>>> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, Aug 5,

Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients

2015-08-06 Thread Jamie Lennox


- Original Message -
> From: "michael mccune" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, August 7, 2015 1:21:53 AM
> Subject: Re: [openstack-dev] [keystone] policy issues when generating trusts 
> with different clients
> 
> On 08/05/2015 10:27 PM, Jamie Lennox wrote:
> > Hey Mike,
> >
> > I think it could be one of the hacks that are in place to try and keep
> > compatibility with the old and new way of using the client is returning
> > the wrong thing. Compare the output of trustor.user_id and
> > trustor_auth.get_user_id(sess). For me trustor.user_id is None which will
> > make sense why you'd get permission errors.
> >
> > Whether this is a bug in keystoneclient is debatable because we had to keep
> > compatibility with the old options just not update them for the new paths,
> > the ambiguity is certainly bad.
> >
> > The command that works for me is:
> >
> > trustor.trusts.create(
> >  trustor_user=trustor_auth.get_user_id(sess),
> >  trustee_user=trustee_auth.get_user_id(sess),
> >  project=trustor_auth.get_project_id(sess),
> >  role_names=['Member'],
> >  impersonation=True,
> >  expires_at=None)
> >
> > We're working on a keystoneclient 2.0 that will remove all that old code.
> >
> >
> > Let me know if that fixes it for you.
> 
> hi Jamie,
> 
> this does work for me. but now i have a few questions as i start to
> refactor our code.

Great

> previously we have been handing around keystone Client objects to
> perform all of our operations. this leads to some trouble as we expected
> the user_id, and project_id, to be present in the client. so, 3 questions.
> 
> 1. is it safe to set the user_id and project_id on a Client object?
> (i notice that i am able to perform this operation and it would make
> things slightly easier to refactor)

It's safe in that if you force-set it on the client there isn't anything in the 
client that will override it, but I don't know that I'd recommend it.
 
> 2. are there plans for the new keystoneclient to automatically fill in
> user_id and project_id for Session/Auth based clients?

No
 
> 3. would it be better to transform our code to pass around Auth plugin
> objects instead of Client objects?

Absolutely.

So conceptually this is what we're trying to get to with keystoneclient. 
Keystone has 2 related but distinct jobs: one is to provide authentication for 
all the services, and one is to manage its own CRUD operations. Sessions and 
auth plugins are authentication operations, and the existing keystoneclient 
should be used only for keystone's CRUD operations. This is also the intent 
behind the upcoming keystoneauth/keystoneclient split, where authentication 
options that are common to all clients are going to be moved to keystoneauth.

If you find yourself using a keystoneclient object for auth I would consider 
that a bug.

There is a larger question here about what sahara is doing with the user_id and 
project_id, typically this would be received from auth_token middleware or 
otherwise be implied by the token that is passed around. If you are passing 
these parameters to other clients we should fix those clients. For most 
projects this is handled by passing around the Context object which contains a 
session, a plugin and all the information from auth_token middleware.
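The pattern Jamie describes can be sketched with plain Python stand-ins. To be clear, the classes below are invented for illustration and are not the real keystoneclient API (which uses `keystoneclient.session.Session` and auth plugins such as `v3.Password`); the point is only that identity is resolved on demand from the plugin rather than cached on a client:

```python
class FakeSession:
    """Stand-in for a keystoneclient Session (transport + TLS settings)."""

class FakePlugin:
    """Stand-in for an auth plugin: user/project IDs are resolved on demand,
    not copied onto a client object at construction time."""
    def __init__(self, user_id, project_id):
        self._user_id, self._project_id = user_id, project_id

    def get_user_id(self, session):
        return self._user_id          # real plugins authenticate lazily here

    def get_project_id(self, session):
        return self._project_id

def create_trust_args(trustor, trustee, session):
    # Everything identity-related comes from the plugins, so callers only
    # pass (session, plugin) pairs around - never raw IDs or Client objects.
    return {
        'trustor_user': trustor.get_user_id(session),
        'trustee_user': trustee.get_user_id(session),
        'project': trustor.get_project_id(session),
    }
```

This mirrors the working `trusts.create(...)` call earlier in the thread, where each ID is fetched via `plugin.get_*_id(session)` at the point of use.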

> thanks again for the help,
> mike
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Proposal to cancel meeting on August 11

2015-08-06 Thread James Slagle
I'll be unavailable next week to run the TripleO meeting (taking the
week off). I'm also aware that the majority of our other attendees are
tied up in travel as well. So, I propose to cancel the meeting next
week on Tuesday August 11th.

If there are no objections, consider it cancelled. :).

If you object, consider yourself volunteering to run the meeting :)

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-06 Thread James Slagle
On Thu, Aug 6, 2015 at 8:12 AM, Dan Prince  wrote:

>
> One more side effect is that I think it also means we no longer have
> the capability to test arbitrary Zuul refspecs for projects like Heat,
> Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
> source-repositories element to do this for us in the undercloud and
> since most of the instack stuff uses packages I think we would lose
> this capability.
>
> I'm all for testing with packages mind you... would just like to see us
> build packages for any projects that have Zuul refspecs inline, create
> a per job repo, and then use that to build out the resulting instack
> undercloud.
>
> This to me is the biggest loss in our initial switch to instack
> undercloud for CI. Perhaps there is a middle ground here where instack
> (which used to support tripleo-image-elements itself) could still
> support use of the source-repositories element in one CI job until we
> get our package building processes up to speed?

Isn't this what's happening at line 89 in
https://review.openstack.org/#/c/185151/6/toci_devtest_instack.sh ?

Or would $ZUUL_CHANGES not be populated when check-experimental is run?



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova][ci] Tintri Cinder CI failures after Nova change

2015-08-06 Thread Skyler Berg
After the change "cleanup NovaObjectDictCompat from virtual_interface"
[1] was merged into Nova on the morning of August 5th, Tintri's CI for
Cinder started failing 13 test cases that involve a volume being
attached to an instance [2].

I have verified that the tests fail with the above mentioned change and
pass when running against the previous commit.

If anyone knows why this patch is causing an issue or is experiencing
similar problems, please let me know.

In the meantime, expect Tintri's CI to be either down or reporting
failures until a solution is found.

[1] https://review.openstack.org/#/c/200823/
[2] http://openstack-ci.tintri.com/tintri/refs-changes-06-201406-35/
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] RPC API versioning

2015-08-06 Thread Zane Bitter

On 06/08/15 08:20, Grasza, Grzegorz wrote:




-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com]
Sent: Thursday, 6 August, 2015 2:57
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Heat] RPC API versioning

We've been talking about this since before summit without much consensus.
I think a large part of the problem is that very few people have deep
knowledge of both Heat and Versioned Objects. However, I think we are at a
point where we should be able to settle on an approach at least for the
API<->engine RPC interface. I've been talking to Dan Smith about the Nova
team's plan for upgrades, which goes something like this:

* Specify a max RPC API version in the config file
* In the RPC client lib, add code to handle versions as far back as the previous
release
* The operator rolls out the updated code, keeping the existing config file
with the pin to the previous release's RPC version
* Once all services are upgraded, the operator rolls out a new config file
shifting the pin
* The backwards compat code to handle release N-1 is removed in the N+1
release
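The pinning steps above could be sketched like this (schematic Python only; the class and method names are invented and this is not Heat's or Nova's actual RPC code):

```python
class RPCClient:
    LATEST = (1, 5)  # highest RPC API version this release's code can speak

    def __init__(self, pinned=None):
        # During a rolling upgrade the operator pins e.g. "1.4" (the previous
        # release's version) in the config file, then lifts the pin once all
        # services are running the new code.
        self.version = tuple(map(int, pinned.split('.'))) if pinned else self.LATEST

    def create_stack(self, name, tags=None):
        msg = {'method': 'create_stack', 'name': name}
        if self.version >= (1, 5):
            msg['tags'] = tags or []      # hypothetical field added in 1.5
        elif tags:
            # Backwards-compat branch for N-1 servers; removed in release N+1.
            raise ValueError('tags require RPC API >= 1.5')
        return msg
```

While the pin is in place the client only emits messages the old servers understand; once the pin is lifted the new field flows, and the compat branch is deleted a release later.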

This is, I believe, sufficient to solve our entire problem.
Specifically, we have no need for an indirection API that rebroadcasts
messages that are too new (since that can't happen with pinning) and no
need for Versioned Objects in the RPC layer. (Versioned objects for the DB
are still critical, and we are very much better off for all the hard work that
Michal and others have put into them. Thanks!)


What is the use of versioned objects outside of RPC?
I've written some documentation for Oslo VO and helped in introducing them in 
Heat.
As I understand it, the only use cases for VO are
* to serialize objects to dicts with version information when they are sent 
over RPC
* handle version dependent code inside the objects (instead of scattering it 
around the codebase)


This is what we currently have them for AIUI - so that we can do updates 
to the DB schema without downtime. The VO abstracts the 
version-dependent parts in a single place, so the rest of the code can 
continue to work even with an old DB schema. This means that we can roll 
out new code and only update the schema once it is all running.
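As a toy sketch of that idea (this is not oslo.versionedobjects' real API; the object and field names are invented), a single class absorbs the schema difference so the rest of the code always sees one shape:

```python
OLD_SCHEMA = True   # pretend the DB migration hasn't run yet

class StackObject:
    """One place that knows both the old and the new DB layout."""
    def __init__(self, name, tags):
        self.name, self.tags = name, tags

    @classmethod
    def from_db(cls, row):
        # Hypothetical change: the old schema stored tags as a comma-joined
        # string, the new one as a list. New code always receives a list,
        # whichever schema is currently live.
        tags = row['tags'].split(',') if OLD_SCHEMA else row['tags']
        return cls(row['name'], tags)
```

That is what allows new code to run against the old schema until the migration happens.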



* provide an object oriented and transparent access to the resources 
represented by the objects to services which don't have direct access to that 
resource (via RPC) - the indirection API

The last point was not yet discussed in Heat as far as I know, but the 
indirection API also contains an interface for backporting objects, which is 
something that is currently only used in Nova, and as you say, doesn't have a 
use when version pinning is in place.



The nature of Heat's RPC API is that it is effectively user-facing - the 
heat-api
process is essentially a thin proxy between ReST and RPC. We already have a
translation layer between the internal representation(s) of objects and the
user-facing representation, in the form of heat.engine.api, and the RPC API is
firmly on the user-facing side. The requirements for the content of these
messages are actually much stricter than anything we need for RPC API
stability, since they need to remain compatible not just with heat-api but
with heatclient - and we have *zero* control over when that gets upgraded.
Despite that, we've managed quite nicely for ~3 years without breaking
changes afaik.

Versioned Objects is a great way of retaining control when you need to share
internal data structures between processes. Fortunately the architecture of
Heat makes that unnecessary. That was a good design decision. We are not
going to reverse that design decision in order to use Versioned Objects. (In
the interest of making sure everyone uses their time productively, perhaps I
should clarify that to: "your patch is subject to -2 on sight if it introduces
internal engine data structures to heat-api/heat-cfn-api".)

Hopefully I've convinced you of the sufficiency of this plan for the
API<->engine interface specifically. If anyone disagrees, let them speak now, &c.


I don't understand - what is the distinction between internal and external data 
structures?


Internal data structures are just whatever it's most convenient for the 
application to work with. They're subject to change/refactoring at any time.


External data structures are ones that have an explicit stability 
guarantee, in this case because they're part of the user interface.



 From what I understand, versioned objects were introduced in Heat to represent 
objects which are sent over RPC between Heat services.


I believe you have been misinformed. All of the versioned objects that 
have been created so far are representations of tables in the database. 
However, we never send these over RPC: pre-VO it would obviously have 
been dumb to couple the user interface to the representation in the 
database, so we just didn't do that and therefore that was never a 
motivation for creating those VOs. And given that we a

Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Fox, Kevin M
It's about both: high quality shared heat template components, like a place to 
house:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn/lib

But also a common place for developers to congregate around producing 
production worthy cloud scaled app templates. So, like your mongodb cluster 
example. I know these sorts of templates have been written over and over again 
and not shared. I even wrote a set myself: 
https://github.com/EMSL-MSC/heat-templates/tree/master/hot/MongoDB a while 
back. Its gotten a bit bit rotten though. If it was CI'ed in an OpenStack repo, 
it could be kept in much better shape.

I know our organization has written chef server templates too, and j^2 just 
wrote one from scratch for the app catalog.

I think the app catalog project would benefit greatly by way of a greatly 
increased number of production ready heat templates, if there was a good place 
to start building them in the open.

The 5 +1's idea sounds like a reasonable starting point to me. If nothing else, 
it provides some valuable peer feedback to contributors on how they may improve 
the templates until a big enough community can be formed to have cores 
specifically for the repo.  Right now, users writing their own templates don't 
get any feedback unless they know to ask in the right places. This process ends 
up creating templates that aren't very portable across clouds and therefore not 
very suitable for the app catalog.

Thanks,
Kevin

From: Ryan Brown [rybr...@redhat.com]
Sent: Thursday, August 06, 2015 11:55 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

On 08/06/2015 01:53 PM, Christopher Aedo wrote:
> Today during the app-catalog IRC meeting we talked about hosting Heat
> templates for contributors.  Right now someone who wants to create
> their own templates can easily self-host them on github, but until
> they get people pointed at it, nobody will know about their work on
> that template, and getting guidance and feedback from all the people
> who know Heat well takes a fair amount of effort.

Discoverability is a problem, but so is ownership in the shared repo
case. There's also the heat-templates repo, which has some example
content and such.

> What do you think about us creating a new repo (app-catalog-heat
> perhaps), and collectively we could encourage those interested in
> contributing Heat templates to host them there?  Ideally members of
> the Heat community would become reviewers of the content, and give
> guidance and feedback.

I think being able to review something requires a lot more than "hey we
have a central/shared repo," including having some shared purpose and
knowledge of the goal.

Of course, people with heat knowledge can look at templates and say
things like "well that's not valid YAML," but that's not really a code
review. I'd much rather see folks come to IRC or ask.openstack with
specific questions so we can 1) answer them or 2) improve our docs.

Having a shared repo of "here are some heat templates" doesn't strike me
as incredibly useful, especially if the templates don't all go together
and make one big thingy.

> It would also allow us to hook into OpenStack
> CI so these templates could be tested, and contributors would have a
> better sense of the utility/portability of their templates.  Over time
> it could lead to much more exposure for all the useful Heat templates
> people are creating.
>
> Thoughts?
>
> -Christopher

What do you imagine these templates being for? Are people creating
little reusable snippets/nested stacks that can be incorporated into
someone else's infrastructure? Or standalone templates for stuff like
"here, instant mongodb cluster"?

Also, the obvious question of the central repo is "how does reviewing
work?" are heat cores expected to also be cores on this new repo, or
maybe just take anything that gets 5 +1's?

--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Ryan Brown
On 08/06/2015 01:53 PM, Christopher Aedo wrote:
> Today during the app-catalog IRC meeting we talked about hosting Heat
> templates for contributors.  Right now someone who wants to create
> their own templates can easily self-host them on github, but until
> they get people pointed at it, nobody will know about their work on
> that template, and getting guidance and feedback from all the people
> who know Heat well takes a fair amount of effort.

Discoverability is a problem, but so is ownership in the shared repo
case. There's also the heat-templates repo, which has some example
content and such.

> What do you think about us creating a new repo (app-catalog-heat
> perhaps), and collectively we could encourage those interested in
> contributing Heat templates to host them there?  Ideally members of
> the Heat community would become reviewers of the content, and give
> guidance and feedback. 

I think being able to review something requires a lot more than "hey we
have a central/shared repo," including having some shared purpose and
knowledge of the goal.

Of course, people with heat knowledge can look at templates and say
things like "well that's not valid YAML," but that's not really a code
review. I'd much rather see folks come to IRC or ask.openstack with
specific questions so we can 1) answer them or 2) improve our docs.

Having a shared repo of "here are some heat templates" doesn't strike me
as incredibly useful, especially if the templates don't all go together
and make one big thingy.

> It would also allow us to hook into OpenStack
> CI so these templates could be tested, and contributors would have a
> better sense of the utility/portability of their templates.  Over time
> it could lead to much more exposure for all the useful Heat templates
> people are creating.
> 
> Thoughts?
> 
> -Christopher

What do you imagine these templates being for? Are people creating
little reusable snippets/nested stacks that can be incorporated into
someone else's infrastructure? Or standalone templates for stuff like
"here, instant mongodb cluster"?

Also, the obvious question of the central repo is "how does reviewing
work?" are heat cores expected to also be cores on this new repo, or
maybe just take anything that gets 5 +1's?

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Fox, Kevin M
Heat templates so far seems to be a place to dump examples for showing off how 
to use specific heat resources/features.

Are there any intentions to maintain production ready heat templates in it? 
Last I asked the answer seemed to be no.

If I misunderstood, heat-templates would be a logical place to put them then.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, August 06, 2015 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

On 06/08/15 13:53, Christopher Aedo wrote:
> Today during the app-catalog IRC meeting we talked about hosting Heat
> templates for contributors.  Right now someone who wants to create
> their own templates can easily self-host them on github, but until
> they get people pointed at it, nobody will know about their work on
> that template, and getting guidance and feedback from all the people
> who know Heat well takes a fair amount of effort.
>
> What do you think about us creating a new repo (app-catalog-heat
> perhaps), and collectively we could encourage those interested in
> contributing Heat templates to host them there?  Ideally members of
> the Heat community would become reviewers of the content, and give
> guidance and feedback.  It would also allow us to hook into OpenStack
> CI so these templates could be tested, and contributors would have a
> better sense of the utility/portability of their templates.  Over time
> it could lead to much more exposure for all the useful Heat templates
> people are creating.
>
> Thoughts?

Already exists:

https://git.openstack.org/cgit/openstack/heat-templates/

- ZB



Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Fox, Kevin M
+1. I have some heat template libraries here:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn/lib
that I'd like to get contributed somewhere.

Also, in the parent directory are a bunch of other templates that would be nice 
to contribute too. CI would definitely be a plus. I'm not sure how bit rotten 
some of the templates have gotten over time.

Thanks,
Kevin

From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, August 06, 2015 10:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [app-catalog][heat] Heat template contributors repo

Today during the app-catalog IRC meeting we talked about hosting Heat
templates for contributors.  Right now someone who wants to create
their own templates can easily self-host them on github, but until
they get people pointed at it, nobody will know about their work on
that template, and getting guidance and feedback from all the people
who know Heat well takes a fair amount of effort.

What do you think about us creating a new repo (app-catalog-heat
perhaps), and collectively we could encourage those interested in
contributing Heat templates to host them there?  Ideally members of
the Heat community would become reviewers of the content, and give
guidance and feedback.  It would also allow us to hook into OpenStack
CI so these templates could be tested, and contributors would have a
better sense of the utility/portability of their templates.  Over time
it could lead to much more exposure for all the useful Heat templates
people are creating.

Thoughts?

-Christopher



Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Zane Bitter

On 06/08/15 13:53, Christopher Aedo wrote:

Today during the app-catalog IRC meeting we talked about hosting Heat
templates for contributors.  Right now someone who wants to create
their own templates can easily self-host them on github, but until
they get people pointed at it, nobody will know about their work on
that template, and getting guidance and feedback from all the people
who know Heat well takes a fair amount of effort.

What do you think about us creating a new repo (app-catalog-heat
perhaps), and collectively we could encourage those interested in
contributing Heat templates to host them there?  Ideally members of
the Heat community would become reviewers of the content, and give
guidance and feedback.  It would also allow us to hook into OpenStack
CI so these templates could be tested, and contributors would have a
better sense of the utility/portability of their templates.  Over time
it could lead to much more exposure for all the useful Heat templates
people are creating.

Thoughts?


Already exists:

https://git.openstack.org/cgit/openstack/heat-templates/

- ZB



[openstack-dev] [app-catalog] App Catalog IRC meeting minutes - 8/6/2015

2015-08-06 Thread Christopher Aedo
Thanks again for a great meeting this week, and I'm really encouraged
to see the conversation around building up the Community App Catalog
is continuing to grow.  One thing we touched on this week was the idea
of creating a new repo for Heat contributors to host their templates
on.  I sent a note to OpenStack-dev [1] in hopes of sparking a
conversation on this.  Additionally we've been adding more blueprints
to make our roadmap intentions clearer [2].

As always, please join us on IRC (#openstack-app-catalog), or speak up
here on the mailing list if you want to help us make this the top
destination for people using OpenStack clouds!

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-August/071555.html
[2]: https://blueprints.launchpad.net/app-catalog

=
#openstack-meeting-3: app-catalog
=

Meeting started by docaedo at 17:00:33 UTC.  The full logs are available
at
http://eavesdrop.openstack.org/meetings/app_catalog/2015/app_catalog.2015-08-06-17.00.log.html

Meeting summary
---
* rollcall  (docaedo, 17:00:45)

* Status updates  (docaedo, 17:01:33)
  * LINK: https://review.openstack.org/#/c/207253/  (docaedo, 17:01:43)
  * LINK: https://blueprints.launchpad.net/app-catalog  (docaedo,
17:01:55)
  * LINK:
https://blueprints.launchpad.net/app-catalog/+spec/murano-apps-dependencies
(kfox, 17:03:46)

* App Catalog Horizon Plugin Update (kfox)  (docaedo, 17:05:49)
  * LINK: https://youtu.be/2UQ6xa6uDQY  (kfox, 17:09:45)
  * LINK: https://play.google.com/store  (kfox, 17:11:19)
  * LINK:
https://blueprints.launchpad.net/app-catalog/+spec/contributed-by-logo
(docaedo, 17:19:43)

* Open discussion  (docaedo, 17:21:26)
  * ACTION: docaedo to write to ML regarding Heat contributors repo for
app-catalog  (docaedo, 17:38:45)

Meeting ended at 17:45:47 UTC.

Action items, by person
---
* docaedo
  * docaedo to write to ML regarding Heat contributors repo for
app-catalog

People present (lines said)
---
* kfox (70)
* docaedo (66)
* kzaitsev_mb (21)
* openstack (3)
* j^2 (2)

Generated by `MeetBot`_ 0.1.4



[openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Christopher Aedo
Today during the app-catalog IRC meeting we talked about hosting Heat
templates for contributors.  Right now someone who wants to create
their own templates can easily self-host them on github, but until
they get people pointed at it, nobody will know about their work on
that template, and getting guidance and feedback from all the people
who know Heat well takes a fair amount of effort.

What do you think about us creating a new repo (app-catalog-heat
perhaps), and collectively we could encourage those interested in
contributing Heat templates to host them there?  Ideally members of
the Heat community would become reviewers of the content, and give
guidance and feedback.  It would also allow us to hook into OpenStack
CI so these templates could be tested, and contributors would have a
better sense of the utility/portability of their templates.  Over time
it could lead to much more exposure for all the useful Heat templates
people are creating.

Thoughts?

-Christopher



Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Mehdi Abaakouk
On Thu, Aug 06, 2015 at 04:25:58PM +, Michael Krotscheck wrote:
> Hi there!
> 
> The most recent version of the CORS middleware (~2.4) no longer requires
> the use of Oslo.config, and supports pastedeploy. While using oslo.config
> provides far better features - such as multiple origins - it doesn't
> prevent you from using it in the paste pipeline. The documentation has been
> updated to reflect this.

Yes, but you can't use oslo.config without hardcoding the loading of the
middleware in order to pass the oslo.config object into the application.

> I fall on the operators side, and as a result feel that we should be using
> oslo.config for everything. One single configuration method across
> services, consistent naming conventions, autogenerated with sane options,
> with tooling and testing that makes it reliable. Special Snowflakes really
> just add cognitive friction, documentation overhead, and noise.

I'm clearly on the operator side too, and I'm just trying to find a solution
that lets applications use all middlewares with oslo.config without having to
write loading code for each middleware in each application. Zaqar, Gnocchi and
Aodh are the first projects that do not use cfg.CONF, and they can't load many
middlewares without writing code for each one, when a middleware should just
be something the deployer enables and configures. (Right now our middlewares
look more like libraries than middlewares.)

(Zaqar, Gnocchi and Aodh have written a hack for keystonemiddleware because it
is an essential piece, but other middlewares remain broken for them.)

Cheers,
-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Julien Danjou
On Thu, Aug 06 2015, Mehdi Abaakouk wrote:

> It is confusing for developers that some middlewares need pre-setup and
> force them to rely on a global python object while others do not.
> It is confusing for deployers that they can't configure middlewares
> the same way across all middlewares and projects.

Agreed, this is a terrible design.

> Do you agree:
>
> * all openstack middlewares should load their options with oslo.config ?
>  this permits type checking and all other features it provides, it's cool :)
>  configuration in paste-deploy conf is a thing of the past

That sounds like a good idea.

> * we must support local AND global oslo.config object ?
>  This is an application choice not something enforced by middleware.
>  The deployer experience should be the same in both case.

Well, I guess if you support local you can anyway use the global as
local. :)

> * the middleware must be responsible for the section name in the oslo.config ?
>  The Gnocchi/Zaqar hacks have to hardcode the section name in their code,
> which doesn't look good.

I'd think so.

> * we must support the legacy python signature for WSGI objects,
> MyMiddleware(app, options_as_dict) ? To be able to use paste for
>  applications/deployers that want it and not break already deployed
> things.

Definitely agreed.

> Possible solution:
> --
>
> I have already started to work on something that do all of that for all
> middlewares [5], [6]
>

[…]

> I have already tested that in Gnocchi and Aodh, and that solves all of my
> issues. Remove all hacks, the application doesn't need special pre
> setup. All our middleware become normal middleware but still can use
> oslo.config.
> WDYT ?

I already +2 some of the patches, I think the approach is good and sane,
and I honestly don't have anything better so thank you Mehdi. :)

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




Re: [openstack-dev] [nova] Bug Triage Day - Liberty

2015-08-06 Thread Markus Zoeller
How did we do on our bug triage day for Liberty? The results are at [1]
and I would say we did a good job to prioritize "undecided" bugs, which
is a good thing for our upcoming bug review day. We could also reduce
the "new" bugs, which could have the potential to be serious issues.

Thanks a lot for your participation!

Regards,
Markus Zoeller (markus_z)

[1] https://wiki.openstack.org/wiki/Nova/BugTriage#Liberty

John Garbutt  wrote on 07/29/2015 07:21:31 PM:

> From: John Garbutt 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 07/29/2015 07:25 PM
> Subject: Re: [openstack-dev] [nova] Bug Triage Day - Liberty
> 
> On 29 July 2015 at 17:13, Jay Pipes  wrote:
> > On 07/29/2015 08:13 AM, Markus Zoeller wrote:
> >>
> >> I'd like to spread the idea of doing a "bug triage day" on next
> >> Wednesday, August 5th.
> >>
> >> This is one of our special review days [1] and sets the ground for 
the
> >> following timeframe where we focus on fixing bugs. Would be great if 
you
> >> could shift your focus on this day to triaging [2] to
> >> * review critical/high bugs (bug supervisors)
> >> * prioritize confirmed/triaged bugs (bug supervisors)
> >> * confirm/invalidate new bugs   (anyone)
> >> Most of the bugs are tagged to enable a filtering for contributors 
which
> >> have expertise in these certain areas [3]. Some tags don't yet have
> >> contributors associated, you're welcomed to join in.
> >> If you have questions, contact me (markus_z) or another member of the
> >> nova bug team on IRC #openstack-nova.
> >> [1] special review days:
> >>
> >> https://wiki.openstack.org/wiki/Nova/
> Liberty_Release_Schedule#Special_review_days
> >> [2] common bug triage: https://wiki.openstack.org/wiki/BugTriage
> >> [3] bug tag owner: https://wiki.openstack.org/wiki/Nova/BugTriage
> >> [4] Nova Bug Team: https://launchpad.net/~nova-bugs
> >
> >
> > ++ Thank you for taking this initiative, Markus!
> 
> +1
> 
> Please note the change in date from the previously announced date (it
> moved from the Friday to the Wednesday).
> 
> All hail our new Bug Czar [1]!
> 
> Thanks,
> John
> 
> [1] https://wiki.openstack.org/wiki/Nova#People 





Re: [openstack-dev] [Heat] RPC API versioning

2015-08-06 Thread Zane Bitter

On 06/08/15 10:08, Dan Smith wrote:

This is, I believe, sufficient to solve our entire problem.
Specifically, we have no need for an indirection API that rebroadcasts
messages that are too new (since that can't happen with pinning) and no
need for Versioned Objects in the RPC layer. (Versioned objects for the
DB are still critical, and we are very much better off for all the hard
work that Michal and others have put into them. Thanks!)
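The pinning argument above can be illustrated with a toy sketch (this is not
oslo.messaging code; the version numbers, the `PINNED_CAP` setting, and the
compatibility rule are simplified stand-ins): during a rolling upgrade every
caller is capped at the oldest deployed version, so no server ever receives a
message newer than it understands, and no rebroadcast path is needed.

```python
def can_handle(server_version, msg_version):
    """Compatible when major versions match and the server's minor
    version is at least the message's minor version."""
    s_major, s_minor = map(int, server_version.split('.'))
    m_major, m_minor = map(int, msg_version.split('.'))
    return s_major == m_major and s_minor >= m_minor

# Hypothetical pin, e.g. set from config during a rolling upgrade.
PINNED_CAP = '1.2'

def make_call(requested_version):
    # Never emit a message newer than the pin.
    version = min(requested_version, PINNED_CAP,
                  key=lambda s: tuple(map(int, s.split('.'))))
    return {'version': version, 'method': 'do_thing'}
```

Once every node is upgraded, the operator raises the pin and newer call
signatures become usable.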


So all your calls have simple types for all the arguments? Meaning,
everything looks like this:

   do_thing(uuid, 'foo', 'bar', 123)


Mostly.


and not:

   do_thing(uuid, params, data, dict_of_stuff)

?


We do have that, but dict_of_stuff is always just data that was provided 
to us by the user, verbatim. e.g. it's the contents of a template or an 
environment file. We can't control what the user sends us, so pretending 
to 'version' it is meaningless. We just pass it on without modification, 
and the engine either handles it or raises an exception so we can 
provide a 40x error to the user.


This isn't actually the interesting part though, because you're still 
thinking of it backwards - in Heat (unlike Nova) the API has no access 
to the DB, so it's not like dict_of_stuff could contain any internal 
data structures because there _are_ no internal data structures.


The interesting part is the *response* containing complex types. 
However, the same argument applies: the response just contains data that 
we are going to pass straight back to the user verbatim (at least in the 
native API), and comprises a mix of simple types and echoing data we 
originally received from the user.



If you have the latter, then just doing RPC versioning is a mirage. Nova
has had basic RPC versioning forever, but we didn't get actual upgrade
ability until we tightened the screws on what we're actually sending
over the wire. Just versioning the signatures of the calls doesn't help
you if you're sending complex data structures (such as our Instance)
over the wire.

If you think that the object facade is necessary for insulating you from
DB changes, I feel pretty confident that you need it for the RPC side
for the same reason.


This assumes Nova's architecture.


Unless you're going to unpack everything from the
object into primitive call arguments and ensure that nobody ever changes
one.


This is effectively what we do, although as noted above it's actually 
the response and not the arguments that we're talking about.



If you pull things out of the DB and send them over the wire, then
the DB schema affects your RPC API.


As I've been trying to explain, apparently unsuccessfully, we never ever 
ever pull things out of the DB and send them over the wire. Ever. Never 
have. Never will.


Here's an example of what we actually do:

http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/api.py?h=stable%2Fkilo#n158

This is how we show a resource. The function takes a Resource object, 
which in turn contains a VO with the DB representation of the resource. 
We extract various attributes and perform various calculations with the 
methods of the Resource object (all of which rely to some extent on data 
obtained from the DB). Each bit of data becomes an entry in a dict - 
this is actually the return value, but you could think of it as 
equivalent to each item in the dict as being an argument to call if the 
RPC were initiated from the opposite direction. The values are, for the 
most part, simple types, and the few exceptions are either very basic, 
well-defined and unchanging or they're just echoing data provided 
originally by the user.


The keys to the dict (~argument names) are well-defined in the 
heat.rpc.api module. We never remove a key, because that would break 
userspace. We never change the format of an item, because that would 
break userspace. Sometimes we add a key, but we always implement 
heat-api in such a way that it doesn't care whether the new key is 
present or not (i.e. it passes the data directly to the user without 
looking, or uses response.get(rpc_api.NEW_KEY, default) if it really 
needs to introspect it).
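A minimal sketch of that contract (the key names and the formatting function
here are invented for illustration, not Heat's actual `heat.rpc.api`
constants): published keys are never removed or reformatted, new keys may be
added, and the API side tolerates their absence with `.get()`.

```python
# "rpc_api"-style constants: once published, a key is never removed
# and its value format never changes.
RES_NAME = 'resource_name'
RES_STATUS = 'resource_status'
RES_TAGS = 'resource_tags'      # hypothetical key added in a later release

def format_resource_for_user(rpc_response):
    # An older engine won't send RES_TAGS, so use .get() with a default
    # rather than assuming the key exists.
    return {
        'name': rpc_response[RES_NAME],
        'status': rpc_response[RES_STATUS],
        'tags': rpc_response.get(RES_TAGS, []),
    }
```

With this discipline, an upgraded heat-api works against both old and new
engines without any negotiation.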



The nature of Heat's RPC API is that it is effectively user-facing - the
heat-api process is essentially a thin proxy between ReST and RPC. We
already have a translation layer between the internal representation(s)
of objects and the user-facing representation, in the form of
heat.engine.api, and the RPC API is firmly on the user-facing side. The
requirements for the content of these messages are actually much
stricter than anything we need for RPC API stability, since they need to
remain compatible not just with heat-api but with heatclient - and we
have *zero* control over when that gets upgraded. Despite that, we've
managed quite nicely for ~3 years without breaking changes afaik.


I'm not sure how you evolve the internals without affecting the REST
side if you don't have a translation layer. If you do, then the RPC API
changes independently from

Re: [openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Michael Krotscheck
Hi there!

The most recent version of the CORS middleware (~2.4) no longer requires
the use of Oslo.config, and supports pastedeploy. While using oslo.config
provides far better features - such as multiple origins - it doesn't
prevent you from using it in the paste pipeline. The documentation has been
updated to reflect this.

As for the "How do we proceed" question, I feel your question can be better
asked as: Do we support developers, or do we support operators? The
developer stance is that of architectural purity, of minimizing
dependencies, and of staying as true to default as possible. The Operator's
stance is that of simplified configuration, performance, good
documentation, and reliability.

I fall on the operators side, and as a result feel that we should be using
oslo.config for everything. One single configuration method across
services, consistent naming conventions, autogenerated with sane options,
with tooling and testing that makes it reliable. Special Snowflakes really
just add cognitive friction, documentation overhead, and noise.

I'm not saying oslo is perfect. I'm not saying that how oslo is used is
always correct. But we're fixing it, and consistency, in my mind, always
trumps ideology.

Michael

On Thu, Aug 6, 2015 at 9:02 AM Mehdi Abaakouk  wrote:

> Hi,
>
> I want to share with you some problems I have recently encountered with
> openstack middlewares and oslo.config.
>
> The issues
> --
>
> In project Gnocchi, I wanted to use oslo.middleware.cors and expected to
> just add the name of the middleware to the wsgi pipeline, but I can't.
> The middleware only works if you pass it the oslo_config.cfg.ConfigOpts()
> object or configure it via 'paste-deploy'... Gnocchi doesn't use
> paste-deploy, so I had to modify the code to load it...
> (For the keystonemiddleware, Gnocchi already have a special
> handling/hack to load it [1] and [2]).
> I don't want to write the same hack for each openstack middleware.
>
>
> In project Aodh (ceilometer-alarm), we recently hit an issue with
> keystonemiddleware when we removed the usage of the global object
> oslo_config.cfg.CONF. The middleware no longer loads its options from
> aodh's config file, so our authentication is broken.
> We can still pass them through paste-deploy configuration, but that looks
> like a method of the past. I still don't want to write a hack for each
> openstack middleware.
>
>
> Then I dug into other middlewares and applications to see how
> they handle their conf.
>
> oslo_middleware.sizelimit and oslo_middleware.ssl take options only
> via the global oslo_config.cfg.CONF, so they are unusable for applications
> that don't use this global object.
>
> oslo_middleware.healthcheck takes options as a dict like any other python
> middleware. This is suitable for 'paste-deploy', but doesn't allow
> configuration via oslo.config, and doesn't have strong config option
> type checking and the like.
>
> Zaqar seems to have hit the same kind of issue with keystonemiddleware, and
> just wrote a hack to work around it (monkeypatching the cfg.CONF of
> keystonemiddleware with their local version of the object [3] and then
> transforming the loaded options into a dict to pass them via the legacy
> middleware dict options [4]).
>
> Most applications still just use the global object for
> configuration and don't see these issues yet.
>
>
> All of that is really not consistent.
>
> It is confusing for developers that some middlewares need
> pre-setup and force them to rely on a global python object,
> while others do not. It is confusing for deployers that they can't
> configure middlewares the same way for every middleware and every project.
>
> But keystonemiddleware, oslo.middleware.cors,... are supposed to be wsgi
> middlewares, something that is independent of the app.
> And this is not really the case.
>
> From my point of view, and from what wsgi generally looks like in python,
> the middleware object should be just MyMiddleware(app, options_as_dict);
> if the middleware wants to rely on another configuration system it should
> do the setup/initialisation itself.
>
>
>
> So, how to solve that ?
> 
>
> Do you agree:
>
> * all openstack middlewares should load their options with oslo.config ?
>   this permits type checking and all other features it provides, it's cool
> :)
>   configuration in paste-deploy conf is a thing of the past
>
> * we must support local AND global oslo.config object ?
>   This is an application choice not something enforced by middleware.
>   The deployer experience should be the same in both case.
>
> * the middleware must be responsible for the section name in the
> oslo.config ?
>   The Gnocchi/Zaqar hacks have to hardcode the section name in their code,
>   which doesn't look good.
>
> * we must support the legacy python signature for WSGI objects,
>   MyMiddleware(app, options_as_dict) ? To be able to use paste for
>   applications/deployers that want it and not break already deployed things.

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Lance Bragstad
On Thu, Aug 6, 2015 at 10:47 AM, Dolph Mathews 
wrote:

>
> On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox 
> wrote:
>
>>
>>
>> - Original Message -
>> > From: "David Lyle" 
>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> > Sent: Thursday, August 6, 2015 5:52:40 AM
>> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>> >
>> > Forcing Horizon to duplicate Keystone settings just makes everything
>> much
>> > harder to configure and much more fragile. Exposing whitelisted, or all,
>> > IdPs makes much more sense.
>> >
>> > On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews < dolph.math...@gmail.com
>> >
>> > wrote:
>> >
>> >
>> >
>> > On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli < steve...@ca.ibm.com
>> >
>> > wrote:
>> >
>> >
>> >
>> >
>> >
>> > Some folks said that they'd prefer not to list all associated idps,
>> which i
>> > can understand.
>> > Why?
>>
>> So the case i heard and i think is fairly reasonable is providing
>> corporate logins to a public cloud. Taking the canonical coke/pepsi example
>> if i'm coke, i get asked to login to this public cloud i then have to
>> scroll through all the providers to find the COKE.COM domain and i can
>> see for example that PEPSI.COM is also providing logins to this cloud.
>> Ignoring the corporate privacy implications this list has the potential to
>> get long. Think about for example how you can do a corporate login to
>> gmail, you certainly don't pick from a list of auth providers for gmail -
>> there would be thousands.
>>
>> My understanding of the usage then would be that coke would have been
>> provided a (possibly branded) dedicated horizon that backed onto a public
>> cloud and that i could then from horizon say that it's only allowed access
>> to the COKE.COM domain (because the UX for inputting a domain at login
>> is not great so per customer dashboards i think make sense) and that for
>> this instance of horizon i want to show the 3 or 4 login providers that
>> COKE.COM is going to allow.
>>
>> Anyway you want to list or whitelist that in keystone is going to involve
>> some form of IdP tagging system where we have to say which set of idps we
>> want in this case and i don't think we should.
>>
>
> That all makes sense, and I was admittedly only thinking of the private
> cloud use case. So, I'd like to discuss the public and private use cases
> separately:
>
> In a public cloud, is there a real use case for revealing *any* IdPs
> publicly? If not, the entire list should be made "private" using
> policy.json, which we already support today.
>

The user would be required to know the id of the IdP with which they want to
federate, right?


>
> In a private cloud, is there a real use case for fine-grained
> public/private attributes per IdP? (The stated use case was for a public
> cloud.) It seems the default behavior should be that horizon fetches the
> entire list from keystone.
>
>
>>
>> @David - when you add a new IdP to the university network are you having
>> to provide a new mapping each time? I know the CERN answer to this with
>> websso was to essentially group many IdPs behind the same keystone idp
>> because they will all produce the same assertion values and consume the
>> same mapping.
>>
>> Maybe the answer here is to provide the option in django_openstack_auth,
>> a plugin (again) of fetch from keystone, fixed list in settings or let it
>> point at a custom text file/url that is maintained by the deployer.
>> Honestly if you're adding and removing idps this frequently i don't mind
>> making the deployer maintain some of this information out of scope of
>> keystone.
>>
>>
>> Jamie
>>
>> >
>> >
>> >
>> >
>> >
>> > Actually, I like jamie's suggestion of just making horizon a bit
>> smarter, and
>> > expecting the values in the horizon settings (idp+protocol)
>> > But, it's already in keystone.
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Thanks,
>> >
>> > Steve Martinelli
>> > OpenStack Keystone Core
>> >
>> > Dolph Mathews ---2015/08/05 01:38:09 PM---On Wed, Aug 5, 2015 at 5:39
>> AM,
>> > David Chadwick < d.w.chadw...@kent.ac.uk > wrote:
>> >
>> > From: Dolph Mathews < dolph.math...@gmail.com >
>> > To: "OpenStack Development Mailing List (not for usage questions)" <
>> > openstack-dev@lists.openstack.org >
>> > Date: 2015/08/05 01:38 PM
>> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick <
>> d.w.chadw...@kent.ac.uk >
>> > wrote:
>> >
>> > On 04/08/2015 18:59, Steve Martinelli wrote: > Right, but that API
>> is/should
>> > be protected. If we want to list IdPs > *before* authenticating a user,
>> we
>> > either need: 1) a new API for listing > public IdPs or 2) a new policy
>> that
>> > doesn't protect that API. Hi Steve yes this was my understanding of the
>> > discussion that took place many months ago. I had assumed (wrongly) that
>> > something had been 

[openstack-dev] [Rally][Meeting][Agenda]

2015-08-06 Thread Roman Vasilets
Hi, this is a friendly reminder that if you want to discuss any topics at
the Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic, and add some information about it (links,
etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security] Would people see a value in the cve-check-tool? (Reshetova, Elena)

2015-08-06 Thread Reshetova, Elena
> I guess it depends on whether the tool needs to read the entire database
to perform its queries (in which case using AFS would be basically the same
as downloading).

I am including below the reply from Michael, cve-check-tool maintainer, and
also including him in this conversation.

" Right now we force-update the database every 4 hours as this is roughly
how often the NVD DB is centrally updated with new/modified entries.

This behaviour can be disabled, and cve-check-update can be run to manually
update. We download all of the NVD XML feeds and convert them into a local
sqlite3 database for faster usage.

What you're proposing sounds more like making cve-check-tool run as a
Security As A Service setup, which is feasible. Would they be hosting
internal copies of the XML feeds or prefer a central remote DB to be used
here?"
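The "download the NVD XML feeds and convert them into a local sqlite3 database" step Michael describes could be sketched roughly like this (illustrative only — this is not cve-check-tool's actual code or schema):

```python
# Rough illustration of the described pipeline: parse NVD-style XML
# entries and load them into a local sqlite3 database for fast queries.
import sqlite3
import xml.etree.ElementTree as ET

# A tiny stand-in for one of the NVD XML feeds.
NVD_SAMPLE = """<nvd>
  <entry id="CVE-2015-0001"><summary>demo vulnerability</summary></entry>
</nvd>"""

def load_feed(xml_text, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cves (id TEXT PRIMARY KEY, summary TEXT)")
    for entry in ET.fromstring(xml_text).findall('entry'):
        # INSERT OR REPLACE keeps the 4-hourly force-update idempotent.
        conn.execute("INSERT OR REPLACE INTO cves VALUES (?, ?)",
                     (entry.get('id'), entry.findtext('summary')))
    conn.commit()

conn = sqlite3.connect(':memory:')
load_feed(NVD_SAMPLE, conn)
```

A shared, centrally maintained database (over AFS or a remote DB) would replace the `:memory:` connection with a path every consumer can read.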

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Wednesday, August 5, 2015 10:16 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Security] Would people see a value in the
cve-check-tool? (Reshetova, Elena)

On 2015-08-05 09:54:52 -0700 (-0700), Clint Byrum wrote:
> Doesn't this feel like a job for AFS? Maintain the db there, and let 
> the nodes access it as-needed?

I guess it depends on whether the tool needs to read the entire database to
perform its queries (in which case using AFS would be basically the same as
downloading).
--
Jeremy Stanley



[openstack-dev] [oslo][keystone] oslo_config and wsgi middlewares

2015-08-06 Thread Mehdi Abaakouk

Hi,

I want to share with you some problems I have recently encountered with 
openstack middlewares and oslo.config.


The issues
--

In the Gnocchi project, I wanted to use oslo.middleware.cors. I expected to 
just put the name of the middleware into the wsgi pipeline, but I can't.
The middleware only works if you pass it an oslo_config.cfg.ConfigOpts() 
object or configure it via 'paste-deploy'... Gnocchi doesn't use paste-deploy, so 
I had to modify the code to load it...
(For keystonemiddleware, Gnocchi already has special 
handling/hacks to load it [1] and [2]).

I don't want to write the same hack for each openstack middleware.


In the Aodh project (ceilometer-alarm), we recently hit an issue with 
keystonemiddleware after we removed the usage of the global object 
oslo_config.cfg.CONF. The middleware no longer loads its options from the 
aodh config file, so our authentication is broken.
We can still pass the options through the paste-deploy configuration, but that 
looks like a method of the past. I still don't want to write a hack for each 
openstack middleware.



Then I dug into other middlewares and applications to see how 
they handle their configuration.


oslo_middleware.sizelimit and oslo_middleware.ssl take options only 
via the global oslo_config.cfg.CONF, so they are unusable for applications
that don't use this global object.

oslo_middleware.healthcheck takes options as a dict, like any other python 
middleware. This is suitable for 'paste-deploy', but it doesn't allow 
configuration via oslo.config, and doesn't have strong config option 
type checking and so on.

Zaqar seems to have the same kind of issue with keystonemiddleware, and just 
wrote a hack to work around it (monkeypatching the cfg.CONF of 
keystonemiddleware with their local version of the object [3] and then 
transforming the loaded options into a dict to pass them via the legacy 
middleware dict options [4]).


Most applications still just use the global object for their 
configuration and don't see those issues yet.



All of that is really not consistent.

It is confusing for developers that some middlewares need pre-setup and 
force them to rely on a global python object, while others don't.
It is confusing for deployers that they can't configure 
middlewares the same way across middlewares and projects.


But keystonemiddleware, oslo.middleware.cors, ... are supposed to be wsgi 
middlewares, something that is independent of the app.

And this is not really the case.

From my point of view, and from what wsgi generally looks like in python, the 
middleware object should be just MyMiddleware(app, options_as_dict);
if the middleware wants to rely on another configuration system, it should 
do the setup/initialisation itself.




So, how do we solve that?


Do you agree:

* all openstack middlewares should load their options with oslo.config ?
 this permits type checking and all the other features it provides, it's cool :)
 configuration in the paste-deploy conf is a thing of the past

* we must support local AND global oslo.config objects ?
 This is an application choice, not something enforced by the middleware.
 The deployer experience should be the same in both cases.

* the middleware must be responsible for its section name in oslo.config ?
 The Gnocchi/Zaqar hacks have to hardcode the section name in their code,
 which doesn't look good.

* we must support the legacy python signature for WSGI objects,
 MyMiddleware(app, options_as_dict) ? So that paste remains usable for
 applications/deployers that want it, without breaking already-deployed things.


I really think all our middlewares should be consistent:

* usable by all applications without forcing them to write crap around them,
* and making the deployer's life easier.


Possible solution:
--

I have already started to work on something that does all of that for all 
middlewares [5], [6].


The idea is that the middleware creates an oslo_config.cfg.ConfigOpts() 
(instead of relying on the global one) and loads the application's 
configuration file into it. oslo.config will discover the file location 
from just the application name, as usual.


So the middleware can now be loaded like this:

code example:

  app = MyMiddleware(app, {"oslo_config_project": "aodh"})

paste-deploy example:

  [filter:foobar]
  paste.filter_factory = foobar:MyMiddleware.filter_factory
  oslo_config_project = aodh

oslo_config.cfg.ConfigOpts() will easily find /etc/aodh/aodh.conf.
This cuts the hidden link between the middleware and the application 
(through the global object).


And of course, if oslo_config_project is not provided, the middleware 
falls back to the global oslo.config object.

The middleware options can still be passed via the legacy dict, so 
backward compatibility is preserved.
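The proposed loading pattern can be sketched with plain stdlib code (the oslo.config calls are stubbed out in comments; `MyMiddleware` and `demo_app` are hypothetical names, not real oslo.middleware classes):

```python
# Minimal sketch of the proposal: a WSGI middleware that takes options as
# a plain dict (paste-compatible) and, when "oslo_config_project" is
# present, would load that project's config file instead of relying on
# the global CONF object.

class MyMiddleware(object):
    """WSGI middleware taking its options as a plain dict."""

    def __init__(self, app, conf=None):
        self.app = app
        conf = conf or {}
        project = conf.get('oslo_config_project')
        if project:
            # Real code would build a local config object instead:
            #   cfg_obj = oslo_config.cfg.ConfigOpts()
            #   cfg_obj([], project=project)  # finds /etc/<project>/<project>.conf
            self.source = 'config file of %s' % project
        else:
            # Fall back to the legacy dict options / global CONF.
            self.source = 'legacy dict options'

    def __call__(self, environ, start_response):
        environ['middleware.config_source'] = self.source
        return self.app(environ, start_response)

    @classmethod
    def filter_factory(cls, global_conf, **local_conf):
        # paste-deploy entry point: local_conf carries the key/value
        # pairs from the [filter:...] section, e.g. oslo_config_project.
        def _filter(app):
            return cls(app, local_conf)
        return _filter


def demo_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['middleware.config_source'].encode()]

# Wired up the same way the paste-deploy example above would do it:
wrapped = MyMiddleware.filter_factory({}, oslo_config_project='aodh')(demo_app)
```

The same class serves both deployment styles: paste-deploy hands the section options to filter_factory, while an application without paste can instantiate it directly with a dict.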

I have already tested this in Gnocchi and Aodh, and it solves all of 
my issues: all the hacks are removed, and the application doesn't need any 
special pre-setup. All our middlewares become normal middlewares.

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Dolph Mathews
On Wed, Aug 5, 2015 at 6:54 PM, Jamie Lennox  wrote:

>
>
> - Original Message -
> > From: "David Lyle" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Thursday, August 6, 2015 5:52:40 AM
> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> >
> > Forcing Horizon to duplicate Keystone settings just makes everything much
> > harder to configure and much more fragile. Exposing whitelisted, or all,
> > IdPs makes much more sense.
> >
> > On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews < dolph.math...@gmail.com
> >
> > wrote:
> >
> >
> >
> > On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli < steve...@ca.ibm.com >
> > wrote:
> >
> >
> >
> >
> >
> > Some folks said that they'd prefer not to list all associated idps,
> which i
> > can understand.
> > Why?
>
> So the case i heard and i think is fairly reasonable is providing
> corporate logins to a public cloud. Taking the canonical coke/pepsi example
> if i'm coke, i get asked to log in to this public cloud. i then have to
> scroll through all the providers to find the COKE.COM domain and i can see
> for example that PEPSI.COM is also providing logins to this cloud.
> Ignoring the corporate privacy implications this list has the potential to
> get long. Think about for example how you can do a corporate login to
> gmail, you certainly don't pick from a list of auth providers for gmail -
> there would be thousands.
>
> My understanding of the usage then would be that coke would have been
> provided a (possibly branded) dedicated horizon that backed onto a public
> cloud and that i could then from horizon say that it's only allowed access
> to the COKE.COM domain (because the UX for inputting a domain at login is
> not great so per customer dashboards i think make sense) and that for this
> instance of horizon i want to show the 3 or 4 login providers that
> COKE.COM is going to allow.
>
> Anyway you want to list or whitelist that in keystone is going to involve
> some form of IdP tagging system where we have to say which set of idps we
> want in this case and i don't think we should.
>

That all makes sense, and I was admittedly only thinking of the private
cloud use case. So, I'd like to discuss the public and private use cases
separately:

In a public cloud, is there a real use case for revealing *any* IdPs
publicly? If not, the entire list should be made "private" using
policy.json, which we already support today.

In a private cloud, is there a real use case for fine-grained
public/private attributes per IdP? (The stated use case was for a public
cloud.) It seems the default behavior should be that horizon fetches the
entire list from keystone.
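For the public-cloud case, hiding the list with today's mechanism is just a policy.json entry. A hypothetical fragment (assuming Keystone's `identity:list_identity_providers` rule name for the federation API) restricting the listing to admins might look like:

```json
{
    "identity:list_identity_providers": "rule:admin_required",
    "identity:get_identity_provider": "rule:admin_required"
}
```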


>
> @David - when you add a new IdP to the university network are you having
> to provide a new mapping each time? I know the CERN answer to this with
> websso was to essentially group many IdPs behind the same keystone idp
> because they will all produce the same assertion values and consume the
> same mapping.
>
> Maybe the answer here is to provide the option in django_openstack_auth, a
> plugin (again) of fetch from keystone, fixed list in settings or let it
> point at a custom text file/url that is maintained by the deployer.
> Honestly if you're adding and removing idps this frequently i don't mind
> making the deployer maintain some of this information out of scope of
> keystone.
>
>
> Jamie
>
> >
> >
> >
> >
> >
> > Actually, I like jamie's suggestion of just making horizon a bit
> smarter, and
> > expecting the values in the horizon settings (idp+protocol)
> > But, it's already in keystone.
> >
> >
> >
> >
> >
> >
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Core
> >
> > Dolph Mathews ---2015/08/05 01:38:09 PM---On Wed, Aug 5, 2015 at 5:39 AM,
> > David Chadwick < d.w.chadw...@kent.ac.uk > wrote:
> >
> > From: Dolph Mathews < dolph.math...@gmail.com >
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org >
> > Date: 2015/08/05 01:38 PM
> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> >
> >
> >
> >
> >
> > On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick < d.w.chadw...@kent.ac.uk
> >
> > wrote:
> >
> > On 04/08/2015 18:59, Steve Martinelli wrote: > Right, but that API
> is/should
> > be protected. If we want to list IdPs > *before* authenticating a user,
> we
> > either need: 1) a new API for listing > public IdPs or 2) a new policy
> that
> > doesn't protect that API. Hi Steve yes this was my understanding of the
> > discussion that took place many months ago. I had assumed (wrongly) that
> > something had been done about it, but I guess from your message that we
> are
> > no further forward on this Actually 2) above might be better reworded as
> - a
> > new policy/engine that allows public access to be a bona fide policy rule
> > The existing policy simply seems wrong. Why protect the list of IdPs?
> >
> >
> > regards Da

Re: [openstack-dev] [murano] [mistral] [yaql] Prepare to Yaql 1.0 release

2015-08-06 Thread Lingxian Kong
Thanks Alexander for the effort!

With so many new features and improvements, I'd like to see detailed
documentation as a guideline or introduction to YAQL for developers and
orchestrators. We could start with simple samples (maybe this has been
covered a little in [1]) and then move on to advanced usage, since I can
see little content in [2] currently.

What do you think?

[1] https://pypi.python.org/pypi/yaql/1.0.0.0rc1
[2] https://yaql.readthedocs.org
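For anyone migrating expressions, such a guide could show a few of the 0.2 → 1.0 behavioral changes Stan lists later in this thread side by side (illustrative pairs only; exact behavior depends on engine configuration):

```
# yaql 0.2                             # yaql 1.0
$collection[$ > 0]                     $collection.select($ > 0)
toUpper($str)                          $str.toUpper()
$dict.key   (None when key missing)    $dict.get(key, default)  or  $dict[key, default]
foo(a => b) meant foo(tuple('a','b'))  foo(a => b) means foo(a='b')
```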


On Wed, Aug 5, 2015 at 10:47 AM, Dmitri Zimine 
wrote:

> Thank you Stan!
>
> On Aug 4, 2015, at 3:07 PM, Stan Lagun  wrote:
>
> Dmitry, this depends on how you're going to use yaql. yaql 1.0 has so
> called legacy mode that is a layer on top of yaql that brings nearly full
> backward compatibility including some of the things that were wrong in yaql
> 0.2. Use this mode when backward compatibility is a must. Without it the
> following changes may affect queries written for 0.2:
>
> 1. There are no more tuples. foo(a => b) will mean now f(a='b') instead of
> foo(tuple('a', 'b')) as it was in yaql 0.2. However dict(key1 => value1,
> key2 => value2)  and several other functions with the same syntax work
> exactly how they used to work in 0.2
> 2. Contradictory to the first point there are no more lists. Yaql works on
> immutable structures (tuple, frozenset and yaql1 own FrozenDict). All
> operations that seem to modify data in fact return modified copies. All
> input data will be automatically converted to their immutable versions and
> converted back upon completion. This has 2 side effects: a) if you have
> custom yaql functions they should follow the same pattern b) even if your
> input data were built from tuples etc, the output will still use lists (e.g.
> JSON format)
> 3. $dict.key will raise KeyError for missing keys instead of returning
> None. Also key must be a valid keyword and cannot start with 2 underscores.
> There are many other ways to retrieve dictionary value: $dict.get(value),
> $dict.get(value, default), $dict[value], $dict[value, default]
> 4. In yaql 0.2 every function could be called as a method. So $x.foo() was
> the same as foo($x). In yaql 1.0 it is up to the function to decide if it wants
> to be a function (foo($x)), a method ($x.foo()), or an extension method (both
> syntaxes). Considering that yaql handles different function overloads,
> $x.foo() may be a valid expression and execute something completely
> different from foo($x).
>  Most type-specific functions are declared as methods. So you no
> longer can say toUpper(str) but only str.toUpper()
> 5. yaql 0.2 had very few functions. yaql 1.0 has huge (~260
> functions/operators etc) standard library. However several of the 0.2
> functions now have different signature or name, replaced with better
> alternative or just accept additional (optional) parameters
> 6. $collection[$ > 0] doesn't work anymore. Use $collection.select($ > 0)
> for that
>
>
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
> 
>
> On Tue, Aug 4, 2015 at 9:57 PM, Dmitri Zimine 
> wrote:
>
>> This is great news Alex, was looking forward to it, will be happy to
>> migrate Mistral.
>>
>> Some heads-up on what syntactically changed would be much appreciated to
>> pass on to our users;
>> we likely will catch much of them with Mistral tests, but some may bubble
>> up.
>>
>> DZ.
>>
>> On Jul 27, 2015, at 2:04 AM, Alexander Tivelkov 
>> wrote:
>>
>> > Hi folks,
>> >
>> > We are finally ready to release the 1.0.0 version of YAQL. It is a
>> > huge milestone: the language finally looks the way we initially wanted
>> > it to look. The engine got completely rewritten, tons of new
>> > capabilities have been added. Here is a brief (and incomplete) list of
>> > new features and improvements:
>> >
>> > * Support for kwargs and keyword-only args (Py3)
>> > * Optional function arguments
>> > * Smart algorithm to find matching function overload without side
>> effects
>> > * Ability to organize functions into layers
>> > * Configurable list of operators (left/right associative binary,
>> > prefix/suffix unary with precedence)
>> > * No global variables. There can be  more than one parser with
>> > different set of operators simultaneously
>> > * List literals ([a, b])
>> > * Dictionary literals ({ a => b})
>> > * Handling of escape characters in string literals
>> > * Verbatim strings (`...`) and double-quotes ("...")
>> > * =~ and !~ operators in default configuration (similar to Perl)
>> > * -> operator to pass context
>> > * Alternate operator names (for example '*equal' instead of
>> '#operator_=')
>> >  so that it will be possible to have different symbol for particular
>> operator
>> >  without breaking standard library that expects operator to have well
>> > known names
>> > * Set operations
>> > * Support for lists and dictionaries as a dictionary keys and set
>> elements
>> > * New framework to decorate functions
>> > * Ability to distinguish between functions and methods

Re: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.

2015-08-06 Thread Oleg Bondarev
On Thu, Aug 6, 2015 at 5:23 PM, Korzeniewski, Artur <
artur.korzeniew...@intel.com> wrote:

> Thanks Kevin for that hint.
>
> But it does not resolve the connectivity problem, it is just not removing
> the namespace when it is asked to.
>
> The real question is, why do we invoke the 
> /neutron/neutron/agent/l3/dvr_fip_ns.py
> FipNamespace.delete() method in the first place?
>
>
>
> I’ve captured the traceback for this situation:
>
> 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to
> access
> /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
> from (pid=70216) get_value_from_file
> /opt/openstack/neutron/neutron/agent/linux/utils.py:222
>
> 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to
> access
> /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
> from (pid=70216) get_value_from_file
> /opt/openstack/neutron/neutron/agent/linux/utils.py:222
>
> 2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.external_process [-] No
> process started for 8223e12e-837b-49d4-9793-63603fccbc9f from (pid=70216)
> disable /opt/openstack/neutron/neutron/agent/linux/external_process.py:113
>
> Traceback (most recent call last):
>
>  File "/usr/local/lib/python2.7/dist-packages/eventlet/queue.py", line
> 117, in switch
>
> self.greenlet.switch(value)
>
>   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py",
> line 214, in main
>
> result = function(*args, **kwargs)
>
>   File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py",
> line 612, in run_service
>
> service.start()
>
>   File "/opt/openstack/neutron/neutron/service.py", line 233, in start
>
> self.manager.after_start()
>
>   File "/opt/openstack/neutron/neutron/agent/l3/agent.py", line 641, in
> after_start
>
> self.periodic_sync_routers_task(self.context)
>
>   File "/opt/openstack/neutron/neutron/agent/l3/agent.py", line 519, in
> periodic_sync_routers_task
>
> self.fetch_and_sync_all_routers(context, ns_manager)
>
>   File "/opt/openstack/neutron/neutron/agent/l3/namespace_manager.py",
> line 91, in __exit__
>
> self._cleanup(_ns_prefix, ns_id)
>
>   File "/opt/openstack/neutron/neutron/agent/l3/namespace_manager.py",
> line 140, in _cleanup
>
> ns.delete()
>
>   File "/opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py", line 147,
> in delete
>
> raise TypeError("ss")
>
> TypeError: ss
>
>
>
> It seems that the fip namespace is not processed at startup of L3 agent,
> and the cleanup is removing the namespace…
>
> It is also removing the interface to local dvr router connection so… VM
> has no internet access with floating IP:
>
> Command: ['ip', 'netns', 'exec',
> 'fip-8223e12e-837b-49d4-9793-63603fccbc9f', 'ip', 'link', 'del',
> u'fpr-fe517b4b-d']
>
>
>
> If the interface inside the fip namespace is not deleted, the VM has full
> internet access without any downtime.
>
>
>
> Can we consider it a bug? I guess it is something in the startup/full-sync
> logic since the log is saying:
>
>
> /opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
>

I think yes, we can consider it a bug. Can you please file one? I can take it
and probably fix it.


>
>
> And after finishing the sync loop, the fip namespace is deleted…
>
>
>
> Regards,
>
> Artur
>
>
>
> *From:* Kevin Benton [mailto:blak...@gmail.com]
> *Sent:* Thursday, August 6, 2015 7:40 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][dvr] Removing fip namespace when
> restarting L3 agent.
>
>
>
> Can you try setting the following to False:
>
>
> https://github.com/openstack/neutron/blob/dc0944f2d4e347922054bba679ba7f5d1ae6ffe2/etc/l3_agent.ini#L97
>
>
>
> On Wed, Aug 5, 2015 at 3:36 PM, Korzeniewski, Artur <
> artur.korzeniew...@intel.com> wrote:
>
> Hi all,
>
> During testing of Neutron upgrades, I have found that restarting the L3
> agent in DVR mode is causing the VM network downtime for configured
> floating IP.
>
> The lockdown is visible when pinging the VM from external network, 2-3
> pings are lost.
>
> The responsible place in code is:
>
> DVR: destroy fip ns: fip-8223e12e-837b-49d4-9793-63603fccbc9f from
> (pid=156888) delete
> /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py:164
>
>
>
> Can someone explain why the fip namespace is deleted? Can we work out a
> situation where there is no downtime of VM access?
>
>
>
> Artur Korzeniewski
>
> 
>
> Intel Technology Poland sp. z o.o.
>
> KRS 101882
>
> ul. Slowackiego 173, 80-298 Gdansk
>
>
>
>
>
>
>
>
>
> --
>
> Kevin Benton
>

Re: [openstack-dev] [keystone] policy issues when generating trusts with different clients

2015-08-06 Thread michael mccune

On 08/05/2015 10:27 PM, Jamie Lennox wrote:

Hey Mike,

I think it could be one of the hacks that are in place to try and keep 
compatibility with the old and new way of using the client is returning the 
wrong thing. Compare the output of trustor.user_id and 
trustor_auth.get_user_id(sess). For me trustor.user_id is None which will make 
sense why you'd get permission errors.

Whether this is a bug in keystoneclient is debatable because we had to keep 
compatibility with the old options just not update them for the new paths, the 
ambiguity is certainly bad.

The command that works for me is:

trustor.trusts.create(
 trustor_user=trustor_auth.get_user_id(sess),
 trustee_user=trustee_auth.get_user_id(sess),
 project=trustor_auth.get_project_id(sess),
 role_names=['Member'],
 impersonation=True,
 expires_at=None)

We're working on a keystoneclient 2.0 that will remove all that old code.


Let me know if that fixes it for you.


hi Jamie,

this does work for me. but now i have a few questions as i start to 
refactor our code.


previously we have been handing around keystone Client objects to 
perform all of our operations. this leads to some trouble as we expected 
the user_id and project_id to be present in the client. so, 3 questions.


1. is it safe to set the user_id and project_id on a Client object?
(i notice that i am able to perform this operation and it would make 
things slightly easier to refactor)


2. are there plans for the new keystoneclient to automatically fill in 
user_id and project_id for Session/Auth based clients?


3. would it be better to transform our code to pass around Auth plugin 
objects instead of Client objects?
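The trade-off behind question 3 can be sketched in pure Python (none of these classes are real keystoneclient types — the names are hypothetical, purely to illustrate lazy versus eager identity resolution):

```python
# An auth plugin resolves identity lazily through whatever session it is
# handed; a Client only knows what it was given at construction time.

class FakeSession(object):
    """Stands in for a keystone session that can fetch token data."""
    def fetch_token_data(self):
        return {'user_id': 'u-123', 'project_id': 'p-456'}

class AuthPlugin(object):
    """Lazy: identity is derived from the session on demand."""
    def get_user_id(self, sess):
        return sess.fetch_token_data()['user_id']

    def get_project_id(self, sess):
        return sess.fetch_token_data()['project_id']

class Client(object):
    """Eager: attributes are only correct if filled in at construction,
    which is why user_id can end up None in session-based usage."""
    def __init__(self, user_id=None, project_id=None):
        self.user_id = user_id
        self.project_id = project_id

sess = FakeSession()
plugin = AuthPlugin()
```

Passing the plugin (plus session) around means callers always get a consistent answer, which is one argument in favor of question 3's approach.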


thanks again for the help,
mike




[openstack-dev] [nova] Nova API sub-team meeting

2015-08-06 Thread Alex Xu
Hi,

We have our weekly Nova API meeting this week. The meeting is being held tomorrow,
Friday, at 1200 UTC.

In other timezones the meeting is at:

EST 08:00 (Fri)
Japan 21:00 (Fri)
China 20:00 (Fri)
United Kingdom 13:00 (Fri)

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI 


Please feel free to add items to the agenda.


Re: [openstack-dev] [neutron][dvr] Removing fip namespace when restarting L3 agent.

2015-08-06 Thread Korzeniewski, Artur
Thanks Kevin for that hint.
But it does not resolve the connectivity problem, it is just not removing the 
namespace when it is asked to.
The real question is, why do we invoke the 
/neutron/neutron/agent/l3/dvr_fip_ns.py FipNamespace.delete() method in the 
first place?

I’ve captured the traceback for this situation:
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to access 
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file 
/opt/openstack/neutron/neutron/agent/linux/utils.py:222
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.utils [-] Unable to access 
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid
 from (pid=70216) get_value_from_file 
/opt/openstack/neutron/neutron/agent/linux/utils.py:222
2015-08-06 06:35:28.469 DEBUG neutron.agent.linux.external_process [-] No 
process started for 8223e12e-837b-49d4-9793-63603fccbc9f from (pid=70216) 
disable /opt/openstack/neutron/neutron/agent/linux/external_process.py:113
Traceback (most recent call last):
 File "/usr/local/lib/python2.7/dist-packages/eventlet/queue.py", line 117, in 
switch
self.greenlet.switch(value)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
result = function(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 
612, in run_service
service.start()
  File "/opt/openstack/neutron/neutron/service.py", line 233, in start
self.manager.after_start()
  File "/opt/openstack/neutron/neutron/agent/l3/agent.py", line 641, in 
after_start
self.periodic_sync_routers_task(self.context)
  File "/opt/openstack/neutron/neutron/agent/l3/agent.py", line 519, in 
periodic_sync_routers_task
self.fetch_and_sync_all_routers(context, ns_manager)
  File "/opt/openstack/neutron/neutron/agent/l3/namespace_manager.py", line 91, 
in __exit__
self._cleanup(_ns_prefix, ns_id)
  File "/opt/openstack/neutron/neutron/agent/l3/namespace_manager.py", line 
140, in _cleanup
ns.delete()
  File "/opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py", line 147, in 
delete
raise TypeError("ss")
TypeError: ss

It seems that the fip namespace is not processed at startup of L3 agent, and 
the cleanup is removing the namespace…
It is also removing the interface to local dvr router connection so… VM has no 
internet access with floating IP:
Command: ['ip', 'netns', 'exec', 'fip-8223e12e-837b-49d4-9793-63603fccbc9f', 
'ip', 'link', 'del', u'fpr-fe517b4b-d']

If the interface inside the fip namespace is not deleted, the VM has full 
internet access without any downtime.

Can we consider it a bug? I guess it is something in the startup/full-sync logic 
since the log is saying:
/opt/openstack/data/neutron/external/pids/8223e12e-837b-49d4-9793-63603fccbc9f.pid

And after finishing the sync loop, the fip namespace is deleted…

Regards,
Artur

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Thursday, August 6, 2015 7:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][dvr] Removing fip namespace when 
restarting L3 agent.

Can you try setting the following to False:
https://github.com/openstack/neutron/blob/dc0944f2d4e347922054bba679ba7f5d1ae6ffe2/etc/l3_agent.ini#L97

On Wed, Aug 5, 2015 at 3:36 PM, Korzeniewski, Artur 
mailto:artur.korzeniew...@intel.com>> wrote:
Hi all,
During testing of Neutron upgrades, I have found that restarting the L3 agent 
in DVR mode is causing the VM network downtime for configured floating IP.
The lockdown is visible when pinging the VM from external network, 2-3 pings 
are lost.
The responsible place in code is:
DVR: destroy fip ns: fip-8223e12e-837b-49d4-9793-63603fccbc9f from (pid=156888) 
delete /opt/openstack/neutron/neutron/agent/l3/dvr_fip_ns.py:164

Can someone explain why the fip namespace is deleted? Can we work out a 
situation where there is no downtime of VM access?

Artur Korzeniewski

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk





--
Kevin Benton


Re: [openstack-dev] [Heat] RPC API versioning

2015-08-06 Thread Dan Smith
> This is, I believe, sufficient to solve our entire problem.
> Specifically, we have no need for an indirection API that rebroadcasts
> messages that are too new (since that can't happen with pinning) and no
> need for Versioned Objects in the RPC layer. (Versioned objects for the
> DB are still critical, and we are very much better off for all the hard
> work that Michal and others have put into them. Thanks!)

So all your calls have simple types for all the arguments? Meaning,
everything looks like this:

  do_thing(uuid, 'foo', 'bar', 123)

and not:

  do_thing(uuid, params, data, dict_of_stuff)

?

If you have the latter, then just doing RPC versioning is a mirage. Nova
has had basic RPC versioning forever, but we didn't get actual upgrade
ability until we tightened the screws on what we're actually sending
over the wire. Just versioning the signatures of the calls doesn't help
you if you're sending complex data structures (such as our Instance)
over the wire.

If you think that the object facade is necessary for insulating you from
DB changes, I feel pretty confident that you need it for the RPC side
for the same reason. Unless you're going to unpack everything from the
object into primitive call arguments and ensure that nobody ever changes
one. If you pull things out of the DB and send them over the wire, then
the DB schema affects your RPC API.
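One way to picture the difference (hypothetical names — this is not Nova's actual object code): a versioned payload can downgrade itself for a peer pinned to an older version, while a raw dict argument silently changes shape whenever the DB schema does.

```python
# Sketch of obj_make_compatible-style downgrading for pinned RPC peers.

class InstancePayload(object):
    VERSION = '1.1'  # assume version 1.1 added the 'flavor' field

    def __init__(self, uuid, name, flavor=None):
        self.uuid = uuid
        self.name = name
        self.flavor = flavor

    def obj_to_primitive(self, target_version):
        """Serialize, dropping fields the pinned peer does not know about."""
        data = {'uuid': self.uuid, 'name': self.name}
        if target_version >= '1.1':
            data['flavor'] = self.flavor
        return {'version': target_version, 'data': data}

payload = InstancePayload('abc-123', 'vm1', flavor='m1.small')
old = payload.obj_to_primitive('1.0')  # safe to send to a pinned older node
new = payload.obj_to_primitive('1.1')
```

Versioning only the call signature (`do_thing(uuid, payload)`) would miss the change entirely, because the signature stays the same while the payload's contents evolve.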

> The nature of Heat's RPC API is that it is effectively user-facing - the
> heat-api process is essentially a thin proxy between ReST and RPC. We
> already have a translation layer between the internal representation(s)
> of objects and the user-facing representation, in the form of
> heat.engine.api, and the RPC API is firmly on the user-facing side. The
> requirements for the content of these messages are actually much
> stricter than anything we need for RPC API stability, since they need to
> remain compatible not just with heat-api but with heatclient - and we
> have *zero* control over when that gets upgraded. Despite that, we've
> managed quite nicely for ~3 years without breaking changes afaik.

I'm not sure how you evolve the internals without affecting the REST
side if you don't have a translation layer. If you do, then the RPC API
changes independently from the REST API.

Anyway, I don't really know anything about the internals of heat, and am
completely willing to believe that it's fundamentally different in some
way that makes it immune to the problems something like Nova has trying
to make this work. I'm not sure I'm convinced of that so far, but that's
fine :)

--Dan




Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-06 Thread Dougal Matthews
- Original Message -
> From: "Dan Prince" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, 6 August, 2015 1:12:42 PM
> Subject: Re: [openstack-dev] [TripleO] Moving instack upstream
> 
> On Thu, 2015-07-23 at 07:40 +0100, Derek Higgins wrote:
> > See below
> > 
> > On 21/07/15 20:29, Derek Higgins wrote:
> > > Hi All,
> > > Something we discussed at the summit was to switch the focus of
> > > tripleo's deployment method to deploy using instack using images
> > > built
> > > with tripleo-puppet-elements. Up to now all the instack work has
> > > been
> > > done downstream of tripleo as part of rdo. Having parts of our
> > > deployment story outside of upstream gives us problems mainly
> > > because it
> > > becomes very difficult to CI what we expect deployers to use while
> > > we're
> > > developing the upstream parts.
> > > 
> > > Essentially what I'm talking about here is pulling instack
> > > -undercloud
> > > upstream along with a few of its dependency projects (instack,
> > > tripleo-common, tuskar-ui-extras etc..) into tripleo and using them
> > > in
> > > our CI in place of devtest.
> > > 
> > > Getting our CI working with instack is close to working but has
> > > taken
> > > longer than I expected because of various complications and
> > > distractions
> > > but I hope to have something over the next few days that we can use
> > > to
> > > replace devtest in CI, in a lot of ways this will start out by
> > > taking a
> > > step backwards but we should finish up in a better place where we
> > > will
> > > be developing (and running CI on) what we expect deployers to use.
> > > 
> > > Once I have something that works I think it makes sense to drop the
> > > jobs
> > > undercloud-precise-nonha and overcloud-precise-nonha, while
> > > switching
> > > overcloud-f21-nonha to use instack, this has a few effects that
> > > need to
> > > be called out
> > > 
> > > 1. We will no longer be running CI on (and as a result not
> > > supporting)
> > > most of the the bash based elements
> > > 2. We will no longer be running CI on (and as a result not
> > > supporting)
> > > ubuntu
> 
> One more side effect is that I think it also means we no longer have
> the capability to test arbitrary Zuul refspecs for projects like Heat,
> Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
> source-repositories element to do this for us in the undercloud and
> since most of the instack stuff uses packages I think we would lose
> this capability.
> 
> I'm all for testing with packages mind you... would just like to see us
> build packages for any projects that have Zuul refspecs inline, create
> a per job repo, and then use that to build out the resulting instack
> undercloud.
> 
> This to me is the biggest loss in our initial switch to instack
> undercloud for CI. Perhaps there is a middle ground here where instack
> (which used to support tripleo-image-elements itself) could still
> support use of the source-repositories element in one CI job until we
> get our package building processes up to speed?
> 
> /me really wants 'check experimental' to give us TripleO coverage for
> select undercloud projects
> 
> > > 
> > > Should anybody come along in the future interested in either of
> > > these
> > > things (and prepared to put the time in) we can pick them back up
> > > again.
> > > In fact the move to puppet element based images should mean we can
> > > more
> > > easily add in extra distros in the future.
> > > 
> > > 3. While we find our feet we should remove all tripleo-ci jobs from
> > > non
> > > tripleo projects, once we're confident with it we can explore
> > > adding our
> > > jobs back into other projects again
> > > 
> > > Nothing has changed yet. In order to check we're all on the same
> > > page
> > > this is high level details of how I see things should proceed so
> > > shout
> > > now if I got anything wrong or you disagree.
> > 
> > Ok, I have a POC that has worked end to end in our CI environment[1],
> > 
> > there are a *LOT* of workarounds in there so before we can merge it I
> > 
> > need to clean up and remove some of those workarounds and to do that a
> > 
> > few things need to move around, below is a list of what has to happen
> > 
> > (as best I can tell)
> > 
> > 1) Pull in tripleo-heat-template spec changes to master delorean
> > We had two patches in the tripleo-heat-template midstream packaging
> > that
> > haven't made it into the master packaging, these are
> > https://review.gerrithub.io/241056 Package firstboot and extraconfig
> > templates
> > https://review.gerrithub.io/241057 Package environments and newtork
> > directories
> > 
> > 2) Fixes for instack-undercloud (I didn't push these directly in case
> > it
> > affected people on old versions of puppet modules)
> > https://github.com/rdo-management/instack-undercloud/pull/5
> > 
> > 3) Add packaging for various repositories into openstack-packaging
> > I've pulled the 

Re: [openstack-dev] How to use the log server in CI ?

2015-08-06 Thread Asselin, Ramy
Hi Tang,

First, I recommend you use os-loganalyze because it significantly increases the 
value of the log files by making them easier to consume.
I'm not sure what the issue you encountered is. The link you provided is for using 
swift, but that is the newer alternative to the old-fashioned files-on-disk 
approach, not a requirement.

That said, you'll find the rules in one of the files located here: 
/etc/apache2/sites-enabled/
It is created by this template [1]. As you can see, there is no htmlify directory 
because /htmlify/ is an alias that invokes os-loganalyze.

Ramy

[1] 
http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/templates/logs.vhost.erb#n85



From: Tang Chen [mailto:tangc...@cn.fujitsu.com]
Sent: Thursday, August 06, 2015 5:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] How to use the log server in CI ?

Hi Joshua,

Thanks for the reply.
On 08/06/2015 07:45 PM, Joshua Hesketh wrote:
Hi Tang,
For OpenStack's set up, os-loganalyze sits at /htmlify/ and is used to add 
markup and filter log lines when viewing in a browser.

On my box, I don't have a /htmlify/ directory, and I don't think I installed 
os-loganalyze at all.
But when I accessed the log site, the URL was modified to prepend /htmlify/.



For your own set up you don't need to use this and could simply serve anything 
straight off your disk. It should be safe to remove the apache matching rules 
in order to do so.

I'm sorry, how do I remove the Apache matching rules? From where?

Thanks. :)




Hope that helps.
Cheers,
Josh

On Thu, Aug 6, 2015 at 6:50 PM, Tang Chen wrote:
Hi Abhishek,

After I set up a log server, any request ending in .txt.gz, console.html or 
console.html.gz has its URL rewritten to prepend /htmlify/ .
But actually the log file is on my local machine.

Is this done by os-loganalyze ?  Is this included in install_log_server.sh ? (I 
don't think so.)
Could I disable it and access my log file locally ?

I found this URL for reference.
http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/

Thanks. :)



Re: [openstack-dev] [third-party-ci]Issue with running "noop-check-communication"

2015-08-06 Thread Asselin, Ramy
I added it very recently.

From: Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
Sent: Thursday, August 06, 2015 2:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [third-party-ci]Issue with running 
"noop-check-communication"


Thanks,
Indeed running on a slave did the trick.

Strange how I missed that part in the README.

--

Eduard Biceri Matei, Senior Software Developer

www.cloudfounders.com

 | eduard.ma...@cloudfounders.com








Re: [openstack-dev] [puppet]puppet-mistral

2015-08-06 Thread Emilien Macchi
The patch that needs review is here:
https://review.openstack.org/#/c/208457/

---
Emilien Macchi
On Aug 6, 2015 8:26 AM, "Emilien Macchi"  wrote:

> https://github.com/openstack/puppet-mistral
>
> Thanks for bringing this up, we will look at it closely.
> Any chance to have it packaged upstream to test it?
>
> ---
> Emilien Macchi
> On Aug 6, 2015 7:59 AM, "Gaël Chamoulaud"  wrote:
>
>> On 06/Aug/2015 .::. 11:44, BORTMAN, Limor (Limor) wrote:
>> > Hi all,
>> > I am trying to push mistral puppet but so far no one from the puppet
>> team take a look.
>> > can anyone please take a look ?
>>
>> Hi Limor,
>>
>> Where is puppet-mistral module hosted? Can't see it on stackforge.
>>
>> So if you want the community to take a look, you will need to tell us where
>> it is.
>>
>> Cheers,
>> GC.
>>


Re: [openstack-dev] [puppet]puppet-mistral

2015-08-06 Thread Emilien Macchi
https://github.com/openstack/puppet-mistral

Thanks for bringing this up, we will look at it closely.
Any chance to have it packaged upstream to test it?

---
Emilien Macchi
On Aug 6, 2015 7:59 AM, "Gaël Chamoulaud"  wrote:

> On 06/Aug/2015 .::. 11:44, BORTMAN, Limor (Limor) wrote:
> > Hi all,
> > I am trying to push mistral puppet but so far no one from the puppet
> team take a look.
> > can anyone please take a look ?
>
> Hi Limor,
>
> Where is puppet-mistral module hosted? Can't see it on stackforge.
>
> So if you want the community to take a look, you will need to tell us where
> it is.
>
> Cheers,
> GC.
>


Re: [openstack-dev] [Heat] RPC API versioning

2015-08-06 Thread Grasza, Grzegorz


> -Original Message-
> From: Zane Bitter [mailto:zbit...@redhat.com]
> Sent: Thursday, 6 August, 2015 2:57
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Heat] RPC API versioning
> 
> We've been talking about this since before summit without much consensus.
> I think a large part of the problem is that very few people have deep
> knowledge of both Heat and Versioned Objects. However, I think we are at a
> point where we should be able to settle on an approach at least for the API<-
> >engine RPC interface. I've been talking to Dan Smith about the Nova team's
> plan for upgrades, which goes something like this:
> 
> * Specify a max RPC API version in the config file
> * In the RPC client lib, add code to handle versions as far back as the 
> previous
> release
> * The operator rolls out the updated code, keeping the existing config file
> with the pin to the previous release's RPC version
> * Once all services are upgraded, the operator rolls out a new config file
> shifting the pin
> * The backwards compat code to handle release N-1 is removed in the N+1
> release
> 
> This is, I believe, sufficient to solve our entire problem.
> Specifically, we have no need for an indirection API that rebroadcasts
> messages that are too new (since that can't happen with pinning) and no
> need for Versioned Objects in the RPC layer. (Versioned objects for the DB
> are still critical, and we are very much better off for all the hard work that
> Michal and others have put into them. Thanks!)
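The rollout steps quoted above amount to a version cap in the RPC client. A rough sketch under assumed names (this is not Heat's actual client code, just an illustration of the pinning mechanism):

```python
class EngineClient:
    """Illustrative RPC client honouring an operator-configured pin."""

    RPC_API_VERSION = '1.2'  # newest version this release can speak

    def __init__(self, version_cap=None):
        # during a rolling upgrade the config file pins this to the
        # previous release's version; after the upgrade the pin is lifted
        self.version_cap = version_cap or self.RPC_API_VERSION

    def _can_send(self, version):
        def parse(v):
            return tuple(int(part) for part in v.split('.'))
        return parse(self.version_cap) >= parse(version)

    def create_stack(self, name, template, files=None):
        if self._can_send('1.2'):
            # current wire format, including the newer `files` argument
            return ('create_stack', '1.2',
                    {'name': name, 'template': template, 'files': files})
        # N-1 compatibility path: drop the argument the old engine
        # does not understand (this code is removed in release N+1)
        return ('create_stack', '1.1',
                {'name': name, 'template': template})
```

With the cap set to '1.1', the client emits only messages the previous release's engine can handle, so no rebroadcast/indirection machinery is needed.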

What is the use of versioned objects outside of RPC?
I've written some documentation for Oslo VO and helped in introducing them in 
Heat.
As I understand it, the only use cases for VO are
* to serialize objects to dicts with version information when they are sent 
over RPC
* handle version dependent code inside the objects (instead of scattering it 
around the codebase)
* provide an object oriented and transparent access to the resources 
represented by the objects to services which don't have direct access to that 
resource (via RPC) - the indirection API

The last point was not yet discussed in Heat as far as I know, but the 
indirection API also contains an interface for backporting objects, which is 
something that is currently only used in Nova, and as you say, doesn't have a 
use when version pinning is in place.
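The first two use cases above can be shown in a few lines. This is a toy class with made-up names, not an actual oslo.versionedobjects or Heat object:

```python
class SyncPoint:
    """Toy versioned object: serializes itself together with its version,
    and keeps the version-dependent backport logic in one place."""

    VERSION = '1.1'  # 1.1 added the `attributes` field

    def __init__(self, entity_id, attributes=None):
        self.entity_id = entity_id
        self.attributes = attributes or {}

    def obj_to_primitive(self, target_version=None):
        target = target_version or self.VERSION
        data = {'entity_id': self.entity_id,
                'attributes': self.attributes}
        if target == '1.0':
            # version-dependent code lives inside the object rather than
            # being scattered around the codebase
            del data['attributes']
        return {'version': target, 'data': data}
```

The serialized dict always carries its version, so the receiver knows exactly which fields to expect.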

> 
> The nature of Heat's RPC API is that it is effectively user-facing - the 
> heat-api
> process is essentially a thin proxy between ReST and RPC. We already have a
> translation layer between the internal representation(s) of objects and the
> user-facing representation, in the form of heat.engine.api, and the RPC API is
> firmly on the user-facing side. The requirements for the content of these
> messages are actually much stricter than anything we need for RPC API
> stability, since they need to remain compatible not just with heat-api but
> with heatclient - and we have *zero* control over when that gets upgraded.
> Despite that, we've managed quite nicely for ~3 years without breaking
> changes afaik.
> 
> Versioned Objects is a great way of retaining control when you need to share
> internal data structures between processes. Fortunately the architecture of
> Heat makes that unnecessary. That was a good design decision. We are not
> going to reverse that design decision in order to use Versioned Objects. (In
> the interest of making sure everyone uses their time productively, perhaps I
> should clarify that to: "your patch is subject to -2 on sight if it introduces
> internal engine data structures to heat-api/heat-cfn-api".)
> 
> Hopefully I've convinced you of the sufficiency of this plan for the API<-
> >engine interface specifically. If anyone disagrees, let them speak now, &c.

I don't understand - what is the distinction between internal and external data 
structures?

From what I understand, versioned objects were introduced in Heat to represent 
objects which are sent over RPC between Heat services.

> 
> I think there is still a case that could be made for a different approach to 
> the
> RPC API for convergence, which is engine->engine and
> (probably) doesn't yet have a formal translation layer of the same kind.
> At a minimum, obviously, we should do the same stuff listed above (though I
> don't think we need to declare that interface stable until the first release
> where we enable convergence by default).
> 

I agree this could be a good use case for VO.


> There's probably places where versioned objects could benefit us. For
> example, when we trigger a check on a resource we pass it a bundle of data
> containing all the attributes and IDs it might need from resources it depends
> on. It definitely makes sense to me that that bundle would be a Versioned
> Object. (In fact, that data gets stored in the DB - as SyncPoint in the
> prototype - so we wouldn't even need to create a new object type. This
> seems like a clear win.)
> 
> What I do NOT want to do is to e

Re: [openstack-dev] [Fuel] SSL for master node API

2015-08-06 Thread Stanislaw Bogatkin
Hello again,

I heard questions from the team about how exactly we will implement SSL for the master
node. Here is how I see it:
1. We implement the ability to use SSL in fuel-nailgun-agent (I have already created
a patch; it is in review now) and merge it in the 7.0 release cycle. We will also
need to implement SSL support in other clients (for example, fuel-cli).
2. We keep both HTTP and HTTPS on the master node, because we need a
seamless master node upgrade. Since we have 2 years of release
support, we will need both protocols enabled for those 2 years. After
that period we will disable HTTP completely and continue to use
HTTPS only.
3. For those who need to force HTTPS-only before the 2 years expire,
I'll prepare a patch that disables plain HTTP in nginx on the
master node if a special key is found in hiera. We will also add a note
about this to our documentation and release notes (because anyone who
decides to force HTTPS must upgrade fuel-nailgun-client on all old
nodes too).

On Tue, Aug 4, 2015 at 5:05 PM, Stanislaw Bogatkin 
wrote:

> Seems that second solution is okay.
>
> Sebastian, I'll try to fix it before SCF.
>
> On Tue, Aug 4, 2015 at 4:25 PM, Sebastian Kalinowski <
> skalinow...@mirantis.com> wrote:
>
>> +1 for option 2)
>>
>> But I have a question: how do we fit with this into the scope of Feature
>> Freeze and Soft Code Freeze this week?
>> Any ETAs?
>>
>> 2015-08-04 15:06 GMT+02:00 Vitaly Kramskikh :
>>
>>> FYI: There is Strict-Transport-Security
>>>  header
>>> which can also be useful here (unless we want to make SSL for master node
>>> optional)
>>>
>>> 2015-08-04 15:07 GMT+03:00 Vladimir Sharshov :
>>>
 Hi,

 +1 to 2nd solution too.

 On Tue, Aug 4, 2015 at 1:45 PM, Evgeniy L  wrote:

> Hi,
>
> +1 to 2nd solution, in this case old environments will work without
> additional
> actions. Agents for new environments, CLI and UI will use SSL.
> But probably for UI we will have to perform redirect on JS level.
>
> Thanks,
>
> On Tue, Aug 4, 2015 at 1:32 PM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Hi guys,
>> in overall movement of Fuel to use secure sockets we think about
>> wrapping master node UI and API calls to SSL. But there are next caveat:
>>
>> a) fuel-nailgun-agent cannot work via SSL now and need to be
>> rewritten a little. But if it will be rewritten in 7.0 and HTTPS on 
>> master
>> node will be forced by default, it will break upgrade from previous
>> releases to 7.0 due to the fact that after master node upgrade from 6.1 to 7.0 
>> we
>> will have HTTPS by default and fuel-nailgun-agent on all environments 
>> won't
>> upgraded, so it won't be able to connect to master node after upgrade. It
>> breaks seamless upgrade procedure.
>>
>> What options I see there:
>> 1. We can forcedly enable SSL for master node and rewrite clients in
>> 7.0 to be able to work over it. In release notes for 7.0 we will write
>> forewarning that clients which want to upgrade master node from previous
>> releases to 7.0 must also install new fuel-nailgun-agent to all nodes in
>> all deployed environments.
>>
>> 2. We can have both SSL and non-SSL versions enabled by default and
>> rewrite fuel-nailgun-client in 7.0 such way that it will check SSL
>> availability and be able to work in plain HTTP for legacy mode. So, for 
>> all
>> new environments SSL will be used by default and for old ones plain HTTP
>> will continue to work too. Master node upgrade will not be broken in this
>> case.
>>
>> 3. We can do some mixed way by gradually rewrite fuel-nailgun-client,
>> save both HTTP and HTTPS for master node in 7.0 and drop plain HTTP in 
>> next
>> releases. It is just postponed version of first clause, so it doesn't 
>> seems
>> valid for me, actually.
>>
>> I would be really glad to hear what you think about this. Thank you
>> in advance.
>>
>>

Re: [openstack-dev] [TripleO] Moving instack upstream

2015-08-06 Thread Dan Prince
On Thu, 2015-07-23 at 07:40 +0100, Derek Higgins wrote:
> See below
> 
> On 21/07/15 20:29, Derek Higgins wrote:
> > Hi All,
> > Something we discussed at the summit was to switch the focus of
> > tripleo's deployment method to deploy using instack using images 
> > built
> > with tripleo-puppet-elements. Up to now all the instack work has 
> > been
> > done downstream of tripleo as part of rdo. Having parts of our
> > deployment story outside of upstream gives us problems mainly 
> > because it
> > becomes very difficult to CI what we expect deployers to use while 
> > we're
> > developing the upstream parts.
> > 
> > Essentially what I'm talking about here is pulling instack
> > -undercloud
> > upstream along with a few of its dependency projects (instack,
> > tripleo-common, tuskar-ui-extras etc..) into tripleo and using them 
> > in
> > our CI in place of devtest.
> > 
> > Getting our CI working with instack is close to working but has 
> > taken
> longer than I expected because of various complications and 
> > distractions
> > but I hope to have something over the next few days that we can use 
> > to
> > replace devtest in CI, in a lot of ways this will start out by 
> > taking a
> > step backwards but we should finish up in a better place where we 
> > will
> > be developing (and running CI on) what we expect deployers to use.
> > 
> > Once I have something that works I think it makes sense to drop the 
> > jobs
> > undercloud-precise-nonha and overcloud-precise-nonha, while 
> > switching
> > overcloud-f21-nonha to use instack, this has a few effects that 
> > need to
> > be called out
> > 
> > 1. We will no longer be running CI on (and as a result not 
> > supporting)
> > most of the the bash based elements
> > 2. We will no longer be running CI on (and as a result not 
> > supporting)
> > ubuntu

One more side effect is that I think it also means we no longer have
the capability to test arbitrary Zuul refspecs for projects like Heat,
Neutron, Nova, or Ironic in our undercloud CI jobs. We've relied on the
source-repositories element to do this for us in the undercloud and
since most of the instack stuff uses packages I think we would lose
this capability.

I'm all for testing with packages mind you... would just like to see us
build packages for any projects that have Zuul refspecs inline, create
a per job repo, and then use that to build out the resulting instack
undercloud.

This to me is the biggest loss in our initial switch to instack
undercloud for CI. Perhaps there is a middle ground here where instack
(which used to support tripleo-image-elements itself) could still
support use of the source-repositories element in one CI job until we
get our package building processes up to speed?

/me really wants 'check experimental' to give us TripleO coverage for
select undercloud projects

> > 
> > Should anybody come along in the future interested in either of 
> > these
> > things (and prepared to put the time in) we can pick them back up 
> > again.
> > In fact the move to puppet element based images should mean we can 
> > more
> > easily add in extra distros in the future.
> > 
> > 3. While we find our feet we should remove all tripleo-ci jobs from 
> > non
> > tripleo projects, once we're confident with it we can explore 
> > adding our
> > jobs back into other projects again
> > 
> > Nothing has changed yet. In order to check we're all on the same 
> > page
> > this is high level details of how I see things should proceed so 
> > shout
> > now if I got anything wrong or you disagree.
> 
> Ok, I have a POC that has worked end to end in our CI environment[1], 
> 
> there are a *LOT* of workarounds in there so before we can merge it I 
> 
> need to clean up and remove some of those workarounds and to do that a 
> 
> few things need to move around, below is a list of what has to happen 
> 
> (as best I can tell)
> 
> 1) Pull in tripleo-heat-template spec changes to master delorean
> We had two patches in the tripleo-heat-template midstream packaging 
> that 
> haven't made it into the master packaging, these are
> https://review.gerrithub.io/241056 Package firstboot and extraconfig 
> templates
> https://review.gerrithub.io/241057 Package environments and newtork 
> directories
> 
> 2) Fixes for instack-undercloud (I didn't push these directly in case 
> it 
> affected people on old versions of puppet modules)
> https://github.com/rdo-management/instack-undercloud/pull/5
> 
> 3) Add packaging for various repositories into openstack-packaging
> I've pulled the packaging for 5 repositories into 
> https://github.com/openstack-packages
> https://github.com/openstack-packages/python-ironic-inspector-client
> https://github.com/openstack-packages/python-rdomanager-oscplugin
> https://github.com/openstack-packages/tuskar-ui-extras
> https://github.com/openstack-packages/ironic-discoverd
> https://github.com/openstack-packages/tripleo-common
> 
> I haven't imported these into gerrithub (in case followi

Re: [openstack-dev] How to use the log server in CI ?

2015-08-06 Thread Tang Chen

Hi Joshua,

Thanks for the reply.

On 08/06/2015 07:45 PM, Joshua Hesketh wrote:

Hi Tang,

For OpenStack's set up, os-loganalyze sits at /htmlify/ and is used to 
add markup and filter log lines when viewing in a browser.


On my box, I don't have a /htmlify/ directory, and I don't think I 
installed os-loganalyze at all.
But when I accessed the log site, the URL was modified to prepend 
/htmlify/.




For your own set up you don't need to use this and could simply serve 
anything straight off your disk. It should be safe to remove the 
apache matching rules in order to do so.


I'm sorry, how do I remove the Apache matching rules? From where?

Thanks. :)




Hope that helps.

Cheers,
Josh

On Thu, Aug 6, 2015 at 6:50 PM, Tang Chen > wrote:


Hi Abhishek,

After I set up a log server, any request ending in .txt.gz,
console.html or console.html.gz has its URL rewritten to prepend /htmlify/ .
But actually the log file is on my local machine.

Is this done by os-loganalyze ?  Is this included in
install_log_server.sh ? (I don't think so.)
Could I disable it and access my log file locally ?

I found this URL for reference.

http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/

Thanks. :)



Re: [openstack-dev] [puppet]puppet-mistral

2015-08-06 Thread Gaël Chamoulaud
On 06/Aug/2015 .::. 11:44, BORTMAN, Limor (Limor) wrote:
> Hi all,
> I am trying to push the mistral puppet module but so far no one from the puppet team 
> has taken a look.
> Can anyone please take a look?

Hi Limor,

Where is puppet-mistral module hosted? Can't see it on stackforge.

So if you want the community to take a look, you will need to tell us where it is.

Cheers,
GC.



[openstack-dev] [puppet]puppet-mistral

2015-08-06 Thread BORTMAN, Limor (Limor)
Hi all,
I am trying to push the mistral puppet module but so far no one from the puppet team 
has taken a look.
Can anyone please take a look?

Thanks 
Limor Stotland
ALCATEL-LUCENT
SENIOR SOFTWARE ENGINEER
CLOUDBAND BUSINESS UNIT
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T:  +972 (0) 9 793 3166
M: +972 (0) 54 585 3736
limor.bort...@alcatel-lucent.com



Re: [openstack-dev] How to use the log server in CI ?

2015-08-06 Thread Joshua Hesketh
Hi Tang,

For OpenStack's set up, os-loganalyze sits at /htmlify/ and is used to add
markup and filter log lines when viewing in a browser.

For your own set up you don't need to use this and could simply serve
anything straight off your disk. It should be safe to remove the apache
matching rules in order to do so.
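The rewrite being discussed is roughly the following. The regex here is illustrative; the real rule is the Apache RewriteRule in puppet-openstackci's logs.vhost.erb template:

```python
import re

# Requests ending in .txt(.gz) or console.html(.gz) are routed through
# the /htmlify/ alias so os-loganalyze can mark them up; everything else
# is served straight off disk.
HTMLIFY = re.compile(r'^/(.*\.txt(?:\.gz)?|.*console\.html(?:\.gz)?)$')

def rewrite(path):
    match = HTMLIFY.match(path)
    if match:
        return '/htmlify/' + match.group(1)
    return path
```

Removing the matching rules from the vhost (or not installing os-loganalyze at all) simply leaves every request on the serve-from-disk branch.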

Hope that helps.

Cheers,
Josh

On Thu, Aug 6, 2015 at 6:50 PM, Tang Chen  wrote:

> Hi Abhishek,
>
> After I set up a log server, any request ending in .txt.gz, console.html
> or console.html.gz has its URL rewritten to prepend /htmlify/ .
> But actually the log file is on my local machine.
>
> Is this done by os-loganalyze ?  Is this included in install_log_server.sh
> ? (I don't think so.)
> Could I disable it and access my log file locally ?
>
> I found this URL for reference.
>
> http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/
>
> Thanks. :)
>




Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread Jamie Lennox


- Original Message -
> From: "David Chadwick" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, August 6, 2015 6:25:29 PM
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> 
> 
> 
> On 06/08/2015 00:54, Jamie Lennox wrote:
> > 
> > 
> > - Original Message -
> >> From: "David Lyle"  To: "OpenStack Development
> >> Mailing List (not for usage questions)"
> >>  Sent: Thursday, August 6, 2015
> >> 5:52:40 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >> Federated Login
> >> 
> >> Forcing Horizon to duplicate Keystone settings just makes
> >> everything much harder to configure and much more fragile. Exposing
> >> whitelisted, or all, IdPs makes much more sense.
> >> 
> >> On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews <
> >> dolph.math...@gmail.com > wrote:
> >> 
> >> 
> >> 
> >> On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli <
> >> steve...@ca.ibm.com > wrote:
> >> 
> >> 
> >> 
> >> 
> >> 
> >> Some folks said that they'd prefer not to list all associated idps,
> >> which i can understand. Why?
> > 
> > So the case I heard, and I think is fairly reasonable, is providing
> > corporate logins to a public cloud. Taking the canonical coke/pepsi
> > example: if I'm Coke, I get asked to log in to this public cloud, I then
> > have to scroll through all the providers to find the COKE.COM domain,
> > and I can see for example that PEPSI.COM is also providing logins to
> > this cloud. Ignoring the corporate privacy implications, this list has
> > the potential to get long.
> 
> This is the whole purpose of the mod we are currently making to Horizon.
> If you look at our screenshots on InVision, you will see the user has
> the choice to either list all (potentially hundreds of) IdPs, or start
> to type in the name of his organisation. With type ahead, we then filter
> the IdPs to match the characters the user enters. This is also shown in
> our screenshots. So using your coke/pepsi example above, the Coke user
> would type C and the list of IdPs would be immediately culled to only
> contain those with C in their name (and pepsi would be removed from the
> list). When he enters O then the list is further culled to only IdPs
> containing co in their name. Consequently with only minor effort from
> the user (both mental load and physical load) the user's IdP is very
> quickly revealed to him, allowing him to login. See
> 
> https://openstack.invisionapp.com/d/#/console/4277424/92772663/preview
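The culling behaviour described in the quoted paragraph is, at its core, a case-insensitive substring filter applied on each keystroke. A minimal sketch — the IdP names here are invented for illustration, not taken from any real deployment:

```python
def filter_idps(idps, typed):
    """Keep only the IdPs whose display name contains the characters
    typed so far (case-insensitive), mimicking the type-ahead culling
    described above."""
    needle = typed.lower()
    return [name for name in idps if needle in name.lower()]

# Illustrative names only.
idps = ["Coke", "Pepsi", "Kent University"]
print(filter_idps(idps, "c"))   # ['Coke']
print(filter_idps(idps, "co"))  # ['Coke']
```

An empty input leaves the full list visible, which matches the "list all or start typing" choice shown in the screenshots.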

So my point here was that in many situations you strictly don't want to allow 
people to see this entire list. 

> Think about for example how you can do a
> > corporate login to gmail, you certainly don't pick from a list of
> > auth providers for gmail - there would be thousands.
> 
> Actually gmail (at least for me) works in a different way. It takes your
> email address and ASSUMES that your idp is the same as your domain name.
> So no list of IdPs is presented. Instead the IdP name is computed
> automatically from your email address. This approach won't work for everyone.
> 
> 
> > 
> > My understanding of the usage then would be that coke would have been
> > provided a (possibly branded) dedicated horizon that backed onto a
> > public cloud and that i could then from horizon say that it's only
> > allowed access to the COKE.COM domain (because the UX for inputting a
> > domain at login is not great so per customer dashboards i think make
> > sense) and that for this instance of horizon i want to show the 3 or
> > 4 login providers that COKE.COM is going to allow.
> > 
> > Any way you want to list or whitelist that in keystone is going to
> > involve some form of IdP tagging system where we have to say which
> > set of IdPs we want in this case, and I don't think we should.
> > 
> > @David - when you add a new IdP to the university network are you
> 
> the list of IdPs is centrally (i.e. nationally) managed, and every UK
> university/federation member is sent a new list periodically. So we do
> not add new IdPs, we simply use the list that is provided to us.
> 
> 
> > having to provide a new mapping each time?
> 
> Since all federation members use the EduPerson schema, one set of
> mapping rules is applicable to all IdPs. They don't need to be updated.
> 
> So to conclude:
> a) we don't need to do anything when the federation membership changes
> (except use the new list)
> b) we don't need to change mapping rules
> c) we don't need to tailor user interfaces
> 
> We would like to move OpenStack in this direction, where there is
> minimal effort to managing federation membership. We believe our
> proposed change to Horizon is one step in the right direction.
> 
> 
> > I know the CERN answer to
> > this with websso was to essentially group many IdPs behind the same
> > keystone idp because they will all produce the same assertion values
> > and consume the same mapping.
> 
> Not a good solution from a trust perspective since you don't know who the
> actual IdP is. You are told it is always the proxy IdP.

Re: [openstack-dev] [third-party-ci] Issue with running "noop-check-communication"

2015-08-06 Thread Eduard Matei
Thanks,
Indeed running on a slave did the trick.

Strange how I missed that part in the README.

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com


Re: [openstack-dev] [puppet] Parameters possible default value

2015-08-06 Thread Yanis Guenane
Hi Andrew,

Sorry for the delay in this answer

On 07/30/2015 09:20 PM, Andrew Woodward wrote:
> On Thu, Jul 30, 2015 at 3:36 AM Sebastien Badia  wrote:
>
>> On Mon, Jul 27, 2015 at 09:43:28PM (+), Andrew Woodward wrote:
>>> Sorry, I forgot to finish this up and send it out.
>>>
>>> #--SNIP--
>>> def absent_default(
>>>   $value,
>>>   $default,
>>>   $unset_when_default = true,
>>> ){
>>>   if ( $value == $default ) and $unset_when_default {
>>> # I can't think of a way to deal with this in a define so let's pretend
>>> # we can re-use this with multiple providers like we could if this
>> was
>>> # in the actual provider.
>>>
>>> keystone_config {$name: ensure => absent,}
>>>   } else {
>>> keystone_config {$name: value => $value,}
>>>   }
>>> }
>>>
>>> # Usage:
>>> absent_default{'DEFAULT/foo': default => 'bar', value => $foo }
>> Hi,
>>
>> Hum, but you want to add this definition in all our modules, or directly in
>> openstacklib?
>>
> I only mocked it up in a puppet define, because it's easier for me (my Ruby
> is terrible). It should be done by adding these kinds of extra providers to
> the inifile provider override that Yanis proposed.
>
>
>> In case of openstacklib, in which manner do you define the
>> _config
>> resource? (eg, generic def, but specialized resource).
>>
>>> #--SNIP--
>>>
>>> (I threw this together and haven't tried to run it yet, so it might not
>> run
>>> verbatim, I will create a test project with it to show it working)
>>>
>>> So In the long-term we should be able to add some new functionality to
>> the
>>> inifile provider to simply just do this for us. We can add the 'default'
>>> and 'unset_when_default' parameter so that we can use them straight w/o a
>>> wrapping function (but the wrapping function could be used too). This
>> would
>>> give us the defaults (I have an idea on that too that I will try to put
>>> into the prototype) that should allow us to have something that looks
>> quite
>>> clean, but is highly functional
>>>
 Keystone_config{unset_when_default => true} #probably flatly enabled in
>>> our inifile provider for the module
 keystone_config {'DEFAULT/foo': value => 'bar', default => 'bar'}
>>
>> I'm not sure to see the difference with the Yanis solution here¹, and not
>> sure
>> to see the link between the define resource and the type/provider resource.
>>
> This adds on to Yanis' solution so that we can authoritatively understand
> what the default value is, and how it should be treated (instead of hoping
> some magic word doesn't conflict)
So I think we agree on most points here. The '' value has
been chosen based on our weekly meetings two weeks ago, but it remains
customizable (via the ensure_absent_val parameter).

We need an explicit one by default so it can be set as a default value
in all manifests.

We mainly picked that value because we thought it was the less likely to
be used as a valid value in any OpenStack related component
configuration file.

If by any chance it turns out to be a valid value for a parameter, we
can use the temporary fix of changing ensure_absent_val for this
specific parameter and raise the point during a meeting.

I take the point to make it clear in the README that if a X_config
resource has its value set to '' it will ensure absent
on the resource.

Does that sound good to you?
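The semantics being converged on here — a configurable sentinel value that makes the provider ensure the option absent, instead of writing the sentinel into the file — can be sketched outside Puppet. `ABSENT_VAL` below is a hypothetical stand-in for whatever `ensure_absent_val` is configured to; this is an illustration of the decision logic, not the actual inifile provider:

```python
# Hypothetical sentinel standing in for the configurable ensure_absent_val.
ABSENT_VAL = "__absent__"

def apply_option(config, key, value, ensure_absent_val=ABSENT_VAL):
    """Mimic the proposed inifile behaviour: assigning the sentinel
    removes the option (ensure => absent) rather than writing it."""
    if value == ensure_absent_val:
        config.pop(key, None)   # ensure => absent
    else:
        config[key] = value     # ensure => present with the given value
    return config

cfg = {}
apply_option(cfg, "DEFAULT/foo", "bar")       # written normally
apply_option(cfg, "DEFAULT/foo", ABSENT_VAL)  # removed again
print(cfg)  # {}
```

The temporary fix mentioned above — changing `ensure_absent_val` for a parameter whose valid values collide with the sentinel — corresponds to passing a different `ensure_absent_val` for that one option.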

>
> Seb
>> ¹https://review.openstack.org/#/c/202574/
>> --
>> Sebastien Badia

--
Yanis Guenane



Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread David Chadwick


On 05/08/2015 18:36, Dolph Mathews wrote:
> 
> On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick  > wrote:
> 
> 
> 
> On 04/08/2015 18:59, Steve Martinelli wrote:
> > Right, but that API is/should be protected. If we want to list IdPs
> > *before* authenticating a user, we either need: 1) a new API for listing
> > public IdPs or 2) a new policy that doesn't protect that API.
> 
> Hi Steve
> 
> yes this was my understanding of the discussion that took place many
> months ago. I had assumed (wrongly) that something had been done about
> it, but I guess from your message that we are no further forward on this
> Actually 2) above might be better reworded as - a new policy/engine that
> allows public access to be a bona fide policy rule
> 
> 
> The existing policy simply seems wrong. Why protect the list of IdPs?

This is a value judgement that admins make. I think we should allow this
to be configurable, by either improving the policy engine to allow a
public access rule (coarse grained), or adding a public/private flag to
each configured IdP (fine grained)

regards

David
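For reference, the (currently policy-protected) call under discussion is GET /v3/OS-FEDERATION/identity_providers, per the OS-FEDERATION spec linked later in this thread. A sketch of building that request — the Keystone URL and token below are placeholders, not real values:

```python
import urllib.request

def list_idps_request(keystone_url, token):
    """Build the list-IdPs request from the OS-FEDERATION v3 API.

    Whether it succeeds without a privileged token is exactly the
    policy question being debated above.
    """
    return urllib.request.Request(
        keystone_url.rstrip("/") + "/v3/OS-FEDERATION/identity_providers",
        headers={"X-Auth-Token": token, "Accept": "application/json"},
    )

# Placeholder endpoint and token, for illustration only.
req = list_idps_request("http://keystone.example.com:5000", "example-token")
print(req.full_url)
```

Making the list public would amount to letting this call through without the `X-Auth-Token` header being privileged.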

>  
> 
> 
> regards
> 
> David
> 
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Core
> >
> >
> > From: Lance Bragstad mailto:lbrags...@gmail.com>>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> >  >
> > Date: 2015/08/04 01:49 PM
> > Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> >
> >
> 
> >
> >
> >
> >
> >
> > On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish <_drf...@us.ibm.com_
> > >> wrote:
> >
> > Hi David,
> >
> > This is a cool looking UI. I've made a minor comment on it in 
> InVision.
> >
> > I'm curious if this is an implementable idea - does keystone support
> > large
> > numbers of 3rd party idps? is there an API to retreive the list of
> > idps or
> > does this require carefully coordinated configuration between
> > Horizon and
> > Keystone so they both recognize the same list of idps?
> >
> >
> > There is an API call for getting a list of Identity Providers from 
> Keystone
> >
> >
> 
> _http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#list-identity-providers_
> >
> >
> >
> > Doug Fish
> >
> >
> > David Chadwick <_d.w.chadw...@kent.ac.uk_
> >  >> wrote on 08/01/2015 06:01:48 AM:
> >
> > > From: David Chadwick <_d.w.chadw...@kent.ac.uk_
> > >>
> > > To: OpenStack Development Mailing List
> > <_openstack-dev@lists.openstack.org_
> >  >>
> > > Date: 08/01/2015 06:05 AM
> > > Subject: [openstack-dev]  [Keystone] [Horizon] Federated Login
> > >
> > > Hi Everyone
> > >
> > > I have a student building a GUI for federated login with Horizon. 
> The
> > > interface supports both a drop down list of configured IDPs, and 
> also
> > > Type Ahead for massive federations with hundreds of IdPs. 
> Screenshots
> > > are visible in InVision here
> > >
> > > _https://invis.io/HQ3QN2123_
> > >
> > > All comments on the design are appreciated. You can make them 
> directly
> > > to the screens via InVision
> > >
> > > Regards
> > >
> > > David
> > >
> > >
> > >
> > >
> > 

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread David Chadwick


On 05/08/2015 19:28, Thai Q Tran wrote:
> I agree with Lance. Quite honestly, the list of IdPs does not belong
> in horizon's settings. Just throwing out some ideas: why not white-list
> the IdPs you want public in keystone's settings, and have an API call
> for that?

that was the conclusion reached many months ago the last time this was
discussed.

regards

David

>  
>  
> 
> - Original message -
> From: Lance Bragstad 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Cc:
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> Date: Wed, Aug 5, 2015 11:19 AM
>  
>  
>  
> On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli
> mailto:steve...@ca.ibm.com>> wrote:
> 
> Some folks said that they'd prefer not to list all associated
> idps, which i can understand.
> 
> Actually, I like jamie's suggestion of just making horizon a bit
> smarter, and expecting the values in the horizon settings
> (idp+protocol)
> 
>  
> This *might* lead to a more complicated user experience, unless we
> deduce the protocol for the IdP selected (but that would defeat the
> point?). Also, wouldn't we have to make changes to Horizon every
> time we add an IdP? This might be case by case, but if you're
> consistently adding Identity Providers, then your ops team might not
> be too happy reconfiguring Horizon all the time. 
>  
> 
> 
> 
> Thanks,
> 
> Steve Martinelli
> OpenStack Keystone Core
> 
> 
> From: Dolph Mathews  >
> To: "OpenStack Development Mailing List (not for usage
> questions)"  >
> Date: 2015/08/05 01:38 PM
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> 
> 
> 
> 
> 
> 
> 
> On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick
> <_d.w.chadw...@kent.ac.uk_ > wrote:
> 
> 
> 
> 
>   *   On 04/08/2015 18:59, Steve Martinelli wrote:
> > Right, but that API is/should be protected. If we want to
> list IdPs
> > *before* authenticating a user, we either need: 1) a new
> API for listing
> > public IdPs or 2) a new policy that doesn't protect that API.
> 
> Hi Steve
> 
> yes this was my understanding of the discussion that took
> place many
> months ago. I had assumed (wrongly) that something had been
> done about
> it, but I guess from your message that we are no further
> forward on this
> Actually 2) above might be better reworded as - a new
> policy/engine that
> allows public access to be a bona fide policy rule
> 
> 
> The existing policy simply seems wrong. Why protect the list of
> IdPs?
>  
> 
> 
>   * regards
> 
> David
> 
> >
> > Thanks,
> >
> > Steve Martinelli
> > OpenStack Keystone Core
> >
> >
> > From: Lance Bragstad <_lbragstad@gmail.com_
> >
> > To: "OpenStack Development Mailing List (not for usage
> questions)"
> > <_openstack-dev@lists.openstack.org_
> >
> > Date: 2015/08/04 01:49 PM
> > Subject: Re: [openstack-dev] [Keystone] [Horizon]
> Federated Login
> >
> >
> 
> 
> >
> >
> >
> >
> >
> > On Tue, Aug 4, 2015 at 10:52 AM, Douglas Fish
> <_drf...@us.ibm.com_
> > >>
> wrote:
> >
> > Hi David,
> >
> > This is a cool looking UI. I've made a minor comment
> on it in InVision.
> >
> > I'm curious if this is an implementable idea - does keystone support
> > large numbers of 3rd party idps?

Re: [openstack-dev] How to use the log server in CI ?

2015-08-06 Thread Tang Chen

Hi Abhishek,

After I set up a log server, any request ending in .txt.gz, console.html
or console.html.gz has its URL rewritten to prepend /htmlify/ .

But actually the log file is on my local machine.

Is this done by os-loganalyze ?  Is this included in 
install_log_server.sh ? (I don't think so.)

Could I disable it and access my log file locally ?

I found this URL for reference.
http://josh.people.rcbops.com/2014/10/openstack-infrastructure-swift-logs-and-performance/

Thanks. :)


Re: [openstack-dev] [heat][ec2tokens] Questions about ec2tokens under keystone v3 api.

2015-08-06 Thread Ethan Lynn
That helps a lot! Thanks for the reply!
It seems Heat doesn't handle the Keystone v3 response; I will report a bug.
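The v2/v3 difference Andrey describes boils down to where the token ID lives in the response: v2 carries it in the JSON body, v3 in the X-Subject-Token header. A compatibility shim might look like this — the response shapes follow the published Identity API, but treat it as an illustrative sketch rather than Heat's actual fix:

```python
def extract_token_id(headers, body):
    """Pull the token ID out of either a v2 or a v3 token response.

    Keystone v3 returns the ID in the X-Subject-Token header, while v2
    returns it in the body under access/token/id.
    """
    if "X-Subject-Token" in headers:          # v3-style response
        return headers["X-Subject-Token"]
    return body["access"]["token"]["id"]      # v2-style response

# v2-shaped example
print(extract_token_id({}, {"access": {"token": {"id": "tok-v2"}}}))  # tok-v2
# v3-shaped example
print(extract_token_id({"X-Subject-Token": "tok-v3"}, {"token": {}}))  # tok-v3
```

Code that assumes the v2 shape will raise a KeyError on a v3 response, which is consistent with the failure being discussed here.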

2015-08-05 21:37 GMT+08:00 Andrey Pavlov :

> As far as I saw, Heat's ec2tokens can work only with the Keystone v2 URL.
> It happens because keystone has different responses for v2 and v3 versions
> for token request by ec2 credentials.
> I found same problem in our ec2api project and keystonemiddleware project.
>
> For example:
> Patch for our ec2api project will be here -
> https://review.openstack.org/#/c/209085/2/ec2api/api/__init__.py
> Patch for keystonemiddleware is here -
> https://review.openstack.org/#/c/205440/
>
> --
> Kind regards,
> Andrey Pavlov.
>


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-06 Thread David Chadwick


On 06/08/2015 00:54, Jamie Lennox wrote:
> 
> 
> - Original Message -
>> From: "David Lyle"  To: "OpenStack Development
>> Mailing List (not for usage questions)"
>>  Sent: Thursday, August 6, 2015
>> 5:52:40 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
>> Federated Login
>> 
>> Forcing Horizon to duplicate Keystone settings just makes
>> everything much harder to configure and much more fragile. Exposing
>> whitelisted, or all, IdPs makes much more sense.
>> 
>> On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews <
>> dolph.math...@gmail.com > wrote:
>> 
>> 
>> 
>> On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli <
>> steve...@ca.ibm.com > wrote:
>> 
>> 
>> 
>> 
>> 
>> Some folks said that they'd prefer not to list all associated idps,
>> which i can understand. Why?
> 
> So the case I heard, and I think is fairly reasonable, is providing
> corporate logins to a public cloud. Taking the canonical coke/pepsi
> example: if I'm Coke, I get asked to log in to this public cloud, I then
> have to scroll through all the providers to find the COKE.COM domain,
> and I can see for example that PEPSI.COM is also providing logins to
> this cloud. Ignoring the corporate privacy implications, this list has
> the potential to get long.

This is the whole purpose of the mod we are currently making to Horizon.
If you look at our screenshots on InVision, you will see the user has
the choice to either list all (potentially hundreds of) IdPs, or start
to type in the name of his organisation. With type ahead, we then filter
the IdPs to match the characters the user enters. This is also shown in
our screenshots. So using your coke/pepsi example above, the Coke user
would type C and the list of IdPs would be immediately culled to only
contain those with C in their name (and pepsi would be removed from the
list). When he enters O then the list is further culled to only IdPs
containing co in their name. Consequently with only minor effort from
the user (both mental load and physical load) the user's IdP is very
quickly revealed to him, allowing him to login. See

https://openstack.invisionapp.com/d/#/console/4277424/92772663/preview


> Think about for example how you can do a
> corporate login to gmail, you certainly don't pick from a list of
> auth providers for gmail - there would be thousands.

Actually gmail (at least for me) works in a different way. It takes your
email address and ASSUMES that your idp is the same as your domain name.
So no list of IdPs is presented. Instead the IdP name is computed
automatically from your email address. This approach won't work for everyone.
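The gmail behaviour described above — deriving the IdP from the mail domain rather than presenting a list — is easy to sketch (the addresses below are illustrative only):

```python
def idp_from_email(address):
    """Assume, as described above, that the IdP name is simply the
    domain part of the user's email address."""
    local, sep, domain = address.partition("@")
    if not (local and sep and domain):
        raise ValueError("not an email address: %r" % address)
    return domain.lower()

print(idp_from_email("alice@COKE.COM"))  # coke.com
```

As noted, this only works when the IdP identifier really does equal the mail domain, which is not true for every federation.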


> 
> My understanding of the usage then would be that coke would have been
> provided a (possibly branded) dedicated horizon that backed onto a
> public cloud and that i could then from horizon say that it's only
> allowed access to the COKE.COM domain (because the UX for inputting a
> domain at login is not great so per customer dashboards i think make
> sense) and that for this instance of horizon i want to show the 3 or
> 4 login providers that COKE.COM is going to allow.
> 
> Any way you want to list or whitelist that in keystone is going to
> involve some form of IdP tagging system where we have to say which
> set of IdPs we want in this case, and I don't think we should.
> 
> @David - when you add a new IdP to the university network are you

the list of IdPs is centrally (i.e. nationally) managed, and every UK
university/federation member is sent a new list periodically. So we do
not add new IdPs, we simply use the list that is provided to us.


> having to provide a new mapping each time?

Since all federation members use the EduPerson schema, one set of
mapping rules is applicable to all IdPs. They don't need to be updated.

So to conclude:
a) we don't need to do anything when the federation membership changes
(except use the new list)
b) we don't need to change mapping rules
c) we don't need to tailor user interfaces

We would like to move OpenStack in this direction, where there is
minimal effort to managing federation membership. We believe our
proposed change to Horizon is one step in the right direction.


> I know the CERN answer to
> this with websso was to essentially group many IdPs behind the same
> keystone idp because they will all produce the same assertion values
> and consume the same mapping.

Not a good solution from a trust perspective since you don't know who the
actual IdP is. You are told it is always the proxy IdP.

> 
> Maybe the answer here is to provide the option in
> django_openstack_auth, a plugin (again) of fetch from keystone, fixed
> list in settings or let it point at a custom text file/url that is
> maintained by the deployer. Honestly if you're adding and removing
> idps this frequently i don't mind making the deployer maintain some
> of this information out of scope of keystone.

It can't be outside the scope of Keystone, since the list of trusted IdPs
should be in Keystone.

regards

David

> 
> 
> Jamie
> 

Re: [openstack-dev] [CI]How to set proxy for nodepool

2015-08-06 Thread Xie, Xianshan
Hi Ramy,
> This one you need to skip because you don’t have access to that server.
   OK, got it.

   Thanks a lot.

Xiexs


From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Wednesday, August 05, 2015 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool

HI Xiexs,

“Also, I’ve found some of the infra project-config elements don’t work in my 
environment and aren’t needed as they’re specific to infra. For those, simply 
comment out the portions that don’t work. I didn’t notice any negative 
side-effects.”
This one you need to skip because you don’t have access to that server.

In fact, here are the ones that I skip in my setup:

etc/nodepool/elements/nodepool-base/install.d/90-venv-swift-logs:# skipped
etc/nodepool/elements/nodepool-base/install.d/99-install-zuul:# Skipped
etc/nodepool/elements/nodepool-base/finalise.d/99-unbound:# Skipped
etc/nodepool/elements/cache-devstack/install.d/99-cache-testrepository-db:# 
Skipped

The ’99-unbound’ may work in your setup. If not, you need to disable it here 
too:
etc/nodepool/elements/puppet/bin/prepare-node:  enable_unbound => false,

Ramy

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Wednesday, August 05, 2015 3:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool

Hi Ramy,
Thanks for your patience.
I have tried your suggestion, but it did not work for me.
According to the log, this element has already run in the chroot before the pip
commands are executed.
So, in theory, the pip command would run behind this proxy, but the connection
errors are still raised.
It's weird.

Then I tried to hard-code the proxy into the pip command with the --proxy
option, and it works.
Anyway, this is merely a temporary solution for this issue until I figure it
out.

But, after that, I got a new error:
-
nodepool.image.build.dpc: + sudo env 
'PATH=/opt/git/subunit2sql-env/bin:/usr/lib64/ccache:/usr/lib/ccache:$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
 /opt/git/subunit2sql-env/bin/python2 
/opt/nodepool-scripts/prepare_tempest_testrepository.py 
/opt/git/openstack/tempest
nodepool.image.build.dpc: sudo: unable to resolve host fnst01
nodepool.image.build.dpc: No handlers could be found for logger 
"oslo_db.sqlalchemy.session"
nodepool.image.build.dpc: Traceback (most recent call last):
nodepool.image.build.dpc: File 
"/opt/nodepool-scripts/prepare_tempest_testrepository.py", line 50, in 
nodepool.image.build.dpc: main()
nodepool.image.build.dpc: File 
"/opt/nodepool-scripts/prepare_tempest_testrepository.py", line 39, in main
nodepool.image.build.dpc: session = api.get_session()
nodepool.image.build.dpc: File 
"/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py",
 line 47, in get_session
nodepool.image.build.dpc: facade = _create_facade_lazily()
nodepool.image.build.dpc: File 
"/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/subunit2sql/db/api.py",
 line 37, in _create_facade_lazily
nodepool.image.build.dpc: **dict(CONF.database.iteritems()))
nodepool.image.build.dpc: File 
"/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py",
 line 822, in __init__
nodepool.image.build.dpc: **engine_kwargs)
nodepool.image.build.dpc: File 
"/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py",
 line 417, in create_engine
nodepool.image.build.dpc: test_conn = _test_connection(engine, max_retries, 
retry_interval)
nodepool.image.build.dpc: File 
"/opt/git/subunit2sql-env/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/session.py",
 line 596, in _test_connection
nodepool.image.build.dpc: six.reraise(type(de_ref), de_ref)
nodepool.image.build.dpc: File "", line 2, in reraise
nodepool.image.build.dpc: oslo_db.exception.DBConnectionError: 
(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 
'logstash.openstack.org' ([Errno -5] No address associated with hostname)")
-
I think it is also a proxy problem about remote access to the subunit2sql 
database.
And it should work with simpleproxy I think.
But I couldn't find out how to use simpleproxy to forward data for the
subunit2sql db.
Could you please give me more hints?

Thanks again.

Xiexs

From: Asselin, Ramy [mailto:ramy.asse...@hp.com]
Sent: Wednesday, August 05, 2015 6:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [CI]How to set proxy for nodepool

Hi Xiexs,

You might need to configure pip to use your proxy.

I added my own element here:

cache-devstack/install.d/98-setup-pip

Basically:

set -eux

mkdir -p /root/.pip/

cat << EOF > /root/.pip/pip.conf
[global]
proxy = 
EOF

cp -f /root/.pip/pip.conf /etc/

Ramy

From: Xie, Xianshan [mailto:xi...@cn.fujitsu.com]
Sent: Tuesday, August 04, 2015 12:05 AM
To: OpenStack Dev