[openstack-dev] [valence] Valence weekly IRC meeting is on Wednesdays at 15:00 UTC

2016-11-09 Thread Bhandaru, Malini K
USA folks, remember that daylight saving time ended this past weekend, so the meeting 
is one hour earlier local time.

Regards,
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] Announcing Valence, support for integrating Rack Scale Devices

2016-11-03 Thread Bhandaru, Malini K
Hello Everyone!
We are very pleased to announce Valence. Please visit our wiki 
at https://wiki.openstack.org/wiki/Valence

Regards,
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Barcelona Summit -- Monday -- openings in command-presence-workshop

2016-10-11 Thread Bhandaru, Malini K
Hello Everyone!

We still have a few slots available in Monday's Command Presence workshop and are 
opening it up to a broader audience. Please register if interested. Past attendees, 
despite their many skills and experience, vouched that they learned a trick or two.

https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16888/command-presence-workshop

Regards,
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing RSC (Rack Scale Controller)

2016-09-12 Thread Bhandaru, Malini K
Hello Everyone!
Think disaggregated hardware assembled to meet workload needs, 
dynamically growing and shrinking your cloud, and more: 
https://wiki.openstack.org/wiki/Rsc
Do come and join us for our very first IRC meeting on Sept 13 at 03:00 UTC.
Please forward to others you think may be interested.
Regards,
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] venting -- OpenStack wiki reCAPTCHA

2016-09-08 Thread Bhandaru, Malini K
Is it just me who likes to hit the save button often?
It gets tedious to keep proving that you are not a robot. The wiki's reCAPTCHA demands 
proof even when saves are spaced less than a minute apart!
Wiki Gods, hear my plea!

Regards,
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-11 Thread Bhandaru, Malini K
+1   :-)

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Wednesday, May 11, 2016 8:40 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian 
Rosmaita to the glance-coresec team

Hello all,

I would like to propose adding Brian to the team. He has been doing great 
work on improving the Glance experience for users and operators and tying those 
threads to the security aspects of the service. He also brings a good 
perspective from running large-scale Glance deployments and the issues seen 
therein.

Please cast your vote with +1, 0 or -1, or you can reply back to me.

Thank you.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Bhandaru, Malini K
I vote for Monday to get the ball rolling and meet the interested parties, and to 
continue on Thursday at 3:10 in a quieter setting ... so we leave with some 
consensus.
Thanks Cathy!
Malini

-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Thursday, April 21, 2016 11:43 AM
To: Cathy Zhang ; OpenStack Development Mailing List 
(not for usage questions) ; Ihar Hrachyshka 
; Vikram Choudhary ; Sean M. 
Collins ; Haim Daniel ; Mathieu Rohon 
; Shaughnessy, David ; 
Eichberger, German ; Henry Fourie 
; arma...@gmail.com; Miguel Angel Ajo 
; Reedip ; Thierry 
Carrez 
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Hi everyone,

We have room 400 at 3:10pm on Thursday available for discussion of the two 
topics. 
Another option is to use the common room with roundtables in "Salon C" during 
Monday or Wednesday lunch time.

Room 400 at 3:10pm is a closed room, while Salon C is a big open room that 
can host 500 people.

I am OK with either option. Let me know if anyone has a strong preference. 

Thanks,
Cathy


-Original Message-
From: Cathy Zhang
Sent: Thursday, April 14, 2016 1:23 PM
To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry Fourie; 
'arma...@gmail.com'
Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Thanks for everyone's reply! 

Here is the summary based on the replies I received: 

1.  We should have a meet-up for these two topics. The "To" list contains the people 
who have expressed interest in them. 
I am thinking of around lunch time on Tuesday or Wednesday, since some of 
us will fly back on Friday morning/noon. 
If this time is OK with everyone, I will find a place and let you know 
where and what time to meet. 

2.  There is a bug open for the QoS Flow Classifier: 
https://bugs.launchpad.net/neutron/+bug/1527671
We can either change the bug title and modify the bug details, or start a new one for 
the common FC that captures the requirements of all relevant use cases. There is also 
a bug open for the OVS agent extension: 
https://bugs.launchpad.net/neutron/+bug/1517903

3.  There is some very rough and preliminary work (ugly, as Sean put it :-)) on a 
common FC at https://github.com/openstack/neutron-classifier, which we can see how to 
leverage. There is also an SFC API spec that covers the FC API for SFC usage: 
https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst
The following is the CLI version of the Flow Classifier for your reference:

neutron flow-classifier-create [-h]
[--description <description>]
[--protocol <protocol>]
[--ethertype <ethertype>]
[--source-port <min-port>:<max-port>]
[--destination-port <min-port>:<max-port>]
[--source-ip-prefix <cidr>]
[--destination-ip-prefix <cidr>]
[--logical-source-port <port>]
[--logical-destination-port <port>]
[--l7-parameters <parameters>] FLOW-CLASSIFIER-NAME

The corresponding code is here 
https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions

4.  We should come up with a formal Neutron spec for the FC and another one for the 
OVS Agent extension, and get everyone's review and approval. Here is the etherpad 
capturing our previous requirements discussion on the OVS agent (thanks, David, for 
the link! I remember we had this discussion before): 
https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion


More inline. 

Thanks,
Cathy


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Thursday, April 14, 2016 3:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
Agent extension for Newton cycle

Cathy Zhang  wrote:

> Hi everyone,
> Per Armando’s request, Louis and I are looking into the following 
> features for Newton cycle.
> · Neutron Common FC used for SFC, QoS, Tap as a service etc.,
> · OVS Agent extension
> Some of you might know that we already developed a FC in 
> networking-sfc project and QoS also has a FC. It makes sense that we 
> have one common FC in Neutron that could be shared by SFC, QoS, Tap as a 
> service etc.
> features in Neutron.

I don’t actually know of any classifier in QoS. It’s only planned to emerge, 
but there are no specs or anything specific to the feature.

Anyway, I agree that classifier API belongs to core neutron and should be 
reused by all 

Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-14 Thread Bhandaru, Malini K
Hello Michael!

Quite recently David Lyle hosted the Horizon midcycle at Intel's Portland 
offices – they enjoyed getting out and dining in the cafeteria.
Internet accounts were set up ahead of schedule, but IRC was still an issue. We 
shall figure out proxy settings to tackle this.
We are also looking for rooms that have easy access to facilities – the 
previous Nova/Ironic midcycle Intel hosted was unfortunately next to a 
construction zone.

And we note the preference for Portland over San Antonio in the summer! ☺

Regards
Malini

From: Michael Still [mailto:mi...@stillhq.com]
Sent: Wednesday, April 13, 2016 10:36 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Cc: Ding, Jian-feng <jian-feng.d...@intel.com>; Bhargava, Ruchi 
<ruchi.bharg...@intel.com>; Fuller, Michael <michael.ful...@intel.com>; 
Apostol, Michael J <michael.j.apos...@intel.com>
Subject: Re: [openstack-dev] [nova] Newton midcycle planning


We had issues with physical security and unfiltered internet access last time 
we were in Hillsboro. Do we know if those issues are now resolved?

Michael
On 13 Apr 2016 9:08 AM, "Bhandaru, Malini K" <malini.k.bhand...@intel.com> wrote:
Hi Everyone!

Intel would be pleased to host the Nova midcycle meetup either at San 
Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
18-22) as preferred by the Nova community.

Regards
Malini

 Forwarded Message 
Subject:Re: [openstack-dev] [nova] Newton midcycle planning
Date:   Tue, 12 Apr 2016 08:54:17 +1000
From:   Michael Still <mi...@stillhq.com>
Reply-To:   OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>



On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com> wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N midcycle last 
release, so they might still be an option? I don't recall any other possible 
hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Newton midcycle planning

2016-04-12 Thread Bhandaru, Malini K
Hi Everyone!

Intel would be pleased to host the Nova midcycle meetup either at San 
Antonio, Texas or Hillsboro, Oregon during R-15 (June 20-24) or R-11 (July 
18-22) as preferred by the Nova community.

Regards
Malini 

 Forwarded Message 
Subject:Re: [openstack-dev] [nova] Newton midcycle planning
Date:   Tue, 12 Apr 2016 08:54:17 +1000
From:   Michael Still 
Reply-To:   OpenStack Development Mailing List (not for usage questions)

To: OpenStack Development Mailing List (not for usage questions)




On Tue, Apr 12, 2016 at 6:49 AM, Matt Riedemann wrote:

A few people have been asking about planning for the nova midcycle
for newton. Looking at the schedule [1] I'm thinking weeks R-15 or
R-11 work the best. R-14 is close to the US July 4th holiday, R-13
is during the week of the US July 4th holiday, and R-12 is the week
of the n-2 milestone.

R-16 is too close to the summit IMO, and R-10 is pushing it out too
far in the release. I'd be open to R-14 though but don't know what
other people's plans are.

As far as a venue is concerned, I haven't heard any offers from
companies to host yet. If no one brings it up by the summit, I'll
see if hosting in Rochester, MN at the IBM site is a possibility.


Intel at Hillsboro had expressed an interest in hosting the N midcycle last 
release, so they might still be an option? I don't recall any other possible 
hosts in the queue, but it's possible I've missed someone.

Michael

--
Rackspace Australia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Glance Mitaka: Passing the torch

2016-03-09 Thread Bhandaru, Malini K
Flavio, Glance and OpenStack benefited during your reign, or rather your period of 
humble service. We will miss you at the helm. Also, thank you for 
anointing/attracting two new solid cores: Brian and Sabari.
Malini


-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Wednesday, March 09, 2016 7:55 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Glance Mitaka: Passing the torch

A beautiful post, sir. Thank you for everything!

On 09/03/16 22:15, Flavio Percoco wrote:
>
> Greetings,
>
> I'm not going to run for Glance's PTL position for the Newton timeframe.
>
> There are many motivations behind this choice. Some of them I'm 
> willing to discuss in private if people are interested but I'll go as 
> far as saying there are personal and professional reasons for me to 
> not run again.
>
> As I've always done in my past cycles as PTL, I'd like to take some 
> time to summarize what's happened in the past cycle not only for the 
> new PTL to know what's coming up but for the community to know how 
> things went.
>
> Before I even start, I'd like to thank everyone in the Glance community.
> I truly
> believe this was a great cycle for the project and the community has 
> gotten stronger. None of this would have been possible without the 
> help of all of you and for that, I'm deeply in debt with you all. It 
> does not just take an employer to get someone to contribute to a 
> project. Being paid, for those who are, to do Open Source is not 
> enough. It takes passion, motivation and a lot of patience to analyze 
> a technology, think out of the box and look for ways it can be 
> improved either by fixing bugs or by implementing new features. The 
> amount of time and dedication this process requires is probably worth 
> way more than what we get back from it.
>
> Now, with all that being said, here's Glance Mitaka for all of you:
>
> Completed Features
> ==
>
> I think I've mentioned this already but I'm proud of it so I'll say it 
> again.
> The prioritization and scheduling of Glance Mitaka went so well that 
> we managed to release M-3 without any feature freeze exception (FFE) 
> request. This doesn't mean all the features were implemented. In fact, 
> at least 4 were pushed back to Newton. However, the team communicated, 
> reviewed, sprinted and coded in such a way that we were able to 
> re-organize the schedule to avoid wasting time on things we knew 
> weren't going to make it. This required transparency and hard 
> decisions but that's part of the job, right?
>
> * [0] CIM Namespace Metadata
> * [1] Support download from and upload to Cinder volumes
> * [2] Glance db purge utility
> * [3] Deprecate Glance v3 API
> * [4] Implement trusts for Glance
> * [5] Migrate the HTTP Store to Use Requests
> * [6] Glance Image Signing and Verification
> * [7] Supporting OVF Single Disk Image Upload
> * [8] Prevention of Unauthorized errors during upload/download in 
> Swift driver
> * [9] Add filters using an ‘in’ operator
>
> [0]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/cim-namespace-metadata-definitions.html
>
> [1]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/cinder-store-upload-download.html
>
> [2]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/database-purge.html
>
> [3]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/deprecate-v3-api.html
>
> [4]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/glance-trusts.html
>
> [5]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/http-store-on-requests.html
>
> [6]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/image-signing-and-verification-support.html
>
> [7]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/ovf-lite.html
>
> [8]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/prevention-of-401-in-swift-driver.html
>
> [9]
> http://specs.openstack.org/openstack/glance-specs/specs/mitaka/impleme
> nted/v2-add-filters-with-in-operator.html
>
>
> If the above doesn't sound impressive to you, let me fill you in with 
> some extra info about Glance's community.
>
> Community
> =
>
> Glance's community currently has 12 core members, 3 of which joined 
> during Mitaka and 2 of those 3 members joined at the end of the cycle. 
> That means the team ran on 9 reviewers for most of the cycle except 
> that out of those 9, 1 left the team and joined later in the cycle and 
> 3 folks weren't super active this cycle. That left the team with 5 
> constant reviewers throughout the cycle.
>
> Now, the above is *NOT* to say that the success of the cycle is thanks 
> to those
> 5 constant reviewers. On the contrary, it's to say that we've managed 
> to build a community capable of working together with other non-core 
> 

Re: [openstack-dev] [nova] Migration progress

2016-02-04 Thread Bhandaru, Malini K
I agree with Daniel: keep the periods consistent, 5 and 5.

Another thought: for such ephemeral, frequently changing data as progress, why not 
keep the information in a cache (flushing to the database at a lower rate) and serve 
reads for active listeners/UIs from the cache? Once the migration completes or 
aborts, flush the cache.

Also, should we provide a "verbose" flag, so that progress information is captured 
only when requested, i.e., when a human user is issuing the command from the CLI or a 
GUI tool?
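
A minimal sketch of this write-behind idea (illustrative only; the class and callback 
names are hypothetical, not Nova code): keep the latest progress in memory and 
persist at a lower rate.

    import time

    class MigrationProgressCache(object):
        """Hold live progress in memory; flush to the DB at a lower rate."""

        def __init__(self, persist_cb, flush_interval=30):
            self._progress = {}            # migration_id -> latest percent
            self._persist_cb = persist_cb  # hypothetical "save to DB" callable
            self._flush_interval = flush_interval
            self._last_flush = 0.0

        def update(self, migration_id, percent):
            # Cheap in-memory write on every progress tick.
            self._progress[migration_id] = percent
            if time.monotonic() - self._last_flush >= self._flush_interval:
                self.flush()

        def get(self, migration_id):
            # API/UI reads are served from the cache, not the DB.
            return self._progress.get(migration_id)

        def flush(self):
            # Slow path: persist everything; also call this on completion/abort.
            for mid, pct in self._progress.items():
                self._persist_cb(mid, pct)
            self._last_flush = time.monotonic()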

Regards
Malini

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Wednesday, February 03, 2016 11:46 AM
To: Paul Carlton 
Cc: Feng, Shaohe ; OpenStack Development Mailing List 
(not for usage questions) 
Subject: Re: [openstack-dev] [nova] Migration progress

On Wed, Feb 03, 2016 at 11:27:16AM +, Paul Carlton wrote:
> On 03/02/16 10:49, Daniel P. Berrange wrote:
> >On Wed, Feb 03, 2016 at 10:44:36AM +, Daniel P. Berrange wrote:
> >>On Wed, Feb 03, 2016 at 10:37:24AM +, Koniszewski, Pawel wrote:
> >>>Hello everyone,
> >>>
> >>>On the yesterday's live migration meeting we had concerns that 
> >>>interval of writing migration progress to the database is too short.
> >>>
> >>>Information about migration progress will be stored in the database 
> >>>and exposed through the API (/servers//migrations/). In 
> >>>current proposition [1] migration progress will be updated every 2 
> >>>seconds. It basically means that every 2 seconds a call through RPC 
> >>>will go from compute to conductor to write migration data to the 
> >>>database. In case of parallel live migrations each migration will report 
> >>>progress by itself.
> >>>
> >>>Isn't 2 seconds interval too short for updates if the information 
> >>>is exposed through the API and it requires RPC and DB call to 
> >>>actually save it in the DB?
> >>>
> >>>Our default configuration allows only for 1 concurrent live 
> >>>migration [2], but it might vary between different deployments and 
> >>>use cases as it is configurable. Someone might want to trigger 10 
> >>>(or even more) parallel live migrations and each might take even a 
> >>>day to finish in case of block migration. Also if deployment is big enough 
> >>>rabbitmq might be fully-loaded.
> >>>I'm not sure whether updating each migration every 2 seconds makes 
> >>>sense in this case. On the other hand it might be hard to observe 
> >>>fast enough that migration is stuck if we increase this interval...
> >>Do we have any actual data that this is a real problem. I have a 
> >>pretty hard time believing that a database update of a single field 
> >>every 2 seconds is going to be what pushes Nova over the edge into a 
> >>performance collapse, even if there are 20 migrations running in 
> >>parallel, when you compare it to the amount of DB queries & updates 
> >>done across other areas of the code for pretty much every single API call 
> >>and background job.
> >Also note that progress is rounded to the nearest integer. So even if 
> >the migration runs all day, there is a maximum of 100 possible 
> >changes in value for the progress field, so most of the updates 
> >should turn in to no-ops at the database level.
> >
> >Regards,
> >Daniel
> I agree with Daniel, these rpc and db access ops are a tiny percentage 
> of the overall load on rabbit and mysql and properly configured these 
> subsystems should have no issues with this workload.
> 
> One correction, unless I'm misreading it, the existing 
> _live_migration_monitor code updates the progress field of the 
> instance record every 5 seconds.  However this value can go up and 
> down, so an infinite number of updates is possible?

Oh yes, you are in fact correct. Technically you could have an unbounded number 
of updates if migration goes backwards. Some mitigation against this is if we 
see progress going backwards we'll actually abort the migration if it gets 
stuck for too long. We'll also be progressively increasing the permitted 
> downtime. So except in pathological scenarios I think the number of updates 
should still be relatively small.

> However, the issue raised here is not with the existing implementation 
> but with the proposed change 
> https://review.openstack.org/#/c/258813/5/nova/virt/libvirt/driver.py
> This add a save() operation on the migration object every 2 seconds

Ok, that is more heavyweight since it is recording the raw byte values, and so 
it is guaranteed to do a database update pretty much every time.
It still shouldn't be an unreasonable load though. FWIW I think it is 
worth being consistent in the update frequency between the progress value & 
the migration object save, so switching to every
5 seconds probably makes more sense, so we know both objects reflect the 
same point in time.
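
For illustration, a sketch of the kind of throttling discussed here (hypothetical 
helper names; not the actual _live_migration_monitor code): update on a fixed 
5-second period and skip writes when the rounded progress value has not changed.

    import time

    SAVE_INTERVAL = 5  # seconds, matching the progress-field update period

    def monitor(get_progress_pct, save_progress):
        # get_progress_pct / save_progress are hypothetical callables.
        last_saved = None
        while True:
            pct = int(round(get_progress_pct()))   # 0-100; rounding makes most
            if pct != last_saved:                  # updates no-ops at the DB
                save_progress(pct)                 # RPC -> conductor -> DB write
                last_saved = pct
            if pct >= 100:
                break
            time.sleep(SAVE_INTERVAL)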

Regards,
Daniel
-- 
|: http://berrange.com  -o-

[openstack-dev] [glance] Seeking FFE for "Add CIM namespace metadata definitions"

2016-01-04 Thread Bhandaru, Malini K
Hello Glance Team!

Hope you had a wonderful vacation and wishing you health and 
happiness for 2016.

Would very much appreciate your considering https://review.openstack.org/259694 
for a feature freeze exception.

Thank you to Travis Tripp for chiming in. The Searchlight APIs will provide search 
capability (backed by Elasticsearch), and users such as Horizon can leverage these.

Supporting the CIM namespace thus requires just importing the namespace tags, and we 
already have a PoC implementation by Lin, which is how we were able to generate the 
graphics in Horizon that were attached to the spec. We would appreciate core votes to 
approve this spec.



Regards

Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Seeking FFE for "Support single disk image OVA/OVF package upload"

2016-01-04 Thread Bhandaru, Malini K
Hello Glance Team!
Hope you had a wonderful vacation and wishing you health and 
happiness for 2016.

Would very much appreciate your considering https://review.openstack.org/194868 
for a feature freeze exception.

I believe the spec is pretty solid, and we can deliver on the implementation by M-2, 
but we were unable to get enough core votes during the holiday season.

Regards

Malini


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] OVF/OVA introspection in Mitaka?

2015-12-18 Thread Bhandaru, Malini K
Flavio, I uploaded patch set 15. Your comment about making import available to 
non-admin users, and Mike Gerdt's comments with OVA schema references, prompt the 
question: how important is it to support upload of compressed OVAs?
I am happy to jettison compressed support for general availability.

Also, will the new refactored import be available any time soon?
Should we go ahead with this feature using the original Liberty task flow, and then 
update it (as a bug fix or compatibility feature) when the refactored import becomes 
available?

I have one more question: we would like to save the extracted metadata using the CIM 
namespace.
Horizon's metadata tag support allows one to represent a key with multiple values as 
a list/array.
We would like to do the same here ... for example: 
CIM:InstructionSetExtensionName: {
"x86:3DNow", 
"x86:3DNowExt", 
"x86:ABM", 
"x86:AES", 
"x86:AVX", 
"x86:AVX2", 
"x86:BMI", 
"x86:CX16", 
"x86:F16C", 
"x86:FSGSBASE", 
"x86:LWP", 
"x86:MMX", 
"x86:PCLMUL", 
"x86:RDRND", 
"x86:SSE2", 
"x86:SSE3", 
"x86:SSSE3", 
"x86:SSE4A", 
"x86:SSE41", 
"x86:SSE42", 
"x86:FMA }

Versus a bunch of CIM:InstructionSetExtensionName:x86:3DNow, 
CIM:InstructionSetExtensionName:x86:AVX2, ...
The latter is more a tag-style representation versus key-value pairs.
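
To make the two options concrete, a small sketch (illustrative only; Glance image 
properties are flat string key/value pairs, so a list value would need to be 
serialized, e.g. as JSON):

    import json

    # Option 1: one key whose value is a serialized list.
    list_style = {
        "CIM:InstructionSetExtensionName": json.dumps(
            ["x86:3DNow", "x86:AVX2", "x86:SSE42"]),
    }

    # Option 2: tag style -- one key per value (keys must be unique,
    # so the value is folded into the key).
    tag_style = {
        "CIM:InstructionSetExtensionName:x86:3DNow": "true",
        "CIM:InstructionSetExtensionName:x86:AVX2": "true",
        "CIM:InstructionSetExtensionName:x86:SSE42": "true",
    }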


-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Friday, December 18, 2015 11:13 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [glance] OVF/OVA introspection in Mitaka?

Greetings,

This spec [0] was started back in Liberty, and I'd like to hear from the folks 
involved in this work whether the intent to move it forward is still active, and what 
the goals are. This has been a long-standing request, and as it is being presented it 
seems to have no impact on the current API refactor we're doing. Neither in the code 
nor in the design.

I'm sending this out not only to hear from the folks involved but also from other 
members of the community. We're skipping the next couple of meetings, and I thought 
about starting this thread as we're approaching the spec freeze date.

One more note. The code this spec impacts, as mentioned above, is not going to be 
changed. The task engine (the one that uses taskflow) should not be confused with the 
task API (the one that currently triggers the task engine and that is also going 
away).

Flavio

[0] https://review.openstack.org/#/c/194868/

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [glance] Add Ian Cordasco back into glance-core

2015-12-07 Thread Bhandaru, Malini K
+1  ☺

From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Monday, December 07, 2015 8:55 AM
To: Flavio Percoco ; OpenStack Development Mailing List (not 
for usage questions) 
Subject: Re: [openstack-dev] [glance] Add Ian Cordasco back into glance-core

+2

Great to see this happen.
On 12/7/15 11:36 AM, Flavio Percoco wrote:
Greetings,

Not long ago, Ian Cordasco sent an email stepping down from his core roles, as he
didn't have the time to dedicate to the project teams he was part of.

Ian has contacted me mentioning that he's gotten clearance, and therefore time to
dedicate to Glance and other activities around our community (I'll let him expand on
this and answer questions if there are any).

As it was mentioned in the "goodbye thread" - and because Ian knows
Glance quite well already, including the processes we follow - I'd
like to propose a fast-track addition for him to join the team again.

Please, just like for every other folk volunteering for this role, do
provide your feedback on this. If no rejections are made, I'll proceed
to adding Ian back to our core team in a week from now.

Cheers,
Flavio








--
Thanks,
Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan <smuruge...@vmware.com>

2015-11-30 Thread Bhandaru, Malini K
+1 on Sabari!  :-)

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Monday, November 23, 2015 12:21 PM
To: openstack-dev@lists.openstack.org
Cc: Sabari Kumar Murugesan 
Subject: [openstack-dev] [all][glance] Add Sabari Kumar Murugesan 


Greetings,

I'd like to propose adding Sabari Kumar Murugesan to the glance-core team. 
Sabari has been contributing quite a bit to the project with great reviews, and he 
has also been providing great feedback on matters related to the design of the 
service, the libraries, and other areas of the team.

I believe he'd be a great addition to the glance-core team, as he has demonstrated a 
good knowledge of the code, the service, and the project's priorities.

If Sabari accepts and there are no objections from other members of the community, 
I'll proceed to add him to the team a week from now.

Thanks,
Flavio

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-06 Thread Bhandaru, Malini K
+1 on Chris's comments on implementation and API.
Migrate, if all is ideal, should keep the initial launch flavor.

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: Thursday, November 05, 2015 8:46 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Migration state machine proposal.

On 11/05/2015 08:33 AM, Andrew Laski wrote:
> On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:

>> Or more specifically, the migrate and resize API actions both call 
>> the resize function in the compute api. As Ed said, they are 
>> basically the same behind the scenes. (But the API difference is 
>> important.)
>
> Can you be a little more specific on what API difference is important to you?
> There are two differences currently between migrate and resize in the API:
>
> 1. There is a different policy check, but this only really protects the next 
> bit.
>
> 2. Resize passes in a new flavor and migration does not.
>
> Both actions result in an instance being scheduled to a new host.  If 
> they were consolidated into a single action with a policy check to 
> enforce that users specified a new flavor and admins could leave that 
> off would that be problematic for you?


To me, the fact that resize and cold migration share the same implementation is 
just that, an implementation detail.

 From the outside they are different things...one is "take this instance and 
move it somewhere else", and the other "take this instance and change its 
resource profile".

To me, the external API would make more sense as:

1) resize

2) migrate (with option of cold or live, and with option to specify a 
destination, and with option to override the scheduler if the specified 
destination doesn't pass filters)


And while we're talking, I don't understand why "allow_resize_to_same_host" 
defaults to False.  The comments in https://bugs.launchpad.net/nova/+bug/1251266
say that it's not intended to be used in production, but doesn't give a 
rationale for that statement.  If you're using local storage and you just want 
to add some more CPUs/RAM to the instance, wouldn't it be beneficial to avoid 
the need to copy the rootfs?

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [All][Glance] Mitaka Priorities

2015-11-06 Thread Bhandaru, Malini K
Hello Glance Team / Flavio,

Would you please provide a link to the Glance priorities at 
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Glance ?

Regards
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [nova] Verification of glance images before boot

2015-09-10 Thread Bhandaru, Malini K
Brianna, I can imagine a denial-of-service attack by uploading images whose 
signature is invalid if we allow them to reside in Glance in a "killed" state. This 
would be less of an issue if "killed" images did not still consume storage quota 
until actually deleted.
Also, given that MD5 is less secure, why not have the default hash be SHA-1 or SHA-2?
Regards
Malini

-Original Message-
From: Poulos, Brianna L. [mailto:brianna.pou...@jhuapl.edu] 
Sent: Wednesday, September 09, 2015 9:54 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: stuart.mcla...@hp.com
Subject: Re: [openstack-dev] [glance] [nova] Verification of glance images 
before boot

Stuart is right about what will currently happen in Nova when an image is 
downloaded, which protects against unintentional modifications to the image 
data.

What is currently being worked on is adding the ability to verify a signature 
of the checksum.  The flow of this is as follows:
1. The user creates a signature of the "checksum hash" (currently MD5) of the 
image data offline.
2. The user uploads a public key certificate, which can be used to verify the 
signature to a key manager (currently Barbican).
3. The user creates an image in glance, with signature metadata properties.
4. The user uploads the image data to glance.
5. If the signature metadata properties exist, glance verifies the signature of 
the "checksum hash", including retrieving the certificate from the key manager.
6. If the signature verification fails, glance moves the image to a killed 
state, and returns an error message to the user.
7. If the signature verification succeeds, a log message indicates that it 
succeeded, and the image upload finishes successfully.

8. Nova requests the image from glance, along with the image properties, in 
order to boot it.
9. Nova uses the signature metadata properties to verify the signature (if a 
configuration option is set).
10. If the signature verification fails, nova does not boot the image, but 
errors out.
11. If the signature verification succeeds, nova boots the image, and a log 
message notes that the verification succeeded.
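
For step 1 above, a minimal offline sketch (assuming an RSA key pair and the Python 
'cryptography' library; the exact property names and algorithms Glance expects are 
defined by the spec, not by this snippet):

    import hashlib

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # The "checksum hash" of the image data (currently MD5 in Glance).
    with open("image.qcow2", "rb") as f:
        checksum = hashlib.md5(f.read()).hexdigest()

    # Sign the checksum with the user's private key; the matching public-key
    # certificate is what gets uploaded to the key manager (Barbican).
    with open("signing_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    signature = key.sign(
        checksum.encode("utf-8"),
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print(signature.hex())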

Regarding what is currently in Liberty, the blueprint mentioned [1] has merged, 
and code [2] has also been merged in glance, which handles steps
1-7 of the flow above.

For steps 7-11, there is currently a nova blueprint [3], along with code [4], 
which are proposed for Mitaka.

Note that we are in the process of adding official documentation, with examples 
of creating the signature as well as the properties that need to be added for 
the image before upload.  In the meantime, there's an etherpad that describes 
how to test the signature verification functionality in Glance [5].

Also note that this is the initial approach, and there are some limitations.  
For example, ideally the signature would be based on a cryptographically secure 
(i.e. not MD5) hash of the image.  There is a spec in glance to allow this hash 
to be configurable [6].

[1]
https://blueprints.launchpad.net/glance/+spec/image-signing-and-verificatio
n-support
[2]
https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb
4b04ff6e
[3] https://review.openstack.org/#/c/188874/
[4] https://review.openstack.org/#/c/189843/
[5]
https://etherpad.openstack.org/p/liberty-glance-image-signing-instructions
[6] https://review.openstack.org/#/c/191542/


Thanks,
~Brianna




On 9/9/15, 12:16 , "Nikhil Komawar"  wrote:

>That's correct.
>
>The size and the checksum are to be verified outside of Glance, in this 
>case Nova. However, you may want to note that it's not necessary that 
>all Nova virt drivers would use py-glanceclient so you would want to 
>check the download specific code in the virt driver your Nova 
>deployment is using.
>
>Having said that, essentially the flow seems appropriate. An error must be 
>raised on mismatch.
>
>The signing BP was to help prevent the compromised Glance from changing 
>the checksum and image blob at the same time. Using a digital 
>signature, you can prevent download of compromised data. However, the 
>feature has just been implemented in Glance; Glance users may take time to 
>adopt.
>
>
>
>On 9/9/15 11:15 AM, stuart.mcla...@hp.com wrote:
>>
>> The glance client (running 'inside' the Nova server) will 
>> re-calculate the checksum as it downloads the image and then compare 
>> it against the expected value. If they don't match an error will be raised.
>>
>>> How can I know that the image that a new instance is spawned from - 
>>> is actually the image that was originally registered in glance - and 
>>> has not been maliciously tampered with in some way?
>>>
>>> Is there some kind of verification that is performed against the 
>>> md5sum of the registered image in glance before a new instance is spawned?
>>>
>>> Is that done by Nova?
>>> Glance?
>>> Both? Neither?
>>>
>>> The reason I ask is some 'paranoid' security (that is their job I
>>> suppose) people have raised these questions.

Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-10 Thread Bhandaru, Malini K
Thank you! -- Malini

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Wednesday, September 09, 2015 8:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

FYI, this was granted FFE.

On 9/8/15 11:02 AM, Nikhil Komawar wrote:
> Malini,
>
> Your note on the etherpad [1] went unnoticed, as we had that sync on 
> Friday outside of our regular meeting, and the weekly meeting agenda 
> etherpad was not fit for discussion purposes.
>
> It would be nice if you all could update and comment on the spec, referencing 
> the note, or have someone send a related email here that explains how the 
> issues raised on the spec and during the Friday sync [2] were addressed.
>
> [1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstac
> k-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>
> On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Glance team on the FFE consideration.
>> We are committed to making the revisions per suggestion and separately seek 
>> help from the Flavio, Sabari, and Harsh.
>> Regards
>> Malini, Kent, and Jakub
>>
>>
>> -Original Message-
>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>> Sent: Friday, September 04, 2015 9:44 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> Hi Malini et.al.,
>>
>> We had a sync up earlier today on this topic and a few items were discussed 
>> including new comments on the spec and existing code proposal.
>> You can find the logs of the conversation here [1].
>>
>> There are 3 main outcomes of the discussion:
>> 1. We hope to get a commitment on the feature (spec and the code) that the 
>> comments would be addressed and code would be ready by Sept 18th; after 
>> which the RC1 is planned to be cut [2]. Our hope is that the spec is merged 
>> way before and implementation to the very least is ready if not merged. The 
>> comments on the spec and merge proposal are currently implementation details 
>> specific so we were positive on this front.
>> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has 
>> newer patch sets with major concerns addressed.
>> 3. We cannot commit to granting a backport to this feature so, we ask the 
>> implementors to consider using the plug-ability and modularity of the 
>> taskflow library. You may consult developers who have already worked on 
>> adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can 
>> then use those scripts and put them back in their Liberty deployments even 
>> if it's not in the standard tarball.
>>
>> Please let me know if you have more questions.
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23opensta
>> ck-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>>
>> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>>> Thank you Nikhil and Brian!
>>>
>>> -Original Message-
>>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>>> Sent: Thursday, September 03, 2015 9:42 AM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> We agreed to hold off on granting it a FFE until tomorrow.
>>>
>>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and 
>>> cast your vote.
>>>
>>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>>>> I added an agenda item for this for today's Glance meeting:
>>>>https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>>>
>>>> I'd prefer to hold my vote until after the meeting.
>>>>
>>>> cheers,
>>>> brian
>>>>
>>>>
>>>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuv...@hp.com> wrote:
>>>>
>>>>> Malini, all,
>>>>>
>>>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>>>> and implementation.
>>>>>
>>>>> I'm more than happy to realign my stand after we have updated spec 
>>>>> and a) it's agreed to be the approach as of now and b) we can 
>>>>> evaluate how much work the implementation n

Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-05 Thread Bhandaru, Malini K
Thank you, Nikhil and the Glance team, for the FFE consideration.
We are committed to making the revisions per the suggestions and will separately seek 
help from Flavio, Sabari, and Harsh.
Regards
Malini, Kent, and Jakub 


-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Friday, September 04, 2015 9:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

Hi Malini et.al.,

We had a sync up earlier today on this topic and a few items were discussed 
including new comments on the spec and existing code proposal.
You can find the logs of the conversation here [1].

There are 3 main outcomes of the discussion:
1. We hope to get a commitment on the feature (spec and the code) that the 
comments would be addressed and code would be ready by Sept 18th; after which 
the RC1 is planned to be cut [2]. Our hope is that the spec is merged way 
before and implementation to the very least is ready if not merged. The 
comments on the spec and merge proposal are currently implementation details 
specific so we were positive on this front.
2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has 
newer patch sets with major concerns addressed.
3. We cannot commit to granting a backport to this feature so, we ask the 
implementors to consider using the plug-ability and modularity of the taskflow 
library. You may consult developers who have already worked on adopting this 
library in Glance (Flavio, Sabari and Harsh). Deployers can then use those 
scripts and put them back in their Liberty deployments even if it's not in the 
standard tarball.

Please let me know if you have more questions.

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
[2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule

On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
> Thank you Nikhil and Brian!
>
> -Original Message-
> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
> Sent: Thursday, September 03, 2015 9:42 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
> proposal
>
> We agreed to hold off on granting it a FFE until tomorrow.
>
> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and 
> cast your vote.
>
> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
>> I added an agenda item for this for today's Glance meeting:
>>https://etherpad.openstack.org/p/glance-team-meeting-agenda
>>
>> I'd prefer to hold my vote until after the meeting.
>>
>> cheers,
>> brian
>>
>>
>> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuv...@hp.com> wrote:
>>
>>> Malini, all,
>>>
>>> My current opinion is -1 for FFE based on the concerns in the spec 
>>> and implementation.
>>>
>>> I'm more than happy to realign my stand after we have updated spec 
>>> and a) it's agreed to be the approach as of now and b) we can 
>>> evaluate how much work the implementation needs to meet with the revisited 
>>> spec.
>>>
>>> If we end up to the unfortunate situation that this functionality 
>>> does not merge in time for Liberty, I'm confident that this is one 
>>> of the first things in Mitaka. I really don't think there is too 
>>> much to go, we just might run out of time.
>>>
>>> Thanks for your patience and endless effort to get this done.
>>>
>>> Best,
>>> Erno
>>>
>>>> -Original Message-
>>>> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
>>>> Sent: Thursday, September 03, 2015 10:10 AM
>>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>>> usage
>>>> questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>>> addresses the comments. We would very much appreciate a +1 on the 
>>>> FFE.
>>>>
>>>> Regards
>>>> Malini
>>>>
>>>>
>>>>
>>>> -Original Message-
>>>> From: Flavio Percoco [mailto:fla...@redhat.com]
>>>> Sent: Thursday, September 03, 2015 1:52 AM
>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>>> proposal
>>>>
>>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>&g

Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-03 Thread Bhandaru, Malini K
Thank you Nikhil and Brian!

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Thursday, September 03, 2015 9:42 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

We agreed to hold off on granting it a FFE until tomorrow.

There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and cast 
your vote.

On 9/3/15 9:15 AM, Brian Rosmaita wrote:
> I added an agenda item for this for today's Glance meeting:
>https://etherpad.openstack.org/p/glance-team-meeting-agenda
>
> I'd prefer to hold my vote until after the meeting.
>
> cheers,
> brian
>
>
> On 9/3/15, 6:14 AM, "Kuvaja, Erno" <kuv...@hp.com> wrote:
>
>> Malini, all,
>>
>> My current opinion is -1 for FFE based on the concerns in the spec 
>> and implementation.
>>
>> I'm more than happy to realign my stand after we have updated spec 
>> and a) it's agreed to be the approach as of now and b) we can 
>> evaluate how much work the implementation needs to meet with the revisited 
>> spec.
>>
>> If we end up to the unfortunate situation that this functionality 
>> does not merge in time for Liberty, I'm confident that this is one of 
>> the first things in Mitaka. I really don't think there is too much to 
>> go, we just might run out of time.
>>
>> Thanks for your patience and endless effort to get this done.
>>
>> Best,
>> Erno
>>
>>> -Original Message-
>>> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
>>> Sent: Thursday, September 03, 2015 10:10 AM
>>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>>> usage
>>> questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> Flavio, first thing in the morning Kent will upload a new BP that 
>>> addresses the comments. We would very much appreciate a +1 on the 
>>> FFE.
>>>
>>> Regards
>>> Malini
>>>
>>>
>>>
>>> -Original Message-
>>> From: Flavio Percoco [mailto:fla...@redhat.com]
>>> Sent: Thursday, September 03, 2015 1:52 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>>>> Hi,
>>>>
>>>> I wanted to propose 'Single disk image OVA import' [1] feature 
>>>> proposal for exception. This looks like a decently safe proposal 
>>>> that should be able to adjust in the extended time period of 
>>>> Liberty. It has been discussed at the Vancouver summit during a 
>>>> work session and the proposal has been trimmed down as per the 
>>>> suggestions then; has been overall accepted by those present during 
>>>> the discussions (barring a few changes needed on the spec itself). 
>>>> It being a addition to already existing import task, doesn't 
>>>> involve API change or change to any of the core Image functionality as of 
>>>> now.
>>>>
>>>> Please give your vote: +1 or -1 .
>>>>
>>>> [1] https://review.openstack.org/#/c/194868/
>>> I'd like to see support for OVF being, finally, implemented in Glance.
>>> Unfortunately, I think there are too many open questions in the spec 
>>> right now to make this FFE worthy.
>>>
>>> Could those questions be answered to before the EOW?
>>>
>>> With those questions answered, we'll be able to provide a more, 
>>> realistic, vote.
>>>
>>> Also, I'd like us to evaluate how mature the implementation[0] is 
>>> and the likelihood of it addressing the concerns/comments in time.
>>>
>>> For now, it's a -1 from me.
>>>
>>> Thanks all for working on this, this has been a long time requested 
>>> format to have in Glance.
>>> Flavio
>>>
>>> [0] https://review.openstack.org/#/c/214810/
>>>
>>>
>>> --
>>> @flaper87
>>> Flavio Percoco
>>> __
>>> 
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-
>>> requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/op

Re: [openstack-dev] [Glance] loading a new conf file

2015-09-03 Thread Bhandaru, Malini K
Sorry for the spam, but email works better for us across two disparate time zones.

We are a little stuck on how to integrate a new conf file and read it, or perhaps we 
are approaching it wrong / missing something obvious.
This is with respect to 
https://review.openstack.org/#/c/194868/11/specs/liberty/ovf-lite.rst line 146.
Steps as we see them:
1)  Goal: a new conf file in /etc
2)  Provide a sample
3)  Deploy it (as part of packaging/install)
4)  Load it in the code

Any pointer, by way of a class or file name, a link, or otherwise, would be very much 
appreciated.

Regards
Malini
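
One possible approach for steps 1-4, as a minimal sketch using oslo.config (the 
option name, group, and file path below are illustrative, not something the OVF-lite 
spec mandates):

    from oslo_config import cfg

    # 1/2) Define the options; a sample file can be generated from these.
    ovf_opts = [
        cfg.ListOpt('cpu_extensions',
                    default=[],
                    help='Whitelisted CIM instruction-set extension names.'),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(ovf_opts, group='ovf')

    # 3/4) Load the deployed file and read the values.
    CONF(args=[], default_config_files=['/etc/glance/glance-ovf.conf'])
    print(CONF.ovf.cpu_extensions)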


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-03 Thread Bhandaru, Malini K
Flavio, first thing in the morning Kent will upload a new BP that addresses the 
comments. We would very much appreciate a +1 on the FFE.

Regards
Malini



-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Thursday, September 03, 2015 1:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

On 02/09/15 22:11 -0400, Nikhil Komawar wrote:
>Hi,
>
>I wanted to propose 'Single disk image OVA import' [1] feature proposal 
>for exception. This looks like a decently safe proposal that should be 
>able to adjust in the extended time period of Liberty. It has been 
>discussed at the Vancouver summit during a work session and the 
>proposal has been trimmed down as per the suggestions then; has been 
>overall accepted by those present during the discussions (barring a few 
>changes needed on the spec itself). Being an addition to the already 
>existing import task, it doesn't involve an API change or a change to any of 
>the core Image functionality as of now.
>
>Please give your vote: +1 or -1 .
>
>[1] https://review.openstack.org/#/c/194868/

I'd like to see support for OVF being, finally, implemented in Glance.
Unfortunately, I think there are too many open questions in the spec right now 
to make this FFE worthy.

Could those questions be answered before the EOW?

With those questions answered, we'll be able to provide a more, realistic, vote.

Also, I'd like us to evaluate how mature the implementation[0] is and the 
likelihood of it addressing the concerns/comments in time.

For now, it's a -1 from me.

Thanks all for working on this; it has been a long-requested format to have in 
Glance.
Flavio

[0] https://review.openstack.org/#/c/214810/


--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Security]Would people see a value in the cve-check-tool?

2015-08-11 Thread Bhandaru, Malini K
Rob, Timur, Travis, and Victor, thank you for your input! We are excited about 
the feedback.

Added [Security] to the subject per Rob's suggestion. Copied all the 
security-interested parties who responded.

Another place I see value is running the tool periodically against past releases – 
Icehouse, Juno, etc. – to catch any vulnerabilities in production systems. When we 
issue security notes we typically specify any past releases that carry the 
vulnerability, and this would be on par with that.

A developer could introduce a vulnerability in any edit, which Bandit would catch. A 
CVE check, however, does not face such an active threat, so running it once a day may 
be adequate.

Regards
Malini

From: Clark, Robert Graham [mailto:robert.cl...@hp.com]
Sent: Tuesday, August 04, 2015 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Heath, Constanza M; Ding, Jian-feng; Demeter, Michael; Bhandaru, Malini K
Subject: RE: [openstack-dev] Would people see a value in the cve-check-tool?

Hi Elena,

This is interesting work, thanks for posting it (and for posting it here on 
openstack-dev, we are trying to wind down the security ML) though maybe use the 
[Security] tag in the subject line next time.

I think this is a very interesting project, though it’s unclear to me who might 
be the targeted users for this? It seems like it would make the most sense for 
this to be in the gate. Now this could be the standard build gates (Jenkins 
etc) but I’m not sure how much sense that makes on its own, after all most 
production consumers (those who care about CVEs) of OpenStack are probably not 
consuming it vanilla from source but are more likely to be consuming it from a 
vendor who’s already packaged it up.

In the latter case, I’m sure vendors would find this tool very useful, we do 
something similar at HP today but I’m sure a tool like this would add value and 
it’s probably something we could contribute to.

As I write this I’ve realised that there would be an interesting possibility in 
the former case (putting this in the upstream OpenStack gates). It would be 
interesting to see something running that regularly checks for CVE’s in the 
libraries that _could_ be included in OpenStack, (library requirements within 
OpenStack often include more than one version) and bumps the version to the 
next safest and submits a change request for manual verification etc.

-Rob







From: Adam Heczko [mailto:ahec...@mirantis.com]
Sent: 03 August 2015 23:18
To: OpenStack Development Mailing List (not for usage questions)
Cc: Heath, Constanza M; Ding, Jian-feng; Demeter, Michael; Bhandaru, Malini K
Subject: Re: [openstack-dev] Would people see a value in the cve-check-tool?

Hi Elena, the tool looks very interesting.
Maybe try to spread out this proposal also through openstack-security@ ML.
BTW, I can't find the wrapper mentioned - am I missing something?

Regards,

Adam

On Mon, Aug 3, 2015 at 11:08 PM, Reshetova, Elena 
elena.reshet...@intel.com wrote:
Hi,

We would like to ask opinions if people find it valuable to include a 
cve-check-tool into the OpenStack continuous integration process?
A tool can be run against the package and module dependencies of OpenStack 
components and detect any CVEs (in future there are also plans to integrate 
more functionality to the tool, such as scanning of other vulnerability 
databases and etc.). It would not only provide fast detection of new 
vulnerabilities that are being released for existing dependencies, but also 
control that people are not introducing new vulnerable dependencies.

The tool is located here: https://github.com/ikeydoherty/cve-check-tool

I am attaching an example of a very simple Python wrapper for the tool, which 
is able to process formats like: 
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt
and an example of html output if you would be running it for the python module 
requests 2.2.1 version (which is vulnerable to 3 CVEs).
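A minimal sketch of such a wrapper's parsing side, assuming only the name===version
line format of upper-constraints.txt (this is not the attached script, and the
cve-check-tool invocation itself is omitted since its exact flags are not shown here):

    # Sketch only, not the attached wrapper: read an upper-constraints.txt
    # style file and emit "package version" pairs that a scanner such as
    # cve-check-tool could then be pointed at.
    import sys

    def parse_constraints(path):
        pairs = []
        with open(path) as handle:
            for line in handle:
                line = line.split(';')[0].strip()  # drop environment markers
                if not line or line.startswith('#'):
                    continue
                if '===' in line:
                    name, version = line.split('===', 1)
                    pairs.append((name.strip(), version.strip()))
        return pairs

    if __name__ == '__main__':
        for name, version in parse_constraints(sys.argv[1]):
            print('%s %s' % (name, version))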

Best Regards,
Elena.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Host Aggregate resource pool tracking

2015-07-25 Thread Bhandaru, Malini K
Thanks Jay for covering Host Aggregate Resource Pool tracking at the mid-cycle 
meetup.
I could see the implementation being very similar to the extensible resource 
tracker defined today, but I would like to understand better the value it provides:
1) Is it to quickly determine whether we are able to honor a scheduling request?
2) Is it to capture some statistics - usage trends, mean/median during the 
day/week etc for capacity planning?

To determine the actual host in such a pool, weighting conditions still apply ..
If hosts with more free resources are weighted higher, we shall spread the 
workload in the pool, and if the opposite, we shall consolidate workloads on 
active hosts.

So this deeper dive into the elements of the pool is inescapable.

I could see the use of heuristics instead of checking each of the hosts in the 
resource pool.
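Purely to illustrate the spread-versus-pack weighting above (this is not Nova's
actual weigher code, and the field names are made up):

    # Illustration only: pick a host from an aggregate's pool by weighting
    # free RAM. A positive multiplier spreads the workload across the pool,
    # a negative one consolidates it onto already-busy hosts.
    def pick_host(hosts, ram_weight_multiplier=1.0):
        """hosts: list of dicts like {'name': 'node1', 'free_ram_mb': 2048}."""
        if not hosts:
            return None
        return max(hosts,
                   key=lambda h: ram_weight_multiplier * h['free_ram_mb'])

    pool = [{'name': 'node1', 'free_ram_mb': 2048},
            {'name': 'node2', 'free_ram_mb': 8192}]
    print(pick_host(pool)['name'])        # spread: node2, most free RAM
    print(pick_host(pool, -1.0)['name'])  # pack:   node1, least free RAM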

Regards
Malini



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Liberty mid-cycle meet up.

2015-06-30 Thread Bhandaru, Malini K
Nikhil any chance we can have remote participation? Based on the agenda folks 
can remote dial in.
Regards,
Malini

From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: Tuesday, June 30, 2015 9:19 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Glance] Liberty mid-cycle meetup.

Hi all,
As discussed in the earlier Glance weekly meeting, the mid-cycle meetup for 
Glance would be at Blacksburg, VA from July 28-July30. A tentative schedule and 
some details have been put in the etherpad. Please fill in your details in the 
survey and the etherpad so as to help the site manager handle surrounding 
details.

https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup
Thanks,
Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to properly detect and fence a compromised host (and why I dislike TrustedFilter)

2015-06-23 Thread Bhandaru, Malini K
Would like to add to Shane's points below.

1) The Trust filter can be treated as an API, with different underlying 
implementations. Its default could even be Not Implemented and always return 
false.
     And nova.conf could specify use of the OAT trust implementation. This would 
not break present-day users of the functionality (a rough sketch follows after 
point 2).

2) The issue in the original bug is a VM waking up after a reboot on a host 
that has not pre-determined whether it is still trustable.
     This is essentially begging for a feature that checks that all constraints 
requested by a VM during launch are confirmed to still hold when it re-awakens, 
even if it is not going through the Nova scheduler at that point. 

     This holds even for aggregates that might be specified by geo, or even by a 
reservation such as Coke or Pepsi.
     What if a host, even without a reboot and certainly before a reboot, was 
reassigned from Coke to Pepsi? There would be cross contamination.
     Perhaps we need Nova hooks that can be registered with functions that 
check expected aggregate values.

     Better still, have libvirt functionality that makes a callback for each 
VM on a host to ensure its constraints are satisfied on start-up/boot, and on 
re-start when it comes out of pause.

     Using an aggregate for trust with a cron job to check for trust is 
inefficient in this case; trust status gets updated only on a host reboot. 
Intel TXT is a boot-time authentication.
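To make the pluggable-trust idea in point 1) concrete, here is a rough sketch; none
of these class names or the driver-selection option exist in Nova today, they are
purely illustrative.

    # Hypothetical sketch of point 1): trust as a small driver API that
    # nova.conf could point at; the OAT-backed driver is only stubbed out.
    import abc

    class TrustDriver(abc.ABC):
        """Pluggable trust check; configuration selects the implementation."""

        @abc.abstractmethod
        def is_trusted(self, host, requested_level):
            """Return True if `host` currently meets `requested_level`."""

    class NoopTrustDriver(TrustDriver):
        """Default when no attestation service is configured."""

        def is_trusted(self, host, requested_level):
            return False  # never claim trust we cannot verify

    class OATTrustDriver(TrustDriver):
        """Would call out to the OpenAttestation (OAT) server."""

        def is_trusted(self, host, requested_level):
            raise NotImplementedError("left to the OAT integration")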

Regards
Malini


-Original Message-
From: Wang, Shane [mailto:shane.w...@intel.com] 
Sent: Tuesday, June 23, 2015 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

AFAIK, TrustedFilter is using a sort of cache to cache the trusted state, which 
is designed to solve the performance issue mentioned here.

My thoughts for deprecating it are:
#1. We already have customers here in China who are using that filter. How are 
they going to do upgrade in the future?
#2. Dependency should not be a reason to deprecate a module in OpenStack, Nova 
is not a stand-alone module, and it depends on various technologies and 
libraries.

Intel is setting up the third party CI for TCP/OAT in Liberty, which is to 
address the concerns mentioned in the thread. And also, OAT is an open source 
project which is being maintained as the long-term strategy.

For the situation where a host gets compromised: OAT checks trusted or untrusted 
from the start point of boot/reboot, so it is hard for OAT to detect whether a 
host gets compromised while it is running. I don't know how to detect that 
without the filter?
Back to Michael's question, the process of the verification is done by software 
automatically when a host boots or reboots, will that be an overhead for the 
admin to have a separate job?

Thanks.
--
Shane

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com]
Sent: Wednesday, June 24, 2015 7:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] How to properly detect and fence a 
compromised host (and why I dislike TrustedFilter)

I agree. I feel like this is another example of functionality which is 
trivially implemented outside nova, and where it works much better if we don't 
do it. Couldn't an admin just have a cron job which verifies hosts, and then 
adds them to a compromised-hosts host aggregate if they're owned? I assume 
without testing it that you can migrate instances _out_ of a host aggregate you 
can't boot in?

Michael

On Tue, Jun 23, 2015 at 8:41 PM, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 Some discussion occurred over IRC about a bug which was publicly open 
 related to TrustedFilter [1] I want to take the opportunity for 
 raising my concerns about that specific filter, why I dislike it and 
 how I think we could improve the situation - and clarify everyone's
 thoughts)

 The current situation is that way : Nova only checks if one host is 
 compromised only when the scheduler is called, ie. only when 
 booting/migrating/evacuating/unshelving an instance (well, not exactly 
 all the evacuate/live-migrate cases, but let's not discuss about that 
 now). When the request goes in the scheduler, all the hosts are 
 checked against all the enabled filters and the TrustedFilter is 
 making an external HTTP(S) call to the Attestation API service (not 
 handled by Nova) for *each host* to see if the host is valid (not 
 compromised) or not.

 To be clear, that's the only in-tree scheduler filter which explicitly 
 does an external call to a separate service that Nova is not managing.
 I can see at least 3 reasons for thinking about why it's bad :

 #1 : that's a terrible bottleneck for performance, because we're 
 IO-blocking N times given N hosts (we're even not multiplexing the 
 HTTP requests)
 #2 : all the filters are checking an internal Nova state for the host 
 

Re: [openstack-dev] [glance] [nova] Glance bug with Kilo upgrade Nova

2015-06-09 Thread Bhandaru, Malini K
Flavio, would a DB script that writes an empty string, a NOP, or something 
similar instead of NULL in the column do the trick?
Then the problem degenerates to a new DB upgrade script.
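Something along these lines is what I have in mind; the table and column names are
my assumption of where Glance keeps image properties, and this is only a sketch,
not a reviewed migration:

    # Sketch of a sqlalchemy-migrate style upgrade step: replace NULL image
    # property values with an empty string so that v2 schema validation,
    # which expects strings, no longer rejects them. Names are assumed.
    from sqlalchemy import MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        properties = Table('image_properties', meta, autoload=True)
        properties.update().where(
            properties.c.value == None).values(value='').execute()  # noqa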

Regards
Malini

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Monday, June 08, 2015 11:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [glance] [nova] Glance bug with Kilo upgrade  Nova

On 08/06/15 11:46 -0400, Clayton O'Neill wrote:
We tested Kilo upgrades in our hardware dev environments last 
week and the second time through ran into this bug, which right now is 
probably a show-stopper for us.

https://bugs.launchpad.net/glance/+bug/1419823

The issue here is that the v1 Glance API allows you to create images 
with properties that are 'NULL' in the Glance database.  For example:

    glance image-create --name cirros_test --disk-format qcow2 
--container-format bare --file cirros-0.3.4-x86_64-disk.img --is-public 
True --is-protected True --progress --property description=

It's apparently also fairly easy to end up with a NULL description when 
editing image properties via Horizon.

The issue is that the v2 Glance API returns these NULL properties to 
the client, which then validates them against the schema returned by the v2 
API.
This schema specifies that the description property *must* be a string.

In the Kilo release, Nova has been changed to use the v2 API, so 
suddenly this matters.  The net effect is that end users can pretty 
easily create properties with NULL values, and then won't be able to boot 
instances using those images.
What makes this worse is that it's completely opaque to end users, 
since this just reports that no node was available to schedule the instance.

However, Nova *only* uses the v2 api to list images if the 
glance.allowed_direct_url_schemes config key is set in the config file.
However, this config item defaults to an empty array, meaning that by 
default it's *always* set.  There doesn't appear to be a way to unset a 
value with oslo-config that has a default value, blocking off that 
route to work around the issue.  Disabling the v2 Glance API we don't 
think will work, since Nova appears to assume the v2 API is available.

AFAIK, Nova supports V2 image-lists since before Juno when the 
allowed_direct_url_schemes config option is set. Are you referring to another 
change? Has the default been changed in Nova? I'm asking because I was working 
on this migration to V2 and we decided to postpone it to L.


Another work around we've looked at is to change the DB schema for 
image properties (yuck) to not allow NULL values.  This results in 
Glance returning a
500 error since glance-api is attempting to insert an invalid value.  
This is better than instances failing in an opaque fashion, but still pretty 
horrible.

Has anyone else run into this issue yet?  Are there other work arounds 
that we've not thought of other than Don't create images with NULL 
properties?
User education is definitely an option, but given the failure mode, 
it's not a great solution for us.

I believe this needs to be fixed in the client rather than the API and/or the 
schemas. I'll take a look at this right away.

Another workaround could be updating `schema-image.json` to add the schemas 
that are missing and let the client download the final schema from the V2 API.

Keep an eye on the bug, patches coming your way (assuming what I have in mind 
will work).
Flavio

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [nova] Glance bug with Kilo upgrade Nova

2015-06-09 Thread Bhandaru, Malini K
A DB migrate script could put some token default string only into glance 
properties that are NULL. It should not change anything else.
Hope it works, Flavio.
Regards
Malini

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Tuesday, June 09, 2015 12:33 AM
To: Bhandaru, Malini K
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [glance] [nova] Glance bug with Kilo upgrade  Nova

On 09/06/15 09:22 +0200, Flavio Percoco wrote:
On 09/06/15 07:09 +, Bhandaru, Malini K wrote:
Flavio, would a DB script that writes an empty string or NOP or something 
instead of NULL In the column do the trick?
Then the problem degenerates to a new DB upgrade script.

I now remembered about a change[0] - that I wrote myself - that 
required a bump on the API version - which we did - that allows None 
values to be returned in the API. This is probably what is causing this 
behavior.

A DB migration would certainly work, I'd love to avoid it but I guess 
that's the best solution in this case.

Actually, we can't just migrate the database as there might be custom 
properties that were explicitly set to None.

I'll keep you posted


Cheers,
Flavio

[0] https://review.openstack.org/#/c/138184/


Regards
Malini

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: Monday, June 08, 2015 11:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [glance] [nova] Glance bug with Kilo 
upgrade  Nova

On 08/06/15 11:46 -0400, Clayton O'Neill wrote:
We tested Kilo upgrades in our hardware dev environments last 
week and the second time through ran into this bug, which right now is 
probably a show-stopper for us.

https://bugs.launchpad.net/glance/+bug/1419823

The issue here is that the v1 Glance API allows you to create images 
with properties that are 'NULL' in the Glance database.  For example:

    glance image-create --name cirros_test --disk-format qcow2 
--container-format bare --file cirros-0.3.4-x86_64-disk.img 
--is-public True --is-protected True --progress --property 
description=

It's apparently also fairly easy to end up with a NULL description 
when editing image properties via Horizon.

The issue is that the v2 Glance API returns these NULL properties to 
the client, which then validates them against the schema returned by the v2 
API.
This schema specifies that the description property *must* be a string.

In the Kilo release, Nova has been changed to use the v2 API, so 
suddenly this matters.  The net effect is that end users can pretty 
easily create properties with NULL values, and then won't be able to boot 
instances using those images.
What makes this worse is that it's completely opaque to end users, 
since this just reports that no node was available to schedule the instance.

However, Nova *only* uses the v2 api to list images if the 
glance.allowed_direct_url_schemes config key is set in the config file.
However, this config item defaults to an empty array, meaning that by 
default it's *always* set.  There doesn't appear to be a way to unset 
a value with oslo-config that has a default value, blocking off that 
route to work around the issue.  Disabling the v2 Glance API we don't 
think will work, since Nova appears to assume the v2 API is available.

AFAIK, Nova supports V2 image-lists since before Juno when the 
allowed_direct_url_schemes config option is set. Are you referring to another 
change? Has the default been changed in Nova? I'm asking because I was 
working on this migration to V2 and we decided to postpone it to L.


Another work around we've looked at is to change the DB schema for 
image properties (yuck) to not allow NULL values.  This results in 
Glance returning a
500 error since glance-api is attempting to insert an invalid value. 
This is better than instances failing in an opaque fashion, but still pretty 
horrible.

Has anyone else run into this issue yet?  Are there other work 
arounds that we've not thought of other than Don't create images with NULL 
properties?
User education is definitely an option, but given the failure mode, 
it's not a great solution for us.

I believe this needs to be fixed in the client rather than the API and/or the 
schemas. I'll take a look at this right away.

Another workaround could be updating `schema-image.json` to add the schemas 
that are missing and let the client download the final schema from the V2 API.

Keep an eye on the bug, patches coming your way (assuming what I have in mind 
will work).
Flavio

--
@flaper87
Flavio Percoco

--
@flaper87
Flavio Percoco



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin

Re: [openstack-dev] [Glance][Keystone] Glance and trusts

2015-06-05 Thread Bhandaru, Malini K
Continuing with David's example and the need to control access to a Swift 
object that Adam points out,

How about using the Glance token from the glance-api service to glance-registry, but 
carrying along extra data in the call, namely user-id, domain, and public/private 
information, so the object can be access controlled?

Alternately, an encapsulating token:

Glance-token wrapping the user-token -- keeping it simple, only two levels.  This 
protects against user-tokens that are on the cusp of expiring.
Glance could check the user quota before attempting the storage.

Should the user not have paid dues, Glance knows which objects to garbage collect!

Regards
Malini

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Friday, June 05, 2015 4:11 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Keystone] Glance and trusts

On 06/05/2015 10:39 AM, Dolph Mathews wrote:

On Thu, Jun 4, 2015 at 1:54 AM, David Chadwick 
d.w.chadw...@kent.ac.uk wrote:
I did suggest another solution to Adam whilst we were in Vancouver, and
this mirrors what happens in the real world today when I order something
from a supplier and a whole supply chain is involved in creating the end
product that I ordered. This is not too dissimilar to a user requesting
a new VM. Essentially each element in the supply chain trusts the two
adjacent elements. It has contracts with both its customers and its
suppliers to define the obligations of each party. When something is
ordered from it, it trusts the purchaser, and on the strength of this,
it will order from its suppliers. Each element may or may not know who
the original customer is, but if it needs to know, it trusts the
purchaser to tell it. Furthermore the customer does not need to delegate
any of his/her permissions to his/her supplier. If we used such a system
of trust between Openstack services, then we would not need delegation
of authority and trusts as they are implemented today. It could
significantly simplify the interactions between OpenStack services.

+1! I feel like this is the model that we started with in OpenStack, and have 
grown additional complexity over time without much benefit.

We could roll Glance into Nova, too, and get the same benefit.  There is a 
reason we have separate services.  Glance should not trust Nova for all 
operations, just some.

David's example elides the fact that there  are checks built in to the supply 
chain system to prevent cheating.







regards
David

On 03/06/2015 21:03, Adam Young wrote:
 On 06/02/2015 12:57 PM, Mikhail Fedosin wrote:
 Hello! I think it's a good time to discuss implementation of trusts in
 Glance v2 and v3 api.

 Currently we have two different situations during image creation where
 our token may expire, which leads to unsuccessful operation.

 First is connection between glance-api and glance-registry. In this
 case we have a solution (https://review.openstack.org/#/c/29967/) -
 use_user_token parameter in glance-api.conf, but it is True by default
 . If it's changed to False then glance-api will use its own
 credentials to authorize in glance-registry and it prevents many
 possible issues with user token expiration. So, I'm interested if
 there are some performance degradations if we change use_user_token to
 False and what are the reasons against making it the default value.

 Second one is linked with Swift. Current implementation uploads chunks
 one by one and requires authorization each time. It may lead to
 problems: for example we have to upload 100 chunks, after 99th one,
 token expired and glance can't upload the last one, catches an
 exception and tries to remove stale chunks from storage. Of course it
 will fail, because token is not valid anymore, and that's why there
 will be 99 garbage objects in the storage.
 With Single-tenant mode glance uses its own credentials to upload
 files, so it's possible to create new connection on each chunk upload
 or catch Unauthorized exception and recreate connections only in that
 cases. But with Multi-tenant mode there is no way to do it, because
 user credentials are required. So it seems that trusts is the only one
 solution here.
 The problem with using trusts is that it would need to be created
 per-user, and that is going to be expensive.  It would be possible, as
 Heat does something of this nature:

 1. User calls glance,
 2. Glance creates a trust with some limitation, either time or number of
 uses
 3.  Trusts are used for all operations with swift.
 4. Glance should clean up the trust when it is complete.

 I don't love the solution, but I think it is the best we have.  Ideally
 the user would opt in to the trust, but in this case, it is kindof
 implicit by them calling the API.


 We should limit the trust creation to only have those roles (or a
 subset) on the token used to create the trust.




 I would be happy to hear your opinions on that matter. If you know
 other situations where trusts are useful or some other approaches
 please share.

 Best 

Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-03 Thread Bhandaru, Malini K
Hello Sean!

+1 on defaults, resource-url style entries, hierarchy

But, in the interest of staying declarative, I am not comfortable with having 
default policies in code.
I would rather have a default nova policy.json file in the nova code base and 
if no policy.json is supplied, have the nova code
copy over this default to the /etc location, and log the same.

Admin-related access changes are easier to determine in the custom policy.json, 
but with the introduction of roles, which could act as aliases, policy.json can 
easily be morphed to become more promiscuous or ultra stringent, which is harder 
to determine and alert on.

Also, in the context of dynamic policies, where policy changes that take newly 
introduced roles into consideration can be made via API, I can see policy changes 
being saved in the database and logged, but for ease of use and review it would 
also be nice to write them out to a policy.json file, one per project.
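Roughly what I am picturing for the default-file approach; the paths below are
illustrative, not an actual Nova patch:

    # Sketch only: ship a default policy.json with the code and install it
    # into /etc on startup if the operator has not supplied one, logging it.
    import logging
    import os
    import shutil

    LOG = logging.getLogger(__name__)
    SHIPPED_DEFAULT = '/usr/share/nova/policy.json'  # assumed install path
    OPERATOR_COPY = '/etc/nova/policy.json'

    def ensure_policy_file():
        if not os.path.exists(OPERATOR_COPY):
            shutil.copy(SHIPPED_DEFAULT, OPERATOR_COPY)
            LOG.warning('No policy.json supplied; copied the default from %s',
                        SHIPPED_DEFAULT)
        return OPERATOR_COPY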

Thanks
Malini

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Wednesday, June 03, 2015 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from 
the Nova team

On 2 June 2015 at 17:22, Sean Dague s...@dague.net wrote:
 Nova has a very large API, and during the last release cycle a lot of 
 work was done to move all the API checking properly into policy, and 
 not do admin context checks at the database level. The result is a 
 very large policy file - 
 https://github.com/openstack/nova/blob/master/etc/nova/policy.json

In summary, we need to make it easier for the deployer configuring the policy 
to do the right thing.

The plan to remove the ability to turn off API extensions, so we get the Nova 
API back to a single official (microversioned) API, will make it more important 
that its easy to maintain policy tweaks.

 This provides a couple of challenges. One of which is in recent 
 defcore discussions some deployers have been arguing that the 
 existence of policy files means that anything you can do with 
 policy.json is valid and shouldn't impact trademark usage, because the 
 knobs were given. Nova specifically states this is not ok - 
 https://github.com/openstack/nova/blob/master/doc/source/devref/policy
 _enforcement.rst#existed-nova-api-being-restricted
 however, we'd like to go a step further here.

 What we'd really like is sane defaults for policy that come from code, 
 not from etc files. So that a Nova deploy with an empty policy.json is 
 completely valid, and does a reasonable thing.

 Policy.json would then be just a set of overrides for existing policy.
 That would make it a lot more clear what was changed from the existing 
 policy.

 We'd also really like the policy system to be able to WARN when the 
 server starts if the policy was changed in some way that could 
 negatively impact compatibility of the system, i.e. if functions that 
 we felt were essential were turned off. Because the default policy is 
 in code, we could have a view of the old and new world and actually 
 warn the Operator that they did a weird thing.

 Lastly, we'd actually really like to redo our policy to look more like 
 resource urls instead of extension names, as this should be a lot more 
 sensible to the administrators, and hopefully make it easier to think 
 about policy. Which I think means an aliasing facility in oslo.policy 
 to allow a graceful transition for users. (This may exist, I don't know).

+1 to all that.

One more thing to help those maintaining a policy that has several levels of 
admin (frankly the most acceptable use of policy tweaks, and something we 
might want to encode into our defaults at some point if clear patterns emerge).

I think we need more hierarchy in the policy. For example, if you want to 
disable all floating ip actions, it would be nice if that was a single policy 
change. Basically having all floating ip actions inherit from the top level 
policy (i.e. the actions default to the top level policy, and have overrides 
when required). As we add extra API actions, or extra, more granular policy 
items, it should default in a way that's easy to understand across an upgrade.
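A sketch of that kind of inheritance, written as oslo.policy rule: references (the
rule names here are illustrative, not Nova's real policy keys):

    # Illustrative policy fragment: one top-level switch that the individual
    # floating IP actions default to, so disabling the family is one change.
    FLOATING_IP_POLICY = {
        "compute:floating_ips": "rule:admin_or_owner",
        # the children inherit from the family rule unless overridden
        "compute:floating_ips:create": "rule:compute:floating_ips",
        "compute:floating_ips:delete": "rule:compute:floating_ips",
        "compute:floating_ips:associate": "rule:compute:floating_ips",
    }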

 I'm happy to write specs here, but mostly wanted to have the 
 discussion on the list first to ensure we're all generally good with this 
 direction.

Thanks for the awesome summary here.

I have added this to the list of post-summit actions I am (still!) compiling, 
in the section where we need folks to step up and own stuff:
https://etherpad.openstack.org/p/YVR-nova-liberty-summit-action-items

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-28 Thread Bhandaru, Malini K
Victor, reproducing John 's liason message. Copied John.

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Saturday, May 09, 2015 3:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs 
in nova

On 6 May 2015 at 19:04, John Villalovos openstack@sodarock.com wrote:
 JohnG,

 I work on Ironic and would be willing to be a cross project liaison 
 for Nova and Ironic.  I would just need a little info on what to do 
 from the Nova side.  Meetings to attend, web pages to monitor, etc...

 I assume I would start with this page:
 https://bugs.launchpad.net/nova/+bugs?field.tag=ironic

 And try to work with the Ironic and Nova teams on getting bugs resolved.

 I would appreciate any other info and suggestions to help improve the 
 process.

Thank you for stepping up to help us here.

I have added your name on this list:
https://wiki.openstack.org/wiki/Nova#People
(if you can please add your IRC handle for me, that would be awesome)

In terms of whats required, thats really up to what works for you.

The top things that come to mind:
* Raise ironic questions to nova in nova-meetings (and maybe v.v.)
* For ironic features that need exposing in Nova, track those
* Help triage Nova's ironic bugs into Nova bugs and Ironic bugs
* Try to find folks to fix important ironic bugs

But fundamentally, lets just see what works, and evolve the role to match what 
works for you.

I hope that helps.

Thanks,
John

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Thursday, May 28, 2015 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic 
liaison

On 28/05/15 09:20 +0200, Victor Stinner wrote:
Oh, on the wiki page I read that The liaison should be a core reviewer 
for the project, but does not need to be the PTL.. I'm not a core 
reviewer for nova. Is it an issue?

This was more like a general recommendation, I guess. I don't think the liaison 
has to be a core reviewer at all.

Cheers,
Flavio


On the wiki page, I see that John Villalovos (happycamp) is the Nova 
liaison for Oslo, not Joe Gordon. I don't understand.

Victor

Le 27/05/2015 20:44, Joe Gordon a écrit :


On Wed, May 27, 2015 at 3:20 AM, Davanum Srinivas dava...@gmail.com wrote:

Victor,

Nice, yes, Joe was the liaison with Nova so far. Yes, please go ahead
and add your name in the wiki for Nova as i believe Joe is winding
down the oslo liaison as well.
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo



Yup, thank you Victor!



thanks,
dims

On Wed, May 27, 2015 at 5:12 AM, Victor Stinner vstin...@redhat.com wrote:
  Hi,
 
  By the way, who is the oslo liaison for nova? If there is
nobody, I would
  like to take this position.
 
  Victor
 
  Le 25/05/2015 18:45, Ghe Rivero a écrit :
 
  My focus on the Ironic project has been decreasing in the last
cycles,
  so it's about time to relinquish my position as a oslo-ironic
liaison so
  new contributors can take over it and help ironic to be the vibrant
  project it is.
 
  So long, and thanks for all the fish,
 
  Ghe Rivero
  --
  Pinky: Gee, Brain, what do you want to do tonight?
  The Brain: The same thing we do every night, Pinky—try to take
over the
  world!
 
.''`.  Pienso, Luego Incordio
  : :' :
  `. `'
 `-    www.debian.org    www.openstack.com
 
  GPG Key: 26F020F7
  GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic liaison

2015-05-28 Thread Bhandaru, Malini K
Victor -- an error on my part for filling in the wrong table.
I put your name instead of John's. Thanks to Tan Lin for pointing out my error.
All the best.
Regards
Malini

-Original Message-
From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com] 
Sent: Thursday, May 28, 2015 1:05 AM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic 
liaison

Victor, reproducing John 's liason message. Copied John.

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com]
Sent: Saturday, May 09, 2015 3:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Ironic] Large number of ironic driver bugs 
in nova

On 6 May 2015 at 19:04, John Villalovos openstack@sodarock.com wrote:
 JohnG,

 I work on Ironic and would be willing to be a cross project liaison 
 for Nova and Ironic.  I would just need a little info on what to do 
 from the Nova side.  Meetings to attend, web pages to monitor, etc...

 I assume I would start with this page:
 https://bugs.launchpad.net/nova/+bugs?field.tag=ironic

 And try to work with the Ironic and Nova teams on getting bugs resolved.

 I would appreciate any other info and suggestions to help improve the 
 process.

Thank you for stepping up to help us here.

I have added your name on this list:
https://wiki.openstack.org/wiki/Nova#People
(if you can please add your IRC handle for me, that would be awesome)

In terms of whats required, thats really up to what works for you.

The top things that come to mind:
* Raise ironic questions to nova in nova-meetings (and maybe v.v.)
* For ironic features that need exposing in Nova, track those
* Help triage Nova's ironic bugs into Nova bugs and Ironic bugs
* Try to find folks to fix important ironic bugs

But fundamentally, lets just see what works, and evolve the role to match what 
works for you.

I hope that helps.

Thanks,
John

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: Thursday, May 28, 2015 12:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][oslo] Stepping down from oslo-ironic 
liaison

On 28/05/15 09:20 +0200, Victor Stinner wrote:
Oh, on the wiki page I read that The liaison should be a core reviewer 
for the project, but does not need to be the PTL.. I'm not a core 
reviewer for nova. Is it an issue?

This was more like a general recommendation, I guess. I don't think the liaison 
has to be a core reviewer at all.

Cheers,
Flavio


On the wiki page, I see that John Villalovos (happycamp) is the Nova 
liaison for Oslo, not Joe Gordon. I don't understand.

Victor

Le 27/05/2015 20:44, Joe Gordon a écrit :


On Wed, May 27, 2015 at 3:20 AM, Davanum Srinivas dava...@gmail.com wrote:

Victor,

Nice, yes, Joe was the liaison with Nova so far. Yes, please go ahead
and add your name in the wiki for Nova as i believe Joe is winding
down the oslo liaison as well.
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Oslo



Yup, thank you Victor!



thanks,
dims

On Wed, May 27, 2015 at 5:12 AM, Victor Stinner vstin...@redhat.com wrote:
  Hi,
 
  By the way, who is the oslo liaison for nova? If there is
nobody, I would
  like to take this position.
 
  Victor
 
  Le 25/05/2015 18:45, Ghe Rivero a écrit :
 
  My focus on the Ironic project has been decreasing in the last
cycles,
  so it's about time to relinquish my position as a oslo-ironic
liaison so
  new contributors can take over it and help ironic to be the vibrant
  project it is.
 
  So long, and thanks for all the fish,
 
  Ghe Rivero
  --
  Pinky: Gee, Brain, what do you want to do tonight?
  The Brain: The same thing we do every night, Pinky—try to take
over the
  world!
 
.''`.  Pienso, Luego Incordio
  : :' :
  `. `'
 `-    www.debian.org    www.openstack.com
 
  GPG Key: 26F020F7
  GPG fingerprint: 4986 39DA D152 050B 4699  9A71 66DB 5A36 26F0 20F7
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http

Re: [openstack-dev] [Neutron] reminder: service chaining feature development meeting at 10am pacific time today May 5th

2015-05-05 Thread Bhandaru, Malini K
I am happy with the big picture and API being defined now .. to be complete and 
inclusive, but with use cases that we prioritize and implement in that priority order.

10:35 AM
vikram
+1

10:38 AM
Me
https://openstacksummitmay2015vancouver.sched.org/event/11286c1fb47118f09cdd178aeee6b946

10:38 AM
Me
Can have a lightning session

10:40 AM
Yamahata, Isaku
https://wiki.openstack.org/wiki/Summit/Liberty/Etherpads#Neutron

10:40 AM
Yamahata, Isaku
https://etherpad.openstack.org/p/YVR-neutron-nfv-enhancements

10:41 AM
Joe D'Andrea
Thanks for the links!


From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Tuesday, May 05, 2015 9:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] reminder: service chaining feature 
development meeting at 10am pacific time today May 5th

Hello everyone,

Some of you have reached out to me asking when we will start meeting to discuss 
the service chaining feature BPs in OpenStack.

I have set up a GoToMeeting for an audio discussion so that we can 
sync up thoughts and bring everyone onto the same page in a more efficient way. 
The meeting will be 10am~11am May 5th Pacific time. Anyone who has interest in 
this feature development is welcome to join the meeting and contribute together 
to the service chain feature in OpenStack. Hope the time is good for most 
people.


OpenStack BP discussion for service chaining
Please join the meeting from your computer, tablet or smartphone.
https://global.gotomeeting.com/join/199553557, meeting password: 199-553-557
You can also dial in using your phone.
United States +1 (224) 501-3212
Access Code: 199-553-557

-
Following are the links to the Neutron related service chain specs and the bug 
IDs. Feel free to sign up and add you comments/input to the BPs.
https://review.openstack.org/#/c/177946
https://bugs.launchpad.net/neutron/+bug/1450617
https://bugs.launchpad.net/neutron/+bug/1450625



Just FYI. There is an approved service chain project in OPNFV. Here is the link 
to the project page. It will be good to sync up the service chain work between 
the two communities. 
https://wiki.opnfv.org/requirements_projects/openstack_based_vnf_forwarding_graph



Thanks,

Cathy



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] Versioned objects backporting

2015-05-04 Thread Bhandaru, Malini K
In the discussion of N nodes at version V needing to get upgraded to V+1,
I do not see the issue of loss of HA.
These N nodes are the servers; the clients are the ones still at version V. 
Does it not make sense to upgrade all the servers to V+1
(cross-checking against the database that all servers are upgraded),
then start on the clients, and only turn off compatibility mode once all
clients are upgraded (again cross-checking against the database)?
Also, it would not be a server reboot, it would just be a restart of a single 
service on the servers?
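For reference, a tiny sketch of how I read the milestone pinning plus compatibility
flag idea; the milestone numbers and object versions below are made up, and it
assumes oslo versioned objects' obj_to_primitive(target_version=...):

    # Illustration only: pin RPC object versions to a milestone so that nodes
    # run with --compatibility=<milestone> backport objects before sending.
    MILESTONE_PINS = {
        '15.1': {'Stack': '1.1', 'Resource': '1.5'},
    }

    def serialize_for_rpc(obj, compatibility=None):
        """Downgrade a versioned object before it goes on the wire."""
        if compatibility:
            target = MILESTONE_PINS[compatibility].get(obj.obj_name())
            if target:
                return obj.obj_to_primitive(target_version=target)
        return obj.obj_to_primitive()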

Regards
Malini

-Original Message-
From: Jastrzebski, Michal [mailto:michal.jastrzeb...@intel.com] 
Sent: Monday, May 04, 2015 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][heat] Versioned objects backporting

On 5/4/2015 at 11:50 AM, Angus Salkeld wrote: On Mon, May 4, 2015 at 6:33 
PM, Jastrzebski, Michal 
 michal.jastrzeb...@intel.com wrote:
 
 On 5/4/2015 at 8:21 AM, Angus Salkeld wrote: On Thu, Apr 30,
 2015 at 9:25 PM, Jastrzebski, Michal
   michal.jastrzeb...@intel.com wrote:
  
   Hello,
  
   After discussions, we've spotted possible gap in versioned
 objects:
   backporting of too-new versions in RPC.
   Nova does that by conductor, but not every service has something
   like that. I want to propose another approach:
  
   1. Milestone pinning - we need to make single reference to
 versions
   of various objects - for example heat in version 15.1 will mean
   stack in version 1.1 and resource in version 1.5.
   2. Compatibility mode - this will add flag to service
   --compatibility=15.1, that will mean that every outgoing RPC
   communication will be backported before sending to object
 versions
   bound to this milestone.
  
   With this 2 things landed we'll achieve rolling upgrade like
 that:
   1. We have N nodes in version V
   2. We take down 1 node and upgrade code to version V+1
   3. Run code in ver V+1 with --compatibility=V
   4. Repeat 2 and 3 until every node will have version V+1
   5. Restart each service without compatibility flag
  
   This approach has one big disadvantage - 2 restarts required, but
   should solve problem of backporting of too-new versions.
   Any ideas? Alternatives?
  
  
   AFAIK if nova gets a message that is too new, it just forwards it on
   (and a newer server will handle it).
  
   With that this *should* work, shouldn't it?
   1. rolling upgrade of heat-engine
 
 That will be hard part. When we'll have only one engine from given
 version, we lose HA. Also, since we never know where given task
 lands, we might end up with one task bouncing from old version to
 old version, making the call indefinitely long. Ofc with each upgraded
 engine we'll lessen the chance of that happening, but I think we should
 aim for lowest possible downtime. That being said, that might be
 good idea to solve this problem not-too-clean, but quickly.
 
 
 I don't think losing HA in the time it takes some heat-engines to 
 stop, install new software and restart the heat-engines is a big deal (IMHO).
 
 -Angus

We will also lose the guarantee that this RPC call will be completed in any given 
time. It can bounce from incompatible node to incompatible node until there are 
no incompatible nodes. Especially if there are no other tasks on queue and when 
service returns it to queue and takes call right afterwards, there is good 
chance that it will take this particular one, and we'll get loop out there.

 
 
  2. db sync
  3. rolling upgrade of heat-api
 
  -Angus
 
 
  Regards,
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

2015-04-30 Thread Bhandaru, Malini K
+1 :-)

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Thursday, April 30, 2015 6:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Proposal to add Melanie Witt to nova-core

On 04/30/2015 07:30 AM, John Garbutt wrote:
 Hi,

 I propose we add Melanie to nova-core.

 She has been consistently doing great quality code reviews[1], 
 alongside a wide array of other really valuable contributions to the 
 Nova project.

 Please respond with comments, +1s, or objections within one week.

+1. Mel's an excellent reviewer and has caught a number of mistakes I've
made during reviews. That's always my personal sign of a great potential core :)

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] HELP -- Please review some Kilo bug fixes

2015-04-16 Thread Bhandaru, Malini K
Hello Nova and Neutron developers!

OpenStack China developers held a bug fest April 13-15.
They worked on 43 bugs and submitted patches for 29 of them. 
Etherpad with the bug fix details (at the bottom): 
https://etherpad.openstack.org/p/prc_kilo_nova_neutron_hackathon

Their efforts to make the Kilo release better can reach fruition only if you 
review the patches. 
Cores and PTLs, we would really appreciate your help.

In addition to making Kilo stronger, you will be acknowledging and motivating 
our China
OpenStack developer community.

Regards,
Ruchi and Malini



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GBP] PTL elections - Results

2015-03-18 Thread Bhandaru, Malini K
Hello OpenStackers!

   The nomination deadline is past .. and Sumit Naiksatam is the 
uncontested PTL of OpenStack GBP!
Congratulations Sumit and all the very best!

Regards
Malini

From: Bhandaru, Malini K
Sent: Wednesday, March 11, 2015 2:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [GBP] PTL elections


Hello OpenStackers!



To meet the requirement of an  officially elected PTL, we're running elections 
for the Group Based Policy (GBP)  PTL for Kilo and Liberty cycles. Schedule and 
policies are fully aligned with official OpenStack PTLs elections.



You can find more information in the official elections wiki page [0] and the 
same page for GBP elections [1], additionally some more info in the past 
official nominations opening email [2].



Timeline:



Till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
March 17, 2015 - 1300 UTC March 24, 2015: PTL elections



To announce your candidacy please start a new openstack-dev at 
lists.openstack.org mailing list thread with the following subject:

[GBP] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy/PTL_Elections_Kilo_Liberty



Thank you.



Sincerely yours,



Malini Bhandaru
Architect and Engineering Manager,
Open source Technology Center,
Intel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] what is our shipped paste.ini going to be for Kilo

2015-03-16 Thread Bhandaru, Malini K
+1 Rob. The warning could carry/indicate all the ignored attributes.

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Monday, March 16, 2015 7:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] what is our shipped paste.ini going to be 
for Kilo

On 17 March 2015 at 14:27, Ken'ichi Ohmichi ken1ohmi...@gmail.com wrote:

 I am worried about SDKs making requests that have additional JSON 
 attributes that were previously ignored by v2, but will be considered 
 invalid by the v2.1 validation code. If we were to just strip out the 
 extra items, rather than error out the request (when you don't 
 specify a microversion), I would be less worried about the 
 transition. Maybe that what we do?

 Nice point.
 That is a main difference in API behaviors between v2 and v2.1 APIs.
 If SDKs pass additional JSON attributes to Nova API now, developers 
 need to fix/remove these attributes because that is a bug on SDKs 
 side.
 These attributes are unused and meaningless, so some APIs of SDKs 
 would contain problems if passing this kind of attributes.

 Sometime it was difficult to know what are available attributes before
 v2.1 API, so The full monty approach will clarify problems of SDKs 
 and make SDKs' quality better.

 Thanks
 Ken Ohmichi

Better at the cost of forcing all existing users to upgrade just to keep using 
code of their own that already worked.

Not really 'better' IMO. Different surely.

We could (should) add Warning: headers to inform about this, but breaking isn't 
healthy IMO.

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Group-based-policy][GBP] PTL Candidacy

2015-03-13 Thread Bhandaru, Malini K
Sumit's candidacy for GBP PTL is confirmed!

Regards
Malini

-Original Message-
From: Sumit Naiksatam [mailto:sumitnaiksa...@gmail.com] 
Sent: Thursday, March 12, 2015 12:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Bhandaru, Malini K
Subject: [Group-based-policy][GBP] PTL Candidacy

Hi All,

I would like to announce my candidacy for the Group Based Policy (GBP) [1] 
project’s PTL position [2].

I have been involved with GBP for more than a year now. I was responsible for 
setting it up as a StackForge project across multiple repositories, and have 
been serving as the de facto lead. I have made contributions to the project in 
terms of design, implementation, and reviews [3].

My focus during the Kilo cycle has, and, will be to improve the quality of our 
code (reduce bug count, identify and remove obvious performance issues, clear 
technical debt), and most of all to gather feedback from users on the GBP Juno 
release. Based on this feedback, I would like to steer the project in the 
Liberty release towards better integration with other OpenStack components, and 
advanced features that will allow to comprehensively manage and automate policy 
enforcement across those components.

I have enjoyed working with each and every one of the GBP team members, and 
look forward to working with them in the formal capacity of a PTL. I am proud 
of what the team has achieved, and hope to facilitate its growth even further.

To summarize, I am very excited about this opportunity in playing a part in 
GBP’s mission to fully realize the intent-based policy-driven abstractions' 
model.

Best,
Sumit Naiksatam.
(IRC: SumitNaiksatam)

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-March/058783.html
[3] http://stackalytics.com/report/contribution/group-based-policy-group/150
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [GBP] PTL elections

2015-03-11 Thread Bhandaru, Malini K
Hello OpenStackers!



To meet the requirement of an  officially elected PTL, we're running elections 
for the Group Based Policy (GBP)  PTL for Kilo and Liberty cycles. Schedule and 
policies are fully aligned with official OpenStack PTLs elections.



You can find more information in the official elections wiki page [0] and the 
same page for GBP elections [1], additionally some more info in the past 
official nominations opening email [2].



Timeline:



Till 05:59 UTC March 17, 2015: Open candidacy to PTL positions
March 17, 2015 - 1300 UTC March 24, 2015: PTL elections



To announce your candidacy please start a new openstack-dev at 
lists.openstack.org mailing list thread with the following subject:

[GBP] PTL Candidacy.

[0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014

[1] https://wiki.openstack.org/wiki/GroupBasedPolicy/PTL_Elections_Kilo_Liberty



Thank you.



Sincerely yours,



Malini Bhandaru
Architect and Engineering Manager,
Open source Technology Center,
Intel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-04 Thread Bhandaru, Malini K
Flavio, I concur: a lively committee needs active core reviewers. Core 
status is an honor and a responsibility.
I agree it's a good idea to replace inactive cores; no offense, priorities and 
focus of developers change, and should they want to return, they can be 
fast-pathed then.
Regards
Malini

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Wednesday, March 04, 2015 4:09 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.

On 03/03/15 16:10 +, Nikhil Komawar wrote:
If it was not clear in my previous message, I would like to again 
emphasize that I truly appreciate the vigor and intent behind Flavio's 
proposal. We need to be proactive and keep making the community better in such 
regards.


However, at the same time we need to act fairly, with patience and have 
a friendly strategy for doing the same (thus maintaining a good balance 
in our progress). I should probably respond to another thread on ML 
mentioning my opinion that the community's success depends on trust 
and empathy and everyone's intent as well as effort in maintaining 
these principles. Without them, it will not take very long to make the 
situation chaotic.

I'm sorry but no. I don't think there's anything that requires more patience
than 2 (or even more) cycles without providing reviews or even any kind of
active contribution.

I personally don't think adding new cores without cleaning up that list is 
something healthy for our community, which is what we're trying to improve 
here. Therefore I'm still -2-W on adding new folks without removing non-active 
core members.

The questions I posed are still unanswered:

There are a few members who have been relatively inactive this cycle in 
terms of reviews and have been missed in Flavio's list (That list is 
not comprehensive). On what basis have some of them been missed out and 
if we do not have strong reason, are we being fair? Again, I would like 
to emphasize that, cleaning of the list in such proportions at this 
point of time does NOT look OK strategy to me.

The list contains the names of ppl that have not provided *any* kind of review 
in the last 2 cycles. If there are folks in that list that you think shouldn't 
be there, please, bring them up now. If there are folks you think *should* be 
in that list, please, bring them on now.

There's nothing impolite in what's being discussed here. The proposal is based
on the facts that these folks seem to be focused in different things now and 
that's perfectly fine.

As I mentioned in my first email, we're not questioning their knowledge but 
their focus and they are more than welcome to join again.

I do not think *counting* the stats of everyone makes sense here, we're not 
competing on who reviews more patches. That's nonsense.
We're just trying to keep the list of folks that will have the power to approve 
patches short.

To answer your concerns: (Why was this not proposed earlier in the 
cycle?)

[snip] ?

The essence of the matter is:

We need to change the dynamics slowly and with patience for maintaining 
a good balance.

As I mentioned above, I don't think we're being impatient. As a matter of fact,
some of these folks haven't been around in *years*, so, pardon my stubbornness,
but I believe we have been way too patient and I'd have loved these folks to step
down themselves.

I am infinitely thankful for these folks' past work and efforts (and hopefully
future work too), but I think it's time for us to have a clearer view of who's
working on the project.

As a last note, it's really important to keep the list of members updated; some
folks rely on it to know who the contacts for a project are.

Flavio

Best,
-Nikhil
━━━

From: Kuvaja, Erno kuv...@hp.com
Sent: Tuesday, March 3, 2015 9:48 AM
To: OpenStack Development Mailing List (not for usage questions); Daniel P.
Berrange
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.
 

Nikhil,

 

If I recall correctly this matter was discussed last time at the start
of the L-cycle, and at that time we agreed to see if there was a change of
pattern later in the cycle. There has not been one, and I do not see a
reason to postpone this again just for the courtesy of it, in the hope that
some of our older cores happen to make a review or two.

 

I think Flavio’s proposal combined with the new members would be the
right way to reinforce the momentum we’ve gained in Glance over the past few
months. I think it’s also the right message to send out to the new
cores (including you and myself ;) ) that activity is the key to maintaining such
status.

 

-  Erno

 

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 03 March 2015 04:47
To: Daniel P. Berrange; OpenStack Development Mailing List (not for 
usage
questions)
Cc: krag...@gmail.com
Subject: 

Re: [openstack-dev] [nova] Flavor extra-spec and image metadata documentation

2015-02-11 Thread Bhandaru, Malini K
Pasquale Porreca,
The flexibility/freedom to create metadata tags for images and Nova flavor
extra specs can be confusing. It even allows one to make typographical errors
that may be hard to detect.
As Daniel mentions, some tags have a definite meaning/semantics; others can be
totally random.
Tags with semantic significance will typically be handled by special-purpose
filters; look in the Nova filters directory, /opt/stack/nova/nova/scheduler/filters.
The filters documentation in the Nova admin guide may help too.
All the rest are just matched as strings.
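
To make the string matching concrete, here is a tiny, hypothetical sketch (not
the actual Nova filter code; the names and dictionaries are illustrative only)
of how a flavor extra spec can be compared against a host's reported
capabilities:

    # Hypothetical, simplified illustration of extra-spec string matching.
    # The real scheduler filters in nova/scheduler/filters/ are richer.
    def host_passes(host_capabilities, flavor_extra_specs):
        """Return True only if every extra spec matches the host's value.

        Both sides are compared as plain strings, which is why a typo in
        a key or value silently results in "no matching host found".
        """
        for key, wanted in flavor_extra_specs.items():
            if str(host_capabilities.get(key)) != str(wanted):
                return False
        return True

    host = {"trust:trusted_host": "trusted", "hypervisor_type": "QEMU"}
    flavor = {"trust:trusted_host": "trusted"}
    print(host_passes(host, flavor))  # True
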
Hope that helps.
Regards
Malini
 

-Original Message-
From: Kashyap Chamarthy [mailto:kcham...@redhat.com] 
Sent: Wednesday, February 11, 2015 2:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Flavor extra-spec and image metadata 
documentation

On Wed, Feb 11, 2015 at 10:23:54AM +0100, Pasquale Porreca wrote:
 Hello
 
 I am working on a little patch that introduce a new flavor extra-spec 
 and image metadata key-value pair 
 https://review.openstack.org/#/c/153607/
 
 I am wondering how an openstack admin can be aware that a specific 
 value of a flavor extra-spec or image metadata provides a feature he 
 may desire, in other words is there a place where the flavor 
 extra-specs and/or image metadata key-value pairs are documented?

Unfortunately, there's none as of now. I found out the hard way that you
cannot trivially find all possible 'extra_spec' key values that can be set by
`nova flavor-key`.

I did gross things like this:

    $ grep hw\: nova/virt/hardware.py nova/tests/unit/virt/test_hardware.py | sort | uniq

And, obviously the above will only find you 'hw' properties.

Daniel Berrangé once suggested that image properties and flavor extra specs 
need to be 'objectified' to alleviate this.

 I found plenty of documentation on how to list, create, delete, etc.
 flavor extra-spec and image metadata, but the only place where I found 
 a list (is that complete?) of the accepted (i.e. that trigger specific 
 feature in nova) key-value pairs is in horizon dashboard, when logged 
 with admin credential.
 
 I am a bit confused on how someone working to add a new key/value pair 
 should proceed.
 

--
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Bhandaru, Malini K
+1

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Tuesday, January 27, 2015 3:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Nominating Melanie Witt for 
python-novaclient-core

 Please respond with +1s or any concerns.

+1

--Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OVF/OVA support

2014-11-07 Thread Bhandaru, Malini K
Gosha – this is wonderful news. It complements Intel's interest.
I am in the Glance area and stopped by a couple of times; the room was available
from 2 pm onwards.
Contact made; we can continue via email and IRC.
Malini

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, November 07, 2014 8:20 AM
To: Bhandaru, Malini K
Cc: OpenStack Development Mailing List
Subject: Re: OVF/OVA support


Hi Malini,

I am interested in OVA support for applications, specifically OVA to Heat, as
this is what we usually do in the Murano project.

When is the free-format session for Glance? Should we add this to the session etherpad?

Thanks,
Gosha
On Nov 5, 2014 6:06 PM, Bhandaru, Malini K 
malini.k.bhand...@intel.commailto:malini.k.bhand...@intel.com wrote:
Please join us on Friday in the Glance track – free format session, to discuss 
supporting OVF/OVA in OpenStack.

Poll:

1)  How interested are you in this feature? 0 – 10

2)  Interested enough to help develop the feature?



Artifacts are ready for use.

We are considering defining an artifact for OVF/OVA.
What should the scope of this work be? Who are our fellow travelers?
Intel is interested in parsing OVF meta data associated with images – to ensure 
that a VM image lands on the most appropriate hardware in the cloud instance, 
to ensure optimal performance.
The goal is to remove the need to manually specify image meta data, allow the 
appliance provider to specify HW requirements, and in so doing reduce human 
error.
Are any partners interested in writing an OVF/OVA artifact = stack deployment? 
Along the lines of heat?
As a first pass, Intel we could at least

1)  Defining artifact for OVA, parsing the OVF in it, pulling out the 
images therein and storing them in the glance image database and attaching meta 
data to the same.

2)  Do not want to imply that OpenStack supports OVA/OVF -- need to be 
clear on this.

3)  An OpenStack user could create a heat template using the images 
registered in step -1

4)  OVA to Heat – there may be a loss in translation! Should we attempt 
this?

5)  What should we do with multiple volume artifacts?

6)  Are volumes read-only? Or on cloning, make copies of them?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-security] [Barbican][OSSG] Mid Cycle Attendance / Crossover.

2014-11-07 Thread Bhandaru, Malini K
+1 attend both  -- Malini

-Original Message-
From: Clark, Robert Graham [mailto:robert.cl...@hp.com] 
Sent: Friday, November 07, 2014 11:02 AM
To: OpenStack List
Cc: openstack-secur...@lists.openstack.org
Subject: [Openstack-security] [Barbican][OSSG] Mid Cycle Attendance / Crossover.

Hi All,

How many people would want to attend both the OSSG mid-cycle and the Barbican 
one? Both expected to be on the west coast of the US.

We are trying to work out how/if we should organise these events to take place 
at adjacent times and if they should be in the same location, back to back to 
reduce travel costs.

Cheers
-Rob


___
Openstack-security mailing list
openstack-secur...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OVF/OVA support

2014-11-05 Thread Bhandaru, Malini K
Please join us on Friday in the Glance track - free format session, to discuss 
supporting OVF/OVA in OpenStack.

Poll:

1)  How interested are you in this feature? 0 - 10

2)  Interested enough to help develop the feature?



Artifacts are ready for use.

We are considering defining an artifact for OVF/OVA.
What should the scope of this work be? Who are our fellow travelers?
Intel is interested in parsing OVF meta data associated with images - to ensure 
that a VM image lands on the most appropriate hardware in the cloud instance, 
to ensure optimal performance.
The goal is to remove the need to manually specify image meta data, allow the 
appliance provider to specify HW requirements, and in so doing reduce human 
error.
Are any partners interested in writing an OVF/OVA artifact = stack deployment? 
Along the lines of heat?
As a first pass, we at Intel could at least:

1)  Defining an artifact for OVA, parsing the OVF in it, pulling out the
images therein, storing them in the glance image database and attaching metadata
to the same (a rough sketch of this step follows the list below).

2)  Do not want to imply that OpenStack supports OVA/OVF -- need to be 
clear on this.

3)  An OpenStack user could create a Heat template using the images
registered in step 1.

4)  OVA to Heat - there may be a loss in translation! Should we attempt 
this?

5)  What should we do with multiple volume artifacts?

6)  Are volumes read-only? Or on cloning, make copies of them?
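
As a rough illustration of item 1 above, the sketch below (hypothetical, not
Glance code; the function name and the loose namespace handling are mine) untars
an OVA and pulls the referenced disk file names and simple properties out of the
embedded OVF descriptor:

    import tarfile
    import xml.etree.ElementTree as ET

    def inspect_ova(ova_path):
        """Untar an OVA, parse the first .ovf descriptor found, and return
        the referenced disk file names plus simple key/value properties."""
        disks, properties = [], {}
        with tarfile.open(ova_path) as tar:
            ovf = next(m for m in tar.getmembers()
                       if m.name.lower().endswith(".ovf"))
            root = ET.parse(tar.extractfile(ovf)).getroot()
        # Match tags/attributes by local name to sidestep OVF XML namespaces.
        for elem in root.iter():
            tag = elem.tag.rsplit("}", 1)[-1]
            if tag == "File":        # files referenced by the package (disks)
                disks += [v for k, v in elem.attrib.items() if k.endswith("href")]
            elif tag == "Property":  # vendor-defined metadata entries
                keys = [v for k, v in elem.attrib.items() if k.endswith("key")]
                vals = [v for k, v in elem.attrib.items() if k.endswith("value")]
                if keys:
                    properties[keys[0]] = vals[0] if vals else None
        return disks, properties

    # disks, props = inspect_ova("appliance.ova")
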
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ova support in glance

2014-07-29 Thread Bhandaru, Malini K
Hello Everyone!

We were discussing the following blueprint in Glance:
Enhanced-Platform-Awareness-OVF-Meta-Data-Import 
:https://review.openstack.org/#/c/104904/

The OVA format is very rich, and the proposal here, in its first incarnation, is
essentially to untar the OVA package, import the first disk image therein,
parse the OVF file, and attach metadata to the disk image.
There is a Nova effort in a similar vein that supports OVA, limiting its
availability to the VMware hypervisor. Our efforts will combine.

The issue raised is how many OpenStack users and OpenStack cloud
providers handle OVA packages with multiple disk images, using them as an
application.
Do your users use OVAs with content other than one disk image + OVF?
That is, do they have other files that are used? Do any of you use OVAs with
snapshot chains?
Would this solution path break your system or result in unhappy users?


If the solution addresses at least 50% of the use cases (a low bar) and eases
deploying NFV applications, it would be worthwhile.
If so, how would we message this so as not to imply that OpenStack
supports OVA in its full glory?

Down the road the Artefacts blueprint will provide a place holder for OVA. 
Perhaps even the OVA format may be transformed into a Heat template to work in 
OpenStack.

Please do provide us your feedback.
Regards
Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TXT/TCP

2014-05-08 Thread Bhandaru, Malini K
Hello Der!
Shane and I work with Gang Wei, who leads the Intel open source effort on TXT
(tboot and OAT). We would like you to include us in your emails and would be happy
to help in any way we can.
We are working with HP to jointly float a trusted bare metal blueprint for 
TripleO and would welcome more participants.
BTW, Shane and I shall also be at the summit. Day-1 I shall mostly be at the 
security track talks.
Regards
Malini

-Original Message-
From: Tynan, Dermot [mailto:ty...@hp.com] 
Sent: Thursday, May 08, 2014 3:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] TXT/TCP

Hi Robert, I work with Tom Norton and Tim Reddin in the new OpenStack Services 
group. I used to be a member of the Neutron team, and before that the Cinder 
team. In fact, my office is very close to Stephen Mulcahy. I am working on an 
Intel PoC, and they want to validate something called TXT/TCP. I'm told you 
have done some work on this. Would it be possible to have a quick chat about it 
and what it means? We may need to enable it on some compute hosts in the public
cloud. What time would be good for you? I'm GMT+1 at the moment. I will also be 
at OpenStack, if you had some time to spare there.
- Der
-- 
Dermot Tynan
Cloud Consulting Principal
OpenStack Services


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-17 Thread Bhandaru, Malini K
To add to Jarret's arguments, across OpenStack we have seen that as subsystems grow
more mature and complex through additional feature extensions, they spin off into
separate projects.
Case in point -- Neutron rose out of Nova Networking, and is marching on in
richness and community support.  Common libraries went into Oslo. The Nova
scheduler is currently being forklifted into a service of its own called gantt.
At the Portland summit such considerations were raised, and given that Barbican
provides separate functionality, it cleanly lives in its own project.
True, the public/private key pair of a service, tenant, etc. is part of its
identity. In that respect Keystone and Barbican would intersect, which could be
managed by delegating the storage of the public key to Barbican, like a
directory service.

Regards
Malini

-Original Message-
From: Jarret Raim [mailto:jarret.r...@rackspace.com] 
Sent: Tuesday, December 17, 2013 11:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Incubation Request for Barbican

On 12/13/13, 7:56 AM, Russell Bryant rbry...@redhat.com wrote:


1) Are each of the items you mention big enough to have a sustainable 
team that can exist as its own program?

The answer here for Barbican and Keystone is yes.

2) Would there be a benefit of *changing* the scope and mission of the 
Identity program to accomodate a larger problem space?  Security
sounds too broad ... but I'm sure you see what I'm getting at.

Dolph and I have talked about this a bit. Right now, if we combined them, it 
feels like we would have meetings where the first half would be about Keystone 
and the second about Barbican. Same for design sessions. The systems and the 
concerns they address are entirely separate. Currently the teams are also 
entirely separate.

While I think we can encourage both teams to have a close relationship (Adam 
Young and I had a conversion about that recently), there is no benefit to 
combining the teams now other than to reduce the number of programs. As the 
combination doesn¹t help either project, it seems like Barbican having its own 
program is the best option.

When we're talking about authentication, authorization, identity 
management, key management, key distribution ... these things really
*do* seem related enough that it would be *really* nice if a group was 
looking at all of them and how they fit into the bigger OpenStack 
picture.  I really don't want to see silos for each of these things.

I don't agree here. Key management and distribution can be used to solve
problems in the identity space. They can also be used to solve problems in 
other spaces in openstack. Barbican uses keystone to provide auth / auth to 
keys, much like Nova uses keystone to provide auth / auth to servers.
Additionally, Barbican will deal with other parts of the encryption space (e.g. 
SSL) that have very little to do with identity.

So, would OpenStack benefit from a tighter relationship between these 
projects?  I think this may be the case, personally.

I think there would be benefit to individuals working together from the two 
projects where it makes sense - especially where we have knowledge overlaps. I 
don't agree that including Barbican in the Identity program is the right way to
do that.

Could this tighter relationship happen between separate programs?  It 
could, but I think a single program better expresses the intent if 
that's really what is best.

Barbican's intent is to simplify key management to enable consuming systems and
users to offer or use encryption in their services. This is a fundamentally
different mission than Keystone has.



Jarret

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-17 Thread Bhandaru, Malini K
Barbican, the key manager, is essential to OpenStack; it paves the way to greater
security.
Instead of rejecting the project because its current existence is owed so
heavily to Rackspace and to John Wood, why not adopt it, review code,
contribute code, etc.? We can have cores from multiple companies. Swift was a
project that was born similarly.
During development, John Wood and the whole Rackspace team have been open to
feature design discussions and have provided good code review.

Intel plans to create a plugin for Barbican, along the lines of a low cost HSM, 
essentially using the Intel TXT and the Trusted Platform Module to save a 
master secret used to encrypt all the other secrets.
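
To illustrate the master-secret idea only (this is not the plugin design, and in
the real plugin the master key would be sealed by the TPM rather than generated in
software), here is a minimal sketch of wrapping stored secrets under one master
key, using Fernet from the Python cryptography package:

    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()   # stand-in for the TPM-sealed secret
    master = Fernet(master_key)

    def wrap_secret(plaintext):
        """Encrypt a stored secret under the master key."""
        return master.encrypt(plaintext)

    def unwrap_secret(token):
        """Decrypt a secret previously wrapped by wrap_secret()."""
        return master.decrypt(token)

    token = wrap_secret(b"tenant volume encryption key")
    assert unwrap_secret(token) == b"tenant volume encryption key"
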
Our Intel team is small and some of us had other distractions in October and 
November, but we are back and may even grow in strength.

John, Jarret, and team, thank you for all the hard work.

Malini


-Original Message-
From: Jarret Raim [mailto:jarret.r...@rackspace.com] 
Sent: Tuesday, December 17, 2013 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Incubation Request for Barbican

On 12/13/13, 4:50 AM, Thierry Carrez thie...@openstack.org wrote:


If you remove Jenkins and attach Paul Kehrer, jqxin2006 (Michael Xin), 
Arash Ghoreyshi, Chad Lung and Steven Gonzales to Rackspace, then the 
picture is:

67% of commits come from a single person (John Wood) 96% of commits 
come from a single company (Rackspace)

I think that's a bit brittle: if John Wood or Rackspace were to decide 
to place their bets elsewhere, the project would probably die instantly.
I would feel more comfortable if a single individual didn't author more 
than 50% of the changes, and a single company didn't sponsor more than 
80% of the changes.


I think these numbers somewhat miss the point. It is true that Rackspace is the 
primary sponsor of Barbican and that John Wood is the developer that has been 
on the project the longest. However, % of commits is not the only measure of 
contributions to the project. That number doesn't include the work on our
chef-automation scripts or design work to figure out the HSM interfaces or work 
on the testing suite or writing our documentation or the million other tasks 
for the project.

Rackspace is committed to this project. If John Wood leaves, we'll hire
additional developers to replace him. There is no risk of the project lacking 
resources because a single person decides to work on something else. 

We've seen other folks from HP, RedHat, Nebula, etc. say that they are
interested in contributing and we are getting outside contributions today.
That will only continue, but I think the risk of the project somehow collapsing 
is being overstated.

There are problems that aren't necessarily the sexiest things to work on, but
need to be done. It may be hard to get a large number of people interested in 
such a project in a short period of time. I think it would be a mistake to 
reject projects that solve important problems just because the team is a bit 
one sided at the time.





Jarret

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Bhandaru, Malini K
Thank you Russell for the special consideration.
+1

 The positive vote is for multiple reasons; the JHU team took care of:
1) boot from encrypted volumes
2) laying the foundation for securing volumes with keys served from a strong
key manager
3) a blueprint, and diligently addressing concerns
4) the feature being off by default.

Regards
malini

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Friday, September 06, 2013 2:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

On 09/06/2013 04:14 PM, Benjamin, Bruce P. wrote:
 We request that volume encryption [1] be granted an exception to the 
 feature freeze for Havana-3.  Volume encryption [2] provides a usable 
 layer of protection to user data as it is transmitted through a 
 network and when it is stored on disk. The main patch [2] has been 
 under review since the end of May and had received two +2s in mid-August.
 Subsequently, support was requested for booting from encrypted volumes 
 and integrating a working key manager [3][4] as a stipulation for 
 acceptance, and both these requests have been satisfied within the 
 past week. The risk of disruption to deployments from this exception 
 is minimal because the volume encryption feature is unused by default.
 Note that the corresponding Cinder support for this feature has 
 already been approved, so acceptance into Nova will keep this code from 
 becoming
 abandoned.   Thank you for your consideration.
 
  
 
 The APL Development Team
 
  
 
 [1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
 
 [2] https://review.openstack.org/#/c/30976/
 
 [3] https://review.openstack.org/#/c/45103/
 
 [4] https://review.openstack.org/#/c/45123/

Thanks for all of your hard work on this!  It sounds to me like the code was 
ready to go aside from the issues you mentioned above, which have now been 
addressed.

I think the feature provides a lot of value and has fairly low risk if we get 
it merged ASAP, since it's off by default.  The main risk is around the 
possibility of security vulnerabilities.  Hopefully good review (both from a 
code and security perspective) can mitigate that risk.  This feature has been 
in the works for a while and has very good documentation on the blueprint, so I 
take it that it has been vetted by a number of people already.  It would be 
good to get ACKs on this point in this thread.

I would be good with the exception for this, assuming that:

1) Those from nova-core that have reviewed the code are still happy with it and 
would do a final review to get it merged.

2) There is general consensus that the simple config based key manager (single 
key) does provide some amount of useful security.  I believe it does, just want 
to make sure we're in agreement on it.  Obviously we want to improve this in 
the future.

Again, thank you very much for all of your work on this (both technical and 
non-technical)!

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

2013-09-06 Thread Bhandaru, Malini K
Bruce - well-crafted message. Good work; it looks like it is eliciting the desired
result.

From: Benjamin, Bruce P. [mailto:bruce.benja...@jhuapl.edu]
Sent: Friday, September 06, 2013 1:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Nova] FFE Request: Encrypt Cinder volumes

We request that volume encryption [1] be granted an exception to the feature 
freeze for Havana-3.  Volume encryption [2] provides a usable layer of 
protection to user data as it is transmitted through a network and when it is 
stored on disk. The main patch [2] has been under review since the end of May 
and had received two +2s in mid-August.  Subsequently, support was requested 
for booting from encrypted volumes and integrating a working key manager [3][4] 
as a stipulation for acceptance, and both these requests have been satisfied 
within the past week. The risk of disruption to deployments from this exception 
is minimal because the volume encryption feature is unused by default.  Note 
that the corresponding Cinder support for this feature has already been 
approved, so acceptance into Nova will keep this code from becoming abandoned.  
 Thank you for your consideration.

The APL Development Team

[1] https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes
[2] https://review.openstack.org/#/c/30976/
[3] https://review.openstack.org/#/c/45103/
[4] https://review.openstack.org/#/c/45123/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-03 Thread Bhandaru, Malini K
The issue here is that the key manager under development, Barbican, is in incubation.
Folks can download and use Barbican. The Barbican team has worked diligently to
produce the system.
In fact, folks can download and use it, and vote for Joel's patch to be merged.
And do give us feedback on Barbican.

It is a chicken-and-egg problem: the desire to keep the key manager as a separate
service entails the incubation requirement.
Regards
Malini

From: Joe Gordon [joe.gord...@gmail.com]
Sent: Tuesday, September 03, 2013 6:06 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume
encryption

On Tue, Sep 3, 2013 at 5:41 PM, Coffman, Joel M. 
joel.coff...@jhuapl.edumailto:joel.coff...@jhuapl.edu wrote:

 How can someone use your code without a key manager?

Some key management mechanism is required although it could be simplistic. For 
example, we’ve tested our code internally with an implementation of the key 
manager interface that returns a single, constant key.
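
For the curious, here is a minimal sketch of such a stub, assuming a small key
manager interface with create/get/delete methods; the class and method names are
illustrative, not the exact interface proposed in the review:

    import binascii

    class SingleKeyManager(object):
        """Test-only key manager: every call hands back the same fixed key
        read from configuration.  Useful for exercising the encryption
        plumbing; clearly not a production key store."""

        def __init__(self, fixed_key_hex):
            self._key = binascii.unhexlify(fixed_key_hex)

        def create_key(self, context, **kwargs):
            # There is only one key, so "creating" one just returns its id.
            return 'fixed-key-id'

        def get_key(self, context, key_id):
            # key_id is ignored: the same constant key backs every volume.
            return self._key

        def delete_key(self, context, key_id):
            # Nothing to delete; the key lives in configuration.
            pass

    # manager = SingleKeyManager('00' * 32)  # 256-bit all-zero key, test only
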

That works for testing but doesn't address: the current dearth of key 
management within OpenStack does not preclude the use of our existing work 
within a production environment




I think the underlying issue is how to handle interrelated features – if Nova 
doesn’t want to accept the volume encryption feature without a full-fledged key 
manager, then why accept a key manager (or its interface stubs) unless it 
already has a feature that requires it (e.g., volume encryption)? And 
round-and-round it goes.

You can propose both patches at the same time one being dependent on the other, 
so we can merge both at the same time




I’d also like to point out that the volume encryption feature is “complete” and 
won’t require changes when a full-fledged key manager becomes available. All 
that’s needed is to specify the key manager via a configuration option. So this 
request is definitely *not* a case of trying to land a feature that isn’t 
finished and is disabled by default (see [1], [2], and [3]).

Is a feature complete if no one can use it?

I am happy with a less then secure but fully functional key manager.  But with 
no key manager that can be used in a real deployment, what is the value of 
including this code?




[1] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008244.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008315.html

[3] http://lists.openstack.org/pipermail/openstack-dev/2013-April/008268.html





From: Joe Gordon [mailto:joe.gord...@gmail.commailto:joe.gord...@gmail.com]
Sent: Tuesday, September 03, 2013 4:48 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume encryption







On Tue, Sep 3, 2013 at 4:38 PM, Coffman, Joel M. 
joel.coff...@jhuapl.edumailto:joel.coff...@jhuapl.edu wrote:

We have fully implemented support for transparently encrypting Cinder 
volumeshttps://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes 
from within Nova (see  https://review.openstack.org/#/c/30976/), but the lack 
of a secure key manager within OpenStack currently precludes us from 
integrating our work with that piece of the overall architecture. Instead, a 
key manager interface (see  https://review.openstack.org/#/c/30973/) abstracts 
this interaction. We would appreciate the consideration of the Nova core team 
regarding merging our existing work because 1) there is nothing immediately 
available with which to integrate; 2) services such as 
Barbicanhttps://launchpad.net/cloudkeep/+announcements are on the path to 
incubation and alternative key management schemes (e.g., KMIP Client for volume 
encryption key 
managementhttps://blueprints.launchpad.net/nova/+spec/kmip-client-for-volume-encryption)
 have also been proposed; 3) we avoid the hassle of rebasing until the 
aforementioned services become available; and 4) our code does not directly 
depend upon a particular key manager but upon the aforementioned interface, 
which should be simple for key managers to implement. Furthermore, the current 
dearth of key management within OpenStack does not preclude the use of our 
existing work within a production environment; although the security is 
diminished, our implementation provides protection against certain attacks like 
intercepting the iSCSI communication between the compute and storage host.





How can someone use your code without a key manager?





Feedback regarding the possibility of merging our work would be appreciated.



Joel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org

Re: [openstack-dev] The PCI support blueprint

2013-07-22 Thread Bhandaru, Malini K
Ian, your suggestion of retrieving changes since a timestamp is good.  When a 
scheduler first comes online (in an HA context), it requests compute node 
status providing Null for timestamp to retrieve everything.

It also paves the way for a full in-memory record of all compute node status,
because it requires that each scheduler keep a copy of the status.

The scheduler could retrieve status every second or whenever it gets a new VM
request. Under heavy load, that is, frequent requests, the timestamps would be
closer together and hopefully fewer changes would be returned. We may want to make
the polling frequency a configurable item to tune: too infrequent means a large
payload (no worse than today's full load); too frequent may be moot.

Regards
Malini


-Original Message-
From: Ian Wells [mailto:ijw.ubu...@cack.org.uk] 
Sent: Monday, July 22, 2013 1:56 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] The PCI support blueprint

On 22 July 2013 21:08, Boris Pavlovic bo...@pavlovic.me wrote:
 Ian,

 I don't like to write anything personally.
 But I have to write some facts:

 1) I see tons of hands and only 2 solutions my and one more that is 
 based on code.
 2) My code was published before session (18. Apr 2013)
 3) Blueprints from summit were published (03. Mar 2013)
 4) My Blueprints were published (25. May 2013)
 5) Patches based on my patch were published only (5. Jul 2013)

Absolutely.  Your patch and our organisation crossed in the mail, and everyone 
held off work on this because you were working on this.
That's perfectly normal, just unfortunate, and I'm grateful for your work on 
this, not pointing fingers.

 After making investigations and tests, we found that one of the reasons
why the scheduler works slowly and has problems with scalability is its work with
the DB. JOINs are a pretty unscalable and slow thing, and if we add one more JOIN,
which is required by PCI passthrough, we will get a much worse situation.

Your current PCI passthrough design adds a new database that stores every PCI 
device in the cluster, and you're thinking of crossing that with the compute 
node and its friends.  That's certainly unscalable.

I think the issue here is, in fact, more that you're storing every PCI device.  
The scheduler doesn't care.  In most cases, devices are equivalent, so instead 
of storing 1024 devices you can store one single row in the stats table saying 
pci_device_class_networkcard = 1024.  There may be a handful of these classes, 
but there won't be
1024 of them per cluster node.  The compute node can take any one of the PCI 
devices in that class and use it - the scheduler should neither know nor care.

This drastically reduces the transfer of information from the compute node to 
host and also reduces the amount of data you need to store in the database - 
and the scheduler DB doesn't need changing at all.
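
A minimal sketch of that aggregation, assuming the compute node reports a list of
per-device dicts (the field names below are illustrative):

    from collections import Counter

    def summarize_pci_devices(devices):
        """Collapse a per-device list into per-class counts for the stats table.

        The scheduler only needs to know how many assignable devices of each
        class a host has, not the identity of every individual device."""
        counts = Counter((dev['vendor_id'], dev['product_id'])
                         for dev in devices if dev.get('status') == 'available')
        return {'pci_%s_%s' % key: n for key, n in counts.items()}

    devices = [{'vendor_id': '8086', 'product_id': '10fb',
                'status': 'available'}] * 1024
    print(summarize_pci_devices(devices))  # {'pci_8086_10fb': 1024}
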

This seems like a much more low impact approach for now - it doesn't change the 
database at all and it and doesn't add much to the scheduling problem (indeed, 
no overhead at all for the non-PCI users) until we solve the scalability issues 
you're talking about at some later date.

For what it's worth, one way of doing that without drastic database design 
would be to pass compute_node_get_all a timestamp, return only stats updated 
since that timestamp, return a new timestamp, and merge that in with what the 
scheduler already knows about.  There's some refinement to that - since 
timestamps are not reliable clocks in databases - but it reduces the flow of 
data from the DB file substantially and works with an eventually consistent 
system.
(Truthfully, I prefer your in-memory-store idea, there's nothing about these 
stats that really needs to survive a reboot of the control node, but this might 
be a quick fix.)
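
A rough sketch of that incremental merge, assuming a DB call that returns only the
stats rows changed since the supplied timestamp (all names here are illustrative):

    def refresh_host_stats(get_changed_stats, known_stats, last_ts):
        """Poll for stats changed since last_ts and fold them into the
        scheduler's in-memory view.

        get_changed_stats(timestamp) -> (changed, new_timestamp), where
        `changed` maps host -> {stat_name: value}.  Passing None as the
        timestamp returns everything (cold start or scheduler fail-over)."""
        changed, new_ts = get_changed_stats(last_ts)
        for host, stats in changed.items():
            known_stats.setdefault(host, {}).update(stats)
        return known_stats, new_ts

    # Cold start fetches everything; later polls only merge the deltas:
    # view, ts = refresh_host_stats(fetch_fn, {}, None)
    # view, ts = refresh_host_stats(fetch_fn, view, ts)
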
--
Ian.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Move key pair management out of Nova and into Keystone?

2013-07-02 Thread Bhandaru, Malini K
Greetings Simo, Jay, Bryan, Jarret, Dolph, Phil, Nachi, Jamie, Thierry, Arvind 
and others!



1)  The key manager, barbican, under development supports the blueprint we 
developed and discussed on this mailing list.

https://wiki.openstack.org/wiki/KeyManager#Key_Manager

2)  Its full-featured version is to hold all things key related,
including public keys and certificates and their renewal, and to support the
necessary KMIP interfaces.

3)  Only authenticated users/services can access the keys in Barbican;
Barbican itself uses Keystone for authentication and authorization like other
OpenStack services.

4)  The first use case to support is volume encryption (Johns Hopkins Applied
Physics Lab team)

https://review.openstack.org/30976

5)  Rackspace (Jarret and his team) and Intel have been working hard to 
meet Havana release milestones.



At the Portland summit we did discuss whether to keep the key management 
functionality as a separate entity or as a part of keystone.

Participants included Adam Young, Dolph, and several other keystone cores and 
the Rackspace and Intel folks.

1) pro - if part of keystone, less of an incubation hurdle.

2) cons - keystone is already feature rich and this is a separate piece of
functionality. Should we want to later pull it out and float it as a separate
service, that would be a lot of work. (The need for a key manager has been felt as
more of us seek to provide greater security for user data at rest (volumes,
objects).)

3) Key manager would be a pluggable module for folks who might want an HSM.

4) We did mention at the summit that storing the Nova SSH keys used to access
instances could be shifted to the key manager, given a broader scope as a
repository of all things used to encrypt/decrypt data.

5) Saving the users', OpenStack service endpoints', and instance public keys
and/or certificates intersects with Keystone's identity credential storage.

 All things identity related are the prerogative of keystone.

     This is where Jarret's comment fits in: a pointer to the
certificate or public key could be saved in Keystone, with the public key,
certificate, or even the private key inside the key manager. To meet compliance
needs, more audit logging will be present in the key manager. Certainly, wherever
keys are stored, more audit logging is feasible. This is just a logical divide of
whether to build in the functionality.

6) Today keystone provides a catalog of service endpoints (including the
key manager); it is logical to extend this to include access to their
certificates.  This would then serve as a central point to determine how to
securely communicate with the endpoint - assuming neither keystone nor barbican
is compromised.











-Original Message-
From: Simo Sorce [mailto:s...@redhat.com]
Sent: Tuesday, July 02, 2013 10:43 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Move keypair management out of Nova and into 
Keystone?



On Tue, 2013-07-02 at 16:55 +, Tiwari, Arvind wrote:

 Hi Simo,



 I am lost.



 Does Barbican is product came out of 
 https://wiki.openstack.org/wiki/KeyManager BP?



Yes Barbican is an implementation of this Blueprint afaik.



 If yes, then why it is deviating from the BP which says Key Manager will be 
 separate service but not a part of Keystone.



Sorry I don't follow, Barbican is separated from Keystone.



 If no, then why we are thinking about new Key manager (which seems to me a 
 subset of above BP)?



New ?



Simo.



--

Simo Sorce * Red Hat, Inc * New York





___

OpenStack-dev mailing list

OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev