Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Zhenyu Zheng
Hi, Sergey!

Thanks for the info, but we are now at the point of deciding whether this
should be a microversion bump or not. Users would of course love to have
longer tags, but it seems too late to add a microversion this cycle.
And would the DB migration be a problem, going from 80 to 60?
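
To make the question concrete, here is a rough, purely illustrative sketch
(not an actual Nova patch) of what shrinking the column could look like,
assuming a sqlalchemy-migrate style migration like the ones Nova carries.
The awkward part is that any existing rows longer than 60 would have to be
truncated or rejected first, which is why 80 -> 60 is the riskier direction:

    import migrate  # noqa -- sqlalchemy-migrate adds .alter() to columns
    from sqlalchemy import MetaData, String, Table, func


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        tags = Table('tags', meta, autoload=True)
        # Any existing values longer than 60 would no longer fit, so they
        # would have to be truncated here (or the migration refused).
        migrate_engine.execute(
            tags.update()
                .where(func.length(tags.c.tag) > 60)
                .values(tag=func.substr(tags.c.tag, 1, 60)))
        tags.c.tag.alter(type=String(60))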


On Tue, Jan 17, 2017 at 3:16 PM, Sergey Nikitin 
wrote:

> Hi, folks!
>
> I guess I found the reason of the problem. The first spec was created by
> Jay. At that moment I was just an implementer. In this spec we have a
> contradiction between lines #74 and #99.
> https://review.openstack.org/#/c/91444/16/specs/juno/tag-instances.rst
>
> Line 74 says "A tag shall be defined as a Unicode bytestring no longer
> than 60 bytes in length."
>
> Line 99 contains SQL instruction for migration "tag VARCHAR(80) NOT NULL
> CHARACTER SET utf8"
>
> It seems to me that everybody missed this contradiction and I just copy
> the whole migration script with length 80.
>
> So it's just an old mistake and I think we can change length of tag from
> 60 to 80.
>
> 2017-01-17 10:04 GMT+04:00 GHANSHYAM MANN :
>
>> On Tue, Jan 17, 2017 at 2:37 PM, Alex Xu  wrote:
>>
>>>
>>>
>>> 2017-01-17 10:26 GMT+08:00 Matt Riedemann :
>>>
 On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:

> Hi Nova,
>
> I just discovered something interesting, the tag has a limited length,
> and in the current implementation, it is 60 in the tag object
> definition:
> http://git.openstack.org/cgit/openstack/nova/tree/nova/objec
> ts/tag.py#n18
>
> but 80 in the db model:
> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
> lalchemy/models.py#n1464
>
> As asked in the IRC and some of the cores responded(thanks to Matt and
> Jay), it seems to be an
> oversight and has no particular reason to do it this way.
>
> Since we have already created a 80 long space in DB and the current
> implementation might be confusing,  maybe we should expand the
> limitation in tag object definition to 80. Besides, users can enjoy
> longer tags.
>
> And the question could be, does anyone know why it is 60 in object but
> 80 in DB model? is it an oversight or we have some particular reason?
>
> If we could expand it to be the same as DB model (80 for both), it is
> ok
> to do this tiny change without microversion?
>
> Thanks,
>
> Kevin Zheng
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
 As I said in IRC, the tags feature took a long time to land (several
 releases) so between the time that the spec was written and then the data
 model patch and finally the REST API change, we might have just totally
 missed that the length of the column in the DB was different than what was
 allowed in the REST API.

 I'm not aware of any technical reason why they are different. I'm
 hoping that Sergey Nikitin might remember something about this. But even
 looking at the spec:

 https://specs.openstack.org/openstack/nova-specs/specs/liber
 ty/approved/tag-instances.html

 The column was meant to be 60 so my guess is someone noticed that in
 the REST API review but missed it in the data model review.

>>>
>>> I can't remember the detail also. Hoping Sergey can remember something
>>> also.
>>>
>>>

 As for needing a microversion of changing this, I tend to think we
 don't need a microversion because we're not restricting the schema in the
 REST API, we're just increasing it to match the length in the data model.
 But I'd like opinions from the API subteam about that.


>>> We still need microversion for the user to discover the max length
>>> between different cloud deployments.
>>>
>>
>> ​I agree that we still need microversion for this. As Alex mentioned,
>> otherwise it will be issue on interoperability.
>>
>> Can we just keep it 60 and change DB to match the same?
>>
>> -gmann
>>
>>>
>>>
 --

 Thanks,

 Matt Riedemann


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Tim Bell

On 17 Jan 2017, at 01:19, Brandon B. Jozsa wrote:

Inline


On January 16, 2017 at 7:04:00 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I'm not stating that the big tent should be abolished and we go back to the way 
things were. But I also know the status quo is not working either. How do we 
fix this? Anyone have any thoughts?


Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on its head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering whether some opinions about Barbican are even 
coming from its heavy users, or from users who’ve put much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic, since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

The operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.

These extra parts may not be direct deliverables of the code contributions 
themselves, but they make a major difference to deployability, which Barbican now 
satisfies. Big tent projects should aim to cover these areas too if they wish 
to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Sergey Nikitin
Hi, folks!

I guess I found the reason for the problem. The first spec was created by
Jay; at that time I was just an implementer. In that spec there is a
contradiction between lines #74 and #99.
https://review.openstack.org/#/c/91444/16/specs/juno/tag-instances.rst

Line 74 says "A tag shall be defined as a Unicode bytestring no longer than
60 bytes in length."

Line 99 contains SQL instruction for migration "tag VARCHAR(80) NOT NULL
CHARACTER SET utf8"

It seems to me that everybody missed this contradiction, and I just copied
the whole migration script with the length of 80.

So it's just an old mistake, and I think we can change the tag length from
60 to 80.
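
If we go that way, the Python-side change should be small. As a rough,
illustrative sketch (not the actual patch -- the real constant lives in
nova/objects/tag.py as referenced earlier in the thread, and the REST API
validation would presumably reference it):

    # Object-side limit: a single constant bump (the DB column is
    # already VARCHAR(80), so no schema migration is needed this way).
    MAX_TAG_LENGTH = 80   # was 60

    # The REST API validation would then pick it up through a
    # JSON-Schema fragment along these lines:
    tag_schema = {
        'type': 'string',
        'minLength': 1,
        'maxLength': MAX_TAG_LENGTH,
    }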

2017-01-17 10:04 GMT+04:00 GHANSHYAM MANN :

> On Tue, Jan 17, 2017 at 2:37 PM, Alex Xu  wrote:
>
>>
>>
>> 2017-01-17 10:26 GMT+08:00 Matt Riedemann :
>>
>>> On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:
>>>
 Hi Nova,

 I just discovered something interesting, the tag has a limited length,
 and in the current implementation, it is 60 in the tag object
 definition:
 http://git.openstack.org/cgit/openstack/nova/tree/nova/objec
 ts/tag.py#n18

 but 80 in the db model:
 http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
 lalchemy/models.py#n1464

 As asked in the IRC and some of the cores responded(thanks to Matt and
 Jay), it seems to be an
 oversight and has no particular reason to do it this way.

 Since we have already created a 80 long space in DB and the current
 implementation might be confusing,  maybe we should expand the
 limitation in tag object definition to 80. Besides, users can enjoy
 longer tags.

 And the question could be, does anyone know why it is 60 in object but
 80 in DB model? is it an oversight or we have some particular reason?

 If we could expand it to be the same as DB model (80 for both), it is ok
 to do this tiny change without microversion?

 Thanks,

 Kevin Zheng


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> As I said in IRC, the tags feature took a long time to land (several
>>> releases) so between the time that the spec was written and then the data
>>> model patch and finally the REST API change, we might have just totally
>>> missed that the length of the column in the DB was different than what was
>>> allowed in the REST API.
>>>
>>> I'm not aware of any technical reason why they are different. I'm hoping
>>> that Sergey Nikitin might remember something about this. But even looking
>>> at the spec:
>>>
>>> https://specs.openstack.org/openstack/nova-specs/specs/liber
>>> ty/approved/tag-instances.html
>>>
>>> The column was meant to be 60 so my guess is someone noticed that in the
>>> REST API review but missed it in the data model review.
>>>
>>
>> I can't remember the detail also. Hoping Sergey can remember something
>> also.
>>
>>
>>>
>>> As for needing a microversion of changing this, I tend to think we don't
>>> need a microversion because we're not restricting the schema in the REST
>>> API, we're just increasing it to match the length in the data model. But
>>> I'd like opinions from the API subteam about that.
>>>
>>>
>> We still need microversion for the user to discover the max length
>> between different cloud deployments.
>>
>
> ​I agree that we still need microversion for this. As Alex mentioned,
> otherwise it will be issue on interoperability.
>
> Can we just keep it 60 and change DB to match the same?
>
> -gmann
>
>>
>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-16 Thread Isaac Beckman
I think it would also be a good idea to give the CI maintainers the option 
to add some useful information on the current status.
It is very helpful to know that a CI system is under maintenance, which is 
the reason why it hasn't been reporting for the last week or so... 

Isaac Beckman

Office: +972-3-6897874
Fax: +972-3-6897755
Mobile: +972-50-2680180
Email: isa...@il.ibm.com

IBM XIV, Cloud Storage Solutions (previously HSG)
www.ibm.com/storage/disk/xiv
 



From:   "Jay S. Bryant" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   16/01/2017 21:56
Subject:Re: [openstack-dev] [all] Improving Vendor Driver 
Discoverability





On 01/16/2017 12:19 PM, Jonathan Bryce wrote:
>> On Jan 16, 2017, at 11:58 AM, Jay S. Bryant 
 wrote:
>>
>> On 01/13/2017 10:29 PM, Mike Perez wrote:
>>> The way validation works is completely up to the project team. In my 
research
>>> as shown in the Summit etherpad [5] there's a clear trend in projects 
doing
>>> continuous integration for validation. If we wanted to we could also 
have the
>>> marketplace give the current CI results, which was also requested in 
the
>>> feedback from driver maintainers.
>> Having the CI results reported would be an interesting experiment. I 
wonder if having the results even more publicly reported would result in 
more stable CI's.  It is a dual edged sword however. Given the instability 
of many CI's it could make OpenStack look bad to customers who don't 
understand what they are looking at.  Just my thoughts on that idea.
> That's very useful feedback. Having that kind of background upfront is 
really helpful. As we make updates on the display side, we can take into 
account if certain attributes are potentially unreliable or at a higher 
risk of showing instability and have the interface better support that 
without it looking like everything is failing and a river of red X's. Are 
there other things that might be similar?
>
> Jonathan
>
Jonathan,

Glad to be of assistance.

I think reporting some percentage of success might be the most accurate 
way to report the CI results.  Not necessarily flagging it good or bad, 
but leaving it for the consumers to see and compare.  Also, combine that 
with Anita's idea of showing when the CI last successfully reported and I think 
it could give a good barometer for consumers. Our systems all have their 
rough times, so we need to avoid a 'snapshot in time' view and provide 
more of an 'activity over time' view.  Third-party CI is a good barometer 
of community activity and attention, but not always 100% accurate.

Obviously there will need to be some information included with the 
results explaining what they are and helping guide interpretations.

Jay

>
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Marking Tintri driver as unsupported

2017-01-16 Thread Silvan Kaiser
For those interested, this is the required local.conf option to switch off
managed snapshot testing:

TEMPEST_VOLUME_MANAGE_SNAPSHOT=False


2017-01-16 10:04 GMT+01:00 Silvan Kaiser :

> Regarding the reason for the failing tests:
> Looks like [1] switches the default for support of managed snapshots to
> true in devstack.
> As the default on that was 'false' until friday Quobyte CI did not set
> that option previously. I'm running tests with a revised config now.
> Btw, feel free to contact me in IRC (kaisers).
> Best
> Silvan
>
> [1] https://review.openstack.org/#/c/419073/
>
>
>
> 2017-01-16 8:52 GMT+01:00 Silvan Kaiser :
>
>> Apoorva, Sean,
>> after some time i managed to bring up Quobyte CI last friday which tested
>> fine [1,2,3] for a short time and then ran into the same issues with 
>> manage_snapshot
>> related tempest tests Apoorva describes (Starting chronologically at [4]).
>> From here i see two steps:
>> a) look into the reason for the issue with manage_snapshot tests
>> b) a short note on how to proceed for marking drivers with reinstated CIs
>> back as supported is much appreciated
>> Best
>> Silvan
>>
>> [1] https://review.openstack.org/#/c/412085/4
>> [2] https://review.openstack.org/#/c/313140/24
>> [3] https://review.openstack.org/#/c/418643/6
>> [4] https://review.openstack.org/#/c/363010/42
>>
>>
>>
>>
>> 2017-01-15 20:46 GMT+01:00 Apoorva Deshpande :
>>
>>> Sean,
>>>
>>> We have resolved issues related to our CI infra[1][2]. At this point
>>> manage_snapshot related tempest tests (2) are failing, but Tintri driver
>>> does not support manage/unmanage snapshot functionalities.
>>>
>>> Could you please assist me on how to skip these tests? We are using
>>> sos-ci for our CI runs.
>>>
>>> If this satisfies the compliance requirements can I propose a patch to
>>> revert changes introduced by [3] and make Tintri driver SUPPORTED again?
>>>
>>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-69-353069-52/
>>> [2] http://openstack-ci.tintri.com/tintri/refs-changes-10-419710-2/
>>> [3] https://review.openstack.org/#/c/411975/
>>>
>>> On Sat, Dec 17, 2016 at 6:34 PM, Apoorva Deshpande 
>>> wrote:
>>>
 Sean,

 As communicated earlier [1][2][3], Tintri CI is facing a devstack
 failure issue, potentially due to [4].
 We are working on it and request you to give us more time before
 approving the unsupported driver patch [5].


 [1] https://www.mail-archive.com/openstack-dev@lists.opensta
 ck.org/msg97085.html
 [2] https://www.mail-archive.com/openstack-dev@lists.opensta
 ck.org/msg97057.html
 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-cind
 er/%23openstack-cinder.2016-12-05.log.html
 [4] https://review.openstack.org/#/c/399550/
 [5] https://review.openstack.org/#/c/411975/

 On Sat, Dec 17, 2016 at 2:05 AM, Sean McGinnis 
 wrote:

> Checking name: Tintri CI
> last seen: 2016-12-16 16:50:50 (0:43:36 old)
> last success: 2016-11-16 20:42:29 (29 days, 20:45:46 old)
> success rate: 19%
>
> Per Cinder's non-compliance policy [1] this patch [2] marks
> the driver as unsupported and deprecated and it will be
> approved if the issue is not corrected by the next cycle.
>
> [1] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drive
> rs#Non-Compliance_Policy
> [2] https://review.openstack.org/#/c/411975/
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.op
> enstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Dr. Silvan Kaiser
>> Quobyte GmbH
>> Hardenbergplatz 2, 10623 Berlin - Germany
>> +49-30-814 591 800 <+49%2030%20814591800> - www.quobyte.com
>> Amtsgericht Berlin-Charlottenburg, HRB 149012B
>> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>>
>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 <+49%2030%20814591800> - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>



-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread GHANSHYAM MANN
On Tue, Jan 17, 2017 at 2:37 PM, Alex Xu  wrote:

>
>
> 2017-01-17 10:26 GMT+08:00 Matt Riedemann :
>
>> On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:
>>
>>> Hi Nova,
>>>
>>> I just discovered something interesting, the tag has a limited length,
>>> and in the current implementation, it is 60 in the tag object definition:
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/objec
>>> ts/tag.py#n18
>>>
>>> but 80 in the db model:
>>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
>>> lalchemy/models.py#n1464
>>>
>>> As asked in the IRC and some of the cores responded(thanks to Matt and
>>> Jay), it seems to be an
>>> oversight and has no particular reason to do it this way.
>>>
>>> Since we have already created a 80 long space in DB and the current
>>> implementation might be confusing,  maybe we should expand the
>>> limitation in tag object definition to 80. Besides, users can enjoy
>>> longer tags.
>>>
>>> And the question could be, does anyone know why it is 60 in object but
>>> 80 in DB model? is it an oversight or we have some particular reason?
>>>
>>> If we could expand it to be the same as DB model (80 for both), it is ok
>>> to do this tiny change without microversion?
>>>
>>> Thanks,
>>>
>>> Kevin Zheng
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> As I said in IRC, the tags feature took a long time to land (several
>> releases) so between the time that the spec was written and then the data
>> model patch and finally the REST API change, we might have just totally
>> missed that the length of the column in the DB was different than what was
>> allowed in the REST API.
>>
>> I'm not aware of any technical reason why they are different. I'm hoping
>> that Sergey Nikitin might remember something about this. But even looking
>> at the spec:
>>
>> https://specs.openstack.org/openstack/nova-specs/specs/liber
>> ty/approved/tag-instances.html
>>
>> The column was meant to be 60 so my guess is someone noticed that in the
>> REST API review but missed it in the data model review.
>>
>
> I can't remember the detail also. Hoping Sergey can remember something
> also.
>
>
>>
>> As for needing a microversion of changing this, I tend to think we don't
>> need a microversion because we're not restricting the schema in the REST
>> API, we're just increasing it to match the length in the data model. But
>> I'd like opinions from the API subteam about that.
>>
>>
> We still need microversion for the user to discover the max length between
> different cloud deployments.
>

I agree that we still need a microversion for this. As Alex mentioned,
otherwise it will be an interoperability issue.

Can we just keep it at 60 and change the DB to match?

-gmann

>
>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Alex Xu
2017-01-17 10:26 GMT+08:00 Matt Riedemann :

> On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:
>
>> Hi Nova,
>>
>> I just discovered something interesting, the tag has a limited length,
>> and in the current implementation, it is 60 in the tag object definition:
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/tag.py#n18
>>
>> but 80 in the db model:
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sq
>> lalchemy/models.py#n1464
>>
>> As asked in the IRC and some of the cores responded(thanks to Matt and
>> Jay), it seems to be an
>> oversight and has no particular reason to do it this way.
>>
>> Since we have already created a 80 long space in DB and the current
>> implementation might be confusing,  maybe we should expand the
>> limitation in tag object definition to 80. Besides, users can enjoy
>> longer tags.
>>
>> And the question could be, does anyone know why it is 60 in object but
>> 80 in DB model? is it an oversight or we have some particular reason?
>>
>> If we could expand it to be the same as DB model (80 for both), it is ok
>> to do this tiny change without microversion?
>>
>> Thanks,
>>
>> Kevin Zheng
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> As I said in IRC, the tags feature took a long time to land (several
> releases) so between the time that the spec was written and then the data
> model patch and finally the REST API change, we might have just totally
> missed that the length of the column in the DB was different than what was
> allowed in the REST API.
>
> I'm not aware of any technical reason why they are different. I'm hoping
> that Sergey Nikitin might remember something about this. But even looking
> at the spec:
>
> https://specs.openstack.org/openstack/nova-specs/specs/liber
> ty/approved/tag-instances.html
>
> The column was meant to be 60 so my guess is someone noticed that in the
> REST API review but missed it in the data model review.
>

I can't remember the details either. Hopefully Sergey can remember something.


>
> As for needing a microversion of changing this, I tend to think we don't
> need a microversion because we're not restricting the schema in the REST
> API, we're just increasing it to match the length in the data model. But
> I'd like opinions from the API subteam about that.
>
>
We still need a microversion so that users can discover the max length
across different cloud deployments.


> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Marking Tintri driver as unsupported

2017-01-16 Thread GHANSHYAM MANN
Or how about just disabling the flag, which will skip the snapshot manage
tests on CIs where they should not run?

The flag in Devstack is TEMPEST_VOLUME_MANAGE_SNAPSHOT, which can be set to
False on the CI.

I added the same on the ceph gate jobs [1]; it can similarly be added on the
CI side if that's OK?

..1 https://review.openstack.org/#/c/421073/

​-gmann

On Tue, Jan 17, 2017 at 4:32 AM, Sean McGinnis 
wrote:

> On Mon, Jan 16, 2017 at 08:52:34AM +0100, Silvan Kaiser wrote:
> > Apoorva, Sean,
> > after some time i managed to bring up Quobyte CI last friday which tested
> > fine [1,2,3] for a short time and then ran into the same issues with
> > manage_snapshot
> > related tempest tests Apoorva describes (Starting chronologically at
> [4]).
> > From here i see two steps:
> > a) look into the reason for the issue with manage_snapshot tests
> > b) a short note on how to proceed for marking drivers with reinstated CIs
>
> Hey Sylvan,
>
> As mentioned, if the CI requirements can be met yet this cycle, we can
> just do a revert for the patch that set the unsupported flag. That
> should be a quick and easy one to get through once we see the CI is
> back and stable.
>
> Thanks!
> Sean
>
> > back as supported is much appreciated
> > Best
> > Silvan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][glance] gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

2017-01-16 Thread GHANSHYAM MANN
Yea, the manage snapshot tests should be skipped on the ceph backend.

I disabled those tests for the *-ceph-* jobs, and glance_store will be
unblocked once that is merged.

-  https://review.openstack.org/#/c/421073/


There is a discussion going on about whether to make manage_snapshot default
to False on the devstack side and also improve the tempest tests, but that
might take time.

But for the ceph jobs it will be disabled in that patch and should not block
the gate etc.

Also, if any CI is failing due to this, they can quickly disable that flag and
skip the tests which are not meant to run on their CI.


​-gmann

On Tue, Jan 17, 2017 at 11:18 AM, Brian Rosmaita  wrote:

> I need some help troubleshooting a glance_store gate failure that I
> think is due to a recent change in a tempest test and a configuration
> problem (or it could be something else entirely).  I'd appreciate some
> help solving this as it appears to be blocking all merges into
> glance_store, which, as a non-client library, is supposed to be frozen
> later this week.
>
> Here's an example of the failure in a global requirements update patch:
> https://review.openstack.org/#/c/420832/
> (I should mention that the failure is occurring in a volume test in
> tempest.api.volume.admin.v2.test_snapshot_manage.
> SnapshotManageAdminV2Test,
> not a glance_store test.)
>
> The test is being run by this gate:
> gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial
>
> The test that's failing, test_unmanage_manage_snapshot was recently
> modified by Change-Id: I77be1cf85a946bf72e852f6378f0d7b43af8023a
> To be more precise, the test itself wasn't changed, rather the criterion
> for skipping the test was changed (from a skipIf based on whether the
> backend was ceph, to a skipUnless based on a boolean config option).
>
> From the comment in the old code on that patch, it seems like the test
> config value should be False when ceph is the backend (and that's its
> default).  But in the config dump of the failing test run,
> http://logs.openstack.org/32/420832/1/check/gate-tempest-
> dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial/
> dab27eb/logs/tempest_conf.txt.gz
> you can see that manage_snapshot is True.
>
> That's why I think the problem is being caused by a flipped test config
> value, but I'm not sure where the configuration for this particular gate
> lives so I don't know what repo to propose a patch to.
>
> Thanks in advance for any help,
> brian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Qiming Teng
On Mon, Jan 16, 2017 at 08:21:02PM +, Fox, Kevin M wrote:
> IMO, This is why the big tent has been so damaging to OpenStack's progress. 
> Instead of lifting the commons up, by requiring dependencies on other 
> projects, there by making them commonly deployed and high quality, post big 
> tent, each project reimplements just enough to get away with making something 
> optional, and then the commons, and OpenStack as a whole suffers. This 
> behavior MUST STOP if OpenStack is to make progress again. Other projects, 
> such as Kubernetes are making tremendous progress because they are not 
> hamstrung by one component trying desperately not to depend on another when 
> the dependency is appropriate. They enhance the existing component until its 
> suitable and the whole project benefits. Yes, as an isolated dev, the 
> behavior to make deps optional seems to make sense. But as a whole, OpenStack 
> is suffering and will become increasingly irrelevant moving forward if the 
> current path is continued. Please, please reconsider what the current stance 
> on dependencies is d
>  oing to the community. This problem is not just isolated to barbican, but 
> lots of other projects as well. We can either help pull each other up, or we 
> can step on each other to try and get "on top". I'd rather we help each other 
> rather then the destructive path we seem to be on.
> 
> My 2 cents.
> Kevin
> 

Very well said, Kevin. The problem is not just about Barbican. Time for
the TC and the whole community to rethink, or even just to realize,
where we are heading ... Time for each and every project to do some
introspection ... Time to solve this chicken-and-egg problem.

If we stick together, there still seems to be a chance for the future;
otherwise, we will feel guilty about wasting people's lives building
something that eventually falls apart. Should we kill all
"sh**-as-a-service" projects and focus on the few "core" services and hope
they will meet all users' requirements? Or should we give every project an
equal chance to be adopted? Who is blocking other services from getting
adopted? How many projects are listed on the project navigator?

- Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][glance] gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

2017-01-16 Thread Rochelle Grober
There was a driver thread about snapshot management test failures.  It appears 
there is a devstack config option that changed from false to true, causing 
the cinder drivers all sorts of issues.  Here is the email that discusses the 
change and its effects on cinder drivers:

http://lists.openstack.org/pipermail/openstack-dev/2017-January/110184.html

It might not be related, but...

A bit of a coincidence if not.

--Rocky

-Original Message-
From: Brian Rosmaita [mailto:rosmaita.foss...@gmail.com] 
Sent: Monday, January 16, 2017 6:18 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [infra][qa][glance] 
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

I need some help troubleshooting a glance_store gate failure that I think is 
due to a recent change in a tempest test and a configuration problem (or it 
could be something else entirely).  I'd appreciate some help solving this as it 
appears to be blocking all merges into glance_store, which, as a non-client 
library, is supposed to be frozen later this week.

Here's an example of the failure in a global requirements update patch:
https://review.openstack.org/#/c/420832/
(I should mention that the failure is occurring in a volume test in 
tempest.api.volume.admin.v2.test_snapshot_manage.SnapshotManageAdminV2Test,
not a glance_store test.)

The test is being run by this gate:
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial

The test that's failing, test_unmanage_manage_snapshot was recently modified by 
Change-Id: I77be1cf85a946bf72e852f6378f0d7b43af8023a
To be more precise, the test itself wasn't changed, rather the criterion for 
skipping the test was changed (from a skipIf based on whether the backend was 
ceph, to a skipUnless based on a boolean config option).

From the comment in the old code on that patch, it seems like the test config 
value should be False when ceph is the backend (and that's its default).  But 
in the config dump of the failing test run, 
http://logs.openstack.org/32/420832/1/check/gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial/dab27eb/logs/tempest_conf.txt.gz
you can see that manage_snapshot is True.

That's why I think the problem is being caused by a flipped test config value, 
but I'm not sure where the configuration for this particular gate lives so I 
don't know what repo to propose a patch to.

Thanks in advance for any help,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking_sfc] flow table seems incorrect

2017-01-16 Thread 郑杰
Hi, all,




I deployed OpenStack Mitaka with networking-sfc (-b stable/mitaka), modified 
neutron.conf with 
service_plugins=router,metering,networking_sfc.services.sfc.plugin.SfcPlugin 
and a new section [sfc] with drivers=ovs, then migrated the Neutron DB and 
restarted all Neutron and Nova services; everything seems all right.


Next I create my basic networks:


  net-create net1
  net-create --router:external=true --shared net2
  subnet-create --name subnet1 --enable-dhcp net1 10.0.1.0/24
  subnet-create --name subnet2 --enable-dhcp net2 130.140.150.0/24
  router-create r1
  router-interface-add r1 subnet1 
  router-gateway-set r1 net2



and then create 4 ports within net1 :
  port-create --name p1 net1
  port-create --name p2 net1
  port-create --name p3 net1
  port-create --name p4 net1

Then I boot two servers, one on each host, by assigning --availability-zone:
 openstack server create --image cirros --flavor m1.tiny --nic port-id={p1} 
--nic port-id={p2}  --availability-zone nova::host1 testvm1 
 openstack server create --image cirros --flavor m1.tiny --nic port-id={p3} 
--nic port-id={p4}  --availability-zone nova::host2 testvm2
and then create the port chain:
 port-pair-create --ingress p1 --egress p2 pp1
 port-pair-create --ingress p3 --egress p4 pp2
 port-pair-group-create --port-pair pp1 --port-pair pp2  ppg1 
 port-chain-create --port-pair-group ppg1 pc1 


Last, we dump the OVS flow tables; we only find MPLS rules in tables 0 and 10 
in br-int, and nowhere else can we find more MPLS rules.


[root@host173 ~(keystone_admin)]# ovs-ofctl dump-flows br-int |grep mpls
 cookie=0x9c6ea94ea755534a, duration=412203.263s, table=0, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, priority=20,mpls 
actions=resubmit(,10)
 cookie=0x9c6ea94ea755534a, duration=344324.930s, table=10, n_packets=0, 
n_bytes=0, idle_age=65534, hard_age=65534, 
priority=1,mpls,dl_vlan=1,dl_dst=fa:16:3e:9d:89:1f,mpls_label=511 
actions=strip_vlan,pop_mpls:0x0800,output:15

 
There is no flow which can push MPLS labels at all.


Does anybody know what's wrong with my steps, or anything else?


Thanks & Regards
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Mistral][Ansible] Calling Ansible from Mistral workflows

2017-01-16 Thread Renat Akhmerov
Dougal, I looked at the source code. It seems like it’s already usable enough.
Do you think we need to put a section about Ansible actions into the Mistral docs?
I’m also wondering whether we need to move this code into the mistral repo or
leave it on GitHub.
Maybe a better time for moving it under the Mistral umbrella would be when we
finish our actions refactoring activity (when actions are moved into a separate
repo, e.g. mistral-extra).
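
For readers who haven't opened the repository yet, such an action plugin is
conceptually quite small. A heavily simplified, hypothetical sketch (the real
code is in the GitHub project linked in Dougal's mail below, and the exact
base class and run() signature depend on the Mistral version):

    # Hypothetical sketch only -- see mistral-ansible-actions for the
    # actual implementation.
    import json
    import subprocess

    from mistral.actions import base


    class AnsiblePlaybookAction(base.Action):
        def __init__(self, playbook, extra_vars=None):
            self.playbook = playbook
            self.extra_vars = extra_vars or {}

        def run(self):
            # Shell out to ansible-playbook and hand the result back to
            # the workflow as a structured dict.
            cmd = ['ansible-playbook', self.playbook,
                   '--extra-vars', json.dumps(self.extra_vars)]
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            stdout, stderr = proc.communicate()
            return {'returncode': proc.returncode,
                    'stdout': stdout,
                    'stderr': stderr}

        def test(self):
            return None

Nothing above is Mistral-specific beyond the Action interface itself, which is
partly why the code could reasonably live either in-tree or on GitHub.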

Thoughts?

Renat Akhmerov
@Nokia

> On 12 Jan 2017, at 22:27, Dougal Matthews  wrote:
> 
> Hey all,
> 
> I just wanted to share a quick experiment that I tried out. I had heard there 
> was some interest in native Ansible actions for Mistral. After much dragging 
> my heels I decided to give it a go, and it turns out to be very easy.
> 
> This code is very raw and has only been lightly tested - I just wanted to 
> make sure it was going in the right direction and see what everyone thought.
> 
> I wont duplicate it all again here, but you can see the details on either 
> GitHub or a quick blog post that I put together.
> 
> https://github.com/d0ugal/mistral-ansible-actions 
> 
> http://www.dougalmatthews.com/2017/Jan/12/calling-ansible-from-mistral-workflows/
>  
> 
> 
> Cheers,
> Dougal
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Matt Riedemann

On 1/16/2017 7:12 PM, Zhenyu Zheng wrote:

Hi Nova,

I just discovered something interesting, the tag has a limited length,
and in the current implementation, it is 60 in the tag object definition:
http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/tag.py#n18

but 80 in the db model:
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n1464

As asked in the IRC and some of the cores responded(thanks to Matt and
Jay), it seems to be an
oversight and has no particular reason to do it this way.

Since we have already created a 80 long space in DB and the current
implementation might be confusing,  maybe we should expand the
limitation in tag object definition to 80. Besides, users can enjoy
longer tags.

And the question could be, does anyone know why it is 60 in object but
80 in DB model? is it an oversight or we have some particular reason?

If we could expand it to be the same as DB model (80 for both), it is ok
to do this tiny change without microversion?

Thanks,

Kevin Zheng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



As I said in IRC, the tags feature took a long time to land (several 
releases), so between the time the spec was written, the data model patch, 
and finally the REST API change, we might have just totally missed that the 
length of the column in the DB was different from what was allowed in the 
REST API.


I'm not aware of any technical reason why they are different. I'm hoping 
that Sergey Nikitin might remember something about this. But even 
looking at the spec:


https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/tag-instances.html

The column was meant to be 60 so my guess is someone noticed that in the 
REST API review but missed it in the data model review.


As for needing a microversion for changing this, I tend to think we don't 
need a microversion because we're not restricting the schema in the REST 
API, we're just increasing it to match the length in the data model. But 
I'd like opinions from the API subteam about that.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-16 Thread Rochelle Grober
YEES!

-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Monday, January 16, 2017 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

On 14/01/17 04:07, Joshua Harlow wrote:
>
> Sometimes I almost wish we just rented out a football stadium (or 
> equivalent, a soccer field?) and put all the contributors in the 'field'
> with bean bags and some tables and a bunch of white boards (and a lot 
> of wifi and power cords) and let everyone 'have at it' (ideally in a 
> stadium with a roof in the winter). Maybe put all the infra people in 
> a circle in the middle and make the foundation people all wear referee 
> outfits.
>
> It'd be an interesting social experiment at least :-P

I have been informed we have located at least 3 referee outfits across 
Foundation staff, along with a set of red/yellow cards.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]Tricircle Pike PTG

2017-01-16 Thread joehuang
As the Ocata stable branch will be created and released soon, it's time to 
prepare what we need to discuss and implement in the Pike release:

The etherpad has been created at: 
https://etherpad.openstack.org/p/tricircle-ptg-pike

Please feel free to add topics and ideas to the etherpad, and let's plan the 
agenda as well.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][qa][glance] gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

2017-01-16 Thread Brian Rosmaita
I need some help troubleshooting a glance_store gate failure that I
think is due to a recent change in a tempest test and a configuration
problem (or it could be something else entirely).  I'd appreciate some
help solving this as it appears to be blocking all merges into
glance_store, which, as a non-client library, is supposed to be frozen
later this week.

Here's an example of the failure in a global requirements update patch:
https://review.openstack.org/#/c/420832/
(I should mention that the failure is occurring in a volume test in
tempest.api.volume.admin.v2.test_snapshot_manage.SnapshotManageAdminV2Test,
not a glance_store test.)

The test is being run by this gate:
gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial

The test that's failing, test_unmanage_manage_snapshot, was recently
modified by Change-Id: I77be1cf85a946bf72e852f6378f0d7b43af8023a
To be more precise, the test itself wasn't changed; rather, the criterion
for skipping the test was changed (from a skipIf based on whether the
backend was ceph, to a skipUnless based on a boolean config option).
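
For anyone else trying to follow along, my understanding of the shape of that
change (illustrative only; the option names other than manage_snapshot are
from memory and may not match exactly -- see the Change-Id above for the real
diff) is roughly:

    import testtools

    from tempest import config

    CONF = config.CONF


    class SnapshotManageSketch(object):
        # Old criterion (roughly): skip when the configured backend is
        # ceph, e.g.
        #   @testtools.skipIf(CONF.volume.storage_protocol == 'ceph', '...')

        # New criterion: only run when the deployment advertises support
        # through the boolean flag seen in the tempest_conf.txt dump below.
        @testtools.skipUnless(CONF.volume_feature_enabled.manage_snapshot,
                              'Snapshot manage tests are disabled')
        def test_unmanage_manage_snapshot(self):
            pass

So whether the test runs is now purely a function of that boolean flag, which
is why a flipped value in the tempest config shows up as a new failure on a
backend that doesn't support managing snapshots.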

From the comment in the old code on that patch, it seems like the test
config value should be False when ceph is the backend (and that's its
default).  But in the config dump of the failing test run,
http://logs.openstack.org/32/420832/1/check/gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial/dab27eb/logs/tempest_conf.txt.gz
you can see that manage_snapshot is True.

That's why I think the problem is being caused by a flipped test config
value, but I'm not sure where the configuration for this particular gate
lives so I don't know what repo to propose a patch to.

Thanks in advance for any help,
brian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-16 Thread Zhenyu Zheng
Hi Nova,

I just discovered something interesting: tags have a limited length, and
in the current implementation it is 60 in the tag object definition:
http://git.openstack.org/cgit/openstack/nova/tree/nova/objects/tag.py#n18

but 80 in the db model:
http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/models.py#n1464
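
If I read the two linked definitions correctly, the mismatch boils down to
roughly the following (paraphrased, not the literal Nova source):

    # object/API side (nova/objects/tag.py):
    MAX_TAG_LENGTH = 60

    # DB side (nova/db/sqlalchemy/models.py), roughly:
    from sqlalchemy import Column, String
    tag = Column(String(80))

i.e. the column accepts 80 characters while the object-side validation stops
at 60.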

As I asked in IRC and some of the cores responded (thanks to Matt and
Jay), it seems to be an oversight, with no particular reason for doing it
this way.

Since we have already created an 80-character column in the DB and the
current implementation might be confusing, maybe we should expand the
limit in the tag object definition to 80. Besides, users could then enjoy
longer tags.

So the question is: does anyone know why it is 60 in the object but 80 in
the DB model? Is it an oversight, or is there a particular reason?

And if we could expand it to match the DB model (80 for both), is it OK to
make this tiny change without a microversion?

Thanks,

Kevin Zheng
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-16 Thread Yujun Zhang
Sounds good.

Have you created an etherpad page for collecting topics, Ifat?

On Mon, Jan 16, 2017 at 10:43 PM Afek, Ifat (Nokia - IL) <
ifat.a...@nokia.com> wrote:

>
>
> *From: *Yujun Zhang 
> *Date: *Sunday, 15 January 2017 at 17:53
>
>
>
> About fault and alarm, what I was thinking about the causal/deducing chain
> in root cause analysis.
>
>
>
> Fault state means the resource is not fully functional and it is evaluated
> by related indicators. There are alarms on events like power loss or
> measurands like CPU high, memory low, temperature high. There are also
> alarms based on deduced state, such as "host fault", "instance fault".
>
>
>
> So an example chain would be
>
> · "FAULT: power line cut off" =(monitor)=> "ALARM: host power
> loss" =(inspect)=> "FAULT: host is unavailable" =(action)=> "ALARM: host
> fault"
>
> · "FAULT: power line cut off" =(monitor)=> "ALARM: host power
> loss" =(inspect)=> "FAULT: host is unavailable" =(inspect)=> "FAULT:
> instance is unavailable" =(action)=> "ALARM: instance fault"
>
> If we omit the resource, then we get the causal chain as it is in Vitrage
>
> · "ALARM: host power loss" =(causes)=> "ALARM: host fault"
>
> · "ALARM: host power loss" =(causes)=> "ALARM: instance fault"
>
> But what the user care about might be there "FAULT: power line cut off"
> causes all these alarms. What I haven't made clear yet is the equivalence
> between fault and alarm.
>
>
>
> I may have made it more complex with my *immature* thoughts. It could be
> even more complex if we consider multiple upstream causes and downstream
> outcome. It may be an interesting topic to be discussed in design session.
>
>
>
>
>
> [Ifat] I agree. Let’s discuss this in the next design session we’ll have
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fei Long Wang


On 17/01/17 13:00, Fox, Kevin M wrote:
> Your right, it is not what the big tent was about, but the big tent had some 
> unintended side affects. The list, as you stated:
>
> * No longer having a formal incubation and graduation period/review for
> applying projects
> * Having a single, objective list of requirements and responsibilities
> for inclusion into the OpenStack development community
> * Specifically allowing competition of different source projects in the
> same "space" (e.g. deployment or metrics)
>
> Turned into (my opinion):
>
> #1, projects, having a formal incubation/graduation period had the 
> opportunity to get feedback on what they could do to better integrate with 
> other projects and were strongly encouraged to do so to make progress towards 
> graduation. Without the formality, no one tended to bother.
>
> #2, Not having a single, objective list of requirements/responsibility: I 
> believe not having a list made folks take a hard look at what other projects 
> were doing and try harder to play nice in order to get graduated or risk the 
> unknown of folks coming back over and over and telling them more integration 
> was required.
>
> #3, the benefits/drawbacks of specifically allowing competition is rather 
> hard to predict. It can encourage alternate solutions to be created and 
> create a place where better ideas can overcome less good ideas. But it also 
> removes pressure to cooperate on one project rather then just take the 
> sometimes much easier route of just doing it yourself in your own project.
>
> I'm not blaming the big tent for all the ills of the OpenStack world. It has 
> had some real benefits. This problem is something bigger then the big tent. 
> It preexisted the tent. The direction the pressure to share was very 
> unidirectional pre big tent, applied to new projects much more then old 
> projects.
>
> But, I'm just saying the Big Tent had an (unintended) net negative affect 
> making this particular problem worse.
>
> Looking at the why of a problem is one of the important steps to formulating 
> a solution. OpenStack no longer has the amount of tooling to help elevate the 
> issue it had under the time before the Big Tent. Nothing has come up since to 
> replace it.
>
> I'm not stating that the big tent should be abolished and we go back to the 
> way things were. But I also know the status quo is not working either. How do 
> we fix this? Anyone have any thoughts? 

Could we create a new tag (at
https://governance.openstack.org/tc/reference/tags/) to indicate that a
project is trusted to be integrated? Then, if an existing project wants to
use a feature of a trusted-integrated project, there is no need for a new
wheel. To be clear, the integration would be a forced integration.
Looking at this list, https://www.openstack.org/software/project-navigator/,
most of the projects have been developed for more than 3 years;
unfortunately, they're not trusted. On the contrary, sometimes we're brave
enough to use some very new 3rd-party library. That's a little ironic.

>
> Thanks,
> Kevin
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Monday, January 16, 2017 1:57 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
> trying to avoid Barbican, still?
>
> On 01/16/2017 04:09 PM, Fox, Kevin M wrote:
>> If the developers that had issue with the lack of functionality,
>> contributed to Barbican rather then go off on their own, the problem
>>  would have been solved much more quickly. The lack of sharing means
>>  the problems don't get fixed as fast.
> Agreed completely.
>
>> As for operators, If the more common projects all started depending
>> on it, it would be commonly deployed.
> Also agreed.
>
>> Would the operators deploy Barbican just for Magnum? maybe not. maybe
>> so. For Magnum, Ironic, and Sahara, more likely . Would they deploy
>> it if Neutron and Keystone depended on it, yeah. they would. And then
>> all the other projects would benefit from it being there, such as
>> Magnum.
> Totally agreed.
>
>  > The sooner OpenStack as a whole can decide on some new core
>> components so that projects can start hard depending on them, the
>> better I think. That process kind of stopped with the arrival of the
>> big tent.
> You are using a false equivalence again.
>
> As I've mentioned numerous times before on the mailing list, the Big
> Tent was NOT either of these things:
>
> * Expanding what the "core components" of OpenStack
> * Expanding the mission or scope of OpenStack
>
> What the Big Tent -- technically "Project Structure Reform" -- was about
> was actually the following:
>
> * No longer having a formal incubation and graduation period/review for
> applying projects
> * Having a single, objective list of requirements and responsibilities
> for inclusion into the OpenStack development community
> * Specifically allowing competition of different source projects in the
> same 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Amrith Kumar
Ian,

This is a fascinating conversation. Let me offer two observations.

First, Trove has long debated the ideal solution for storing secrets. There
have been many conversations, and Barbican has been considered many times.
We sought input from people who were deploying and operating Trove at scale;
customers of Tesora, self described users of the upstream Trove, and some of
the (then) active contributors who were also operators.

The consensus was that installing and deploying OpenStack was hard enough,
and requiring the installation of yet more services was problematic. This is
not something which singles out Barbican in any way. For example, Trove uses
Swift as the default object store where backups are stored, and in
implementing replication we leveraged the backup capability. This means that
to have replication, one needs to have Swift. Several deployers have
objected to this since they don't have Swift. But that is a dependency which
we considered to be a hard one, and we offer no alternatives; you can have
Ceph if you so desire, but we still access it as a Swift store. Similarly,
we needed some job scheduling capabilities and opted to use Mistral for
this; we didn't reimplement all of those capabilities in Trove.

However, when it comes to secret storage, the consensus of opinion was that
Barbican would be "yet another service."

Here is the second observation. This conversation reminds me of many
conversations from years past "Why do you want to use a NoSQL database, we
have a  database already". I've sat in on heated arguments
amongst architects who implemented "lightweight key-value storage based on
" and didn't use the corporate standard RDBMS.

One size doesn't fit all. And today, ten years on, it is clear that there
are legitimate situations where one would be silly to require an architect
to use an RDBMS; we talk of polyglot persistence as a matter of course.

The thing is this: Barbican may be a fine project with a ton of capabilities
that I don't even know of nor have the ability to comprehend. But there's a
minimum threshold of requirements that I need to have met before the benefit
of the new dependency becomes valuable. From Trove's perspective, I don't
believe we have crossed that threshold (yet). If Barbican were a library, not
a project, it might be a much easier sell for deployers.

Finally, it is my personal belief that making software pluggable such that
"if it discovers Barbican, it uses it, if it discovers XYZ it uses it, if it
discovers PQR it uses that ..." is a very expensive design paradigm. Unless
Barbican, PQR, XYZ and any other implementation each provide material value
to the consumer, and there is significant deployment and usage of each, the
cost of maintaining the transparent pluggability, the cost of testing, and
the cost of development all add up very quickly.

Which is why, when some project wants to store a secret, it ciphers it using
some one-way hash and stuffs that in a database (if that's all it needs).

My 2c is that requiring projects to use Barbican as the secret store is the
equivalent of requiring developers (10 years ago) to use an RDBMS. One size
doesn't fit all ... Barbican is a "one size" secret store; I don't need all
of its bells and whistles, just as the guy who wants a key-value store
doesn't mind eventual consistency and lost writes but can't take the cost of
a traditional RDBMS.

Thanks,

-amrith



> -Original Message-
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: Monday, January 16, 2017 8:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [all] [barbican] [security] Why are projects
> trying to avoid Barbican, still?
> 
> Hi everyone,
> 
> I've seen a few nascent projects wanting to implement their own secret
> storage to either replace Barbican or avoid adding a dependency on it.
> When I've pressed the developers on this point, the only answer I've
> received is to make the operator's lives simpler.
> 
> I've been struggling to understand the reasoning behind this and I'm
> wondering if there are more people around who can help me understand.
> 
> To help others help me, let me provide my point of view. Barbican's been
> around for a few years already and has been deployed by several companies
> which have probably audited it for security purposes. Most of the
> technology involved in Barbican is proven to be secure and the way the
> project has strung those pieces together has been analyzed by the OSSP
> (OpenStack's own security group). It doesn't have a requirement on a
> hardware TPM which means there's no hardware upgrade cost. Furthermore,
> several services already provide the option of using Barbican (but won't
> place a hard requirement on it). It stands to reason (in my opinion) that
> if new services have a need for secrets and other services already
> support using Barbican as secret storage, then those new services should
> be using Barbican. It seems a
> 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Joshua Harlow

Fox, Kevin M wrote:

You're right, it is not what the big tent was about, but the big tent had some
unintended side effects. The list, as you stated:

* No longer having a formal incubation and graduation period/review for
applying projects
* Having a single, objective list of requirements and responsibilities
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in the
same "space" (e.g. deployment or metrics)

Turned into (my opinion):

#1, with a formal incubation/graduation period, projects had the opportunity
to get feedback on what they could do to better integrate with other projects
and were strongly encouraged to do so to make progress towards graduation.
Without the formality, no one tended to bother.

#2, not having a single, objective list of requirements/responsibilities: I
believe not having such a list made folks take a hard look at what other
projects were doing and try harder to play nice in order to get graduated, or
risk the unknown of folks coming back over and over and telling them more
integration was required.

#3, the benefits/drawbacks of specifically allowing competition are rather
hard to predict. It can encourage alternate solutions to be created and
create a place where better ideas can overcome less good ideas. But it also
removes pressure to cooperate on one project rather than just take the
sometimes much easier route of doing it yourself in your own project.

I'm not blaming the big tent for all the ills of the OpenStack world. It has
had some real benefits. This problem is something bigger than the big tent.
It preexisted the tent. But the pressure to share was very unidirectional
pre big tent: it applied to new projects much more than to old projects.

But I'm just saying the Big Tent had an (unintended) net negative effect,
making this particular problem worse.

Looking at the why of a problem is one of the important steps to formulating
a solution. OpenStack no longer has the tooling it had before the Big Tent to
help elevate the issue. Nothing has come up since to replace it.

I'm not stating that the big tent should be abolished and we go back to the
way things were. But I also know the status quo is not working either. How do
we fix this? Anyone have any thoughts?


Embrace the larger world instead of trying to recreate parts of it:
create alliances with the CNCF and/or other companies that are getting
actively involved there, and make bets that solutions there are things
that people want to use directly (instead of turning openstack into some
kind of 'integration engine', aka middleware).


How many folks have been watching 
https://github.com/cncf/toc/tree/master/proposals or 
https://github.com/cncf/toc/pulls?


Start accepting that what we call OpenStack may be better off extracting
the *current* good parts of OpenStack and cutting off some of the parts
that aren't really worth it / that nobody really uses or deploys anyway
(and, say, starting to modernize the parts that are left by moving them
under the CNCF umbrella and adopting some of the technology there
instead).


Rinse and repeat this same shift after say another ~6 years when the 
CNCF accumulates enough projects that nobody really uses/deploys.


-Josh




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Brandon B. Jozsa
Inline


On January 16, 2017 at 7:04:00 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I'm not stating that the big tent should be abolished and we go back to the way 
things were. But I also know the status quo is not working either. How do we 
fix this? Anyone have any thoughts?


Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on its head and more positively discuss what
can be done to improve Barbican, or ways that we can collaboratively address
any issues. I'm almost wondering whether some of the opinions about Barbican
are even coming from its heavy users, or from users who've put much time into
developing/improving Barbican. If not, let's collectively change that.


Thanks,
Kevin


Brandon B. Jozsa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Joshua Harlow

Is the problem perhaps that no one is aware of other projects using
Barbican? Is the status on the project navigator alarming (it looks
like some of this information is potentially out of date)? Has
Barbican been deemed too hard to deploy?

I really want to understand why so many projects feel the need to
implement their own secrets storage. This seems a bit short-sighted
and foolish. While these projects are making themselves easier to
deploy, if not done properly they are potentially endangering their
users and that seems like a bigger problem than deploying Barbican to
me.



Just food for thought, and I'm pretty sure it's probably the same for
various others; but one part that I feel is a reason folks don't deploy
barbican is that most companies need a solution that works beyond OpenStack,
and, whether people like it or not, an OpenStack-specific solution isn't
really something that is attractive (especially with the growing adoption of
other things that are *not* OpenStack).


Another reason: some companies have already built, or are building,
solutions that offer functionality like what's in
https://github.com/square/keywhiz and others, and such things already
integrate natively with kubernetes and **their existing** systems ... so why
would they bother with a service like barbican?


IMHO we've got to get our heads out of the sand with regard to some of this
stuff: expecting people to consume all things OpenStack, and only all things
OpenStack, is a losing battle. Companies will consume what is right for their
needs; whether that is in the OpenStack community or not doesn't really
matter (maybe at one point it did).


My 2 cents,

Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fox, Kevin M
You're right, it is not what the big tent was about, but the big tent had some
unintended side effects. The list, as you stated:

* No longer having a formal incubation and graduation period/review for
applying projects
* Having a single, objective list of requirements and responsibilities
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in the
same "space" (e.g. deployment or metrics)

Turned into (my opinion):

#1, with a formal incubation/graduation period, projects had the opportunity
to get feedback on what they could do to better integrate with other projects
and were strongly encouraged to do so to make progress towards graduation.
Without the formality, no one tended to bother.

#2, not having a single, objective list of requirements/responsibilities: I
believe not having such a list made folks take a hard look at what other
projects were doing and try harder to play nice in order to get graduated, or
risk the unknown of folks coming back over and over and telling them more
integration was required.

#3, the benefits/drawbacks of specifically allowing competition are rather
hard to predict. It can encourage alternate solutions to be created and
create a place where better ideas can overcome less good ideas. But it also
removes pressure to cooperate on one project rather than just take the
sometimes much easier route of doing it yourself in your own project.

I'm not blaming the big tent for all the ills of the OpenStack world. It has
had some real benefits. This problem is something bigger than the big tent.
It preexisted the tent. But the pressure to share was very unidirectional
pre big tent: it applied to new projects much more than to old projects.

But I'm just saying the Big Tent had an (unintended) net negative effect,
making this particular problem worse.

Looking at the why of a problem is one of the important steps to formulating
a solution. OpenStack no longer has the tooling it had before the Big Tent to
help elevate the issue. Nothing has come up since to replace it.

I'm not stating that the big tent should be abolished and we go back to the
way things were. But I also know the status quo is not working either. How do
we fix this? Anyone have any thoughts?

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, January 16, 2017 1:57 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
trying to avoid Barbican, still?

On 01/16/2017 04:09 PM, Fox, Kevin M wrote:
> If the developers that had issue with the lack of functionality,
> contributed to Barbican rather then go off on their own, the problem
>  would have been solved much more quickly. The lack of sharing means
>  the problems don't get fixed as fast.

Agreed completely.

> As for operators, If the more common projects all started depending
> on it, it would be commonly deployed.

Also agreed.

> Would the operators deploy Barbican just for Magnum? maybe not. maybe
> so. For Magnum, Ironic, and Sahara, more likely . Would they deploy
> it if Neutron and Keystone depended on it, yeah. they would. And then
> all the other projects would benefit from it being there, such as
> Magnum.

Totally agreed.

 > The sooner OpenStack as a whole can decide on some new core
> components so that projects can start hard depending on them, the
> better I think. That process kind of stopped with the arrival of the
> big tent.

You are using a false equivalence again.

As I've mentioned numerous times before on the mailing list, the Big
Tent was NOT either of these things:

* Expanding what the "core components" of OpenStack are
* Expanding the mission or scope of OpenStack

What the Big Tent -- technically "Project Structure Reform" -- was about
was actually the following:

* No longer having a formal incubation and graduation period/review for
applying projects
* Having a single, objective list of requirements and responsibilities
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in the
same "space" (e.g. deployment or metrics)

What you are complaining about (rightly IMHO) regarding OpenStack
project contributors not contributing missing functionality to Barbican
has absolutely nothing to do with the Big Tent:

There's no competing secret storage project in OpenStack other than
Barbican/Castellan.

Furthermore, this behaviour of projects choosing to DIY/NIH something
that existed in other projects was around long before the advent of the
Big Tent. In fact, in this specific case, the Magnum team knew about
Barbican, previously depended on it, and chose to make Barbican an
option not because Barbican wasn't OpenStack -- it absolutely WAS -- but
because it wasn't commonly deployed, which limited their own adoption.

What you are asking for, Kevin, is a 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Adam Harwell
The "single master token" issue is something I think a lot of services may
suffer from, and it's definitely something the Barbican folks are aware of
(I've made it a point to personally bring this up many times, including
hijacking parts of the keystone and barbican sessions at the Tokyo, Austin,
and Barcelona summits). When ACLs work, they should do this job,  but there
should also be better ways to do this. I know there have been proposals
around using more tightly scoped Trusts (I participated in proposing some)
but I don't know how much traction they actually got.
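
For reference, the per-secret ACLs I mean are set with a PUT on the secret's
acl sub-resource -- roughly the following, going from memory, so treat the
exact payload as approximate; the user id is a placeholder for whichever
service user should be allowed to read the secret:

PUT /v1/secrets/{secret-uuid}/acl
{
    "read": {
        "users": ["<service-user-id>"],
        "project-access": false
    }
}

With "project-access" set to false, a token scoped to the owning project is,
as I understand it, no longer enough by itself to read the secret.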

Yes, currently Barbican does both secret storage and certificate
provisioning, though I'm certain that basic secret storage was fully
implemented before any of the certificate stuff ever happened… I believe
you are correct though that it should be more tightly focused, and I think
the Barbican team agrees as well -- to my (admittedly fuzzy) recollection
there was agreement to split the certificate provisioning system into
another project as of version two of the API. Maybe Dave or Doug can
confirm this?

And for the record, Neutron-LBaaS and Octavia have at least a soft
requirement for Barbican, which is to say we only support our TLS
Termination features if Barbican is deployed. We do have our own
Castellan-like interface, but it only has a Barbican driver, and we'd love
to have that interface merged with Castellan if possible (I'm still salty
that this didn't happen years ago, but that's a much longer story).

--Adam Harwell

On Mon, Jan 16, 2017, 14:36 Duncan Thomas  wrote:

> To give a totally different prospective on why somebody might dislike
> Barbican (I'm one of those people). Note that I'm working from pretty hazy
> memories so I don't guarantee I've got everything spot on, and I am without
> a doubt giving a very one sided view. But hey, that's the side I happen to
> sit on. I certainly don't mean to cause great offence to the people
> concerned, but rather to give  ahistory from a PoV that hasn't appeared yet.
>
> Cinder needed somewhere to store volume encryption keys. Long, long ago,
> Barbican gave a great presentation about secrets as a service, ACLs on
> secrets, setups where one service could ask for keep material to be created
> and only accessible to some other service. Having one service give another
> service permission to get at a secret (but never be able to access that
> secret itself). All the clever things that cinder could possibly leverage.
> It would also handle hardware security modules and all of the other
> craziness that no sane person wants to understand the fine details of. Key
> revocation, rekeying and some other stuff was mentioned as being possible
> future work.
>
> So I waited, and I waited, and I asked some security people about what
> Barbican was doing, and I got told it had gone off and done some unrelated
> to anything we wanted certificate cleverness stuff for some other service,
> but secrets-as-a-service would be along at some point. Eventually, a long
> time after all my enthusiasm had waned, the basic feature
>
> It doesn't do what it says on the tin. It isn't very good at keeping
> secrets. If I've got a token then I can get the keys for all my volumes.
> That kind of sucks. For several threat models, I'd have done better to just
> stick the keys in the cinder db.
>
> I really wish I'd got a video of that first presentation, because it would
> be an interesting project to implement. Barbican though, from a really
> narrow focused since usecase view point really isn't very good though.
>
> (If I've missed something and Barbican can do the clever ACL type stuff
> that was talked about, please let me know - I'd be very interested in
> trying to fit it to cinder, and I'm not even working on cinder
> professionally currently.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

2017-01-16 Thread Gregory Haynes
On Thu, Jan 12, 2017, at 03:32 PM, Andre Florath wrote:
> Hello!
> 
> > The end result of this would be we have distro-minimal which depends on
> > kernel, minimal-userspace, and yum/debootstrap to build a vm/baremetal
> > capable image. We could also create a distro-container element which
> > only depends on minimal-userspace and yum/debootstrap and creates a
> > minimal container. The point being - the top level -container or
> > -minimal elements are basically convenience elements for exporting a few
> > vars and pulling in the proper elements at this point and the
> > elements/code are broken down by the functionality they provide rather
> > than use case.
> 
> This sounds awesome! Do we have some outline (etherpad) around
> where we collect all those ideas?
> 

Not that I know of... we have the ML now though :).

In seriousness though, doing this as part of the spec, or as a different
spec sounds like a great idea.
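
To make that concrete, a convenience element along those lines could end up
being little more than a dependency declaration plus a couple of exported
variables -- a rough sketch, with purely hypothetical element and variable
names:

elements/distro-container/element-deps:
    minimal-userspace
    debootstrap

elements/distro-container/environment.d/10-container.bash:
    # Hypothetical knobs telling the lower-level elements to skip the
    # kernel/bootloader pieces and emit a plain rootfs tarball.
    export DIB_SKIP_KERNEL=1
    export DIB_OUTPUT_FORMAT=tar

The spec would obviously need to pin down which variables the shared
elements actually honor.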

> Kind regards
> 
> Andre
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Duncan Thomas
To give a totally different prospective on why somebody might dislike
Barbican (I'm one of those people). Note that I'm working from pretty hazy
memories so I don't guarantee I've got everything spot on, and I am without
a doubt giving a very one sided view. But hey, that's the side I happen to
sit on. I certainly don't mean to cause great offence to the people
concerned, but rather to give a history from a PoV that hasn't appeared yet.

Cinder needed somewhere to store volume encryption keys. Long, long ago,
Barbican gave a great presentation about secrets as a service, ACLs on
secrets, setups where one service could ask for key material to be created
and only accessible to some other service. Having one service give another
service permission to get at a secret (but never be able to access that
secret itself). All the clever things that cinder could possibly leverage.
It would also handle hardware security modules and all of the other
craziness that no sane person wants to understand the fine details of. Key
revocation, rekeying and some other stuff was mentioned as being possible
future work.

So I waited, and I waited, and I asked some security people about what
Barbican was doing, and I got told it had gone off and done some unrelated
to anything we wanted certificate cleverness stuff for some other service,
but secrets-as-a-service would be along at some point. Eventually, a long
time after all my enthusiasm had waned, the basic feature

It doesn't do what it says on the tin. It isn't very good at keeping
secrets. If I've got a token then I can get the keys for all my volumes.
That kind of sucks. For several threat models, I'd have done better to just
stick the keys in the cinder db.

I really wish I'd got a video of that first presentation, because it would
be an interesting project to implement. Barbican, though, from a really
narrowly focused single-use-case viewpoint, really isn't very good.

(If I've missed something and Barbican can do the clever ACL type stuff
that was talked about, please let me know - I'd be very interested in
trying to fit it to cinder, and I'm not even working on cinder
professionally currently.)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Fuel] Fuel plugin for installing Congress

2017-01-16 Thread Carlos Gonçalves
Hi Serg,

This is great news! On behalf of the OPNFV Doctor team, thank you Fuel@Opnfv
team for this effort!
We will certainly test it out as soon as possible, and we'll provide
feedback.

Cheers,
Carlos


On Mon, Jan 16, 2017 at 8:33 PM, Serg Melikyan 
wrote:

> I'd like to introduce you fuel plugin for installing and configuring
> Congress for Fuel [0].
>
> This plugin was developed by Fuel@Opnfv [1] Community in order to be
> included to the next release of the Fuel@Opnfv - Danube. We believe
> that this plugin might be helpful not only for us but also for general
> OpenStack community and decided to continue development of the plugin
> in the OpenStack Community.
>
> Please join us in the development of the Congress plugin, your
> feedback is greatly appreciated.
>
> P.S. Right now core team includes Fedor Zhadaev - original developer
> of the plugin, and couple of developers from Fuel@Opnfv including me.
> We considered adding congress-core to the list but decided to see
> amount of interest and feedback first from Congress team.
>
> References:
> [0] http://git.openstack.org/cgit/openstack/fuel-plugin-congress/
> [1] https://wiki.opnfv.org/display/Fuel/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Jay Pipes

On 01/16/2017 04:09 PM, Fox, Kevin M wrote:

If the developers that had issue with the lack of functionality,
contributed to Barbican rather then go off on their own, the problem
 would have been solved much more quickly. The lack of sharing means
 the problems don't get fixed as fast.


Agreed completely.


As for operators, If the more common projects all started depending
on it, it would be commonly deployed.


Also agreed.


Would the operators deploy Barbican just for Magnum? maybe not. maybe
so. For Magnum, Ironic, and Sahara, more likely . Would they deploy
it if Neutron and Keystone depended on it, yeah. they would. And then
all the other projects would benefit from it being there, such as
Magnum.


Totally agreed.

> The sooner OpenStack as a whole can decide on some new core

components so that projects can start hard depending on them, the
better I think. That process kind of stopped with the arrival of the
big tent.


You are using a false equivalence again.

As I've mentioned numerous times before on the mailing list, the Big 
Tent was NOT either of these things:


* Expanding what the "core components" of OpenStack are
* Expanding the mission or scope of OpenStack

What the Big Tent -- technically "Project Structure Reform" -- was about 
was actually the following:


* No longer having a formal incubation and graduation period/review for 
applying projects
* Having a single, objective list of requirements and responsibilities 
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in the 
same "space" (e.g. deployment or metrics)


What you are complaining about (rightly IMHO) regarding OpenStack 
project contributors not contributing missing functionality to Barbican 
has absolutely nothing to do with the Big Tent:


There's no competing secret storage project in OpenStack other than 
Barbican/Castellan.


Furthermore, this behaviour of projects choosing to DIY/NIH something 
that existed in other projects was around long before the advent of the 
Big Tent. In fact, in this specific case, the Magnum team knew about 
Barbican, previously depended on it, and chose to make Barbican an 
option not because Barbican wasn't OpenStack -- it absolutely WAS -- but 
because it wasn't commonly deployed, which limited their own adoption.


What you are asking for, Kevin, is a single opinionated and consolidated 
OpenStack deployment; a single OpenStack "product" if you will. This is 
a perfectly valid request. However it has nothing to do with the Big 
Tent governance reform.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-api-metadata managing firewall

2017-01-16 Thread Sam Morrison
Thanks Jens,

Is someone able to change the status of the bug from won’t-fix to confirmed so
it's visible?

Cheers,
Sam


> On 10 Jan 2017, at 10:52 pm, Jens Rosenboom  wrote:
> 
> 2017-01-10 4:33 GMT+01:00 Sam Morrison  >:
>> Hi nova-devs,
>> 
>> I raised a bug about nova-api-metadata messing with iptables on a host
>> 
>> https://bugs.launchpad.net/nova/+bug/1648643
>> 
>> It got closed as won’t fix but I think it could do with a little more
>> discussion.
>> 
>> Currently nova-api-metadata will create an iptable rule and also delete
>> other rules on the host. This was needed for back in the nova-network days
>> as there was some trickery going on there.
>> Now with neutron and neutron-metadata-proxy nova-api-metadata is little more
>> that a web server much like nova-api.
>> 
>> I may be missing some use case but I don’t think nova-api-metadata needs to
>> care about firewall rules (much like nova-api doesn’t care about firewall
>> rules)
> 
> I agree with Sam on this. Looking a bit into the code, the mangling part of 
> the
> iptables rules is only called in nova/network/l3.py, which seems to happen 
> only
> when nova-network is being used. The installation of the global nova-iptables
> setup however happens unconditionally in nova/api/manager.py as soon as the
> nova-api-metadata service is started, which doesn't make much sense in a
> Neutron environment. So I would propose to either make this setup happen
> only when nova-network is used or at least allow an deployer to turn it off 
> via
> a config option.
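
To make the suggestion concrete, the opt-out could be as small as a single
boolean option guarding the iptables setup -- a minimal sketch, and the
option name here is hypothetical, not an existing nova flag:

from oslo_config import cfg

CONF = cfg.CONF

metadata_opts = [
    cfg.BoolOpt('metadata_manage_iptables',
                default=True,
                help='If False, nova-api-metadata will not install or '
                     'modify iptables rules on the host. Useful when '
                     'neutron-metadata-proxy handles metadata traffic.'),
]
CONF.register_opts(metadata_opts)

# The service startup would then only touch iptables when
# CONF.metadata_manage_iptables is True.

Defaulting it to True keeps today's nova-network behaviour unchanged.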
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fei Long Wang


On 17/01/17 10:09, Fox, Kevin M wrote:
> If the developers that had issue with the lack of functionality, contributed 
> to Barbican rather then go off on their own, the problem would have been 
> solved much more quickly. The lack of sharing means the problems don't get 
> fixed as fast.
>
> As for operators, If the more common projects all started depending on it, it 
> would be commonly deployed. Would the operators deploy Barbican just for 
> Magnum? maybe not. maybe so. For Magnum, Ironic, and Sahara, more likely . 
> Would they deploy it if Neutron and Keystone depended on it, yeah. they would.

IMHO, I think one of the reasons is that some projects just care about the
projects themselves. They don't want to add any dependency on other projects
if they can avoid it, even though we know we should think about this from a
higher level, which would ultimately benefit OpenStack. In other words, we
only care about whether the project we're working on can be adopted more
widely; we don't think about this from the whole community's view. Big tent
is making things worse from this point of view. Most of the new non-core
services want more adoption, so anything that may impact their adoption will
be removed from the list. As for core services, anything that may impact
their existing position will be removed from the list. It may be correct
from the project's view, but it's not good from OpenStack's view. For
example, if Nova or Neutron needed a secret store, what would they do? I'm
sure Barbican would be a soft dep, just like Magnum did.

We're afraid to let projects end up depending on each other. I don't
think integration is wrong if the integration is made for good reasons.

> And then all the other projects would benefit from it being there, such as 
> Magnum. The sooner OpenStack as a whole can decide on some new core 
> components so that projects can start hard depending on them, the better I 
> think. That process kind of stopped with the arrival of the big tent.
>
> Thanks,
> Kevin
>
> 
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Monday, January 16, 2017 11:55 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
> trying to avoid Barbican, still?
>
>> On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan)  
>> wrote:
>>
>> On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:
>>
>>> -Original Message-
>>> From: Rob C 
>>> Reply: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Date: January 16, 2017 at 10:33:20
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> 
>>> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>>> projects trying to avoid Barbican, still?
>>>
 Thanks for raising this on the mailing list Ian, I too share some of
 your consternation regarding this issue.

 I think the main point has already been hit on, developers don't want to
 require that Barbican be deployed in order for their service to be
 used.

 The resulting spread of badly audited secret management code is pretty
 ugly and makes certifying OpenStack for some types of operation very
 difficult, simply listing where key management "happens" and what
 protocols are in use quickly becomes a non-trivial operation with some
 teams using hard coded values while others use configurable algorithms
 and no connection between any of them.

 In some ways I think that the castellan project was supposed to help
 address the issue. The castellan documentation[1] is a little sparse but
 my understanding is that it exists as an abstraction layer for
 key management, such that a service can just be set to use castellan,
 which in turn can be told to use either a local key-manager, provided by
 the project or Barbican when it is available.

 Perhaps a miss-step previously was that Barbican made no efforts to
 really provide a robust non-HSM mode of operation. An obvious contrast
 here is with Hashicorp Vault[2] which has garnered significant market
 share in key management because it's software-only* mode of operation is
 well documented, robust and cryptographically sound. I think that the
 lack of a sane non-HSM mode, has resulted in developers trying to create
 their own and contributed to the situation.
> Bingo!
>
 I'd be interested to know if development teams would be less concerned
 about requiring Barbican deployments, if it had a robust non-HSM
 (i.e software only) mode of operation. Lowering the cost of deployment
 for organisations that want sensible key management without the expense
 of deploying multi-site HSMs.

 * Vault supports HSM deployments also

 [1] 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Lingxian Kong
On Tue, Jan 17, 2017 at 10:09 AM, Fox, Kevin M  wrote:

> As for operators, If the more common projects all started depending on it,
> it would be commonly deployed. Would the operators deploy Barbican just for
> Magnum? maybe not. maybe so. For Magnum, Ironic, and Sahara, more likely .
> Would they deploy it if Neutron and Keystone depended on it, yeah. they
> would. And then all the other projects would benefit from it being there,
> such as Magnum.


Agree.

The problem is: was a project created just to be used together with other
OpenStack projects, or can it be used perfectly well in standalone mode?
There are a lot of projects nowadays in OpenStack besides the several most
important ones (Nova, Cinder, Neutron, Keystone, Glance, etc.). Most new
projects will say they can be used separately, without necessarily being in
an OpenStack deployment. The question is: what are the advantages of the
project over the existing solutions? Would operators get more benefit by
deploying and maintaining a new service than by using a pre-existing one?

From my perspective (maybe I'm wrong), many projects are struggling in the
OpenStack world, and at the same time they are not that competitive with
solutions outside the OpenStack world.

Just my $0.002



Cheers,
Lingxian Kong (Larry)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [refstack] RefStack IRC meeting on January 17, 2017

2017-01-16 Thread Catherine Cuong Diep

Hi Everyone,

Just a reminder that we will have our weekly RefStack IRC meeting tomorrow
January 17,  at 19:00 UTC in  #openstack-meeting-alt.  Thanks!


Catherine Diep
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fei Long Wang


On 17/01/17 09:21, Fox, Kevin M wrote:
> IMO, This is why the big tent has been so damaging to OpenStack's progress. 
> Instead of lifting the commons up, by requiring dependencies on other 
> projects, there by making them commonly deployed and high quality, post big 
> tent, each project reimplements just enough to get away with making something 
> optional, and then the commons, and OpenStack as a whole suffers. This 
> behavior MUST STOP if OpenStack is to make progress again. Other projects, 
> such as Kubernetes are making tremendous progress because they are not 
> hamstrung by one component trying desperately not to depend on another when 
> the dependency is appropriate. They enhance the existing component until its 
> suitable and the whole project benefits. Yes, as an isolated dev, the 
> behavior to make deps optional seems to make sense. But as a whole, OpenStack 
> is suffering and will become increasingly irrelevant moving forward if the 
> current path is continued. Please, please reconsider what the current stance 
> on dependencies is doing to 
>  the community. This problem is not just isolated to barbican, but lots of 
> other projects as well. We can either help pull each other up, or we can step 
> on each other to try and get "on top". I'd rather we help each other rather 
> then the destructive path we seem to be on. 
+ 100

As the PTL of Zaqar, I know some projects that use agents are reluctant to
leverage Zaqar to resolve potential security/communication issues. As a
result, customers/deployers don't want to deploy those projects. That said,
a new dependency may make the deployment harder, but sometimes, without the
support/benefit of the other services, a project won't make the list unless
you reinvent the wheel.

> My 2 cents.
> Kevin
>
> 
> From: Chris Friesen [chris.frie...@windriver.com]
> Sent: Monday, January 16, 2017 9:25 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
> trying to avoid Barbican, still?
>
> On 01/16/2017 10:31 AM, Rob C wrote:
>
>> I think the main point has already been hit on, developers don't want to
>> require that Barbican be deployed in order for their service to be
>> used.
> I think that this is a perfectly reasonable stance for developers to take.  As
> long as Barbican is an optional component, then making your service depend on 
> it
> has a good chance of limiting your potential install base.
>
> Given that, it seems like the ideal model from a security perspective would be
> to use Barbican if it's available at runtime, otherwise use something 
> else...but
> that has development and maintenance costs.
>
> Chris
>

-- 
Cheers & Best regards,
FeiLong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fox, Kevin M
If the developers that had issues with the lack of functionality had
contributed to Barbican rather than going off on their own, the problems
would have been solved much more quickly. The lack of sharing means the
problems don't get fixed as fast.

As for operators, if the more common projects all started depending on it, it
would be commonly deployed. Would the operators deploy Barbican just for
Magnum? Maybe not, maybe so. For Magnum, Ironic, and Sahara, more likely.
Would they deploy it if Neutron and Keystone depended on it? Yeah, they
would. And then all the other projects would benefit from it being there,
such as Magnum. The sooner OpenStack as a whole can decide on some new core
components so that projects can start hard depending on them, the better, I
think. That process kind of stopped with the arrival of the big tent.

Thanks,
Kevin


From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Monday, January 16, 2017 11:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
trying to avoid Barbican, still?

> On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan)  
> wrote:
>
> On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:
>
>> -Original Message-
>> From: Rob C 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: January 16, 2017 at 10:33:20
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>> projects trying to avoid Barbican, still?
>>
>>> Thanks for raising this on the mailing list Ian, I too share some of
>>> your consternation regarding this issue.
>>>
>>> I think the main point has already been hit on, developers don't want to
>>> require that Barbican be deployed in order for their service to be
>>> used.
>>>
>>> The resulting spread of badly audited secret management code is pretty
>>> ugly and makes certifying OpenStack for some types of operation very
>>> difficult, simply listing where key management "happens" and what
>>> protocols are in use quickly becomes a non-trivial operation with some
>>> teams using hard coded values while others use configurable algorithms
>>> and no connection between any of them.
>>>
>>> In some ways I think that the castellan project was supposed to help
>>> address the issue. The castellan documentation[1] is a little sparse but
>>> my understanding is that it exists as an abstraction layer for
>>> key management, such that a service can just be set to use castellan,
>>> which in turn can be told to use either a local key-manager, provided by
>>> the project or Barbican when it is available.
>>>
>>> Perhaps a miss-step previously was that Barbican made no efforts to
>>> really provide a robust non-HSM mode of operation. An obvious contrast
>>> here is with Hashicorp Vault[2] which has garnered significant market
>>> share in key management because it's software-only* mode of operation is
>>> well documented, robust and cryptographically sound. I think that the
>>> lack of a sane non-HSM mode, has resulted in developers trying to create
>>> their own and contributed to the situation.

Bingo!

>>> I'd be interested to know if development teams would be less concerned
>>> about requiring Barbican deployments, if it had a robust non-HSM
>>> (i.e software only) mode of operation. Lowering the cost of deployment
>>> for organisations that want sensible key management without the expense
>>> of deploying multi-site HSMs.
>>>
>>> * Vault supports HSM deployments also
>>>
>>> [1] http://docs.openstack.org/developer/castellan/
>>> [2] https://www.vaultproject.io/
>>
>> The last I checked, Rob, they also support DogTag IPA which is purely
>> a Software based HSM. Hopefully the Barbican team can confirm this.
>> --
>> Ian Cordasco
>
> Yep.  Barbican supports four backend secret stores. [1]
>
> The first (Simple Crypto) is easy to deploy, but not extraordinarily
> secure, since the secrets are encrypted using a static key defined in the
> barbican.conf file.
>
> The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
> a hardware base to encrypt and/or store the secrets.
> The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
> encrypt and store the secrets.
>
> We do not currently have a secret store that is both highly secure and
> easy to deploy/manage.
>
> We, the Barbican community, are very open to any ideas, blueprints, or
> patches on how to achieve this.
> In any of the homegrown per-project secret stores, has a solution been
> developed that solves both of these?
>
>
> [1]
> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
> backend.html

The above list of four backend secret stores, each with serious drawbacks is 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ade Lee
Seems to me that there are two different audiences here.

Developers want something that is easy to set up and develop against.
For that, the simple crypto plugin is provided, and it requires
essentially no setup.

In case Barbican is not available, developers should be coding against
castellan.

Deployers want something relatively simple and secure.  This could be
an HSM, or it could be Dogtag (which can be configured to store secrets
in either an HSM or in a software based HSM).

There seems to be a misconception that Dogtag is hard to deploy.  That
may have been the case in the past, but there have been great strides
that have been made to make Dogtag deployment easier.  We now have
puppet scripts etc.

In Barcelona, for example, we held a couple of workshops where Barbican
was deployed by over a hundred people using Dogtag.  The installation
scripts (which took about 10 minutes to run) can be found here:  
https://github.com/cloudkeep/barbican-workshop

And yes, Dogtag is not a simple python app. But it has been
successfully deployed behind thousands of FreeIPA installations in HA
and non-HA modes, with minimal maintenance.

This is not to say that something like a Vault back-end should not be
developed.  It absolutely should.  But we should note that any real
secure back-end is going to require some investment of time/
understanding on the deployer's side for maintenance or for setting up
HA.  And Dogtag is not as bad as it is sometimes made out to be.


It's not without its warts, though, and I'll be happy to work with anyone
who has trouble with it.
 
Ade

On Mon, 2017-01-16 at 10:50 -0800, Ian Cordasco wrote:
> -Original Message-
> From: Chris Friesen 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 16, 2017 at 11:26:41
> To: openstack-dev@lists.openstack.org  org>
> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
> projects trying to avoid Barbican, still?
> 
> > 
> > On 01/16/2017 10:31 AM, Rob C wrote:
> > 
> > > 
> > > I think the main point has already been hit on, developers don't
> > > want to
> > > require that Barbican be deployed in order for their service to
> > > be
> > > used.
> > 
> > I think that this is a perfectly reasonable stance for developers
> > to take. As
> > long as Barbican is an optional component, then making your service
> > depend on it
> > has a good chance of limiting your potential install base.
> > 
> > Given that, it seems like the ideal model from a security
> > perspective would be
> > to use Barbican if it's available at runtime, otherwise use
> > something else...but
> > that has development and maintenance costs.
> 
> More seriously it requires developers who aren't familiar with
> securely storing that kind of data re-implement exactly what Barbican
> has done, potentially.
> 
> Being realistic, and not to discount anyone's willingness to try, but
> I think the largest group of people qualified to build, review, and
> maintain that kind of software would be the Barbican team.
> 
> I guess the question then becomes: How many operators would be
> willing
> to deploy Barbican versus having to update each service as
> vulnerabilities are found, disclosed, and fixed in their clouds. If
> Barbican is as difficult to deploy as Rob is suggesting (that even
> DogTag is difficult to deploy) maybe developers should be focusing on
> fixing that instead of haphazardly reimplementing Barbican?
> 
> --
> Ian Cordasco
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Fox, Kevin M
IMO, this is why the big tent has been so damaging to OpenStack's progress.
Instead of lifting the commons up by requiring dependencies on other
projects, thereby making them commonly deployed and high quality, post big
tent each project reimplements just enough to get away with making something
optional, and then the commons, and OpenStack as a whole, suffers. This
behavior MUST STOP if OpenStack is to make progress again. Other projects,
such as Kubernetes, are making tremendous progress because they are not
hamstrung by one component trying desperately not to depend on another when
the dependency is appropriate. They enhance the existing component until it's
suitable and the whole project benefits. Yes, as an isolated dev, the
behavior of making deps optional seems to make sense. But as a whole,
OpenStack is suffering and will become increasingly irrelevant moving forward
if the current path is continued. Please, please reconsider what the current
stance on dependencies is doing to the community. This problem is not just
isolated to barbican, but affects lots of other projects as well. We can
either help pull each other up, or we can step on each other to try and get
"on top". I'd rather we help each other than take the destructive path we
seem to be on.

My 2 cents.
Kevin


From: Chris Friesen [chris.frie...@windriver.com]
Sent: Monday, January 16, 2017 9:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
trying to avoid Barbican, still?

On 01/16/2017 10:31 AM, Rob C wrote:

> I think the main point has already been hit on, developers don't want to
> require that Barbican be deployed in order for their service to be
> used.

I think that this is a perfectly reasonable stance for developers to take.  As
long as Barbican is an optional component, then making your service depend on it
has a good chance of limiting your potential install base.

Given that, it seems like the ideal model from a security perspective would be
to use Barbican if it's available at runtime, otherwise use something else...but
that has development and maintenance costs.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Adam Harwell
Someone mentioned Castellan, and was exactly correct -- Castellan is
supposed to allow for flexibility, so developers can code for the Castellan
interface and simply configure it to use Barbican or whatever else they
want.

The only drawback of Castellan at the moment is that it doesn't directly
support certificate storage (that is, if you are using groupings of
cert/intermediates/PK, they have to be stored individually). Otherwise,
using that interface would make it very easy to allow use of Barbican for
any clouds that deploy it, and something else (maybe even a *common*
something simple) otherwise (though I am fully behind just using Barbican).
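
For anyone who hasn't looked at it, coding against Castellan is a pretty
small surface. A minimal sketch, assuming the castellan key_manager
interface and glossing over real authentication (in practice you pass the
caller's keystone context):

from castellan.common.objects import passphrase
from castellan import key_manager
from oslo_config import cfg
from oslo_context import context

CONF = cfg.CONF
ctxt = context.RequestContext()  # normally the request's auth context

# The backend (Barbican or something else) is selected purely by config.
km = key_manager.API(CONF)
secret_ref = km.store(ctxt, passphrase.Passphrase('super-secret'))
retrieved = km.get(ctxt, secret_ref)
km.delete(ctxt, secret_ref)

Whatever backend is configured, the calling service's code stays the same,
which is exactly the property people keep asking for in this thread.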

   --Adam Harwell (LBaaS/Octavia)

On Mon, Jan 16, 2017, 11:57 Adrian Otto  wrote:

>
> > On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan) <
> dmcco...@cisco.com> wrote:
> >
> > On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:
> >
> >> -Original Message-
> >> From: Rob C 
> >> Reply: OpenStack Development Mailing List (not for usage questions)
> >> 
> >> Date: January 16, 2017 at 10:33:20
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> 
> >> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
> >> projects trying to avoid Barbican, still?
> >>
> >>> Thanks for raising this on the mailing list Ian, I too share some of
> >>> your consternation regarding this issue.
> >>>
> >>> I think the main point has already been hit on, developers don't want
> to
> >>> require that Barbican be deployed in order for their service to be
> >>> used.
> >>>
> >>> The resulting spread of badly audited secret management code is pretty
> >>> ugly and makes certifying OpenStack for some types of operation very
> >>> difficult, simply listing where key management "happens" and what
> >>> protocols are in use quickly becomes a non-trivial operation with some
> >>> teams using hard coded values while others use configurable algorithms
> >>> and no connection between any of them.
> >>>
> >>> In some ways I think that the castellan project was supposed to help
> >>> address the issue. The castellan documentation[1] is a little sparse
> but
> >>> my understanding is that it exists as an abstraction layer for
> >>> key management, such that a service can just be set to use castellan,
> >>> which in turn can be told to use either a local key-manager, provided
> by
> >>> the project or Barbican when it is available.
> >>>
> >>> Perhaps a miss-step previously was that Barbican made no efforts to
> >>> really provide a robust non-HSM mode of operation. An obvious contrast
> >>> here is with Hashicorp Vault[2] which has garnered significant market
> >>> share in key management because it's software-only* mode of operation
> is
> >>> well documented, robust and cryptographically sound. I think that the
> >>> lack of a sane non-HSM mode, has resulted in developers trying to
> create
> >>> their own and contributed to the situation.
>
> Bingo!
>
> >>> I'd be interested to know if development teams would be less concerned
> >>> about requiring Barbican deployments, if it had a robust non-HSM
> >>> (i.e software only) mode of operation. Lowering the cost of deployment
> >>> for organisations that want sensible key management without the expense
> >>> of deploying multi-site HSMs.
> >>>
> >>> * Vault supports HSM deployments also
> >>>
> >>> [1] http://docs.openstack.org/developer/castellan/
> >>> [2] https://www.vaultproject.io/
> >>
> >> The last I checked, Rob, they also support DogTag IPA which is purely
> >> a Software based HSM. Hopefully the Barbican team can confirm this.
> >> --
> >> Ian Cordasco
> >
> > Yep.  Barbican supports four backend secret stores. [1]
> >
> > The first (Simple Crypto) is easy to deploy, but not extraordinarily
> > secure, since the secrets are encrypted using a static key defined in the
> > barbican.conf file.
> >
> > The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
> > a hardware base to encrypt and/or store the secrets.
> > The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
> > encrypt and store the secrets.
> >
> > We do not currently have a secret store that is both highly secure and
> > easy to deploy/manage.
> >
> > We, the Barbican community, are very open to any ideas, blueprints, or
> > patches on how to achieve this.
> > In any of the homegrown per-project secret stores, has a solution been
> > developed that solves both of these?
> >
> >
> > [1]
> >
> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
> > backend.html
>
> The above list of four backend secret stores, each with serious drawbacks
> is the reason why Barbican has not been widely adopted. Other projects are
> reluctant to depend on Barbican because it’s not present in most clouds.
> Magnum, for example believed that using Barbican for 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ian Cordasco
-Original Message-
From: Dave McCowan (dmccowan) 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 16, 2017 at 13:03:41
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
projects trying to avoid Barbican, still?
> Yep. Barbican supports four backend secret stores. [1]
>
> The first (Simple Crypto) is easy to deploy, but not extraordinarily
> secure, since the secrets are encrypted using a static key defined in the
> barbican.conf file.
>
> The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
> a hardware base to encrypt and/or store the secrets.
> The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
> encrypt and store the secrets.
>
> We do not currently have a secret store that is both highly secure and
> easy to deploy/manage.
>
> We, the Barbican community, are very open to any ideas, blueprints, or
> patches on how to achieve this.
> In any of the homegrown per-project secret stores, has a solution been
> developed that solves both of these?
>
>
> [1]
> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
> backend.html

So there seems to be a consensus that Vault is a good, easy, and secure
solution to deploy. Can Barbican use that as a backend secret store?

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Adrian Otto

> On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan)  
> wrote:
> 
> On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:
> 
>> -Original Message-
>> From: Rob C 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: January 16, 2017 at 10:33:20
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>> projects trying to avoid Barbican, still?
>> 
>>> Thanks for raising this on the mailing list Ian, I too share some of
>>> your consternation regarding this issue.
>>> 
>>> I think the main point has already been hit on, developers don't want to
>>> require that Barbican be deployed in order for their service to be
>>> used.
>>> 
>>> The resulting spread of badly audited secret management code is pretty
>>> ugly and makes certifying OpenStack for some types of operation very
>>> difficult, simply listing where key management "happens" and what
>>> protocols are in use quickly becomes a non-trivial operation with some
>>> teams using hard coded values while others use configurable algorithms
>>> and no connection between any of them.
>>> 
>>> In some ways I think that the castellan project was supposed to help
>>> address the issue. The castellan documentation[1] is a little sparse but
>>> my understanding is that it exists as an abstraction layer for
>>> key management, such that a service can just be set to use castellan,
>>> which in turn can be told to use either a local key-manager, provided by
>>> the project or Barbican when it is available.
>>> 
>>> Perhaps a misstep previously was that Barbican made no efforts to
>>> really provide a robust non-HSM mode of operation. An obvious contrast
>>> here is with Hashicorp Vault[2] which has garnered significant market
>>> share in key management because its software-only* mode of operation is
>>> well documented, robust and cryptographically sound. I think that the
>>> lack of a sane non-HSM mode has resulted in developers trying to create
>>> their own and contributed to the situation.

Bingo!

>>> I'd be interested to know if development teams would be less concerned
>>> about requiring Barbican deployments, if it had a robust non-HSM
>>> (i.e software only) mode of operation. Lowering the cost of deployment
>>> for organisations that want sensible key management without the expense
>>> of deploying multi-site HSMs.
>>> 
>>> * Vault supports HSM deployments also
>>> 
>>> [1] http://docs.openstack.org/developer/castellan/
>>> [2] https://www.vaultproject.io/
>> 
>> The last I checked, Rob, they also support DogTag IPA which is purely
>> a Software based HSM. Hopefully the Barbican team can confirm this.
>> --
>> Ian Cordasco
> 
> Yep.  Barbican supports four backend secret stores. [1]
> 
> The first (Simple Crypto) is easy to deploy, but not extraordinarily
> secure, since the secrets are encrypted using a static key defined in the
> barbican.conf file.
> 
> The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
> a hardware base to encrypt and/or store the secrets.
> The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
> encrypt and store the secrets.
> 
> We do not currently have a secret store that is both highly secure and
> easy to deploy/manage.
> 
> We, the Barbican community, are very open to any ideas, blueprints, or
> patches on how to achieve this.
> In any of the homegrown per-project secret stores, has a solution been
> developed that solves both of these?
> 
> 
> [1] 
> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
> backend.html

The above list of four backend secret stores, each with serious drawbacks is 
the reason why Barbican has not been widely adopted. Other projects are 
reluctant to depend on Barbican because it’s not present in most clouds. 
Magnum, for example, believed that using Barbican for certificate storage was 
the correct design, and we implemented our solution such that it required 
Barbican. We quickly discovered that it was hurting Magnum’s adoption by 
multiple cloud operators that were reluctant to add the Barbican service in 
order to add Magnum. So, we built internal certificate storage to decouple 
Magnum from Barbican. It’s even less secure than using Barbican with Simple 
Crypto, but it solved our adoption problem. Furthermore, that’s how most clouds 
are using Magnum, because they still don’t run Barbican.

Bottom line: As long as cloud operators have any reluctance to adopt Barbican, 
other community projects will avoid depending on it, even when it’s the right 
technical solution.

Regards,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-16 Thread Jay S. Bryant



On 01/16/2017 12:19 PM, Jonathan Bryce wrote:

On Jan 16, 2017, at 11:58 AM, Jay S. Bryant  
wrote:

On 01/13/2017 10:29 PM, Mike Perez wrote:

The way validation works is completely up to the project team. In my research
as shown in the Summit etherpad [5] there's a clear trend in projects doing
continuous integration for validation. If we wanted to we could also have the
marketplace give the current CI results, which was also requested in the
feedback from driver maintainers.

Having the CI results reported would be an interesting experiment. I wonder if 
having the results even more publicly reported would result in more stable 
CIs.  It is a double-edged sword, however. Given the instability of many CIs, it 
could make OpenStack look bad to customers who don't understand what they are 
looking at.  Just my thoughts on that idea.

That’s very useful feedback. Having that kind of background upfront is really 
helpful. As we make updates on the display side, we can take into account if 
certain attributes are potentially unreliable or at a higher risk of showing 
instability and have the interface better support that without it looking like 
everything is failing and a river of red X’s. Are there other things that might 
be similar?

Jonathan


Jonathan,

Glad to be of assistance.

I think reporting some percentage of success might be the most accurate 
way to report the CI results.  Not necessarily flagging it as good or bad, 
but leaving it for the consumers to see and compare.  Combine that 
with Anita's idea of showing when the CI last successfully reported and I think 
it could give a good barometer for consumers. Our systems all have their 
rough times, so we need to avoid a 'snapshot in time' view and provide 
more of an 'activity over time' view.  Third-party CI is a good barometer 
of community activity and attention, but not always 100% accurate.
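
To illustrate the kind of roll-up that implies, a tiny sketch (the record
format and field names here are made up; nothing is wired to driverlog or
Gerrit, it just shows the 'activity over time' idea):

    # Hypothetical sketch: `results` is assumed to be a list of
    # (timestamp, passed) pairs for one driver's CI reports.
    import datetime

    def summarize_ci(results, window_days=30):
        now = datetime.datetime.utcnow()
        window = [(ts, ok) for ts, ok in results
                  if (now - ts).days <= window_days]
        passes = [ts for ts, ok in results if ok]
        return {
            # percentage of success over the recent window
            'pass_rate': (100.0 * sum(1 for _, ok in window if ok) /
                          len(window)) if window else None,
            # how stale is the most recent green result?
            'days_since_last_pass': ((now - max(passes)).days
                                     if passes else None),
            # how long has this CI been reporting at all?
            'reporting_since': (min(ts for ts, _ in results)
                                if results else None),
        }

Something that simple, rendered per driver, already reads as 'activity
over time' rather than a snapshot.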


Obviously there will need to be some information included with the 
results explaining what they are and helping guide interpretations.


Jay



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser] [heat-translator] [heat] [tacker] [opnfv] heat-translator and tosca-parser 0.7.0

2017-01-16 Thread Sahdev P Zala
Hello Everyone, 

On behalf of the Heat Translator and TOSCA Parser team, I am pleased to 
announce the 0.7.0 PyPI release of heat-translator and tosca-parser which 
can be downloaded from https://pypi.python.org/pypi/heat-translator and 
https://pypi.python.org/pypi/tosca-parser respectively. 

This release includes the following enhancements: 
heat-translator:
  - new APIs to produce and access multiple translated templates
  - translation support for Heat SoftwareDeploymentGroup resource
  - new test templates
  - bug fixes
  - doc updates
tosca-parser:
  - support for parsing TOSCA qualified names
  - TOSCA substitution_mappings parsing and validation
  - support for get_operation_output function 
  - support for custom interfaces
  - enhanced template validation 
  - bug fixes
  - doc updates

Thanks!! 

Regards, 
Sahdev Zala


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-ansible-deployment] Periodic job in infra to test upgrades?

2017-01-16 Thread Sean M. Collins
Jesse Pretorius wrote:
> Hi Sean,
> 
> Great to see you taking the initiative on this.
> 
> I think the starting point we’d have to work from with the way the builds are 
> executed now would be to have the upgrade job execute in a periodic pipeline 
> that has a longer timeout. While it would be ideal to do on-commit tests it’s 
> just untenable right now as it would severely slow down the workflow.

OK - I pushed the following patch to project-config to reflect this
change

https://review.openstack.org/419517

It depends on https://review.openstack.org/418521 - I will work on
cleaning it up and fixing the errors (it might just need a recheck?)
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress][Fuel] Fuel plugin for installing Congress

2017-01-16 Thread Serg Melikyan
I'd like to introduce you to the Fuel plugin for installing and configuring
Congress with Fuel [0].

This plugin was developed by the Fuel@Opnfv [1] community in order to be
included in the next release of Fuel@Opnfv, Danube. We believe
that this plugin might be helpful not only for us but also for the general
OpenStack community, so we decided to continue development of the plugin
in the OpenStack community.

Please join us in the development of the Congress plugin, your
feedback is greatly appreciated.

P.S. Right now the core team includes Fedor Zhadaev, the original developer
of the plugin, and a couple of developers from Fuel@Opnfv, including me.
We considered adding congress-core to the list but decided to first see
the amount of interest and feedback from the Congress team.

References:
[0] http://git.openstack.org/cgit/openstack/fuel-plugin-congress/
[1] https://wiki.opnfv.org/display/Fuel/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Marking Tintri driver as unsupported

2017-01-16 Thread Sean McGinnis
On Mon, Jan 16, 2017 at 08:52:34AM +0100, Silvan Kaiser wrote:
> Apoorva, Sean,
> after some time I managed to bring up the Quobyte CI last Friday, which
> tested fine [1,2,3] for a short time and then ran into the same issues
> with the manage_snapshot related tempest tests Apoorva describes
> (starting chronologically at [4]).
> From here I see two steps:
> a) look into the reason for the issue with the manage_snapshot tests
> b) a short note on how to proceed for marking drivers with reinstated CIs

Hey Silvan,

As mentioned, if the CI requirements can still be met this cycle, we can
just do a revert for the patch that set the unsupported flag. That
should be a quick and easy one to get through once we see the CI is
back and stable.

Thanks!
Sean

> back as supported is much appreciated
> Best
> Silvan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-01-16 Thread Loo, Ruby
Hi,

We are jubilant to present this week's priorities and subteam report for 
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and 
formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. nova code for portgroups and attach/detach: 
https://review.openstack.org/#/c/364413/ and 
https://review.openstack.org/#/c/388756/
2. client patch for soft power/reboot: https://review.openstack.org/#/c/357627/ 
to unblock the nova patch: https://review.openstack.org/#/c/407977/
3. ironicclient queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient
4. ironic-inspector-client queue: 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironic-inspector-client
5. Continue reviewing driver composition things: 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1524745
6. Rolling upgrades work: 
https://review.openstack.org/#/q/topic:bug/1526283+status:open
7. boot from volume: next up: 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691


Bugs (dtantsur)
===
- Stats (diff between 09 Jan 2017 and 16 Jan 2017)
- Ironic: 230 bugs (+7) + 238 wishlist items. 19 new (+5), 191 in progress 
(-1), 0 critical, 29 high (+1) and 30 incomplete (+4)
- Inspector: 12 bugs + 23 wishlist items. 0 new, 14 in progress (+2), 0 
critical, 2 high and 5 incomplete
- Nova bugs with Ironic tag: 10. 0 new, 0 critical, 0 high

Portgroups support (sambetts, vdrok)

* trello: https://trello.com/c/KvVjeK5j/29-portgroups-support
- status as of most recent weekly meeting:
- just one patch left on the ironic side: 
https://review.openstack.org/#/q/topic:bug/1618754
- once that lands, then nova patch - https://review.openstack.org/388756
- note that the nova patch cannot land until after the attach/detach 
API nova-patch lands (https://review.openstack.org/364413)

Interface attach/detach API (sambetts)
==
* trello: https://trello.com/c/nryU4w58/39-interface-attach-detach-api
- status as of most recent weekly meeting:
- Spec merged and Nova BP approved
- ironic patches merged, ironicclient released
- Nova patch needs reviews - https://review.openstack.org/364413

CI refactoring (dtantsur, lucasagomes)
==
* trello: https://trello.com/c/c96zb3dm/32-ci-refactoring
- status as of most recent weekly meeting:
- Two more patches to go to add support for deploying UEFI images with 
Ironic in devstack: 1) https://review.openstack.org/#/c/414604/ (DevStack) 2) 
https://review.openstack.org/#/c/374988/

Rolling upgrades and grenade-partial (rloo, jlvillal)
=
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- patches need reviews: https://review.openstack.org/#/q/topic:bug/1526283
- Testing work:
- Tempest "smoke" is now working for multi-tenant / multi-node
- Patch up to enable tempest "smoke" for the multi-node job
- https://review.openstack.org/417959
- Next step Grenade
- Work is ongoing for enabling Grenade with multi-tenant: 
https://review.openstack.org/389268

Generic boot-from-volume (TheJulia)
===
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- API side changes for volume connector information have a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- This change has been rebased on top of the iPXE template update 
revision to support cinder/iscsi booting.
- Boot from volume/storage cinder interface is up for review.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
- Original volume connection information client patches were rebased this 
past week
- They need OSC support added into the revisions.
- These changes are extremely unlikely to make the client freeze for this 
cycle, which means we should target landing them at the beginning of Pike
- 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231

Driver composition (dtantsur)
=
* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- next patch makes conductor actually load defined hardware types: 
https://review.openstack.org/#/c/412631/
- small inspector-related clean up: https://review.openstack.org/416232
 

Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-16 Thread Anita Kuno

On 2017-01-16 01:19 PM, Jonathan Bryce wrote:

On Jan 16, 2017, at 11:58 AM, Jay S. Bryant  
wrote:

On 01/13/2017 10:29 PM, Mike Perez wrote:

The way validation works is completely up to the project team. In my research
as shown in the Summit etherpad [5] there's a clear trend in projects doing
continuous integration for validation. If we wanted to we could also have the
marketplace give the current CI results, which was also requested in the
feedback from driver maintainers.

Having the CI results reported would be an interesting experiment. I wonder if 
having the results even more publicly reported would result in more stable 
CIs.  It is a double-edged sword, however. Given the instability of many CIs, it 
could make OpenStack look bad to customers who don't understand what they are 
looking at.  Just my thoughts on that idea.

That’s very useful feedback. Having that kind of background upfront is really 
helpful. As we make updates on the display side, we can take into account if 
certain attributes are potentially unreliable or at a higher risk of showing 
instability and have the interface better support that without it looking like 
everything is failing and a river of red X’s.


You could show the timestamp of the last passing test, rather than a bare 
pass or fail, as well as how long the driver has been tested. If a driver 
has been tested for 2 years or longer and has gone a week since the last 
passing test, chances are the team is working on a bug, either in the 
driver code or in the CI system (this can be explained on the page in a 
legend of some sort). This gives the reader more context with which to 
evaluate comparable drivers: the elapsed time since the last 
successful completion of their CI as well as how long their CI has been 
active.


This might be a more useful and consumable approach for an audience 
that might have little understanding of continuous integration, its 
meaning, and its artifacts.


Thanks,
Anita.


  Are there other things that might be similar?

Jonathan






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Dave McCowan (dmccowan)


On 1/16/17, 11:52 AM, "Ian Cordasco"  wrote:

>-Original Message-
>From: Rob C 
>Reply: OpenStack Development Mailing List (not for usage questions)
>
>Date: January 16, 2017 at 10:33:20
>To: OpenStack Development Mailing List (not for usage questions)
>
>Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>projects trying to avoid Barbican, still?
>
>> Thanks for raising this on the mailing list Ian, I too share some of
>> your consternation regarding this issue.
>>
>> I think the main point has already been hit on, developers don't want to
>> require that Barbican be deployed in order for their service to be
>> used.
>>
>> The resulting spread of badly audited secret management code is pretty
>> ugly and makes certifying OpenStack for some types of operation very
>> difficult, simply listing where key management "happens" and what
>> protocols are in use quickly becomes a non-trivial operation with some
>> teams using hard coded values while others use configurable algorithms
>> and no connection between any of them.
>>
>> In some ways I think that the castellan project was supposed to help
>> address the issue. The castellan documentation[1] is a little sparse but
>> my understanding is that it exists as an abstraction layer for
>> key management, such that a service can just be set to use castellan,
>> which in turn can be told to use either a local key-manager, provided by
>> the project or Barbican when it is available.
>>
>> Perhaps a misstep previously was that Barbican made no efforts to
>> really provide a robust non-HSM mode of operation. An obvious contrast
>> here is with Hashicorp Vault[2] which has garnered significant market
>> share in key management because its software-only* mode of operation is
>> well documented, robust and cryptographically sound. I think that the
>> lack of a sane non-HSM mode has resulted in developers trying to create
>> their own and contributed to the situation.
>>
>> I'd be interested to know if development teams would be less concerned
>> about requiring Barbican deployments, if it had a robust non-HSM
>> (i.e software only) mode of operation. Lowering the cost of deployment
>> for organisations that want sensible key management without the expense
>> of deploying multi-site HSMs.
>>
>> * Vault supports HSM deployments also
>>
>> [1] http://docs.openstack.org/developer/castellan/
>> [2] https://www.vaultproject.io/
>
>The last I checked, Rob, they also support DogTag IPA which is purely
>a Software based HSM. Hopefully the Barbican team can confirm this.
>--
>Ian Cordasco

Yep.  Barbican supports four backend secret stores. [1]

The first (Simple Crypto) is easy to deploy, but not extraordinarily
secure, since the secrets are encrypted using a static key defined in the
barbican.conf file.

The second and third (PKCS#11 and KMIP) are secure, but require an HSM as
a hardware base to encrypt and/or store the secrets.
The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
encrypt and store the secrets.

We do not currently have a secret store that is both highly secure and
easy to deploy/manage.
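
For anyone who has not looked at what "a static key defined in the
barbican.conf file" means in practice, the Simple Crypto backend boils down
to roughly the following (sketched from memory of the install guide, so the
exact option names may differ between releases):

    # barbican.conf (Simple Crypto backend), approximate option names
    [secretstore]
    enabled_secretstore_plugins = store_crypto

    [crypto]
    enabled_crypto_plugins = simple_crypto

    [simple_crypto_plugin]
    # a single base64-encoded 32-byte key; the security of every stored
    # secret reduces to protecting this one configuration value
    kek = '<base64-encoded 32-byte key>'

That is what makes it easy to deploy and, at the same time, hard to call
highly secure.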

We, the Barbican community, are very open to any ideas, blueprints, or
patches on how to achieve this.
In any of the homegrown per-project secret stores, has a solution been
developed that solves both of these?


[1] 
http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-
backend.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ian Cordasco
-Original Message-
From: Chris Friesen 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 16, 2017 at 11:26:41
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
projects trying to avoid Barbican, still?

> On 01/16/2017 10:31 AM, Rob C wrote:
>
> > I think the main point has already been hit on, developers don't want to
> > require that Barbican be deployed in order for their service to be
> > used.
>
> I think that this is a perfectly reasonable stance for developers to take. As
> long as Barbican is an optional component, then making your service depend on 
> it
> has a good chance of limiting your potential install base.
>
> Given that, it seems like the ideal model from a security perspective would be
> to use Barbican if it's available at runtime, otherwise use something 
> else...but
> that has development and maintenance costs.

More seriously, it potentially requires developers who aren't familiar with
securely storing that kind of data to re-implement exactly what Barbican
has already done.

Being realistic, and not to discount anyone's willingness to try, but
I think the largest group of people qualified to build, review, and
maintain that kind of software would be the Barbican team.

I guess the question then becomes: how many operators would be willing
to deploy Barbican versus having to update each service as
vulnerabilities are found, disclosed, and fixed in their clouds? If
Barbican is as difficult to deploy as Rob is suggesting (that even
DogTag is difficult to deploy), maybe developers should be focusing on
fixing that instead of haphazardly reimplementing Barbican?

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Rob C
>
>
> The last I checked, Rob, they also support DogTag IPA which is purely
> a Software based HSM. Hopefully the Barbican team can confirm this.
> --
> Ian Cordasco
>

Yup, that's my understanding too. However, that requires Barbican _and_
Dogtag, an even bigger overhead, especially as, at least historically,
Dogtag has been difficult to maintain. If you have a Dogtag deployment
already, there's a great synergy there. If you don't, then it introduces a
lot of overhead.

I'm interested to know if an out-of-the-box, stand-alone, software-only
version of Barbican would be any more appealing.

Cheers
-Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-16 Thread Jonathan Bryce

> On Jan 16, 2017, at 11:58 AM, Jay S. Bryant  
> wrote:
> 
> On 01/13/2017 10:29 PM, Mike Perez wrote:
>> The way validation works is completely up to the project team. In my research
>> as shown in the Summit etherpad [5] there's a clear trend in projects doing
>> continuous integration for validation. If we wanted to we could also have the
>> marketplace give the current CI results, which was also requested in the
>> feedback from driver maintainers.
> Having the CI results reported would be an interesting experiment. I wonder 
> if having the results even more publicly reported would result in more stable 
> CIs.  It is a double-edged sword, however. Given the instability of many CIs, 
> it could make OpenStack look bad to customers who don't understand what they 
> are looking at.  Just my thoughts on that idea.

That’s very useful feedback. Having that kind of background upfront is really 
helpful. As we make updates on the display side, we can take into account if 
certain attributes are potentially unreliable or at a higher risk of showing 
instability and have the interface better support that without it looking like 
everything is failing and a river of red X’s. Are there other things that might 
be similar?

Jonathan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][tempest][api] community images, tempest tests, and API stability

2017-01-16 Thread Ken'ichi Ohmichi
2017-01-13 9:25 GMT-08:00 Ian Cordasco :
> -Original Message-
> From: Ian Cordasco 
> Reply: Ian Cordasco 
> Date: January 13, 2017 at 08:12:12
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [glance][tempest][api] community images,
> tempest tests, and API stability
>
>> And besides "No one uses Glance" [ref: 
>> http://lists.openstack.org/pipermail/openstack-dev/2013-February/005758.html]
>
> I was being a bit glib when I wrote this last sentence this morning,
> but in commenting on the Gerrit patch to skip the test in question, I
> realized this is actually far more valid than I realized.
>
> Let's look at the state of Glance v2 and be brutally honest:
>
> Interoperability
>
> Glance v2 is currently incapable of being truly interoperable between
> existing publicly accessible clouds. There are two ways to currently
> upload images to Glance. Work is being done to add a third way that
> suits the needs of all cloud providers. This introduces further
> interoperability incompatibility (say *that* three times fast ;)) and
> honestly, I don't see it being a problem for the next reason.
>
> Further, the tasks API presents a huge number of interoperability
> problems. We've limited that to users with the admin role, but if you
> have an admin on two clouds operated by different people, there is a
> good likelihood the tasks will not be the same.
>
>
> v2 deployment and usage
>
> The best anyone working on Glance can determine, v2 is rarely deployed
> for users and if it is, it isn't chosen. v2 was written to specifically
> excise some problematic "features" that some users still rely on. A
> such, you see conversations even between Glance and *other services*
> about how to migrate to v2. Nova only recently made the migration. Heat
> still has yet to do so and I think has only just relented in their
> desire to avoid it.

Hmm, the DefCore list contains Glance v2 tests for interoperability,
like https://github.com/openstack/defcore/blob/master/2016.08.json#L1366
(we can see the Tempest tests of the Glance v2 API by searching for
"tempest.api.image.v2"). I guess many deployments provide the v2 API today.

> Security Concerns
>
> There are some serious security issues that will be fixed by this
> change. If we were to add the backwards compatibility shim that the QA
> team has suggested repeatedly that we add, we'd be keeping those security
> issues.

Security issues/problems should be solved as the highest priority.
Progress on that should be easier to track if there is a CVE for it.

> In short, I feel like the constant refrain from the QA team has been two fold:
>
> - "This will cause interoperability problems"
> - "This is backwards incompatible"
>
> The Glance team has come to terms with this over the course of several
> cycles. I don't think anyone is thrilled about the prospect of
> potentially breaking some users' workflows. If we had been that
> enthusiastic about it, then we simply would have acted on this when it
> was first proposed. It would have completely gone unnoticed except by
> some users. There's no acceptable way (without sacrificing security -
> which let's be clear, is entirely unacceptable) that we can maintain a
> backwards compatibility shim and Glance v2 already has loads of
> interoperability problems. We're working on fixing them, but we're
> also working on fixing the user experience, which is a big part of
> this patch.

I think the Glance team has spent time considering the move in this
direction, and I believe the team will take responsibility if it faces
issues along the way.
So I am also going to go this way.

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-16 Thread Jay S. Bryant


On 01/13/2017 10:29 PM, Mike Perez wrote:

Hello all,

In the spirit of recent Technical Committee discussions I would like to bring
focus on how we're doing vendor driver discoverability. Today we're doing this
with the OpenStack Foundation marketplace [1] which is powered by the driverlog
project. In a nutshell, it is a big JSON file [2] that has information on which
vendor solutions are supported by which projects in which releases. This
information is then parsed to generate the marketplace so that users can
discover them. As discussed in previous TC meetings [3] we need to recognize
vendors that are trying to make great products work in OpenStack so that they
can be successful, which allows our community to be successful and healthy.

In the feedback I have received from various people in the community, some
didn’t know how it worked, and were unhappy that the projects themselves
weren’t owning this. I totally agree that project teams should own this and
should be encouraged to be involved in the reviews. Today that’s not happening.
I’d like to propose we come up with a way for the marketplace to be more
community-driven by the projects that are validating these solutions.

At the Barcelona Summit [4] we discussed ways to improve driverlog. Projects
like Nova have a support matrix of hypervisors in their in-tree documentation.
Various members of the Cinder project also expressed interest in using this
solution. It was suggested in the session that the marketplace should just link
to the projects' appropriate documentation. The problem with this solution is
the information is not presented in a consistent way across projects, as
driverlog does it today. We could accomplish this instead by using a parsable
format that is stored in each appropriate project's git repository. I'm
thinking of pretty much how driverlog works today, but broken up into
individual projects.
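
As a straw man of what such an in-tree file could look like (every key name
below is made up for illustration; this is not the existing driverlog schema
and not a proposal for exact field names):

    # hypothetical etc/drivers.yaml kept in the project's own repository
    drivers:
      - name: Acme FooStore iSCSI driver      # made-up vendor/driver
        maintainers:
          - irc: acme-ci-admin                # made-up contact
        ci-account: acme-foostore-ci          # made-up CI account
        releases-supported:
          - newton
          - ocata
        validation:
          method: third-party-ci
          results: http://ci.example.org/acme-foostore/  # placeholder URL

The marketplace tooling could then aggregate these per-project files much
the same way it parses the single driverlog JSON file today.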

The marketplace can parse this information and present it in one place
consistently. Projects may also continue to parse this information in their own
documentation, and we can even write a common tool to do this. The way a vendor
is listed here is based on being validated by the project team itself. Keeping
things in the marketplace would also address the suggestions that came out of
the recent feedback we received from various driver maintainers [4].

The way validation works is completely up to the project team. In my research
as shown in the Summit etherpad [5] there's a clear trend in projects doing
continuous integration for validation. If we wanted to we could also have the
marketplace give the current CI results, which was also requested in the
feedback from driver maintainers.
Having the CI results reported would be an interesting experiment. I 
wonder if having the results even more publicly reported would result in 
more stable CIs.  It is a double-edged sword, however. Given the 
instability of many CIs, it could make OpenStack look bad to customers 
who don't understand what they are looking at.  Just my thoughts on that 
idea.


I would like to volunteer to create the initial files for each project with
what the marketplace says today.

[1] - https://www.openstack.org/marketplace/drivers/
[2] - 
http://git.openstack.org/cgit/openstack/driverlog/tree/etc/default_data.json
[3] - 
http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-01-10-20.01.log.html#l-106
[4] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109855.html
[5] - https://etherpad.openstack.org/p/driverlog-validation




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pike PTG facilities and remote participation

2017-01-16 Thread Thierry Carrez
Brian Rosmaita wrote:
> I have a quick facilities question about the PTG.  I know of at least
> one developer who can't attend physically but will be willing to join
> via some type of videoconferencing software (vidyo, or blue jeans, or
> google hangout).  Do you think it will be possible?  The wifi has gotten
> better with each summit (I remember sitting right next to one of the wifi
> endpoints in Portland and still kept getting dropped; it hasn't been
> such a problem at more recent summits), but we're going to be at a much
> smaller facility for the PTG.
> 
> Anyway, my question is whether it makes sense to plan a few sessions
> over video (this dev would probably lead one design session and I'd need
> his hearty participation in another -- it's not just that I'd like to
> make it possible for someone to follow along, I need the bandwidth to be
> sufficient for a really good connection).  So basically, my question is
> whether it makes sense to *plan* for remote collaboration, or whether
> the remote connection should be looked at as something that might be
> nice if it happens, but shouldn't really be planned for.

I'll let Erin answer for the events team with extra information on
networking reliability. I think it should be possible to patch someone
into critical discussions, especially if the relay is handled by someone
attending the meeting. The fact that the schedule is pretty fluid should
also facilitate that (no 40-min timebox for a given discussion). It
becomes more difficult once you start having several people to patch in :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][all][ptl][stable] the process for creating stable/ocata branches

2017-01-16 Thread Doug Hellmann
As mentioned previously [1], it is now possible for teams to set
up stable branches when they are ready. We will be taking advantage
of this new capability as we approach the end of the Ocata cycle
and start preparing final releases.

The release team will not be automatically setting up branches this
cycle, and will rely on the release liaisons to let us know when
teams are ready.

The PTL or release liaison for a project may request a new branch
by submitting a patch to the openstack/releases repository specifying
the tagged version to be used as the base of the branch. We always
create stable branches from tagged versions, and the release repo
validation job will enforce this when the branch request is submitted.

Here are some guidelines for when projects should branch:

* Projects using the cycle-with-milestone release model should
  include the request for their stable branch along with the RC1
  tag request (target week is R-3 week, so use Feb 2 as the deadline)

* Library projects should be branched with, or shortly after, their
  final release this week (use Jan 19 as the deadline)

* I will branch the requirements repository shortly after all of
  the cycle-with-milestone projects have branched. After the
  requirements repository is branched and the master requirements
  list is opened, projects that have not branched will be tested
  with Pike requirements as the requirements master branch advances
  and stable/ocata stays stable. Waiting too long to create the
  stable/ocata branch may result in broken CI jobs in either
  stable/ocata or master. Don't delay any further than necessary.

* Projects using the cycle-trailing release model should branch by
  R-0 (23 Feb). The remaining two weeks before the trailing deadline
  should be used for last-minute fixes, which will need to be
  backported into the branch to create the final release.

* Other projects, including cycle-with-intermediary and independent
  projects that create branches, should request their stable branch
  when they are ready to declare a final version and start working
  on Pike-related changes. This must be completed before the final
  release week, use 16 Feb as the deadline.

The branch request information should be added to the same file
you're using to tag Ocata releases. Add a new section to the file,
above the releases section, with the key "branches". For the stable
branch, add a list item mapping the name to "stable/ocata" and the
location to a tagged version.

For example:

branches:
  - name: stable/ocata
    location: x.y.z.0rc1

See the README.rst file in openstack/releases for more details about
how to format branch specifications, and look at the deliverable
files for newton for examples (we have imported the definitions of
the stable/newton branches for projects).

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/108923.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Chris Friesen

On 01/16/2017 10:31 AM, Rob C wrote:


I think the main point has already been hit on, developers don't want to
require that Barbican be deployed in order for their service to be
used.


I think that this is a perfectly reasonable stance for developers to take.  As 
long as Barbican is an optional component, then making your service depend on it 
has a good chance of limiting your potential install base.


Given that, it seems like the ideal model from a security perspective would be 
to use Barbican if it's available at runtime, otherwise use something else...but 
that has development and maintenance costs.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ian Cordasco
-Original Message-
From: Rob C 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 16, 2017 at 10:33:20
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
projects trying to avoid Barbican, still?

> Thanks for raising this on the mailing list Ian, I too share some of
> your consternation regarding this issue.
>
> I think the main point has already been hit on, developers don't want to
> require that Barbican be deployed in order for their service to be
> used.
>
> The resulting spread of badly audited secret management code is pretty
> ugly and makes certifying OpenStack for some types of operation very
> difficult, simply listing where key management "happens" and what
> protocols are in use quickly becomes a non-trivial operation with some
> teams using hard coded values while others use configurable algorithms
> and no connection between any of them.
>
> In some ways I think that the castellan project was supposed to help
> address the issue. The castellan documentation[1] is a little sparse but
> my understanding is that it exists as an abstraction layer for
> key management, such that a service can just be set to use castellan,
> which in turn can be told to use either a local key-manager, provided by
> the project or Barbican when it is available.
>
> Perhaps a misstep previously was that Barbican made no efforts to
> really provide a robust non-HSM mode of operation. An obvious contrast
> here is with Hashicorp Vault[2] which has garnered significant market
> share in key management because its software-only* mode of operation is
> well documented, robust and cryptographically sound. I think that the
> lack of a sane non-HSM mode has resulted in developers trying to create
> their own and contributed to the situation.
>
> I'd be interested to know if development teams would be less concerned
> about requiring Barbican deployments, if it had a robust non-HSM
> (i.e software only) mode of operation. Lowering the cost of deployment
> for organisations that want sensible key management without the expense
> of deploying multi-site HSMs.
>
> * Vault supports HSM deployments also
>
> [1] http://docs.openstack.org/developer/castellan/
> [2] https://www.vaultproject.io/

The last I checked, Rob, they also support DogTag IPA which is purely
a Software based HSM. Hopefully the Barbican team can confirm this.
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Dave McCowan (dmccowan)
Hi Ian--
   Thanks for the reminder.  As PTL, I know I have some action items to
update our project navigator status.
   Speaking on behalf of the Barbican community, I can say that we do
follow the rules of stable branches and deprecation.  I'll submit a patch
now to state this assertion.
   I also believe that we currently have the appropriate variety of
distributions available.  Our installation guide gives instructions on how
to install from each of these.  I don't know how to apply for this "star"
in project navigator.
   We have taken steps to qualify for vulnerability management, most
notably we completed a threat modeling exercise with the security project
team.  I'll reach out to that team to find out what remaining steps are
necessary to be tagged as vulnerability managed.
--Dave

On 1/16/17, 8:55 AM, "Ian Cordasco"  wrote:

>Hi barbicaneers (I don't actually know what y'all call yourselves :)),
>
>Related to the other thread I just started, I was looking at the
>project navigator [1] for Barbican and found some things that look
>wrong (to an outsider) and was hoping could be cleared up.
>
>First, "Is this project maintained following the common Stable branch
>policy?" appears to be "Yes" now. I notice you have stable branches
>that actually look stable. Are y'all working with the stable
>maintenance team on them?
>
>Second, "Does this project follows standard deprecation?" I'm not
>(yet) a user of Barbican, but are you still not following the standard
>deprecation policy?
>
>Third, "Existence and quality of packages for this project in popular
>distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
>OpenSUSE [5] all have packages (including in stable versions). I can't
>speak to the quality of the packages, but knowing the hard work most
>of our downstream redistributors put into those packages, I'm certain
>they're good quality. This should *definitely* be updated, in my
>opinion.
>
>Finally, "Are vulnerability issues managed by the OpenStack security
>team?". I know that the OpenStack Security Project worked with the
>Barbican team to come up with a vulnerability analysis a few midcycles
>ago. Is that roughly where you all stopped? Is there a reason you
>haven't attempted to work with the VMT on security issues?
>
>Hopefully my agenda is obvious - I'd like to see fewer projects
>attempting to implement their own secret storage and instead use
>Barbican. Keeping the navigator up-to-date seems (to me) to be a good
>way to improve Barbican's image. I would be happy to work with you all
>(with what little time I have) to update the navigator to better
>reflect Barbican's reality.
>
>[1]: 
>https://www.openstack.org/software/releases/newton/components/barbican
>[2]: https://apps.fedoraproject.org/packages/s/barbican
>[3]: 
>https://packages.debian.org/search?keywords=barbican=all=al
>l=all
>[4]: 
>http://packages.ubuntu.com/search?keywords=barbican=names=a
>ll=all
>[5]: 
>https://software.opensuse.org/search?utf8=✓=barbican_devel=false;
>search_unsupported=false=openSUSE:Leap:42.2
>
>Cheers,
>--
>Ian Cordasco
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Rob C
Thanks for raising this on the mailing list Ian, I too share some of
your consternation regarding this issue.

I think the main point has already been hit on, developers don't want to
require that Barbican be deployed in order for their service to be
used.

The resulting spread of badly audited secret management code is pretty
ugly and makes certifying OpenStack for some types of operation very
difficult, simply listing where key management "happens" and what
protocols are in use quickly becomes a non-trivial operation with some
teams using hard coded values while others use configurable algorithms
and no connection between any of them.

In some ways I think that the castellan project was supposed to help
address the issue. The castellan documentation[1] is a little sparse but
my understanding is that it exists as an abstraction layer for
key management, such that a service can just be set to use castellan,
which in turn can be told to use either a local key-manager, provided by
the project or Barbican when it is available.
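
That "told to use either" part is literally just configuration in the
consuming service, roughly like the following (option name and class path
as I remember them from the Castellan docs, so verify against the release
you deploy):

    [key_manager]
    # point Castellan at Barbican...
    api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
    # ...or swap in a project-provided / software-only implementation here
    # without touching the service's code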

Perhaps a misstep previously was that Barbican made no efforts to
really provide a robust non-HSM mode of operation. An obvious contrast
here is with Hashicorp Vault[2] which has garnered significant market
share in key management because its software-only* mode of operation is
well documented, robust and cryptographically sound. I think that the
lack of a sane non-HSM mode has resulted in developers trying to create
their own and contributed to the situation.

I'd be interested to know if development teams would be less concerned
about requiring Barbican deployments, if it had a robust non-HSM
(i.e software only) mode of operation. Lowering the cost of deployment
for organisations that want sensible key management without the expense
of deploying multi-site HSMs.

* Vault supports HSM deployments also

[1] http://docs.openstack.org/developer/castellan/
[2] https://www.vaultproject.io/

On Mon, Jan 16, 2017 at 4:14 PM, Ian Cordasco 
wrote:

> -Original Message-
> From: Hayes, Graham 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 16, 2017 at 09:26:00
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
> projects trying to avoid Barbican, still?
>
> > On 16/01/2017 13:38, Ian Cordasco wrote:
> > > Is the problem perhaps that no one is aware of other projects using
> > > Barbican? Is the status on the project navigator alarming (it looks
> > > like some of this information is potentially out of date)? Has
> > > Barbican been deemed too hard to deploy?
> >
> > I know that historically it was considered hard to do a HA deploy of
> > Barbican. When we initially evaluated DNSSEC in Designate (many years
> > ago now) it was one of the sticking points.
> >
> > This may have (and most likely has) changed, but we seem to have long
> > memories.
>
> I know Rackspace recently made Barbican available to its cloud
> customers. I suspect it's easier now to perform an HA deploy.
>
> > It could be a side effect of the Big Tent - there are so many projects
> > doing so many different things that projects don't want deployers to
> > have deploy everything.
>
> Yeah, I completely understand that. The thing is that in one case,
> there's a project that currently relies on Barbican and wants to
> replace that with a completely brand new service that will be doing
> other things and then wants to layer secrets on top of it. It seems to
> me like a terrible case of both scope creep and not actually caring
> about the security the users expect from services that have to
> interact with secrets. N services (besides Barbican) implementing
> their own secrets storage each in their own way seem like N different
> services that will be dealing with vulnerabilities and security
> releases for the next few years. Perhaps that's pessimistic, but
> looking at that with my operator hat on, I'd rather have to update *1*
> service (barbican) rather than N if there's some vulnerability that
> comes up. It's the same argument when it comes down to packaging and
> vendoring dependencies. Update once instead of N times for each
> package that has a copy of the library bundled in it.
>
> > > I really want to understand why so many projects feel the need to
> > > implement their own secrets storage. This seems a bit short-sighted
> > > and foolish. While these projects are making themselves easier to
> > > deploy, if not done properly they are potentially endangering their
> > > users and that seems like a bigger problem than deploying Barbican to
> > > me.
> >
> > +100 - One of the reasons we didn't just write our own signing was I
> > am allergic to writing crypto code - I am not very good at it, and there
> > is a project that people that either are, or know how to use the 

[openstack-dev] [barbican] Project Navigator Out of Date

2017-01-16 Thread Jimmy McArthur
Hi all. Just wanted to throw out that if you have bug reports or issues 
with the content on the project navigator, please feel free to send them 
to https://bugs.launchpad.net/openstack-org/ and someone on the 
Foundation Staff will look into it. I've already fielded a one for 
Designate this morning, which we just pushed a patch out for.


If you have other concerns that weren't already addressed by fifieldt or 
ttx, please let me know.


Cheers,
Jimmy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Tom Fifield

On 17/01/17 00:14, Thierry Carrez wrote:

Ian Cordasco wrote:

From: Tom Fifield 

On 16/01/17 21:55, Ian Cordasco wrote:


Third, "Existence and quality of packages for this project in popular
distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
OpenSUSE [5] all have packages (including in stable versions). I can't
speak to the quality of the packages, but knowing the hard work most
of our downstream redistributors put into those packages, I'm certain
they're good quality. This should *definitely* be updated, in my
opinion.


https://bugs.launchpad.net/openstack-org/+bug/1656843

https://github.com/OpenStackweb/openstack-org/pull/59


So, if I understand the two links correctly, changes are planned to
make that tag better and until they're made you're going to stop
displaying it for projects. Is that correct? Are there
other ways the community can help keep the navigator up-to-date?


Regarding stable branch policy and standard deprecation, this
information is directly pulled from governance project tags. Both are
assertion tags that the team must assert by themselves (by proposing a
change to openstack/governance).

Quick glance to:
https://governance.openstack.org/tc/reference/projects/barbican.html

shows that those tags haven't been asserted by the Barbican team yet.

Reference:
https://governance.openstack.org/tc/reference/tags/index.html



... and just for completeness, the relatively small number of tags 
starting with ops: are derived from:


https://github.com/openstack/ops-tags-team

patches via gerrit are welcome :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ian Cordasco
-Original Message-
From: Hayes, Graham 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 16, 2017 at 09:26:00
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
projects trying to avoid Barbican, still?

> On 16/01/2017 13:38, Ian Cordasco wrote:
> > Is the problem perhaps that no one is aware of other projects using
> > Barbican? Is the status on the project navigator alarming (it looks
> > like some of this information is potentially out of date)? Has
> > Barbican been deemed too hard to deploy?
>
> I know that historically it was considered hard to do an HA deploy of
> Barbican. When we initially evaluated DNSSEC in Designate (many years
> ago now) it was one of the sticking points.
>
> This may have (and most likely has) changed, but we seem to have long
> memories.

I know Rackspace recently made Barbican available to its cloud
customers. I suspect it's easier now to perform an HA deploy.

> It could be a side effect of the Big Tent - there are so many projects
> doing so many different things that projects don't want deployers to
> have to deploy everything.

Yeah, I completely understand that. The thing is that in one case,
there's a project that currently relies on Barbican and wants to
replace that with a completely brand new service that will be doing
other things and then wants to layer secrets on top of it. It seems to
me like a terrible case of both scope creep and not actually caring
about the security the users expect from services that have to
interact with secrets. N services (besides Barbican) implementing
their own secrets storage each in their own way seem like N different
services that will be dealing with vulnerabilities and security
releases for the next few years. Perhaps that's pessimistic, but
looking at that with my operator hat on, I'd rather have to update *1*
service (barbican) rather than N if there's some vulnerability that
comes up. It's the same argument when it comes down to packaging and
vendoring dependencies. Update once instead of N times for each
package that has a copy of the library bundled in it.

> > I really want to understand why so many projects feel the need to
> > implement their own secrets storage. This seems a bit short-sighted
> > and foolish. While these projects are making themselves easier to
> > deploy, if not done properly they are potentially endangering their
> > users and that seems like a bigger problem than deploying Barbican to
> > me.
>
> +100 - One of the reasons we didn't just write our own signing was I
> am allergic to writing crypto code - I am not very good at it, and there
> is a project with people that either are, or know how to use the libs
> correctly.

I have the same allergy! This is why I've been pushing back on folks talking
about implementing their own secrets storage.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Thierry Carrez
Ian Cordasco wrote:
> From: Tom Fifield 
>> On 16/01/17 21:55, Ian Cordasco wrote:
>>>
>>> Third, "Existence and quality of packages for this project in popular
>>> distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
>>> OpenSUSE [5] all have packages (including in stable versions). I can't
>>> speak to the quality of the packages, but knowing the hard work most
>>> of our downstream redistributors put into those packages, I'm certain
>>> they're good quality. This should *definitely* be updated, in my
>>> opinion.
>>
>> https://bugs.launchpad.net/openstack-org/+bug/1656843
>>
>> https://github.com/OpenStackweb/openstack-org/pull/59
> 
> So, if I understand the two links correctly, changes are planned to
> make that tag better and until they're made you're going to stop
> displaying it for projects. Is that correct? Are there
> other ways the community can help keep the navigator up-to-date?

Regarding stable branch policy and standard deprecation, this
information is directly pulled from governance project tags. Both are
assertion tags that the team must assert by themselves (by proposing a
change to openstack/governance).

Quick glance to:
https://governance.openstack.org/tc/reference/projects/barbican.html

shows that those tags haven't been asserted by the Barbican team yet.

Reference:
https://governance.openstack.org/tc/reference/tags/index.html

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Ian Cordasco
Hi Tom!

-Original Message-
From: Tom Fifield 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 16, 2017 at 10:02:24
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [barbican] Project Navigator Out of Date?

> On 16/01/17 21:55, Ian Cordasco wrote:
> >
> > Third, "Existence and quality of packages for this project in popular
> > distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
> > OpenSUSE [5] all have packages (including in stable versions). I can't
> > speak to the quality of the packages, but knowing the hard work most
> > of our downstream redistributors put into those packages, I'm certain
> > they're good quality. This should *definitely* be updated, in my
> > opinion.
>
> https://bugs.launchpad.net/openstack-org/+bug/1656843
>
> https://github.com/OpenStackweb/openstack-org/pull/59

So, if I understand the two links correctly, changes are planned to
make that tag better and until they're made you're going to stop
displaying it for projects. Is that correct? Are there
other ways the community can help keep the navigator up-to-date?

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress][oslo.config][keystone] NoSuchOptError: no such option project_domain_name in group [keystone_authtoken]

2017-01-16 Thread Doug Hellmann
Excerpts from Eric K's message of 2017-01-12 14:31:58 -0800:
> On a freshly stacked devstack (Jan 12), attempting to access
> `cfg.CONF.keystone_authtoken.project_domain_name` gave the error:
> NoSuchOptError: no such option project_domain_name in group
> [keystone_authtoken]
> 
> I'm a little confused because it's part of the [keystone_authtoken] config
> section generated by devstack. Could anyone point me to where these options
> are declared (I've searched several repos) and maybe why this option doesn't
> exist? Thanks a lot!
> 
> Of all the options supplied by devstack under [keystone_authtoken], the
> following were accessible:
> memcached_servers
> signing_dir
> cafile
> auth_uri
> auth_url
> auth_type
> 
> But the following were inaccessible:
> project_domain_name
> project_name
> user_domain_name
> password
> username

Options are usually declared in the code that uses them, with a
call to register_opts() either at runtime or when a module is
imported.  You should ensure that your use of the option comes after
its declaration (having the option present in the configuration
file isn't the same as declaring it in the code).
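
To make that concrete, here is a minimal sketch (nothing Congress-specific;
the option names are simply taken from your list):

from oslo_config import cfg

CONF = cfg.CONF

# Until this registration happens, CONF.keystone_authtoken.project_domain_name
# raises NoSuchOptError, no matter what devstack wrote into the config file.
CONF.register_opts(
    [cfg.StrOpt('project_domain_name'), cfg.StrOpt('project_name')],
    group='keystone_authtoken')

CONF([])  # no config files passed here, so the values simply default to None
print(CONF.keystone_authtoken.project_domain_name)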

One other important point: Options are not typically part of the
API of a library (at least not for Oslo libs, and we encourage that
same approach for other libs). If the options you need are defined
in a library, look for a public API to call to retrieve the values
or to instantiate objects using the config but without having your
application code rely on option definitions that may change as options
are moved or deprecated.
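
For the [keystone_authtoken] options specifically, a rough sketch of that
approach using keystoneauth1's loading helpers (the config file path is a
placeholder, and it's worth double-checking the helper names against the
keystoneauth documentation):

from keystoneauth1 import loading
from oslo_config import cfg

CONF = cfg.CONF

# Let keystoneauth own the [keystone_authtoken] option definitions.
loading.register_auth_conf_options(CONF, 'keystone_authtoken')
loading.register_session_conf_options(CONF, 'keystone_authtoken')

CONF(['--config-file', '/etc/congress/congress.conf'])  # placeholder path

auth = loading.load_auth_from_conf_options(CONF, 'keystone_authtoken')
sess = loading.load_session_from_conf_options(CONF, 'keystone_authtoken',
                                              auth=auth)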

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Tom Fifield

On 16/01/17 21:55, Ian Cordasco wrote:


Third, "Existence and quality of packages for this project in popular
distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
OpenSUSE [5] all have packages (including in stable versions). I can't
speak to the quality of the packages, but knowing the hard work most
of our downstream redistributors put into those packages, I'm certain
they're good quality. This should *definitely* be updated, in my
opinion.


https://bugs.launchpad.net/openstack-org/+bug/1656843

https://github.com/OpenStackweb/openstack-org/pull/59

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Hayes, Graham
On 16/01/2017 13:38, Ian Cordasco wrote:
> Hi everyone,
>
> I've seen a few nascent projects wanting to implement their own secret
> storage to either replace Barbican or avoid adding a dependency on it.
> When I've pressed the developers on this point, the only answer I've
> received is to make the operator's lives simpler.
>
> I've been struggling to understand the reasoning behind this and I'm
> wondering if there are more people around who can help me understand.
>
> To help others help me, let me provide my point of view. Barbican's
> been around for a few years already and has been deployed by several
> companies which have probably audited it for security purposes. Most
> of the technology involved in Barbican is proven to be secure and the
> way the project has strung those pieces together has been analyzed by
> the OSSP (OpenStack's own security group). It doesn't have a
> requirement on a hardware TPM which means there's no hardware upgrade
> cost. Furthermore, several services already provide the option of
> using Barbican (but won't place a hard requirement on it). It stands
> to reason (in my opinion) that if new services have a need for secrets
> and other services already support using Barbican as secret storage,
> then those new services should be using Barbican. It seems a bit
> short-sighted of its developers to say that their users are definitely
> not deploying Barbican when projects like Magnum have soft
> dependencies on it.
>
> Is the problem perhaps that no one is aware of other projects using
> Barbican? Is the status on the project navigator alarming (it looks
> like some of this information is potentially out of date)? Has
> Barbican been deemed too hard to deploy?

I know that historically it was considered hard to do an HA deploy of
Barbican. When we initially evaluated DNSSEC in Designate (many years
ago now) it was one of the sticking points.

This may have (and most likely has) changed, but we seem to have long
memories.

It could be a side effect of the Big Tent - there are so many projects
doing so many different things that projects don't want deployers to
have to deploy everything.

> I really want to understand why so many projects feel the need to
> implement their own secrets storage. This seems a bit short-sighted
> and foolish. While these projects are making themselves easier to
> deploy, if not done properly they are potentially endangering their
> users and that seems like a bigger problem than deploying Barbican to
> me.

+100 - One of the reasons we didn't just write our own signing was I
am allergic to writing crypto code - I am not very good at it, and there
is a project with people that either are, or know how to use the libs
correctly.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Unable to add new metrics using meters.yaml

2017-01-16 Thread Srikanth Vavilapalli
Thanks Gord

Yes, for my quick test, I directly added a new exchange to listen for in 
meter/notifications.py and verified that the custom metrics are getting 
processed after that.

Thanks
Srikanth

-Original Message-
From: gordon chung [mailto:g...@live.ca] 
Sent: Monday, January 16, 2017 6:09 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer] Unable to add new metrics using 
meters.yaml



On 13/01/17 06:47 PM, Srikanth Vavilapalli wrote:
> So the question is, is there any config that I can use to let 
> "ceilometer/meter/notifications.py" listen on other rabbitmq exchanges in 
> addition to predefined ones, such that this framework can be extended to 
> receive meters from non openstack services? Appreciate your inputs.

sorry, hit send too quickly.

you could also hack the code to support your exchange[1]. i also believe in 
theory, you should be able to re-use http_control_exchanges to leverage your 
custom exchanges[2]

[1]
https://github.com/openstack/ceilometer/blob/aa3f491bb714c613681125d242e4c9ea254bdbe2/ceilometer/meter/notifications.py#L210
[2]
https://github.com/openstack/ceilometer/blob/34a699a598122f2f1f44e5f169ee21d6c22665d0/ceilometer/middleware.py#L30

cheers,
--
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 7

2017-01-16 Thread Jay Pipes

On 01/11/2017 01:11 PM, Chris Dent wrote:

On Fri, 6 Jan 2017, Chris Dent wrote:


## can_host, aggregates in filtering

There's still some confusion (from at least me) on whether the
can_host field is relevant when making queries to filter resource
providers. Similarly, when requesting resource providers to satisfy a
set of resources, we don't (unless I've completely missed it) return
resource providers (as compute nodes) that are associated with other
resource providers (by aggregate) that can satisfy a resource
requirement. Feels like we need to work backwards from a test or use
case and see what's missing.


At several points throughout the day I've been talking with edleafe
about this to see whether "knowing about aggregates (or can_host)" when
making a request to `GET /resource_providers?resources=`
needs to be dealt with on a scale of now, soon, later.
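
For concreteness, the request under discussion is roughly of this shape
(the endpoint, token and microversion below are placeholders I'm assuming
for illustration, not details from this thread):

import requests

PLACEMENT = 'http://controller:8778/placement'  # placeholder endpoint
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',       # placeholder token
           'OpenStack-API-Version': 'placement 1.4'}

# Ask placement for providers that can satisfy the whole set of resources.
resp = requests.get(PLACEMENT + '/resource_providers',
                    params={'resources': 'VCPU:1,MEMORY_MB:2048,DISK_GB:20'},
                    headers=HEADERS)
print(resp.json().get('resource_providers', []))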

After much confusion I think we've established that for now we don't
need to. But we need to confirm so I said I'd write something down.

The basis for this conclusion is from three assumptions:

* The value of 'local_gb' on the compute_node object is any disk the
  compute_node can see/use and the concept of associating with shared
  disk by aggregates is not something that is real yet[0].


Yes.


* Any query for resources from the scheduler client is going to
  include a VCPU requirement of at least one (meaning that every
  resource provider returned will be a compute node[1]).


Meh, we *could* do that, but for now it's unnecessary since the only 
provider records being currently created by the resource tracker 
(scheduler report client) are compute node provider records.



* Claiming the consumption of some of that local_gb by the resource
  tracker is the resource tracker's problem and not something we're
  talking about here[2].


Yes.


If all that's true, then we're getting pretty close for near term
joy on limiting the number of hosts the filter scheduler needs to
filter[3].


Yes, and the joy merged. So, we're in full joy mode.


If it's not true (for the near term), can someone explain why not
and what need to do to fix it?

In the longer term:

Presumably the resource tracker will start reporting inventory
without DISK_GB when using shared disk, and shared disk will be
managed via aggregate associations. When that happens, the query
to GET /resource_providers will need a way to say "only give me
compute nodes that can either satisfy this resource request
directly or via associated stuff". Something tidier than:

GET
/resource_providers?resources:_only_want_capable_or_associated_compute_nodes=True


The request from the scheduler will not change at all. The user is 
requesting some resources; where those resources live is not a concern 
of the user.



The techniques to do that, if I understand correctly, are in an
email from Jay that some of us received a while go with a subject of
"Some attachments to help with resource providers querying".
Butterfly joins and such like.


Yes, indeed. The can_host field -- probably better named as "is_shared" 
or something like that -- can simplify some of the more complex join 
conditions that querying with associated shared resource pools brings 
into play. But it's more of an optimization (server side) than anything 
else. Thus, I'd prefer if we keep can_host out of any REST API interfaces.


Best,
-jay


Thoughts, questions, clarifications?

[0] This is different from the issue with allocations not needing to
be recorded when the instance has non-local disk (is volume backed):
https://review.openstack.org/#/c/407180/ . Here we are talking about
recording compute node inventory.

[1] This ignores for the moment that unless someone has been playing
around there are no resource providers being created in the
placement API that are not compute nodes.

[2] But for reference will presumably come from the work started
here https://review.openstack.org/#/c/407309/ .

[3] That work starts here: https://review.openstack.org/#/c/392569/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Hayes, Graham
On 16/01/2017 14:33, Julien Danjou wrote:
> On Mon, Jan 16 2017, Ian Cordasco wrote:
>
>> Related to the other thread I just started, I was looking at the
>> project navigator [1] for Barbican and found some things that look
>> wrong (to an outsider) and was hoping could be cleared up.
>
> Don't worry, we (Telemetry) have already asked for that to be updated
> but nothing happened. I don't even recall that we had an answer about
> how the page is managed. See this thread from October 2016:
>
>   http://lists.openstack.org/pipermail/openstack-dev/2016-October/105617.html
>

I have filed bugs on the foundation website launchpad [0] to get these
fixed in the past.

That said, there are problems with the navigator (like what levels are
considered "mature", the choice of tags, etc) that make me just ignore
it. It even considers our Juno release as "not deprecated" (in the API 
version history?)

> The page is also missing components for us. I feel bad that the foundation
> Web site reflects bad data. It actually harms small projects that made
> good progress towards what is called "maturity" but are still
> wrongly rated 1/8.

Yup. As a small project it is hard enough to keep up without losing out
in such a public way.

> Sigh.
>
> Cheers,
>

0 - https://launchpad.net/openstack-org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Chris Dent

On Mon, 16 Jan 2017, Ian Cordasco wrote:


I really want to understand why so many projects feel the need to
implement their own secrets storage. This seems a bit short-sighted
and foolish. While these projects are making themselves easier to
deploy, if not done properly they are potentially endangering their
users and that seems like a bigger problem than deploying Barbican to
me.


What I've heard in the past is that no one wants to rely on
something that they cannot guarantee will be present in a
deployment. The debate surrounding what ought to be guaranteed in a
deployment is part of what inspired the notion of "base services",
which is a topic up for proposal in the architecture working group:

https://review.openstack.org/#/c/419397/

(In other words: yeah, important topic.)
--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Matthew Thode
On 01/16/2017 07:55 AM, Ian Cordasco wrote:
> Keeping the navigator up-to-date seems (to me) to be a good
> way to improve Barbican's image. I would be happy to work with you all
> (with what little time I have) to update the navigator to better
> reflect Barbican's reality.

Part of the reason I've avoided packaging barbican is this page.  That
and projects not really using it.  I would like to see more projects
using barbican.

-- 
Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] About alarms reported by datasource and the alarms generated by vitrage evaluator

2017-01-16 Thread Afek, Ifat (Nokia - IL)

From: Yujun Zhang 
Date: Sunday, 15 January 2017 at 17:53


About fault and alarm, what I was thinking about the causal/deducing chain in 
root cause analysis.

Fault state means the resource is not fully functional and it is evaluated by 
related indicators. There are alarms on events like power loss or measurands 
like CPU high, memory low, temperature high. There are also alarms based on 
deduced state, such as "host fault", "instance fault".

So an example chain would be
· "FAULT: power line cut off" =(monitor)=> "ALARM: host power loss" 
=(inspect)=> "FAULT: host is unavailable" =(action)=> "ALARM: host fault"
· "FAULT: power line cut off" =(monitor)=> "ALARM: host power loss" 
=(inspect)=> "FAULT: host is unavailable" =(inspect)=> "FAULT: instance is 
unavailable" =(action)=> "ALARM: instance fault"
If we omit the resource, then we get the causal chain as it is in Vitrage
· "ALARM: host power loss" =(causes)=> "ALARM: host fault"
· "ALARM: host power loss" =(causes)=> "ALARM: instance fault"
But what the user cares about might be that "FAULT: power line cut off" causes 
all these alarms. What I haven't made clear yet is the equivalence between 
fault and alarm.

I may have made it more complex with my immature thoughts. It could be even 
more complex if we consider multiple upstream causes and downstream outcomes. It 
may be an interesting topic to be discussed in a design session.


[Ifat] I agree. Let’s discuss this in the next design session we’ll have


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting - 01/16/2017 (NO MORE REMINDERS)

2017-01-16 Thread Renat Akhmerov
Hi,

We’ll have a team meeting today at 16.00 UTC at #openstack-meeting as usual.

IMPORTANT: We discussed it with the team and decided to stop sending out these 
reminders since
we don’t find them very useful anymore. So this is the last one. We agreed to 
send notifications only
in some special cases like: meeting cancellation, intention to discuss very 
important topics, format
changes etc. Having said that, everyone is still welcome to join our meetings 
and bring up any topics
that are important for you. You can also let us know in advance what you would 
like to discuss either
by sending an email or chatting with us at #openstack-mistral IRC channel, or 
by adding items at [1].

Agenda for today’s meeting:
Review action items
Current status (progress, issues, roadblocks, further plans)
PTG preparations
Open discussion

[1] https://wiki.openstack.org/wiki/Meetings/MistralAgenda 


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] tripleoclient release : January 26th

2017-01-16 Thread Emilien Macchi
One day I'll read calendars correctly :-)
Client releases are next week, so we'll release tripleoclient by January 26th.

Sorry for confusion.

On Sun, Jan 15, 2017 at 6:41 PM, Emilien Macchi  wrote:
> https://releases.openstack.org/ocata/schedule.html
>
> It's time to release python-tripleoclient this week.
> We still have 15 bugs in progress targeted for ocata-3.
> https://goo.gl/R2hO4Z
>
> Please triage them to pike-1 unless they are critical or high; those we
> need to fix and then backport to stable/ocata.
>
> We'll release the client by Thursday 19th end of day.
> Please let us know any blocker,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Julien Danjou
On Mon, Jan 16 2017, Ian Cordasco wrote:

> Related to the other thread I just started, I was looking at the
> project navigator [1] for Barbican and found some things that look
> wrong (to an outsider) and was hoping could be cleared up.

Don't worry, we (Telemetry) have already asked for that to be updated
but nothing happened. I don't even recall that we had an answer about
how the page is managed. See this thread from October 2016:

  http://lists.openstack.org/pipermail/openstack-dev/2016-October/105617.html

The page is also missing components for us. I feel bad that the foundation
Web site reflects bad data. It actually harms small projects that made
good progress towards what is called "maturity" but are still
wrongly rated 1/8.

Sigh.

Cheers,
-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Unable to add new metrics using meters.yaml

2017-01-16 Thread gordon chung


On 13/01/17 06:47 PM, Srikanth Vavilapalli wrote:
> So the question is, is there any config that I can use to let 
> "ceilometer/meter/notifications.py" listen on other rabbitmq exchanges in 
> addition to predefined ones, such that this framework can be extended to 
> receive meters from non openstack services? Appreciate your inputs.

sorry, hit send too quickly.

you could also hack the code to support your exchange[1]. i also believe 
in theory, you should be able to re-use http_control_exchanges to 
leverage your custom exchanges[2]

[1] 
https://github.com/openstack/ceilometer/blob/aa3f491bb714c613681125d242e4c9ea254bdbe2/ceilometer/meter/notifications.py#L210
[2] 
https://github.com/openstack/ceilometer/blob/34a699a598122f2f1f44e5f169ee21d6c22665d0/ceilometer/middleware.py#L30
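
fwiw, a rough sketch of the publishing side -- what a non-openstack service
could do with oslo.messaging so its notifications arrive on an exchange the
listener has been extended to cover (the exchange and event names below are
made up):

from oslo_config import cfg
import oslo_messaging

# publish on a custom exchange named "myservice" instead of an openstack one
oslo_messaging.set_transport_defaults(control_exchange='myservice')

transport = oslo_messaging.get_notification_transport(cfg.CONF)
notifier = oslo_messaging.Notifier(transport,
                                   publisher_id='myservice.host-1',
                                   driver='messagingv2',
                                   topics=['notifications'])

# a meters.yaml definition can then match this event_type and map the payload
notifier.info({}, 'myservice.widget.created',
              {'widget_id': 'abc123', 'count': 1})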

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Next notification meeting

2017-01-16 Thread Balázs Gibizer
Hi,

The next notification subteam meeting will be held on 2017.01.17 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
gibi

[1]
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170117T17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Unable to add new metrics using meters.yaml

2017-01-16 Thread gordon chung


On 13/01/17 06:47 PM, Srikanth Vavilapalli wrote:
> So the question is, is there any config that I can use to let 
> "ceilometer/meter/notifications.py" listen on other rabbitmq exchanges in 
> addition to predefined ones, such that this framework can be extended to 
> receive meters from non openstack services? Appreciate your inputs.

support for custom exchanges was planned but not finished i believe. you 
are welcome to add support for this... or wait until someone gets around 
to it.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Project Navigator Out of Date?

2017-01-16 Thread Ian Cordasco
Hi barbicaneers (I don't actually know what y'all call yourselves :)),

Related to the other thread I just started, I was looking at the
project navigator [1] for Barbican and found some things that look
wrong (to an outsider) and was hoping could be cleared up.

First, "Is this project maintained following the common Stable branch
policy?" appears to be "Yes" now. I notice you have stable branches
that actually look stable. Are y'all working with the stable
maintenance team on them?

Second, "Does this project follows standard deprecation?" I'm not
(yet) a user of Barbican, but are you still not following the standard
deprecation policy?

Third, "Existence and quality of packages for this project in popular
distributions." it seems Fedora [2], Debian [3], Ubuntu [4], and
OpenSUSE [5] all have packages (including in stable versions). I can't
speak to the quality of the packages, but knowing the hard work most
of our downstream redistributors put into those packages, I'm certain
they're good quality. This should *definitely* be updated, in my
opinion.

Finally, "Are vulnerability issues managed by the OpenStack security
team?". I know that the OpenStack Security Project worked with the
Barbican team to come up with a vulnerability analysis a few midcycles
ago. Is that roughly where you all stopped? Is there a reason you
haven't attempted to work with the VMT on security issues?

Hopefully my agenda is obvious - I'd like to see fewer projects
attempting to implement their own secret storage and instead use
Barbican. Keeping the navigator up-to-date seems (to me) to be a good
way to improve Barbican's image. I would be happy to work with you all
(with what little time I have) to update the navigator to better
reflect Barbican's reality.

[1]: https://www.openstack.org/software/releases/newton/components/barbican
[2]: https://apps.fedoraproject.org/packages/s/barbican
[3]: 
https://packages.debian.org/search?keywords=barbican=all=all=all
[4]: 
http://packages.ubuntu.com/search?keywords=barbican=names=all=all
[5]: 
https://software.opensuse.org/search?utf8=✓=barbican_devel=false_unsupported=false=openSUSE:Leap:42.2

Cheers,
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Ian Cordasco
Hi everyone,

I've seen a few nascent projects wanting to implement their own secret
storage to either replace Barbican or avoid adding a dependency on it.
When I've pressed the developers on this point, the only answer I've
received is to make the operator's lives simpler.

I've been struggling to understand the reasoning behind this and I'm
wondering if there are more people around who can help me understand.

To help others help me, let me provide my point of view. Barbican's
been around for a few years already and has been deployed by several
companies which have probably audited it for security purposes. Most
of the technology involved in Barbican is proven to be secure and the
way the project has strung those pieces together has been analyzed by
the OSSP (OpenStack's own security group). It doesn't have a
requirement on a hardware TPM which means there's no hardware upgrade
cost. Furthermore, several services already provide the option of
using Barbican (but won't place a hard requirement on it). It stands
to reason (in my opinion) that if new services have a need for secrets
and other services already support using Barbican as secret storage,
then those new services should be using Barbican. It seems a bit
short-sighted of its developers to say that their users are definitely
not deploying Barbican when projects like Magnum have soft
dependencies on it.

Is the problem perhaps that no one is aware of other projects using
Barbican? Is the status on the project navigator alarming (it looks
like some of this information is potentially out of date)? Has
Barbican been deemed too hard to deploy?

I really want to understand why so many projects feel the need to
implement their own secrets storage. This seems a bit short-sighted
and foolish. While these projects are making themselves easier to
deploy, if not done properly they are potentially endangering their
users and that seems like a bigger problem than deploying Barbican to
me.

-- 
Ian Cordasco
Glance, Hacking, Bandit, and Craton core reviewer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI]

2017-01-16 Thread Emilien Macchi
On Mon, Jan 16, 2017 at 3:08 AM, Dougal Matthews  wrote:
>
>
> On 15 January 2017 at 20:24, Sagi Shnaidman  wrote:
>>
>> Hi, all
>>
>> FYI, the periodic TripleO nonha jobs fail because of introspection
>> failure, there is opened bug in mistral:
>>
>> Ironic introspection fails because unexpected keyword "insecure"
>> https://bugs.launchpad.net/tripleo/+bug/1656692
>
>
> I've taken this on and posted https://review.openstack.org/#/c/420547/
>
> How can I verify that fixes the periodic job?

We use https://review.openstack.org/#/c/359215/ (but needs to be used
carefully).

But we've got a promotion thanks to your patch, thanks a lot!

>
>>
>>
>> and marked as promotion blocker.
>>
>> Thanks
>> --
>> Best regards
>> Sagi Shnaidman
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci] replacing periodic tempest job with quickstart

2017-01-16 Thread Gabriele Cerami
Hi,

as part of an effort to bring the success rate of tempest tests closer to
100% in tripleo-ci, we propose to replace the current periodic ha
tempest job with one that uses quickstart, but tests in nonha.

We pushed a change in infra: https://review.openstack.org/420647
that will replace the current job with one driven by quickstart, and
makes it possible to selectively increase the timeout of a single job.
This is needed since a deployment + full tempest test may take as long
as 5 hours to finish.
Another approach for this could be to clone the job template
specifically for tempest, and associate the job to the new template. But
this creates code duplication, and the way we name the jobs at the
moment would make naming this new job a bit difficult

this change https://review.openstack.org/420620 proposed to tripleo-ci
instead activates the configuration needed to launch tempest jobs with
quickstart

any feedback will be very appreciated,

thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-16 Thread Tom Fifield

On 14/01/17 04:07, Joshua Harlow wrote:


Sometimes I almost wish we just rented out a football stadium (or
equivalent, a soccer field?) and put all the contributors in the 'field'
with bean bags and some tables and a bunch of white boards (and a lot of
wifi and power cords) and let everyone 'have at it' (ideally in a
stadium with a roof in the winter). Maybe put all the infra people in a
circle in the middle and make the foundation people all wear referee
outfits.

It'd be an interesting social experiment at least :-P


I have been informed we have located at least 3 referee outfits across 
Foundation staff, along with a set of red/yellow cards.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 7

2017-01-16 Thread Chris Dent

On Wed, 11 Jan 2017, Chris Dent wrote:


The basis for this conclusion is from three assumptions:

* The value of 'local_gb' on the compute_node object is any disk the
 compute_node can see/use and the concept of associating with shared
 disk by aggregates is not something that is real yet[0].

* Any query for resources from the scheduler client is going to
 include a VCPU requirement of at least one (meaning that every
 resource provider returned will be a compute node[1]).

* Claiming the consumption of some of that local_gb by the resource
 tracker is the resource tracker's problem and not something we're
 talking about here[2].

If all that's true, then we're getting pretty close for near term
joy on limiting the number of hosts the filter scheduler needs to
filter[3].


These assumptions don't address any situations where baremetal is
what's being requested. Custom resource classes will help address
that, but it is not clear what the state of that is (or will be
soon) from the scheduler side.
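
(For anyone following along: creating such a custom resource class is, if I
recall the microversion correctly, a single call to placement; the endpoint
and token below are placeholders.)

import requests

PLACEMENT = 'http://controller:8778/placement'  # placeholder endpoint
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',       # placeholder token
           'OpenStack-API-Version': 'placement 1.2'}

# Custom resource class names must carry the CUSTOM_ prefix.
resp = requests.post(PLACEMENT + '/resource_classes',
                     json={'name': 'CUSTOM_BAREMETAL_GOLD'},
                     headers=HEADERS)
print(resp.status_code)  # expecting 201 Created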

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [machine learning] Question: Why there is no serious project for machine learning ?

2017-01-16 Thread eran

Not sure what you mean by serious.

Maybe you could have a look at Meteos[1]. It is a young project but surely
focuses on machine learning.

[1]: https://wiki.openstack.org/wiki/Meteos
Another avenue is to use Storlets for either the learning or prediction  
phase where the data resides in Swift.
We are currently adding IPython integration [1] that makes it very  
easy to deploy and invoke Storlets from IPython (a data scientist's  
beloved tool :-), plus [2] is initial work towards leveraging  
Storlets for machine learning.


In a few more words: Storlets [3] allow you to run a serverless computation  
inside Swift nodes, where the computation is done inside a Docker  
container. This basically means that you can write a piece of code (in  
either Python or Java), upload that code to Swift (as if it was a data  
object) and then invoke the uploaded code (called a storlet) on your  
data (much like AWS Lambda). The nice thing is that the Docker image  
where the storlet is executed can be tailored by the admin, so as to  
make sure it has, e.g., scikit-learn installed. With such a Docker  
image you can write a storlet that would use the scikit-learn  
algorithms on Swift objects.
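
To give a feel for it, a storlet is roughly a class like the one below.  
This is a deliberately simplified sketch; the exact file-descriptor  
interface, the set_metadata call and the class name should all be checked  
against the Storlets docs [3]:

import json


class PredictStorlet(object):
    def __init__(self, logger):
        self.logger = logger

    def __call__(self, in_files, out_files, params):
        # Read the input object, e.g. a JSON list of feature vectors.
        samples = json.loads(in_files[0].read())
        in_files[0].close()

        # Stand-in "model"; with a tailored Docker image this could call
        # scikit-learn, e.g. a pre-trained estimator loaded with joblib.
        predictions = [sum(features) for features in samples]

        # Write the predictions out as the result object.
        out_files[0].set_metadata({'predicted': 'true'})
        out_files[0].write(json.dumps(predictions))
        out_files[0].close()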



[1] https://review.openstack.org/#/c/416089/
[2] https://github.com/eranr/mlstorlets
[3] http://storlets.readthedocs.io/en/latest/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-16 Thread Thierry Carrez
Fox, Kevin M wrote:
> Don't want to hijack the thread too much but... when the PTG was being sold, 
> it was a way to get the various developers in to one place and make it 
> cheaper to go to for devs. Now it seems to be being made into a place where 
> each of the silo's can co'exist but not talk, and then the summit is still 
> required to get cross project work done, so it only increases the devs cost 
> by requiring attendance at both. This is very troubling. :/ What's the main 
> benefit of the PTG then?

To me the main benefit is to separate the time we are trying to get
things done /within/ development teams from the time we are trying to
reach out /beyond/ development teams. For some teams it was really
difficult to find time to listen to users and gather requirements while
at the same time trying to build trust, priorities and organize work for
the coming cycle, all within the same week.

About "cross-project" work, the problem is (as always) that the term is
*very* overloaded. There is actually two kinds of transversal work:

- cross-community work: discussions between all segments of our
community: developers, operators, app developers, organizations building
products on top of OpenStack... This is for example about getting
feedback on recent releases or features, or evolving stable or
deprecation policies, or gathering requirements for future cycles (like
the recent "what would you like to see in Pike" discussion on ops ML).
This needs to happen in a forum where there is representation of all the
segments, i.e. at the Summit.

- inter-project work: discussions between a number of upstream project
teams (for example Nova+Cinder, or all teams using oslo.privsep, or all
devs working on a given release goal, or release liaisons giving
feedback to the release management team, or people involved in
consistent versioned endpoints). This is necessary to break the silos
between teams, and will happen at the PTG. We'll use the week split to
hopefully facilitate that cross-attendance (Mon-Tue vs. Wed-Fri), as
well as a fishbowl room to schedule any necessary inter-project
discussions. If this event format is not cutting it, we'll evolve it for
PTG2.

Obviously there are things that live close to the edge, and for which
there might be a bit of overlap (I suspect we'll be discussing them in
both venues).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia]redirection and barbican config

2017-01-16 Thread Abed Abu-Dbai
Hi,

I updated the description accordingly. Please update the status
 
https://bugs.launchpad.net/devstack/+bug/1655656

Thanks,
Abed

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Marking Tintri driver as unsupported

2017-01-16 Thread Silvan Kaiser
Regarding the reason for the failing tests:
Looks like [1] switches the default for support of managed snapshots to
true in devstack.
As the default on that was 'false' until Friday, Quobyte CI did not set that
option previously. I'm running tests with a revised config now.
Btw, feel free to contact me in IRC (kaisers).
Best
Silvan

[1] https://review.openstack.org/#/c/419073/



2017-01-16 8:52 GMT+01:00 Silvan Kaiser :

> Apoorva, Sean,
> after some time I managed to bring up Quobyte CI last Friday, which tested
> fine [1,2,3] for a short time and then ran into the same issues with
> manage_snapshot-related tempest tests Apoorva describes (starting
> chronologically at [4]).
> From here I see two steps:
> a) look into the reason for the issue with manage_snapshot tests
> b) a short note on how to proceed for marking drivers with reinstated CIs
> back as supported is much appreciated
> Best
> Silvan
>
> [1] https://review.openstack.org/#/c/412085/4
> [2] https://review.openstack.org/#/c/313140/24
> [3] https://review.openstack.org/#/c/418643/6
> [4] https://review.openstack.org/#/c/363010/42
>
>
>
>
> 2017-01-15 20:46 GMT+01:00 Apoorva Deshpande :
>
>> Sean,
>>
>> We have resolved issues related to our CI infra[1][2]. At this point
>> manage_snapshot related tempest tests (2) are failing, but Tintri driver
>> does not support manage/unmanage snapshot functionalities.
>>
>> Could you please assist me on how to skip these tests? We are using
>> sos-ci for our CI runs.
>>
>> If this satisfies the compliance requirements can I propose a patch to
>> revert changes introduced by [3] and make Tintri driver SUPPORTED again?
>>
>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-69-353069-52/
>> [2] http://openstack-ci.tintri.com/tintri/refs-changes-10-419710-2/
>> [3] https://review.openstack.org/#/c/411975/
>>
>> On Sat, Dec 17, 2016 at 6:34 PM, Apoorva Deshpande 
>> wrote:
>>
>>> Sean,
>>>
>>> As communicated earlier [1][2][3], Tintri CI is facing a devstack
>>> failure issue, potentially due to [4].
>>> We are working on it and request you to give us more time before
>>> approving the unsupported driver patch [5].
>>>
>>>
>>> [1] https://www.mail-archive.com/openstack-dev@lists.opensta
>>> ck.org/msg97085.html
>>> [2] https://www.mail-archive.com/openstack-dev@lists.opensta
>>> ck.org/msg97057.html
>>> [3] http://eavesdrop.openstack.org/irclogs/%23openstack-cind
>>> er/%23openstack-cinder.2016-12-05.log.html
>>> [4] https://review.openstack.org/#/c/399550/
>>> [5] https://review.openstack.org/#/c/411975/
>>>
>>> On Sat, Dec 17, 2016 at 2:05 AM, Sean McGinnis 
>>> wrote:
>>>
 Checking name: Tintri CI
 last seen: 2016-12-16 16:50:50 (0:43:36 old)
 last success: 2016-11-16 20:42:29 (29 days, 20:45:46 old)
 success rate: 19%

 Per Cinder's non-compliance policy [1] this patch [2] marks
 the driver as unsupported and deprecated and it will be
 approved if the issue is not corrected by the next cycle.

 [1] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drive
 rs#Non-Compliance_Policy
 [2] https://review.openstack.org/#/c/411975/

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>



-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-16 Thread Antoni Segura Puimedon
That's a majority of the cores having cast positive votes.

Congratulations to Liping Mao and Ilya Chukhnakov! You're now cores and on
the hook!


On Mon, Jan 16, 2017 at 3:10 AM, Vikas Choudhary  wrote:

> +1 for both.
>
> On Sun, Jan 15, 2017 at 12:42 PM, Gal Sagie  wrote:
>
>> +1 for both.
>>
>> On Sun, Jan 15, 2017 at 9:05 AM, Irena Berezovsky 
>> wrote:
>>
>>>
>>>
>>> On Fri, Jan 13, 2017 at 6:49 PM, Antoni Segura Puimedon <
>>> celeb...@gmail.com> wrote:
>>>
 Hi fellow kuryrs!

 We are getting close to the end of the Ocata and it is time to look
 back and appreciate the good work all the contributors did. I would like to
 thank you all for the continued dedication and participation in gerrit, the
 weekly meetings, answering queries on IRC, etc.

 I also want to propose two people that I think will help us a lot as
 core contributors in the next cycles.

 For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping
 has been contributing a lot since Mitaka, not just in code but in
 reviews catching important details and fixing bugs. It is overdue that he
 gets to help us even more!

 +1
>>>
 For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into
 Kuryr at the end of the Newton cycle and has done a wonderful job in the
 Kubernetes integration contributing heaps of code and being an important
 part of the design discussions and patches. It is also time for him to
 start approving patches :-)

 +1
>>>

 Let's have the votes until next Friday (unless enough votes are cast
 earlier).

 Regards,

 Toni

>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI]

2017-01-16 Thread Dougal Matthews
On 15 January 2017 at 20:24, Sagi Shnaidman  wrote:

> Hi, all
>
> FYI, the periodic TripleO nonha jobs fail because of introspection
> failure, there is opened bug in mistral:
>
> Ironic introspection fails because unexpected keyword "insecure"
> https://bugs.launchpad.net/tripleo/+bug/1656692
>

I've taken this on and posted https://review.openstack.org/#/c/420547/

How can I verify that fixes the periodic job?



>
> and marked as promotion blocker.
>
> Thanks
> --
> Best regards
> Sagi Shnaidman
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev