Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem in devstack install - No Network found for private

2017-01-17 Thread Andreas Scheuring
Without looking into the details:

you're specifying
Q_USE_PROVIDER_NETWORKING=True
in your local.conf. Usually this results in the creation of a single
provider network called "public", but the manila devstack plugin does
not seem able to deal with provider networks, as it expects a
network named "private" to be present.
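If you only need to unblock the manila plugin, one possible workaround (an
untested sketch; the network/subnet names and the range are assumptions taken
from the error message and the local.conf excerpt below) is to create the
expected network by hand before the plugin runs:

```shell
openstack network create private
openstack subnet create --network private \
    --subnet-range 10.11.12.0/24 private-sub
```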


Why are you using provider networks? Just for the sake of VLANs? You can
also configure devstack to use VLANs with the default setup. This has
worked for me in the past and results in a private network using VLANs
(assuming you have created the OVS bridge br-data manually):


OVS_PHYSICAL_BRIDGE=br-data
PHYSICAL_NETWORK=phys-data

ENABLE_TENANT_TUNNELS=False
Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1000
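For reference, the bridge assumed above can be created beforehand with
something like the following (a sketch; the name must match
OVS_PHYSICAL_BRIDGE in the config):

```shell
sudo ovs-vsctl --may-exist add-br br-data
```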




-- 
-
Andreas 
IRC: andreas_s



On Mi, 2017-01-18 at 06:59 +, nidhi.h...@wipro.com wrote:
> Hi All, 
> 
> 
> I was trying to install the latest Newton version of OpenStack using
> devstack on my laptop, all in one machine,
> 
> using a VirtualBox VM. Lately I have been facing the same problem in my
> last few tries, and the installation does not succeed.
> 
> 
> My VM network adapter configuration is as below.
> 
> 
> Adapter 1 (settings screenshot not preserved)
> 
> and the 2nd adapter:
> 
> Adapter 2 (settings screenshot not preserved)
> 
> That's the detail of the host-only networking.
> 
> That's my local.conf for devstack:
> 
> 
> 
> http://paste.openstack.org/show/595313/
> 
> 
> 
> 
> An excerpt is:
> 
> FIXED_RANGE=10.11.12.0/24
> 
> 
> NETWORK_GATEWAY=10.11.12.1
> FIXED_NETWORK_SIZE=256
> 
> 
> FLOATING_RANGE=10.0.2.0/24
> Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
> PUBLIC_NETWORK_GATEWAY=10.0.2.1
> HOST_IP=10.0.2.15
> 
> 
> PUBLIC_INTERFACE=eth0
> 
> 
> 
> That's the Ubuntu version on the VM:
> stack@ubuntu:~/devstack$ lsb_release -d
> Description: Ubuntu 14.04.5 LTS
> stack@ubuntu:~/devstack$ 
> 
> 
> That's my machine's network interfaces file:
> 
> 
> stack@ubuntu:~/devstack$ cat /etc/network/interfaces
> 
> 
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
> 
> 
> # The loopback network interface
> auto lo
> iface lo inet loopback
> 
> 
> # The primary network interface
> auto eth1
> iface eth1 inet static
> address 192.168.56.150
> netmask 255.255.255.0
> 
> 
> auto eth0
> iface eth0 inet dhcp
> stack@ubuntu:~/devstack$ 
> 
> 
> 
> The error I am facing is:
> 
> 
> http://paste.openstack.org/show/595315/
> 
> 
> 
> An excerpt is:
> 
> 
> 2017-01-18 06:29:55.396 | +++ /opt/stack/manila/devstack/plugin.sh:create_service_share_servers:287 :   openstack network show private -f value -c id
> 2017-01-18 06:29:56.778 | ResourceNotFound: No Network found for private
> 2017-01-18 06:29:56.805 | ++ /opt/stack/manila/devstack/plugin.sh:create_service_share_servers:287 :   private_net_id=
> 2017-01-18 06:29:56.807 | + /opt/stack/manila/devstack/plugin.sh:create_service_share_servers:1 :   exit_trap
> 2017-01-18 06:29:56.809 | + ./stack.sh:exit_trap:487 :   local r=1
> 2017-01-18 06:29:56.815 | ++ ./stack.sh:exit_trap:488 :   jobs -p
> 2017-01-18 06:29:56.817 | + ./stack.sh:exit_trap:488 :   jobs=
> 2017-01-18 06:29:56.819 | + ./stack.sh:exit_trap:491 :   [[ -n '' ]]
> 2017-01-18 06:29:56.821 | + ./stack.sh:exit_trap:497 :   kill_spinner
> 2017-01-18 06:29:56.823 | + ./stack.sh:kill_spinner:383 :   '[' '!' -z '' ']'
> 2017-01-18 06:29:56.824 | + ./stack.sh:exit_trap:499 :   [[ 1 -ne 0 ]]
> 2017-01-18 06:29:56.826 | + ./stack.sh:exit_trap:500 :   echo 'Error on exit'
> 2017-01-18 06:29:56.826 | Error on exit
> 2017-01-18 06:29:56.828 | + ./stack.sh:exit_trap:501 :   generate-subunit 1484720095 901 fail
> 2017-01-18 06:29:57.844 | + ./stack.sh:exit_trap:502 :   [[ -z /opt/stack/logs ]]
> 2017-01-18 06:29:57.846 | + ./stack.sh:exit_trap:505 :   /home/stack/devstack/tools/worlddump.py -d /opt/stack/logs
> 2017-01-18 06:30:03.325 | + ./stack.sh:exit_trap:511 :   exit 1
> 
> 
> 
> Devstack does not succeed at all; I have tried a couple of times.
> 
> 
> Can someone help point out what mistake I am making that prevents the
> private network from being created?
> 
> I do not need to use the generic driver for the manila share at all; I
> can skip that option as well.
> 
> 
> Any kind of input will be really helpful.
> 
> 
> Thanks
> 
> Nidhi

[openstack-dev] [kolla] Contributors welcome to kolla-kubernetes 0.5.0

2017-01-17 Thread Steven Dake (stdake)
Hey folks,

The release team released kolla-kubernetes 0.4.0 Sunday January 15th.  Now we 
are in 0.5.0 development which lasts one month.

The general architecture of OpenStack based deployments with a Kubernetes 
underlay is taking form.  There are 5 blueprints in 0.5.0 which we expect 
should land prior to the PTG:
https://launchpad.net/kolla-kubernetes/+milestone/0.5.0

If you have a personal interest in any of these blueprints, the fact that they 
are “assigned” doesn’t mean there isn’t a contribution to be made.  If you 
click through to an individual blueprint, you can see the “Work Items” field, 
which contains each work item that needs addressing in each of these master 
blueprints.  For each master blueprint, there may be 10 or more work items.

The goal of these “master” blueprints is to distribute the work among the 
development community.  There should be enough to work from in the whiteboard 
patches to produce an implementation.

Feel free to change “unassigned” to your Launchpad id for any of the blueprint 
work items.  The person assigned is responsible for tracking the state of the 
blueprint’s work items or generally leading the effort around those blueprints 
as has been done in other Kolla deliverables.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] Ocata cycle ending and proposing new people as Kuryr cores

2017-01-17 Thread Liping Mao (limao)
Thanks, all. It's a pleasure to work with all of you.

Regards,
Liping Mao

From: Antoni Segura Puimedon 
Date: Monday, 16 January 2017, 16:34
To: OpenStack List 
Cc: "Liping Mao (limao)" , Ilya Chukhnakov 
, Vikas Choudhary , Irena 
Berezovsky 
Subject: Re: [openstack-dev] [kuryr] Ocata cycle ending and proposing new people as 
Kuryr cores

That's a majority of the cores having cast positive votes.
Congratulations to Liping Mao and Ilya Chukhnakov! You're now cores and on the 
hook!

On Mon, Jan 16, 2017 at 3:10 AM, Vikas Choudhary 
> wrote:
+1 for both.

On Sun, Jan 15, 2017 at 12:42 PM, Gal Sagie 
> wrote:
+1 for both.

On Sun, Jan 15, 2017 at 9:05 AM, Irena Berezovsky 
> wrote:


On Fri, Jan 13, 2017 at 6:49 PM, Antoni Segura Puimedon 
> wrote:
Hi fellow kuryrs!
We are getting close to the end of the Ocata cycle and it is time to look back 
and appreciate the good work all the contributors did. I would like to thank you 
all for the continued dedication and participation in gerrit, the weekly 
meetings, answering queries on IRC, etc.
I also want to propose two people that I think will help us a lot as core 
contributors in the next cycles.
For Kuryr-lib and kuryr-libnetwork I want to propose Liping Mao. Liping has 
been contributing a lot since Mitaka, not just in code but also in reviews, 
catching important details and fixing bugs. It is overdue that he gets to help 
us even more!
+1
For Kuryr-kubernetes I want to propose Ilya Chukhnakov. Ilya got into Kuryr at 
the end of the Newton cycle and has done a wonderful job in the Kubernetes 
integration contributing heaps of code and being an important part of the 
design discussions and patches. It is also time for him to start approving 
patches :-)
+1

Let's have the votes until next Friday (unless enough votes are cast earlier).
Regards,
Toni





--
Best Regards ,

The G.






Re: [openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-17 Thread Amrith Kumar
Clark is right, trove does detect and try to use kvm where possible. The
performance has been well worth the change (IMHO).

-amrith

On Jan 17, 2017 6:53 PM, "Clark Boylan"  wrote:

> On Tue, Jan 17, 2017, at 03:41 PM, Jay Faulkner wrote:
> > Hi all,
> >
> > Back in late October, Vasyl wrote support for devstack to auto detect,
> > and when possible, use kvm to power Ironic gate jobs
> > (0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job
> > time when it works, but has caused failures — how many? It’s hard to
> > quantify as the log messages that show the error don’t appear to be
> > indexed by elastic search. It’s something seen often enough that the
> > issue has become a permanent staple on our gate whiteboard, and doesn’t
> > appear to be decreasing in quantity.
> >
> > I pushed up a patch, https://review.openstack.org/#/c/421581, which
> keeps
> > the auto detection behavior, but defaults devstack to use qemu emulation
> > instead of kvm.
> >
> > I have two questions:
> > 1) Is there any way I’m not aware of we can quantify the number of
> > failures this is causing? The key log message, "KVM: entry failed,
> > hardware error 0x0”, shows up in logs/libvirt/qemu/node-*.txt.gz.
> > 2) Are these failures avoidable or visible in any way?
> >
> > IMO, if we can’t fix these failures, in my opinion, we have to do a
> > change to avoid using nested KVM altogether. Lower reliability for our
> > jobs is not worth a small decrease in job run time.
>
> Part of the problem with nested KVM failures is that in many cases they
> destroy the test nodes in unrecoverable ways. In which case you don't
> get any logs, and zuul will restart the job for you. I think that
> graphite will capture this as a job that resulted in a Null/None status
> though (rather than SUCCESS/FAILURE).
>
> As for collecting info when you do get logs, we don't index the libvirt
> instance logs currently and I am not sure we want to. We already
> struggle to keep up with the existing set of logs when we are busy.
> Instead we might have job cleanup do a quick grep for known nested virt
> problem indicators and then log that to the console log which will be
> indexed.
>
> I think trove has also seen kernel panic type errors in syslog that we
> hypothesized were a result of using nested virt.
>
> The infra team explicitly attempts to force qemu instead of kvm on jobs
> using devstack-gate for these reasons. We know it doesn't work reliably
> and not all clouds support it. Unfortunately my understanding of the
> situation is that base hypervisor cpu and kernel, second level
> hypervisor kernel, and nested guest kernel all come into play here. And
> there can be nasty interactions between them causing a variety of
> problems.
>
> Put another way:
>
> 2017-01-14T00:42:00   if we're talking nested kvm
> 2017-01-14T00:42:04   it's kindof a nightmare
> from
> http://eavesdrop.openstack.org/irclogs/%23openstack-
> infra/%23openstack-infra.2017-01-14.log
>
> Clark
>


Re: [openstack-dev] [MassivelyDistributed] IRC Meeting tomorrow 15:00 UTC

2017-01-17 Thread joehuang
Hello,

I read the meeting log and etherpad, and found that you mentioned the OPNFV 
Multisite and Kingbird projects. Some comments on these multi-site related 
projects: OPNFV Multisite, Kingbird, and Tricircle.

Multisite is a requirements project in OPNFV to identify the gaps and 
requirements in OpenStack needed to make OpenStack work for NFV multi-site 
clouds.

Kingbird is a sub-project of Multisite, started after the gap analysis in 
OPNFV. It aims at centralized quota management, a centralized view of 
distributed virtual resources, and synchronization of ssh keys, images, 
flavors, etc. across regions in OpenStack multi-region deployments.
Currently the project is working on key-pair sync; the centralized quota 
management feature was implemented in the OPNFV C release. Kingbird is a 
major topic in the OPNFV Multisite weekly meeting.

Tricircle is an official OpenStack big-tent project, accepted in Nov 2016; 
its scope was narrowed to networking automation across Neutron in OpenStack 
multi-region deployments during the big-tent application.
Tricircle has basic L2/L3 networking features across OpenStack clouds; 
currently local networks and shared-VLAN-based L2/L3 networking are supported, 
and work is ongoing on VxLAN L2 networking across Neutron, so that L2/L3 
networking can also leverage the VxLAN capability. You can refer to (review) 
the networking guide prepared:
https://review.openstack.org/#/c/420316/.

During the discussion that happened in 2015, both Kingbird and Tricircle were 
candidate solutions to address multi-site clouds. Kingbird and Tricircle can 
work together or separately in OpenStack multi-region deployment scenarios; 
they complement each other now. Kingbird has no features for networking 
automation, and Tricircle has no features related to Nova/Cinder...

Tricircle is mostly visible in the OpenStack community, while Kingbird is 
mostly visible in the OPNFV community.

Welcome to join the meetings:
   Tricircle IRC meeting: 
https://webchat.freenode.net/?channels=openstack-meeting every Wednesday 
starting at 13:00 UTC
   Multisite & Kingbird IRC meeting: 
http://webchat.freenode.net/?channels=opnfv-meeting every Thursday 8:00-9:00 
UTC (during winter time, that means 9:00 AM CET).

Best Regards
Chaoyi Huang (joehuang)


From: Anthony SIMONET [anthony.simo...@inria.fr]
Sent: 17 January 2017 22:11
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [MassivelyDistributed] IRC Meeting tomorrow 15:00  
UTC

Hi all,

The agenda is available at:
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
82)
Please feel free to add items to the agenda.

The meeting will take place on #openstack-meeting.

Cheers,
Anthony




Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2017-01-17 Thread Keen, Joe
Tony, I have some observations on the new client, based on a short-term
test and a long-running test.

For short-term use it uses 2x the memory compared to the older client.
The logic that deals with receiving partial messages from Kafka was
completely rewritten in the 1.x series, and with logging enabled I see
continual warnings about truncated messages.  I don't lose any data
because of this, but I haven't been able to verify whether it's doing more
reads than necessary.  I don't know that either of these problems is really a
sticking point for Monasca, but the increase in memory usage is potentially
a problem.

Long-term testing showed some additional problems.  On a Kafka server that
has been running for a couple of weeks I can write data in, but the
kafka-python library is no longer able to read data from Kafka.  Clients
written in other languages are able to read successfully.  Profiling of
the python-kafka client shows that it's spending all its time in a loop
attempting to connect to Kafka:

     ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      27615    0.086    0.000    0.086    0.000 {method 'acquire' of 'thread.lock' objects}
      43152    0.250    0.000    0.385    0.000 types.py:15(_unpack)
      43153    0.135    0.000    0.135    0.000 {_struct.unpack}
48040/47798    0.164    0.000    0.165    0.000 {len}
      60351    0.201    0.000    0.201    0.000 {method 'read' of '_io.BytesIO' objects}
    7389962   23.985    0.000   23.985    0.000 {method 'keys' of 'dict' objects}
        738  104.931    0.000  395.654    0.000 conn.py:560(recv)
        738   58.342    0.000  100.005    0.000 conn.py:722(_requests_timed_out)
        738   97.787    0.000  167.568    0.000 conn.py:588(_recv)
    7390071   46.596    0.000   46.596    0.000 {method 'recv' of '_socket.socket' objects}
    7390145   23.151    0.000   23.151    0.000 conn.py:458(connected)
    7390266   21.417    0.000   21.417    0.000 {method 'tell' of '_io.BytesIO' objects}
    7395664   41.695    0.000   41.695    0.000 {time.time}



I also see additional problems with the use of the deprecated
SimpleConsumer and SimpleProducer clients.  We really do need to
investigate migrating to the new async-only Producer objects while still
maintaining the reliability guarantees that Monasca requires.
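For what it's worth, one way to keep a synchronous send path on top of the
async-only producer could look like this (a hedged sketch, not Monasca code;
`producer` is assumed to have kafka-python's `KafkaProducer.send()` shape,
returning a future):

```python
def send_sync(producer, topic, value, timeout=10.0):
    """Send one message and block until the broker acknowledges it.

    future.get() re-raises the send error (or a timeout), so callers
    keep the synchronous failure signal the old SimpleProducer gave.
    """
    future = producer.send(topic, value)
    return future.get(timeout=timeout)
```

Whether that meets Monasca's throughput needs would still have to be measured.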


On 12/13/16, 10:01 PM, "Tony Breeds"  wrote:

>On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:
>
>> I don’t know, yet, that we can.  Unless we can find an answer to the
>> questions I had above I’m not sure that this new library will be
>> performant and durable enough for the use cases Monasca has.  I’m fairly
>> confident that we can make it work but the performance issues with
>> previous versions prevented us from even trying to integrate so it will
>> take us some time.  If you need an answer more quickly than a week or
>>so,
>> and if anyone in the community is willing, I can walk them through the
>> testing I’d expect to happen to validate the new library.
>
>Any updates Joe?  It's been 10 days and we're running close to Christmas, so
>at this rate it'll be next year before we know if this is workable.
>
>Yours Tony.



Re: [openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-17 Thread Clark Boylan
On Tue, Jan 17, 2017, at 03:41 PM, Jay Faulkner wrote:
> Hi all,
> 
> Back in late October, Vasyl wrote support for devstack to auto detect,
> and when possible, use kvm to power Ironic gate jobs
> (0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job
> time when it works, but has caused failures — how many? It’s hard to
> quantify as the log messages that show the error don’t appear to be
> indexed by elastic search. It’s something seen often enough that the
> issue has become a permanent staple on our gate whiteboard, and doesn’t
> appear to be decreasing in quantity.
> 
> I pushed up a patch, https://review.openstack.org/#/c/421581, which keeps
> the auto detection behavior, but defaults devstack to use qemu emulation
> instead of kvm.
> 
> I have two questions:
> 1) Is there any way I’m not aware of we can quantify the number of
> failures this is causing? The key log message, "KVM: entry failed,
> hardware error 0x0”, shows up in logs/libvirt/qemu/node-*.txt.gz.
> 2) Are these failures avoidable or visible in any way?
> 
> IMO, if we can’t fix these failures, in my opinion, we have to do a
> change to avoid using nested KVM altogether. Lower reliability for our
> jobs is not worth a small decrease in job run time.

Part of the problem with nested KVM failures is that in many cases they
destroy the test nodes in unrecoverable ways. In which case you don't
get any logs, and zuul will restart the job for you. I think that
graphite will capture this as a job that resulted in a Null/None status
though (rather than SUCCESS/FAILURE).

As for collecting info when you do get logs, we don't index the libvirt
instance logs currently and I am not sure we want to. We already
struggle to keep up with the existing set of logs when we are busy.
Instead we might have job cleanup do a quick grep for known nested virt
problem indicators and then log that to the console log which will be
indexed.
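A cleanup-time check along those lines might look like this (a sketch; the
marker string, function name, and log path are assumptions, not infra's
actual implementation):

```shell
# Scan libvirt qemu logs for the known nested-virt failure signature and
# emit a marker line into the console log, which does get indexed.
check_nested_virt() {
    local logdir="$1"
    if grep -rq "KVM: entry failed, hardware error 0x0" "$logdir" 2>/dev/null; then
        echo "NESTED_VIRT_FAILURE: see libvirt qemu logs"
    fi
}

check_nested_virt "${LOGDIR:-/var/log/libvirt/qemu}"
```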

I think trove has also seen kernel panic type errors in syslog that we
hypothesized were a result of using nested virt.

The infra team explicitly attempts to force qemu instead of kvm on jobs
using devstack-gate for these reasons. We know it doesn't work reliably
and not all clouds support it. Unfortunately my understanding of the
situation is that base hypervisor cpu and kernel, second level
hypervisor kernel, and nested guest kernel all come into play here. And
there can be nasty interactions between them causing a variety of
problems.

Put another way:

2017-01-14T00:42:00   if we're talking nested kvm
2017-01-14T00:42:04   it's kindof a nightmare
from
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-01-14.log

Clark



[openstack-dev] [ironic] [infra] Nested KVM + the gate

2017-01-17 Thread Jay Faulkner
Hi all,

Back in late October, Vasyl wrote support for devstack to auto detect, and when 
possible, use kvm to power Ironic gate jobs 
(0036d83b330d98e64d656b156001dd2209ab1903). This has lowered some job time when 
it works, but has caused failures — how many? It’s hard to quantify as the log 
messages that show the error don’t appear to be indexed by elastic search. It’s 
something seen often enough that the issue has become a permanent staple on our 
gate whiteboard, and doesn’t appear to be decreasing in quantity.

I pushed up a patch, https://review.openstack.org/#/c/421581, which keeps the 
auto detection behavior, but defaults devstack to use qemu emulation instead of 
kvm.

I have two questions:
1) Is there any way I’m not aware of we can quantify the number of failures 
this is causing? The key log message, "KVM: entry failed, hardware error 0x0”, 
shows up in logs/libvirt/qemu/node-*.txt.gz.
2) Are these failures avoidable or visible in any way?

IMO, if we can’t fix these failures, we have to make a change to avoid using 
nested KVM altogether. Lower reliability for our jobs is not worth a small 
decrease in job run time.

Thanks,
Jay Faulkner


Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Ben Nemec



On 01/17/2017 09:57 AM, mathieu bultel wrote:

On 01/17/2017 04:42 PM, Emilien Macchi wrote:

On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:

Hi Adriano

On 01/17/2017 03:05 PM, Adriano Petrich wrote:

So I want to make a backwards compatibility job upstream so from last scrum
I got the feeling that we should not be adding more stuff to the
experimental jobs due to lack of resources (and large queues)

What kind of "test" do you want to add ?
I ask because for a few days now we have had an upgrade job upstream that does:
master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
then upgrade the OC to master with the tht master branch.
It sounds like a "small backward compatibility" validation, but I'm not
sure if it covers what you need.

While I understand what is the idea, I don't see the use case.
In which case you want to deploy a old version of overcloud by using a
recent undercloud?
Why don't use deploy a stable undercloud to deploy a stable overcloud?

From my side, the use case is the major OC upgrade. We don't want to
test the major upgrade of the undercloud (since a job already exists),


The problem I see with this is that the undercloud upgrade job does 
extremely basic testing of the upgraded undercloud.  It's just a smoke 
test to make sure the upgrade can be completed and no services went down 
after it.  I would not assume that is sufficient coverage of the upgrade 
case, it's just what we could do quickly(-ish) to get _some_ undercloud 
upgrade coverage in place.



only the overcloud; that's why we start with a "master" undercloud, which
saves us from unwanted/unrelated issues due to the UC upgrade and reduces
the duration of the job.




Is that so? I was thinking about using nonha-multinode-oooq, which seems to be
working.

Is it all right to add this new job, or should I wait until we get more
resources and use ci.centos for now? Any idea on where to do this is also
welcome.


Cheers,
   Adriano

















Re: [openstack-dev] openstacksdk and compute limits for projects

2017-01-17 Thread Brian Curtin
On Tue, Jan 17, 2017 at 4:23 PM, Michael Gale 
wrote:

> Hello,
>
> Does anyone know what the equivalent of the following command would be
> via the API?
> `openstack limits show --absolute --project `
>
> I am using an admin account to pull stats and information from a Mitaka
> environment, now I can run the above command in bash, looping over each
> project that exist. However I would like to get the information using the
> openstacksdk via Python.
>
> I can use:
> `connection.compute.get_limits()`
>  however that only works for the project I logged in with.
>

It could take an id/name, but the REST API isn't documented as taking an id
or name, which is probably why the SDK doesn't currently accept them:
http://developer.openstack.org/api-ref/compute/?expanded=show-rate-and-absolute-limits-detail

How should we implement this?
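One rough sketch of how an admin-scoped, per-project call might look if the
REST layer accepted a project scope (hypothetical: the `tenant_id` query
parameter and a raw `conn.compute.get()` passthrough are assumptions here,
not documented SDK behavior):

```python
def limits_path(project_id):
    """Build the compute /limits path scoped to another project,
    mirroring `openstack limits show --absolute --project <id>`."""
    return "/limits?tenant_id={}".format(project_id)

def project_absolute_limits(conn, project_id):
    # conn: an admin-authenticated openstack.connection.Connection
    # (assumption: the compute proxy exposes a raw .get() passthrough).
    resp = conn.compute.get(limits_path(project_id))
    return resp.json()["limits"]["absolute"]
```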


[openstack-dev] [tripleo] short term roadmap (actions required)

2017-01-17 Thread Emilien Macchi
I'm trying to draw up a list of things that are important to know so we can
successfully deliver the Ocata release; please take some time to read and
comment if needed.

== Triaging Ocata & Pike bugs

As we discussed in our weekly meeting, we decided to:

* move ocata-3 low/medium unassigned bugs to pike-1
* move ocata-3 high/critical unassigned bugs to ocata-rc1
* keep ocata-3 In Progress bugs to ocata-3 until next week and move
them to ocata-rc1 if not fixed on time.

Which means, if you plan to file a new bug:

* low/medium: target it for pike-1
* high/critical: target it for ocata-rc1

We still have 66 bugs In Progress for ocata-3. The top priority for
this week is to make progress on those bugs and close them on time for
the Ocata final release.


== Releasing tripleoclient next week

If you're working on tripleoclient, you might want to help in fixing
the bugs still targeted for Ocata:
https://goo.gl/R2hO4Z
We'll release python-tripleoclient final ocata by next week.


== Freezing features next week

If you're working on a feature in TripleO which is part of a blueprint
targeted for ocata-3, keep in mind you have until next week to get it
merged.
After January 27th, we will block (with a -2 in Gerrit) any patch that
adds a feature to master until we release Ocata and branch
stable/ocata.
Some exceptions can be made, but they have to be requested on
openstack-dev, and the team + PTL will decide whether or not we accept
them.
If your blueprint is not High or Critical, there is little chance we will
accept it.


== Preparing Pike together

In case you missed it, we're preparing Pike sessions for next PTG:
https://etherpad.openstack.org/p/tripleo-ptg-pike
Feel free to propose a session and announce/discuss it on the
openstack-dev mailing-list.


== CI freeze

From January 27th until the final Ocata release, we will freeze any change
in our CI, except critical fixes, but they need to be reported in
Launchpad and the team + PTL need to know (ML openstack-dev).


If there is any question or feedback, please don't hesitate to use this thread.

Thanks and let's make Ocata our best release ever ;-)
-- 
Emilien Macchi



[openstack-dev] openstacksdk and compute limits for projects

2017-01-17 Thread Michael Gale
Hello,

Does anyone know what the equivalent of the following command would be
via the API?
`openstack limits show --absolute --project `

I am using an admin account to pull stats and information from a Mitaka
environment; now I can run the above command in bash, looping over each
project that exists. However, I would like to get the information using the
openstacksdk via Python.

I can use:
`connection.compute.get_limits()`
 however that only works for the project I logged in with.
Michael


Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Ian Cordasco
-Original Message-
From: Doug Hellmann 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 17, 2017 at 10:53:06
To: openstack-dev 
Subject:  Re: [openstack-dev] [security] FIPS compliance

> Excerpts from Ian Cordasco's message of 2017-01-17 05:59:13 -0600:
> > On Tue, Jan 17, 2017 at 4:11 AM, Yolanda Robla Mota wrote:
> > > Hi, in previous threads, there have been discussions about enabling FIPS,
> > > and the problems we are hitting with md5 inside OpenStack:
> > > http://lists.openstack.org/pipermail/openstack-dev/2016-November/107035.html
> > >
> > > It is important from a security perspective to enable FIPS, however
> > > OpenStack cannot boot with that, because of the existence of md5 calls in
> > > several projects. These calls are not used for security, just for hash
> > > generation, but even with that, FIPS is blocking them.
> > >
> > > There is a patch proposed for newest versions of python, to avoid that
> > > problem. The idea is that when a hash method is called, users could 
> > > specify
> > > if these are used for security or not. If the useforsecurity flag is set 
> > > to
> > > False, FIPS won't block the call. See: http://bugs.python.org/issue9216
> >
> > This patch looks to have died off in 2013 prior to Robert's comment from 
> > today.
> >
> > > This won't land until next versions of Python, however the patch is 
> > > already
> > > in place for current RHEL and CentOS versions that are used in OpenStack
> > > deploys. Using that patch as a base, I have a proposal to allow FIPS
> > > enabling, at least in the distros that support it.
> > >
> > > The idea is to create a wrapper around md5, something like:
> > > md5_wrapper('string_to_hash', useforsecurity=False)
> >
> > We should probably work harder on actually landing the patch in Python
> > first. I agree with Robert that the optional boolean parameter is
> > awkward. It'd be better to have a fips submodule.
>
> Please see my comment on that patch about why that approach doesn't
> solve the problem.

I think you're right, but I also still think that Robert's right. =)
I've commented accordingly.

> > > This method will check the signature of hashlib.md5, and see if that's
> > > offering the useforsecurity parameter. If that's offered, it will pass the
> > > given parameter from the wrapper. If not, we will just call
> > > md5('string_to_hash') .
> > >
> > > This gives us the possibility to whitelist all the md5 calls, and enabling
> > > FIPS kernel booting without problems. It will start to work for distros
> > > supporting it, and it will be ready to use generally when the patch lands 
> > > in
> > > python upstream and another distros adopt it. At some point, when all
> > > projects are using newest python versions, this wrapper could disappear 
> > > and
> > > use md5 useforsecurity parameter natively.
> >
> > I'd much rather have the upstream interface fixed in Python and then
> > to have a wrapper that does things the correct way. Otherwise, we're
> > encouraging other distros to use a patch that still requires a lot of
> > edits to address the review comments and might be defining an API that
> > will never end up in Python.
> >
> > > The steps needed to achieve it are:
> > > - create a wrapper, place it on some existing project or create a new fips
> > > one
> > > - search and replace all md5 calls used in OpenStack core projects , to 
> > > use
> > > that new wrapper. Note that all the md5 calls will be whitelisted by
> > > default. We have not noted any md5 call that is used for security, but if
> > > that exists, it would be better to use other algorithms, in terms of
> > > security.
> > >
> > > What do people think about it?
> >
> > I think people should work on the Python patches *first*. Once they're
> > merged, *then* we should potentially create a wrapper (if it's still
> > necessary at that point) to do this.
> >
>
> The idea is to use the wrapper as a short-term solution to give us the
> time to make that happen. The original patch did lose interest, but even
> if it landed today it wouldn't necessarily be the sort of thing that
> would qualify for a backport, so it might take quite a while to see a
> real release.
>
> As you point out, the final version of the upstream API may be
> different. With a wrapper in place, we ought to be able to modify the
> implementation of the wrapper to accommodate that to ensure backwards
> compatibility, during the deprecation period after the upstream fix is
> implemented.

Sure, wrappers provide us with a lot of flexibility. It would just be
convenient for the wrapper to mimic what folks would expect from that
final API. That's why the original wrapper proposal is mimicking what
Red Hat already ships.
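As a concrete illustration, the feature detection the wrapper needs doesn't have to inspect signatures at all: try the flag, and fall back when the interpreter rejects it. This is only a sketch of the idea being discussed, not the Red Hat patch itself, and the parameter spelling ("useforsecurity" vs. "usedforsecurity") may differ from whatever eventually lands upstream.

```python
import hashlib


def md5_wrapper(data=b"", usedforsecurity=False):
    """md5 that forwards the FIPS flag only on interpreters that accept it.

    The parameter name follows the downstream patch discussed in the
    thread; the eventual upstream spelling may differ.
    """
    try:
        return hashlib.md5(data, usedforsecurity=usedforsecurity)
    except TypeError:
        # Stock interpreter without the flag: plain md5 call.
        return hashlib.md5(data)
```

On a patched interpreter the flag is forwarded (so FIPS mode won't block whitelisted calls); elsewhere the TypeError fallback keeps behavior identical to a bare `hashlib.md5()` call.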

I don't think the wrapper should have the same poor design decisions,
but I do emphatically support the idea of this wrapper and the desired

Re: [openstack-dev] [devstack][keystone] DRaaS for Keystone

2017-01-17 Thread Lance Bragstad
Hi Wasiq!

On Tue, Jan 17, 2017 at 1:34 PM, Wasiq Noor 
wrote:

> Hello,
>
> I am Wasiq from Namal College Mianwali, Pakistan. Following the link:
> https://wiki.openstack.org/wiki/DisasterRecovery, I have developed a
> disaster recovery solution for Keystone for various recovery mechanisms. I
> have the code with me.
>

Do you happen to have bits published anywhere publicly, so that others can
take a look?

Can anybody help with how I can get it into the devstack repository?
>

Are you looking to use devstack to test DR scenarios?


> I have followed some links but found them very confusing.
>

Do you have the links handy? Specific feedback can be useful to improve
project documentation.




Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Ian Cordasco
-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 17, 2017 at 12:31:21
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
projects trying to avoid Barbican, still?

> On 01/17/2017 07:57 AM, Ian Cordasco wrote:
> > On Mon, Jan 16, 2017 at 6:20 PM, Amrith Kumar wrote:
> >> Ian,
> >>
> >> This is a fascinating conversation. Let me offer two observations.
> >>
> >> First, Trove has long debated the ideal solution for storing secrets. There
> >> have been many conversations, and Barbican has been considered many times.
> >> We sought input from people who were deploying and operating Trove at 
> >> scale;
> >> customers of Tesora, self described users of the upstream Trove, and some 
> >> of
> >> the (then) active contributors who were also operators.
> >>
> >> The consensus was that installing and deploying OpenStack was hard enough
> >> and requiring the installation of yet more services was problematic. This 
> >> is
> >> not something which singles out Barbican in any way. For example, Trove 
> >> uses
> >> Swift as the default object store where backups are stored, and in
> >> implementing replication we leveraged the backup capability. This means 
> >> that
> >> to have replication, one needs to have Swift. Several deployers have
> >> objected to this since they don't have swift. But that is a dependency 
> >> which
> >> we considered to be a hard dependency and offer no alternatives; you can
> >> have Ceph if you so desire but we still access it as a swift store.
> >> Similarly we needed some capabilities of job scheduling and opted to use
> >> mistral for this; we didn't reimplement all of these capabilities in Trove.
> >>
> >> However, when it comes to secret storage, the consensus of opinion is:
> >> "Yet another service."
> >
> > So, what spurred this thread is that I'm currently working on Craton
> > which wants to store deployment secrets for operators and I've
> > recently received a lot of private mail about Glare and how one of its
> > goals is to replace Barbican (as well as Glance).
>
> Problem #1: private emails. Why? Encourage whomever is privately
> emailing you to instead post to the mailing list, otherwise parties are
> not acting in the Open[Stack] Way.

That has come up with those people.

> Problem #2: What does Glare have to do with secret storage? I can
> understand someone saying that Glare might eventually replace Glance,
> but I'm not aware of anyone ever building crypto use cases or
> functionality into the design of Glare. Ever.

This is exactly my understanding as well. Glare was meant to be an
artifact service, not a secrets service. I guess a reductionist could
claim all data is an artifact, but that's obviously a flawed argument.

--
Ian Cordasco



[openstack-dev] [release][ptl] final reminder about non-client library releases

2017-01-17 Thread Doug Hellmann
The deadline for non-client library releases is Thursday 19 Jan.
We do not grant Feature Freeze Extensions for any libraries, so
that is a hard freeze date. Any feature work that requires updates
to non-client libraries should be prioritized so it can be completed
by that time.

We have quite a few libraries with unreleased changes. See the report
output in http://paste.openstack.org/show/595268/ for details.

Doug



[openstack-dev] [devstack][keystone] DRaaS for Keystone

2017-01-17 Thread Wasiq Noor
Hello,

I am Wasiq from Namal College Mianwali, Pakistan. Following the link:
https://wiki.openstack.org/wiki/DisasterRecovery, I have developed a
disaster recovery solution for Keystone for various recovery mechanisms. I
have the code with me. Can anybody help with how I can get it into the
devstack repository? I have followed some links but found them very confusing.


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Jay Pipes

On 01/16/2017 07:19 PM, Joshua Harlow wrote:

Fox, Kevin M wrote:

You're right, it is not what the big tent was about, but the big tent
had some unintended side effects. The list, as you stated:

* No longer having a formal incubation and graduation period/review for
applying projects
* Having a single, objective list of requirements and responsibilities
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in the
same "space" (e.g. deployment or metrics)

Turned into (my opinion):

#1: with a formal incubation/graduation period, projects had the
opportunity to get feedback on what they could do to better integrate
with other projects, and were strongly encouraged to act on it to make
progress towards graduation. Without the formality, no one tended to
bother.

#2: on not having a single, objective list of
requirements/responsibilities: I believe the absence of a fixed list
made folks take a hard look at what other projects were doing and try
harder to play nice in order to graduate, or risk the unknown of folks
coming back over and over to tell them more integration was required.

#3: the benefits/drawbacks of specifically allowing competition are
rather hard to predict. It can encourage alternate solutions and
create a place where better ideas can overcome weaker ones. But it
also removes pressure to cooperate on one project rather than take the
sometimes much easier route of just doing it yourself in your own
project.

I'm not blaming the big tent for all the ills of the OpenStack world.
It has had some real benefits. This problem is something bigger than
the big tent; it preexisted the tent. The pressure to share was very
unidirectional pre-big-tent, applied to new projects much more than to
old projects.

But I'm just saying the Big Tent had an (unintended) net negative
effect, making this particular problem worse.

Looking at the why of a problem is one of the important steps in
formulating a solution. OpenStack no longer has the amount of tooling
to help alleviate the issue that it had before the Big Tent. Nothing
has come up since to replace it.

I'm not stating that the big tent should be abolished and we go back
to the way things were. But I also know the status quo is not working
either. How do we fix this? Anyone have any thoughts?


Embrace the larger world instead of trying to recreate parts of it,
create alliances with the CNCF and/or other companies


The CNCF isn't a company...


that are getting actively involved there and make bets that solutions
there are things that people want to use directly (instead of turning
openstack into some kind of 'integration aka, middleware engine').


The complaint about Barbican that I heard from most folks on this thread 
was that it added yet another service to deploy to an OpenStack deployment.


If we use technology from the CNCF or elsewhere, we're still going to 
end up deploying yet another service. Just like if we want to use 
ZooKeeper for group membership instead of the Nova DB.


So, while I applaud the general idea of looking at the CNCF projects as 
solutions to some problems, you wouldn't be solving the actual issue 
brought to attention by operators and OpenStack project contributors (to 
Magnum/Craton): of needing to install yet another dependency.




How many folks have been watching
https://github.com/cncf/toc/tree/master/proposals or
https://github.com/cncf/toc/pulls?


I don't look at that feed, but I do monitor the pull requests for k8s 
and some other projects like rkt and ACI/OCI specs.



Start accepting that what we call OpenStack may be better off
extracting the *current* good parts of OpenStack and cutting off some
of the parts that aren't really worth it, that nobody really uses or
deploys anyway


I'm curious what you think would be left in OpenStack?

BTW, the CNCF is already creating projects that duplicate functionality 
that's been available for years in other open source projects -- see 
prometheus and fluentd [1] -- in the guise of "unifying" things for a 
cloud-native world. I suspect that trend will continue as vendors jump 
from OpenStack to CNCF projects because they perceive it as the new 
shiny thing and capable of accepting their vendor-specific code quicker 
than OpenStack.


In fact, if you look at the CNCF projects, you see the exact same 
disagreement about the exact same two areas that we see so much 
duplication in the OpenStack community: deployment/installation and 
logging/monitoring/metrics. I mean, how many ways are there to deploy 
k8s at this point?


The things that the OpenStack ecosystem has proliferated as services or 
plugins are the exact same things that the CNCF projects are building 
into their architecture. How many service discovery mechanisms can 
Prometheus use? How many source and destination backends can fluentd 
support? And now with certain vendors trying to get more 
hardware-specific 

Re: [openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-01-17 18:48:59 +0100:
> On 01/17/2017 04:55 PM, Doug Hellmann wrote:
> > In this review for the ironic-inspector-client newton release [1], Alan
> > pointed out that the new release was pulled into our master requirements
> > because the constraints bot saw it as a newer release. That doesn't seem
> > like something we want to have happen, as a general case. Should we
> > update the bot to avoid changing constraints for the things we release
> > ourselves? That will let us more carefully manage which updates go into
> > which branches, since the release jobs update the constraints files
> > as part of the release process.
> 
> In theory there is nothing wrong with this, as 1.10 is the latest release 
> indeed. In practice, that means pulling in something with stable/newton 
> requirements into master, which is concerning, I agree.
> 
> However, not updating upper constraints at all seems overly strict too. This 
> will essentially cause people to do it manually. Maybe we should just make 
> sure 
> that for our projects we only take releases from an appropriate branch?

Well, the release process submits those patches automatically (updating
only the constraint setting for the newly released library). So no more
manual work would be needed.

Doug

> 
> >
> > Doug
> >
> > [1] https://review.openstack.org/#/c/398401/
> >


Re: [openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-01-17 18:15:59 +:
> On 2017-01-17 18:48:59 +0100 (+0100), Dmitry Tantsur wrote:
> [...]
> > In theory there is nothing wrong with this, as 1.10 is the latest
> > release indeed. In practice, that means pulling in something with
> > stable/newton requirements into master, which is concerning, I
> > agree.
> [...]
> 
> I don't really see why this is a problem at all. The change in
> question updated master constraints from 1.9.0 (a pre-Newton
> release) to 1.10.0 (a stable Newton release). Did anything
> substantial change in stable/newton between 1.9.0 and 1.10.0 to make
> the newer version unsuitable for use with master branch versions of
> other projects? Newer is newer is newer. If projects need
> integration testing against the master branch (or any particular
> branch) of something, they need to be installing from source and not
> packages. If the package corresponding to this tag from the stable
> branch works with master versions of other projects, then it seems
> like our automation worked as intended. Is there a reason to think
> that our master branches should be using _older_ versions of
> dependencies than our stable branches?

From our CI perspective, it doesn't matter. It looks a bit odd from the
perspective of us telling downstream packagers that the constraints list
is what they should be trying to bundle for compatibility. It's not
terribly weird, but I do see how it can introduce some confusion. Of
course the same case may come up frequently for dependencies of our
libraries.

> Granted, it's unclear to me why a stable branch got a release tagged
> with a version which semver says is more than straight up bug fixes.
> That would seem to fly in the face of stable branch change policy
> (but is orthogonal to the topic of this thread).

In this case there was a change in the dependencies of the library.
IIUC, the change wasn't "real" in the sense that the dependency was
always there, but the new version of the lib more accurately reflected
its dependencies.

Doug



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Jay Pipes

On 01/17/2017 07:57 AM, Ian Cordasco wrote:

On Mon, Jan 16, 2017 at 6:20 PM, Amrith Kumar  wrote:

Ian,

This is a fascinating conversation. Let me offer two observations.

First, Trove has long debated the ideal solution for storing secrets. There
have been many conversations, and Barbican has been considered many times.
We sought input from people who were deploying and operating Trove at scale;
customers of Tesora, self described users of the upstream Trove, and some of
the (then) active contributors who were also operators.

The consensus was that installing and deploying OpenStack was hard enough
and requiring the installation of yet more services was problematic. This is
not something which singles out Barbican in any way. For example, Trove uses
Swift as the default object store where backups are stored, and in
implementing replication we leveraged the backup capability. This means that
to have replication, one needs to have Swift. Several deployers have
objected to this since they don't have swift. But that is a dependency which
we considered to be a hard dependency and offer no alternatives; you can
have Ceph if you so desire but we still access it as a swift store.
Similarly we needed some capabilities of job scheduling and opted to use
mistral for this; we didn't reimplement all of these capabilities in Trove.

However, when it comes to secret storage, the consensus of opinion is:
"Yet another service."


So, what spurred this thread is that I'm currently working on Craton
which wants to store deployment secrets for operators and I've
recently received a lot of private mail about Glare and how one of its
goals is to replace Barbican (as well as Glance).


Problem #1: private emails. Why? Encourage whomever is privately 
emailing you to instead post to the mailing list, otherwise parties are 
not acting in the Open[Stack] Way.


Problem #2: What does Glare have to do with secret storage? I can 
understand someone saying that Glare might eventually replace Glance, 
but I'm not aware of anyone ever building crypto use cases or 
functionality into the design of Glare. Ever.


Best,
-jay



Re: [openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Alec Hothan (ahothan)


From: Jeremy Stanley 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, January 17, 2017 at 10:15 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [release][requirements] disable constraint bot 
updates for our own libraries

On 2017-01-17 18:48:59 +0100 (+0100), Dmitry Tantsur wrote:
[...]
In theory there is nothing wrong with this, as 1.10 is the latest
release indeed. In practice, that means pulling in something with
stable/newton requirements into master, which is concerning, I
agree.
[...]

I don't really see why this is a problem at all. The change in
question updated master constraints from 1.9.0 (a pre-Newton
release) to 1.10.0 (a stable Newton release). Did anything
substantial change in stable/newton between 1.9.0 and 1.10.0 to make
the newer version unsuitable for use with master branch versions of
other projects? Newer is newer is newer. If projects need
integration testing against the master branch (or any particular
branch) of something, they need to be installing from source and not
packages. If the package corresponding to this tag from the stable
branch works with master versions of other projects, then it seems
like our automation worked as intended. Is there a reason to think
that our master branches should be using _older_ versions of
dependencies than our stable branches?


Granted, it's unclear to me why a stable branch got a release tagged
with a version which semver says is more than straight up bug fixes.
That would seem to fly in the face of stable branch change policy
(but is orthogonal to the topic of this thread).
--
Jeremy Stanley



Re: [openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Jeremy Stanley
On 2017-01-17 18:48:59 +0100 (+0100), Dmitry Tantsur wrote:
[...]
> In theory there is nothing wrong with this, as 1.10 is the latest
> release indeed. In practice, that means pulling in something with
> stable/newton requirements into master, which is concerning, I
> agree.
[...]

I don't really see why this is a problem at all. The change in
question updated master constraints from 1.9.0 (a pre-Newton
release) to 1.10.0 (a stable Newton release). Did anything
substantial change in stable/newton between 1.9.0 and 1.10.0 to make
the newer version unsuitable for use with master branch versions of
other projects? Newer is newer is newer. If projects need
integration testing against the master branch (or any particular
branch) of something, they need to be installing from source and not
packages. If the package corresponding to this tag from the stable
branch works with master versions of other projects, then it seems
like our automation worked as intended. Is there a reason to think
that our master branches should be using _older_ versions of
dependencies than our stable branches?

Granted, it's unclear to me why a stable branch got a release tagged
with a version which semver says is more than straight up bug fixes.
That would seem to fly in the face of stable branch change policy
(but is orthogonal to the topic of this thread).
-- 
Jeremy Stanley



Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Julien Danjou
On Tue, Jan 17 2017, Jeremy Stanley wrote:

> Others have already answered most of your questions in this thread,
> but since nobody from the VMT has chimed in yet I'll just state on
> our behalf that we're generally happy to consult privately or
> publicly on any suspected vulnerability report within the OpenStack
> ecosystem (and sometimes beyond). If you subscribe
> openstack-vuln-mgmt (OpenStack Vulnerability Management team) on
> Launchpad to the private bug in question we'll get notified
> automatically and take a look. For deliverables with the
> vulnerability:managed governance tag this happens automatically and
> we prioritize our time toward those, but we're available to help on
> others as well on a best-effort basis and time permitting.
>
> The VMT's process document exists primarily for the purposes of
> transparency, and outlines the steps we follow and templates we use
> when triaging suspected vulnerabilities for OpenStack deliverables
> with the vulnerability:managed governance tag. It's also usable in
> great part by other deliverables, and though the VMT doesn't
> officially take responsibility for those we're still usually able to
> help take you through the process and answer questions. If you need
> to reach us through a secure channel, E-mail addresses and
> corresponding OpenPGP keys are published at
> https://security.openstack.org/#how-to-report-security-issues-to-openstack
> for anyone who needs them.

Amazing feedback, thanks Jeremy.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Dmitry Tantsur

On 01/17/2017 04:55 PM, Doug Hellmann wrote:

In this review for the ironic-inspector-client newton release [1], Alan
pointed out that the new release was pulled into our master requirements
because the constraints bot saw it as a newer release. That doesn't seem
like something we want to have happen, as a general case. Should we
update the bot to avoid changing constraints for the things we release
ourselves? That will let us more carefully manage which updates go into
which branches, since the release jobs update the constraints files
as part of the release process.


In theory there is nothing wrong with this, as 1.10 is the latest release 
indeed. In practice, that means pulling in something with stable/newton 
requirements into master, which is concerning, I agree.


However, not updating upper constraints at all seems overly strict too. This 
will essentially cause people to do it manually. Maybe we should just make sure 
that for our projects we only take releases from an appropriate branch?




Doug

[1] https://review.openstack.org/#/c/398401/



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Lance Bragstad
I would consider that to be something that spans further than just barbican
and keystone. The ability to restrict a token to a single
service/operation/resource is a super interesting problem especially when
you start to consider operational dependencies between the services. If the
approach spans multiple service (which in this case I think it would need
to since it seems closely related to policy) communication gaps will only
make achieving it harder. I think Sean nailed it with his comment about
championing an effort across projects and closing communication gaps. We
are currently doing this on a smaller scale with the horizon team to smooth
out issues between horizon and keystone based on a set of things discussed
in Barcelona [0]. It's seems to be proving successful for both teams.

I'd love to set aside some time to get a discussion rolling in Atlanta
about this.


[0] http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting

On Tue, Jan 17, 2017 at 10:55 AM, Fox, Kevin M  wrote:

> Is this a Barbican problem or a Keystone one? The inability to restrict a
> token to a single service, so that any hacked service can be used to get
> tokens that can be used on any other service, seems to me to be a more
> general Keystone architectural problem to solve?
>
> Thanks,
> Kevin
> --
> *From:* Duncan Thomas [duncan.tho...@gmail.com]
> *Sent:* Tuesday, January 17, 2017 6:04 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [all] [barbican] [security] Why are
> projects trying to avoid Barbican, still?
>
>
>
> On 17 January 2017 at 13:41, Dave McCowan (dmccowan) 
> wrote:
>
>>
>> I don't know everything that was proposed in the Juno timeframe, or
>> before, but the Nova and Cinder integration has been done now.  The
>> documentation is at [1].  A cinder user can create an encryption key
>> through Barbican when creating a volume, then the same user (or a user with
>> permissions granted by that user), as a nova user, can retrieve that key
>> when mounting the encrypted volume.
>>
>
> Sure, cinder can add a secret and nova can retrieve it. But glance can
> *also* retrieve it. So can trove. And any other service that gets a normal
> keystone token from the user (i.e. just about all of them). This is, for
> some threat models, far worse than the secret being nice and safe in the
> cinder DB and only ever given out to nova via a trusted API path. The
> original design vision I saw for barbican was intended to have much better
> controls than this, but they never showed up AFAIK. And that's just the
> problem - people think 'Oh, barbican is storing the cinder volume secrets,
> great, we're secure' when actually barbican has made the security situation
> worse not better. It's a pretty terrible secrets-as-a-service product at
> the moment. Fixing it is not trivial.
>
> --
> Duncan Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Dave McCowan (dmccowan)
On 1/17/17, 5:37 AM, "Thierry Carrez"  wrote:

>I think the focus question is an illusion, as Ed brilliantly explained
>in https://blog.leafe.com/openstack-focus/
>
>The issue here is that it's just a lot more profitable career-wise and a
>lot less risky to work on first-level user-visible features like Machine
>Learning as a service, than it is to work on infrastructural services
>like Glance, Keystone or Barbican. Developers naturally prefer to go to
>shiny objects than to boring technology. As long as their corporate
>sponsors are happy with them ignoring critical services, that will
>continue. Saying that some of those things are not part of our
>community, while they are developed by our community, is sticking our
>heads in the sand.

This trend identified by Ed and Thierry is evident in the group of
Barbican contributors.  Many of our previously active contributors have
moved on to other projects.  There are some quality ideas in this thread.
I hope I'm just stating the obvious here: there are no Barbican
contributors waiting in the wings with extra cycles to develop them.

If a Vault plugin or cross-project fine-grained access controls are
important to you or your company, please help us out.  I promise the
community is open to new ideas, new developers, and new reviewers.




[openstack-dev] Attempting to proxy websockets through Apache or HAProxy for Zaqar

2017-01-17 Thread Dan Trainor
Hi -

In an attempt to work on [0], I've been playing around with proxying all
the service API endpoints that the UI needs to communicate with, through
either haproxy or Apache to avoid a bug[1] around how non-Chrome browsers
handle SSL connections to different ports on the same domain.

The blueprint suggests using haproxy for this, but we're currently using
the "old" notation of listen/server, not frontend/backend.  The distinction
is important because the ACLs that would allow any kind of proxying to
facilitate this are only available in the latter notation.  In order to do
this in haproxy, tripleo::haproxy would need a rewrite (looks pretty
trivial, but likely out of scope for this).  So I'd really like to isolate
this to UI, which is convenient since UI runs largely self-contained inside
Apache.
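For illustration, the frontend/backend notation that the ACLs require might look something like the following sketch (the names, addresses, and ports are hypothetical, not taken from tripleo::haproxy):

```haproxy
# Hypothetical sketch only: frontend/backend notation with an ACL that
# routes /zaqar websocket traffic to its own backend. The listen/server
# notation currently generated by tripleo::haproxy cannot express this.
frontend ui_frontend
    bind *:3000
    # match requests destined for the Zaqar websocket endpoint
    acl is_zaqar path_beg /zaqar
    use_backend zaqar_ws if is_zaqar
    default_backend ui_http

backend zaqar_ws
    server zaqar 192.0.2.1:9000

backend ui_http
    server ui 127.0.0.1:80
```

This is the kind of conditional routing that would require the tripleo::haproxy rewrite mentioned above.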

I've made some good progress with almost all of the services, since they were
pretty straightforward - mod_proxy handles them just fine.  The one I'm
not able to make work right now is the websocket service that UI uses.
Ultimately, I see the Websocket connection get upgraded and the Websocket
opens, but stays open indefinitely and will never see more than 0 bytes.
No data is transferred from the browser over the Websocket.  This
connection hangs indefinitely, and UI does not complete any operations that
depend on the Zaqar Websocket.

Observing trace6[4] output, I can see mod_proxy_wstunnel (which relies on
mod_proxy) make the connection, I can see Zaqar recognize the request in
logs, but the client (UI) doesn't send or receive any data from it.  It's as if
immediately after the Upgrade[2], the persistent Websocket connection just
dies.

I've had limited success using a couple different implementations of this
in Apache.  ProxyPass/ProxyPassReverse looks as if it should work (so long
as mod_proxy_wstunnel is loaded), but this is not my experience.  Using a
mod_rewrite rule[3] to force the specific Websocket proxy for a specific
URI (/zaqar) has the same outcome.

In its most simple form, the ProxyPass rule I'm attempting to use is:

  ProxyPass        "/zaqar"  "ws://192.0.2.1:9000/"
  ProxyPassReverse "/zaqar"  "ws://192.0.2.1:9000/"

Note that I've used several variations of both ProxyPass configurations and
mod_rewrite rules using the [P] flag which all seem to net the same
result.  I've also tried writing the same functional equivalent in haproxy
using a frontend/backend notation to confirm if this was a protocol thing
or a software thing (if haproxy could do this, but Apache could not).
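One mod_rewrite variation of the kind described above is a conditional proxy that only tunnels actual Upgrade requests (a hypothetical sketch; the backend address mirrors the ProxyPass example):

```apache
# Hypothetical sketch: only requests carrying a websocket Upgrade header
# on /zaqar are proxied through mod_proxy_wstunnel; everything else is
# left to the ordinary HTTP handlers.
RewriteEngine On
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule ^/zaqar/?(.*) ws://192.0.2.1:9000/$1 [P,L]
ProxyPassReverse /zaqar ws://192.0.2.1:9000/
```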

From the top, here's some Apache logs (note that trace6 is noisy, I just
grep'd for ws, wss, and the websocket port (9000); full logs of this
request are [4]):

[Tue Jan 17 12:08:16.639170 2017] [proxy_wstunnel:debug] [pid 32128]
mod_proxy_wstunnel.c(253): [client 192.0.2.1:51508] AH02445: woke from
poll(), i=1
[Tue Jan 17 12:08:16.639220 2017] [proxy_wstunnel:debug] [pid 32128]
mod_proxy_wstunnel.c(278): [client 192.0.2.1:51508] AH02448: client was
readable
[Tue Jan 17 12:08:16.639265 2017] [core:trace6] [pid 32128]
core_filters.c(525): [remote 192.0.2.1:9000] core_output_filter: flushing
because of FLUSH bucket
[Tue Jan 17 12:08:16.639337 2017] [proxy_wstunnel:trace2] [pid 32128]
mod_proxy_wstunnel.c(295): [client 192.0.2.1:51508] finished with poll() -
cleaning up
[Tue Jan 17 12:08:16.640023 2017] [proxy:debug] [pid 32128]
proxy_util.c(2218): AH00943: WS: has released connection for (192.0.2.1)
[Tue Jan 17 12:08:19.238044 2017] [core:trace5] [pid 32128]
protocol.c(618): [client 192.0.2.1:51996] Request received from client: GET
/zaqar HTTP/1.1
[Tue Jan 17 12:08:19.238191 2017] [core:trace3] [pid 32128] request.c(293):
[client 192.0.2.1:51996] request authorized without authentication by
access_checker_ex hook: /zaqar
[Tue Jan 17 12:08:19.238202 2017] [proxy_wstunnel:trace1] [pid 32128]
mod_proxy_wstunnel.c(51): [client 192.0.2.1:51996] canonicalising URL //
192.0.2.1:9000/
[Tue Jan 17 12:08:19.238223 2017] [proxy:trace2] [pid 32128]
proxy_util.c(1985): [client 192.0.2.1:51996] ws: found worker ws://
192.0.2.1:9000/ for ws://192.0.2.1:9000/
[Tue Jan 17 12:08:19.238227 2017] [proxy:debug] [pid 32128]
mod_proxy.c(1117): [client 192.0.2.1:51996] AH01143: Running scheme ws
handler (attempt 0)
[Tue Jan 17 12:08:19.238231 2017] [proxy_http:debug] [pid 32128]
mod_proxy_http.c(1925): [client 192.0.2.1:51996] AH01113: HTTP: declining
URL ws://192.0.2.1:9000/
[Tue Jan 17 12:08:19.238236 2017] [proxy_wstunnel:debug] [pid 32128]
mod_proxy_wstunnel.c(333): [client 192.0.2.1:51996] AH02451: serving URL
ws://192.0.2.1:9000/
[Tue Jan 17 12:08:19.238239 2017] [proxy:debug] [pid 32128]
proxy_util.c(2203): AH00942: WS: has acquired connection for (192.0.2.1)
[Tue Jan 17 12:08:19.238244 2017] [proxy:debug] [pid 32128]
proxy_util.c(2256): [client 192.0.2.1:51996] AH00944: connecting ws://
192.0.2.1:9000/ to 192.0.2.1:9000
[Tue Jan 17 12:08:19.238249 2017] [proxy:debug] [pid 32128]
proxy_util.c(2422): [client 

Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Jeremy Stanley
On 2017-01-17 13:26:02 +0100 (+0100), Julien Danjou wrote:
> I've asked on #openstack-security without success, so let me try here
> instead:
> 
> We, Telemetry, have a security bug and we're not managed by VMT, any
> hint as how to handle our bug? Or how to get covered by VMT? 

Others have already answered most of your questions in this thread,
but since nobody from the VMT has chimed in yet I'll just state on
our behalf that we're generally happy to consult privately or
publicly on any suspected vulnerability report within the OpenStack
ecosystem (and sometimes beyond). If you subscribe
openstack-vuln-mgmt (OpenStack Vulnerability Management team) on
Launchpad to the private bug in question we'll get notified
automatically and take a look. For deliverables with the
vulnerability:managed governance tag this happens automatically and
we prioritize our time toward those, but we're available to help on
others as well on a best-effort basis and time permitting.

The VMT's process document exists primarily for the purposes of
transparency, and outlines the steps we follow and templates we use
when triaging suspected vulnerabilities for OpenStack deliverables
with the vulnerability:managed governance tag. It's also usable in
great part by other deliverables, and though the VMT doesn't
officially take responsibility for those we're still usually able to
help take you through the process and answer questions. If you need
to reach us through a secure channel, E-mail addresses and
corresponding OpenPGP keys are published at
https://security.openstack.org/#how-to-report-security-issues-to-openstack
for anyone who needs them.
-- 
Jeremy Stanley




Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Yolanda Robla Mota
I completely agree that this should be upstream first, so the main effort
will be on landing this Python patch. It has been up since 2010, so more
effort in terms of code contribution and reviews is needed; I'm happy to
collaborate on amending the patch to address the review comments.

But the general idea is still there, and that's why a wrapper can make
sense. Even if the final patch has a different signature, or a different
functionality, the idea is the same: don't block md5 if that's not used for
security.

Even if the Python patch lands, it would be in 3.7, and adoption of that
version could take a long time in OpenStack. Booting with a FIPS-enabled
kernel is an important security feature that we should cover; if we just
wait for the patch to land, it could take a long time. The wrapper can be
the short-term solution, as Doug says, allowing us to enable this
important feature.
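As a rough illustration of the proposal (not the final API - the flag name and call signature are assumptions pending the upstream patch), the wrapper could fall back gracefully on interpreters whose hashlib does not accept the flag. The thread describes checking the signature of hashlib.md5; a try/except achieves the same effect without inspect:

```python
import hashlib


def md5_wrapper(data=b"", useforsecurity=False):
    """Sketch of the proposed wrapper (flag name is an assumption).

    On a patched interpreter (e.g. RHEL/CentOS builds), the flag is
    passed through so a FIPS-enabled kernel allows non-security uses of
    md5; on a stock interpreter the TypeError is caught and plain md5
    is used.
    """
    try:
        return hashlib.md5(data, useforsecurity=useforsecurity)
    except TypeError:
        # stock hashlib.md5 does not know the keyword argument
        return hashlib.md5(data)


digest = md5_wrapper(b"string_to_hash").hexdigest()
```

Callers would then replace whitelisted `hashlib.md5(...)` calls with `md5_wrapper(...)`, and the wrapper could be dropped once the upstream parameter is available everywhere.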

On Tue, Jan 17, 2017 at 5:51 PM, Doug Hellmann 
wrote:

> Excerpts from Ian Cordasco's message of 2017-01-17 05:59:13 -0600:
> > On Tue, Jan 17, 2017 at 4:11 AM, Yolanda Robla Mota 
> wrote:
> > > Hi, in previous threads, there have been discussions about enabling
> FIPS,
> > > and the problems we are hitting with md5 inside OpenStack:
> > > http://lists.openstack.org/pipermail/openstack-dev/2016-
> November/107035.html
> > >
> > > It is important from a security perspective to enable FIPS, however
> > > OpenStack cannot boot with that, because of the existence of md5 calls
> in
> > > several projects. These calls are not used for security, just for hash
> > > generation, but even with that, FIPS is blocking them.
> > >
> > > There is a patch proposed for newest versions of python, to avoid that
> > > problem. The idea is that when a hash method is called, users could
> specify
> > > if these are used for security or not. If the useforsecurity flag is
> set to
> > > False, FIPS won't block the call. See: http://bugs.python.org/
> issue9216
> >
> > This patch looks to have died off in 2013 prior to Robert's comment from
> today.
> >
> > > This won't land until next versions of Python, however the patch is
> already
> > > in place for current RHEL and CentOS versions that are used in
> OpenStack
> > > deploys. Using that patch as a base, I have a proposal to allow FIPS
> > > enabling, at least in the distros that support it.
> > >
> > > The idea is to create a wrapper around md5, something like:
> > > md5_wrapper('string_to_hash', useforsecurity=False)
> >
> > We should probably work harder on actually landing the patch in Python
> > first. I agree with Robert that the optional boolean parameter is
> > awkward. It'd be better to have a fips submodule.
>
> Please see my comment on that patch about why that approach doesn't
> solve the problem.
>
> > > This method will check the signature of hashlib.md5, and see if that's
> > > offering the useforsecurity parameter. If that's offered, it will pass
> the
> > > given parameter from the wrapper. If not, we will just call
> > > md5('string_to_hash') .
> > >
> > > This gives us the possibility to whitelist all the md5 calls, and
> enabling
> > > FIPS kernel booting without problems. It will start to work for distros
> > > supporting it, and it will be ready to use generally when the patch
> lands in
> > > python upstream and other distros adopt it. At some point, when all
> > > projects are using newest python versions, this wrapper could
> disappear and
> > > use md5 useforsecurity parameter natively.
> >
> > I'd much rather have the upstream interface fixed in Python and then
> > to have a wrapper that does things the correct way. Otherwise, we're
> > encouraging other distros to use a patch that still requires a lot of
> > edits to address the review comments and might be defining an API that
> > will never end up in Python.
> >
> > > The steps needed to achieve it are:
> > > - create a wrapper, place it on some existing project or create a new
> fips
> > > one
> > > - search and replace all md5 calls used in OpenStack core projects,
> to use
> > > that new wrapper. Note that all the md5 calls will be whitelisted by
> > > default. We have not noted any md5 call that is used for security, but
> if
> > > that exists, it would be better to use other algorithms, in terms of
> > > security.
> > >
> > > What do people think about it?
> >
> > I think people should work on the Python patches *first*. Once they're
> > merged, *then* we should potentially create a wrapper (if it's still
> > necessary at that point) to do this.
> >
>
> The idea is to use the wrapper as a short-term solution to give us the
> time to make that happen. The original patch did lose momentum, but even
> if it landed today it wouldn't necessarily be the sort of thing that
> would qualify for a backport, so it might take quite a while to see a
> real release.
>
> As you point out, the final version of the upstream API may be
> different. With a wrapper in place, we ought to be able to modify the
> implementation of the wrapper to accommodate that to ensure backwards
> compatibility, during the deprecation period after the upstream fix is
> implemented.

Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-01-17 11:50:39 -0500:
> On 01/17/2017 11:46 AM, Victor Stinner wrote:
> > On 17/01/2017 at 17:36, Sean Dague wrote:
> >> When putting the cli interface on it, I discovered python3's argparse
> >> has subparsers built in. This makes building up the cli much easier, and
> >> removes pulling in a dependency for that. (Currently the only item in
> >> requirements.txt is pbr). This is useful both for ease of installation
> >> and for overall runtime.
> > 
> > Do you mean the argparse module of the Python standard library? It is
> > available on Python 2.7. Subparsers are also supported on Python 2.7, no?
> > https://docs.python.org/2/library/argparse.html#sub-commands
> > 
> > If you need a more recent version of argparse on Python 2.7, you might try:
> > https://pypi.python.org/pypi/argparse
> > 
> > But I'm not sure that this third-party module is used on Python 2.7,
> > since import checks the stdlib before checking site-packages.
> 
> Hmm... I don't know how I missed that in the docs. I guess I was going
> code blind last night. I guess it should be easy to make it all work. I
> did specifically want to avoid installing pypi argparse.
> 
> I'll probably still default this to python3, it is the future direction
> we are headed.

+1

Doug



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Fox, Kevin M
Is this a Barbican problem or a Keystone one? The inability to restrict a token 
to go only to one service but instead any hacked service can be used to get 
tokens that can be used on any other service seems to me to be a more 
general Keystone architectural problem to solve?

Thanks,
Kevin

From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: Tuesday, January 17, 2017 6:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
trying to avoid Barbican, still?



On 17 January 2017 at 13:41, Dave McCowan (dmccowan) 
> wrote:

I don't know everything that was proposed in the Juno timeframe, or before, but 
the Nova and Cinder integration has been done now.  The documentation is at 
[1].  A cinder user can create an encryption key through Barbican when creating 
a volume, then the same user (or a user with permissions granted by that user), 
as a nova user, can retrieve that key when mounting the encrypted volume.

Sure, cinder can add a secret and nova can retrieve it. But glance can *also* 
retrieve it. So can trove. And any other service that gets a normal keystone 
token from the user (i.e. just about all of them). This is, for some threat 
models, far worse than the secret being nice and safe in the cinder DB and only 
ever given out to nova via a trusted API path. The original design vision I saw 
for barbican was intended to have much better controls than this, but they 
never showed up AFAIK. And that's just the problem - people think 'Oh, barbican 
is storing the cinder volume secrets, great, we're secure' when actually 
barbican has made the security situation worse not better. It's a pretty 
terrible secrets-as-a-service product at the moment. Fixing it is not trivial.

--
Duncan Thomas


Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Doug Hellmann
Excerpts from Ian Cordasco's message of 2017-01-17 05:59:13 -0600:
> On Tue, Jan 17, 2017 at 4:11 AM, Yolanda Robla Mota  
> wrote:
> > Hi, in previous threads, there have been discussions about enabling FIPS,
> > and the problems we are hitting with md5 inside OpenStack:
> > http://lists.openstack.org/pipermail/openstack-dev/2016-November/107035.html
> >
> > It is important from a security perspective to enable FIPS, however
> > OpenStack cannot boot with that, because of the existence of md5 calls in
> > several projects. These calls are not used for security, just for hash
> > generation, but even with that, FIPS is blocking them.
> >
> > There is a patch proposed for newest versions of python, to avoid that
> > problem. The idea is that when a hash method is called, users could specify
> > if these are used for security or not. If the useforsecurity flag is set to
> > False, FIPS won't block the call. See: http://bugs.python.org/issue9216
> 
> This patch looks to have died off in 2013 prior to Robert's comment from 
> today.
> 
> > This won't land until next versions of Python, however the patch is already
> > in place for current RHEL and CentOS versions that are used in OpenStack
> > deploys. Using that patch as a base, I have a proposal to allow FIPS
> > enabling, at least in the distros that support it.
> >
> > The idea is to create a wrapper around md5, something like:
> > md5_wrapper('string_to_hash', useforsecurity=False)
> 
> We should probably work harder on actually landing the patch in Python
> first. I agree with Robert that the optional boolean parameter is
> awkward. It'd be better to have a fips submodule.

Please see my comment on that patch about why that approach doesn't
solve the problem.

> > This method will check the signature of hashlib.md5, and see if that's
> > offering the useforsecurity parameter. If that's offered, it will pass the
> > given parameter from the wrapper. If not, we will just call
> > md5('string_to_hash') .
> >
> > This gives us the possibility to whitelist all the md5 calls, and enabling
> > FIPS kernel booting without problems. It will start to work for distros
> > supporting it, and it will be ready to use generally when the patch lands in
> > python upstream and other distros adopt it. At some point, when all
> > projects are using newest python versions, this wrapper could disappear and
> > use md5 useforsecurity parameter natively.
> 
> I'd much rather have the upstream interface fixed in Python and then
> to have a wrapper that does things the correct way. Otherwise, we're
> encouraging other distros to use a patch that still requires a lot of
> edits to address the review comments and might be defining an API that
> will never end up in Python.
> 
> > The steps needed to achieve it are:
> > - create a wrapper, place it on some existing project or create a new fips
> > one
> > - search and replace all md5 calls used in OpenStack core projects, to use
> > that new wrapper. Note that all the md5 calls will be whitelisted by
> > default. We have not noted any md5 call that is used for security, but if
> > that exists, it would be better to use other algorithms, in terms of
> > security.
> >
> > What do people think about it?
> 
> I think people should work on the Python patches *first*. Once they're
> merged, *then* we should potentially create a wrapper (if it's still
> necessary at that point) to do this.
> 

The idea is to use the wrapper as a short-term solution to give us the
time to make that happen. The original patch did lose momentum, but even
if it landed today it wouldn't necessarily be the sort of thing that
would qualify for a backport, so it might take quite a while to see a
real release.

As you point out, the final version of the upstream API may be
different. With a wrapper in place, we ought to be able to modify the
implementation of the wrapper to accommodate that to ensure backwards
compatibility, during the deprecation period after the upstream fix is
implemented.

Doug



Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Sean Dague
On 01/17/2017 11:46 AM, Victor Stinner wrote:
> On 17/01/2017 at 17:36, Sean Dague wrote:
>> When putting the cli interface on it, I discovered python3's argparse
>> has subparsers built in. This makes building up the cli much easier, and
>> removes pulling in a dependency for that. (Currently the only item in
> >> requirements.txt is pbr). This is useful both for ease of installation
> >> and for overall runtime.
> 
> Do you mean the argparse module of the Python standard library? It is
> available on Python 2.7. Subparsers are also supported on Python 2.7, no?
> https://docs.python.org/2/library/argparse.html#sub-commands
> 
> If you need a more recent version of argparse on Python 2.7, you might try:
> https://pypi.python.org/pypi/argparse
> 
> But I'm not sure that this third-party module is used on Python 2.7,
> since import checks the stdlib before checking site-packages.

Hmm... I don't know how I missed that in the docs. I guess I was going
code blind last night. I guess it should be easy to make it all work. I
did specifically want to avoid installing pypi argparse.

I'll probably still default this to python3, it is the future direction
we are headed.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Victor Stinner

On 17/01/2017 at 17:36, Sean Dague wrote:

When putting the cli interface on it, I discovered python3's argparse
has subparsers built in. This makes building up the cli much easier, and
removes pulling in a dependency for that. (Currently the only item in
requirements.txt is pbr). This is useful both for ease of installation
and for overall runtime.


Do you mean the argparse module of the Python standard library? It is 
available on Python 2.7. Subparsers are also supported on Python 2.7, no?

https://docs.python.org/2/library/argparse.html#sub-commands

If you need a more recent version of argparse on Python 2.7, you might try:
https://pypi.python.org/pypi/argparse

But I'm not sure that this third-party module is used on Python 2.7, 
since import checks the stdlib before checking site-packages.


Victor



Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Jeremy Stanley
On 2017-01-17 05:59:13 -0600 (-0600), Ian Cordasco wrote:
[...]
> I think people should work on the Python patches *first*. Once they're
> merged, *then* we should potentially create a wrapper (if it's still
> necessary at that point) to do this.

Yes, I encourage everyone to think back to the frequent wailing and
gnashing of teeth we encounter from downstream consumers of our
software who develop their own workarounds to problems and _then_
try to get the upstream bits merged only to discover we (generally
for good reason) reject their solutions and suggest that things
should be done in completely different ways.

Upstream first.
-- 
Jeremy Stanley



[openstack-dev] [devstack] issues with requiring python3 only tool?

2017-01-17 Thread Sean Dague
In attempting to get local.conf support into devstack-gate and grenade,
some of the more advanced merging scenarios of local.conf fragments have
surpassed anyone's desire and ability to do this in awk. So I started
down the path of moving the ini file and local.conf manipulation code
into a python tool that's currently being prototyped here -
https://github.com/sdague/devstack-tools

When putting the cli interface on it, I discovered python3's argparse
has subparsers built in. This makes building up the cli much easier, and
removes pulling in a dependency for that. (Currently the only item in
requirements.txt is pbr). This is useful both for ease of installation
and for overall runtime.
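As a minimal sketch of what the built-in subparsers give you (the subcommand names below are hypothetical, not devstack-tools' actual CLI):

```python
import argparse

# Build a small CLI with subcommands using only the standard library.
parser = argparse.ArgumentParser(prog="dsconf")
subparsers = parser.add_subparsers(dest="command")

# hypothetical "merge" subcommand for combining local.conf fragments
merge = subparsers.add_parser("merge", help="merge a local.conf fragment")
merge.add_argument("target")
merge.add_argument("fragment")

# hypothetical "extract" subcommand for pulling out an ini section
extract = subparsers.add_parser("extract", help="extract an ini section")
extract.add_argument("path")

args = parser.parse_args(["merge", "local.conf", "extra.conf"])
print(args.command, args.target, args.fragment)  # merge local.conf extra.conf
```

Each subparser gets its own arguments and help text, so no third-party CLI framework is needed.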

Short term, this is only going to be in grenade, which only runs on
Ubuntu today, where python3 is easy to have access to.

But as we expand this into devstack-gate and devstack, it will put a
hard python3 dependency on those environments. Is this an issue in the
CentOS 7 land (or on any other platforms)?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread mathieu bultel
On 01/17/2017 05:19 PM, Emilien Macchi wrote:
> On Tue, Jan 17, 2017 at 10:57 AM, mathieu bultel  wrote:
>> On 01/17/2017 04:42 PM, Emilien Macchi wrote:
>>> On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:
 Hi Adriano

 On 01/17/2017 03:05 PM, Adriano Petrich wrote:

 So I want to make a backwards compatibility job upstream so from last scrum
 I got the feeling that we should not be adding more stuff to the
 experimental jobs due to lack of resources (and large queues)

 What kind of "test" do you want to add ?
 I ask because since few days we have upstream an upgrade job that does:
 master UC -> deploying a Newton OC with Newton OC + tht stable/newton ->
 then upgrade the OC to master with tht master branch.
 It sounds like a "small backward compatibility" validation, but I'm not
 sure if it covers what you need.
>>> While I understand the idea, I don't see the use case.
>>> In which case do you want to deploy an old version of the overcloud by using a
>>> recent undercloud?
>>> Why not use a stable undercloud to deploy a stable overcloud?
>> From my side, the use case is the major OC upgrade. We don't want to
>> test the major upgrade of the undercloud (since a job already exists),
>> only the overcloud; that's why we start with a "master" undercloud, and that
>> saves us from unwanted/unrelated issues due to the UC upgrade and reduces
>> the duration of the job.
> ok so your use-case is CI focused. Good to know.
> Another question for you then, have we count the time needed to
> upgrade an undercloud? I'm doing it quite oftent and it doesn't take
> more than 15 min for me. Thoughts?
Yes, it's approximately that duration. Actually it depends on the hardware
and the network connectivity, but saying 15/20 minutes avg sounds
reasonable.
>
 Is that so? I was thinking about using nonha-multinode-oooq that seems to
 be working.

 Is that all right to add this new job, or should I wait until we get more
 resources and do ci.centos for now? Any idea on where to do this is also
 welcome.


 Cheers,
Adriano



>>>
>>
>
>




Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Steven Hardy
On Tue, Jan 17, 2017 at 10:42:18AM -0500, Emilien Macchi wrote:
> On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:
> > Hi Adriano
> >
> > On 01/17/2017 03:05 PM, Adriano Petrich wrote:
> >
> > So I want to make a backwards compatibility job upstream so from last scrum
> > I got the feeling that we should not be adding more stuff to the
> > experimental jobs due to lack of resources (and large queues)
> >
> > What kind of "test" do you want to add ?
> > I ask because since few days we have upstream an upgrade job that does:
> > master UC -> deploying a Newton OC with Newton OC + tht stable/newton ->
> > then upgrade the OC to master with tht master branch.
> > It sounds like a "small backward compatibility" validation, but I'm not
> > sure if it covers what you need.
> 
> While I understand the idea, I don't see the use case.
> In which case do you want to deploy an old version of the overcloud by using a
> recent undercloud?
> Why not use a stable undercloud to deploy a stable overcloud?

For development & test usage it's actually really useful - I can deploy any
version overcloud locally for testing (any kind of overcloud bugfixes etc,
not only testing upgrades), and it's even possible to deploy two overclouds
at once, with different versions, to easily do comparative testing.

This is something we get "for free" because TripleO is using Heat/Glance
etc which provide stable interfaces, and although I accept it's something
of a specialist use-case, I think it is a valid one.

Steve



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Steven Hardy
On Tue, Jan 17, 2017 at 02:48:27PM +, Adriano Petrich wrote:
>Mathieu,
>    That sounds exactly like what we need. Do we run tempest or something on
>those to validate it?

It doesn't currently run tempest, only some basic sanity tests (crud
operations where we create some resources for each service before the
upgrade, then check they are still there after the upgrade is completed).

In future we could probably add more validation, but we're constrained by
walltime of the job.

As Mathieu says, this does provide at least partial coverage of deploying an
old overcloud version (e.g. Newton) with a latest (trunk/Ocata) undercloud
- hopefully we can adjust the upgrade test coverage to meet your needs and
avoid the overhead of a completely new job.

Steve



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Emilien Macchi
On Tue, Jan 17, 2017 at 10:57 AM, mathieu bultel  wrote:
> On 01/17/2017 04:42 PM, Emilien Macchi wrote:
>> On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:
>>> Hi Adriano
>>>
>>> On 01/17/2017 03:05 PM, Adriano Petrich wrote:
>>>
>>> So I want to make a backwards compatibility job upstream so from last scrum
>>> I got the feeling that we should not be adding more stuff to the
>>> experimental jobs due to lack of resources (and large queues)
>>>
>>> What kind of "test" do you want to add?
>>> I ask because for a few days we have had an upstream upgrade job that does:
>>> master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
>>> then upgrade the OC to master with the tht master branch.
>>> It sounds like a "small backward compatibility" validation, but I'm not
>>> sure it covers what you need.
>> While I understand the idea, I don't see the use case.
>> In which case would you want to deploy an old version of the overcloud
>> using a recent undercloud?
>> Why not use a stable undercloud to deploy a stable overcloud?
> From my side, the use case is the major OC upgrade. We don't want to
> test the major upgrade of the undercloud (since a job already exists),
> only the overcloud; that's why we start with a "master" undercloud, which
> saves us from unwanted/unrelated issues due to the UC upgrade and reduces
> the duration of the job.

OK, so your use case is CI focused. Good to know.
Another question for you then: have we counted the time needed to
upgrade an undercloud? I'm doing it quite often and it doesn't take
more than 15 min for me. Thoughts?

>>
>>> Is that so? I was thinking about using nonha-multinode-oooq that seems to be
>>> working.
>>>
>>> Is it all right to add this new job, or should I wait until we get more
>>> resources and use ci.centos for now? Any idea on where to do this is also
>>> welcome.
>>>
>>>
>>> Cheers,
>>>Adriano
>>>
>>>
>>
>>
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread mathieu bultel
On 01/17/2017 04:42 PM, Emilien Macchi wrote:
> On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:
>> Hi Adriano
>>
>> On 01/17/2017 03:05 PM, Adriano Petrich wrote:
>>
>> So I want to make a backwards compatibility job upstream so from last scrum
>> I got the feeling that we should not be adding more stuff to the
>> experimental jobs due to lack of resources (and large queues)
>>
>> What kind of "test" do you want to add?
>> I ask because for a few days we have had an upstream upgrade job that does:
>> master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
>> then upgrade the OC to master with the tht master branch.
>> It sounds like a "small backward compatibility" validation, but I'm not
>> sure it covers what you need.
> While I understand the idea, I don't see the use case.
> In which case would you want to deploy an old version of the overcloud
> using a recent undercloud?
> Why not use a stable undercloud to deploy a stable overcloud?
From my side, the use case is the major OC upgrade. We don't want to
test the major upgrade of the undercloud (since a job already exists),
only the overcloud; that's why we start with a "master" undercloud, which
saves us from unwanted/unrelated issues due to the UC upgrade and reduces
the duration of the job.

>
>> Is that so? I was thinking about using nonha-multinode-oooq that seems to be
>> working.
>>
>> Is it all right to add this new job, or should I wait until we get more
>> resources and use ci.centos for now? Any idea on where to do this is also
>> welcome.
>>
>>
>> Cheers,
>>Adriano
>>
>>
>>
>
>




[openstack-dev] [release][requirements] disable constraint bot updates for our own libraries

2017-01-17 Thread Doug Hellmann
In this review for the ironic-inspector-client newton release [1], Alan
pointed out that the new release was pulled into our master requirements
because the constraints bot saw it as a newer release. That doesn't seem
like something we want to have happen, as a general case. Should we
update the bot to avoid changing constraints for the things we release
ourselves? That will let us more carefully manage which updates go into
which branches, since the release jobs update the constraints files
as part of the release process.

Doug

[1] https://review.openstack.org/#/c/398401/
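The behaviour Doug proposes could be sketched as a simple ownership filter in
the bot. This is an illustrative sketch only — the project set and function
name below are hypothetical, not the real openstack/requirements bot code:

```python
# Hypothetical sketch: the constraints bot skips version bumps for
# projects OpenStack releases itself, since the release jobs already
# update the constraints files for those as part of the release process.

OPENSTACK_RELEASED = {  # illustrative set, not the real project registry
    "python-ironic-inspector-client",
    "oslo.config",
}

def filter_bot_updates(proposed, self_released=OPENSTACK_RELEASED):
    """Keep only constraint bumps for external dependencies."""
    return [(name, version) for name, version in proposed
            if name not in self_released]

proposed = [
    ("python-ironic-inspector-client", "1.9.0"),  # ours: left to release jobs
    ("requests", "2.12.4"),                       # external: bot may bump it
]
print(filter_bot_updates(proposed))  # [('requests', '2.12.4')]
```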



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Emilien Macchi
On Tue, Jan 17, 2017 at 9:34 AM, mathieu bultel  wrote:
> Hi Adriano
>
> On 01/17/2017 03:05 PM, Adriano Petrich wrote:
>
> So I want to make a backwards compatibility job upstream so from last scrum
> I got the feeling that we should not be adding more stuff to the
> experimental jobs due to lack of resources (and large queues)
>
> What kind of "test" do you want to add?
> I ask because for a few days we have had an upstream upgrade job that does:
> master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
> then upgrade the OC to master with the tht master branch.
> It sounds like a "small backward compatibility" validation, but I'm not
> sure it covers what you need.

While I understand the idea, I don't see the use case.
In which case would you want to deploy an old version of the overcloud
using a recent undercloud?
Why not use a stable undercloud to deploy a stable overcloud?

> Is that so? I was thinking about using nonha-multinode-oooq that seems to be
> working.
>
> Is it all right to add this new job, or should I wait until we get more
> resources and use ci.centos for now? Any idea on where to do this is also
> welcome.
>
>
> Cheers,
>Adriano
>
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Sean Dague
On 01/16/2017 08:35 AM, Ian Cordasco wrote:
> Hi everyone,
> 
> I've seen a few nascent projects wanting to implement their own secret
> storage to either replace Barbican or avoid adding a dependency on it.
> When I've pressed the developers on this point, the only answer I've
> received is to make the operator's lives simpler.
> 
> I've been struggling to understand the reasoning behind this and I'm
> wondering if there are more people around who can help me understand.
> 
> To help others help me, let me provide my point of view. Barbican's
> been around for a few years already and has been deployed by several
> companies which have probably audited it for security purposes. Most
> of the technology involved in Barbican is proven to be secure and the
> way the project has strung those pieces together has been analyzed by
> the OSSP (OpenStack's own security group). It doesn't have a
> requirement on a hardware TPM which means there's no hardware upgrade
> cost. Furthermore, several services already provide the option of
> using Barbican (but won't place a hard requirement on it). It stands
> to reason (in my opinion) that if new services have a need for secrets
> and other services already support using Barbican as secret storage,
> then those new services should be using Barbican. It seems a bit
> short-sighted of its developers to say that their users are definitely
> not deploying Barbican when projects like Magnum have soft
> dependencies on it.
> 
> Is the problem perhaps that no one is aware of other projects using
> Barbican? Is the status on the project navigator alarming (it looks
> like some of this information is potentially out of date)? Has
> Barbican been deemed too hard to deploy?
> 
> I really want to understand why so many projects feel the need to
> implement their own secrets storage. This seems a bit short-sighted
> and foolish. While these projects are making themselves easier to
> deploy, if not done properly they are potentially endangering their
> users and that seems like a bigger problem than deploying Barbican to
> me.

I don't pretend to have all the answers, but when doing some exploration
around the question of barbican as a default service during the late
summer, there were some community disconnects as well.

For instance, the barbican devstack plugin was just setting up barbican.
It wasn't actually configuring any existing services to use barbican, so
there wasn't any simple way to experiment with development with it (this
looks to have been fixed in Sept), or to understand gate reliability so
that it could be made less optional.

There was also a real concern about testability. For testing purposes
fixed key managers make a ton of sense, because you can crack them open
when things go wrong very easily to see what was going on. The barbican
team was pushing back on maintaining one of those on their side, because
it is inherently not secure. That moved the key manager plug point back
into the projects instead of a hard dependency on barbican, with a
testing mode that could be run. Joel and I had a long conversation about
this at the Nova midcycle in Portland. I'm not entirely sure where this
all landed.
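For context, the fixed key manager mentioned above is configured entirely in
the service config file, which is exactly what makes it easy to "crack open"
in testing and inherently insecure in production. A rough sketch of what such
a setup looked like in Nova around this era — section and option names have
moved between releases, so treat this as illustrative rather than
authoritative:

```ini
# nova.conf (test/CI only): the key sits in plain text in this file.
[key_manager]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager
# 32 bytes of hex; anyone who can read nova.conf can decrypt the data.
fixed_key = 0000000000000000000000000000000000000000000000000000000000000000
```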

There were also previous concerns about the stability of the API, where
version 2 -> 3 changes were made without a deprecation path or
guarantees. Only the fact that no one was deploying it saved people from
a pretty major upgrade breakage.

I think there is also a very real concern about how secure the secrets
are given how open ended the tokens are for users. Duncan raised this in
another part of the thread. From the outside it feels like Keystone and
Barbican need to be much closer integrated given that token security
implications directly impact on the security of the secrets in question.
If those things don't get solved coherently together, there are lots of
exposures there.

I definitely think that Barbican would be a good project to get elevated
to required component. Encrypting disks at rest by default with sane
keys should be standard behavior for an IaaS as it massively decreases
the data exposure of rogue VMs. (``dd if=/dev/vda | strings`` can turn
up interesting data in shared environments that aren't encrypted).

Doing so basically is going to require someone to champion project
managing this whole process, and discovering and bridging the existing
communication gaps that are there. There doesn't seem to be a ton of
natural overlap between contributors to barbican and the base IaaS
services today, which means plenty of communication gaps.

-Sean

-- 
Sean Dague
http://dague.net




Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Julien Danjou
On Tue, Jan 17 2017, Ian Cordasco wrote:

> Or, perhaps the last time people complained that the process
> documentation was too detailed and the telemetry project decided it
> didn't want to have to follow it? If that's the case, following the
> embargoed procedures might not be what you want as a project. At that
> point, you don't need to work with the VMT and you can immediately
> open the bug to start collaborating on Gerrit. You of course open up
> all of your deployers to being targeted, but that's the project's call
> in the end I guess.

Yeah it sucks, though if you have little help (resources) from the
deployers, that's what is going to happen sooner or later.

> I would think that if you want the "vulnerability:managed" tag, you
> might be willing to follow the process outlined. Perhaps it's verbose,
> but it is verbose for good reason. OpenStack's handling of embargoed
> issues is pretty much as good as it gets for a project the size of
> OpenStack. It benefits deployers and users by making the issue AND the
> fix known at the same time which gives deployers the ability to
> immediately consume the fix.

Yeah, don't read me wrong (though I was not precise :-) - we don't have
any problem with _respecting_ the procedure. I think for small projects like
ours it is nearly impossible to _apply_ the procedure on our own:
requesting a CVE, OSSA, OSSN, getting the right classification,
publishing, getting in touch with downstream… it is too much work for such
small teams.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Ian Cordasco
On Tue, Jan 17, 2017 at 8:02 AM, Julien Danjou  wrote:
> On Tue, Jan 17 2017, Adam Heczko wrote:
>
>> Hi Julien, I think that you should follow this [1] workflow.
>>
>> TL;DR: Please make sure that if the bug is serious it is made private on LP
>> so that only core team members can access it and propose patches. Please do
>> not send patches to the Gerrit review queue; rather, attach them to the LP
>> bug ticket and discuss there. Contact VMT members to get more details on how
>> to get the Telemetry project covered by the VMT.
>>
>> [1] https://security.openstack.org/vmt-process.html
>
> IMHO that's a problem. The page is so long and the process so complex
> that if nobody has the time to do all of that, it'll never be fixed, or
> I'll just send the patch to Gerrit to get it fixed and be done with it.
>
> At first glance Telemetry matches all the requirements to get covered by
> the VMT. IIRC last time we asked for it we got punted because there was
> already too much work for the VMT team. But if that's possible, we'd be
> glad to apply again. :-)

Or, perhaps the last time people complained that the process
documentation was too detailed and the telemetry project decided it
didn't want to have to follow it? If that's the case, following the
embargoed procedures might not be what you want as a project. At that
point, you don't need to work with the VMT and you can immediately
open the bug to start collaborating on Gerrit. You of course open up
all of your deployers to being targeted, but that's the project's call
in the end I guess.

I would think that if you want the "vulnerability:managed" tag, you
might be willing to follow the process outlined. Perhaps it's verbose,
but it is verbose for good reason. OpenStack's handling of embargoed
issues is pretty much as good as it gets for a project the size of
OpenStack. It benefits deployers and users by making the issue AND the
fix known at the same time which gives deployers the ability to
immediately consume the fix.

-- 
Ian Cordasco



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Adriano Petrich
Mathieu,

That sounds exactly like what we need. Do we run tempest or something on
those to validate it?

On Tue, Jan 17, 2017 at 2:34 PM, mathieu bultel  wrote:

> Hi Adriano
>
> On 01/17/2017 03:05 PM, Adriano Petrich wrote:
>
> So I want to make a backwards compatibility job upstream so from last
> scrum I got the feeling that we should not be adding more stuff to the
> experimental jobs due to lack of resources (and large queues)
>
> What kind of "test" do you want to add?
> I ask because for a few days we have had an upstream upgrade job that does:
> master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
> then upgrade the OC to master with the tht master branch.
> It sounds like a "small backward compatibility" validation, but I'm not
> sure it covers what you need.
>
> Is that so? I was thinking about using nonha-multinode-oooq that seems to
> be working.
>
> Is it all right to add this new job, or should I wait until we get more
> resources and use ci.centos for now? Any idea on where to do this is also
> welcome.
>
>
> Cheers,
>Adriano
>
>
>
>


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Ian Cordasco
On Tue, Jan 17, 2017 at 8:04 AM, Duncan Thomas  wrote:
> controls than this, but they never showed up AFAIK. And that's just the
> problem - people think 'Oh, barbican is storing the cinder volume secrets,
> great, we're secure' when actually barbican has made the security situation
> worse not better. It's a pretty terrible secrets-as-a-service product at the
> moment. Fixing it is not trivial.

So this is the second time you've asserted that Barbican is "a pretty
terrible secrets-as-a-service product". Instead of repeatedly saying
the same thing, have you worked with them on this? From your own
accounts, it sounds like you're not providing the constructively
critical feedback necessary to help the Barbican team and haven't
attempted to prior to this thread (although I'd not call your
criticisms constructive). I somehow doubt you'd be accepting of this
kind of feedback if it were aimed at Cinder. Are there open bugs that
have been ignored that you've filed? Items you've brought up at their
meetings?

To be clear, I started this thread to help the Barbican team gather
actionable items to further adoption because it seems a worthwhile
goal. Yes Barbican can improve, so can Cinder. So let's keep these
discussions constructive, okay?

-- 
Ian Cordasco



Re: [openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread mathieu bultel
Hi Adriano

On 01/17/2017 03:05 PM, Adriano Petrich wrote:
> So I want to make a backwards compatibility job upstream so from last
> scrum I got the feeling that we should not be adding more stuff to the
> experimental jobs due to lack of resources (and large queues) 
>
What kind of "test" do you want to add?
I ask because for a few days we have had an upstream upgrade job that does:
master UC -> deploy a Newton OC with Newton OC + tht stable/newton ->
then upgrade the OC to master with the tht master branch.
It sounds like a "small backward compatibility" validation, but I'm
not sure it covers what you need.
> Is that so? I was thinking about using nonha-multinode-oooq that seems
> to be working.
>
> Is it all right to add this new job, or should I wait until we get
> more resources and use ci.centos for now? Any idea on where to do
> this is also welcome.
>
>
> Cheers,
>Adriano
>
>




[openstack-dev] [neutron] [classifier] Common Classification Framework meeting

2017-01-17 Thread Duarte Cardoso, Igor
Hi all,

Common Classification Framework developers and interested parties are invited 
for today's meeting. The agenda is below, feel free to add more topics.

https://wiki.openstack.org/wiki/Neutron/CommonFlowClassifier#Discussion_Topic_17_January_2017

1700 UTC @ #openstack-meeting.

Best regards,
Igor.



Re: [openstack-dev] [heat][tripleo] Heat memory usage in the TripleO gate during Ocata

2017-01-17 Thread Zane Bitter

On 11/01/17 09:21, Zane Bitter wrote:


From that run, total memory usage by Heat was 2.32GiB. That's a little
lower than the peak that occurred near the end of Newton development for
the legacy path, but still more than double the current legacy path
usage (0.90GiB on the job that ran for that same review). So we have
work to do.

I still expect storing output values in the database at the time
resources are created/updated, rather than generating them on the fly,
will create the biggest savings. There may be other infelicities we can
iron out to get some more wins as well.


Crag and I discovered that we were accidentally loading all of the 
resources from the database when doing a check on one resource 
(basically meaning we had to read O(n^2) resources on each traversal - 
ouch). The patch https://review.openstack.org/#/c/420971/ brings the 
memory usage down to 2.10GiB (10% saving) and has given us a few other 
ideas for further improvements too.
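The shape of that fix can be illustrated generically (this is not Heat's
actual code, just the pattern): when checking a single resource, query for
that one row instead of materialising every resource in the stack.

```python
import sqlite3

# Toy stand-in for Heat's resource table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO resource VALUES (?, ?)",
                 [(i, "res-%d" % i) for i in range(1000)])

def check_resource_slow(conn, rid):
    # Anti-pattern: load all n resources to check one -- across a full
    # traversal that's O(n^2) rows read.
    rows = conn.execute("SELECT id, name FROM resource").fetchall()
    return next(row for row in rows if row[0] == rid)

def check_resource_fast(conn, rid):
    # Fix: fetch only the row being checked.
    return conn.execute("SELECT id, name FROM resource WHERE id = ?",
                        (rid,)).fetchone()

assert check_resource_slow(conn, 42) == check_resource_fast(conn, 42)
```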


- ZB



[openstack-dev] [MassivelyDistributed] IRC Meeting tomorrow 15:00 UTC

2017-01-17 Thread Anthony SIMONET
Hi all,

The agenda is available at:
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 82)
Please feel free to add items to the agenda.

The meeting will take place on #openstack-meeting.

Cheers,
Anthony





[openstack-dev] [TripleO] Upstream backwards compatibility job for Newton oooq

2017-01-17 Thread Adriano Petrich
So I want to make a backwards compatibility job upstream. From the last scrum
I got the feeling that we should not be adding more stuff to the
experimental jobs due to lack of resources (and large queues).

Is that so? I was thinking about using nonha-multinode-oooq that seems to
be working.

Is it all right to add this new job, or should I wait until we get more
resources and use ci.centos for now? Any idea on where to do this is also
welcome.


Cheers,
   Adriano


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Duncan Thomas
On 17 January 2017 at 13:41, Dave McCowan (dmccowan) 
wrote:

>
> I don't know everything that was proposed in the Juno timeframe, or
> before, but the Nova and Cinder integration has been done now.  The
> documentation is at [1].  A cinder user can create an encryption key
> through Barbican when creating a volume, then the same user (or a user with
> permissions granted by that user), as a nova user, can retrieve that key
> when mounting the encrypted volume.
>

Sure, cinder can add a secret and nova can retrieve it. But glance can
*also* retrieve it. So can trove. And any other service that gets a normal
keystone token from the user (i.e. just about all of them). This is, for
some threat models, far worse than the secret being nice and safe in the
cinder DB and only ever given out to nova via a trusted API path. The
original design vision I saw for barbican was intended to have much better
controls than this, but they never showed up AFAIK. And that's just the
problem - people think 'Oh, barbican is storing the cinder volume secrets,
great, we're secure' when actually barbican has made the security situation
worse, not better. It's a pretty terrible secrets-as-a-service product at
the moment. Fixing it is not trivial.

-- 
Duncan Thomas


Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Julien Danjou
On Tue, Jan 17 2017, Rob C wrote:

> Ian has provided advice on how you might become security managed, which
> is a good aspiration for any team to have.
>
> However, if you have a serious security issue that you need help mitigating
> the security project can help. We can work with you on the solution and also
> issue an OpenStack Security Note to notify users of the update/patch that
> they might need to apply.
>
> Please go ahead and add me to the security bug, if required I'll add other
> core-sec people as required.

Thanks a lot Rob, that's very helpful. I'll add you.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-17 Thread lương hữu tuấn
Hi Kirill,

Thank you for the information. I hope we will have more information about
it. Just keep in touch when you guys at Mirantis have some performance
results for YAQL.

Br,

@Nokia/Tuan

On Tue, Jan 17, 2017 at 2:32 PM, Kirill Zaitsev 
wrote:

> I think the Fuel team encountered similar problems; I'd advise asking them
> around. Also Stan (the author of yaql) might shed some light on the problem =)
>
> --
> Kirill Zaitsev
> Murano Project Tech Lead
> Software Engineer at
> Mirantis, Inc
>
> On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
> wrote:
>
> Hi,
>
> We are now using yaql in mistral, and what we see is that the process of
> validating yaql expressions in the input takes a lot of time, especially
> with big inputs. Do you guys have any information about the performance of
> yaql?
>
> Br,
>
> @Nokia/Tuan
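One common mitigation for this kind of per-call validation cost is to
parse/validate each distinct expression once and memoize the result. The
sketch below uses a stand-in `parse` function rather than yaql's real engine
API (which isn't shown in this thread), so only the caching pattern is being
illustrated:

```python
import functools
import time

def parse(expression):
    # Stand-in for an expensive parse/validate step (e.g. building an AST).
    time.sleep(0.005)
    return ("ast", expression)

@functools.lru_cache(maxsize=1024)
def parse_cached(expression):
    # Identical expression strings are parsed once and served from cache.
    return parse(expression)

# Validating the same expression 200 times now costs roughly one parse.
start = time.monotonic()
for _ in range(200):
    parse_cached("$.items.where($ > 2)")
assert time.monotonic() - start < 0.5
```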


Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Julien Danjou
On Tue, Jan 17 2017, Adam Heczko wrote:

> Hi Julien, I think that you should follow this [1] workflow.
>
> TL;DR: Please make sure that if the bug is serious it is made private on LP
> so that only core team members can access it and propose patches. Please do
> not send patches to the Gerrit review queue; rather, attach them to the LP
> bug ticket and discuss there. Contact VMT members to get more details on how
> to get the Telemetry project covered by the VMT.
>
> [1] https://security.openstack.org/vmt-process.html

IMHO that's a problem. The page is so long and the process so complex
that if nobody has the time to do all of that, it'll never be fixed, or
I'll just send the patch to Gerrit to get it fixed and be done with it.

At first glance Telemetry matches all the requirements to get covered by
the VMT. IIRC last time we asked for it we got punted because there was
already too much work for the VMT team. But if that's possible, we'd be
glad to apply again. :-)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Dave McCowan (dmccowan)


On 1/16/17, 3:06 PM, "Ian Cordasco"  wrote:

>-Original Message-
>From: Dave McCowan (dmccowan) 
>Reply: OpenStack Development Mailing List (not for usage questions)
>
>Date: January 16, 2017 at 13:03:41
>To: OpenStack Development Mailing List (not for usage questions)
>
>Subject:  Re: [openstack-dev] [all] [barbican] [security] Why are
>projects trying to avoid Barbican, still?
>> Yep. Barbican supports four backend secret stores. [1]
>>
>> The first (Simple Crypto) is easy to deploy, but not extraordinarily
>> secure, since the secrets are encrypted using a static key defined in
>>the
>> barbican.conf file.
>>
>> The second and third (PKCS#11 and KMIP) are secure, but require an HSM
>>as
>> a hardware base to encrypt and/or store the secrets.
>> The fourth (Dogtag) is secure, but requires a deployment of Dogtag to
>> encrypt and store the secrets.
>>
>> We do not currently have a secret store that is both highly secure and
>> easy to deploy/manage.
>>
>> We, the Barbican community, are very open to any ideas, blueprints, or
>> patches on how to achieve this.
>> In any of the homegrown per-project secret stores, has a solution been
>> developed that solves both of these?
>>
>>
>> [1]
>> http://docs.openstack.org/project-install-guide/key-manager/draft/barbican-backend.html
>
>So there seems to be a consensus that Vault is a good easy and secure
>solution to deploy. Can Barbican use that as a backend secret store?

Adding a new secret store plugin for Vault would be a welcome addition.
We have documentation in our repo on how to write a new plugin. [1]   I
can schedule some time at the PTG to plan for this in Pike if there are
interested developers.

[1] https://github.com/openstack/barbican/blob/master/doc/source/plugin/secret_store.rst
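As a rough illustration of the plugin shape involved (class and method
names below are illustrative stand-ins, not Barbican's actual secret-store
base class), a Vault-style backend would implement store/retrieve hooks
along these lines, with a toy in-memory dict standing in for the real
Vault client:

```python
import abc
import uuid

# Illustrative sketch only: the real interface lives in
# barbican.plugin.interface.secret_store; these names are stand-ins.
class SecretStoreBase(abc.ABC):
    @abc.abstractmethod
    def store_secret(self, secret: bytes) -> str:
        """Persist a secret, returning an opaque reference."""

    @abc.abstractmethod
    def get_secret(self, ref: str) -> bytes:
        """Retrieve a previously stored secret by reference."""

class InMemoryVaultLikeStore(SecretStoreBase):
    """Toy backend standing in for a real Vault client."""
    def __init__(self):
        self._secrets = {}

    def store_secret(self, secret: bytes) -> str:
        ref = str(uuid.uuid4())
        self._secrets[ref] = secret
        return ref

    def get_secret(self, ref: str) -> bytes:
        return self._secrets[ref]

store = InMemoryVaultLikeStore()
ref = store.store_secret(b"db-password")
assert store.get_secret(ref) == b"db-password"
```

A real plugin would also need to advertise which secret types and
algorithms it supports, per the plugin documentation linked above.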


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Dave McCowan (dmccowan)


From: Duncan Thomas
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, January 16, 2017 at 5:33 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [all] [barbican] [security] Why are projects 
trying to avoid Barbican, still?

To give a totally different perspective on why somebody might dislike Barbican 
(I'm one of those people). Note that I'm working from pretty hazy memories, so I 
don't guarantee I've got everything spot on, and I am without a doubt giving a 
very one-sided view. But hey, that's the side I happen to sit on. I certainly 
don't mean to cause great offence to the people concerned, but rather to give 
a history from a PoV that hasn't appeared yet.

Cinder needed somewhere to store volume encryption keys. Long, long ago, 
Barbican gave a great presentation about secrets as a service, ACLs on secrets, 
setups where one service could ask for key material to be created and only 
accessible to some other service. Having one service give another service 
permission to get at a secret (but never be able to access that secret itself). 
All the clever things that cinder could possibly leverage. It would also handle 
hardware security modules and all of the other craziness that no sane person 
wants to understand the fine details of. Key revocation, rekeying and some 
other stuff was mentioned as being possible future work.

So I waited, and I waited, and I asked some security people about what Barbican 
was doing, and I got told it had gone off and done some certificate cleverness 
stuff, unrelated to anything we wanted, for some other service, but 
secrets-as-a-service would be along at some point. Eventually, a long time 
after all my enthusiasm had waned, the basic feature

It doesn't do what it says on the tin. It isn't very good at keeping secrets. 
If I've got a token then I can get the keys for all my volumes. That kind of 
sucks. For several threat models, I'd have done better to just stick the keys 
in the cinder db.

I really wish I'd got a video of that first presentation, because it would be 
an interesting project to implement. Barbican, though, from a narrowly 
focused single-usecase viewpoint, really isn't very good.

(If I've missed something and Barbican can do the clever ACL type stuff that 
was talked about, please let me know - I'd be very interested in trying to fit 
it to cinder, and I'm not even working on cinder professionally currently.)

I don't know everything that was proposed in the Juno timeframe, or before, but 
the Nova and Cinder integration has been done now.  The documentation is at 
[1].  A cinder user can create an encryption key through Barbican when creating 
a volume, then the same user (or a user with permissions granted by that user), 
as a nova user, can retrieve that key when mounting the encrypted volume.

[1] 
http://docs.openstack.org/mitaka/config-reference/block-storage/volume-encryption.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Rob C
You've done the right thing by posting here with the [Security] tag.

Ian has provided advice on how you might become security managed, which
is a good aspiration for any team to have.

However, if you have a serious security issue that you need help mitigating,
the security project can help. We can work with you on the solution and also
issue an OpenStack Security Note to notify users of the update/patch that
they might need to apply.

Please go ahead and add me to the security bug; if needed I'll add other
core-sec people as well.

Cheers
-Rob



On Tue, Jan 17, 2017 at 1:14 PM, Adam Heczko  wrote:

> Hi Julien, I think that you should follow this [1] workflow.
>
> TL;DR: Pls make sure that if the bug is serious make it private on LP so
> that only core team members can access it and propose patches. Please do
> not send patches to Gerrit review queue but rather attach it to LP bug
> ticket and discuss there. Contact VMT members to get more details on how to
> get Telemetry project covered by VMT.
>
> [1] https://security.openstack.org/vmt-process.html
>
> On Tue, Jan 17, 2017 at 1:26 PM, Julien Danjou  wrote:
>
>> Hi,
>>
>> I've asked on #openstack-security without success, so let me try here
>> instead:
>>
>> We, Telemetry, have a security bug and we're not managed by VMT, any
>> hint as how to handle our bug? Or how to get covered by VMT? 
>>
>> Cheers,
>> --
>> Julien Danjou
>> /* Free Software hacker
>>https://julien.danjou.info */
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][glance] gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial failures

2017-01-17 Thread Brian Rosmaita
On 1/17/17 12:10 AM, GHANSHYAM MANN wrote:
> Yea, manage snapshot tests should be skipped on ceph backend.
> 
> I disabled those tests for *-ceph-* jobs and glance-store will be unblocked
> after that is merged.
> 
> -  https://review.openstack.org/#/c/421073/

Thanks for getting the patches up so quickly!  Appreciate the quick fix.

cheers,
brian

> 
> 
> There is discussion going on about whether to disable manage snapshot by
> default on the devstack side and improve the tempest tests also, but that
> might take time.
> 
> But for ceph jobs it will be disabled in that patch and should not block
> the gate etc.
> 
> Also if any CI is failing due to that, they can quickly disable that flag
> and skip the tests which are not meant to be run on their CI.
> 
> 
> -gmann
> 
> On Tue, Jan 17, 2017 at 11:18 AM, Brian Rosmaita wrote:
> 
>> I need some help troubleshooting a glance_store gate failure that I
>> think is due to a recent change in a tempest test and a configuration
>> problem (or it could be something else entirely).  I'd appreciate some
>> help solving this as it appears to be blocking all merges into
>> glance_store, which, as a non-client library, is supposed to be frozen
>> later this week.
>>
>> Here's an example of the failure in a global requirements update patch:
>> https://review.openstack.org/#/c/420832/
>> (I should mention that the failure is occurring in a volume test in
>> tempest.api.volume.admin.v2.test_snapshot_manage.
>> SnapshotManageAdminV2Test,
>> not a glance_store test.)
>>
>> The test is being run by this gate:
>> gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial
>>
>> The test that's failing, test_unmanage_manage_snapshot was recently
>> modified by Change-Id: I77be1cf85a946bf72e852f6378f0d7b43af8023a
>> To be more precise, the test itself wasn't changed, rather the criterion
>> for skipping the test was changed (from a skipIf based on whether the
>> backend was ceph, to a skipUnless based on a boolean config option).
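The skip-criterion change described above can be sketched like this
(config and test names here are illustrative, not tempest's real ones):

```python
import unittest

class FakeVolumeConf:
    # Illustrative stand-in for the tempest config option; it should be
    # False when the backend (e.g. ceph) cannot manage snapshots.
    manage_snapshot = False

class SnapshotManageSketch(unittest.TestCase):
    # Old criterion (sketch): @unittest.skipIf(backend == 'ceph', ...)
    # New criterion: skip unless the config flag says it is supported.
    @unittest.skipUnless(FakeVolumeConf.manage_snapshot,
                         "manage snapshot not supported by this backend")
    def test_unmanage_manage_snapshot(self):
        pass

suite = unittest.TestLoader().loadTestsFromTestCase(SnapshotManageSketch)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
assert len(outcome.skipped) == 1  # skipped because the flag is False
```

With this shape, a misconfigured True default makes the test run (and
fail) on ceph, which matches the symptom in the log linked below.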
>>
>> From the comment in the old code on that patch, it seems like the test
>> config value should be False when ceph is the backend (and that's its
>> default).  But in the config dump of the failing test run,
>> http://logs.openstack.org/32/420832/1/check/gate-tempest-dsvm-full-ceph-plugin-src-glance_store-ubuntu-xenial/dab27eb/logs/tempest_conf.txt.gz
>> you can see that manage_snapshot is True.
>>
>> That's why I think the problem is being caused by a flipped test config
>> value, but I'm not sure where the configuration for this particular gate
>> lives so I don't know what repo to propose a patch to.
>>
>> Thanks in advance for any help,
>> brian
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [yaql] Yaql validating performance

2017-01-17 Thread Kirill Zaitsev
I think the Fuel team encountered similar problems; I'd advise asking
them. Also Stan (the author of yaql) might shed some light on the problem =)
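One general mitigation worth checking while profiling: parse/validate
each expression once and reuse the result. A stdlib-only sketch of the
caching idea (not yaql's actual API; `compile` stands in for the
expensive parse/validate step that yaql's engine performs):

```python
import functools

# Stand-in for the expensive parse/validate step (in yaql this would be
# the engine('...') call produced by YaqlFactory().create()).
def parse_expression(text):
    return compile(text, "<expr>", "eval")

@functools.lru_cache(maxsize=256)
def cached_parse(text):
    return parse_expression(text)

# Repeated expressions hit the cache instead of being re-parsed:
first = cached_parse("1 + 2")
second = cached_parse("1 + 2")
assert first is second  # same cached code object
assert eval(first) == 3
```

If the same expressions recur across large inputs, memoizing the parse
amortizes the validation cost; it won't help if every input carries a
unique expression.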

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

On 17 January 2017 at 15:11:52, lương hữu tuấn (tuantulu...@gmail.com)
wrote:

Hi,

We are now using yaql in Mistral, and what we see is that the process of
validating the yaql expression of an input takes a lot of time,
especially with big inputs. Do you guys have any information about the
performance of yaql?

Br,

@Nokia/Tuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Rob C
Just a quick note on Castellan: at the moment it's not a particularly
strong abstraction for key management in general, just for the OpenStack
key management interface.

The reason this is important is because, if I recall correctly, Castellan
requires a keystone token for auth. It should be no surprise that COTS
key managers, software or hardware, do not support this method of
authentication.

Unless something has changed recently, Castellan is good for allowing
teams to pivot between a local key management implementation or
Barbican but a long way from allowing a direct pivot to another key
management system.

I do recall some efforts to move beyond this limitation and implement
KMIP[1] for direct access to HSMs that support it, however I'm not sure
what the end result there was.

[1]
https://specs.openstack.org/openstack/barbican-specs/specs/mitaka/kmip-key-manager.html

On Tue, Jan 17, 2017 at 12:57 PM, Ian Cordasco 
wrote:

> On Mon, Jan 16, 2017 at 6:20 PM, Amrith Kumar 
> wrote:
> > Ian,
> >
> > This is a fascinating conversation. Let me offer two observations.
> >
> > First, Trove has long debated the ideal solution for storing secrets.
> There
> > have been many conversations, and Barbican has been considered many
> times.
> > We sought input from people who were deploying and operating Trove at
> scale;
> > customers of Tesora, self described users of the upstream Trove, and
> some of
> > the (then) active contributors who were also operators.
> >
> > The consensus was that installing and deploying OpenStack was hard enough
> > and requiring the installation of yet more services was problematic.
> This is
> > not something which singles out Barbican in any way. For example, Trove
> uses
> > Swift as the default object store where backups are stored, and in
> > implementing replication we leveraged the backup capability. This means
> that
> > to have replication, one needs to have Swift. Several deployers have
> > objected to this since they don't have swift. But that is a dependency
> which
> > we considered to be a hard dependency and offer no alternatives; you can
> > have Ceph if you so desire but we still access it as a swift store.
> > Similarly we needed some capabilities of job scheduling and opted to use
> > mistral for this; we didn't reimplement all of these capabilities in
> Trove.
> >
> > However, when it comes to secret storage, the consensus of opinion is
> > Yet another service.
>
> So, what spurred this thread is that I'm currently working on Craton
> which wants to store deployment secrets for operators and I've
> recently received a lot of private mail about Glare and how one of its
> goals is to replace Barbican (as well as Glance).
>
> I'm quite happy that Trove has worked hard not to reimplement its
> requirements that were already satisfied by OpenStack projects. That's
> kind of what I'm hoping to help people do with Barbican in this
> thread.
>
> > Here is the second observation. This conversation reminds me of many
> > conversations from years past "Why do you want to use a NoSQL database,
> we
> > have a  database already". I've sat in on heated arguments
> > amongst architects who implemented "lightweight key-value storage based
> on
> > " and didn't use the corporate standard RDBMS.
>
> This I don't quite agree with this comparison. Surely when NoSQL came
> out, people ridiculed it for not having the same properties as RDBMS,
> but there's a large difference in people criticizing NoSQL databases
> having not used them and me asking people to use software that's
> already been audited for security and written by people who understand
> the underlying technologies.
>
> I'm sure if you said to your users and operators: "These N services
> need to store secrets and each has implemented that in its own way
> with no common configuration or storage location. None of them can
> take advantage of HSMs you have present in your infrastructure, and
> none of the people who really developed this are experts at storing
> secrets, but they tried their best!" Those operators would start to
> gnash their teeth and even maybe curse you under their breath. If you
> said "These services all need to store secrets securely, and that
> means we need to add Barbican which was written by people who took the
> time to document their threat models, perform a security analysis, and
> have worked with the larger security community to develop it." They'd
> be happier. I do understand, however, that your customers aren't
> deploying all the services that might use Barbican, and that's fine.
>
> What I'm gleaning from this conversation is that most of us have
> customers who only use 1 extra service that has a soft dependency on
> Barbican but never more than one. I have customers using Octavia and
> Magnum and a team that wants to use Craton, so it seems to me like we
> would benefit from doing the hard work of deploying Barbican but that
> situation is rare.

Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Adam Heczko
Hi Julien, I think that you should follow this [1] workflow.

TL;DR: If the bug is serious, please make sure it is private on LP so
that only core team members can access it and propose patches. Please do
not send patches to the Gerrit review queue, but rather attach them to
the LP bug ticket and discuss there. Contact VMT members for more details
on how to get the Telemetry project covered by VMT.

[1] https://security.openstack.org/vmt-process.html

On Tue, Jan 17, 2017 at 1:26 PM, Julien Danjou  wrote:

> Hi,
>
> I've asked on #openstack-security without success, so let me try here
> instead:
>
> We, Telemetry, have a security bug and we're not managed by VMT, any
> hint as how to handle our bug? Or how to get covered by VMT? 
>
> Cheers,
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Ian Cordasco
On Tue, Jan 17, 2017 at 6:26 AM, Julien Danjou  wrote:
> Hi,
>
> I've asked on #openstack-security without success, so let me try here
> instead:
>
> We, Telemetry, have a security bug and we're not managed by VMT, any
> hint as how to handle our bug? Or how to get covered by VMT? 

So, in terms of process I'd advise you read
https://security.openstack.org/vmt-process.html because it describes
how the VMT process works.

I believe 
http://docs.openstack.org/project-team-guide/vulnerability-management.html
described that you need to be "security-supported" which involves
joining the list of projects with the "vulnerability:managed" tag
(https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html).

https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html#requirements
describes the requirements to attain that tag.

Cheers,
-- 
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] vhost-user server mode and reconnect

2017-01-17 Thread Mooney, Sean K
Hi everyone
I first proposed a series of patches to enable vhost-user with a
QEMU server / OVS client topology last July, before the relevant changes
to enable this configuration had been released in OVS with DPDK.

Since then OVS 2.6 is out and shipping (2.7 will be out soon)
and all of the dependencies on nova, os-vif, DPDK, QEMU and the
requirements repo have been merged.
The final piece to enable this feature with the OVS agent backend is
https://review.openstack.org/#/c/344997/9

It has been a while since this patch was actively reviewed, so I have
added everyone who has previously reviewed this change to the To
line and would ask that if you have time to review it, please do.

I would like to get this feature finished and merged before the Ocata
code freeze next week if possible. Given that the code has been largely
unchanged since your initial review, bar addressing comments raised,
I think it is in a stable state and ready to merge unless other issues
are raised.

Regards
Seán
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Ian Cordasco
On Mon, Jan 16, 2017 at 6:20 PM, Amrith Kumar  wrote:
> Ian,
>
> This is a fascinating conversation. Let me offer two observations.
>
> First, Trove has long debated the ideal solution for storing secrets. There
> have been many conversations, and Barbican has been considered many times.
> We sought input from people who were deploying and operating Trove at scale;
> customers of Tesora, self described users of the upstream Trove, and some of
> the (then) active contributors who were also operators.
>
> The consensus was that installing and deploying OpenStack was hard enough
> and requiring the installation of yet more services was problematic. This is
> not something which singles out Barbican in any way. For example, Trove uses
> Swift as the default object store where backups are stored, and in
> implementing replication we leveraged the backup capability. This means that
> to have replication, one needs to have Swift. Several deployers have
> objected to this since they don't have swift. But that is a dependency which
> we considered to be a hard dependency and offer no alternatives; you can
> have Ceph if you so desire but we still access it as a swift store.
> Similarly we needed some capabilities of job scheduling and opted to use
> mistral for this; we didn't reimplement all of these capabilities in Trove.
>
> However, when it comes to secret storage, the consensus of opinion is
> Yet another service.

So, what spurred this thread is that I'm currently working on Craton
which wants to store deployment secrets for operators and I've
recently received a lot of private mail about Glare and how one of its
goals is to replace Barbican (as well as Glance).

I'm quite happy that Trove has worked hard not to reimplement its
requirements that were already satisfied by OpenStack projects. That's
kind of what I'm hoping to help people do with Barbican in this
thread.

> Here is the second observation. This conversation reminds me of many
> conversations from years past "Why do you want to use a NoSQL database, we
> have a  database already". I've sat in on heated arguments
> amongst architects who implemented "lightweight key-value storage based on
> " and didn't use the corporate standard RDBMS.

This I don't quite agree with this comparison. Surely when NoSQL came
out, people ridiculed it for not having the same properties as RDBMS,
but there's a large difference in people criticizing NoSQL databases
having not used them and me asking people to use software that's
already been audited for security and written by people who understand
the underlying technologies.

I'm sure if you said to your users and operators: "These N services
need to store secrets and each has implemented that in its own way
with no common configuration or storage location. None of them can
take advantage of HSMs you have present in your infrastructure, and
none of the people who really developed this are experts at storing
secrets, but they tried their best!" Those operators would start to
gnash their teeth and even maybe curse you under their breath. If you
said "These services all need to store secrets securely, and that
means we need to add Barbican which was written by people who took the
time to document their threat models, perform a security analysis, and
have worked with the larger security community to develop it." They'd
be happier. I do understand, however, that your customers aren't
deploying all the services that might use Barbican, and that's fine.

What I'm gleaning from this conversation is that most of us have
customers who only use 1 extra service that has a soft dependency on
Barbican but never more than one. I have customers using Octavia and
Magnum and a team that wants to use Craton, so it seems to me like we
would benefit from doing the hard work of deploying Barbican but that
situation is rare.

> Finally, it is my personal belief that making software pluggable such that
> "if it discovers Barbican, it uses it, if it discovers XYZ it uses it, if it
> discovers PQR it uses that ..." is a very expensive design paradigm.  Unless
> Barbican, PQR, XYZ and any other implementation provide such material value
> to the consumer, and there is significant deployment and usage of each, the
> cost of maintaining the transparent pluggability of these, the cost of
> testing, and development all add up very quickly.

I believe this is exactly what Castellan is designed to be: the
interface through which services that want a soft requirement on
Barbican can have one.
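The facade idea can be sketched as follows (class and option names are
illustrative, not Castellan's real API, and the in-memory backends are
toys standing in for real clients):

```python
# Sketch of a pluggable key-manager facade: services code against one
# interface, and configuration selects the backend.
class InMemoryKeyManager:
    """Toy 'simple' backend."""
    def __init__(self):
        self._store = {}

    def store(self, name, secret):
        self._store[name] = secret

    def get(self, name):
        return self._store[name]

class BarbicanLikeKeyManager(InMemoryKeyManager):
    """Stand-in for a backend that would talk to Barbican."""

BACKENDS = {
    "simple": InMemoryKeyManager,
    "barbican": BarbicanLikeKeyManager,
}

def key_manager_api(backend="simple"):
    # In Castellan the backend is picked via oslo.config; here a plain
    # argument stands in for that.
    return BACKENDS[backend]()

km = key_manager_api("barbican")
km.store("db-password", b"s3cret")
assert km.get("db-password") == b"s3cret"
```

The value of the facade is that swapping `"barbican"` for another
backend is a configuration change, not a code change in every service.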

> Which is why when some project wants to store a secret, it ciphers it using
> some one way hash and stuffs that in a database (if that's all it needs).

Sometimes you need to get that secret back out though and that one way
hash won't cut it. Also you have to balance what most people on this
list consider "normal" deployments of OpenStack against the increasing
demand to be able to deploy OpenStack on a FIPS compliant 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Ian Cordasco
On Mon, Jan 16, 2017 at 6:11 PM, Joshua Harlow  wrote:
>> Is the problem perhaps that no one is aware of other projects using
>> Barbican? Is the status on the project navigator alarming (it looks
>> like some of this information is potentially out of date)? Has
>> Barbican been deemed too hard to deploy?
>>
>> I really want to understand why so many projects feel the need to
>> implement their own secrets storage. This seems a bit short-sighted
>> and foolish. While these projects are making themselves easier to
>> deploy, if not done properly they are potentially endangering their
>> users and that seems like a bigger problem than deploying Barbican to
>> me.
>>
>
> Just food for thought, and I'm pretty sure it's probably the same for
> various others; but one part that I feel is a reason that folks don't deploy
> barbican is because most companies need a solution that works beyond
> OpenStack and whether people like it or not, a OpenStack specific solution
> isn't really something that is attractive (especially with the growing
> adoption of other things that are *not* OpenStack).
>
> Another reason, some companies have or are already building/built solutions
> that offer functionality like what's in https://github.com/square/keywhiz
> and others and such things integrate with kubernetes and **their existing**
> systems ... natively already so why would they bother with a service like
> barbican?
>
> IMHO we've got to get our heads out of the sand with regard to some of this
> stuff, expecting people to consume all things OpenStack and only all things
> OpenStack is a losing battle; companies will consume what is right for their
> need, whether that is in the OpenStack community or not, it doesn't really
> matter (maybe at one point it did).

As long as they're using something secure, that's fine by me. Instead
these projects all want to reimplement the same functionality on their
own.

Does Castellan need to become something that can integrate with
Barbican + all of these other projects?

-- 
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Yaql validating performance

2017-01-17 Thread lương hữu tuấn
.

On Tue, Jan 17, 2017 at 1:10 PM, lương hữu tuấn 
wrote:

> Hi,
>
> We are now using yaql in mistral and what we see that the process of
> validating yaql expression of input takes a lot of time, especially with
> the big size input. Do you guys have any information about performance of
> yaql?
>
> Br,
>
> @Nokia/Tuan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [security] [telemetry] How to handle security bugs

2017-01-17 Thread Julien Danjou
Hi,

I've asked on #openstack-security without success, so let me try here
instead:

We, Telemetry, have a security bug and we're not managed by VMT, any
hint as how to handle our bug? Or how to get covered by VMT? 

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Yaql validating performance

2017-01-17 Thread lương hữu tuấn
Hi,

We are now using yaql in Mistral, and what we see is that the process of
validating the yaql expression of an input takes a lot of time,
especially with big inputs. Do you guys have any information about the
performance of yaql?

Br,

@Nokia/Tuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Ian Cordasco
On Tue, Jan 17, 2017 at 4:11 AM, Yolanda Robla Mota  wrote:
> Hi, in previous threads, there have been discussions about enabling FIPS,
> and the problems we are hitting with md5 inside OpenStack:
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107035.html
>
> It is important from a security perspective to enable FIPS, however
> OpenStack cannot boot with that, because of the existence of md5 calls in
> several projects. These calls are not used for security, just for hash
> generation, but even with that, FIPS is blocking them.
>
> There is a patch proposed for newest versions of python, to avoid that
> problem. The idea is that when a hash method is called, users could specify
> if these are used for security or not. If the useforsecurity flag is set to
> False, FIPS won't block the call. See: http://bugs.python.org/issue9216

This patch looks to have died off in 2013 prior to Robert's comment from today.

> This won't land until next versions of Python, however the patch is already
> on place for current RHEL and CentOS versions that are used in OpenStack
> deploys. Using that patch as a base, I have a proposal to allow FIPS
> enabling, at least in the distros that support it.
>
> The idea is to create a wrapper around md5, something like:
> md5_wrapper('string_to_hash', useforsecurity=False)

We should probably work harder on actually landing the patch in Python
first. I agree with Robert that the optional boolean parameter is
awkward. It'd be better to have a fips submodule.

> This method will check the signature of hashlib.md5, and see if that's
> offering the useforsecurity parameter. If that's offered, it will pass the
> given parameter from the wrapper. If not, we will just call
> md5('string_to_hash') .
>
> This gives us the possibility to whitelist all the md5 calls, and enabling
> FIPS kernel booting without problems. It will start to work for distros
> supporting it, and it will be ready to use generally when the patch lands in
> upstream Python and other distros adopt it. At some point, when all
> projects are using newest python versions, this wrapper could disappear and
> use md5 useforsecurity parameter natively.

I'd much rather have the upstream interface fixed in Python and then
have a wrapper that does things the correct way. Otherwise, we're
encouraging other distros to use a patch that still requires a lot of
edits to address the review comments and might be defining an API that
will never end up in Python.

> The steps needed to achieve it are:
> - create a wrapper, place it on some existing project or create a new fips
> one
> - search and replace all md5 calls used in OpenStack core projects, to use
> that new wrapper. Note that all the md5 calls will be whitelisted by
> default. We have not noted any md5 call that is used for security, but if
> one exists, it would be better to use other algorithms, in terms of
> security.
>
> What do people think about it?

I think people should work on the Python patches *first*. Once they're
merged, *then* we should potentially create a wrapper (if it's still
necessary at that point) to do this.

-- 
Ian Cordasco



Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Zhenyu Zheng
OK, added to my todo for the next cycle.

On Tue, Jan 17, 2017 at 7:08 PM, Matt Riedemann 
wrote:

> On 1/17/2017 3:31 AM, Roman Podoliaka wrote:
>
>> Hi all,
>>
>> Changing the type of column from VARCHAR(80) to VARCHAR(60) would also
>> require a data migration (i.e. a schema migration to add a new column
>> with the "correct" type, changes to the object, data migration logic)
>> as it is not an "online" DDL operation according to [1].  Adding a new
>> API microversion seems to be easier.
>>
>> Thanks,
>> Roman
>>
>>
> Yeah if we're going to do anything we should do the microversion bump
> since the DB change requires an offline schema migration which we don't
> want to do.
>
> I didn't think about the interoperability issue with the change so I agree
> it will require a microversion.
>
> As for the timing, we're two weeks from feature freeze and all API changes
> require a spec according to our policy [1]. We also have a lot of unmerged
> blueprints yet to get reviewed [2] and frankly our review numbers are
> already down this release. So if this can be held until Pike I'd prefer
> that so it's not a distraction in Ocata.
>
> [1] http://docs.openstack.org/developer/nova/blueprints.html#specs
> [2] https://blueprints.launchpad.net/nova/ocata
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>


Re: [openstack-dev] [security] FIPS compliance

2017-01-17 Thread Luke Hinds
On Tue, Jan 17, 2017 at 10:11 AM, Yolanda Robla Mota 
wrote:

> Hi, in previous threads, there have been discussions about enabling FIPS,
> and the problems we are hitting with md5 inside OpenStack:
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> November/107035.html
>
> It is important from a security perspective to enable FIPS, however
> OpenStack cannot boot with that, because of the existence of md5 calls in
> several projects. These calls are not used for security, just for hash
> generation, but even with that, FIPS is blocking them.
>
> There is a patch proposed for newest versions of python, to avoid that
> problem. The idea is that when a hash method is called, users could specify
> if these are used for security or not. If the useforsecurity flag is set to
> False, FIPS won't block the call. See: http://bugs.python.org/issue9216
>
> This won't land until newer versions of Python; however, the patch is
> already in place for the current RHEL and CentOS versions that are used in
> OpenStack deploys. Using that patch as a base, I have a proposal to allow
> FIPS enabling, at least in the distros that support it.
>
> The idea is to create a wrapper around md5, something like:
> md5_wrapper('string_to_hash', useforsecurity=False)
>
> This method will check the signature of hashlib.md5, and see if that's
> offering the useforsecurity parameter. If that's offered, it will pass the
> given parameter from the wrapper. If not, we will just call
> md5('string_to_hash') .
>
> This gives us the possibility to whitelist all the md5 calls, and enabling
> FIPS kernel booting without problems. It will start to work for distros
> supporting it, and it will be ready to use generally when the patch lands
> in upstream Python and other distros adopt it. At some point, when all
> projects are using newest python versions, this wrapper could disappear and
> use md5 useforsecurity parameter natively.
>
> The steps needed to achieve it are:
> - create a wrapper, place it on some existing project or create a new fips
> one
> - search and replace all md5 calls used in OpenStack core projects, to
> use that new wrapper. Note that all the md5 calls will be whitelisted by
> default. We have not noted any md5 call that is used for security, but if
> one exists, it would be better to use other algorithms, in terms of
> security.
>
> What do people think about it?
>
>
Sounds pragmatic to me. The other option explored was for projects to
migrate to SHA-2, but that turned out to be a huge challenge for some
projects that had complex functionality built up around md5.

I see this as a non-breaking way to allow FIPS-compliant kernels, without
throwing the baby out with the bathwater where we still use md5.




> Best
>
> --
> Yolanda Robla Mota
> NFV Partner Engineer
> yrobl...@redhat.com
> +34 605641639
>


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Matt Riedemann

On 1/17/2017 3:31 AM, Roman Podoliaka wrote:

Hi all,

Changing the type of column from VARCHAR(80) to VARCHAR(60) would also
require a data migration (i.e. a schema migration to add a new column
with the "correct" type, changes to the object, data migration logic)
as it is not an "online" DDL operation according to [1].  Adding a new
API microversion seems to be easier.

Thanks,
Roman



Yeah if we're going to do anything we should do the microversion bump 
since the DB change requires an offline schema migration which we don't 
want to do.


I didn't think about the interoperability issue with the change so I 
agree it will require a microversion.


As for the timing, we're two weeks from feature freeze and all API 
changes require a spec according to our policy [1]. We also have a lot 
of unmerged blueprints yet to get reviewed [2] and frankly our review 
numbers are already down this release. So if this can be held until Pike 
I'd prefer that so it's not a distraction in Ocata.


[1] http://docs.openstack.org/developer/nova/blueprints.html#specs
[2] https://blueprints.launchpad.net/nova/ocata

--

Thanks,

Matt Riedemann




Re: [openstack-dev] PTG? / Was (Consistent Versioned Endpoints)

2017-01-17 Thread Flavio Percoco

On 13/01/17 14:50 -0800, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-01-13 19:44:23 +:

Don't want to hijack the thread too much but... when the PTG was being sold, it 
was a way to get the various developers into one place and make it cheaper for 
devs to attend. Now it seems to be turning into a place where each of the 
silos can coexist but not talk, and then the summit is still required to get 
cross-project work done, so it only increases the devs' cost by requiring 
attendance at both. This is very troubling. :/ What's the main benefit of the 
PTG then?



I've come to the conclusion that this will still have a net positive
effect for communication.

The reason? Leaders. Not just PTL's, but all of those who are serving as
leaders, whether formal or not.

With the old system, the leaders of each project would be tasked
with attending all of the summit sessions relevant to their project,
whether cross-project, ops-centric, or project-centric. This was a
full-time job for the entirety of the summit for many. As a result,
leaders were unable to attend the conference portion of the event,
which meant no socialization of what is actually happening with their
work to the community.

Basically the leadership was there to plan, facilitate, and listen,
but not to present. They'd also be expected at the mid-cycle to help
keep up on what's really coming down the pipe for the release vs. what
was planned (and to help work on their own efforts for those with time
left to do actual development).

With the new system, the leadership will be at the PTG, and have dev-centric
conversations related to planning all week, and probably be just as busy
as they were at the summit and mid-cycle.

But with that work done at the PTG, a project leader can attend the Forum
and conference and actually participate fully in both. They can talk about
the work the team is doing, they can showcase their company's offerings
(let's keep the lights on please!) and they can spend time in the Forum
on the things that they're needed for there (which should be a fraction
of what they did at the dev summit).

For operators, unless you're sponsoring work, you can ignore the PTG just
like you ignored the mid-cycle. You can come to the forum and expect
to see the most influential developers there, just like you would have
seen them at the summit. But they will have a lot less to do that isn't
listening to you or telling you what's happening in their projects. I've
specifically heard the tales of developers, cornered in summit sessions,
being clear that they simply don't have time to listen to the operators'
needs. We can hope that this new scheme works against that feeling.

So yeah, it's new and scary. But I got over my fear of the change, and
I think you should too. Let's see how it goes, and reserve our final
judgement until after the Forum.



Loved the way you put it, Clint. I second this feeling too. Having the
opportunity to focus on the PTG entirely, and not having to multi-task across a
gazillion things, is something I'm definitely looking forward to.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [TripleO][Mistral][Ansible] Calling Ansible from Mistral workflows

2017-01-17 Thread Flavio Percoco

On 12/01/17 15:27 +, Dougal Matthews wrote:

Hey all,

I just wanted to share a quick experiment that I tried out. I had heard
there was some interest in native Ansible actions for Mistral. After much
dragging my heels I decided to give it a go, and it turns out to be very
easy.

This code is very raw and has only been lightly tested - I just wanted to
make sure it was going in the right direction and see what everyone thought.

I won't duplicate it all here, but you can see the details on either
GitHub or a quick blog post that I put together.

https://github.com/d0ugal/mistral-ansible-actions
http://www.dougalmatthews.com/2017/Jan/12/calling-ansible-from-mistral-workflows/
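The pattern Dougal describes can be sketched roughly as follows. Names here are illustrative; a real plugin would subclass Mistral's action base class and be registered under the Mistral actions entry point:

```python
import subprocess

class AnsibleAction:
    """Sketch of a Mistral-style action wrapping the ansible CLI.

    In a real plugin this would subclass Mistral's Action base class and
    be registered via an entry point; the class and method names here are
    only illustrative of the shape such an action takes.
    """

    def __init__(self, hosts, module='ping', module_args=None):
        self.hosts = hosts
        self.module = module
        self.module_args = module_args

    def command(self):
        # Build the ansible ad-hoc command without executing it.
        cmd = ['ansible', self.hosts, '-m', self.module]
        if self.module_args:
            cmd += ['-a', self.module_args]
        return cmd

    def run(self):
        # Mistral calls run(); the return value becomes the task result.
        result = subprocess.run(self.command(), capture_output=True, text=True)
        return {'returncode': result.returncode,
                'stdout': result.stdout,
                'stderr': result.stderr}

print(AnsibleAction('all', 'shell', 'uptime').command())
# ['ansible', 'all', '-m', 'shell', '-a', 'uptime']
```

Keeping command construction separate from execution, as above, also makes the action easy to unit-test without Ansible installed.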


Wow, this is awesome! Great work, Dougal.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Flavio Percoco

On 16/01/17 16:57 -0500, Jay Pipes wrote:

On 01/16/2017 04:09 PM, Fox, Kevin M wrote:

If the developers that had issue with the lack of functionality,
contributed to Barbican rather then go off on their own, the problem
would have been solved much more quickly. The lack of sharing means
the problems don't get fixed as fast.


Agreed completely.


As for operators, If the more common projects all started depending
on it, it would be commonly deployed.


Also agreed.


Would the operators deploy Barbican just for Magnum? maybe not. maybe
so. For Magnum, Ironic, and Sahara, more likely . Would they deploy
it if Neutron and Keystone depended on it, yeah. they would. And then
all the other projects would benefit from it being there, such as
Magnum.


Totally agreed.


The sooner OpenStack as a whole can decide on some new core
components so that projects can start hard depending on them, the
better I think. That process kind of stopped with the arrival of the
big tent.


You are using a false equivalence again.

As I've mentioned numerous times before on the mailing list, the Big 
Tent was NOT either of these things:


* Expanding what the "core components" of OpenStack are
* Expanding the mission or scope of OpenStack

What the Big Tent -- technically "Project Structure Reform" -- was 
about was actually the following:


* No longer having a formal incubation and graduation period/review 
for applying projects
* Having a single, objective list of requirements and responsibilities 
for inclusion into the OpenStack development community
* Specifically allowing competition of different source projects in 
the same "space" (e.g. deployment or metrics)


What you are complaining about (rightly IMHO) regarding OpenStack 
project contributors not contributing missing functionality to 
Barbican has absolutely nothing to do with the Big Tent:


There's no competing secret storage project in OpenStack other than 
Barbican/Castellan.


Furthermore, this behaviour of projects choosing to DIY/NIH something 
that existed in other projects was around long before the advent of 
the Big Tent. In fact, in this specific case, the Magnum team knew 
about Barbican, previously depended on it, and chose to make Barbican 
an option not because Barbican wasn't OpenStack -- it absolutely WAS 
-- but because it wasn't commonly deployed, which limited their own 
adoption.


What you are asking for, Kevin, is a single opinionated and 
consolidated OpenStack deployment; a single OpenStack "product" if you 
will. This is a perfectly valid request. However it has nothing to do 
with the Big Tent governance reform.


I guess this is also why castellan was created in the first place, which is to
try to avoid a single opinionated deployment, except that there's only one
secret storage service right now.

FWIW, The same thing happened with Zaqar, which was one of the first (if not the
first) project to join the Big Tent. To my knowledge, it's still neither widely
used nor deployed. Heat is using it, TripleO is using it (probably the biggest
consumer of Zaqar today). I can see Zaqar being adopted by several other 
services.

The point is, as Kevin mentioned, we would benefit more from consuming more of
our services rather than re-inventing some of this logic in every project.
We've faced this issue in different areas, and the best solution has been to
consolidate on a fixed set of solutions that we can manage, support and
contribute to; Oslo, for example.

So yeah, I'd love to see more projects consuming Barbican, even if it means that
a new service is required to have a working OpenStack.

Cheers,
Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Tim Bell

On 17 Jan 2017, at 11:28, Maish Saidel-Keesing wrote:


Please see inline.

On 17/01/17 9:36, Tim Bell wrote:

...
Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on its head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering if some opinions about Barbican are even 
coming from its heavy users, or users who’ve placed much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

The operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.
Tim - where exactly is this documentation?

We followed the doc for installation at 
http://docs.openstack.org/project-install-guide/newton/, specifically for our 
environment (RDO/CentOS) 
http://docs.openstack.org/project-install-guide/key-manager/newton/

Tim


These extra parts may not be a direct deliverable of the code contributions 
itself but they make a major difference on deployability which Barbican now 
satisfies. Big tent projects should aim to cover these areas also if they wish 
to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

--
Best Regards,
Maish Saidel-Keesing


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Thierry Carrez
Qiming Teng wrote:
> On Mon, Jan 16, 2017 at 08:21:02PM +, Fox, Kevin M wrote:
>> IMO, This is why the big tent has been so damaging to OpenStack's progress. 
>> Instead of lifting the commons up, by requiring dependencies on other 
>> projects, there by making them commonly deployed and high quality, post big 
>> tent, each project reimplements just enough to get away with making 
>> something optional, and then the commons, and OpenStack as a whole suffers. 
>> This behavior MUST STOP if OpenStack is to make progress again. Other 
>> projects, such as Kubernetes are making tremendous progress because they are 
>> not hamstrung by one component trying desperately not to depend on another 
>> when the dependency is appropriate. They enhance the existing component 
>> until its suitable and the whole project benefits. Yes, as an isolated dev, 
>> the behavior to make deps optional seems to make sense. But as a whole, 
>> OpenStack is suffering and will become increasingly irrelevant moving 
>> forward if the current path is continued. Please, please reconsider what the 
>> current stance on dependencies is doing to the community. This problem is not just isolated to Barbican, but 
>> lots of other projects as well. We can either help pull each other up, or we 
>> can step on each other to try and get "on top". I'd rather we help each 
>> other than continue down the destructive path we seem to be on.
> 
> Very well said, Kevin. The problem is not just about Barbican. Time for
> the TC and the whole community to rethink or even just to realize
> where we are heading ... Time for each and every projects to do some
> introspection ... Time to solve this chicken-and-egg problem.

The service dependency issue is, indeed, a difficult problem to solve.
In the early days of OpenStack, we decided that every service could
assume that a number of base services would be available: a relational
database (MySQL), a message queue (RabbitMQ), and an AuthN/AuthZ token
service (Keystone). That served us well, but we were unable to grow that
set of "base services".

We need more advanced features, like a distributed lock manager
(Zookeeper?), or a secrets vault (Barbican?), but rather than making the
hard decision, we work around their absence in every project, badly
emulating those features using what we have. This has nothing to do with
the big tent or the way we structure projects. It just has to do with
the size of this community. It was easier to agree to depend on MySQL
and RabbitMQ and Keystone when we were 100.

Now, how do we solve it ? First, we need to realize what the issue is,
define language around it. Using the Architecture WG as a vehicle, I
started to push the idea of defining "base services"[1] (resources that
other services can assume will be present). This is the first step:
realizing we do have base services, and need a way to *extend* them.

[1]
https://git.openstack.org/cgit/openstack/arch-wg/tree/proposals/base-services.rst

The next step will be to propose NEW base services. It's simpler than
you think -- the TC will just say it's fine to assume that service X
will be present. We obviously need to pick the right solutions, the ones
that solve the problem set and actually are not horrible to deploy. I
expect the Architecture WG to help in that analysis. But beyond that,
making the decision that it is OK to depend on them is not that hard.

> Stick together, there seems still a chance for future; otherwise, we
> will feel guilty wasting people's life building something that is
> falling apart eventually. Should we kill all "sh**-as-a-service"
> projects and focus on the few "core" services and hope they will meet
> all users' requirements? Or, should we give every project an equal
> chance to be adopted? Who is blocking other services to get adopted?
> How many projects are listed on the project navigator?

I think the focus question is an illusion, as Ed brilliantly explained
in https://blog.leafe.com/openstack-focus/

The issue here is that it's just a lot more profitable career-wise and a
lot less risky to work on first-level user-visible features like Machine
Learning as a service, than it is to work on infrastructural services
like Glance, Keystone or Barbican. Developers naturally prefer to go to
shiny objects than to boring technology. As long as their corporate
sponsors are happy with them ignoring critical services, that will
continue. Saying that some of those things are not part of our
community, while they are developed by our community, is sticking our
heads in the sand.

We can certainly influence where those corporate sponsors dedicate their
development resources (and I think we should definitely pursue the base
service stuff, to send a strong signal), but we don't directly control
where the resources are spent.

-- 
Thierry Carrez (ttx)

[openstack-dev] [sahara] Pike's PTG etherpad

2017-01-17 Thread Vitaly Gridnev
Hello team,

Let’s start collecting ideas for Pike’s PTG in the etherpad [0]. For reference 
there is a collection of the etherpads for other teams [1]. 
So, feel free to add topics for discussion, but don’t forget to add your 
contact information. Thanks. 

[0] https://etherpad.openstack.org/p/sahara-ptg-pike 

[1] https://wiki.openstack.org/wiki/PTG/Pike/Etherpads 
 

Best regards,
Vitaly Gridnev




Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Roman Podoliaka
Hi all,

Changing the type of column from VARCHAR(80) to VARCHAR(60) would also
require a data migration (i.e. a schema migration to add a new column
with the "correct" type, changes to the object, data migration logic)
as it is not an "online" DDL operation according to [1].  Adding a new
API microversion seems to be easier.

Thanks,
Roman

[1] 
https://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html#innodb-online-ddl-column-properties
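To illustrate why the microversion route is lighter: the length check can live entirely in the API request-validation layer, leaving the VARCHAR(80) column untouched. A minimal sketch (the microversion number and function names here are hypothetical, not Nova's actual code):

```python
MAX_TAG_LEN_DB = 80    # current VARCHAR(80) column width
MAX_TAG_LEN_API = 60   # limit advertised by a hypothetical new microversion

def validate_tag(tag, microversion):
    """Reject over-long tags at the API layer instead of shrinking the column.

    Requests on older microversions keep the old 80-character limit, so
    existing clients are not broken; only clients opting into the new
    microversion see the stricter 60-character limit.
    """
    limit = MAX_TAG_LEN_API if microversion >= (2, 42) else MAX_TAG_LEN_DB
    if len(tag) > limit:
        raise ValueError('tag exceeds %d characters' % limit)
    return tag

validate_tag('x' * 70, (2, 41))    # accepted under the old microversion
# validate_tag('x' * 70, (2, 42)) would raise ValueError
```

Nothing stored in the database ever exceeds the column width under either limit, which is why no schema or data migration is needed.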

On Tue, Jan 17, 2017 at 10:19 AM, Sergey Nikitin  wrote:
> Hi, Zhenyu!
>
> I think we should ask DB guys about migration. But my personal opinion is
> that a DB migration is much more painful than a new microversion.
>
>>  But it seems too late to have a microversion for this cycle.
>
>
> Correct me if I'm wrong but I thought that Feature Freeze will be in action
> Jan 26.
> https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule
>
> Even if we need a new microversion I think it will be a specless
> microversion and patch will change about 5 lines of code. We can merge such
> patch in one day.
>
>
>


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Maish Saidel-Keesing
Please see inline.


On 17/01/17 9:36, Tim Bell wrote:
>
>> On 17 Jan 2017, at 01:19, Brandon B. Jozsa wrote:
>>
>> Inline
>>
>> On January 16, 2017 at 7:04:00 PM, Fox, Kevin M (kevin@pnnl.gov) wrote:
>>
>>>
>>> I'm not stating that the big tent should be abolished and we go back
>>> to the way things were. But I also know the status quo is not
>>> working either. How do we fix this? Anyone have any thoughts? 
>>>
>>
>> Are we really talking about Barbican or has the conversation drifted
>> towards Big Tent concerns?
>>
>> Perhaps we can flip this thread on its head and more positively
>> discuss what can be done to improve Barbican, or ways that we can
>> collaboratively address any issues. I’m almost wondering if some
>> opinions about Barbican are even coming from its heavy users, or
>> users who’ve placed much time into developing/improving Barbican? If
>> not, let’s collectively change that.
>>
>
> When we started deploying Magnum, there was a pre-req for Barbican to
> store the container engine secrets. We were not so enthusiastic since
> there was no puppet configuration or RPM packaging.  However, with a
> few upstream contributions, these are now all resolved.
>
> the operator documentation has improved, HA deployment is working and
> the unified openstack client support is now available in the latest
> versions.
Tim - where exactly is this documentation?
>
> These extra parts may not be a direct deliverable of the code
> contributions itself but they make a major difference on deployability
> which Barbican now satisfies. Big tent projects should aim to cover
> these areas also if they wish to thrive in the community.
>
> Tim
>
>>
>>> Thanks, 
>>> Kevin 
>>
>> Brandon B. Jozsa
>>

-- 
Best Regards,
Maish Saidel-Keesing


[openstack-dev] [security] FIPS compliance

2017-01-17 Thread Yolanda Robla Mota
Hi, in previous threads, there have been discussions about enabling FIPS,
and the problems we are hitting with md5 inside OpenStack:
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107035.html

It is important from a security perspective to enable FIPS, however
OpenStack cannot boot with that, because of the existence of md5 calls in
several projects. These calls are not used for security, just for hash
generation, but even with that, FIPS is blocking them.

There is a patch proposed for newer versions of Python to avoid that
problem. The idea is that when a hash method is called, users could specify
if these are used for security or not. If the useforsecurity flag is set to
False, FIPS won't block the call. See: http://bugs.python.org/issue9216

This won't land until newer versions of Python; however, the patch is already
in place for the current RHEL and CentOS versions that are used in OpenStack
deployments. Using that patch as a base, I have a proposal to allow enabling
FIPS, at least in the distros that support it.

The idea is to create a wrapper around md5, something like:
md5_wrapper('string_to_hash', useforsecurity=False)

This method will check the signature of hashlib.md5, and see if that's
offering the useforsecurity parameter. If that's offered, it will pass the
given parameter from the wrapper. If not, we will just call
md5('string_to_hash') .

This gives us the possibility to whitelist all the md5 calls, and enabling
FIPS kernel booting without problems. It will start to work for distros
supporting it, and it will be ready for general use once the patch lands
in upstream Python and other distros adopt it. At some point, when all
projects are using new enough Python versions, this wrapper could disappear
in favor of using md5's useforsecurity parameter natively.
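A minimal sketch of such a wrapper, based only on the signature probing described above. The flag is spelled useforsecurity in this proposal, while the upstream discussion also uses usedforsecurity, so the sketch probes for both; treat the exact spelling as an assumption:

```python
import hashlib
import inspect

def _md5_flag_name():
    """Return the security-flag parameter name if this hashlib.md5 has one."""
    try:
        params = inspect.signature(hashlib.md5).parameters
    except (TypeError, ValueError):
        # Builtin functions may expose no introspectable signature.
        return None
    for name in ('useforsecurity', 'usedforsecurity'):
        if name in params:
            return name
    return None

_FLAG = _md5_flag_name()

def md5_wrapper(data=b'', useforsecurity=False):
    """md5 that passes the FIPS whitelist flag when the platform supports it."""
    if _FLAG is not None:
        return hashlib.md5(data, **{_FLAG: useforsecurity})
    # Patched hashlib not available: fall back to a plain md5 call.
    return hashlib.md5(data)

print(md5_wrapper(b'abc').hexdigest())
# 900150983cd24fb0d6963f7d28e17f72
```

Probing the signature once at import time keeps the per-call overhead negligible, and the same code runs unmodified on both patched and unpatched interpreters.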

The steps needed to achieve it are:
- create a wrapper, place it on some existing project or create a new fips
one
- search and replace all md5 calls used in OpenStack core projects, to use
that new wrapper. Note that all the md5 calls will be whitelisted by
default. We have not noted any md5 call that is used for security, but if
one exists, it would be better to switch it to another algorithm, in terms
of security.

What do people think about it?

Best

-- 
Yolanda Robla Mota
NFV Partner Engineer
yrobl...@redhat.com
+34 605641639


Re: [openstack-dev] [tricircle]Tricircle Pike PTG

2017-01-17 Thread joehuang
Hello,

As only a few of us may go to Atlanta, the etherpad has been renamed to reflect 
the fact that ours will be a "virtually distributed PTG"

https://etherpad.openstack.org/p/tricircle-pike-design-topics

We will discuss this in the weekly meeting: what date, time and venue during the 
PTG, and how to meet up for contributors who can't go to Atlanta? It would be great 
to hold it at the same time, like a virtually distributed PTG: some in PTG 
Atlanta, some in other places, but inter-connected through an online etherpad.

Best Regards
Chaoyi Huang (joehuang)

From: joehuang
Sent: 17 January 2017 10:22
To: openstack-dev
Subject: [openstack-dev][tricircle]Tricircle Pike PTG

As the Ocata stable branch will be created and released soon, it's time to 
prepare what we need to discuss and implement in Pike release:

The etherpad has been created at: 
https://etherpad.openstack.org/p/tricircle-ptg-pike

Please feel free to add the topics, ideas into the etherpad, and let's plan the 
agenda as well.

Best Regards
Chaoyi Huang (joehuang)


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Zhenyu Zheng
OK, then, let's try to work this out.

On Tue, Jan 17, 2017 at 4:19 PM, Sergey Nikitin 
wrote:

> Hi, Zhenyu!
>
> I think we should ask the DB guys about the migration. But my personal
> opinion is that a DB migration is much more painful than a new microversion.
>
>  But it seems too late to have a microversion for this cycle.
>>
>
> Correct me if I'm wrong but I thought that Feature Freeze will be in
> action Jan 26.
> https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule
>
> Even if we need a new microversion, I think it will be a specless
> microversion and the patch will change about 5 lines of code. We can merge
> such a patch in one day.
>
>
>


Re: [openstack-dev] [machine learning] Question: Why there is no serious project for machine learning ?

2017-01-17 Thread 严超
Thank you, Eran. This is a rather interesting reply.
Thank you very much.

On Mon, Jan 16, 2017 at 6:07 PM, wrote:

> > Not sure what you mean by serious.
> >
> > Maybe you could have a look at Meteos[1]. It is a young project but
> surely
> > focuses on machine learning.
> >
> > [1]: https://wiki.openstack.org/wiki/Meteos
> Another avenue is to use Storlets for either the learning or the prediction
> phase, where the data resides in Swift.
> We are currently adding IPython integration [1] that makes it very
> easy to deploy and invoke Storlets from IPython (a data scientist's
> beloved tool :-), plus [2] is initial work towards leveraging
> Storlets for machine learning.
>
> In a few more words: Storlets [3] allow running a serverless computation
> inside Swift nodes, where the computation is done inside a Docker
> container. This basically means that you can write a piece of code (in
> either Python or Java), upload that code to Swift (as if it were a data
> object), and then invoke the uploaded code (called a storlet) on your
> data (much like AWS Lambda). The nice thing is that the Docker image
> where the storlet is executed can be tailored by the admin, e.g. to
> make sure it has scikit-learn installed. With such a Docker
> image you can write a storlet that would run the scikit-learn
> algorithms on swift objects.
>
>
> [1] https://review.openstack.org/#/c/416089/
> [2] https://github.com/eranr/mlstorlets
> [3] http://storlets.readthedocs.io/en/latest/
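To make the description above concrete, here is a minimal sketch of a Python
storlet class. The (logger, in_files, out_files, params) shape follows the
Storlets documentation; the class name and the word-count logic are
illustrative (a real ML storlet would call scikit-learn here instead):

```python
import json


class WordCountStorlet(object):
    """Illustrative storlet: reads the input object from Swift and
    writes a word-count summary as the output object."""

    def __init__(self, logger):
        # The framework hands the storlet a logger on construction.
        self.logger = logger

    def __call__(self, in_files, out_files, params):
        # in_files/out_files are lists of file-like stream objects
        # wired to the Swift objects involved in the request.
        data = in_files[0].read()
        counts = {}
        for word in data.decode('utf-8').split():
            counts[word] = counts.get(word, 0) + 1
        out_files[0].write(json.dumps(counts).encode('utf-8'))
        in_files[0].close()
        out_files[0].close()
```

The class is uploaded to Swift like any object, and a later GET/PUT on a
data object with the appropriate storlet header runs it inside the Docker
container on the storage node.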
>
>


Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Sergey Nikitin
Hi, Zhenyu!

I think we should ask the DB guys about the migration. But my personal opinion
is that a DB migration is much more painful than a new microversion.

> But it seems too late to have a microversion for this cycle.

Correct me if I'm wrong but I thought that Feature Freeze will be in action
Jan 26.
https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule

Even if we need a new microversion, I think it will be a specless
microversion and the patch will change about 5 lines of code. We can merge
such a patch in one day.