Re: [openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-16 Thread Dennis Kramer (DT)

Hi Dmitry,

I've been using Ubuntu 14.04 LTS + Icehouse with Ceph as a storage
backend for glance, cinder and nova (kvm/libvirt). I *really* would
love to see this patch series land in Juno. It has been a real
performance issue because of the unnecessary re-copy from and to Ceph
when using the default "boot from image" option. It seems that your
fix would be the solution. IMHO this is one of the most important
features when using Ceph RBD as a backend for OpenStack Nova.

Can you point me in the right direction on how to apply this patch
series of yours on a default Ubuntu 14.04 LTS + Icehouse installation?
I'm using the default Ubuntu packages since Icehouse lives in core,
and I'm not sure how to apply the patch series. I would love to test
and review it.
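(For reference, something like the following is what I had in mind; untested
on my side, and the Gerrit refspec below is a placeholder, the exact one is
shown in the download box on each review page:)

# check out the matching stable branch of nova
cd /tmp && git clone -b stable/icehouse https://github.com/openstack/nova.git
cd nova
# fetch one change from Gerrit; replace the refspec with the real one
git fetch https://review.openstack.org/openstack/nova refs/changes/22/91722/1
git format-patch -1 FETCH_HEAD --stdout > rbd-clone.patch
# dry-run against the files installed by the Ubuntu packages
sudo patch -p1 --dry-run -d /usr/lib/python2.7/dist-packages < rbd-clone.patch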

With regards,

Dennis

On 07/16/2014 11:18 PM, Dmitry Borodaenko wrote:
> I've got a bit of good news and bad news about the state of landing
> the rbd-ephemeral-clone patch series for Nova in Juno.
> 
> The good news is that the first patch in the series 
> (https://review.openstack.org/91722 fixing a data loss inducing bug
> with live migrations of instances with RBD backed ephemeral drives)
> was merged yesterday.
> 
> The bad news is that after 2 months of sitting in the review queue and
> only getting its first +1 from a core reviewer on the spec
> approval freeze day, the spec for the blueprint 
> rbd-clone-image-handler (https://review.openstack.org/91486)
> wasn't approved in time. Because of that, today the blueprint was
> rejected along with the rest of the commits in the series, even
> though the code itself was reviewed and approved a number of
> times.
> 
> Our last chance to avoid putting this work on hold for yet another 
> OpenStack release cycle is to petition for a spec freeze exception 
> in the next Nova team meeting: 
> https://wiki.openstack.org/wiki/Meetings/Nova
> 
> If you're using Ceph RBD as a backend for ephemeral disks in Nova and
> are interested in this patch series, please speak up. Since the
> biggest concern raised about this spec so far has been lack of CI 
> coverage, please let us know if you're already using this patch 
> series with Juno, Icehouse, or Havana.
> 
> I've put together an etherpad with a summary of where things are 
> with this patch series and how we got here: 
> https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status
> 
> Previous thread about this patch series on ceph-users ML: 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html
>
___
ceph-users mailing list
ceph-us...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-16 Thread Oleg Bondarev
Thanks for setting this up, Kyle. Wednesday 1500 UTC works for me.

Thanks,
Oleg


On Thu, Jul 17, 2014 at 6:35 AM, Kyle Mestery  wrote:

> On Wed, Jul 16, 2014 at 9:28 PM, Michael Still  wrote:
> > That time is around 1am for me. I'm ok with that as long as someone on
> > the nova side can attend in my place.
> >
> > Michael
> >
> Some of the neutron contributors to this effort are in Europe and
> Russia, so finding a time slot to get everyone could prove tricky.
> I'll leave this slot for now and hope we can get someone else from nova
> to attend, Michael. If not, we'll move this to another time.
>
> Thanks!
> Kyle
>
> > On Thu, Jul 17, 2014 at 12:22 PM, Kyle Mestery 
> wrote:
> >> As we're getting down to the wire in Juno, I'd like to propose we have
> >> a weekly meeting on the nova-network and neutron parity effort. I'd
> >> like to start this meeting next week, and I'd like to propose
> >> Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
> >> location. If this works for people, please reply on this thread, or
> >> suggest an alternate time. I've started a meeting page [1] to track
> >> agenda for the first meeting next week.
> >>
> >> Thanks!
> >> Kyle
> >>
> >> [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity
> >>
> >
> >
> >
> > --
> > Rackspace Australia
> >
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] glance.store repo created

2014-07-16 Thread Flavio Percoco
Greetings,

I'd like to announce that we finally got the glance.store repo created.

This library pulls the store-related code out of glance. Not many
changes were made to the API during this process. The main goal, for
now, is to switch glance over while keeping backwards compatibility
with Glance, to reduce the number of changes required. We'll improve
and revamp the store API during K - FWIW, I have a spec draft with
ideas for it.

The library still needs some work and this is a perfect moment for
anyone interested to chime in and contribute to the library. Some things
that are missing:

- Swift store (Nikhil is working on this)
- Sync latest changes made to the store code.

If you've recently made changes to any of the stores, please go ahead
and contribute them back to `glance.store` or let me know so I can do it.

I'd also like to ask reviewers to request contributions to the `store`
code in glance to be proposed to `glance.store` as well. This way, we'll
be able to keep parity.

I'll be releasing an alpha version soon so we can start reviewing the
glance switch-over. Obviously, we won't merge it until we have feature
parity.

Any feedback is obviously very welcome,
Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Thomas Herve

> > >The check url is already a part of Neutron LBaaS IIRC.
> 
> Yep. LBaaS is a work in progress, right?

You mean more than OpenStack in general? :) The LBaaS API in Neutron has been
working fine since Havana. It certainly has shortcomings, and it seems a big
refactoring is planned, though.

> Those of us using Nova networking are not feeling the love, unfortunately.

That's to be expected. nova-network is going to be supported, but you won't get 
new features for it.

> As far as Heat goes, there is no LBaaS resource type. The
> OS::Neutron::LoadBalancer resource type does not have any health checking
> properties.

There are 4 resources related to neutron load balancing. 
OS::Neutron::LoadBalancer is probably the least useful and the one you can 
*not* use, as it's only there for compatibility with 
AWS::AutoScaling::AutoScalingGroup. OS::Neutron::HealthMonitor does the health 
checking part, although maybe not in the way you want it.
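(For example, with the LBaaS v1 CLI; the values here are purely illustrative:)

neutron lb-healthmonitor-create --type HTTP --url-path /healthcheck \
--delay 5 --timeout 3 --max-retries 3
neutron lb-healthmonitor-associate HEALTH_MONITOR_ID POOL_ID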

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone V3] not able to create cloud_admin user within the admin_domain domain

2014-07-16 Thread foss geek
Dear All,

I have a 3-node OpenStack (controller + compute + storage node) deployment.
I have integrated keystone with OpenLDAP.

I have configured keystone to do authentication through LDAP and assignment
from SQL.

Here is configuration entry in keystone.conf

[identity]

driver = keystone.identity.backends.ldap.Identity

[assignment]

driver = keystone.assignment.backends.sql.Assignment


Here is LDAP Schema:

# cat tcl.ldif
dn: dc=TCL
dc: TCL
objectclass: top
objectclass: domain

dn: ou=TCL,dc=TCL
objectClass: organizationalUnit
objectClass: top
ou: TCL

I have manually created the openstack service users and the admin user so
that the LDAP driver can place the necessary details in the LDAP database.
I am able to log in to openstack as the admin user, and all functionality is
working fine post LDAP integration.

Here is my LDAP schema with the admin and service users.

# ldapsearch -x -h  -W -D"dc=Manager,dc=TCL" -b dc=TCL

Enter LDAP Password:


# extended LDIF
#
# LDAPv3
# base <dc=TCL> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# TCL
dn: dc=TCL

# TCL, TCL
dn: ou=TCL,dc=TCL

# a8f8ed812aba458ba42d0fbfc0145bd4, TCL, TCL
dn: cn=a8f8ed812aba458ba42d0fbfc0145bd4,ou=TCL,dc=TCL

# c8d9eef1a2044f08b6ae5eb509ff3c83, TCL, TCL
dn: cn=c8d9eef1a2044f08b6ae5eb509ff3c83,ou=TCL,dc=TCL

# 8c4a189b78204b2c87a9e70997afa4fe, TCL, TCL
dn: cn=8c4a189b78204b2c87a9e70997afa4fe,ou=TCL,dc=TCL

# 5c90951603a444db826eb48672843183, TCL, TCL
dn: cn=5c90951603a444db826eb48672843183,ou=TCL,dc=TCL

# 1c60c85acf3942cebbdec91fea1d9b75, TCL, TCL
dn: cn=1c60c85acf3942cebbdec91fea1d9b75,ou=TCL,dc=TCL

# bbc4d9fa57724d31ba016f572951a474, TCL, TCL
dn: cn=bbc4d9fa57724d31ba016f572951a474,ou=TCL,dc=TCL

# 78839ea49f82468b831efb6c08167360, TCL, TCL
dn: cn=78839ea49f82468b831efb6c08167360,ou=TCL,dc=TCL

# search result
search: 2
result: 0 Success

# numResponses: 10
# numEntries: 9

Now I am trying to enable the Keystone V3 API. I am following this URL:
http://www.florentflament.com/blog/setting-keystone-v3-domains.html

ADMIN_TOKEN=$(\
curl http://192.169.0.2:5000/v3/auth/tokens \
-s \
-i \
-H "Content-Type: application/json" \
-d '
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "Default"
                    },
                    "name": "admin",
                    "password": "I0DzaQ3LkSUpS1eW89"
                }
            }
        },
        "scope": {
            "project": {
                "domain": {
                    "name": "Default"
                },
                "name": "admin"
            }
        }
    }
}' | grep ^X-Subject-Token: | awk '{print $2}' )



# echo $ADMIN_TOKEN

be1a1c02623740aeb72fa8c2dfdb8bbb



ID_ADMIN_DOMAIN=$(\
curl http://192.169.0.2:5000/v3/domains \
-s \
-H "X-Auth-Token: $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d '
{
    "domain": {
        "enabled": true,
        "name": "admin_domain"
    }
}' | jq .domain.id | tr -d '"' )


# echo $ID_ADMIN_DOMAIN
null

I am getting the below error message:

{"error": {"message": "Conflict occurred attempting to store domain.
(IntegrityError) (1062, \"Duplicate entry 'admin_domain' for key 'name'\")
'INSERT INTO domain (id, name, enabled, extra) VALUES (%s, %s, %s, %s)'
('ea3e791ffa524ca29e43099682ceee8f', 'admin_domain', 1, '{}')", "code":
409, "title": "Conflict"}}


It says that admin_domain already exists. It seems that by default it comes
with the admin_domain and Default domains. Here is my domain list.


# curl -X GET -H "X-Auth-token:$ADMIN_TOKEN"
http://192.169.0.2:5000/v3/domains | jq '.domains'

[
  {
    "name": "admin_domain",
    "links": {
      "self": "http://192.169.0.2:5000/v3/domains/1fdf6cd4da99480797d3e2a08d6a8591"
    },
    "id": "1fdf6cd4da99480797d3e2a08d6a8591",
    "enabled": true
  },
  {
    "id": "default",
    "name": "Default",
    "description": "Owns users and tenants (i.e. projects) available on Identity API v2.",
    "enabled": true,
    "links": {
      "self": "http://192.169.0.2:5000/v3/domains/default"
    }
  }
]


I have manually set the ID_CLOUD_ADMIN variable.

# ID_CLOUD_ADMIN=1fdf6cd4da99480797d3e2a08d6a8591

# echo $ID_CLOUD_ADMIN

1fdf6cd4da99480797d3e2a08d6a8591

The problem is that when I try to create the cloud_admin user, it fails
with "Could not find domain".

ID_CLOUD_ADMIN=$(\
curl http://192.169.0.2:5000/v3/users \
-s \
-H "X-Auth-Token: $ADMIN_TOKEN" \
-H "Content-Type: application/json" \
-d "
{
\"user\": {
\"description\": \"Cloud administrator\",
\"domain_id\": \"$ID_ADMIN_DOMAIN\",
\"enabled\": true,
\"name\": \"cloud_admin\",
\"password\": \"password\"
}
}" | jq .user.id | tr -d '"' )


# echo $ID_CLOUD_ADMIN
null

{"error": {"message": "Could not find domain, null.", "code": 404, "title":
"Not Found"}}

Has anybody faced a similar issue?

Do I need to delete

Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-16 Thread Johnson Cheng
Dear Kanagaraj,

Thanks for your reply.
I installed it on the compute node before, and it didn't work.
I will try it again on the controller node.


Thanks,
Johnson

From: Manickam, Kanagaraj [mailto:kanagaraj.manic...@hp.com]
Sent: Thursday, July 17, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

I think it should be on the cinder node, which is usually deployed on the
controller node.

From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
Sent: Thursday, July 17, 2014 10:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] Integrated with iSCSI target Question

Dear All,

I have three nodes, a controller node and two compute nodes (volume nodes).
The default value for iscsi_helper in cinder.conf is "tgtadm"; I will change
it to "ietadm" to integrate with an iSCSI target.
Unfortunately I am not sure whether iscsitarget should be installed on the
controller node or the compute nodes.
Is there any reference?


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-16 Thread Manickam, Kanagaraj
I think it should be on the cinder node, which is usually deployed on the
controller node.
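Something like this on that node, as a sketch (untested; the package names
are for Ubuntu):

# install the iSCSI Enterprise Target packages
sudo apt-get install iscsitarget iscsitarget-dkms
# switch the helper in cinder.conf and restart the volume service
sudo sed -i 's/^iscsi_helper.*/iscsi_helper = ietadm/' /etc/cinder/cinder.conf
sudo service cinder-volume restart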

From: Johnson Cheng [mailto:johnson.ch...@qsantechnology.com]
Sent: Thursday, July 17, 2014 10:38 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Cinder] Integrated with iSCSI target Question

Dear All,

I have three nodes, a controller node and two compute nodes (volume nodes).
The default value for iscsi_helper in cinder.conf is "tgtadm"; I will change
it to "ietadm" to integrate with an iSCSI target.
Unfortunately I am not sure whether iscsitarget should be installed on the
controller node or the compute nodes.
Is there any reference?


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-16 Thread Manickam, Kanagaraj
Hi Zane,

Please find inline answer.

Regards
Kanagaraj M

-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, July 17, 2014 10:01 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat]Heat Db Model updates

On 16/07/14 23:48, Manickam, Kanagaraj wrote:
> I have gone through the Heat database and found the drawbacks in the
> existing model listed below. Could you review and add anything
> missing here? Thanks.
>
> The Heat database model has the following drawbacks:
>
> 1. Duplicate information
>
> 2. Incomplete naming of columns
>
> 3. Inconsistency in the identifiers (id) and deleted_at columns across
> the tables
>
> 4. The resource table is nova-specific; make it generic
>
> 5. Pre-defined constants are not using enum.
>
> And the section provided below describes these problems table by table.
>
> *Stack*
>
> Duplicate info
>
> Tenant & stack_user_project_id

These are different things; "stack_user_project_id" is the project/tenant in 
which Heat creates users (in a different domain); "tenant" is the 
project/tenant in which the stack itself was created.


KanagarajM > 


> Credentials_id & username, owner_id.
>
> Tenant is also part of user_creds and Stack always has credentials_id,
> so what is the need of having tenant info in the stack table? In the
> stack table the credentials_id alone is sufficient.

tenant is in the Stack table because we routinely query by tenant and we don't 
want to have to do a join.

There may be a legitimate reason for the UserCreds table to exist separately 
from the Stack table but I don't know what it is, so merging the two is an 
option. 

Kanagaraj M> user_creds is consumed only by stack, and I assumed that for
one user, say 'admin1', there would be one row in the user_creds table, with
that user owning more than one stack. But to validate this, I created two
stacks with the same user, and a separate row was created for each stack.
So, as you suggested, I also feel that stack and user_creds could be merged
unless there is some other reason; it would remove the redundancy too.

> Status & action should be an enum of predefined statuses

+1. I assume it is still easy to add more actions later?

> *User_creds*
>
> correct the spelling in Truststore_id

"trustor_user_id" is correct.

Kanagaraj M> sure.

> *Resource*
>
> Status & action should be an enum of predefined statuses

+1

> Rsrc_metadata - use the full name resource_metadata

-0. I don't see any benefit here.

KanagarajM > It is just to keep the naming format consistent.

> Why is there only a nova_instance column? How about other services like
> cinder and glance? The column could be renamed to be generic enough.

+1 this should have been called "physical_resource_id".

> *Watch_rule*
>
> Last_evaluated -> append _at

I really don't see the point.

KanagarajM > It is just to keep the naming format consistent: created_at,
updated_at, and so last_evaluated_at.

> State should be an enum

+1

> *Event*
>
> Why are both uuid and id used?

I believe it's because you should always use an integer as the primary key. I'm 
not sure if it makes a difference even though we _never_ do a lookup by the 
(integer) id.

Kanagaraj M> In OpenStack, most of the services have migrated from INT to
UUID for the primary key. More than that, it would be nice for consistency:
when the user accesses resources over the REST API, using UUIDs for all the
entities in the heat project makes the user/developer experience easier.

> Resource_action is used in both the event and resource tables, so it
> should be moved to a common table

I'm not sure what this means. Do you mean a common base class?

KanagarajM > Yes, it's implementation-specific; we can have a common Python
base class.


> Resource_status should be an enum

+1

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Cinder] Integrated with iSCSI target Question

2014-07-16 Thread Johnson Cheng
Dear All,

I have three nodes, a controller node and two compute nodes(volume node).
The default value for iscsi_helper in cinder.conf is "tgtadm", I will change to 
"ietadm" to integrate with iSCSI target.
Unfortunately I am not sure that iscsitarget should be installed at controller 
node or compute node?
Have any reference?


Regards,
Johnson

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Joe Jiang
Hi all,
Thanks for your responses.


I tried running # sudo semanage port -l | grep 5000 in my environment and
got the same information:
>> ...
>> commplex_main_port_t tcp 5000
>> commplex_main_port_t udp 5000
Then I wanted to remove this port (5000) from the SELinux policy rules list
using this command (semanage port -d -p tcp -t commplex_port_t 5000),
but the console echoes "/usr/sbin/semanage: Port tcp/5000 is defined in
policy, cannot be deleted", and 'udp/5000' gives the same reply.
Some sources[1] say this port is declared in the corenetwork source policy,
which is compiled into the base module.
So, do I have to recompile the SELinux module?
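(If modifying the existing definition is allowed, something like this might
avoid a recompile; untested on my side:)

# change the type of the already-defined port instead of deleting it
sudo semanage port -m -t http_port_t -p tcp 5000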




Thanks.
Joe.


[1]
http://www.redhat.com/archives/fedora-selinux-list/2009-September/msg00056.html








>> Another problem with port 5000 in Fedora, and probably more recent
>> versions of RHEL, is the selinux policy:
>>  
>> # sudo semanage port -l|grep 5000
>> ...
>> commplex_main_port_t tcp 5000
>> commplex_main_port_t udp 5000
>>  
>> There is some service called "commplex" that has already "claimed" port
>> 5000 for its use, at least as far as selinux goes.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-16 Thread Zane Bitter

On 16/07/14 23:48, Manickam, Kanagaraj wrote:

I have gone through the Heat database and found the drawbacks in the
existing model listed below. Could you review and add anything
missing here? Thanks.

The Heat database model has the following drawbacks:

1. Duplicate information

2. Incomplete naming of columns

3. Inconsistency in the identifiers (id) and deleted_at columns across
the tables

4. The resource table is nova-specific; make it generic

5. Pre-defined constants are not using enum.

And the section provided below describes these problems table by table.

*Stack*

Duplicate info

Tenant & stack_user_project_id


These are different things; "stack_user_project_id" is the 
project/tenant in which Heat creates users (in a different domain); 
"tenant" is the project/tenant in which the stack itself was created.



Credentials_id & username, owner_id.

Tenant is also part of user_creds and Stack always has credentials_id,
so what is the need of having tenant info in the stack table? In the
stack table the credentials_id alone is sufficient.


tenant is in the Stack table because we routinely query by tenant and we 
don't want to have to do a join.


There may be a legitimate reason for the UserCreds table to exist 
separately from the Stack table but I don't know what it is, so merging 
the two is an option.



Status & action should be an enum of predefined statuses


+1. I assume it is still easy to add more actions later?


*User_creds*

correct the spelling in Truststore_id


"trustor_user_id" is correct.


*Resource*

Status & action should be an enum of predefined statuses


+1


Rsrc_metadata - use the full name resource_metadata


-0. I don't see any benefit here.


Why is there only a nova_instance column? How about other services like
cinder and glance? The column could be renamed to be generic enough.


+1 this should have been called "physical_resource_id".


*Watch_rule*

Last_evaluated -> append _at


I really don't see the point.


State should be an enum


+1


*Event*

Why are both uuid and id used?


I believe it's because you should always use an integer as the primary 
key. I'm not sure if it makes a difference even though we _never_ do a 
lookup by the (integer) id.



Resource_action is used in both the event and resource tables, so it
should be moved to a common table


I'm not sure what this means. Do you mean a common base class?


Resource_status should be an enum


+1

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Mike Spreitzer
Doug Wiegley  wrote on 07/16/2014 04:58:52 PM:

> On 7/16/14, 2:43 PM, "Clint Byrum"  wrote:
> 
> >Excerpts from Mike Spreitzer's message of 2014-07-16 10:50:42 -0700:
> ...
> >> I noticed that health checking in AWS goes beyond convergence. In AWS
> >> an ELB can be configured with a URL to ping, for application-level
> >> health checking. And an ASG can simply be *told* the health of a member
> >> by a user's own external health system. I think we should have
> >> analogous functionality in OpenStack. Does that make sense to you? If
> >> so, do you have any opinion on the right way to integrate, so that we
> >> do not have three completely independent health maintenance systems?
> >
> >The check url is already a part of Neutron LBaaS IIRC.

Yep.  LBaaS is a work in progress, right?  Those of us using Nova 
networking are not feeling the love, unfortunately.

As far as Heat goes, there is no LBaaS resource type.  The 
OS::Neutron::LoadBalancer resource type does not have any health checking 
properties.  The AWS::ElasticLoadBalancing::LoadBalancer does have a 
parameter that prescribes health checking --- but, as far as I know, there 
is no way to ask such a load balancer for its opinion of a member's 
health.

> >What may not be
> >a part is notifications for when all members are reporting down (which
> >might be something to trigger scale-up).

I do not think we want an ASG to react only when all members are down;
I think an ASG should maintain at least its minimum size
(although I have to admit that I do not understand why the current code
has an explicit exception to that).

> You do recall correctly, and there are currently no mechanisms for
> notifying anything outside of the load balancer backend when the health
> monitor/member state changes.

This is true in AWS as well.  The AWS design is that you can configure the 
ASG to poll the ELB for its opinion of member health.  The idea seems to 
be that an ASG gets health information from three kinds of sources 
(polling status in EC2, polling the ELB, and being explicitly informed), 
synthesizes its own summary opinion, and reacts in due time.

> There is also currently no way for an external system to inject health
> information about an LB or its members.
> 
> Both would be interesting additions.
> 
> doug
> 
> 
> >
> >If we don't have push checks in our auto scaling implementation then we
> >don't have a proper auto scaling implementation.

I am not sure what is meant by push checks.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Juno-2 status update

2014-07-16 Thread Devananda van der Veen
Hi folks!

I know this is slightly late... We focused on reviewing and landing specs
last week or so, and I think we've done a great job of that. Let's keep
doing that in the future. Also, let's do it earlier in the "K" cycle next
time, which should be easier, since we now have a better handle on what to
put in a spec, how to review them, and when we should (and should not) use
one. (And yes, it's subjective and still evolving in my mind, too).

As a result of everyone's work reviewing specs, we have eight approved now,
and Juno-2 is just a week away. I've just updated the status of all of them
on launchpad based on my knowledge.
  https://launchpad.net/ironic/+milestone/juno-2

To note, Juno-2 is only a milestone. It's not a release. Bumping any of
these to Juno-3 is fine. I've targeted the ones that I *think* we can land,
if cores focus on reviewing them in the next few days. I also may be wrong,
as I don't know everything.

Those of you who have specs or bugs assigned to you and listed (and not
already completed) on that page, please take a minute to go update them.

- If there's code up and it needs to be reviewed, the status should say
"Needs Review" (if it's a BP) or "In progress" (if it's a bug)
- If there's no code up (or the code is not close to ready), the status
should NOT say those things AND it should not be targeted to Juno-2. You
probably can't change that, but please poke me on IRC or email to let me
know, and I'll change it.

If you're a core reviewer, please focus on reviewing these things at the
moment. Thanks!

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat]Heat Db Model updates

2014-07-16 Thread Manickam, Kanagaraj
I have gone through the Heat database and found the drawbacks in the existing
model listed below. Could you review and add anything missing here? Thanks.

The Heat database model has the following drawbacks:
1.   Duplicate information
2.   Incomplete naming of columns
3.   Inconsistency in the identifiers (id) and deleted_at columns across
the tables
4.   The resource table is nova-specific; make it generic
5.   Pre-defined constants are not using enum.

And the section provided below describes these problems table by table.

Stack
Duplicate info
Tenant & stack_user_project_id
Credentials_id & username, owner_id.
Tenant is also part of user_creds and Stack always has credentials_id, so
what is the need of having tenant info in the stack table? In the stack
table the credentials_id alone is sufficient.

Status & action should be an enum of predefined statuses

User_creds
correct the spelling in Truststore_id

Resource
Status & action should be an enum of predefined statuses
Rsrc_metadata - use the full name resource_metadata
Why is there only a nova_instance column? How about other services like
cinder and glance? The column could be renamed to be generic enough.

Watch_rule
Last_evaluated -> append _at
State should be an enum

Event
Why are both uuid and id used?
Resource_action is used in both the event and resource tables, so it should
be moved to a common table
Resource_status should be an enum



Regards
Kanagaraj M

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-16 Thread Stephen Balukoff
Hi Salvatore!

Thank you for reading through my book-length e-mail and responding to all
my points!

Unfortunately, I have more responses for you, inline:

On Wed, Jul 16, 2014 at 4:22 PM, Salvatore Orlando 
wrote:

> Hi Stephen,
>
> Thanks for your exhaustive comments!
>

I'm always happy to exhaust others with my comments. ;)


> I think your points are true and valid for most cloud operators; besides
> the first, all the points you provided indeed pertain to operators and
> vendors. However you can't prove, I think, the opposite - that is to say
> that no cloud operator will find multi-service flavors useful. At the end
> of the day OpenStack is always about choice - in this case the choice of
> having flavours spanning services or flavours limited to a single service.
> This discussion however will just end up slowly drifting into the realm
> of the theoretical and hypothetical and therefore won't bring anything
> good to our cause. Who knows, in a few posts we might just end up
> invoking Godwin's law!
>

That's certainly true. But would you be willing to agree that both the
model and the logic behind single-service_type flavors are likely to be
simpler to implement, troubleshoot, and maintain than those behind
multi-service_type flavors?

If you agree, then I would say: Let's go with single-service_type flavors
for now so that we can actually get an implementation done by Juno (and
thus free up development that is currently being blocked by lack of
flavors), and leave the more complicated multi-service_type flavors for
some later date when there's a more obvious need for them.

For what it's worth, I'm not against multi-service_type flavors if someone
can come up with a good usage scenario that is best solved using the same.
But I think it's more complication than we want or need right now, and
shooting for it now is likely to ensure we wouldn't get flavors in time for
Juno.



>  There are other considerations which could be made, but since they're
>>> dependent on features which do not yet exist (NFV, service insertion,
>>> chaining and steering) I think there is no point in arguing over it.
>>>
>>
>> Agreed. Though, I don't think single-service flavors paint us into a
>> corner here at all. Again, things get complicated enough when it comes to
>> service insertion, chaining, steering, etc. that what we'll really need at
>> that point is actual orchestration. Flavors alone will not solve these
>> problems, and orchestration can work with many single-service flavors to
>> provide the illusion of multi-service flavors.
>>
>
> Don't take it the wrong way - but this is what I mean by "theoretical and
> hypothetical". I agree with you. I think that's totally possible. But there
> are so many pieces which are yet missing from the puzzle that this
> discussion is probably worthless. Anyway, I started it, and I'm the one to
> be punished for it!
>

Hah! Indeed. Ok, I'll stop speculating down that path for now, eh. ;)


>  In conclusion I think the idea makes sense, and is a minor twist in the
>>> current design which should neither make the feature too complex nor
>>> prevent any other use case for which the flavours are being conceived. For
>>> the very same reason however, it is worth noting that this is surely not an
>>> aspect which will cause me or somebody else to put a veto on this work item.
>>>
>>
>> I don't think this is a minor twist in the current design, actually:
>> * We'll have to deal with cases like the above where no valid service
>> profiles can be found for a given kind of flavor (which we can avoid
>> entirely if a flavor can have service profiles valid for only one kind of
>> service).
>>
>
> Point taken, but does not require a major change to the design since a
> service flavour like this should probably be caught by a validation
> routine. Still you'd need more pervasive validation in different points of
> the API.
>

... which sounds like significantly more complication to me. But at this
point, we're arguing over what a "minor twist" is, which is not likely to
lead to anywhere useful...


>  * When and if tags/capabilities/extensions get introduced, we would need
>> to provide an additional capabilities list on the service profiles in order
>> to be able to select which service profiles provide the capabilities
>> requested.
>>
>
> Might be... but I don't see how that would be worse with multiple service
> types, especially if profiles are grouped by type.
>

Presumably, with single-service_type flavors, all service profiles
associated with the flavor should be capable of providing all the features
advertised as being provided by the flavor (first in the 'description' and
possibly later programmatically via an extensions list). This means we
don't have to check to see whether a service profile associated with the
flavor can provide for all the extensions advertised in the flavor
description because by creating the association, the operator is implying
it can.


>  * The above point makes things muc

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Stephen Balukoff
Vijay--

I'm confused: If NetScaler doesn't actually look at the SAN, isn't it not
actually following the SNI standard? (RFC2818 page 4, paragraph 2, as I
think Carlos pointed out in another thread.) Or at least, isn't that
ignoring how every major browser on the market that support SNI operates?

Anyway, per the other thread we've had on this, and Evgeny's proposal
there, do you see harm in having SAN available at the API level
(informationally, at least). In any case, duplication of code is something
I think we can all agree is not desirable, and because so many other
implementations are likely to need the SAN info, it should be available to
drivers via a universal library (as Carlos is describing).

Stephen


On Wed, Jul 16, 2014 at 3:43 PM, Eichberger, German <
german.eichber...@hp.com> wrote:

> +1 for not duplicating code
>
> For me it's scary as well if different implementations exhibit different
> behavior. This is very contrary to what we would like to do with exposing
> LBs only as a flavor...
>
> German
>
> -Original Message-
> From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
> Sent: Wednesday, July 16, 2014 2:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability -
> SubjectAlternativeNames (SAN)
>
>
> On Jul 16, 2014, at 3:49 PM, Carlos Garza 
> wrote:
>
> >
> > On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam
> > 
> > wrote:
> >
> >We will have the code that will parse the X509 in the API scope of
> the code. The validation I'm referring to is making sure the key matches
> the cert used, and that we mandate that at a minimum the backend driver
> supports RSA. Since the X509 validation is happening at the API layer,
> this same module will also handle the extraction of the SANs. I am
> proposing that the methods that can extract the SAN and SCN from the X509
> be present in the API portion of the code and that drivers can call these
> methods if they need to. In fact I'm already working to get these
> extraction methods contributed to the PyOpenSSL project so that they will
> already be available at a more fundamental layer than our neutron/LBaaS
> code. At the very least I want the spec to declare that SAN and SCN
> parsing must be made available from the API layer. If PyOpenSSL has the
> methods available at that time then we can simply write wrappers for this
> in the API or simply write higher-level methods in the API module.
>
> I meant to say: bottom line, I want the parsing code exposed in the API
> and not duplicated in everyone else's driver.
>
> > I am partially open to the idea of letting the driver handle the
> behavior of the cert parsing, although I defer this to the rest of the
> folks, as I get the feeling that having different implementations
> exhibiting different behavior may sound scary.
> >
> >>
> >>I think it is best not to mention SAN in the
> OpenStack TLS spec. It is expected that the backend should implement
> according to the SSL/SNI IETF spec.
> >> Let's leave the implementation/validation part to the driver.  For ex.
> NetScaler does not support SAN and the NetScaler driver could either throw
> an error if certs with SAN are used or ignore it.
> >
> >How is netscaler making the decision when choosing the cert to
> associate with the SNI handshake?
> >
> >>
> >> Does anyone see a requirement for detailing?
> >>
> >>
> >> Thanks,
> >> Vijay V.
> >>
> >>
> >> From: Vijay Venkatachalam
> >> Sent: Wednesday, July 16, 2014 8:54 AM
> >> To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for
> usage questions)'
> >> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> >> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >>
> >> Apologies for the delayed response.
> >>
> >> I am OK with displaying the certificates contents as part of the API,
> that should not harm.
> >>
> >> I think the discussion has to be split into 2 topics.
> >>
> >> 1.   Certificate conflict resolution. Meaning what is expected when
> 2 or more certificates become eligible during SSL negotiation
> >> 2.   SAN support
> >>
> >> I will send out 2 separate mails on this.
> >>
> >>
> >> From: Samuel Bercovici [mailto:samu...@radware.com]
> >> Sent: Tuesday, July 15, 2014 11:52 PM
> >> To: OpenStack Development Mailing List (not for usage questions);
> >> Vijay Venkatachalam
> >> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> >> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >>
> >> OK.
> >>
> >> Let me be more precise, extracting the information for view sake /
> validation would be good.
> >> Providing values that are different than what is in the x509 is what I
> am opposed to.
> >>
> >> +1 for Carlos on the library and that it should be ubiquitously used.
> >>
> >> I will wait for Vijay to speak for himself in this regard...
> >>
> >> -Sam.
> >>
> >>
> >> From: St

Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-16 Thread Kyle Mestery
On Wed, Jul 16, 2014 at 9:28 PM, Michael Still  wrote:
> That time is around 1am for me. I'm ok with that as long as someone on
> the nova side can attend in my place.
>
> Michael
>
Some of the neutron contributors to this effort are in Europe and
Russia, so finding a time slot to get everyone could prove tricky.
I'll leave this slot for now and hope we can get someone else from nova to
attend, Michael. If not, we'll move this to another time.

Thanks!
Kyle

> On Thu, Jul 17, 2014 at 12:22 PM, Kyle Mestery  wrote:
>> As we're getting down to the wire in Juno, I'd like to propose we have
>> a weekly meeting on the nova-network and neutron parity effort. I'd
>> like to start this meeting next week, and I'd like to propose
>> Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
>> location. If this works for people, please reply on this thread, or
>> suggest an alternate time. I've started a meeting page [1] to track
>> agenda for the first meeting next week.
>>
>> Thanks!
>> Kyle
>>
>> [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity
>>
>
>
>
> --
> Rackspace Australia
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-16 Thread Michael Still
That time is around 1am for me. I'm ok with that as long as someone on
the nova side can attend in my place.

Michael

On Thu, Jul 17, 2014 at 12:22 PM, Kyle Mestery  wrote:
> As we're getting down to the wire in Juno, I'd like to propose we have
> a weekly meeting on the nova-network and neutron parity effort. I'd
> like to start this meeting next week, and I'd like to propose
> Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
> location. If this works for people, please reply on this thread, or
> suggest an alternate time. I've started a meeting page [1] to track
> agenda for the first meeting next week.
>
> Thanks!
> Kyle
>
> [1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-16 Thread Stephen Balukoff
Hi Vijay,



On Wed, Jul 16, 2014 at 9:07 AM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>
>
> Do you know if the SSL/SNI IETF spec details about conflict resolution. I
> am assuming not.
>
>
>
> Because of this ambiguity each backend employs its own mechanism to
> resolve conflicts.
>
>
>
> There are 3 choices now
>
> 1.   The LBaaS extension does not allow conflicting certificates to
> be bound using validation
>
> 2.   Allow each backend conflict resolution mechanism to get into the
> spec
>
> 3.   Does not specify anything in the spec, no mechanism introduced
> and let the driver deal with it.
>
>
>
> Both HA proxy and Radware use configuration as a mechanism to resolve.
> Radware uses order while HA Proxy uses externally specified DNS names.
>
> NetScaler implementation uses the best possible match algorithm
>
>
>
> For ex, let’s say 3 certs are bound to the same endpoint with the
> following SNs
>
> www.finance.abc.com
>
> *.finance.abc.com
>
> *.*.abc.com
>
>
>
> If the host request is payroll.finance.abc.com we shall use
> *.finance.abc.com
>
> If it is payroll.engg.abc.com we shall use *.*.abc.com
>
>
>
> NetScaler won't allow 2 certs to have the same SN.
>
>
>

Did you mean "CN" as in "Common Name" above?

In any case, it sounds like:

1. Conflicts are going to be relatively rare in any case
2. Conflict resolution as such can probably be left to the vendor. Since
the Neutron LBaaS reference implementation uses HAProxy, it seems logical
that this should be considered "normal" behavior for the Neutron LBaaS
service-- though again the slight variations in vendor implementations for
conflict resolution are unlikely to cause serious issues for most users.

If NetScaler runs into a situation where the SCN of a cert conflicts with
the SCN or SAN of another cert, then perhaps they can return an
'UnsupportedConfiguration' error or whatnot? (I believe we're trying to get
the ability to return such errors with the flavor framework, correct?)

In any case, is there any reason to delay going forward with a model and
code that:
A. Uses an 'order' attribute on the SNI-related objects to resolve name
conflicts.
B. Includes a ubiquitous library for extracting SCN and SAN that any driver
may use if they don't use the 'order' attribute?

Thanks,
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Stephen Balukoff
Just saw this thread after responding to the other:

I'm in favor of Evgeny's proposal. It sounds like it should resolve most
(if not all) of the operators', vendors' and users' concerns with regard to
handling TLS certificates.

Stephen


On Wed, Jul 16, 2014 at 12:35 PM, Carlos Garza 
wrote:

>
> On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam <
> vijay.venkatacha...@citrix.com>
>  wrote:
>
> > Apologies for the delayed response.
> >
> > I am OK with displaying the certificates contents as part of the API,
> that should not harm.
> >
> > I think the discussion has to be split into 2 topics.
> >
> > 1.   Certificate conflict resolution. Meaning what is expected when
> 2 or more certificates become eligible during SSL negotiation
> > 2.   SAN support
> >
>
> Ok cool that makes more sense. #2 seems to be met by Evgeny proposal.
> I'll let you folks decide the conflict resolution issue #1.
>
>
> > I will send out 2 separate mails on this.
> >
> >
> > From: Samuel Bercovici [mailto:samu...@radware.com]
> > Sent: Tuesday, July 15, 2014 11:52 PM
> > To: OpenStack Development Mailing List (not for usage questions); Vijay
> Venkatachalam
> > Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > OK.
> >
> > Let me be more precise, extracting the information for view sake /
> validation would be good.
> > Providing values that are different than what is in the x509 is what I
> am opposed to.
> >
> > +1 for Carlos on the library and that it should be ubiquitously used.
> >
> > I will wait for Vijay to speak for himself in this regard…
> >
> > -Sam.
> >
> >
> > From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> > Sent: Tuesday, July 15, 2014 8:35 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > +1 to German's and  Carlos' comments.
> >
> > It's also worth pointing out that some UIs will definitely want to show
> SAN information and the like, so either having this available as part of
> the API, or as a standard library we write which then gets used by multiple
> drivers is going to be necessary.
> >
> > If we're extracting the Subject Common Name in any place in the code
> then we also need to be extracting the Subject Alternative Names at the
> same place. From the perspective of the SNI standard, there's no difference
> in how these fields should be treated, and if we were to treat SANs
> differently then we're both breaking the standard and setting a bad
> precedent.
> >
> > Stephen
> >
> >
> > On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza <
> carlos.ga...@rackspace.com> wrote:
> >
> > On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
> >  wrote:
> >
> > > Hi,
> > >
> > >
> > > Obtaining the domain name from the x509 is probably more of a
> driver/backend/device capability, it would make sense to have a library
> that could be used by anyone wishing to do so in their driver code.
> >
> > You can do what ever you want in *your* driver. The code to extract
> this information will be apart of the API and needs to be mentioned in the
> spec now. PyOpenSSL with PyASN1 are the most likely candidates.
> >
> > Carlos D. Garza
> > >
> > > -Sam.
> > >
> > >
> > >
> > > From: Eichberger, German [mailto:german.eichber...@hp.com]
> > > Sent: Tuesday, July 15, 2014 6:43 PM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> > >
> > > Hi,
> > >
> > > My impression was that the frontend would extract the names and hand
> them to the driver.  This has the following advantages:
> > >
> > > - We can be sure all drivers can extract the same names
> > > - No duplicate code to maintain
> > > - If we ever allow the user to specify the names in the UI rather
> than in the certificate, the driver doesn't need to change.
> > >
> > > I think I saw Adam say something similar in a comment to the code.
> > >
> > > Thanks,
> > > German
> > >
> > > From: Evgeny Fedoruk [mailto:evge...@radware.com]
> > > Sent: Tuesday, July 15, 2014 7:24 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI -
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> > >
> > > Hi All,
> > >
> > > Since this issue came up from TLS capabilities RST doc review, I
> opened a ML thread for it to make the decision.
> > > Currently, the document says:
> > >
> > > “
> > > For SNI functionality, tenant will supply list of TLS containers in
> specific
> > > Order.
> > > In case when specific back-end is not able to support SNI capabilities,
> > > its driver should throw an exception. The exception message 

[openstack-dev] [neutron] [nova] Weekly nova-network / neutron parity meeting

2014-07-16 Thread Kyle Mestery
As we're getting down to the wire in Juno, I'd like to propose we have
a weekly meeting on the nova-network and neutron parity effort. I'd
like to start this meeting next week, and I'd like to propose
Wednesday at 1500 UTC on #openstack-meeting-3 as the time and
location. If this works for people, please reply on this thread, or
suggest an alternate time. I've started a meeting page [1] to track
agenda for the first meeting next week.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/NeutronNovaNetworkParity

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-16 Thread Wuhongning
Hi,

It's a very important use case for multiple vswitches. Now we use a forked
version of ovs-dpdk to support NFV service chaining deployment; however, all
other traffic is still on the kernel ovs path, including management and
storage traffic, and it would be very difficult to switch all of that
traffic to userspace ovs.

There is no problem with two ovs instances; they work very well. We have two
vswitchd daemons; none of the original ovs userspace tools are touched, but
the userspace ovs-vswitchd (also with some small patches) is re-compiled
without the kernel datapath. We separate the ovsdb instances and have all
the userspace ovs vsctl & ofctl tools point to another communication target.
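(Roughly like this; the paths and bridge name are just examples:)

# second (userspace) instance with its own ovsdb and control socket
ovsdb-server /etc/openvswitch/user-conf.db \
--remote=punix:/var/run/openvswitch/user-db.sock --detach
ovs-vswitchd unix:/var/run/openvswitch/user-db.sock --detach
# point the tools at the second instance explicitly
ovs-vsctl --db=unix:/var/run/openvswitch/user-db.sock add-br br-user
ovs-vsctl --db=unix:/var/run/openvswitch/user-db.sock show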

Then we patched the ML2 plugin with a new vnic & vif type, along with a new
user-ovs-agent deployed on each compute node.
We didn't mix the ovs and user-ovs (user-ovs is positioned as high
performance, so a dedicated VF is assigned to it), but if you want to create
a segment across ovs and user-ovs, just connect user-ovs to br-eth through
its KNI veth interface.

Hope this is helpful for the discussion.



From: Czesnowicz, Przemyslaw [przemyslaw.czesnow...@intel.com]
Sent: Wednesday, July 16, 2014 9:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

I don't think this is a use case that could be supported right now.
There will be multiple issues with running two ovs instances on the node,
e.g. how to manage two sets of userspace utilities, two ovsdb servers, etc.
Also there would be some limitations from how the ml2 plugin does port
binding (different segmentation ids would have to be used for the two ovs
instances).

This could be done if ovs were able to run two datapaths at the same time
(the kernel datapath and the dpdk-enabled userspace datapath).
I would like to concentrate on the simpler use case where some nodes are
optimized for high-perf net I/O.

Thanks
Przemek

-Original Message-
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
Sent: Friday, July 11, 2014 10:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

A simple use case could be to have a compute node able to start VMs with
optimized net I/O or standard net I/O, depending on the network flavor
ordered for the VM.

On Fri, Jul 11, 2014 at 11:16 AM, Czesnowicz, Przemyslaw 
 wrote:
>
>
> Can you explain what's the use case for running both ovs and userspace
> ovs on the same host?
>
>
>
> Thanks
>
> Przemek
>
> From: loy wolfe [mailto:loywo...@gmail.com]
> Sent: Friday, July 11, 2014 3:17 AM
>
>
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2
> plugin
>
>
>
> +1
>
>
>
> It's totally different between ovs and userspace ovs.
>
> also, there is strong need to keep ovs even we have a userspace ovs in
> the same host
>
>
>
>
>
> --
>
>
> Intel Shannon Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County
> Kildare Registered Number: 308263 Business address: Dromore House,
> East Park, Shannon, Co. Clare
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 11:07 AM, Vijay Venkatachalam 
 wrote:

>  
> Do you know if the SSL/SNI IETF spec gives details about conflict
> resolution? I am assuming not.
> 

The specs I have seen just describe SNI as a way of passing an intended host
name in the clear during the TLS handshake. The specs do not describe what
the server should do with the SNI host or what peer certificate it should
return based on it. The whole idea of SNI was that the server, or something
like a load balancer (like we are doing), could make decisions based on this
unencrypted value on the server side without even knowing the private key.
I.e., a load balancer doesn't even need to interact with the handshake (I've
seen at least one tool that doesn't even use an SSL library to peek at the
SNI host; looking at Blue Box) and can simply forward the tcp stream to an
appropriate back-end node, at which point the back end interacts with the
TLS handshake.

In short, the SAN/SCN cruft was added to the spec as a convenience method so
that users could just upload their X509 set for SNI, versus the original
plan to upload a set of (hostname, X509ContainerId) tuples. The RFC seems to
imply that it intends to deprecate the use of the SubjectCN to store the
hostname for web certificates, but since it's so popular I'm guessing
that'll never happen.


By the way:
   RFC 2818 (HTTP over TLS) does dictate that if a subjectAltName extension 
with a dNSName entry exists, then the dNSName entries should be used for PKIX 
validation and not the SubjectCN. So PKIX validation that ignores the 
subjectAltName is already breaking RFC 2818.

> Because of this ambiguity each backend employs its own mechanism to resolve 
> conflicts.
>  
> There are 3 choices now
> 1.   The LBaaS extension does not allow conflicting certificates to be 
> bound using validation
> 2.   Allow each backend conflict resolution mechanism to get into the spec
> 3.   Do not specify anything in the spec, no mechanism introduced, and 
> let the driver deal with it. 

I propose another option; specifically, #1 is not acceptable. 
  4. The spec should mandate that each driver document its SNI behavior and, 
more specifically, its behavior on conflict resolution. The vendor 
documentation doesn't have to be in the same spec or even in the lbaas 
project; it just has to be documented somewhere central, alongside the other 
vendors' docs.

> Both HA proxy and Radware use configuration as a mechanism to resolve. 
> Radware uses order while HA Proxy uses externally specified DNS names.
> The NetScaler implementation uses a best-possible-match algorithm.
>  
> For ex, let’s say 3 certs are bound to the same endpoint with the following 
> SNs
> www.finance.abc.com
> *.finance.abc.com
> *.*.abc.com
> If the host request is  payroll.finance.abc.com  we shall  use  
> *.finance.abc.com
> If it is  payroll.engg.abc.com  we shall use  *.*.abc.com
>  
> NetScaler won't allow 2 certs to have the same SN.

In that case NetScaler could document the behavior of its driver.
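
For illustration, a "best possible match" lookup like the one Vijay describes 
could be sketched in Python roughly as below. This is not from any driver, 
and note that real TLS wildcard matching (RFC 6125) only lets '*' span a 
single label, which fnmatch does not enforce:

    import fnmatch

    def best_match(host, patterns):
        # Approximate specificity by the number of literal (non-wildcard)
        # labels: 'www.finance.abc.com' beats '*.finance.abc.com', which
        # beats '*.*.abc.com'.
        def specificity(pattern):
            return sum(1 for label in pattern.split('.') if label != '*')

        candidates = [p for p in patterns if fnmatch.fnmatch(host, p)]
        return max(candidates, key=specificity) if candidates else None

    # best_match('payroll.finance.abc.com',
    #            ['www.finance.abc.com', '*.finance.abc.com', '*.*.abc.com'])
    # returns '*.finance.abc.com'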

> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Tuesday, July 15, 2014 11:52 PM
> To: OpenStack Development Mailing List (not for usage questions); Vijay 
> Venkatachalam
> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> OK.
>  
> Let me be more precise, extracting the information for view sake / validation 
> would be good.
> Providing values that are different than what is in the x509 is what I am 
> opposed to.
>  
> +1 for Carlos on the library and that it should be ubiquitously used.
>  
> I will wait for Vijay to speak for himself in this regard…
>  
> -Sam.
>  
>  
> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
> Sent: Tuesday, July 15, 2014 8:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> +1 to German's and  Carlos' comments.
>  
> It's also worth pointing out that some UIs will definitely want to show SAN 
> information and the like, so either having this available as part of the API, 
> or as a standard library we write which then gets used by multiple drivers is 
> going to be necessary.
>  
> If we're extracting the Subject Common Name in any place in the code then we 
> also need to be extracting the Subject Alternative Names at the same place. 
> From the perspective of the SNI standard, there's no difference in how these 
> fields should be treated, and if we were to treat SANs differently then we're 
> both breaking the standard and setting a bad precedent.
>  
> Stephen
>  
> 
> On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza  
> wrote:
> 
> On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
>  wrote:
> 
> > Hi,
> >
> >
> > Obtaining the domain name from the x509 is probably more of a 
> 

Re: [openstack-dev] [Heat] Nova-network support

2014-07-16 Thread Steve Baker
On 15/07/14 05:12, Thomas Spatzier wrote:
>> From: Pavlo Shchelokovskyy 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: 14/07/2014 16:42
>> Subject: [openstack-dev] [Heat] Nova-network support
>>
>> Hi Heaters,
>>
>> I would like to start a discussion about Heat and nova-network. As
>> far as I understand nova-network is here to stay for at least 2 more
>> releases [1] and, even more, might be left indefinitely as a viable
>> simple deployment option supported by OpenStack (if anyone has a
>> more recent update on nova-network deprecation status please call me
>> out on that).
>>
>> In light of this I think we should improve our support of nova-
>> network-based OpenStack in Heat. There are several topics that
>> warrant attention:
>>
>> 1) As python-neutronclient is already set as a dependency of heat
>> package, we need a unified way for Heat to "understand" what network
>> service the OpenStack cloud uses that does not depend on presence or
>> absence of neutronclient. Several resources already need this (e.g.
>> AWS::EC2::SecurityGroup that currently decides on whether to use
>> Neutron or Nova-network only by a presence of VPC_ID property in the
>> template). This check might be a config option but IMO this could be
>> auto-discovered on heat-engine start. Also, when current Heat is
>> deployed on nova-network-based OpenStack, OS::Neutron::* resources
>> are still being registered and shown with "heat resource-type-list"
>> (at least on DevStack that is) although clearly they can not be
>> used. A network backend check would then allow to disable those
>> Neutron resources for such deployment. (On a side note, such checks
>> might also be created for resources of other integrated but not
>> bare-minimum essential OpenStack components such as Trove and Swift.)
>>
>> 2) We need more native nova-network specific resources. For example,
>> to use security groups on nova-network now one is forced to use
>> AWS::EC2::SecurityGroup, that looks odd when used among other
>> OpenStack native resources and has its own limitations as its
>> implementation must stay compatible with AWS. Currently it seems we
>> are also missing native nova-network Network, Cloudpipe VPN, DNS
>> domains and entries (though I am not sure how admin-specific those are).
>>
>> If we agree that such improvements make sense, I will gladly put
>> myself to implement these changes.
An OS::Nova::SecurityGroup would be welcome. This may be the only gap
for non-admin and non-esoteric nova-networking resources.
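
On the auto-discovery in point 1, a back-of-the-envelope sketch of such a 
check (assuming an authenticated python-keystoneclient instance; not actual 
Heat code):

    def is_neutron_deployed(keystone_client):
        # If the service catalog advertises a 'network' endpoint, assume
        # Neutron; otherwise fall back to nova-network.
        return 'network' in keystone_client.service_catalog.get_endpoints()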

> I think  those improvements do make sense, since neutron cannot be taken as
> a given in each environment.
>
> Ideally, we would actually have a resource model that abstract from the
> underlying implementation, i.e. do not call out neutron or nova-net but
> just talk about something like a FloatingIP which then gets implemented by
> a neutron or nova-net backend. Currently, binding to either option has to
> be explicitly defined in templates, so in the worst case one might end up
> with two complete separate definitions of the same thing.
> That said, I know that it will probably be hard to come up with an
> abstraction for everything. I also know that provider templates could also
> partly solve the problem today, but many users probably do not know how to
> apply them.
> Some level of abstraction could also help to make some changes in
> underlying API transparent to templates.
> Anyway, I wanted to throw out the idea of some level of abstraction and see
> what the reactions are.
>
For security groups OS::Nova::SecurityGroup could be that abstraction
since nova proxies to neutron when required.

For floating IPs I would like to see the networks property on
OS::Nova::Server become much richer so that all port and floating IP
properties can be specified inline with the server rather than in
separate resources. Having a subset of properties here which work on
both nova-networking and neutron seems reasonable. This isn't on my
radar but I would happily help anyone who wants to take this on.

cheers

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Yuling_C
Dell Customer Communication
My mistake. It does delete all the networks in the stack as well.

Thanks,
YuLing

From: C, Yuling
Sent: Wednesday, July 16, 2014 3:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] heat stack-create with two vm instances always got 
one failed


Dell Customer Communication
Thanks very much Qiming. That was the problem, and I got it fixed by creating 
another port for instance2.
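
For the record, the fix amounts to a second port resource and pointing 
instance2 at it (snippet in the same style as the template quoted below):

    "test_net_port2" : {
    "Type" : "OS::Neutron::Port",
    "Properties" : {
    "admin_state_up" : "True",
    "network_id" : { "Ref" : "test_net" }
    }
    },

and in instance2:

    "networks" : [
    {"port" : { "Ref" : "test_net_port2" }}
    ]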

B.T.W., another question. When I delete the stack, how come the network was not 
deleted? It seems only the instance was deleted. How do I clean up the networks 
and subnets associated with the instance using heat?

Thanks,

YuLing

-Original Message-
From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
Sent: Wednesday, July 16, 2014 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] heat stack-create with two vm instances always got 
one failed

It seems that you are sharing one port between two instances, which won't be a 
legal configuration.

On Wed, Jul 16, 2014 at 01:17:00AM -0500, yulin...@dell.com wrote:
> Dell Customer Communication
>
> Hi,
> I'm using heat to create a stack with two instances. I always got one of them 
> successful, but the other would fail. If I split the template into two and 
> each of them contains one instance then it worked. However, I thought a Heat 
> template would allow multiple instances to be created?
>
> Here I attach the heat template:
> {
> "AWSTemplateFormatVersion" : "2010-09-09",
> "Description" : "Sample Heat template that spins up multiple instances and a 
> private network (JSON)",
> "Resources" : {
> "test_net" : {
> "Type" : "OS::Neutron::Net",
> "Properties" : {
> "name" : "test_net"
> }
> },
> "test_subnet" : {
> "Type" : "OS::Neutron::Subnet",
> "Properties" : {
> "name" : "test_subnet",
> "cidr" : "120.10.9.0/24",
> "enable_dhcp" : "True",
> "gateway_ip" : "120.10.9.1",
> "network_id" : { "Ref" : "test_net" }
> }
> },
> "test_net_port" : {
> "Type" : "OS::Neutron::Port",
> "Properties" : {
> "admin_state_up" : "True",
> "network_id" : { "Ref" : "test_net" }
> }
> },
> "instance1" : {
> "Type" : "OS::Nova::Server",
> "Properties" : {
> "name" : "instance1",
> "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
> "flavor": "tvm-tt_lite",
> "networks" : [
> {"port" : { "Ref" : "test_net_port" }}
> ]
> }
> },
> "instance2" : {
> "Type" : "OS::Nova::Server",
> "Properties" : {
> "name" : "instance2",
> "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
> "flavor": "tvm-tt_lite",
> "networks" : [
> {"port" : { "Ref" : "test_net_port" }}
> ]
> }
> }
> }
> }
> The error that I got from heat-engine.log is as follows:
>
> 2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action complete step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
> 2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task
> stack_task from Stack "teststack" sleeping _sleep
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:108
> 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
> stack_task from Stack "teststack" running step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
> 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action running step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
> 2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] "GET
> /v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-a52c-1
> 7d85fce0559 HTTP/1.1" 200 1854 _make_request
> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
> 2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE : Server 
> "instance1"
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback (most 
> recent call last):
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 371, in 
> _do_action
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while not 
> check(handle_data):
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 239, 
> in check_create_complete
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return 
> self._check_active(server)
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 255, 
> in _check_active
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error: Creation of 
> server instance1 failed.
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
> 2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action cancelled cancel
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:187
> 2014-07-16 01:49:52.004 25101 

[openstack-dev] [Nova][Scheduler] Requesting spec freeze exception: Smart (Solver) Scheduler spec

2014-07-16 Thread Yathiraj Udupi (yudupi)
Hi Nova cores,

I would like to request a spec freeze exception for our spec on Solver 
Scheduler – a constraint-based scheduler framework.
Please see the spec here: https://review.openstack.org/#/c/96543/   ->  
https://review.openstack.org/#/c/96543/10/specs/juno/solver-scheduler.rst

This is for the blueprint: 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler.

Our working and tested code is currently available at this project repository:  
https://github.com/CiscoSystems/nova-solver-scheduler
This feature would integrate easily with Nova or the split out scheduler Gantt. 
 This would be non-disruptive as it is a pluggable scheduler driver.

This blueprint was approved for Icehouse, and a lot of code patches were 
submitted as part of the Icehouse time frame.  It had missed the feature freeze 
deadline then.  And now for Juno we have had to re-submit it here as a 
nova-spec for review.

The first code patch with the basic framework, the SolverScheduler driver  - 
https://review.openstack.org/#/c/46588/ has already gone through several 
iterations of code reviews as part of Icehouse, and will need minimal changes 
now.  The other dependent  patches that are already submitted add additional 
features in terms of a pluggable Solver that supports pluggable constraints and 
costs.

We had demoed a few use cases already in Hong Kong using our constraint-based 
scheduling framework addressing the cross-service constraint scenarios:
  - Compute – Storage affinity,  where we can schedule a VM on or close to a 
Volume node.

We have also seen a lot of interest from the NFV community, in terms of 
addressing the complex NFV-related use cases using our constraint-solving 
framework for making optimal placement decisions.
Please see our NFV talk and demo from the Atlanta summit: 
https://www.youtube.com/watch?v=7QzDbhkk-BI Slides: 
http://www.slideshare.net/ybudupi/optimized-nfv-placement-in-openstack-clouds

Hope you will honor this request, and help us take this effort forward.

Thanks,
Yathi.









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-16 Thread Salvatore Orlando
Hi Stephen,

Thanks for your exhaustive comments!
Some more replies from me inline.

Have a good evening,
Salvatore


On 16 July 2014 00:05, Stephen Balukoff  wrote:

> Hi Salvatore and Eugene,
>
> Responses inline:
>
> On Tue, Jul 15, 2014 at 12:59 PM, Salvatore Orlando 
> wrote:
>
>> I think I've provided some examples in the review.
>>
>
> I was hoping for specific examples. The discussion I've seen so far has
> been vague enough that it's difficult to see what people mean. It's also
> possible you gave specific examples but these were buried in comments on
> previous revisions (one of my biggest gripes with the way Gerrit works. :P
> ) Could you please give a specific example of what you mean, as well as how
> it simplifies the user experience?
>
> However, the point is mostly to simplify usage from a user perspective -
>> allowing consumers of the neutron API to use the same flavour object for
>> multiple services.
>>
>
> Actually, I would argue the having a single flavor valid for several
> different services complicates the user experience (and vastly complicates
> the operator experience). This is because:
>
> * Flavors are how Operators will provide different service levels, or
> different feature sets for similar kinds of service. Users interested in
> paying for those services are likely to be more confused if a single flavor
> lists features for several different kinds of service.
> * Billing becomes more incomprehensible when the same flavor is used for
> multiple kinds of service. Users and Operators should not expect to pay the
> same rate for a "Gold" FWaaS instance and "Gold" VPNaaS instance, so why
> complicate things by putting them under the same flavor?
> * Because of the above concerns, it's likely that Operators will only
> deploy service profiles in a flavor for a single type of service anyway.
> But from the user's perspective, it's not apparent when looking at the list
> of flavors, which are valid for which kinds of service. What if a user
> tries to deploy a LBaaS service using a flavor that only has FWaaS service
> profiles associated with it? Presumably, the system must respond with an
> error indicating that no valid service profiles could be found for that
> service in that flavor. But this isn't very helpful to the user and is
> likely to lead to increased support load for the Operator who will need to
> explain this.
> * A single-service flavor is going to be inherently easier to understand
> than a multi-service flavor.
> * Single-service flavors do not preclude the ability for vendors to have
> multi-purpose appliances serve multiple roles in an OpenStack cloud.
>

I think your points are true and valid for most cloud operators; besides the
first, all the points you provided indeed pertain to operators and vendors.
However you can't prove, I think, the opposite - that is to say, that no
cloud operator will find multi-service flavors useful. At the end of the
day OpenStack is always about choice - in this case the choice of having
flavours spanning services or flavours limited to a single service.
This discussion however will just end up slowly drifting into the realm of
the theoretical and hypothetical and therefore won't bring anything good to
our cause. Who knows, in a few posts we might just end up invoking Godwin's
law!



>
>
>> There are other considerations which could be made, but since they're
>> dependent on features which do not yet exist (NFV, service insertion,
>> chaining and steering) I think there is no point in arguing over it.
>>
>
> Agreed. Though, I don't think single-service flavors paint us into a
> corner here at all. Again, things get complicated enough when it comes to
> service insertion, chaining, steering, etc. that what we'll really need at
> that point is actual orchestration. Flavors alone will not solve these
> problems, and orchestration can work with many single-service flavors to
> provide the illusion of multi-service flavors.
>

Don't take it the wrong way - but this is what I mean by "theoretical and
hypothetical". I agree with you. I think that's totally possible. But there
are so many pieces still missing from the puzzle that this
discussion is probably worthless. Anyway, I started it, and I'm the one to
be punished for it!


>
>
>> In conclusion I think the idea makes sense, and is a minor twist in the
>> current design which should neither make the feature too complex nor
>> prevent any other use case for which the flavours are being conceived. For
>> the very same reason however, it is worth noting that this is surely not an
>> aspect which will cause me or somebody else to put a veto on this work item.
>>
>
> I don't think this is a minor twist in the current design, actually:
> * We'll have to deal with cases like the above where no valid service
> profiles can be found for a given kind of flavor (which we can avoid
> entirely if a flavor can have service profiles valid for only one kind of
> service).
>

Point taken, bu

Re: [openstack-dev] [Nova] [Gantt] Scheduler split status (updated)

2014-07-16 Thread Robert Collins
On 15 July 2014 06:10, Jay Pipes  wrote:

> Frankly, I don't think a lot of the NFV use cases are well-defined.
>
> Even more frankly, I don't see any benefit to a split-out scheduler to a
> single NFV use case.
>
>
>> Don't you see each Summit the lots of talks (and people attending
>> them) talking about how OpenStack should look at Pets vs. Cattle and
>> saying that the scheduler should be out of Nova ?
>
>
> There's been no concrete benefits discussed to having the scheduler outside
> of Nova.
>
> I don't really care how many people say that the scheduler should be out of
> Nova unless those same people come to the table with concrete reasons why.
> Just saying something is a benefit does not make it a benefit, and I think
> I've outlined some of the very real dangers -- in terms of code and payload
> complexity -- of breaking the scheduler out of Nova until the interfaces are
> cleaned up and the scheduler actually owns the resources upon which it
> exercises placement decisions.

I agree with the risks if we get it wrong.

In terms of benefits, I want to do cross-domain scheduling: 'Give me
five Galera servers with no shared non-HA infrastructure and
resiliency to no less than 2 separate failures'. By far the largest
push back I get is 'how do I make Ironic pick the servers I want it
to' when talking to ops folk about using Ironic. And when you dig into
that, it falls into two buckets:
 - role based mappings (e.g. storage optimised vs cpu optimised) -
which Ironic can trivially do
 - failure domain and performance domain optimisation
   - which Nova cannot do at all today.

I want this very very very badly, and I would love to be pushing
directly on it, but its just under a few other key features like
'working HA' and 'efficient updates' that sadly matter more in the
short term.



> Sorry, I'm not following you. Who is saying to Gantt "I want to store this
> data"?
>
> All I am saying is that the thing that places a resource on some provider of
> that resource should be the thing that owns the process of a requester
> *claiming* the resources on that provider, and in order to properly track
> resources in a race-free way in such a system, then the system needs to
> contain the resource tracker.

Trying to translate that:
 - the scheduler (thing that places a resource)
 - should own the act of claiming a resource
 - to avoid races the scheduler should own the tracker

So I think we need to acknowledge that right now we have massive races.
We can choose where we put our efforts - we can try to fix them in the
current architecture, or we can try to fix them by changing the
architecture.

I think you agree that the current architecture is wrong; and that
from a risk perspective the gantt extraction should not change the
architecture - as part of making it graceful and cinder-like with
immediate use by Nova.

But once extracted the architecture can change - versioned APIs FTW.

To my mind the key question is not whether the thing will be *better*
with gantt extracted, it is whether it will be *no worse*, while
simultaneously enabling a bunch of pent up demand in another part of
the community.

That seems hard to answer categorically, but it seems to me the key
risk is whether changing the architecture will be too hard / unsafe
post extraction.

However in Nova it takes months and months to land things (and I'm not
poking here - TripleO has the same issue at the moment) - I think
there is a very real possibility that gantt can improve much faster
and more efficiently as a new project, once forklifted out. Patches to Nova
to move to newer APIs can be posted and worked with while folk work on
other bits of key plumbing like performance (e.g. not loading every
host in the entire cloud into ram on every scheduling request),
scalability (e.g. elegantly solving the current racy behaviour between
different scheduler instances) and begin the work to expose the
scheduler to neutron and cinder.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-16 Thread Michael Still
On Thu, Jul 17, 2014 at 12:20 AM, Eric Windisch  wrote:
> On Tue, Jul 15, 2014 at 11:55 PM, Michael Still  wrote:
>>
>> The containers meetup is in a different room with different space
>> constraints, so containers focussed people should do whatever Adrian
>> is doing for registration.
>
> Interesting. In that case, for those that are primarily attending for
> containers-specific matters, but have already registered for the Nova
> mid-cycle, should we recommend they release their registrations to help
> clear the wait-list?

I would appreciate that.

If your primary focus is on the containers sub team, and not nova,
then I think moving your rego over to the containers meetup makes
sense. We won't be checking badges at the door, so people can drift
into our room as they need to...

Given the containers sub team is a subset of nova, it's not clear to
me exactly where to draw that line, so I will let other people work it
out for themselves. I am sure we can tweak as we go along.

Michael

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Michael Still
Top posting to the original email because I want this to stand out...

I've added this to the agenda for the nova mid cycle meetup, I think
most of the contributors to this thread will be there. So, if we can
nail this down here then that's great, but if we think we'd be more
productive in person chatting about this then we have that option too.

Michael

On Thu, Jul 17, 2014 at 12:15 AM, Sean Dague  wrote:
> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
> so we started executing the livesnapshot code in the nova libvirt
> driver. Which fails about 20% of the time in the gate, as we're bringing
> computes up and down while doing a snapshot. Dan Berrange did a bunch of
> debugging on that and thinks it might be a qemu bug. We disabled these code
> paths, so live snapshot has now been ripped out.
>
> In January we also triggered a libvirt bug, and had to carry a private
> build of libvirt for 6 weeks in order to let people merge code in OpenStack.
>
> We never were able to switch to libvirt 1.1.1 in the gate using the
> Ubuntu Cloud Archive during Icehouse development, because it has a
> different set of failures that would have prevented people from merging
> code.
>
> Based on these experiences, libvirt version differences seem to be as
> substantial as major hypervisor differences. There is a proposal here -
> https://review.openstack.org/#/c/103923/ to hold newer versions of
> libvirt to the same standard we hold xen, vmware, hyperv, docker,
> ironic, etc.
>
> I'm somewhat concerned that the -2 pile on in this review is a double
> standard of libvirt features, and features exploiting really new
> upstream features. I feel like a lot of the language being used here
> about the burden of doing this testing is exactly the same as was
> presented by the docker team before their driver was removed, which was
> ignored by the Nova team at the time. It was the concern by the freebsd
> team, which was also ignored and they were told to go land libvirt
> patches instead.
>
> I'm ok with us as a project changing our mind and deciding that the test
> bar needs to be taken down a notch or two because it's too burdensome to
> contributors and vendors, but if we are doing that, we need to do it for
> everyone. A lot of other organizations have put a ton of time and energy
> into this, and are carrying a maintenance cost of running these systems
> to get results back in a timely basis.
>
> As we seem deadlocked in the review, I think the mailing list is
> probably a better place for this.
>
> If we want to reduce the standards for libvirt we should reconsider
> what's being asked of 3rd party CI teams, and things like the docker
> driver, as well as the A, B, C driver classification. Because clearly
> libvirt 1.2.5+ isn't actually class A supported.
>
> Anyway, discussion welcomed. My primary concern right now isn't actually
> where we set the bar, but that we set the same bar for everyone.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Michael Still
On Thu, Jul 17, 2014 at 3:27 AM, Vishvananda Ishaya
 wrote:
> On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange  wrote:
>> On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
>>
>>> I am worried that we would just regress to the current process because
>>> we have tried something similar to this previously and were forced to
>>> regress to the current process.
>>
>> IMHO the longer we wait between updating the gate to new versions
>> the bigger the problems we create for ourselves. eg we were switching
>> from 0.9.8 released Dec 2011, to  1.1.1 released Jun 2013, so we
>> were exposed to over 1 + 1/2 years worth of code churn in a single
>> event. The fact that we only hit a couple of bugs in that, is actually
>> remarkable given the amount of feature development that had gone into
>> libvirt in that time. If we had been tracking each intervening libvirt
>> release I expect the majority of updates would have had no ill effect
>> on us at all. For the couple of releases where there was a problem we
>> would not be forced to rollback to a version years older again, we'd
>> just drop back to the previous release at most 1 month older.
>
> This is a really good point. As someone who has to deal with packaging
> issues constantly, it is odd to me that libvirt is one of the few places
> where we depend on upstream packaging. We constantly pull in new python
> dependencies from pypi that are not packaged in ubuntu. If we had to
> wait for packaging before merging the whole system would grind to a halt.
>
> I think we should be updating our libvirt version more frequently by
> installing from source or our own ppa instead of waiting for the ubuntu
> team to package it.

I agree with Vish here, although I do recognise it's a bunch of work
for someone. One of the reasons we experienced bugs in the gate is
that we jumped 18 months in libvirt versions in a single leap. If we
had flexibility of packaging, we could have stepped through each major
version along the way, and that would have helped us identify problems
in a more controlled manner.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Eichberger, German
+1 for not duplicating code

For me it's scary as well if different implementations exhibit different 
behavior. This is very contrary to what we would like to do with exposing LBs 
only as a flavor...

German

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 16, 2014 2:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - 
SubjectAlternativeNames (SAN)


On Jul 16, 2014, at 3:49 PM, Carlos Garza  wrote:

> 
> On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
> 
> wrote:
> 
>We will have the code that will parse the X509 in the API scope of the
> code. The validation I'm referring to is making sure the key matches the cert
> used, and that we mandate that at a minimum the backend driver support RSA.
> Since the X509 validation is happening at the API layer, this same module
> will also handle the extraction of the SANs. I am proposing that the methods
> that can extract the SAN and SCN from the x509 be present in the API portion
> of the code and that drivers can call these methods if they need to. In fact
> I'm already working to get these extraction methods contributed to the
> PyOpenSSL project so that they will already be available at a more
> fundamental layer than our neutron/LBaaS code. At the very least I want the
> spec to declare that SAN and SCN parsing must be made available from the API
> layer. If PyOpenSSL has the methods available at that time then we can simply
> write wrappers for them in the API or write higher-level methods in the API
> module.

    I meant to say, bottom line, I want the parsing code exposed in the API 
and not duplicated in everyone else's driver.
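
    As a rough sketch of the kind of shared helper I mean (illustrative only, 
not the actual API-layer code):

    from OpenSSL import crypto

    def get_host_names(pem_data):
        # Extract the SubjectCN and any SAN dNSName entries from a PEM cert.
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        names = {'cn': cert.get_subject().CN, 'dns_names': []}
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == 'subjectAltName':
                # str(ext) renders like 'DNS:www.example.com, DNS:example.com'
                for part in str(ext).split(','):
                    part = part.strip()
                    if part.startswith('DNS:'):
                        names['dns_names'].append(part[len('DNS:'):])
        return names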

> I am partially open to the idea of letting the driver handle the
> behavior of the cert parsing, although I defer this to the rest of the folks,
> as I get the feeling that having different implementations exhibiting
> different behavior may sound scary.
> 
>> 
>>I think it is best not to mention SAN in the OpenStack 
>> TLS spec. It is expected that the backend should implement according to the 
>> SSL/SNI IETF spec.
>> Let's leave the implementation/validation part to the driver.  For ex. 
>> NetScaler does not support SAN and the NetScaler driver could either throw 
>> an error if certs with SAN are used or ignore it.
> 
>How is netscaler making the decision when choosing the cert to associate 
> with the SNI handshake?
> 
>> 
>> Does anyone see a requirement for detailing?
>> 
>> 
>> Thanks,
>> Vijay V.
>> 
>> 
>> From: Vijay Venkatachalam
>> Sent: Wednesday, July 16, 2014 8:54 AM
>> To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
>> questions)'
>> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> Apologies for the delayed response.
>> 
>> I am OK with displaying the certificates contents as part of the API, that 
>> should not harm.
>> 
>> I think the discussion has to be split into 2 topics.
>> 
>> 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
>> more certificates become eligible during SSL negotiation
>> 2.   SAN support
>> 
>> I will send out 2 separate mails on this.
>> 
>> 
>> From: Samuel Bercovici [mailto:samu...@radware.com]
>> Sent: Tuesday, July 15, 2014 11:52 PM
>> To: OpenStack Development Mailing List (not for usage questions); 
>> Vijay Venkatachalam
>> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> OK.
>> 
>> Let me be more precise, extracting the information for view sake / 
>> validation would be good.
>> Providing values that are different than what is in the x509 is what I am 
>> opposed to.
>> 
>> +1 for Carlos on the library and that it should be ubiquitously used.
>> 
>> I will wait for Vijay to speak for himself in this regard...
>> 
>> -Sam.
>> 
>> 
>> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
>> Sent: Tuesday, July 15, 2014 8:35 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> +1 to German's and  Carlos' comments.
>> 
>> It's also worth pointing out that some UIs will definitely want to show SAN 
>> information and the like, so either having this available as part of the 
>> API, or as a standard library we write which then gets used by multiple 
>> drivers is going to be necessary.
>> 
>> If we're extracting the Subject Common Name in any place in the code then we 
>> also need to be extracting the Subject Alternative Names at the same place. 
>> From the perspective of the SNI standard, there's no difference in how these 
>> fields should be treated, 

[openstack-dev] [Nova] Agenda for the Nova mid cycle meetup

2014-07-16 Thread Michael Still
Hi.

I think its time to start getting more organized with an agenda for
the mid cycle meetup. When we first announced the meetup we created an
etherpad with a list of things we wanted to talk about during the
meetup. That list is here:

https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup

It would be good if people could ensure that what they want to cover
is on that list, we can then try and turn that into a meaningful
agenda.

Thanks,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] heat stack-create with two vm instances always got one failed

2014-07-16 Thread Yuling_C
Dell Customer Communication
Thanks very much Qiming. That was the problem, and I got it fixed by creating 
another port for instance2.

B.T.W., another question. When I delete the stack, how come the network was not 
deleted? It seems only the instance was deleted. How do I clean up the networks 
and subnets associated with the instance using heat?

Thanks,

YuLing

-Original Message-
From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
Sent: Wednesday, July 16, 2014 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] heat stack-create with two vm instances always got 
one failed

It seems that you are sharing one port between two instances, which won't be a 
legal configuration.

On Wed, Jul 16, 2014 at 01:17:00AM -0500, yulin...@dell.com wrote:
> Dell Customer Communication
>
> Hi,
> I'm using heat to create a stack with two instances. I always got one of them 
> successful, but the other would fail. If I split the template into two and 
> each of them contains one instance then it worked. However, I thought a Heat 
> template would allow multiple instances to be created?
>
> Here I attach the heat template:
> {
> "AWSTemplateFormatVersion" : "2010-09-09",
> "Description" : "Sample Heat template that spins up multiple instances and a 
> private network (JSON)",
> "Resources" : {
> "test_net" : {
> "Type" : "OS::Neutron::Net",
> "Properties" : {
> "name" : "test_net"
> }
> },
> "test_subnet" : {
> "Type" : "OS::Neutron::Subnet",
> "Properties" : {
> "name" : "test_subnet",
> "cidr" : "120.10.9.0/24",
> "enable_dhcp" : "True",
> "gateway_ip" : "120.10.9.1",
> "network_id" : { "Ref" : "test_net" }
> }
> },
> "test_net_port" : {
> "Type" : "OS::Neutron::Port",
> "Properties" : {
> "admin_state_up" : "True",
> "network_id" : { "Ref" : "test_net" }
> }
> },
> "instance1" : {
> "Type" : "OS::Nova::Server",
> "Properties" : {
> "name" : "instance1",
> "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
> "flavor": "tvm-tt_lite",
> "networks" : [
> {"port" : { "Ref" : "test_net_port" }}
> ]
> }
> },
> "instance2" : {
> "Type" : "OS::Nova::Server",
> "Properties" : {
> "name" : "instance2",
> "image" : "8e2b4c71-448c-4313-8b41-b238af31f419",
> "flavor": "tvm-tt_lite",
> "networks" : [
> {"port" : { "Ref" : "test_net_port" }}
> ]
> }
> }
> }
> }
> The error that I got from heat-engine.log is as follows:
>
> 2014-07-16 01:49:50.514 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action complete step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
> 2014-07-16 01:49:50.515 25101 DEBUG heat.engine.scheduler [-] Task
> stack_task from Stack "teststack" sleeping _sleep
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:108
> 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
> stack_task from Stack "teststack" running step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
> 2014-07-16 01:49:51.516 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action running step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:164
> 2014-07-16 01:49:51.960 25101 DEBUG urllib3.connectionpool [-] "GET
> /v2/b64803d759e04b999e616b786b407661/servers/7cb9459c-29b3-4a23-a52c-1
> 7d85fce0559 HTTP/1.1" 200 1854 _make_request
> /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
> 2014-07-16 01:49:51.963 25101 ERROR heat.engine.resource [-] CREATE : Server 
> "instance1"
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Traceback (most 
> recent call last):
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 371, in 
> _do_action
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource while not 
> check(handle_data):
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 239, 
> in check_create_complete
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource return 
> self._check_active(server)
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource File 
> "/usr/lib/python2.6/site-packages/heat/engine/resources/server.py", line 255, 
> in _check_active
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource raise exc
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource Error: Creation of 
> server instance1 failed.
> 2014-07-16 01:49:51.963 25101 TRACE heat.engine.resource
> 2014-07-16 01:49:51.996 25101 DEBUG heat.engine.scheduler [-] Task
> resource_action cancelled cancel
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:187
> 2014-07-16 01:49:52.004 25101 DEBUG heat.engine.scheduler [-] Task
> stack_task from Stack "teststack" complete step
> /usr/lib/python2.6/site-packages/heat/engine/scheduler.py:170
> 2014-07-16 01:49:52.005 25101 WARNING heat.engine.service [-] Stack
> create failed, status FAILED
> 2014-07-16 01:50:29.218 25101 DEBUG heat.openstack.common.rpc.amqp [-]
> received {u'_context_r

Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread Kyle Mestery
On Wed, Jul 16, 2014 at 9:30 AM, John Garbutt  wrote:
> On 16 July 2014 14:07, Thierry Carrez  wrote:
>> Daniel P. Berrange wrote:
>>> On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.
>
> Agreed we should keep those comments.
>
>>> Agreed, that is sub-optimal to say the least.
>>>
>>> The spec documents themselves are in a release specific directory
>>> though. Any which are to be postponed to Kxxx would need to move
>>> into a specs/k directory instead of specs/juno, but we don't
>>> know what the k directory needs to be called yet :-(
>>
>> The poll ends in 18 hours, so that should no longer be a blocker :)
>
> Aww, there goes our lame excuse for punting making a decision on this.
>
>> I think what we don't really want to abandon those specs and lose
>> comments and history... but we want to shelve them in a place where they
>> do not interrupt core developers workflow as they concentrate on Juno
>> work. It will be difficult to efficiently ignore them if they are filed
>> in a next or a kxxx directory, as they would still clutter /most/ Gerrit
>> views.
>
> +1
>
> My intention was that once the specific project is open for K specs,
> people will restore their original patch set, and move the spec to the
> K directory, thus keeping all the history.
>
> For Nova, the open reviews, with a -2, are ones that are on the
> potential exception list, and so still might need some reviews. If
> they gain an exception, the -2 will be removed. The list of possible
> exceptions is currently included in bottom of this etherpad:
> https://etherpad.openstack.org/p/nova-juno-spec-priorities
>
> At some point we will open nova-specs for K, right now we are closed
> for all spec submissions. We already have more blueprints approved
> than we will be able to merge during the rest of Juno.
>
> The idea is that everyone can now focus more on fixing bugs, reviewing
> bug fixes, and reviewing remaining higher priority features, rather
> than reviewing designs for K features. It is syncing a lot of
> reviewers time looking at nova-specs, and it feels best to divert
> attention.
>
> We could leave the reviews open in gerrit, but we are trying hard to
> set expectations around the likelihood of being reviewed and/or
> accepted. In the past people have got very frustrated and
> complained about not finding out about what is happening (or not) with
> what they have up for reviews.
>
> This is all very new, so we are mostly making this up as we go along,
> based on what we do with code submissions. Ideas on a better approach
> that still meet most of the above goals, would be awesome.
>
For the most part, I've been giving -2 to specs which we're not
approving for Juno. This means myself (and others who have been giving
-2s) needs to go back and remove those once the "K" release opens in
the neutron-specs repository. This is far from optimal, but does allow
for tracking of the specs and the history. I'm somewhat concerned that
once we open the "K" directory for Neutron specs we'll be deluged with
specs for that while we're trying hard to close Juno out, but I think
this problem will be there for all projects with specs repositories. I
don't think we've figured out a good way to focus people on the
remaining Juno items when this happens.

Kyle


> Thanks,
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Cinder coverage

2014-07-16 Thread Clint Byrum
Excerpts from Dan Prince's message of 2014-07-16 09:50:51 -0700:
> Hi TripleO!
> 
> It would appear that we have no coverage in devtest which ensures that
> Cinder consistently works in the overcloud. As such the TripleO Cinder
> elements are often broken (as of today I can't fully use lio or tgt w/
> upstream TripleO elements).
> 
> How do people feel about swapping out our single 'nova boot' command to
> boot from a volume. Something like this:
> 
>  https://review.openstack.org/#/c/107437
> 
> There is a bit of tradeoff here in that the conversion will take a bit
> of time (qemu-img has to run). Also our boot code path won't be exactly
> the same as booting from an image.
> 
> Long term we want to run Tempest but due to resource constraints we
> can't do that today. Until then this sort of deep systems test (running
> a command that exercises more code) might serve us well and give us the
> Cinder coverage we need.
> 
> Thoughts?
> 

Tempest is a stretch goal. Given our long test times, until we get them
down, I don't know if we can even flirt with tempest other than the most
basic smoke tests.

So yes, I like the idea of having our one smoke test be as wide as
possible.

Later on we can add Heat coverage by putting said smoke test into a
Heat template.
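
For reference, the swap amounts to something like this (flags per 
novaclient's block-device syntax; the exact invocation is in the review):

    nova boot --flavor demo --poll \
        --block-device source=image,id=$IMAGE_ID,dest=volume,size=2,shutdown=remove,bootindex=0 \
        demo-vm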

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-16 Thread Steve Baker
On 12/07/14 09:25, Joe Gordon wrote:
>
>
>
> On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley  > wrote:
>
> On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
> > this broke horizon stable and master; heat stable is affected as
> > well.
> [...]
>
> I guess this is a plea for applying something like the oslotest
> framework to client libraries so they get backward-compat jobs run
> against unit tests of all dependent/consuming software... branchless
> tempest already alleviates some of this, but not the case of changes
> in a library which will break unit/functional tests of another
> project.
>
>
> We actually do have some tests for backwards compatibility, and they
> all passed. Presumably because both heat and horizon have poor
> integration tests.
>
> We ran:
>
>   * check-tempest-dsvm-full-havana: SUCCESS in 40m 47s (non-voting)
>   * check-tempest-dsvm-neutron-havana: SUCCESS in 36m 17s (non-voting)
>   * check-tempest-dsvm-full-icehouse: SUCCESS in 53m 05s
>   * check-tempest-dsvm-neutron-icehouse: SUCCESS in 57m 28s
>
>
> on the offending patches (https://review.openstack.org/#/c/94166/)
>  
>
> Infra patch that added these tests:
> https://review.openstack.org/#/c/80698/
>
>
Heat-proper would have continued working fine with novaclient 2.18.0.
The regression was with raising novaclient exceptions, which is only
required in our unit tests. I saw this break coming and switched to
raising via from_response
https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py
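
For anyone hitting the same break, the pattern looks roughly like this (the 
exact from_response signature has varied between novaclient releases, so 
treat it as a sketch and check the linked fakes.py):

    from novaclient import exceptions as nova_exceptions

    class FakeHTTPResponse(object):
        status_code = 404
        headers = {'Content-Type': 'application/json'}

    def fake_not_found(message='not found'):
        # Build the exception via the public factory rather than
        # instantiating NotFound directly, whose __init__ is internal
        # and changed in 2.18.0.
        body = {'itemNotFound': {'message': message, 'code': 404}}
        return nova_exceptions.from_response(FakeHTTPResponse(), body)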

Unit tests tend to deal with more internals of client libraries just for
mocking purposes, and there have been multiple breaks in unit tests for
heat and horizon when client libraries make internal changes.

This could be avoided if the client gate jobs run the unit tests for the
projects which consume them.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

2014-07-16 Thread Dmitry Borodaenko
I've got a bit of good news and bad news about the state of landing
the rbd-ephemeral-clone patch series for Nova in Juno.

The good news is that the first patch in the series
(https://review.openstack.org/91722 fixing a data loss inducing bug
with live migrations of instances with RBD backed ephemeral drives)
was merged yesterday.

The bad news is that after 2 months of sitting in the review queue and
only getting its first +1 from a core reviewer on the spec approval
freeze day, the spec for the blueprint rbd-clone-image-handler
(https://review.openstack.org/91486) wasn't approved in time. Because
of that, today the blueprint was rejected along with the rest of the
commits in the series, even though the code itself was reviewed and
approved a number of times.

Our last chance to avoid putting this work on hold for yet another
OpenStack release cycle is to petition for a spec freeze exception in
the next Nova team meeting:
https://wiki.openstack.org/wiki/Meetings/Nova

If you're using Ceph RBD as backend for ephemeral disks in Nova and
are interested in this patch series, please speak up. Since the biggest
concern raised about this spec so far has been lack of CI coverage,
please let us know if you're already using this patch series with
Juno, Icehouse, or Havana.

I've put together an etherpad with a summary of where things are with
this patch series and how we got here:
https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status

Previous thread about this patch series on ceph-users ML:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html

-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-net-config

2014-07-16 Thread Robert Collins
On 17 July 2014 05:58, Dan Prince  wrote:
> Hi TripleO!
>
> I wanted to get the word out on progress with a new os-net-config tool
> for TripleO. The spec (not yet approved) lives here:
>
> https://review.openstack.org/#/c/97859/
>
> We've also got a working implementation here:
>
> https://github.com/dprince/os-net-config
>
> You can see a WIP example of how it wires in here (more work to do on this
> to fully support parity):
>
> https://review.openstack.org/#/c/104054/1/elements/network-utils/bin/ensure-bridge,cm
>
> The end goal is that we will be able to more flexibly control our host
> level network settings in TripleO. Once it is fully integrated
> os-net-config would provide a mechanism to drive more flexible
> configurations (multiple bridges, bonding, etc.) via Heat metadata.
>
> We are already in dire need of this sort of thing today because we can't
> successfully deploy our CI overclouds without making manual changes to
> our images (this is because we need 2 bridges and our heat templates
> only support 1).

I'm really glad this is coming along. One small thing to note - we
don't need two bridges for CI overclouds - the rearranging of things
I've done over the last couple of weeks means we no longer *break* the
built-in bridge, and so we can use br-ex for everything.
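
For the curious, a minimal os-net-config style config might look like the 
below (field names assumed from the repo's examples, so treat as 
illustrative):

    network_config:
      -
        type: ovs_bridge
        name: br-ex
        use_dhcp: true
        members:
          -
            type: interface
            name: em1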

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 3:49 PM, Carlos Garza  wrote:

> 
> On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 
> 
> wrote:
> 
>We will have the code that will parse the X509 in the API scope of the
> code. The validation I'm referring to is making sure the key matches the cert
> used, and that we mandate that at a minimum the backend driver support RSA.
> Since the X509 validation is happening at the API layer, this same module
> will also handle the extraction of the SANs. I am proposing that the methods
> that can extract the SAN and SCN from the x509 be present in the API portion
> of the code and that drivers can call these methods if they need to. In fact
> I'm already working to get these extraction methods contributed to the
> PyOpenSSL project so that they will already be available at a more
> fundamental layer than our neutron/LBaaS code. At the very least I want the
> spec to declare that SAN and SCN parsing must be made available from the API
> layer. If PyOpenSSL has the methods available at that time then we can simply
> write wrappers for them in the API or write higher-level methods in the API
> module.

    I meant to say, bottom line, I want the parsing code exposed in the API 
and not duplicated in everyone else's driver.

> I am partially open to the idea of letting the driver handle the
> behavior of the cert parsing, although I defer this to the rest of the folks,
> as I get the feeling that having different implementations exhibiting
> different behavior may sound scary.
> 
>> 
>>I think it is best not to mention SAN in the OpenStack 
>> TLS spec. It is expected that the backend should implement according to the 
>> SSL/SNI IETF spec.
>> Let’s leave the implementation/validation part to the driver.  For ex. 
>> NetScaler does not support SAN and the NetScaler driver could either throw 
>> an error if certs with SAN are used or ignore it.
> 
>How is netscaler making the decision when choosing the cert to associate 
> with the SNI handshake?
> 
>> 
>> Does anyone see a requirement for detailing?
>> 
>> 
>> Thanks,
>> Vijay V.
>> 
>> 
>> From: Vijay Venkatachalam 
>> Sent: Wednesday, July 16, 2014 8:54 AM
>> To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
>> questions)'
>> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> Apologies for the delayed response.
>> 
>> I am OK with displaying the certificates contents as part of the API, that 
>> should not harm.
>> 
>> I think the discussion has to be split into 2 topics.
>> 
>> 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
>> more certificates become eligible during SSL negotiation
>> 2.   SAN support
>> 
>> I will send out 2 separate mails on this.
>> 
>> 
>> From: Samuel Bercovici [mailto:samu...@radware.com] 
>> Sent: Tuesday, July 15, 2014 11:52 PM
>> To: OpenStack Development Mailing List (not for usage questions); Vijay 
>> Venkatachalam
>> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> OK.
>> 
>> Let me be more precise, extracting the information for view sake / 
>> validation would be good.
>> Providing values that are different than what is in the x509 is what I am 
>> opposed to.
>> 
>> +1 for Carlos on the library and that it should be ubiquitously used.
>> 
>> I will wait for Vijay to speak for himself in this regard…
>> 
>> -Sam.
>> 
>> 
>> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
>> Sent: Tuesday, July 15, 2014 8:35 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
>> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>> 
>> +1 to German's and  Carlos' comments.
>> 
>> It's also worth pointing out that some UIs will definitely want to show SAN 
>> information and the like, so either having this available as part of the 
>> API, or as a standard library we write which then gets used by multiple 
>> drivers is going to be necessary.
>> 
>> If we're extracting the Subject Common Name in any place in the code then we 
>> also need to be extracting the Subject Alternative Names at the same place. 
>> From the perspective of the SNI standard, there's no difference in how these 
>> fields should be treated, and if we were to treat SANs differently then 
>> we're both breaking the standard and setting a bad precedent.
>> 
>> Stephen
>> 
>> 
>> On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza  
>> wrote:
>> 
>> On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
>> wrote:
>> 
>>> Hi,
>>> 
>>> 
>>> Obtaining the domain name from the x509 is probably more of a 
>>> driver/backend/device capability, it would make sense to have a library 
>>> that could be used by anyone wishing to do so in their dr

Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Doug Wiegley


On 7/16/14, 2:43 PM, "Clint Byrum"  wrote:

>Excerpts from Mike Spreitzer's message of 2014-07-16 10:50:42 -0700:
>> Clint Byrum  wrote on 07/02/2014 01:54:49 PM:
>> 
>> > Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
>> > > Just some random thoughts below ...
>> > > 
>> > > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
>> > > > In AWS, an autoscaling group includes health maintenance
>> > > > functionality --- both an ability to detect basic forms of
>> > > > failures and an ability to react properly to failures detected by
>> > > > itself or by a load balancer.  What is the thinking about how to
>> > > > get this functionality in OpenStack? Since
>> 
>> > > 
>> > > We are prototyping a solution to this problem at IBM Research - China
>> > > lab.  The idea is to leverage oslo.messaging and ceilometer events for
>> > > instance (possibly other resource such as port, securitygroup ...)
>> > > failure detection and handling.
>> > > 
>> > 
>> > Hm.. perhaps you should be contributing some reviews here as you may
>> > have some real insight:
>> > 
>> > https://review.openstack.org/#/c/100012/
>> > 
>> > This sounds a lot like what we're working on for continuous convergence.
>> 
>> I noticed that health checking in AWS goes beyond convergence.  In AWS an
>> ELB can be configured with a URL to ping, for application-level health
>> checking.  And an ASG can simply be *told* the health of a member by a
>> user's own external health system.  I think we should have analogous
>> functionality in OpenStack.  Does that make sense to you?  If so, do you
>> have any opinion on the right way to integrate, so that we do not have
>> three completely independent health maintenance systems?
>
>The check url is already a part of Neutron LBaaS IIRC. What may not be
>a part is notifications for when all members are reporting down (which
>might be something to trigger scale-up).

You do recall correctly, and there are currently no mechanisms for
notifying anything outside of the load balancer backend when the health
monitor/member state changes.

There is also currently no way for an external system to inject health
information about an LB or its members.

Both would be interesting additions.

doug


>
>If we don't have push checks in our auto scaling implementation then we
>don't have a proper auto scaling implementation.
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 12:30 PM, Vijay Venkatachalam 

 wrote:

We will have the code that will parse the X509 in the API scope of the 
code. The validation I'm referring to is making sure the key matches the cert 
used, and that we mandate that at a minimum the backend driver supports RSA. 
Since the X509 validation is happening at the API layer, this same module 
will also handle the extraction of the SANs. I am proposing that the methods 
that can extract the SAN/SCN from the X509 be present in the API portion of the 
code and that drivers can call these methods if they need to. In fact I'm 
already working to get these extraction methods contributed to the PyOpenSSL 
project so that they will already be available at a more fundamental layer than 
our neutron/LBaaS code. At the very least I want the spec to declare that 
SAN/SCN parsing must be made available from the API layer. If PyOpenSSL has 
the methods available at that time then we can simply write wrappers for them 
in the API or simply write higher-level methods in the API module. Bottom 
line: I want the parsing code exposed in the API and not duplicated in 
everyone else's driver.

 I am partially open to the idea of letting the driver handle the behavior 
of the cert parsing, although I defer this to the rest of the folks, as I have 
a feeling that different implementations exhibiting different behavior 
may sound scary. 
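
To make that concrete, here is roughly what such an extraction helper could
look like with today's pyOpenSSL (a sketch only: taking str() of the extension
is a shortcut, and a real implementation would parse the ASN.1 properly, e.g.
with pyasn1, as mentioned above):

    from OpenSSL import crypto

    def get_host_names(cert_pem):
        """Extract subjectCommonName and subjectAltNames from a PEM cert."""
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
        cn = cert.get_subject().commonName
        alt_names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == 'subjectAltName':
                # str(ext) looks like "DNS:example.com, DNS:www.example.com"
                alt_names = [part.split(':', 1)[1]
                             for part in str(ext).split(', ')]
        return cn, alt_names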

>  
> I think it is best not to mention SAN in the OpenStack 
> TLS spec. It is expected that the backend should implement according to the 
> SSL/SNI IETF spec.
> Let’s leave the implementation/validation part to the driver.  For ex. 
> NetScaler does not support SAN and the NetScaler driver could either throw an 
> error if certs with SAN are used or ignore it.

How is netscaler making the decision when choosing the cert to associate 
with the SNI handshake?

>  
> Does anyone see a requirement for detailing this?
>  
>  
> Thanks,
> Vijay V.
>  
>  
> From: Vijay Venkatachalam 
> Sent: Wednesday, July 16, 2014 8:54 AM
> To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
> questions)'
> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> Apologies for the delayed response.
>
> I am OK with displaying the certificates contents as part of the API, that 
> should not harm.
>  
> I think the discussion has to be split into 2 topics.
>  
> 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
> more certificates become eligible during SSL negotiation
> 2.   SAN support
>  
> I will send out 2 separate mails on this.
>  
>  
> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Tuesday, July 15, 2014 11:52 PM
> To: OpenStack Development Mailing List (not for usage questions); Vijay 
> Venkatachalam
> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> OK.
>  
> Let me be more precise, extracting the information for view sake / validation 
> would be good.
> Providing values that are different than what is in the x509 is what I am 
> opposed to.
>  
> +1 for Carlos on the library and that it should be ubiquitously used.
>  
> I will wait for Vijay to speak for himself in this regard…
>  
> -Sam.
>  
>  
> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
> Sent: Tuesday, July 15, 2014 8:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> +1 to German's and  Carlos' comments.
>  
> It's also worth pointing out that some UIs will definitely want to show SAN 
> information and the like, so either having this available as part of the API, 
> or as a standard library we write which then gets used by multiple drivers is 
> going to be necessary.
>  
> If we're extracting the Subject Common Name in any place in the code then we 
> also need to be extracting the Subject Alternative Names at the same place. 
> From the perspective of the SNI standard, there's no difference in how these 
> fields should be treated, and if we were to treat SANs differently then we're 
> both breaking the standard and setting a bad precedent.
>  
> Stephen
>  
> 
> On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza  
> wrote:
> 
> On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
>  wrote:
> 
> > Hi,
> >
> >
> > Obtaining the domain name from the x509 is probably more of a 
> > driver/backend/device capability, it would make sense to have a library 
> > that could be used by anyone wishing to do so in their driver code.
> 
> You can do what ever you want in *your* driver. The code to extract this 
> information will be apart of the API and needs to be mentioned in the spec 
> now. PyOpenSSL with PyASN1 are the most likely candidates.
> 
> Carlos D. Garza
> >

Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Clint Byrum
Excerpts from Mike Spreitzer's message of 2014-07-16 10:50:42 -0700:
> Clint Byrum  wrote on 07/02/2014 01:54:49 PM:
> 
> > Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
> > > Just some random thoughts below ...
> > > 
> > > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > > In AWS, an autoscaling group includes health maintenance
> > > > functionality --- both an ability to detect basic forms of failures
> > > > and an ability to react properly to failures detected by itself or
> > > > by a load balancer.  What is the thinking about how to get this
> > > > functionality in OpenStack? Since
> 
> > > 
> > > We are prototyping a solution to this problem at IBM Research - China
> > > lab.  The idea is to leverage oslo.messaging and ceilometer events for
> > > instance (possibly other resource such as port, securitygroup ...)
> > > failure detection and handling.
> > > 
> > 
> > Hm.. perhaps you should be contributing some reviews here as you may
> > have some real insight:
> > 
> > https://review.openstack.org/#/c/100012/
> > 
> > This sounds a lot like what we're working on for continuous convergence.
> 
> I noticed that health checking in AWS goes beyond convergence.  In AWS an 
> ELB can be configured with a URL to ping, for application-level health 
> checking.  And an ASG can simply be *told* the health of a member by a 
> user's own external health system.  I think we should have analogous 
> functionality in OpenStack.  Does that make sense to you?  If so, do you 
> have any opinion on the right way to integrate, so that we do not have 
> three completely independent health maintenance systems?

The check url is already a part of Neutron LBaaS IIRC. What may not be
a part is notifications for when all members are reporting down (which
might be something to trigger scale-up).
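
For reference, the check-url piece here is the LBaaS health monitor; a rough
python-neutronclient sketch, with illustrative endpoint and credentials:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')
    # Ping an application URL every 5s; mark a member down after 3 failures.
    neutron.create_health_monitor({'health_monitor': {
        'type': 'HTTP', 'url_path': '/healthcheck',
        'expected_codes': '200', 'delay': 5, 'timeout': 3,
        'max_retries': 3}})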

If we don't have push checks in our auto scaling implementation then we
don't have a proper auto scaling implementation.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? July 16 2014

2014-07-16 Thread Anne Gentle
Hi all,
First of all, I'm sorry we had to skip last week's doc team meeting and
that I didn't send this note out last week -- had to take care of my son's
health. As Pa from Little House on the Prairie would say, "All's well that
ends well."

Thanks to the APAC team for holding the docs team meeting this week.
Minutes and logs:
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-16-03.06.html
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-07-16-03.06.log.html

__In review and merged this past week__
The CLI Reference has been updated for the release of:
 python-novaclient 2.18.1
 python-keystoneclient  0.9
 python-cinderclient 1.0.9 
 python-glanceclient 0.13.1

We're now labeling the release name in a running sidebar of text, on both
older releases and the current release, for all docs pages that correlate
with an integrated release.

The neutron.conf advanced configuration info has been updated to use the
alias openvswitch rather than, for example,
neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2.

__High priority doc work__

I'm as eager as you are to get my hands on the results of the Architecture
and Design Guide! We're working on the best output and should have it
available soon.

__Ongoing doc work__

To clarify the request for docs-specs, while we have some wiki page
specifications for Launchpad blueprints, I was hoping to try out the
docs-specs repo for the networking guide, the HOT template user guide
chapter, and for app developer deliverables. We are trying out the
docs-specs repo rather than wiki pages. So far Andreas has proposed one for
a common glossary, and Gauvain is working on another for the HOT template
user guide chapter. Phil and Matt are working on the networking guide, so
that leaves Tom and me working on developer deliverables. Training guides,
do you have any blueprints you want reviewed? Let's get them proposed.

Or, if we think we should stick to wiki pages for specs, that's okay too.

__New incoming doc requests__

The Trove team gets a gold star for outlining their doc gaps here:
https://etherpad.openstack.org/p/trove-doc-items. Their goal is to get
those items at least in draft by 7/24.

Mostly the interest is in the HOT templates doc and the upcoming Networking
doc swarm and spec.

__Doc tools updates__

I want to be clear that there's no Foundation support for any purchased
licenses of a proprietary toolchain. Our entire docs toolchain is open.
Some of us choose to use Oxygen for authoring, and Oxygen XML, the company,
chooses to support open source projects by providing free licenses for a
longer trial than their 30 day trial. So as far as I know, something like
Prince for output wouldn't be supported.

The clouddocs-maven-plugin has a 2.1.2 release (release notes:
https://github.com/stackforge/clouddocs-maven-plugin/blob/master/RELEASE_NOTES.rst#clouddocs-maven-plugin-212-july-15-2014)
which enables hyphenation. To update to 2.1.2, update the <version>
indicated for the plugin in the pom.xml and try out hyphenation!

__Other doc news__

I plan to attend the Ops Meetup in San Antonio Aug. 25-26th. More details
at https://etherpad.openstack.org/p/SAT-ops-meetup. Please let me know your
Ops Docs Needs prior to or at that event.

I absolutely love this blog post by Mark McLoughlin at
http://blogs.gnome.org/markmc/2014/06/06/an-ideal-openstack-developer/  -
an excellent example of satire and how we should all watch each other for
burnout. :) Best paragraph:
"And then there’s docs, always the poor forgotten child of any open source
project. Yet OpenStack has some relatively awesome docs and a great team
developing them. They can never hope to cope with the workload themselves,
though, so they need you to pitch in and help perfect those docs in your
area of expertise."
Great job docs team, for working so hard on docs.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-16 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 16/07/14 01:50, Vishvananda Ishaya wrote:
> 
> On Jul 15, 2014, at 3:30 PM, Ihar Hrachyshka 
> wrote:
> 
>> On 14/07/14 22:48, Vishvananda Ishaya wrote:
>>> 
>>> On Jul 13, 2014, at 9:29 AM, Ihar Hrachyshka
>>>  wrote:
>>> 
 On 12/07/14 03:17, Mike Bayer wrote:
> 
> On 7/11/14, 7:26 PM, Carl Baldwin wrote:
>> 
>> 
>> On Jul 11, 2014 5:32 PM, "Vishvananda Ishaya" 
>>  > wrote:
>>> 
>>> I have tried using pymysql in place of mysqldb and in
>>> real world
> concurrency
>>> tests against cinder and nova it performs slower. I
>>> was inspired by
> the mention
>>> of mysql-connector so I just tried that option
>>> instead.
> Mysql-connector seems
>>> to be slightly slower as well, which leads me to
>>> believe that the
> blocking inside of
>> 
>> Do you have some numbers?  "Seems to be slightly slower" 
>> doesn't
> really stand up as an argument against the numbers that
> have been posted in this thread.
>>> 
>>> Numbers are highly dependent on a number of other factors, but
>>> I was seeing 100 concurrent list commands against cinder going
>>> from an average of 400 ms to an average of around 600 ms with
>>> both msql-connector and pymsql.
>> 
>> I've run my tests on neutron only, so there is a possibility that 
>> cinder works somehow differently.
>> 
>> But, those numbers don't tell a lot in terms of considering the 
>> switch. Do you have numbers for mysqldb case?
> 
> Sorry if my commentary above was unclear. The 400ms is mysqldb. 
> The 600ms average was the same for both the other options.
>> 
>>> 
>>> It is also worth mentioning that my test of 100 concurrent
>>> creates from the same project in cinder leads to average
>>> response times over 3 seconds. Note that creates return before
>>> the request is sent to the node for processing, so this is just
>>> the api creating the db record and sticking a message on the
>>> queue. A huge part of the slowdown is in quota reservation
>>> processing which does a row lock on the project id.
>> 
>> Again, are those 3 seconds better or worse than what we have for
>> mysqldb?
> 
> The 3 seconds is from mysqldb. I don't have average response times
> for mysql-connector due to the timeouts I mention below.
>> 
>>> 
>>> Before we are sure that an eventlet friendly backend "gets rid
>>> of all deadlocks", I will mention that trying this test
>>> against connector leads to some requests timing out at our load
>>> balancer (5 minute timeout), so we may actually be introducing
>>> deadlocks where the retry_on_deadlock operator is used.
>> 
>> Deadlocks != timeouts. I attempt to fix eventlet-triggered db 
>> deadlocks, not all possible deadlocks that you may envision, or
>> timeouts.
> 
> That may be true, but if switching the default is trading one
> problem for another it isn't necessarily the right fix. The timeout
> means that one or more greenthreads are never actually generating a
> response. I suspect an endless retry_on_deadlock between a couple
> of competing greenthreads which we don't hit with mysqldb, but it
> could be any number of things.
> 
>> 
>>> 
>>> Consider the above anecdotal for the moment, since I can't
>>> verify for sure that switching the sql driver didn't introduce
>>> some other race or unrelated problem.
>>> 
>>> Let me just caution that we can't recommend replacing our
>>> mysql backend without real performance and load testing.
>> 
>> I agree. Not saying that the tests are somehow complete, but here
>> is what I was into last two days.
>> 
>> There is a nice openstack project called Rally that is designed
>> to allow easy benchmarks for openstack projects. They have four
>> scenarios for neutron implemented: for networks, ports, routers,
>> and subnets. Each scenario combines create and list commands.
>> 
>> I've run each test with the following runner settings: times =
>> 100, concurrency = 10, meaning each scenario is run 100 times in
>> parallel, and there were not more than 10 parallel scenarios
>> running. Then I've repeated the same for times = 100, concurrency
>> = 20 (also set max_pool_size to 20 to allow sqlalchemy to utilize
>> that level of parallelism), and times = 1000, concurrency = 100
>> (same note on sqlalchemy parallelism).
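>> 
>> For reference, the driver switch itself is only a change of the SQLAlchemy
>> connection URL; roughly (credentials and pool sizing are illustrative):
>> 
>>     from sqlalchemy import create_engine
>> 
>>     # default C driver (MySQL-Python / mysqldb)
>>     engine = create_engine('mysql+mysqldb://user:pw@host/neutron')
>>     # pure-Python, eventlet-friendly driver benchmarked here
>>     engine = create_engine('mysql+pymysql://user:pw@host/neutron',
>>                            pool_size=20)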
>> 
>> You can find detailed html files with nice graphs here [1].
>> Brief description of results is below:
>> 
>> 1. create_and_list_networks scenario: for 10 parallel workers the
>> average scenario time dropped by 12.5% from the original, for 20
>> workers by 6.3%; for 100 workers there is a slight increase of
>> average time spent per scenario, +9.4% (this is the only scenario
>> that showed a slight reduction in performance; I'll try to rerun the
>> test tomorrow to see whether it was some discrepancy when I
>> executed it that influenced the result).
>> 
>> 2. create_and_list_ports scenario: for 10 para

Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread Kyle Mestery
I've poked some folks on the infra channel about this now, as we need
this merged soon.
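
For third-party CI operators hitting this, a quick sanity check is to confirm
which alembic the slave actually picked up (a sketch; the exact required floor
is in the review linked from the bug):

    import alembic
    # the healing migration needs a newer alembic than many images ship with
    print(alembic.__version__)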

On Wed, Jul 16, 2014 at 11:30 AM, Kevin Benton  wrote:
> This bug is also affecting Ryu and the Big Switch CI.
> There is a patch to bump the version requirement for alembic linked in the
> bug report that should fix it. If we can't get that merged we may have to
> revert the healing patch.
>
> https://bugs.launchpad.net/bugs/1342507
>
> On Jul 16, 2014 9:27 AM, "trinath.soman...@freescale.com"
>  wrote:
>>
>> Hi-
>>
>>
>>
>> With the neutron update to my CI, I get the following error while
>> configuring Neutron in devstack.
>>
>>
>>
>> 2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected
>> server default on column 'poolmonitorassociations.status'
>>
>> 2014-07-16 16:12:06.411 | INFO
>> [neutron.db.migration.alembic_migrations.heal_script] Detected added foreign
>> key for column 'id' on table u'ml2_brocadeports'
>>
>> 2014-07-16 16:12:14.853 | Traceback (most recent call last):
>>
>> 2014-07-16 16:12:14.853 |   File "/usr/local/bin/neutron-db-manage", line
>> 10, in 
>>
>> 2014-07-16 16:12:14.853 | sys.exit(main())
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 171, in main
>>
>> 2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 85, in
>> do_upgrade_downgrade
>>
>> 2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision,
>> sql=CONF.command.sql)
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 63, in
>> do_alembic_command
>>
>> 2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args,
>> **kwargs)
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in
>> upgrade
>>
>> 2014-07-16 16:12:14.854 | script.run_env()
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in
>> run_env
>>
>> 2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 205, in
>> load_python_file
>>
>> 2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in
>> load_module_py
>>
>> 2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
>>
>> 2014-07-16 16:12:14.854 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py",
>> line 106, in 
>>
>> 2014-07-16 16:12:14.854 | run_migrations_online()
>>
>> 2014-07-16 16:12:14.855 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py",
>> line 90, in run_migrations_online
>>
>> 2014-07-16 16:12:14.855 | options=build_options())
>>
>> 2014-07-16 16:12:14.855 |   File "", line 7, in run_migrations
>>
>> 2014-07-16 16:12:14.855 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 681,
>> in run_migrations
>>
>> 2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
>>
>> 2014-07-16 16:12:14.855 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in
>> run_migrations
>>
>> 2014-07-16 16:12:14.855 | change(**kw)
>>
>> 2014-07-16 16:12:14.856 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py",
>> line 32, in upgrade
>>
>> 2014-07-16 16:12:14.856 | heal_script.heal()
>>
>> 2014-07-16 16:12:14.856 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
>> line 78, in heal
>>
>> 2014-07-16 16:12:14.856 | execute_alembic_command(el)
>>
>> 2014-07-16 16:12:14.856 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
>> line 93, in execute_alembic_command
>>
>> 2014-07-16 16:12:14.856 | parse_modify_command(command)
>>
>> 2014-07-16 16:12:14.856 |   File
>> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
>> line 126, in parse_modify_command
>>
>> 2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)
>>
>> 2014-07-16 16:12:14.856 |   File "", line 7, in alter_column
>>
>> 2014-07-16 16:12:14.856 |   File "", line 1, in 
>>
>> 2014-07-16 16:12:14.856 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 322, in go
>>
>> 2014-07-16 16:12:14.857 | return fn(*arg, **kw)
>>
>> 2014-07-16 16:12:14.857 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 300, in
>> alter_column
>>
>> 2014-07-16 16:12:14.857 |
>> existing_autoincrement=existing_autoincrement
>>
>> 2014-07-16 16:12:14.857 |   File
>> "/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 42, i

Re: [openstack-dev] [oslo] oslo.serialization repo review

2014-07-16 Thread Ben Nemec
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 07/15/2014 03:52 PM, Ihar Hrachyshka wrote:
> On 15/07/14 20:36, Joshua Harlow wrote:
>> LGTM.
> 
>> I'd be interesting in the future to see if we can transparently
>> use some other serialization format (besides json)...
> 
>> That's my only compliant is that jsonutils is still named
>> jsonutils instead of 'serializer' or something else but I
>> understand the reasoning why...
> 
> Now that jsonutils module contains all basic 'json' functions 
> (dump[s], load[s]), can we rename it to 'json' to mimic the
> standard 'json' library? I think jsonutils is now easy to use as an
> enhanced drop-in replacement for standard 'json' module, and I even
> envisioned a hacking rule that would suggest to use jsonutils
> instead of json. So appropriate naming would be helpful to push
> that use case.

We discussed this a bit on the oslo.utils spec, but we don't want to
shadow builtin names so we're leaving the utils suffix on the modules
that have it.  I would think the same applies here.

If someone wants to use this as a drop-in they can still do "from
oslo.serialization import jsonutils as json".
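
E.g., a quick sketch of that drop-in use (jsonutils exposes the same
dump[s]/load[s] surface mentioned above):

    from oslo.serialization import jsonutils as json

    payload = json.dumps({'id': 1})
    assert json.loads(payload) == {'id': 1}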

> 
> /Ihar
> 
> 
>> -Josh
> 
>> On Jul 15, 2014, at 10:42 AM, Ben Nemec  
>> wrote:
> 
>>> And the link, since I forgot it before: 
>>> https://github.com/cybertron/oslo.serialization
>>> 
>>> On 07/14/2014 04:59 PM, Ben Nemec wrote:
 Hi oslophiles,
 
 I've (finally) started the graduation of oslo.serialization, 
 and I'm up to the point of having a repo on github that
 passes the unit tests.
 
 I realize there is some more work to be done (e.g. replacing 
 all of the openstack.common files with libs) but my plan is
 to do that once it's under Gerrit control so we can review
 the changes properly.
 
 Please take a look and leave feedback as appropriate.
 Thanks!
 
 -Ben
 
>>> 
>>> 
>>> ___ OpenStack-dev 
>>> mailing list OpenStack-dev@lists.openstack.org 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>>> 
>>> 
> 
>> ___ OpenStack-dev 
>> mailing list OpenStack-dev@lists.openstack.org 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>> 
> 
> ___ OpenStack-dev
> mailing list OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTxt1rAAoJEDehGd0Fy7uqm0EH/RwwkuCOwrZy/f/DhxgHgyXA
zWi9x29M+q9kDdkJImdoSCoimReV1tXGMBe/hMtqiqa7XUtC0daltPDsDgZX1rCE
Od1luXfnD8jxdIWI+6ecDpf8eK3PZqe++FHditOEVDNN6R84xW6Zkkd/3ERipT5D
Jt4G1VBV6DmeO80p94InunAvlG6f15t1NuWfqo7a1fU8r9XpKRnYqmgSBrjNxZcL
8cDTW/3HH6X2kps1xVDJTDFCo2WionbK73N9FYy1NBRt0XKThseRVXQiC4sANlEN
/tHqlWVGZg6e6HCkvywV4gAUKnaNiuHVi6U0RDgz4KIa2Qrbazup3Azz2fsbt6U=
=39+D
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting July 17 1800 UTC

2014-07-16 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140717T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Carlos Garza

On Jul 16, 2014, at 10:55 AM, Vijay Venkatachalam 

 wrote:

> Apologies for the delayed response.
>
> I am OK with displaying the certificates contents as part of the API, that 
> should not harm.
>  
> I think the discussion has to be split into 2 topics.
>  
> 1.   Certificate conflict resolution. Meaning what is expected when 2 or 
> more certificates become eligible during SSL negotiation
> 2.   SAN support
>  

Ok cool, that makes more sense. #2 seems to be met by Evgeny's proposal. I'll 
let you folks decide the conflict resolution issue (#1).


> I will send out 2 separate mails on this.
>  
>  
> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Tuesday, July 15, 2014 11:52 PM
> To: OpenStack Development Mailing List (not for usage questions); Vijay 
> Venkatachalam
> Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> OK.
>  
> Let me be more precise, extracting the information for view sake / validation 
> would be good.
> Providing values that are different than what is in the x509 is what I am 
> opposed to.
>  
> +1 for Carlos on the library and that it should be ubiquitously used.
>  
> I will wait for Vijay to speak for himself in this regard…
>  
> -Sam.
>  
>  
> From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
> Sent: Tuesday, July 15, 2014 8:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>  
> +1 to German's and  Carlos' comments.
>  
> It's also worth pointing out that some UIs will definitely want to show SAN 
> information and the like, so either having this available as part of the API, 
> or as a standard library we write which then gets used by multiple drivers is 
> going to be necessary.
>  
> If we're extracting the Subject Common Name in any place in the code then we 
> also need to be extracting the Subject Alternative Names at the same place. 
> From the perspective of the SNI standard, there's no difference in how these 
> fields should be treated, and if we were to treat SANs differently then we're 
> both breaking the standard and setting a bad precedent.
>  
> Stephen
>  
> 
> On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza  
> wrote:
> 
> On Jul 15, 2014, at 10:55 AM, Samuel Bercovici 
>  wrote:
> 
> > Hi,
> >
> >
> > Obtaining the domain name from the x509 is probably more of a 
> > driver/backend/device capability, it would make sense to have a library 
> > that could be used by anyone wishing to do so in their driver code.
> 
> You can do what ever you want in *your* driver. The code to extract this 
> information will be apart of the API and needs to be mentioned in the spec 
> now. PyOpenSSL with PyASN1 are the most likely candidates.
> 
> Carlos D. Garza
> >
> > -Sam.
> >
> >
> >
> > From: Eichberger, German [mailto:german.eichber...@hp.com]
> > Sent: Tuesday, July 15, 2014 6:43 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> > Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > Hi,
> >
> > My impression was that the frontend would extract the names and hand them 
> > to the driver.  This has the following advantages:
> >
> > · We can be sure all drivers can extract the same names
> > · No duplicate code to maintain
> > · If we ever allow the user to specify the names on UI rather in 
> > the certificate the driver doesn’t need to change.
> >
> > I think I saw Adam say something similar in a comment to the code.
> >
> > Thanks,
> > German
> >
> > From: Evgeny Fedoruk [mailto:evge...@radware.com]
> > Sent: Tuesday, July 15, 2014 7:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> > SubjectCommonName and/or SubjectAlternativeNames from X509
> >
> > Hi All,
> >
> > Since this issue came up from TLS capabilities RST doc review, I opened a 
> > ML thread for it to make the decision.
> > Currently, the document says:
> >
> > “
> > For SNI functionality, tenant will supply list of TLS containers in specific
> > Order.
> > In case when specific back-end is not able to support SNI capabilities,
> > its driver should throw an exception. The exception message should state
> > that this specific back-end (provider) does not support SNI capability.
> > The clear sign of listener's requirement for SNI capability is
> > a non-empty SNI container ids list.
> > However, reference implementation must support SNI capability.
> >
> > Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
> > from the certificate which will determine the hostname(s) the certificate
> > is associated with.
> >
> > The order

Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-16 Thread Paul Michali (pcm)
Do you have a repo with the code that is visible to the public?

What does the /etc/neutron/vpn_agent.ini look like?

Can you put the log output of the actual error messages seen?

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 16, 2014, at 2:43 PM, Julio Carlos Barrera Juez 
 wrote:

> I have been fighting with this for months. I want to develop a VPN Neutron plugin, 
> but it is almost impossible to figure out how to achieve it. This is a thread I 
> opened months ago in which Paul Michali helped me a lot: 
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html
> 
> I want to know the minimum requirements to develop a device driver and a 
> service driver for a VPN Neutron plugin. I tried adding an empty device 
> driver and I got this error:
> 
> DeviceDriverImportError: Can not load driver 
> :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver
> 
> Both the Python file and the class exist, but the implementation is empty. What is 
> the problem? What do I need to include in this file/class to avoid this error?
> 
> Thank you.
> 
>   
> Julio C. Barrera Juez  
> Office phone: (+34) 93 357 99 27 (ext. 527)
> Office mobile phone: (+34) 625 66 77 26
> Distributed Applications and Networks Area (DANA)
> i2CAT Foundation, Barcelona
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-16 Thread Nachi Ueno
QQ: do you have __init__.py in the directory?
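
If the package imports fine, a minimal skeleton that should at least import
and instantiate looks roughly like this (method names mirror the in-tree ipsec
device drivers; the constructor signature is an assumption about what the VPN
agent passes in):

    # neutron/services/vpn/junos_vpnaas/device_drivers/fake_device_driver.py

    class FakeDeviceDriver(object):
        """Do-nothing device driver, just enough to satisfy the loader."""

        def __init__(self, vpn_service, host):
            self.vpn_service = vpn_service
            self.host = host

        def sync(self, context, processes):
            pass

        def create_router(self, process_id):
            pass

        def destroy_router(self, process_id):
            pass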


2014-07-16 11:43 GMT-07:00 Julio Carlos Barrera Juez <
juliocarlos.barr...@i2cat.net>:

> I have been fighting with this for months. I want to develop a VPN Neutron
> plugin, but it is almost impossible to figure out how to achieve it. This is a
> thread I opened months ago in which Paul Michali helped me a lot:
> http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html
>
> I want to know the minimum requirements to develop a device driver and a
> service driver for a VPN Neutron plugin. I tried adding an empty device
> driver and I got this error:
>
> DeviceDriverImportError: Can not load driver
> :neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver
>
> Both the Python file and the class exist, but the implementation is empty. What
> is the problem? What do I need to include in this file/class to avoid this
> error?
>
> Thank you.
>
>     
> Julio C. Barrera Juez
> 
> Office phone: (+34) 93 357 99 27 (ext. 527)
> Office mobile phone: (+34) 625 66 77 26
> Distributed Applications and Networks Area (DANA)
> i2CAT Foundation, Barcelona
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] minimal device driver for VPN

2014-07-16 Thread Julio Carlos Barrera Juez
I have been fighting with this for months. I want to develop a VPN Neutron plugin,
but it is almost impossible to figure out how to achieve it. This is a thread
I opened months ago in which Paul Michali helped me a lot:
http://lists.openstack.org/pipermail/openstack-dev/2014-February/028389.html

I want to know the minimum requirements to develop a device driver and a
service driver for a VPN Neutron plugin. I tried adding an empty device
driver and I got this error:

DeviceDriverImportError: Can not load driver
:neutron.services.vpn.junos_vpnaas.device_drivers.fake_device_driver.FakeDeviceDriver

Both the Python file and the class exist, but the implementation is empty. What is
the problem? What do I need to include in this file/class to avoid this error?

Thank you.

    
Julio C. Barrera Juez

Office phone: (+34) 93 357 99 27 (ext. 527)
Office mobile phone: (+34) 625 66 77 26
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Location for common third-party libs?

2014-07-16 Thread Kevin Benton
I have filed a bug in Red Hat[1], however I'm not sure if it's in the right
place.

Ihar, can you verify that it's correct or move it to the appropriate
location?

1. https://bugzilla.redhat.com/show_bug.cgi?id=1120332


On Wed, Jul 9, 2014 at 3:29 AM, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> Reviving the old thread.
>
> On 17/06/14 11:23, Kevin Benton wrote:
> > Hi Ihar,
> >
> > What is the reason to break up neutron into so many packages? A
> > quick disk usage stat shows the plugins directory is currently
> > 3.4M. Is that considered to be too much space for a package, or was
> > it for another reason?
>
> I think the reasoning was that we don't want to pollute systems with
> unneeded files, and it seems to be easily achievable by splitting
> files into separate packages. It turned out it's not that easy now
> that we have dependencies between ML2 mechanisms and separate plugins.
>
> So I would be in favor of merging plugin packages back into
> python-neutron package. AFAIK there is still no bug for that in Red
> Hat Bugzilla, so please report one.
>
> >
> > Thanks, Kevin Benton
> >
> >
> > On Mon, Jun 16, 2014 at 3:37 PM, Ihar Hrachyshka
> >  wrote:
> >
> > On 17/06/14 00:10, Anita Kuno wrote:
>  On 06/16/2014 06:02 PM, Kevin Benton wrote:
> > Hello,
> >
> > In the Big Switch ML2 driver, we rely on quite a bit of
> > code from the Big Switch plugin. This works fine for
> > distributions that include the entire neutron code base.
> > However, some break apart the neutron code base into
> > separate packages. For example, in CentOS I can't use the
> > Big Switch ML2 driver with just ML2 installed because the
> > Big Switch plugin directory is gone.
> >
> > Is there somewhere where we can put common third party code
> > that will be safe from removal during packaging?
> >
> >
> > Hi,
> >
> > I'm a neutron packager for redhat based distros.
> >
> > AFAIK the main reason is to avoid installing lots of plugins to
> > systems that are not going to use them. No one really spent too
> > much time going file by file and determining internal
> > interdependencies.
> >
> > In your case, I would move those Big Switch specific ML2 files to the
> > Big Switch plugin package. I would suggest reporting the bug in Red
> > Hat Bugzilla. I think this won't get the highest priority, but once
> > packagers have spare cycles, this can be fixed.
> >
> > Cheers, /Ihar
> >>
> >> ___ OpenStack-dev
> >> mailing list OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >>
> >
> >
> >
> >
> > ___ OpenStack-dev
> > mailing list OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -BEGIN PGP SIGNATURE-
> Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
> Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
>
> iQEcBAEBCgAGBQJTvRmeAAoJEC5aWaUY1u57OSoIALVFA1a0CrIrUk/vc28I7245
> P3xe2WjV86txu71vtOVh0uSzh7oaGHkFOy1fpDDPp4httsALQepza8YziR2MsQHp
> 8fotY/fOvR2MRLNNvR+ekE+2n8U+pZW5vRchfOo3xKBGNeHs30Is3ZZHLyF6I7+T
> TrSR1qcHhkWgUF6HB6IcnRGHlNjhXJt1RBAjLVhbc4FuQAqy41ZxtFpi1QfIsgIl
> 7CmBJeZu+nTap+XvXqBqQslUbGdSeodbVh6uNMso6OP+P+3hKAwgXBhGD2Mc7Hed
> TMeKtY8BH5k1LAsadkMXgRm0L9f+vBPHeB5rzQgyLDBD6UpwH9bWryaDoDEJFYE=
> =M8GI
> -END PGP SIGNATURE-
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Chris Friesen

On 07/16/2014 11:59 AM, Monty Taylor wrote:

On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:



This is a really good point. As someone who has to deal with packaging
issues constantly, it is odd to me that libvirt is one of the few places
where we depend on upstream packaging. We constantly pull in new python
dependencies from pypi that are not packaged in ubuntu. If we had to
wait for packaging before merging the whole system would grind to a halt.

I think we should be updating our libvirt version more frequently by
installing from source or our own ppa instead of waiting for the ubuntu
team to package it.


Shrinking in terror from what I'm about to say ... but I actually agree
with this. There are SEVERAL logistical issues we'd need to sort, not
the least of which involve the actual mechanics of us doing that and
properly gating, etc. But I think that, like the python depends where we
tell distros what version we _need_ rather than using what version they
have, libvirt, qemu, ovs and maybe one or two other things are areas in
which we may want or need to have a strongish opinion.

I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
probably be flayed alive for it - but maybe I can put forward a
straw-man proposal on how this might work.


How would this work... Would you have them uninstall the distro-provided 
libvirt/qemu and replace them with newer ones?  (In which case what 
happens if the version desired by OpenStack has bugs in features that 
OpenStack doesn't use, but that some other software that the user wants 
to run does use?)


Or would you have OpenStack versions of them installed in parallel in an 
alternate location?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] os-net-config

2014-07-16 Thread Dan Prince
Hi TripleO!

I wanted to get the word out on progress with a new os-net-config tool
for TripleO. The spec (not yet approved) lives here:

https://review.openstack.org/#/c/97859/

We've also got a working implementation here:

https://github.com/dprince/os-net-config

You can see a WIP example of how it wires in here (more work to do on this
to fully support parity):

https://review.openstack.org/#/c/104054/1/elements/network-utils/bin/ensure-bridge,cm

The end goal is that we will be able to more flexibly control our host
level network settings in TripleO. Once it is fully integrated
os-net-config would provide a mechanism to drive more flexible
configurations (multiple bridges, bonding, etc.) via Heat metadata.

We are already in dire need of this sort of thing today because we can't
successfully deploy our CI overclouds without making manual changes to
our images (this is because we need 2 bridges and our heat templates
only support 1).

Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Monty Taylor
On 07/16/2014 07:27 PM, Vishvananda Ishaya wrote:
> 
> On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange  wrote:
> 
>> On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
>>
>>> I am worried that we would just regress to the current process because
>>> we have tried something similar to this previously and were forced to
>>> regress to the current process.
>>
>> IMHO the longer we wait between updating the gate to new versions
>> the bigger the problems we create for ourselves. eg we were switching
>> from 0.9.8 released Dec 2011, to  1.1.1 released Jun 2013, so we
>> were exposed to over 1 + 1/2 years worth of code churn in a single
>> event. The fact that we only hit a couple of bugs in that, is actually
>> remarkable given the amount of feature development that had gone into
>> libvirt in that time. If we had been tracking each intervening libvirt
>> release I expect the majority of updates would have had no ill effect
>> on us at all. For the couple of releases where there was a problem we
>> would not be forced to rollback to a version years older again, we'd
>> just drop back to the previous release at most 1 month older.
> 
> This is a really good point. As someone who has to deal with packaging
> issues constantly, it is odd to me that libvirt is one of the few places
> where we depend on upstream packaging. We constantly pull in new python
> dependencies from pypi that are not packaged in ubuntu. If we had to
> wait for packaging before merging the whole system would grind to a halt.
> 
> I think we should be updating our libvirt version more frequently by
> installing from source or our own ppa instead of waiting for the ubuntu
> team to package it.

Shrinking in terror from what I'm about to say ... but I actually agree
with this. There are SEVERAL logistical issues we'd need to sort, not
the least of which involve the actual mechanics of us doing that and
properly gating, etc. But I think that, like the python depends where we
tell distros what version we _need_ rather than using what version they
have, libvirt, qemu, ovs and maybe one or two other things are areas in
which we may want or need to have a strongish opinion.

I'll bring this up in the room tomorrow at the Infra/QA meetup, and will
probably be flayed alive for it - but maybe I can put forward a
straw-man proposal on how this might work.

Monty



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 12:55 PM, Roman Bogorodskiy <
rbogorods...@mirantis.com> wrote:

>   Eric Windisch wrote:
>
> > This thread highlights more deeply the problems for the FreeBSD folks.
> > First, I still disagree with the recommendation that they contribute to
> > libvirt. It's a classic example of creating two or more problems from
> one.
> > Once they have support in libvirt, how long before their code is in a
> > version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
> > requiring changes in libvirt, how long before those fixes are accepted by
> > Nova?
>
> Could you please elaborate why you disagree on the contributing patches
> to libvirt approach and what the alternative approach do you propose?
>

I don't necessarily disagree with contributing patches to libvirt. I
believe that the current system makes it difficult to perform quick,
iterative development. I wish to see this thread attempt to solve that
problem and reduce the barrier to getting stuff done.


> Also, could you please elaborate on what is 'version of libvirt
> acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
> X.Y to be deployed on FreeBSD?
>

This is precisely my point, that we need to support different versions of
libvirt and to test those versions. If we're going to support  different
versions of libvirt on FreeBSD, Ubuntu, and RedHat - those should be
tested, possibly as third-party options.

The primary testing path for libvirt upstream should be with the latest
stable release with a non-voting test against trunk. There might be value
in testing against a development snapshot as well, where we know there are
features we want in an unreleased version of libvirt but where we cannot
trust trunk to be stable enough for gate.


> Anyway, speaking about FreeBSD support, I assume we are actually talking
> about Bhyve support. I think it'd be good to break up the task and
> implement FreeBSD support for libvirt/Qemu first


 I believe Sean was referring to Bhyve support; this is how I interpreted
it.


-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-16 Thread Mike Spreitzer
Clint Byrum  wrote on 07/02/2014 01:54:49 PM:

> Excerpts from Qiming Teng's message of 2014-07-02 00:02:14 -0700:
> > Just some random thoughts below ...
> > 
> > On Tue, Jul 01, 2014 at 03:47:03PM -0400, Mike Spreitzer wrote:
> > > In AWS, an autoscaling group includes health maintenance
> > > functionality --- both an ability to detect basic forms of failures
> > > and an ability to react properly to failures detected by itself or by
> > > a load balancer.  What is the thinking about how to get this
> > > functionality in OpenStack? Since

> > 
> > We are prototyping a solution to this problem at IBM Research - China
> > lab.  The idea is to leverage oslo.messaging and ceilometer events for
> > instance (possibly other resource such as port, securitygroup ...)
> > failure detection and handling.
> > 
> 
> Hm.. perhaps you should be contributing some reviews here as you may
> have some real insight:
> 
> https://review.openstack.org/#/c/100012/
> 
> This sounds a lot like what we're working on for continuous convergence.

I noticed that health checking in AWS goes beyond convergence.  In AWS an 
ELB can be configured with a URL to ping, for application-level health 
checking.  And an ASG can simply be *told* the health of a member by a 
user's own external health system.  I think we should have analogous 
functionality in OpenStack.  Does that make sense to you?  If so, do you 
have any opinion on the right way to integrate, so that we do not have 
three completely independent health maintenance systems?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [Horizon] [UX] Wireframes for

2014-07-16 Thread Wan-yen Hsu
Hi Jarda,

> We are already prepared for multiple drivers. If you look at the Driver
> field, there is a dropdown menu from which you can choose a driver and
> based on the selection the additional information (like IP, user, passw)
> will be changed.

So, if "iLO + Virtual Media" is chosen in the dropdown menu, the Horizon node
management panel will display "iLO user" and "iLO Password" instead of
"IPMI user" and "IPMI Password"?  This is great!

>> Also, myself and a few folks are working on Ironic UEFI support and
>> we hope to land this feature in Juno (the spec is still in review, but
>> the feature is on the Ironic Juno Prioritized list). In order to add
>> the UEFI boot feature, a "Supported Boot Modes" field in the hardware
>> info is needed.  The possible values are "BIOS Only", "UEFI Only", and
>> "BIOS+UEFI".  We will need to work with you to add this field onto
>> the hardware info.

> There is no problem to accommodate this change in the UI once the
> back-end supports it. So if there is a desire to expose the feature in
> the UI, when there is already a working back-end solution, feel free to
> send a patch which adds that to the HW info - it's an easy addition and
> the UI is prepared for such types of expansions.

ok.  Thanks!

wanyen


> Hi Wan,
>
> thanks for the great notes. My response is inline:
>
> On 2014/15/07 23:19, Wan-yen Hsu wrote:
>> The "Register Nodes" panel uses "IPMI user" and "IPMI Password".
>> However, not all Ironic drivers use IPMI; for instance, some Ironic
>> drivers will use iLO or other BMC interfaces instead of IPMI.  I would
>> like to suggest changing "IPMI" to "BMC" or "IPMI/BMC" to accommodate
>> more Ironic drivers.  The "Driver" field will reflect what power
>> management interface (e.g., IPMI + PXE, or iLO + Virtual Media) is used
>> so it can be used to correlate the user and password fields.
>
> We are already prepared for multiple drivers. If you look at the Driver
> field, there is a dropdown menu from which you can choose a driver and
> based on the selection the additional information (like IP, user, passw)
> will be changed.
>
>> Also, myself and a few folks are working on Ironic UEFI support and
>> we hope to land this feature in Juno (the spec is still in review, but
>> the feature is on the Ironic Juno Prioritized list). In order to add
>> the UEFI boot feature, a "Supported Boot Modes" field in the hardware
>> info is needed.  The possible values are "BIOS Only", "UEFI Only", and
>> "BIOS+UEFI".  We will need to work with you to add this field onto
>> the hardware info.
>
> There is no problem to accommodate this change in the UI once the
> back-end supports it. So if there is a desire to expose the feature in
> the UI, when there is already a working back-end solution, feel free to
> send a patch which adds that to the HW info - it's an easy addition and
> the UI is prepared for such types of expansions.
>
>> Thanks!
>>
>> wanyen
>
> Cheers
>
> -- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SubjectAlternativeNames (SAN)

2014-07-16 Thread Vijay Venkatachalam

I think it is best not to mention SAN in the OpenStack 
TLS spec. It is expected that the backend should implement according to the 
SSL/SNI IETF spec.
Let’s leave the implementation/validation part to the driver.  For ex. 
NetScaler does not support SAN and the NetScaler driver could either throw an 
error if certs with SAN are used or ignore it.

Does anyone see a requirement for detailing this?


Thanks,
Vijay V.


From: Vijay Venkatachalam
Sent: Wednesday, July 16, 2014 8:54 AM
To: 'Samuel Bercovici'; 'OpenStack Development Mailing List (not for usage 
questions)'
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

Apologies for the delayed response.

I am OK with displaying the certificate's contents as part of the API; that
should not do any harm.

I think the discussion has to be split into 2 topics.


1.   Certificate conflict resolution. Meaning what is expected when 2 or 
more certificates become eligible during SSL negotiation

2.   SAN support

I will send out 2 separate mails on this.


From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise, extracting the information for view sake / validation 
would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza <carlos.ga...@rackspace.com> wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici <samu...@radware.com> wrote:

> Hi,
>
>
> Obtaining the domain name from the x509 is probably more of a 
> driver/backend/device capability, it would make sense to have a library that 
> could be used by anyone wishing to do so in their driver code.
You can do what ever you want in *your* driver. The code to extract this 
information will be apart of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.
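
For what it's worth, a minimal sketch of that extraction with pyOpenSSL
(Python 2 era; SANs come back as a "DNS:a, DNS:b" string, hence the parsing --
an illustration only, not the spec's final library choice):

    from OpenSSL import crypto

    def get_host_names(pem_data):
        """Return (common_name, [subject_alt_names]) from a PEM cert."""
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, pem_data)
        alt_names = []
        for i in range(cert.get_extension_count()):
            ext = cert.get_extension(i)
            if ext.get_short_name() == 'subjectAltName':
                # str(ext) looks like "DNS:www.abc.com, DNS:abc.com"
                alt_names = [part.split(':', 1)[1]
                             for part in str(ext).split(', ')
                             if part.startswith('DNS:')]
        return cert.get_subject().CN, alt_names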

Carlos D. Garza
>
> -Sam.
>
>
>
> From: Eichberger, German 
> [mailto:german.eichber...@hp.com]
> Sent: Tuesday, July 15, 2014 6:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi,
>
> My impression was that the frontend would extract the names and hand them to 
> the driver.  This has the following advantages:
>
> · We can be sure all drivers can extract the same names
> · No duplicate code to maintain
> · If we ever allow the user to specify the names on UI rather in the 
> certificate the driver doesn’t need to change.
>
> I think I saw Adam say something similar in a comment to the code.
>
> Thanks,
> German
>
> From: Evgeny Fedoruk [mailto:evge...@radware.com]
> Sent: Tuesday, July 15, 2014 7:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi All,
>
> Since this issue came up from TLS capabilities RST doc review, I opened a ML 
> thread for it to make the decision.
> Currently, the document says:
>
> “
> For SNI functionality, tenant will supply list of TLS containers in specific
> Order.
> In case when specific back-end is not able to support SNI capabilities,
> its driver should throw an exception. The exception message should state
> that this specific back-end (provider) does not support SNI capability.

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Vishvananda Ishaya

On Jul 16, 2014, at 8:28 AM, Daniel P. Berrange  wrote:

> On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
> 
>> I am worried that we would just regress to the current process because
>> we have tried something similar to this previously and were forced to
>> regress to the current process.
> 
> IMHO the longer we wait between updating the gate to new versions
> the bigger the problems we create for ourselves. eg we were switching
> from 0.9.8 released Dec 2011, to 1.1.1 released Jun 2013, so we
> were exposed to over 1 + 1/2 years worth of code churn in a single
> event. The fact that we only hit a couple of bugs in that, is actually
> remarkable given the amount of feature development that had gone into
> libvirt in that time. If we had been tracking each intervening libvirt
> release I expect the majority of updates would have had no ill effect
> on us at all. For the couple of releases where there was a problem we
> would not be forced to rollback to a version years older again, we'd
> just drop back to the previous release at most 1 month older.

This is a really good point. As someone who has to deal with packaging
issues constantly, it is odd to me that libvirt is one of the few places
where we depend on upstream packaging. We constantly pull in new python
dependencies from pypi that are not packaged in ubuntu. If we had to
wait for packaging before merging the whole system would grind to a halt.

I think we should be updating our libvirt version more frequently by
installing from source or our own PPA instead of waiting for the Ubuntu
team to package it.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 09:10 AM, Morgan Fainberg wrote:

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000




Another problem with port 5000 in Fedora, and probably more recent
versions of RHEL, is the selinux policy:
  
# sudo semanage port -l|grep 5000

...
commplex_main_port_t tcp 5000
commplex_main_port_t udp 5000
  
There is some service called "commplex" that has already "claimed" port
5000 for its use, at least as far as selinux goes.
  

Wouldn’t this also affect the eventlet-based Keystone using port 5000?


Yes, it should.


This is not an apache-specific related issue is it?


No, afaict.



—Morgan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Roman Bogorodskiy
  Eric Windisch wrote:

> This thread highlights more deeply the problems for the FreeBSD folks.
> First, I still disagree with the recommendation that they contribute to
> libvirt. It's a classic example of creating two or more problems from one.
> Once they have support in libvirt, how long before their code is in a
> version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
> requiring changes in libvirt, how long before those fixes are accepted by
> Nova?

Could you please elaborate why you disagree on the contributing patches
to libvirt approach and what the alternative approach do you propose?

Also, could you please elaborate on what is 'version of libvirt
acceptable to Nova'? Cannot we just say that e.g. Nova requires libvirt
X.Y to be deployed on FreeBSD?

Anyway, speaking about FreeBSD support, I assume we are actually talking
about bhyve support. I think it'd be good to break the task down and
implement FreeBSD support for libvirt/Qemu first.

The Qemu driver of libvirt has worked fine on FreeBSD for quite some time
already, and adding support for that in Nova will allow us to do all the
ground work before we move on to libvirt/bhyve support.

I'm planning to start with adding networking support. Unfortunately, it
seems I was too late with the spec for Juno though:

https://review.openstack.org/#/c/95328/

Roman Bogorodskiy


pgpkeNEjFWmYC.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Johannes Erdfelt
On Wed, Jul 16, 2014, Mark McLoughlin  wrote:
> No, there are features or code paths of the libvirt 1.2.5+ driver that
> aren't as well tested as the "class A" designation implies. And we have
> a proposal to make sure these aren't used by default:
> 
>   https://review.openstack.org/107119
> 
> i.e. to stray off the "class A" path, an operator has to opt into it by
> changing a configuration option that explains they will be enabling code
> paths which aren't yet tested upstream.

So that means the libvirt driver will be a mix of tested and untested
features, but only the tested code paths will be enabled by default?

The gate not only tests code as it gets merged, it tests to make sure it
doesn't get broken in the future by other changes.

What happens when it comes time to bump the default version_cap in the
future? It looks like there could potentially be a scramble to fix code
that has been merged but doesn't work now that it's being tested. Which
potentially further slows down development since now unrelated code
needs to be fixed.

This sounds like we're actively weakening the gate we currently have.

> However, not everything is tested now, nor is the tests we have
> foolproof. When you consider the number of configuration options we
> have, the supported distros, the ranges of library versions we claim to
> support, etc., etc. I don't think we can ever get to an "everything is
> tested" point.
> 
> In the absence of that, I think we should aim to be more clear what *is*
> tested. The config option I suggest does that, which is a big part of
> its merit IMHO.

I like the sound of this especially since it's not clear right now at
all.

JE
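
For concreteness, the opt-in being discussed is roughly a config option of
this shape (my reading of review 107119; the option name and default could
change before anything merges):

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('version_cap',
                   default='1.2.2',  # newest libvirt exercised by the gate
                   help='Limit the libvirt features Nova will use to those '
                        'available in this version until newer versions '
                        'are covered by upstream CI.'),
    ], group='libvirt')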


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Cinder coverage

2014-07-16 Thread Dan Prince
Hi TripleO!

It would appear that we have no coverage in devtest which ensures that
Cinder consistently works in the overcloud. As such the TripleO Cinder
elements are often broken (as of today I can't fully use lio or tgt w/
upstream TripleO elements).

How do people feel about swapping out our single 'nova boot' command for one
that boots from a volume? Something like this:

 https://review.openstack.org/#/c/107437

There is a bit of tradeoff here in that the conversion will take a bit
of time (qemu-img has to run). Also our boot code path won't be exactly
the same as booting from an image.
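
For anyone wanting to poke at this locally, the boot-from-volume call looks
roughly like the following via python-novaclient (illustrative IDs and
credentials; Juno-era client namespace assumed):

    from novaclient.v1_1 import client

    IMAGE_ID = 'REPLACE-WITH-IMAGE-UUID'   # illustrative
    FLAVOR_ID = 'REPLACE-WITH-FLAVOR-ID'   # illustrative

    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')
    # Ask Nova to convert the image into a Cinder volume and boot from it,
    # exercising the Cinder code path devtest currently skips.
    bdm = [{'uuid': IMAGE_ID,
            'source_type': 'image',
            'destination_type': 'volume',
            'volume_size': 2,
            'boot_index': 0,
            'delete_on_termination': True}]
    server = nova.servers.create('demo', image=None, flavor=FLAVOR_ID,
                                 block_device_mapping_v2=bdm)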

Long term we want to run Tempest but due to resource constraints we
can't do that today. Until then this sort of deep systems test (running
a command that exercises more code) might serve us well and give us the
Cinder coverage we need.

Thoughts?

I would also like to split the test configurations so that we use
cinder-lio for some (cinder-tgt is our existing default in devtest).

Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-16 Thread Malini Kamalambal

On 7/16/14 4:43 AM, "Flavio Percoco"  wrote:

>On 07/15/2014 06:20 PM, Kurt Griffiths wrote:
>> Hi folks, we've been talking about this in IRC, but I wanted to bring it
>> to the ML to get broader feedback and make sure everyone is aware. We'd
>> like to change our meeting time to better accommodate folks that live
>> around the globe. Proposals:
>> 
>> Tuesdays, 1900 UTC
>> Wednesdays, 2000 UTC
>> Wednesdays, 2100 UTC
>> 
>> I believe these time slots are free, based
>> on: https://wiki.openstack.org/wiki/Meetings
>> 
>> Please respond with ONE of the following:
>> 
>> A. None of these times work for me
>> B. An ordered list of the above times, by preference
>> C. I am a robot
>
>I don't like the idea of switching days :/
>
>Since the reason we're using Wednesday is because we don't want the
>meeting to overlap with the TC and projects meeting, what if we change
>the day of both meeting times in order to keep them on the same day (and
>perhaps also channel) but on different times?
>
>I think changing day and time will be more confusing than just changing
>the time.

If we can find an agreeable time on a non-Tuesday, I'll take ownership of
pinging & getting you to #openstack-meeting-alt ;)

>From a quick look, #openstack-meeting-alt is free on Wednesdays on both
>times: 15 UTC and 21 UTC. Does this sound like a good day/time/idea to
>folks?

1500 UTC might still be too early for our NZ folks - I thought we wanted
to have the meeting at/after 1900 UTC.
That being said, I will be able to attend only part of the meeting any
time after 1900 UTC - unless it is @ Thursday 1900 UTC
Sorry for making this a puzzle :(

>
>
>
>Cheers,
>Flavio
>
>
>-- 
>@flaper87
>Flavio Percoco
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread Kevin Benton
This bug is also affecting Ryu and the Big Switch CI.
There is a patch to bump the version requirement for alembic linked in the
bug report that should fix it. It we can't get that merged we may have to
revert the healing patch.

https://bugs.launchpad.net/bugs/1342507
On Jul 16, 2014 9:27 AM, "trinath.soman...@freescale.com" <
trinath.soman...@freescale.com> wrote:

>  Hi-
>
>
>
> With the Neutron update to my CI, I get the following error while
> configuring Neutron in devstack.
>
>
>
> 2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected
> server default on column 'poolmonitorassociations.status'
>
> 2014-07-16 16:12:06.411 | INFO
> [neutron.db.migration.alembic_migrations.heal_script] Detected added
> foreign key for column 'id' on table u'ml2_brocadeports'
>
> 2014-07-16 16:12:14.853 | Traceback (most recent call last):
>
> 2014-07-16 16:12:14.853 |   File "/usr/local/bin/neutron-db-manage", line
> 10, in <module>
>
> 2014-07-16 16:12:14.853 | sys.exit(main())
>
> 2014-07-16 16:12:14.854 |   File
> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 171, in main
>
> 2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)
>
> 2014-07-16 16:12:14.854 |   File
> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 85, in
> do_upgrade_downgrade
>
> 2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision,
> sql=CONF.command.sql)
>
> 2014-07-16 16:12:14.854 |   File
> "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 63, in
> do_alembic_command
>
> 2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args,
> **kwargs)
>
> 2014-07-16 16:12:14.854 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in
> upgrade
>
> 2014-07-16 16:12:14.854 | script.run_env()
>
> 2014-07-16 16:12:14.854 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in
> run_env
>
> 2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
>
> 2014-07-16 16:12:14.854 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 205, in
> load_python_file
>
> 2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
>
> 2014-07-16 16:12:14.854 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in
> load_module_py
>
> 2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
>
> 2014-07-16 16:12:14.854 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py",
> line 106, in <module>
>
> 2014-07-16 16:12:14.854 | run_migrations_online()
>
> 2014-07-16 16:12:14.855 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py",
> line 90, in run_migrations_online
>
> 2014-07-16 16:12:14.855 | options=build_options())
>
> 2014-07-16 16:12:14.855 |   File "<string>", line 7, in run_migrations
>
> 2014-07-16 16:12:14.855 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 681,
> in run_migrations
>
> 2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
>
> 2014-07-16 16:12:14.855 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in
> run_migrations
>
> 2014-07-16 16:12:14.855 | change(**kw)
>
> 2014-07-16 16:12:14.856 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py",
> line 32, in upgrade
>
> 2014-07-16 16:12:14.856 | heal_script.heal()
>
> 2014-07-16 16:12:14.856 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
> line 78, in heal
>
> 2014-07-16 16:12:14.856 | execute_alembic_command(el)
>
> 2014-07-16 16:12:14.856 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
> line 93, in execute_alembic_command
>
> 2014-07-16 16:12:14.856 | parse_modify_command(command)
>
> 2014-07-16 16:12:14.856 |   File
> "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
> line 126, in parse_modify_command
>
> 2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)
>
> 2014-07-16 16:12:14.856 |   File "<string>", line 7, in alter_column
>
> 2014-07-16 16:12:14.856 |   File "<string>", line 1, in <lambda>
>
> 2014-07-16 16:12:14.856 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 322, in go
>
> 2014-07-16 16:12:14.857 | return fn(*arg, **kw)
>
> 2014-07-16 16:12:14.857 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 300,
> in alter_column
>
> 2014-07-16 16:12:14.857 | existing_autoincrement=existing_autoincrement
>
> 2014-07-16 16:12:14.857 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 42, in
> alter_column
>
> 2014-07-16 16:12:14.857 | else existing_autoincrement
>
> 2014-07-16 16:12:14.857 |   File
> "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in
> _exec
>
> 2014-07-16 16:12:14.857 | conn.execute(construct, *multiparams, **params)

[openstack-dev] [Neutron][CI] DB migration error

2014-07-16 Thread trinath.soman...@freescale.com
Hi-

With the Neutron update to my CI, I get the following error while configuring 
Neutron in devstack.

2014-07-16 16:12:06.349 | INFO  [alembic.autogenerate.compare] Detected server 
default on column 'poolmonitorassociations.status'
2014-07-16 16:12:06.411 | INFO  
[neutron.db.migration.alembic_migrations.heal_script] Detected added foreign 
key for column 'id' on table u'ml2_brocadeports'
2014-07-16 16:12:14.853 | Traceback (most recent call last):
2014-07-16 16:12:14.853 |   File "/usr/local/bin/neutron-db-manage", line 10, 
in <module>
2014-07-16 16:12:14.853 | sys.exit(main())
2014-07-16 16:12:14.854 |   File 
"/opt/stack/new/neutron/neutron/db/migration/cli.py", line 171, in main
2014-07-16 16:12:14.854 | CONF.command.func(config, CONF.command.name)
2014-07-16 16:12:14.854 |   File 
"/opt/stack/new/neutron/neutron/db/migration/cli.py", line 85, in 
do_upgrade_downgrade
2014-07-16 16:12:14.854 | do_alembic_command(config, cmd, revision, 
sql=CONF.command.sql)
2014-07-16 16:12:14.854 |   File 
"/opt/stack/new/neutron/neutron/db/migration/cli.py", line 63, in 
do_alembic_command
2014-07-16 16:12:14.854 | getattr(alembic_command, cmd)(config, *args, 
**kwargs)
2014-07-16 16:12:14.854 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in 
upgrade
2014-07-16 16:12:14.854 | script.run_env()
2014-07-16 16:12:14.854 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in run_env
2014-07-16 16:12:14.854 | util.load_python_file(self.dir, 'env.py')
2014-07-16 16:12:14.854 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 205, in 
load_python_file
2014-07-16 16:12:14.854 | module = load_module_py(module_id, path)
2014-07-16 16:12:14.854 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 58, in 
load_module_py
2014-07-16 16:12:14.854 | mod = imp.load_source(module_id, path, fp)
2014-07-16 16:12:14.854 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 
106, in <module>
2014-07-16 16:12:14.854 | run_migrations_online()
2014-07-16 16:12:14.855 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 
90, in run_migrations_online
2014-07-16 16:12:14.855 | options=build_options())
2014-07-16 16:12:14.855 |   File "<string>", line 7, in run_migrations
2014-07-16 16:12:14.855 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 681, in 
run_migrations
2014-07-16 16:12:14.855 | self.get_context().run_migrations(**kw)
2014-07-16 16:12:14.855 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in 
run_migrations
2014-07-16 16:12:14.855 | change(**kw)
2014-07-16 16:12:14.856 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py",
 line 32, in upgrade
2014-07-16 16:12:14.856 | heal_script.heal()
2014-07-16 16:12:14.856 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
 line 78, in heal
2014-07-16 16:12:14.856 | execute_alembic_command(el)
2014-07-16 16:12:14.856 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
 line 93, in execute_alembic_command
2014-07-16 16:12:14.856 | parse_modify_command(command)
2014-07-16 16:12:14.856 |   File 
"/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/heal_script.py",
 line 126, in parse_modify_command
2014-07-16 16:12:14.856 | op.alter_column(table, column, **kwargs)
2014-07-16 16:12:14.856 |   File "<string>", line 7, in alter_column
2014-07-16 16:12:14.856 |   File "<string>", line 1, in <lambda>
2014-07-16 16:12:14.856 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 322, in go
2014-07-16 16:12:14.857 | return fn(*arg, **kw)
2014-07-16 16:12:14.857 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 300, in 
alter_column
2014-07-16 16:12:14.857 | existing_autoincrement=existing_autoincrement
2014-07-16 16:12:14.857 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/ddl/mysql.py", line 42, in 
alter_column
2014-07-16 16:12:14.857 | else existing_autoincrement
2014-07-16 16:12:14.857 |   File 
"/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
2014-07-16 16:12:14.857 | conn.execute(construct, *multiparams, **params)
2014-07-16 16:12:14.857 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 727, 
in execute
2014-07-16 16:12:14.857 | return meth(self, multiparams, params)
2014-07-16 16:12:14.858 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 67, in 
_execute_on_connection
2014-07-16 16:12:14.858 | return connection._execute_ddl(self, multiparams, 
params)
2014-07-16 16:12:14.858 |   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 775, 
in _execute_ddl
2014-07-16 16:12:14.858 | compiled = ddl.

Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-16 Thread Sandy Walsh
On 7/11/2014 6:08 AM, Chris Dent wrote:
> On Fri, 11 Jul 2014, Lucas Alvares Gomes wrote:
>
>> The data format that Ironic will send was part of the spec proposed
>> and could have been reviewed. I think there's still time to change it
>> tho, if you have a better format talk to Haomeng which is the guys
>> responsible for that work in Ironic and see if he can change it (We
>> can put up a following patch to fix the spec with the new format as
>> well) . But we need to do this ASAP because we want to get it landed
>> in Ironic soon.
> It was only after doing the work that I realized how it might be an
> example for the sake of this discussion. As the architecure of
> Ceilometer currently exist there still needs to be some measure of
> custom code, even if the notifications are as I described them.
>
> However, if we want to take this opportunity to move some of the
> smarts from Ceilomer into the Ironic code then the paste that I created
> might be a guide to make it possible:
>
> http://paste.openstack.org/show/86071/
>
> However on that however, if there's some chance that a large change could
> happen, it might be better to wait, I don't know.
>

Just to give a sense of what we're dealing with, a while back I wrote a
little script to dump the schema of all events StackTach collected from
Nova. The value fields are replaced with types (or a class name if the
value was a class object).

http://paste.openstack.org/show/54140/
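
The gist of such a script is tiny - something along these lines (a sketch,
not the actual StackTach code):

    def schema_of(value):
        # Recursively replace leaf values with their type names so that
        # structurally identical events collapse to a single schema.
        if isinstance(value, dict):
            return dict((k, schema_of(v)) for k, v in value.items())
        if isinstance(value, list):
            return [schema_of(v) for v in value]
        return type(value).__name__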




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS capability - Certificate conflict resolution

2014-07-16 Thread Vijay Venkatachalam

Do you know if the SSL/SNI IETF spec gives details about conflict resolution?
I am assuming not.

Because of this ambiguity each backend employs its own mechanism to resolve 
conflicts.

There are 3 choices now:
1.   The LBaaS extension does not allow conflicting certificates to be
bound, enforced through validation
2.   Allow each backend's conflict resolution mechanism to get into the spec
3.   Specify nothing in the spec, introduce no mechanism, and let
the driver deal with it.

Both HAProxy and Radware use configuration as a mechanism to resolve conflicts:
Radware uses order, while HAProxy uses externally specified DNS names.
The NetScaler implementation uses a best-possible-match algorithm.

For example, let’s say 3 certs are bound to the same endpoint with the following SNs:
www.finance.abc.com
*.finance.abc.com
*.*.abc.com

If the host request is payroll.finance.abc.com, we shall use *.finance.abc.com.
If it is payroll.engg.abc.com, we shall use *.*.abc.com.

NetScaler won’t allow 2 certs to have the same SN.
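
A minimal sketch of that best-possible-match selection (my own illustration,
not NetScaler code; '*' matches exactly one DNS label, as in the example
above):

    def best_match(hostname, subject_names):
        def matches(pattern):
            p_labels = pattern.split('.')
            h_labels = hostname.split('.')
            return (len(p_labels) == len(h_labels) and
                    all(p == '*' or p == h
                        for p, h in zip(p_labels, h_labels)))
        candidates = [s for s in subject_names if matches(s)]
        # Fewest wildcard labels == most specific match wins.
        return (min(candidates, key=lambda s: s.count('*'))
                if candidates else None)

    # best_match('payroll.finance.abc.com',
    #            ['www.finance.abc.com', '*.finance.abc.com', '*.*.abc.com'])
    # -> '*.finance.abc.com'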

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise, extracting the information for view sake / validation 
would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza <carlos.ga...@rackspace.com> wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici <samu...@radware.com> wrote:

> Hi,
>
>
> Obtaining the domain name from the x509 is probably more of a 
> driver/backend/device capability, it would make sense to have a library that 
> could be used by anyone wishing to do so in their driver code.
You can do what ever you want in *your* driver. The code to extract this 
information will be apart of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza
>
> -Sam.
>
>
>
> From: Eichberger, German 
> [mailto:german.eichber...@hp.com]
> Sent: Tuesday, July 15, 2014 6:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi,
>
> My impression was that the frontend would extract the names and hand them to 
> the driver.  This has the following advantages:
>
> · We can be sure all drivers can extract the same names
> · No duplicate code to maintain
> · If we ever allow the user to specify the names on UI rather in the 
> certificate the driver doesn’t need to change.
>
> I think I saw Adam say something similar in a comment to the code.
>
> Thanks,
> German
>
> From: Evgeny Fedoruk [mailto:evge...@radware.com]
> Sent: Tuesday, July 15, 2014 7:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi All,
>
> Since this issue came up from TLS capabilities RST doc review, I opened a ML 
> thread for it to make the decision.
> Currently, the document says:
>
> “
> For SNI functionality, tenant will supply list of TLS containers in specific
> Order.
> In case when specific back-end is not able to support SNI capabilities,
> its driver should throw an exception. The exception message should state
> that this specific back-end (provider) does not support SNI capability.
> The clear sign of listener's requirement for SNI capability is
> a non-empty SNI container ids list.

Re: [openstack-dev] [infra] "recheck no bug" and comment

2014-07-16 Thread Derek Higgins
On 16/07/14 14:48, Steve Martinelli wrote:
> What are the benefits of doing this over looking at the existing
> rechecks, and if not there opening a bug and rechecking the new bug?

I agree we should be using a bug number (or opening one when needed); the
example in the original email should have included a bug number. But now
that the topic has come up:

I think this would serve as a good way to provide a little explanation
as to why somebody has not provided a bug number e.g.

recheck no bug
   zuul was restarted

Derek

> 
> 
> Regards,
> 
> Steve Martinelli
> Software Developer - OpenStack
> Keystone Core Member
> 
> Phone: 1-905-413-2851
> E-mail: steve...@ca.ibm.com
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
> 
> 
> 
> 
> 
> 
> From: Alexis Lee 
> To: "OpenStack Development Mailing List (not for usage
> questions)",
> Date: 07/16/2014 09:19 AM
> Subject: [openstack-dev] [infra] "recheck no bug" and comment
> 
> 
> 
> 
> Hello,
> 
> What do you think about allowing some text after the words "recheck no
> bug"? EG to include a snippet from the log showing the failure has been
> at least briefly investigated before attempting a recheck. EG:
> 
>  recheck no bug
> 
>  Compute node failed to spawn:
> 
>2014-07-15 12:18:09.936 | 3f1e7f32-812e-48c8-a83c-2615c4451fa6 |
>  overcloud-NovaCompute0-zahdxwar7zlh | ERROR  | - | NOSTATE | |
> 
> 
> Alexis
> -- 
> Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Kashyap Chamarthy
On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:

[. . .]

> Anyway, discussion welcomed. My primary concern right now isn't actually
> where we set the bar, but that we set the same bar for everyone.

As someone who tries to test Nova w/ upstream libvirt/QEMU, here are a couple
of points on why I disagree with your above comments:


  - From time to time I find myself frustrated due to older versions of
    libvirt on CI infra systems: I try to investigate a bug, and 2 hours
    into debugging it turns out that the CI system is using a very old
    libvirt, which, alas, is not in my control. Consequence: the bug
    needlessly gets bumped up in priority for investigation, while it's
    already solved in an existing upstream release, just waiting to be
    picked up by CI infra.

  - Also, as a frequent tester of libvirt upstream, and a participant
    in debugging the recent Nova snapshots issue mentioned here, the
    comment[1] (by Daniel Berrange) debunks the illusion of "the
    required version of libvirt should have been released for at least
    30 days" very convincingly in crystal clear language.

  - FWIW, I feel the libvirt version cap[2] is a fine idea to alleviate
this.

[1] https://review.openstack.org/#/c/103923/ (Comment:Jul 14 9:24 PM)
  -
  "The kind of new features we're depending on in Nova (looking at specs
  proposed for Juno) are not the kind of features that users in any distro
  are liable to test themselves, outside of the context of Nova (or
  perhaps oVirt) applications. eg Users in a distro aren't likely to
  seriously test the NUMA/Hugepages stuff in libvirt until it is part of
  Nova and that Nova release is in their distro, which creates a
  chicken+egg problem wrt your proposal. In addition I have not seen any
  evidence of significant libvirt testing by the distro maintainers
  themselves either, except for the enterprise distros and we if we wait
  for enterprise distros to pick up a new libvirt we'd be talking 1 year+
  of delay. Finally if just having it in a distro is your benchmark,
  then this is satisfied by Fedora rawhide inclusion, but there's
  basically no user testing of that. So if you instead set the
  benchmark to be a released distro, then saying this is a 1 month
  delay is rather misleading, because distros only release once every
  6 months, so you'd really be talking about a 7 month delay on using
  new features. For all these reasons, tieing Nova acceptance to
  distro inclusion of libvirt is a fundamentally flawed idea that does
  not achieve what it purports to achieve & is detrimental to Nova.
  
  I think the key problem here is that our testing is inadequate and we
  need to address that aspect of it rather than crippling our development
  process."
  -

 [2] https://review.openstack.org/#/c/107119/ -- libvirt: add version
 cap tied to gate CI testing

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 08:29:26AM -0700, Dan Smith wrote:
> >> Based on these experiences, libvirt version differences seem to be as
> >> substantial as major hypervisor differences.
> > 
> > I think that is a pretty dubious conclusion to draw from just a
> > couple of bugs. The reason they really caused pain is that because
> > the CI test system was based on old version for too long.
> 
> I think the conclusion being made is that "libvirt versions two years
> apart are effectively like different major versions of a hypervisor." I
> don't think that's wrong.
> 
> > That is rather misleading statement you're making there. Libvirt is
> > in fact held to *higher* standards than xen/vmware/hyperv because it
> > is actually gating all commits. The 3rd party CI systems can be
> > broken for days, weeks and we still happily accept code for those
> > virt. drivers.
> 
> Right, and we've talked about raising that bar as well, by tracking
> their status more closely, automatically -2'ing patches that touch the
> subdirectory but don't get a passing vote from the associated CI system,
> etc.
> 
> You're definitely right that libvirt is held to a higher bar in terms of
> it being required to pass tests before we can even mechanically land a
> patch. However, there is a lot of function in the driver that we don't
> test right now because of the version we're tied to in the gate nodes.
> It's actually *easier* for a 3rd party system like vmware to roll their
> environment and enable tests of newer features, so I don't think that
> this requirement would cause existing 3rd party CI systems any trouble.
> 
> > AFAIK there has never been any statement that every feature added
> > to xen/vmware/hyperv must be tested by the 3rd party CI system.
> 
> On almost every spec that doesn't already call it out, a reviewer asks
> "how are you going to test this beyond just unit tests?" I think the
> assumption and feeling among most reviewers is that new features,
> especially that depend on new things (be it storage drivers, hypervisor
> versions, etc) are concerned about approving without testing.

Expecting new functionality to have testing coverage in the common
case is entirely reasonable. What I disagree with is the proposal
to say it is mandatory, when the current CI system is not able to
test it for any given reason. In some cases it might be reasonable
to expect the contributor to setup 3rd party CI, but we absolutely
cannot make that a fixed rule or we'll kill contributions from
people who are not backed by vendors in a position to spend the
significant resource it takes to setup & maintain CI.  IMHO the
burden is on the maintainer of the CI to ensure it is able to
follow the needs of the contributors. ie if the feature needs a
newer libvirt version in order to test with, the CI maintainer(s)
should deal with that. We should not turn away the contributor
for a problem that is outside their control.

> > AFAIK the requirement for 3rd party CI is merely that it has to exist,
> > running some arbitrary version of the hypervisor in question. We've
> > not said that 3rd party CI has to be covering every version or every
> > feature, as is trying to be pushed on libvirt here.
> 
> The requirement in the past has been that it has to exist. At the last
> summit, we had a discussion about how to raise the bar on what we
> currently have. We made a lot of progress getting those systems
> established (only because we had a requirement, by the way) in the last
> cycle. Going forward, we need to have new levels of expectations in
> terms of coverage and reliability of those things, IMHO.

IMHO we need to maintain a balance between ensuring code quality
and being welcoming & accepting to new contributors. 

New features have a certain value $NNN to the project & our users.
The lack of CI testing does not automatically imply that the value
of that work is erased to $0 or negative $MMM. Of course the lack
of CI will create uncertainty in how valuable it is, and potentially
imply costs for us if we have to deal with resolving bugs later.
We must be careful not to overly obsess on the problems of work
that might have bugs, to the detriment of all the many submissions
that work well.

We need to take a pragmatic view of this tradeoff based on the risk
implied by the new feature. If the new work is impacting existing
functional codepaths then this clearly exposes existing users to
risk of regressions, so if that codepath is not tested this is
something to be very wary of. If the new work is adding new code
paths that existing deployments wouldn't exercise unless they 
explicitly opt in to the feature, the risk is significantly lower.
The existence of unit tests will also serve to limit the risk in
many, but not all, situations. If something is not CI tested then
I'd also expect it to get greater attention during review, with
the reviewers actually testing it functionally themselves as well
as code inspection. Finally we should also have some

Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting SubjectCommonName and/or SubjectAlternativeNames from X509

2014-07-16 Thread Vijay Venkatachalam
Apologies for the delayed response.

I am OK with displaying the certificate's contents as part of the API; that
should not do any harm.

I think the discussion has to be split into 2 topics.


1.   Certificate conflict resolution. Meaning what is expected when 2 or 
more certificates become eligible during SSL negotiation

2.   SAN support

I will send out 2 separate mails on this.


From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Tuesday, July 15, 2014 11:52 PM
To: OpenStack Development Mailing List (not for usage questions); Vijay 
Venkatachalam
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

OK.

Let me be more precise, extracting the information for view sake / validation 
would be good.
Providing values that are different than what is in the x509 is what I am 
opposed to.

+1 for Carlos on the library and that it should be ubiquitously used.

I will wait for Vijay to speak for himself in this regard…

-Sam.


From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, July 15, 2014 8:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
SubjectCommonName and/or SubjectAlternativeNames from X509

+1 to German's and  Carlos' comments.

It's also worth pointing out that some UIs will definitely want to show SAN 
information and the like, so either having this available as part of the API, 
or as a standard library we write which then gets used by multiple drivers is 
going to be necessary.

If we're extracting the Subject Common Name in any place in the code then we 
also need to be extracting the Subject Alternative Names at the same place. 
From the perspective of the SNI standard, there's no difference in how these 
fields should be treated, and if we were to treat SANs differently then we're 
both breaking the standard and setting a bad precedent.

Stephen

On Tue, Jul 15, 2014 at 9:35 AM, Carlos Garza <carlos.ga...@rackspace.com> wrote:

On Jul 15, 2014, at 10:55 AM, Samuel Bercovici <samu...@radware.com> wrote:

> Hi,
>
>
> Obtaining the domain name from the x509 is probably more of a 
> driver/backend/device capability, it would make sense to have a library that 
> could be used by anyone wishing to do so in their driver code.
You can do what ever you want in *your* driver. The code to extract this 
information will be apart of the API and needs to be mentioned in the spec now. 
PyOpenSSL with PyASN1 are the most likely candidates.

Carlos D. Garza
>
> -Sam.
>
>
>
> From: Eichberger, German 
> [mailto:german.eichber...@hp.com]
> Sent: Tuesday, July 15, 2014 6:43 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - 
> Extracting SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi,
>
> My impression was that the frontend would extract the names and hand them to 
> the driver.  This has the following advantages:
>
> · We can be sure all drivers can extract the same names
> · No duplicate code to maintain
> · If we ever allow the user to specify the names on UI rather in the 
> certificate the driver doesn’t need to change.
>
> I think I saw Adam say something similar in a comment to the code.
>
> Thanks,
> German
>
> From: Evgeny Fedoruk [mailto:evge...@radware.com]
> Sent: Tuesday, July 15, 2014 7:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - SNI - Extracting 
> SubjectCommonName and/or SubjectAlternativeNames from X509
>
> Hi All,
>
> Since this issue came up from TLS capabilities RST doc review, I opened a ML 
> thread for it to make the decision.
> Currently, the document says:
>
> “
> For SNI functionality, tenant will supply list of TLS containers in specific
> Order.
> In case when specific back-end is not able to support SNI capabilities,
> its driver should throw an exception. The exception message should state
> that this specific back-end (provider) does not support SNI capability.
> The clear sign of listener's requirement for SNI capability is
> a non-empty SNI container ids list.
> However, reference implementation must support SNI capability.
>
> Specific back-end code may retrieve SubjectCommonName and/or altSubjectNames
> from the certificate which will determine the hostname(s) the certificate
> is associated with.
>
> The order of SNI containers list may be used by specific back-end code,
> like Radware's, for specifying priorities among certificates.
> In case when two or more uploaded certificates are valid for the same DNS name
> and the tenant has specific requirements around which one wins this collision,
> certificate ordering provides a mechanism to define which cert wins in 

Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-07-16 Thread Adrian Otto
Additional Update:

Two important additions:

1) No Formal Thursday Meetings.

We are eliminating our plans to meet formally on the 31st. You are still 
welcome to meet informally. We want to keep these discussions as productive as 
possible, and want to avoid attendee burnout. My deepest apologies to those who 
have made travel plans around this. See me if there are financial 
considerations to resolve.

2) Containers Team Registration

To better manage attendance expectations, register for the event that you will 
attend as a primary. For those attending primarily for Containers, register 
here:

https://www.eventbrite.com/e/openstack-containers-team-juno-mid-cycle-developer-meetup-tickets-12304951441

If you are registering for Nova, use this link:

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

If you are already registered for the Nova Meetup, but will be attending in the 
Containers Team Meetup as the primary, you can return your tickets for Nova as 
long as you have a Containers Team Meetup ticket. That will allow for a more 
accurate count, and make sure that all the Nova devs who need to attend can.

Logistics details:

https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint

Event Etherpad:

https://etherpad.openstack.org/p/juno-containers-sprint

Thanks,

Adrian


On Jul 11, 2014, at 3:31 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:

CORRECTION: This event happens July 28-31. Sorry for any confusion! Corrected 
Announcement:

Containers Team,

We have decided to hold our Mid-Cycle meetup along with the Nova Meetup in 
Beaverton, Oregon on July 28-31. The Nova Meetup is scheduled for July 28-30.

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

Those of us interested in Containers topic will use one of the breakout rooms 
generously offered by Intel. We will also stay on Thursday to focus on 
implementation plans and to engage with those members of the Nova Team who will 
be otherwise occupied on July 28-30, and will have a chance to focus entirely 
on Containers on the 31st.

Please take a moment now to register using the link above, and I look forward 
to seeing you there.

Thanks,

Adrian Otto


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-16 Thread Dina Belova
Ildiko, thanks for starting this discussion.

Really, that is a quite painful problem for the Ceilometer and QA teams. As far
as I know, there is currently a tendency toward making integration
Tempest tests quicker and less resource-consuming, which is quite logical
IMHO. Polling, as a way of collecting information from different services
and projects, is quite expensive in terms of load on the Nova API, etc.,
which is why I completely understand the QA team's wish to get rid of it,
although polling still does a lot of work inside Ceilometer, and that's why
integration testing for this feature is really important for me as a
Ceilometer contributor - without pollster testing we have no way to check
that it works.

That's why I'll be really glad if Ildiko's (or whatever other) solution
that will allow polling testing in the gate will be found and accepted.

The problem is that the solution described above requires a change in what
we call "environment preparation" for integration testing - and we
really need the QA crew's help here. AFAIR, deprecating polling (in favour
of notifications only) was suggested in some of the IRC discussions, but
that's not a solution we can just use right now - we need a way to verify
that Ceilometer works right now in order to continue improving it.

So any suggestions and comments are welcome here :)
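
To make the proposal concrete, the reconfigure-and-restart step could be
scripted roughly like this (the path, pipeline layout and service name are
illustrative; the restart is exactly the step that dynamic reconfiguration
would later remove):

    import subprocess
    import yaml

    PIPELINE = '/etc/ceilometer/pipeline.yaml'  # illustrative path

    def set_polling_interval(seconds):
        with open(PIPELINE) as f:
            pipeline = yaml.safe_load(f)
        for source in pipeline['sources']:
            source['interval'] = seconds
        with open(PIPELINE, 'w') as f:
            yaml.safe_dump(pipeline, f, default_flow_style=False)
        # A restart is needed today for the new interval to take effect.
        subprocess.check_call(['service', 'ceilometer-agent-central',
                               'restart'])

    # set_polling_interval(60)   # before the polling tests
    # set_polling_interval(600)  # restore the default afterwards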

Thanks!
Dina


On Wed, Jul 16, 2014 at 7:06 PM, Ildikó Váncsa 
wrote:

>  Hi Folks,
>
>
>
> We’ve faced with some problems during running Ceilometer integration tests
> on the gate. The main issue is that we cannot test the polling mechanism,
> as if we use a small polling interval, like 1 min, then it puts a high
> pressure on Nova API. If we use a longer interval, like 10 mins, then we
> will not be able to execute any tests successfully, because it would run
> too long.
>
>
>
> The idea, to solve this issue,  is to reconfigure Ceilometer, when the
> polling is tested. Which would mean to change the polling interval from the
> default 10 mins to 1 min at the beginning of the test, restart the service
> and when the test is finished, the polling interval should be changed back
> to 10 mins, which will require one more service restart. The downside of
> this idea is, that it needs service restart today. It is on the list of
> plans to support dynamic re-configuration of Ceilometer, which would mean
> the ability to change the polling interval without restarting the service.
>
>
>
> I know that this idea isn’t ideal from the PoV that the system
> configuration is changed during running the tests, but this is an expected
> scenario even in a production environment. We would change a parameter that
> can be changed by a user any time in a way as users do it too. Later on,
> when we can reconfigure the polling interval without restarting the
> service, this approach will be even simpler.
>
>
>
> This idea would make it possible to test the polling mechanism of
> Ceilometer without any radical change in the ordering of test cases or any
> other things that would be strange in integration tests. We couldn’t find
> any better way to solve the issue of the load on the APIs caused by polling.
>
>
>
> What’s your opinion about this scenario? Do you think it could be a viable
> solution to the above described problem?
>
>
>
> Thanks and Best Regards,
>
> Ildiko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Eric Windisch
On Wed, Jul 16, 2014 at 10:15 AM, Sean Dague  wrote:

> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
> so we started executing the livesnapshot code in the nova libvirt
> driver. Which fails about 20% of the time in the gate, as we're bringing
> computes up and down while doing a snapshot. Dan Berange did a bunch of
> debug on that and thinks it might be a qemu bug. We disabled these code
> paths, so live snapshot has now been ripped out.
>
> In January we also triggered a libvirt bug, and had to carry a private
> build of libvirt for 6 weeks in order to let people merge code in
> OpenStack.
>
> We never were able to switch to libvirt 1.1.1 in the gate using the
> Ubuntu Cloud Archive during Icehouse development, because it has a
> different set of failures that would have prevented people from merging
> code.
>
> Based on these experiences, libvirt version differences seem to be as
> substantial as major hypervisor differences. There is a proposal here -
> https://review.openstack.org/#/c/103923/ to hold newer versions of
> libvirt to the same standard we hold xen, vmware, hyperv, docker,
> ironic, etc.
>
> I'm somewhat concerned that the -2 pile on in this review is a double
> standard of libvirt features, and features exploiting really new
> upstream features. I feel like a lot of the language being used here
> about the burden of doing this testing is exactly the same as was
> presented by the docker team before their driver was removed, which was
> ignored by the Nova team at the time. It was the concern by the freebsd
> team, which was also ignored and they were told to go land libvirt
> patches instead.
>

For running our own CI, the burden was largely a matter of resource and
time constraints for individual contributors and/or startups to setup and
maintain 3rd-party CI, especially in light of a parallel requirement to
pass the CI itself. I received community responses that equated to, "if you
were serious, you'd dedicate several full-time developers and/or
infrastructure engineers available for OpenStack development, plus several
thousand a month in infrastructure itself".  For Docker, these were simply
not options. Back in January, putting 2-3 engineers fulltime toward
OpenStack would have been a contribution of 10-20% of our engineering
force. OpenStack is not more important to us than Docker itself.

This thread highlights more deeply the problems for the FreeBSD folks.
First, I still disagree with the recommendation that they contribute to
libvirt. It's a classic example of creating two or more problems from one.
Once they have support in libvirt, how long before their code is in a
version of libvirt acceptable to Nova? When they hit edge-cases or bugs,
requiring changes in libvirt, how long before those fixes are accepted by
Nova?

I concur with thoughts in the Gerrit review which suggest there should be a
non-voting gate for testing against the latest libvirt.

I think the ideal situation would be to functionally test against multiple
versions of libvirt. We'd have at least two versions: "trunk,
latest-stable". We might want "trunk, trunk-snapshot-XYZ, latest-stable,
version-in-ubuntu, version-in-rhel", or any number of back-versions
included in the gate. The version-in-rhel and version-in-ubuntu might be
good candidates for 3rd-party CI.


Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Flavor framework proposal

2014-07-16 Thread Sumit Naiksatam
To the earlier question on whether we had defined what we wanted to
solve with the flavors framework, a high level requirement was
captured in the following approved spec for advanced services:
https://review.openstack.org/#/c/92200

On Wed, Jul 16, 2014 at 5:18 AM, Eugene Nikanorov
 wrote:
> Some comments inline:
>
>>
>> Agreed-- I think we need to more fully flesh out how extension list / tags
>> should work here before we implement it. But this doesn't prevent us from
>> rolling forward with a "version 1" of flavors so that we can start to use
>> some of the benefits of having flavors (like the ability to use multiple
>> service profiles with a single driver/provider, or multiple service profiles
>> for a single kind of service).
>
> Agree here.
>
>>
>>
>> Yes, I think there are many benefits we can get out of the flavor
>> framework without having to have an extensions list / tags at this revision.
>> But I'm curious: Did we ever define what we were actually trying to solve
>> with flavors?  Maybe that's the reason the discussion on this has been all
>> over the place: People are probably making assumptions about the problem we're
>> trying to solve and we need to get on the same page about this.
>
>
> Yes, we did!
>  The original problem has several aspects:
> 1) providing users with some information about what service implementation
> they get (capabilities)
> 2) providing users with ability to specify (choose, actually) some
> implementation details that don't relate to a logical configuration
> (capacity, insertion mode, HA mode, resiliency, security standards, etc)
> 3) providing operators a way to set up different modes of one driver
> 4) providing operators a way to seamlessly change the drivers backing existing
> logical configurations (now it's not so easy to do because logical config is
> tightly coupled with provider/driver)
>
> The proposal we're discussing right now mostly covers points (2), (3) and
> (4) which is already a good thing.
> So for now I'd propose to put 'information about service implementation' in
> the description to cover (1)
>
> I'm currently implementing the proposal (API and DB parts, no integration
> with services yet)
>
>
> Thanks,
> Eugene.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Missing logs in Midokura CI Bot Inbox x

2014-07-16 Thread Kyle Mestery
On Wed, Jul 16, 2014 at 4:48 AM, Tomoe Sugihara  wrote:
> Hi there,
>
> Just to apologize and inform that most of the links to the logs of Midokura
> CI bot on gerrit are dead now. That is because I accidentally deleted all
> the logs (instead of over a month old logs) today. Logs for the jobs after
> the deletion are saved just fine.
> We'll be more careful about handling the logs.
>
Thanks for the update here Tomoe, it's appreciated!

Kyle

> Best,
> Tomoe
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Dan Smith
>> Based on these experiences, libvirt version differences seem to be as
>> substantial as major hypervisor differences.
> 
> I think that is a pretty dubious conclusion to draw from just a
> couple of bugs. The reason they really caused pain is because
> the CI test system was based on an old version for too long.

I think the conclusion being made is that "libvirt versions two years
apart are effectively like different major versions of a hypervisor." I
don't think that's wrong.

> That is a rather misleading statement you're making there. Libvirt is
> in fact held to *higher* standards than xen/vmware/hyperv because it
> is actually gating all commits. The 3rd party CI systems can be
> broken for days, weeks and we still happily accept code for those
> virt. drivers.

Right, and we've talked about raising that bar as well, by tracking
their status more closely, automatically -2'ing patches that touch the
subdirectory but don't get a passing vote from the associated CI system,
etc.

You're definitely right that libvirt is held to a higher bar in terms of
it being required to pass tests before we can even mechanically land a
patch. However, there is a lot of function in the driver that we don't
test right now because of the version we're tied to in the gate nodes.
It's actually *easier* for a 3rd party system like vmware to roll their
environment and enable tests of newer features, so I don't think that
this requirement would cause existing 3rd party CI systems any trouble.

> AFAIK there has never been any statement that every feature added
> to xen/vmware/hyperv must be tested by the 3rd party CI system.

On almost every spec that doesn't already call it out, a reviewer asks
"how are you going to test this beyond just unit tests?" I think the
assumption and feeling among most reviewers is that new features,
especially ones that depend on new things (be it storage drivers, hypervisor
versions, etc.), shouldn't be approved without testing.

> AFAIK the requirement for 3rd party CI is merely that it has to exist,
> running some arbitrary version of the hypervisor in question. We've
> not said that 3rd party CI has to be covering every version or every
> feature, as is trying to be pushed on libvirt here.

The requirement in the past has been that it has to exist. At the last
summit, we had a discussion about how to raise the bar on what we
currently have. We made a lot of progress getting those systems
established (only because we had a requirement, by the way) in the last
cycle. Going forward, we need to have new levels of expectations in
terms of coverage and reliability of those things, IMHO.

> As above, aside from the question of gating vs non-gating, the bar is
> already set at the same level of everyone. There has to be a CI system
> somewhere testing some arbitrary version of the software. Everyone meets
> that requirement.

Wording our current requirement as you have here makes it sound like an
"arbitrary" ticky mark, which saddens and kind of offends me. What we
currently have was a step in the right direction. It was a lot of work,
but it's by no means arbitrary nor sufficient, IMHO.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 08:12:47AM -0700, Clark Boylan wrote:
> On Wed, Jul 16, 2014 at 7:50 AM, Daniel P. Berrange  
> wrote:
> > On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
> >> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
> >> so we started executing the livesnapshot code in the nova libvirt
> >> driver. Which fails about 20% of the time in the gate, as we're bringing
>> computes up and down while doing a snapshot. Dan Berrange did a bunch of
> >> debug on that and thinks it might be a qemu bug. We disabled these code
> >> paths, so live snapshot has now been ripped out.
> >>
> >> In January we also triggered a libvirt bug, and had to carry a private
> >> build of libvirt for 6 weeks in order to let people merge code in 
> >> OpenStack.
> >>
> >> We never were able to switch to libvirt 1.1.1 in the gate using the
> >> Ubuntu Cloud Archive during Icehouse development, because it has a
> >> different set of failures that would have prevented people from merging
> >> code.
> >>
> >> Based on these experiences, libvirt version differences seem to be as
> >> substantial as major hypervisor differences.
> >
> > I think that is a pretty dubious conclusion to draw from just a
> > couple of bugs. The reason they really caused pain is because
> > the CI test system was based on an old version for too long. If it
> > were tracking current upstream version of libvirt/KVM we'd have
> > seen the problem much sooner & been able to resolve it during
> > review of the change introducing the feature, as we do with any
> > other bugs we encounter in software such as the breakage we see
> > with my stuff off pypi.
> 
> How do you suggest we do this effectively with libvirt? In the past we
> have tried to use newer versions of libvirt and they completely broke.
> And the time to fixing that was non trivial. For most of our pypi
> stuff we attempt to fix upstream and if that does not happen quickly
> we pin (arguably we don't do this well either, see the sqlalchemy<=0.7
> issues of the past).

The real big problem we had was the firewall deadlock problem. When
I was made aware of that problem I worked on fixing that in upstream
libvirt immediately. IIRC we had a solution in a week or two which
was added to a libvirt stable release update. Much of the further
delay was in waiting for the fixes to make their way into the
Ubuntu repositories. If the gate were ignoring Ubuntu repos and
pulling latest upstream libvirt, then we could have just pinned
to an older libvirt until the fix was pushed out to a stable
libvirt release. The libvirt community release process is flexible
enough to push out priority bug fix releases in a matter of days,
or less,  if needed. So temporarily pinning isn't the end of the
world in that respect.

> I am worried that we would just regress to the current process because
> we have tried something similar to this previously and were forced to
> regress to the current process.

IMHO the longer we wait between updating the gate to new versions
the bigger the problems we create for ourselves, e.g. we were switching
from 0.9.8 (released Dec 2011) to 1.1.1 (released Jun 2013), so we
were exposed to over 1.5 years' worth of code churn in a single
event. The fact that we only hit a couple of bugs in that, is actually
remarkable given the amount of feature development that had gone into
libvirt in that time. If we had been tracking each intervening libvirt
release I expect the majority of updates would have had no ill effect
on us at all. For the couple of releases where there was a problem we
would not be forced to rollback to a version years older again, we'd
just drop back to the previous release at most 1 month older.

Ultimately, thanks to us identifying & fixing those previously seen
bugs, we did just switch from 0.9.8 to 1.2.2, which is a 2.5-year
jump, and the only problem we've hit is the live snapshot problem
which appears to be a QEMU bug.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] REST API access to configuration options

2014-07-16 Thread Oleg Gelbukh
On Tue, Jul 15, 2014 at 1:08 PM, Mark McLoughlin  wrote:
>
> Also, this is going to tell you how the API service you connected to was
> configured. Where there are multiple API servers, what about the others?
> How do operators verify all of the API servers behind a load balancer
> with this?
>
> And in the case of something like Nova, what about the many other nodes
> behind the API server?
>

A query for configuration could be part of the /hypervisors API extension. It
doesn't solve the multiple API servers issue though.

--
Best regards,
Oleg Gelbukh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Morgan Fainberg

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000



> Another problem with port 5000 in Fedora, and probably more recent
> versions of RHEL, is the selinux policy:
>  
> # sudo semanage port -l|grep 5000
> ...
> commplex_main_port_t tcp 5000
> commplex_main_port_t udp 5000
>  
> There is some service called "commplex" that has already "claimed" port
> 5000 for its use, at least as far as selinux goes.
> 

Wouldn’t this also affect the eventlet-based Keystone using port 5000? This is
not an Apache-specific issue, is it?

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Clark Boylan
On Wed, Jul 16, 2014 at 7:50 AM, Daniel P. Berrange  wrote:
> On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
>> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
>> so we started executing the livesnapshot code in the nova libvirt
>> driver. Which fails about 20% of the time in the gate, as we're bringing
> computes up and down while doing a snapshot. Dan Berrange did a bunch of
>> debug on that and thinks it might be a qemu bug. We disabled these code
>> paths, so live snapshot has now been ripped out.
>>
>> In January we also triggered a libvirt bug, and had to carry a private
>> build of libvirt for 6 weeks in order to let people merge code in OpenStack.
>>
>> We never were able to switch to libvirt 1.1.1 in the gate using the
>> Ubuntu Cloud Archive during Icehouse development, because it has a
>> different set of failures that would have prevented people from merging
>> code.
>>
>> Based on these experiences, libvirt version differences seem to be as
>> substantial as major hypervisor differences.
>
> I think that is a pretty dubious conclusion to draw from just a
> couple of bugs. The reason they really caused pain is because
> the CI test system was based on an old version for too long. If it
> were tracking current upstream version of libvirt/KVM we'd have
> seen the problem much sooner & been able to resolve it during
> review of the change introducing the feature, as we do with any
> other bugs we encounter in software such as the breakage we see
> with my stuff off pypi.
>
How do you suggest we do this effectively with libvirt? In the past we
have tried to use newer versions of libvirt and they completely broke.
And the time to fixing that was non trivial. For most of our pypi
stuff we attempt to fix upstream and if that does not happen quickly
we pin (arguably we don't do this well either, see the sqlalchemy<=0.7
issues of the past).

I am worried that we would just regress to the current process because
we have tried something similar to this previously and were forced to
regress to the current process.
>
>> There is a proposal here -
>> https://review.openstack.org/#/c/103923/ to hold newer versions of
>> libvirt to the same standard we hold xen, vmware, hyperv, docker,
>> ironic, etc.
>
> That is a rather misleading statement you're making there. Libvirt is
> in fact held to *higher* standards than xen/vmware/hyperv because it
> is actually gating all commits. The 3rd party CI systems can be
> broken for days, weeks and we still happily accept code for those
> virt. drivers.
>
> AFAIK there has never been any statement that every feature added
> to xen/vmware/hyperv must be tested by the 3rd party CI system.
> All of the CI systems, for whatever driver, are currently testing
> some arbitrary subset of the overall features of that driver, and
> by no means every new feature being approved in review has coverage.
>
>> I'm somewhat concerned that the -2 pile on in this review is a double
>> standard of libvirt features, and features exploiting really new
>> upstream features. I feel like a lot of the language being used here
>> about the burden of doing this testing is exactly the same as was
>> presented by the docker team before their driver was removed, which was
>> ignored by the Nova team at the time. It was the concern by the freebsd
>> team, which was also ignored and they were told to go land libvirt
>> patches instead.
>
> As above the only double standard is that libvirt tests are all gating
> and 3rd party tests are non-gating.
>
>> If we want to reduce the standards for libvirt we should reconsider
>> what's being asked of 3rd party CI teams, and things like the docker
>> driver, as well as the A, B, C driver classification. Because clearly
>> libvirt 1.2.5+ isn't actually class A supported.
>
> AFAIK the requirement for 3rd party CI is merely that it has to exist,
> running some arbitrary version of the hypervisor in question. We've
> not said that 3rd party CI has to be covering every version or every
> feature, as is trying to be pushed on libvirt here.
>
> The "Class A", "Class B", "Class C" classifications were always only
> ever going to be a crude approximation. Unless you define them to be
> wrt the explicit version of every single deb/pypi package installed
> in the gate system (which I don't believe anyone has ever suggested)
> there is always risk that a different version of some package has a
> bug that Nova tickles.
>
> IMHO the classification we do for drivers provides an indication as
> to the quality of the *Nova* code. IOW class A indicates that we've
> thoroughly tested the Nova code and believe it to be free of bugs for
> the features we've tested. If there is a bug in a 3rd party package
> that doesn't imply that the Nova code is any less well tested or
> more buggy. Replace libvirt with mysql in your example above. A new
> version of mysql with a bug does not imply that Nova is suddenly not
> "class A" tested.

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Mark McLoughlin
On Wed, 2014-07-16 at 16:15 +0200, Sean Dague wrote:
..
> Based on these experiences, libvirt version differences seem to be as
> substantial as major hypervisor differences. There is a proposal here -
> https://review.openstack.org/#/c/103923/ to hold newer versions of
> libvirt to the same standard we hold xen, vmware, hyperv, docker,
> ironic, etc.

That's a bit of a mis-characterization - in terms of functional test
coverage, the libvirt driver is the bar that all the other drivers
struggle to meet.

And I doubt any of us pay too close attention to the feature coverage
that the 3rd party CI test jobs have.

> I'm somewhat concerned that the -2 pile on in this review is a double
> standard of libvirt features, and features exploiting really new
> upstream features. I feel like a lot of the language being used here
> about the burden of doing this testing is exactly the same as was
> presented by the docker team before their driver was removed, which was
> ignored by the Nova team at the time.

Personally, I wasn't very comfortable with the docker driver move. It
certainly gave an outward impression that we're an unfriendly community.
The mitigating factor was that a lot of friendly, collaborative,
coaching work went on in the background for months. Expectations were
communicated well in advance.

Kicking the docker driver out of the tree has resulted in an uptick in
the amount of work happening on it, but I suspect most people involved
have a bad taste in their mouths. I guess there's incentives at play
which mean they'll continue plugging away at it, but those incentives
aren't always at play.

> It was the concern by the freebsd
> team, which was also ignored and they were told to go land libvirt
> patches instead.
> 
> I'm ok with us as a project changing our mind and deciding that the test
> bar needs to be taken down a notch or two because it's too burdensome to
> contributors and vendors, but if we are doing that, we need to do it for
> everyone. A lot of other organizations have put a ton of time and energy
> into this, and are carrying a maintenance cost of running these systems
to get results back on a timely basis.

I don't agree that we need to apply the same rules equally to everyone.

At least part of the reasoning behind the emphasis on 3rd party CI
testing was that projects (Neutron in particular) were being overwhelmed
by contributions to drivers from developers who never contributed in any
way to the core. The corollary of that is the contributors who do
contribute to the core should be given a bit more leeway in return.

There's a natural building of trust and element of human relationships
here. As a reviewer, you learn to trust contributors with a good track
record and perhaps prioritize contributions from them.

> As we seem deadlocked in the review, I think the mailing list is
> probably a better place for this.
> 
> If we want to reduce the standards for libvirt we should reconsider
> what's being asked of 3rd party CI teams, and things like the docker
> driver, as well as the A, B, C driver classification. Because clearly
> libvirt 1.2.5+ isn't actually class A supported.

No, there are features or code paths of the libvirt 1.2.5+ driver that
aren't as well tested as the "class A" designation implies. And we have
a proposal to make sure these aren't used by default:

  https://review.openstack.org/107119

i.e. to stray off the "class A" path, an operator has to opt into it by
changing a configuration option that explains they will be enabling code
paths which aren't yet tested upstream.

These features have value to some people now; they don't risk regressing
the "class A" driver, and there's a clear path to them being elevated to
"class A" in time. We should value these contributions and nurture these
contributors.

Appending some of my comments from the review below. The tl;dr is that I
think we're losing sight of the importance of welcoming and nurturing
contributors, and valuing whatever contributions they can make. That
terrifies me. 

Mark.

---

Compared to other open source projects, we have done an awesome job in
OpenStack of having good functional test coverage. Arguably, given the
complexity of the system, we couldn't have got this far without it. I
can take zero credit for any of it.

However, not everything is tested now, nor are the tests we have
foolproof. When you consider the number of configuration options we
have, the supported distros, the ranges of library versions we claim to
support, etc., etc. I don't think we can ever get to an "everything is
tested" point.

In the absence of that, I think we should aim to be more clear what *is*
tested. The config option I suggest does that, which is a big part of
its merit IMHO.

We've had some success with the "be nasty enough to driver contributors
and they'll do what we want" approach so far, but IMHO that was an
exceptional approach for an exceptional situation - drivers that were
completely broken, and driver devel

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
Reposted now with a lot fewer quoting issues. Thanks for being patient with
the re-send!

--
From: Joe Gordon joe.gord...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 02:27:42
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

> On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg  
> wrote:
>  
> >
> >
> > On Tuesday, July 15, 2014, Steven Hardy wrote:
> >
> >> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> >> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> >> > >Hi all,
> >> > >
> >> > >I'm probably missing something, but can anyone please tell me when
> >> devstack
> >> > >will be moving to keystone v3, and in particular when API auth_token
> >> will
> >> > >be configured such that auth_version is v3.0 by default?
> >> > >
> >> > >Some months ago, I posted this patch, which switched auth_version to
> >> v3.0
> >> > >for Heat:
> >> > >
> >> > >https://review.openstack.org/#/c/80341/
> >> > >
> >> > >That patch was nack'd because there was apparently some version
> >> discovery
> >> > >code coming which would handle it, but AFAICS I still have to manually
> >> > >configure auth_version to v3.0 in the heat.conf for our API to work
> >> > >properly with requests from domains other than the default.
> >> > >
> >> > >The same issue is observed if you try to use non-default-domains via
> >> > >python-heatclient using this soon-to-be-merged patch:
> >> > >
> >> > >https://review.openstack.org/#/c/92728/
> >> > >
> >> > >Can anyone enlighten me here, are we making a global devstack move to
> >> the
> >> > >non-deprecated v3 keystone API, or do I need to revive this devstack
> >> patch?
> >> > >
> >> > >The issue for Heat is we support notifications from "stack domain
> >> users",
> >> > >who are created in a heat-specific domain, thus won't work if the
> >> > >auth_token middleware is configured to use the v2 keystone API.
> >> > >
> >> > >Thanks for any information :)
> >> > >
> >> > >Steve
> >> > There are reviews out there in client land now that should work. I was
> >> > testing discover just now and it seems to be doing the right thing. If
> >> the
> >> > AUTH_URL is chopped of the V2.0 or V3 the client should be able to
> >> handle
> >> > everything from there on forward.
> >>
> >> Perhaps I should restate my problem, as I think perhaps we still have
> >> crossed wires:
> >>
> >> - Certain configurations of Heat *only* work with v3 tokens, because we
> >> create users in a non-default domain
> >> - Current devstack still configures versioned endpoints, with v2.0
> >> keystone
> >> - Heat breaks in some circumstances on current devstack because of this.
> >> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
> >> the problem.
> >>
> >> So, back in March, client changes were promised to fix this problem, and
> >> now, in July, they still have not - do I revive my patch, or are fixes for
> >> this really imminent this time?
> >>
> >> Basically I need the auth_token middleware to accept a v3 token for a user
> >> in a non-default domain, e.g validate it *always* with the v3 API not
> >> v2.0,
> >> even if the endpoint is still configured versioned to v2.0.
> >>
> >> Sorry to labour the point, but it's frustrating to see this still broken
> >> so long after I proposed a fix and it was rejected.
> >>
> >>
> > We just did a test converting over the default to v3 (and falling back to
> > v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
> > couple of test patches and they seemed to succeed - yay!!) It looks like it
> > will just work. Now there is a big caveat: this default will only change
> > in the keystone middleware project, and it needs to have a patch or three
> > get through gate converting projects over to use it before we accept the
> > code.
> >
> > Nova has approved the patch to switch over, it is just fighting with Gate.
> > Other patches are proposed for other projects and are in various states of
> > approval.
> >
>  
> I assume you mean switch over to keystone middleware project [0], not

Correct, switch to middleware (a requirement before we landed this patch in 
middleware). I was unclear in that statement. Sorry, I didn’t mean to make
anyone jumpy by suggesting that something was approved in Nova that shouldn’t
have been, or that did massive re-workings internal to Nova.

> switch over to keystone v3. Based on [1] my understanding is no changes to
> nova are needed to use the v2 compatible parts of the v3 API, But are
> changes needed to support domains or is this not a problem because the auth
> middleware uses uuids for user_id and project_id, so nova doesn't need to
> have any concept of domains? Are any nova changes needed to support the v3
> API?
> 

This change simply makes it so the middleware will prefer v3 over v2 if both
are available for validating UUID tokens and fetching certs. It still falls
back to v2 as needed.

[openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-16 Thread Ildikó Váncsa
Hi Folks,

We've run into some problems while running Ceilometer integration tests on
the gate. The main issue is that we cannot test the polling mechanism: if we
use a small polling interval, like 1 min, it puts high pressure on the Nova
API, and if we use a longer interval, like 10 mins, we will not be able to
execute any tests successfully, because they would run too long.

The idea to solve this issue is to reconfigure Ceilometer when polling is
tested: change the polling interval from the default 10 mins to 1 min at the
beginning of the test, restart the service, and when the test is finished,
change the polling interval back to 10 mins, which requires one more service
restart. The downside of this idea is that it needs a service restart today.
Supporting dynamic re-configuration of Ceilometer, i.e. the ability to change
the polling interval without restarting the service, is on the list of plans.
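
For illustration, a minimal sketch of the setup/teardown such a test could
perform, assuming the default /etc/ceilometer/pipeline.yaml location and the
ceilometer-agent-central service name (both vary by deployment):

    # drop the polling interval from the default 600s to 60s for the test
    sudo sed -i 's/interval: 600/interval: 60/' /etc/ceilometer/pipeline.yaml
    sudo service ceilometer-agent-central restart
    # ... run the polling test cases ...
    # restore the default interval afterwards
    sudo sed -i 's/interval: 60/interval: 600/' /etc/ceilometer/pipeline.yaml
    sudo service ceilometer-agent-central restart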

I know that this idea isn't ideal from the PoV that the system configuration is
changed while the tests are running, but this is an expected scenario even in a
production environment: we would change a parameter that a user can change at
any time, in the same way users do it. Later on, when we can reconfigure
the polling interval without restarting the service, this approach will be even
simpler.

This idea would make it possible to test the polling mechanism of Ceilometer
without any radical change in the ordering of test cases or other measures
that would be out of place in integration tests. We couldn't find any better
way to solve the issue of the load that polling puts on the APIs.

What's your opinion about this scenario? Do you think it could be a viable 
solution to the above described problem?

Thanks and Best Regards,
Ildiko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Rich Megginson

On 07/16/2014 08:43 AM, Brian Haley wrote:

On 07/16/2014 07:34 AM, Joe Jiang wrote:

Hi all,

When I set up my development environment using devstack on CentOS 6.5,
fetching the devstack source via github.com and checking out the
stable/icehouse branch, below[1] is the error log fragment I got.
I'm not sure if it is OK to ask my question on this mailing list,
because I searched all over the web and still could not resolve it.
Anyway, I need your help, and your help is highly appreciated.

I tripped over a similar issue with Horizon yesterday and found this bug:

https://bugs.launchpad.net/devstack/+bug/1340660

The error I saw was with port 80, so I was able to disable Horizon to get around
it, and I didn't see anything obvious in the apache error logs to explain it.

-Brian


Another problem with port 5000 in Fedora, and probably more recent 
versions of RHEL, is the selinux policy:


# sudo semanage port -l|grep 5000
...
commplex_main_port_t   tcp  5000
commplex_main_port_t   udp  5000

There is some service called "commplex" that has already "claimed" port 
5000 for its use, at least as far as selinux goes.
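
For reference, a sketch of how to inspect the policy and, if relabeling is
acceptable on your system, let httpd bind the port; the http_port_t choice
is an assumption about the local policy:

    # see which SELinux type currently owns port 5000
    sudo semanage port -l | grep 5000
    # relabel tcp/5000 so Apache (httpd) is allowed to bind it
    sudo semanage port -m -t http_port_t -p tcp 5000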






2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i
/etc/httpd/conf/httpd.conf
2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e "
2014-07-16 11:08:53.310 | s,%USER%,stack,g;
2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
2014-07-16 11:08:53.310 | " /home/devstack/files/apache-horizon.template

/etc/httpd/conf.d/horizon.conf'

2014-07-16 11:08:53.321 | + start_horizon
2014-07-16 11:08:53.321 | + restart_apache_server
2014-07-16 11:08:53.321 | + restart_service httpd
2014-07-16 11:08:53.321 | + is_ubuntu
2014-07-16 11:08:53.321 | + [[ -z rpm ]]
2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine
the server's fully qualified domain name, using 127.0.0.1 for ServerName
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address [::]:5000
2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not bind
to address 0.0.0.0:5000
2014-07-16 11:08:53.533 | no listening sockets available, shutting down
2014-07-16 11:08:53.533 | Unable to open logs
2014-07-16 11:08:53.547 |  [FAILED]
2014-07-16 11:08:53.549 | + exit_trap
2014-07-16 11:08:53.549 | + local r=1
2014-07-16 11:08:53.549 | ++ jobs -p
2014-07-16 11:08:53.550 | + jobs=
2014-07-16 11:08:53.550 | + [[ -n '' ]]
2014-07-16 11:08:53.550 | + exit 1
[stack@stack devstack]$







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Daniel P. Berrange
On Wed, Jul 16, 2014 at 04:15:40PM +0200, Sean Dague wrote:
> Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
> so we started executing the livesnapshot code in the nova libvirt
> driver. Which fails about 20% of the time in the gate, as we're bringing
> computes up and down while doing a snapshot. Dan Berrange did a bunch of
> debug on that and thinks it might be a qemu bug. We disabled these code
> paths, so live snapshot has now been ripped out.
> 
> In January we also triggered a libvirt bug, and had to carry a private
> build of libvirt for 6 weeks in order to let people merge code in OpenStack.
> 
> We never were able to switch to libvirt 1.1.1 in the gate using the
> Ubuntu Cloud Archive during Icehouse development, because it has a
> different set of failures that would have prevented people from merging
> code.
> 
> Based on these experiences, libvirt version differences seem to be as
> substantial as major hypervisor differences.

I think that is a pretty dubious conclusion to draw from just a
couple of bugs. The reason they really caused pain is because
the CI test system was based on an old version for too long. If it
were tracking current upstream version of libvirt/KVM we'd have
seen the problem much sooner & been able to resolve it during
review of the change introducing the feature, as we do with any
other bugs we encounter in software such as the breakage we see
with my stuff off pypi.

> There is a proposal here -
> https://review.openstack.org/#/c/103923/ to hold newer versions of
> libvirt to the same standard we hold xen, vmware, hyperv, docker,
> ironic, etc.

That is a rather misleading statement you're making there. Libvirt is
in fact held to *higher* standards than xen/vmware/hyperv because it
is actually gating all commits. The 3rd party CI systems can be
broken for days, weeks and we still happily accept code for those
virt. drivers.

AFAIK there has never been any statement that every feature added
to xen/vmware/hyperv must be tested by the 3rd party CI system.
All of the CI systems, for whatever driver, are currently testing
some arbitrary subset of the overall features of that driver, and
by no means every new feature being approved in review has coverage.

> I'm somewhat concerned that the -2 pile on in this review is a double
> standard of libvirt features, and features exploiting really new
> upstream features. I feel like a lot of the language being used here
> about the burden of doing this testing is exactly the same as was
> presented by the docker team before their driver was removed, which was
> ignored by the Nova team at the time. It was the concern by the freebsd
> team, which was also ignored and they were told to go land libvirt
> patches instead.

As above the only double standard is that libvirt tests are all gating
and 3rd party tests are non-gating. 

> If we want to reduce the standards for libvirt we should reconsider
> what's being asked of 3rd party CI teams, and things like the docker
> driver, as well as the A, B, C driver classification. Because clearly
> libvirt 1.2.5+ isn't actually class A supported.

AFAIK the requirement for 3rd party CI is merely that it has to exist,
running some arbitrary version of the hypervisor in question. We've
not said that 3rd party CI has to be covering every version or every
feature, as is trying to be pushed on libvirt here.

The "Class A", "Class B", "Class C" classifications were always only
ever going to be a crude approximation. Unless you define them to be
wrt the explicit version of every single deb/pypi package installed
in the gate system (which I don't believe anyone has ever suggested)
there is always risk that a different version of some package has a
bug that Nova tickles.

IMHO the classification we do for drivers provides an indication as 
to the quality of the *Nova* code. IOW class A indicates that we've
thoroughly tested the Nova code and believe it to be free of bugs for
the features we've tested. If there is a bug in a 3rd party package
that doesn't imply that the Nova code is any less well tested or
more buggy. Replace libvirt with mysql in your example above. A new
version of mysql with a bug does not imply that Nova is suddenly not
"class A" tested.

IMHO it is up to the downstream vendors to run testing to ensure that
what they give to their customers, still achieves the quality level
indicated by the tests upstream has performed on the Nova code.

> Anyway, discussion welcomed. My primary concern right now isn't actually
> where we set the bar, but that we set the same bar for everyone.

As above, aside from the question of gating vs non-gating, the bar is
already set at the same level of everyone. There has to be a CI system
somewhere testing some arbitrary version of the software. Everyone meets
that requirement.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

[openstack-dev] No DVR Meeting today

2014-07-16 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
DVR IRC Meeting for Today is Cancelled.
We will meet next week.
Thanks

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] l2pop problems

2014-07-16 Thread Zang MingJie
Hi, all:

While resolving ovs restart rebuild br-tun flows[1], we have found
several l2pop problems:

1. L2pop depends on agent_boot_time to decide whether to send all
port information or not, but agent_boot_time is unreliable. For
example, if the service receives a port-up message before the agent
status report, the agent will never receive any ports on other agents.

2. If openvswitch is restarted, all flows are lost, including all
l2pop flows, and the agent is unable to fetch or recreate the l2pop flows.
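
A quick way to observe the second problem, assuming the usual br-tun bridge
name (the openvswitch service name varies by distro):

    # dump the tunnel bridge flows, restart OVS, then dump again; the l2pop
    # unicast and flooding entries are gone after the restart
    sudo ovs-ofctl dump-flows br-tun
    sudo service openvswitch-switch restart
    sudo ovs-ofctl dump-flows br-tun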

To resolve the problems, I'm suggesting some changes:

1. Because agent_boot_time is unreliable, the service can't decide
whether to send the flooding entry or not. But the agent can build up the
flooding entries from the unicast entries; this has already been
implemented[2].

2. Create an RPC from agent to service which fetches all fdb entries; the
agent calls the RPC in `provision_local_vlan`, before setting up any
port.[3]

After these changes, the l2pop service part becomes simpler and more
robust, with mainly two functions: first, return all fdb entries at once when
requested; second, broadcast a single fdb entry when a port goes up or down.

[1] https://bugs.launchpad.net/neutron/+bug/1332450
[2] https://review.openstack.org/#/c/101581/
[3] https://review.openstack.org/#/c/107409/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
I apologize for the very mixed up/missed quoting in that response, looks like 
my client ate a bunch of the quotes when writing up the email. 

—
Morgan Fainberg


--
From: Morgan Fainberg morgan.fainb...@gmail.com
Reply: Morgan Fainberg morgan.fainb...@gmail.com
Date: July 16, 2014 at 07:34:57
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

>  
>  
> On Wednesday, July 16, 2014, Joe Gordon wrote:
>  
>  
>  
> On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg wrote:  
>  
>  
> On Tuesday, July 15, 2014, Steven Hardy wrote:
> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> > >Hi all,
> > >
> > >I'm probably missing something, but can anyone please tell me when devstack
> > >will be moving to keystone v3, and in particular when API auth_token will
> > >be configured such that auth_version is v3.0 by default?
> > >
> > >Some months ago, I posted this patch, which switched auth_version to v3.0
> > >for Heat:
> > >
> > >https://review.openstack.org/#/c/80341/
> > >
> > >That patch was nack'd because there was apparently some version discovery
> > >code coming which would handle it, but AFAICS I still have to manually
> > >configure auth_version to v3.0 in the heat.conf for our API to work
> > >properly with requests from domains other than the default.
> > >
> > >The same issue is observed if you try to use non-default-domains via
> > >python-heatclient using this soon-to-be-merged patch:
> > >
> > >https://review.openstack.org/#/c/92728/
> > >
> > >Can anyone enlighten me here, are we making a global devstack move to the
> > >non-deprecated v3 keystone API, or do I need to revive this devstack patch?
> > >
> > >The issue for Heat is we support notifications from "stack domain users",
> > >who are created in a heat-specific domain, thus won't work if the
> > >auth_token middleware is configured to use the v2 keystone API.
> > >
> > >Thanks for any information :)
> > >
> > >Steve
> > There are reviews out there in client land now that should work. I was
> > testing discover just now and it seems to be doing the right thing. If the
> > AUTH_URL is chopped of the V2.0 or V3 the client should be able to handle
> > everything from there on forward.
>  
> Perhaps I should restate my problem, as I think perhaps we still have
> crossed wires:
>  
> - Certain configurations of Heat *only* work with v3 tokens, because we
> create users in a non-default domain
> - Current devstack still configures versioned endpoints, with v2.0 keystone
> - Heat breaks in some circumstances on current devstack because of this.
> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
> the problem.
>  
> So, back in March, client changes were promised to fix this problem, and
> now, in July, they still have not - do I revive my patch, or are fixes for
> this really imminent this time?
>  
> Basically I need the auth_token middleware to accept a v3 token for a user
> in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
> even if the endpoint is still configured versioned to v2.0.
>  
> Sorry to labour the point, but it's frustrating to see this still broken
> so long after I proposed a fix and it was rejected.
>  
>  
> We just did a test converting over the default to v3 (and falling back to v2 
> as needed, yes  
> fallback will still be needed) yesterday (Dolph posted a couple of test 
> patches and they  
> seemed to succeed - yay!!) It looks like it will just work. Now there is a 
> big caveat: this
> default will only change in the keystone middleware project, and it needs to 
> have a patch  
> or three get through gate converting projects over to use it before we accept 
> the code.  
>  
> Nova has approved the patch to switch over, it is just fighting with Gate. 
> Other patches  
> are proposed for other projects and are in various states of approval.
>  
> I assume you mean switch over to keystone middleware project [0], not switch 
> over to keystone  
> v3. Based on [1] my understanding is no changes to nova are needed to use the 
> v2 compatible  
> parts of the v3 API, But are changes needed to support domains or is this not 
> a problem because  
> the auth middleware uses uuids for user_id and project_id, so nova doesn't 
> need to have  
> any concept of domains? Are any nova changes needed to support the v3 API?
>  
>  
>  
> This change simply makes it so the middleware will prefer v3 over v2 if both 
> are available  
> for validating UUID tokens and fetching certs. It still falls back to v2 as 
> needed. It  
> is transparent to all services (it was blocking on Nova and some uniform 
> catalog related  
> issues a while back, but Jamie Lennox resolved those, see below for more 
> details).
>  
> It does not mean Nova (or anyone else) are magically using features they
> weren't already using.

Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Brian Haley
On 07/16/2014 07:34 AM, Joe Jiang wrote:
> Hi all, 
> 
> When I set up my development environment using devstack on CentOS 6.5,
> fetching the devstack source via github.com and checking out the
> stable/icehouse branch, below[1] is the error log fragment I got.
> I'm not sure if it is OK to ask my question on this mailing list,
> because I searched all over the web and still could not resolve it.
> Anyway, I need your help, and your help is highly appreciated.

I tripped over a similar issue with Horizon yesterday and found this bug:

https://bugs.launchpad.net/devstack/+bug/1340660

The error I saw was with port 80, so I was able to disable Horizon to get around
it, and I didn't see anything obvious in the apache error logs to explain it.
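
For reference, one generic way to find out which process already holds the
port in question:

    # list the listeners bound to ports 80 and 5000, with owning PIDs
    sudo netstat -tlnp | grep -E ':(80|5000) '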

-Brian


> 2014-07-16 11:08:53.282 | + sudo sed '/^Listen/s/^.*$/Listen 0.0.0.0:80/' -i
> /etc/httpd/conf/httpd.conf
> 2014-07-16 11:08:53.295 | + sudo rm -f '/var/log/httpd/horizon_*'
> 2014-07-16 11:08:53.310 | + sudo sh -c 'sed -e "
> 2014-07-16 11:08:53.310 | s,%USER%,stack,g;
> 2014-07-16 11:08:53.310 | s,%GROUP%,stack,g;
> 2014-07-16 11:08:53.310 | s,%HORIZON_DIR%,/opt/stack/horizon,g;
> 2014-07-16 11:08:53.310 | s,%APACHE_NAME%,httpd,g;
> 2014-07-16 11:08:53.310 | s,%DEST%,/opt/stack,g;
> 2014-07-16 11:08:53.310 | s,%HORIZON_REQUIRE%,,g;
> 2014-07-16 11:08:53.310 | " /home/devstack/files/apache-horizon.template
>>/etc/httpd/conf.d/horizon.conf'
> 2014-07-16 11:08:53.321 | + start_horizon
> 2014-07-16 11:08:53.321 | + restart_apache_server
> 2014-07-16 11:08:53.321 | + restart_service httpd
> 2014-07-16 11:08:53.321 | + is_ubuntu
> 2014-07-16 11:08:53.321 | + [[ -z rpm ]]
> 2014-07-16 11:08:53.322 | + '[' rpm = deb ']'
> 2014-07-16 11:08:53.322 | + sudo /sbin/service httpd restart
> 2014-07-16 11:08:53.361 | Stopping httpd:  [FAILED]
> 2014-07-16 11:08:53.532 | Starting httpd: httpd: Could not reliably determine
> the server's fully qualified domain name, using 127.0.0.1 for ServerName
> 2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not 
> bind
> to address [::]:5000
> 2014-07-16 11:08:53.533 | (98)Address already in use: make_sock: could not 
> bind
> to address 0.0.0.0:5000
> 2014-07-16 11:08:53.533 | no listening sockets available, shutting down
> 2014-07-16 11:08:53.533 | Unable to open logs
> 2014-07-16 11:08:53.547 |  [FAILED]
> 2014-07-16 11:08:53.549 | + exit_trap
> 2014-07-16 11:08:53.549 | + local r=1
> 2014-07-16 11:08:53.549 | ++ jobs -p
> 2014-07-16 11:08:53.550 | + jobs=
> 2014-07-16 11:08:53.550 | + [[ -n '' ]]
> 2014-07-16 11:08:53.550 | + exit 1
> [stack@stack devstack]$
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-16 Thread Jakub Libosvar
On 07/16/2014 04:29 PM, Paddu Krishnan (padkrish) wrote:
> Hello,
> A follow-up development question related to this:
> 
> As a part of https://review.openstack.org/#/c/105563/, which introduces
> a new table in the Neutron DB, I was trying to send for review a new file
> in neutron/db/migration/alembic_migrations/versions/ which got generated
> through the "neutron-db-manage" script. This also updated
> neutron/db/migration/alembic_migrations/versions/HEAD, and I was trying
> to send this file for review as well.
> 
> "git review" failed and I saw merge errors in
> neutron/db/migration/alembic_migrations/versions/HEAD.
> 
> Without HEAD modified, jenkins was failing. I am working to fix this and
> saw this e-mail.
> 
> I had to go through all the links in detail in this thread. But,
> meanwhile, the two points mentioned below look related to the
> patch/issues I am facing.
> So, if I add a new table, do I no longer need to run the
> "neutron-db-manage" script to generate the file and modify the HEAD?
> Does (2) below need to be done manually?
Hi Paddu,

the process is the same (create migration script, update HEAD file), but
all migrations should have

migration_for_plugins = ['*']


Because you created a new DB model in a new module, you also need to add

from neutron.plugins.ml2.drivers import type_network_overlay

to the neutron/db/migration/models/head.py module.

I hope it helps.

Kuba
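
A minimal sketch of the workflow described above; the config file paths and
the revision message are illustrative, so adjust them to your setup:

    # generate a new migration script (this also updates the HEAD file)
    neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
        revision -m "add network overlay table" --autogenerate
    # then set migration_for_plugins = ['*'] in the generated script, and
    # import the new model module in neutron/db/migration/models/head.py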

> 
> Thanks,
> Paddu



> 
> From: Anna Kamyshnikova
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Wednesday, July 16, 2014 1:14 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!
> 
> Hello everyone!
> 
> I would like to bring the next two points to everybody's attention:
> 
> 1) As Henry mentioned, if you add a new migration you should make it
> unconditional. Conditional migrations should not be merged from now on.
> 
> 2) If you add some new models, you should ensure that the module containing
> them is imported in /neutron/db/migration/models/head.py.
> 
> The second point is important for testing which I hope will be merged
> soon: https://review.openstack.org/76520.
> 
> Regards,
> Ann
> 
> 
> 
> On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery wrote:
> 
> > On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau wrote:
> > I am happy to announce that the first (zero'th?) item in the Neutron Gap
> > Coverage[1] has merged[2]. The Neutron database now contains all tables
> > for all plugins, and database migrations are no longer conditional on the
> > configuration.
> >
> > In the short term, Neutron developers who write migration scripts need
> > to set
> >   migration_for_plugins = ['*']
> > but we will soon clean up the template for migration scripts so that
> > this will be unnecessary.
> >
> > I would like to say special thanks to Ann Kamyshnikova and Jakub
> > Libosvar for their great work on this solution. Also thanks to
> > Salvatore Orlando and Mark McClain for mentoring this through to the finish.
> >
> > [1] https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
> > [2] https://review.openstack.org/96438
> >
> This is great news! Thanks to everyone who worked on this particular
> gap. We're making progress on the other gaps identified in that plan,
> I'll send an email out once Juno-2 closes with where we're at.
> 
> Thanks,
> Kyle
> 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg


On Wednesday, July 16, 2014, Joe Gordon  wrote:



On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg  
wrote:


On Tuesday, July 15, 2014, Steven Hardy  wrote:
On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> On 07/14/2014 11:47 AM, Steven Hardy wrote:
> >Hi all,
> >
> >I'm probably missing something, but can anyone please tell me when devstack
> >will be moving to keystone v3, and in particular when API auth_token will
> >be configured such that auth_version is v3.0 by default?
> >
> >Some months ago, I posted this patch, which switched auth_version to v3.0
> >for Heat:
> >
> >https://review.openstack.org/#/c/80341/
> >
> >That patch was nack'd because there was apparently some version discovery
> >code coming which would handle it, but AFAICS I still have to manually
> >configure auth_version to v3.0 in the heat.conf for our API to work
> >properly with requests from domains other than the default.
> >
> >The same issue is observed if you try to use non-default-domains via
> >python-heatclient using this soon-to-be-merged patch:
> >
> >https://review.openstack.org/#/c/92728/
> >
> >Can anyone enlighten me here, are we making a global devstack move to the
> >non-deprecated v3 keystone API, or do I need to revive this devstack patch?
> >
> >The issue for Heat is we support notifications from "stack domain users",
> >who are created in a heat-specific domain, thus won't work if the
> >auth_token middleware is configured to use the v2 keystone API.
> >
> >Thanks for any information :)
> >
> >Steve
> There are reviews out there in client land now that should work.  I was
> testing discover just now and it seems to be doing the right thing.  If the
> AUTH_URL is chopped of the V2.0 or V3 the client should be able to handle
> everything from there on forward.

Perhaps I should restate my problem, as I think perhaps we still have
crossed wires:

- Certain configurations of Heat *only* work with v3 tokens, because we
  create users in a non-default domain
- Current devstack still configures versioned endpoints, with v2.0 keystone
- Heat breaks in some circumstances on current devstack because of this.
- Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
  the problem.
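
A minimal sketch of that workaround on a devstack box, assuming the default
heat.conf path and service name:

    # set auth_version under [keystone_authtoken] in heat.conf
    sudo crudini --set /etc/heat/heat.conf keystone_authtoken auth_version v3.0
    sudo service heat-api restart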

So, back in March, client changes were promised to fix this problem, and
now, in July, they still have not - do I revive my patch, or are fixes for
this really imminent this time?

Basically I need the auth_token middleware to accept a v3 token for a user
in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
even if the endpoint is still configured versioned to v2.0.

Sorry to labour the point, but it's frustrating to see this still broken
so long after I proposed a fix and it was rejected.


We just did a test converting over the default to v3 (and falling back to v2 as 
needed, yes fallback will still be needed) yesterday (Dolph posted a couple of 
test patches and they seemed to succeed - yay!!) It looks like it will just 
work. Now there is a big caveat: this default will only change in the keystone
middleware project, and it needs to have a patch or three get through gate 
converting projects over to use it before we accept the code.

Nova has approved the patch to switch over, it is just fighting with Gate. 
Other patches are proposed for other projects and are in various states of 
approval.

I assume you mean switch over to keystone middleware project [0], not switch 
over to keystone v3. Based on [1] my understanding is no changes to nova are 
needed to use the v2 compatible parts of the v3 API, But are changes needed to 
support domains or is this not a problem because the auth middleware uses uuids 
for user_id and project_id, so nova doesn't need to have any concept of 
domains? Are any nova changes needed to support the v3 API?


 
This change simply makes it so the middleware will prefer v3 over v2 if both 
are available for validating UUID tokens and fetching certs. It still falls 
back to v2 as needed. It is transparent to all services (it was blocking on 
Nova and some uniform catalog related issues a while back, but Jamie Lennox 
resolved those, see below for more details).

It does not mean Nova (or anyone else) are magically using features they 
weren't already using. It just means Heat isn't needing to do a bunch of 
conditional stuff to get the V3 information out of the middleware. This change 
is only used in the case that V2 and V3 are available when auth_token 
middleware looks at the auth_url (limited discovery). It is still possible to 
force V2 by setting the 'identity_uri' to the V2.0-specific root (no discovery
performed).
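
As a sketch, the two configurations look like this in a service's
auth_token section (the hostname and port below are placeholders):

  [keystone_authtoken]
  # unversioned root: the middleware discovers v3/v2.0 and prefers v3
  identity_uri = http://keystone.example.com:35357/
  # V2.0-specific root: pins validation to v2.0, no discovery
  # identity_uri = http://keystone.example.com:35357/v2.0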

Switching the default over to v3 in the middleware doesn't test nova + v3 user
tokens, since the tempest nova tests don't generate v3 user tokens (although I
hear there is an experimental job to do this).  So this tests that the
middleware, moved to v3, still works with v2 API user tokens. But what happens
if someone tries to use the non

Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-16 Thread John Garbutt
On 16 July 2014 14:07, Thierry Carrez  wrote:
> Daniel P. Berrange wrote:
>> On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
>>> It seems a pity to archive the comments and reviewer lists along
>>> with losing a place to continue the discussions even if we are not
>>> expecting to see code in Juno.

Agreed we should keep those comments.

>> Agreed, that is sub-optimal to say the least.
>>
>> The spec documents themselves are in a release specific directory
>> though. Any which are to be postponed to Kxxx would need to move
>> into a specs/k directory instead of specs/juno, but we don't
>> know what the k directory needs to be called yet :-(
>
> The poll ends in 18 hours, so that should no longer be a blocker :)

Aww, there goes our lame excuse for punting on making this decision.

> I think we don't really want to abandon those specs and lose
> comments and history... but we want to shelve them in a place where they
> do not interrupt core developers workflow as they concentrate on Juno
> work. It will be difficult to efficiently ignore them if they are filed
> in a next or a kxxx directory, as they would still clutter /most/ Gerrit
> views.

+1

My intention was that once the specific project is open for K specs,
people will restore their original patch set, and move the spec to the
K directory, thus keeping all the history.
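
In gerrit terms that workflow is roughly the following (a sketch; the
change number, spec filename, and "k" directory name are placeholders
until the release is named):

  git review -d <change-number>   # fetch the change locally (after
                                  # restoring it in the gerrit UI)
  git mv specs/juno/my-spec.rst specs/k/my-spec.rst
  git commit --amend
  git review                      # re-upload; comments and history stay
                                  # attached to the change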

For Nova, the open reviews, with a -2, are ones that are on the
potential exception list, and so still might need some reviews. If
they gain an exception, the -2 will be removed. The list of possible
exceptions is currently included in bottom of this etherpad:
https://etherpad.openstack.org/p/nova-juno-spec-priorities

At some point we will open nova-specs for K; right now we are closed
for all spec submissions. We already have more blueprints approved
than we will be able to merge during the rest of Juno.

The idea is that everyone can now focus more on fixing bugs, reviewing
bug fixes, and reviewing the remaining higher-priority features, rather
than reviewing designs for K features. Looking at nova-specs is sinking
a lot of reviewers' time, and it feels best to divert attention.

We could leave the reviews open in gerrit, but we are trying hard to
set expectations around the likelihood of a spec being reviewed and/or
accepted. In the past people have got very frustrated and complained
about not finding out what is happening (or not) with what they have
up for review.

This is all very new, so we are mostly making this up as we go along,
based on what we do with code submissions. Ideas on a better approach
that still meets most of the above goals would be awesome.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] New package dependencies for oslo.messaging's AMQP 1.0 support.

2014-07-16 Thread Ken Giusti
On 07/15/2014 10:58:50 +0200, Flavio Percoco wrote:
>On 07/15/2014 07:16 PM, Doug Hellmann wrote:
>> On Tue, Jul 15, 2014 at 1:03 PM, Ken Giusti  wrote:
>>>
>>> These packages may be obtained via EPEL for Centos/RHEL systems
>>> (qpid-proton-c-devel), and via the Qpid project's PPA [3]
>>> (libqpid-proton2-dev) for Debian/Ubuntu.  They are also available for
>>> Fedora via the default yum repos.  Otherwise, the source can be pulled
>>> directly from the Qpid project and built/installed manually [4].
>>
>> Do you know the timeline for having those added to the Ubuntu cloud
>> archives? I think we try not to add PPAs in devstack, but I'm not sure
>> if that's a hard policy.
>
>IIUC, the package has been accepted in Debian - Ken, correct me if I'm
>wrong. Here's the link to the Debian's mentor page:
>
>http://mentors.debian.net/package/qpid-proton
>

No, it hasn't been accepted yet - it is still pending approval by the
sponsor.  That's one of the reasons the Qpid project has set up its
own PPA.
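
For reference, the package names quoted above translate to roughly the
following install commands (a sketch; this assumes EPEL or the Qpid PPA
has already been enabled on the machine):

  # Fedora, or CentOS/RHEL with EPEL:
  sudo yum install qpid-proton-c-devel

  # Debian/Ubuntu, with the Qpid project's PPA added:
  sudo apt-get install libqpid-proton2-dev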

>>
>>>
>>> I'd like to get the blueprint accepted, but I'll have to address these
>>> new dependencies first.  What is the best way to get these new
>>> packages into CI, devstack, etc?  And will developers be willing to
>>> install the proton development libraries, or can this be done
>>> automagically?
>>
>> To set up integration tests we'll need an option in devstack to set
>> the messaging driver to this new one. That flag should also trigger
>> setting up the dependencies needed. Before you spend time implementing
>> that, though, we should clarify the policy on PPAs.
>
>Agreed. FWIW, the work on devstack is in the works but it's being held
>off while we clarify the policy on PPAs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

2014-07-16 Thread Paddu Krishnan (padkrish)
Hello,
A follow-up development question related to this:

As a part of https://review.openstack.org/#/c/105563/, which introduces a
new table in the Neutron DB, I was trying to send for review a new file in
neutron/db/migration/alembic_migrations/versions/ which was generated with
the "neutron-db-manage" script. This also updated
neutron/db/migration/alembic_migrations/versions/HEAD, and I was trying to
send that file for review as well.

"git review" failed and I saw merge errors in
neutron/db/migration/alembic_migrations/versions/HEAD.

Without HEAD modified, Jenkins was failing. I was working to fix this when
I saw this e-mail.

I still have to go through all the links in this thread in detail, but
meanwhile the two points mentioned below look related to the patch/issues
I am facing. So, if I add a new table, do I no longer need to run the
"neutron-db-manage" script to generate the file and modify HEAD? Does (2)
below need to be done manually?

Thanks,
Paddu

From: Anna Kamyshnikova <akamyshnik...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, July 16, 2014 1:14 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] Gap 0 (database migrations) closed!

Hello everyone!

I would like to bring the next two points to everybody's attention:

1) As Henry mentioned, if you add a new migration you should make it
unconditional. From now on, conditional migrations should not be merged.

2) If you add new models, you should ensure that the module containing them
is imported in neutron/db/migration/models/head.py.

The second point is important for the testing work which I hope will be
merged soon: https://review.openstack.org/76520.
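
As a sketch of point (2), the head.py change is just an import; the
plugin module name below is hypothetical:

  # in neutron/db/migration/models/head.py
  from neutron.plugins.example.db import models  # noqa

And per point (1), together with Henry's note below, a new migration
script would declare itself unconditional:

  migration_for_plugins = ['*']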

Regards,
Ann



On Wed, Jul 16, 2014 at 5:54 AM, Kyle Mestery <mest...@mestery.com> wrote:
On Tue, Jul 15, 2014 at 5:49 PM, Henry Gessau <ges...@cisco.com> wrote:
> I am happy to announce that the first (zero'th?) item in the Neutron Gap
> Coverage[1] has merged[2]. The Neutron database now contains all tables for
> all plugins, and database migrations are no longer conditional on the
> configuration.
>
> In the short term, Neutron developers who write migration scripts need to set
>   migration_for_plugins = ['*']
> but we will soon clean up the template for migration scripts so that this will
> be unnecessary.
>
> I would like to say special thanks to Ann Kamyshnikova and Jakub Libosvar for
> their great work on this solution. Also thanks to Salvatore Orlando and Mark
> McClain for mentoring this through to the finish.
>
> [1]
> https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
> [2] https://review.openstack.org/96438
>
This is great news! Thanks to everyone who worked on this particular
gap. We're making progress on the other gaps identified in that plan,
I'll send an email out once Juno-2 closes with where we're at.

Thanks,
Kyle

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Registration for the mid cycle meetup is now closed

2014-07-16 Thread Eric Windisch
On Tue, Jul 15, 2014 at 11:55 PM, Michael Still  wrote:

> The containers meetup is in a different room with different space
> constraints, so containers focussed people should do whatever Adrian
> is doing for registration.
>

Interesting. In that case, for those who are primarily attending for
containers-specific matters but have already registered for the Nova
mid-cycle, should we recommend they release their registrations to help
clear the wait-list?

Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] fair standards for all hypervisor drivers

2014-07-16 Thread Sean Dague
Recently the main gate updated from Ubuntu 12.04 to 14.04, and in doing
so we started executing the livesnapshot code in the nova libvirt
driver, which fails about 20% of the time in the gate, as we're bringing
computes up and down while doing a snapshot. Dan Berrange did a bunch of
debugging on that and thinks it might be a qemu bug. We disabled these
code paths, so live snapshot has now been ripped out.

In January we also triggered a libvirt bug, and had to carry a private
build of libvirt for 6 weeks in order to let people merge code in OpenStack.

We never were able to switch to libvirt 1.1.1 in the gate using the
Ubuntu Cloud Archive during Icehouse development, because it has a
different set of failures that would have prevented people from merging
code.

Based on these experiences, libvirt version differences seem to be as
substantial as major hypervisor differences. There is a proposal here -
https://review.openstack.org/#/c/103923/ to hold newer versions of
libvirt to the same standard we hold xen, vmware, hyperv, docker,
ironic, etc.

I'm somewhat concerned that the -2 pile-on in this review is a double
standard for libvirt features, especially features exploiting really new
upstream releases. I feel like a lot of the language being used here
about the burden of doing this testing is exactly the same as was
presented by the docker team before their driver was removed, and was
ignored by the Nova team at the time. The same concern was raised by the
freebsd team; it was also ignored, and they were told to go land libvirt
patches instead.

I'm ok with us as a project changing our mind and deciding that the test
bar needs to be taken down a notch or two because it's too burdensome to
contributors and vendors, but if we are doing that, we need to do it for
everyone. A lot of other organizations have put a ton of time and energy
into this, and are carrying the maintenance cost of running these systems
to get results back on a timely basis.

As we seem deadlocked in the review, I think the mailing list is
probably a better place for this.

If we want to reduce the standards for libvirt we should reconsider
what's being asked of 3rd party CI teams, and things like the docker
driver, as well as the A, B, C driver classification. Because clearly
libvirt 1.2.5+ isn't actually class A supported.

Anyway, discussion welcomed. My primary concern right now isn't actually
where we set the bar, but that we set the same bar for everyone.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >