Re: [openstack-dev] addCleanup vs. tearDown

2013-10-08 Thread Nachi Ueno
+1

2013/10/8 Monty Taylor mord...@inaugust.com:
 Hey!

 Got a question on IRC which seemed fair game for a quick mailing list post:

 Q: I see both addCleanup and tearDown in nova's test suite - which one
 should I use for new code?

 A: addCleanup

 All new code should 100% of the time use addCleanup and not tearDown -
 this is because addCleanups are all guaranteed to run, even if one of
 them fails, whereas a failure inside of a tearDown can leave the rest of
 the tearDown un-executed, which can leave stale state lying around.
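
 For example, a minimal sketch (plain unittest here; testtools-based test
 cases behave the same way for this) contrasting the two styles:

 import tempfile
 import unittest


 class TearDownStyle(unittest.TestCase):
     def setUp(self):
         self.res_a = tempfile.TemporaryFile()
         self.res_b = tempfile.TemporaryFile()

     def tearDown(self):
         self.res_a.close()   # if this raises, res_b is never closed
         self.res_b.close()

     def test_something(self):
         self.assertTrue(True)


 class AddCleanupStyle(unittest.TestCase):
     def setUp(self):
         self.res_a = tempfile.TemporaryFile()
         self.addCleanup(self.res_a.close)   # every registered cleanup runs,
         self.res_b = tempfile.TemporaryFile()
         self.addCleanup(self.res_b.close)   # even if an earlier one fails

     def test_something(self):
         self.assertTrue(True)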

 Eventually, as we get to it, tearDown should be 100% erradicated from
 OpenStack. However, we don't really need more patch churn, so I
 recommend only working on it as you happen to be in related code.

 Thanks!
 Monty

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-08 Thread Nachi Ueno
Hi Rudra

Thanks!

Some questions and comments

- name and fq_name
How do we use name and fq_name?
IMO, we should avoid using shortened names.

- src_ports: [80-80],
For API consistency, we should use a similar format to the security groups API:
http://docs.openstack.org/api/openstack-network/2.0/content/POST_createSecGroupRule__security-group-rules_security-groups-ext.html
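
For reference, the request body in that API has roughly this shape (field
names as in the Neutron security group extension; values here are
placeholders), with a port range expressed as explicit min/max fields rather
than an "80-80" string:

security_group_rule = {
    "security_group_rule": {
        "direction": "ingress",
        "protocol": "tcp",
        "port_range_min": 80,
        "port_range_max": 80,
        "ethertype": "IPv4",
        "security_group_id": "SECURITY-GROUP-UUID",  # placeholder
    }
}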

- PolicyRuleCreate
Could you add more examples for the case where the action contains services?

action_list: [simple_action-pass],

This spec is also related to the service framework discussion,
so I would like to know the details and how it differs from the service framework.

It would also be helpful if we could have a full list of examples.

Best
Nachi





2013/10/7 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 I have split the spec for policy, and the VPN wiki served as a good reference
 point. Please review and provide comments:
 https://wiki.openstack.org/wiki/Blueprint-policy-extensions-for-neutron

 Thanks,
 Rudra

 On Oct 4, 2013, at 4:56 PM, Nachi Ueno na...@ntti3.com wrote:

 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 Inline response

 On 10/4/13 12:54 PM, Nachi Ueno na...@ntti3.com wrote:

 Hi Rudra

 inline responded

 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,

 Thanks for reviewing the BP. Please see inline:

 On 10/4/13 11:30 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Rudra

 Two comment from me

 (1) The IPAM and Network policy extensions look like independent extensions,
 so the IPAM part and the Network policy part should be split into two blueprints.

 [Rudra] I agree that these need to be split into two blueprints. I will
 create another BP.

 Thanks


 (2) The term IPAM is too general. IMO we should use a more specific
 word.
 How about SubnetGroup?

 [Rudra] IPAM holds more information.
- All DHCP attributes for this IPAM subnet
- DNS server configuration
- In future address allocation schemes

 Actually, a Neutron Subnet already has DHCP, DNS, and IP allocation schemes.
 If I understand your proposal correctly, an IPAM is a group of subnets
 which share common parameters.
 Also, you could propose to extend the existing subnet instead.

 [Rudra] A Neutron subnet requires a network, as I understand it. IPAM info
 should not have such a dependency. This is similar to the Amazon VPC model, where
 all IPAM information can be stored even if a network is not created.
 Association to networks can happen at a later time.

 OK, I got it. However, IPAM is still too general a term.
 Do you have any alternatives?

 Best
 Nachi

 Rudra





 (3) Network Policy Resource
 I would like to know more details of this api

 I would like to know resource definition and
 sample API request and response json.

 (This is one example
 https://wiki.openstack.org/wiki/Quantum/VPNaaS )

 Especially, I'm interested in src-addresses, dst-addresses, action-list
 properties.
 Also, how can we express any port in your API?

 [Rudra] Will add the details of the resources and APIs after separating
 the blueprint.

 Thanks!

 Best
 Nachi

 Regards,
 Rudra


 Best
 Nachi


 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,

 The link in the email was incorrect. Please follow the following link:


 https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-for-neutron

 Thanks,
 Rudra

 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

 Hi All,

 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.


 https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron

 All comments are welcome.

 Thanks,
 Rudra
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-08 Thread Joshua Harlow
Sure, basically a way around this is to do migration of the VM's on the
host u are doing maintenance on.

That's one way y! has its ops folks work around this.

Another solution is just don't do local_deletes :-P

It sounds like your 'zombie' state would be useful as a way to solve this
also.

To me though any solution that creates 2 sets of the same resources in
your cloud isn't a good way (which afaik the current local_delete
aims for) as it causes maintenance and operator pain (and needless
problems that a person has to go in and figure out & resolve). I'd rather
have the delete fail, leave the quota of the user alone, and tell the user
the hypervisor the VM is on is currently under maintenance (ideally
the `host-update` resolves this, as long as it's supported on all
hypervisor types). At least that gives a sane operational experience and
doesn't cause support bugs that are hard to resolve.

But maybe this type of action should be more configurable. Allow or
disallow local deletes.
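
A sketch of what such a knob could look like (hypothetical option name, not
an existing nova setting), using oslo.config:

from oslo.config import cfg

opts = [
    cfg.BoolOpt('enable_local_delete',
                default=True,
                help='Allow deleting instances whose compute service is '
                     'down (local delete) instead of failing the request.'),
]

CONF = cfg.CONF
CONF.register_opts(opts)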

On 10/7/13 11:50 PM, Chris Friesen chris.frie...@windriver.com wrote:

On 10/07/2013 05:30 PM, Joshua Harlow wrote:
 A scenario that I've seen:

 Take 'nova-compute' down for software upgrade, API still accessible
since
 you want to provide API uptime (aka not taking the whole cluster
offline).

 User Y deletes VM on that hypervisor where nova-compute is currently
down,
 DB locally deletes, at this point VM 'A' is still active but nova thinks
it's not.

Isn't this sort of thing exactly what nova host-update --maintenance
enable hostname was intended for?  I.e., push all the VMs off that
compute node so you can take down the services without causing problems.

It's kind of a pain that the host-update stuff is implemented at the
hypervisor level though (and isn't available for libvirt), it seems like
it could be implemented at a more generic level.  (And on that note, why
isn't there a host table in the database since we can have multiple
services running on one host and we might want to take them all down?)

Alternately, maybe we need to have a 2-stage delete, where the VM gets
put into a zombie state in the database and the resources can't be
reused until the compute service confirms that the VM has been killed.

Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-08 Thread Mike Wilson
+1 to what Chris suggested. Zombie state that doesn't affect quota, but
doesn't create more problems by trying to reuse resources that aren't
available. That way we can tell the customer that things are deleted, but
we don't need to break our cloud by screwing up future schedule requests.

-Mike


On Tue, Oct 8, 2013 at 11:58 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

 Sure, basically a way around this is to do migration of the VM's on the
 host u are doing maintenance on.

 That's one way y! has its ops folks work around this.

 Another solution is just don't do local_deletes :-P

 It sounds like your 'zombie' state would be useful as a way to solve this
 also.

 To me though any solution that creates 2 sets of the same resources in
 your cloud isn't a good way (which afaik the current local_delete
 aims for) as it causes maintenance and operator pain (and needless
 problems that a person has to go in and figure out & resolve). I'd rather
 have the delete fail, leave the quota of the user alone, and tell the user
 the hypervisor the VM is on is currently under maintenance (ideally
 the `host-update` resolves this, as long as it's supported on all
 hypervisor types). At least that gives a sane operational experience and
 doesn't cause support bugs that are hard to resolve.

 But maybe this type of action should be more configurable. Allow or
 disallow local deletes.

 On 10/7/13 11:50 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 10/07/2013 05:30 PM, Joshua Harlow wrote:
  A scenario that I've seen:
 
  Take 'nova-compute' down for software upgrade, API still accessible
 since
  you want to provide API uptime (aka not taking the whole cluster
 offline).
 
  User Y deletes VM on that hypervisor where nova-compute is currently
 down,
  DB locally deletes, at this point VM 'A' is still active but nova thinks
  it's not.
 
 Isn't this sort of thing exactly what nova host-update --maintenance
 enable hostname was intended for?  I.e., push all the VMs off that
 compute node so you can take down the services without causing problems.
 
 It's kind of a pain that the host-update stuff is implemented at the
 hypervisor level though (and isn't available for libvirt), it seems like
 it could be implemented at a more generic level.  (And on that note, why
 isn't there a host table in the database since we can have multiple
 services running on one host and we might want to take them all down?)
 
 Alternately, maybe we need to have a 2-stage delete, where the VM gets
 put into a zombie state in the database and the resources can't be
 reused until the compute service confirms that the VM has been killed.
 
 Chris
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-08 Thread Rudra Rugge
Hi Nachi,

Please see inline:

On Oct 8, 2013, at 10:42 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Rudra
 
 Thanks!
 
 Some questions and comments
 
 - name and fq_name
 How do we use name and fq_name?
 IMO, we should avoid using shortened names.
 
[Rudra] 'name' matches all the current Neutron models (network, subnet,
etc.).
'fq_name' is a free-form string added for plugins to use in their own context.
The fq_name hierarchy could be different in each plugin.
Example: 
name: test_policy
fq_name: [default-domain:test-project:test_policy]
while a different plugin may use it as
fq_name: [test-project:test_policy]

 - src_ports: [80-80],
 For API consistency, we should use a similar format to the security groups API:
 http://docs.openstack.org/api/openstack-network/2.0/content/POST_createSecGroupRule__security-group-rules_security-groups-ext.html
 
[Rudra] This is a list of start and end ports, for example if the source port
ranges to be allowed are [100-200] and [1000-1200]. Security groups support
only a single range.
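
As a purely hypothetical illustration (field name made up, not from the spec),
src_ports as a list of [start, end] ranges, which a single
port_range_min/port_range_max pair cannot express:

policy_rule = {
    "src_ports": [[100, 200], [1000, 1200]],  # two allowed source port ranges
}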


 - PolicyRuleCreate
 Could you add more examples for the case where the action contains services?
 
 action_list: [simple_action-pass],
[Rudra] Will update the spec with more examples.

 
 This spec is also related to the service framework discussion,
 so I would like to know the details and how it differs from the service framework.
 
[Rudra] Could you please point me to the service framework spec/discussion. 
Thanks.


 It would also be helpful if we could have a full list of examples.
[Rudra] Will add more examples.

Cheers,
Rudra

 
 Best
 Nachi
 
 
 
 
 
 2013/10/7 Rudra Rugge rru...@juniper.net:
 Hi Nachi,
 
 I have split the spec for policy, and the VPN wiki served as a good reference
 point. Please review and provide comments:
 https://wiki.openstack.org/wiki/Blueprint-policy-extensions-for-neutron
 
 Thanks,
 Rudra
 
 On Oct 4, 2013, at 4:56 PM, Nachi Ueno na...@ntti3.com wrote:
 
 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,
 
 Inline response
 
 On 10/4/13 12:54 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Rudra
 
 inline responded
 
 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,
 
 Thanks for reviewing the BP. Please see inline:
 
 On 10/4/13 11:30 AM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Rudra
 
 Two comment from me
 
 (1) The IPAM and Network policy extensions look like independent extensions,
 so the IPAM part and the Network policy part should be split into two blueprints.
 
 [Rudra] I agree that these need to be split into two blueprints. I will
 create another BP.
 
 Thanks
 
 
 (2) The term IPAM is too general. IMO we should use a more specific
 word.
 How about SubnetGroup?
 
 [Rudra] IPAM holds more information.
   - All DHCP attributes for this IPAM subnet
   - DNS server configuration
   - In future address allocation schemes
 
 Actually, a Neutron Subnet already has DHCP, DNS, and IP allocation schemes.
 If I understand your proposal correctly, an IPAM is a group of subnets
 which share common parameters.
 Also, you could propose to extend the existing subnet instead.
 
 [Rudra] A Neutron subnet requires a network, as I understand it. IPAM info
 should not have such a dependency. This is similar to the Amazon VPC model, where
 all IPAM information can be stored even if a network is not created.
 Association to networks can happen at a later time.
 
 OK, I got it. However, IPAM is still too general a term.
 Do you have any alternatives?
 
 Best
 Nachi
 
 Rudra
 
 
 
 
 
 (3) Network Policy Resource
 I would like to know more details of this api
 
 I would like to know resource definition and
 sample API request and response json.
 
 (This is one example
 https://wiki.openstack.org/wiki/Quantum/VPNaaS )
 
 Especially, I'm interested in src-addresses, dst-addresses, action-list
 properties.
 Also, how can we express any port in your API?
 
 [Rudra] Will add the details of the resources and APIs after separating
 the blueprint.
 
 Thanks!
 
 Best
 Nachi
 
 Regards,
 Rudra
 
 
 Best
 Nachi
 
 
 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,
 
 The link in the email was incorrect. Please follow the following link:
 
 
 https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-for-neutron
 
 Thanks,
 Rudra
 
 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:
 
 Hi All,
 
 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.
 
 
 https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron
 
 All comments are welcome.
 
 Thanks,
 Rudra
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Jiří Stránský

Clint and Monty,

thank you for such good responses. I am new to the TripleO team indeed and I
was mostly concerned by the "line in the sand". Your responses shed some
more light on the issue for me and I hope we'll be heading the right way :)


Thanks

Jiri

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Blueprint for Policy and IPAM extensions in Neutron

2013-10-08 Thread Rudra Rugge
Hi All,

Based on initial feedback from Nachi and others I have split the blueprints.
Please review the policy and ipam blueprints.

Policy
Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
Spec: https://wiki.openstack.org/wiki/Blueprint-policy-extensions-for-neutron

IPAM
Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/ipam-extensions-for-neutron
Spec: https://wiki.openstack.org/wiki/Blueprint-ipam-extensions-for-neutron

Thanks,
Rudra

On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:

Hi All,

A blueprint has been registered to add IPAM and Policy
extensions to Neutron. Please review the blueprint and
the attached specification.

https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron

All comments are welcome.

Thanks,
Rudra
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-08 Thread Russell Bryant
On 10/08/2013 02:19 PM, Mike Wilson wrote:
 +1 to what Chris suggested. Zombie state that doesn't affect quota, but
 doesn't create more problems by trying to reuse resources that aren't
 available. That way we can tell the customer that things are deleted,
 but we don't need to break our cloud by screwing up future schedule
 requests.

Right.  We don't want to punish users for an operational error or mistake.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2013-10-08 Thread Justin Shepherd
I'd like to announce my candidacy for the TC.

About Me


I have been involved with OpenStack since the Bexar development cycle, and have 
contributed to most of the programs along the way. I have also been in 
attendance at every summit since the Cactus summit. My focus in OpenStack has 
been on the usability and functionality of the stack (with an operators 
viewpoint). On the installation and configuration side of things, I have been 
heavily involved in the stack forge chef-cookbooks project, as a contributor 
and core reviewer. Over the last development cycle, I have participated in the 
TC acting as a proxy representative for Dolph Mathews.


Platform
===

As OpenStack continues to mature, I believe the TC has a major undertaking 
ahead of it. 

1. Continuing to define the layers of the OpenStack ecosystem is a huge area 
for impact to the OpenStack user community. Clearly laying out the requirements 
and expectations for the programs entering and exiting the incubation phase 
will require a broad view of the project as a whole. This will require a lot of 
interaction between the TC, PTLs, and the UC to make sure that OpenStack 
continues solving problems for the user and operator communities.

2. Pushing performance as a first order concern. The developer community needs 
to have better insight into how a change impacts each component. Right now it 
is unclear how a specific commit to any of the projects impacts the performance 
of either that project or the cloud as a whole.


These are a few points that I think are extremely important to the continued 
success of OpenStack, although not the only ones. I would be honored to serve 
the community as a member of the TC.


Thank you for your consideration,
Justin Shepherd
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2013-10-08 Thread Thierry Carrez
Confirmed.

Justin Shepherd wrote:
 I'd like to announce my candidacy for the TC.
 
 About Me
 
 
 I have been involved with OpenStack since the Bexar development cycle, and 
 have contributed to most of the programs along the way. I have also been in 
 attendance at every summit since the Cactus summit. My focus in OpenStack has 
 been on the usability and functionality of the stack (with an operators 
 viewpoint). On the installation and configuration side of things, I have been 
 heavily involved in the stack forge chef-cookbooks project, as a contributor 
 and core reviewer. Over the last development cycle, I have participated in 
 the TC acting as a proxy representative for Dolph Mathews.
 
 
 Platform
 ===
 
 As OpenStack continues to mature, I believe the TC has a major undertaking 
 ahead of it. 
 
 1. Continuing to define the layers of the OpenStack ecosystem is a huge area 
 for impact to the OpenStack user community. Clearly laying out the 
 requirements and expectations for the programs entering and exiting the 
 incubation phase will require a broad view of the project as a whole. This 
 will require a lot of interaction between the TC, PTLs, and the UC to make 
 sure that OpenStack continues solving problems for the user and operator 
 communities.
 
 2. Pushing performance as a first order concern. The developer community 
 needs to have better insight into how a change impacts each component. Right 
 now it is unclear how a specific commit to any of the projects impacts the 
 performance of either that project or the cloud as a whole.
 
 
 These are a few points that I think are extremely important to the continued 
 success of OpenStack, although not the only ones. I would be honored to serve 
 the community as a member of the TC.
 
 
 Thank you for your consideration,
 Justin Shepherd
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday October 8th at 19:00 UTC

2013-10-08 Thread Elizabeth Krumbach Joseph
On Mon, Oct 7, 2013 at 9:28 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday October 8th, at 19:00 UTC in
 #openstack-meeting

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-08-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-08-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-10-08-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Keystone OS-EP-FILTER discrepancy

2013-10-08 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello,

I am attempting to test the Havana v3  OS-EP-FILTER extension with the latest 
RC1 bits and I get a 404 error response.

The documentation actually shows 2 different URIs for this API:

- GET /OS-EP-FILTER/projects/{project_id}/endpoints and 
http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints

I have tried both OS-EP-FILTER and OS-FILTER with the same result. Does 
anyone have information as to what I am missing?
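
For reference, a minimal sketch of how such a request can be issued (token
and project id are placeholders, not the exact values from this test):

import requests

token = "ADMIN_TOKEN"
project_id = "PROJECT_ID"
url = ("http://identity:35357/v3/OS-EP-FILTER/projects/%s/endpoints"
       % project_id)

resp = requests.get(url, headers={"X-Auth-Token": token})
print(resp.status_code)   # currently returns 404 instead of 200
print(resp.text)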

Regards,

Mark Miller

-

From the online documentation:

List Associations for Project: GET 
/OS-EP-FILTER/projects/{project_id}/endpoints 

Returns all the endpoints that are currently associated with a specific project.

Response:
Status: 200 OK
{
    "endpoints": [
        {
            "id": "--endpoint-id--",
            "interface": "public",
            "url": "http://identity:35357/",
            "region": "north",
            "links": {
                "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
            },
            "service_id": "--service-id--"
        },
        {
            "id": "--endpoint-id--",
            "interface": "internal",
            "region": "south",
            "url": "http://identity:35357/",
            "links": {
                "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
            },
            "service_id": "--service-id--"
        }
    ],
    "links": {
        "self": "http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints",
        "previous": null,
        "next": null
    }
}


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone OS-EP-FILTER discrepancy

2013-10-08 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Here is the response from Fabio:

Mark,
  Please have a look at the configuration.rst in the contrib/endpoint-filter 
folder.
I pasted here for your convenience:

==================================
Enabling Endpoint Filter Extension
==================================

To enable the endpoint filter extension:
1. add the endpoint filter extension catalog driver to the ``[catalog]`` section
   in ``keystone.conf``. example::

[catalog]
driver = keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog
2. add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in
   ``keystone-paste.ini``. example::

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension endpoint_filter_extension 
service_v3
3. create the endpoint filter extension tables if using the provided sql 
backend. example::
./bin/keystone-manage db_sync --extension endpoint_filter
4. optional: change ``return_all_endpoints_if_no_filter`` in the
``[endpoint_filter]`` section
   in ``keystone.conf`` to return an empty catalog if no associations are made.
example::
[endpoint_filter]
return_all_endpoints_if_no_filter = False


Steps 1-3 are mandatory. Once you have done the changes restart the 
keystone-server to apply the changes.

The /v3/auth/tokens?nocatalog is to remove the catalog from the token creation.
It is different from the filtering because it won't return any endpoint in the 
service catalog. The endpoint filter will return only the ones that you have 
associated with a particular project.
Please bear in mind that this works only with scoped token (meaning where you 
pass a project id).






 -Original Message-
 From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
 Sent: Tuesday, October 08, 2013 1:21 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
 Hello,
 
 I am attempting to test the Havana v3  OS-EP-FILTER extension with the
 latest RC1 bits and I get a 404 error response.
 
 The documentation actually shows 2 different URIs for this API:
 
   - GET /OS-EP-FILTER/projects/{project_id}/endpoints and
 http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints
 
 I have tried both OS-EP-FILTER and OS-FILTER with the same result. Does
 anyone have information as to what I am missing?
 
 Regards,
 
 Mark Miller
 
 -
 
 From the online documentation:
 
 List Associations for Project: GET /OS-EP-
 FILTER/projects/{project_id}/endpoints
 
 Returns all the endpoints that are currently associated with a specific 
 project.
 
 Response:
 Status: 200 OK
 {
     "endpoints": [
         {
             "id": "--endpoint-id--",
             "interface": "public",
             "url": "http://identity:35357/",
             "region": "north",
             "links": {
                 "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
             },
             "service_id": "--service-id--"
         },
         {
             "id": "--endpoint-id--",
             "interface": "internal",
             "region": "south",
             "url": "http://identity:35357/",
             "links": {
                 "self": "http://identity:35357/v3/endpoints/--endpoint-id--"
             },
             "service_id": "--service-id--"
         }
     ],
     "links": {
         "self": "http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints",
         "previous": null,
         "next": null
     }
 }
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Team Meeting reminder

2013-10-08 Thread Mark Washenberger
Hi Glance folks,

We will have a team meeting this Thursday October 10th at 14:00 UTC in
#openstack-meeting-alt. All are welcome to attend.

For time information in your timezone, see
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131010T14&ah=1

The agenda can be found at
https://etherpad.openstack.org/glance-team-meeting-agenda. Feel free to
suggest agenda items.

Thanks,
markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Convection] Murano and Convection proposal discussion

2013-10-08 Thread Timur Sufiev
Hello!


I am an engineer from Murano team. Murano is an OpenStack project built
around Windows components deployment. Currently we (Murano team) are
working on improving Murano architecture to fit better in OpenStack
ecosystem and reuse more functionality from the other projects. As part of
that we’ve realized that we need task execution workflow. We see a lot of
value in Convection proposal for what we need as well as OpenStack users in
general.


We have some ideas around possible design approach as well as bits and
pieces that we can potentially reuse from the existing codebase. We would
like to start contributing to Convection and get it moving forward. As the
first step it would be good to start a discussion with you on how to unify
our vision and start the development.


Here are some of the areas that we would like to discuss:

   - Workflow definition proposal. We propose YAML as a task workflow
     definition format. Each node in the workflow has state, dependencies and an
     action associated with it. Action is defined as a generic instruction to
     call some other component using a specific transport (e.g. RabbitMQ).
     Dependency means “in order to execute this task it is required that some
     other tasks be in a certain state”. (A rough sketch follows this list.)

   - Workflow definitions API proposal. We are discussing and prototyping an
     API for uploading workflow definitions, modifying them, adding the sources
     triggering new tasks to be scheduled for execution and so forth. We propose
     to adapt this API to Convection needs and possibly rewrite and extend it.

   - Task dependencies resolution engine proposal. We already have a generic
     engine which processes workflows. It is essentially a parallel
     multithreaded state machine. We propose to use this engine as a basis for
     Convection and extend it by adding external event sources like timers and
     Ceilometer alarms.
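
As a purely hypothetical illustration of the workflow definition idea above
(the proposal uses YAML; the node and field names below are made up), such a
definition could look roughly like this, written as a Python structure for
readability:

workflow = {
    "create_vm": {
        "produces": "vm_created",
        "depends_on": [],
        "action": {"transport": "rabbitmq", "call": "nova.create_instance"},
    },
    "install_service": {
        "produces": "service_installed",
        "depends_on": ["vm_created"],
        "action": {"transport": "rabbitmq", "call": "agent.install"},
    },
}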


-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone OS-EP-FILTER discrepancy

2013-10-08 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Slightly adjusted instructions after testing:

To enable the endpoint filter extension:

1. Add the new ``[endpoint_filter]`` section to ``keystone.conf``.
example:

 [endpoint_filter]
# extension for creating associations between project and endpoints in order to
# provide a tailored catalog for project-scoped token requests.
driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
# return_all_endpoints_if_no_filter = True

optional: change ``return_all_endpoints_if_no_filter`` in the
``[endpoint_filter]`` section

2. Add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in 
``keystone-paste.ini``. 
example:

[filter:endpoint_filter_extension]
paste.filter_factory = keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension endpoint_filter_extension 
service_v3

3. Create the endpoint filter extension tables if using the provided sql 
backend. example::
./bin/keystone-manage db_sync --extension endpoint_filter

4.  Once you have done the changes restart the keystone-server to apply the 
changes.

 -Original Message-
 From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
 Sent: Tuesday, October 08, 2013 1:30 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
 Here is the response from Fabio:
 
 Mark,
   Please have a look at the configuration.rst in the contrib/endpoint-filter
 folder.
 I pasted here for your convenience:
 
 ==
 Enabling Endpoint Filter Extension
 ==To enable the endpoint filter
 extension:
 1. add the endpoint filter extension catalog driver to the ``[catalog]`` 
 section
in ``keystone.conf``. example::
 
 [catalog]
 driver =
 keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog
 2. add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in
``keystone-paste.ini``. example::
 
 [pipeline:api_v3]
 pipeline = access_log sizelimit url_normalize token_auth
 admin_token_auth xml_body json_body ec2_extension s3_extension
 endpoint_filter_extension service_v3 3. create the endpoint filter extension
 tables if using the provided sql backend. example::
 ./bin/keystone-manage db_sync --extension endpoint_filter 4. optional:
 change ``return_all_endpoints_if_no_filter`` the ``[endpoint_filter]`` section
in ``keystone.conf`` to return an empty catalog if no associations are 
 made.
 example::
 [endpoint_filter]
 return_all_endpoints_if_no_filter = False
 
 
 Steps 1-3 are mandatory. Once you have done the changes restart the
 keystone-server to apply the changes.
 
 The /v3/auth/tokens?nocatalog is to remove the catalog from the token
 creation.
 It is different from the filtering because it won't return any endpoint in the
 service catalog. The endpoint filter will return only the ones that you have
 associated with a particular project.
 Please bear in mind that this works only with scoped token (meaning where
 you pass a project id).
 
 
 
 
 
 
  -Original Message-
  From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
  Sent: Tuesday, October 08, 2013 1:21 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
  Hello,
 
  I am attempting to test the Havana v3  OS-EP-FILTER extension with the
  latest RC1 bits and I get a 404 error response.
 
  The documentation actually shows 2 different URIs for this API:
 
  - GET /OS-EP-FILTER/projects/{project_id}/endpoints and
  http://identity:35357/v3/OS-FILTER/projects/{project_id}/endpoints
 
  I have tried both OS-EP-FILTER and OS-FILTER with the same result.
  Does anyone have information as to what I am missing?
 
  Regards,
 
  Mark Miller
 
  -
 
  From the online documentation:
 
  List Associations for Project: GET /OS-EP-
  FILTER/projects/{project_id}/endpoints
 
  Returns all the endpoints that are currently associated with a specific
 project.
 
  Response:
  Status: 200 OK
  {
  endpoints:
  [
  {
  id: --endpoint-id--,
  interface: public,
  url: http://identity:35357/;,
  region: north,
  links: {
  self: http://identity:35357/v3/endpoints/--endpoint-id--;
  },
  service_id: --service-id--
  },
  {
  id: --endpoint-id--,
  interface: internal,
  region: south,
  url: http://identity:35357/;,
  links: {
  self: http://identity:35357/v3/endpoints/--endpoint-id--;
  },
  service_id: --service-id--
  }
  ],
  links: {
  self: http://identity:35357/v3/OS-
  FILTER/projects/{project_id}/endpoints,
  previous: null,
 

[openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-08 Thread Stan Lagun
Hello,

I’m one of the engineer working on Murano project. Recently we started a
discussion about Murano and Heat Software orchestration and I want to
continue this discussion with more technical details.

In our project we do deployment of complex multi-instance Windows services.
Those services usually require specific multi-VM orchestration that is
currently impossible or at least not that easy to achieve with Heat. As you
are currently working on the HOT software orchestration design, we would like
to participate in it and contribute, so
that Heat could address use cases which we believe are very common.

For example here is how deployment of a SQL Server cluster goes:

   1. Allocate Windows VMs for SQL Server cluster
   2. Enable secondary IP address from user input on all SQL Windows instances
   3. Install SQL Server prerequisites on each node
   4. Choose a master node and install Failover Cluster on it
   5. Configure all nodes so that they know which one of them is the master
   6. Install SQL Server on all nodes
   7. Initialize AlwaysOn on all nodes except for the master
   8. Initialize Primary replica
   9. Initialize secondary replicas


All of the steps must take place in appropriate order depending on the
state of other nodes. Some steps require an output from previous steps and
all of them require some input parameters. SQL Server requires an Active
Directory service in order to use Failover mechanism and installation of
Active Directory with primary and secondary controllers is a complex
workflow of its own.

That is why it is necessary to have some central coordination service which
would handle deployment workflow and perform specific actions (create VMs
and other OpenStack resources, do something on that VM) on each stage
according to that workflow. We think that Heat is the best place for such
service.

Our idea is to extend HOT DSL by adding  workflow definition capabilities
as an explicit list of resources, components’ states and actions. States
may depend on each other so that you can reach state X only after you’ve
reached states Y and Z that the X depends on. The goal is from initial
state to reach some final state “Deployed”.

There is such state graph for each of our deployment entities (service,
VMs, other things). There is also an action that must be performed on each
state.
For example, the state graph for the example above would look like this:

[state graph diagram omitted]

The goal is to reach Service_Done state which depends on VM1_Done and
VM2_Done states and so on from initial Service_Start state.
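
As a minimal sketch (not part of the proposal itself, just an illustration of
the dependency idea), resolving such state dependencies in Python could look
like this: a state is reached only once everything it depends on has been
reached, so Service_Done is processed after the VM states.

deps = {
    "VM1_Done": set(),
    "VM2_Done": set(),
    "Service_Done": {"VM1_Done", "VM2_Done"},
}

reached = set()
while len(reached) < len(deps):
    ready = [s for s, d in deps.items() if s not in reached and d <= reached]
    if not ready:
        raise RuntimeError("circular dependency between states")
    for state in ready:
        # the action associated with the state would be executed here
        print("reaching state", state)
        reached.add(state)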

We propose to extend HOT DSL with workflow definition capabilities where
you can describe step by step instruction to install service and properly
handle errors on each step.

We already have an experience in implementation of the DSL, workflow
description and processing mechanism for complex deployments and believe
we’ll all benefit by re-using this experience and existing code, having
properly discussed and agreed on abstraction layers and distribution of
responsibilities between OS components. There is an idea of implementing
part of workflow processing mechanism as a part of Convection proposal,
which would allow other OS projects to benefit by using this.

We would like to discuss if such design could become a part of future Heat
version as well as other possible contributions from Murano team.

Regards,
Stan Lagun

-- 

Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Convection] Murano and Convection proposal discussion

2013-10-08 Thread Timur Sufiev
Hi!

I just forgot to ask about the best way for discussion: what will it be? We
have an IRC channel #murano at FreeNode, so you can find someone from our
team here. The best way probably will be a Google hangout or Webex meeting
for a live discussion and document sharing.


On Wed, Oct 9, 2013 at 12:44 AM, Timur Sufiev tsuf...@mirantis.com wrote:

 Hello!


 I am an engineer from Murano team. Murano is an OpenStack project built
 around Windows components deployment. Currently we (Murano team) are
 working on improving Murano architecture to fit better in OpenStack
 ecosystem and reuse more functionality from the other projects. As part of
 that we’ve realized that we need task execution workflow. We see a lot of
 value in Convection proposal for what we need as well as OpenStack users in
 general.


 We have some ideas around possible design approach as well as bits and
 pieces that we can potentially reuse from the existing codebase. We would
 like to start contributing to Convection and get it moving forward. As the
 first step it would be good to start a discussion with you on how to unify
 our vision and start the development.


 Here are some of the areas that we would like to discuss:

    - Workflow definition proposal. We propose YAML as a task workflow
      definition format. Each node in the workflow has state, dependencies and an
      action associated with it. Action is defined as a generic instruction to
      call some other component using a specific transport (e.g. RabbitMQ).
      Dependency means “in order to execute this task it is required that some
      other tasks be in a certain state”.

    - Workflow definitions API proposal. We are discussing and prototyping
      an API for uploading workflow definitions, modifying them, adding the
      sources triggering new tasks to be scheduled for execution and so forth. We
      propose to adapt this API to Convection needs and possibly rewrite and
      extend it.

    - Task dependencies resolution engine proposal. We already have a
      generic engine which processes workflows. It is essentially a parallel
      multithreaded state machine. We propose to use this engine as a basis for
      Convection and extend it by adding external event sources like timers and
      Ceilometer alarms.


 --
 Timur Sufiev




-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Syed Armani
Hi Folks,

I am also very much curious about this. Earlier this bp had a dependency on
query scheduler, which is now merged. It will be very helpful if anyone can
throw some light on the fate of this bp.

Thanks.

Cheers,
Syed Armani


On Wed, Sep 25, 2013 at 11:46 PM, Chris Friesen chris.frie...@windriver.com
 wrote:

 I'm interested in automatically evacuating instances in the case of a
 failed compute node.  I found the following blueprint that covers exactly
 this case:

 https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

 However, the comments there seem to indicate that the code that
 orchestrates the evacuation shouldn't go into nova (referencing the Havana
 design summit).

 Why wouldn't this type of behaviour belong in nova?  (Is there a summary
 of discussions at the summit?)  Is there a recommended place where this
 sort of thing should go?

 Thanks,
 Chris

 __**_
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Convection] Murano and Convection proposal discussion

2013-10-08 Thread Joshua Harlow
I'd like to counter-propose :)

https://wiki.openstack.org/wiki/TaskFlow

Taskflow is a lower level library that also has an engine, but not only does it 
have the underlying engine, it also has key fundamental components such as 
state persistence, resumption, reverting.
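
For those not familiar with it, a rough sketch of the TaskFlow pattern (purely
illustrative; see the wiki page above for the actual API and details). Each
task defines execute() and optionally revert(), so the engine can persist
progress and roll back completed tasks when a later one fails:

from taskflow import engines, task
from taskflow.patterns import linear_flow


class AllocateVM(task.Task):
    def execute(self):
        print("allocating VM")

    def revert(self, *args, **kwargs):
        print("rolling back VM allocation")


class InstallService(task.Task):
    def execute(self):
        print("installing service")


flow = linear_flow.Flow("deploy-service").add(AllocateVM(), InstallService())
engines.run(flow)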

The workflow definition API, to me, should be independent of the underlying
'engine'.

Can u explain how your engine addresses some of the concepts that I think an
underlying library should have (workflow state persistence, workflow
resumption, task/workflow reverting)?

-Josh

From: Timur Sufiev tsuf...@mirantis.com
Date: Tuesday, October 8, 2013 1:44 PM
To: Joshua Harlow harlo...@yahoo-inc.com
Cc: openstack-dev@lists.openstack.org
Subject: [Convection] Murano and Convection proposal discussion


Hello!


I am an engineer from Murano team. Murano is an OpenStack project built around 
Windows components deployment. Currently we (Murano team) are working on 
improving Murano architecture to fit better in OpenStack ecosystem and reuse 
more functionality from the other projects. As part of that we’ve realized that 
we need task execution workflow. We see a lot of value in Convection proposal 
for what we need as well as OpenStack users in general.


We have some ideas around possible design approach as well as bits and pieces 
that we can potentially reuse from the existing codebase. We would like to 
start contributing to Convection and get it moving forward. As the first step 
it would be good to start a discussion with you on how to unify our vision and 
start the development.


Here are some of the areas that we would like to discuss:

  *   Workflow definition proposal. We propose YAML as a task workflow 
definition format. Each node in the workflow has state, dependencies and an 
action associated with it. Action is defined as a generic instruction to call 
some other component using a specific transport (e.g. RabbitMQ). Dependency 
means “in order to execute this task it is required that some other tasks be in 
a certain state”.

  *   Workflow definitions API proposal. We are discussing and prototyping an 
API for uploading workflow definitions, modifying them, adding the sources 
triggering new tasks to be scheduled for execution and so forth. We propose to 
adapt this API to Convection needs and possible rewrite and extend it.

  *   Task dependencies resolution engine proposal. We already have a generic 
engine which processes workflows. It is essentially a parallel multithreaded 
state machine. We propose to use this engine as a basis for Convection and 
extend it by adding external event sources like timers and Ceilometer alarms.


--
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Alex Glikson
Seems that this can be broken into 3 incremental pieces. First, would be 
great if the ability to schedule a single 'evacuate' would be finally 
merged (
https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance
). Then, it would make sense to have the logic that evacuates an entire 
host (
https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host
). The reasoning behind suggesting that this should not necessarily be in 
Nova is, perhaps, that it *can* be implemented outside Nova using the 
individual 'evacuate' API. Finally, it should be possible to close the loop 
and invoke the evacuation automatically as a result of a failure detection 
(not clear how exactly this would work, though). Hopefully we will have at 
least the first part merged soon (not sure if anyone is actively working 
on a rebase).
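
A rough sketch of what such an out-of-Nova host evacuation could look like,
built on the per-instance evacuate call (client names and arguments follow
python-novaclient of that era and should be treated as an assumption, not a
recipe):

from novaclient.v1_1 import client

nova = client.Client("admin", "ADMIN_PASSWORD", "admin",
                     "http://keystone:5000/v2.0/")  # placeholders

failed_host = "compute-01"   # placeholder for the host being evacuated

servers = nova.servers.list(search_opts={"host": failed_host,
                                         "all_tenants": 1})
for server in servers:
    # assumes shared storage; "compute-02" is a placeholder target host
    nova.servers.evacuate(server, "compute-02", on_shared_storage=True)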

Regards,
Alex




From:   Syed Armani dce3...@gmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   09/10/2013 12:04 AM
Subject:Re: [openstack-dev] [nova] automatically evacuate 
instances on compute failure



Hi Folks,

I am also very much curious about this. Earlier this bp had a dependency 
on query scheduler, which is now merged. It will be very helpful if anyone 
can throw some light on the fate of this bp.

Thanks.

Cheers,
Syed Armani


On Wed, Sep 25, 2013 at 11:46 PM, Chris Friesen 
chris.frie...@windriver.com wrote:
I'm interested in automatically evacuating instances in the case of a 
failed compute node.  I found the following blueprint that covers exactly 
this case:

https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically


However, the comments there seem to indicate that the code that 
orchestrates the evacuation shouldn't go into nova (referencing the Havana 
design summit).

Why wouldn't this type of behaviour belong in nova?  (Is there a summary 
of discussions at the summit?)  Is there a recommended place where this 
sort of thing should go?

Thanks,
Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-08 Thread Angus Salkeld

On 09/10/13 00:53 +0400, Stan Lagun wrote:

Hello,

I’m one of the engineer working on Murano project. Recently we started a
discussion about Murano and Heat Software orchestration and I want to
continue this discussion with more technical details.


I hope you are going to be at summit, as I expect this to an important
session we have there:

Related summit sessions:
http://summit.openstack.org/cfp/details/82
http://summit.openstack.org/cfp/details/121
http://summit.openstack.org/cfp/details/78

Related blueprints:
https://blueprints.launchpad.net/heat/+spec/software-configuration-provider
https://blueprints.launchpad.net/heat/+spec/hot-software-config-deps
https://blueprints.launchpad.net/heat/+spec/hot-software-config
https://blueprints.launchpad.net/heat/+spec/windows-instances

Excuse me if you are well aware of these.

-Angus



In our project we do deployment of complex multi-instance Windows services.
Those services usually require specific multi-VM orchestration that is
currently impossible or at least not that easy to achieve with Heat. As you
are currently doing HOT software orchestration design we would like to
participate in HOT Software orchestration design and contribute into it, so
that the Heat could address use-cases which we believe are very common.

For example here is how deployment of a SQL Server cluster goes:

  1.

  Allocate Windows VMs for SQL Server cluster
  2.

  Enable secondary IP address from user input on all SQL Windows instances
  3.

  Install SQL Server prerequisites on each node
  4.

  Choose a master node and install Failover Cluster on it
  5.

  Configure all nodes so that they know which one of them is the master
  6.

  Install SQL Server on all nodes
  7.

  Initialize AlwaysOn on all nodes except for the master
  8.

  Initialize Primary replica
  9.

  Initialize secondary replicas


All of the steps must take place in appropriate order depending on the
state of other nodes. Some steps require an output from previous steps and
all of them require some input parameters. SQL Server requires an Active
Directory service in order to use Failover mechanism and installation of
Active Directory with primary and secondary controllers is a complex
workflow of its own.

That is why it is necessary to have some central coordination service which
would handle deployment workflow and perform specific actions (create VMs
and other OpenStack resources, do something on that VM) on each stage
according to that workflow. We think that Heat is the best place for such
service.

Our idea is to extend HOT DSL by adding  workflow definition capabilities
as an explicit list of resources, components’ states and actions. States
may depend on each other so that you can reach state X only after you’ve
reached states Y and Z that the X depends on. The goal is from initial
state to reach some final state “Deployed”.

There is such state graph for each of our deployment entities (service,
VMs, other things). There is also an action that must be performed on each
state.
For example states graph from example above would look like this:






The goal is to reach Service_Done state which depends on VM1_Done and
VM2_Done states and so on from initial Service_Start state.

We propose to extend HOT DSL with workflow definition capabilities where
you can describe step by step instruction to install service and properly
handle errors on each step.

We already have an experience in implementation of the DSL, workflow
description and processing mechanism for complex deployments and believe
we’ll all benefit by re-using this experience and existing code, having
properly discussed and agreed on abstraction layers and distribution of
responsibilities between OS components. There is an idea of implementing
part of workflow processing mechanism as a part of Convection proposal,
which would allow other OS projects to benefit by using this.

We would like to discuss if such design could become a part of future Heat
version as well as other possible contributions from Murano team.


I am really happy that you want to get involved and this sounds like it
functionally matches quite well to the blueprints at the top.

-Angus



Regards,
Stan Lagun

--

Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-08 Thread Regnier, Greg J
Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

Before going into more detail on the mechanics, I would like to nail down the use
cases.

Based on input and feedback, here is what I see so far.



Assumptions:



- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)



Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant



Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenant's network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, ...)


-  Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
On 9 October 2013 07:24, Jiří Stránský ji...@redhat.com wrote:
 Clint and Monty,

 thank you for such good responses. I am new in TripleO team indeed and I was
 mostly concerned by the line in the sand. Your responses shed some more
 light on the issue for me and i hope we'll be heading the right way :)

Sorry for getting folk concerned! I'm really glad some folk jumped in
to clarify. Let me offer some more thoughts on top of this..
I was taking some concepts as a given - they are part of the OpenStack
culture - when I wrote my mail about TripleO reviewer status:

* That what we need is a bunch of folk actively engaged in thinking
about the structure, performance and features of the component
projects in TripleO, who *apply* that knowledge to every code review.
And we need to grow that collection of reviewers to keep up with a
growing contributor base.

* That the more reviewers we have, the less burden any one reviewer
has to carry : I'd be very happy if we normalised on everyone in -core
doing just one careful and effective review a day, *if* that's
sufficient to carry the load. I doubt it will be, because developers
can produce way more than one patch a day each, which implies 2*
developer count reviews per day *at minimum*, and even if every ATC
was a -core reviewer, we'd still need two reviews per -core per day.

* How much knowledge is needed to be a -core? And how many reviews?
There isn't a magic number of reviews IMO: we need 'lots' of reviews
and 'over a substantial period of time' : it's very hard to review
effectively in a new project, but after 3 months, if someone has been
regularly reviewing they will have had lots of mentoring taking place,
and we (-core membership is voted on by -core members) are likely to
be reasonably happy that they will do a good job.

* And finally that the job of -core is to sacrifice their own
productivity in exchange for team productivity: while there are
limits to this - reviewer fatigue, personal/company goals, etc etc, at
the heart of it it's a volunteer role which is crucial for keeping
velocity up: every time a patch lingers without feedback the developer
writing it is stalled, which is a waste (in the Lean sense).



So with those 'givens' in place, I was trying to just report in that
context.. the metric of reviews being done is a *lower bound* - it is
necessary, but not sufficient, to be -core. Dropping below it for an
extended period of time - and I've set a pretty arbitrary initial
value of approximately one per day - is a solid sign that the person
is not keeping up with evolution of the code base.

Being -core means being on top of the evolution of the program and the
state of the code, and being a regular, effective, reviewer is the one
sure fire way to do that. I'm certainly open to folk who want to focus
on just the CLI doing so, but that isn't enough to keep up to date on
the overall structure/needs - the client is part of the overall
story! So the big thing for me is - if someone no longer has time to
offer doing reviews, that's fine, we should recognise that and release
them from the burden of -core: their reviews will still be valued and
thought deeply about, and if they contribute more time for a while
then we can ask them to shoulder -core again.

HTH,
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Mahesh K P
I was working on
https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance.
Hadn't looked at it because of FF, will restore the patch soon.

Thanks
Mahesh
Developer | ThoughtWorks | +1 (210) 716 1767


On Tue, Oct 8, 2013 at 5:20 PM, Alex Glikson glik...@il.ibm.com wrote:

 Seems that this can be broken into 3 incremental pieces. First, would be
 great if the ability to schedule a single 'evacuate' would be finally
 merged (https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance).
 Then, it would make sense to have the logic that evacuates an entire host (
 https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host).
 The reasoning behind suggesting that this should not necessarily be in Nova
 is, perhaps, that it *can* be implemented outside Nova using the individual
 'evacuate' API. Finally, it should be possible to close the loop and invoke
 the evacuation automatically as a result of a failure detection (not clear
 how exactly this would work, though). Hopefully we will have at least the
 first part merged soon (not sure if anyone is actively working on a rebase).

 Regards,
 Alex




 From: Syed Armani dce3...@gmail.com
 To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org,
 Date: 09/10/2013 12:04 AM
 Subject: Re: [openstack-dev] [nova] automatically evacuate
 instances on compute failure
 --



 Hi Folks,

 I am also very much curious about this. Earlier this bp had a dependency
 on query scheduler, which is now merged. It will be very helpful if anyone
 can throw some light on the fate of this bp.

 Thanks.

 Cheers,
 Syed Armani


 On Wed, Sep 25, 2013 at 11:46 PM, Chris Friesen chris.frie...@windriver.com wrote:
 I'm interested in automatically evacuating instances in the case of a
 failed compute node.  I found the following blueprint that covers exactly
 this case:
 https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

 However, the comments there seem to indicate that the code that
 orchestrates the evacuation shouldn't go into nova (referencing the Havana
 design summit).

 Why wouldn't this type of behaviour belong in nova?  (Is there a summary
 of discussions at the summit?)  Is there a recommended place where this
 sort of thing should go?

 Thanks,
 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-08 Thread Georgy Okrokvertskhov
Hi Angus,

We will have representatives from our team. Alex Tivelkov and I will be at the
summit. We will definitely participate in design sessions for these hot
topics.

Before the summit we will work in etherpads to add necessary technical
information to have a solid background for discussions. We already checked
BPs and we think we can add more details to them.

As I see it, not all BPs have etherpad links published on Launchpad. Should we
create them and attach them to the BPs' whiteboards?

Do you have any internal design sessions before the summit? It would be nice if
we could participate in them too.

Thanks
Gosha






On Tue, Oct 8, 2013 at 2:46 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 09/10/13 00:53 +0400, Stan Lagun wrote:

 Hello,

 I’m one of the engineer working on Murano project. Recently we started a
 discussion about Murano and Heat Software orchestration and I want to
 continue this discussion with more technical details.


 I hope you are going to be at the summit, as I expect this to be an important
 session we have there:

 Related summit sessions:
 http://summit.openstack.org/cfp/details/82
 http://summit.openstack.org/cfp/details/121
 http://summit.openstack.org/cfp/details/78

 Related blueprints:
 https://blueprints.launchpad.net/heat/+spec/software-configuration-provider
 https://blueprints.launchpad.net/heat/+spec/hot-software-config-deps
 https://blueprints.launchpad.net/heat/+spec/hot-software-config
 https://blueprints.launchpad.net/heat/+spec/windows-instances

 Excuse me if you are well aware of these.

 -Angus


 In our project we do deployment of complex multi-instance Windows
 services.
 Those services usually require specific multi-VM orchestration that is
 currently impossible or at least not that easy to achieve with Heat. As
 you
 are currently doing HOT software orchestration design we would like to
 participate in HOT Software orchestration design and contribute into it,
 so
 that the Heat could address use-cases which we believe are very common.

 For example here is how deployment of a SQL Server cluster goes:

    1. Allocate Windows VMs for SQL Server cluster
    2. Enable secondary IP address from user input on all SQL Windows instances
    3. Install SQL Server prerequisites on each node
    4. Choose a master node and install Failover Cluster on it
    5. Configure all nodes so that they know which one of them is the master
    6. Install SQL Server on all nodes
    7. Initialize AlwaysOn on all nodes except for the master
    8. Initialize Primary replica
    9. Initialize secondary replicas


 All of the steps must take place in appropriate order depending on the
 state of other nodes. Some steps require an output from previous steps and
 all of them require some input parameters. SQL Server requires an Active
 Directory service in order to use Failover mechanism and installation of
 Active Directory with primary and secondary controllers is a complex
 workflow of its own.

 That is why it is necessary to have some central coordination service
 which
 would handle deployment workflow and perform specific actions (create VMs
 and other OpenStack resources, do something on that VM) on each stage
 according to that workflow. We think that Heat is the best place for such
 service.

 Our idea is to extend HOT DSL by adding  workflow definition capabilities
 as an explicit list of resources, components’ states and actions. States
 may depend on each other so that you can reach state X only after you’ve
 reached states Y and Z that the X depends on. The goal is from initial
 state to reach some final state “Deployed”.

 There is such state graph for each of our deployment entities (service,
 VMs, other things). There is also an action that must be performed on each
 state.
 For example states graph from example above would look like this:






 The goal is to reach Service_Done state which depends on VM1_Done and
 VM2_Done states and so on from initial Service_Start state.

 We propose to extend HOT DSL with workflow definition capabilities where
 you can describe step by step instruction to install service and properly
 handle errors on each step.

 We already have an experience in implementation of the DSL, workflow
 description and processing mechanism for complex deployments and believe
 we’ll all benefit by re-using this experience and existing code, having
 properly discussed and agreed on abstraction layers and distribution of
 responsibilities between OS components. There is an idea of implementing
 part of workflow processing 

Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Chris Friesen

On 10/08/2013 03:20 PM, Alex Glikson wrote:

Seems that this can be broken into 3 incremental pieces. First, would be
great if the ability to schedule a single 'evacuate' would be finally
merged
(_https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance_).


Agreed.


Then, it would make sense to have the logic that evacuates an entire
host
(_https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host_).
The reasoning behind suggesting that this should not necessarily be in
Nova is, perhaps, that it *can* be implemented outside Nova using the
indvidual 'evacuate' API.


This actually more-or-less exists already in the existing nova 
host-evacuate command.  One major issue with this however is that it 
requires the caller to specify whether all the instances are on shared 
or local storage, and so it can't handle a mix of local and shared 
storage for the instances.   If any of them boot off block storage, for 
instance, you need to move them first and then do the remaining ones as a 
group.


It would be nice to embed the knowledge of whether or not an instance is 
on shared storage in the instance itself at creation time.  I envision 
specifying this in the config file for the compute manager along with 
the instance storage location, and the compute manager could set the 
field in the instance at creation time.
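
To make that concrete, here is the rough shape of the wrapper I have in mind, 
as an untested sketch using python-novaclient; the host names are placeholders, 
the boot-from-volume check is only a heuristic, and the exact evacuate() 
signature may differ between novaclient releases:

    # Untested sketch: rebuild everything from a dead compute host, moving the
    # boot-from-volume instances first and the rest as a group.  FAILED_HOST and
    # TARGET_HOST are placeholders.
    import os

    from novaclient.v1_1 import client

    nova = client.Client(os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'],
                         os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'])

    FAILED_HOST = 'compute-3'   # the dead hypervisor (placeholder)
    TARGET_HOST = None          # let the scheduler pick, where supported

    servers = nova.servers.list(search_opts={'host': FAILED_HOST,
                                             'all_tenants': 1})

    # Heuristic: boot-from-volume instances report an empty image reference.
    bfv = [s for s in servers if not getattr(s, 'image', None)]
    rest = [s for s in servers if s not in bfv]

    for server in bfv:
        # Root disk lives on a volume, so the shared-storage question is moot here.
        nova.servers.evacuate(server, TARGET_HOST, on_shared_storage=False)

    for server in rest:
        # Today the caller has to know how the instance store is backed; this is
        # exactly the per-instance knowledge it would be nice to record at boot.
        nova.servers.evacuate(server, TARGET_HOST, on_shared_storage=True)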



Finally, it should be possible to close the
loop and invoke the evacuation automatically as a result of a failure
detection (not clear how exactly this would work, though). Hopefully we
will have at least the first part merged soon (not sure if anyone is
actively working on a rebase).


My interpretation of the discussion so far is that the nova maintainers 
would prefer this to be driven by an outside orchestration daemon.


Currently the only way a service is recognized to be down is if 
someone calls is_up() and it notices that the service hasn't sent an 
update in the last minute.  There's nothing in nova actively scanning 
for compute node failures, which is where the outside daemon comes in.


Also, there is some complexity involved in dealing with auto-evacuate: 
What do you do if an evacuate fails?  How do you recover intelligently 
if there is no admin involved?


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-08 Thread Rudra Rugge
Hi Greg,

Is there any discussion so far on the scaling of VMs, as in launching multiple VMs
for the same service? It would also have an impact on the VIF scheme.

How can we plug these services into different networks - is that still being 
worked
on?

Thanks,
Rudra

On Oct 8, 2013, at 2:48 PM, Regnier, Greg J 
greg.j.regn...@intel.com wrote:

Hi,

Re: blueprint:  
https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
Before going into more detail on the mechanics, would like to nail down use 
cases.
Based on input and feedback, here is what I see so far.

Assumptions:


- a 'Service VM' hosts one or more 'Service Instances'

- each Service Instance has one or more Data Ports that plug into Neutron 
networks

- each Service Instance has a Service Management i/f for Service management 
(e.g. FW rules)

- each Service Instance has a VM Management i/f for VM management (e.g. health 
monitor)


Use case 1: Private Service VM

Owned by tenant

VM hosts one or more service instances

Ports of each service instance only plug into network(s) owned by tenant


Use case 2: Shared Service VM

Owned by admin/operator

VM hosts multiple service instances

The ports of each service instance plug into one tenant's network(s)

Service instance provides isolation from other service instances within VM



Use case 3: Multi-Service VM

Either Private or Shared Service VM

Support multiple service types (e.g. FW, LB, …)


-  Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy

2013-10-08 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Sorry to send this out again, but I wrote too soon. I was missing one driver 
entry in keystone.conf. Here are my working settings:

File keystone.conf:

[catalog]
# dynamic, sql-based backend (supports API/CLI-based management commands)
#driver = keystone.catalog.backends.sql.Catalog
driver = 
keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog

# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog

template_file = default_catalog.templates

[endpoint_filter]
# extension for creating associations between project and endpoints in order to
# provide a tailored catalog for project-scoped token requests.
driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
return_all_endpoints_if_no_filter = False


File keystone-paste.ini:

[filter:endpoint_filter_extension]
paste.filter_factory = 
keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory

and

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension oauth1_extension 
endpoint_filter_extension service_v3



Updated Installation instructions:

To enable the endpoint filter extension:

1. Add the new filter driver to the [catalog] section of keystone.conf.

Example:
[catalog]
driver = 
keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog

2. Add the new [endpoint_filter] section  to ``keystone.conf``.

Example:

 [endpoint_filter]
# extension for creating associations between project and endpoints in order
# to provide a tailored catalog for project-scoped token requests.
driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
# return_all_endpoints_if_no_filter = True

Optional: uncomment and set ``return_all_endpoints_if_no_filter``.

3. Add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in 
``keystone-paste.ini``.

Example:

[filter:endpoint_filter_extension]
paste.filter_factory = 
keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth
xml_body json_body ec2_extension s3_extension endpoint_filter_extension 
service_v3

4. Create the endpoint filter extension tables if using the provided sql
backend.

Example::
./bin/keystone-manage db_sync --extension endpoint_filter

5. Once you have made the changes, restart the keystone server to apply them.
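
For completeness, one way to exercise the extension once it is enabled is to 
create a project/endpoint association and then list the project's endpoints. 
The sketch below is untested; the OS-EP-FILTER paths are as I read them in the 
extension docs, and the token and IDs are placeholders.

    # Untested sketch: create a project <-> endpoint association via OS-EP-FILTER
    # and list the endpoints now associated with the project.  ADMIN_TOKEN,
    # PROJECT_ID and ENDPOINT_ID are placeholders.
    import json

    import requests

    KEYSTONE = 'http://localhost:35357/v3'
    ADMIN_TOKEN = 'ADMIN'
    PROJECT_ID = 'PROJECT_ID'
    ENDPOINT_ID = 'ENDPOINT_ID'
    headers = {'X-Auth-Token': ADMIN_TOKEN, 'Content-Type': 'application/json'}

    # Create the association.
    resp = requests.put('%s/OS-EP-FILTER/projects/%s/endpoints/%s'
                        % (KEYSTONE, PROJECT_ID, ENDPOINT_ID), headers=headers)
    print(resp.status_code)   # expect 204 on success

    # List the endpoints associated with the project.
    resp = requests.get('%s/OS-EP-FILTER/projects/%s/endpoints'
                        % (KEYSTONE, PROJECT_ID), headers=headers)
    print(json.dumps(resp.json(), indent=2))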

 -Original Message-
 From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
 Sent: Tuesday, October 08, 2013 1:51 PM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
 Slightly adjusted instructions after testing:
 
 To enable the endpoint filter extension:
 
 1. Add the new [endpoint_filter] section to ``keystone.conf``.
 example:
 
  [endpoint_filter]
 # extension for creating associations between project and endpoints in order
 # to provide a tailored catalog for project-scoped token requests.
 driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
 # return_all_endpoints_if_no_filter = True
 
 optional: change ``return_all_endpoints_if_no_filter`` in the
 ``[endpoint_filter]`` section
 
 2. Add the ``endpoint_filter_extension`` filter to the ``api_v3`` pipeline in
 ``keystone-paste.ini``.
 example:
 
 [filter:endpoint_filter_extension]
 paste.filter_factory =
 keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory
 
 [pipeline:api_v3]
 pipeline = access_log sizelimit url_normalize token_auth admin_token_auth
 xml_body json_body ec2_extension s3_extension
 endpoint_filter_extension service_v3
 
 3. Create the endpoint filter extension tables if using the provided sql
 backend. example::
 ./bin/keystone-manage db_sync --extension endpoint_filter
 
 4.  Once you have done the changes restart the keystone-server to apply the
 changes.
 
  -Original Message-
  From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
  Sent: Tuesday, October 08, 2013 1:30 PM
  To: OpenStack Development Mailing List
  Subject: Re: [openstack-dev] Keystone OS-EP-FILTER descrepancy
 
  Here is the response from Fabio:
 
  Mark,
Please have a look at the configuration.rst in the
  contrib/endpoint-filter folder.
  I pasted here for your convenience:
 
  ==
  Enabling Endpoint Filter Extension
  ==
  To enable the endpoint filter extension:
  1. add the endpoint filter extension catalog driver to the ``[catalog]``
 section
 in ``keystone.conf``. example::
 
  [catalog]
  driver = keystone.contrib.endpoint_filter.backends.catalog_sql.EndpointFilterCatalog
  2. add the ``endpoint_filter_extension`` filter to the
  ``api_v3`` pipeline in
 ``keystone-paste.ini``. example::
 
  [pipeline:api_v3]
  pipeline = access_log sizelimit url_normalize token_auth
  

Re: [openstack-dev] [Neutron] Service VM discussion - Use Cases

2013-10-08 Thread Harshad Nakil
Hello Greg,

Blueprint you have put together is very much in line what we have done in
openContrail virtual services implementation.

One thing that we have done is that a service instance is a single type of
service provided by a virtual appliance,
e.g. a firewall or load-balancer, etc.
A service instance itself can be made up of one or more virtual machines. This
will usually be the case when you need to scale out services for performance
reasons.

Another thing that we have done is introduce the concept of a service
template. A service template describes how the service can be deployed. The image
specified in the template can also be a snapshot of a VM with a cookie-cutter
configuration.

Service templates can be created by admins. Service instances are created by
tenants (if allowed) using a service template.

So a single firewall instance from a vendor can be packaged as a transparent
L2 firewall in one template and as an in-network L3 firewall in another template.

Regards
-Harshad



On Tue, Oct 8, 2013 at 2:48 PM, Regnier, Greg J greg.j.regn...@intel.com wrote:

  Hi,


 Re: blueprint:
 https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms

 Before going into more detail on the mechanics, would like to nail down
  use cases.

  Based on input and feedback, here is what I see so far.

 Assumptions:


 - a 'Service VM' hosts one or more 'Service Instances'

 - each Service Instance has one or more Data Ports that plug into Neutron
 networks

 - each Service Instance has a Service Management i/f for Service
 management (e.g. FW rules)

 - each Service Instance has a VM Management i/f for VM management (e.g.
 health monitor)


  Use case 1: Private Service VM

 Owned by tenant

 VM hosts one or more service instances

  Ports of each service instance only plug into network(s) owned by tenant


 Use case 2: Shared Service VM

 Owned by admin/operator

 VM hosts multiple service instances

  The ports of each service instance plug into one tenant's network(s)

 Service instance provides isolation from other service instances within VM


 Use case 3: Multi-Service VM

 Either Private or Shared Service VM

 Support multiple service types (e.g. FW, LB, …)


  -  Greg

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-08 Thread Yathiraj Udupi (yudupi)
Hi Sylvain,

Thanks for your comments.  I can see that Climate is aiming to provide a 
reservation service for physical and now also virtual resources, as you 
mention.

The Instance-group [a][b] effort (proposed during the last summit, with good 
progress made so far) attempts to address the tenant-facing API 
aspects of the bigger Smart Resource Placement puzzle [c].
The idea is to be able to represent an entire topology (a group of resources) 
that is requested by the tenant, that contains members or sub-groups , their 
connections,  their associated policies and other metadata.

The first part is to be able to persist this group, and use the group to 
create/schedule the resources together as a whole group, so that intelligent 
decisions can be made together considering all the requirements and constraints 
(policies).

In the ongoing discussions in the Nova scheduler sub-team, we do agree that we 
need additional support to achieve the creation of the group as a whole.   
Achieving this will involve reservation too.

Once the Instance group is registered and persisted,  we can trigger the 
creation/boot up of the instances, which will involve arriving at the resource 
placement decisions and then the actual creation.  So one of the ideas is to 
provide clear APIs so that an external component (such as climate, heat, or some 
other module) can take the placement decision results and do the actual 
creation of resources.

As described in [c], we will also need the support of a global state repository 
to make all the resource states from across services available to smart 
placement decision engine.

As part of the plan for [c],  the first step is to tackle the representation 
and API for these InstanceGroups, and that is the ongoing effort within the 
Nova Scheduler sub-team.

Our idea is to separate the phases of this grand-scale scheduling of resources 
and keep the interfaces clean.  If we have to interface with Climate for the 
final creation (i.e., once the smart placement decisions have been made), we 
should be able to do that; at least that is the vision.
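
To make the intended split of phases a bit more concrete, here is a purely 
illustrative client-side sketch; the instance-group URL, payload and 
scheduler-hint keys below are hypothetical, since the API extension is still 
being defined.

    # Purely illustrative; the instance-group resource, its payload and the
    # scheduler-hint keys are hypothetical placeholders for the API being designed.
    import json

    import requests

    NOVA = 'http://nova-api:8774/v2/TENANT_ID'
    headers = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # Phase 1: register and persist the group (members, policies, metadata).
    group = {'instance_group': {'name': 'web-tier',
                                'policies': ['anti-affinity'],
                                'members': ['web-0', 'web-1']}}
    resp = requests.post(NOVA + '/os-instance-groups', headers=headers,
                         data=json.dumps(group))
    group_id = resp.json()['instance_group']['id']

    # Phase 2: trigger creation of the members, pointing the scheduler at the
    # persisted group so placement can consider the whole topology at once.
    for name in ('web-0', 'web-1'):
        server = {'server': {'name': name, 'imageRef': 'IMAGE_ID',
                             'flavorRef': '2'},
                  'os:scheduler_hints': {'group': group_id, 'member': name}}
        requests.post(NOVA + '/servers', headers=headers, data=json.dumps(server))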


References
[a] Instance Group Model and API extension doc -
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing
[b] Instance group blueprint - 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
[c] Smart Resource Placement  
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit

Thanks,
Yathi.





From: Sylvain Bauza sylvain.ba...@bull.net
Date: Tuesday, October 8, 2013 12:40 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc: Yathiraj Udupi yud...@cisco.com
Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - 
Updated Instance Group Model and API extension model - WIP Draft

Hi Yathi,

On 08/10/2013 05:10, Yathiraj Udupi (yudupi) wrote:
Hi,

Based on the discussions we have had in the past few scheduler sub-team 
meetings,  I am sharing a document that proposes an updated Instance Group 
Model and API extension model.
This is a work-in-progress draft version, but sharing it for early feedback.
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing

This model support generic instance types, where an instance can represent a 
virtual node of any resource type.  But in the context of Nova, an instance 
refers to the VM instance.

This builds on the existing proposal for Instance Group Extension as documented 
here in this blueprint:  
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

Thanks,
Yathi.



Well, I actually read the design document, and I'm strongly interested in 
jumping into the project.
A few months ago we started a Stackforge project called Climate [0], aiming to 
reserve both physical and virtual resources. Initially, the project came from a 
blueprint targeting only physical reservations [1], and then Mirantis folks 
joined us with a new use case for virtual reservations (potentially 
implementing deferred starts, as said above).

Basically, the physical host reservation is not about deferred starts of 
instances, it's about grouping for a single tenant a list of hosts, in other 
words a whole host allocation (see [2]).

We'll provide end-users with a Reservation API that allows defining policies for 
selecting hosts based on their capabilities [3] and then creating host aggregates 
(or Pclouds if we implement [2]). Actually, we could define some policies in 
the Climate host aggregate for affinity and network-proximity, so that 
any VM booted on one of these hosts would have these host aggregate 
policies applied.

As you maybe see, there are some concerns which are close in between your BP 
[4] and our 

Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-08 Thread Yathiraj Udupi (yudupi)
Mike, Sylvain,

I think some of Mike's questions were answered during today's scheduler team 
meeting.

Mike:
 Thanks.  I have a few questions.  First, I am a bit stymied by the style of 
 API documentation used in that document and many others: it shows the first 
 line of an HTTP request but says nothing about all the other details.  I am 
 sure some of those requests must have interesting bodies, but I am   not 
 always sure which ones have a body at all, let alone what goes in it.  I 
 suspect there may be some headers that are important too.  Am I missing 
 something?
Yathi: Do see some of the existing code written up for instance group here in 
this review; there are a few request/response examples of the older model - 
https://review.openstack.org/#/c/30028/  This code will be revived, and the 
effort will be similar, incorporating the newer model.

Mike:
That draft says the VMs are created before the group.  Is there a way today 
to create a VM without scheduling it?  Is there a way to activate a resource 
 that has already been scheduled but not activated? By activate I mean, for a 
VM instance for example, to start running it.
Sylvain:
 I can't answer for it, but as far as I can understand the draft, there is no 
 clear understanding that we have to postpone the VM boot *after* creating 
 the Groups.
As stated in the document, there is a strong prereq which is that all the 
resources mapped to the Group must have their own uuids, but there is no 
clear outstanding that it should prevent the VMs to actually boot.
At the moment, deferring a bootable state in Nova is not yet implemented and 
that's part of Climate folks to implement it, so I can't get your point.
Yathi: I guess Gary Kotton can comment more here, but this is probably an 
implementation detail.  As long as the group is defined and registered, the 
actual activation can take care of assigning the created UUIDs for the 
instances.  But I do see there are internal DB APIs to save the instances and 
thereby create UUIDs, without actually activating them.   My documentation 
assumes that the instances need to be registered.  The reason why we want to 
defer the instance boot is because we want to get the complete big picture and 
do a unified scheduling taking all the parameters into consideration (read this 
as smart resource placement).

This document does not yet mention anything about the actual process involved 
in the activation of the group.  That will involve a whole lot of work in terms 
of reservation, ordering of creation, etc., and it is here where we need to have 
clean interfaces to plug in external support to accomplish this.


Thanks,
Yathi.








From: Sylvain Bauza sylvain.ba...@bull.net
Date: Tuesday, October 8, 2013 12:08 AM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc: Mike Spreitzer mspre...@us.ibm.com, Yathiraj Udupi yud...@cisco.com
Subject: Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - 
Updated Instance Group Model and API extension model - WIP Draft

Hi Mike,

On 08/10/2013 08:51, Mike Spreitzer wrote:
On second thought, I should revise and extend my remarks.  This message 
supersedes my previous two replies.

Thanks.  I have a few questions.  First, I am a bit stymied by the style of API 
documentation used in that document and many others: it shows the first line of 
an HTTP request but says nothing about all the other details.  I am sure some 
of those requests must have interesting bodies, but I am not always sure which 
ones have a body at all, let alone what goes in it.  I suspect there may be 
some headers that are important too.  Am I missing something?

That draft says the VMs are created before the group.  Is there a way today to 
create a VM without scheduling it?  Is there a way to activate a resource that 
has already been scheduled but not activated? By activate I mean, for a VM 
instance for example, to start running it.

As I understand your draft, it lays out a three phase process for a client to 
follow: create resources without scheduling or activating them, then present 
the groups and policies to the service for joint scheduling, then activate the 
resources.  With regard to a given resource, things must happen in that order; 
between resources there is a little more flexibility.  Activations are invoked 
by the client in an order that is consistent with (a) runtime dependencies that 
are mediated directly by the client (e.g., string slinging in the heat engine) 
and (b) the nature of the resources (for example, you  can not attach a volume 
to a VM instance until after both have been created).  Other than those 
considerations, the ordering and/or parallelism is a degree of freedom 
available to the client.  Have I got this right?

Couldn't we simplify this into a two phase process: create groups and resources 
with 

Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-08 Thread Clint Byrum
Excerpts from Stan Lagun's message of 2013-10-08 13:53:45 -0700:
 Hello,
 
 I’m one of the engineer working on Murano project. Recently we started a
 discussion about Murano and Heat Software orchestration and I want to
 continue this discussion with more technical details.
 
 In our project we do deployment of complex multi-instance Windows services.
 Those services usually require specific multi-VM orchestration that is
 currently impossible or at least not that easy to achieve with Heat. As you
 are currently doing HOT software orchestration design we would like to
 participate in HOT Software orchestration design and contribute into it, so
 that the Heat could address use-cases which we believe are very common.
 
 For example here is how deployment of a SQL Server cluster goes:
 
    1. Allocate Windows VMs for SQL Server cluster

Heat does this quite well. :)

 
    2. Enable secondary IP address from user input on all SQL Windows instances

I don't understand what that means.

 
    3. Install SQL Server prerequisites on each node
 

Automating software installation is a very common aspect of most
configuration tools.

    4. Choose a master node and install Failover Cluster on it
 

Leader election should be built in to services that take advantage of
it. External leader election is really hard without all of the
primitives built into the service. Otherwise you have to resort to
things like STONITH to avoid split brain. If Heat tries to do this, it
will just result in a complicated design with very few happy users.

    5. Configure all nodes so that they know which one of them is the master
 

Anyway, Heat can do this quite well. The template author can just choose
one, and flag it as the master in the resource's Metadata. Point all
others at that one using Ref's to it.

Configuring nodes should be something an automated configuration tool
does well. It is pretty straight forward to have a tool fetch the
Metadata of a resource via the Heat API and feed it to a configuration
tool. Or you can even teach your config tool to read the Metadata.
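
As a rough illustration of that last point, the in-instance side can be as 
small as the following untested sketch; the stack id, resource name, file path 
and config-tool command are placeholders, and the python-heatclient call names 
may differ between releases.

    # Untested sketch: pull a resource's Metadata from Heat and hand it to
    # whatever configuration tool is in use.  Stack id, resource name and the
    # config-tool command are placeholders.
    import json
    import subprocess

    from heatclient import client as heat_client

    heat = heat_client.Client('1', endpoint='http://heat-api:8004/v1/TENANT_ID',
                              token='TOKEN')

    metadata = heat.resources.metadata('STACK_ID', 'sql_master')
    with open('/etc/node-metadata.json', 'w') as out:
        json.dump(metadata, out)

    # Placeholder command for the configuration tool of choice.
    subprocess.check_call(['config-tool', 'apply', '/etc/node-metadata.json'])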

    6. Install SQL Server on all nodes

Same as before - any automated software installation tool should
suffice.

    7. Initialize AlwaysOn on all nodes except for the master

I don't know what that means.

    8. Initialize Primary replica
    9. Initialize secondary replicas
 

I also don't know what that entails, but I would suspect that it is the
kind of thing you do via the SQL server API once everything is done.

For that, you just need to poll a waitcondition and when all of the
nodes have signaled the waitcondition, run this.

 
 All of the steps must take place in appropriate order depending on the
 state of other nodes. Some steps require an output from previous steps and
 all of them require some input parameters. SQL Server requires an Active
 Directory service in order to use Failover mechanism and installation of
 Active Directory with primary and secondary controllers is a complex
 workflow of its own.
 
 That is why it is necessary to have some central coordination service which
 would handle deployment workflow and perform specific actions (create VMs
 and other OpenStack resources, do something on that VM) on each stage
 according to that workflow. We think that Heat is the best place for such
 service.
 

I'm not so sure. Heat is part of the Orchestration program, not workflow.

 Our idea is to extend HOT DSL by adding  workflow definition capabilities
 as an explicit list of resources, components’ states and actions. States
 may depend on each other so that you can reach state X only after you’ve
 reached states Y and Z that the X depends on. The goal is from initial
 state to reach some final state “Deployed”.
 

Orchestration is not workflow, and HOT is an orchestration templating
language, not a workflow language. Extending it would just complect two
very different (though certainly related) tasks.

I think the appropriate thing to do is actually to join up with the
TaskFlow project and consider building it into a workflow service or tools
(it is just a library right now).
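
For anyone who hasn't looked at it, the flavour of TaskFlow is roughly the 
following toy sketch (see the library's docs for the real API; this only shows 
the shape, not error handling or persistence).

    # Toy sketch of TaskFlow's shape: two dependent steps expressed as tasks and
    # run as a linear flow.  A real deployment workflow would add retries,
    # parallel flows and persistence.
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class InstallPrereqs(task.Task):
        def execute(self, node):
            print('installing prerequisites on %s' % node)


    class InstallSqlServer(task.Task):
        def execute(self, node):
            print('installing SQL Server on %s' % node)


    flow = linear_flow.Flow('sql-node-setup')
    flow.add(InstallPrereqs(), InstallSqlServer())
    engines.run(flow, store={'node': 'node-1'})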

 There is such state graph for each of our deployment entities (service,
 VMs, other things). There is also an action that must be performed on each
 state.

Heat does its own translation of the orchestration template into a
workflow right now, but we have already discussed using TaskFlow to
break up the orchestration graph into distributable jobs. As we get more
sophisticated on updates (rolling/canary for instance) we'll need to
be able to reason about the process without having to glue all the
pieces together.

 We propose to extend HOT DSL with workflow definition capabilities where
 you can describe step by step instruction to install service and properly
 handle errors on each step.
 
 We already have an experience in implementation of the DSL, workflow
 description and processing mechanism for complex deployments and 

[openstack-dev] what is the code organization of nova

2013-10-08 Thread Aparna Datt
Hi, I was going through the code of nova on GitHub, but there are no readme
files available regarding the code organization of nova. Can anyone provide me
with a link from where I can begin reading the code? Or can anyone give me
pointers on which files/folders nova begins its processing from?

Regards,

Aparna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Common requirements for services' discussion

2013-10-08 Thread Sumit Naiksatam
Hi All,

We had a VPNaaS meeting yesterday and it was felt that we should have a
separate meeting to discuss the topics common to all services. So, in
preparation for the Icehouse summit, I am proposing an IRC meeting on Oct
14th 22:00 UTC (immediately after the Neutron meeting) to discuss common
aspects related to the FWaaS, LBaaS, and VPNaaS.

We will begin with service insertion and chaining discussion, and I hope we
can collect requirements for other common aspects such as service agents,
service instances, etc. as well.

Etherpad for service insertion and chaining can be found here:
https://etherpad.openstack.org/icehouse-neutron-service-insertion-chaining

Hope you all can join.

Thanks,
~Sumit.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-10-08 Thread Gareth
it seems that we didn't log this channel in here:
http://eavesdrop.openstack.org/meetings/openstack-meeting/2013/


On Tue, Oct 8, 2013 at 6:12 PM, Thierry Carrez thie...@openstack.org wrote:

 Today in the project/release status meeting, we are 9 days before
 release! We'll look in particular into how far away Swift RC1 is, the
 status of already-opened RC2 windows, and review havana-rc-potential
 bugs to evaluate the need to open other ones.

 Feel free to add extra topics to the agenda:
 [1] http://wiki.openstack.org/Meetings/ProjectMeeting

 All Technical Leads for integrated programs should be present (if you
 can't make it, please name a substitute on [1]). Other program leads and
 everyone else is very welcome to attend.

 The meeting will be held at 21:00 UTC on the #openstack-meeting channel
 on Freenode IRC. You can look up how this time translates locally at:
 [2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131008T21

 See you there,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Configuring default page-size on image-list call in nova

2013-10-08 Thread Sumanth Suresh Nagadavalli
Hi All,

Recently we had a bug where page-size parameter was not passed in from nova
while making a call to glance for image list. This was addressed by
https://review.openstack.org/#/c/43262/.

As a result, I also thought that it might make sense to have a default page
size configurable for the image list call on the nova end.

The reason is, if we do not have the page-size parameter set, then the page
size defaults to 20 in the glance client. This is not a configurable value
in the glance client. If there are a huge number of images (say 1000), then nova ends
up making a huge number of calls to glance (50, in this case). So, having a
default page-size configurable on the nova end would help to have more control
over this value.
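
To illustrate where the knob would sit and what it buys, here is a rough 
sketch; the option name and default are hypothetical, and the glanceclient 
usage is as I understand it (it pages through results page_size at a time, so 
1000 images at the default of 20 means roughly 50 calls, versus about 5 with a 
page size of 200).

    # Rough sketch: a hypothetical nova config option controlling the page size
    # passed to glance when listing images.
    from oslo.config import cfg

    import glanceclient

    opts = [cfg.IntOpt('glance_image_list_page_size', default=200,
                       help='Page size used when listing images from glance')]
    CONF = cfg.CONF
    CONF.register_opts(opts)

    glance = glanceclient.Client('1', 'http://glance-api:9292', token='TOKEN')
    images = list(glance.images.list(page_size=CONF.glance_image_list_page_size))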

This is captured in
https://blueprints.launchpad.net/nova/+spec/default-page-size-for-image-list

Can we get some discussions started over this?

Thanks
-- 
Sumanth N S
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Questions and comments

2013-10-08 Thread Mike Spreitzer
Yes, that helps.  Please, guys, do not interpret my questions as 
hostility, I really am just trying to understand.  I think there is some 
overlap between your concerns and mine, and I hope we can work together.

Sticking to the physical reservations for the moment, let me ask for a 
little more explicit details.  In your outline below, late in the game you 
write the actual reservation is performed by the lease manager plugin. 
Is that the point in time when something (the lease manager plugin, in 
fact) decides which hosts will be used to satisfy the reservation?  Or is 
that decided up-front when the reservation is made?  I do not understand 
how the lease manager plugin can make this decision on its own, isn't the 
nova scheduler also deciding how to use hosts?  Why isn't there a problem 
due to two independent allocators making allocations of the same resources 
(the system's hosts)?

Thanks,
Mike

Patrick Petit patrick.pe...@bull.net wrote on 10/07/2013 07:02:36 AM:

 Hi Mike,
 
 There are actually more facets to this. Sorry if it's a little 
 confusing :-( Climate's original blueprint https://
 wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api
 was about physical host reservation only. The typical use case 
 being: I want to reserve x number of hosts that match the 
 capabilities expressed in the reservation request. The lease is 
 populated with reservations which at this point are only capacity 
 descriptors. The reservation becomes active only when the lease 
 starts at a specified time and for a specified duration. The lease 
 manager plugin in charge of the physical reservation has a planning 
 of reservations that allows Climate to grant a lease only if the 
 requested capacity is available at that time. Once the lease becomes
 active, the user can request instances to be created on the reserved
 hosts using a lease handle as a Nova's scheduler hint. That's 
 basically it. We do not assume or enforce how and by whom (Nova, 
 Heat ,...) a resource instantiation is performed. In other words, a 
 host reservation is like a whole host allocation https://
 wiki.openstack.org/wiki/WholeHostAllocation that is reserved ahead 
 of time by a tenant in anticipation of some workloads that is bound 
 to happen in the future. Note that while we are primarily targeting 
 hosts reservations the same service should be offered for storage. 
 Now, Mirantis brought in a slew of new use cases that are targeted 
 toward virtual resource reservation as explained earlier by Dina. 
 While architecturally both reservation schemes (physical vs virtual)
 leverage common components, it is important to understand that they 
 behave differently. For example, Climate exposes an API for the 
 physical resource reservation that the virtual resource reservation 
 doesn't. That's because virtual resources are supposed to be already
 reserved (through some yet to be created Nova, Heat, Cinder,... 
 extensions) when the lease is created. Things work differently for 
 the physical resource reservation in that the actual reservation is 
 performed by the lease manager plugin not before the lease is 
 created but when the lease becomes active (or some time before 
 depending on the provisioning lead time) and released when the lease 
ends.
 HTH clarifying things.
 BR,
 Patrick 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-08 Thread P Balaji-B37839
Any comments on the below from the community using OVS will be helpful.

Regards,
Balaji.P

 -Original Message-
 From: P Balaji-B37839
 Sent: Tuesday, October 08, 2013 2:31 PM
 To: OpenStack Development Mailing List; Addepalli Srini-B22160
 Subject: [openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports
 
 Hi,
 
 Current OVS Agent is creating tunnel with dst_port as the port configured
 in INI file on Compute Node. If all the compute nodes on VXLAN network
 are configured for DEFAULT port it is fine.
 
 When any of the Compute Nodes are configured for CUSTOM udp port as VXLAN
  UDP Port, then how will the tunnel be established with the remote IP?
 
  It is observed that the fan-out RPC message does not have the destination
 port information.
 
 Regards,
 Balaji.P



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Questions and comments

2013-10-08 Thread Mike Spreitzer
Sylvain: please do not interpret my questions as hostility.  I am only 
trying to understand your proposal, but I am still confused.  Can you 
please walk through a scenario involving Climate reservations on virtual 
resources?  I mean from start to finish, outlining which party makes which 
decision when, based on what.  I am trying to understand the relationship 
between the individual resource schedulers (such as nova, cinder) and 
climate --- they both seem to be about allocating the same resources.

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] automatically evacuate instances on compute failure

2013-10-08 Thread Tim Bell
I have proposed the summit design session for Hong Kong 
(http://summit.openstack.org/cfp/details/103) to discuss exactly these sorts of 
points. We have the low level Nova commands but need a service to automate the 
process.

I see two scenarios

- A hardware intervention needs to be scheduled, please rebalance this workload 
elsewhere before it fails completely
- A hypervisor has failed, please recover what you can using shared storage and 
give me a policy on what to do with the other VMs (restart, leave down till 
repair etc.)

Most OpenStack production sites have some sort of script doing this sort of 
thing now. However, each one will be implementing the logic for migration 
differently so there is no agreed best practise approach.
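
For reference, the skeleton of such a script usually looks something like the 
following: an untested sketch with python-novaclient, where the thresholds, 
the fencing step and the evacuation policy are all placeholders for site 
policy, and the exact client call names may vary between releases.

    # Untested sketch of the usual ad-hoc recovery loop: watch nova-compute
    # services and, once one has been down long enough, evacuate its instances.
    import time

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'ADMIN_PASSWORD', 'admin',
                         'http://keystone:5000/v2.0')

    DOWN_CYCLES_BEFORE_ACTION = 3
    down_count = {}


    def evacuate_host(host):
        # Site-specific: fence the node first, then rebuild its instances
        # elsewhere (e.g. with a per-instance evacuate loop, or the existing
        # 'nova host-evacuate' command).
        pass


    while True:
        for svc in nova.services.list(binary='nova-compute'):
            if svc.state == 'down' and svc.status == 'enabled':
                down_count[svc.host] = down_count.get(svc.host, 0) + 1
            else:
                down_count.pop(svc.host, None)
            if down_count.get(svc.host, 0) >= DOWN_CYCLES_BEFORE_ACTION:
                evacuate_host(svc.host)
                down_count.pop(svc.host, None)
        time.sleep(60)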

Tim

 -Original Message-
 From: Chris Friesen [mailto:chris.frie...@windriver.com]
 Sent: 09 October 2013 00:48
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] automatically evacuate instances on 
 compute failure
 
 On 10/08/2013 03:20 PM, Alex Glikson wrote:
  Seems that this can be broken into 3 incremental pieces. First, would
  be great if the ability to schedule a single 'evacuate' would be
  finally merged
  (_https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance_).
 
 Agreed.
 
  Then, it would make sense to have the logic that evacuates an entire
  host
  (_https://blueprints.launchpad.net/python-novaclient/+spec/find-and-evacuate-host_).
  The reasoning behind suggesting that this should not necessarily be in
  Nova is, perhaps, that it *can* be implemented outside Nova using the
  indvidual 'evacuate' API.
 
 This actually more-or-less exists already in the existing nova 
 host-evacuate command.  One major issue with this however is that it
 requires the caller to specify whether all the instances are on shared or 
 local storage, and so it can't handle a mix of local and shared
 storage for the instances.   If any of them boot off block storage for
 instance you need to move them first and then do the remaining ones as a 
 group.
 
 It would be nice to embed the knowledge of whether or not an instance is on 
 shared storage in the instance itself at creation time.  I
 envision specifying this in the config file for the compute manager along 
 with the instance storage location, and the compute manager
 could set the field in the instance at creation time.
 
  Finally, it should be possible to close the loop and invoke the
  evacuation automatically as a result of a failure detection (not clear
  how exactly this would work, though). Hopefully we will have at least
  the first part merged soon (not sure if anyone is actively working on
  a rebase).
 
 My interpretation of the discussion so far is that the nova maintainers would 
 prefer this to be driven by an outside orchestration daemon.
 
 Currently the only way a service is recognized to be down is if someone 
 calls is_up() and it notices that the service hasn't sent an update
 in the last minute.  There's nothing in nova actively scanning for compute 
 node failures, which is where the outside daemon comes in.
 
 Also, there is some complexity involved in dealing with auto-evacuate:
 What do you do if an evacuate fails?  How do you recover intelligently if 
 there is no admin involved?
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-08 Thread Mike Spreitzer
Thanks for the clue about where the request/response bodies are 
documented.  Is there any convenient way to view built documentation for 
Havana right now?

You speak repeatedly of the desire for clean interfaces, and nobody 
could disagree with such words.  I characterize my desire that way too. It 
might help me if you elaborate a little on what clean means to you.  To 
me it is about minimizing the number of interactions between different 
modules/agents and the amount of information in those interactions.  In 
short, it is about making narrow interfaces - a form of simplicity.

To me the most frustrating aspect of this challenge is the need for the 
client to directly mediate the dependencies between resources; this is 
really what is driving us to do ugly things.  As I mentioned before, I am 
coming from a setting that does not have this problem.  So I am thinking 
about two alternatives: (A1) how clean can we make a system in which the 
client continues to directly mediate dependencies between resources, and 
(A2) how easily and cleanly can we make that problem go away.

For A1, we need the client to make a distinct activation call for each 
resource.  You have said that we should start the roadmap without joint 
scheduling; in this case, the scheduling can continue to be done 
independently for each resource and can be bundled with the activation 
call.  That can be the call we know and love today, the one that creates a 
resource, except that it needs to be augmented to also carry some pointer 
that points into the policy data so that the relevant policy data can be 
taken into account when making the scheduling decision.  Ergo, the client 
needs to know this pointer value for each resource.  The simplest approach 
would be to let that pointer be the combination of (p1) a VRT's UUID and 
(p2) the local name for the resource within the VRT.  Other alternatives 
are possible, but require more bookkeeping by the client.

I think that at the first step of the roadmap for A1, the client/service 
interaction for CREATE can be in just two phases.  In the first phase the 
client presents a topology (top-level InstanceGroup in your terminology), 
including resource definitions, to the new API for registration; the 
response is a UUID for that registered top-level group.  In the second 
phase the client creates the resources as is done today, except that 
each creation call is augmented to carry the aforementioned pointer into 
the policy information.  Each resource scheduler (just nova, at first) can 
use that pointer to access the relevant policy information and take it 
into account when scheduling.  The client/service interaction for UPDATE 
would be in the same two phases: first update the policy and resource 
definitions at the new API, then do the individual resource updates in 
dependency order.

I suppose the second step in the roadmap is to have Nova do joint 
scheduling.  The client/service interaction pattern can stay the same. The 
only difference is that Nova makes the scheduling decisions in the first 
phase rather than the second.  But that is not a detail exposed to the 
clients.

Maybe the third step is to generalize beyond nova?

For A2, the first question is how to remove user-level create-time 
dependencies between resources.  We are only concerned with the 
user-level create-time dependencies here because it is only they that 
drive intimate client interactions.  There are also create-time 
dependencies due to the nature of the resource APIs; for example, you can 
not attach a volume to a VM until after both have been created.  But 
handling those kinds of create-time dependencies does not require intimate 
interactions with the client.  I know of two software orchestration 
technologies developed in IBM, and both have the property that there are 
no user-level create-time dependencies between resources; rather, the 
startup code (userdata) that each VM runs handles dependencies (using a 
library for cross-VM communication and synchronization).  This can even be 
done in plain CFN, using wait conditions and handles (albeit somewhat 
clunkily), right?  So I think there are ways to get this nice property 
already.  The next question is how best to exploit it to make cleaner 
APIs.  I think we can have a one-step client/service interaction: the 
client presents a top-level group (including leaf resource definitions) to 
the new service, which registers it and proceeds to 
create/schedule/activate the resources.

Regards,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] behaviour about boot-from-volume (possible bug)

2013-10-08 Thread Ji2-3
Hi everyone.

I am playing with OpenStack with the latest code version.
I want to know your ideas about the case below.
Now, when I boot a VM from a volume while cinder-api is down, OpenStack returns
400.
That seems inconsistent in this case.
When an HTTPClient exception occurs, it should return 500 Internal Server
Error.
In fact, Nova turned the HTTPClient exception into a normal response.

API response
--
 HTTP/1.1 400 Bad Request
 Content-Length: 135
 Content-Type: application/json; charset=UTF-8
 X-Compute-Request-Id: req-969ae498-0273-4b06-8c69-edc1875aa0b7
 Date: Tue, 08 Oct 2013 06:12:22 GMT

* Connection #0 to host localhost left intact
* Closing connection #0
{"badRequest": {"message": "Block Device Mapping is Invalid: failed to get
volume 063495f6-bff8-4b4d-b63e-41fd6305753e.", "code": 400}}
--

nova-api log
--
2013-10-08 15:12:22.867 DEBUG cinderclient.client [-] Connection refused:
HTTPConnectionPool(host='192.168.58.132', port=8776): Max retries exceeded
with url:
/v1/a4a70325190b4163baf4ca9138fb2d5f/volumes/063495f6-bff8-4b4d-b63e-41fd6305753e
(Caused by class 'socket.error': [Errno 111] ECONNREFUSED) from
(pid=4886) _cs_request
/opt/stack/python-cinderclient/cinderclient/client.py:197
--
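
To illustrate the distinction being raised - a sketch only; the helper and how Nova would wire it in are hypothetical, only the library exceptions and webob responses are real:

  import requests
  import webob.exc
  from cinderclient import exceptions as cinder_exc

  def get_volume_for_bdm(cinder, volume_id):
      try:
          return cinder.volumes.get(volume_id)
      except cinder_exc.NotFound:
          # The volume genuinely does not exist: a 400 to the caller is fair.
          raise webob.exc.HTTPBadRequest(
              explanation="Block Device Mapping is Invalid")
      except requests.exceptions.ConnectionError:
          # cinder-api itself is unreachable: the request was fine, so a 5xx
          # (Internal Server Error / Service Unavailable) fits better.
          raise webob.exc.HTTPInternalServerError(
              explanation="Volume service is unavailable")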

Or does anyone know any bug report like this?
Feel free to comment or any feedback.



Best regards,
 Yasunori Jitsukawa
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Tzu-Mainn Chen
 Hi, like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.
 
 Please see Russell's excellent stats:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
 
 For joining and retaining core I look at the 90 day statistics; folk
 who are particularly low in the 30 day stats get a heads up: it's not
 a purely mechanical process :).
 
 As we've just merged review teams with Tuskar devs, we need to allow
 some time for everyone to get up to speed; so for folk who are core as
 a result of the merge will be retained as core, but November I expect
 the stats will have normalised somewhat and that special handling
 won't be needed.
 
 IMO these are the reviewers doing enough over 90 days to meet the
 requirements for core:
 
 |   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
 | clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
 | cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
 |   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |
 
 Who are already core, so thats easy.
 
 If you are core, and not on that list, that may be because you're
 coming from tuskar, which doesn't have 90 days of history, or you need
 to get stuck into some more reviews :).
 
 Now, 30 day history - this is the heads up for folk:
 
 | clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
 | cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
 |   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
 |   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
 |   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
 |   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |
 
 
 I'm using the fairly simple metric of 'average at least one review a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. James and Ghe, good stuff -
 you're well on your way to core. If you're not in that list, please
 treat this as a heads-up that you need to do more reviews to keep on
 top of what's going on, whether so you become core, or you keep it.
 
 In next month's update I'll review whether to remove some folk that
 aren't keeping on top of things, as it won't be a surprise :).
 
 Cheers,
 Rob
 
 
 
 
 
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Hi,

I feel like I should point out that before tuskar merged with tripleo, we had 
some distinction between the team working on the tuskar api and the team 
working on the UI, with each team focusing reviews on its particular expertise. 
 The latter team works quite closely with horizon, to the extent of spending a 
lot of time involved with horizon development and blueprints.  This is done so 
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make sense. . 
. ?  tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a 
certain level of horizon and UI expertise is definitely helpful in reviewing 
the UI patches.

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-08 Thread Sylvain Bauza

Hi Yathi,

Le 08/10/2013 05:10, Yathiraj Udupi (yudupi) a écrit :

Hi,

Based on the discussions we have had in the past few scheduler 
sub-team meetings,  I am sharing a document that proposes an 
updated Instance Group Model and API extension model.
This is a work-in-progress draft version, but sharing it for early 
feedback.
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing 



This model supports generic instance types, where an instance can 
represent a virtual node of any resource type.  But in the context of 
Nova, an instance refers to the VM instance.


This builds on the existing proposal for Instance Group Extension as 
documented here in this blueprint: 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension


Thanks,
Yathi.




Well, I actually read the design document, and I'm strongly interested 
in jumping into the project.
We started a few months ago a Stackforge project, called Climate [0], 
aiming to reserve both physical and virtual resources. Initially, the 
project came from a blueprint targeting only physical reservations [1], 
and then Mirantis folks joined us having a new usecase for virtual 
reservations (potentially implementing deferred starts, as said above).


Basically, the physical host reservation is not about deferred starts of 
instances; it's about grouping a list of hosts for a single tenant, in 
other words a whole-host allocation (see [2]).


We'll provide end users with a Reservation API allowing them to define policies 
for selecting hosts based on their capabilities [3] and then create host 
aggregates (or Pclouds if we implement [2]). Actually, we could define 
some affinity and network-proximity policies in the Climate host 
aggregate, so that any VM booted from one of these 
hosts would have these host aggregate policies applied.


As you may see, some of the concerns in your BP [4] are very close to our 
vision of Climate. What are your thoughts about it?


[0] : https://github.com/stackforge/climate
[1] : 
https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api

[2] : https://wiki.openstack.org/wiki/WholeHostAllocation
[3] : 
https://docs.google.com/document/d/1U36k5wk0sOUyLl-4Cz8tmk8RQFQGWKO9dVhb87ZxPC8/edit#heading=h.ujapi6o0un65
[4] : 
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Tomas Sedovic
Just an fyi, tomas-8c8 on the reviewers list is yours truly. That's 
the name I got assigned when I registered to Gerrit and apparently, it 
can't be changed.


Thanks for the heads-up, will be doing more reviews.

T.

On 07/10/13 21:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
|   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
|   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
|   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
Thanks, will submit a patch to reviewerstats updating it for you.

-Rob

On 8 October 2013 21:14, Tomas Sedovic tsedo...@redhat.com wrote:
 Just an fyi, tomas-8c8 on the reviewers list is yours truly. That's the
 name I got assigned when I registered to Gerrit and apparently, it can't be
 changed.

 Thanks for the heads-up, will be doing more reviews.

 T.


 On 07/10/13 21:03, Robert Collins wrote:

 Hi, like most OpenStack projects we need to keep the core team up to
 date: folk who are not regularly reviewing will lose context over
 time, and new folk who have been reviewing regularly should be trusted
 with -core responsibilities.

 Please see Russell's excellent stats:
 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
 http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

 For joining and retaining core I look at the 90 day statistics; folk
 who are particularly low in the 30 day stats get a heads up: it's not
 a purely mechanical process :).

 As we've just merged review teams with Tuskar devs, we need to allow
 some time for everyone to get up to speed; so for folk who are core as
 a result of the merge will be retained as core, but November I expect
 the stats will have normalised somewhat and that special handling
 won't be needed.

 IMO these are the reviewers doing enough over 90 days to meet the
 requirements for core:

 |   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
 | clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
 | cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
 |   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

 Who are already core, so thats easy.

 If you are core, and not on that list, that may be because you're
 coming from tuskar, which doesn't have 90 days of history, or you need
 to get stuck into some more reviews :).

 Now, 30 day history - this is the heads up for folk:

 | clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
 | cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
 |   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
 |   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
 |   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
 |   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


 I'm using the fairly simple metric of 'average at least one review a
 day' as a proxy for 'sees enough of the code and enough discussion of
 the code to be an effective reviewer'. James and Ghe, good stuff -
 you're well on your way to core. If you're not in that list, please
 treat this as a heads-up that you need to do more reviews to keep on
 top of what's going on, whether so you become core, or you keep it.

 In next month's update I'll review whether to remove some folk that
 aren't keeping on top of things, as it won't be a surprise :).

 Cheers,
 Rob








 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] OVS Agent and VxLan UDP Ports

2013-10-08 Thread P Balaji-B37839
Hi,

The current OVS Agent creates the tunnel with dst_port set to the port configured 
in the INI file on the Compute Node. If all the compute nodes on the VXLAN network 
are configured for the DEFAULT port, this is fine.

When any of the Compute Nodes is configured with a CUSTOM UDP port as the VXLAN UDP 
port, how will the tunnel be established with the remote IP?

It is observed that the fan-out RPC message does not carry the destination port 
information.
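
For illustration, a sketch of what the fan-out payload could carry so that peers can build the tunnel towards a custom port; the field names below are invented, not the actual Neutron RPC schema:

  # Hypothetical tunnel_sync fan-out payload carrying the announcing node's
  # VXLAN UDP port; field names are illustrative only.
  payload = {
      'tunnel_ip': '192.0.2.10',    # endpoint IP of the announcing node
      'tunnel_type': 'vxlan',
      'udp_port': 4790,             # custom VXLAN destination port on that node
  }
  # A receiving agent could then create its OVS VXLAN port with
  # options:dst_port=payload['udp_port'] instead of assuming the INI default.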

Regards,
Balaji.P 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-08 Thread Nikola Đipanov
On 08/10/13 01:01, Russell Bryant wrote:
 On 10/07/2013 06:34 PM, Vishvananda Ishaya wrote:
 There is a configuration option stating what to do with instances that are 
 still in the hypervisor but have been deleted from the database. I think you 
 want:

 running_deleted_instance_action=reap

 You probably also want

 resume_guests_state_on_host_boot=true

 to bring back the instances that were running before the node was powered 
 off. We should definitely consider changing the default of these two values 
 since I think the default values are probably not what most people would 
 want.
 
 Thanks, vish.  Those defaults keep biting ...
 
 We tried changing the default of resume_guests_state_on_host_boot in the
 RDO packages at one point, but ended up turning it back off.  I believe
 we had some problems with nova-compute trying to start instances before
 nova-network was fully initialized, sometimes leaving instances in a
 broken state.  That kind of stuff is all fixable though, and if we go
 ahead and change the defaults early in Icehouse dev, we should have
 plenty of time to deal with any fallout before Icehouse is released.
 

IIRC - the consensus was that resume_guests_state_on_host_boot is not
what cloud users would expect actually. If the instance went down, it is
likely that another one took its place and bringing the old one back up
might cause problems.

I will comment on the review and we can take the discussion there if
more appropriate.

Cheers,

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Martyn Taylor

On 07/10/13 20:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
|   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
|   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
|   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






Whilst I can see that deciding on who is Core is a difficult task, I do 
feel that creating a competitive environment based on no. reviews will 
be detrimental to the project.


I do feel this is going to result in quantity over quality. Personally, 
I'd like to see every commit properly reviewed and tested before getting 
a vote and I don't think these stats are promoting that.


Regards
Martyn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Robert Collins
On 8 October 2013 22:44, Martyn Taylor mtay...@redhat.com wrote:
 On 07/10/13 20:03, Robert Collins wrote:


 Whilst I can see that deciding on who is Core is a difficult task, I do feel
 that creating a competitive environment based on no. reviews will be
 detrimental to the project.

I'm not sure how it's competitive : I'd be delighted if every
contributor was also a -core reviewer: I'm not setting, nor do I think
we need to think about setting (at this point anyhow), a cap on the
number of reviewers.

 I do feel this is going to result in quantity over quality. Personally, I'd
 like to see every commit properly reviewed and tested before getting a vote
 and I don't think these stats are promoting that.

I think thats a valid concern. However Nova has been running a (very
slightly less mechanical) form of this for well over a year, and they
are not drowning in -core reviewers. Yes, reviewing is hard, and folk
should take it seriously.

Do you have an alternative mechanism to propose? The key things for me are:
 - folk who are idling are recognised as such and gc'd around about
the time their growing staleness will become an issue with review
correctness
 - folk who have been putting in consistent reading of code + changes
get given the additional responsibility of -core around about the time
that they will know enough about whats going on to review effectively.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

On 10/08/2013 10:27 AM, Robert Collins wrote:

Perhaps the best thing to do here is to get tuskar-ui to be part of
the horizon program, and utilise its review team?


This is planned, but it won't happen soon.



On 8 October 2013 19:31, Tzu-Mainn Chen tzuma...@redhat.com wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
|   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
|   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
|   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,

I feel like I should point out that before tuskar merged with tripleo, we had 
some distinction between the team working on the tuskar api and the team 
working on the UI, with each team focusing reviews on its particular expertise. 
 The latter team works quite closely with horizon, to the extent of spending a 
lot of time involved with horizon development and blueprints.  This is done so 
that horizon changes can be understood and utilized by tuskar-ui.

For that reason, I feel like a UI core reviewer split here might make sense. . 
. ?  tuskar-ui doesn't require as many updates as tripleo/tuskar api, but a 
certain level of horizon and UI expertise is definitely helpful in reviewing 
the UI patches.

Thanks,
Tzu-Mainn Chen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder: Project release status meeting - 21:00 UTC

2013-10-08 Thread Thierry Carrez
Today in the project/release status meeting, we are 9 days before
release ! We'll look in particular into how far away Swift RC1 is, the
status of already-opened RC2 windows, and review havana-rc-potential
bugs to evaluate the need to open other ones.

Feel free to add extra topics to the agenda:
[1] http://wiki.openstack.org/Meetings/ProjectMeeting

All Technical Leads for integrated programs should be present (if you
can't make it, please name a substitute on [1]). Other program leads and
everyone else is very welcome to attend.

The meeting will be held at 21:00 UTC on the #openstack-meeting channel
on Freenode IRC. You can look up how this time translates locally at:
[2] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131008T21

See you there,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Libvirt] Virtio-Serial support for Nova libvirt driver

2013-10-08 Thread Daniel P. Berrange
On Wed, Oct 02, 2013 at 11:07:23AM -0700, Ravi Chunduru wrote:
 Hi Daniel,
   I will modify the blueprint as per your suggestions. Actually, we can use
 state_path in nova.conf if set or the default location.

This set of config vars:

  - Enable unix channels
  - No of Unix Channels
  - Target name


is really overkill. All you need is a list of target names really.
The 'enable unix channels' option is obviously 'true' if you have
any target names listed. And likewise the number of channels is
just the number of target names listed.

Also all hardware related config properties should have a 'hw_'
prefix on their name eg

   # glance image-update \
 --property hw_channels=name1,name2,name3 \
 f16-x86_64-openstack-sda
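
To make the point concrete, a hypothetical sketch: only the hw_channels property name above is given, and the parsing helper below is invented for illustration - both the enable flag and the channel count fall out of the list itself.

  def parse_hw_channels(image_properties):
      # One comma-separated property carries everything the three separate
      # config vars would have expressed.
      raw = image_properties.get('hw_channels', '')
      names = [n.strip() for n in raw.split(',') if n.strip()]
      return {
          'enabled': bool(names),   # "enable unix channels" is implied
          'count': len(names),      # "no of unix channels" is implied
          'targets': names,         # the only real input needed
      }

  print(parse_hw_channels({'hw_channels': 'name1,name2,name3'}))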

I still don't see clear enough information in the blueprint about
how this is actually going to be used. In particular the interaction
between neutron  nova.

eg you talk about neutron agents, which implies that the admins who
run the OpenStack instance are in charge. But then the image meta
stuff is really end user facing. In the talk of 'appliance vendors'
is unclear who is deploying the stuff provided by the vendors.

I'd like to see the blueprint outline the complete process of how
each part is configured from end-to-end and who is responsible for
each bit. If this is intended to be completely internal to the
admins running the neutron/nova services, then we don't want the
glance image properties to be exposed to end users.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ladislav Smola

Hi,

It seems like not all people agree on what the 'metric' of a 
core reviewer should be.

Also, what justifies us giving a +1 or +2?

Could it be a topic at today's meeting?

Ladislav


On 10/07/2013 09:03 PM, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
|   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
|   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
|   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether so you become core, or you keep it.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob









___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Jaromir Coufal

Hi Robert,

I have a few concerns regarding metrics and the core team. To sum up, I think 
that there need to be more metrics, and core reviewers per particular 
project (not one group). More details follow:


Measures:

* The number of reviews shouldn't be the only indicator - you can get 
into a situation where people sit at the computer and start giving +1s to the 
code - regardless of quality, just to get quantity.
* Delivery of solutions (code, and other stuff) should be counted as 
well. It is not the responsibility of a core member just to review the code 
but also to deliver.
* Also very important is the general activity of the person on IRC, mailing 
lists, etc.


With multiple metrics, we really can be sure that the person is a core 
member of that project. It can be delivering architectural solutions, it 
can be delivering code, it can be reviewing the work or discussing 
problems. But reviews alone are not a very strong metric and we can run 
into problems.



Review Process:
-
* +1... People should give +1 to something that looks good (they might 
not test it, but they indicate that they are fine with it).
* +2... Should be given only if the person tested it and is sure 
that the solution works (meaning running tests, testing functionality, etc.).
* Approved... The same for approvals - they are the final step, when a person is 
saying 'merge it'. There needs to be clear certainty that what I am 
merging works and will not break the app.


Quality of code is very important. It shouldn't come to the state 
where core reviewers start to give +2 to code which merely looks OK. They 
need to be sure that it works and solves the problem, and only core 
people on the particular project can assure this.



Core Reviewers:
-
* Tzu-Mainn pointed out that there are big differences between 
projects. I think that splitting core members based on the projects where 
they contribute makes more sense.
* Example: It doesn't make sense that someone who is a core reviewer 
based on image-builder is able to give +2 on UI or CLI code, and vice versa.
* To me it makes more sense to have separate core members for each 
project than having one big group - then we can assure higher quality of 
the code.
* If there is no way to split the core reviewers across projects and we 
have one big group for the whole of TripleO, then we need to make sure that all 
projects are reflected appropriately.


I think that the example speaks for itself. It is really crucial to 
consider all projects of TripleO and try to assure their quality. That's 
what core members are here for; that's why I see them as experts in a 
particular project.


I believe that we all want TripleO to succeed, so let's find some solutions 
for how to achieve that.


Thanks
-- Jarda



On 2013/07/10 21:03, Robert Collins wrote:

Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so for folk who are core as
a result of the merge will be retained as core, but November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349    8  140    2  199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329    2   54    1  272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248    1   25    1  221   89.5% |   13 (  5.9%)  |
|   derekh **      |  88    0   28   23   37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179    2   27    0  150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179    1   15    0  163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129    3   39    2   85   67.4% |    2 (  2.3%)  |
|   derekh **      |  41    0   11    0   30   73.2% |    0 (  0.0%)  |
|   slagle         |  37    0   11   26    0   70.3% |    3 ( 11.5%)  |
|   ghe.rivero     |  28    0    4   24    0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective 

Re: [openstack-dev] [savanna] using keystone client

2013-10-08 Thread Jon Maron

On Oct 7, 2013, at 10:02 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Mon, Oct 7, 2013 at 5:57 PM, Jon Maron jma...@hortonworks.com wrote:
 Hi,
 
   I'm trying to use the keystone client code in savanna/utils/openstack but 
 my attempt to use it yields:
 
  'Api v2.0 endpoint not found in service identity'
 
 
 This sounds like the service catalog for keystone itself either isn't 
 configured, or isn't configured properly (with /v2.0/ endpoints). What does 
 your `keystone service-list` and `keystone endpoint-list` look like?

they look fine:

[root@cn081 ~(keystone_admin)]# keystone endpoint-list
+--+---+---+---+--+--+
|id|   region  |   publicurl
   |  internalurl  |
 adminurl |service_id|
+--+---+---+---+--+--+
| 1d093399f00246b895ce8507c1b24b7b | RegionOne |
http://172.18.0.81:9292|http://172.18.0.81:9292 
   | http://172.18.0.81:9292  | 
dce5859bb86e4d76a3688d2bf70cad33 |
| 48f8a5bcde0747c08b149f36144a018d | RegionOne |
http://172.18.0.81:8080|http://172.18.0.81:8080 
   | http://172.18.0.81:8080  | 
8e83541c4add45058a83609345f0f7f5 |
| 6223abce129948539d413adb0f392f66 | RegionOne | 
http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s | 
http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s | 
http://172.18.0.81:8080/ | fe922aac92ac4e048fb02346a3176827 |
| 64740640bb824c2493cc456c76d9c4e8 | RegionOne |
http://172.18.0.81:8776/v1/%(tenant_id)s   |
http://172.18.0.81:8776/v1/%(tenant_id)s   | 
http://172.18.0.81:8776/v1/%(tenant_id)s | a13bf1f4319a4b78984cbf80ce4a1879 |
| 8948845ea83940f7a04f2d6ec35da7ab | RegionOne |
http://172.18.0.81:8774/v2/%(tenant_id)s   |
http://172.18.0.81:8774/v2/%(tenant_id)s   | 
http://172.18.0.81:8774/v2/%(tenant_id)s | 4480cd65dc6a4858b5b237cc4c30761e |
| cd1420cfcc59467ba76bfc32f79f9c77 | RegionOne |
http://172.18.0.81:9696/   |http://172.18.0.81:9696/
   | http://172.18.0.81:9696/ | 
399854c740b649a6935d6568d3ffe497 |
| d860fe39b41646be97582de9cef8c91c | RegionOne |  
http://172.18.0.81:5000/v2.0 |  http://172.18.0.81:5000/v2.0
 |  http://172.18.0.81:35357/v2.0   | 
b4b2cc6d2db2493eafe2ccbb649b491e |
| edc75652965a4bd2854c194c213ea395 | RegionOne | 
http://172.18.0.81:8773/services/Cloud| 
http://172.18.0.81:8773/services/Cloud|  
http://172.18.0.81:8773/services/Admin  | dea2442916144ef18cf64d2111f1d906 |
+--+---+---+---+--+--+
[root@cn081 ~(keystone_admin)]# keystone service-list
+--+--+--++
|id|   name   | type |  
description   |
+--+--+--++
| a13bf1f4319a4b78984cbf80ce4a1879 |  cinder  |volume| Cinder 
Service |
| dce5859bb86e4d76a3688d2bf70cad33 |  glance  |image |Openstack 
Image Service |
| b4b2cc6d2db2493eafe2ccbb649b491e | keystone |   identity   |   OpenStack 
Identity Service   |
| 4480cd65dc6a4858b5b237cc4c30761e |   nova   |   compute|   Openstack 
Compute Service|
| dea2442916144ef18cf64d2111f1d906 | nova_ec2 | ec2  |  EC2 
Service   |
| 399854c740b649a6935d6568d3ffe497 | quantum  |   network|   Quantum 
Networking Service   |
| fe922aac92ac4e048fb02346a3176827 |  swift   | object-store | Openstack 
Object-Store Service |
| 8e83541c4add45058a83609345f0f7f5 | swift_s3 |  s3  |  Openstack 
S3 Service  |
+--+--+--++


  
   A code sample:
 
 from savanna.utils.openstack import keystone
 
 . . .
   service_id = next((service.id
                      for service in keystone.client().services.list()
                      if 'quantum' == service.name), None)
 
 I don't really know what the context of this code is, but be aware that it 
 requires admin access to keystone and is not interacting with a 
 representation of the catalog that normal users see.

I'm 

Re: [openstack-dev] [savanna] using keystone client

2013-10-08 Thread Dolph Mathews
On Tue, Oct 8, 2013 at 6:50 AM, Jon Maron jma...@hortonworks.com wrote:


 On Oct 7, 2013, at 10:02 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:


 On Mon, Oct 7, 2013 at 5:57 PM, Jon Maron jma...@hortonworks.com wrote:

 Hi,

   I'm trying to use the keystone client code in savanna/utils/openstack
 but my attempt to sue it yield:

  'Api v2.0 endpoint not found in service identity'


 This sounds like the service catalog for keystone itself either isn't
 configured, or isn't configured properly (with /v2.0/ endpoints). What does
 your `keystone service-list` and `keystone endpoint-list` look like?


 they look fine:


Agree, AFAICT. My next guess would be that the credentials you're using to
connect to keystone do not specify a tenant / project - without one, the
client won't get a service catalog at all.
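
A quick way to check that - a minimal sketch; the credentials and auth URL below are placeholders, but the keystoneclient calls are the standard v2.0 interface:

  from keystoneclient.v2_0 import client as ks_client

  keystone = ks_client.Client(
      username='admin',
      password='secret',
      tenant_name='admin',                      # without this: no catalog
      auth_url='http://172.18.0.81:5000/v2.0')

  # With a scoped token this resolves the identity endpoint; with an
  # unscoped token the catalog is empty and the lookup fails.
  print(keystone.service_catalog.url_for(service_type='identity',
                                         endpoint_type='publicURL'))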



 [root@cn081 ~(keystone_admin)]# keystone endpoint-list

 +--+---+---+---+--+--+
 |id|   region  |
 publicurl   |  internalurl
  | adminurl |service_id
|

 +--+---+---+---+--+--+
 | 1d093399f00246b895ce8507c1b24b7b | RegionOne |
 http://172.18.0.81:9292|http://172.18.0.81:9292
  | http://172.18.0.81:9292  |
 dce5859bb86e4d76a3688d2bf70cad33 |
 | 48f8a5bcde0747c08b149f36144a018d | RegionOne |
 http://172.18.0.81:8080|http://172.18.0.81:8080
  | http://172.18.0.81:8080  |
 8e83541c4add45058a83609345f0f7f5 |
 | 6223abce129948539d413adb0f392f66 | RegionOne |
 http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s |
 http://172.18.0.81:8080/v1/AUTH_%(tenant_id)s |
 http://172.18.0.81:8080/ | fe922aac92ac4e048fb02346a3176827 |
 | 64740640bb824c2493cc456c76d9c4e8 | RegionOne |
 http://172.18.0.81:8776/v1/%(tenant_id)s   |
 http://172.18.0.81:8776/v1/%(tenant_id)s   |
 http://172.18.0.81:8776/v1/%(tenant_id)s |
 a13bf1f4319a4b78984cbf80ce4a1879 |
 | 8948845ea83940f7a04f2d6ec35da7ab | RegionOne |
 http://172.18.0.81:8774/v2/%(tenant_id)s   |
 http://172.18.0.81:8774/v2/%(tenant_id)s   |
 http://172.18.0.81:8774/v2/%(tenant_id)s |
 4480cd65dc6a4858b5b237cc4c30761e |
 | cd1420cfcc59467ba76bfc32f79f9c77 | RegionOne |
 http://172.18.0.81:9696/   |http://172.18.0.81:9696/  
 |
 http://172.18.0.81:9696/ | 399854c740b649a6935d6568d3ffe497 |
 | d860fe39b41646be97582de9cef8c91c | RegionOne |
 http://172.18.0.81:5000/v2.0 |
 http://172.18.0.81:5000/v2.0 |  http://172.18.0.81:35357/v2.0 
  | b4b2cc6d2db2493eafe2ccbb649b491e |
 | edc75652965a4bd2854c194c213ea395 | RegionOne |
 http://172.18.0.81:8773/services/Cloud|
 http://172.18.0.81:8773/services/Cloud|
 http://172.18.0.81:8773/services/Admin  |
 dea2442916144ef18cf64d2111f1d906 |

 +--+---+---+---+--+--+
 [root@cn081 ~(keystone_admin)]# keystone service-list

 +--+--+--++
 |id|   name   | type |
  description   |

 +--+--+--++
 | a13bf1f4319a4b78984cbf80ce4a1879 |  cinder  |volume|
 Cinder Service |
 | dce5859bb86e4d76a3688d2bf70cad33 |  glance  |image |
  Openstack Image Service |
 | b4b2cc6d2db2493eafe2ccbb649b491e | keystone |   identity   |   OpenStack
 Identity Service   |
 | 4480cd65dc6a4858b5b237cc4c30761e |   nova   |   compute|   Openstack
 Compute Service|
 | dea2442916144ef18cf64d2111f1d906 | nova_ec2 | ec2  |
  EC2 Service   |
 | 399854c740b649a6935d6568d3ffe497 | quantum  |   network|   Quantum
 Networking Service   |
 | fe922aac92ac4e048fb02346a3176827 |  swift   | object-store | Openstack
 Object-Store Service |
 | 8e83541c4add45058a83609345f0f7f5 | swift_s3 |  s3  |
  Openstack S3 Service  |

 +--+--+--++




   An code sample:

 from savanna.utils.openstack import keystone

 . . .
   service_id = next((service.id for service in
keystone.client().services.list()
if 'quantum' == service.name), None)


 I don't really know what the context of this code is, but be aware that it
 

[openstack-dev] Scheduling meeting

2013-10-08 Thread Gary Kotton
Hi,
Ideas for an agenda:
1. Following last weeks meeting the following document was drawn up: 
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing
2. Open issues
See you at the end of the hour
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Ben Nemec

On 2013-10-08 05:03, Robert Collins wrote:

On 8 October 2013 22:44, Martyn Taylor mtay...@redhat.com wrote:

On 07/10/13 20:03, Robert Collins wrote:



Whilst I can see that deciding on who is Core is a difficult task, I do feel
that creating a competitive environment based on no. reviews will be
detrimental to the project.


I'm not sure how it's competitive : I'd be delighted if every
contributor was also a -core reviewer: I'm not setting, nor do I think
we need to think about setting (at this point anyhow), a cap on the
number of reviewers.

I do feel this is going to result in quantity over quality. Personally, I'd
like to see every commit properly reviewed and tested before getting a vote
and I don't think these stats are promoting that.


I think thats a valid concern. However Nova has been running a (very
slightly less mechanical) form of this for well over a year, and they
are not drowning in -core reviewers. yes, reviewing is hard, and folk
should take it seriously.

Do you have an alternative mechanism to propose? The key things for me 
are:

 - folk who are idling are recognised as such and gc'd around about
the time their growing staleness will become an issue with review
correctness
 - folk who have been putting in consistent reading of code + changes
get given the additional responsibility of -core around about the time
that they will know enough about whats going on to review effectively.


This is a discussion that has come up in the other projects (not 
surprisingly), and I thought I would mention some of the criteria that 
are being used in those projects.  The first, and simplest, is from 
Dolph Mathews:


'Ultimately, core contributor to me simply means that this person's 
downvotes on code reviews are consistently well thought out and 
meaningful, such that an upvote by the same person shows a lot of 
confidence in the patch.'


I personally like this definition because it requires a certain volume 
of review work (which benefits the project), but it also takes into 
account the quality of those reviews.  Obviously both are important.  
Note that the +/- and disagreements columns in Russell's stats are 
intended to help with determining review quality.  Nothing can replace 
the judgment of the current cores of course, but if someone has been 
+1'ing in 95% of their reviews it's probably a sign that they aren't 
doing quality reviews.  Likewise if they're -1'ing everything but are 
constantly disagreeing with cores.


An expanded version of that can be found in this post to the list: 
http://lists.openstack.org/pipermail/openstack-dev/2013-June/009876.html


To me, that is along the same lines as what Dolph said, just a bit more 
specific as to how quality should be demonstrated and measured.


Hope this is helpful.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Clint Byrum
I don't mean to pick on you personally, Jiří, but I have singled this
message out because I feel you have captured the objections to Robert's
initial email well.

Excerpts from Jiří Stránský's message of 2013-10-08 04:30:29 -0700:
 On 8.10.2013 11:44, Martyn Taylor wrote:
  Whilst I can see that deciding on who is Core is a difficult task, I do
  feel that creating a competitive environment based on no. reviews will
  be detrimental to the project.
 
  I do feel this is going to result in quantity over quality. Personally,
  I'd like to see every commit properly reviewed and tested before getting
  a vote and I don't think these stats are promoting that.
 
 +1. I feel that such a metric favors shallow "I like this code" reviews as 
 opposed to deep "I verified that it actually does what it 
 should" reviews. E.g. I hit one such example just this morning on 
 tuskarclient. If I had just looked at the code as the other reviewer did, 
 we'd have let in code that doesn't do what it should. There's nothing bad about 
 making a mistake, but I wouldn't like to foster an environment of quick, 
 shallow reviews by having such metrics for the core team.
 


I think you may not have worked long enough with Robert Collins to
understand what Robert is doing with the stats. While it may seem that
Robert has simply drawn a line in the sand and is going to sit back and
wait for everyone to cross it before nominating them, nothing could be
further from the truth.

As one gets involved and start -1'ing and +1'ing, one can expect feedback
from all of us as core reviewers. It is part of the responsibility of
being a core reviewer to communicate not just with the submitter of
patches, but also with the other reviewers. If I see shallow +1's from
people consistently, I'm going to reach out to those people and ask them
to elaborate on their reviews, and I'm going to be especially critical
of their -1's.

 I think it's also important who actually *writes* the code, not just who 
 does reviews. I find it odd that none of the people who most contributed 
 to any of the Tuskar projects in the last 3 months would make it onto 
 the core list [1], [2], [3].
 

I think having written a lot of code in a project is indeed a good way
to get familiar with the code. However, it is actually quite valuable
to have reviewers on a project who did not write _any_ of the code,
as their investment in the code itself is not as deep. They will look
at each change with fresh eyes and bring fewer assumptions.

Reviewing is a different skill than coding, and thus I think it is o-k
to measure it differently than coding.

 This might also suggest that we should be looking at contributions to 
 the particular projects, not just the whole program in general. We're 
 such a big program that one's staleness towards some of the components 
 (or being short on global review count) doesn't necessarily mean the 
 person is not an important contributor/reviewer on some of the other 
 projects, and I'd also argue this doesn't affect the quality of his work 
 (e.g. there's no relationship between tuskarclient and say, t-i-e, 
 whatsoever).
 

Indeed, I don't think we would nominate or approve a reviewer if they
just did reviews, and never came in the IRC channel, participated in
mailing list discussions, or tried to write patches. It would be pretty
difficult to hold a dialog in reviews with somebody who is not involved
with the program as a whole.

 So I'd say we should get on with having a greater base of core folks and 
 count on people using their own good judgement on where will they 
 exercise their +/-2 powers (i think it's been working very well so far), 
 or alternatively split tripleo-core into some subteams.
 

If we see the review queue get backed up and response times rising, I
could see a push to grow the core review team early. But we're talking
about a 30 day sustained review contribution. That means for 30 days
you're +1'ing instead of +2'ing, and then maybe another 30 days while we
figure out who wants core powers and hold a vote.

If this is causing anyone stress, we should definitely address that and
make a change. However, I feel the opposite. Knowing what is expected
and being able to track where I sit on some of those expectations is
extremely comforting. Of course, easy to say up here with my +2/-2. ;)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] addCleanup vs. tearDown

2013-10-08 Thread Monty Taylor
Hey!

Got a question on IRC which seemed fair game for a quick mailing list post:

Q: I see both addCleanup and tearDown in nova's test suite - which one
should I use for new code?

A: addCleanup

All new code should 100% of the time use addCleanup and not tearDown -
this is because addCleanups are all guaranteed to run, even if one of
them fails, whereas a failure inside of a tearDown can leave the rest of
the tearDown un-executed, which can leave stale state laying around.
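
(A minimal sketch, not from the original message, showing the behaviour described above - cleanups run in reverse order, and an error in one is recorded without stopping the rest:)

  import unittest

  class ExampleTest(unittest.TestCase):

      def setUp(self):
          super(ExampleTest, self).setUp()
          self.addCleanup(self._always_runs)      # runs even if the next fails
          self.addCleanup(self._might_fail)

      def _might_fail(self):
          raise RuntimeError("cleanup problem")   # reported, does not stop others

      def _always_runs(self):
          pass                                    # still executed

      def test_something(self):
          self.assertTrue(True)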

Eventually, as we get to it, tearDown should be 100% eradicated from
OpenStack. However, we don't really need more patch churn, so I
recommend only working on it as you happen to be in related code.

Thanks!
Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-08 Thread Monty Taylor


On 10/08/2013 11:22 AM, Clint Byrum wrote:
 I don't meant to pick on you personally Jiří, but I have singled this
 message out because I feel you have captured the objections to Robert's
 initial email well.

Darn. My pop-up window showed "I don't mean to pick on you personally"
so I rushed to go read the message, and it turns out to be reasonable
and not ranty.

 Excerpts from Jiří Stránský's message of 2013-10-08 04:30:29 -0700:
 On 8.10.2013 11:44, Martyn Taylor wrote:
 Whilst I can see that deciding on who is Core is a difficult task, I do
 feel that creating a competitive environment based on no. reviews will
 be detrimental to the project.

 I do feel this is going to result in quantity over quality. Personally,
 I'd like to see every commit properly reviewed and tested before getting
 a vote and I don't think these stats are promoting that.

 +1. I feel that such metric favors shallow i like this code-reviews as 
 opposed to deep i verified that it actually does what it 
 should-reviews. E.g. i hit one such example just today morning on 
 tuskarclient. If i just looked at the code as the other reviewer did, 
 we'd let in code that doesn't do what it should. There's nothing bad on 
 making a mistake, but I wouldn't like to foster environment of quick 
 shallow reviews by having such metrics for core team.

 
 
 I think you may not have worked long enough with Robert Collins to
 understand what Robert is doing with the stats. While it may seem that
 Robert has simply drawn a line in the sand and is going to sit back and
 wait for everyone to cross it before nominating them, nothing could be
 further from the truth.
 
 As one gets involved and start -1'ing and +1'ing, one can expect feedback
 from all of us as core reviewers. It is part of the responsibility of
 being a core reviewer to communicate not just with the submitter of
 patches, but also with the other reviewers. If I see shallow +1's from
 people consistently, I'm going to reach out to those people and ask them
 to elaborate on their reviews, and I'm going to be especially critical
 of their -1's.
 
 I think it's also important who actually *writes* the code, not just who 
 does reviews. I find it odd that none of the people who most contributed 
 to any of the Tuskar projects in the last 3 months would make it onto 
 the core list [1], [2], [3].

I believe this is consistent with every other OpenStack project. -core
is not a status badge, nor is it a value judgement on the relative
coding skills. -core is PURELY a reviewing job. The only thing it grants
is more weight to the reviews you write in the future, so it makes
perfect sense that it should be judged on the basis of your review work.

It's a mind-shift to make, because in other projects you get 'committer'
access by writing good code. We don't do that in OpenStack. Here, you
get reviewer access by writing good reviews. (this lets the good coders
code and the good reviewers review)

 I think having written a lot of code in a project is indeed a good way
 to get familiar with the code. However, it is actually quite valuable
 to have reviewers on a project who did not write _any_ of the code,
 as their investment in the code itself is not as deep. They will look
 at each change with fresh eyes and bring fewer assumptions.
 
 Reviewing is a different skill than coding, and thus I think it is o-k
 to measure it differently than coding.
 
 This might also suggest that we should be looking at contributions to 
 the particular projects, not just the whole program in general. We're 
 such a big program that one's staleness towards some of the components 
 (or being short on global review count) doesn't necessarily mean the 
 person is not important contributor/reviewer on some of the other 
 projects, and i'd also argue this doesn't affect the quality of his work 
 (e.g. there's no relationship between tuskarclient and say, t-i-e, 
 whatsoever).

 
 Indeed, I don't think we would nominate or approve a reviewer if they
 just did reviews, and never came in the IRC channel, participated in
 mailing list discussions, or tried to write patches. It would be pretty
 difficult to hold a dialog in reviews with somebody who is not involved
 with the program as a whole.

For projects inside of openstack-infra (where we have like 30 of them or
something) we've added additional core teams that include infra-core but
have space for additional reviewers. jenkins-job-builder is the first
one we did like this, as it has a fantastically active set of devs and
reviewers who solely focus on that. However, that's the only one we've
done that for so far.

 So i'd say we should get on with having a greater base of core folks and 
 count on people using their own good judgement on where will they 
 exercise their +/-2 powers (i think it's been working very well so far), 
 or alternatively split tripleo-core into some subteams.

 
 If we see the review queue get backed up and response times rising, I
 could see a 

[openstack-dev] Hyper-V Meeting Canceled Today

2013-10-08 Thread Peter Pouliot
Hi All,

The hyper-v meeting today will be canceled.   We will reconnect next week at 
the usual time.

p

Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1(857) 453 6436

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-08 Thread Doug Hellmann
On Tue, Oct 8, 2013 at 9:44 AM, Thomas Maddox
thomas.mad...@rackspace.com wrote:

   On 10/7/13 3:49 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




 On Mon, Oct 7, 2013 at 4:23 PM, Thomas Maddox thomas.mad...@rackspace.com
  wrote:

   On 10/7/13 1:55 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Mon, Oct 7, 2013 at 1:44 PM, Thomas Maddox 
 thomas.mad...@rackspace.com wrote:

  On 10/3/13 4:09 PM, Thomas Maddox thomas.mad...@rackspace.com
 wrote:

 On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:
 
 On Thu, Oct 03 2013, Thomas Maddox wrote:
 
  Interesting point, Doug and Julien. I'm thinking out loud, but if we
 wanted
  to use pipeline.yaml, we could have an 'enabled' attribute for each
  pipeline?
 
 That would be an option, for sure. But just removing all of them should
 also work.
 
  I'm curious, does the pipeline dictate whether its resulting
  sample is stored, or if no pipeline is configured, will it just store
  the sample according to the plugins in */notifications.py? I will test
  this out.
 
 If there's no pipeline, there's no sample, so nothing's stored.
 
  For additional context, the intent of the feature is to allow a
  deployer more flexibility. Like, say we wanted to only enable storing
  white-listed event traits and using trigger pipelines (to come) for
  notification-based alerting/monitoring?
 
 This is already supported by the pipeline as you can list the meters
 you want or not.
 
 I poked around a bunch today; yep, you're right - we can just drop
 samples on the floor by negating all meters in pipeline.yaml. I didn't
 have much luck just removing all pipeline definitions or using a blank
 one (it puked, and anything other than negating all samples felt too
 hacky to be viable with trusted behavior).
 
 I had my semantics and understanding of the workflow from the collector
 to the pipeline to the dispatcher all muddled and was set straight
 today. =]
 I will think on this some more.
 
 I was also made aware of some additional Stevedore functionality, like
 NamedExtensionManager, that should allow us to completely enable/disable
 any handlers we don't want to load, and the pipelines, with just config
 changes (thanks, Dragon!).
 
 I really appreciate the time you all take to help us less experienced
 developers learn on a daily basis! =]

  I tried two approaches from this:

 1. Using NamedExtensionManager and passing in an empty list of names, I
 get the same RuntimeError [1].
 2. Using EnabledExtensionManager (my preference since the use case for
 disabling is less common than for enabling) and passing in a blacklist
 check, with which I received the same RuntimeError when an empty list of
 extensions was the result.

 I was thinking that, with the white-list/black-list capability of [Named,
 Enabled]ExtensionManager, it would behave more like an iterator. If the
 manager didn't load any extensions, then it would just no-op on
 operations on said extensions it owns and the application would carry on
 as always.

 Is this something that we could change in Stevedore? I wanted to get your
 thoughts before opening an issue there, in case this was intended
 behavior for some benefit I'm not aware of.
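 
 Roughly, a minimal sketch of what those two approaches look like (the
 namespace and the mapped callable here are illustrative, not the actual
 Ceilometer entry points):
 
     from stevedore import enabled, named
 
     # Approach 1: NamedExtensionManager with an empty white-list.
     mgr = named.NamedExtensionManager(
         namespace='ceilometer.notification',  # hypothetical namespace
         names=[],                             # nothing enabled
         invoke_on_load=True,
     )
 
     # Approach 2: EnabledExtensionManager whose check_func rejects
     # everything (an all-inclusive black list).
     mgr = enabled.EnabledExtensionManager(
         namespace='ceilometer.notification',
         check_func=lambda ext: False,
         invoke_on_load=True,
     )
 
     # Either way no extensions end up loaded, and the first call that
     # actually uses them blows up:
     mgr.map(lambda ext: ext.obj)
     # RuntimeError: No ceilometer.notification extensions found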


  The exception is intended to prevent the app from failing silently if
 it cannot load any plugins for some reason, but stevedore should throw a
 different exception to distinguish the "could not load any plugins" case
 from the "I was told not to use any plugins and then told to do some
 work" case.


   Thanks, Doug!

  I poked around a bit more. This is being raised in the map function:
 https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L135-L137,
 not at load time. I see a separate try/except block for a failure to load,
 it looks like:
 https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L85-L97.
 Is that what you're referring to?


  The exception is raised when the manager is used, because the manager
 might have been created as a module or application global object in a place
 where the traceback wouldn't have been logged properly.


  I don't understand. Why wouldn't it have been logged properly when it
 fails in the _load_plugins(…) method? Due to implementor's code, Stevedore
 code, or some other reason(s) that I'm missing?


If the manager is instantiated before logging is configured, that log call
won't do anything (or might do the wrong thing).
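
For example, something like this, where the module and namespace names are
made up just to sketch the shape of the problem:

    # myapp/plugins.py -- imported long before main() runs
    from stevedore import extension

    # Created at import time. If an entry point fails to load here,
    # stevedore logs the error immediately, but logging hasn't been
    # configured yet, so that message is dropped or misrouted.
    NOTIFICATION_HANDLERS = extension.ExtensionManager(
        'myapp.notifications',   # hypothetical namespace
        invoke_on_load=True,
    )

    # myapp/cmd.py
    import logging

    from myapp import plugins   # triggers the plugin loading above

    def main():
        # Logging only gets configured here, after any load failures
        # have already happened at import time.
        logging.basicConfig(level=logging.INFO)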

Stevedore is trying to prevent an app from failing to start if some or all
of the extensions don't load, and to instead complain noisily later when the
extensions are actually *used*. The trade-off for protecting against those
cases is what you're running into -- sometimes it is OK to not load or
invoke any plugins. But stevedore does not know when that is OK, so it is
left up to the caller to either catch the exception and ignore it or
perform some sort of check explicitly before calling the manager.
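
For example, either of these would work on the caller side (again, the
namespace is made up for the sketch):

    from stevedore import extension

    mgr = extension.ExtensionManager('myapp.notifications',
                                     invoke_on_load=True)

    # Option 1: catch the (generic) RuntimeError and treat "no plugins
    # loaded" as a no-op for this deployment.
    try:
        mgr.map(lambda ext: ext.name)
    except RuntimeError:
        pass

    # Option 2: check explicitly before calling the manager;
    # mgr.extensions is the list of successfully loaded extensions.
    if mgr.extensions:
        mgr.map(lambda ext: ext.name)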


 In this particular case, though, the thing calling the extension