[Openstack] Nova Folsom-3

2012-08-24 Thread Trinath Somanchi
Hi-

I have installed Nova Folsom-3 from source, but the installation did not
create a nova-compute.conf or nova.conf under /etc/; only the /usr/bin/nova*
scripts were created.

I want to know whether, in the Folsom-3 milestone of Nova, there were any
changes to nova.conf compared to the Essex release.

Kindly help me understand the same.

Thanking you all for the help...



-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cannot create snapshots of instances running not on the controller

2012-08-24 Thread Alessandro Tagliapietra
Hi Vish,

I already had this setting:

glance_api_servers=10.0.0.1:9292

I've also tried to add

glance_host=10.0.0.1

but I got the same error. Also, after changing the configuration and
restarting, nova-compute restarts all instances; is that normal?

Best

Alessandro
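One quick way to rule out basic connectivity problems is to check that the compute node can actually open a TCP connection to the glance endpoint configured in glance_api_servers (a minimal sketch; the host and port below mirror the setting quoted above):

```python
# Minimal reachability probe for the glance API endpoint configured in
# nova.conf (the host/port mirror the glance_api_servers value above).
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(can_reach("10.0.0.1", 9292))
```

If this prints False on the compute node, the problem is network reachability rather than nova or glance configuration.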

On 23 Aug 2012, at 20:24, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Looks like the compute node has a bad setting for glance_api_servers on the 
 second node.
 
 Because glance_api_servers defaults to $glance_host:$glance_port, you should 
 be able to fix it by setting:
 
 glance_host = ip where glance is running
 
 in your nova.conf on the second node.
 
 Vish
 
 On Aug 23, 2012, at 10:15 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi all,
 
 I have a controller which runs all services, and a secondary node which is in 
 multi_host mode, so it runs nova-compute, nova-network and nova-api-metadata. 
 From the dashboard I can successfully create snapshots of instances running 
 on the controller, but when I try to create a snapshot of an instance on a 
 compute node I get this in its logs:
 
 == /var/log/nova/nova-compute.log ==
 2012-08-23 19:08:14 ERROR nova.rpc.amqp [req-66389a04-b071-4641-949b-3df04da85d08 a63f5293c5454a979bddff1415a216f6 e8c3367ff91d44b1ab1b14eb63f48bf7] Exception during message handling
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Traceback (most recent call last):
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     sys.exc_info())
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.gen.next()
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 952, in snapshot_instance
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.driver.snapshot(context, instance_ref, image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 714, in snapshot
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_file)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 306, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     _reraise_translated_image_exception(image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 304, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_meta = client.update_image(image_id, image_meta, data)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/client.py", line 195, in update_image
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     res = self.do_request("PUT", "/images/%s" % image_id, body, headers)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 58, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return func(self, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 420, in do_request
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     headers=headers)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 75, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return func(self, method, url, body, headers)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 547, in _do_request
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     raise exception.Invalid(res.read())
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Invalid: Data supplied was not valid.
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Details: 400 Bad Request
 2012-08-23 19:08:14 TRACE nova.rpc.amqp 
 2012-08-23 19:08:14 TRACE nova.rpc.amqp The server could not comply with the request since it is either malformed or otherwise incorrect.
 2012-08-23 19:08:14 TRACE nova.rpc.amqp 

Re: [Openstack] Nova Folsom-3

2012-08-24 Thread Trinath Somanchi
Hi-

Thanks for the reply,

While installing from source, shouldn't those files be copied from the source
tree to the /etc/nova directory? I see that the /etc/nova directory itself is
not created.



On Fri, Aug 24, 2012 at 12:51 PM, Ritesh Nanda riteshnand...@gmail.com wrote:

 Hello Trinath,

 If you are installing it from source, the config files will be in your source
 directory: there is already an etc directory there, and those files are not
 auto-generated; you have to configure them according to your setup. There is
 a nova-compute.conf with all the options; you need to uncomment/comment the
 options you need for your setup.
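A sketch of the manual step this implies, as a small helper (the etc/nova path is an assumption about a typical nova checkout layout; adjust it to yours):

```python
# Stage sample config files from a nova source checkout into the directory
# nova expects.  Both paths are assumptions -- point them at your own tree
# (e.g. target="/etc/nova" on a real node, run with the needed privileges).
import os
import shutil

def stage_configs(src_checkout, target="/etc/nova"):
    """Copy <checkout>/etc/nova/* into target; return the file names copied."""
    sample_dir = os.path.join(src_checkout, "etc", "nova")
    os.makedirs(target, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(sample_dir)):
        shutil.copy(os.path.join(sample_dir, name), target)
        copied.append(name)
    return copied
```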

 On Fri, Aug 24, 2012 at 11:27 AM, Trinath Somanchi 
 trinath.soman...@gmail.com wrote:

 [snip]




 --

 With Regards,
 Ritesh Nanda
 http://www.ericsson.com/







-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


[Openstack] keystone installed by devstack redirect http request

2012-08-24 Thread Lu, Lianhao
Hi gang,

I used devstack to install an all-in-one development environment, but the
keystone service does not seem to be working for me.

The host OS is Ubuntu 12.04 with a statically assigned IP address
192.168.79.201. Since this host is on an internal network, I have to use a
gateway (with 2 NICs, IP addresses 192.168.79.1 and 10.239.48.224) to log
into the 192.168.79.201 host from the 10.239.48.0/24 network to run devstack.

After running devstack successfully, I found that the keystone service was not
usable. It mysteriously redirected HTTP requests to the gateway
10.239.48.224 (see below for the HTTP response and keystone configuration).
Does anyone know why I see the redirect here? Thanks!

Best Regards,
-Lianhao

$ keystone --debug tenant-list
connect: ('127.0.0.1', 5000)
send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 127.0.0.1:5000\r\nContent-Length: 100\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "admin", "password": "123456"}}}'
reply: 'HTTP/1.1 301 Moved Permanently\r\n'
header: Server: BlueCoat-Security-Appliance
header: Location:http://10.239.48.224
header: Connection: Close
connect: ('10.239.48.224', 80)
send: 'POST / HTTP/1.1\r\nHost: 10.239.48.224\r\nContent-Length: 100\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "admin", "password": "123456"}}}'


$ cat /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 123456
[sql]
connection = mysql://root:123456@localhost/keystone?charset=utf8
[catalog]
template_file = /etc/keystone/default_catalog.templates
driver = keystone.catalog.backends.templated.TemplatedCatalog
[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2
[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory
[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory
[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory
[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory
[filter:url_normalize]
paste.filter_factory = keystone.middleware:NormalizingFilter.factory
[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory
[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory
[app:public_service]
paste.app_factory = keystone.service:public_app_factory
[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory
[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body 
json_body debug ec2_extension user_crud_extension public_service
[pipeline:admin_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body 
json_body debug stats_reporting ec2_extension s3_extension crud_extension 
admin_service
[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory
[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory
[pipeline:public_version_api]
pipeline = stats_monitoring url_normalize xml_body public_version_service
[pipeline:admin_version_api]
pipeline = stats_monitoring url_normalize xml_body admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/ = admin_version_api


$ cat /etc/keystone/default_catalog.templates
catalog.RegionOne.identity.publicURL = http://192.168.79.201:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://192.168.79.201:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://192.168.79.201:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service

catalog.RegionOne.compute.publicURL = http://192.168.79.201:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://192.168.79.201:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://192.168.79.201:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service

catalog.RegionOne.volume.publicURL = http://192.168.79.201:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://192.168.79.201:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://192.168.79.201:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service

catalog.RegionOne.ec2.publicURL = 
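For reference, the $(name)s placeholders in these templated catalog entries behave like Python %-style substitutions; a toy illustration of the idea (my own sketch, not keystone's actual TemplatedCatalog code):

```python
# Toy expansion of a templated-catalog line: turn "$(name)s" placeholders
# into "%(name)s" and apply an ordinary mapping.  This only illustrates the
# substitution idea; it is not keystone's real implementation.
def expand(template, values):
    return template.replace("$(", "%(") % values

if __name__ == "__main__":
    line = "http://192.168.79.201:8774/v2/$(tenant_id)s"
    print(expand(line, {"tenant_id": "demo"}))
```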

[Openstack] Coming up: PTL and TC elections

2012-08-24 Thread Thierry Carrez
Hi everyone,

As the Foundation Board of Directors election is closing, we have new
elections coming up for the OpenStack technical contributors: the PTL
and TC elections.

The process we'll follow lives at:
http://wiki.openstack.org/Governance/TCElectionsFall2012

TL;DR summary:
* The election process is mostly the same as the one used in past PTL/PPB elections
* Only active technical contributors get to vote in these elections (see
wiki for details)
* We'll run the PTL elections first: nominations between Aug 30 and Sep
5, voting between Sep 7 and Sep 13
* We'll run the TC directly-elected seats election next: nominations
between Sep 13 and Sep 19, voting between Sep 21 and Sep 27
* Candidates will publicly nominate themselves by email to this list and
get publicly confirmed by one of the election officials

Note: If someone wants to replace me as one of the election officials
for the rest of the process, I'm happy to leave my spot.

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



[Openstack] OpenStack and IGMP

2012-08-24 Thread Juris
Hi all,

Do you have any experience configuring OpenStack to work with IGMP traffic?
If I have an IGMP server and the appropriate network infrastructure, what is
the best way to bind it to one of OpenStack's private networks?



Re: [Openstack] Default rules for the 'default' security group

2012-08-24 Thread Yufang Zhang
2012/8/24 Gabriel Hurley gabriel.hur...@nebula.com

 I traced this through the code at one point looking for the same thing.
 As it stands, right now there is *not* a mechanism for customizing the
 default security group's rules. It's created programmatically the first
 time the rules for a project are retrieved, with no hook to add or change
 its characteristics.

 I'd love to see this be possible, but it's definitely a feature request.


Really agreed. I have created a blueprint to track this issue:

https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group

 - Gabriel

 From: openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net [mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] On Behalf Of Boris-Michel Deschenes
 Sent: Thursday, August 23, 2012 7:59 AM
 To: Yufang Zhang; openstack@lists.launchpad.net
 Subject: Re: [Openstack] Default rules for the 'default' security group

 I'm very interested in this; we run Essex and currently have a very bad
 workaround for this, but it would be great to be able to do this (set
 default rules for the default security group).

 Boris

 From: openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net [mailto:openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net] On Behalf Of Yufang Zhang
 Sent: 23 August 2012 08:43
 To: openstack@lists.launchpad.net
 Subject: [Openstack] Default rules for the 'default' security group

 Hi all,

 Could I ask how to set the default rules for the 'default' security group
 for all users in OpenStack? Currently, the 'default' security group has
 no rules by default, so newly created instances can only be accessed by
 instances from the same group.

 Is there any method to set default rules (such as ssh or icmp) for the
 'default' security group for all users in OpenStack, so that I don't have
 to remind new users to modify their security group settings the first time
 they log into OpenStack and create instances? I have tried HP Cloud, which
 is built on OpenStack, and they permit ssh and ping to the instances in
 the 'default' security group.

 Best Regards.

 Yufang
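A sketch of the kind of per-tenant workaround in use today: generate the novaclient calls that seed the 'default' group with ssh and ping (the essex-era `nova secgroup-add-rule` syntax is an assumption here; check it against your client version before running the output):

```python
# Build the novaclient commands that would open ssh and ping in the
# 'default' security group.  Print them for review; run them per tenant
# with the right credentials loaded.
DEFAULT_RULES = [
    ("tcp", 22, 22),    # ssh
    ("icmp", -1, -1),   # ping
]

def secgroup_cmds(group="default", cidr="0.0.0.0/0"):
    return [
        "nova secgroup-add-rule %s %s %d %d %s" % (group, proto, lo, hi, cidr)
        for proto, lo, hi in DEFAULT_RULES
    ]

if __name__ == "__main__":
    print("\n".join(secgroup_cmds()))
```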



Re: [Openstack] keystone installed by devstack redirect http request

2012-08-24 Thread Dolph Mathews
Keystone doesn't return 301's (ever). However, your 301 response headers
show:

Server: BlueCoat-Security-Appliance

I'm guessing that wasn't installed by devstack :)

-Dolph
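If the appliance is a transparent proxy the client traffic is being routed through, one thing worth checking (an assumption about the cause, not a confirmed fix) is that the local endpoints are excluded from any proxy settings the client might inherit:

```python
# Ensure HTTP clients bypass any proxy for the local keystone endpoints.
# The host list mirrors the addresses from the debug output above.
import os

def exclude_from_proxy(hosts):
    existing = [h for h in os.environ.get("no_proxy", "").split(",") if h]
    merged = existing + [h for h in hosts if h not in existing]
    os.environ["no_proxy"] = ",".join(merged)
    return os.environ["no_proxy"]

if __name__ == "__main__":
    print(exclude_from_proxy(["127.0.0.1", "localhost", "192.168.79.201"]))
```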

On Fri, Aug 24, 2012 at 3:03 AM, Lu, Lianhao lianhao...@intel.com wrote:

 Hi gang,

 I used the devstack to install a all-one-one develop environment, but
 the keystone service seemed not working for me.

 The host OS is Ubuntu 12.04 with a statically assigned IP address
 192.168.79.201. Since this host is in the internal network, I have to use a
 gateway(with 2 NICs of ip addresses 192.168.79.1 and 10.239.48.224) to
 login into the 192.168.79.201 host from the 10.239.48.0/24 network to run
 devstack.

 After running devstack successfully, I found that the keystone service was
 not usable. It mysteriously redirected http requests to the gateway
 10.239.48.224(see below for the http response and keystone configurations).
 Does anyone know why I saw the redirect here? Thanks!

 Best Regards,
 -Lianhao

 $ keystone --debug tenant-list
 connect: (127.0.0.1, 5000)
 send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 127.0.0.1:5000\r\nContent-Length:
 100\r\ncontent-type: application/json\r\naccept-encoding: gzip,
 deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{auth: {tenantName:
 demo, passwordCredentials: {username: admin, password:
 123456}}}'
 reply: 'HTTP/1.1 301 Moved Permanently\r\n'
 header: Server: BlueCoat-Security-Appliance
 header: Location:http://10.239.48.224
 header: Connection: Close
 connect: (10.239.48.224, 80)
 send: 'POST / HTTP/1.1\r\nHost: 10.239.48.224\r\nContent-Length:
 100\r\ncontent-type: application/json\r\naccept-encoding: gzip,
 deflate\r\nuser-agent: python-keystoneclient\r\n\r\n{auth: {tenantName:
 demo, passwordCredentials: {username: admin, password:
 123456}}}'


 [snip]

[Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Patrick Petit
Hi,

Could someone give a practical overview of how to configure and use the
instance type extra specs capability introduced in Folsom?

How to extend an instance type is relatively clear.

E.g.: # nova-manage instance_type set_key --name=my.instancetype --key
cpu_arch --value 's== x86_64'

The principles of capability advertising are less clear. Are the key/value
pairs always declared statically as flags in the nova.conf of the compute
node, or can they be generated dynamically, and if so, by whom? Also, are the
keys completely free-form strings, or strings that are known (reserved) by
Nova?

Thanks in advance for clarifying this.

Patrick


[Openstack] [Quantum] Running plugin tests with tox

2012-08-24 Thread Maru Newby
Hi Salvatore,

I see you're working on getting plugins testable with tox:

https://review.openstack.org/#/c/11922/

What about keeping the plugins isolated for testing purposes?  I haven't been
able to work on it yet, but I was thinking it might be a good idea to move the
plugins out of the main tree (but still in the same repo) for ease of
maintenance, testing and deployment.  The thought was:

- relocate all plugins outside of main quantum tree (plugins/ dir in the repo 
root)
- give each plugin 
  - its own python root-level package (e.g. quantum_ovs)
  - its own tox.ini
  - its own tools/*-requires

So the layout would be something like:

plugins/quantum_ovs/tox.ini
plugins/quantum_ovs/quantum_ovs/__init__.py
plugins/quantum_ovs/tests/__init__.py
plugins/quantum_ovs/tools/pip-requires

plugins/quantum_linuxbridge/tox.ini
...

The tests for each plugin could then be executed via an independent tox run.

Is there any merit to this, now or in the future?

Thanks,


Maru

On 2012-08-24, at 2:56 PM, Salvatore Orlando (Code Review) wrote:

 Salvatore Orlando has uploaded a new change for review.
 
 Change subject: Enable tox to run OVS plugin unit tests
 ..
 
 Enable tox to run OVS plugin unit tests
 
 Fix bug 1029142
 
 Unit tests have been moved into /quantum/tests/unit
 
 Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
 ---
 D quantum/plugins/openvswitch/run_tests.py
 D quantum/plugins/openvswitch/tests/__init__.py
 D quantum/plugins/openvswitch/tests/unit/__init__.py
 R quantum/tests/unit/test_ovs_db.py
 R quantum/tests/unit/test_ovs_defaults.py
 R quantum/tests/unit/test_ovs_rpcapi.py
 R quantum/tests/unit/test_ovs_tunnel.py
 7 files changed, 0 insertions(+), 72 deletions(-)
 
 
   git pull ssh://review.openstack.org:29418/openstack/quantum refs/changes/22/11922/1
 --
 To view, visit https://review.openstack.org/11922
 To unsubscribe, visit https://review.openstack.org/settings
 
 Gerrit-MessageType: newchange
 Gerrit-Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
 Gerrit-PatchSet: 1
 Gerrit-Project: openstack/quantum
 Gerrit-Branch: master
 Gerrit-Owner: Salvatore Orlando salv.orla...@gmail.com




Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread David Kang

 Patrick,

 We are using this feature in bare-metal machine provisioning.
Some keys are automatically generated by nova-compute.
For example, the hypervisor_type, hypervisor_version, etc. fields are
automatically put into capabilities by nova-compute (in the case of libvirt),
so you don't need to specify those.
But if you want to add custom fields, you should put them into the nova.conf
file of the nova-compute node.

 Since each new key is put into 'capabilities', it must be different from any
other key in 'capabilities'.
If that uniqueness is enforced, it can be any string, I believe.
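A toy illustration of how an extra-specs requirement like 's== x86_64' would be checked against a capability value (my own sketch of the operator syntax from the example, not nova's actual filter code):

```python
# Match one capability value against an extra_specs requirement string.
# Only the string operators from the example ('s==', 's!=') are sketched;
# with no recognized operator we fall back to plain string equality.
def matches(capability, requirement):
    op, _, value = requirement.partition(" ")
    if op == "s==":
        return str(capability) == value
    if op == "s!=":
        return str(capability) != value
    return str(capability) == requirement
```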

 Thanks,
 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 Hi,
 
 
 Could someone give a practical overview of how configuring and using
 the instance type extra specs extension capability introduced in
 Folsom?
 
 
 If how extending an instance type is relatively clear.
 
 
 Eg.: #nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'
 
 
 The principles of capability advertising is less clearer. Is it
 assumed that the key/value pairs are always declared statically as
 flags in nova.conf of the compute node, or can they be generated
 dynamically and if so, who would that be? And also, are the keys
 completely free form strings or strings that are known (reserved) by
 Nova?
 
 
 Thanks in advance for clarifying this.
 
 
 Patrick
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Dugger, Donald D
Patrick-

We've enhanced `nova-manage' to manipulate the `extra_specs' entries, cf.
https://blueprints.launchpad.net/nova/+spec/update-flavor-key-value.  You can
add an `extra_specs' key/value pair to a flavor with the command:

nova-manage instance_type add_key m1.humongous cpu_type itanium

And delete a key/value pair with the command:

nova-manage instance_type del_key m1.humongous cpu_type

We're in the process of enhancing `python-novaclient' and Horizon with similar 
capabilities and hope to have them ready for the Folsom release.

Currently, there's no hook to set `extra_specs' through the `nova.conf' file;
the mechanism is to dynamically add the `extra_specs' key/values after the
administrator has created a flavor.

Currently, the keys are completely free-form, but there are some issues with
that, so that should change.  Check out the bug:

https://bugs.launchpad.net/nova/+bug/1039386

Based upon that bug, we need to put some sort of scope on the keys to indicate
which components a key applies to.  I'm in favor of adding a new column to the
`extra_specs' table that would explicitly set the scope, but an alternative
would be to encode the scope into the key itself, something like
`TrustedFilter:trust' to indicate that the `trust' key only applies to the
`TrustedFilter' scheduler component.  Feel free to chime in on the bug entry
on how to specify the scope; once we decide how to deal with this I'll create
a patch to handle it.
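If the scope does end up encoded in the key, parsing it back out is trivial; a sketch of the convention described above (this is a proposal under discussion, not shipped nova behaviour):

```python
# Split a scoped extra_specs key like "TrustedFilter:trust" into its scope
# and bare key; unscoped keys get a scope of None.
def split_scope(key):
    scope, sep, name = key.partition(":")
    return (scope, name) if sep else (None, key)
```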

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

From: openstack-bounces+donald.d.dugger=intel@lists.launchpad.net 
[mailto:openstack-bounces+donald.d.dugger=intel@lists.launchpad.net] On 
Behalf Of Patrick Petit
Sent: Friday, August 24, 2012 7:13 AM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] [Nova] Instance Type Extra Specs clarifications

Hi,

Could someone give a practical overview of how configuring and using the 
instance type extra specs extension capability introduced in Folsom?

If how extending an instance type is relatively clear.

Eg.: #nova-manage instance_type set_key --name=my.instancetype --key 
cpu_arch --value 's== x86_64'

The principles of capability advertising is less clearer. Is it assumed that 
the key/value pairs are always declared statically as flags in nova.conf of the 
compute node, or can they be generated dynamically and if so, who would that 
be? And also, are the keys completely free form strings or strings that are 
known (reserved) by Nova?

Thanks in advance for clarifying this.

Patrick


Re: [Openstack] [Quantum] Running plugin tests with tox

2012-08-24 Thread Salvatore Orlando
Hi Maru,

whether plugins should be packaged or not with the main quantum service is
a recurrent question on this mailing list, and I am afraid I don't have an
answer to it!
Pros and cons of separating the plugins from the main repository have been
discussed on the mailing list and on the IRC channels; I hope some final
decision/action will be agreed at the next OpenStack summit. However, so
far nothing has been done in this direction.

As concerns testing, our goal is to ensure that the plugins too are covered by
unit tests, just like the quantum service. This can be achieved in two ways:
1) moving all tests into the same tree, so that a single tox run can run
them all, or
2) modifying the unit-testing script on the OpenStack Jenkins to run tox in
the main tests directory and in each plugin's test dir.

The approach I am following so far is the first, which seems consistent with
what nova does for its virt drivers; however, if we (and the openstack-ci
team) believe we should do it differently, we could have a tox.ini and a
test-requires in each plugin folder, and execute a tox run for each plugin
(plus a tox run for the quantum service).

Salvatore
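For the per-plugin option being discussed, each plugin folder would carry something like the following tox.ini (a sketch only; the env list, requirement file names and test command are assumptions):

```
[tox]
envlist = py27

[testenv]
deps = -r{toxinidir}/tools/pip-requires
       -r{toxinidir}/tools/test-requires
commands = nosetests tests
```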


On 24 August 2012 17:35, Maru Newby mne...@internap.com wrote:

 Hi Salvatore,

 I see you're working on getting plugins testable with tox:

 https://review.openstack.org/#/c/11922/

 What about keeping the plugins isolated for testing purposes?  I have been
 unable to work on it yet, but I was thinking it might be a good idea to
 move the plugins out of the main tree (but still in the same repo) for ease
 of maintenance, testing and deployment.  The thought was:

 - relocate all plugins outside of main quantum tree (plugins/ dir in the
 repo root)
 - give each plugin
   - its own python root-level package (e.g. quantum_ovs)
   - its own tox.ini
   - its own tools/*-requires

 So the layout would be something like:

 plugins/quantum_ovs/tox.ini
 plugins/quantum_ovs/quantum_ovs/__init__.py
 plugins/quantum_ovs/tests/__init__.py
 plugins/quantum_ovs/tools/pip-requires
 
 plugins/quantum_linuxbridge/tox.ini
 ...

 The tests for each plugin could then be executed via an independent tox
 run.

 Is there any merit to this, now or in the future?

 Thanks,


 Maru

 On 2012-08-24, at 2:56 PM, Salvatore Orlando (Code Review) wrote:

  Salvatore Orlando has uploaded a new change for review.
 
  Change subject: Enable tox to run OVS plugin unit tests
  ..
 
  Enable tox to run OVS plugin unit tests
 
  Fix bug 1029142
 
  Unit tests have been moved into /quantum/tests/unit
 
  Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
  ---
  D quantum/plugins/openvswitch/run_tests.py
  D quantum/plugins/openvswitch/tests/__init__.py
  D quantum/plugins/openvswitch/tests/unit/__init__.py
  R quantum/tests/unit/test_ovs_db.py
  R quantum/tests/unit/test_ovs_defaults.py
  R quantum/tests/unit/test_ovs_rpcapi.py
  R quantum/tests/unit/test_ovs_tunnel.py
  7 files changed, 0 insertions(+), 72 deletions(-)
 
 
   git pull 
  ssh://review.openstack.org:29418/openstack/quantum refs/changes/22/11922/1
  --
  To view, visit https://review.openstack.org/11922
  To unsubscribe, visit https://review.openstack.org/settings
 
  Gerrit-MessageType: newchange
  Gerrit-Change-Id: I5d0fa84826f62a86e4ab04c3e1958869f24a1fcf
  Gerrit-PatchSet: 1
  Gerrit-Project: openstack/quantum
  Gerrit-Branch: master
  Gerrit-Owner: Salvatore Orlando salv.orla...@gmail.com





Re: [Openstack] [Quantum] Running plugin tests with tox

2012-08-24 Thread James E. Blair
Maru Newby mne...@internap.com writes:

 Hi Salvatore,

 I see you're working on getting plugins testable with tox:

 https://review.openstack.org/#/c/11922/

 What about keeping the plugins isolated for testing purposes?  I have
 been unable to work on it yet, but I was thinking it might be a good
 idea to move the plugins out of the main tree (but still in the same
 repo) for ease of maintenance, testing and deployment.  The thought
 was:

 - relocate all plugins outside of main quantum tree (plugins/ dir in the repo 
 root)
 - give each plugin 
   - its own python root-level package (e.g. quantum_ovs)
   - its own tox.ini
   - its own tools/*-requires

 So the layout would be something like:

 plugins/quantum_ovs/tox.ini
 plugins/quantum_ovs/quantum_ovs/__init__.py
 plugins/quantum_ovs/tests/__init__.py
  plugins/quantum_ovs/tools/pip-requires
 
 plugins/quantum_linuxbridge/tox.ini
 ...

 The tests for each plugin could then be executed via an independent tox run.

 Is there any merit to this, now or in the future?

I don't think we should do that -- it's fine to organize the tree
however you see fit, of course, but a while back we implemented the
Project Testing Interface:

  http://wiki.openstack.org/ProjectTestingInterface

where we expect each project to have just one tox.ini, one set of
dependencies, and run one set of tests.  We automatically manage 239
Jenkins jobs (and counting) based on that interface which would not be
possible if we had to customize the jobs for each project.  Also, keep
in mind that we use tox to generate tarballs, so splitting that up means
multiple release artifacts for a single repo, which is also something we
want to avoid.

If the plugins are truly independent enough that they should be tested,
packaged, and released separately from quantum, we may want to consider
making separate projects for them.  Otherwise, if they should continue
to live in the quantum project itself, it would be great if we stuck
with the established interface.

-Jim

PS, This seems more like an openstack-dev discussion, let's move it to
that list.



Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Joseph Suh
Patrick,

Once a new item (key and value pair) is added to the capabilities, it can be 
compared against extra_specs. The extra_specs can be populated in the 
instance_type_extra_specs table. Each item in the extra_specs can start with 
one of the keywords for operations such as >= and s==. For example, if 
ngpus: 4 is populated in the capabilities, an extra_specs value of >= 2 will 
choose the host. If the extra_specs value is >= 5, the host will not be 
chosen. If no keyword is found at the beginning of the extra_specs value 
(with the latest change in the upstream code), the given string is directly 
compared with the capability. For example, if fpu is given as extra_specs, 
the capability must be fpu for the host to be selected.
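For illustration, the matching logic described above could be sketched roughly as follows. This is NOT the actual nova filter code; the helper name `match` and its structure are invented for this example, with operator names taken from the discussion above:

```python
# Purely illustrative sketch of extra_specs operator matching; `match`
# is a made-up helper, not part of nova.
def match(capability, spec):
    words = str(spec).split()
    ops = {
        '>=': lambda c, s: float(c) >= float(s),
        '<=': lambda c, s: float(c) <= float(s),
        's==': lambda c, s: str(c) == s,
        's!=': lambda c, s: str(c) != s,
    }
    if len(words) == 2 and words[0] in ops:
        return ops[words[0]](capability, words[1])
    # no leading operator keyword: compare the string directly
    return str(capability) == str(spec)

print(match(4, '>= 2'))     # ngpus: 4 satisfies ">= 2" -> True
print(match(4, '>= 5'))     # but not ">= 5" -> False
print(match('fpu', 'fpu'))  # plain string comparison -> True
```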

If more clarification is needed, please let us know.

Thanks,

Joseph

- Original Message -
From: David Kang dk...@isi.edu
To: Patrick Petit patrick.michel.pe...@gmail.com
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
openstack@lists.launchpad.net
Sent: Friday, August 24, 2012 11:34:11 AM
Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications


 Parick,

 We are using the feature in Bare-metal machine provisioning.
Some keys are automatically generated by nova-compute.
For example, hypervisor_type, hypervisor_version, etc. fields are 
automatically
put into capabilities by nova-compute (in the case of libvirt).
So, you don't need to specify that.
But, if you want to add custom fields, you should put them into the nova.conf 
file of the nova-compute node.

 Since the new keys are put into 'capabilities', 
each new key must be different from any other key in the 'capabilities'.
If that uniqueness is enforced, it can be any string, I believe.

 Thanks,
 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 Hi,
 
 
 Could someone give a practical overview of how configuring and using
 the instance type extra specs extension capability introduced in
 Folsom?
 
 
 How to extend an instance type is relatively clear.
 
 
 Eg.: #nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'
 
 
 The principles of capability advertising are less clear. Is it
 assumed that the key/value pairs are always declared statically as
 flags in nova.conf of the compute node, or can they be generated
 dynamically and, if so, by whom? And also, are the keys
 completely free-form strings or strings that are known (reserved) by
 Nova?
 
 
 Thanks in advance for clarifying this.
 
 
 Patrick




Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-24 Thread Sam Su
Hi,

I also reported this bug:
 https://bugs.launchpad.net/nova/+bug/1040255

 If someone can combine your solutions and get a proper way to fix this
bug, that will be great.

BRs,
Sam

On Thu, Aug 23, 2012 at 9:27 PM, heut2008 heut2...@gmail.com wrote:

 this bug has been filed here  https://bugs.launchpad.net/nova/+bug/1040537

 2012/8/24 Vishvananda Ishaya vishvana...@gmail.com:
  +1 to this. Evan, can you report a bug (if one hasn't been reported yet)
 and
  propose the fix? Or else I can find someone else to propose it.
 
  Vish
 
  On Aug 23, 2012, at 1:38 PM, Evan Callicoat diop...@gmail.com wrote:
 
  Hello all!
 
  I'm the original author of the hairpin patch, and things have changed a
  little bit in Essex and Folsom from the original Diablo target. I
 believe I
  can shed some light on what should be done here to solve the issue in
 either
  case.
 
  ---
  For Essex (stable/essex), in nova/virt/libvirt/connection.py:
  ---
 
  Currently _enable_hairpin() is only being called from spawn(). However,
  spawn() is not the only place that vifs (veth#) get added to a bridge
 (which
  is when we need to enable hairpin_mode on them). The more relevant
 function
  is _create_new_domain(), which is called from spawn() and other places.
  Without changing the information that gets passed to _create_new_domain()
  (which is just 'xml' from to_xml()), we can easily rewrite the first 2
 lines
  in _enable_hairpin(), as follows:
 
   def _enable_hairpin(self, xml):
       interfaces = self.get_interfaces(xml['name'])
 
  Then, we can move the self._enable_hairpin(instance) call from spawn() up
  into _create_new_domain(), and pass it xml as follows:
 
   [...]
       self._enable_hairpin(xml)
       return domain
 
  This will run the hairpin code every time a domain gets created, which is
  also when the domain's vif(s) gets inserted into the bridge with the
 default
  of hairpin_mode=0.
 
  ---
  For Folsom (trunk), in nova/virt/libvirt/driver.py:
  ---
 
  There've been a lot more changes made here, but the same strategy as
 above
  should work. Here, _create_new_domain() has been split into
 _create_domain()
  and _create_domain_and_network(), and _enable_hairpin() was moved from
  spawn() to _create_domain_and_network(), which seems like it'd be the
 right
  thing to do, but doesn't quite cover all of the cases of vif reinsertion,
  since _create_domain() is the only function which actually creates the
  domain (_create_domain_and_network() just calls it after doing some
  pre-work). The solution here is likewise fairly simple; make the same 2
  changes to _enable_hairpin():
 
   def _enable_hairpin(self, xml):
       interfaces = self.get_interfaces(xml['name'])
 
  And move it from _create_domain_and_network() to _create_domain(), like
  before:
 
   [...]
       self._enable_hairpin(xml)
       return domain
 
  I haven't yet tested this on my Essex clusters and I don't have a Folsom
  cluster handy at present, but the change is simple and makes sense.
 Looking
  at to_xml() and _prepare_xml_info(), it appears that the 'xml' variable
  _create_[new_]domain() gets is just a python dictionary, and
  xml['name'] = instance['name'], exactly what _enable_hairpin() was using
  the 'instance' variable for previously.
 
  Let me know if this works, or doesn't work, or doesn't make sense, or if
 you
  need an address to send gifts, etc. Hope it's solved!
 
  -Evan
 
  On Thu, Aug 23, 2012 at 11:20 AM, Sam Su susltd...@gmail.com wrote:
 
  Hi Oleg,
 
  Thank you for your investigation. Good lucky!
 
  Can you let me know if find how to fix the bug?
 
  Thanks,
  Sam
 
  On Wed, Aug 22, 2012 at 12:50 PM, Oleg Gelbukh ogelb...@mirantis.com
  wrote:
 
  Hello,
 
  Is it possible that, during snapshotting, libvirt just tears down
 virtual
  interface at some point, and then re-creates it, with hairpin_mode
 disabled
  again?
  This bugfix [https://bugs.launchpad.net/nova/+bug/933640] implies that
  fix works on spawn of instance. This means that upon resume after
 snapshot,
  hairpin is not restored. May be if we insert the _enable_hairpin()
 call in
  snapshot procedure, it helps.
  We're currently investigating this issue in one of our environments,
 hope
  to come up with answer by tomorrow.
 
  --
  Best regards,
  Oleg
 
  On Wed, Aug 22, 2012 at 11:29 PM, Sam Su susltd...@gmail.com wrote:
 
  My friend has found a way to enable pinging itself when this problem
  happens, but has not found why it happens:
  sudo echo 1 > 
  /sys/class/net/br1000/brif/virtual-interface-name/hairpin_mode
 
  I file a ticket to report this problem:
  https://bugs.launchpad.net/nova/+bug/1040255
 
  hopefully someone can find why this happens and solve it.
 
  Thanks,
  Sam
 
 
  On Fri, Jul 20, 2012 at 3:50 PM, Gabriel Hurley
  gabriel.hur...@nebula.com wrote:
 
  I ran into some similar issues with the _enable_hairpin() call. The
  call is allowed to fail silently and (in my case) was failing. I
 couldn’t
  for the life of me figure out why, though, and 
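
As a side note, the sysfs workaround mentioned earlier in this thread (writing 1 to each bridge port's hairpin_mode file) amounts to something like the sketch below; the bridge name and the sysfs_root parameter are illustrative assumptions for the example, not nova code:

```python
import os

def enable_hairpin(vifs, bridge='br1000', sysfs_root='/sys/class/net'):
    # Mirrors the manual workaround: write "1" into each bridge port's
    # hairpin_mode file (sysfs_root is parameterized only for illustration).
    for vif in vifs:
        path = os.path.join(sysfs_root, bridge, 'brif', vif, 'hairpin_mode')
        with open(path, 'w') as f:
            f.write('1')
```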

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-24 Thread David Kang

 Vish,

 I've tested your code and did more testing.
There are a couple of problems.
1. Host names must be unique. If not, repeated updates of new 
capabilities with the same host name simply overwrite each other.
2. We cannot generate arbitrary host names on the fly.
  The scheduler (I tested the filter scheduler) gets host names from the db.
  So, if a host name is not in the 'services' table, it is not considered by 
the scheduler at all.

So, to make your suggestions possible, nova-compute should register N different 
host names in 'services' table,
and N corresponding entries in 'compute_nodes' table.
Here is an example:

mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
+----+-------------+----------------+-----------+--------------+----------+-------------------+
| id | host        | binary         | topic     | report_count | disabled | availability_zone |
+----+-------------+----------------+-----------+--------------+----------+-------------------+
|  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
|  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
|  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
|  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
+----+-------------+----------------+-----------+--------------+----------+-------------------+

mysql> select id, service_id, hypervisor_hostname from compute_nodes;
+----+------------+------------------------+
| id | service_id | hypervisor_hostname    |
+----+------------+------------------------+
|  1 |          3 | bespin101.east.isi.edu |
|  2 |          4 | bespin101.east.isi.edu |
+----+------------+------------------------+

 Then, the nova db (compute_nodes table) has entries for all bare-metal nodes.
What do you think of this approach?
Do you have any better approach?

 Thanks,
 David
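
The per-node registration scheme above could be sketched as follows; this is purely illustrative (the build_host_stats helper and its signature are invented for the example, not the actual nova driver API), with host names generated as in the 'services' table shown earlier:

```python
def build_host_stats(service_host, nodes):
    # One capability dict per bare-metal node; host names are generated as
    # service_host-index, matching entries like bespin101-0 / bespin101-1.
    stats = []
    for i, node_caps in enumerate(nodes):
        entry = dict(node_caps)
        entry['host'] = '%s-%d' % (service_host, i)
        entry['service_name'] = 'compute'
        stats.append(entry)
    return stats

print(build_host_stats('bespin101', [{'ngpus': 2}, {'ngpus': 4}]))
```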



- Original Message -
 To elaborate, something like the below. I'm not absolutely sure you need to
 be able to set service_name and host, but this gives you the option to
 do so if needed.
 
 diff --git a/nova/manager.py b/nova/manager.py
 index c6711aa..c0f4669 100644
 --- a/nova/manager.py
 +++ b/nova/manager.py
 @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
 
      def update_service_capabilities(self, capabilities):
          """Remember these capabilities to send on next periodic update."""
 +        if not isinstance(capabilities, list):
 +            capabilities = [capabilities]
          self.last_capabilities = capabilities
 
      @periodic_task
 @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
          """Pass data back to the scheduler at a periodic interval."""
          if self.last_capabilities:
              LOG.debug(_('Notifying Schedulers of capabilities ...'))
 -            self.scheduler_rpcapi.update_service_capabilities(context,
 -                    self.service_name, self.host, self.last_capabilities)
 +            for capability_item in self.last_capabilities:
 +                name = capability_item.get('service_name', self.service_name)
 +                host = capability_item.get('host', self.host)
 +                self.scheduler_rpcapi.update_service_capabilities(context,
 +                        name, host, capability_item)
 
 On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
   Hi Vish,
 
   We are trying to change our code according to your comment.
  I want to ask a question.
 
  a) modify driver.get_host_stats to be able to return a list of
  host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)
 
   Modifying driver.get_host_stats to return a list of host stats is
   easy.
  Making multiple calls to
  self.update_service_capabilities(capabilities) doesn't seem to work,
  because 'capabilities' is overwritten each time.
 
   Modifying the receiving end to accept a list seems to be easy.
  However, 'capabilities' is assumed to be dictionary by all other
  scheduler routines,
  it looks like that we have to change all of them to handle
  'capability' as a list of dictionary.
 
   If my understanding is correct, it would affect many parts of the
   scheduler.
  Is it what you recommended?
 
   Thanks,
   David
 
 
  - Original Message -
  This was an immediate goal, the bare-metal nova-compute node could
  keep an internal database, but report capabilities through nova in
  the
  common way with the changes below. Then the scheduler wouldn't need
  access to the bare metal database at all.
 
  On Aug 15, 2012, at 4:23 PM, David Kang dk...@isi.edu wrote:
 
 
  Hi Vish,
 
  Is this discussion for long-term goal or for this Folsom release?
 
  We still believe that the bare-metal database is needed
  because there is no automated way for bare-metal nodes to report
  their capabilities
  to their bare-metal nova-compute node.
 
  Thanks,
  David
 
 
  I am interested in finding a solution 

[Openstack] quantum-rootwrap

2012-08-24 Thread jrd
https://review.openstack.org/#/c/11524/

Patches posted.  Review solicited.  Thanks in advance...



Re: [Openstack] What is the most commonly used Hypervisor and toolset combination?

2012-08-24 Thread Jim Fehlig
Sorry for the delayed response.

Boris-Michel Deschenes wrote:
 That would be great Jim,

 I've built a cloud that uses CentOS+libvirt+Xen 4.1.3 to do GPU passthrough 
 and I just love to be able to use libvirt with Xen, this setup makes a lot of 
 sense to me since our main, bigger cloud is the standard libvirt+KVM, using 
 libvirt across the board is great for us.

 I'm following your work closely, the GPU cloud is still using libvirt+xend 
 but when I move to Xen 4.2 my understanding is that I will need libvirt+xl 
 (xenlight) so I guess there's still some work to be done in libvirt there...
   

Yes, there is.  libxl changed significantly between Xen 4.1 and soon to
be released Xen 4.2, so much so that the current libvirt libxl driver
won't even build against Xen 4.2.  In addition, the libxl driver does
not have feature parity with the legacy xen driver.  So lots of work to
be done, but I have limited free cycles.  I'm hoping to get another body
or two at SUSE to help with this work.

That said, xm/xend will still be included in Xen 4.2 and can be
configured as the primary tool stack, allowing you to continue using
your existing setup with Xen 4.2.

Regards,
Jim

 The reason I want to move to Xen 4.2 is the GPU passthrough of NVIDIA GPUs... 
 currently, with Xen 4.1.3, I successfully passthrough ATI GPUs only.

 Boris

 -Original Message-
 From: openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net 
 [mailto:openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net]
  On behalf of Jim Fehlig
 Sent: 18 July 2012 17:56
 To: John Garbutt
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] What is the most commonly used Hypervisor and toolset 
 combination?

 John Garbutt wrote:
   
 To my knowledge, if you want to use Xen, using XCP or XenServer (i.e. using 
 XenAPI driver) is the way to go. If you look at the contributions to the 
 drivers, you can have a good guess at who is using them.

 I know people are going into production on XenAPI, not heard about 
 Xen+libvirt in production. Having said this, I have seen some fixes to 
 Folsom around Xen + libvirt, I think from SUSE?
   
 

 Yes, I'm slowly working on improving support for xen.org Xen via the libvirt 
 driver and hope to have these improvements in for the Folsom release.

 Regards,
 Jim



   



Re: [Openstack] What is the most commonly used Hypervisor and toolset combination?

2012-08-24 Thread Jim Fehlig
Boris-Michel Deschenes wrote:
 John,

 Sorry for my late response..

 It would be great to collaborate, like I said, I prefer to keep the libvirt 
 layer as it works great with openstack and many other techs (collectd, 
 virt-manager, etc.), the virsh tool is also very useful for us.

 You say:
 ---
 We have GPU passthrough working with NVIDIA GPUs in Xen 4.1.2, if I recall 
 correctly.  We don't yet have a stable Xen + Libvirt installation working, 
 but we're looking at it.  Perhaps it would be worth collaborating since it 
 sounds like this could be a win for both of us.
 ---
 I have Jim Fehlig in CC since this could be of interest to him.

 We managed to have the GPU passthrough of NVIDIA cards using Xen 4.1.2 but 
 ONLY with the xenapi (actually the whole XCP toolstack), with libvirt/Xen 
 4.1.2 and even libvirt/Xen 4.1.3, I only manage to apss through radeon GPUs, 
 the reason could be:

 1. The inability to pass the gfx_passthru parameter through libvirt (IIRC 
 this parameter passes the PCI device as the main VGA card and not a second 
 one).
 2. Bad FLR reset  support (or other PCI low-level function) from the NVIDIA 
 boards
   

I've noticed this issue with some Broadcom multifunction nics.  No FLR,
so fallback to secondary bus reset, which is problematic if another
function is being used by a different vm.

 3. something else entirely.

 Anyway, like I said, this GPU passthrough of nvidia worked well with XCP 
 using xenapi but not with libvirt/Xen
   

Hmm, would be nice to get that fixed.  To date, I haven't tried GPU
passthrough with Xen so I'm not familiar with the issues.

 Now, as for the libvirt/Xen setup we have, I don't know if I would call it 
 stable but it does the job as a POC cloud and is actually used by real people 
 with real GPU needs (for example developing on OpenCL 1.2), the main thing is 
 that it seamlessly integrates with openstack (because of libvirt) and  with 
 the instance_type_extra_specs, you can actually add a couple of these 
 special nodes to an existing plain KVM cloud and they will receive the 
 instances requesting GPUs without any problem.

 the setup:
 (this only refers to compute nodes as controller nodes are un-modified)

 1. Install Centos 6.2 and make your own project Zeus (transforming a centos 
 in Xen) 
 http://www.howtoforge.com/virtualization-with-xen-on-centos-6.2-x86_64-paravirtualization-and-hardware-virtualization
  (first page only and skip the bridge setup as openstack-nova-compute does 
 this at startup).  You end up with a Xen hypervisor with libvirt, the libvirt 
 patch is actually a single-line config change IIRC.  Pretty straight-forward.

 2. Install openstack-nova from EPEL (so all this refers only to ESSEX, 
 openstack 2012.1)

 3. configure the compute node accordingly (libvirt_type=xen)

 That's the first part, at this point, you can spawn a VM, and attach a GPU 
 manually with:

 virsh nodedev-dettach pci__02_00_01
 (edit the VM's nova libvirt.xml to add a pci node dev definition like this: 
 http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/chap-Virtualization-PCI_passthrough.html
  )
 virsh define libvirt.xml
 virsh start instance-000x
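
For illustration, the hostdev element that the manual editing step above adds to the VM's libvirt.xml could be generated like this (a sketch; the helper is not part of nova, and the PCI address values are placeholders):

```python
def hostdev_xml(domain, bus, slot, function):
    # Build a libvirt <hostdev> PCI passthrough element, in the format
    # shown in the Fedora guide linked above; all values are placeholders.
    return (
        "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
        "  <source>\n"
        "    <address domain='0x%04x' bus='0x%02x' slot='0x%02x' "
        "function='0x%x'/>\n"
        "  </source>\n"
        "</hostdev>" % (domain, bus, slot, function)
    )

print(hostdev_xml(0x0000, 0x02, 0x00, 0x1))
```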

 Now, this is all manual and we wish to automate this in openstack, so this is 
 what I've done, I currently can launch VMs in my cloud and the passthrough 
 occurs without any intervention.

 These files were modified from an original essex installation to make this 
 possible:

 (on the controller)
 create a g1.small instance_type with {'free_gpus': '1'} as 
 instance_type_extra_specs
  select the compute_filter filter to enforce extra_specs in scheduling (also 
  the function host_passes of the filter is slightly modified so that it reads 
  key>=value instead of key==value... (free_gpus>=1 is good, does not need to be 
  strictly equal to 1)
   

I think this has already been done for you in Folsom via the
ComputeCapabilitiesFilter and Jinwoo Suh's addition of
instance_type_extra_specs operators.  See commit 90f77d71.

 (on the compute node)
 nova/virt/libvirt/gpu.py
   a new file that contains functions like detach_all_gpus, get_free_gpus, 
 simple stuff 

Have you considered pushing this upstream?

 using virsh and lspci
 nova/virt/libvirt/connection.py
   calls gpu.detach_all_gpus on startup (virsh nodedev-dettach)
   builds the VM libvirt.xml as normal but also adds the pci nodedev 
 definition
   advertises free_gpus capabilities so that the scheduler gets it through 
 host_state calls

 that's about it, with that we get:

 1. compute nodes that detach all GPUS on startup
 2. compute nodes that advertise the nb of free gpus to the scheduler
 3. compute nodes that are able to build the VMs libvirt.xml with a valid, 
 free GPU definition when a VM is launched
 4. controller that runs a scheduler that knows where to send VMs (free_gpus 
  >= 1)

 It does the trick for now, with RADEON 6950 I get 100% success, I spawn a VM 
 and in 20 

Re: [Openstack] OpenStack and IGMP

2012-08-24 Thread Dan Wendlandt
Hi Juris,

Some more detail would be useful here.  It sounds like you're trying
to use multicast, for which IGMP is a control protocol.  Is it that
you're trying to run nova VMs and make sure they can participate in
multicast groups?  Basic flat Nova networking connects VMs directly to
a physical network, so the configuration of multicast on the routers
(and IGMP snooping on the switches) is generally something that would
happen outside the scope of openstack configuration.  For private
networks in VlanManager mode or with Quantum, the existing L3
forwarding logic does not run a daemon that participates in IGMP, so
there's no out-of-the box way to get IGMP working between a private
network and the external network in your data center, I suspect (my
guess is that you'd have to muck with making the host running
nova-network or the quantum-l3-agent also run a multi-cast aware
routing software, like Quagga).  In the future, Quantum will enable
pluggable back-ends for logical routers, in which case you'll be
able to get routing back-ends from different vendors and projects,
many of which will support IGMP.

Dan


On Fri, Aug 24, 2012 at 3:59 AM, Juris ju...@zee.lv wrote:
 Hi all,

 Do you have any experience configuring OpenStack to work with IGMP traffic?
 If I have IGMP server and appropriate network infrastructure, what is
 the best way to bound it with one of OpenStack's private networks?




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~



[Openstack] [nova] Meeting Notes

2012-08-24 Thread Vishvananda Ishaya
Hello Everyone,

We had another productive nova meeting yesterday. I've included the summary 
below.

Vish

Meeting summary
---

* http://wiki.openstack.org/Meetings/Nova  (vishy, 21:01:14)

* Role Call  (vishy, 21:01:34)
  * LINK: Agenda: http://wiki.openstack.org/Meetings/Nova  (vishy,
21:01:55)
  * Attendeees: maoy, dansmith, markmc, ttx  (vishy, 21:02:46)
  * Atendees: jk0, dprince  (vishy, 21:03:22)
  * Attendees: vishy, maoy, dansmith, markmc, ttx, jk0, dprince  (vishy,
21:04:01)

* FFE for Entry Points  (vishy, 21:04:11)
  * Entry Points will be deferred to Grizzly  (vishy, 21:05:17)
  * ACTION: summit discussion needed for entry points  (vishy, 21:05:48)
  * ACTION: mtaylor to schedule summit discussion for entry points
(vishy, 21:07:05)

* Feature Freeze Progress  (vishy, 21:07:21)
  * LINK:
https://blueprints.launchpad.net/nova/+spec/os-api-network-create
(vishy, 21:07:44)
  * HELP: need review on https://review.openstack.org/#/c/9847/  (vishy,
21:08:07)
  * LINK:
https://blueprints.launchpad.net/nova/+spec/project-specific-flavors
(vishy, 21:08:45)
  * HELP: need review on https://review.openstack.org/#/c/11270/
(vishy, 21:09:02)

* Bug Triage Plan  (vishy, 21:09:43)
  * ACTION: vishy to triage 20 new bugs  (vishy, 21:10:29)
  * LINK:
https://bugs.launchpad.net/nova/+bugs?search=Searchfield.status=New
(vishy, 21:11:01)
  * ACTION: dprince to triage 20 new bugs  (vishy, 21:11:12)
  * ACTION: markmc to triage 20 (more) new bugs  (vishy, 21:11:41)

* Critical Bugs  (vishy, 21:13:33)
  * target bugs to rc1 if they should be fixed for rc1  (vishy,
21:16:21)
  * LINK: https://launchpad.net/nova/+milestone/folsom-rc1  (vishy,
21:16:24)
  * LINK: https://review.openstack.org/#/c/10936/  (vishy, 21:20:05)
  * ACTION: dprince to verify https://review.openstack.org/#/c/10936/
works on postgres  (vishy, 21:21:25)
  * ACTION: markmc to send an email about
https://bugs.launchpad.net/bugs/1039665  (vishy, 21:29:46)

* XML Support in Nova  (vishy, 21:30:09)
  * LINK: https://launchpad.net/~openstack-qa-core  (dansmith, 21:35:37)
  * LINK: https://review.openstack.org/#/c/11411/  (dansmith, 21:36:12)
  * ACTION: blamar to unblock tempest queue!  (vishy, 21:37:43)
  * ACTION: vishy to switch the api sample testing to an env variable so
it can get approved  (vishy, 21:41:15)
  * ACTION: vishy to create an etherpad for work around api samples
testing  (vishy, 21:43:39)

* Open Discussion  (vishy, 21:44:07)
  * LINK: https://review.openstack.org/#/c/11444/  (maoy, 21:44:55)


Re: [Openstack] [nova] Meeting Notes

2012-08-24 Thread Mark Collier
Thanks Vishy, these meeting notes are super helpful IMHO. 



On Aug 24, 2012, at 5:25 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hello Everyone,
 
 We had another productive nova meeting yesterday. I've included the summary 
 below.
 
 Vish
 
 Meeting summary
 ---
 
 * http://wiki.openstack.org/Meetings/Nova  (vishy, 21:01:14)
 
 * Role Call  (vishy, 21:01:34)
  * LINK: Agenda: http://wiki.openstack.org/Meetings/Nova  (vishy,
21:01:55)
  * Attendeees: maoy, dansmith, markmc, ttx  (vishy, 21:02:46)
  * Atendees: jk0, dprince  (vishy, 21:03:22)
  * Attendees: vishy, maoy, dansmith, markmc, ttx, jk0, dprince  (vishy,
21:04:01)
 
 * FFE for Entry Points  (vishy, 21:04:11)
  * Entry Points will be deferred to Grizzly  (vishy, 21:05:17)
  * ACTION: summit discussion needed for entry points  (vishy, 21:05:48)
  * ACTION: mtaylor to schedule summit discussion for entry points
(vishy, 21:07:05)
 
 * Feature Freeze Progress  (vishy, 21:07:21)
  * LINK:
https://blueprints.launchpad.net/nova/+spec/os-api-network-create
(vishy, 21:07:44)
  * HELP: need review on https://review.openstack.org/#/c/9847/ (vishy,
21:08:07)
  * LINK:
https://blueprints.launchpad.net/nova/+spec/project-specific-flavors
(vishy, 21:08:45)
  * HELP: need review on https://review.openstack.org/#/c/11270/
(vishy, 21:09:02)
 
 * Bug Triage Plan  (vishy, 21:09:43)
  * ACTION: vishy to triage 20 new bugs  (vishy, 21:10:29)
  * LINK:
https://bugs.launchpad.net/nova/+bugs?search=Searchfield.status=New
(vishy, 21:11:01)
  * ACTION: dprince to triage 20 new bugs  (vishy, 21:11:12)
  * ACTION: markmc to triage 20 (more) new bugs  (vishy, 21:11:41)
 
 * Critical Bugs  (vishy, 21:13:33)
  * target bugs to rc1 if they should be fixed for rc1  (vishy,
21:16:21)
  * LINK: https://launchpad.net/nova/+milestone/folsom-rc1  (vishy,
21:16:24)
  * LINK: https://review.openstack.org/#/c/10936/  (vishy, 21:20:05)
  * ACTION: dprince to verify https://review.openstack.org/#/c/10936/
works on postgres  (vishy, 21:21:25)
  * ACTION: markmc to send an email about
https://bugs.launchpad.net/bugs/1039665  (vishy, 21:29:46)
 
 * XML Support in Nova  (vishy, 21:30:09)
  * LINK: https://launchpad.net/~openstack-qa-core  (dansmith, 21:35:37)
  * LINK: https://review.openstack.org/#/c/11411/  (dansmith, 21:36:12)
  * ACTION: blamar to unblock tempest queue!  (vishy, 21:37:43)
  * ACTION: vishy to switch the api sample testing to an env variable so
it can get approved  (vishy, 21:41:15)
  * ACTION: vishy to create an etherpad for work around api samples
testing  (vishy, 21:43:39)
 
 * Open Discussion  (vishy, 21:44:07)
  * LINK: https://review.openstack.org/#/c/11444/  (maoy, 21:44:55)


[Openstack] Quantum vs. Nova-network in Folsom

2012-08-24 Thread Dan Wendlandt
tl;dr  both Quantum and nova-network will be core and fully supported
in Folsom.

Hi folks,

Thierry, Vish and I have been spending some time talking about OpenStack
networking in Folsom, and in particular the availability of
nova-network now that Quantum is a core project.  We wanted to share
our current thinking with the community to avoid confusion.

With a project like OpenStack, there's a fundamental trade-off between
the rate of introducing new capabilities and the desire for stability
and backward compatibility.  We agreed that OpenStack is at a point in
its growth cycle where the cost of disruptive changes is high.  As a
result, we've decided that even with Quantum being core in Folsom, we
will also continue to support nova-network as it currently exists in
Folsom.  There is, of course, overhead to this approach, but we think
it is worth it.

With this in mind, a key question becomes: how do we direct users to
the networking option that is right for them?  We have the following
guidelines:

1) For users who require only very basic networking (e.g.,
nova-network Flat, FlatDHCP), there's little difference between Quantum
and nova-network in such basic use cases, so using nova's built-in
networking for these basic use cases makes sense.

2) There are many use cases (e.g., tenant API for defined topologies
and addresses) and advanced network technologies (e.g., tunneling
rather than VLANs) that Quantum enables that are simply not possible
with nova-network, so if these advanced capabilities are important to
someone deploying OpenStack, they clearly need to use Quantum.

3) There are a few things that are possible in nova-network, but not
in Quantum.  Multi-host is the most significant one, but there are
bound to be other gaps, some of which we will uncover only when people
try their particular use case with Quantum.  For these, users will
have to use nova-network, with the gaps being covered in Quantum
during Grizzly.

As a result, we plan to structure the docs so that you can do a basic
functionality Nova setup with flat networking without requiring
Quantum.  For anything beyond that, we will have an advanced
networking section, which describes the different advanced uses of
OpenStack networking with Quantum, and also highlights reasons that a
user may still want to use nova-network over Quantum.

Moving beyond Folsom, we expect to fully freeze the addition of new
functionality to nova-network, and likely deprecate at least some
portions of the existing nova-network functionality.  Likely this will
leave the basic flat and flat + dhcp nova networking intact, but
reduce complexity in the nova codebase by removing more advanced
networking scenarios that can also be achieved via Quantum.  This
means that even those using nova-network in Folsom should still be
evaluating Quantum if they have networking needs beyond flat networking,
such that this feedback can be incorporated into the Grizzly
deliverable of Quantum.

Thanks,

Dan


-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~



Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-24 Thread Vishvananda Ishaya
I would investigate changing the capabilities to key off of something other 
than hostname. It looks from the table structure like compute_nodes could 
have a many-to-one relationship with services. You would just have to use a 
little more than hostname. Perhaps (hostname, hypervisor_hostname) could be 
used to update the entry?
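As a sketch of that idea (the names here are illustrative, not nova's actual scheduler code), capability records could be keyed on the (hostname, hypervisor_hostname) pair, so one service can report many nodes without the updates overwriting each other:

```python
# Sketch: track per-node capabilities keyed by a composite key, so a single
# nova-compute service (one hostname) can report several bare-metal nodes.
# All names here are illustrative, not nova's real internals.

service_capabilities = {}

def update_service_capabilities(hostname, hypervisor_hostname, caps):
    # (hostname, hypervisor_hostname) is unique per node, whereas hostname
    # alone would make repeated updates clobber one another.
    service_capabilities[(hostname, hypervisor_hostname)] = caps

update_service_capabilities('bespin101', 'node-0', {'ngpus': 4})
update_service_capabilities('bespin101', 'node-1', {'ngpus': 2})
# Both nodes are now tracked under distinct keys.
```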

Vish

On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:

 
  Vish,
 
  I've tested your code and did more testing.
 There are a couple of problems.
 1. Host names must be unique. If not, repeated updates of new 
 capabilities under the same host name simply overwrite one another.
 2. We cannot generate arbitrary host names on the fly.
   The scheduler (I tested filter scheduler) gets host names from db.
   So, if a host name is not in the 'services' table, it is not considered by 
 the scheduler at all.
 
 So, to make your suggestions possible, nova-compute should register N 
 different host names in 'services' table,
 and N corresponding entries in 'compute_nodes' table.
 Here is an example:
 
 mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 | id | host        | binary         | topic     | report_count | disabled | availability_zone |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
 |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
 |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
 |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
 +----+-------------+----------------+-----------+--------------+----------+-------------------+
 
 mysql> select id, service_id, hypervisor_hostname from compute_nodes;
 +----+------------+------------------------+
 | id | service_id | hypervisor_hostname    |
 +----+------------+------------------------+
 |  1 |          3 | bespin101.east.isi.edu |
 |  2 |          4 | bespin101.east.isi.edu |
 +----+------------+------------------------+
 
  Then, nova db (compute_nodes table) has entries of all bare-metal nodes.
 What do you think of this approach?
 Do you have any better approach?
 
  Thanks,
  David
 
 
 
 - Original Message -
 To elaborate, something like the below. I'm not absolutely sure you need to
 be able to set service_name and host, but this gives you the option to
 do so if needed.
 
 diff --git a/nova/manager.py b/nova/manager.py
 index c6711aa..c0f4669 100644
 --- a/nova/manager.py
 +++ b/nova/manager.py
 @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
 
      def update_service_capabilities(self, capabilities):
          """Remember these capabilities to send on next periodic update."""
 +        if not isinstance(capabilities, list):
 +            capabilities = [capabilities]
          self.last_capabilities = capabilities
 
      @periodic_task
 @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
          """Pass data back to the scheduler at a periodic interval."""
          if self.last_capabilities:
              LOG.debug(_('Notifying Schedulers of capabilities ...'))
 -            self.scheduler_rpcapi.update_service_capabilities(context,
 -                    self.service_name, self.host, self.last_capabilities)
 +            for capability_item in self.last_capabilities:
 +                name = capability_item.get('service_name', self.service_name)
 +                host = capability_item.get('host', self.host)
 +                self.scheduler_rpcapi.update_service_capabilities(context,
 +                        name, host, capability_item)
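In plain Python, the behavior the patch above aims for (normalizing a single capability dict to a list and reporting each item with an optional per-item service_name/host override) can be sketched as follows; the classes and the RPC recorder here are stand-ins for illustration, not nova's actual code:

```python
class FakeSchedulerRPC(object):
    """Stand-in for nova's scheduler RPC API; records calls for illustration."""
    def __init__(self):
        self.calls = []

    def update_service_capabilities(self, context, name, host, caps):
        self.calls.append((name, host, caps))


class Manager(object):
    def __init__(self, service_name, host, rpcapi):
        self.service_name = service_name
        self.host = host
        self.scheduler_rpcapi = rpcapi
        self.last_capabilities = None

    def update_service_capabilities(self, capabilities):
        # Accept either a single dict or a list of dicts.
        if not isinstance(capabilities, list):
            capabilities = [capabilities]
        self.last_capabilities = capabilities

    def report(self, context=None):
        # Each item may override service_name/host, e.g. one entry per
        # bare-metal node sitting behind a single nova-compute service.
        for item in self.last_capabilities:
            name = item.get('service_name', self.service_name)
            host = item.get('host', self.host)
            self.scheduler_rpcapi.update_service_capabilities(
                context, name, host, item)


rpc = FakeSchedulerRPC()
m = Manager('compute', 'bespin101', rpc)
m.update_service_capabilities([{'host': 'bespin101-0', 'ngpus': 4},
                               {'host': 'bespin101-1', 'ngpus': 2}])
m.report()
# rpc.calls now holds one entry per reported node.
```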
 
 On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
  Hi Vish,
 
  We are trying to change our code according to your comment.
 I want to ask a question.
 
 a) modify driver.get_host_stats to be able to return a list of
 host
 stats instead of just one. Report the whole list back to the
 scheduler. We could modify the receiving end to accept a list as
 well
 or just make multiple calls to
 self.update_service_capabilities(capabilities)
 
  Modifying driver.get_host_stats to return a list of host stats is
  easy.
 Making multiple calls to
 self.update_service_capabilities(capabilities) doesn't seem to work,
 because 'capabilities' is overwritten each time.
 
  Modifying the receiving end to accept a list seems to be easy.
 However, 'capabilities' is assumed to be dictionary by all other
 scheduler routines,
 it looks like that we have to change all of them to handle
 'capability' as a list of dictionary.
 
  If my understanding is correct, it would affect many parts of the
  scheduler.
 Is it what you recommended?
 
  Thanks,
  David
 
 
 - Original Message -
 This was an immediate goal, the bare-metal nova-compute node could
 keep an internal database, but report capabilities through nova in
 the
 common way with the changes below. Then the scheduler wouldn't need
 access to the 

Re: [Openstack] Cannot create snapshots of instances running not on the controller

2012-08-24 Thread Vishvananda Ishaya
Actually it looks like a different error. For some reason container format is 
being sent in as none on the second node.

Is it possible the original image that you launched the vm from has been 
deleted? For some reason it can't determine the container format.

If not, can you also make sure that your versions of glance and 
python-glanceclient are the same on both nodes?

you should be able to do `pip freeze` to see the installed versions.


Vish

On Aug 24, 2012, at 12:10 AM, Alessandro Tagliapietra 
tagliapietra.alessan...@gmail.com wrote:

 Hi Vish,
 
 I had already a setting:
 
 glance_api_servers=10.0.0.1:9292
 
 i've also tried to add
 
 glance_host=10.0.0.1
 
 but i got the same error.. Also, after changing configuration and restarting 
 nova-compute restarts all instances, is that normal?
 
 Best
 
 Alessandro
 
 On 23 Aug 2012, at 20:24, Vishvananda Ishaya 
 vishvana...@gmail.com wrote:
 
 looks like the compute node has a bad setting for glance_api_servers on the 
 second node.
 
 because glance_api_servers defaults to $glance_host:$glance_port, you should 
 be able to fix it by setting:
 
 glance_host = ip where glance is running
 
 in your nova.conf on the second node.
 
 Vish
 
 On Aug 23, 2012, at 10:15 AM, Alessandro Tagliapietra 
 tagliapietra.alessan...@gmail.com wrote:
 
 Hi all,
 
 i've a controller which is running all service and a secondary controller 
 which is un multi_host so it's running compute network and api-metadata. 
 From the dashboard i can successfully create snapshots of instances running 
 on the controller but when i try to create a snapshot of an instance on a 
 compute node i get in its logs:
 
 == /var/log/nova/nova-compute.log ==
 2012-08-23 19:08:14 ERROR nova.rpc.amqp [req-66389a04-b071-4641-949b-3df04da85d08 a63f5293c5454a979bddff1415a216f6 e8c3367ff91d44b1ab1b14eb63f48bf7] Exception during message handling
 2012-08-23 19:08:14 TRACE nova.rpc.amqp Traceback (most recent call last):
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 183, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     sys.exc_info())
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.gen.next()
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 952, in snapshot_instance
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     self.driver.snapshot(context, instance_ref, image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return f(*args, **kw)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 714, in snapshot
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_file)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 306, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     _reraise_translated_image_exception(image_id)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 304, in update
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     image_meta = client.update_image(image_id, image_meta, data)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/client.py", line 195, in update_image
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     res = self.do_request("PUT", "/images/%s" % image_id, body, headers)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 58, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return func(self, *args, **kwargs)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 420, in do_request
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     headers=headers)
 2012-08-23 19:08:14 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 75, in wrapped
 2012-08-23 19:08:14 TRACE nova.rpc.amqp     return func(self, method, url, body, headers)
 2012-08-23 19:08:14 TRACE 

[Openstack] OpenStack Community Weekly Newsletter (Aug 17-24)

2012-08-24 Thread Stefano Maffulli

Highlights of the week


  OpenStack at CloudOpen
  http://www.openstack.org/blog/2012/08/openstack-at-cloudopen/

OpenStack is a protagonist of CloudOpen, the only conference providing a
collaboration and education space dedicated to advancing the open cloud.
Next week, from Aug 29 to 31 in San Diego there will be plenty of
chances to hear talks about OpenStack and how it's shaping the cloud
industry.

On Tuesday Aug 28th join the OpenStack community at The Hopping Pig
http://www.thehoppingpig.com/ for a party!  Food & drinks sponsored
by HP, Intel, Opscode, Rackspace, SUSE, and Ubuntu. The event will
immediately follow the official CloudOpen happy hour, and is within
walking distance of the Andaz. Reserve your ticket
http://openstack-cloudopen2012.eventbrite.com/.


  OpenStack at PuppetConf: Tim Bell of CERN to Keynote
  
http://www.openstack.org/blog/2012/08/openstack-at-puppetconf-tim-bell-of-cern-to-keynote/

PuppetConf is coming up September 27-28 in San Francisco, and we're
excited to announce some great OpenStack content, including a keynote
presentation from Tim Bell of CERN!


  OpenStack Won Unprecedented Popularity in Asia/Pacific
  
http://www.openstack.org/blog/2012/08/openstack-won-unprecedented-popularity-in-asiapacific/

On August 10 -11, the first two-day OpenStack Asia-Pacific Conference
(OSAC) was held in Beijing and Shanghai concurrently. This conference is
 jointly organized by CSDN (Chinese Software Develop the Net), the
world's largest Chinese IT technology community and the OpenStack user
group (COSUG). The presentations are on slideshare:
http://www.slideshare.net/HuiCheng2/tag/2012osac


  Submitting new features to Nova
  
http://blogs.gnome.org/markmc/2012/08/20/submitting-new-features-to-nova/

Mark McLoughlin wrote down a few pieces of advice for someone submitting a
large feature patch to Nova.


  OpenStack Folsom & Glance
  http://bcwaldon.cc/2012/08/20/openstack-folsom-glance-overview.html

Brian Waldon recaps what landed in Glance in the past months. These are
most of the features that will make it in Folsom release.


Tips and Tricks

  * By Derek Higgins: Listing openstack keystone credentials
http://goodsquishy.com/2012/08/listing-openstack-keystone-credentials/
  * By Zmanda Team: Storing Pebbles or Boulders: Optimizing Swift Cloud
for different workloads http://www.zmanda.com/blogs/?p=894
  * By Mate Lakat: Hello Xen API host plugin
http://blogs.citrix.com/2012/08/17/hello-xen-api-host-plugin/
  * By Derek Higgins: rackspace cloud files from the command line

http://goodsquishy.com/2012/08/rackspace-cloud-files-from-the-command-line/
  * By Alessandro Tagliapietra: Hetzner Failover IP routing tool for
openstack

http://www.alexnetwork.it/2012/08/20/openstack/hetzner-failover-ip-routing-tool-for-openstack.html
  * By Brian Waldon: Using Warlock & JSON Schemas
http://bcwaldon.cc/2012/08/19/using-warlock-and-json-schemas.html
  * By Everett Toews: Logging in jclouds
http://blog.phymata.com/2012/08/18/logging-in-jclouds/ and How I
get a Token from the Rackspace Open Cloud from the Command Line
http://blog.phymata.com/2012/08/17/get-a-token-from-rackspace/
  * By Chmouel Boudjnah http://blog.chmouel.com/: Using
python-novaclient against Rackspace Cloud next generation (powered
by OpenStack)

http://blog.chmouel.com/2012/08/17/using-python-novaclient-against-rackspace-cloud-next-generation-powered-by-openstack/


Upcoming Events

  * OPENSTACK REVOLUTION
http://openstack-cloudopen2012.eventbrite.com/ Aug 28, 2012 -- San
Diego, CA RSVP http://openstack-cloudopen2012.eventbrite.com/
  * Australian OpenStack User Group -- Adelaide Meetup with SAGE-AU
http://aosug.openstack.org.au/events/66911242/ Aug 28, 2012 --
Details http://aosug.openstack.org.au/events/66911242/
  * Block Storage Service is Cemented by Cinder
http://www.meetup.com/OpenStack-LA/events/74425192/ Aug 30, 2012
-- Los Angeles, CA Details
http://www.meetup.com/OpenStack-LA/events/74425192/
  * OpenStack Swift meetup
http://www.meetup.com/openstack/events/77706042/ Aug 30, 2012 --
San Francisco, CA Details
http://www.meetup.com/openstack/events/77706042/
  * OpenStack Summit http://openstack.org/ Oct 15 -- 18, 2012 -- San
Diego, CA


Other news

  * Feature freeze period started: we're on our way to Folsom release
  * OpenStack Project Meeting 2012-08-21: Summary

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-21-21.02.html
and Meeting log

http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-21-21.02.log.html


Welcome new contributors

Celebrating the first patches submitted this week by:

  * Alex Yang

/The weekly newsletter is a way for the community to learn about all the
various activities occurring on a weekly basis. If you would like to add
content to a weekly 

Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Patrick Petit
Hi Joseph and All,

You are pointing the root cause of my question. The question is about how to 
specify capabilities for a compute node so that they can be compared with the 
extra specs. I think how to define extra specs in a flavor is clear enough.

So, some capabilities are standard and are generated dynamically. Others are 
not known or not captured by the system and so not standard (yet), like the 
GPUs, and therefore must be specified somehow. Today, the somehow, as I 
understand things, is to add key/value pairs in nova.conf when/if it is 
supported.

I wanted to make sure I understand the basic principles.

Now, this in my opinion poses a couple of problems and/or calls for additional 
questions:

What is the naming convention to add capabilities in nova.conf? I would suppose 
that not just any key/value pair can be taken for a capability.

How to avoid name clashing with standard capability? At the very least one 
should have an option to print them out (in nova-manage?). Even a simple 
written list would be helpful.

But, are we really comfortable with the idea to define static capabilities in 
nova.conf? that's putting a lot of burden on config management. Also, not 
standard doesn't imply static.

We can certainly live with that for now. But eventually, I think we'll need some 
sort of extension mechanism so that providers can generate whatever 
capabilities they want using their own plugin? Note that capabilities could be 
software related too.

What do you think?
Best,
Patrick

Sent from my iPad

On 24 Aug 2012, at 18:38, Joseph Suh j...@isi.edu wrote:

 Patrick,
 
 Once a new item (key and value pair) is added to the capabilities, it can be 
 compared against extra_specs. The extra_specs can be populated in 
 instance_type_extra_specs table. The items in the extra_specs can start with 
 one of the keywords for operations such as >= and s==. For example, if 
 ngpus: 4 is populated in capability, extra_specs of >= 2 will choose the 
 host. If the extra_specs is >= 5, the host will not be chosen. If no 
 keyword is found at the beginning of the extra_specs (with the latest change 
 in upstream code), the given string is directly compared with capability. For 
 example, if fpu is given as extra_specs, the capability must be fpu to be 
 selected.
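A rough Python sketch of the matching behavior described above (illustrative only; nova's real capabilities filtering supports more operators and types than shown here):

```python
def satisfies(capability, extra_spec):
    """Compare one capability value against one extra_spec string.

    Simplified sketch of the matching described above: a '>=' prefix
    does a numeric comparison, 's==' a string equality, and a spec with
    no recognized prefix is compared directly as a string. Nova's real
    filter handles many more operators; this is not its actual code.
    """
    spec = extra_spec.strip()
    if spec.startswith('>='):
        return float(capability) >= float(spec[2:])
    if spec.startswith('s=='):
        return str(capability) == spec[3:].strip()
    return str(capability) == spec

assert satisfies(4, '>= 2')           # ngpus: 4 satisfies '>= 2'
assert not satisfies(4, '>= 5')       # ngpus: 4 fails '>= 5'
assert satisfies('x86_64', 's== x86_64')
assert satisfies('fpu', 'fpu')        # no operator: direct comparison
```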
 
 If more clarification is needed, please let us know.
 
 Thanks,
 
 Joseph
 
 - Original Message -
 From: David Kang dk...@isi.edu
 To: Patrick Petit patrick.michel.pe...@gmail.com
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
 openstack@lists.launchpad.net
 Sent: Friday, August 24, 2012 11:34:11 AM
 Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications
 
 
 Patrick,
 
 We are using the feature in Bare-metal machine provisioning.
 Some keys are automatically generated by nova-compute.
 For example, hypervisor_type, hypervisor_version, etc. fields are 
 automatically
 put into capabilities by nova-compute (in the case of libvirt).
 So, you don't need to specify that.
 But, if you want to add custom fields, you should put them into nova.conf 
 file of 
 the nova-compute node.
 
 Since the new key is put into 'capabilities', 
 the new key must be different from any other keys in the 'capabilities'.
 If that uniqueness is enforced, it can be any string, I believe.
 
 Thanks,
 David
 
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 - Original Message -
 Hi,
 
 
 Could someone give a practical overview of how to configure and use
 the instance type extra specs extension capability introduced in
 Folsom?
 
 
 How to extend an instance type is relatively clear.
 
 
 Eg.: #nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'
 
 
 The principles of capability advertising are less clear. Is it
 assumed that the key/value pairs are always declared statically as
 flags in nova.conf of the compute node, or can they be generated
 dynamically and if so, who would that be? And also, are the keys
 completely free form strings or strings that are known (reserved) by
 Nova?
 
 
 Thanks in advance for clarifying this.
 
 
 Patrick


Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Joseph Suh
Patrick,

That's a good point. I think the issue is already being discussed at 
https://bugs.launchpad.net/nova/+bug/1039386 as Don Dugger pointed out. 

That being said, as answers to some of your questions: yes, any key/value pair 
can be used and it is the user's (in this case, the system admin's) responsibility to 
avoid conflict at this time. The scope we originally thought was pretty much 
static like the number of GPUs, but there is no reason why it should be static 
as some features can change dynamically. I'd encourage you to propose a 
blueprint if you can. We can also consider the feature, but our team needs to 
discuss it before we can commit to it.

Thanks,

Joseph

- Original Message -
From: Patrick Petit patrick.michel.pe...@gmail.com
To: Joseph Suh j...@isi.edu
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
openstack@lists.launchpad.net, David Kang dk...@isi.edu
Sent: Friday, August 24, 2012 7:37:31 PM
Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

Hi Joseph and All,

You are pointing the root cause of my question. The question is about how to 
specify capabilities for a compute node so that they can be compared with the 
extra specs. I think how to define extra specs in a flavor is clear enough.

So, some capabilities are standard and are generated dynamically. Others are 
not known or not captured by the system and so not standard (yet), like the 
GPUs, and therefore must be specified somehow. Today, the somehow, as I 
understand things, is to add key/value pairs in nova.conf when/if it is 
supported.

I wanted to make sure I understand the basic principles.

Now, this in my opinion poses a couple of problems and/or calls for additional 
questions:

What is the naming convention to add capabilities in nova.conf? I would suppose 
that not just any key/value pair can be taken for a capability.

How to avoid name clashing with standard capability? At the very least one 
should have an option to print them out (in nova-manage?). Even a simple 
written list would be helpful.

But, are we really comfortable with the idea to define static capabilities in 
nova.conf? that's putting a lot of burden on config management. Also, not 
standard doesn't imply static.

We can certainly live with that for now. But eventually, I think we'll need some 
sort of extension mechanism so that providers can generate whatever 
capabilities they want using their own plugin? Note that capabilities could be 
software related too.

What do you think?
Best,
Patrick

Sent from my iPad

On 24 Aug 2012, at 18:38, Joseph Suh j...@isi.edu wrote:

 Patrick,
 
 Once a new item (key and value pair) is added to the capabilities, it can be 
 compared against extra_specs. The extra_specs can be populated in 
 instance_type_extra_specs table. The items in the extra_specs can start with 
 one of the keywords for operations such as >= and s==. For example, if 
 ngpus: 4 is populated in capability, extra_specs of >= 2 will choose the 
 host. If the extra_specs is >= 5, the host will not be chosen. If no 
 keyword is found at the beginning of the extra_specs (with the latest change 
 in upstream code), the given string is directly compared with capability. For 
 example, if fpu is given as extra_specs, the capability must be fpu to be 
 selected.
 
 If more clarification is needed, please let us know.
 
 Thanks,
 
 Joseph
 
 - Original Message -
 From: David Kang dk...@isi.edu
 To: Patrick Petit patrick.michel.pe...@gmail.com
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
 openstack@lists.launchpad.net
 Sent: Friday, August 24, 2012 11:34:11 AM
 Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications
 
 
 Patrick,
 
 We are using the feature in Bare-metal machine provisioning.
 Some keys are automatically generated by nova-compute.
 For example, hypervisor_type, hypervisor_version, etc. fields are 
 automatically
 put into capabilities by nova-compute (in the case of libvirt).
 So, you don't need to specify that.
 But, if you want to add custom fields, you should put them into nova.conf 
 file of 
 the nova-compute node.
 
 Since the new key is put into 'capabilities', 
 the new key must be different from any other keys in the 'capabilities'.
 If that uniqueness is enforced, it can be any string, I believe.
 
 Thanks,
 David
 
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 - Original Message -
 Hi,
 
 
 Could someone give a practical overview of how to configure and use
 the instance type extra specs extension capability introduced in
 Folsom?
 
 
 How to extend an instance type is relatively clear.
 
 
 Eg.: #nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'
 
 
 The principles of capability advertising are less clear. Is it
 assumed that the key/value pairs are always declared statically as
 flags in nova.conf of the compute node, or can they be generated
 

Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Bhandaru, Malini K
There are two entry points:
1) the filters
2) the flavors

From a UI perspective it feels safer to let the user select extra spec keys and 
values from drop-down lists (of pre-defined/registered keys and respective 
values) to avoid typographic errors.
This would then introduce a dependency, with needing to define the keys and 
values a priori.

It is good to keep user interfaces simple and reduce error (simple in this case 
comes at the expense of implementation effort).

For the namespace scope, which hopefully should reduce the likelihood of name 
clashes, a Python package-style approach would make sense: a.b.c.myKey=myValue
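As an illustration of that convention, a hypothetical validation helper (not an existing nova API) could require dotted, package-style keys and reject clashes with a known set of standard capability names:

```python
import re

# Dotted, package-style capability keys: at least two dot-separated
# identifiers, e.g. 'a.b.c.myKey'. Purely illustrative, not nova code.
_KEY_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*(\.[A-Za-z_][A-Za-z0-9_]*)+$')

# A partial, illustrative set of keys nova already generates; an
# operator-defined key must not collide with these.
RESERVED = {'hypervisor_type', 'hypervisor_version'}

def valid_custom_key(key):
    return key not in RESERVED and bool(_KEY_RE.match(key))

assert valid_custom_key('a.b.c.myKey')
assert not valid_custom_key('hypervisor_type')   # clashes with a standard key
assert not valid_custom_key('ngpus')             # not namespaced
```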


-Original Message-
From: openstack-bounces+malini.k.bhandaru=intel@lists.launchpad.net 
[mailto:openstack-bounces+malini.k.bhandaru=intel@lists.launchpad.net] On 
Behalf Of Joseph Suh
Sent: Friday, August 24, 2012 6:40 PM
To: Patrick Petit
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

Patrick,

That's a good point. I think the issue is already being discussed at 
https://bugs.launchpad.net/nova/+bug/1039386 as Don Dugger pointed out. 

That being said, as answers to some of your questions: yes, any key/value pair 
can be used and it is user's (in this case, system admin's) responsibility to 
avoid conflict at this time. The scope we originally thought was pretty much 
static like the number of GPUs, but there is no reason why it should be static 
as some features can change dynamically. I'd encourage you to propose a 
blueprint if you can. We can also consider the feature, but our team needs to 
discuss it before we can commit to it.

Thanks,

Joseph

- Original Message -
From: Patrick Petit patrick.michel.pe...@gmail.com
To: Joseph Suh j...@isi.edu
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
openstack@lists.launchpad.net, David Kang dk...@isi.edu
Sent: Friday, August 24, 2012 7:37:31 PM
Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

Hi Joseph and All,

You are pointing the root cause of my question. The question is about how to 
specify capabilities for a compute node so that they can be compared with the 
extra specs. I think how to define extra specs in a flavor is clear enough.

So, some capabilities are standard and are generated dynamically. Others are 
not known or not captured by the system and so not standard (yet), like the 
GPUs, and therefore must be specified somehow. Today, the somehow, as I 
understand things, is to add key/value pairs in nova.conf when/if it is 
supported.

I wanted to make sure I understand the basic principles.

Now, this in my opinion poses a couple of problems and/or calls for additional 
questions:

What is the naming convention for adding capabilities in nova.conf? I would 
suppose that not just any key/value pair can be taken for a capability.

How do we avoid name clashes with standard capabilities? At the very least one 
should have an option to print them out (in nova-manage?). Even a simple 
written list would be helpful.

But are we really comfortable with the idea of defining static capabilities in 
nova.conf? That puts a lot of burden on configuration management. Also, "not 
standard" doesn't imply "static".

We can certainly live with that for now. But eventually, I think we'll need 
some sort of extension mechanism so that providers can generate whatever 
capabilities they want using their own plugins. Note that capabilities could be 
software related too.

What do you think?
Best,
Patrick

Sent from my iPad

On 24 August 2012, at 18:38, Joseph Suh j...@isi.edu wrote:

 Patrick,
 
 Once a new item (key and value pair) is added to the capabilities, it can be 
 compared against extra_specs. The extra_specs can be populated in 
 instance_type_extra_specs table. The items in the extra_specs can start with 
 one of the keywords for operations such as = and s==. For example, if 
 ngpus: 4 is populated in capability, extra_specs of = 2 will choose the 
 host. If the extra_specs is = 5, the host will not be chosen. If no 
 keyword is found at the beginning of the extra_specs (with the latest change 
 in upstream code), the given string is directly compared with capability. For 
 example, if fpu is given as extra_specs, the capability must be fpu to be 
 selected.
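The operator matching Joseph describes can be sketched roughly as follows. This is an illustration under assumed semantics only: satisfies() is a made-up name, not the actual nova scheduler code.

```python
# Rough sketch of extra_specs matching: a spec may start with an operator
# keyword (e.g. ">=", "s=="); otherwise it is compared directly against the
# capability value. Illustrative only -- not nova's implementation.
def satisfies(extra_spec, capability):
    words = extra_spec.split()
    if len(words) == 2:
        op, operand = words
        if op == '>=':
            return float(capability) >= float(operand)
        if op == '<=':
            return float(capability) <= float(operand)
        if op == 's==':
            return str(capability) == operand
    # No recognized operator prefix: direct string comparison.
    return str(capability) == extra_spec
```

With a capability of ngpus = 4, a spec of ">= 2" matches while ">= 5" does not, mirroring the example above.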
 
 If more clarification is needed, please let us know.
 
 Thanks,
 
 Joseph
 
 - Original Message -
 From: David Kang dk...@isi.edu
 To: Patrick Petit patrick.michel.pe...@gmail.com
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
 openstack@lists.launchpad.net
 Sent: Friday, August 24, 2012 11:34:11 AM
 Subject: Re: [Openstack] [Nova] Instance Type Extra Specs 
 clarifications
 
 
 Patrick,
 
 We are using the feature in Bare-metal machine provisioning.
 Some keys are automatically generated by nova-compute.
 For example, hypervisor_type, hypervisor_version, etc. fields are 
 automatically put into capabilities by nova-compute (in the case of libvirt).

Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-24 Thread heut2008
I have fixed it here  https://review.openstack.org/#/c/11925/

2012/8/25 Sam Su susltd...@gmail.com:
 Hi,

 I also reported this bug:
  https://bugs.launchpad.net/nova/+bug/1040255

  If someone can combine you guys' solutions into a perfect way to fix this
 bug, that would be great.

 BRs,
 Sam


 On Thu, Aug 23, 2012 at 9:27 PM, heut2008 heut2...@gmail.com wrote:

 this bug has been filed here  https://bugs.launchpad.net/nova/+bug/1040537

 2012/8/24 Vishvananda Ishaya vishvana...@gmail.com:
  +1 to this. Evan, can you report a bug (if one hasn't been reported yet)
  and
  propose the fix? Or else I can find someone else to propose it.
 
  Vish
 
  On Aug 23, 2012, at 1:38 PM, Evan Callicoat diop...@gmail.com wrote:
 
  Hello all!
 
  I'm the original author of the hairpin patch, and things have changed a
  little bit in Essex and Folsom from the original Diablo target. I
  believe I
  can shed some light on what should be done here to solve the issue in
  either
  case.
 
  ---
  For Essex (stable/essex), in nova/virt/libvirt/connection.py:
  ---
 
  Currently _enable_hairpin() is only being called from spawn(). However,
  spawn() is not the only place that vifs (veth#) get added to a bridge
  (which
  is when we need to enable hairpin_mode on them). The more relevant
  function
  is _create_new_domain(), which is called from spawn() and other places.
  Without changing the information that gets passed to
  _create_new_domain()
  (which is just 'xml' from to_xml()), we can easily rewrite the first 2
  lines
  in _enable_hairpin(), as follows:
 
  def _enable_hairpin(self, xml):
  interfaces = self.get_interfaces(xml['name'])
 
  Then, we can move the self._enable_hairpin(instance) call from spawn()
  up
  into _create_new_domain(), and pass it xml as follows:
 
  [...]
  self._enable_hairpin(xml)
  return domain
 
  This will run the hairpin code every time a domain gets created, which
  is
  also when the domain's vif(s) gets inserted into the bridge with the
  default
  of hairpin_mode=0.
 
  ---
  For Folsom (trunk), in nova/virt/libvirt/driver.py:
  ---
 
  There've been a lot more changes made here, but the same strategy as
  above
  should work. Here, _create_new_domain() has been split into
  _create_domain()
  and _create_domain_and_network(), and _enable_hairpin() was moved from
  spawn() to _create_domain_and_network(), which seems like it'd be the
  right
  thing to do, but doesn't quite cover all of the cases of vif
  reinsertion,
  since _create_domain() is the only function which actually creates the
  domain (_create_domain_and_network() just calls it after doing some
  pre-work). The solution here is likewise fairly simple; make the same 2
  changes to _enable_hairpin():
 
  def _enable_hairpin(self, xml):
  interfaces = self.get_interfaces(xml['name'])
 
  And move it from _create_domain_and_network() to _create_domain(), like
  before:
 
  [...]
  self._enable_hairpin(xml)
  return domain
 
  I haven't yet tested this on my Essex clusters and I don't have a Folsom
  cluster handy at present, but the change is simple and makes sense.
  Looking
  at to_xml() and _prepare_xml_info(), it appears that the 'xml' variable
  _create_[new_]domain() gets is just a python dictionary, and xml['name']
  =
  instance['name'], exactly what _enable_hairpin() was using the
  'instance'
  variable for previously.
 
  Let me know if this works, or doesn't work, or doesn't make sense, or if
  you
  need an address to send gifts, etc. Hope it's solved!
 
  -Evan
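Evan's change, reduced to pure logic so it can be read (and exercised) without libvirt: hairpin_paths() is a made-up helper, and the brport sysfs location is the common bridge-port layout, which may differ on your kernel. The real code lives in nova/virt/libvirt/ (connection.py on Essex, driver.py on Folsom).

```python
# Sketch of what _enable_hairpin() effectively does after the change above,
# factored as pure logic (no libvirt connection, no sysfs writes). The real
# code writes '1' into each returned file to enable hairpin_mode on the vif.
def hairpin_paths(xml, get_interfaces):
    """Return the sysfs files that would each receive a '1', for every vif
    of the domain described by xml."""
    # xml is the dict produced by to_xml(); xml['name'] == instance['name'],
    # which is why passing xml instead of instance works.
    return ['/sys/class/net/%s/brport/hairpin_mode' % vif
            for vif in get_interfaces(xml['name'])]
```

Calling this from the function that actually creates the domain, rather than only from spawn(), is the point of the change: it then runs every time the vifs are (re)inserted into the bridge.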
 
  On Thu, Aug 23, 2012 at 11:20 AM, Sam Su susltd...@gmail.com wrote:
 
  Hi Oleg,
 
  Thank you for your investigation. Good luck!
 
  Can you let me know if you find out how to fix the bug?
 
  Thanks,
  Sam
 
  On Wed, Aug 22, 2012 at 12:50 PM, Oleg Gelbukh ogelb...@mirantis.com
  wrote:
 
  Hello,
 
  Is it possible that, during snapshotting, libvirt just tears down
  virtual
  interface at some point, and then re-creates it, with hairpin_mode
  disabled
  again?
  This bugfix [https://bugs.launchpad.net/nova/+bug/933640] implies that the
  fix works on spawn of an instance. This means that upon resume after
  snapshot, hairpin is not restored. Maybe if we insert the _enable_hairpin()
  call in the snapshot procedure, it helps.
  We're currently investigating this issue in one of our environments, and
  hope to come up with an answer by tomorrow.
 
  --
  Best regards,
  Oleg
 
  On Wed, Aug 22, 2012 at 11:29 PM, Sam Su susltd...@gmail.com wrote:
 
  My friend has found a way to enable pinging itself when this problem
  happens, but hasn't found out why it happens.
  sudo echo 1 > /sys/class/net/br1000/brif/<virtual-interface-name>/hairpin_mode
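Sam's one-liner can be generalized into a small helper that re-enables hairpin_mode on every port of a bridge. This is a hypothetical script, not part of nova; the sysfs base is a parameter only so the loop can be exercised outside /sys.

```shell
# Hypothetical helper: re-enable hairpin_mode on every port of a bridge.
# Pass "/sys/class/net" as the base on a real system, and run as root.
# Note: 'sudo echo 1 > file' does not work as intended, because the
# redirection runs with the caller's privileges; 'echo 1 | sudo tee file'
# does what was meant.
enable_hairpin_all() {
    bridge=$1
    base=${2:-/sys/class/net}
    for port in "$base/$bridge/brif"/*; do
        [ -d "$port" ] || continue    # skip if the glob matched nothing
        echo 1 > "$port/hairpin_mode"
    done
}
```

This covers the case where several vifs sit on the bridge, instead of fixing one interface at a time.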
 
  I file a ticket to report this problem:
  https://bugs.launchpad.net/nova/+bug/1040255
 
  hopefully someone can find why this happen and solve it.
 
  Thanks,
  Sam
 
 
  On Fri, Jul 20, 2012 at 3:50 PM, Gabriel Hurley
  gabriel.hur...@nebula.com wrote:
 
  I ran into some similar issues with the _enable_hairpin() call. The call is
  allowed to fail silently and (in my case) was failing. I couldn't for the
  life of me figure out why, though,

Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-24 Thread Evan Callicoat
That'll work! Looks good to me. I can't review it on Gerrit but good 
job. Beat me to it :)


On 08/24/2012 09:15 PM, heut2008 wrote:

I have fixed it here  https://review.openstack.org/#/c/11925/


Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread Vishvananda Ishaya
Folsom also supports setting key/value pairs for things like capabilities via 
host aggregates. There is a filter [1] that matches the extra specs by exact 
comparison, just as was done for capabilities before the last patch. The new 
extra specs matching should be added to it. These capabilities can be set 
dynamically by administrators, so it directly supports the use case below.

[1] 
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_instance_extra_specs.py
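In spirit, the exact-match check Vish refers to amounts to roughly the following. These are illustrative names only, not the actual filter class or its method signatures.

```python
# Sketch of exact-match filtering against host aggregate metadata, in the
# spirit of the AggregateInstanceExtraSpecsFilter linked above. A host
# passes if every flavor extra_specs key/value pair appears, verbatim, in
# the metadata of the host's aggregates. Illustrative only.
def host_passes(aggregate_metadata, flavor_extra_specs):
    return all(aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())
```

Because administrators can change aggregate metadata at runtime, this gives the dynamic capabilities Patrick is asking about without touching nova.conf.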

On Aug 24, 2012, at 6:39 PM, Joseph Suh j...@isi.edu wrote:

 Patrick,
 
 That's a good point. I think the issue is already being discussed at 
 https://bugs.launchpad.net/nova/+bug/1039386 as Don Dugger pointed out. 
 
 That being said, as answers to some of your questions: yes, any key/value 
 pair can be used, and it is the user's (in this case, the system admin's) 
 responsibility to avoid conflicts at this time. The scope we originally 
 thought of was pretty much static, like the number of GPUs, but there is no 
 reason why it should be static, as some features can change dynamically. I'd 
 encourage you to propose a blueprint if you can. We can also consider the 
 feature, but our team needs to discuss it before we can commit to it.
 
 Thanks,
 
 Joseph
 
 
 - Original Message -
 From: David Kang dk...@isi.edu
 To: Patrick Petit patrick.michel.pe...@gmail.com
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net) 
 openstack@lists.launchpad.net
 Sent: Friday, August 24, 2012 11:34:11 AM
 Subject: Re: [Openstack] [Nova] Instance Type Extra Specs clarifications
 
 
 Patrick,
 
 We are using the feature in Bare-metal machine provisioning.
 Some keys are automatically generated by nova-compute.
 For example, hypervisor_type, hypervisor_version, etc. fields are 
 automatically
 put into capabilities by nova-compute (in the case of libvirt).
 So, you don't need to specify that.
 But, if you want to add custom fields, you should put them into the nova.conf 
 file of the nova-compute node.
 
 Since the new keys are put into 'capabilities', 
 each new key must be different from any other key in the 'capabilities'.
 If that uniqueness is enforced, it can be any string, I believe.
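 For example, a compute node's custom capabilities might be declared like 
 this. This is a sketch based on the ISI bare-metal work; the flag name and 
 value format may differ in your release, so check the docs for your build.

 ```ini
 # nova.conf on the compute node (Folsom-era ini style); hypothetical values.
 [DEFAULT]
 instance_type_extra_specs = cpu_arch:x86_64, gpus:2, gpu_arch:fermi
 ```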
 
 Thanks,
 David
 
 

Re: [Openstack] VM can't ping self floating IP after a snapshot is taken

2012-08-24 Thread Sam Su
That's great, thank you for your efforts. Can you make a backport for Essex?

Sent from my iPhone

On Aug 24, 2012, at 7:15 PM, heut2008 heut2...@gmail.com wrote:

 I have fixed it here  https://review.openstack.org/#/c/11925/
 

Re: [Openstack-qa-team] Tempest Gating

2012-08-24 Thread Jay Pipes
On 08/21/2012 05:45 PM, Dan Smith wrote:
 In other suites, I've seen an XFAIL result used to mark tests that we
 know are failing right now so that they're not SKIPped like tests that
 are missing some component, but rather just not fatal to the task at
 hand. Maybe something like that would be useful in tempest? If I found a
 bug in Nova right now and wanted to get a test into tempest ASAP to poke
 it, submitting as XFAIL would (a) not break Jenkins because the test
 failed (as expected) and (b) raise a flag when the test started to pass
 to make sure that it gets un-marked as XFAIL.

This would be ideal! Unfortunately, I don't know of a way to do this
with nosetests/unittest(2) in Python. Very open to suggestions, though :)
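For what it's worth, unittest2 (and the stdlib unittest from Python 2.7 onward) does ship an expectedFailure decorator with XFAIL-like semantics: a marked test reports "expected failure" while the bug is open and "unexpected success" once it starts passing. The test case below is a made-up example, not a real Tempest test.

```python
import unittest

class HairpinRegression(unittest.TestCase):
    # @unittest.expectedFailure behaves like XFAIL: the run stays green
    # while this test fails, and flags an "unexpected success" the moment
    # the underlying bug gets fixed.
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1 + 1, 3)   # stands in for the failing check
```

A test marked this way would keep the gate green while the bug is open, and raise a flag by itself when the fix lands, which is exactly the workflow Dan describes.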

 DW They may not be caused by the patch at hand, but servers and volumes
 DW going into error status definitely signal issues, whether they be in
 DW code or environment. I don't have access to the Tempest CI
 DW environment so I don't have much insight into those issues
 DW specifically, though there might be some additional error checking
 DW that we can do to get more information on what is going wrong.
 
 Yeah, I've been trying to reproduce the issues locally, as I'm happy to
 fix them up if I can figure out what the problem is. However, I feel
 like I'm flying blind a bit, without a view into the CI machine itself :)

I believe Jim Blair has addressed this in followup emails...

 DW I'm doing what I can Dan to get your patches reviewed. The trick
 DW being that since there is a dependency chain between most of the
 DW commits, it adds a level of complexity. Jay, who's done most of the
 DW CI setup thus far, is out of country, so I'm trying to find other
 DW folks I can reach out to help stabilize the environment.
 
 Yeah, where is that slacker? :)

Right now? Venice, Italy. In the last few days, Milan, Istanbul, Sofia.
Next couple days, Florence then Rome, then home. Back on the 29th. So
sorry to slack off so much! ;)

Best,
-jay

-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Tempest Gating

2012-08-24 Thread Dan Smith
JB We're almost done with a project to move the build artifacts to a
JB static web server, where we plan to keep them indefinitely.  The
JB artifacts in jenkins do have an expiration (currently about a
JB month), and we actually want to drastically shorten that because
JB long build histories cause performance problems in jenkins.

I see that this has gone live now. Let me be the first to thank you for
this, as the logs load _much_ faster now, which makes me happy :)

-- 
Dan Smith
IBM Linux Technology Center
