[Openstack] Documentation help on Folsom-3 Milestone

2012-08-28 Thread Trinath Somanchi
Hi-

Do we have any documentation to help with configuring and validating the
OpenStack Folsom-3 milestone?

Please help me in this regard.



-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] keystone questions

2012-08-28 Thread pat
Hi Joe,

Thanks for the answer to Q1. Regarding Q2, I was thinking more of Keystone
instances that each have their own storage, interconnected so that their data
is replicated. The single DB in your suggestion looks like a single point of
failure to me.

Thanks for your time

 Pat

On Mon, 27 Aug 2012 09:46:41 -0700, Joseph Heck wrote
 Hi Pat,
 
 On Aug 27, 2012, at 8:09 AM, pat p...@xvalheru.org wrote:
  I have two questions regarding OpenStack Keystone:
  
  Q1) The Folsom release supports domains. A domain can contain multiple
  tenants, and a tenant cannot be shared between domains. Is this right? I
  think so, but want to be sure.
 
 I'm afraid it doesn't. We didn't make sufficient progress with the
 V3 API (which is what incorporates domains) to include that in the
 Folsom release. We expect this to be available with the Grizzly release.
 
  Q2) Is it possible to have a "cluster" of Keystones to avoid Keystone
  becoming a bottleneck? If so, could you point me to a "tutorial"? Or did
  I miss something important?
 
 If by cluster you mean multiple instances to handle requests, then
 absolutely - yes. For this particular response, I'll assume you're
 using a SQL backend for Keystone. Generally you maintain a single
 database - whether that's an HA cluster or a single instance - and
 any number of Keystone service instances can point to and use it.
 



Freehosting PIPNI - http://www.pipni.cz/




[Openstack] [Openstack swift][1.6.0] Regarding the Swift API for controlling a large list of containers

2012-08-28 Thread Irene . Peng-彭怡欣-研究發展部
Hi,

Regarding the Swift API for controlling a large list of containers, I have an
issue here.
For example:
I have 5 containers: apple/banana/lemon/mango/orange.

The API call looks like this:
curl -H 'X-Auth-Token: Token_ID' http://<Proxy_website>/<Account>?marker=banana&limit=2

Expected result:
It should display lemon and mango.

Actual result:
It displays lemon, mango, and orange.

Is it a bug?

Thank you so much.
Best Regards,
Irene Peng




Re: [Openstack] [Openstack swift][1.6.0] Regarding the Swift API for controlling a large list of containers

2012-08-28 Thread Michael Barton
On Tue, Aug 28, 2012 at 5:46 AM, Irene.Peng-彭怡欣-研究發展部 wrote:
 The API call looks like this:
 curl -H 'X-Auth-Token: Token_ID'
 http://<Proxy_website>/<Account>?marker=banana&limit=2


I think you just need to put quotation marks around that URL -- the &
is causing the curl command to be backgrounded by the shell, and
cutting off the URL being sent.

- Mike
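
For reference, a minimal sketch of the same account listing done from Python
(assuming the requests library; the proxy URL and token below are placeholder
values), which sidesteps the shell-quoting issue entirely:

import requests

# Placeholder values -- substitute your proxy URL and a valid auth token.
ACCOUNT_URL = "http://proxy.example.com:8080/v1/AUTH_test"
TOKEN = "Token_ID"

# marker and limit are sent as real query parameters, so no shell is
# involved and the '&' cannot cause the request to be truncated.
resp = requests.get(
    ACCOUNT_URL,
    headers={"X-Auth-Token": TOKEN},
    params={"marker": "banana", "limit": 2},
)
resp.raise_for_status()
# The plain-text account listing returns one container name per line,
# so for the five containers above this should print: lemon, mango.
print(resp.content)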



Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-28 Thread Patrick Petit
Hi Don,

I added a comment in https://bugs.launchpad.net/nova/+bug/1039386 regarding
your point.
Best regards,
Patrick

2012/8/24 Dugger, Donald D donald.d.dug...@intel.com

 Patrick-

 We've enhanced `nova-manage' to manipulate the `extra_specs' entries, cf.
 https://blueprints.launchpad.net/nova/+spec/update-flavor-key-value.
 You can add an `extra_specs' key/value pair to a flavor with the command:

 nova-manage instance_type add_key m1.humongous cpu_type itanium

 And delete a key/value pair with the command:

 nova-manage instance_type del_key m1.humongous cpu_type

 We're in the process of enhancing `python-novaclient' and Horizon with
 similar capabilities and hope to have them ready for the Folsom release.

 Currently, there's no hook to set `extra_specs' through the `nova.conf'
 file; the mechanism is to dynamically add the `extra_specs' key/values
 after the administrator has created a flavor.

 Currently, the keys are completely free form, but there are some issues
 with that, so that should change. Check out the bug:

 https://bugs.launchpad.net/nova/+bug/1039386

 Based upon that bug we need to put some sort of scope on the keys to
 indicate which components a key applies to. I'm in favor of adding a new
 column to the `extra_specs' table that would explicitly set the scope, but
 an alternative would be to encode the scope into the key itself, something
 like `TrustedFilter:trust' to indicate that the `trust' key only applies
 to the `TrustedFilter' scheduler component. Feel free to chime in on the
 bug entry on how to specify the scope; once we decide how to deal with
 this I'll create a patch to handle it.

 --
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786

 From: openstack-bounces+donald.d.dugger=intel@lists.launchpad.net
 [mailto:openstack-bounces+donald.d.dugger=intel@lists.launchpad.net]
 On Behalf Of Patrick Petit
 Sent: Friday, August 24, 2012 7:13 AM
 To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: [Openstack] [Nova] Instance Type Extra Specs clarifications

 Hi,

 Could someone give a practical overview of how to configure and use the
 instance type extra specs capability introduced in Folsom?

 How to extend an instance type is relatively clear.

 E.g.: # nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'

 The principle of capability advertising is less clear. Is it assumed that
 the key/value pairs are always declared statically as flags in the
 nova.conf of the compute node, or can they be generated dynamically, and
 if so, by whom? Also, are the keys completely free-form strings or strings
 that are known (reserved) by Nova?

 Thanks in advance for clarifying this.

 Patrick




-- 
Give me a place to stand, and I shall move the earth with a lever
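
For readers puzzled by the 's==' prefix in the set_key example quoted above:
extra_specs values may carry an operator prefix that the scheduler applies
when matching a flavor's requirements against a host's capabilities. A
simplified Python sketch of the idea (illustrative only -- nova's actual
matching code supports more operators):

def match_extra_spec(requirement, capability):
    # 's==' means string equality; a bare value means plain equality.
    words = requirement.split()
    if len(words) == 2 and words[0] == "s==":
        return str(capability) == words[1]
    return str(capability) == requirement

# The example from the thread: cpu_arch must equal 'x86_64'.
assert match_extra_spec("s== x86_64", "x86_64")
assert not match_extra_spec("s== x86_64", "i686")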


[Openstack] Blueprint Proposal: Inflight Monitoring Service ...

2012-08-28 Thread Sandy Walsh
Not sure if an email like this gets sent automagically by LP anymore, so here 
goes ...

I'd love to get some feedback on it. 

Thanks!
-S

The Blueprint:
https://blueprints.launchpad.net/nova/+spec/monitoring-service

The Spec:
http://wiki.openstack.org/PerformanceMonitoring

The Branch:
https://review.openstack.org/#/c/11179/




Re: [Openstack] keystone questions

2012-08-28 Thread Joseph Heck

On Aug 28, 2012, at 12:41 AM, pat p...@xvalheru.org wrote:
 Thanks for the answer to Q1. Regarding Q2, I was thinking more of Keystone
 instances that each have their own storage, interconnected so that their
 data is replicated. The single DB in your suggestion looks like a single
 point of failure to me.

Hi Pat,

Yes - it definitely could be. If you're setting up Keystone in an HA
configuration, then I'd expect that you actually have a MySQL cluster backing
the database, so that a single instance of MySQL can fail while services are
maintained. Keystone, like Nova, Glance, etc., stashes its state somewhere;
the WSGI processes that run Keystone keep that state in MySQL, so MySQL is the
piece you need to watch and care for.

Many implementations of OpenStack that I've seen have shared the MySQL instance
between Keystone, Nova, and Glance, quite successfully.

If you were using LDAP entirely for the backend instead of the SQL-backed
mechanisms, then you'd need a replicated/failover cluster for LDAP as well.

-joe
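
To make the topology concrete, here is a sketch of what "multiple Keystone
instances, one database" looks like from a client's point of view (the
endpoints are hypothetical, and the failover loop is illustrative, not part of
any OpenStack library): any instance can serve the request, because they all
share the same backing store.

import json
import requests

# Hypothetical: three Keystone service instances, all configured
# with the same SQL connection string.
KEYSTONE_ENDPOINTS = [
    "http://keystone1.example.com:5000/v2.0/tokens",
    "http://keystone2.example.com:5000/v2.0/tokens",
    "http://keystone3.example.com:5000/v2.0/tokens",
]

payload = json.dumps({"auth": {"passwordCredentials": {
    "username": "demo", "password": "secret"}}})

token = None
for url in KEYSTONE_ENDPOINTS:
    try:
        resp = requests.post(url, data=payload,
                             headers={"Content-Type": "application/json"})
    except requests.ConnectionError:
        continue  # that instance is down; try the next one
    if resp.status_code == 200:
        token = json.loads(resp.content)["access"]["token"]["id"]
        break
print(token)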





Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Francis J. Lacoste
On 12-08-27 08:32 PM, Andrew Clay Shafer wrote:
 For exporting from Launchpad, surely someone at Canonical would be able
 and willing to get that list of emails.
 

We can provide the mailing list pickle (Mailman 2) which contains all
the email addresses as well as preferences.

 If people think migrating the archive is important, then it shouldn't be
 that hard to sort that out either, once we decide what is acceptable.


Similarly, we can give you the mbox file from which the HTML archive is
generated.

Cheers

-- 
Francis J. Lacoste
francis.laco...@canonical.com





Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Andrew Clay Shafer
On Tue, Aug 28, 2012 at 11:59 AM, Francis J. Lacoste 
francis.laco...@canonical.com wrote:

 On 12-08-27 08:32 PM, Andrew Clay Shafer wrote:
  For exporting from Launchpad, surely someone at Canonical would be able
  and willing to get that list of emails.
 

 We can provide the mailing list pickle (Mailman 2) which contains all
 the email addresses as well as preferences.

  If people think migrating the archive is important, then it shouldn't be
  that hard to sort that out either, once we decide what is acceptable.


 Similarly, we can give you the mbox file from which the HTML archive is
 generated.



Thanks Francis

See, the system works... :)


Re: [Openstack] [openstack-dev] [Glance] Implementing Common Image Properties

2012-08-28 Thread Hancock, Tom (HP Cloud Services)
Yes, we'd love to see these included also.
: Tom

From: openstack-bounces+tom.hancock=hp@lists.launchpad.net 
[mailto:openstack-bounces+tom.hancock=hp@lists.launchpad.net] On Behalf Of 
Gabe Westmaas
Sent: 20 August 2012 20:18
To: OpenStack List; openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] [Glance] Implementing Common Image 
Properties

It would definitely be great to see these as generally accepted properties.  In 
general, I would hope that anything that is accepted by the community 
eventually makes it into core API, and shipping with them on by default is a 
great first step.  Mostly, I'd like to see us able to turn off the 
org.openstack__1__ part of the properties, if everyone agrees they are useful 
:)

Also just to highlight one of the links in the mailing list discussions, this 
is where we pulled those properties from: 
http://wiki.openstack.org/CommonImageProperties

Gabe

From: Brian Waldon bcwal...@gmail.com
Reply-To: OpenStack List openstack-...@lists.openstack.org
Date: Mon, 20 Aug 2012 14:11:20 -0400
To: openstack@lists.launchpad.net, OpenStack List openstack-...@lists.openstack.org
Subject: [openstack-dev] [Glance] Implementing Common Image Properties
Subject: [openstack-dev] [Glance] Implementing Common Image Properties

We discussed a common set of image properties a while back on the mailing list 
([1] and [2]). The general idea was to define a common way to expose useful 
image properties (distro, version, architecture, packages, etc).

It doesn't appear we ever came to a hard consensus, but Rackspace has been 
publishing the following properties in their deployment:

org.openstack__1__architecture = x64
org.openstack__1__os_distro = org.ubuntu
org.openstack__1__os_version = 12.04

If the idea is to get all deployments to publish these properties, I think what 
Rackspace has implemented would be a good starting point. The question I want 
to pose to the community is this:

Does it make sense to ship a set of JSON schemas with Glance that represent 
these properties? Doing so wouldn't explicitly require all deployments to use 
them, but it would reduce the work required to publish them and help ensure 
they have a common meaning across deployments. Keep in mind that we would be 
shipping these as a part of Glance, not as a part of the Images API spec.

I personally think it would be great to provide these, and to do so in the 
Folsom release so those deployers riding major releases wouldn't be left out in 
the dark.

All comments welcome!

Brian Waldon


[1] http://markmail.org/message/5bd5zkyre57ppi3n
[2] http://markmail.org/message/soaldxs4lovd2uir
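
To make the proposal concrete, here is a sketch of what such a schema could
look like, written as a Python dict for illustration. The property names are
the ones Rackspace publishes above; the structure is an assumption, not the
schema Glance would actually ship:

# Hypothetical JSON-schema-style description of the common properties.
common_image_properties = {
    "type": "object",
    "properties": {
        "org.openstack__1__architecture": {"type": "string"},
        "org.openstack__1__os_distro": {"type": "string"},
        "org.openstack__1__os_version": {"type": "string"},
    },
}

# The values from the Rackspace deployment above, as a sample document.
example = {
    "org.openstack__1__architecture": "x64",
    "org.openstack__1__os_distro": "org.ubuntu",
    "org.openstack__1__os_version": "12.04",
}

# A trivial consistency check against the sketch.
for key, spec in common_image_properties["properties"].items():
    if spec["type"] == "string":
        assert isinstance(example[key], str)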


[Openstack] [swift] important upgrade note for swift deployers

2012-08-28 Thread John Dickinson
Swift 1.7
=

The next release of Swift will be version 1.7. This will be our
release for OpenStack Folsom, and is scheduled to land mid-September.
There is an important change for deployers in this release. This
email has the details so you can begin planning your upgrade path.

What's the change
=

The version bump is based in part on a recent patch that changed the
on-disk format of the ring files
(https://github.com/openstack/swift/commit/f8ce43a21891ae2cc00d0770895b556eea9c7845
 ).
This was a necessary change that addresses a major performance issue
introduced by a change in Python between Py2.6 and Py2.7. See
https://bugs.launchpad.net/swift/+bug/1031954 for more detail.

This patch essentially changes a default in a backwards incompatible
way. Swift 1.7 can read the old format but only write the new format.
Therefore deployers can easily upgrade but not easily downgrade or
roll back this part of the system.

This information is included in the official docs at
http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings
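
If you need to check which format a given ring file is in, a best-effort
sniffer might look like the sketch below. It assumes (per the commit
referenced above -- verify against your swift version) that new-format rings
start with a 'R1NG' magic string inside the gzip stream, while old-format
rings are a bare pickled dict:

import gzip

def ring_file_format(path):
    # Read the first four bytes of the decompressed stream and look
    # for the new format's magic string.
    with gzip.open(path, 'rb') as gz:
        magic = gz.read(4)
    if magic == b'R1NG':
        return 'new (swift >= 1.7)'
    return 'old (pickled)'

print(ring_file_format('/etc/swift/object.ring.gz'))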


Safe Upgrade Path
=

This is how deployers can safely upgrade their existing swift cluster:

1) Upgrade the proxy, account, container, and object nodes as normal.
   Cluster operations will continue to work and you can still upgrade
   with no downtime, as always.

2) Once your entire cluster is upgraded, only then upgrade the version
   of swift on the box that builds your ring files (ie where you run
   swift-ring-builder). Upgrading this piece will change the on-disk
   format of your generated ring files. Deploy the new ring files to the
   swift cluster.

Notes:

 - Swift 1.7 can read both old and new format ring files.

 - If you upgrade the swift-ring-builder to the new format and
   generate new ring files with it, you cannot downgrade your cluster
   and use the new rings.


Oh No! I really, really have to downgrade my cluster


1) Downgrade your box where you run swift-ring-builder

2) Rebalance and write out rings (to put them in the old format) and
   deploy them to your cluster

3) Downgrade the rest of the swift cluster





[Openstack] nova-manage db sync fails

2012-08-28 Thread Afef MDHAFFAR
Dear Openstack users,

I am trying to install openstack on an ubuntu server 12.04, with Xen as a
virtualization technology.
I unfortunately got a problem while trying to install the nova service.
Actually, the nova-manage db sync fails and returns the following
warnings:
---
2012-08-28 18:47:24 DEBUG nova.utils [-] backend module
'nova.db.sqlalchemy.migration' from
'/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.pyc' from
(pid=3101) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
2012-08-28 18:47:24 WARNING nova.utils [-]
/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py:639:
SADeprecationWarning: The 'listeners' argument to Pool (and
create_engine()) is deprecated.  Use event.listen().
  Pool.__init__(self, creator, **kw)

2012-08-28 18:47:24 WARNING nova.utils [-]
/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py:145:
SADeprecationWarning: Pool.add_listener is deprecated.  Use event.listen()
  self.add_listener(l)


Would you please help me to fix this problem?

Thank you

Best regards,
Afef


Re: [Openstack] nova-manage db sync fails

2012-08-28 Thread Kevin L. Mitchell
On Tue, 2012-08-28 at 18:53 +0200, Afef MDHAFFAR wrote:
 I am trying to install openstack on an ubuntu server 12.04, with Xen
 as a virtualization technology.
 I unfortunately got a problem while trying to install the nova
 service. Actually, the nova-manage db sync fails and returns the
 following warnings:

These are just warnings and can be safely ignored at this point.  The
next release of nova should not emit these warnings.
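
For context: the warning is about SQLAlchemy deprecating pool listeners in
favor of its event API. A minimal sketch of the new style, illustrative only
and not nova's code:

import sqlalchemy
from sqlalchemy import event

engine = sqlalchemy.create_engine("sqlite://")

# Replaces create_engine(..., listeners=[...]): register a callback
# that runs for every new DB-API connection in the pool.
@event.listens_for(engine, "connect")
def on_connect(dbapi_conn, connection_record):
    pass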

-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




[Openstack] Question regarding how the nova database is updated

2012-08-28 Thread Heng Xu
Hi folks:
I was trying to add a new column to the Nova database table compute_nodes;
however, I was not sure how it is updated periodically. All I know is that
the consume_from_instance() function in host_manager.py updates the memory,
and I was not able to find how nova subsequently updates the compute_nodes
table in the database. Any help will be appreciated; please let me know the
exact location (file, function, etc.) and how I can mimic that to update my
own new field in the compute_nodes table. Thanks in advance.

Heng



Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Stefano Maffulli
On 08/28/2012 08:59 AM, Francis J. Lacoste wrote:
 We can provide the mailing list pickle (Mailman 2) which contains all
 the email addresses as well as preferences.

That's good to hear, thanks. Canonical has always been conservative
about disclosing email addresses of Launchpad's members. Let's take it
offline for the details.

 Similarly, we can give you the mbox file from which the HTML archive is
 generated.

I have used this feature before :)

Now, the main question is still open: where should the General mailing
list go? Does anybody disagree that we should merge this list into 'Operators'?

/stef



[Openstack] Can't change X-Storage-Url from localhost

2012-08-28 Thread David Krider
I seem to be having this exact problem, but the fix doesn't work for me:

https://answers.launchpad.net/swift/+question/157858

No matter what I set the default_swift_cluster to, or if I add a bind_ip
to the DEFAULT section, I can't get X-Storage-Url to come back as
anything other than localhost:

dkrider@workstation:~$ curl -k -v -H 'X-Storage-User: test:tester' -H
'X-Storage-Pass: testing' https://external_ip:8080/auth/v1.0
* About to connect() to external_ip port 8080 (#0)
*   Trying external_ip... connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
*  subject: C=AU; ST=Some-State; O=Internet Widgits Pty Ltd
*  start date: 2012-08-14 13:51:32 GMT
*  expire date: 2012-09-13 13:51:32 GMT
* SSL: unable to obtain common name from peer certificate
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: external_ip:8080
> Accept: */*
> X-Storage-User: test:tester
> X-Storage-Pass: testing
>
< HTTP/1.1 200 OK
< X-Storage-Url:
https://127.0.0.1:8080/v1/AUTH_e6ecde05-959a-4898-907b-5bec495fa4f0
< X-Storage-Token: AUTH_tk36c97915aed242b7b9a93aa05c06ba0c
< X-Auth-Token: AUTH_tk36c97915aed242b7b9a93aa05c06ba0c
< Content-Length: 113
< Date: Tue, 28 Aug 2012 18:38:34 GMT

* Connection #0 to host external_ip left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
{"storage": {"default": "local", "local":
"https://127.0.0.1:8080/v1/AUTH_e6ecde05-959a-4898-907b-5bec495fa4f0"}}

Have I run into a bug, or is there something simple I'm overlooking in
the config file?

/etc/swift/proxy-server.conf
-
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache swauth proxy-server

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.1.7.10:11211,10.1.7.11:11211

[filter:swauth]
use = egg:swauth#swauth
set_log_level = DEBUG
super_admin_key = asdfqwer
default_swift_cluster =
local#https://external_ip:8080/v1#https://127.0.0.1:8080/v1

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
log_level = DEBUG


[Openstack] [Glance] Unable to retrieve request id from context

2012-08-28 Thread Aaron Rosen
Hi,

I'm running devstack, and when I boot VMs I seem to be running into this
error in glance, which I believe is causing the cirros image to just hang
on "Booting from ROM...". I was wondering if anyone has run into this
before?  (Logs below)

Thanks,

Aaron


arosen@controller:/opt/stack$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| a1a0f8ea-bdba-4892-8fff-2760442b59be | cirros-0.3.0-x86_64-uec         | ACTIVE |        |
| c40709b0-c554-4d1b-9a18-9fc393fc6d35 | cirros-0.3.0-x86_64-uec-kernel  | ACTIVE |        |
| 15d78597-4384-4b1f-894f-d1e9eee4b732 | cirros-0.3.0-x86_64-uec-ramdisk | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+



nova boot --image a1a0f8ea-bdba-4892-8fff-2760442b59be --flavor 1 --nic
net-id=e2908a4c-98cd-4ab5-b61c-131f7ed488e9 teststest


After booting the VM, the g-api log shows:


2012-08-28 14:56:27 DEBUG glance.api.middleware.version_negotiation [-]
Determining version of request: HEAD
/v1/images/a1a0f8ea-bdba-4892-8fff-2760442b59be Accept:  from (pid=17874)
process_request
/opt/stack/glance/glance/api/middleware/version_negotiation.py:42
Determining version of request: HEAD
/v1/images/a1a0f8ea-bdba-4892-8fff-2760442b59be Accept:
2012-08-28 14:56:27 DEBUG glance.api.middleware.version_negotiation [-]
Using url versioning from (pid=17874) process_request
/opt/stack/glance/glance/api/middleware/version_negotiation.py:55
Using url versioning
2012-08-28 14:56:27 DEBUG glance.api.middleware.version_negotiation [-]
Matched version: v1 from (pid=17874) process_request
/opt/stack/glance/glance/api/middleware/version_negotiation.py:67
Matched version: v1
2012-08-28 14:56:27 DEBUG glance.api.middleware.version_negotiation [-] new
uri /v1/images/a1a0f8ea-bdba-4892-8fff-2760442b59be from (pid=17874)
process_request
/opt/stack/glance/glance/api/middleware/version_negotiation.py:68
new uri /v1/images/a1a0f8ea-bdba-4892-8fff-2760442b59be
2012-08-28 14:56:27 DEBUG glance.common.client
[29d06659-c2e2-45f4-b9e9-c5e2f787562a edfe191c96d2460fa2c83a204d03893a
0d4d4b373b344f2ba292e9a56d78c2e8] Constructed URL:
http://0.0.0.0:9191/images/a1a0f8ea-bdba-4892-8fff-2760442b59be from
(pid=17874) _construct_url /opt/stack/glance/glance/common/client.py:464
Constructed URL:
http://0.0.0.0:9191/images/a1a0f8ea-bdba-4892-8fff-2760442b59be
2012-08-28 14:56:27 DEBUG glance.registry.client
[29d06659-c2e2-45f4-b9e9-c5e2f787562a edfe191c96d2460fa2c83a204d03893a
0d4d4b373b344f2ba292e9a56d78c2e8] Registry request GET
/images/a1a0f8ea-bdba-4892-8fff-2760442b59be HTTP 200 request id None from
(pid=17874) do_request /opt/stack/glance/glance/registry/client.py:94
Registry request GET /images/a1a0f8ea-bdba-4892-8fff-2760442b59be HTTP 200
request id None
2012-08-28 14:56:27 ERROR glance.api.middleware.context
[29d06659-c2e2-45f4-b9e9-c5e2f787562a edfe191c96d2460fa2c83a204d03893a
0d4d4b373b344f2ba292e9a56d78c2e8] Unable to retrieve request id from context
Unable to retrieve request id from context


Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Brian Schott
At the risk of getting bad e-karma for cross posting to openstack-operators, 
that might be the place to post that question.  I for one disagree that we 
should merge the openstack general list into 
openstack-operat...@lists.openstack.org and the only other vote I caught on 
this thread also disagreed.

Several reasons: 

1) new users will look for openst...@openstack.lists.openstack.org  because 
openstack-dev@ and openstack-operators@ are both specific things.  community@ 
might have been an option, but that is taken already.  
2) operations guys are just as specialized as devs in terms of what they want 
to talk about; it isn't meant for general "why OpenStack" questions.
3) if/when you migrate email addresses / logs it will be easier to move them to 
a brand new list.  Otherwise you will have to try to merge history and not step 
on existing data.
4) reusing an email address for the sake of optimization of having too many 
lists at the cost of community confusion is false optimization, you'll just 
get more non-dev traffic on the dev list if the choice is -dev or -operators.

Brian

-
Brian Schott, CTO
Nimbis Services, Inc.
brian.sch...@nimbisservices.com
ph: 443-274-6064  fx: 443-274-6060



On Aug 28, 2012, at 1:54 PM, Stefano Maffulli stef...@openstack.org wrote:

 Now, the main question is still open: where should the General mailing
 list go? Anybody disagrees that we should merge this list into 'Operators'?



Re: [Openstack] Future of Launchpad OpenStack mailing list (this list)

2012-08-28 Thread Andrew Clay Shafer
On Tue, Aug 28, 2012 at 3:14 PM, Brian Schott 
brian.sch...@nimbisservices.com wrote:

 At the risk of getting bad e-karma for cross posting to
 openstack-operators, that might be the place to post that question.  I for
 one disagree that we should merge the openstack general list into
 openstack-operat...@lists.openstack.org and the only other vote I caught
 on this thread also disagreed.


Everyone who responded to the question so far opposed merging the general
list with operators.



 Several reasons:

 1) new users will look for openst...@openstack.lists.openstack.org because
 openstack-dev@ and openstack-operators@ are both specific things.
 community@ might have been an option, but that is taken already.
 2) operations guys are just as specialized as devs in terms of what they
 want to talk about, it isn't meant for general why openstack questions.
 3) if/when you migrate email addresses / logs it will be easier to move
 them to a brand new list.  Otherwise you will have to try to merge history
 and not step on existing data.
 4) reusing an email address for the sake of optimization of having too
 many lists at the cost of community confusion is false optimization,
 you'll just get more non-dev traffic on the dev list if the choice is -dev
 or -operators.


yes, yes, yes and yes.


Re: [Openstack] Can't change X-Storage-Url from localhost

2012-08-28 Thread David Krider
I finally found the place to search the archives. I think this is the
answer:

https://answers.launchpad.net/swift/+question/148450

I will have a play.

On 08/28/2012 02:56 PM, David Krider wrote:
 I seem to be having this exact problem, but the fix doesn't work for me:

 https://answers.launchpad.net/swift/+question/157858

 No matter what I set the default_swift_cluster to, or if I add a
 bind_ip to the DEFAULT section, I can't get X-Storage-Url to come back
 as anything other than localhost:



[Openstack] Upgrading from devstack pre-F3/quantum v1/OVS to latest not going well :-(

2012-08-28 Thread Syd (Sydney) Logan
Hi,

Is there a recommended procedure for upgrading nodes that were configured 
pre-Folsom 3 to use quantum V1/OVS that were deployed with devstack? I probably 
should have asked this question before trying, but I went ahead and tried. I 
had a multi-node setup that I was driving with Horizon that was working very 
well. Now I'm just trying to get a single node setup working, and not getting 
far.

To get sync'd up with the latest, I did the following:

$ rm -rf /opt/stack (this is where devstack pulled things to)
$ rm -rf /etc/quantum; rm -rf /etc/nova

In the devstack localrc:

Removed n-net from ENABLED_SERVICES
Added q-dhcp to ENABLED_SERVICES (I had this disabled in pre-F3 after e-mails 
with Aaron Rosen when he helped me get going earlier; I've tried both ways and
it seems not to make a difference)
Added NOVA_USE_QUANTUM=v2 (but this doesn't seem to make a difference either)

And I ran devstack.

I got no errors when I ran devstack.

When I launched Horizon, some problems are evident. There is no launch instance 
button on the Instances page. Because I don't yet know the command UI enough to 
spin up and configure VMs, I figured I'd try running the devstack exercise.sh 
script to see what happens. It creates a few VMs, but none get an IP address 
(before I used to get IPs in 10.4.128.0).  It reports all tests passed, as 
well. If I click through in the UI on the VM, I see that the networking address
it assigns to all VMs is the value Net1.

I've looked at console logs for the VMs created and see failures trying to dhcp 
(that's why I naively added q-dhcp back to ENABLED_SERVICES), but as I 
mentioned above, adding q-dhcp didn't help, and I'm wondering if it was a good 
idea anyway since Aaron steered me away from it before.

Output of ps shows expected services running (e.g., OVS daemon, plugins, 
agents) and services lists displayed by Horizon (e.g., nova, quantum, etc.) all 
seem normal to me.

Notably missing is the OVS gw- interface that was present before I upgraded (at 
http://wiki.openstack.org/RunningWQuantumV2Api there is this: "Note: with v2,
Quantum no longer uses the L3 + NAT logic from nova-network. Quantum will not
have the equivalent functionality until F-3, so you won't be able to ping the
VMs from the nova controller host." Is that the reason?)  The gw interface is
the way I could ping VMs from the host.

The missing gateway, horizon UI missing the create instance button, and not 
getting networks for VMs spun up by devstack's exercise script are the major 
symptoms.  I trust that devstack is up to sync with what is happening in 
Folsom, and that I am actually pulling down F3 code at this point (I've not 
tried to verify this).  I'm not aware of any need to tweak the devstack 
exercise script, I am assuming it is designed to work as is.

I'm thinking of wiping my entire disk and starting from scratch in case blowing 
away /etc/nova etc. and /opt/stack were not enough to reset state, but before I 
do this, any pointers to links or mail messages (I've scanned for relevant 
posts but missed finding any) that would be helpful before I do this?

Thanks,

syd


[Openstack] A plea from an OpenStack user

2012-08-28 Thread Ryan Lane
Yesterday I spent the day finally upgrading my nova infrastructure
from diablo to essex. I've upgraded from bexar to cactus, and cactus
to diablo, and now diablo to essex. Every single upgrade is becoming
more and more difficult. It's not getting easier, at all. Here's some
of the issues I ran into:

1. Glance changed from using image numbers to uuids for images. Nova's
reference to these weren't updated. There was no automated way to do
so. I had to map the old values to the new values from glance's
database then update them in nova.

2. Instance hostnames are changed every single release. In bexar and
cactus it was the ec2 style id. In diablo it was changed and hardcoded
to instance-ec2-style-id. In essex it is hardcoded to the instance
name; the instance's ID is configurable (with a default of
instance-ec2-style-id), but it only affects the name used in
virsh/the filesystem. I put a hack into diablo (thanks to Vish for
that hack) to fix the naming convention as to not break our production
deployment, but it only affected the hostnames in the database,
instances in virsh and on the filesystem were still named
instance-ec2-style-id, so I had to fix all libvirt definitions and
rename a ton of files to fix this during this upgrade, since our
naming convention is the ec2-style format. The hostname change still
affected our deployment, though. It's hardcoded. I decided to simply
switch hostnames to the instance name in production, since our
hostnames are required to be unique globally; however, that changes
how our puppet infrastructure works too, since the certname is by
default based on fqdn (I changed this to use the ec2-style id). Small
changes like this have giant rippling effects in infrastructures.

3. There used to be global groups in nova. In keystone there are no
global groups. This makes performing actions on sets of instances
across tenants incredibly difficult; for instance, I did an in-place
ubuntu upgrade from lucid to precise on a compute node, and needed to
reboot all instances on that host. There's no way to do that without
database queries fed into a custom script. Also, I have to have a
management user added to every single tenant and every single
tenant-role.

4. Keystone's LDAP implementation in stable was broken. It returned no
roles, many values were hardcoded, etc. The LDAP implementation in
nova worked, and it looks like its code was simply ignored when auth
was moved into keystone.

My plea is for the developers to think about how their changes are
going to affect production deployments when upgrade time comes.

It's fine that glance changed its id structure, but the upgrade should
have handled that. If a user needs to go into the database in their
deployment to fix your change, it's broken.

The constant hardcoded hostname changes are totally unacceptable; if
you change something like this it *must* be configurable, and there
should be a warning that the default is changing.

The removal of global groups was a major usability killer for users.
The removal of the global groups wasn't necessarily the problem,
though. The problem is that there were no alternative management
methods added. There's currently no reasonable way to manage the
infrastructure.

I understand that bugs will crop up when a stable branch is released,
but the LDAP implementation in keystone was missing basic
functionality. Keystone simply doesn't work without roles. I believe
this was likely due to the fact that the LDAP backend has basically no
tests and that Keystone light was rushed in for this release. It's
imperative that new required services at least handle the
functionality they are replacing, when released.

That said, excluding the above issues, my upgrade went fairly smoothly
and this release is *way* more stable and performs *way* better, so
kudos to the community for that. Keep up the good work!

- Ryan



Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Gabriel Hurley
Well said, Ryan. Agreed 100% on all points, both in the specific examples and 
the overarching theme of n+1 compatibility. Upgrade paths have got to be clean 
and well-documented, and deprecations must be done according to responsible, 
established timelines from here on out.

We're verifiably doing better between Essex and Folsom, but we still have a 
LONG way to go to call our upgrade process anything resembling great.

There was talk of trying to set up test infrastructure that would roll out 
Essex and then upgrade it to Folsom in some automated fashion so we could start 
learning where it breaks. Was there any forward momentum on that?

All the best,

- Gabriel


Re: [Openstack] nova-manage db sync fails

2012-08-28 Thread Vishvananda Ishaya
I have had problems using nova-manage db sync when using sqlite and the 
packaged version of sqlalchemy. If you are trying to do this, you probably need 
to do a sudo pip install -U sqlalchemy

Vish





Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Troy Toman
I hope everyone takes time to read Ryan's note. We all need to keep this in 
mind even more so going forward. Almost all of the required changes can be 
implemented without causing disruption, but it won't happen by accident. We try 
to cope with this by absorbing changes in smaller bites (by staying close to 
trunk). But that's still challenging and really just a coping strategy, not a 
solution.

I think we can do better.

Troy





Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Michael Still
On 08/29/2012 07:26 AM, Ryan Lane wrote:

 My plea is for the developers to think about how their changes are
 going to affect production deployments when upgrade time comes.

I for one would like to see the ops bug tag used more to try and track
these issues. If an upgrade makes something harder for operations
people, developers at the very least should create an ops bug to fix
that so that its at least tracked.

Mikal




Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Ryan Lane
 There was talk of trying to set up test infrastructure that would roll out 
 Essex and then upgrade it to Folsom in some automated fashion so we could 
 start learning where it breaks. Was there any forward momentum on that?


This would be awesome. Wrapping automated tests around upgrades would
greatly improve the situation. Most of the issues that ops runs into
during upgrades are unexpected changes, which are the same things that
will likely be hit when testing upgrades in an automated way.

- Ryan



Re: [Openstack] Question regarding how the nova database is updated

2012-08-28 Thread Joseph Suh
Heng,

I may be wrong, as the latest code is somewhat different from the one I followed 
last time, but it looks like the information is updated at line 440 in 
nova/nova/compute/resource_tracker.py.

Thanks,

Joseph
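
To illustrate the pattern in general terms, here is a sketch with hypothetical
names (not nova's real classes or signatures): a resource tracker periodically
collects stats from the hypervisor and persists them through a single db call,
which is the natural place to write an added column:

class ResourceTracker(object):
    """Generic sketch of the periodic-update pattern (hypothetical API)."""

    def __init__(self, host, db_api, hypervisor):
        self.host = host
        self.db = db_api
        self.hypervisor = hypervisor

    def update_available_resource(self, context):
        # Called periodically by the compute service.
        stats = self.hypervisor.get_stats()
        values = {
            "memory_mb_used": stats["memory_mb_used"],
            "my_new_field": stats["my_new_field"],  # your added column
        }
        self.db.compute_node_update(context, self.host, values)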




Re: [Openstack] A plea from an OpenStack user

2012-08-28 Thread Robert Collins
On Wed, Aug 29, 2012 at 10:52 AM, Ryan Lane rl...@wikimedia.org wrote:
 There was talk of trying to set up test infrastructure that would roll out 
 Essex and then upgrade it to Folsom in some automated fashion so we could 
 start learning where it breaks. Was there any forward momentum on that?


 This would be awesome. Wrapping automated tests around upgrades would
 greatly improve the situation. Most of the issues that ops runs into
 during upgrades are unexpected changes, which are the same things that
 will likely be hit when testing upgrades in an automated way.

It would be fascinating (for me at least :)) to know the upgrade
process you use - how many stages you use, do you have multiple
regions and use one/some as canaries? Does the downtime required to do
an upgrade affect you? Do you run skewed versions (e.g. folsom nova,
essex glance) or do you do lock-step upgrades of all the components?

For Launchpad we've been moving more and more to a model of permitting
temporary skew so that we can do rolling upgrades of the component
services. That seems in-principle doable here - and could make it
easier to smoothly transition between versions, at the cost of a
(small) amount of attention to detail while writing changes to the
various apis.

-Rob



[Openstack] My diablo to essex upgrade process (was: A plea from an OpenStack user)

2012-08-28 Thread Ryan Lane
 It would be fascinating (for me at least :)) to know the upgrade
 process you use - how many stages you use, do you have multiple
 regions and use one/some as canaries? Does the downtime required to do
 an upgrade affect you? Do you run skewed versions (e.g. folsom nova,
 essex glance) or do you do lock-step upgrades of all the components?


This was a particularly difficult upgrade, since we needed to change
so many things at once.

We did a lock-step upgrade this time around. Keystone basically
required that. As far as I could tell, if you enable keystone for
nova, you must enable it for glance. Also, I know that the components
are well tested for compatibility within the same release, so I
thought it would be best to not include any extra complications.

I did my initial testing in a project within my infrastructure (hooray
for inception). After everything worked in a testing setup and was
puppetized, I tested on production hardware. I'm preparing a region in
a new datacenter, so this time I used that hardware for
production-level testing. In the future we're going to set aside a
small amount of cheap-ish hardware for production-level testing.

This upgrade required an operating system upgrade as well. I took the
following steps for the actual upgrade:

1. Backed up all databases, and LDAP
2. Disabled the OpenStackManager extension in the controller's wiki
(we have a custom interface integrated with MediaWiki)
3. Turned off all openstack services
4. Made the required LDAP changes needed for Keystone's backend
5. Upgraded the controller to precise, then made required changes (via
puppet), which includes installing/configuring keystone
6. Upgraded the glance and nova databases
7. Upgraded the network node to precise, then made required changes
(via puppet) - this caused network downtime for a few minutes during
the reboot and puppet run
8. Upgraded a compute node that wasn't in use to precise, made
required changes (via puppet), and tested instance creation and
networking
9. Upgraded a compute node that was in use, rebooted a couple
instances to ensure they'd start properly and have proper networking,
then rebooted all instances on the node
10. Upgraded the remaining compute nodes and rebooted their instances

I had notes on how to rollback during various phases of the upgrade.
This was mostly moving services to different nodes.
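
For concreteness, here is a rough sketch of what steps 1 and 6 can look like,
with hypothetical commands that assume MySQL-backed nova and glance and an
OpenLDAP directory (all paths invented):

    # step 1: back up the databases and the directory
    mysqldump --all-databases > /backup/openstack-pre-upgrade.sql
    slapcat -l /backup/ldap-pre-upgrade.ldif

    # step 6: run the schema migrations once the new packages are in place
    glance-manage db_sync
    nova-manage db sync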

Downtime was required because of the need to change OS releases. That
said, my environment is mostly test and development and some
semi-production uses that can handle downtime, so I didn't put a large
amount of effort into completely avoiding downtime.

 For Launchpad we've been moving more and more to a model of permitting
 temporary skew so that we can do rolling upgrades of the component
 services. That seems in-principle doable here - and could make it
 easier to smoothly transition between versions, at the cost of a
 (small) amount of attention to detail while writing changes to the
 various apis.


Right now it's not possible to run multiple versions of openstack
services as far as I know. It would be ideal to be able to run all
folsom and grizzly services (for instance) side-by-side while the
upgrade is occurring. At minimum it would be nice for the next release
to be able to use the old release's schema so that upgrades can be
attempted in a way that's much easier to rollback.

- Ryan

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Default rules for the 'default' security group

2012-08-28 Thread Tom Fifield

On 24/08/12 20:50, Yufang Zhang wrote:

2012/8/24 Gabriel Hurley gabriel.hur...@nebula.com

I traced this through the code at one point looking for the same
thing. As it stands, right now there is *not* a mechanism for
customizing the default security group's rules. It's created
programmatically the first time the rules for a project are
retrieved, with no hook to add or change its characteristics.

I'd love to see this be possible, but it's definitely a feature
request.


Really agreed. I have created a blueprint to track this issue:
https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group


At NeCTAR, rather than modifying the default group we create 3 new 
groups (SSH, ICMP, HTTP/S) for the tenant at the time of tenant 
creation, and have found this to be a reasonable compromise between 
security and convenience. This has its issues of course, but perhaps the 
blueprint could be extended to cover the creation of new groups, as well 
as modifying the existing default one...
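
For anyone wanting to script that approach, here is a rough sketch using the
Essex-era python-novaclient (v1_1) API; the credentials, group names, and
rules are invented placeholders:

    # Hypothetical sketch: pre-create per-tenant groups instead of
    # touching the 'default' group. Assumes admin credentials scoped
    # to the newly created tenant.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'new-tenant',
                         'http://keystone:5000/v2.0/')

    groups = [
        ('ssh',  'allow inbound ssh', [('tcp', 22, 22)]),
        ('icmp', 'allow ping',        [('icmp', -1, -1)]),
        ('web',  'allow http/https',  [('tcp', 80, 80), ('tcp', 443, 443)]),
    ]

    for name, description, rules in groups:
        group = nova.security_groups.create(name, description)
        for protocol, from_port, to_port in rules:
            # rules are opened to the world here; tighten the cidr as needed
            nova.security_group_rules.create(group.id,
                                             ip_protocol=protocol,
                                             from_port=from_port,
                                             to_port=to_port,
                                             cidr='0.0.0.0/0')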




- Gabriel

*From:* openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
*On Behalf Of* Boris-Michel Deschenes
*Sent:* Thursday, August 23, 2012 7:59 AM
*To:* Yufang Zhang; openstack@lists.launchpad.net
*Subject:* Re: [Openstack] Default rules for the 'default' security group

I'm very interested in this, we run Essex and have a very bad
workaround for this currently, but it would be great to be able to
do this (set default rules for the default security group).

Boris

*From:* openstack-bounces+boris-michel.deschenes=ubisoft@lists.launchpad.net
*On Behalf Of* Yufang Zhang
*Sent:* 23 August 2012 08:43
*To:* openstack@lists.launchpad.net
*Subject:* [Openstack] Default rules for the 'default' security group

Hi all,

Could I ask how to set the default rules for the 'default' security
group for all the users in OpenStack? Currently, the 'default'
security group has no rules by default, so newly created instances
can only be accessed by instances from the same group.

Is there any method to set default rules (such as ssh or icmp) for
the 'default' security group for all users in OpenStack, so that I
don't have to remind new users to modify their security group settings
the first time they log into OpenStack and create instances? I have
tried HP Cloud, which is built on OpenStack, and it permits
ssh and ping to instances in the 'default' security group.

Best Regards.

Yufang




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] associating provider vlan and IP allocation pool with a subnet

2012-08-28 Thread Naveen Joy (najoy)
Hi All,

In the latest quantum code, I believe it is possible to associate a provider 
vlan and an IP allocation pool with a subnet. Can someone provide the quantum 
client CLI or the API calls to accomplish this?
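
For reference, a hedged sketch of what the Folsom-era client syntax looks
like, assuming the OVS plugin with the provider extension enabled; the
network name, VLAN ID, and address ranges are invented:

    quantum net-create net-vlan100 --provider:network_type vlan \
        --provider:physical_network physnet1 --provider:segmentation_id 100
    quantum subnet-create net-vlan100 10.0.100.0/24 \
        --allocation-pool start=10.0.100.10,end=10.0.100.200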

Thanks much,
Naveen
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Documentation help on Folsom -3 Milestone

2012-08-28 Thread Anne Gentle
Hi Trinath -

Could you be more specific about your needs? Documentation for Folsom is an
ongoing effort and the docs do not track to milestone releases. Are there
specific areas like networking or volumes or computing that you are wanting
to validate and configure?

To run milestone-proposed branches using Devstack, you can edit the
stackrc file prior to running the stack.sh script to point to specific repo
branches of each project. See http://devstack.org/stackrc.html.
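
For example, the per-project variables follow this pattern (the branch values
here are hypothetical):

    NOVA_BRANCH=milestone-proposed
    GLANCE_BRANCH=milestone-proposed
    KEYSTONE_BRANCH=milestone-proposed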

Anne

On Tue, Aug 28, 2012 at 1:43 AM, Trinath Somanchi 
trinath.soman...@gmail.com wrote:

 Hi-

 Do we have any documentation help in configuration and validation of
 Openstack Folsom - 3 milestone?

 Please help me in this regard.



 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] After deleting an instance, nobody can create any instance.

2012-08-28 Thread Sam Su
Hi,

I have an Essex cluster with 6 compute nodes and one control node, and it has
worked fine for the past several months. Last week, after someone deleted an
instance in a project, no new instances could be launched in that project.

When creating a new instance in the project, it displays:
Error: An error occurred. Please try again.

The following error info comes up in nova-api.log:
2012-08-28 19:14:23 ERROR nova.api.openstack
[req-6fb211c1-60cf-4a69-83e1-f9e7da30b458 6023cea36f784448b667922894ca7102
afdee06258774b2d9e768c08b62dbbf2] Caught error: Remote error:
InstanceNotFound Instance 603 could not be found.

Here is the nova-api.log info: http://pastebin.com/evuDMdrU

Actually, this instance with id 603 has already been deleted, which is very
strange. Does anyone have any idea about this issue?

By the way, if an existing instance in another project is terminated, the
same problem occurs in that project.
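
One way to look for stale rows that still reference the deleted instance,
purely as read-only diagnostics, assuming the stock Essex MySQL schema and a
configured mysql client (instance id 603 is taken from the log above):

    mysql nova -e "SELECT id, deleted FROM instances WHERE id = 603;"
    mysql nova -e "SELECT * FROM fixed_ips WHERE instance_id = 603;"
    mysql nova -e "SELECT * FROM virtual_interfaces WHERE instance_id = 603;"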

Thanks ahead,
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Documentation help on Folsom -3 Milestone

2012-08-28 Thread Trinath Somanchi
Hi Anne-

Thanks for the reply.

I'm testing the OpenStack Folsom-3 milestone. Can you help me with the changes
needed in the configuration and running of the Folsom-3 milestone code base
with respect to the Essex release?

Thanking you.


-
Trinath

On Wed, Aug 29, 2012 at 7:43 AM, Anne Gentle a...@openstack.org wrote:

 Hi Trinath -

 Could you be more specific about your needs? Documentation for Folsom is
 an ongoing effort and the docs do not track to milestone releases. Are
 there specific areas like networking or volumes or computing that you are
 wanting to validate and configure?

 To run milestone-proposed branches using Devstack, you can edit the
 stackrc file prior to running the stack.sh script to point to specific repo
 branches of each project. See http://devstack.org/stackrc.html.

 Anne

 On Tue, Aug 28, 2012 at 1:43 AM, Trinath Somanchi 
 trinath.soman...@gmail.com wrote:

 Hi-

 Do we have any documentation help in configuration and validation of
 Openstack Folsom - 3 milestone?

 Please help me in this regard.



 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Upgrading from devstack pre-F3/quantum v1/OVS to latest not going well :-(

2012-08-28 Thread Syd (Sydney) Logan
I played around with horizon a bit more and discovered that the demo project 
page does have a Create Instance button, but when I try to do so I get an 
error message saying that horizon is unable to get quota information. I tracked 
down a bug filed 5 days ago by someone seeing the same message, and it was 
punted over to nova after the horizon team concluded that it was a nova bug.

I'm going to see if I can work around this problem in horizon (or root-cause it) 
tomorrow, only because I have no other obvious course of action at the moment.

Here is my localrc, the same as what was working well before I grabbed latest 
devstack (and it grabbed the latest git versions of the openstack apps):

HOST_IP=192.168.4.1
FLAT_INTERFACE=eth1
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096
FLOATING_RANGE=192.168.4.128/25
MULTI_HOST=True
Q_INTERFACE=eth1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=xyzpdqlazydog
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,cinder,c-sch,c-api,c-vol,n-sch,n-novnc,n-xvnc,n-cauth,horizon,mysql,rabbit,openstackx,q-svc,quantum,q-agt
Q_PLUGIN=openvswitch
Q_AUTH_STRATEGY=noauth
NOVA_USE_QUANTUM_API=v2

syd

From: Syd (Sydney) Logan
Sent: Tuesday, August 28, 2012 2:19 PM
To: 'openstack@lists.launchpad.net'
Subject: Upgrading from devstack pre-F3/quantum v1/OVS to latest not going well 
:-(

Hi,

Is there a recommended procedure for upgrading nodes that were deployed with 
devstack and configured pre-Folsom-3 to use quantum v1/OVS? I probably should 
have asked this question before trying, but I went ahead and tried. I had a 
multi-node setup that I was driving with Horizon that was working very well. 
Now I'm just trying to get a single node setup working, and not getting far.

To get sync'd up with the latest, I did the following:

$ rm -rf /opt/stack (this is where devstack pulled things to)
$ rm -rf /etc/quantum; rm -rf /etc/nova

In the devstack localrc:

Removed n-net from ENABLED_SERVICES
Added q-dhcp to ENABLED_SERVICES (I had this disabled in pre-F3 after e-mails 
with Aaron Rosen when he helped me get going earlier, I've tried both ways and 
seems not to make a difference)
Added NOVA_USE_QUANTUM=v2 (but this doesn't seem to make a difference either)

And I ran devstack.

I got no errors when I ran devstack.

When I launched Horizon, some problems are evident. There is no launch instance 
button on the Instances page. Because I don't yet know the command-line UI well 
enough to spin up and configure VMs, I figured I'd try running the devstack 
exercise.sh script to see what happens. It creates a few VMs, but none get an 
IP address (before, I used to get IPs in 10.4.128.0). It reports all tests 
passed, as well. If I click through in the UI on a VM, I see that the 
networking address it assigns all VMs is the value Net1.

I've looked at console logs for the VMs created and see failures trying to dhcp 
(that's why I naively added q-dhcp back to ENABLED_SERVICES), but as I 
mentioned above, adding q-dhcp didn't help, and I'm wondering if it was a good 
idea anyway since Aaron steered me away from it before.

Output of ps shows expected services running (e.g., OVS daemon, plugins, 
agents) and services lists displayed by Horizon (e.g., nova, quantum, etc.) all 
seem normal to me.

Notably missing is the OVS gw- interface that was present before I upgraded (at 
http://wiki.openstack.org/RunningWQuantumV2Api there is this: Note: with v2, 
Quantum no longer uses the L3 + NAT logic from nova-network. Quantum will not 
have the equivalent functionality until F-3, so you won't be able to ping the 
VMs from the nova controller host. Is that the reason?)  The gw interface is 
the way I could ping VMs from the host.

The missing gateway, the horizon UI missing the create instance button, and not 
getting networks for VMs spun up by devstack's exercise script are the major 
symptoms. I trust that devstack is in sync with what is happening in Folsom, 
and that I am actually pulling down F3 code at this point (I've not tried to 
verify this). I'm not aware of any need to tweak the devstack exercise script; 
I am assuming it is designed to work as is.

I'm thinking of wiping my entire disk and starting from scratch in case blowing 
away /etc/nova etc. and /opt/stack was not enough to reset state. Before I do 
that, are there any pointers to links or mail messages (I've scanned for 
relevant posts but missed finding any) that would be helpful?

Thanks,

syd
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Upgrading from devstack pre-F3/quantum v1/OVS to latest not going well :-(

2012-08-28 Thread Aaron Rosen
Hi Syd,

Unfortunately, I don't believe there are any tools to upgrade the
ovs_quantum db to its current format. That said, I don't believe it would
be that hard to write one to migrate your setup.

If you read through this page http://wiki.openstack.org/RunningQuantumV2Api it
gives an example of creating a network and booting vms on it. I'm
not familiar with horizon (maybe someone else who is can help you out).

One last thing: are you running the latest devstack code? The v1 api code
has been removed from quantum, so you can remove the following line from
localrc: NOVA_USE_QUANTUM_API=v2

I'd also suggest removing this line, since devstack can now configure
quantum to use keystone by default:

Q_AUTH_STRATEGY=noauth
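
With those two lines removed, the quantum-related portion of your localrc
reduces to something like this (a sketch, not a tested configuration):

    Q_PLUGIN=openvswitch
    # keystone-backed auth is now the devstack default, so no
    # Q_AUTH_STRATEGY or NOVA_USE_QUANTUM_API entries are needed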

Best,

Aaron

On Wed, Aug 29, 2012 at 12:53 AM, Syd (Sydney) Logan slo...@broadcom.comwrote:

  I played around with horizon a bit more and discovered that the demo
 project page does have a Create Instance button, but when I try to do so I
 get an error message saying that horizon is unable to get quota
 information.

Re: [Openstack] Upgrading from devstack pre-F3/quantum v1/OVS to latest not going well :-(

2012-08-28 Thread Trinath Somanchi
Hi Syd-

Hope you are using Folsom-3 of OpenStack in your setup.

Can you guide me on bringing up OpenStack Folsom-3?

Thanking you-


-
TNS

On Wed, Aug 29, 2012 at 10:47 AM, Aaron Rosen aro...@nicira.com wrote:

 Hi Syd,

 Unfortunately, I don't believe there are any tools to upgrade the
 ovs_quantum db to its current format. That said, I don't believe it would
 be that hard to write one to migrate your setup.