Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume

2013-07-03 Thread Sheng Bo Hou
Hi Vish,

I would like you to review this patch 
https://review.openstack.org/#/c/35460/.

I think this approach takes the least effort to fix this issue.
When we boot an instance from a volume, nova can get the volume 
information via _get_volume. kernel_id and ramdisk_id are already present 
in that volume information; we just need to make nova retrieve them. In the 
instance-creation code, kernel_id and ramdisk_id are read from 
"properties" (in many places), but the volume information stores them under 
"volume_image_metadata". I just convert the data structure a bit, saving 
these two params in "properties", and it works.
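Roughly, the conversion amounts to something like the sketch below (an illustrative dict manipulation, not the actual patch; the real change is in the review linked above):

```python
def promote_image_metadata(volume):
    """Copy kernel_id/ramdisk_id out of a volume's image metadata into the
    'properties' dict that the instance-creation path already checks.

    `volume` is an illustrative dict shaped like the result of nova's
    _get_volume(); the key names are taken from the discussion above.
    """
    image_meta = volume.get('volume_image_metadata', {})
    properties = volume.setdefault('properties', {})
    for key in ('kernel_id', 'ramdisk_id'):
        # Only fill in values the instance-creation code has not set itself.
        if key in image_meta and key not in properties:
            properties[key] = image_meta[key]
    return volume
```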

If you do not find this approach favorable, we can work it out another way.

Thank you.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN    E-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193



Vishvananda Ishaya  
2013/07/02 01:14
Please respond to
OpenStack Development Mailing List 


To
OpenStack Development Mailing List , 
cc
jsbry...@us.ibm.com, "Duncan Thomas  John 
Griffith" 
Subject
Re: [openstack-dev] [cinder] Propose to add copying the reference images 
when creating a volume







On Jul 1, 2013, at 3:35 AM, Sheng Bo Hou  wrote:

Hi Mate, 

First, thanks for answering. 
I was trying to find the way to prepare the bootable volume. 
Take the default images downloaded by devstack: there are three of them: 
cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and 
cirros-0.3.0-x86_64-uec-ramdisk. 
cirros-0.3.0-x86_64-uec-kernel is referred to as the kernel image and 
cirros-0.3.0-x86_64-uec-ramdisk is referred to as the ramdisk image. 

Issue: If only the image (cirros-0.3.0-x86_64-uec) is copied to the volume 
when creating a volume from an image, this volume is unable to boot an 
instance without references to the kernel and ramdisk images. The 
current cinder only copies the image cirros-0.3.0-x86_64-uec to one 
target volume (Vol-1), which is marked as bootable but unable to complete 
a successful boot with the current nova code, even if image-id is removed 
from the parameters. 

Possible solutions: I can think of two ways to resolve this. One is a code 
change in Nova alone, letting it find the reference images for the bootable 
volume (Vol-1); nothing needs to change in cinder, since the kernel and 
ramdisk ids are already saved in volume_glance_metadata, whose references 
point to the kernel and ramdisk images for the volume (Vol-1). 


You should be able to create an image in glance that references the volume 
in block device mapping but also has a kernel_id and ramdisk_id parameter 
so it can boot properly. I know this is kind of an odd way to do things, 
but this seems like an edge case and I think it is a valid workaround.

Vish

The other is that, if we need multiple images to boot an instance, we need 
a new way to create the bootable volume. For example, we can create three 
separate volumes for the three images and set new references in 
volume_glance_metadata with a kernel_volume_id and ramdisk_volume_id. 
The benefit of this approach is that the volume can live independently of 
the original images. Even if the images get lost accidentally, the 
volumes are still sufficient to boot an instance, because all the 
information has been copied to the Cinder side. 
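The proposed cross-references might look like the sketch below (the key names kernel_volume_id and ramdisk_volume_id come from the proposal above; they are not an existing cinder API):

```python
def link_boot_volumes(image_volume_id, kernel_volume_id, ramdisk_volume_id):
    """Build illustrative volume_glance_metadata rows for the proposed
    scheme: the bootable volume references sibling volumes rather than
    glance images, so it can still boot after the images are deleted."""
    return [
        {'volume_id': image_volume_id,
         'key': 'kernel_volume_id', 'value': kernel_volume_id},
        {'volume_id': image_volume_id,
         'key': 'ramdisk_volume_id', 'value': ramdisk_volume_id},
    ]
```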

I am trying to find the "another way to prepare your bootable 
volume" you mentioned and am asking for suggestions. 
I think the second approach could be one way. Do you think it is a 
good approach? 

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCN    E-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193 


Mate Lakat  
2013/07/01 04:18 

Please respond to
OpenStack Development Mailing List 



To
OpenStack Development Mailing List , 
cc
jsbry...@us.ibm.com, "Duncan Thomas  John 
Griffith"  
Subject
Re: [openstack-dev] [cinder] Propose to add copying the reference images 
when creating a volume








Hi,

I just proposed a patch for the boot_from_volume_exercise.sh to get rid
of --image. To be honest, I did not look at the various execution paths.
My initial thought is that boot from volume means you boot from volume.
If you only have a kernel + ramdisk image, I simply assumed that you
can't do it. 

I would not do any magic. Boot from volume should boot from volume. If
you only have 3 part images,

Re: [openstack-dev] [Neutron] Help with database migration error

2013-07-03 Thread Henry Gessau
Matt, thanks for looking. I should have followed up with a note that I did
find what the problem was.

The Cisco plugin is a special case: it is a wrapper that loads
sub-plugins. The database connection is specified in the config file of the
first sub-plugin. By specifying that file instead of cisco_plugins.ini,
everything works.

-- Henry

On Wed, Jul 03, at 10:10 pm, Matt Riedemann  wrote:

> What is the sql_connection value in your cisco_plugins.ini file? Looks like
> sqlalchemy is having issues parsing the URL.
> 
> 
> 
> Thanks,
> 
> *MATT RIEDEMANN*
> Advisory Software Engineer
> Cloud Solutions and OpenStack Development
> 
> Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
> E-mail: mrie...@us.ibm.com
> IBM
> 
> 3605 Hwy 52 N
> Rochester, MN 55901-1407
> United States
> 
> 
> 
> 
> 
> 
> From:Henry Gessau 
> To:OpenStack Development Mailing List
> ,
> Date:07/02/2013 09:05 PM
> Subject:[openstack-dev] [Neutron] Help with database migration error
> 
> 
> 
> 
> I have not worked with databases much and this is my first attempt
> at a database migration. I am trying to follow this Howto:
> https://wiki.openstack.org/wiki/Neutron/DatabaseMigration
> 
> I get the following error at step 3:
> 
> /opt/stack/quantum[master] $ quantum-db-manage --config-file
> /etc/quantum/quantum.conf --config-file
> /etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
> Traceback (most recent call last):
>  File "/usr/local/bin/quantum-db-manage", line 9, in <module>
>load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts',
> 'quantum-db-manage')()
>  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 136, in main
>CONF.command.func(config, CONF.command.name)
>  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 81, in do_stamp
>sql=CONF.command.sql)
>  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 54, in
> do_alembic_command
>getattr(alembic_command, cmd)(config, *args, **kwargs)
>  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 221,
> in stamp
>script.run_env()
>  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 193,
> in run_env
>util.load_python_file(self.dir, 'env.py')
>  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 177, in
> load_python_file
>module = imp.load_source(module_id, path, open(path, 'rb'))
>  File "/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py",
> line 100, in <module>
>run_migrations_online()
>  File "/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py",
> line 73, in run_migrations_online
>poolclass=pool.NullPool)
>  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", line
> 338, in create_engine
>return strategy.create(*args, **kwargs)
>  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py",
> line 48, in create
>u = url.make_url(name_or_url)
>  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 178,
> in make_url
>return _parse_rfc1738_args(name_or_url)
>  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 219,
> in _parse_rfc1738_args
>"Could not parse rfc1738 URL from string '%s'" % name)
> sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-03 Thread Jeremy Stanley
On 2013-07-03 09:04:04 -0700 (-0700), Vishvananda Ishaya wrote:
> Oh, I didn't see that you added -x/-X/-N.

You in the collective sense at least. Credit for that goes to Miklos
Vajna, who had to do a good bit of convincing us it was safe/useful.
And now I use it frequently, if fairly carefully, for things like
building dependent series from previously unrelated changes. I'm
glad I listened!

> I can simplify my backport script[1] significantly now.
> 
> [1] https://gist.github.com/vishvananda/2206428

Neat! I had no idea you were doing that via git-review.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Help with database migration error

2013-07-03 Thread Matt Riedemann
What is the sql_connection value in your cisco_plugins.ini file?  Looks 
like sqlalchemy is having issues parsing the URL.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Henry Gessau 
To: OpenStack Development Mailing List 
, 
Date:   07/02/2013 09:05 PM
Subject:[openstack-dev] [Neutron] Help with database migration 
error



I have not worked with databases much and this is my first attempt
at a database migration. I am trying to follow this Howto:
https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

I get the following error at step 3:

/opt/stack/quantum[master] $ quantum-db-manage --config-file 
/etc/quantum/quantum.conf --config-file 
/etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
Traceback (most recent call last):
  File "/usr/local/bin/quantum-db-manage", line 9, in <module>
load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts', 
'quantum-db-manage')()
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 136, in main
CONF.command.func(config, CONF.command.name)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 81, in 
do_stamp
sql=CONF.command.sql)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 54, in 
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 
221, in stamp
script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 
193, in run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 177, 
in load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File 
"/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", line 
100, in <module>
run_migrations_online()
  File 
"/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", line 
73, in run_migrations_online
poolclass=pool.NullPool)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", 
line 338, in create_engine
return strategy.create(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", 
line 48, in create
u = url.make_url(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 
178, in make_url
return _parse_rfc1738_args(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 
219, in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
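The empty-string failure at the bottom of the traceback can be checked without running the migration at all. The sketch below uses a simplified version of the rfc1738 pattern (not sqlalchemy's real parser) to show why an unset sql_connection option, which reaches make_url as '', fails to parse:

```python
import re

# A simplified stand-in for the rfc1738 URL grammar sqlalchemy expects,
# e.g. "mysql://user:password@host:port/database". Not sqlalchemy's actual
# regex; just enough to illustrate the failure mode seen above.
_RFC1738 = re.compile(
    r'^(?P<drivername>[\w\+]+)://'
    r'(?:(?P<username>[^:/]*)(?::(?P<password>[^/]*))?@)?'
    r'(?:(?P<host>[^/:]*)(?::(?P<port>\d+))?)?'
    r'(?:/(?P<database>.*))?$')

def looks_like_db_url(value):
    """Return True if `value` resembles an rfc1738 database URL.
    An empty string (an unset sql_connection option) never matches,
    which is what produces the ArgumentError in the traceback."""
    return bool(_RFC1738.match(value))
```

A quick check like this against the sql_connection value from the config file being passed would have pointed straight at the unset option.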


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-03 Thread Matt Riedemann
If you use Babel, I don't think you need gettext by itself, since I thought 
Babel has its own conversion/compile code built in?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Monty Taylor 
To: openstack-dev@lists.openstack.org, 
Date:   07/02/2013 01:22 PM
Subject:Re: [openstack-dev] [horizon] Removing the .mo files from 
Horizon git





On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
> On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
>> Thomas Goirand wrote:
>>> So, shouldn't the .mo files be generated at build time only, and be 
kept
>>> out of the Git?
>>
>> +1
> 
> Yep, agree too.
> 
> Interestingly, last time I checked, devstack doesn't actually compile
> the message catalogs (python setup.py compile_catalog).
> 
> I've been meaning to fix that for a while now, but it's fallen by the
> wayside. I've unassigned myself from the bug for now:
> 
>   https://bugs.launchpad.net/devstack/+bug/995287

Should we make python setup.py install do this if gettext is installed?
Or keep it as a separate step for people who care?
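For context on why the gettext tools are not strictly required: the .mo container Babel emits is simple enough to produce in pure Python. The sketch below is a minimal little-endian .mo writer following the format in the GNU gettext documentation; it is illustrative, not Babel's actual code:

```python
import struct

def write_mo(catalog):
    """Serialize {msgid: msgstr} into little-endian GNU .mo bytes.

    Minimal illustration of the format: a 7-word header, two tables of
    (length, offset) pairs, then the NUL-terminated strings themselves.
    """
    entries = sorted(catalog.items())  # .mo requires sorted msgids
    ids = strs = b''
    offsets = []
    for msgid, msgstr in entries:
        mid, mstr = msgid.encode('utf-8'), msgstr.encode('utf-8')
        offsets.append((len(ids), len(mid), len(strs), len(mstr)))
        ids += mid + b'\x00'
        strs += mstr + b'\x00'
    n = len(entries)
    keystart = 7 * 4 + 16 * n            # strings start after header + tables
    valuestart = keystart + len(ids)
    koffsets, voffsets = [], []
    for o1, l1, o2, l2 in offsets:
        koffsets += [l1, o1 + keystart]
        voffsets += [l2, o2 + valuestart]
    out = struct.pack('<7I', 0x950412de, 0, n,     # magic, version, count
                      7 * 4, 7 * 4 + n * 8, 0, 0)  # table offsets, no hash
    out += struct.pack('<%dI' % (2 * n), *koffsets)
    out += struct.pack('<%dI' % (2 * n), *voffsets)
    return out + ids + strs
```

The stdlib can read the result back, e.g. gettext.GNUTranslations(io.BytesIO(write_mo(...))), so compiling catalogs at build time needs no system gettext.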

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] failure node muting not working

2013-07-03 Thread Zhou, Yuan
Got it. So Swift will try to re-enable the muted nodes after 60 seconds by 
default. Thanks.

-yuanz

-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Thursday, July 04, 2013 1:23 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Swift] failure node muting not working

Take a look at the proxy config, starting here: 
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L70

The error_suppression_interval and error_suppression_limit control the window 
you are looking for. With the default values, 10 errors in 60 seconds will 
prevent the proxy from using that particular storage node for another 60 
seconds.
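In outline, the suppression window behaves like the sketch below (an illustrative simplification with made-up names, not Swift's actual proxy implementation):

```python
import time

class ErrorLimiter:
    """Illustrative error limiter in the spirit of the settings above.

    Class and attribute names are invented for the sketch; Swift's real
    logic lives in its proxy server.
    """

    def __init__(self, suppression_interval=60, suppression_limit=10):
        self.interval = suppression_interval   # error-counting window (s)
        self.limit = suppression_limit         # errors tolerated per window
        self._nodes = {}  # node -> (error_count, window_start, limited_until)

    def record_error(self, node, now=None):
        now = time.time() if now is None else now
        count, start, until = self._nodes.get(node, (0, now, 0.0))
        if now - start > self.interval:
            count, start = 0, now              # window expired: start over
        count += 1
        if count >= self.limit:
            until = now + self.interval        # mute the node for one interval
        self._nodes[node] = (count, start, until)

    def is_limited(self, node, now=None):
        now = time.time() if now is None else now
        return self._nodes.get(node, (0, 0, 0.0))[2] > now
```

With the defaults, ten errors inside one 60-second window mute a node for the following 60 seconds, after which the proxy tries it again.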

--John



On Jul 2, 2013, at 8:57 PM, "Zhou, Yuan"  wrote:

> Hi lists,
>  
> We're trying to evaluate the node failure performance in Swift.
> According to the docs, Swift should be able to mute failed nodes:
> 'if a storage node does not respond in a reasonable amount of time, the proxy 
> considers it to be unavailable and will not attempt to communicate with it 
> for a while.'
>  
> We did a simple test on a 5-node cluster:
> 1.   Use COSBench to keep downloading files from the cluster.
> 2.   Stop the networking on SN1; lots of 'connection timeout 
> 0.5s' errors occur in the proxy's log.
> 3.   Keep the workload running and wait for about 1 hour.
> 4.   The same errors still occur in the proxy, which means the node is not 
> muted; we expected SN1 to be muted on the proxy side, with no 
> 'connection timeout' errors in the proxy.
>  
> So is there any special work that needs to be done to use this feature?
>  
> Regards, -yuanz
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift]After the X-Container-Read how to get the list of readable container

2013-07-03 Thread Hajime Takase
Hi guys,

I have successfully figured out how to grant read-only or write
permission to non-operator users (operator is defined in
/etc/swift/proxy.conf), but I found that those users can't get the
account info of the tenant using get_account() or GET, so they won't know
the list of readable or writable containers.

I was thinking of storing this additional data in a local database, but since
I want to use a PC client as well, I would rather use a server-side
command to get the list of readable or writable containers. Is there any way
to do this using the API?

Hajime
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Mass review time again!

2013-07-03 Thread Mark Washenberger
Hi folks,

It's looking like another review day is in order. During the last Glance
team meeting, we discussed having a review day this July 8th.

In case you're unfamiliar with the format, a glance review day basically
just consists of as many reviewers (core and otherwise) as possible
dedicating a day to reviews, lurking in #openstack-glance, and consulting
with each other to try to get as many reviews approved or triaged as
possible.

I hope to see you there!

markwash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Changing pollster/pipeline parameters

2013-07-03 Thread Eoghan Glynn


- Original Message -
> A couple of related questions on managing CM pollster behavior via "config"
> type files:
> 
> 1. I'm trying to make some modifications to the timing of the Glance (image)
> polling in a default CM install. It looks like the pipeline interval fields
> are the way to do it, and that I should tweak it using a yaml config file,
> but I can't seem to verify based on a read-through of the code. Can anyone
> confirm?

Yep, the configured interval in the /etc/ceilometer/pipeline.yaml is the way
to go.

For example:

 $ sed -i 's/interval: .*$/interval: 10/' /etc/ceilometer/pipeline.yaml
 $ # restart ceilometer-agent-central
 $ ceilometer sample-list -m image.size | tail -5
 | df52ee46-91c0-417a-a27c-ca0b447fb5db | image.size | gauge | 25165824.0 | B | 2013-07-03T21:40:15 |
 | df52ee46-91c0-417a-a27c-ca0b447fb5db | image.size | gauge | 25165824.0 | B | 2013-07-03T21:40:25 |
 | df52ee46-91c0-417a-a27c-ca0b447fb5db | image.size | gauge | 25165824.0 | B | 2013-07-03T21:40:35 |
 | df52ee46-91c0-417a-a27c-ca0b447fb5db | image.size | gauge | 25165824.0 | B | 2013-07-03T21:40:45 |
 +--------------------------------------+------------+-------+------------+---+---------------------+

As you see the meter is now being collected at a 10s cadence.

This timing is pulled in via the AgentManager base class:

  https://github.com/openstack/ceilometer/blob/master/ceilometer/agent.py#L90
 
> 2. Similarly, it looks like disabling pollsters is done via the oslo.cfg
> logic in the agent manager. I'd like to populate that using a config
> file... is there logic already to do that that I haven't come across yet?

Well, the way I'd disable a pollster is simply by configuring the counter
exclusion in the pipeline.yaml.

For example to disable the pollster providing the image.size meter referred
to above, the following will do the trick:

counters:
- "*"
- "!image.size"

i.e. allow everything but the image.size meter.

Confirm as before using:

 $ # restart ceilometer-agent-central
 $ ceilometer sample-list -m image.size | tail -5
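The include/exclude semantics described above can be sketched as follows (a simplified stand-in for the pipeline's counter matching, not ceilometer's actual code):

```python
def counter_enabled(name, counters):
    """Decide whether a meter passes a pipeline 'counters' list.

    Simplified rules, per the example above: a '!name' entry excludes the
    meter, and exclusions win; otherwise '*' or an exact name includes it.
    """
    if '!' + name in counters:
        return False
    return '*' in counters or name in counters
```

With counters = ['*', '!image.size'], everything but the image.size meter is collected, which matches the pipeline.yaml fragment above.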


Cheers,
Eoghan
 
> - Phil
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] as-update-policy implementation questions

2013-07-03 Thread Clint Byrum
Excerpts from Chan, Winson C's message of 2013-06-18 23:22:15 -0700:
> Is the expectation here that the update policy for the auto-scaling group is 
> enforced only when either the launch configuration or subnet group membership 
> is changed in the auto-scaling group?  The LaunchConfigurationName is not 
> currently listed as an update_allowed_properties in the AutoScalingGroup.  
> Update handling is not implemented in LaunchConfiguration either.  Also, 
> VPCZoneIdentifier for listing subnet group membership is not implemented in 
> the AutoScalingGroup either.  Should there be blueprints to handle these 
> before working on the update policy?

I'm afraid I can't be much help here. The way AWS works is not really of
any concern to me. The way I think InstanceGroup and AutoScalingGroup
should work isn't really compatible with CloudFormation, so I don't
think it is worthwhile to lend too much credence to "how things are now".

If you want it to work like AWS, I suggest testing on AWS and then
filing/fixing bugs against Heat. However, if you want a rolling updates
mechanism to be practical and highly useful, I'd suggest reviewing the
rolling-updates blueprint/spec and helping to improve it. I'd like to
rewrite the spec actually, as I have learned a lot since writing it and
now have some better ideas on how to implement it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread Dan Wendlandt
Shawn, are you able to put other patches of yours into WIP and just not
this one?  Or is WIP not working for you at all?


On Wed, Jul 3, 2013 at 11:05 AM, Shawn Hartsock wrote:

> Yes. I'm logged into gerrit and I can review my own patch. I can't abandon
> or press "work in progress" on it.
>
> # Shawn Hartsock
>
> - Original Message -
> > From: "David Ripton" 
> > To: openstack-dev@lists.openstack.org
> > Sent: Wednesday, July 3, 2013 1:25:47 PM
> > Subject: Re: [openstack-dev] [vmware] VMwareAPI sub-team status update
> >
> > On 07/03/2013 11:51 AM, Shawn Hartsock wrote:
> >
> > > Work in progress:
> > > * https://review.openstack.org/#/c/35502/ <- I can't click the "Work
> in
> > > progress button" I'm not sure how else to signal that I'm still
> working...
> > > help?
> >
> > It's your patch.  You should be able to.  (I just successfully put one
> > of my reviews into WIP state and then back to Ready for Review, so it's
> > not globally broken.)  Are you logged into Gerrit?
> >
> > --
> > David Ripton   Red Hat   drip...@redhat.com
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] No meeting this week

2013-07-03 Thread Joshua Harlow
Howdy all,

Since it's July 4th in the US and I think most people will be on vacation, let's 
have our next meeting the week after instead.

Feel free to email me though if you need to.

Have a good break for those who have one :-)

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] too many tokens

2013-07-03 Thread Matt Riedemann
For some history, there was an attempt at consolidating some of this here:

https://github.com/openstack/nova/commit/dd9c27f999221001bae9faa03571645824d2a681
 


But that caused some issues and was reverted here:

https://github.com/openstack/nova/commit/ee5d9ae8d376e41e852b06488e922400cf69b4ac




Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ala Rezmerita 
To: OpenStack Development Mailing List 
, 
Cc: gong...@unitedstack.com, hrushikesh.gan...@hp.com
Date:   07/03/2013 11:26 AM
Subject:[openstack-dev] [Nova] too many tokens



Hi everyone, 
I have a question regarding the generation of too many tokens in nova when 
using quantumclient (also related to bug reports 
https://bugs.launchpad.net/nova/+bug/1192383 and 
https://bugs.launchpad.net/nova-project/+bug/1191159). 
For instance, during the periodic task heal_instance_info_cache (every 
60s) nova calls the quantum API method get_instance_nw_info, which calls 
_build_network_info_model (backtrace at the end of the mail). 
During the execution of this method, 4 quantum client instances are 
created (all of them use the same context object) and for each of them a 
new token is generated. 
Is it possible to change this behavior by updating the context.auth_token 
property the first time a quantumclient for a given context is created (so 
that the same token is reused among the 4 client instances)? Is 
there some security issue that could arise?
Thanks
Ala Rezmerita
Cloudwatt

The backtrace :
  
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py(194)main()
-> result = function(*args, **kwargs)
  /opt/stack/nova/nova/openstack/common/loopingcall.py(125)_inner()
-> idle = self.f(*self.args, **self.kw)
  /opt/stack/nova/nova/service.py(283)periodic_tasks()
-> return self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
  /opt/stack/nova/nova/manager.py(100)periodic_tasks()
-> return self.run_periodic_tasks(context, raise_on_error=raise_on_error)
  
/opt/stack/nova/nova/openstack/common/periodic_task.py(179)run_periodic_tasks()
-> task(self, context)
  /opt/stack/nova/nova/compute/manager.py(3654)_heal_instance_info_cache()
-> self._get_instance_nw_info(context, instance)
  /opt/stack/nova/nova/compute/manager.py(767)_get_instance_nw_info()
-> instance, conductor_api=self.conductor_api)
  /opt/stack/nova/nova/network/quantumv2/api.py(367)get_instance_nw_info()
-> result = self._get_instance_nw_info(context, instance, networks)
  
/opt/stack/nova/nova/network/quantumv2/api.py(375)_get_instance_nw_info()
-> nw_info = self._build_network_info_model(context, instance, networks)
  
/opt/stack/nova/nova/network/quantumv2/api.py(840)_build_network_info_model()
-> client = quantumv2.get_client(context, admin=True)
> /opt/stack/nova/nova/network/quantumv2/__init__.py(67)get_client()
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Changing pollster/pipeline parameters

2013-07-03 Thread Neal, Phil
A couple of related questions on managing CM pollster behavior via "config" 
type files:

1. I'm trying to make some modifications to the timing of the Glance (image) 
polling in a default CM install. It looks like the pipeline interval fields are 
the way to do it, and that I should tweak it using a yaml config file, but I 
can't seem to verify based on a read-through of the code. Can anyone confirm?

2. Similarly, it looks like disabling pollsters is done via the oslo.cfg logic 
in the agent manager. I'd like to populate that using a config file... is 
there logic already to do that that I haven't come across yet?

- Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Clark Boylan
On Wed, Jul 3, 2013 at 10:03 AM, Day, Phil  wrote:
> Thanks Clark,
>
> So the process would be to get a new version of devstack-vm-gate.sh merged in 
> here first, and then submit my change in Nova, right?
>
> Is there any guidance on how I could check my change to 
> devstack-vm-gate.sh ahead of submitting it?
https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
should have the info you need to test changes locally.
>
> Thanks,
> Phil
>
>> -Original Message-
>> From: Clark Boylan [mailto:clark.boy...@gmail.com]
>> Sent: 03 July 2013 17:48
>> To: OpenStack Development Mailing List
>> Subject: Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks
>> Jenkins
>>
>> On Wed, Jul 3, 2013 at 9:30 AM, Day, Phil  wrote:
>> > Hi Folks,
>> I can't really speak to the stuff that was here so snip.
>> >
>> > i)Make the default wait time much shorter so that Jenkins runs OK
>> > (tries this with 10 seconds and it works fine), and assume that users
>> > will configure it to a more realistic value.
>> >
>> I know Robert Collins and others would like to see our defaults be reasonable
>> for a mid-sized deployment, so we shouldn't use a default to accommodate
>> Jenkins.
>> > ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
>> > specific configuration setting (is this possible, and iof so can
>> > someone point me at where to make the change) ?
>> >
>> It is possible. You can expose and set config options through the 
>> devstack-gate
>> project [1] which runs the devstack tests in Jenkins.
>> > iii) Increase the time allowed for Jenkins
>> >
>> I don't think we want to do this as it already takes quite a bit of time to 
>> get
>> through the gate (the three hour timeout seems long, but sdague and others
>> would have a better idea of what it should be).
>> > iv) The ever popular something else ...
>> >
>> Nothing comes to mind immediately.
>>
>> [1] https://github.com/openstack-infra/devstack-gate
>> You will probably find the devstack-vm-gate.sh and devstack-vm-gate-wrap.sh
>> scripts to be most useful.
>>
>> Clark
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Boris Pavlovic
Hi Monty,

>>> I think if you're working on a non-alembic plan and boris is working on
>>> an alembic plan, then something is going to be unhappy in the
>>> not-too-distant future. Can we get alignment on this?


As I said before, we are preparing our DB code to move from
sqlalchemy-migrate to something else.
There will be a ton of work before we are able to rewrite our migration
scripts to alembic or something else.

And we are not sure that we would like to use alembic=)


Best regards,
Boris Pavlovic



On Wed, Jul 3, 2013 at 9:30 PM, Monty Taylor  wrote:

>
>
> On 07/02/2013 10:50 AM, Boris Pavlovic wrote:
> >
> ###
> > Goal
> >
> ###
> >
> > We should fix work with DB, unify it in all projects and use oslo code
> > for all common things.
>
> Just wanted to say a quick word that isn't about migrations...
>
> Thank you. This is all great, and I'm thrilled someone is taking on the
> task of fixing what is probably one of OpenStack's biggest nightmares.
>
> > In more words:
> >
> > DB API
> >
> >   *) Fully cover by tests.
> >
> >   *) Run tests against all backends (now they are run only against
> > sqlite).
> >
> >   *) Unique constraints (instead of select + insert)
> >  a) Provide unique constraints.
> >  b) Add missing unique constraints.
> >
> >   *) DB Archiving
> >  a) create shadow tables
> >  b) add tests that check that shadow and main tables are synced.
> >  c) add code that works with shadow tables.
> >
> >   *) DB API performance optimization
> > a) Remove unused joins..
> > b) 1 query instead of N (where it is possible).
> > c) Add methods that could improve performance.
> > d) Drop unused methods.
> >
> >   *) DB reconnect
> > a) Don’t break a huge task if we lose the connection for a moment; just
> > retry the DB query.
> >
> >   *) DB Session cleanup
> > a) do not use session parameter in public DB API methods.
> > b) fix places where we are doing N queries in N transactions instead
> > of 1.
> > c) get only data that is used (e.g. len(query.all()) =>
> query.count()).
> >
> > 
> >
> > DB Migrations
> >
> >   *) Test DB Migrations against all backends and real data.
> >
> >   *) Fix: DB schemas after Migrations should be same in different
> backends
> >
> >   *) Fix: hidden bugs, that are caused by wrong migrations:
> >  a) fix indexes. e.g. migration 152 in Nova drops all indexes that
> > have a deleted column
> >  b) fix wrong types
> >  c) drop unused tables
> >
> >   *) Switch from sqlalchemy-migrate to something that is not dead (e.g.
> > alembic).
> >
> > 
> >
> > DB Models
> >
> >   *) Fix: Schema that is created by Models should be the same as after
> > migrations.
> >
> >   *) Fix: Unit tests should be run on a DB that was created by Models,
> > not migrations.
> >
> >   *) Add test that checks that Models are synced with migrations.
> >
> > 
> >
> > Oslo Code
> >
> >   *) Base Sqlalchemy Models.
> >
> >   *) Work around engine and session.
> >
> >   *) SqlAlchemy Utils - that helps us with migrations and tests.
> >
> >   *) Test migrations Base.
> >
> >   *) Use common test wrapper that allows us to run tests on different
> > backends.
> >
> >
> >
> ###
> >Implementation
> >
> ###
> >
> >   This is a really, really huge task. And we are almost done with Nova=).
> >
> >   In OpenStack there is only one approach for such work (“baby steps”
> > driven development). So we are making tons of patches that can be
> > easily reviewed. But there are also minuses to such an approach. It is pretty
> > hard to track the work at a high level. And sometimes there are misunderstandings.
> >
> >   For example, with the oslo code: in a few words, at this moment we would like
> > to add to oslo (for some time) monkey patching for sqlalchemy-migrate.
> > And I got a reasonable question from Doug Hellmann: why? I answered: because
> > of our “baby steps”. But if you don’t have a list of baby steps, it is
> > pretty hard to understand why our baby steps need this thing, and why we
> > don’t switch to alembic first. So I would like to describe our Road
> > Map and write a list of "baby steps".
> >
> >
> > ---
> >
> > OSLO
> >
> >   *) (Merged) Base code for Models and sqlalchemy engine (session)
> >
> >   *) (On review) Sqlalchemy utils that are used to:
> >   1. Fix bugs in sqlalchemy-migrate
> >   2. Base code for migrations that provides Unique Constraints.
> >   3. Utils for db.archiving helps us to create and check shadow
> tables.
> >
> >   *) (On review) Testtools wrapper
> >We should have only one testtool wrapp

Re: [openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread Shawn Hartsock
Yes. I'm logged into gerrit and I can review my own patch. I can't abandon or 
press "work in progress" on it.

# Shawn Hartsock

- Original Message -
> From: "David Ripton" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, July 3, 2013 1:25:47 PM
> Subject: Re: [openstack-dev] [vmware] VMwareAPI sub-team status update
> 
> On 07/03/2013 11:51 AM, Shawn Hartsock wrote:
> 
> > Work in progress:
> > * https://review.openstack.org/#/c/35502/ <- I can't click the "Work in
> > progress button" I'm not sure how else to signal that I'm still working...
> > help?
> 
> It's your patch.  You should be able to.  (I just successfully put one
> of my reviews into WIP state and then back to Ready for Review, so it's
> not globally broken.)  Are you logged into Gerrit?
> 
> --
> David Ripton   Red Hat   drip...@redhat.com
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics][Nova] Using Bicho database to get stats about code review

2013-07-03 Thread Jesus M. Gonzalez-Barahona
On Wed, 2013-07-03 at 12:30 -0400, Russell Bryant wrote:
> On 07/03/2013 12:14 PM, Jesus M. Gonzalez-Barahona wrote:
> > Hi all,
> > 
> > Bicho [1] now has a Gerrit backend, which has been tested with
> > OpenStack's Gerrit. We have used it to produce the MySQL database dump
> > available at [2] ( gerrit.mysql.7z ). You can use it to compute the
> > metrics mentioned in the previous threads about code review, and some
> > others.
> > 
> > [1] https://github.com/MetricsGrimoire/Bicho
> > [2] http://activity.openstack.org/dash/browser/data/db/
> > 
> > The database dump will be updated daily, starting in a few days.
> > 
> > For some examples on how to run queries on it, or how to produce the
> > database using Bicho, fresh from OpenStack's gerrit, have a look at [3].
> > 
> > [3] https://github.com/MetricsGrimoire/Bicho/wiki/Gerrit-backend
> > 
> > At some point, we plan to visualize the data as a part of the
> > development dashboard [4], so any comment on interesting metrics, or
> > bugs, will be welcome. For now, we're taking not of all metrics
> > mentioned in the previous posts about code review stats.
> > 
> > [4] http://activity.openstack.org/dash/browser/
> 
> Thanks for sharing!  I don't think I understand the last sentence,
> though.  Can you clarify?
> 

Oooops. It should read "For now, we're taking notice of all metrics
mentioned in the previous posts about code review stats." That would
mean that, in a while, we expect to produce charts on the evolution of
some of those metrics over time.

Sorry for the typo.

Saludos,

Jesus.

-- 
-- 
Bitergia: http://bitergia.com http://blog.bitergia.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Monty Taylor


On 07/02/2013 10:50 AM, Boris Pavlovic wrote:
> ###
> Goal
> ###
> 
> We should fix work with DB, unify it in all projects and use oslo code
> for all common things.

Just wanted to say a quick word that isn't about migrations...

Thank you. This is all great, and I'm thrilled someone is taking on the
task of fixing what is probably one of OpenStack's biggest nightmares.

> In more words:
> 
> DB API
> 
>   *) Fully cover by tests.
> 
>   *) Run tests against all backends (now they are run only against
> sqlite).
> 
>   *) Unique constraints (instead of select + insert)
>  a) Provide unique constraints.
>  b) Add missing unique constraints.
> 
>   *) DB Archiving
>  a) create shadow tables
>  b) add tests that check that shadow and main tables are synced.
>  c) add code that works with shadow tables.
> 
>   *) DB API performance optimization
> a) Remove unused joins..
> b) 1 query instead of N (where it is possible).
> c) Add methods that could improve performance.
> d) Drop unused methods.
> 
>   *) DB reconnect
> a) Don’t break a huge task if we lose the connection for a moment; just
> retry the DB query.
> 
>   *) DB Session cleanup
> a) do not use session parameter in public DB API methods.
> b) fix places where we are doing N queries in N transactions instead
> of 1.
> c) get only data that is used (e.g. len(query.all()) => query.count()).
> 
> 
> 
> DB Migrations
> 
>   *) Test DB Migrations against all backends and real data.
> 
>   *) Fix: DB schemas after Migrations should be same in different backends
> 
>   *) Fix: hidden bugs, that are caused by wrong migrations:
>  a) fix indexes. e.g. migration 152 in Nova drops all indexes that
> have a deleted column
>  b) fix wrong types
>  c) drop unused tables
> 
>   *) Switch from sqlalchemy-migrate to something that is not dead (e.g.
> alembic).
> 
> 
> 
> DB Models
> 
>   *) Fix: Schema that is created by Models should be the same as after
> migrations.
>  
>   *) Fix: Unit tests should be run on a DB that was created by Models,
> not migrations.
> 
>   *) Add test that checks that Models are synced with migrations.
> 
> 
> 
> Oslo Code
> 
>   *) Base Sqlalchemy Models.
> 
>   *) Work around engine and session.
> 
>   *) SqlAlchemy Utils - that helps us with migrations and tests.
> 
>   *) Test migrations Base.
> 
>   *) Use common test wrapper that allows us to run tests on different
> backends.
> 
> 
> ###
>Implementation
> ###
> 
>   This is a really, really huge task. And we are almost done with Nova=).
> 
>   In OpenStack there is only one approach for such work (“baby steps”
> driven development). So we are making tons of patches that can be
> easily reviewed. But there are also minuses to such an approach. It is pretty
> hard to track the work at a high level. And sometimes there are misunderstandings.
>  
>   For example, with the oslo code: in a few words, at this moment we would like
> to add to oslo (for some time) monkey patching for sqlalchemy-migrate.
> And I got a reasonable question from Doug Hellmann: why? I answered: because
> of our “baby steps”. But if you don’t have a list of baby steps, it is
> pretty hard to understand why our baby steps need this thing, and why we
> don’t switch to alembic first. So I would like to describe our Road
> Map and write a list of "baby steps".
> 
> 
> ---
> 
> OSLO
> 
>   *) (Merged) Base code for Models and sqlalchemy engine (session)
> 
>   *) (On review) Sqlalchemy utils that are used to:
>   1. Fix bugs in sqlalchemy-migrate
>   2. Base code for migrations that provides Unique Constraints.
>   3. Utils for db.archiving helps us to create and check shadow tables.
> 
>   *) (On review) Testtools wrapper
>We should have only one testtool wrapper in all projects. And
> this is one of the base steps in the task of running tests against all backends.
> 
>   *) (On review) Test migrations base
>Base classes that allow us to test our migrations against all
> backends on real data
> 
>   *) (On review, not finished yet) DB Reconnect.  
> 
>   *) (Not finished) Test that checks that schemas and models are synced
> 
> ---
> 
> ${PROJECT_NAME}
> 
> 
> In different projects we can work absolutely simultaneously, and the first
> candidates are Glance and Cinder. But inside a project we can also work
> simultaneously. Here is the workflow:
> 
> 
>   1) (SYNC) Use base code for Models and sqlalchemy engines (from oslo)
> 
>   2) (SYNC) Use test migrations base (from oslo)
> 
>   3) (SYNC) Use SqlAlchemy utils (from oslo)
> 
>  

Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Monty Taylor


On 07/03/2013 07:26 AM, Johannes Erdfelt wrote:
> On Wed, Jul 03, 2013, Michael Still  wrote:
>> On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic  wrote:
>>
>>> Question:
>>>   Why should we put the sqlalchemy-migrate monkey patches in oslo, when we are
>>> planning to switch to alembic?
>>>
>>> Answer:
>>>If we don’t put in oslo sqlalchemy-migrate monkey patches. We won't be
>>> able to work on 7 point at all until 8 and 10 points will be implemented in
>>> every project. Also work around 8 point is not finished, so we are not able
>>> to implement 10 points in any of project. So this blocks almost all work in
>>> all projects. I think that these 100-200 lines of code are not so big price
>>> for saving few cycles of time.
>>
>> We've talked in the past (Folsom summit?) about alembic, but I'm not
>> aware of anyone who is actually working on it. Is someone working on
>> moving us to alembic? If not, it seems unfair to block database work
>> on something no one is actually working on.
> 
> I've started working on a non-alembic migration path that was discussed
> at the Grizzly summit.
>
> While alembic is better than sqlalchemy-migrate, it still requires long
> downtimes when some migrations are run. We discussed moving to an
> expand/contract cycle where migrations add new columns, allow migrations
> to slowly (relatively speaking) migrate data over, then (possibly) remove
> any old columns.

I think if you're working on a non-alembic plan and boris is working on
an alembic plan, then something is going to be unhappy in the
not-too-distant future. Can we get alignment on this?
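For readers unfamiliar with the expand/contract cycle Johannes describes, a minimal sketch using an in-memory sqlite database (table and column names invented for illustration; a real migration would batch the backfill and defer the contract step to a later release):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT)")
conn.executemany("INSERT INTO instances (host) VALUES (?)", [("n1",), ("n2",)])

# Expand: add the new column. Old code keeps running untouched.
conn.execute("ALTER TABLE instances ADD COLUMN hostname TEXT")

# Migrate: backfill slowly (here in one go) while the service stays up.
conn.execute("UPDATE instances SET hostname = host WHERE hostname IS NULL")

# Contract would come in a later release: stop reading `host`, then drop it.
rows = conn.execute("SELECT hostname FROM instances ORDER BY id").fetchall()
print(rows)  # [('n1',), ('n2',)]
```

The point of the split is that no single step holds a long table lock, which is what causes the downtime with conventional one-shot migrations.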

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread David Ripton

On 07/03/2013 11:51 AM, Shawn Hartsock wrote:


Work in progress:
* https://review.openstack.org/#/c/35502/ <- I can't click the "Work in progress 
button" I'm not sure how else to signal that I'm still working... help?


It's your patch.  You should be able to.  (I just successfully put one 
of my reviews into WIP state and then back to Ready for Review, so it's 
not globally broken.)  Are you logged into Gerrit?


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] failure node muting not working

2013-07-03 Thread John Dickinson
Take a look at the proxy config, starting here: 
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L70

The error_suppression_interval and error_suppression_limit control the window 
you are looking for. With the default values, 10 errors in 60 seconds will 
prevent the proxy from using that particular storage node for another 60 
seconds.
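A rough sketch of how that suppression window behaves, in Python (illustrative only; the real logic lives in Swift's proxy server and these class/method names are invented):

```python
import time

class ErrorLimiter:
    """Sketch of proxy-side error suppression: after `limit` errors
    inside `interval` seconds, a node is skipped until the window
    since its last error expires."""

    def __init__(self, interval=60, limit=10):
        self.interval = interval
        self.limit = limit
        self._errors = {}  # node -> (error_count, time_of_last_error)

    def record_error(self, node, now=None):
        now = time.time() if now is None else now
        count, last = self._errors.get(node, (0, 0.0))
        if now - last > self.interval:
            count = 0  # previous window expired; start counting afresh
        self._errors[node] = (count + 1, now)

    def is_suppressed(self, node, now=None):
        now = time.time() if now is None else now
        count, last = self._errors.get(node, (0, 0.0))
        return count >= self.limit and now - last < self.interval
```

With the defaults above, ten timeouts inside a minute mute the node for the following minute, which matches the behaviour described.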

--John



On Jul 2, 2013, at 8:57 PM, "Zhou, Yuan"  wrote:

> Hi lists,
>  
> We’re trying to evaluate the node failure performance in Swift.
> According to the docs, Swift should be able to mute failed nodes:
> ‘if a storage node does not respond in a reasonable amount of time, the proxy 
> considers it to be unavailable and will not attempt to communicate with it 
> for a while.’
>  
> We did a simple test on a 5-node cluster:
> 1.   Use COSBench to keep downloading files from the cluster.
> 2.   Stop the networking on SN1; lots of ‘connection timeout 
> 0.5s’ errors occur in the proxy’s log.
> 3.   Keep the workload running and wait for about 1 hour.
> 4.   The same errors still occur in the proxy, which means the node is not 
> muted; we expected SN1 to be muted on the proxy side with no 
> ‘connection timeout’ errors in the proxy.
>  
> So is there any special works needs to be done to use this feature?
>  
> Regards, -yuanz
>  



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread David Kranz

On 07/03/2013 12:30 PM, Day, Phil wrote:


Hi Folks,

I have a change submitted which adds the same clean shutdown logic to 
stop and delete that exists for soft reboot -- the rationale being that 
it's always better to give a VM a chance to shut down cleanly if 
possible, even if you're about to delete it, as sometimes other parts of 
the application expect this; and if it's booted from a volume you want 
to leave the guest file system in a tidy state.


https://review.openstack.org/#/c/35303/

However setting the default value to 120 seconds (as per soft reboot) 
causes the Jenkins gate jobs to blow the 3 hour limit.   This seems to 
be just a gradual accumulation of extra time rather than any one test 
running much longer.


So options would seem to be:

i)Make the default wait time much shorter so that Jenkins runs OK 
(tries this with 10 seconds and it works fine), and assume that users 
will configure it to a more realistic value.


ii)Keep the default at 120 seconds, but make the Jenkins jobs use a 
specific configuration setting (is this possible, and if so can 
someone point me at where to make the change)?


iii)Increase the time allowed for Jenkins

iv)The ever popular something else ...

Thoughts please.

Cheers,

Phil

The fact that changing the timeout changes gate time means the code is 
actually hitting the timeout. Is that expected?
Shutdown now relies on the guest responding to ACPI. Is that what we 
want? Tempest uses a specialized image and I'm not sure how it is set up 
in this regard. In any event, I don't think we want to add any more time 
to server delete when running in the gate.


I'm also a little concerned that this seems to be a significant behavior 
change when using vms that behave like the ones in the gate. In reboot 
this is handled by having soft/hard options of course.


 -David





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Day, Phil
Thanks Clark,

So the process would be to get a new version of devstack-vm-gate.sh merged 
here first, and then submit my change in Nova, right? 

Is there any guidance on how I could test my change to devstack-vm-gate.sh 
ahead of submitting it?

Thanks,
Phil

> -Original Message-
> From: Clark Boylan [mailto:clark.boy...@gmail.com]
> Sent: 03 July 2013 17:48
> To: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks
> Jenkins
> 
> On Wed, Jul 3, 2013 at 9:30 AM, Day, Phil  wrote:
> > Hi Folks,
> I can't really speak to the stuff that was here so snip.
> >
> > i)Make the default wait time much shorter so that Jenkins runs OK
> > (tries this with 10 seconds and it works fine), and assume that users
> > will configure it to a more realistic value.
> >
> I know Robert Collins and others would like to see our defaults be reasonable
> for a mid-sized deployment, so we shouldn't use a default to accommodate
> Jenkins.
> > ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
> > specific configuration setting (is this possible, and iof so can
> > someone point me at where to make the change) ?
> >
> It is possible. You can expose and set config options through the 
> devstack-gate
> project [1] which runs the devstack tests in Jenkins.
> > iii) Increase the time allowed for Jenkins
> >
> I don't think we want to do this as it already takes quite a bit of time to 
> get
> through the gate (the three hour timeout seems long, but sdague and others
> would have a better idea of what it should be).
> > iv) The ever popular something else ...
> >
> Nothing comes to mind immediately.
> 
> [1] https://github.com/openstack-infra/devstack-gate
> You will probably find the devstack-vm-gate.sh and devstack-vm-gate-wrap.sh
> scripts to be most useful.
> 
> Clark
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Jérôme Gallard
Hi all,

Russell, I agree with all of your remarks, and especially with the
point that exposing placement details to users should be avoided.

However, I see a possible use case for the filter. For instance, consider
the BP "Support for multiple active scheduler drivers" (
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers
): a cloud provider may want to provide a specific class of service
(on a dedicated aggregate) for users who want to ensure that both
volumes and instances are on the same host, and use the weight function
for all the other hosts.
Does it make sense?

Regards,
Jérôme
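For concreteness, the weigher variant under discussion could be shaped roughly like this (a pure sketch; the host strings and `volume_hosts` set stand in for the real scheduler's host-state and volume-lookup objects):

```python
def volume_affinity_weight(host, volume_hosts):
    """Hosts that already hold the requested volume float to the top;
    every other host stays schedulable, just with a lower weight."""
    return 1.0 if host in volume_hosts else 0.0

def pick_host(hosts, volume_hosts):
    # Highest weight wins; Python's max() keeps the first host on ties.
    return max(hosts, key=lambda h: volume_affinity_weight(h, volume_hosts))
```

Unlike the hard filter, a request with no volume-local candidate still lands somewhere instead of failing to schedule.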

On Wed, Jul 3, 2013 at 5:54 PM, Russell Bryant  wrote:
> On 07/03/2013 10:24 AM, Alexey Ovchinnikov wrote:
>> Hi everyone,
>>
>> for some time I have been working on an implementation of a filter that
>> would allow to force instances to hosts which contain specific volumes.
>> A blueprint can be found here:
>> https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
>> and an implementation here:
>> https://review.openstack.org/#/c/29343/
>>
>> The filter works for LVM driver and now it picks either a host
>> containing specified volume
>> or nothing (thus effectively failing instance scheduling). Now it fails
>> primarily when it can't find the volume. It has been
>> pointed out to me that sometimes it may be desirable not to fail instance
>> scheduling but to run it anyway. However, this softer behaviour fits better
>> with a weighter function. Thus I have registered a blueprint for the
>> weighter function:
>> https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function
>>
>> I was thinking about both the filter and the weighter working together.
>> The former
>> could be used in cases when we strongly need storage space associated
>> with an
>> instance and need them placed on the same host. The latter could be used
>> when
>> storage space is nice to have and preferably on the same host
>> with an instance, but not so crucial as to have the instance running.
>>
>> During reviewing a question appeared whether we need the filter and
>> wouldn't things be better
>> if we removed it and had only the weighter function instead. I am not
>> yet convinced
>> that the filter is useless and needs to be replaced with the weighter,
>> so I am asking for your opinion on this matter. Do you see usecases for
>> the filter,
>> or the weighter will answer all needs?
>
> Thanks for starting this thread.
>
> I was pushing for the weight function.  It seems much more appropriate
> for a cloud environment than the filter.  It's an optimization that is
> always a good idea, so the weight function that works automatically
> would be good.  It's also transparent to users.
>
> Some things I don't like about the filter:
>
>  - It requires specifying a scheduler hint
>
>  - It's exposing a concept of co-locating volumes and instances on the
> same host to users.  This isn't applicable for many volume backends.  As
> a result, it's a violation of the principle where users ideally do not
> need to know or care about deployment details.
>
> --
> Russell Bryant
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Clark Boylan
On Wed, Jul 3, 2013 at 9:30 AM, Day, Phil  wrote:
> Hi Folks,
I can't really speak to the stuff that was here so snip.
>
> i)Make the default wait time much shorter so that Jenkins runs OK
> (tries this with 10 seconds and it works fine), and assume that users will
> configure it to a more realistic value.
>
I know Robert Collins and others would like to see our defaults be
reasonable for a mid-sized deployment, so we shouldn't use a default to
accommodate Jenkins.
> ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a
> specific configuration setting (is this possible, and iof so can someone
> point me at where to make the change) ?
>
It is possible. You can expose and set config options through the
devstack-gate project [1] which runs the devstack tests in Jenkins.
> iii) Increase the time allowed for Jenkins
>
I don't think we want to do this as it already takes quite a bit of
time to get through the gate (the three hour timeout seems long, but
sdague and others would have a better idea of what it should be).
> iv) The ever popular something else …
>
Nothing comes to mind immediately.

[1] https://github.com/openstack-infra/devstack-gate
You will probably find the devstack-vm-gate.sh and
devstack-vm-gate-wrap.sh scripts to be most useful.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics][Nova] Using Bicho database to get stats about code review

2013-07-03 Thread Russell Bryant
On 07/03/2013 12:14 PM, Jesus M. Gonzalez-Barahona wrote:
> Hi all,
> 
> Bicho [1] now has a Gerrit backend, which has been tested with
> OpenStack's Gerrit. We have used it to produce the MySQL database dump
> available at [2] ( gerrit.mysql.7z ). You can use it to compute the
> metrics mentioned in the previous threads about code review, and some
> others.
> 
> [1] https://github.com/MetricsGrimoire/Bicho
> [2] http://activity.openstack.org/dash/browser/data/db/
> 
> The database dump will be updated daily, starting in a few days.
> 
> For some examples on how to run queries on it, or how to produce the
> database using Bicho, fresh from OpenStack's gerrit, have a look at [3].
> 
> [3] https://github.com/MetricsGrimoire/Bicho/wiki/Gerrit-backend
> 
> At some point, we plan to visualize the data as a part of the
> development dashboard [4], so any comment on interesting metrics, or
> bugs, will be welcome. For now, we're taking not of all metrics
> mentioned in the previous posts about code review stats.
> 
> [4] http://activity.openstack.org/dash/browser/

Thanks for sharing!  I don't think I understand the last sentence,
though.  Can you clarify?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Adding a clean shutdown for stop/delete breaks Jenkins

2013-07-03 Thread Day, Phil
Hi Folks,

I have a change submitted which adds the same clean shutdown logic to stop and 
delete that exists for soft reboot - the rationale being that it's always better 
to give a VM a chance to shut down cleanly if possible, even if you're about to 
delete it, as sometimes other parts of the application expect this; and if it's 
booted from a volume you want to leave the guest file system in a tidy state.

https://review.openstack.org/#/c/35303/

However setting the default value to 120 seconds (as per soft reboot) causes 
the Jenkins gate jobs to blow the 3 hour limit.   This seems to be just a 
gradual accumulation of extra time rather than any one test running much longer.

So options would seem to be:


i)Make the default wait time much shorter so that Jenkins runs OK 
(tries this with 10 seconds and it works fine), and assume that users will 
configure it to a more realistic value.

ii)   Keep the default at 120 seconds, but make the Jenkins jobs use a 
specific configuration setting (is this possible, and if so can someone point 
me at where to make the change)?

iii) Increase the time allowed for Jenkins

iv) The ever popular something else ...

Thoughts please.

Cheers,
Phil
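The behaviour being proposed boils down to a poll-with-deadline loop; a sketch for context (the `vm` driver object and its methods are hypothetical, not Nova's actual API):

```python
import time

def clean_stop(vm, timeout=120, poll=0.5):
    """Ask the guest to shut down cleanly; fall back to a hard stop
    if it is still running when the timeout expires."""
    vm.soft_shutdown()               # e.g. inject an ACPI power-button event
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not vm.is_running():
            return True              # clean shutdown succeeded
        time.sleep(poll)
    vm.hard_stop()                   # timed out: cut power
    return False
```

Option (i) above just shrinks `timeout`; the loop itself is unchanged, which is why the gate cost scales with how many guests ignore the ACPI request.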


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vmware] VMwareAPI sub-team status update

2013-07-03 Thread Shawn Hartsock
Greetings Stackers!

I covered open reviews on Friday. It is a short week for us here in the US so 
I'll just be sending the one email this week just ahead of our VMwareAPI team 
meeting.

Blueprints targeted for Havana-2:
* https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage - 
started but depends on
** https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy - 
which I have up for review
* 
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
 - good progress 

If you are working on a BP for the H2 deadline, try to get it up and finished by 
Monday morning. Review cycles are long, and if you don't have it up for review 
by then, chances are your blueprint will slip into Havana-3.

Thanks to the core-reviewers for all their attention! We've had several patches 
merge and I feel that our developers are starting to learn what's expected from 
them. I have a feeling that future reviews will be smoother thanks to your 
attentions.

Merged (Victory!):
* https://review.openstack.org/#/c/30036/
* https://review.openstack.org/#/c/30289/

Needs one more +2 / Approve button:
* https://review.openstack.org/#/c/27885/
* https://review.openstack.org/#/c/29453/ 

Ready for core-reviewer:
[none at this stage right now]

Needs VMware API expert review:
* https://review.openstack.org/#/c/30282/
* https://review.openstack.org/#/c/30628/ <- pretty close to ready for core
* https://review.openstack.org/#/c/30822/ <- almost ready for core
* https://review.openstack.org/#/c/33100/
* https://review.openstack.org/#/c/33238/ <- BP
* https://review.openstack.org/#/c/33504/ <- VC not available fix
* https://review.openstack.org/#/c/34033/ <- new feature; how should VMwareAPI 
support it?

Needs help/discussion (has topical -1 issues):
* https://review.openstack.org/#/c/33782/ <- general nova discussion affects 
all hypervisors
* https://review.openstack.org/#/c/33088/ <- IIRC author is considering 
dropping this?
* https://review.openstack.org/#/c/34189/ <- has -1 for nit-pick reasons

Work in progress:
* https://review.openstack.org/#/c/35502/ <- I can't click the "Work in 
progress button" I'm not sure how else to signal that I'm still working... help?

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI

# Shawn Hartsock - VMware's Nova Compute driver maintainer guy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-03 Thread Christopher Armstrong
On Tue, Jul 2, 2013 at 11:38 PM, Robert Collins
 wrote:
> Radix points out I missed the nuance that you're targeting the users
> of python-novaclient, for instance, rather than python-novaclient's
> own tests.
>
>
> On 3 July 2013 16:29, Robert Collins  wrote:
>
>>> What I'd like is for each client library, in addition to the actual
>>> implementation, is that they ship a fake, in-memory, version of the API. The
>>> fake implementations should take the same arguments, have the same return
>>> values, raise the same exceptions, and otherwise be identical, besides the
>>> fact
>>> that they are entirely in memory and never make network requests.
>>
>> So, +1 on shipping a fake reference copy of the API.
>>
>> -1 on shipping it in the client.
>>
>> The server that defines the API should have two implementations - the
>> production one, and a testing fake. The server tests should exercise
>> *both* code paths [e.g. using testscenarios] to ensure there is no
>> skew between them.
>>
>> Then the client tests can be fast and efficient but not subject to
>> implementation skew between fake and prod implementations.
>>
>> Back on Launchpad I designed a similar thing, but with language
>> neutrality as a goal :
>> https://dev.launchpad.net/ArchitectureGuide/ServicesRequirements#Test_fake
>>
>> And in fact, I think that that design would work well here, because we
>> have multiple language bindings - Python, Ruby, PHP, Java, Go etc, and
>> all of them will benefit from a low(ms or less)-latency test fake.
>
> So taking the aspect I missed into account I'm much happier with the
> idea of shipping a fake in the client, but... AFAICT many of our
> client behaviours are only well defined in the presence of a server
> anyhow.
>
> So it seems to me that a fast server fake can be used in tests of
> python-novaclient, *and* in tests of code using python-novaclient
> (including for instance, heat itself), and we get to write it just
> once per server, rather than once per server per language binding.
>
> -Rob


I want to make sure I understand you. Let's say I have a program named
cool-cloud-tool, and it uses python-novaclient, python-keystoneclient,
and three other clients for OpenStack services. You're suggesting that
its test suite should start up instances of all those OpenStack
services with in-memory or otherwise localized backends, and
communicate with them using standard python-*client functionality?

I can imagine that being a useful thing, if it's very easy to do, and
won't increase my test execution time too much.
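For readers wondering what such a fake looks like in practice, here is a toy, in-memory sketch; the class and exception names are invented for illustration and are not the real python-novaclient API:

```python
import itertools


class NotFound(Exception):
    """Raised where the real client would raise its not-found error."""


class FakeServerManager:
    """In-memory fake: same calls, return values, and exceptions as a
    real manager, but no network requests are ever made."""

    def __init__(self):
        self._servers = {}
        self._ids = itertools.count(1)

    def create(self, name, image, flavor):
        server = {"id": next(self._ids), "name": name,
                  "image": image, "flavor": flavor, "status": "ACTIVE"}
        self._servers[server["id"]] = server
        return server

    def get(self, server_id):
        try:
            return self._servers[server_id]
        except KeyError:
            raise NotFound(server_id)

    def list(self):
        return list(self._servers.values())
```

A consumer's test suite could then swap this in for the real manager and stay fast while still exercising the same interface.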

-- 
IRC: radix
Christopher Armstrong
Rackspace



[openstack-dev] [Metrics][Nova] Using Bicho database to get stats about code review

2013-07-03 Thread Jesus M. Gonzalez-Barahona
Hi all,

Bicho [1] now has a Gerrit backend, which has been tested with
OpenStack's Gerrit. We have used it to produce the MySQL database dump
available at [2] ( gerrit.mysql.7z ). You can use it to compute the
metrics mentioned in the previous threads about code review, and some
others.

[1] https://github.com/MetricsGrimoire/Bicho
[2] http://activity.openstack.org/dash/browser/data/db/

The database dump will be updated daily, starting in a few days.

For some examples of how to run queries on it, or how to produce the
database using Bicho fresh from OpenStack's gerrit, have a look at [3].

[3] https://github.com/MetricsGrimoire/Bicho/wiki/Gerrit-backend

At some point, we plan to visualize the data as a part of the
development dashboard [4], so any comment on interesting metrics, or
bugs, will be welcome. For now, we're taking note of all metrics
mentioned in the previous posts about code review stats.

[4] http://activity.openstack.org/dash/browser/

Saludos,

Jesus.

-- 
-- 
Bitergia: http://bitergia.com http://blog.bitergia.com




[openstack-dev] [Nova] too many tokens

2013-07-03 Thread Ala Rezmerita
Hi everyone,

I have a question regarding excessive token generation in nova when using
quantumclient (also related to bug reports
https://bugs.launchpad.net/nova/+bug/1192383 +
https://bugs.launchpad.net/nova-project/+bug/1191159)

For instance, during the periodic task *heal_instance_info_cache* (every
60s) nova calls the quantum API method get_instance_nw_info, which calls
_build_network_info_model (backtrace at the end of the mail).

During the execution of this method, 4 quantum client instances are
created (all of them using the same context object), and a new token is
generated for each of them.

Is it possible to change this behavior by updating the context.auth_token
property the first time a quantumclient for a given context is created (so
that the same token is reused among the 4 client instances)? Are there any
security issues that could arise?

Thanks
Ala Rezmerita
Cloudwatt

The backtrace :
  /usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py(194)main()
-> result = function(*args, **kwargs)
  /opt/stack/nova/nova/openstack/common/loopingcall.py(125)_inner()
-> idle = self.f(*self.args, **self.kw)
  /opt/stack/nova/nova/service.py(283)periodic_tasks()
-> return self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
  /opt/stack/nova/nova/manager.py(100)periodic_tasks()
-> return self.run_periodic_tasks(context, raise_on_error=raise_on_error)

/opt/stack/nova/nova/openstack/common/periodic_task.py(179)run_periodic_tasks()
-> task(self, context)
  /opt/stack/nova/nova/compute/manager.py(3654)_heal_instance_info_cache()
-> self._get_instance_nw_info(context, instance)
  /opt/stack/nova/nova/compute/manager.py(767)_get_instance_nw_info()
-> instance, conductor_api=self.conductor_api)
  /opt/stack/nova/nova/network/quantumv2/api.py(367)get_instance_nw_info()
-> result = self._get_instance_nw_info(context, instance, networks)
  /opt/stack/nova/nova/network/quantumv2/api.py(375)_get_instance_nw_info()
-> nw_info = self._build_network_info_model(context, instance, networks)

/opt/stack/nova/nova/network/quantumv2/api.py(840)_build_network_info_model()
-> client = quantumv2.get_client(context, admin=True)
> /opt/stack/nova/nova/network/quantumv2/__init__.py(67)get_client()


Re: [openstack-dev] Issues with "git review" for a dependent commit

2013-07-03 Thread Vishvananda Ishaya

On Jul 2, 2013, at 4:23 PM, Jeremy Stanley  wrote:

> 
>git review -d 33297
>git review -x 35384
>git review

Oh, I didn't see that you added -x/-X/-N. I can simplify my backport script[1] 
significantly now.

Vish

[1] https://gist.github.com/vishvananda/2206428


Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Russell Bryant
On 07/03/2013 10:24 AM, Alexey Ovchinnikov wrote:
> Hi everyone,
> 
> for some time I have been working on an implementation of a filter that
> would allow forcing instances onto hosts which contain specific volumes.
> A blueprint can be found here:
> https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
> and an implementation here:
> https://review.openstack.org/#/c/29343/
> 
> The filter works for the LVM driver, and currently it picks either a host
> containing the specified volume or nothing (thus effectively failing
> instance scheduling). Right now it fails primarily when it can't find the
> volume. It has been pointed out to me that sometimes it may be desirable
> not to fail instance scheduling but to run the instance anyway. However,
> this softer behaviour fits better with a weighter function. Thus I have
> registered a blueprint for the
> weighter function:
> https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function
> 
> I was thinking about both the filter and the weighter working together.
> The former could be used in cases when we strongly need storage space
> associated with an instance and need them placed on the same host. The
> latter could be used when storage space is nice to have, preferably on
> the same host as the instance, but not as crucial as having the instance
> running.
> 
> During review, a question arose: do we need the filter at all, or would
> things be better if we removed it and had only the weighter function
> instead? I am not yet convinced that the filter is useless and needs to
> be replaced with the weighter, so I am asking for your opinion on this
> matter. Do you see use cases for the filter, or will the weighter answer
> all needs?

Thanks for starting this thread.

I was pushing for the weight function.  It seems much more appropriate
for a cloud environment than the filter.  It's an optimization that is
always a good idea, so the weight function that works automatically
would be good.  It's also transparent to users.

Some things I don't like about the filter:

 - It requires specifying a scheduler hint

 - It's exposing a concept of co-locating volumes and instances on the
same host to users.  This isn't applicable for many volume backends.  As
a result, it's a violation of the principle where users ideally do not
need to know or care about deployment details.

-- 
Russell Bryant



[openstack-dev] [Trove] weekly meeting today

2013-07-03 Thread Michael Basnight
Same bat time, same bat channel. 2000 UTC in #openstack-meeting-alt

https://wiki.openstack.org/wiki/Meetings/TroveMeeting



Re: [openstack-dev] trove and heat integration status

2013-07-03 Thread Michael Basnight
On Jul 3, 2013, at 2:03 AM, Thierry Carrez  wrote:

> Michael Basnight wrote:
>> 1) Companies who are looking at trove are not yet looking at heat, and a 
>> hard dependency might stifle growth of the product initially
> 
> Integration with other OpenStack 'integrated' projects cuts both ways.
> 
> You can't have the exposure benefits of being published in the common
> integrated release without fulfilling the integration constraints. If
> you think integration requirements will stifle initial growth of the
> project, maybe remaining a bit longer in "incubation" is what you want ?

How can this be a valid response if it's not up to me to stand up said
projects within a company? I'm saying it might be worth considering a
change to the integration requirements, is all. But given the information
earlier in the thread, I think we can move forward.



Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Doug Hellmann
On Wed, Jul 3, 2013 at 6:50 AM, Michael Still  wrote:

> On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic  wrote:
>
> > Question:
> >   Why should we put sqlalchemy-migrate monkey patches in oslo, when we
> > are planning to switch to alembic?
> >
> > Answer:
> >    If we don't put the sqlalchemy-migrate monkey patches in oslo, we
> > won't be able to work on point 7 at all until points 8 and 10 are
> > implemented in every project. Also, work on point 8 is not finished, so
> > we are not able to implement point 10 in any project. So this blocks
> > almost all work in all projects. I think that these 100-200 lines of
> > code are not too big a price for saving a few cycles of time.
>
> We've talked in the past (Folsom summit?) about alembic, but I'm not
> aware of anyone who is actually working on it. Is someone working on
> moving us to alembic? If not, it seems unfair to block database work
> on something no one is actually working on.
>

That's not quite what happened. Unfortunately the conversation happened in
gerrit, IRC, and email, so it's a little hard to piece together from the
outside.

I had several concerns with the nature of this change, not the least of
which is that it monkey-patches a third-party library to add a feature
instead of just modifying that library upstream.

The patch I objected to (https://review.openstack.org/#/c/31016) modifies
the sqlite driver inside sqlalchemy-migrate to support some migration
patterns that it does not support natively. There's no blueprint linked
from the commit message on the patch I was reviewing, so I didn't have the
full background. The description of the patch, and the discussion in
gerrit, initially led me to believe this was for unit tests for the
migrations themselves. I pointed out that it didn't make any sense to test
the migrations on a database no one would use in production, especially if
we had to monkey patch the driver to make the migrations work in the first
place.

Boris clarified that the tests were the general nova tests, at which point
I asked why nova was relying on the migrations to set up a database for its
tests instead of just using the models. Sean cleared up the history on that
point, and although I'm still not happy with the idea of putting code in
oslo with the pre-declared plan to remove it (rather than consider it for
graduation), I agreed that the pragmatic thing to do for now is to live
with the monkey patched version of sqlalchemy-migrate.

At this point, I have removed my -2 to the patch, but I haven't had a
chance to fully review the code. I voted 0 to unblock it in case other
reviewers had time to look at it before I was able to come back. That
hasn't happened, but the patch is no longer blocked.

Somewhere during that conversation, I suggested looking at alembic as an
alternative, but alembic clearly states in its documentation that
migrations on sqlite are not supported because of the database's limited
support for alter statements, and that patches would be welcome if someone
wants to contribute those features.
good unit tests of SQLalchemy-based projects, we should eventually move it
out of oslo and into alembic, then move our migration scripts to use
alembic. It would make the most sense to do that on a release boundary,
when we normally collapse the migration scripts anyway. Even better would
be if we could make the models and migration scripts produce databases that
are compatible enough for testing the main project, and then run tests for
the migrations themselves against real databases as a separate step. Based
on the plan Boris has posted, it sounds like he is working toward both of
these goals.

Doug


>
> Michael
>


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Johannes Erdfelt
On Wed, Jul 03, 2013, Michael Still  wrote:
> On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic  wrote:
> 
> > Question:
> >   Why should we put sqlalchemy-migrate monkey patches in oslo, when we
> > are planning to switch to alembic?
> >
> > Answer:
> >    If we don't put the sqlalchemy-migrate monkey patches in oslo, we
> > won't be able to work on point 7 at all until points 8 and 10 are
> > implemented in every project. Also, work on point 8 is not finished, so
> > we are not able to implement point 10 in any project. So this blocks
> > almost all work in all projects. I think that these 100-200 lines of
> > code are not too big a price for saving a few cycles of time.
> 
> We've talked in the past (Folsom summit?) about alembic, but I'm not
> aware of anyone who is actually working on it. Is someone working on
> moving us to alembic? If not, it seems unfair to block database work
> on something no one is actually working on.

I've started working on a non-alembic migration path that was discussed
at the Grizzly summit.

While alembic is better than sqlalchemy-migrate, it still requires long
downtimes when some migrations are run. We discussed moving to an
expand/contract cycle where migrations add new columns, allow migrations
to slowly (relatively speaking) migrate data over, then (possibly) remove
any old columns.
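As a purely illustrative sketch of the cycle described above (plain Python on an in-memory "table"; the column names and lookup are invented, this is not a real migration):

```python
# A toy table: rows still carry the old 'host' column.
table = [{"id": 1, "host": "node-a"}, {"id": 2, "host": "node-b"}]
host_ids = {"node-a": 10, "node-b": 20}  # invented lookup table

# 1. Expand: add the new column as nullable; old readers keep working.
for row in table:
    row.setdefault("host_id", None)

# 2. Migrate: backfill slowly, batch by batch, while the service runs.
for row in table:
    if row["host_id"] is None:
        row["host_id"] = host_ids[row["host"]]

# 3. Contract: once nothing reads the old column any more, drop it.
for row in table:
    row.pop("host")
```

The point of the split is that steps 1 and 3 are short schema changes, while step 2 can run for as long as it needs without downtime.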

JE




[openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-03 Thread Alexey Ovchinnikov
Hi everyone,

for some time I have been working on an implementation of a filter that
would allow forcing instances onto hosts which contain specific volumes.
A blueprint can be found here:
https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
and an implementation here:
https://review.openstack.org/#/c/29343/

The filter works for the LVM driver, and currently it picks either a host
containing the specified volume or nothing (thus effectively failing
instance scheduling). Right now it fails primarily when it can't find the
volume. It has been pointed out to me that sometimes it may be desirable
not to fail instance scheduling but to run the instance anyway. However,
this softer behaviour fits better with a weighter function. Thus I have
registered a blueprint for the
weighter function:
https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function

I was thinking about both the filter and the weighter working together.
The former could be used in cases when we strongly need storage space
associated with an instance and need them placed on the same host. The
latter could be used when storage space is nice to have, preferably on the
same host as the instance, but not as crucial as having the instance
running.

During review, a question arose: do we need the filter at all, or would
things be better if we removed it and had only the weighter function
instead? I am not yet convinced that the filter is useless and needs to be
replaced with the weighter, so I am asking for your opinion on this
matter. Do you see use cases for the filter, or will the weighter answer
all needs?

With kind regards,
Alexey.
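For readers following along, the filter/weighter split can be sketched like this (standalone toy classes loosely following nova's filter and weigher call patterns; the hint name and host fields are invented for illustration):

```python
class VolumeAffinityFilter:
    """Hard constraint: pass only hosts that hold the requested volume;
    if no host does, scheduling fails."""

    def host_passes(self, host_state, filter_properties):
        hints = filter_properties.get("scheduler_hints", {})
        volume_host = hints.get("volume_host")  # hypothetical hint name
        if volume_host is None:
            return True  # no hint given: the filter is a no-op
        return host_state["host"] == volume_host


class VolumeAffinityWeigher:
    """Soft preference: rank the volume's host higher, never exclude."""

    def weigh(self, host_state, weight_properties):
        volume_host = weight_properties.get("volume_host")
        return 1.0 if host_state["host"] == volume_host else 0.0
```

The filter drops every host except the volume's; the weigher merely bumps that host to the top of the ranking, so scheduling still succeeds elsewhere if it is full.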


Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Boris Pavlovic
+1 =)


On Wed, Jul 3, 2013 at 4:49 PM, Sean Dague  wrote:

> +1, good to have more voices in the eastern hemisphere as well.
>
> On Wed, Jul 3, 2013 at 8:38 AM, Pádraig Brady  wrote:
> > +1
> >
>
>
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [Ceilometer] Meeting agenda for Wed Jul 3rd at 2100 UTC

2013-07-03 Thread Julien Danjou
On Wed, Jul 03 2013, Julien Danjou wrote:

Erratum:
  Next meeting is on Wed Jul 3rd at 2100 UTC 
(today)

> The Ceilometer project team holds a meeting in #openstack-meeting, see
> https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.
>
> Next meeting is on Thu Jul 3rd at 2100 UTC 
>
> Please add your name with the agenda item, so we know who to call on during
> the meeting.
> * Review Havana-2 milestone
>   * https://launchpad.net/ceilometer/+milestone/havana-2
> * Release python-ceilometerclient? 
> * Open discussion
>
> If you are not able to attend or have additional topic(s) you would like
> to add, please update the agenda on the wiki.
>
> Cheers,

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */




Re: [openstack-dev] Proposal for including fake implementations in python-client packages

2013-07-03 Thread Joe Gordon
On Mon, Jul 1, 2013 at 11:08 PM, Alex Gaynor  wrote:

> Hi all,
>
> I suspect many of you don't know me as I've only started to get involved in
> OpenStack recently, I work at Rackspace and I'm pretty involved in other
> Python
> open source stuff, notably Django and PyPy, I also serve on the board of
> the
> PSF. So hi!
>
> I'd like to propose an addition to all of the python-client libraries going
> forwards (and perhaps a requirement for future ones).
>
> What I'd like is for each client library, in addition to the actual
> implementation, is that they ship a fake, in-memory, version of the API.
> The
> fake implementations should take the same arguments, have the same return
> values, raise the same exceptions, and otherwise be identical, besides the
> fact
> that they are entirely in memory and never make network requests.
>
> Why not ``mock.Mock(spec=...)``:
>
> First, for those not familiar with the distinction between fakes and mocks
> (and
> doubles and stubs and ...): http://mumak.net/test-doubles/ is a great
> resource.
> https://www.youtube.com/watch?v=Xu5EhKVZdV8 is also a great resource which
> explains much of what I'm about to say, but better.
>
> Fakes are better than Mocks, for this because:
>
> * Mocks tend to be brittle because they're testing for the implementation,
> and
>   not the interface.
> * Each mock tends to grow its own divergent behaviors, which tend to not be
>   correct.
>
> http://stackoverflow.com/questions/8943022/reactor-stops-between-tests-when-using-twisted-trial-unittest/8947354#8947354
>   explains how to avoid this with fakes
> * Mocks tend to encourage monkey patching, instead of "just passing
> objects as
>   parameters"
>
> Again: https://www.youtube.com/watch?v=Xu5EhKVZdV8 is an amazing resource.
>
>
> This obviously adds a bit of a burden to development of client libraries,
> so
> there needs to be a good justification. Here are the advantages I see:
>


Just as an FYI, nova currently has different ways of addressing the two
points that a fake API server would address.



>
> * Helps flesh out the API: by having a simple implementation it helps in
>   designing a good API.
>

Having a simple API implementation will not help us have a good API,
as by the time it's in code it is almost too late to change (the window
for change is between coded and released). Instead, a fake API would help
validate that the API follows the specs. Nova has two ways of doing this
now: unit tests that exercise the APIs and compare the results against
recorded output (see doc/api_samples/ in nova), and tempest API tests.
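That recorded-output style of test can be sketched roughly like this (the sample content and function names here are invented, not an actual file from doc/api_samples/):

```python
import json

# A 'recorded' sample, normalized to a canonical JSON string.
RECORDED_SAMPLE = json.dumps(
    {"server": {"name": "demo", "status": "ACTIVE"}}, sort_keys=True)


def api_response():
    # Stand-in for calling the API under test.
    return {"server": {"status": "ACTIVE", "name": "demo"}}


def matches_sample(response):
    """Compare a live response against the recorded sample, ignoring
    key ordering."""
    return json.dumps(response, sort_keys=True) == RECORDED_SAMPLE
```

If the API drifts from the spec, the comparison fails even though the code "works", which is exactly the kind of skew a fake would otherwise hide.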



> * Helps other projects tests: right now any project which uses an openstack
>   client library has to do something manual in their tests, either add
> their
>   own abstraction layer where they hand write an in-memory implementation,
> or
>   they just monkey patch the socket, http, or client library to not make
>   request. Either direction requires a bunch of work from each and every
>   project using an openstack client. Having these in the core client
> libraries
>   would allow downstream authors to simply swap out Connection classes.
>

Instead of a fake API server with no scheduler, no db, etc., nova has a fake
backend. So the only code path that is different from a real deployment is
which virt driver is used (https://review.openstack.org/#/c/24938/).


>
> I think these benefits out weigh the disadvantages. I'm not sure what the
> procedure for this is going forward. I think to demonstrate this concept it
> should start with a few (or even just one) client libraries, particularly
> ones
> which completely own the resources they serve (e.g. swift, marconi,
> ceilometer,
> trove), as compared to ones that interact more (e.g. neutron, cinder, and
> nova). This is absolutely something I'm volunteering to work on, but I
> want to
> ensure this is an idea that has general buy in from the community and
> existing
> maintainers, so it doesn't wither.
>
> Thanks,
> Alex
>
> --
> "I disapprove of what you say, but I will defend to the death your right
> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
> GPG Key fingerprint: 125F 5C67 DFE9 4084
>


Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Sean Dague
+1, good to have more voices in the eastern hemisphere as well.

On Wed, Jul 3, 2013 at 8:38 AM, Pádraig Brady  wrote:
> +1
>



-- 
Sean Dague
http://dague.net



[openstack-dev] [Heat] PTL proxy 8th-19th July

2013-07-03 Thread Steven Hardy
I'll be on holiday from 8th-19th July inclusive (booked before I knew I'd
be taking on the PTL responsibility hence unfortunately coinciding with H2)

Steve Baker (stevebaker on IRC) has kindly offered to take on PTL tasks in
my absence, and will be my nominated proxy at all meetings.

Steve should also be the primary point of contact for release coordination
for the H2 milestone.

Thanks,

Steve



Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Pádraig Brady
+1



Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Andrew Laski

+1

On 07/02/13 at 06:40pm, Russell Bryant wrote:

Greetings,

I would like to propose Christopher Yeoh to be added to the nova-core team.

Christopher has been prolific in his contributions to nova lately, both
in code and his general leadership of the v3 API effort.  He has also
been regularly contributing to code reviews.  It would be great to have
him on board to help review API changes, as well as fixes elsewhere in nova.

References:

   https://review.openstack.org/#/q/owner:5292,n,z

   https://review.openstack.org/#/q/reviewer:5292,n,z

   https://review.openstack.org/#/dashboard/5292

Please respond with +1s or any concerns.

Thanks,

--
Russell Bryant



[openstack-dev] [Ceilometer] Meeting agenda for Thu Jul 3rd at 2100 UTC

2013-07-03 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Jul 3rd at 2100 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Review Havana-2 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-2
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info




Re: [openstack-dev] [Nova] Proposal to add Christopher Yeoh to nova-core

2013-07-03 Thread Brian Elliott
+1

Sent from my iPad

On Jul 2, 2013, at 5:40 PM, Russell Bryant  wrote:

> Greetings,
> 
> I would like to propose Christopher Yeoh to be added to the nova-core team.
> 
> Christopher has been prolific in his contributions to nova lately, both
> in code and his general leadership of the v3 API effort.  He has also
> been regularly contributing to code reviews.  It would be great to have
> him on board to help review API changes, as well as fixes elsewhere in nova.
> 
> References:
> 
>https://review.openstack.org/#/q/owner:5292,n,z
> 
>https://review.openstack.org/#/q/reviewer:5292,n,z
> 
>https://review.openstack.org/#/dashboard/5292
> 
> Please respond with +1s or any concerns.
> 
> Thanks,
> 
> -- 
> Russell Bryant
> 


[openstack-dev] [nova]Discussion of the inconsistency between api and backend compute driver limitations.

2013-07-03 Thread guohliu

Hello,

As we know, nova has different compute drivers with different
limitations, as you can see at this link:


https://wiki.openstack.org/wiki/HypervisorSupportMatrix

but the API doesn't reflect that, which means that when the user performs
some operations via the APIs, the user cannot get a well-defined message
about the limitations, and most likely just gets an error status. I have
been thinking about this problem for a while, and I feel a limitation
filter might be needed in the current API design to catch those
limitations and directly return a well-defined message. I apologize if
this problem has already been discussed and I missed it; looking forward
to your comments.
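A minimal sketch of the idea (the driver names and capability table below are invented for illustration; a real implementation would derive them from the hypervisor support matrix):

```python
DRIVER_CAPABILITIES = {
    "libvirt": {"pause", "resize", "snapshot"},
    "baremetal": {"rebuild"},
}


def check_supported(driver, operation):
    """Return None if the operation is supported, otherwise a
    well-defined error body instead of a bare error status."""
    if operation not in DRIVER_CAPABILITIES.get(driver, set()):
        return {"code": 400,
                "message": "Operation '%s' is not supported by the "
                           "'%s' driver." % (operation, driver)}
    return None
```

The API layer would call such a check before dispatching to the driver, so the user sees a clear message rather than a generic error.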


Best Regards

guohliu





Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Boris Pavlovic
Michael,

Actually my team is already working in Nova on moving from
sqlalchemy-migrate to alembic.
We are doing the first step: syncing models and migrations.

But as I wrote, there are a lot of things that should be done before
switching to another migration util:
1) Sync our schemas across different backends (in all projects)
2) Sync schemas with models (there are also a lot of hidden bugs in
migrations that should be fixed)
3) Add tests that check that the model and migration schemas are equal

And only then will we be able to rewrite our migrations without risk (and
it is a really huge task).
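Point 3 above could be sketched like this (a toy comparison over {table: {column: type}} snapshots; in a real test these dicts would come from reflecting a models-created database and a migrations-created one):

```python
def schema_diff(models_schema, migrations_schema):
    """Report columns whose types differ, or that exist on only one
    side, between the two schema snapshots."""
    diff = {}
    for table in set(models_schema) | set(migrations_schema):
        m = models_schema.get(table, {})
        g = migrations_schema.get(table, {})
        changed = {col: (m.get(col), g.get(col))
                   for col in set(m) | set(g) if m.get(col) != g.get(col)}
        if changed:
            diff[table] = changed
    return diff
```

An empty diff would then be the test's passing condition.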

So I think that we shouldn't block our DB work because of temporary
sqlalchemy-migrate monkey patches in oslo.

Best regards,
Boris Pavlovic



On Wed, Jul 3, 2013 at 2:50 PM, Michael Still  wrote:

> On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic  wrote:
>
> > Question:
> >   Why should we put sqlalchemy-migrate monkey patches in oslo, when we
> > are planning to switch to alembic?
> >
> > Answer:
> >    If we don't put the sqlalchemy-migrate monkey patches in oslo, we
> > won't be able to work on point 7 at all until points 8 and 10 are
> > implemented in every project. Also, work on point 8 is not finished, so
> > we are not able to implement point 10 in any project. So this blocks
> > almost all work in all projects. I think that these 100-200 lines of
> > code are not too big a price for saving a few cycles of time.
>
> We've talked in the past (Folsom summit?) about alembic, but I'm not
> aware of anyone who is actually working on it. Is someone working on
> moving us to alembic? If not, it seems unfair to block database work
> on something no one is actually working on.
>
> Michael
>


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-03 Thread ZG Niu
+1


On Wed, Jul 3, 2013 at 4:40 PM, Julie Pichon  wrote:

> "Monty Taylor"  wrote:
> > On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
> > > On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
> > >> Thomas Goirand wrote:
> > >>> So, shouldn't the .mo files be generated at build time only, and be
> kept
> > >>> out of the Git?
> > >>
> > >> +1
> > >
> > > Yep, agree too.
> > >
> > > Interestingly, last time I checked, devstack doesn't actually compile
> > > the message catalogs (python setup.py compile_catalog).
> > >
> > > I've been meaning to fix that for a while now, but it's fallen by the
> > > wayside. I've unassigned myself from the bug for now:
> > >
> > >   https://bugs.launchpad.net/devstack/+bug/995287
> >
> > Should we make python setup.py install do this if gettext is installed?
> > Or keep it as a separate step for people who care?
>
> Yes, it would be awesome to automate this on install and let people still
> get the translations automagically.
>
> Julie
>
>



-- 
Best Regards,
NiuZG


Re: [openstack-dev] Work around DB in OpenStack (Oslo, Nova, Cinder, Glance)

2013-07-03 Thread Michael Still
On Wed, Jul 3, 2013 at 3:50 AM, Boris Pavlovic  wrote:

> Question:
>   Why should we put the sqlalchemy-migrate monkey patches into oslo when we
> are planning to switch to alembic?
>
> Answer:
>If we don’t put the sqlalchemy-migrate monkey patches into oslo, we won't be
> able to work on point 7 at all until points 8 and 10 are implemented in
> every project. Also, the work around point 8 is not finished, so we cannot
> implement point 10 in any project. So this blocks almost all work in all
> projects. I think these 100-200 lines of code are not too big a price to pay
> for saving a few cycles of time.

We've talked in the past (Folsom summit?) about alembic, but I'm not
aware of anyone who is actually working on it. Is someone working on
moving us to alembic? If not, it seems unfair to block database work
on something no one is actually working on.

Michael



Re: [openstack-dev] [cinder] Propose to add copying the reference images when creating a volume

2013-07-03 Thread Mate Lakat
Hi Sheng,

Unfortunately I had a misunderstanding around boot from volume, as
pointed out by Robert Collins, so apologies for that. It turned out
that boot from volume is a broader concept.

Merge:
Generally I would say that creating a bootable disk (let's forget about the
cloud for the moment) from a kernel, a ramdisk and a rootfs is not a
straightforward job. I think you need a boot loader, and you need to
install that boot loader to the disk so that it will load the kernel,
etc., although I am not an expert in this area.

Disk image:
If you download the CirrOS disk image, it should contain this boot
loader already; this is a "disk image":

https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

This is in qcow2 format and can be converted to raw with the qemu-img
convert command. dd-ing the raw bytes to a block device should leave you
with a device that can be booted.
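A minimal sketch of that flow (the qemu-img step is shown as a comment,
and plain files stand in for the downloaded image and the target block
device, so the example is self-contained and touches no real disk):

```shell
# With the real image you would first convert qcow2 to raw:
#   qemu-img convert -O raw cirros-0.3.0-x86_64-disk.img /tmp/cirros.raw
# Here a scratch file stands in for the converted raw image:
dd if=/dev/urandom of=/tmp/cirros.raw bs=1M count=4 2>/dev/null

# dd the raw bytes onto the target "device" (a file stands in for /dev/sdX):
dd if=/tmp/cirros.raw of=/tmp/fake-device bs=1M 2>/dev/null

# The copy is byte-identical, which is all that booting from it requires:
cmp /tmp/cirros.raw /tmp/fake-device && echo identical
```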

I hope it helped, and sorry for the confusion.

I think Vish described a solution to your problem here:
http://lists.openstack.org/pipermail/openstack-dev/2013-July/011164.html

Mate

On Wed, Jul 03, 2013 at 01:52:55PM +0800, Sheng Bo Hou wrote:
> Hi Mate,
> 
> This is an issue that, so far, I have found happens only with UEC images.
> 
> A UEC image consists of a triplet: a kernel, an init ramdisk and a root 
> file system image, e.g. cirros-0.3.0-x86_64-uec, 
> cirros-0.3.0-x86_64-uec-kernel and cirros-0.3.0-x86_64-uec-ramdisk. If we 
> boot a VM from a UEC image, we need to make sure that Nova can find the 
> kernel_id and ramdisk_id.
> 
> Suppose all three of them already exist in Glance. I create a volume 
> from the root file system image (cirros-0.3.0-x86_64-uec); then only this 
> image is copied to the volume. If I boot a VM from the volume via "nova boot 
> --block-device-mapping vda=$vol_id" (no image_id specified), Nova will not 
> know the kernel_id and ramdisk_id, so the VM launched in this 
> situation will have some issues, e.g. it cannot be pinged (
> https://bugs.launchpad.net/nova/+bug/1155512). I am looking for a way to 
> inform Nova of the kernel_id and ramdisk_id without a specified image_id.
> 
> Your reply gives me a new clue: is it possible to "merge" the three images 
> into one via some sort of command when creating the bootable volume from 
> the root file system image? Do you think qemu-img can do this? I'd like to 
> hear from you. Thanks.
> 
> Best wishes,
> Vincent Hou (侯胜博)
> 
> Staff Software Engineer, Open Standards and Open Source Team, Emerging 
> Technology Institute, IBM China Software Development Lab
> 
> Tel: 86-10-82450778 Fax: 86-10-82453660
> Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
> Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
> West Road, Haidian District, Beijing, P.R.C.100193
> 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
> 
> 
> 
> Mate Lakat  
> 2013/07/02 23:34
> 
> To
> Sheng Bo Hou/China/IBM@IBMCN, 
> cc
> OpenStack Development Mailing List , 
> "Duncan Thomas  John Griffith" 
> , Avishay Traeger 
> , Eric Harney , "Walter A. Boring 
> IV (hemna)" , , 
> 
> Subject
> Re: [openstack-dev] [cinder] Propose to add copying the reference images 
> when creating a volume
> 
> 
> 
> 
> 
> 
> Hi Sheng,
> 
> You can upload a raw (qemu-img-recognised) image to Glance, and ask
> Cinder to create a volume from it. This way you end up with a bootable
> volume. At the end of the day, your instance will just see a block
> device. The default Cinder driver should also recognise other formats
> that are understood by qemu-img.
> 
> As an advertisement, I just added a patch to make it recognise
> XenServer-type images:
> 
> https://review.openstack.org/34336
> 
> Mate
> 
> On Mon, Jul 01, 2013 at 06:35:03AM -0400, Sheng Bo Hou wrote:
> > Hi Mate,
> > 
> > First, thanks for answering.
> > I was trying to find a way to prepare the bootable volume.
> > Take the default images downloaded by devstack: there are three images, 
> > cirros-0.3.0-x86_64-uec, cirros-0.3.0-x86_64-uec-kernel and 
> > cirros-0.3.0-x86_64-uec-ramdisk.
> > cirros-0.3.0-x86_64-uec-kernel is referred to as the kernel image and 
> > cirros-0.3.0-x86_64-uec-ramdisk is referred to as the ramdisk image.
> > 
> > Issue: if only the image (cirros-0.3.0-x86_64-uec) is copied to the 
> > volume when creating a volume from an image, this volume is unable to 
> > boot an instance without the references to the kernel and the ramdisk 
> > images. The current Cinder only copies the image cirros-0.3.0-x86_64-uec 
> > to one targeted volume (Vol-1), which is marked as bootable but unable 
> > to do a successful boot with the current Nova code, even if image-id is 
> > removed from the parameters.
> > 
> > Possible solutions: there are two ways in my mind to resolve it. One is 
> > that we just need a code change in Nova to let it find the reference 
> > images for the bootable volume (Vol-1), and there is no need to change 
> > anything in Cinder, since the kernel and ramdisk id
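The first option (Nova reading the IDs back out of the volume's image
metadata) can be sketched roughly as follows; the dict shapes and key
names follow the discussion above and are illustrative, not the actual
Nova code:

```python
def image_properties_from_volume(volume):
    """Surface kernel_id/ramdisk_id stored under the volume's
    'volume_image_metadata' in the 'properties' dict that the
    instance-creation code reads (a sketch, not the real Nova fix)."""
    meta = volume.get("volume_image_metadata", {})
    properties = {}
    for key in ("kernel_id", "ramdisk_id"):
        if key in meta:
            properties[key] = meta[key]
    return {"properties": properties}

# Example volume record with illustrative IDs:
vol = {"volume_image_metadata": {"kernel_id": "k-123", "ramdisk_id": "r-456"}}
print(image_properties_from_volume(vol))
# -> {'properties': {'kernel_id': 'k-123', 'ramdisk_id': 'r-456'}}
```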

Re: [openstack-dev] trove and heat integration status

2013-07-03 Thread Thierry Carrez
Michael Basnight wrote:
> 1) Companies who are looking at trove are not yet looking at heat, and a hard 
> dependency might stifle growth of the product initially

Integration with other OpenStack 'integrated' projects cuts both ways.

You can't have the exposure benefits of being published in the common
integrated release without fulfilling the integration constraints. If
you think integration requirements will stifle initial growth of the
project, maybe remaining a bit longer in "incubation" is what you want?

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-03 Thread Julie Pichon
"Monty Taylor"  wrote:
> On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
> > On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
> >> Thomas Goirand wrote:
> >>> So, shouldn't the .mo files be generated at build time only, and be kept
> >>> out of the Git?
> >>
> >> +1
> > 
> > Yep, agree too.
> > 
> > Interestingly, last time I checked, devstack doesn't actually compile
> > the message catalogs (python setup.py compile_catalog).
> > 
> > I've been meaning to fix that for a while now, but it's fallen by the
> > wayside. I've unassigned myself from the bug for now:
> > 
> >   https://bugs.launchpad.net/devstack/+bug/995287
> 
> Should we make python setup.py install do this if gettext is installed?
> Or keep it as a separate step for people who care?

Yes, it would be awesome to automate this on install and let people still get 
the translations automagically.

Julie
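As an aside on why the .mo files are pure build artifacts: a .mo catalog
is just a small binary index compiled from the .po entries, and Python's
stdlib can read one directly. A toy sketch of what compile_catalog (via
Babel) or msgfmt produces at build time; the message strings here are
made up:

```python
import gettext
import io
import struct

def compile_mo(messages):
    """Compile a {msgid: msgstr} dict into GNU .mo bytes (toy msgfmt)."""
    keys = sorted(messages)
    ids = strs = b""
    entries = []
    for k in keys:
        kb, vb = k.encode(), messages[k].encode()
        entries.append((len(ids), len(kb), len(strs), len(vb)))
        ids += kb + b"\0"
        strs += vb + b"\0"
    n = len(keys)
    keystart = 7 * 4 + 16 * n          # header + msgid table + msgstr table
    valuestart = keystart + len(ids)
    koffsets, voffsets = [], []
    for o1, l1, o2, l2 in entries:
        koffsets += [l1, o1 + keystart]
        voffsets += [l2, o2 + valuestart]
    header = struct.pack("<7I",
                         0x950412DE,      # little-endian magic number
                         0,               # file format revision
                         n,               # number of strings
                         7 * 4,           # offset of msgid length/offset table
                         7 * 4 + 8 * n,   # offset of msgstr table
                         0, 0)            # (unused) hash table size and offset
    table = struct.pack("<%dI" % (4 * n), *(koffsets + voffsets))
    return header + table + ids + strs

# Round-trip through the stdlib reader, as an installed app would:
mo_bytes = compile_mo({"Volumes": "Volumina"})
catalog = gettext.GNUTranslations(io.BytesIO(mo_bytes))
print(catalog.gettext("Volumes"))   # -> Volumina
```

Since the binary form is fully derivable from the .po sources, keeping
.mo files in git only invites them to drift out of date.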



[openstack-dev] [Heat] Meeting agenda for Wed July 3rd at 2000 UTC

2013-07-03 Thread Steven Hardy
The Heat team holds a weekly meeting in #openstack-meeting, see

https://wiki.openstack.org/wiki/Meetings/HeatAgenda for more details

The next meeting is on Wed July 3rd at 2000 UTC

Current topics for discussion:
 - Review last week's actions
 - h2 bug/blueprint status
 - stable branch process/release
 - Open discussion

If anyone has any other topic to discuss, please add to the wiki.

Thanks!

Steve



[openstack-dev] I18n team's IRC meeting

2013-07-03 Thread Ying Chun Guo


Hi,


I'm pleased to announce the first IRC meeting of OpenStack I18n team will
be held on
Thursday, 4th July at 0100 UTC .


The OpenStack I18n team will take responsibility for the I18n and L10n of
OpenStack.
Our mission is to make OpenStack ubiquitously accessible to people of all
language backgrounds, by providing a framework to create high-quality
translations, recruiting contributors, and actively managing and planning
the translation process.


We want people from all around the world who are interested in I18n or
OpenStack to join.
The work scope of this team includes:
   translation of documentation, messages, websites, etc.
   I18n tests
   tools maintenance and enhancements


In order to make it easier for people to join the discussion, the meeting
will be held each Thursday, alternating between two different times: an
Asia- and America-friendly time, and a Europe- and India-friendly time.


For more details, please look into
https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting.
Our wiki page is https://wiki.openstack.org/wiki/I18nTeam.
Welcome to join us!


Best regards
Ying Chun (Daisy) Guo