Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

2014-02-03 Thread sahid
Ok Rich, I'm going to test your tools and add them to the workflow if they are
available on the host.

s.

- Original Message -
From: Richard W.M. Jones rjo...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, January 31, 2014 8:13:57 AM
Subject: Re: [openstack-dev] [nova] bp proposal: libvirt-resize-disk-down

On Thu, Jan 30, 2014 at 02:59:45PM +, sahid wrote:
   Greetings,
 
 A blueprint is being discussed about the disk resize down feature of libvirt 
 driver.
   https://blueprints.launchpad.net/nova/+spec/libvirt-resize-disk-down
 
The current implementation does not handle disk resize down; it simply skips
 that step when an instance is resized down. I'm convinced we can implement
 this feature by building on the existing disk resize down support in the
 xenapi driver.

resize2fs -M is problematic as another reply mentions.

virt-sparsify is designed to handle this case properly.  It currently
works by copying the disk image, but it should soon work in-place too
(waiting on some qemu command line changes).

And incidentally, virt-resize can handle the offline growing case well
too.
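For readers unfamiliar with the libguestfs tools mentioned above, the rough shape of the workflow looks like this. The image names and sizes are made up for illustration, and virt-sparsify's in-place mode was not yet available at the time of writing:

```shell
# Copy-based sparsify, as currently implemented: reads disk.qcow2 and
# writes a sparsified copy alongside it.
virt-sparsify disk.qcow2 disk-sparse.qcow2

# Offline grow with virt-resize: create a larger target image, then
# expand the root partition (and the filesystem inside it) into it.
qemu-img create -f qcow2 disk-bigger.qcow2 40G
virt-resize --expand /dev/sda1 disk.qcow2 disk-bigger.qcow2
```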

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Flavio Percoco

On 03/02/14 06:09 +, Eiichi Aikawa wrote:

Hi,

Here is the blueprint about improvement of accessing to glance API server.
 https://blueprints.launchpad.net/nova/+spec/improvement-of-accessing-to-glance

The summary of this bp is:
- Glance API Servers are categorized into two groups: Primary and Secondary.
- First, Nova tries the Primary Glance API servers in random order.
- If all Primary servers fail, Nova tries the Secondary servers in random
  order.

We expect nearby servers to be treated as Primary, and the other servers
as Secondary.

The benefits of this bp, we think, are as follows:
- By listing nearby glance API servers and preferring them, the total amount
  of data transferred across the network can be reduced.
- In particular, when microservers are used, accessing a glance API server
  in the same chassis is more efficient than reaching one in another chassis.

This can reduce the network traffic and increase the efficiency.
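The selection policy described in the bullet points above could be sketched like this (a minimal illustration only; the function and parameter names are invented here, not taken from the blueprint):

```python
import random

def pick_glance_server(primary, secondary, is_reachable):
    """Try Primary (nearby) servers in random order, then Secondary ones.

    `primary`/`secondary` are lists of server addresses and
    `is_reachable` is a hypothetical health-check callable; none of
    these names come from the blueprint itself.
    """
    for group in (primary, secondary):
        candidates = list(group)
        random.shuffle(candidates)  # spread load across the group
        for server in candidates:
            if is_reachable(server):
                return server
    raise RuntimeError("no glance-api server reachable")
```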



I went through the blueprint and the review. I have to say that I agree
with Russell. I don't think making nova's config more complex is the
way to go.

I've more comments, though.

Glance doesn't have a Primary / Secondary concept and I don't
think it makes a lot of sense to add one. You could easily specify the
nearest glance node and fall back to the service catalog if the
glance-api node is no longer available. Even better would be to
always rely on the service catalog.

That said, it seems that the goal of this blueprint is also similar to
the work happening here[0] - or at least it could be achieved as part
of that.

IMHO, the bit that should really be optimized is the selection of the
store nodes the image is downloaded from. That is,
selecting the nearest of the image's locations, and this is
something that should perhaps happen in glance-api, not nova.

One last comment: `glance_api_server` is usually the address of a
load balancer that sits on top of several glance-api nodes. I'm pretty
sure this kind of node selection can also be optimized in services
like haproxy.

In summary, I don't think the solution proposed in this blueprint is the
way to go. I do think node selection is important, but for now I'd focus
on nearest-store selection rather than glance-api node selection. The
work on the multiple-location support blueprint for nova is promising.

Cheers,
flaper

[0] https://review.openstack.org/#/c/33409/

Please give us your advice/comment.

Regards,
E.Aikawa (aik...@mxk.nes.nec.co.jp)




--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-03 Thread Flavio Percoco

On 01/02/14 00:06 -0800, Mike Perez wrote:

Folks,

I would love to get people together who are interested in Cinder stability
to really dedicate a few days to it. This is not for additional features, but
rather for finishing what we already have and getting it into good shape
before the end of the release.


When: Feb 24-26
Where: San Francisco (DreamHost Office can host), Colorado, remote?

Some ideas that come to mind:

- Cleanup/complete volume retype
- Cleanup/complete volume migration [1][2]
- Other ideas that come from this thread.



As an occasional contributor to Cinder, I think it would benefit a lot
if new tests were added. There are some areas that are lacking
tests - AFAICT - and other tests that seem to be inconsistent with the
rest of the test suite. This has caused me some frustration in the
past. I don't have good examples handy but if I have some free time
between the 24th and 26th, I'll look into that and raise them in the
IRC channel.

That said, I think folks participating should also look forward to adding
more tests during those hacking days. Ensuring that features (not just
methods and functions) are fully covered is important.

Great initiative Mike!

Cheers,
flaper

I can't stress the dedicated part enough. I think if we have some folks
from core and anyone interested in contributing and staying focused, we
can really get a lot done in a few days with a small set of doable
stability goals to stay focused on. If there is enough interest, being
together in the mentioned locations would be great; otherwise remote would
be fine as long as people can stay focused and communicate through
suggested ideas like TeamSpeak or Google Hangout.


What do you guys think? Location? Other stability concerns to add to the list?

[1] - https://bugs.launchpad.net/cinder/+bug/1255622
[2] - https://bugs.launchpad.net/cinder/+bug/1246200


-Mike Perez






--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [openstack-user] About installing CLI for swift on windows

2014-02-03 Thread Mayur Patil
Hi Dmitry,

 Thanks for the detail reply.

 I am asking why because *except swift* all components are properly
installed and ready to use.

 So I need a single hint that could solve my problem. And yes, in case of
Linux, no doubt that it has to work.

 Waiting for favourable guidance,

 Thanks !!

--
Cheers, Mayur.

Contact:
https://www.facebook.com/mayurram  https://twitter.com/RamMayur
https://plus.google.com/u/0/+MayurPatil/about
http://in.linkedin.com/pub/mayur-patil/35/154/b8b/
https://stackoverflow.com/users/1528044/rammayur
https://myspace.com/mayurram  https://github.com/ramlaxman
https://www.ohloh.net/accounts/mayurp7



On Mon, Feb 3, 2014 at 2:58 PM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

 Mayur,

 An observation: the stack trace shows that pkg_resources failed to import
 the 'shell' module, while swift's entry point is
 'swiftclient.shell:main'. It is as if pkg_resources tries to import 'shell'
 instead of 'swiftclient.shell' for some reason. I can't give you more hints
 as I am not accustomed to developing under Windows. One thing I can suggest
 is to use the swift client under Linux if you can. The client works in
 Linux out of the box.
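Dmitry's observation can be illustrated with pkg_resources itself: the module an entry point tries to import is determined entirely by the spec string, so a mangled spec of 'shell:main' instead of 'swiftclient.shell:main' would produce exactly this kind of ImportError. This is a sketch; `module_of` is a throwaway helper, not part of swiftclient:

```python
import pkg_resources

def module_of(entry_point_str):
    # Parse a console_scripts-style entry point spec and report which
    # module pkg_resources would try to import for it.
    ep = pkg_resources.EntryPoint.parse(entry_point_str)
    return ep.module_name

# The correct spec imports 'swiftclient.shell'; a spec of just
# 'shell:main' would try to import a top-level 'shell' module,
# matching the failure in the stack trace above.
```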

 Dmitry


 2014-02-01 Mayur Patil ram.nath241...@gmail.com:

 Hi All,

 I am trying to install Client Library for OpenStack Object Storage
 API i.e. python-swiftclient.

 I have tried each method below but all fail:

 1)   pip install python-swiftclient
 2)   pip install url_of_package
 3)   easy_install python-swiftclient

 I also configured manually setup.cfg in python-swiftclient as follows:

 http://fpaste.org/73621/

 in which I have removed the scripts = bin/swift line.

 The configuration all seems to be OK: http://fpaste.org/73625/

 It also installed swift.exe into C:\Python27\Scripts.

 But when I check swift --version, it still gives an error:
 http://fpaste.org/73627/

 I have also googled but it did not help; stuck at this!

 Seeking for guidance,

 Thanks !!

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] bp: nova-ecu-support

2014-02-03 Thread Kenichi Oomichi
Hi,

There is a blueprint about ECU support[1], which I find an interesting idea,
so I'd like to hear comments about it.

After a production environment goes live, operators need to add
compute nodes before capacity is exhausted.
In that scenario, they want to add whatever machines are most
cost-efficient at the time, so production environments end up consisting
of compute nodes with different performance. Operators still hope to
provide virtual machines with the same performance on nodes of different
performance when the same flavor is specified.

Now nova contains flavor_extraspecs[2] which can customize the cpu
bandwidth for each flavor:
 # nova flavor-key m1.low_cpu set quota:cpu_quota=1
 # nova flavor-key m1.low_cpu set quota:cpu_period=2

However, this feature cannot provide the same VM performance on nodes
of different performance, because it applies the same ratio
(cpu_quota/cpu_period) everywhere, even when the compute nodes'
performance differs. So it is necessary to apply a different ratio
based on each compute node's performance.
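The arithmetic behind that point can be sketched as follows. This is illustrative only: the ECU-style per-core ratings and the helper function are hypothetical, not the blueprint's actual design:

```python
def cpu_quota_for(flavor_ecus, host_ecus_per_core, cpu_period=100000):
    """Cap a VM at a fraction of a host core so that the same flavor
    gets comparable absolute performance on hosts whose cores have
    different ECU-style ratings.

    With a fixed ratio (as flavor extra specs allow today), a VM on a
    fast host would get proportionally more absolute performance than
    the same flavor on a slow host; scaling the quota by the host
    rating evens that out.
    """
    fraction = float(flavor_ecus) / host_ecus_per_core
    # Never allocate more than one full core in this simple sketch.
    return int(cpu_period * min(fraction, 1.0))
```

For example, a 1-ECU flavor would get half a core on a 2-ECU/core host but only a quarter of a core on a 4-ECU/core host, yielding roughly the same absolute performance.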

Amazon EC2 has ECU[3] already for implementing this, and the blueprint
[1] is also for it.

Any thoughts?


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/nova/+spec/nova-ecu-support
[2]: 
http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#customize-flavors
[3]: http://aws.amazon.com/ec2/faqs/  Q: What is a “EC2 Compute Unit” and why 
did you introduce it?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp: nova-ecu-support

2014-02-03 Thread Alex Glikson
Similar capabilities are being introduced here: 
https://review.openstack.org/#/c/61839/

Regards,
Alex




From:   Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   03/02/2014 11:48 AM
Subject:[openstack-dev] [Nova] bp: nova-ecu-support



Hi,

There is a blueprint ECU[1], and that is an interesting idea for me.
so I'd like to know the comments about ECU idea.

After production environments start, the operators will need to add
compute nodes before exhausting the capacity.
On the scenario, they'd like to add cost-efficient machines as the
compute node at the time. So the production environments will consist
of different performance compute nodes. Also they hope to provide
the same performance virtual machines on different performance nodes
if specifying the same flavor.

Now nova contains flavor_extraspecs[2] which can customize the cpu
bandwidth for each flavor:
 # nova flavor-key m1.low_cpu set quota:cpu_quota=1
 # nova flavor-key m1.low_cpu set quota:cpu_period=2

However, this feature can not provide the same vm performance on
different performance node, because this arranges the vm performance
with the same ratio(cpu_quota/cpu_period) only even if the compute
node performances are different. So it is necessary to arrange the
different ratio based on each compute node performance.

Amazon EC2 has ECU[3] already for implementing this, and the blueprint
[1] is also for it.

Any thoughts?


Thanks
Ken'ichi Ohmichi

---
[1]: https://blueprints.launchpad.net/nova/+spec/nova-ecu-support
[2]: 
http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#customize-flavors

[3]: http://aws.amazon.com/ec2/faqs/  Q: What is a “EC2 Compute Unit” 
and why did you introduce it?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Supporting WebOb 1.3

2014-02-03 Thread Thomas Goirand
Hi,

Sorry if this has already been discussed (traffic is high on this list).

I've just checked, and our global-requirements.txt still has:
WebOb>=1.2.3,<1.3

Problem: both Sid and Trusty have version 1.3.

What package is holding back the newer version of WebOb? Can we work toward
supporting version 1.3? I haven't seen issues building (most if not all)
core packages with WebOb 1.3. Did I miss the obvious?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-03 Thread Alessandro Pilotti

On 02 Feb 2014, at 23:10 , Alessandro Pilotti apilo...@cloudbasesolutions.com 
wrote:

 
 On 02 Feb 2014, at 23:01 , Michael Still mi...@stillhq.com wrote:
 
 It seems like there were a lot of failing Hyper-V CI jobs for nova
 yesterday. Is there some systemic problem or did all those patches
 really fail? An example: https://review.openstack.org/#/c/66291/
 
 
 
 We’re aware of this issue and are looking into it. The issue happens in devstack 
 before the Hyper-V compute nodes are added and before tempest starts.
 
 I’ll post an update as soon as we get it sorted out.
 

Fixed the issue. The reason was related to the following devstack patch, which 
now binds by default the keystone
private port 35357 to $SERVICE_HOST. Our config was erroneously using the 
private port instead of the public one
in OS_AUTH_URL to connect to 127.0.0.1, hence the sudden failures. 

https://github.com/openstack-dev/devstack/commit/6c57fbab26e40af5c5b19b46fb3da39341f34dab
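In other words, the CI config needed to point OS_AUTH_URL at keystone's public endpoint rather than the admin one. A sketch using keystone's default ports; the CI's actual values are not shown in this thread:

```shell
# Admin/private endpoint, now bound to $SERVICE_HOST by devstack:
#   http://$SERVICE_HOST:35357/v2.0
# Public endpoint, which OS_AUTH_URL should use when talking to 127.0.0.1:
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0
```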

Alessandro


 
 Thanks,
 
 Alessandro
 
 
 Michael
 
 On Mon, Feb 3, 2014 at 7:21 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 Hi Michael,
 
 
 On 02 Feb 2014, at 06:19 , Michael Still mi...@stillhq.com wrote:
 
 I saw another case of the build succeeded message for a failure just
 now... https://review.openstack.org/#/c/59101/ has a rebase failure
 but was marked as successful.
 
 Is this another case of hyper-v not being voting and therefore being a
 bit confusing? The text of the comment clearly indicates this is a
 failure at least.
 
 
 Yes, all the Hyper-V CI messages start with build succeeded, while the 
 next lines show the actual job result.
 I asked on infra about how to get rid of that message, but from what I got 
 from the chat it is not possible as long as the CI is non-voting, 
 independently of the return status of the individual jobs.
 
 Alessandro
 
 
 Thanks,
 Michael
 
 On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com 
 wrote:
 
 
 
 On 1/24/2014 3:41 PM, Peter Pouliot wrote:
 Hello OpenStack Community,
 
 I am excited at this opportunity to make the community aware that the
 Hyper-V CI infrastructure
 
 is now up and running.  Let's first start with some housekeeping
 details.  Our Tempest logs are
 
 publicly available here: http://64.119.130.115. You will see them show
 up in any
 
 Nova Gerrit commit from this moment on.
 snip
 
 So now some questions. :)
 
 I saw this failed on one of my nova patches [1].  It says the build 
 succeeded but that the tests failed.  I talked with Alessandro about 
 this yesterday and he said that's working as designed, something with 
 how the scoring works with zuul?
 
 I spoke with clarkb on infra, since we were also very puzzled by this 
 behaviour. I've been told that when the job is non-voting, it's always 
 reported as succeeded, which makes sense although it is slightly misleading.
 The message in the Gerrit comment clearly states: Test run failed in 
 ..m ..s (non-voting), so this should be fair enough. It'd be great to 
 have a way to get rid of the Build succeeded message above.
 
 The problem I'm having is figuring out why it failed.  I looked at the 
 compute logs but didn't find any errors.  Can someone help me figure out 
 what went wrong here?
 
 
 The reason for the failure of this job can be found here:
 
 http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz
 
 Please search for (1054, Unknown column 'instances.locked_by' in 'field 
 list')
 
 In this case the job failed when nova service-list got called to verify 
 whether the compute nodes had been properly added to the devstack 
 instance in the overcloud.
 
 During the weekend we also added a console.log to help simplify 
 debugging, especially in the rare cases in which the job fails before 
 getting to run tempest:
 
 http://64.119.130.115/69047/1/console.log.gz
 
 
 Let me know if this helps in tracking down your issue!
 
 Alessandro
 
 
 [1] https://review.openstack.org/#/c/69047/1
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 
 
 
 
 
 --
 Rackspace Australia
 
 
 
 
 
 
 -- 
 Rackspace Australia
 

Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-03 Thread Swapnil Kulkarni
+1 Remote!

~Swapnil



On Mon, Feb 3, 2014 at 2:41 PM, Flavio Percoco fla...@redhat.com wrote:

 On 01/02/14 00:06 -0800, Mike Perez wrote:

 Folks,

 I would love to get people together who are interested in Cinder
 stability to really dedicate a few days. This is not for additional
 features, but rather finishing what we already have and really getting
 those in a good shape before the end of the release.

 When: Feb 24-26
 Where: San Francisco (DreamHost Office can host), Colorado, remote?

 Some ideas that come to mind:

 - Cleanup/complete volume retype
 - Cleanup/complete volume migration [1][2]
 - Other ideas that come from this thread.


 As an occasional contributor to Cinder, I think it would benefit a lot
  if new tests were added. There are some areas that are lacking
  tests - AFAICT - and other tests that seem to be inconsistent with the
 rest of the test suite. This has caused me some frustrations in the
 past. I don't have good examples handy but if I have some free time
 between the 24th and 26th, I'll look into that and raise them in the
 IRC channel.

  That said, I think folks participating should also look forward to adding
  more tests during those hacking days. Ensuring that features (not just
 methods and functions ) are fully covered is important.

 Great initiative Mike!

 Cheers,
 flaper


   I can't stress the dedicated part enough. I think if we have some folks
  from core and anyone interested in contributing and staying focused, we can
  really get a lot done in a few days with a small set of doable stability
  goals to stay focused on. If there is enough interest, being together in
 the mentioned locations would be great, otherwise remote would be fine as
 long as people can stay focused and communicate through suggested ideas
 like team speak or google hangout.

 What do you guys think? Location? Other stability concerns to add to the
 list?

 [1] - https://bugs.launchpad.net/cinder/+bug/1255622
 [2] - https://bugs.launchpad.net/cinder/+bug/1246200


 -Mike Perez





 --
 @flaper87
 Flavio Percoco



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-03 Thread Tomas Sedovic

My apologies for firing this off and then hiding under the FOSDEM rock.

In light of the points raised by Devananda and Robert, I no longer think 
fiddling with the scheduler is the way to go.


Note this was never intended to break/confuse all TripleO users -- I 
considered it a cleaner equivalent to entering incorrect HW specs (i.e. 
instead of doing that you would switch to this other filter in nova.conf).


Regardless, I stand corrected on the distinction between heterogeneous 
hardware all the way and having a flavour per service definition. That 
was a very good point to raise.


I'm fine with both approaches.

So yeah, let's work towards having a single Node Profile (flavor) 
associated with each Deployment Role (pages 12 and 13 of the latest 
mockups[1]), optionally starting by requiring all the Node Profiles to 
be equal.


Once that's working fine, we can look into the harder case of having 
multiple Node Profiles within a Deployment Role.


Is everyone comfortable with that?

Tomas

[1]: 
http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-27_tripleo-ui-icehouse.pdf


On 03/02/14 00:21, Robert Collins wrote:

On 3 February 2014 08:45, Jaromir Coufal jcou...@redhat.com wrote:





However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document. . .
 - **unsupported** means of demoing Tuskar (set node attributes to
match flavors, hack
   the scheduler, etc)


Why are people calling it 'hack'? It's an additional filter to
nova-scheduler...?


It doesn't properly support the use case; it's extra code to write,
test, and configure that is precisely equivalent to mis-registering
nodes.


 - our goals of supporting heterogeneous nodes for the J-release.


I wouldn't talk about J-release. I would talk about next iteration or next
step. Nobody said that we are not able to make it in I-release.


+1




Does this seem reasonable to everyone?

Mainn



Well +1 for a) and it's documentation.

However me and Robert, we look to have different opinions on what
'homogeneous' means in our context. I think we should clarify that.


So I think my point is more this:
  - either this iteration is entirely limited to homogeneous hardware,
in which case, document it, not workarounds or custom schedulers etc.
  - or it isn't limited, in which case we should consider the options:
- flavor per service definition
- custom scheduler
- register nodes wrongly

-Rob




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting WebOb 1.3

2014-02-03 Thread Monty Taylor

On 02/03/2014 11:45 AM, Thomas Goirand wrote:

Hi,

Sorry if this has been already discussed (traffic is high in this list).

I've just checked, and our global-requirements.txt still has:
WebOb>=1.2.3,<1.3

Problem: both Sid and Trusty have version 1.3.

What package is holding the newer version of webob? Can we work toward
supporting version 1.3? I haven't seen issues building (most if not all)
core packages using WebOb 1.3. Did I miss the obvious?


It might be worth trying to patch global-requirements.txt and submit it 
as a change - see what falls out of the gate. That might give you some 
insight into where the pin might be needed. It might also just be 
historical protection.
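Monty's suggestion amounts to something like the following. This is a sketch of the 2014-era workflow, and the exact cap to propose is a judgment call, not stated in the thread:

```shell
# Bump the WebOb cap in a throwaway branch and let the gate report
# what breaks.
git clone https://git.openstack.org/openstack/requirements
cd requirements
sed -i 's/WebOb>=1.2.3,<1.3/WebOb>=1.2.3,<1.4/' global-requirements.txt
git commit -am "Allow WebOb 1.3"
git review   # submit to Gerrit; the check jobs will show what falls out
```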



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] massive number of new errors in logs with oslo.messaging

2014-02-03 Thread Monty Taylor

On 02/02/2014 08:33 PM, Jay Pipes wrote:

On Sun, 2014-02-02 at 07:13 -0500, Sean Dague wrote:

Just noticed this at the end of a successful run:

http://logs.openstack.org/15/63215/13/check/check-tempest-dsvm-full/2636cae/console.html#_2014-02-02_12_02_44_422

It looks like the merge of oslo.messaging brings a huge amount of false
negative error messages in the logs. Would be good if we didn't ship
icehouse with this state of things.


Agreed.

And the error messages, which look like this:

Returning exception "Unexpected task state: expecting [u'scheduling',
None] but the actual state is deleting" to caller

don't make sense -- at least in the English language.

What does "the actual state is deleting to caller" mean?


I'm going to try to translate, with no real context:

I'm trying to return a state to the caller. While trying to do that, 
I've encountered an unexpected task state. I was expecting the state to 
be [u'scheduling', None] but instead I seem to have discovered that the 
state was in fact deleting. I'm freaked out by this, so instead of 
returning a task state to the caller, I'm going to return an exception.


If the above is correct, I'd say that this is pretty terrible. We 
probably don't need to log that we're returning a state. That's crazy 
detailed. If we _do_ encounter an exception, we probably shouldn't 
encapsulate the exception state and try to return it as the state text.


That said - I actually have no idea what's going on here. :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-03 Thread Monty Taylor

On 01/29/2014 11:10 PM, Matt Riedemann wrote:



On Monday, January 27, 2014 7:17:27 AM, Alessandro Pilotti wrote:

On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:




On 1/24/2014 3:41 PM, Peter Pouliot wrote:

Hello OpenStack Community,

I am excited at this opportunity to make the community aware that the
Hyper-V CI infrastructure

is now up and running.  Let’s first start with some housekeeping
details.  Our Tempest logs are

publicly available here: http://64.119.130.115. You will see them
show
up in any

Nova Gerrit commit from this moment on.
snip


So now some questions. :)

I saw this failed on one of my nova patches [1].  It says the build
succeeded but that the tests failed.  I talked with Alessandro about
this yesterday and he said that's working as designed, something with
how the scoring works with zuul?


I spoke with clarkb on infra, since we were also very puzzled by this
behaviour. I’ve been told that when the job is non-voting, it’s always
reported as succeeded, which makes sense although it is slightly misleading.
The message in the Gerrit comment clearly states: Test run failed
in ..m ..s (non-voting)”, so this should be fair enough. It’d be great
to have a way to get rid of the “Build succeeded” message above.


The problem I'm having is figuring out why it failed.  I looked at
the compute logs but didn't find any errors.  Can someone help me
figure out what went wrong here?



The reason for the failure of this job can be found here:

http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz

Please search for (1054, Unknown column 'instances.locked_by' in
'field list')

In this case the job failed when “nova service-list” got called to
verify whether the compute nodes had been properly added to the
devstack instance in the overcloud.

During the weekend we added also a console.log to help in simplifying
debugging, especially in the rare cases in which the job fails before
getting to run tempest:

http://64.119.130.115/69047/1/console.log.gz


Let me know if this helps in tracking down your issue!

Alessandro



[1] https://review.openstack.org/#/c/69047/1

--

Thanks,

Matt Riedemann








Alex, thanks, figured it out - and yes, the console log is helpful. The
failure was a real bug in my patch, which changed how the 180 migration
did something that later broke another migration running against
your MySQL backend - so nice catch.


WOOT! I can't even tell you how excited I am that we're catching real 
bugs here. (honestly, if we weren't then all of this extra work we've 
put people through would be kinda depressing)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-02-03 Thread Doug Hellmann
On Mon, Jan 27, 2014 at 12:42 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:

 We have a blueprint open for separating translated log messages into
 different domains so the translation team can prioritize them differently
 (focusing on errors and warnings before debug messages, for example) [1].
 Some concerns were raised related to the review [2], and I would like to
 address those in this thread and see if we can reach consensus about how to
 proceed.

 The implementation in [2] provides a set of new marker functions similar
 to _(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These
 would be used in conjunction with _(), and reserved for log messages.
 Exceptions, API messages, and other user-facing messages all would still be
 marked for translation with _() and would (I assume) receive the highest
 priority work from the translation team.

 When the string extraction CI job is updated, we will have one main
 catalog for each app or library, and additional catalogs for the log
 levels. Those show up in transifex separately, but will be named in a way
 that they are obviously related. Each translation team will be able to
 decide, based on the requirements of their users, how to set priorities for
 translating the different catalogs.

 Existing strings being sent to the log and marked with _() will be removed
 from the main catalog and moved to the appropriate log-level-specific
 catalog when their marker function is changed. My understanding is that
 transifex is smart enough to recognize the same string from more than one
 source, and to suggest previous translations when it sees the same text.
 This should make it easier for the translation teams to catch up by
 reusing the translations they have already done, in the new catalogs.

 One concern that was raised was the need to mark all of the log messages
 by hand. I investigated using extraction patterns like LOG.debug( and
 LOG.info(, but because of the way the translation actually works
 internally we cannot do that. There are a few related reasons.

 In other applications, the function _() translates a string at the point
 where it is invoked, and returns a new string object. OpenStack has a
 requirement that messages be translated multiple times, whether in the API
 or the LOG (there is already support for logging in more than one language,
 to different log files). This requirement means we delay the translation
 operation until right before the string is output, at which time we know
 the target language. We could update the log functions to create Message
 objects dynamically, except...

 Each app or library that uses the translation code will need its own
 domain for the message catalogs. We get around that right now by not
 translating many messages from the libraries, but that's obviously not what
 we want long term (we at least want exceptions translated). If we had a
 special version of a logger in oslo.log that knew how to create Message
 objects for the format strings used in logging (the first argument to
 LOG.debug for example), it would also have to know what translation domain
 to use so the proper catalog could be loaded. The wrapper functions defined
 in the patch [2] include this information, and can be updated to be
 application or library specific when oslo.log eventually becomes its own
 library.
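
 The delayed-translation requirement can be sketched roughly like this (the
 class and attribute names here are illustrative assumptions, not the actual
 oslo implementation): the object records its msgid, domain, and parameters,
 and is only rendered into a concrete language right before output:

```python
import gettext

class Message(str):
    """Sketch: a string that remembers enough to be translated later,
    once per target language, rather than at the point _() is called."""
    def __new__(cls, msgid, domain='nova'):
        obj = super(Message, cls).__new__(cls, msgid)
        obj.msgid = msgid
        obj.domain = domain
        obj.params = None
        return obj

    def __mod__(self, params):
        # Keep the parameters so the *translated* format string can be
        # interpolated later, per language.
        new = Message(self.msgid, self.domain)
        new.params = params
        return new

    def translate(self, language):
        # Load the catalog for this message's domain and the target
        # language at output time.
        catalog = gettext.translation(
            self.domain, languages=[language], fallback=True)
        text = catalog.gettext(self.msgid)
        return text % self.params if self.params is not None else text

msg = Message("Unexpected task state: %s") % "deleting"
# Each log handler can later call msg.translate(lang) with its own
# configured language to write a per-language log file.
```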

 Further, as part of moving the logging code from oslo-incubator to
 oslo.log, and making our logging something we can use from other OpenStack
 libraries, we are trying to change the implementation of the logging code
 so it is no longer necessary to create loggers with our special wrapper
 function. That would mean that oslo.log will be a library for *configuring*
 logging, but the actual log calls can be handled with Python's standard
 library, eliminating a dependency between new libraries and oslo.log. (This
 is a longer, and separate, discussion, but I mention it here as background.
 We don't want to change the API of the logger in oslo.log because we don't
 want to be using it directly in the first place.)

 Another concern raised was the use of a prefix _L for these functions,
 since it ties the priority definitions to logs. I chose that prefix as an
 explicit indication that these *are* just for logs. I am not associating any
 actual priority with them. The translators want us to move the log messages
 out of the main catalog. Having them all in separate catalogs is a
 refinement that gives them what they want -- some translators don't care
 about log messages at all, some only care about errors, etc. We decided
 that the translators should set priorities, and we would make that possible
 by separating the catalogs into logical groups. Everything marked with _()
 will still go into the main catalog, but beyond that it isn't up to the
 developers to indicate priority for translations.

 The alternative approach of using babel translator comments would, under
 other circumstances, help because each message 

[openstack-dev] Fw: [Openstack-dev] [Fuel] [Oslo] Add APP-NAME (RFC5424) for Oslo syslog logging

2014-02-03 Thread Bogdan Dobrelya
On 12/20/2013 08:21 PM, Bogdan Dobrelya wrote:
 *Preamble*
 Hi stackers, I was trying to implement correct APP-NAME tags for remote
 logging in Fuel for Openstack, and faced the
 https://bugs.launchpad.net/nova/+bug/904307 issue. There are no logging
 options in Python 2.6/2.7 to address this APP-NAME in logging formats or
 configs (log_format, log_config(_append)).
 
 Just look at the log file names, and you will understand me:
 cinder-cinder.api.extensions.log
 cinder-cinder.db.sqlalchemy.session.log
 cinder-cinder.log
 cinder-cinder.openstack.common.rpc.common.log
 cinder-eventlet.wsgi.server.log
 cinder-keystoneclient.middleware.auth_token.log
 glance-eventlet.wsgi.server.log
 glance-glance.api.middleware.cache.log
 glance-glance.api.middleware.cache_manage.log
 glance-glance.image_cache.log
 glance-keystoneclient.middleware.auth_token.log
 keystone-root.log
 nova-keystoneclient.middleware.auth_token.log
 nova-nova.api.openstack.compute.extensions.log
 nova-nova.api.openstack.extensions.log
 nova-nova.ec2.wsgi.server.log
 nova-nova.log
 nova-nova.metadata.wsgi.server.log
 nova-nova.network.driver.log
 nova-nova.osapi_compute.wsgi.server.log
 nova-nova.S3.log
 quantum-eventlet.wsgi.server.log
 quantum-keystoneclient.middleware.auth_token.log
 quantum-quantum.api.extensions.log
 quantum-quantum.manager.log
 quantum-quantum.openstack.common.rpc.amqp.log
 quantum-quantum.plugins.openvswitch.ovs_quantum_plugin.log
 
 But I actually want to see something like this:
 cinder-api.log
 cinder-volume.log
 glance-api.log
 glance-manage.log
 glance-registry.log
 keystone-all.log
 nova-api.log
 nova-conductor.log
 nova-consoleauth.log
 nova-objectstore.log
 nova-scheduler.log
 ...and so on.
 
 In other words, logging should honor RFC 3164 and RFC 5424; here are some quotes:
 The MSG part has two fields known as the TAG field and the CONTENT
 field. The value in the TAG field will be the name of the program or
 process that generated the message. The CONTENT contains the details of
 the message...
 The APP-NAME field SHOULD identify the device or application that
 originated the message...
 
 I see two solutions for this issue.
 
 *Solution 1*
 The one of possible solutions is to use new key for log_format (i.e.
 %(binary_name)s) to extract application/service name for log records.
 The implementation could be like patch #4:
 https://review.openstack.org/#/c/63094/4
 And the log_format could be like this:
 log_format=%(asctime)s %(binary_name)s %(levelname)s: %(name)s: %(message)s
 
 The patch is applicable to other OpenStack services which have not moved
 to Oslo yet.
 I tested it with nova services, and all services can start with a
 log_format using %(binary_name)s, except nova-api. Looks like
 /keystoneclient/middleware/auth_token.py is unhappy with this patch, see
 the trace http://paste.openstack.org/show/55519/
 
 *Solution 2*
 The other and only option I can suggest, is to backport ‘ident’ from
 python 3.3, see http://hg.python.org/cpython/rev/6baa90fa2b6d
 The implementation could be like this:
 https://review.openstack.org/#/c/63094
 To ensure we will have APP-NAME in message we can set use_syslog = true
 and check the results.
 If we’re using log_config_append, the formatters and handlers could be
 like this:
 [formatter_normal]
 format = %(levelname)s: %(message)s
 [handler_production]
 class = openstack.common.log.RFCSysLogHandler
 level = INFO
 formatter = normal
 args = ('/dev/log', handlers.SysLogHandler.LOG_LOCAL6)
 
 The patch is also applicable to other OpenStack services which have not
 moved to Oslo yet.
 For syslog logging, the application/service/process name (aka APP-NAME,
 see RFC5424) would be added before the MSG part, right after it has been
 formatted, and there is no need to use any special log_format settings
 as well.
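 
 For illustration, the effect can be sketched with a plain
 logging.Formatter that prepends the binary name (this is a hedged
 sketch, not the actual RFCSysLogHandler implementation; the class
 and attribute names here are assumptions):

```python
import logging
import os
import sys

class AppNameFormatter(logging.Formatter):
    """Illustrative sketch: prepend the process's binary name as the
    RFC 5424 APP-NAME before the formatted MSG part, so syslog lines
    read 'nova-api INFO: ...' instead of a dotted module name."""
    def __init__(self, fmt=None, binary_name=None):
        logging.Formatter.__init__(self, fmt)
        # Default to the running script's name, e.g. 'nova-api'.
        self.binary_name = binary_name or os.path.basename(sys.argv[0])

    def format(self, record):
        return '%s %s' % (self.binary_name,
                          logging.Formatter.format(self, record))

# A formatter like this can be wired into a SysLogHandler the same way
# the [handler_production]/[formatter_normal] sections above do.
```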
 
 *Conclusion*
 I vote for implementing solution 2 for Oslo logging, and for those
 OpenStack services which don't use Oslo for logging yet. That would not
 require any changes outside of the OpenStack modules, and thus looks like a
 good compromise for backporting the 'ident' feature for APP-NAME tags from
 Python 3.3. What do you think?
 

Hi, stackers.
FYI, the patch https://bugs.launchpad.net/oslo/+bug/904307 has been
merged into Oslo-incubator to implement Solution 2 (see above).

According to the docs provided, to honor RFC5424 APP-NAME for syslog
messages, the existing syslog format is DEPRECATED during I and will be
removed in J.

New use_syslog_rfc_format option was introduced:
(Optional) use syslog rfc5424 format for logging. If enabled, will add
APP-NAME (RFC5424) before the MSG part of the syslog message.  The old
format without APP-NAME is deprecated in I, and will be removed in J.

1) Please consider syncing the patch to any affected OpenStack projects
(i.e. all projects with use_syslog option in their configs) to ensure
the application/service name will be present in the syslog messages, if
use_syslog_rfc_format = true.

2) Please consider adding this info to the Syslog section 

Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Jay Pipes
On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
 IMHO, the bit that should really be optimized is the selection of the
 store nodes where the image should be downloaded from. That is,
 selecting the nearest location from the image locations and this is
 something that perhaps should happen in glance-api, not nova.

I disagree. The reason is because glance-api does not know where nova
is. Nova does.

I continue to think that the best performance gains will come from
getting rid of glance-api entirely, putting the block-streaming bits
into a separate Python library, and having Nova and Cinder pull
image/volume bits directly from backend storage instead of going through
the glance middleman.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] massive number of new errors in logs with oslo.messaging

2014-02-03 Thread David Kranz

On 02/03/2014 07:36 AM, Monty Taylor wrote:

On 02/02/2014 08:33 PM, Jay Pipes wrote:

On Sun, 2014-02-02 at 07:13 -0500, Sean Dague wrote:

Just noticed this at the end of a successful run:

http://logs.openstack.org/15/63215/13/check/check-tempest-dsvm-full/2636cae/console.html#_2014-02-02_12_02_44_422 



It looks like the merge of oslo.messaging brings a huge amount of false
negative error messages in the logs. Would be good if we didn't ship
icehouse with this state of things.


Agreed.

And the error messages, which look like this:

Returning exception Unexpected task state: expecting [u'scheduling',
None] but the actual state is deleting to caller

don't make sense -- at least in the English language.

What does the actual state is deleting to caller mean?


I'm going to try to translate, with no real context:

I'm trying to return a state to the caller. While trying to do that, 
I've encountered an unexpected task state. I was expecting the state 
to be [u'scheduling', None'] but instead I seem to have discovered 
that the state was in fact, deleting. I'm freaked out by this, so 
instead of returning a task state to the caller, I'm going to return 
an exception.


If the above is correct, I'd say that this is pretty terrible. We 
probably don't need to log that we're returning a state. That's crazy 
detailed. If we _do_ encounter an exception, we probably shouldn't 
encapsulate the exception state and try to return it as the state text.


That said - I actually have no idea what's going on here. :)
I don't either, but this is why we started failing gate jobs that put 
unexpected errors in the logs even though all tests passed. 
Unfortunately I hear that the log-errors-failing-jobs feature was 
reverted as part of the recent process of restoring the gate to health. 
Is restoring that behavior on the radar screen?


 -David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting WebOb 1.3

2014-02-03 Thread Doug Hellmann
On Mon, Feb 3, 2014 at 7:31 AM, Monty Taylor mord...@inaugust.com wrote:

 On 02/03/2014 11:45 AM, Thomas Goirand wrote:

 Hi,

 Sorry if this has been already discussed (traffic is high in this list).

 I've just checked, and our global-requirements.txt still has:
 WebOb>=1.2.3,<1.3

 Problem: both Sid and Trusty have version 1.3.

 What package is holding the newer version of webob? Can we work toward
 supporting version 1.3? I haven't seen issues building (most if not all)
 core packages using WebOb 1.3. Did I miss the obvious?


 It might be worth trying to patch global-requirements.txt and submit it as
 a change - see what falls out of the gate. That might give you some insight
 into where the pin might be needed. It might also just be historical
 protection.


Yeah, according to 4f81f419a1430b5e44feb299ce061f592064a7dd we were just
playing it safe the last time we updated.

Doug






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-03 Thread Eric Harney
On 02/03/2014 04:11 AM, Flavio Percoco wrote:
 On 01/02/14 00:06 -0800, Mike Perez wrote:
 Folks,

 I would love to get people together who are interested in Cinder
 stability to really dedicate a few days. This is not for additional
 features, but rather finishing what we already have and really getting
 those in a good shape before the end of the release.

 When: Feb 24-26
 Where: San Francisco (DreamHost Office can host), Colorado, remote?

 Some ideas that come to mind:

 - Cleanup/complete volume retype
 - Cleanup/complete volume migration [1][2]
 - Other ideas that come from this thread.

 
 As an occasional contributor to Cinder, I think it would benefit a lot
 if new tests were added. There are some areas that are lacking
 tests - AFAICT - and other tests that seem to be inconsistent with the
 rest of the test suite. This has caused me some frustrations in the
 past. I don't have good examples handy but if I have some free time
 between the 24th and 26th, I'll look into that and raise them in the
 IRC channel.
 

I've gotten the same feeling, and have had some ideas around improving
the LVM and base volume tests to improve structure and coverage that I'd
like to work on.  (Though some of those may have been implemented already.)

 That said, I think folks participating should also look forward to adding
 more tests during those hacking days. Ensuring that features (not just
 methods and functions) are fully covered is important.

This may also fit in nicely with the effort around moving to mock, which
I expect will reveal issues in tests and improve things a good bit as we
pick through them while converting to the new framework.

 
 Great initiative Mike!

Definitely agreed.

 
 Cheers,
 flaper
 
 I can't stress the dedicated part enough. I think if we have some
 folks from core and anyone interested in contributing and staying
 focused, we can really get a lot done in a few days with a small set of
 doable stability
 goals to stay focused on. If there is enough interest, being together
 in the mentioned locations would be great, otherwise remote would be
 fine as long as people can stay focused and communicate through
 suggested ideas like team speak or google hangout.

 What do you guys think? Location? Other stability concerns to add to
 the list?

 [1] - https://bugs.launchpad.net/cinder/+bug/1255622
 [2] - https://bugs.launchpad.net/cinder/+bug/1246200


 -Mike Perez
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] API extention update in order to support multi-databases integration

2014-02-03 Thread Denis Makogon
Good day, OpenStack DBaaS users/contributors.

I'd like to start a topic related to refactoring of API extensions, since
Trove already supports more than one db engine (mysql and redis in
single-instance mode).

At this moment, if contributors decide that one of the integrated
db engines should support users/databases CRUD operations, they will run
into the issue of non-pluggable extensions for those operations. Here
[1] you can see that users/databases CRUD operations go through routes
defined for the mysql database integration.



I would like to suggest a more flexible mechanism for such API extensions:

   1. Extensions should be implemented per datastore manager, such as
      mysql, redis, cassandra, mongodb.

   2. ReST service routes for API extensions should be common for all
      datastores. This means the implementation should be imported/used
      according to a unique attribute; the best option for such an
      attribute is the datastore manager.

   3. Even if a datastore doesn't support user ACLs or storage
      distinction (databases in terms of mysql, keyspaces in terms of
      cassandra), the API extension should still be implemented, with
      each method raising a NotImplementedError with the meaningful
      message "Not supported".


As you can see at [2], the mechanism is implemented according to the rules
given above. So, I would like to hear all your thoughts about that.
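
The three rules above might be sketched roughly like this (class and
attribute names are assumptions for illustration, not the actual Trove
code):

```python
class BaseUserExtension(object):
    """Rule 3: every datastore exposes the extension; unsupported
    operations raise with a meaningful message."""
    def create_user(self, name):
        raise NotImplementedError("Not supported")

    def create_database(self, name):
        raise NotImplementedError("Not supported")

class MySQLUserExtension(BaseUserExtension):
    """Rule 1: a concrete implementation per datastore manager."""
    def create_user(self, name):
        return "created user %s" % name

# Rule 2: one common set of ReST routes; the implementation is looked
# up by the instance's unique datastore-manager attribute.
_IMPLEMENTATIONS = {
    'mysql': MySQLUserExtension,
    'redis': BaseUserExtension,  # no users/databases support yet
}

def get_user_extension(datastore_manager):
    cls = _IMPLEMENTATIONS.get(datastore_manager, BaseUserExtension)
    return cls()
```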

[1]
https://github.com/openstack/trove/blob/master/trove/extensions/routes/mysql.py

[2] https://review.openstack.org/70742


Best regards,

Denis Makogon

Mirantis, Inc.

Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru

dmako...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Flavio Percoco

On 03/02/14 10:13 -0500, Jay Pipes wrote:

On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:

IMHO, the bit that should really be optimized is the selection of the
store nodes where the image should be downloaded from. That is,
selecting the nearest location from the image locations and this is
something that perhaps should happen in glance-api, not nova.


I disagree. The reason is because glance-api does not know where nova
is. Nova does.


Nova doesn't know where glance is either. More info is required in
order to finally do something smart here. Not sure what the best
approach is just yet but as mentioned in my previous email I think
focusing on the stores for now is the thing to do. (As you pointed out
below too).



I continue to think that the best performance gains will come from
getting rid of glance-api entirely, putting the block-streaming bits
into a separate Python library, and having Nova and Cinder pull
image/volume bits directly from backend storage instead of going through
the glance middleman.



This is exactly what we're doing by pulling glance.store into its own
library. I'm working on this myself. We are not completely getting rid
of glance-api but we're working on not depending on it to get the
image data.

Cheers,
flaper



Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 3 compatibility

2014-02-03 Thread Sylvain Bauza
Hi,

I was at the FOSDEM event this week-end and some interesting talks about
asyncio raised my interest in this framework as a replacement for eventlet.

Although there is an experimental port of asyncio for Python 2.6 named
trollius [1], I think it would be a good move toward having Python 3 support.

I know there were some regular meetings previously, but it seems the
interest decreased. Would it make sense to resume this effort and maybe do
a quick inventory of what's missing for supporting Python 3.3?

In particular, which external requirements are still missing Py3
compatibility and what is the status for the Oslo incubated libraries ?

Thanks,
-Sylvain

[1] https://pypi.python.org/pypi/trollius/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-02-03 Thread Davanum Srinivas
Hi Vipin, Doug,

I've followed instructions from Doug and this page [1]. Will give
Infra folks a bit of time and then ping them in a day or so.

Git Repo on Github : https://github.com/dims/oslo.vmware
Request to create Gerrit Groups :
https://bugs.launchpad.net/openstack-ci/+bug/1275817
Review request to create stackforge repo and jenkins jobs :
https://review.openstack.org/#/c/70761/

-- dims

[1] http://ci.openstack.org/stackforge.html

On Fri, Jan 31, 2014 at 8:51 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 Thanks, Vipin,

 Dims is going work on setting up your repository and help you configure the
 review team. Give us a few days to get the details sorted out.

 Doug


 On Fri, Jan 31, 2014 at 7:39 AM, Vipin Balachandran
 vbalachand...@vmware.com wrote:

 The current API is stable since this is used by nova and cinder for the
 last two releases. Yes, I can act as the maintainer.



 Here is the list of reviewers:

 Arnaud Legendre arnaud...@gmail.com

 Davanum Srinivas (dims) dava...@gmail.com

 garyk gkot...@vmware.com

 Kartik Bommepally kbommepa...@vmware.com

 Sabari Murugesan smuruge...@vmware.com

 Shawn Hartsock harts...@acm.org

 Subbu subramanian.neelakan...@gmail.com

 Vui Lam v...@vmware.com



 From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 Sent: Friday, January 31, 2014 4:22 AM
 To: Vipin Balachandran
 Cc: Donald Stufft; OpenStack Development Mailing List (not for usage
 questions)


 Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
 straight to oslo.vmware





 On Thu, Jan 30, 2014 at 12:38 PM, Vipin Balachandran
 vbalachand...@vmware.com wrote:

 This library is highly specific to VMware drivers in OpenStack and not a
 generic VMware API client. As Doug mentioned, this library won't be useful
 outside OpenStack. Also, it has some dependencies on openstack.common code
 as well. Therefore it makes sense to make this code part of Oslo.



 I think we have consensus that, assuming you are committing to API
 stability, this set of code does not need to go through the incubator before
 becoming a library. How stable is the current API?



 If it stable and is not going to be useful to anyone outside of OpenStack,
 we can create an oslo.vmware library for it. I can start working with -infra
 next week to set up the repository.



 We will need someone on your team to be designated as the lead maintainer,
 to coordinate with the Oslo PTL for release management issues and bug
 triage. Is that you, Vipin?



 We will also need to have a set of reviewers for the new repository. I'll
 add oslo-core, but it will be necessary for a few people familiar with the
 code to also be included. If you have anyone from nova or cinder who should
 be a reviewer, we can add them, too. Please send me a list of names and the
 email addresses used in gerrit so I can add them to the reviewer list when
 the repository is created.



 Doug







 By the way, a work in progress review has been posted for the VMware
 cinder driver integration with the OSLO common code
 (https://review.openstack.org/#/c/70108/). The nova integration is currently
 under progress.



 Thanks,

 Vipin



 From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
 Sent: Wednesday, January 29, 2014 4:06 AM
 To: Donald Stufft
 Cc: OpenStack Development Mailing List (not for usage questions); Vipin
 Balachandran
 Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
 straight to oslo.vmware







 On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


 On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

  On Tue, Jan 28 2014, Doug Hellmann wrote:
 
  There are several reviews related to adding VMware interface code to
  the
  oslo-incubator so it can be shared among projects (start at
  https://review.openstack.org/#/c/65075/7 if you want to look at the
  code).
 
  I expect this code to be fairly stand-alone, so I wonder if we would be
  better off creating an oslo.vmware library from the beginning, instead
  of
  bringing it through the incubator.
 
  Thoughts?
 
  This sounds like a good idea, but it doesn't look OpenStack specific, so
  maybe building a non-oslo library would be better.
 
  Let's not zope it! :)

 +1 on not making it an oslo library.



 Given the number of issues we've seen with stackforge libs in the gate,
 I've changed my default stance on this point.



 It's not clear from the code whether Vipin et al expect this library to be
 useful for anyone not working with both OpenStack and VMware. Either way, I
 anticipate having the library under the symmetric gating rules and managed
 by the one of the OpenStack teams (oslo, nova, cinder?) and VMware
 contributors should make life easier in the long run.



 As far as the actual name goes, I'm not set on oslo.vmware it was just a
 convenient name for the conversation.



 Doug






 

[openstack-dev] [Solum] Solum database schema modification proposal

2014-02-03 Thread Paul Montgomery
Solum community,

I notice that we are using String(36) UUID values in the database schema as 
primary key for many new tables that we are creating.  For example:
https://review.openstack.org/#/c/68328/10/solum/objects/sqlalchemy/application.py

Proposal: Add an int or bigint ID as the primary key, instead of UUID (the UUID 
field remains if needed), to improve database efficiency.

In my experience (I briefly pinged a DBA to verify), using a relatively long 
field as a primary key will increase resource utilization and reduce 
throughput.  This will become pronounced when the database no longer fits 
into memory, which would likely characterize any medium-large Solum 
installation.  This proposal would relatively painlessly improve database 
efficiency before a database schema change becomes difficult (many pull 
requests are in flight right now for schema).

In order to prevent the auto-incrementing ID from leaking usage information 
about the system, I would recommend using the integer-based ID field internally 
within Solum for efficiency and not exposing this ID field to users.  Users 
would only see UUID or non-ID values to prevent Solum metadata from leaking.
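
A minimal sketch of the proposal using stdlib sqlite3 (the table and column 
names are illustrative, not Solum's actual schema): the integer primary key 
stays internal, while callers only ever receive the UUID:

```python
import sqlite3
import uuid

# Internal integer PK for index/join efficiency; the 36-char UUID is
# kept as a unique secondary key for the API layer.
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE application (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- internal only
        uuid CHAR(36) NOT NULL UNIQUE,           -- exposed to users
        name VARCHAR(100)
    )
""")

def create_application(name):
    app_uuid = str(uuid.uuid4())
    conn.execute("INSERT INTO application (uuid, name) VALUES (?, ?)",
                 (app_uuid, name))
    conn.commit()
    return app_uuid  # callers never see the integer id

def get_application(app_uuid):
    row = conn.execute(
        "SELECT uuid, name FROM application WHERE uuid = ?",
        (app_uuid,)).fetchone()
    return {'uuid': row[0], 'name': row[1]} if row else None
```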

Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Issues in running tests

2014-02-03 Thread Paul Czarkowski
If you're running on a machine with devstack these dependencies *should* 
already be met.

If you like to work on VMs on your local box I have build a Vagrant dev 
environment that should take care of your dependencies here - 
https://github.com/rackerlabs/vagrant-solum-dev

I think it's documented pretty well,  but if you want to use it and have any 
issues, reach out to me and I'll help.


From: Rajdeep Dua dua_rajd...@yahoo.com
Reply-To: Rajdeep Dua dua_rajd...@yahoo.com, OpenStack Development
Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: Sunday, February 2, 2014 10:46 AM
To: Julien Vey vey.jul...@gmail.com, OpenStack Development Mailing List
(not for usage questions) openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [solum] Issues in running tests

Thanks that worked


On Sunday, February 2, 2014 7:17 PM, Julien Vey 
vey.jul...@gmail.com wrote:
Hi Rajdeep,

We just updated the documentation
(https://review.openstack.org/#/c/67590/1/CONTRIBUTING.rst) recently
with some necessary packages to install: libxml2-dev and libxslt-dev.
You just need to install those 2 packages.
If you are on Ubuntu, see 
http://stackoverflow.com/questions/6504810/how-to-install-lxml-on-ubuntu/6504860#6504860

Julien


2014-02-02 Rajdeep Dua dua_rajd...@yahoo.com:
Hi,
I am facing some errors running tests in a fresh installation of solum

$tox -e py27



gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall 
-Wstrict-prototypes -fPIC 
-I/home/hadoop/work/openstack/solum-2/solum/.tox/py27/build/lxml/src/lxml/includes
 -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o 
build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w

In file included from src/lxml/lxml.etree.c:340:0:

/home/hadoop/work/openstack/solum-2/solum/.tox/py27/build/lxml/src/lxml/includes/etree_defs.h:9:31:
 fatal error: libxml/xmlversion.h: No such file or directory

compilation terminated.

error: command 'gcc' failed with exit status 1



Thanks
Rajdeep

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Lang-pack working group meeting cancelled today due to freenode DOS, will be rescheduled

2014-02-03 Thread Clayton Coleman
Maybe wednesday at noon?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Zhi Yan Liu
I have a related BP:
https://blueprints.launchpad.net/glance/+spec/image-location-selection-strategy

IMO, as I mentioned in its description, it can be applied on both the
glance and consumer (glance client) sides: inside glance, it can be
used for image-download handling and direct_url selection logic; on
the consumer side, like Nova, it can be used to select an efficient
image store for a particular compute node, and it actually allows
customers/ISVs to implement their own strategy plugin.

And we have a near-term plan, as flaper87 mentioned above: I believe we
will separate the glance store code *and* the
image-location-selection-strategy stuff into an independent package
under the glance project. At that time we can change Nova to leverage
it, and admins/operators can use selection-strategy related options to
configure Nova rather than Glance (I agree with Jay on this point).
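
For illustration, a location-selection strategy plugin along these lines
might look like the following (the interface, names, and URLs here are
assumptions for the example, not the blueprint's actual API):

```python
def nearest_first(locations, local_schemes=('file',)):
    """Sketch of one strategy: prefer image locations reachable through
    a local/shared filesystem over remote HTTP/swift stores."""
    def rank(location):
        scheme = location['url'].split('://', 1)[0]
        return 0 if scheme in local_schemes else 1
    # sorted() is stable, so ties keep their original order.
    return sorted(locations, key=rank)

# A consumer (e.g. Nova) would ask the strategy to order an image's
# locations and try them in sequence.
locations = [
    {'url': 'http://glance-api.example.com/images/abc'},
    {'url': 'file:///var/lib/glance/images/abc'},
]
best = nearest_first(locations)[0]
```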

zhiyan

On Tue, Feb 4, 2014 at 12:04 AM, Flavio Percoco fla...@redhat.com wrote:
 On 03/02/14 10:13 -0500, Jay Pipes wrote:

 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:

 IMHO, the bit that should really be optimized is the selection of the
 store nodes where the image should be downloaded from. That is,
 selecting the nearest location from the image locations and this is
 something that perhaps should happen in glance-api, not nova.


 I disagree. The reason is because glance-api does not know where nova
 is. Nova does.


 Nova doesn't know where glance is either. More info is required in
 order to finally do something smart here. Not sure what the best
 approach is just yet but as mentioned in my previous email I think
 focusing on the stores for now is the thing to do. (As you pointed out
 below too).



 I continue to think that the best performance gains will come from
 getting rid of glance-api entirely, putting the block-streaming bits
 into a separate Python library, and having Nova and Cinder pull
 image/volume bits directly from backend storage instead of going through
 the glance middleman.



 This is exactly what we're doing by pulling glance.store into its own
 library. I'm working on this myself. We are not completely getting rid
 of glance-api but we're working on not depending on it to get the
 image data.

 Cheers,
 flaper



 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3 compatibility

2014-02-03 Thread Julien Danjou
On Mon, Feb 03 2014, Sylvain Bauza wrote:

 I was at the FOSDEM event this week-end and some interesting talks about
 asyncio raised  my interest about this framework for replacing eventlet.

 Although there is an experimental port of asyncio for Python 2.6 named
 trollius [1], I think it would be a good move for having Python 3.

 I know there were some regular meetings previously, but it seems the
 interest decreased. Would it make sense to resume this effort and maybe do
 a quick inventory of what's missing for supporting Python 3.3 ?

 In particular, which external requirements are still missing Py3
 compatibility and what is the status for the Oslo incubated libraries ?

There's https://wiki.openstack.org/wiki/Python3 but it's not (always) up
to date.

The interest never decreased, but it has always been a long-term
effort and it's not going to be finished tomorrow.

There's a team of people working among OpenStack projects to add support
of Python 3 in various places already, and some clients are already
working.

One of the main blockers has been oslo-incubator, which was not ported to
Python 3. But if my plan comes together, we'll be able to partially gate
on py33 this week.

Last, but not least, trollius has been created by Victor Stinner, who
actually did that work with porting OpenStack in mind and as the first
objective.
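
For readers unfamiliar with what asyncio buys here, a minimal sketch of the
coroutine style it enables (written with the modern async/await syntax; on
Python 3.3 this was spelled `@asyncio.coroutine` with `yield from`, and
trollius back-ports the same model to Python 2 as `yield From(...)`):

```python
import asyncio

# Minimal illustration of the event-loop model asyncio provides as an
# eventlet alternative: explicit cooperative waits instead of
# monkey-patched blocking I/O.
async def double_after(delay):
    await asyncio.sleep(delay)  # yields control to the loop while waiting
    return delay * 2

result = asyncio.run(double_after(0.01))
print(result)
```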

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Lang-pack working group meeting cancelled today due to freenode DOS, will be rescheduled

2014-02-03 Thread devdatta kulkarni
Which time zone?

On Wednesdays between 11.00am - noon (US Central time) there is a git 
integration working group meeting.


-Original Message-
From: Clayton Coleman ccole...@redhat.com
Sent: Monday, February 3, 2014 10:35am
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum] Lang-pack working group meeting cancelled 
today due to freenode DOS, will be rescheduled

Maybe wednesday at noon?




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] do nova objects work for plugins?

2014-02-03 Thread Murray, Paul (HP Cloud Services)

I was looking through Nova Objects with a view to creating an extensible object 
that can be used by writers of plugins to include data generated by the plugin 
(others have also done the same e.g. https://review.openstack.org/#/c/65826/ ) 
On the way I noticed what I think is a bug in Nova Objects serialization (but 
might be considered by design by some - see: 
https://bugs.launchpad.net/nova/+bug/1275675). Basically, if object A has 
object B as a child, and deserialization finds object B to be an unrecognized 
version, it will try to back port the object A to the version number of object 
B.

Now this is not a problem if the version of A is always bumped when the version 
of B changes. If the A and B versions are always deployed together, because 
they are revised and built together, then A will always be the one that is 
found to be incompatible first and in back porting it will always know what 
version its child should be. If that is not the way things are meant to work 
then there is a problem (I think).
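
A minimal sketch of the pitfall (the classes and wire format below are
hypothetical illustrations, not real Nova serialization code):

```python
# Hypothetical sketch of the back-porting issue: deserializing parent A
# fails on child B's unrecognized version, and a naive handler back-ports
# A using B's version number.
class IncompatibleVersion(Exception):
    def __init__(self, version):
        super(IncompatibleVersion, self).__init__(version)
        self.version = version


def deserialize(obj, known):
    """Walk an object and its children; fail on any unrecognized version."""
    if obj["version"] not in known.get(obj["name"], ()):
        raise IncompatibleVersion(obj["version"])
    for child in obj.get("children", []):
        deserialize(child, known)
    return obj


known = {"A": {"1.1"}, "B": {"1.0"}}          # this node knows B only at 1.0
wire = {"name": "A", "version": "1.1",
        "children": [{"name": "B", "version": "1.2"}]}

try:
    deserialize(wire, known)
except IncompatibleVersion as exc:
    # The handler only sees "version 1.2 is incompatible", so it asks for
    # *A* at version 1.2 -- the parent gets back-ported to the child's
    # version number, which is the behaviour described above.
    print("backport A to %s" % exc.version)
```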

Going back to the extensible object, what I would like to be able to do is 
allow the writer of a plugin to implement a nova object for data specific to 
that plugin, so that it can be communicated by Nova. (For example, a resource 
plugin on the compute node generates resource specific data that is passed to 
the scheduler, where another plugin consumes it). This object will be 
communicated as a child of another object (e.g. the compute_node). It would be 
useful if the plugins at each end benefit from the same version handling that 
nova does itself.

It is not reasonable to bump the version of the compute_node when a new external 
plugin is developed. So currently the versioning seems too rigid to implement 
extensible/pluggable objects this way.

A reasonable alternative might be for all objects to be deserialized 
individually within a tree data structure, but I'm not sure what might happen 
to parent/child compatibility without some careful tracking.

Another might be to say that nova objects are for nova use only and that's just 
tough for plugin writers!

Thoughts?

Paul



Paul Murray
HP Cloud Services
+44 117 312 9309

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as HP CONFIDENTIAL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-03 Thread Jay S Bryant
Mike,

Great idea!

I can participate remotely.

Let me know how I can be of the best help!


Jay S. Bryant
   IBM Cinder Subject Matter Expert  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Mike Perez thin...@gmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   02/01/2014 02:11 AM
Subject:[openstack-dev] Cinder Stability Hack-a-thon



Folks,

I would love to get people together who are interested in Cinder stability 
to really dedicate a few days. This is not for additional features, but 
rather finishing what we already have and really getting those into good 
shape before the end of the release.

When: Feb 24-26
Where: San Francisco (DreamHost Office can host), Colorado, remote?

Some ideas that come to mind:

- Cleanup/complete volume retype
- Cleanup/complete volume migration [1][2]
- Other ideas that come from this thread.

I can't stress the dedicated part enough. I think if we have some folks 
from core and anyone interested in contributing and staying focused, we 
can really get a lot done in a few days with a small set of doable 
stability goals to stay focused on. If there is enough interest, being together in the 
mentioned locations would be great, otherwise remote would be fine as 
long as people can stay focused and communicate through suggested 
ideas like team speak or google hangout.

What do you guys think? Location? Other stability concerns to add to the 
list?

[1] - https://bugs.launchpad.net/cinder/+bug/1255622
[2] - https://bugs.launchpad.net/cinder/+bug/1246200


-Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting minutes and logs - 02/03/2014

2014-02-03 Thread Renat Akhmerov
Hi,

Thanks for joining us today!

Here are the links to minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-03-16.00.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-02-03-16.00.log.html

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] API extention update in order to support multi-databases integration

2014-02-03 Thread Kevin Conway
Denis,

It seems that you are hitting on a couple of points here. 1) You aren't a
fan of the fact that calls to user/db are routed through an extension
written for mysql. 2) This creates a conflict when a datastore does not
support those calls. I am confused about your solution.

Your suggestion is to require each datastore manager to define the same
extension and implement the API even if it only returns NotImplemented. If
every datastore must define the same API extension then it is not an
extension. It is a part of the core API. If anything, we should be promoting
the user and database API to core since we already require that datastore
account for it in some way.

As far as communicating to a user whether or not their datastore supports an
API call, I thought we already solved this problem by 1) setting the guest
agents to raise NotImplementedError for unsupported calls, 2) starting work
on a capabilities api:
https://wiki.openstack.org/wiki/Trove/trove-capabilities, and 3) drafting
the feature compatibility of our current target data stores at
https://wiki.openstack.org/wiki/Trove/DatastoreCompatibilityMatrix.

If the question, instead, is How do we provide single-datastore specific
functionality in Trove? then that is a whole different conversation.

** This is a side note for whoever added it, but I am also confused why
MongoDB and Cassandra cannot support the users API according to the
compatibility matrix. MongoDB has user/role management and even has a
superuser which is equivalent to the mysql root. Cassandra also supports
user/pass authentication as a part of its simple-auth protocol.

From:  Denis Makogon dmako...@mirantis.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Monday, February 3, 2014 9:54 AM
To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [Trove] API extention update in order to support
multi-databases integration

Goodday, OpenStack DBaaS users/contributors.


I'd like to start a topic related to refactoring of API extensions,
since Trove already supports more than one db engine (mysql and redis in
single-instance mode).


At this moment, if contributors decide that one of the integrated db
engines should support users/databases CRUD operations, they will run into
the issue of non-pluggable extensions for the described operations. Here [1] you
can see that users/databases CRUD operations will go through routes defined
for mysql database integration.

I would like to suggest a more flexible mechanism for such API extensions:
1. Extensions should be implemented per datastore manager, such as mysql,
redis, cassandra, mongodb.
2. ReST service routes for API extensions should be common for all
datastores. It means that the implementation should be imported/used
according to a unique attribute; the best option for such an attribute is
the datastore manager.
3. Even if a datastore doesn't support users ACL or storage distinction
(databases in terms of mysql, keyspaces in terms of cassandra) the API
extension should be implemented, and each method should raise a
NotImplemented exception with the meaningful message "Not supported".
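
A hedged sketch of the dispatch rules above (the class and function names
are illustrative, not Trove's actual controllers or routing code):

```python
# Illustrative-only sketch: common routes, per-datastore-manager backing
# implementations, and a meaningful "Not supported" answer instead of a
# stacktrace when a datastore lacks the feature.
NOT_SUPPORTED = "Not supported"


class MySQLUserController(object):
    def list_users(self):
        return ["root"]


class RedisUserController(object):
    def list_users(self):
        # Redis has no users ACL, so the extension exists but declines.
        raise NotImplementedError(NOT_SUPPORTED)


# One route table keyed by the unique attribute: the datastore manager.
CONTROLLERS = {"mysql": MySQLUserController, "redis": RedisUserController}


def handle(datastore_manager):
    controller = CONTROLLERS[datastore_manager]()  # pick impl by manager
    try:
        return controller.list_users()
    except NotImplementedError as exc:
        return {"error": str(exc)}  # meaningful message, no stacktrace


print(handle("mysql"))
print(handle("redis"))
```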


As you can see at [2], the mechanism is implemented according to the rules
given above. So, I would like to hear all your thoughts about that.


[1]
https://github.com/openstack/trove/blob/master/trove/extensions/routes/mysql.py
[2] https://review.openstack.org/70742

Best regards,
Denis Makogon
Mirantis, Inc.


Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru
dmako...@mirantis.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-02-03 Thread Doug Hellmann
Thanks, Dims!


On Mon, Feb 3, 2014 at 11:12 AM, Davanum Srinivas dava...@gmail.com wrote:

 Hi Vipin, Doug,

 I've followed instructions from Doug and this page [1]. Will give
 Infra folks a bit of time and then ping them in a day or so.

 Git Repo on Github : https://github.com/dims/oslo.vmware
 Request to create Gerrit Groups :
 https://bugs.launchpad.net/openstack-ci/+bug/1275817
 Review request to create stackforge repo and jenkins jobs :
 https://review.openstack.org/#/c/70761/

 -- dims

 [1] http://ci.openstack.org/stackforge.html

 On Fri, Jan 31, 2014 at 8:51 AM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
  Thanks, Vipin,
 
  Dims is going work on setting up your repository and help you configure
 the
  review team. Give us a few days to get the details sorted out.
 
  Doug
 
 
  On Fri, Jan 31, 2014 at 7:39 AM, Vipin Balachandran
  vbalachand...@vmware.com wrote:
 
  The current API is stable since this is used by nova and cinder for the
  last two releases. Yes, I can act as the maintainer.
 
 
 
  Here is the list of reviewers:
 
  Arnaud Legendre arnaud...@gmail.com
 
  Davanum Srinivas (dims) dava...@gmail.com
 
  garyk gkot...@vmware.com
 
  Kartik Bommepally kbommepa...@vmware.com
 
  Sabari Murugesan smuruge...@vmware.com
 
  Shawn Hartsock harts...@acm.org
 
  Subbu subramanian.neelakan...@gmail.com
 
  Vui Lam v...@vmware.com
 
 
 
  From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
  Sent: Friday, January 31, 2014 4:22 AM
  To: Vipin Balachandran
  Cc: Donald Stufft; OpenStack Development Mailing List (not for usage
  questions)
 
 
  Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
  straight to oslo.vmware
 
 
 
 
 
  On Thu, Jan 30, 2014 at 12:38 PM, Vipin Balachandran
  vbalachand...@vmware.com wrote:
 
  This library is highly specific to VMware drivers in OpenStack and not a
  generic VMware API client. As Doug mentioned, this library won't be
 useful
  outside OpenStack. Also, it has some dependencies on openstack.common
 code
  as well. Therefore it makes sense if we make this code as part of OSLO.
 
 
 
  I think we have consensus that, assuming you are committing to API
  stability, this set of code does not need to go through the incubator
 before
  becoming a library. How stable is the current API?
 
 
 
  If it stable and is not going to be useful to anyone outside of
 OpenStack,
  we can create an oslo.vmware library for it. I can start working with
 -infra
  next week to set up the repository.
 
 
 
  We will need someone on your team to be designated as the lead
 maintainer,
  to coordinate with the Oslo PTL for release management issues and bug
  triage. Is that you, Vipin?
 
 
 
  We will also need to have a set of reviewers for the new repository.
 I'll
  add oslo-core, but it will be necessary for a few people familiar with
 the
  code to also be included. If you have anyone from nova or cinder who
 should
  be a reviewer, we can add them, too. Please send me a list of names and
 the
  email addresses used in gerrit so I can add them to the reviewer list
 when
  the repository is created.
 
 
 
  Doug
 
 
 
 
 
 
 
  By the way, a work in progress review has been posted for the VMware
  cinder driver integration with the OSLO common code
  (https://review.openstack.org/#/c/70108/). The nova integration is
 currently
  under progress.
 
 
 
  Thanks,
 
  Vipin
 
 
 
  From: Doug Hellmann [mailto:doug.hellm...@dreamhost.com]
  Sent: Wednesday, January 29, 2014 4:06 AM
  To: Donald Stufft
  Cc: OpenStack Development Mailing List (not for usage questions); Vipin
  Balachandran
  Subject: Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or
  straight to oslo.vmware
 
 
 
 
 
 
 
  On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io
 wrote:
 
 
  On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:
 
   On Tue, Jan 28 2014, Doug Hellmann wrote:
  
   There are several reviews related to adding VMware interface code to
   the
   oslo-incubator so it can be shared among projects (start at
   https://review.openstack.org/#/c/65075/7 if you want to look at the
   code).
  
   I expect this code to be fairly stand-alone, so I wonder if we would
 be
   better off creating an oslo.vmware library from the beginning,
 instead
   of
   bringing it through the incubator.
  
   Thoughts?
  
   This sounds like a good idea, but it doesn't look OpenStack specific,
 so
   maybe building a non-oslo library would be better.
  
   Let's not zope it! :)
 
  +1 on not making it an oslo library.
 
 
 
  Given the number of issues we've seen with stackforge libs in the gate,
  I've changed my default stance on this point.
 
 
 
  It's not clear from the code whether Vipin et al expect this library to
 be
  useful for anyone not working with both OpenStack and VMware. Either
 way, I
  anticipate having the library under the symmetric gating rules and
 managed
  by the one of the OpenStack teams (oslo, nova, cinder?) and 

Re: [openstack-dev] [Solum] Solum database schema modification proposal

2014-02-03 Thread Jay Pipes
On Mon, 2014-02-03 at 16:22 +, Paul Montgomery wrote:
 Solum community,
 
 I notice that we are using String(36) UUID values in the database
 schema as primary key for many new tables that we are creating.  For
 example:
 https://review.openstack.org/#/c/68328/10/solum/objects/sqlalchemy/application.py
 
 Proposal: Add an int or bigint ID as the primary key, instead of UUID
 (the UUID field remains if needed), to improve database efficiency.

-1 for this particular case.

Using auto-incrementing primary keys can be beneficial in many use cases
-- particularly when trying to create a hot spot on disk for tables
with very high write to read ratios, like logging-type tables. 

However, autoincrementing primary keys come with some baggage when used
in large distributed database systems that UUIDs don't come with.
Namely, if you run Solum in multiple deployment zones or cells, you will
have primary key collision if you try to aggregate those databases into,
say, a data warehouse. With UUID primary keys, you won't have that
trouble.

In addition, for InnoDB tables in MySQL (as well as MS SQL Server, which
also uses clustered indexes), the choice of primary key is critical, as it
determines the order in which the clustered index-organized tables are
written to disk.
If the data you are looking up is accessed by the primary key, it will
be faster to store the records on disk in that order. Since you are not
advocating exposing the autoincrementing primary key to the user, the
database query for a record would need to do one non-clustered index
lookup into the index on UUID to find the autoincrementing primary key
value of the record in question, and then retrieve the record from the
clustered index on disk (or in the InnoDB buffer pool, which is also
ordered by primary key [1]). Two seek operations, versus only one if the
UUID is used as a primary key.

Again, autoincrementing keys are useful in many scenarios. But in this
particular case, I don't believe there would be a whole lot of value.

Best,
-jay

[1] Technically the InnoDB buffer pool contains unordered records within
each 16KB page, and a small ordered PK to slot number catalog at the
tail end of each data page, but the effect is the same.
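
For concreteness, the two schema styles under discussion look roughly like
this (generic SQL via Python's sqlite3; the clustering behaviour described
above applies to InnoDB-style index-organized tables, so this is only a
shape sketch, not a performance demonstration):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")

# Style 1: surrogate auto-increment PK plus a unique UUID column.
# Lookups by UUID need a secondary-index hop before reaching the row.
conn.execute("CREATE TABLE app_a (id INTEGER PRIMARY KEY AUTOINCREMENT,"
             " uuid CHAR(36) UNIQUE NOT NULL, name TEXT)")

# Style 2: UUID as the primary key. One lookup by UUID, and no PK
# collisions when aggregating rows from multiple deployment zones.
conn.execute("CREATE TABLE app_b (uuid CHAR(36) PRIMARY KEY, name TEXT)")

u = str(uuid.uuid4())
conn.execute("INSERT INTO app_b (uuid, name) VALUES (?, ?)", (u, "solum-app"))
row = conn.execute("SELECT name FROM app_b WHERE uuid = ?", (u,)).fetchone()
print(row[0])
```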


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Will the Scheuler use Nova Objects?

2014-02-03 Thread Joe Gordon
On Thu, Jan 30, 2014 at 8:55 AM, Dan Smith d...@danplanet.com wrote:
 I'm of the opinion that the scheduler should use objects, for all the
 reasons that Nova uses objects, but that they should not be Nova
 objects.  Ultimately what the scheduler needs is a concept of capacity,
 allocations, and locality of resources.  But the way those are modeled
 doesn't need to be tied to how Nova does it, and once the scope expands
 to include Cinder it may quickly turn out to be limiting to hold onto
 Nova objects.

 Yeah, my response to the original question was going to be something like:

 If the scheduler was staying in Nova, it would use NovaObjects like the
 rest of Nova. Long-term Gantt should use whatever it wants and the API
 between it and Nova will be something other than RPC and thus something
 other than NovaObject anyway.

++


 I think the point you're making here is that the models used for
 communication between Nova and Gantt should be objecty, regardless of
 what the backing implementation is on either side. I totally agree with
 that.

objecty and versioned


 --Dan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Jay Pipes
On Mon, 2014-02-03 at 17:04 +0100, Flavio Percoco wrote:
 On 03/02/14 10:13 -0500, Jay Pipes wrote:
 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
  IMHO, the bit that should really be optimized is the selection of the
  store nodes where the image should be downloaded from. That is,
  selecting the nearest location from the image locations and this is
  something that perhaps should happen in glance-api, not nova.
 
 I disagree. The reason is because glance-api does not know where nova
 is. Nova does.
 
 Nova doesn't know where glance is either. More info is required in
 order to finally do something smart here. Not sure what the best
 approach is just yet but as mentioned in my previous email I think
 focusing on the stores for now is the thing to do. (As you pointed out
  below too).

Right, which is why I am recommending to get rid of glance-api below...

 I continue to think that the best performance gains will come from
 getting rid of glance-api entirely, putting the block-streaming bits
 into a separate Python library, and having Nova and Cinder pull
 image/volume bits directly from backend storage instead of going through
 the glance middleman.
 
 This is exactly what we're doing by pulling glance.store into its own
 library. I'm working on this myself. We are not completely getting rid
 of glance-api but we're working on not depending on it to get the
 image data.

Cool. Have you pushed a patch for this I can see?

Thanks, Flavio!
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] API extention update in order to support multi-databases integration

2014-02-03 Thread Denis Makogon
Hello, Kevin.

1. Users/Databases CRUD operations are not part of the Trove core API.

2. And I'm not suggesting defining the same extension; if a database
supports ACLs and storage distinction, its developer would be able to
define his own ReST routes for all his needs in terms of CRUD operations.

3. There's a problem with raising exceptions on the guest side: each
exception returns a stacktrace to the client side. That's why the
suggested implementation makes it possible to handle reporting to the
end user without involving the guest service in the process of
information delivery. Example - [1]

4. About the support matrix - it is defined only for the first steps
(initial commit). In the future all possible functions would be
implemented.

The main idea of the patchset and this conversation is to surface a big
problem - routes for extensions should not be datastore-specific; they
should mostly be unique, and the backing implementation should be
datastore-oriented.


[1] https://gist.github.com/denismakogon/8788333




2014/2/3 Kevin Conway kevinjacobcon...@gmail.com

 Denis,

 It seems that you are hitting on a couple of points here. 1) You aren't a
 fan of the fact that calls to user/db are routed through an extension
 written for mysql. 2) This creates a conflict when a datastore does not
 support those calls. I am confused about your solution.

 Your suggestion is to require each datastore manager to define the same
 extension and implement the API even if it only returns NotImplemented. If
 every datastore must define the same API extension then it is not an
 extension. It is a part of the core API. If anything, we should be
 promoting the user and database API to core since we already require that
 datastore account for it in some way.

 As far as communicating to a user whether or not their datastore supports
 an API call, I thought we already solved this problem by 1) setting the
 guest agents to raise NotImplementedError for unsupported calls, 2) starting
 work on a capabilities api:
 https://wiki.openstack.org/wiki/Trove/trove-capabilities, and 3) drafting
 the feature compatibility of our current target data stores at
 https://wiki.openstack.org/wiki/Trove/DatastoreCompatibilityMatrix.

 If the question, instead, is How do we provide single-datastore specific
 functionality in Trove? then that is a whole different conversation.

 ** This is a side note for whoever added it, but I am also confused why
 MongoDB and Cassandra cannot support the users API according to the
 compatibility matrix. MongoDB has user/role management and even has a
 superuser which is equivalent to the mysql root. Cassandra also supports
 user/pass authentication as a part of its simple-auth protocol.

 From: Denis Makogon dmako...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, February 3, 2014 9:54 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Trove] API extention update in order to support
 multi-databases integration

 Goodday, OpenStack DBaaS users/contributors.

 I'd like to start a topic related to refactoring of API extensions,
 since Trove already supports more than one db engine (mysql and redis in
 single-instance mode).

 At this moment, if contributors decide that one of the integrated
 db engines should support users/databases CRUD operations, they will run
 into the issue of non-pluggable extensions for the described operations. Here
 [1] you can see that users/databases CRUD operations will go through routes
 defined for mysql database integration.



 I would like to suggest a more flexible mechanism for such API
 extensions:

 1. Extensions should be implemented per datastore manager, such as mysql,
    redis, cassandra, mongodb.
 2. ReST service routes for API extensions should be common for all
    datastores. It means that the implementation should be imported/used
    according to a unique attribute; the best option for such an attribute
    is the datastore manager.
 3. Even if a datastore doesn't support users ACL or storage distinction
    (databases in terms of mysql, keyspaces in terms of cassandra) the API
    extension should be implemented, and each method should raise a
    NotImplemented exception with the meaningful message "Not supported".


 As you can see at [2], the mechanism is implemented according to the rules
 given above. So, I would like to hear all your thoughts about that.

 [1]
 https://github.com/openstack/trove/blob/master/trove/extensions/routes/mysql.py

 [2] https://review.openstack.org/70742


 Best regards,

 Denis Makogon

 Mirantis, Inc.

 Kharkov, Ukraine

 www.mirantis.com

 www.mirantis.ru

 dmako...@mirantis.com
 ___ OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

[openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Clint Byrum
So, I wrote the original rolling updates spec about a year ago, and the
time has come to get serious about implementation. I went through it and
basically rewrote the entire thing to reflect the knowledge I have
gained from a year of working with Heat.

Any and all comments are welcome. I intend to start implementation very
soon, as this is an important component of the HA story for TripleO:

https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-03 Thread Joe Gordon
On Thu, Jan 30, 2014 at 10:45 PM, Christopher Yeoh cbky...@gmail.com wrote:
 So with it now looking like nova-network won't go away for the foreseeable
 future, it looks like we'll want nova-network support in the Nova V3 API
 after all. I've created a blueprint for this work here:

 https://blueprints.launchpad.net/nova/+spec/v3-api-restore-nova-network

 And there is a first pass of what needs to be done here:

 https://etherpad.openstack.org/p/NovaV3APINovaNetworkExtensions

From the etherpad:

Some of the API only ever supported nova-network and not neutron,
others supported both.
I think as a first pass because of limited time we just port them from
V2 as-is. Longer term I think
we should probably remove neutron back-end functionality as we
shouldn't be proxying, but can
decide that later.

While I like the idea of not proxying neutron, since we are taking the
time to create a new API we should make it clear that this API won't
work when neutron is being used. There have been some nova network
commands that pretend to work even when running neutron (quotas etc).
Perhaps this should be treated as a V3 extension since we don't expect
all deployments to run this API.

The user benefit of proxying neutron is an API that works for both
nova-network and neutron. So a cloud can disable the nova-network API
after the neutron migration instead of being forced to do so in lockstep
with the migration. To continue supporting this perhaps we should see
if we can get neutron to implement its own copy of nova-network v3
API.


 There's a lot to be done in a rather short period of time so any help with
 patches/reviews would be very welcome.

 Also I'd appreciate any feedback on what might be considered a reasonable
 minimal subset of nova-network API support for icehouse so we can better
 prioritise what gets done first.

 Regards,

 Chris




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Lang-pack working group meeting cancelled today due to freenode DOS, will be rescheduled

2014-02-03 Thread Clayton Coleman
Good point.  We can try immediately after the git integration working group 
meeting if folks prefer.  12pm US CST, 1pm EST.

- Original Message -
 Which time zone?
 
 On Wednesdays between 11.00am - noon (US Central time) there is git
 integration working group meeting.
 
 
 -Original Message-
 From: Clayton Coleman ccole...@redhat.com
 Sent: Monday, February 3, 2014 10:35am
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Solum] Lang-pack working group meeting cancelled
 today due to freenode DOS, will be rescheduled
 
 Maybe wednesday at noon?
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-03 Thread Joe Gordon
On Mon, Feb 3, 2014 at 10:02 AM, Joe Gordon joe.gord...@gmail.com wrote:
 On Thu, Jan 30, 2014 at 10:45 PM, Christopher Yeoh cbky...@gmail.com wrote:
 So with it now looking like nova-network won't go away for the forseable
 future, it looks like we'll want nova-network support in the Nova V3 API
 after all. I've created a blueprint for this work here:

 https://blueprints.launchpad.net/nova/+spec/v3-api-restore-nova-network

 And there is a first pass of what needs to be done here:

 https://etherpad.openstack.org/p/NovaV3APINovaNetworkExtensions

 From the etherpad:

 Some of the APIs only ever supported nova-network and not neutron;
 others supported both.
 I think as a first pass because of limited time we just port them from
 V2 as-is. Longer term I think
 we should probably remove neutron back-end functionality as we
 shouldn't be proxying, but can
 decide that later.

 While I like the idea of not proxying neutron, since we are taking the
 time to create a new API we should make it clear that this API won't
 work when neutron is being used. There have been some nova network
 commands that pretend to work even when running neutron (quotas etc).
 Perhaps this should be treated as a V3 extension since we don't expect
 all deployments to run this API.

 The user benefit of proxying neutron is an API that works for both
 nova-network and neutron, so a cloud can disable the nova-network API
 after the neutron migration instead of being forced to do so in lockstep
 with the migration. To continue supporting this, perhaps we should see
 if we can get neutron to implement its own copy of the nova-network v3
 API.


To clarify a bit, the goal here is a network API that works with
both nova-network and neutron without the user knowing which one they are using



 There's a lot to be done in a rather short period of time so any help with
 patches/reviews would be very welcome.

 Also I'd appreciate any feedback on what might be considered a reasonable
 minimal subset of nova-network API support for icehouse so we can better
 prioritise what gets done first.

 Regards,

 Chris




Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-03 Thread Joe Gordon
On Thu, Jan 30, 2014 at 2:28 AM, Julien Danjou jul...@danjou.info wrote:
 On Thu, Jan 30 2014, Joe Gordon wrote:

 Hi Joe,

 While looking at gate failures trying to improve our classification
 rate I stumbled across this:

 http://logs.openstack.org/50/67850/5/gate/gate-ceilometer-python26/8fd55b6/console.html

 It appears that ceilometer is pulling in the nova repo
 (http://git.openstack.org/cgit/openstack/ceilometer/tree/test-requirements.txt#n8)
 and running tests against non-contractual (private) APIs. This means
 that nova will occasionally break ceilometer unit tests.

 /me pat Joe on the back :)

 We know, Ceilometer has been broken several times because of that in the
 past months. We know we shouldn't do that, but for now we don't have
 enough workforce to work on a better solution, unfortunately.

Does this issue mean Ceilometer won't work for the most literal
definition of continuous deployment?

Has this ever been a problem in the stable branches?


 A fix has been proposed and I approved it.

 I don't know how to prevent that. We don't want to block Nova for making
 internal changes, as we're not doing something supported in Ceilometer.

What's the underlying problem here? Nova notifications aren't
versioned? Nova should try to support ceilometer's use case, so it
sounds like there may be a nova issue in here as well.
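To make the versioning idea concrete, here is a minimal sketch (with entirely made-up field names, not nova's real notification format) of how a versioned payload contract would let a consumer like ceilometer survive internal nova changes:

```python
# Hypothetical sketch: a consumer accepts payloads whose major version it
# understands; minor bumps are additive and therefore stay compatible.
# Field names and the version scheme here are illustrative only.
def handle_instance_notification(payload):
    """Parse a versioned notification payload, rejecting unknown majors."""
    major, minor = (int(x) for x in payload['version'].split('.'))
    if major != 1:
        raise ValueError('unsupported notification version %s'
                         % payload['version'])
    # Only rely on fields guaranteed by the 1.x contract.
    return {'instance_id': payload['instance_id'], 'state': payload['state']}

print(handle_instance_notification(
    {'version': '1.0', 'instance_id': 'abc', 'state': 'active'}))
# A 1.3 payload with extra fields would still be accepted; a 2.0 payload
# would raise, forcing an explicit decision rather than a silent breakage.
```

The point is that nova could then refactor internals freely as long as the published payload contract is preserved.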


 So, until now we lived with that burden and each breakage of our gate
 reminds us how important it would be to work on that.

 --
 Julien Danjou
 // Free Software hacker / independent consultant
 // http://julien.danjou.info



[openstack-dev] [infra] Meeting Tuesday February 4th at 19:00 UTC

2014-02-03 Thread Elizabeth Krumbach Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday February 4th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today

2014-02-03 Thread Sandhya Dasu (sadasu)
Hi all,
Both openstack-meeting and openstack-meeting-alt are available today. Let's 
meet at UTC 2000 @ openstack-meeting-alt.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Monday, February 3, 2014 12:52 AM
To: Sandhya Dasu sad...@cisco.com, Robert Li (baoli) ba...@cisco.com, 
Robert Kukura rkuk...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Sandhya,
Can you please elaborate on how you suggest extending the below bp for SRIOV 
ports managed by different mechanism drivers?
I am not biased toward any specific direction here; I just think we need a 
common layer for managing SRIOV ports at neutron, since there is a common path 
between nova and neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing SRIOVPortProfileMixin would be creating yet another way to take 
care of extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.com, Robert Kukura rkuk...@redhat.com, 
Sandhya Dasu sad...@cisco.com, OpenStack Development Mailing 
List (not for usage questions) openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV 
port related attributes

  -- what should mechanism drivers put in binding:vif_details, and how would 
nova use this information? As far as I can see from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to binding 
extension. For persistency it should be added in 

Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-03 Thread Khanh-Toan Tran
Nice idea. I also think that filters and weighers are FilterScheduler-specific. 
Thus, it is unnecessary for SolverScheduler to try to translate all 
filters/weighers into constraints, though it would ease the transition. 
Anyway, we just need some placement logic that will be written as constraints, 
as it is currently represented as filters in FilterScheduler. So yes, we 
will need a placement advisor here.

As for the provisioning engine, wouldn't the scheduler manager (or maybe 
nova-conductor) be the one? I still can't figure out how nova-conductor will 
interact with gantt once we have it, or whether we put its logic into gantt, too.

Another thought would be the need for the Instance Group API [1]. Currently 
users can only request multiple instances of the same flavor. These requests do 
not need LP to solve; just placing instances one by one is sufficient. Therefore 
we need this API so that users can request instances of different flavors, with 
some relations (constraints) among them. The advantage is that this logic and 
API will help us add Cinder volumes with ease (not sure how the Cinder-stackers 
think about it, though).

Best regards,
Toan

[1] https://wiki.openstack.org/wiki/InstanceGroupApiExtension

- Original Message -
From: Yathiraj Udupi (yudupi) yud...@cisco.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thu, 30 Jan 2014 18:13:59 - (UTC)
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and 
Solver Scheduler

It is really good we are reviving the conversation we started during the last 
summit in Hong Kong during one of the scheduler sessions called “Smart resource 
placement”. This is the document we used to discuss during the session. 
You may have seen this before:
https://docs.google.com/document/d/1IiPI0sfaWb1bdYiMWzAAx0HYR6UqzOan_Utgml5W1HI/edit

The idea is to separate out the logic for the placement decision engine from 
the actual request and the final provisioning phase. The placement engine 
itself can be pluggable, and as we show in the solver scheduler blueprint, we 
show how it fits inside of Nova.

The discussions at the summit and in our weekly scheduler meetings led to us 
starting the “Smart resource placement” idea inside of Nova, and then take it 
to a unified global level spanning cross services such as cinder and neutron.

As you point out, I do agree on the two entities of placement advisor and 
placement engine, but I think there should be a third one: the provisioning 
engine, which should be responsible for whatever it takes to finally create the 
instances after the placement decision has been taken.
It is good to take incremental approaches, hence we should try to get patches 
like these get accepted first within nova, and then slowly split up the logic 
into separate entities.

Thanks,
Yathi.





On 1/30/14, 7:14 AM, Gil Rapaport g...@il.ibm.com wrote:

Hi all,

Excellent definition of the issue at hand.
The recent blueprints of policy-based-scheduler and solver-scheduler indeed 
highlight a possible weakness in the current design, as despite their 
completely independent contributions (i.e. which filters to apply per request 
vs. how to compute a valid placement) their implementation as drivers makes 
combining them non-trivial.

As Alex Glikson hinted a couple of weekly meetings ago, our approach to this is 
to think of the driver's work as split between two entities:
-- A Placement Advisor, that constructs placement problems for scheduling 
requests (filter-scheduler and policy-based-scheduler)
-- A Placement Engine, that solves placement problems (HostManager in 
get_filtered_hosts() and solver-scheduler with its LP engine).

Such modularity should allow developing independent mechanisms that can be 
combined seamlessly through a unified and well-defined protocol based on 
constructing placement problem objects by the placement advisor and then 
passing them to the placement engine, which returns the solution. The protocol 
can be orchestrated by the scheduler manager.
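To make the split concrete, here is a minimal sketch of the protocol; none of these class or method names exist in Nova, they only mirror the hand-off described above (advisor builds the problem, engine solves it, the manager orchestrates):

```python
# Illustrative sketch only: the advisor constructs a placement problem from a
# request (filter/policy logic), and the engine solves it.  A real engine
# could be the filter-based HostManager or the LP-based solver-scheduler.
class PlacementProblem:
    def __init__(self, request, hosts, constraints):
        self.request = request          # what to place (e.g. instance spec)
        self.hosts = hosts              # candidate hosts with capacities
        self.constraints = constraints  # predicates a host must satisfy

class PlacementAdvisor:
    """Turns a scheduling request into a problem object."""
    def build_problem(self, request, hosts):
        constraints = [lambda host, req: host['free_ram'] >= req['ram']]
        return PlacementProblem(request, hosts, constraints)

class PlacementEngine:
    """Solves a problem (here greedily, for brevity)."""
    def solve(self, problem):
        for host in problem.hosts:
            if all(c(host, problem.request) for c in problem.constraints):
                return host['name']
        return None  # no feasible placement

# Orchestration, as the scheduler manager might drive it.
advisor, engine = PlacementAdvisor(), PlacementEngine()
hosts = [{'name': 'host1', 'free_ram': 512},
         {'name': 'host2', 'free_ram': 4096}]
problem = advisor.build_problem({'ram': 2048}, hosts)
print(engine.solve(problem))  # host2
```

Because the two sides only share the problem object, either one can be swapped independently, which is exactly the modularity argued for above.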

As can be seen at this point already, the policy-based-scheduler blueprint can 
now be positioned as an improvement of the placement advisor. Similarly, the 
solver-scheduler blueprint can be positioned as an improvement of the placement 
engine.

I'm working on a wiki page that will get into the details.
Would appreciate your initial thoughts on this approach.

Regards,
Gil



From: Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date: 01/30/2014 01:43 PM
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and 
Solver Scheduler




Hi Sylvain,

1) Some Filters such as AggregateCoreFilter, AggregateRAMFilter can change its 
parameters for 

Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Robert Collins
Quick thoughts:

 - I'd like to be able to express a minimum service percentage: e.g. I
know I need 80% of my capacity available at any one time, so an
additional constraint on the unit counts is to stay below 20% down at
a time (and this implies that if 20% have failed, either stop or spin
up more nodes before continuing).
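The arithmetic behind such a constraint is simple enough to sketch; the function name and integer-percentage interface are made up for illustration, not part of any Heat API:

```python
# Illustrative only: how many group members may be updated (taken down)
# concurrently while keeping min_available_pct of the group in service.
# Integer percentages are used to avoid float rounding surprises.
def max_concurrent_updates(total_units, min_available_pct):
    min_up = (total_units * min_available_pct + 99) // 100  # ceiling division
    max_down = total_units - min_up
    # When this is 0, the policy question raised above applies: either stop,
    # or spin up spare capacity before the update can proceed.
    return max_down

print(max_concurrent_updates(10, 80))   # 2: update two at a time
print(max_concurrent_updates(5, 90))    # 0: must add capacity first
```

Note that already-failed units would count against the same budget, which is what forces the "stop or spin up more nodes" decision.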

The wait condition stuff seems to be conflated with the 'graceful
operations' stuff we discussed briefly at the summit, which in my head
at least is an entirely different thing - it's per node rather than
per group. If done separately that might make each feature
substantially easier to reason about.

-Rob

On 4 February 2014 06:52, Clint Byrum cl...@fewbar.com wrote:
 So, I wrote the original rolling updates spec about a year ago, and the
 time has come to get serious about implementation. I went through it and
 basically rewrote the entire thing to reflect the knowledge I have
 gained from a year of working with Heat.

 Any and all comments are welcome. I intend to start implementation very
 soon, as this is an important component of the HA story for TripleO:

 https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Mark Washenberger
On Mon, Feb 3, 2014 at 7:13 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
  IMHO, the bit that should really be optimized is the selection of the
  store nodes where the image should be downloaded from. That is,
  selecting the nearest location from the image locations and this is
  something that perhaps should happen in glance-api, not nova.

 I disagree. The reason is because glance-api does not know where nova
 is. Nova does.

 I continue to think that the best performance gains will come from
 getting rid of glance-api entirely, putting the block-streaming bits
 into a separate Python library, and having Nova and Cinder pull
 image/volume bits directly from backend storage instead of going through
 the glance middleman.


When you say get rid of glance-api, do you mean the glance server project,
or glance-api as opposed to glance-registry? If it's the latter, I think
we're basically in agreement. However, there may be a little bit of a
terminology distinction that is important. Here is the plan that is
currently underway:

1) Deprecate the registry deployment (done when v1 is deprecated)
2) v2 glance api talks directly to the underlying database (done)
3) Create a library in the images program that allows OpenStack projects to
share code for reading image data remotely and picking optimal paths for
bulk data transfer (In progress under the glance.store title)
4) v2 exposes locations that clients can directly access (partially done,
continues to need a lot of improvement)
5) v2 still allows downloading images from the glance server as a
compatibility and lowest-common-denominator feature

In 4, some work is complete, and some more is planned, but we still need
some more planning and design to figure out how to support directly
downloading images in a secure and general way.
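For step 4, the client side of "picking optimal paths" might look something like the sketch below. The scheme-to-cost table and the location format are assumptions for illustration only, not the actual glance.store interface:

```python
# Hypothetical sketch: given the locations a v2 image exposes, a consumer
# (nova, cinder) picks the cheapest directly-accessible one, and falls back
# to the glance-api download path (step 5) when none is usable.
TRANSFER_COST = {'file': 0, 'rbd': 1, 'http': 2}  # lower = preferred

def pick_location(locations):
    """Return the best direct location, or None to use the glance server."""
    usable = [loc for loc in locations
              if loc['url'].split(':', 1)[0] in TRANSFER_COST]
    if not usable:
        return None  # lowest-common-denominator: download via glance-api
    return min(usable,
               key=lambda loc: TRANSFER_COST[loc['url'].split(':', 1)[0]])

locations = [
    {'url': 'http://glance.example.com/v2/images/abc/file'},
    {'url': 'rbd://cluster/pool/abc/snap'},
]
print(pick_location(locations)['url'])  # the rbd location wins over http
```

The security question mentioned above is exactly about which of these locations it is safe to expose to a given client.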

Cheers,
markwash


 Best,
 -jay




Re: [openstack-dev] Barbican Incubation Review

2014-02-03 Thread Joe Gordon
On Wed, Jan 29, 2014 at 3:28 PM, Justin Santa Barbara
jus...@fathomdb.com wrote:
 Jarret Raim  wrote:

I'm presuming that this is our last opportunity for API review - if
this isn't the right occasion to bring this up, ignore me!

Apparently you are right:

For incubation

'Project APIs should be reasonably stable'

http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements#n23

And there is nothing about APIs in graduation.



 I wouldn't agree here. The barbican API will be evolving over time as we
 add new functionality. We will, of course, have to deal with backwards
 compatibility and versioning as we do so.

 I suggest that writing bindings for every major language, maintaining
 them through API revisions, and dealing with all the software that
 depends on your service is a much bigger undertaking than e.g. writing
 Barbican itself ;-)  So it seems much more efficient to get v1 closer
 to right.

 I don't think this need turn into a huge upfront design project
 either; I'd just like to see the TC approve your project with an API
 that the PTLs have signed off on as meeting their known needs, rather
 than one that we know will need changes.  Better to delay take-off
 than commit ourselves to rebuilding the engine in mid-flight.

 We don't need the functionality to be implemented in your first
 release, but the API should allow the known upcoming changes.

 We're also looking at adopting the
 model that Keystone uses for API blueprints where the API changes are
 separate blueprints that are reviewed by a larger group than the
 implementations.

 I think you should aspire to something greater than the adoption of Keystone 
 V3.

 I'm sorry to pick on your project - I think it is much more important
 to OpenStack than many others, though that's a big part of why it is
 important to avoid API churn.  The instability of our APIs is a huge
 barrier to OpenStack adoption.  I'd love to see the TC review all
 breaking API changes, but I don't think we're set up that way.

 Justin



[openstack-dev] transactions in openstack REST API?

2014-02-03 Thread Chris Friesen


Has anyone ever considered adding the concept of transaction IDs to the 
openstack REST API?


I'm envisioning a way to handle long-running transactions more cleanly. 
 For example:


1) A user sends a request to live-migrate an instance
2) Openstack acks the request and includes a transaction ID in the 
response.
3) The user can then poll (or maybe listen to notifications) to see 
whether the transaction is complete or hit an error.
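The flow above, including the sub-job bookkeeping described further down, can be sketched as follows. Everything here (class names, state strings) is hypothetical, just to make the proposal concrete:

```python
import uuid

# Hypothetical server-side bookkeeping for long-running operations: every
# async request is acked with a transaction ID, sub-jobs (e.g. "cast" RPC
# calls) are recorded against it, and the user polls for overall status.
class TransactionTracker:
    def __init__(self):
        self._transactions = {}

    def start(self, action, subjobs):
        """Ack an async request: record its sub-jobs, return a transaction ID."""
        txn_id = str(uuid.uuid4())
        self._transactions[txn_id] = {'action': action,
                                      'pending': set(subjobs),
                                      'error': None}
        return txn_id

    def complete_subjob(self, txn_id, subjob):
        self._transactions[txn_id]['pending'].discard(subjob)

    def status(self, txn_id):
        """What polling GET /transactions/<id> would return."""
        txn = self._transactions[txn_id]
        if txn['error']:
            return 'error'
        return 'complete' if not txn['pending'] else 'in-progress'

# A live migration fans out into two sub-jobs; the transaction is only
# complete when both have reported back.
tracker = TransactionTracker()
txn = tracker.start('live-migrate',
                    ['pre_live_migration', 'post_live_migration'])
print(tracker.status(txn))                         # in-progress
tracker.complete_subjob(txn, 'pre_live_migration')
tracker.complete_subjob(txn, 'post_live_migration')
print(tracker.status(txn))                         # complete
```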


I view this as most useful for things that could potentially take a long 
time to finish--instance creation/deletion/migration/evacuation are 
obvious, I'm sure there are others.


Also, anywhere that we use a cast RPC call we'd want to add that call 
to a list associated with that transaction in the database...that way 
the transaction is only complete when all the sub-jobs are complete.


I've seen some discussion about using transaction IDs to locate logs 
corresponding to a given transaction, but nothing about the end user 
being able to query the status of the transaction.


Chris



Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-03 Thread Jay Pipes
On Mon, 2014-02-03 at 10:59 -0800, Mark Washenberger wrote:

 On Mon, Feb 3, 2014 at 7:13 AM, Jay Pipes jaypi...@gmail.com wrote:
 On Mon, 2014-02-03 at 10:03 +0100, Flavio Percoco wrote:
  IMHO, the bit that should really be optimized is the
 selection of the
  store nodes where the image should be downloaded from. That
 is,
  selecting the nearest location from the image locations and
 this is
  something that perhaps should happen in glance-api, not
 nova.
 
 
 I disagree. The reason is because glance-api does not know
 where nova
 is. Nova does.
 
 I continue to think that the best performance gains will come
 from
 getting rid of glance-api entirely, putting the
 block-streaming bits
 into a separate Python library, and having Nova and Cinder
 pull
 image/volume bits directly from backend storage instead of
 going through
 the glance middleman.
 
 
 When you say get rid of glance-api, do you mean the glance server
 project? or glance-api as opposed to glance-registry? 

I mean the latter.

 If it's the latter, I think we're basically in agreement. However,
 there may be a little bit of a terminology distinction that is
 important. Here is the plan that is currently underway:
 
 1) Deprecate the registry deployment (done when v1 is deprecated)
 2) v2 glance api talks directly to the underlying database (done)
 3) Create a library in the images program that allows OpenStack
 projects to share code for reading image data remotely and picking
 optimal paths for bulk data transfer (In progress under the
 glance.store title)
 4) v2 exposes locations that clients can directly access (partially
 done, continues to need a lot of improvement)
 5) v2 still allows downloading images from the glance server as a
 compatibility and lowest-common-denominator feature

All good.

 In 4, some work is complete, and some more is planned, but we still
 need some more planning and design to figure out how to support
 directly downloading images in a secure and general way.

Sounds good to me :)

Best,
-jay





Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-03 Thread Chris Friesen

On 02/03/2014 12:28 PM, Khanh-Toan Tran wrote:


Another thought would be the need for the Instance Group API [1].
Currently users can only request multiple instances of the same
flavors. These requests do not need LP to solve, just placing
instances one by one is sufficient. Therefore we need this API so
that users can request instances of different flavors, with some
relations (constraints) among them. The advantage is that this logic
and API will help us add Cinder volumes with ease (not sure how the
Cinder-stackers think about it, though).


I don't think that the instance group API actually helps here.  (I think 
it's a good idea, just not directly related to this.)


I think what we really want is the ability to specify an arbitrary list 
of instances (or other things) that you want to schedule, each of which 
may have different image/flavor, each of which may be part of an 
instance group, a specific network, have metadata which associates with 
a host aggregate, desire specific PCI passthrough devices, etc.


An immediate user of something like this would be heat, since it would 
let them pass the whole stack to the scheduler in one API call.  The 
scheduler could then take a more holistic view, possibly doing a better 
fitting job than if the instances are scheduled one-at-a-time.
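A toy illustration of that holistic benefit, under made-up data structures (this is not Nova or Heat code): with the whole request list visible, the engine can sort by size before packing, where one-at-a-time placement in arrival order can paint itself into a corner.

```python
# First-fit-decreasing over a mixed list of instance requests; a real
# solver-scheduler could solve the same batch as an LP instead.
def place_batch(requests, hosts):
    free = dict(hosts)                  # host name -> free RAM (MB)
    placement = {}
    for req in sorted(requests, key=lambda r: r['ram'], reverse=True):
        host = next((h for h, ram in free.items() if ram >= req['ram']), None)
        if host is None:
            return None                 # the batch does not fit as a whole
        free[host] -= req['ram']
        placement[req['name']] = host
    return placement

hosts = {'h1': 4096, 'h2': 4096}
requests = [{'name': 'small-1', 'ram': 1024},
            {'name': 'small-2', 'ram': 1024},
            {'name': 'big-1', 'ram': 3072},
            {'name': 'big-2', 'ram': 3072}]
# Placing in list order strands the second big request (both small ones land
# on h1 first); sorting the whole batch fits everything.
print(place_batch(requests, hosts))
```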


Chris



Re: [openstack-dev] transactions in openstack REST API?

2014-02-03 Thread Andrew Laski

On 02/03/14 at 01:10pm, Chris Friesen wrote:


Has anyone ever considered adding the concept of transaction IDs to 
the openstack REST API?


I'm envisioning a way to handle long-running transactions more 
cleanly.  For example:


1) A user sends a request to live-migrate an instance
2) Openstack acks the request and includes a transaction ID in the 
response.
3) The user can then poll (or maybe listen to notifications) to see 
whether the transaction is complete or hit an error.


I've called them tasks, but I have a proposal up at 
https://blueprints.launchpad.net/nova/+spec/instance-tasks-api that is 
very similar to this.  It allows for polling, but doesn't get into 
notifications.  But this is a first step in this direction and it can be 
expanded upon later.


Please let me know if this covers what you've brought up, and add any 
feedback you may have to the blueprint.




I view this as most useful for things that could potentially take a 
long time to finish--instance creation/deletion/migration/evacuation 
are obvious, I'm sure there are others.


Also, anywhere that we use a cast RPC call we'd want to add that 
call to a list associated with that transaction in the 
database...that way the transaction is only complete when all the 
sub-jobs are complete.


I've seen some discussion about using transaction IDs to locate logs 
corresponding to a given transaction, but nothing about the end user 
being able to query the status of the transaction.


Chris



Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-03 Thread Yathiraj Udupi (yudupi)
The solver-scheduler is designed to solve for an arbitrary list of instances of 
different flavors. We need some updated APIs in the scheduler to be 
able to pass on such requests. The Instance Group API is an initial effort to 
specify such groups.



Even now, the existing solver scheduler patch works for a group request, only 
that it is a group of a single flavor. It still solves once for the entire 
group, based on the constraints on available capacity.



With updates to the APIs that call the solver scheduler, we can easily 
demonstrate how an arbitrary group of VM requests can be satisfied and solved 
together in a single constraint solver run (an LP-based solver for now in the 
current patch, but it can be any constraint solver).



Thanks,

Yathi.





-- Original message--

From: Chris Friesen

Date: Mon, 2/3/2014 11:24 AM

To: openstack-dev@lists.openstack.org;

Subject:Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver 
Scheduler



On 02/03/2014 12:28 PM, Khanh-Toan Tran wrote:

 Another thought would be the need for the Instance Group API [1].
 Currently users can only request multiple instances of the same
 flavors. These requests do not need LP to solve, just placing
 instances one by one is sufficient. Therefore we need this API so
 that users can request instances of different flavors, with some
 relations (constraints) among them. The advantage is that this logic
 and API will help us add Cinder volumes with ease (not sure how the
 Cinder-stackers think about it, though).

I don't think that the instance group API actually helps here.  (I think
it's a good idea, just not directly related to this.)

I think what we really want is the ability to specify an arbitrary list
of instances (or other things) that you want to schedule, each of which
may have different image/flavor, each of which may be part of an
instance group, a specific network, have metadata which associates with
a host aggregate, desire specific PCI passthrough devices, etc.

An immediate user of something like this would be heat, since it would
let them pass the whole stack to the scheduler in one API call.  The
scheduler could then take a more holistic view, possibly doing a better
fitting job than if the instances are scheduled one-at-a-time.

Chris



Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-02-03 10:47:06 -0800:
 Quick thoughts:
 
  - I'd like to be able to express a minimum service percentage: e.g. I
 know I need 80% of my capacity available at anyone time, so an
 additional constraint to the unit counts, is to stay below 20% down at
 a time (and this implies that if 20% have failed, either stop or spin
 up more nodes before continuing).
 

Right will add that.

One thing though, all failures lead to rollback. I put that in the
'Unresolved issues' section. Continuing a group operation with any
failures is an entirely different change to Heat. We have a few choices,
from a whole re-thinking of how we handle failures, to just a special
type of resource group that tolerates failure percentages.

 The wait condition stuff seems to be conflating in the 'graceful
 operations' stuff we discussed briefly at the summit, which in my head
 at least is an entirely different thing - it's per node rather than
 per group. If done separately that might make each feature
 substantially easier to reason about.

Agreed. I think something more generic than an actual Heat wait condition
would make more sense. Perhaps even returning all of the active scheduler
tasks which the update must wait on would make sense. Then in the
graceful update version we can just make the dynamically created wait
conditions depend on the update pattern, which would have the same effect.

With the maximum out of service addition, we'll also need to make sure
that upon the must wait for these things completing we evaluate state
again before letting the update proceed.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python 3 compatibility

2014-02-03 Thread Chmouel Boudjnah
On Mon, Feb 3, 2014 at 5:29 PM, Julien Danjou jul...@danjou.info wrote:

 Last, but not least, trollius has been created by Victor Stinner, who
 actually did that work with porting OpenStack in mind and as the first
 objective.



AFAIK: victor had plans to send a mail about it to the list later this week.

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today

2014-02-03 Thread Sandhya Dasu (sadasu)
Hi,
Since openstack-meeting-alt seems to be in use, Baoli and I are 
moving to openstack-meeting. Hopefully, Bob Kukura and Irena can join soon.

Thanks,
Sandhya

From: Sandhya Dasu sad...@cisco.commailto:sad...@cisco.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, February 3, 2014 1:26 PM
To: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com, Robert 
Li (baoli) ba...@cisco.commailto:ba...@cisco.com, Robert Kukura 
rkuk...@redhat.commailto:rkuk...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of 
discussion today

Hi all,
Both openstack-meeting and openstack-meeting-alt are available today. Let's 
meet at UTC 2000 @ openstack-meeting-alt.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com
Date: Monday, February 3, 2014 12:52 AM
To: Sandhya Dasu sad...@cisco.commailto:sad...@cisco.com, Robert Li 
(baoli) ba...@cisco.commailto:ba...@cisco.com, Robert Kukura 
rkuk...@redhat.commailto:rkuk...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Sandhya,
Can you please elaborate on how you suggest extending the below BP for SRIOV 
ports managed by a different mechanism driver?
I am not biased toward any specific direction here; I just think we need a common 
layer for managing SRIOV ports in Neutron, since there is a common path between 
Nova and Neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing, SRIOVPortProfileMixin would be creating yet another way to take 
care of extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.commailto:ba...@cisco.com, Robert 
Kukura rkuk...@redhat.commailto:rkuk...@redhat.com, Sandhya Dasu 
sad...@cisco.commailto:sad...@cisco.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.
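As a sketch, the implicit argument would surface in a port-create request body roughly like this (illustrative only; only binding:vnic_type was agreed above, and the network id is a placeholder):

```python
# Illustrative port-create body; binding:profile and binding:vif_details
# would be populated by the mechanism driver on output, not supplied here.
port_body = {
    "port": {
        "network_id": "NETWORK-UUID-PLACEHOLDER",
        "binding:vnic_type": "direct",  # one of: virtio, direct, macvtap
    }
}
```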

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper 

Re: [openstack-dev] transactions in openstack REST API?

2014-02-03 Thread Chris Friesen

On 02/03/2014 01:31 PM, Andrew Laski wrote:

On 02/03/14 at 01:10pm, Chris Friesen wrote:


Has anyone ever considered adding the concept of transaction IDs to
the openstack REST API?

I'm envisioning a way to handle long-running transactions more
cleanly.  For example:

1) A user sends a request to live-migrate an instance
2) Openstack acks the request and includes a transaction ID in the
response.
3) The user can then poll (or maybe listen to notifications) to see
whether the transaction is complete or hit an error.


I've called them tasks, but I have a proposal up at
https://blueprints.launchpad.net/nova/+spec/instance-tasks-api that is
very similar to this.  It allows for polling, but doesn't get into
notifications.  But this is a first step in this direction and it can be
expanded upon later.
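The client-side polling described above might look something like this (a sketch; the status names and the get_task callable are assumptions, not an existing API):

```python
import time

def wait_for_task(get_task, task_id, interval=2.0, timeout=600.0):
    # Poll until the task/transaction reaches a terminal state.
    # get_task stands in for an HTTP GET against the task resource.
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = get_task(task_id)
        if task["status"] in ("completed", "error"):
            return task
        time.sleep(interval)
    raise RuntimeError("task %s still running after %s seconds"
                       % (task_id, timeout))
```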

Please let me know if this covers what you've brought up, and add any
feedback you may have to the blueprint.



That actually looks really good.  I like the idea of subtasks for things 
like live migration.


The only real comment I have at this point is that you might want to 
talk to the transaction ID guys and maybe use your task UUID as the 
transaction ID that gets passed to other services acting on behalf of nova.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Thomas Herve
 So, I wrote the original rolling updates spec about a year ago, and the
 time has come to get serious about implementation. I went through it and
 basically rewrote the entire thing to reflect the knowledge I have
 gained from a year of working with Heat.
 
 Any and all comments are welcome. I intend to start implementation very
 soon, as this is an important component of the HA story for TripleO:
 
 https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates

Hi Clint, thanks for pushing this.

First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 
different entities. The second just looks like a parametrization of the first 
(growth_factor=1?).

I then feel that using (abusing?) depends_on for update pattern is a bit weird. 
Maybe I'm influenced by the CFN design, but the separate UpdatePolicy attribute 
feels better (although I would probably use a property). I guess my main 
question is around the meaning of using the update pattern on a server 
instance. I think I see what you want to do for the group, where child_updating 
would return a number, but I have no idea what it means for a single resource. 
Could you detail the operation a bit more in the document?

It also seems that the interface you're creating 
(child_creating/child_updating) is fairly specific to your use case. For 
autoscaling we have a need for more generic notification system, it would be 
nice to find common grounds. Maybe we can invert the relationship? Add a 
notified_resources attribute, which would call hooks on the parent when 
actions are happening.

Thanks,

-- 
Thomas 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cinder + taskflow

2014-02-03 Thread Joshua Harlow
Hi all,

After talking with john g. about taskflow in cinder and seeing more and
more reviews showing up I wanted to start a thread to gather all our
lessons learned and how we can improve a little before continuing to add
too many more refactorings and reviews (making sure everyone
understands the larger goal and larger picture of switching pieces of
cinder - piece by piece - to taskflow).

Just to catch everyone up.

Taskflow started integrating with cinder in havana and there has been some
continued work around these changes:

- https://review.openstack.org/#/c/58724/
- https://review.openstack.org/#/c/66283/
- https://review.openstack.org/#/c/62671/

There has also been a few other pieces of work going in (forgive me if I
missed any...):

- https://review.openstack.org/#/c/64469/
- https://review.openstack.org/#/c/69329/
- https://review.openstack.org/#/c/64026/

I think now would be a good time (and seems like a good idea) to create
the discussion to learn how people are using taskflow, common patterns
people like, don't like, common refactoring idioms that are occurring and
most importantly to make sure that we refactor with a purpose and not just
refactor for refactoring sake (which can be harmful if not done
correctly). So to get a kind of forward and unified momentum behind
further adjustments I'd just like to make sure we are all aligned and
understood on the benefits and yes even the drawbacks that these
refactorings bring.

So here is my little list of benefits:

- Objects that do just one thing (a common pattern I am seeing is
determining what the one thing is, without making it so granular that it's
hard to read).
- Combining these objects together in a well-defined way (once again it
has to be done carefully to avoid too much granularity).
- Ability to test these tasks and flows via mocking (something that is
harder when its not split up like this).
- Features that aren't currently used such as state-persistence (but will
help cinder become more crash-resistant in the future).
  - This one will itself need to be understood before doing [I started
etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
for this].

List of drawbacks (or potential drawbacks):

- Having an understanding of what taskflow is doing adds a new layer of
things to know (hopefully the docs help in this area; that was their goal).
- Selecting too granular a task or flow makes it harder to
follow/understand the task/flow logic.
- Focuses on the long-term (not necessarily short-term) state-management
concerns (can't refactor Rome in a day).
- Taskflow is being developed at the same time cinder is.
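To make the "objects that do one thing, combined in a well-defined way" idea concrete, here is a toy illustration of the task/flow pattern -- deliberately NOT the real taskflow API, just the shape of it:

```python
# Toy sketch of the task/flow pattern (not the real taskflow API):
# each task does one thing and knows how to undo it; the mini "engine"
# runs tasks in order and reverts completed ones on failure.
class Task(object):
    def execute(self, ctx):
        raise NotImplementedError

    def revert(self, ctx):
        pass

class ReserveQuota(Task):
    def execute(self, ctx):
        ctx["quota_reserved"] = True

    def revert(self, ctx):
        ctx["quota_reserved"] = False

class CreateVolume(Task):
    def execute(self, ctx):
        ctx["volume"] = {"status": "available"}

def run_flow(tasks, ctx):
    done = []
    try:
        for t in tasks:
            t.execute(ctx)
            done.append(t)
    except Exception:
        for t in reversed(done):
            t.revert(ctx)
        raise
    return ctx
```

Each task is trivially mockable in isolation, which is the testing benefit listed above; the revert chain is what state-persistence would eventually make crash-resistant.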

I'd be very interested in hearing about others experiences and to make
sure that we discuss the changes (in a well documented and agreed on
approach) before jumping too far into the 'deep end' with a large amount
of refactoring (aka, refactoring with a purpose). Let's make this thread
as useful as we can and try to see how we can unify all these refactorings
behind a common (and documented & agreed-on) purpose.

A thought, for the reviews above, I think it would be very useful to
etherpad/writeup more in the blueprint what the 'refactoring with a
purpose' is so that its more known to future readers (and for active
reviewers), hopefully this email can start to help clarify that purpose so
that things proceed as smoothly as possible.

-Josh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] do nova objects work for plugins?

2014-02-03 Thread Dan Smith
 Basically, if object A has object B as a child, and deserialization
 finds object B to be an unrecognized version, it will try to back
 port the object A to the version number of object B.

Right, which is why we rev the version of, say, the InstanceList when we
have to rev Instance itself, and why we have unit tests to makes sure
that happens.

 It is not reasonable to bump the version of the compute_node when
 new external plugin is developed. So currently the versioning seems
 too rigid to implement extensible/pluggable objects this way.

So we're talking about an out-of-tree closed-source plugin, right? IMHO,
Nova's versioning infrastructure is in place to make Nova able to handle
upgrades; adding requirements for supporting out-of-tree plugins
wouldn't be high on my priority list.

 A reasonable alternative might be for all objects to be deserialized 
 individually within a tree data structure, but I’m not sure what
 might happen to parent/child compatibility without some careful
 tracking.

I think it would probably be possible to make the deserializer specify
the object and version it tripped over when passing the whole thing back
to conductor to be backleveled. That seems reasonably useful to Nova itself.

 Another might be to say that nova objects are for nova use only and 
 that’s just tough for plugin writers!

Well, for the same reason we don't provide a stable virt driver API
(among other things) I don't think we need to be overly concerned with
allowing arbitrary bolt-on code to hook in at this point.

Your concern is, I assume, allowing a resource metric plugin to shove
actual NovaObject items into a container object of compute node metrics?
Is there some reason that we can't just coerce all of these to a
dict-of-strings or dict-of-known-primitive-types to save all of this
complication? I seem to recall the proposal that led us down this road
being store/communicate arbitrary JSON blobs, but certainly there is a
happy medium?

Given that the nova meetup is next week, perhaps that would be a good
time to actually figure out a path forward?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Solum database schema modification proposal

2014-02-03 Thread Angus Salkeld

On 03/02/14 16:22 +, Paul Montgomery wrote:

Solum community,

I notice that we are using String(36) UUID values in the database schema as 
primary key for many new tables that we are creating.  For example:
https://review.openstack.org/#/c/68328/10/solum/objects/sqlalchemy/application.py

Proposal: Add an int or bigint ID as the primary key, instead of UUID (the UUID 
field remains if needed), to improve database efficiency.

In my experience (I briefly pinged a DBA to verify), using a relatively long 
field as a primary key will increase resource utilization and reduce 
throughput.  This will become pronounced once the database no longer fits 
into memory, which would likely characterize any medium-to-large Solum 
installation.  This proposal would relatively painlessly improve database 
efficiency before a database schema change becomes difficult (many pull 
requests are in flight right now for schema).

In order to prevent the auto-incrementing ID from leaking usage information 
about the system, I would recommend using the integer-based ID field internally 
within Solum for efficiency and do not expose this ID field to users.  Users 
would only see UUID or non-ID values to prevent Solum metadata from leaking.

Thoughts?


I am reworking my patch now to use an auto-incrementing int for the index
and have a separate uuid.
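A minimal illustration of that layout, using SQLite purely for demonstration (table and column names are made up, not Solum's actual schema):

```python
import sqlite3
import uuid

# Integer surrogate primary key for internal joins/indexes, plus a
# 36-char UUID that is the only identifier ever exposed to users.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE application (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- internal only
        uuid CHAR(36) NOT NULL UNIQUE,           -- shown to users
        name TEXT
    )
""")
app_uuid = str(uuid.uuid4())
conn.execute("INSERT INTO application (uuid, name) VALUES (?, ?)",
             (app_uuid, "demo-app"))
row = conn.execute("SELECT id, uuid FROM application WHERE uuid = ?",
                   (app_uuid,)).fetchone()
```

Lookups by UUID still work through the unique index, while other tables reference the compact integer id.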

-Angus






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Christopher Armstrong
Heya Clint, this BP looks really good - it should significantly simplify
the implementation of scaling if this becomes a core Heat feature. Comments
below.

On Mon, Feb 3, 2014 at 2:46 PM, Thomas Herve thomas.he...@enovance.comwrote:

  So, I wrote the original rolling updates spec about a year ago, and the
  time has come to get serious about implementation. I went through it and
  basically rewrote the entire thing to reflect the knowledge I have
  gained from a year of working with Heat.
 
  Any and all comments are welcome. I intend to start implementation very
  soon, as this is an important component of the HA story for TripleO:
 
  https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates

 Hi Clint, thanks for pushing this.

 First, I don't think RollingUpdatePattern and CanaryUpdatePattern should
 be 2 different entities. The second just looks like a parametrization of
 the first (growth_factor=1?).


Agreed.



 I then feel that using (abusing?) depends_on for update pattern is a bit
 weird. Maybe I'm influenced by the CFN design, but the separate
 UpdatePolicy attribute feels better (although I would probably use a
 property). I guess my main question is around the meaning of using the
 update pattern on a server instance. I think I see what you want to do for
 the group, where child_updating would return a number, but I have no idea
 what it means for a single resource. Could you detail the operation a bit
 more in the document?



I agree that depends_on is weird and I think it should be avoided. I'm not
sure a property is the right decision, though, assuming that it's the heat
engine that's dealing with the rolling updates -- I think having the engine
reach into a resource's properties would set a strange precedent. The CFN
design does seem pretty reasonable to me, assuming an update_policy field
in a HOT resource, referring to the policy that the resource should use.


It also seems that the interface you're creating
 (child_creating/child_updating) is fairly specific to your use case. For
 autoscaling we have a need for more generic notification system, it would
 be nice to find common grounds. Maybe we can invert the relationship? Add a
 notified_resources attribute, which would call hooks on the parent when
 actions are happening.



Yeah, this would be really helpful for stuff like load balancer
notifications (and any of a number of different resource relationships).

-- 
IRC: radix
http://twitter.com/radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder + taskflow

2014-02-03 Thread John Griffith
On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Hi all,

 After talking with john g. about taskflow in cinder and seeing more and
 more reviews showing up I wanted to start a thread to gather all our
 lessons learned and how we can improve a little before continuing to add
 too many more refactorings and reviews (making sure everyone
 understands the larger goal and larger picture of switching pieces of
 cinder - piece by piece - to taskflow).

 Just to catch everyone up.

 Taskflow started integrating with cinder in havana and there has been some
 continued work around these changes:

 - https://review.openstack.org/#/c/58724/
 - https://review.openstack.org/#/c/66283/
 - https://review.openstack.org/#/c/62671/

 There has also been a few other pieces of work going in (forgive me if I
 missed any...):

 - https://review.openstack.org/#/c/64469/
 - https://review.openstack.org/#/c/69329/
 - https://review.openstack.org/#/c/64026/

 I think now would be a good time (and seems like a good idea) to create
 the discussion to learn how people are using taskflow, common patterns
 people like, don't like, common refactoring idioms that are occurring and
 most importantly to make sure that we refactor with a purpose and not just
 refactor for refactoring sake (which can be harmful if not done
 correctly). So to get a kind of forward and unified momentum behind
 further adjustments I'd just like to make sure we are all aligned and
 understood on the benefits and yes even the drawbacks that these
 refactorings bring.

 So here is my little list of benefits:

 - Objects that do just one thing (a common pattern I am seeing is
 determining what the one thing is, without making it so granular that it's
 hard to read).
 - Combining these objects together in a well-defined way (once again it
 has to be done carefully to avoid too much granularity).
 - Ability to test these tasks and flows via mocking (something that is
 harder when its not split up like this).
 - Features that aren't currently used such as state-persistence (but will
 help cinder become more crash-resistant in the future).
   - This one will itself need to be understood before doing [I started
 etherpad @ https://etherpad.openstack.org/p/cinder-taskflow-persistence
 for this].

 List of drawbacks (or potential drawbacks):

 - Having an understanding of what taskflow is doing adds a new layer of
 things to know (hopefully the docs help in this area; that was their goal).
 - Selecting too granular a task or flow makes it harder to
 follow/understand the task/flow logic.
 - Focuses on the long-term (not necessarily short-term) state-management
 concerns (can't refactor Rome in a day).
 - Taskflow is being developed at the same time cinder is.

 I'd be very interested in hearing about others experiences and to make
 sure that we discuss the changes (in a well documented and agreed on
 approach) before jumping too far into the 'deep end' with a large amount
 of refactoring (aka, refactoring with a purpose). Let's make this thread
 as useful as we can and try to see how we can unify all these refactorings
 behind a common (and documented & agreed-on) purpose.

 A thought, for the reviews above, I think it would be very useful to
 etherpad/writeup more in the blueprint what the 'refactoring with a
 purpose' is so that its more known to future readers (and for active
 reviewers), hopefully this email can start to help clarify that purpose so
 that things proceed as smoothly as possible.

 -Josh


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks for putting this together Josh, I just wanted to add a couple
of things from my own perspective.

The end-goals of taskflow (specifically persistence and better state
management) are the motivating factors for going this route.  We've
made a first step with create_volume however we haven't advanced it
enough to realize the benefits that we set out to gain by this in the
first place.  I still think it's the right direction and IMO we should
keep on the path, however there are a number of things that I've
noticed that make me lean towards refraining from moving other API
calls to taskflow right now.

1. Currently taskflow is pretty much a functional equivalent
replacement of what was in the volume manager.  We're not really
gaining that much from it (yet).

2. taskflow adds quite a bit of code and indirection that currently
IMHO adds a bit of complexity and difficulty in trouble-shooting (I
think we're fixing this up and it will continue to get better, I also
think this is normal for introduction of new implementations, no
criticism intended).

3. Our unit testing / mock infrastructure is broken right now for
items that use taskflow.  Particularly cinder.test.test_volume cannot
be run independently until we fix the taskflow fakes and mock objects.
 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of discussion today

2014-02-03 Thread Irena Berezovsky
Seems the openstack-meeting-alt is busy, let's use openstack-meeting

From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Monday, February 03, 2014 8:28 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV extra hr of 
discussion today

Hi all,
Both openstack-meeting and openstack-meeting-alt are available today. Let's 
meet at UTC 2000 @ openstack-meeting-alt.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com
Date: Monday, February 3, 2014 12:52 AM
To: Sandhya Dasu sad...@cisco.commailto:sad...@cisco.com, Robert Li 
(baoli) ba...@cisco.commailto:ba...@cisco.com, Robert Kukura 
rkuk...@redhat.commailto:rkuk...@redhat.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Sandhya,
Can you please elaborate on how you suggest extending the below BP for SRIOV 
ports managed by a different mechanism driver?
I am not biased toward any specific direction here; I just think we need a common 
layer for managing SRIOV ports in Neutron, since there is a common path between 
Nova and Neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing, SRIOVPortProfileMixin would be creating yet another way to take 
care of extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.commailto:ba...@cisco.com, Robert 
Kukura rkuk...@redhat.commailto:rkuk...@redhat.com, Sandhya Dasu 
sad...@cisco.commailto:sad...@cisco.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatement in above.

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else in your mind?
[IrenaB] I thought on some SRIOVPortProfileMixin to handle and persist SRIOV 
port related attributes

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both 

[openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-03 Thread Vishvananda Ishaya
Hello Again!

At the meeting last week we discussed some options around getting true 
multitenancy in nova. The use case that we are trying to support can be 
described as follows:

Martha, the owner of ProductionIT provides it services to multiple Enterprise 
clients. She would like to offer cloud services to Joe at WidgetMaster, and Sam 
at SuperDevShop. Joe is a Development Manager for WidgetMaster and he has 
multiple QA and Development teams with many users. Joe needs the ability create 
users, projects, and quotas, as well as the ability to list and delete 
resources across WidgetMaster. Martha needs to be able to set the quotas for 
both WidgetMaster and SuperDevShop; manage users, projects, and objects across 
the entire system; and set quotas for the client companies as a whole. She also 
needs to ensure that Joe can't see or mess with anything owned by Sam.

As per the plan I outlined in the meeting I have implemented a Proof-of-Concept 
that would allow me to see what changes were required in nova to get scoped 
tenancy working. I used a simple approach of faking out hierarchy by prepending 
the id of the larger scope to the id of the smaller scope. Keystone uses uuids 
internally, but for ease of explanation I will pretend like it is using the 
name. I think we can all agree that ‘orga.projecta’ is more readable than 
‘b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8’.

The code basically creates the following five projects:

orga
orga.projecta
orga.projectb
orgb
orgb.projecta

I then modified nova to replace everywhere where it searches or limits policy 
by project_id to do a prefix match. This means that someone using project 
‘orga’ should be able to list/delete instances in orga, orga.projecta, and 
orga.projectb.

You can find the code here:

  
https://github.com/vishvananda/devstack/commit/10f727ce39ef4275b613201ae1ec7655bd79dd5f
  
https://github.com/vishvananda/nova/commit/ae4de19560b0a3718efaffb6c205c7a3c372412f

Keep in mind that this is a prototype, but I’m hoping to come to some kind 
of consensus as to whether this is a reasonable approach. I’ve compiled a list 
of pros and cons.

Pros:

  * Very easy to understand
  * Minimal changes to nova
  * Good performance in db (prefix matching uses indexes)
  * Could be extended to cover more complex scenarios like multiple owners or 
multiple scopes

Cons:

  * Nova has no map of the hierarchy
  * Moving projects would require updates to ownership inside of nova
  * Complex scenarios involving delegation of roles may be a bad fit
  * Database upgrade to hierarchy could be tricky

If this seems like a reasonable set of tradeoffs, there are a few things that 
need to be done inside of nova to bring this to a complete solution:

  * Prefix matching needs to go into oslo.policy
  * Should the tenant_id returned by the api reflect the full ‘orga.projecta’, 
or just the child ‘projecta’ or match the scope: i.e. the first if you are 
authenticated to orga and the second if you are authenticated to the project?
  * Possible migrations for existing project_id fields
  * Use a different field for passing ownership scope instead of overloading 
project_id
  * Figure out how nested quotas should work
  * Look for other bugs relating to scoping

Also, we need to decide how keystone should construct and pass this information 
to the services. The obvious case that could be supported today would be to 
allow a single level of hierarchy using domains. For example, if domains are 
active, keystone could pass domain.project_id for ownership_scope. This could 
be controversial because potentially domains are just for grouping users and 
shouldn’t be applied to projects.

I think the real value of this approach would be to allow nested projects with 
role inheritance. When keystone is creating the token, it could walk the tree 
of parent projects, construct the set of roles, and construct the 
ownership_scope as it walks to the root of the tree.
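Assuming keystone kept a simple child-to-parent map of projects, the walk could look roughly like this (a sketch, not keystone code):

```python
def ownership_scope(project_id, parents):
    """Walk from a project to the root of the tree, building the
    dotted scope a token could carry (e.g. 'orga.projecta')."""
    chain = [project_id]
    while project_id in parents:
        project_id = parents[project_id]
        chain.append(project_id)
    return ".".join(reversed(chain))
```

Role inheritance would accumulate roles during the same walk.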

Finally, similar fixes will need to be made in the other projects to bring this 
to a complete solution.

Please feel free to respond with any input, and we will be having another 
Hierarchical Multitenancy Meeting on Friday at 1600 UTC to discuss.

Vish

On Jan 28, 2014, at 10:35 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hi Everyone,
 
 I apologize for the obtuse title, but there isn't a better succinct term to 
 describe what is needed. OpenStack has no support for multiple owners of 
 objects. This means that a variety of private cloud use cases are simply not 
 supported. Specifically, objects in the system can only be managed on the 
 tenant level or globally.
 
 The key use case here is to delegate administration rights for a group of 
 tenants to a specific user/role. There is something in Keystone called a 
 “domain” which supports part of this functionality, but without support from 
 all of the projects, this concept is pretty useless.
 
 In IRC today I had a brief discussion 

Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-03 Thread Clint Byrum
Excerpts from Thomas Herve's message of 2014-02-03 12:46:05 -0800:
  So, I wrote the original rolling updates spec about a year ago, and the
  time has come to get serious about implementation. I went through it and
  basically rewrote the entire thing to reflect the knowledge I have
  gained from a year of working with Heat.
  
  Any and all comments are welcome. I intend to start implementation very
  soon, as this is an important component of the HA story for TripleO:
  
  https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
 
 Hi Clint, thanks for pushing this.
 
 First, I don't think RollingUpdatePattern and CanaryUpdatePattern should be 2 
 different entities. The second just looks like a parametrization of the first 
 (growth_factor=1?).

Perhaps they can just be one. Until I find parameters which would need
to mean something different, I'll just use UpdatePattern.
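One way to read that parametrization: growth_factor only controls how the batch size evolves, so a canary rollout is a small first batch with growth, and growth_factor=1 degenerates to a fixed-size rolling update. A hedged sketch (parameter names assumed, not from the spec):

```python
def update_batches(total, initial_batch=1, growth_factor=2):
    """Yield batch sizes for updating `total` members: canary-style
    when growth_factor > 1, fixed-size rolling when it equals 1."""
    done = 0
    batch = initial_batch
    while done < total:
        size = min(batch, total - done)
        yield size
        done += size
        batch = max(1, int(batch * growth_factor))
```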

 
 I then feel that using (abusing?) depends_on for update pattern is a bit 
 weird. Maybe I'm influenced by the CFN design, but the separate UpdatePolicy 
 attribute feels better (although I would probably use a property). I guess my 
 main question is around the meaning of using the update pattern on a server 
 instance. I think I see what you want to do for the group, where 
 child_updating would return a number, but I have no idea what it means for a 
 single resource. Could you detail the operation a bit more in the document?
 

I would be o-k with adding another keyword. The idea in abusing depends_on
is that it changes the core language less. Properties is definitely out
for the reasons Christopher brought up; properties is really meant to
be for the resource's end target only.

UpdatePolicy in cfn is a single string, and causes very generic rolling
update behavior. I want this resource to be able to control multiple
groups as if they are one in some cases (Such as a case where a user
has migrated part of an app to a new type of server, but not all.. so
they will want to treat the entire aggregate as one rolling update).

I'm o-k with overloading it to allow resource references, but I'd like
to hear more people take issue with depends_on before I select that
course.

To answer your question, using it with a server instance allows
rolling updates across non-grouped resources. In the example the
rolling_update_dbs does this.

 It also seems that the interface you're creating 
 (child_creating/child_updating) is fairly specific to your use case. For 
 autoscaling we have a need for more generic notification system, it would be 
 nice to find common grounds. Maybe we can invert the relationship? Add a 
 notified_resources attribute, which would call hooks on the parent when 
 actions are happening.
 

I'm open to a different interface design. I don't really have a firm
grasp of the generic behavior you'd like to model though. This is quite
concrete and would be entirely hidden from template authors, though not
from resource plugin authors. Attributes sound like something where you
want the template authors to get involved in specifying, but maybe that
was just an overloaded term.

So perhaps we can replace this interface with the generic one when your
use case is more clear?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Specific job type for streaming mapreduce? (and someday pipes)

2014-02-03 Thread Trevor McKay

I was trying my best to avoid adding extra job types to support
mapreduce variants like streaming or mapreduce with pipes, but it seems
that adding the types is the simplest solution.

On the API side, Savanna can live without a specific job type by
examining the data in the job record.  Presence/absence of certain
things, or null values, etc., can provide adequate indicators of what
kind of mapreduce it is.  Maybe a little bit subtle.

But for the UI, it seems that explicit knowledge of what the job is
makes things easier and better for the user.  When a user creates a
streaming mapreduce job and the UI is aware of the type later on at job
launch, the user can be prompted to provide the right configs (i.e., the
streaming mapper and reducer values).

The explicit job type also supports validation without having to add
extra flags (which impacts the savanna client, and the JSON, etc). For
example, a streaming mapreduce job does not require any specified
libraries so the fact that it is meant to be a streaming job needs to be
known at job creation time.

So, to that end, I propose that we add a MapReduceStreaming job type,
and probably at some point we will have MapReducePiped too. It's
possible that we might have other job types in the future too as the
feature set grows.

There was an effort to make Savanna job types parallel Oozie action
types, but in this case that's just not possible without introducing a
subtype field in the job record, which leads to a database migration
script and savanna client changes.

What do you think?

Best,

Trevor



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-03 Thread Paul Michali
I'd like to see if there is interest in discussing vendor plugins for L3 
services. The goal is to strive for consistency across vendor plugins/drivers 
and across service types (if possible/sensible). Some of this could/should 
apply to reference drivers as well. I'm thinking about these topics (based on 
questions I've had on VPNaaS - feel free to add to the list):

How to handle vendor specific validation (e.g. say a vendor has restrictions or 
added capabilities compared to the reference drivers for attributes).
Providing client feedback (e.g. should help and validation be extended to 
include vendor capabilities or should it be delegated to server reporting?)
Handling and reporting of errors to the user (e.g. how to indicate to the user 
that a failure has occurred establishing an IPSec tunnel in the device driver?)
Persistence of vendor specific information (e.g. should new tables be used or 
should/can existing reference tables be extended?).
Provider selection for resources (e.g. should we allow --provider attribute on 
VPN IPSec policies to have vendor specific policies or should we rely on checks 
at connection creation for policy compatibility?)
Handling of multiple device drivers per vendor (e.g. have service driver 
determine which device driver to send RPC requests, or have agent determine 
what driver requests should go to - say based on the router type)
If you have an interest, please reply to me and include some days/times that 
would be good for you, and I'll send out a notice on the ML of the time/date 
and we can discuss.

Looking forward to hearing from you!

PCM (Paul Michali)

MAIL  p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW    @pmichali
GPG key  4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Assigning a floating IP to an internal network

2014-02-03 Thread Carl Baldwin
I have looked at the code that you posted. I am concerned that there
are db queries performed inside nested loops.  The approach looks
sound from a functional perspective but I think these loops will run
very slowly and increase pressure on the db.

I tend to think that if a router has an extra route on it then we can
take it at its word that IPs in the scope of the extra route would be
reachable from the router.  In the absence of running a dynamic
routing protocol, that is what is typically done by a router.

Maybe you could use an example to expound on your concerns that we'll
pick the wrong router.  Without a specific example in mind, I tend to
think that we should leave it up to the tenants to avoid the ambiguity
that would get us into this predicament by using mutually exclusive
subnets on their various networks, especially where there are
different routers involved.

You could use a phased approach where you first hammer out the simpler
approach and follow-on with an enhancement for the more complicated
approach.  It would allow progress to be made on the patch that you
have up and more time to think about the need for the more complex
approach.  You could mark that the first patch partially implements
the blueprint.

Carl



On Thu, Jan 30, 2014 at 6:21 AM, Ofer Barkai o...@checkpoint.com wrote:
 Hi all,

 During the implementation of:
 https://blueprints.launchpad.net/neutron/+spec/floating-ip-extra-route

 Which suggest allowing assignment of floating IP to internal address
 not directly connected to the router, if there is a route configured on
 the router to the internal address.

 In: https://review.openstack.org/55987

 There seem to be 2 possible approaches for finding an appropriate
 router for a floating IP assignment, while considering extra routes:

 1. Use the first router that has a route matching the internal address
 which is the target of the floating IP.

 2. Use the first router that has a matching route, _and_ verify that
 there exists a path of connected devices to the network object to
 which the internal address belongs.

 The first approach solves the simple case of a gateway on a compute
 host that protects an internal network (which is the motivation for
 this enhancement).
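Approach 1 can be sketched roughly as follows (the router/route data shapes here are illustrative, not the actual Neutron models):

```python
import ipaddress

def find_router(routers, internal_ip):
    """Approach 1: pick the first router having an extra route whose
    destination CIDR contains the internal address (no path check)."""
    ip = ipaddress.ip_address(internal_ip)
    for router in routers:
        for route in router.get("routes", []):
            if ip in ipaddress.ip_network(route["destination"]):
                return router["id"]
    return None

# Illustrative data only -- not the real Neutron router model.
ROUTERS = [
    {"id": "r1", "routes": [{"destination": "10.0.0.0/24"}]},
    {"id": "r2", "routes": [{"destination": "10.0.1.0/24"}]},
]
```

Approach 2 would add, after the CIDR match, a walk over connected ports down to the target network, which is where the nested DB queries come from.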

 However, if the same (or overlapping) addresses are assigned to
 different internal networks, there is a risk that the first approach
 might find the wrong router.

 Still, the second approach might force many DB lookups to trace the path from
 the router to the internal network. This overhead might not be
 desirable if the use case does not (at least, initially) appear in the
 real world.

 Patch set 6 presents the first, lightweight approach, and Patch set 5
 presents the second, more accurate approach.

 I would appreciate the opportunity to get more points of view on this subject.

 Thanks,

 -Ofer
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-03 Thread Hemanth Ravi
Hi,

I would be interested in this discussion. Below are some time slot
suggestions:

Mon: 19:00, 20:00 UTC (11:00, 12:00 PST)
Wed: 20:00, 21:00 UTC (12:00, 13:00 PST)
Thu: 19:00, 20:00, 21:00 UTC (11:00, 12:00, 13:00 PST)

Thanks,
-hemanth


On Mon, Feb 3, 2014 at 2:19 PM, Paul Michali p...@cisco.com wrote:

 I'd like to see if there is interest in discussing vendor plugins for L3
 services. The goal is to strive for consistency across vendor
 plugins/drivers and across service types (if possible/sensible). Some of
 this could/should apply to reference drivers as well. I'm thinking about
 these topics (based on questions I've had on VPNaaS - feel free to add to
 the list):


- How to handle vendor specific validation (e.g. say a vendor has
restrictions or added capabilities compared to the reference drivers for
attributes).
- Providing client feedback (e.g. should help and validation be
extended to include vendor capabilities or should it be delegated to server
reporting?)
- Handling and reporting of errors to the user (e.g. how to indicate
to the user that a failure has occurred establishing an IPSec tunnel in the
device driver?)
- Persistence of vendor specific information (e.g. should new tables
be used or should/can existing reference tables be extended?).
- Provider selection for resources (e.g. should we allow --provider
attribute on VPN IPSec policies to have vendor specific policies or should
we rely on checks at connection creation for policy compatibility?)
- Handling of multiple device drivers per vendor (e.g. have service
driver determine which device driver to send RPC requests, or have agent
determine what driver requests should go to - say based on the router type)

 If you have an interest, please reply to me and include some days/times
 that would be good for you, and I'll send out a notice on the ML of the
 time/date and we can discuss.

 Looking forward to hearing from you!

 PCM (Paul Michali)

 MAIL  p...@cisco.com
 IRC   pcm_  (irc.freenode.net)
 TW    @pmichali
 GPG key  4525ECC253E31A83
 Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Specific job type for streaming mapreduce? (and someday pipes)

2014-02-03 Thread Andrew Lazarev
I see two points:
* having Savanna types mapped to Oozie action types is intuitive for hadoop
users and this is something we would like to keep
* it is hard to distinguish different kinds of one job type

Adding a 'subtype' field will solve both problems. Having it optional will
not break backward compatibility. Adding a database migration
script is also pretty straightforward.
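As a hedged illustration of the optional-subtype idea (field names assumed), the variant becomes explicit instead of being inferred from absent or null fields:

```python
def job_kind(job):
    """Classify a job record; an optional 'subtype' distinguishes
    MapReduce variants such as streaming or pipes."""
    if job.get("type") != "MapReduce":
        return job.get("type")
    subtype = job.get("subtype")
    return "MapReduce.%s" % subtype if subtype else "MapReduce"
```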

Summarizing, my vote is on subtype field.

Thanks,
Andrew.


On Mon, Feb 3, 2014 at 2:10 PM, Trevor McKay tmc...@redhat.com wrote:


 I was trying my best to avoid adding extra job types to support
 mapreduce variants like streaming or mapreduce with pipes, but it seems
 that adding the types is the simplest solution.

 On the API side, Savanna can live without a specific job type by
 examining the data in the job record.  Presence/absence of certain
 things, or null values, etc., can provide adequate indicators of what
 kind of mapreduce it is.  Maybe a little bit subtle.

 But for the UI, it seems that explicit knowledge of what the job is
 makes things easier and better for the user.  When a user creates a
 streaming mapreduce job and the UI is aware of the type later on at job
 launch, the user can be prompted to provide the right configs (i.e., the
 streaming mapper and reducer values).

 The explicit job type also supports validation without having to add
 extra flags (which impacts the savanna client, and the JSON, etc). For
 example, a streaming mapreduce job does not require any specified
 libraries so the fact that it is meant to be a streaming job needs to be
 known at job creation time.

 So, to that end, I propose that we add a MapReduceStreaming job type,
 and probably at some point we will have MapReducePiped too. It's
 possible that we might have other job types in the future too as the
 feature set grows.

 There was an effort to make Savanna job types parallel Oozie action
 types, but in this case that's just not possible without introducing a
 subtype field in the job record, which leads to a database migration
 script and savanna client changes.

 What do you think?

 Best,

 Trevor



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ugly Hack to deal with multiple versions

2014-02-03 Thread Dean Troyer
On Mon, Feb 3, 2014 at 1:50 PM, Adam Young ayo...@redhat.com wrote:

 HACK:  In a new client, look at the URL.  If it ends with /v2.0, chop it
 off and use the substring up to that point.

 Now, at this point you are probably going:  That is ugly, is it really
 necessary?  Can't we do something more correct?


At this point I think we are stuck with hard-coding some legacy
compatibility like this for the near future.  Fortunately Identity is an
easy one to handle, Compute is going to be a #$^%! as the commonly
documented case has a version not at the end.

I've been playing with variations on this strategy and I think it is our
least bad option...
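The hack itself fits in a few lines; a rough illustration (not the actual client code) that chops a trailing /vX or /vX.Y off a catalog endpoint:

```python
import re

# Matches a version suffix like /v2.0, /v3 or /v3/ at the end of a URL.
VERSION_TAIL = re.compile(r"/v\d+(?:\.\d+)?/?$")

def unversioned_endpoint(url):
    """Strip a trailing version segment so a discovery-capable client
    can negotiate from the root instead."""
    return VERSION_TAIL.sub("", url)
```

As noted above, Compute is harder because its documented URLs carry the version in the middle of the path, where a tail regex like this does not apply.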

Can we accept that this is necessary, and vow to never let this happen
 again by removing the versions from the URLs after the current set of
 clients are deprecated?


+1

There is another hack to think about:  if public_endpoint and/or
admin_endpoint are not set in keystone.conf, all of the discovered urls use
localhost: http://localhost:8770/v2.0/.  Discovery falls over again.

I don't know how common this is but I have encountered it at least once or
twice.  Is this the only place those config values are used?  It seems like
a better default could be worked out here too;  is 'localhost' ever the
right thing to advertise in a real-world deployment?

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ugly Hack to deal with multiple versions

2014-02-03 Thread Christopher Yeoh
On Tue, Feb 4, 2014 at 6:20 AM, Adam Young ayo...@redhat.com wrote:

 We have to support old clients.
 Old clients expect that the URL that comes back for the service catalog
 has the version in it.
 Old clients don't do version negotiation.

 Thus, we need an approach to not-break old clients while we politely
 encourage the rest of the world to move to later APIs.


 I know Keystone has this problem.  I've heard that some of the other
 services do as well.  Here is what I propose.  It is ugly, but it is a
 transition plan, and can be disabled once the old clients are deprecated:

 HACK:  In a new client, look at the URL.  If it ends with /v2.0, chop it
 off and use the substring up to that point.


+1 to this. I agree it's ugly, but I think it's the least-worst solution.
Nova certainly has this problem with the url including the version suffix
in the service catalog.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Adding package to requirements.txt

2014-02-03 Thread Hemanth Ravi
Hi,

We are in the process of submitting a third party Neutron plugin that uses
urllib3 for its connection pooling feature. httplib2
doesn't provide this capability.

Is it possible to add urllib3 to requirements.txt? If this is OK, please
advise on the process to add this.

Thanks,
-hemanth
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] PXE driver deploy issues

2014-02-03 Thread Devananda van der Veen
On Fri, Jan 31, 2014 at 12:13 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 I think your driver should implement a wrapper around both VendorPassthru
 interfaces and call each appropriately, depending on the request. This
 keeps each VendorPassthru driver separate, and encapsulates the logic about
 when to call each of them in the driver layer.


I've posted an example of this here:

  https://review.openstack.org/#/c/70863/

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp proposal: configurable locked vm api

2014-02-03 Thread Jae Sang Lee
Hi, Stackers.

The deadline for icehouse is coming really quickly and I understand that there
is a lot of work to do, but I would like to get your attention about my
blueprint for configurable locked vm api.

 - https://blueprints.launchpad.net/nova/+spec/configurable-locked-vm-api

So far, developers place the decorator (@check_instance_lock) on the
function's declaration, for example:

    @wrap_check_policy
    @check_instance_lock
    @check_instance_cell
    @check_instance_state(vm_state=None, task_state=None,
                          must_have_launched=False)
    def delete(self, context, instance):
        """Terminate an instance."""
        LOG.debug(_("Going to try to terminate instance"),
                  instance=instance)
        self._delete_instance(context, instance)

So good, but when an administrator wants to change the API policy for locked
VMs, they must modify the source code and restart the service.

I suggest that the nova API check the list of APIs affected by a VM lock
using a config file like policy.json. Then it is just a config file change,
not a code change, and no service restart is needed.
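A toy sketch of the idea (names and data shapes are hypothetical): the set of lock-enforcing APIs comes from operator-editable data, so changing it is a config edit rather than a code change:

```python
# In practice this set would be loaded (and reloaded) from a JSON
# policy-style file; it is inlined here only for illustration.
LOCKED_APIS = {"delete", "stop"}

def check_instance_lock(func):
    """Reject calls on locked instances, but only for APIs that the
    operator listed in LOCKED_APIS."""
    def inner(context, instance, *args, **kwargs):
        if (func.__name__ in LOCKED_APIS and instance.get("locked")
                and not context.get("is_admin")):
            raise RuntimeError("instance is locked")
        return func(context, instance, *args, **kwargs)
    return inner

@check_instance_lock
def delete(context, instance):
    return "deleted"

@check_instance_lock
def reboot(context, instance):
    return "rebooted"
```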

Can you take a small amount of time to approve a blueprint for icehouse-3?

Thanks.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder + taskflow

2014-02-03 Thread Joshua Harlow
Thanks john for the input.

Hopefully we can help focus some of the refactoring on solving the
state-management problem very soon.

For the mocking case, is there any active work being done here?

As for the state-management and persistence, I think that the goal of both
of these will be reached and it is a good idea to focus around these
problems and I am all in to figuring out those solutions, although my
guess is that both of these will be long-term no matter what. Refactoring
cinder from what it is to what it could/can be will take time (and should
take time, to be careful and meticulous) and hopefully we can ensure that
focus is retained. Since in the end it benefits everyone :)

Let's reform around that state-management issue (which involved a
state-machine concept?). To me the current work/refactoring helps
establish task objects that can be plugged into this machine (which is
part of the problem; without task objects it's hard to create a
state-machine concept around code that is dispersed). To me that's where
the current refactoring work helps (in identifying those tasks and
adjusting code to be closer to smaller units that do a single task), later
when a state-machine concept (or something similar) comes along it will
be using these tasks (or variations of them) to automate transitions based on
given events (the flow concept that exists in taskflow is similar to this
already).

The questions I had (or can currently think of) with the state-machine
idea (versus just defined flows of tasks) are:

1. What are the events that trigger a state-machine to transition?
  - Typically some type of event causes a machine to transition to a new
state (after performing some kind of action). Who initiates that
transition.
2. What are the events that will cause this triggering? They are likely
related directly to API requests (but may not be).
3. If a state-machine ends up being created, how does it interact with
other state-machines that are also running at the same time (does it?)
  - This is a bigger question, and involves how one state-machine could be
modifying a resource, while another one could be too (this is where you want
only one state-machine to be modifying a resource at a time). This would
solve some of the races that are currently existent (while introducing the
complexity of distributed locking).
  - It is my opinion that the same problem in #3 happens when using tasks
and flows that also affect simultaneous resources; so it's not a unique
problem that is directly connected to flows. Part of this I am hoping the
tooz project[1] can help with, since last time I checked they want to help
make a nice API around distributed locking backends (among other similar
APIs).

[1] https://github.com/stackforge/tooz#tooz
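For readers new to the pattern under discussion, here is a stripped-down sketch of a linear flow of task objects with revert-on-failure; this is the shape of the idea, not taskflow's actual API:

```python
class CreateEntry:
    """Toy task: mark the entry created; revert undoes it."""
    def execute(self, ctx):
        ctx["entry"] = True
    def revert(self, ctx):
        ctx["entry"] = False

class Boom:
    """Toy task that always fails, to show reversion."""
    def execute(self, ctx):
        raise RuntimeError("boom")
    def revert(self, ctx):
        pass

def run_flow(tasks, ctx):
    """Run tasks in order; on failure revert the completed ones in
    reverse order (the linear-flow pattern)."""
    done = []
    try:
        for task in tasks:
            task.execute(ctx)
            done.append(task)
    except Exception:
        for task in reversed(done):
            task.revert(ctx)
        raise
    return ctx
```

Persisting `done` is what would let a flow resume after a crash, which ties back to the persistence goal above.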

-Original Message-
From: John Griffith john.griff...@solidfire.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Monday, February 3, 2014 at 1:16 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Cinder + taskflow

On Mon, Feb 3, 2014 at 1:53 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:
 Hi all,

 After talking with john g. about taskflow in cinder and seeing more and
 more reviews showing up I wanted to start a thread to gather all our
 lessons learned and how we can improve a little before continuing to add
 too many more refactoring and more reviews (making sure everyone is
 understands the larger goal and larger picture of switching pieces of
 cinder - piece by piece - to taskflow).

 Just to catch everyone up.

 Taskflow started integrating with cinder in havana and there has been
some
 continued work around these changes:

 - https://review.openstack.org/#/c/58724/
 - https://review.openstack.org/#/c/66283/
 - https://review.openstack.org/#/c/62671/

 There has also been a few other pieces of work going in (forgive me if I
 missed any...):

 - https://review.openstack.org/#/c/64469/
 - https://review.openstack.org/#/c/69329/
 - https://review.openstack.org/#/c/64026/

 I think now would be a good time (and seems like a good idea) to create
 the discussion to learn how people are using taskflow, common patterns
 people like, don't like, common refactoring idioms that are occurring
and
 most importantly to make sure that we refactor with a purpose and not
just
 refactor for refactoring sake (which can be harmful if not done
 correctly). So to get a kind of forward and unified momentum behind
 further adjustments I'd just like to make sure we are all aligned and
 understood on the benefits and yes even the drawbacks that these
 refactorings bring.

 So here is my little list of benefits:

 - Objects that do just one thing (a common pattern I am seeing is
 determining what the one thing is, without making it to granular that
its
 hard to read).
 - Combining these objects together in a well-defined way (once again it
 has to be carefully 

Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-03 Thread Christopher Yeoh
On Tue, Feb 4, 2014 at 4:32 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Thu, Jan 30, 2014 at 10:45 PM, Christopher Yeoh cbky...@gmail.com
 wrote:
  So with it now looking like nova-network won't go away for the forseable
  future, it looks like we'll want nova-network support in the Nova V3 API
  after all. I've created a blueprint for this work here:
 
  https://blueprints.launchpad.net/nova/+spec/v3-api-restore-nova-network
 
  And there is a first pass of what needs to be done here:
 
  https://etherpad.openstack.org/p/NovaV3APINovaNetworkExtensions

 From the etherpad:

 Some of the APIs only ever supported nova-network and not neutron,
 others supported both.
 I think as a first pass because of limited time we just port them from
 V2 as-is. Longer term I think
 we should probably remove neutron back-end functionality as we
 shouldn't be proxying, but can
 decide that later.

 While I like the idea of not proxying neutron, since we are taking the
 time to create a new API we should make it clear that this API won't
 work when neutron is being used. There have been some nova network
 commands that pretend to work even when running neutron (quotas etc).
 Perhaps this should be treated as a V3 extension since we don't expect
 all deployments to run this API.

 The user benefit of proxying neutron is an API that works for both
 nova-network and neutron. So a cloud can disable the nova-network API
 after the neutron migration instead of being forced to do so in lockstep
 with the migration. To continue supporting this perhaps we should see
 if we can get neutron to implement its own copy of nova-network v3
 API.


So I suspect that asking neutron to support the nova-network API is a bit
of a big ask, although I guess it could be done fairly independently from
the rest of the neutron code (it could, I would guess, sit on top of their
API as a translation layer).

But the much simpler solution would be just to proxy for the neutron
service only which as you say gives a better transition for user. Fully
implementing either of these would be Juno timeframe sort of thing though.

I did read a bit of the IRC log history discussion on #openstack-nova
related to this. If I understand what was being said correctly, I do want
to push back as hard as I can against further delaying the release of the
V3 API in order to design a new nova-network API for the V3 API. I think
there's always going to be something extra we could wait just one more
cycle for, and at some point (which I think is now) we have to go with
what we have.

For big API rewrites I think we can wait for V4 :-)

For the moment I'm just going ahead with doing the V2 nova-network port to
V3 because if I wait any longer for further discussion there simply won't
be enough time to get the patches submitted before the feature proposal
deadline.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Adding package to requirements.txt

2014-02-03 Thread Mark McClain
I’m interested to know why you are using urllib3 directly.  Have you considered 
using the requests module?  requests is built upon urllib3 and already a 
dependency of Neutron.

mark

On Feb 3, 2014, at 6:45 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:

 Hi,
 
 We are in the process of submitting a third party Neutron plugin that uses 
 urllib3 for the connection pooling feature available in urllib3. httplib2 
 doesn't provide this capability.
 
 Is it possible to add urllib3 to requirements.txt? If this is OK, please 
 advise on the process to add this.
 
 Thanks,
 -hemanth




[openstack-dev] [Neutron] Developer documentation - linking to slideshares?

2014-02-03 Thread Collins, Sean
Hi,

Some Neutron developers have some really great slides from some of the summits,
and I'd like to link them in the documentation I am building as part of a 
developer doc blueprint,
with proper attribution.

https://blueprints.launchpad.net/neutron/+spec/developer-documentation

I'm hoping to add Salvatore Orlando's slides on building a plugin from scratch, 
as well as
Yong Sheng Gong's deep dive slides as references in the documentation.

First - do I have permission from the previously mentioned? Second - is there 
any licensing that would make things complicated?

As I add more links, I will make sure to ask for permission on the mailing 
list. Also, if you have done a presentation and have slides that
help explain the internals of Neutron, I would love to add it as a reference.

---
Sean M. Collins


Re: [openstack-dev] [nova] bp proposal: configurable locked vm api

2014-02-03 Thread Russell Bryant
On 02/03/2014 07:25 PM, Jae Sang Lee wrote:
 Hi, Stackers.
 
 The deadline for icehouse is coming really quickly and I understand that
 there is a lot of work to do, but I would like to get your attention on my
 blueprint for a configurable locked VM API.
 
  - https://blueprints.launchpad.net/nova/+spec/configurable-locked-vm-api
 
 So far, a developer places the decorator (@check_instance_lock) on the
 function's declaration, for example:
 
     @wrap_check_policy
     @check_instance_lock
     @check_instance_cell
     @check_instance_state(vm_state=None, task_state=None,
                           must_have_launched=False)
     def delete(self, context, instance):
         """Terminate an instance."""
         LOG.debug(_("Going to try to terminate instance"),
                   instance=instance)
         self._delete_instance(context, instance)
 
 So good, but when an administrator wants to change the API policy for
 locked VMs, the admin must modify the source code and restart the service.
 
 I suggest that the Nova API check the list of APIs protected for locked
 VMs against a config file, like policy.json. The admin then just modifies
 a config file, not the code, and no service restart is needed.
 
 Can you take a small amount of time to approve a blueprint for icehouse-3?
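The config-driven check suggested above could look roughly like the
following minimal sketch. Everything here is illustrative, not actual Nova
code: the locked-API set (which would be read from a policy-style config
file), the exception class, and the example API class are all invented.

```python
import functools

# Hypothetical: this set would be loaded from a config file like
# policy.json rather than hard-coded, so changing it needs no restart.
LOCKED_VM_APIS = {"delete", "stop", "reboot"}

class InstanceIsLocked(Exception):
    pass

def check_instance_lock(func):
    """Reject the call only when this API is in the configured locked list."""
    @functools.wraps(func)
    def wrapper(self, context, instance, *args, **kwargs):
        if func.__name__ in LOCKED_VM_APIS and instance.get("locked"):
            raise InstanceIsLocked(func.__name__)
        return func(self, context, instance, *args, **kwargs)
    return wrapper

class API(object):
    @check_instance_lock
    def delete(self, context, instance):
        return "deleted"

    @check_instance_lock
    def rename(self, context, instance):
        # "rename" is not in LOCKED_VM_APIS, so it succeeds even when locked.
        return "renamed"
```

With this shape, removing "delete" from the configured list changes the
lock behavior without touching the decorated code.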

I'm concerned about this idea from an interop perspective.  It means
that lock will not mean the same thing from one cloud to another.
That seems like something we should avoid.

One thing that might work is to do this from the API side.  We could
allow the caller of the API to list which operations are locked.  The
default behavior would be the current behavior of locking all
operations.  That gives some flexibility and keeps the API call working
the same way across clouds.
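A minimal sketch of that API-side idea, where the lock request itself names
the operations to lock and omitting the list keeps today's lock-everything
behavior. All names here are illustrative, not real Nova API fields.

```python
# Operations an instance lock can cover (illustrative list).
ALL_OPERATIONS = frozenset({"delete", "stop", "reboot", "resize"})

def lock_instance(instance, operations=None):
    """Lock every operation by default, or only those the caller lists."""
    ops = frozenset(operations) if operations is not None else ALL_OPERATIONS
    unknown = ops - ALL_OPERATIONS
    if unknown:
        raise ValueError("unknown operations: %s" % sorted(unknown))
    instance["locked_operations"] = ops
    return instance

def is_operation_locked(instance, operation):
    # An unlocked instance simply has no locked_operations entry.
    return operation in instance.get("locked_operations", frozenset())
```

The default path behaves exactly like the current all-or-nothing lock, so
existing callers see no change across clouds.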

-- 
Russell Bryant



Re: [openstack-dev] [Neutron] Adding package to requirements.txt

2014-02-03 Thread Hemanth Ravi
Mark,

We started the plugin development on grizzly initially, and the grizzly
distribution included httplib2. We used urllib3 for the HTTPConnectionPool
object and overlooked the requests module included in master when we
migrated. I'll take a look at using requests for the same support.
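For reference, requests exposes urllib3's connection pooling through a
Session plus HTTPAdapter; a rough sketch follows (the pool sizes and the
backend URL are illustrative, not anything from the plugin):

```python
import requests
from requests.adapters import HTTPAdapter

# A Session reuses TCP connections across calls; HTTPAdapter exposes
# urllib3's pool settings (it builds an HTTPConnectionPool underneath).
session = requests.Session()
adapter = HTTPAdapter(pool_connections=10,  # distinct host pools to cache
                      pool_maxsize=20)      # connections kept per host
session.mount("http://", adapter)
session.mount("https://", adapter)

# A pooled call against the plugin's backend (URL is hypothetical):
# resp = session.get("http://backend:8080/v1/ports", timeout=5)
```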

Thanks,
-hemanth


On Mon, Feb 3, 2014 at 6:08 PM, Mark McClain mmccl...@yahoo-inc.com wrote:

 I'm interested to know why you are using urllib3 directly.  Have you
 considered using the requests module?  requests is built upon urllib3 and
 already a dependency of Neutron.

 mark

 On Feb 3, 2014, at 6:45 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:

  Hi,
 
  We are in the process of submitting a third party Neutron plugin that
 uses urllib3 for the connection pooling feature available in urllib3.
 httplib2 doesn't provide this capability.
 
  Is it possible to add urllib3 to requirements.txt? If this is OK, please
 advise on the process to add this.
 
  Thanks,
  -hemanth





[openstack-dev] [Neutron][IPv6] Agenda for Feb 4 - 1400 UTC - in #openstack-meeting

2014-02-03 Thread Collins, Sean
Hi,

I've posted a preliminary agenda for the upcoming IPv6 meeting. See everyone 
soon!

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_Feb_4th

---
Sean M. Collins


[openstack-dev] [Murano] Community meeting agenda - 02/04/2014

2014-02-03 Thread Alexander Tivelkov
Hi,

This is just a reminder that we are going to have a weekly meeting of
Murano team in IRC (#openstack-meeting-alt) on Feb, 4 at 17:00 UTC (9am
PST) .

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda#Agenda

Feel free to add anything you want to discuss.

--
Regards,
Alexander Tivelkov


Re: [openstack-dev] Supporting WebOb 1.3

2014-02-03 Thread Thomas Goirand
On 02/03/2014 11:31 PM, Doug Hellmann wrote:
 
 
 
  On Mon, Feb 3, 2014 at 7:31 AM, Monty Taylor mord...@inaugust.com wrote:
 
 On 02/03/2014 11:45 AM, Thomas Goirand wrote:
 
 Hi,
 
 Sorry if this has been already discussed (traffic is high in
 this list).
 
 I've just checked, and our global-requirements.txt still has:
  WebOb>=1.2.3,<1.3
 
 Problem: both Sid and Trusty have version 1.3.
 
  What package is holding back the newer version of WebOb? Can we work
  toward supporting version 1.3? I haven't seen issues building (most if
  not all) core packages using WebOb 1.3. Did I miss the obvious?
 
 
 It might be worth trying to patch global-requirements.txt and submit
 it as a change - see what falls out of the gate. That might give you
 some insight into where the pin might be needed. It might also just
 be historical protection.
 
 
 Yeah, according to 4f81f419a1430b5e44feb299ce061f592064a7dd we were just
 playing it safe the last time we updated.
 
 Doug

The result is that it does work! :)

Please approve:
https://review.openstack.org/#/c/70741/

Thomas
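As an aside, the effect of the old pin versus a relaxed one can be checked
mechanically with the third-party packaging library; the <1.4 cap below is
just an example of a relaxed pin, not necessarily what the review proposes.

```python
from packaging.specifiers import SpecifierSet

old_pin = SpecifierSet(">=1.2.3,<1.3")  # the global-requirements pin above
new_pin = SpecifierSet(">=1.2.3,<1.4")  # example of a relaxed pin

# The WebOb 1.3 shipped by Sid/Trusty is excluded by the old pin
# but admitted by the relaxed one.
print("1.3" in old_pin)  # -> False
print("1.3" in new_pin)  # -> True
```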




Re: [openstack-dev] [nova][oslo] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-03 Thread Matt Riedemann



On 1/13/2014 10:49 AM, Sahid Ferdjaoui wrote:

Hello all,

It looks like 100% of the pep8 gate for nova is failing because of a
reported bug; we probably need to mark this as Critical.

https://bugs.launchpad.net/nova/+bug/1268614

Ivan Melnikov has pushed a patchset waiting for review:
https://review.openstack.org/#/c/66346/

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogXFwnL2hvbWUvamVua2lucy93b3Jrc3BhY2UvZ2F0ZS1ub3ZhLXBlcDgvdG9vbHMvY29uZmlnL2NoZWNrX3VwdG9kYXRlLnNoXFwnXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjQzMjAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4OTYzMTQzMzQ4OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


s.




This broke us (nova) again today after python-keystoneclient-0.5.0 was 
released with a new config option. Joe Gordon pushed the patch to fix 
nova [1] so everyone will need to recheck their patches again once that 
merges.


This is going to be a continuing problem when external libs that nova 
pulls config options from get released, which now also includes 
oslo.messaging.


Ben Nemec floated some ideas in the previous bug [2]. I'll restate them 
here for discussion:


1) Set up a Jenkins job that triggers on keystoneclient releases to 
check whether it changed their config options and automatically propose 
an update to the other projects. I expect this could work like the 
requirements sync job.


2) Move the keystoneclient config back to a separate file and don't 
automatically generate it. This will likely result in it getting out of 
date again though. I assume that's why we started including 
keystoneclient directly in the generated config.


Joe also had an idea that we keep/generate a vanilla nova.conf.sample 
that only includes options from the nova tree itself which the 
check_uptodate script can check against, not the one generated under 
etc/nova/ which has the external lib options in it.  Then we can still 
get the generated nova.conf.sample that gets packaged by setup.cfg with 
the external lib options but not gate on it when those external packages 
are updated. (Joe, please correct my summary of your idea if it's wrong).


I was also thinking of something similar that could maybe just be done 
in memory: the check tool keeps track of the external config options and, 
when validating the generated nova.conf.sample, ignores any 'failures' 
that are in the list of external options.
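In spirit, that in-memory filtering could look something like the sketch
below. The prefix list and function names are invented for illustration;
the real check would derive the external options from the libraries
themselves.

```python
# Hypothetical: option groups owned by external libraries, which the
# gate should not fail on when those libraries are updated.
EXTERNAL_PREFIXES = ("keystone_authtoken.", "oslo_messaging.")

def find_stale_options(sample_options, tree_options):
    """Report options that differ, skipping externally-owned ones."""
    stale = []
    for name in sorted(set(sample_options) | set(tree_options)):
        if name.startswith(EXTERNAL_PREFIXES):
            continue  # owned by an external lib; don't gate on it
        if sample_options.get(name) != tree_options.get(name):
            stale.append(name)  # in-tree option out of date in the sample
    return stale
```

The gate would then only fail when `find_stale_options` reports something,
i.e. only for drift in options defined in the nova tree itself.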


Anyway, no matter how we fix it, we need to fix it, so let's weigh the 
pros and cons of the various options, since this is worse than a race 
condition that breaks the gate: it simply breaks and blocks everything 
until it is fixed.


[1] https://review.openstack.org/#/c/70891/
[2] https://bugs.launchpad.net/nova/+bug/1268614/comments/15

--

Thanks,

Matt Riedemann




[openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/4

2014-02-03 Thread Dugger, Donald D
1) Memcached based scheduler updates
2) Scheduler code forklift
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] [Nova] Putting nova-network support into the V3 API

2014-02-03 Thread Robert Collins
On 4 February 2014 14:33, Joe Gordon joe.gord...@gmail.com wrote:

 John and I discussed a third possibility:

 nova-network v3 should be an extension, so the idea was to: Make
 nova-network API a subset of neutron (instead of them adopting our API
 we adopt theirs). And we could release v3 without nova network in
 Icehouse and add the nova-network extension in Juno.

+1, also off by default perhaps ;)

-rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] The simplified blueprint for PCI extra attributes

2014-02-03 Thread Jiang, Yunhong
Hi, John and all,
I updated the blueprint 
https://blueprints.launchpad.net/nova/+spec/pci-extra-info-icehouse according 
to your feedback, to add the backward compatibility/upgrade issues and examples.

I tried to separate this BP from the SR-IOV NIC support as a standalone 
enhancement, because this requirement is more of a generic PCI passthrough 
feature and will benefit other usage scenarios as well.

And the reasons that I want to finish this BP in I release are:

a) it's a generic requirement, and pushing it into the I release is helpful 
to other scenarios.
b) I don't see an upgrade issue, and the only thing that will be discarded 
in the future is the PCI alias, if we all agree to use PCI flavors. But that 
effort will be small, and there is no conclusion on PCI flavors yet.
c) SR-IOV NIC support is complex; it will be really helpful if we can 
keep the ball rolling and push the agreed-upon items forward.
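As an illustration of the kind of extra attributes in question, a whitelist
entry might carry arbitrary key/value tags alongside the vendor/product IDs.
The option values and the physical_network attribute below are hypothetical
examples; the exact syntax is what the blueprint defines.

```ini
# nova.conf -- illustrative sketch only, not the blueprint's final syntax
pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ca",
                             "physical_network": "physnet1"}
pci_alias = {"vendor_id": "8086", "product_id": "10ca", "name": "nic-fast"}
```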

Considering the big patch list for the I-3 release, I'm not optimistic 
about merging this in the I release, but, as said, we should keep the ball 
rolling and move forward.

Thanks
--jyh
