Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Matthias Runge
On 12/18/2013 10:33 PM, Gabriel Hurley wrote:

 
 Adding developers to Horizon Core just for the purpose of reviewing 
 an incubated umbrella project is not the right way to do things at 
 all.  If my proposal of two separate groups having the +2 power in 
 Gerrit isn't technically feasible then a new group should be created 
 for management of umbrella projects.

Yes, I totally agree.

Having two separate projects with separate cores should be possible
under the umbrella of a program.

Tuskar differs somewhat from other projects included in Horizon, because
those projects each contributed a view on their specific feature. Tuskar
provides an additional dashboard and talks to several APIs underneath;
it is something like a separate dashboard being merged in.

With both under the Horizon program umbrella, my concern is that the two
projects wouldn't be coupled as tightly as I would like.

In particular, I'd love to see an automatic merge of Horizon commits into a
(combined) Tuskar and Horizon repository, thus making sure Tuskar keeps
working in a fresh (updated) Horizon environment.

Matthias



Re: [openstack-dev] Diversity as a requirement for incubation

2013-12-19 Thread Sylvain Bauza

On 18/12/2013 16:37, Steven Dake wrote:


In the early days of incubation requests, I got the distinct 
impression managers at companies believed that actually getting a 
project incubated in OpenStack was not possible, even though it was 
sparsely documented as an option.  Maybe things are different now that 
a few projects have actually run the gauntlet of incubation and proven 
that it can be done ;)   (see ceilometer, heat as early examples).


But I can tell you one thing for certain: an actual incubation 
commitment from the OpenStack Technical Committee has a huge impact - 
it says "Yes, we think this project has great potential for improving 
OpenStack's scope in a helpful, useful way, and we plan to support the 
program to make it happen." Without that commitment, managers at 
companies have a harder time justifying R&D expenses.


That is why I am not a big fan of approach #3 - companies are unlikely 
to commit without a commitment from the TC first ;-) (see chicken/egg 
in your original argument ;)


We shouldn't be afraid of a project failing to graduate to 
Integrated.  Even though it hasn't happened yet, it will undoubtedly 
happen at some point in the future.  We have a way for projects to 
leave incubation if they fail to become a strong emergent system, as 
described in option #2.


Regards
-steve



Thanks Steven for the story. Based on your comments, I would change my 
vote to #2 as it sounds like the most practical approach.


-Sylvain




Re: [openstack-dev] [Neutron][lbaas] plugin driver

2013-12-19 Thread Eugene Nikanorov
Hi Subrahmanyam,

The patch was originally implemented by Oleg Bondarev :)
 plugin-driver appears to be specific to driver provider
That's correct
 How does this work with single common agent and many providers?
The idea is that the plugin driver is server-side logic which defines how
neutron-server interacts with the device implementing the load balancer. For
example, the haproxy plugin driver does this by communicating with the
lbaas-agent via RPC. The lbaas-agent in turn has a device driver that manages
the haproxy processes on the host.
There could be other device drivers.
Using the common agent is not required for a plugin driver.
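
To make the split concrete, here is a minimal, self-contained Python
sketch of that division of labour. The class and method names are
illustrative only - they are not the actual Neutron LBaaS interfaces -
and the in-process callable stands in for what would be an
oslo.messaging RPC cast in real code:

    class HaproxyPluginDriver(object):
        """Server-side half: defines HOW neutron-server talks to the
        device; for haproxy, by casting RPC messages to the lbaas-agent
        that owns the haproxy processes."""

        def __init__(self, rpc_cast):
            self._cast = rpc_cast

        def create_pool(self, pool):
            # Delegate to whichever agent hosts this pool.
            self._cast('create_pool', pool=pool)

    class HaproxyDeviceDriver(object):
        """Agent-side half: manages the actual haproxy processes."""

        def create_pool(self, pool):
            print('would write haproxy.cfg and reload haproxy for %s'
                  % pool['id'])

    # Wire the two halves together in-process to show the flow; in
    # reality they live in neutron-server and the lbaas-agent.
    device = HaproxyDeviceDriver()
    plugin = HaproxyPluginDriver(
        lambda method, **kw: getattr(device, method)(**kw))
    plugin.create_pool({'id': 'pool-1'})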

 Who owns the plugin driver?
Usually it's the vendor's responsibility to maintain their drivers.

Thanks,
Eugene.


On Thu, Dec 19, 2013 at 3:30 AM, Subrahmanyam Ongole 
song...@oneconvergence.com wrote:


 I was going through the latest common agent patch from Eugene.
 plugin-driver appears to be specific to driver provider, for example
 haproxy has a plugin driver (in addition to agent driver). How does this
 work with single common agent and many providers? Would they need to use a
 separate topic?

 Who owns the plugin driver? Is it appliance driver provider (such as
 f5/radware for example) or Openstack cloud service provider (such as
 Rackspace for example?)

 --

 Thanks
 Subra
 (Subrahmanyam Ongole)





Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Yair Fried
Hi guys,
I ran into this issue trying to incorporate this test into the
cross_tenant_connectivity scenario, which launches 2 VMs in different tenants.
What I saw is that in the gate it fails half the time (the original test
passes without issues) and ONLY on the 2nd VM (the first FLIP propagates fine).
https://bugs.launchpad.net/nova/+bug/1262529 

I don't see this in: 
1. my local RHOS-Havana setup 
2. the cross_tenant_connectivity scenario without the control point (test 
passes without issues) 
3. test_network_basic_ops runs in the gate 

So here's my somewhat less experienced opinion:
1. this happens due to stress (more than a single FLIP/VM)
2. (as Brent said) timeout intervals between polling are too short
3. the FLIP is usually reachable long before it is seen in the nova DB (also
from manual experience), so blocking the test until it reaches the nova DB
doesn't make sense to me. If we could do this in a different thread, then
maybe, but using a pass/fail criterion to test for a timing issue seems wrong.
Especially since, as I understand it, the issue is not IF it reaches the nova
DB, only WHEN.

I would like to, at least, move this check from its place as a blocker to later 
in the test. Before this is done, I would like to know if anyone else has seen 
the same problems Brent describes prior to this patch being merged. 

Regarding Jay's scenario suggestion, I think this should not be a part of 
network_basic_ops, but rather a separate stress scenario creating multiple VMs 
and testing for FLIP associations and propagation time. 

Regards Yair 
(Also added my comments inline) 

- Original Message -

From: Jay Pipes jaypi...@gmail.com 
To: openstack-dev@lists.openstack.org 
Sent: Thursday, December 19, 2013 5:54:29 AM 
Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
FloatingIPChecker control point 

On 12/18/2013 10:21 PM, Brent Eagles wrote: 
 Hi, 
 
 Yair and I were discussing a change that I initiated and was 
 incorporated into the test_network_basic_ops test. It was intended as a 
 configuration control point for floating IP address assignments before 
 actually testing connectivity. The question we were discussing was 
 whether this check was a valid pass/fail criteria for tests like 
 test_network_basic_ops. 
 
 The initial motivation for the change was that test_network_basic_ops 
 had a less than 50/50 chance of passing in my local environment for 
 whatever reason. After looking at the test, it seemed ridiculous that it 
 should be failing. The problem is that more often than not the data that 
 was available in the logs all pointed to it being set up correctly but 
 the ping test for connectivity was timing out. From the logs it wasn't 
 clear whether the test was failing because neutron did not do the right 
 thing, did not do it fast enough, or something else was happening. Of 
 course if I paused the test for a short bit between setup and the checks 
 to manually verify everything the checks always passed. So it's a timing 
 issue right? 
 

Did anyone else see/experience this issue, locally or in the gate?

 Two things: adding more timeout to a check is as appealing to me as 
 gargling glass AND I was less annoyed that the test was failing as I 
 was that it wasn't clear from reading logs what had gone wrong. I tried 
 to find an additional intermediate control point that would split 
 failure modes into two categories: neutron is too slow in setting things 
 up and neutron failed to set things up correctly. Granted it still is 
 adding timeout to the test, but if I could find a control point based on 
 settling so that if it passed, then there is a good chance that if the 
 next check failed it was because neutron actually screwed up what it was 
 trying to do. 
 
 Waiting until the query on the nova for the floating IP information 
 seemed a relatively reasonable, if imperfect, settling criteria before 
 attempting to connect to the VM. Testing to see if the floating IP 
 assignment gets to the nova instance details is a valid test and, 
 AFAICT, missing from the current tests. However, Yair has the reasonable 
 point that connectivity is often available long before the floating IP 
 appears in the nova results and that it could be considered invalid to 
 use non-network specific criteria as pass/fail for this test. 

But, Tempest is all about functional integration testing. Using a call 
to Nova's server details to determine whether a dependent call to 
Neutron succeeded (setting up the floating IP) is exactly what I think 
Tempest is all about. It's validating that the integration between Nova 
and Neutron is working as expected. 

So, I actually think the assertion on the floating IP address appearing 
(after some timeout/timeout-backoff) is entirely appropriate. 

Blocking the connectivity check until the nova DB is updated doesn't make sense
to me, since we know the FLIP is reachable well before the nova DB is updated
(this is seen also in manual mode, not just by

Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Thomas Herve


 I would like to nominate Bartosz Górski to be a heat-core reviewer. His
 reviews to date have been valuable and his other contributions to the
 project have shown a sound understanding of how heat works.
 
 Here is his review history:
 https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z
 
 If you are heat-core please reply with your vote.


+1!

-- 
Thomas



Re: [openstack-dev] [neutron][policy] Policy-Rules discussions based on Dec.12 network policy meeting

2013-12-19 Thread Prasad Vellanki
On Dec 17, 2013 3:22 PM, Tim Hinrichs thinri...@vmware.com wrote:



 - Original Message -
 | From: Prasad Vellanki prasad.vella...@oneconvergence.com
 | To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
 | Sent: Monday, December 16, 2013 2:11:37 PM
 | Subject: Re: [openstack-dev] [neutron][policy] Policy-Rules discussions
based on Dec.12 network policy meeting
 |
 |
 |
 | Hi
 | Please see inline 
 |
 |
 |
 | On Sun, Dec 15, 2013 at 8:49 AM, Stephen Wong  s3w...@midokura.com 
 | wrote:
 |
 |
 | Hi,
 |
 | During Thursday's group-policy meeting[1], there are several
 | policy-rules related issues which we agreed should be posted on the
 | mailing list to gather community comments / consensus. They are:
 |
 | (1) Conflict resolution between policy-rules
 | --- a priority field was added to the policy-rules attributes
 | list[2]. Is this enough to resolve conflict across policy-rules (or
 | even across policies)? Please state cases where a cross policy-rules
 | conflict can occur.
 | --- conflict resolution was a major discussion point during
 | Thursday's meeting - and there was even suggestion on setting
 | priority
 | on endpoint groups; but I would like to have this email thread
 | focused
 | on conflict resolution across policy-rules in a single policy first.
 |

 There was interest in having a single policy that could include different
actions so that a single flow might be both redirected and QOSed
simultaneously.  For me this rules out a total ordering on the policy
statements.  Here's a proposal that relies on the fact that we're fixing
the meaning of actions within the language: the language specifies a
partial order on the *actions*.  For example, DENY takes precedence over
ALLOW, so if we both ALLOW and DENY, then the conflict resolution dictates
DENY wins. But {DENY, ALLOW} and QOS and REDIRECT are all unrelated, so
there is no problem with a policy that both DENYs and QOSes and REDIRECTs.
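
As a toy sketch of that action-level partial order (the action names come
from this thread; the resolution logic below is only an illustration, not
an agreed design):

    # DENY takes precedence over ALLOW; {DENY, ALLOW}, QOS and REDIRECT
    # are mutually unrelated, so they can coexist on the same flow.
    DOMINATES = {('DENY', 'ALLOW')}

    def resolve(requested):
        """Keep every action not dominated by another requested action."""
        return set(a for a in requested
                   if not any((b, a) in DOMINATES for b in requested))

    print(sorted(resolve({'ALLOW', 'DENY', 'QOS', 'REDIRECT'})))
    # ['DENY', 'QOS', 'REDIRECT']: DENY wins over ALLOW, the rest coexist.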

 | (2) Default policy-rule actions
 | --- there seems to be consensus from the community that we need to
 | establish some basic set of policy-rule actions upon which all
 | plugins/drivers would have to support
 | --- just to get the discussion going, I am proposing:
 |
 |
 |
 | Or should this be a query to the plugin for supported actions, so that
 | the user knows what functionality the plugin can support. Hence
 | there is no default supported list.
 |

 I think the important part is that the language defines what the actions
mean.  Whether each plugin supports them all is a different issue.  If the
language doesn't define the meaning of the actions, there's no way for
anyone to use the language.  We might be able to write down policies, but
we don't know what those policies actually mean because 2 plugins might
assign very different meanings to the same action name.


I agree that it is very important to define what actions mean.


As for supported actions, it is probably best to simplify this for the POC by
restricting it to a small set of actions. One can always add this call later. My
point was that the UI becomes cleaner and clearer for the user if you have the call.


 |
 |
 | a.) action_type: 'security' action: 'allow' | 'drop'
 | b.) action_type: 'qos' action: {'qos_class': 'critical' |
 |     'low-priority' | 'high-priority' | 'low-immediate' |
 |     'high-immediate' | 'expedite-forwarding'}
 |     (a subset of DSCP values - hopefully in language that can
 |     be well understood by those performing application deployments)
 | c.) action_type: 'redirect' action: {UUID, [UUID]...}
 |     (a list of Neutron objects to redirect to, and the list
 |     should contain at least one element)
 |
 |
 |
 |
 | I am not sure making the UUIDs a list of neutron objects or endpoints
 | will work well. It seems that it should be at a higher level, such as a
 | list of services that form a chain. Let's say one forms a chain of
 | services: firewall, IPS, LB. It would be tough to expect the user to
 | derive the neutron ports and create a chain of them. It could be a VM
 | UUID.
 |
 |

 Perhaps we could use our usual group mechanism here and say that the
redirect action operates on 3 groups: source, destination, and the group to
which we want to redirect.


 |
 | Please discuss. In the document, there is also 'rate-limit' and
 | 'policing' for 'qos' type, but those can be optional instead of
 | required for now
 |

 It would be nice if we had some rationale for deciding which actions to
include and which to leave out.  Maybe if we found a
standard/spec/collection-of-use-cases and included exactly the same
actions.  Or if we go with the action-based conflict resolution scheme from
(1), we might want to think about whether we have at least complementary
actions (e.g. ALLOW and DENY, WAYPOINT -- route traffic through a group of
middleboxes-- and FORBID -- prohibit traffic from passing through
middleboxes).

 | (3) Prasad asked for clarification on 'redirect' action, I propose to
 | add the following text to 

[openstack-dev] [Ironic] Yuriy Zveryanskyy for ironic-core?

2013-12-19 Thread Robert Collins
Yuriy seems to have been doing reviews consistently over the last
three months, and is catching plenty of issues.

He isn't catching everything, but I think he catches approximately as
much as other cores - none of us catch everything.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron][policy] Policy-Rules discussions based on Dec.12 network policy meeting

2013-12-19 Thread Prasad Vellanki
On Tue, Dec 17, 2013 at 7:34 PM, Stephen Wong s3w...@midokura.com wrote:

 Hi Prasad,

 Thanks for the comments, please see responses inline.

 On Mon, Dec 16, 2013 at 2:11 PM, Prasad Vellanki
 prasad.vella...@oneconvergence.com wrote:
  Hi
  Please see inline 
 
 
  On Sun, Dec 15, 2013 at 8:49 AM, Stephen Wong s3w...@midokura.com
 wrote:
 
  Hi,
 
  During Thursday's  group-policy meeting[1], there are several
  policy-rules related issues which we agreed should be posted on the
  mailing list to gather community comments / consensus. They are:
 
  (1) Conflict resolution between policy-rules
  --- a priority field was added to the policy-rules attributes
  list[2]. Is this enough to resolve conflict across policy-rules (or
  even across policies)? Please state cases where a cross policy-rules
  conflict can occur.
  --- conflict resolution was a major discussion point during
  Thursday's meeting - and there was even suggestion on setting priority
  on endpoint groups; but I would like to have this email thread focused
  on conflict resolution across policy-rules in a single policy first.
 
  (2) Default policy-rule actions
  --- there seems to be consensus from the community that we need to
  establish some basic set of policy-rule actions upon which all
  plugins/drivers would have to support
  --- just to get the discussion going, I am proposing:
 
 
  Or should this be a query to the plugin for supported actions, so that the
  user knows what functionality the plugin can support.  Hence there is no
  default supported list.

  I think what we want is a set of must-have actions which
  applications can utilize by default while using the group-policy APIs.
  Without this, applications would need to perform many run-time checks
  and have unpredictable behavior across different deployments.

 As for querying for a capability list - I am not against having
 such API, but what is the common use case? Having a script querying
 for the supported action list and generate policies based on that?
 Should we expect policy definition to be so dynamic?


I agree that we should simplify this for the POC.

The use case is that in the UI the user should know what actions are valid. The
user should not have to wait for an error to figure out whether an action is
valid. But if we put in a well-defined mandatory set, this is not an issue.



 
   a.) action_type: 'security' action: 'allow' | 'drop'
   b.) action_type: 'qos' action: {'qos_class': 'critical' |
       'low-priority' | 'high-priority' | 'low-immediate' |
       'high-immediate' | 'expedite-forwarding'}
       (a subset of DSCP values - hopefully in language that can
       be well understood by those performing application deployments)
   c.) action_type: 'redirect' action: {UUID, [UUID]...}
       (a list of Neutron objects to redirect to, and the list
       should contain at least one element)
 
 
   I am not sure making the UUIDs a list of neutron objects or endpoints
   will work well. It seems that it should be at a higher level, such as a
   list of services that form a chain. Let's say one forms a chain of
   services: firewall, IPS, LB. It would be tough to expect the user to
   derive the neutron ports and create a chain of them. It could be a VM UUID.

 A service chain is a Neutron object with a UUID:


 https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#

 so this is not defined by the group-policy subgroup, but by a
 different project. We expect the operator / tenant to define a service
 chain for the users, and users simply pick that as one of the
 redirect action objects to send traffic to.


 
  Please discuss. In the document, there is also 'rate-limit' and
  'policing' for 'qos' type, but those can be optional instead of
  required for now
 
  (3) Prasad asked for clarification on 'redirect' action, I propose to
  add the following text to document regarding 'redirect' action:
 
  'redirect' action is used to mirror traffic to other destinations
  - destination can be another endpoint group, a service chain, a port,
  or a network. Note that 'redirect' action type can be used with other
  forwarding related action type such as 'security'; therefore, it is
  entirely possible that one can specify {'security':'deny'} and still
  do {'redirect': {'uuid-1', 'uuid-2', ...}}. Note that the destination
  specified on the list CANNOT be the endpoint-group who provides this
  policy. Also, in case of destination being another endpoint-group, the
  policy of this new destination endpoint-group will still be applied
 
 
  As I said above, one needs clarity on what these UUIDs mean. Also, do we
  need a call to manage the ordered list around adding, deleting, and listing
  the elements in the list?
  One other issue that comes up is whether the classifier holds up along the
  chain. The classifier that goes into the chain might not be the same on the
  reverse path.

 The 

[openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Xuhan Peng
I am reading through the blueprint created by Randy to bind dnsmasq into
qrouter- namespace:

https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace

I don't think I can follow the reason why we need to change the namespace
which contains the dnsmasq process and the device it listens on from qdhcp- to
qrouter-. Why does the original namespace design conflict with the Router
Advertisements sent from dnsmasq for SLAAC?

From the attached POC result link, the reason is stated as:

"Even if the dnsmasq process could send Router Advertisements, the default
gateway would bind to its own link-local address in the qdhcp- namespace.
As a result, traffic leaving the tenant network will be drawn to the DHCP
interface, instead of the gateway port on the router. That is not desirable!"

Can Randy or Shixiong explain this more? Thanks!

Xuhan


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Steven Hardy
On Thu, Dec 19, 2013 at 03:21:46PM +1300, Steve Baker wrote:
 I would like to nominate Bartosz Górski to be a heat-core reviewer. His
 reviews to date have been valuable and his other contributions to the
 project have shown a sound understanding of how heat works.
 
 Here is his review history:
 https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z
 
 If you are heat-core please reply with your vote.

+1, great work Bartosz!



[openstack-dev] meeting times and rotations etc...

2013-12-19 Thread Robert Collins
So, I'm a little worried about the complexities of organising free
slots given we're basically about to double the # of entries we have in
all our calendars.

Maybe we can do something a little simpler: just have the *whole
calendar* shift phase 180° each week: it won't be perfect,
particularly for those projects that currently have a majority of
members meeting in the middle of their day (12 midday - 12 midnight),
but if there's any decent spread already meeting, there will be a
decent spread for the alternate week - and an important thing for
inclusion is to not be doing votes etc. in meetings *anyway*, so I think
it's ok for the PTL (for instance) to not be at every meeting.

Thoughts?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] meeting times and rotations etc...

2013-12-19 Thread John Dickinson
Another option would be to use what we already have to our benefit. Instead of 
trying to provision two meeting rooms (-meeting and -meeting-alt), use the 
various other IRC channels that we already have for team meetings. This would 
allow for meetings to be at the same time, but it would free up more time slots 
to be scheduled, and those time slots can be scheduled more precisely to fit 
the schedules of those attending.

So what about cross-team concerns? We have the weekly meeting, and if that 
isn't sufficient, then the -meeting and -meeting-alt channels can be scheduled 
for cross-team needs.

--John




On Dec 19, 2013, at 1:20 AM, Robert Collins robe...@robertcollins.net wrote:

 So, I'm a little worried about the complexities of organising free
 slots given we're basically about to double the # of entries we have in
 all our calendars.
 
 Maybe we can do something a little simpler: just have the *whole
 calendar* shift phase 180° each week: it won't be perfect,
 particularly for those projects that currently have a majority of
 members meeting in the middle of their day (12 midday - 12 midnight),
 but if there's any decent spread already meeting, there will be a
 decent spread for the alternate week - and an important thing for
 inclusion is to not be doing votes etc. in meetings *anyway*, so I think
 it's ok for the PTL (for instance) to not be at every meeting.
 
 Thoughts?
 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 





Re: [openstack-dev] [tuskar] How to install tuskar-ui from packaging point of view

2013-12-19 Thread Radomir Dopieralski
On 16/12/13 04:47, Thomas Goirand wrote:

[snip]

 As for tuskar-ui, the install.rst is quite vague about how to install. I
 got the python-tuskar-ui binary package done, with egg-info and all,
 that's not the problem. What worries me is this part:

[snip]

Hello Thomas,

sorry for the late reply. The install instructions in the tuskar-ui
repository seem to be written with the developer in mind. For a
production installation (for which Tuskar is not yet entirely ready,
regrettably), you would just need two things:

1. Make sure that tuskar_ui is importable as a python module.
2. Make sure that Tuskar-UI is enabled as a Horizon extension, by
creating a file within Horizon's configuration, as described here:
https://github.com/openstack/horizon/blob/master/doc/source/topics/settings.rst#examples
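
For illustration only, such a file might look like the sketch below; the
file name and attribute values are assumptions based on the pluggable
settings described in the linked documentation, not Tuskar's actual
packaging:

    # _50_tuskar.py - hypothetical Horizon "enabled" file; the exact
    # attributes depend on the pluggable settings linked above.

    # The slug of the dashboard to be added to HORIZON['dashboards'].
    DASHBOARD = 'infrastructure'

    # If set to True, this dashboard becomes the default one.
    DEFAULT = False

    # Django apps added to INSTALLED_APPS, which also covers point 1
    # above (making tuskar_ui importable).
    ADD_INSTALLED_APPS = ['tuskar_ui']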

We will need to update our documentation to include those instructions.

I hope that helps.
-- 
Radomir Dopieralski



Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-19 Thread Radomir Dopieralski
On 14/12/13 16:51, Jay Pipes wrote:

[snip]

 Instead of focusing on locking issues -- which I agree are very
 important in the virtualized side of things where resources are
 thinner -- I believe that in the bare-metal world, a more useful focus
 would be to ensure that the Tuskar API service treats related group
 operations (like deploy an undercloud on these nodes) in a way that
 can handle failures in a graceful and/or atomic way.

Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that: optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished. An optimistic one is implemented using transactions,
which assume that there will be no conflict, and just roll back all the
changes if there was one. Since none of the OpenStack services that we use
expose any kind of transaction mechanism (mostly because they have
RESTful, stateless APIs, and transactions imply state), we are left with
locks as the only tool to assure atomicity. Thus, your sentence above is
a little bit contradictory, advocating ignoring locking issues and
proposing making operations atomic at the same time.
Perhaps you have some other way of making them atomic that I can't think of?
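
To make the two options concrete, a toy Python sketch (do_deploy and
undo_deploy are stand-in operations, not anything in Tuskar):

    import threading

    _lock = threading.Lock()

    def deploy_pessimistic(nodes, do_deploy):
        # Pessimistic: hold a lock so no other process can enter the
        # critical section until the whole group operation finishes.
        with _lock:
            for node in nodes:
                do_deploy(node)

    def deploy_optimistic(nodes, do_deploy, undo_deploy):
        # Optimistic: assume no conflict, apply all the changes, and
        # roll everything back if a conflict surfaces part-way through.
        done = []
        try:
            for node in nodes:
                do_deploy(node)  # may raise on conflict
                done.append(node)
        except Exception:
            for node in reversed(done):
                undo_deploy(node)
            raise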

 For example, if the construction or installation of one compute worker
 failed, adding some retry or retry-after-wait-for-event logic would be
 more useful than trying to put locks in a bunch of places to prevent
 multiple sysadmins from trying to deploy on the same bare-metal nodes
 (since it's just not gonna happen in the real world, and IMO, if it did
 happen, the sysadmins/deployers should be punished and have to clean up
 their own mess ;)

I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.

-- 
Radomir Dopieralski



[openstack-dev] [Neutron][LBaaS] Weekly subteam meeting at Thursday, 19.12, 14-00 UTC

2013-12-19 Thread Eugene Nikanorov
Hi lbaas folks,

Let's meet as usual in #openstack-meeting on Thursday the 19th at 14:00 UTC.
The primary discussion points should be:
1) Third party testing, test scenarios
2) L7 rules
3) HA for agents and HA for HAProxy
4) SSL termination

Thanks,
Eugene


Re: [openstack-dev] meeting times and rotations etc...

2013-12-19 Thread Flavio Percoco

On 19/12/13 01:28 -0800, John Dickinson wrote:

Another option would be to use what we already have to our benefit. Instead of 
trying to provision two meeting rooms (-meeting and -meeting-alt), use the 
various other IRC channels that we already have for team meetings. This would 
allow for meetings to be at the same time, but it would free up more time slots 
to be scheduled, and those time slots can be scheduled more precisely to fit 
the schedules of those attending.

So what about cross-team concerns? We have the weekly meeting, and if that 
isn't sufficient, then the -meeting and -meeting-alt channels can be scheduled 
for cross-team needs.


+1

I was thinking the exact same thing while reading Robert's email. I
guess we should also consider new projects that don't have a channel
yet. Just as you mentioned, they could use -meeting or -meeting-alt.

Marconi's team has had several 'extra' meetings - mostly for
brainstorming about a specific topic - in #openstack-marconi. All
these meetings were announced properly and worked just fine.

We've done the same for Glance.

I'd like to add that, although I prefer keeping the meetings short,
there are times when 1h may not be enough and people end up using
another team's time or dropping the meeting. Using the project's channel
should help with this as well.


Cheers,
FF



--John




On Dec 19, 2013, at 1:20 AM, Robert Collins robe...@robertcollins.net wrote:


So, I'm a little worried about the complexities of organising free
slots given we're basically about to double the # of entries we have in
all our calendars.

Maybe we can do something a little simpler: just have the *whole
calendar* shift phase 180° each week: it won't be perfect,
particularly for those projects that currently have a majority of
members meeting in the middle of their day (12 midday - 12 midnight),
but if there's any decent spread already meeting, there will be a
decent spread for the alternate week - and an important thing for
inclusion is to not be doing votes etc. in meetings *anyway*, so I think
it's ok for the PTL (for instance) to not be at every meeting.

Thoughts?

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud











--
@flaper87
Flavio Percoco




Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Sean Dague
On 12/18/2013 10:54 PM, Jay Pipes wrote:
 On 12/18/2013 10:21 PM, Brent Eagles wrote:
 Hi,

 Yair and I were discussing a change that I initiated and was
 incorporated into the test_network_basic_ops test. It was intended as a
 configuration control point for floating IP address assignments before
 actually testing connectivity. The question we were discussing was
 whether this check was a valid pass/fail criteria for tests like
 test_network_basic_ops.

 The initial motivation for the change was that test_network_basic_ops
 had a less than 50/50 chance of passing in my local environment for
 whatever reason. After looking at the test, it seemed ridiculous that it
 should be failing. The problem is that more often than not the data that
 was available in the logs all pointed to it being set up correctly but
 the ping test for connectivity was timing out. From the logs it wasn't
  clear whether the test was failing because neutron did not do the right
  thing, did not do it fast enough, or something else was happening.  Of
 course if I paused the test for a short bit between setup and the checks
 to manually verify everything the checks always passed. So it's a timing
 issue right?

 Two things: adding more timeout to a check is as appealing to me as
 gargling glass AND I was less annoyed that the test was failing as I
 was that it wasn't clear from reading logs what had gone wrong. I tried
 to find an additional intermediate control point that would split
 failure modes into two categories: neutron is too slow in setting things
 up and neutron failed to set things up correctly. Granted it still is
 adding timeout to the test, but if I could find a control point based on
 settling so that if it passed, then there is a good chance that if the
 next check failed it was because neutron actually screwed up what it was
 trying to do.

 Waiting until the query on the nova for the floating IP information
 seemed a relatively reasonable, if imperfect, settling criteria before
 attempting to connect to the VM. Testing to see if the floating IP
 assignment gets to the nova instance details is a valid test and,
 AFAICT, missing from the current tests. However, Yair has the reasonable
 point that connectivity is often available long before the floating IP
 appears in the nova results and that it could be considered invalid to
 use non-network specific criteria as pass/fail for this test.
 
 But, Tempest is all about functional integration testing. Using a call
 to Nova's server details to determine whether a dependent call to
 Neutron succeeded (setting up the floating IP) is exactly what I think
 Tempest is all about. It's validating that the integration between Nova
 and Neutron is working as expected.
 
 So, I actually think the assertion on the floating IP address appearing
 (after some timeout/timeout-backoff) is entirely appropriate.
 
 In general, the validity of checking for the presence of a floating IP
 in the server details is a matter of interpretation. I think it is a
 given that it must be tested somewhere and that if it causes a test to
  fail then it is as valid a failure as a ping failing. Certainly I have
  seen scenarios where an IP appears, but doesn't actually work and others
  where the IP doesn't appear (ever, not just in a really long while) but
  magically works. Both are bugs. Which is more appropriate to tests like
 test_network_basic_ops?
 
 I believe both assertions should be part of the test cases, but since
 the latter condition (good ping connectivity, but no floater ever
 appears attached to the instance) necessarily depends on the first
 failure (floating IP does not appear in the server details after a
 timeout), then perhaps one way to handle this would be to do this:
 
 a) create server instance
 b) assign floating ip
 c) query server details looking for floater in a timeout-backoff loop
 c1) floater does appear
  c1-a) assert ping connectivity
 c2) floater does not appear
  c2-a) check ping connectivity. if ping connectivity succeeds, use a
 call to testtools.TestCase.addDetail() to provide some interesting
 feedback
  c2-b) raise assertion that floater did not appear in the server details
 
 Currently, the polling interval for the checks in the gate should be
 tuned. They are borrowing other polling configuration and I can see it
 is ill-advised. It is currently polling at an interval of a second and
 if the intent is to wait for the entire system to settle down before
 proceeding then polling nova that quickly is too often. It simply
 increases the load while we are waiting to adapt to a loaded system. For
 example in the course of a three minute timeout, the floating IP check
 polled nova for server details 180 times.
 
 Agreed completely.

We should just add an exponential backoff to the waiting. That should
decrease load over time. I'd be +2 to such a patch.
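
Putting Jay's a)-c2-b) flow together with the backoff, a sketch of what
the check could look like; get_floating_ip and ping here are stand-in
helpers, not real Tempest APIs:

    import time

    def wait_for_floating_ip(get_floating_ip, server_id,
                             timeout=180, interval=1, max_interval=16):
        """Poll the server details with exponential backoff (1s, 2s,
        4s, ... capped) instead of ~180 fixed one-second polls."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            ip = get_floating_ip(server_id)  # stand-in for the nova query
            if ip:
                return ip
            time.sleep(interval)
            interval = min(interval * 2, max_interval)
        return None

    def check_floating_ip_then_ping(get_floating_ip, ping, server_id,
                                    neutron_flip):
        ip = wait_for_floating_ip(get_floating_ip, server_id)
        if ip:                     # c1: floater appeared in server details
            assert ping(ip)        # c1-a: assert ping connectivity
            return
        if ping(neutron_flip):     # c2-a: record the interesting case
            print('ping works, but the floater never showed up in nova')
        raise AssertionError(      # c2-b: fail on the missing floater
            'floating IP did not appear in the server details')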

That being said, I'm not sure why 1 request/sec is considered load
that would break the system. That doesn't seem a 

Re: [openstack-dev] meeting times and rotations etc...

2013-12-19 Thread Thierry Carrez
Robert Collins wrote:
 So, I'm a little worried about the complexities of organising free
 slots given we're basically about to double the # of entries we have in
 all our calendars.

I don't think that's what we are about to do, as I don't expect every
single meeting to implement a rotation. As I said in another thread:

Rotating meetings are not a magic bullet: unless your contributors are
evenly distributed on the planet, you end up alienating your regular
contributors and slowing down the pace of your core team... sometimes
for no obvious gain.

One option is to keep your regular weekly meeting, but then throw an
*additional* meeting in alternate timezone(s) every month to take the
pulse of the devs you have there. That would also let you gauge the need
for a more regular meeting, or truly alternating meeting times. All that
without impacting your regular velocity, and not being too much of a pain.

So I would not just shift the whole calendar each week, because that
assumes that rotating meetings are good for everyone. I don't expect a
few rotating meeting times to create that many scheduling headaches
(since the Europe/APAC-friendly meeting slots are not used that much
currently!). If the number of meeting channels really becomes a
problem, we could explore John's proposed solution (enabling the meeting
bot on topical channels and having some meetings there).

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Sean Dague
On 12/19/2013 03:31 AM, Yair Fried wrote:
 Hi guys,
 I ran into this issue trying to incorporate this test into the
 cross_tenant_connectivity scenario, which launches 2 VMs in different
 tenants.
 What I saw is that in the gate it fails half the time (the original
 test passes without issues) and ONLY on the 2nd VM (the first FLIP
 propagates fine).
 https://bugs.launchpad.net/nova/+bug/1262529
 
 I don't see this in:
 1. my local RHOS-Havana setup
 2. the cross_tenant_connectivity scenario without the control point
 (test passes without issues)
 3. test_network_basic_ops runs in the gate
 
 So here's my somewhat less experienced opinion:
 1. this happens due to stress (more than a single FLIP/VM)
 2. (as Brent said) timeout intervals between polling are too short
 3. the FLIP is usually reachable long before it is seen in the nova DB
 (also from manual experience), so blocking the test until it reaches the
 nova DB doesn't make sense to me. If we could do this in a different
 thread, then maybe, but using a pass/fail criterion to test for a timing
 issue seems wrong. Especially since, as I understand it, the issue is not
 IF it reaches the nova DB, only WHEN.
 
 I would like to, at least, move this check from its place as a blocker
 to later in the test. Before this is done, I would like to know if
 anyone else has seen the same problems Brent describes prior to this
 patch being merged.

 Regarding Jay's scenario suggestion, I think this should not be a part
 of network_basic_ops, but rather a separate stress scenario creating
 multiple VMs and testing for FLIP associations and propagation time.

+1 there is no need to overload that one scenario. A dedicated one would
be fine.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Ironic] Yuriy Zveryanskyy for ironic-core?

2013-12-19 Thread Maksym Lobur
I up-vote this nomination.
Yuriy is doing many reviews; moreover, he is mostly working on core Ironic
features, so he is more likely to spot architectural problems.

Best regards,
Max Lobur,
Python Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Thu, Dec 19, 2013 at 11:01 AM, Robert Collins
robe...@robertcollins.net wrote:

 Yuriy seems to have been doing reviews consistently over the last
 three months, is catching plenty of issues.

 He isn't catching everything, but I think he catches approximately as
 much as other cores - none of us catch everything.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud




Re: [openstack-dev] meeting times and rotations etc...

2013-12-19 Thread Sean Dague
On 12/19/2013 04:28 AM, John Dickinson wrote:
 Another option would be to use what we already have to our benefit. Instead 
 of trying to provision two meeting rooms (-meeting and -meeting-alt), use the 
 various other IRC channels that we already have for team meetings. This would 
 allow for meetings to be at the same time, but it would free up more time 
 slots to be scheduled, and those time slots can be scheduled more precisely 
 to fit the schedules of those attending.
 
 So what about cross-team concerns? We have the weekly meeting, and if that 
 isn't sufficient, then the -meeting and -meeting-alt channels can be 
 scheduled for cross-team needs.

I'm generally -1 to this. I idle in #openstack-meeting and
#openstack-meeting-alt, and periodically go look at what's going on. It
also means if someone pings me in a meeting, I'm likely to see it. This
level of passive monitoring on the meetings is I think useful.

If a particular team wants to take their meetings out of the main
channels, so be it, but I think the norm should be to continue to use
those channels.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread John Garbutt
Apologies for being late onto this thread, and not making the meeting
the other day.
Also apologies this is almost totally a top post.

On 17 December 2013 15:09, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Firstly, I disagree that
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an accurate
 reflection of the current state.  It's a very unilateral view, largely
 because the rest of us had been focussing on the google document that we've
 been using for weeks.

I haven't seen the google doc. I got involved through the blueprint
review of this:
https://blueprints.launchpad.net/nova/+spec/pci-extra-info

I assume its this one?
https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs

On a quick read, my main concern is separating out the user more:
* administration (defines pci-flavor, defines which hosts can provide
it, defines server flavor...)
* person who boots server (picks server flavor, defines neutron ports)

Note, I don't see the person who boots the server ever seeing the
pci-flavor, only understanding the server flavor.

We might also want a nic-flavor that tells neutron information it
requires, but let's get to that later...

 Secondly, I totally disagree with this approach.  This assumes that
 description of the (cloud-internal, hardware) details of each compute node
 is best done with data stored centrally and driven by an API.  I don't agree
 with either of these points.

Possibly, but I would like to first agree on the use cases and data
model we want.

Nova has generally gone for APIs over config in recent times.
Mostly so you can do run-time configuration of the system.
But let's just see what makes sense when we have the use cases agreed.

 On 16 December 2013 22:27, Robert Li (baoli) wrote:
 I'd like to give you guy a summary of current state, let's discuss it
 then.
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


  1)  fade out alias (I think this is ok for all)
  2)  white list became pci-flavor (I think this is ok for all)
  3)  address simple regular expression support: only * and a number range
  [hex-hex] are supported. (I think this is ok?)
  4)  aggregate: now it's clear enough, and won't impact SRIOV. (I think
  this is irrelevant to SRIOV now)

So... this means we have:

PCI-flavor:
* i.e. standardGPU, standardGPUnew, fastGPU, hdFlash1TB etc

Host mapping:
* decide which hosts you allow a particular flavor to be used on
* note, the scheduler still needs to find out if any devices are free

flavor (of the server):
* usual RAM, CPU, Storage
* use extra specs to add PCI devices
* example:
** add one PCI device, choice of standardGPU or standardGPUnew
** also add: one hdFlash1TB
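
For illustration, the same model written out as data; every key name and
ID below is made up to restate the proposal, none of it is an agreed API:

    pci_flavors = {
        'standardGPU':    {'vendor_id': '10de', 'product_id': '0dd8'},
        'standardGPUnew': {'vendor_id': '10de', 'product_id': '11b4'},
        'hdFlash1TB':     {'vendor_id': '8086', 'product_id': '0953'},
    }

    # Host mapping: which hosts may provide a given pci-flavor (the
    # scheduler still has to check whether a device is actually free).
    host_pci_flavors = {
        'compute-01': ['standardGPU', 'standardGPUnew'],
        'compute-02': ['hdFlash1TB'],
    }

    # Server flavor: the usual resources, plus PCI requests in extra
    # specs: one GPU (either variant) and one flash card, as above.
    server_flavor = {
        'ram': 16384, 'vcpus': 8, 'disk': 160,
        'extra_specs': {
            'pci_requests': [
                {'count': 1,
                 'pci_flavor': ['standardGPU', 'standardGPUnew']},
                {'count': 1, 'pci_flavor': ['hdFlash1TB']},
            ],
        },
    }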

Now, the other bit is SRIOV... At a high level:

Neutron:
* user wants to connect to a particular neutron network
* user wants a super-fast SRIOV connection

Administration:
* needs to map PCI device to what neutron network the connect to

The big question is:
* is this a specific SRIOV only (provider) network
* OR... are other non-SRIOV connections also made to that same network

I feel we have to go for the latter. Imagine a network on VLAN 42,
you might want some SRIOV into that network, and some OVS connecting
into the same network. The user might have VMs connected using both
methods, so wants the same IP address ranges and same network id
spanning both.

If we go for the latter we either need:
* some kind of nic-flavor
** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
** but neutron could store the nic-flavor, and pass it through to the VIF
driver, and the user says port-id
* OR add NIC config into the server flavor
** an extra spec to say: tell the VIF driver it could use one of this list of
PCI devices: (list of pci-flavors)
* OR do both

I vote for nic-flavor only, because it matches the volume-type we have
with cinder.

However, it does suggest that Nova should leave all the SRIOV work to
the VIF driver.
So the VIF driver, as activated by neutron, will understand which PCI
devices to pass through.

Similar to the plan with brick, we could have an oslo lib that helps
you attach SRIOV devices, which could be used by the neutron VIF drivers
and the nova PCI passthrough code.

Thanks,
John



Re: [openstack-dev] [Nova] Future meeting times

2013-12-19 Thread John Garbutt
+1 to the 14:00 meeting, I can always make those.
I can probably make some of the 21:00 meetings,
but probably only in the summer when it's UTC+1 over here.

John

On 19 December 2013 07:49, Day, Phil philip@hp.com wrote:
 +1, I would make the 14:00 meeting. I often have good intentions of making the 
 21:00 meeting, but it's tough to work it in around family life.


 Sent from Samsung Mobile



  Original message 
 From: Joe Gordon joe.gord...@gmail.com
 Date:
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] Future meeting times



 On Dec 18, 2013 6:38 AM, Russell Bryant 
 rbry...@redhat.com wrote:

 Greetings,

 The weekly Nova meeting [1] has been held on Thursdays at 2100 UTC.
 I've been getting some requests to offer an alternative meeting time.
 I'd like to try out alternating the meeting time between two different
 times to allow more people in our global development team to attend
 meetings and engage in some real-time discussion.

 I propose the alternate meeting time as 1400 UTC.  I realize that
 doesn't help *everyone*, but it should be an improvement for some,
 especially for those in Europe.

 If we proceed with this, we would meet at 2100 UTC on January 2nd, 1400
 UTC on January 9th, and alternate from there.  Note that we will not be
 meeting at all on December 26th as a break for the holidays.

 If you can't attend either of these times, please note that the meetings
 are intended to be supplementary to the openstack-dev mailing list.  In
 the meetings, we check in on status, raise awareness of important
 issues, and progress some discussions with real-time debate, but the
 most important discussions and decisions will always be brought to the
 openstack-dev mailing list, as well.  With that said, active Nova
 contributors are always encouraged to attend and participate if they are
 able.

 Comments welcome, especially some acknowledgement that there are people
 that would attend the alternate meeting time.  :-)

 I am fine with this, but I will never be attending the 1400 UTC meetings, as 
 I live in UTC-8.


 Thanks,

 [1] https://wiki.openstack.org/wiki/Meetings/Nova

 --
 Russell Bryant





Re: [openstack-dev] Incubation Request for Barbican

2013-12-19 Thread Sean Dague
On 12/19/2013 12:10 AM, Mike Perez wrote:
  On Tue, Dec 17, 2013 at 1:59 PM, Mike Perez thin...@gmail.com wrote:
snip
 I reviewed the TC meeting notes, and my question still stands.
 
  It seems the committee is touching on the point of there being a worry
  that if it's a single company running the show, they can pull resources
  away and the project collapses. My worry is that having just one company
  attempting to design solutions to use cases that work for them will later
  not work for those potential companies that would provide contributors.
 
 -Mike Perez

Which is our fundamental chicken and egg problem. The Barbican team has
said they've reached out to other parties, who have expressed interest
in joining, but no one else has.

The Heat experience shows that a lot of the time companies won't kick in
resources until there is some kind of stamp of general approval.

If you showed up early, with a commitment to work openly, the fact that
the project maps to your own use cases really well isn't a bug, it's a
feature. I don't want to hold up a team from incubating because other
people stayed on the sidelines. That was actually exactly what was going
on with Heat, where lots of entities thought they would keep that side
of the equation proprietary, or outside of OpenStack. By bringing Heat
in, we changed the equation, I think massively for the better.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Jiri Tomasek

On 12/19/2013 08:58 AM, Matthias Runge wrote:

On 12/18/2013 10:33 PM, Gabriel Hurley wrote:


Adding developers to Horizon Core just for the purpose of reviewing
an incubated umbrella project is not the right way to do things at
all.  If my proposal of two separate groups having the +2 power in
Gerrit isn't technically feasible then a new group should be created
for management of umbrella projects.

Yes, I totally agree.

Having two separate projects with separate cores should be possible
under the umbrella of a program.

Tuskar differs somewhat from other projects included in Horizon, because
those projects each contributed a view on their specific feature. Tuskar
provides an additional dashboard and talks to several APIs underneath;
it is something like a separate dashboard being merged in.

With both under the Horizon program umbrella, my concern is that the two
projects wouldn't be coupled as tightly as I would like.

In particular, I'd love to see an automatic merge of Horizon commits into a
(combined) Tuskar and Horizon repository, thus making sure Tuskar keeps
working in a fresh (updated) Horizon environment.


Please correct me if I am wrong, but I think this is not an issue.
Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we
create a symlink to the local tuskar-ui clone, and to run Horizon with
Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs
on the latest version of Horizon (if you pull regularly, of course).




Matthias



Jirka



Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread Ian Wells
John:

 At a high level:

 Neutron:
 * user wants to connect to a particular neutron network
 * user wants a super-fast SRIOV connection

Administration:
 * needs to map PCI device to what neutron network the connect to

The big question is:
 * is this a specific SRIOV only (provider) network
 * OR... are other non-SRIOV connections also made to that same network

 I feel we have to go for the latter. Imagine a network on VLAN 42,
 you might want some SRIOV into that network, and some OVS connecting
 into the same network. The user might have VMs connected using both
 methods, so wants the same IP address ranges and same network id
 spanning both.


 If we go for the latter we either need:
 * some kind of nic-flavor
 ** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
 ** but neutron could store the nic-flavor, and pass it through to the VIF
 driver, and the user says port-id
 * OR add NIC config into the server flavor
 ** an extra spec to say: tell the VIF driver it could use one of this list of
 PCI devices: (list of pci-flavors)
 * OR do both

 I vote for nic-flavor only, because it matches the volume-type we have
 with cinder.


I think the issue there is that Nova is managing the supply of PCI devices
(which is limited, and limited on a per-machine basis).  Indisputably you
need to select the NIC you want to use as a passthrough rather than a vnic
device, so there's something in the --nic argument, but you have to answer
two questions:

- how many devices do you need (which is now not a flavor property but in
the --nic list, which seems to me an odd place to be defining billable
resources)
- what happens when someone does nova interface-attach?

Cinder's an indirect parallel because the resources it's adding to the
hypervisor are virtual and unlimited, I think, or am I missing something
here?


 However, it does suggest that Nova should leave all the SRIOV work to
 the VIF driver.
 So the VIF driver, as activated by neutron, will understand which PCI
 devices to passthrough.

 Similar to the plan with brick, we could have an oslo lib that helps
 you attach SRIOV devices that could be used by the neutron VIF drivers
 and the nova PCI passthrough code.


I'm not clear that this is necessary.

At the moment with vNICs, you pass through devices by having a co-operation
between Neutron (which configures a way of attaching them to put them on a
certain network) and the hypervisor specific code (which creates them in
the instance and attaches them as instructed by Neutron).  Why would we not
follow the same pattern with passthrough devices?  In this instance,
neutron would tell nova that when it's plugging this device it should be a
passthrough device, and pass any additional parameters like the VF encap,
and Nova would do as instructed, then Neutron would reconfigure whatever
parts of the network need to be reconfigured in concert with the
hypervisor's settings to make the NIC a part of the specified network.
-- 
Ian.



 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Dec 19 1800 UTC

2013-12-19 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Additionally, we are canceling our next two weekly meetings - Dec 26 and
Jan 2.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_December.2C_19

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20131219T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-19 Thread Radomir Dopieralski
On 11/12/13 21:42, Robert Collins wrote:
 On 12 December 2013 01:17, Jaromir Coufal jcou...@redhat.com wrote:
 On 2013/10/12 23:09, Robert Collins wrote:

[snip]

 That's speculation. We don't know if they will or will not because we
 haven't given them a working system to test.

 Some part of that is speculation, some part of that is feedback from people
 who are doing deployments (of course it's just a very limited audience).
 Anyway, it is not just pure theory.
 
 Sure. Let me be more precise. There is a hypothesis that lack of
 direct control will be a significant adoption blocker for a primary
 group of users.

I'm sorry for butting in, but I think I can see where your disagreement
comes from and maybe explaining it will help resolving it.

It's not a hypothesis, but a well documented and researched fact, that
transparency has a huge impact on the ease of use of any information
artifact. In particular, the easier you can see what is actually
happening and how your actions affect the outcome, the faster you can
learn to use it and the more efficient you are in using it and resolving
any problems with it. It's no surprise that closeness of mapping and
hidden dependencies are two important cognitive dimensions that are
often measured when assessing the usability of an artifact. Humans simply
find it nice when they can tell what is happening, even if theoretically
they don't need that knowledge when everything works correctly.

This doesn't come from any direct requirements of Tuskar itself, and I
am sure that all the workarounds that Robert gave will work somehow in
every real-world problem that arises. But the whole will not necessarily
be easy or pleasant to learn and use. I am aware that the requirement to
be able to see what is happening is a fundamental problem, because it
destroys one of the most important rules in system engineering --
separation of concerns. The parts in the upper layers should simply not
care how the parts in the lower layers do their jobs, as long as they
work properly.

I know that it is a kind of a tradition in Open Source software to
create software with the assumption, that it's enough for it to do its
job, and if every use case can be somehow done, directly or indirectly,
then it's good enough. We have a lot of working tools designed with this
principle in mind, such as CSS, autotools or our favorite git. They do
their job, and they do it well (except when they break horribly). But I
think we can put a little bit more effort into also ensuring that the
common use cases are not just doable, but also easy to implement and
maintain. And that means that we will sometimes have a requirement that
comes from how people think, and not from any particular technical need.
I know that it sounds like speculation, or theory, but I think we need
to trust in Jarda's experience with usability and his judgement about
what works better -- unless of course we are willing to learn all that
ourselves, which may take quite some time.

What is the point of having an expert, if we know better, after all?
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Sylvain Bauza

Hi team,

I won't be able to attend the next two weekly meetings (23 Dec and 
30 Dec), so I would like to postpone our meetings until 6th January 2014.

Any objections to this ?

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread John Garbutt
On 19 December 2013 12:21, Ian Wells ijw.ubu...@cack.org.uk wrote:

 John:

 At a high level:

 Neutron:
 * user wants to connect to a particular neutron network
 * user wants a super-fast SRIOV connection

 Administration:
 * needs to map each PCI device to the neutron network it connects to

 The big question is:
 * is this a specific SRIOV only (provider) network
 * OR... are other non-SRIOV connections also made to that same network

 I feel we have to go for the latter. Imagine a network on VLAN 42,
 you might want some SRIOV into that network, and some OVS connecting
 into the same network. The user might have VMs connected using both
 methods, so wants the same IP address ranges and same network id
 spanning both.


 If we go for the latter we either need:
 * some kind of nic-flavor
 ** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
 ** but neutron could store nic-flavor, and pass it through to VIF
 driver, and user says port-id
 * OR add NIC config into the server flavor
 ** extra spec to say, tell VIF driver it could use one of this list of
 PCI devices: (list pci-flavors)
 * OR do both

 I vote for nic-flavor only, because it matches the volume-type we have
 with cinder.


 I think the issue there is that Nova is managing the supply of PCI devices
 (which is limited and limited on a per-machine basis).  Indisputably you
 need to select the NIC you want to use as a passthrough rather than a vnic
 device, so there's something in the --nic argument, but you have to answer
 two questions:

 - how many devices do you need (which is now not a flavor property but in
 the --nic list, which seems to me an odd place to be defining billable
 resources)
 - what happens when someone does nova interface-attach?

Agreed.

The --nic list specifies how many NICs.

I was suggesting adding a nic-flavor on each --nic spec to say if it's
PCI passthrough vs virtual NIC.

 Cinder's an indirect parallel because the resources it's adding to the
 hypervisor are virtual and unlimited, I think, or am I missing something
 here?

I was referring more to the different volume-types, i.e. fast
volume or normal volume,
and how that is similar to virtual vs fast PCI passthrough vs slow
PCI passthrough.

Local volumes probably have the same issues as PCI passthrough with
finite resources.
But I am not sure we have a good solution for that yet.

Mostly, it seems right that Cinder and Neutron own the configuration
about the volume and network resources.

The VIF driver and volume drivers seem to have a similar sort of
relationship with Cinder and Neutron vs Nova.

Then the issue boils down to visibility into that data so we can
schedule efficiently, which is no easy problem.


 However, it does suggest that Nova should leave all the SRIOV work to
 the VIF driver.
  So the VIF driver, as activated by neutron, will understand which PCI
 devices to passthrough.

 Similar to the plan with brick, we could have an oslo lib that helps
  you attach SRIOV devices that could be used by the neutron VIF drivers
 and the nova PCI passthrough code.

 I'm not clear that this is necessary.

 At the moment with vNICs, you pass through devices by having a co-operation
 between Neutron (which configures a way of attaching them to put them on a
 certain network) and the hypervisor specific code (which creates them in the
 instance and attaches them as instructed by Neutron).  Why would we not
 follow the same pattern with passthrough devices?  In this instance, neutron
 would tell nova that when it's plugging this device it should be a
 passthrough device, and pass any additional parameters like the VF encap,
 and Nova would do as instructed, then Neutron would reconfigure whatever
 parts of the network need to be reconfigured in concert with the
 hypervisor's settings to make the NIC a part of the specified network.

I agree, in general terms.

Firstly, do you agree the neutron network-id can be used for
passthrough and non-passthrough VIF connections? i.e. a neutron
network-id does not imply PCI-passthrough.

Secondly, we need to agree on the information flow around defining the
flavor of the NIC. i.e. virtual or passthroughFast or
passthroughNormal.

My gut feeling is that neutron port description should somehow define
this via a nic-flavor that maps to a group of pci-flavors.

But from a billing point of view, I like the idea of the server flavor
saying to the VIF plug code: "by the way, for this server, please
support all the nics using devices in pciflavor:fastNic", should that be
possible for the user's given port configuration. But this is leaking
neutron/networking information into Nova, which seems bad.
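
To make the two options concrete, here is a sketch of what each might look
like on the command line (the nic-flavor syntax and all flavor/extra-spec
names are hypothetical; none of this exists today):

    # Option A: nic-flavor carried on each --nic spec
    nova boot --flavor m1.large --image <image-id> \
      --nic net-id=<net-id>,nic-flavor=10GBpassthrough my-server

    # Option B: the server flavor carries the PCI hint via an extra spec
    nova flavor-key m1.large.sriov set pci_passthrough:nic_flavors=fastNic
    nova boot --flavor m1.large.sriov --image <image-id> \
      --nic net-id=<net-id> my-server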

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Sergey Lukjanov
Howdy,

yup, agreed. Additionally, I'd like to start a discussion about new meeting
time that'd be more US-folks friendly.

Thanks.


On Thu, Dec 19, 2013 at 4:49 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

 Hi team,

 I won't be able to attend the next two weekly meetings (23 Dec and 30 Dec),
 so I would like to postpone our meetings until 6th January 2014.
 Any objections to this ?

 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread John Garbutt
On 19 December 2013 12:54, John Garbutt j...@johngarbutt.com wrote:
 On 19 December 2013 12:21, Ian Wells ijw.ubu...@cack.org.uk wrote:

 John:

 At a high level:

 Neutron:
 * user wants to connect to a particular neutron network
 * user wants a super-fast SRIOV connection

 Administration:
 * needs to map each PCI device to the neutron network it connects to

 The big question is:
 * is this a specific SRIOV only (provider) network
 * OR... are other non-SRIOV connections also made to that same network

 I feel we have to go for the latter. Imagine a network on VLAN 42,
 you might want some SRIOV into that network, and some OVS connecting
 into the same network. The user might have VMs connected using both
 methods, so wants the same IP address ranges and same network id
 spanning both.


 If we go for the latter we either need:
 * some kind of nic-flavor
 ** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
 ** but neutron could store nic-flavor, and pass it through to VIF
 driver, and user says port-id
 * OR add NIC config into the server flavor
 ** extra spec to say, tell VIF driver it could use one of this list of
 PCI devices: (list pci-flavors)
 * OR do both

 I vote for nic-flavor only, because it matches the volume-type we have
 with cinder.


 I think the issue there is that Nova is managing the supply of PCI devices
 (which is limited and limited on a per-machine basis).  Indisputably you
 need to select the NIC you want to use as a passthrough rather than a vnic
 device, so there's something in the --nic argument, but you have to answer
 two questions:

 - how many devices do you need (which is now not a flavor property but in
 the --nic list, which seems to me an odd place to be defining billable
 resources)
 - what happens when someone does nova interface-attach?

 Agreed.

Apologies, I misread what you put, maybe we don't agree...

I am just trying not to make a passthrough NIC an odd special case.

In my mind, it should just be a regular neutron port connection that
happens to be implemented using PCI passthrough.

I agree we need to sort out the scheduling of that, because it's a
finite resource.

 The --nic list specifies how many NICs.

 I was suggesting adding a nic-flavor on each --nic spec to say if it's
 PCI passthrough vs virtual NIC.

 Cinder's an indirect parallel because the resources it's adding to the
 hypervisor are virtual and unlimited, I think, or am I missing something
 here?

 I was referring more to the different volume-types, i.e. fast
 volume or normal volume,
 and how that is similar to virtual vs fast PCI passthrough vs slow
 PCI passthrough.

 Local volumes probably have the same issues as PCI passthrough with
 finite resources.
 But I am not sure we have a good solution for that yet.

 Mostly, it seems right that Cinder and Neutron own the configuration
 about the volume and network resources.

 The VIF driver and volume drivers seem to have a similar sort of
 relationship with Cinder and Neutron vs Nova.

 Then the issue boils down to visibility into that data so we can
 schedule efficiently, which is no easy problem.


 However, it does suggest that Nova should leave all the SRIOV work to
 the VIF driver.
  So the VIF driver, as activated by neutron, will understand which PCI
 devices to passthrough.

 Similar to the plan with brick, we could have an oslo lib that helps
  you attach SRIOV devices that could be used by the neutron VIF drivers
 and the nova PCI passthrough code.

 I'm not clear that this is necessary.

 At the moment with vNICs, you pass through devices by having a co-operation
 between Neutron (which configures a way of attaching them to put them on a
 certain network) and the hypervisor specific code (which creates them in the
 instance and attaches them as instructed by Neutron).  Why would we not
 follow the same pattern with passthrough devices?  In this instance, neutron
 would tell nova that when it's plugging this device it should be a
 passthrough device, and pass any additional parameters like the VF encap,
 and Nova would do as instructed, then Neutron would reconfigure whatever
 parts of the network need to be reconfigured in concert with the
 hypervisor's settings to make the NIC a part of the specified network.

 I agree, in general terms.

 Firstly, do you agree the neutron network-id can be used for
 passthrough and non-passthrough VIF connections? i.e. a neutron
 network-id does not imply PCI-passthrough.

 Secondly, we need to agree on the information flow around defining the
 flavor of the NIC. i.e. virtual or passthroughFast or
 passthroughNormal.

 My gut feeling is that neutron port description should somehow define
 this via a nic-flavor that maps to a group of pci-flavors.

 But from a billing point of view, I like the idea of the server flavor
 saying to the VIF plug code: "by the way, for this server, please
 support all the nics using devices in pciflavor:fastNic", should that be
 possible for the user's given port configuration. But this is leaking
 neutron/networking information into Nova, which seems bad.

Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Dina Belova
I have Christmas holidays till 12th January... So I don't really know if
I will be available on 6th Jan.


On Thu, Dec 19, 2013 at 4:49 PM, Sylvain Bauza sylvain.ba...@bull.netwrote:

 Hi team,

 I won't be able to attend the next two weekly meetings (23 Dec and 30 Dec),
 so I would like to postpone our meetings until 6th January 2014.
 Any objections to this ?

 -Sylvain

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Randy Tuttle
First, dnsmasq is not being moved. Instead, it's a different instance for the 
attached subnet in the qrouter namespace. If it's not in the qrouter namespace, 
the default gateway advertised will be the qdhcp namespace's interface rather 
than the local router interface, which will blackhole traffic from the VM. As you 
know, routing tables and NAT all live in the qrouter namespace. So we want the RA 
to advertise the local interface in the qrouter namespace as the default gateway.

Randy

Sent from my iPhone

On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:

 I am reading through the blueprint created by Randy to bind dnsmasq into 
 qrouter- namespace:
 
 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
 
 I don't think I can follow the reason that we need to change the namespace 
 which contains dnsmasq process and the device it listens to from qdhcp- to 
 qrouter-. Why the original namespace design conflicts with the Router 
 Advertisement sending from dnsmasq for SLAAC?
 
 From the attached POC result link, the reason is stated as:
 
 Even if the dnsmasq process could send Router Advertisement, the default 
 gateway would bind to its own link-local address in the qdhcp- namespace. As 
 a result, traffic leaving tenant network will be drawn to DHCP interface, 
 instead of gateway port on router. That is not desirable! 
 
 Can Randy or Shixiong explain this more? Thanks!
 
 Xuhan 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Blueprint Bind dnsmasq in qrouter- namespace

2013-12-19 Thread Shixiong Shang
Hi, Xuhan:

Thanks for reaching out to us with questions! Here is a summary of several 
key points:

1. Currently dnsmasq is bound to the ns- interface within the qdhcp- namespace. 
If we continue to use this model, then the announced RA has to use the ns- 
interface's link-local address as its source, per the standards.
2. When the VM receives this RA, it will install a default gateway pointing to 
the ns- interface, again per the standards. In other words, the VM will send 
packets destined for other subnets to the ns- interface.
3. However, outbound traffic should be sent to the qr- interface, which is 
created to act as the default gateway for the tenant network.
4. In addition, the qdhcp- namespace has no IPv6 routes installed, so traffic 
will be blackholed.
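
A quick way to see this on a network node (the namespace IDs below are
placeholders): dnsmasq today binds inside the qdhcp- namespace, while the
routing and NAT state lives in qrouter-:

    # list the namespaces neutron has created
    sudo ip netns list
    # the ns- interface dnsmasq binds to today
    sudo ip netns exec qdhcp-<network-id> ip addr show
    # the qr- interface that should be advertised as the default gateway
    sudo ip netns exec qrouter-<router-id> ip addr show
    # note the empty IPv6 routing table in the DHCP namespace
    sudo ip netns exec qdhcp-<network-id> ip -6 route show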

I initially thought about getting rid of the entire qdhcp- namespace and only 
using the qrouter- namespace. I poked around and nobody could explain to me why 
we need two separate namespaces. In my opinion, I don't see the clear value of 
the qdhcp- namespace... Maybe I overlooked something. But on second thought, we 
decided to launch dnsmasq in the qrouter- namespace and keep the IPv4 dnsmasq in 
the qdhcp- namespace, since we didn't know what else might break.

Please let us know if you have any further questions! Thanks!

Shixiong



On Dec 19, 2013, at 4:05 AM, Xuhan Peng pengxu...@gmail.com wrote:

 I am reading through the blueprint created by Randy to bind dnsmasq into 
 qrouter- namespace:
 
 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
 
 I don't think I can follow the reason that we need to change the namespace 
 which contains dnsmasq process and the device it listens to from qdhcp- to 
 qrouter-. Why the original namespace design conflicts with the Router 
 Advertisement sending from dnsmasq for SLAAC?
 
 From the attached POC result link, the reason is stated as:
 
 Even if the dnsmasq process could send Router Advertisement, the default 
 gateway would bind to its own link-local address in the qdhcp- namespace. As 
 a result, traffic leaving tenant network will be drawn to DHCP interface, 
 instead of gateway port on router. That is not desirable! 
 
 Can Randy or Shixiong explain this more? Thanks!
 
 Xuhan 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread Irena Berezovsky
Hi John,
I totally agree that we should define the use cases both for administration and 
tenant that powers the VM.
Since we are trying to support PCI pass-through network, let's focus on the 
related use cases.
Please see my comments inline.

Regards,
Irena
-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Thursday, December 19, 2013 1:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

Apologies for being late onto this thread, and not making the meeting the other 
day.
Also apologies this is almost totally a top post.

On 17 December 2013 15:09, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Firstly, I disagree that
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an 
 accurate reflection of the current state.  It's a very unilateral 
 view, largely because the rest of us had been focussing on the google 
 document that we've been using for weeks.

I haven't seen the google doc. I got involved through the blueprint review of 
this:
https://blueprints.launchpad.net/nova/+spec/pci-extra-info

I assume its this one?
https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs

On a quick read, my main concern is separating out the user more:
* administration (defines pci-flavor, defines which hosts can provide it, 
defines server flavor...)
* person who boots server (picks server flavor, defines neutron ports)

Note, I don't see the person who boots the server ever seeing the pci-flavor, 
only understanding the server flavor.
[IrenaB] I am not sure that elaborating the PCI device request into the server 
flavor is the right approach for the PCI pass-through network case. A vNIC by its 
nature is something dynamic that can be plugged or unplugged after VM boot; a 
server flavor is quite static.

We might also want a nic-flavor that tells neutron information it requires, 
but let's get to that later...
[IrenaB] nic flavor is definitely something that we need in order to choose 
whether a high performance (PCI pass-through) or virtio (i.e. OVS) nic will be 
created.

 Secondly, I totally disagree with this approach.  This assumes that 
 description of the (cloud-internal, hardware) details of each compute 
 node is best done with data stored centrally and driven by an API.  I 
 don't agree with either of these points.

Possibly, but I would like to first agree on the use cases and data model we 
want.

Nova has generally gone for APIs over config in recent times.
Mostly so you can do run-time configuration of the system.
But let's just see what makes sense when we have the use cases agreed.

 On 2013年12月16日 22:27, Robert Li (baoli) wrote:
 I'd like to give you guy a summary of current state, let's discuss it 
 then.
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


 1)  fade out alias ( i think this ok for all)
 2)  white list became pci-flavor ( i think this ok for all)
 3)  address simple regular expression support: only * and a number
 range is supported [hex-hex]. ( i think this ok?)
 4)  aggregate : now it's clear enough, and won't impact SRIOV.  ( i 
 think this irrelevant to SRIOV now)

So... this means we have:

PCI-flavor:
* i.e. standardGPU, standardGPUnew, fastGPU, hdFlash1TB etc

Host mapping:
* decide on which hosts you allow a particular flavor to be used
* note, the scheduler still needs to find out if any devices are free

flavor (of the server):
* usual RAM, CPU, Storage
* use extra specs to add PCI devices
* example:
** add one PCI device, choice of standardGPU or standardGPUnew
** also add: one hdFlash1TB
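
As a sketch, that flavor piece could be expressed with today's extra-specs
mechanism roughly like this (the key names are invented for illustration, not
an existing Nova convention):

    nova flavor-create gpu.large auto 16384 80 8
    # one GPU from either pci-flavor, plus the flash card
    nova flavor-key gpu.large set pci_passthrough:gpu=standardGPU|standardGPUnew
    nova flavor-key gpu.large set pci_passthrough:storage=hdFlash1TB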

Now, the other bit is SRIOV... At a high level:

Neutron:
* user wants to connect to a particular neutron network
* user wants a super-fast SRIOV connection

Administration:
 * needs to map each PCI device to the neutron network it connects to

The big question is:
* is this a specific SRIOV only (provider) network
* OR... are other non-SRIOV connections also made to that same network

I feel we have to go for the latter. Imagine a network on VLAN 42, you might 
want some SRIOV into that network, and some OVS connecting into the same 
network. The user might have VMs connected using both methods, so wants the 
same IP address ranges and same network id spanning both.
[IrenaB] Agree. SRIOV connection is the choice for certain VM on certain 
network. The same VM can be connected to other network via virtio nic as well 
as other VMs can be connected to the same network via virtio nics.

If we go for the latter we either need:
* some kind of nic-flavor
** boot ... -nic nic-id:public-id:,nic-flavor:10GBpassthrough
** but neutron could store nic-flavor, and pass it through to VIF driver, and 
user says port-id
* OR add NIC config into the server flavor
** extra spec to say, tell VIF driver it could use one of this list of PCI 
devices: (list pci-flavors)
* OR do both

I vote for nic-flavor only, because it matches the volume-type we have with 
cinder.

Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Liang Chen

On 12/19/2013 10:21 AM, Steve Baker wrote:
I would like to nominate Bartosz Górski to be a heat-core reviewer. 
His reviews to date have been valuable and his other contributions to 
the project have shown a sound understanding of how heat works.


Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.


+1 !

cheers


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - what is the actual problem?

2013-12-19 Thread Dmitry Mescheryakov
I agree that enabling communication between a guest and a cloud service is a
common problem for most agent designs. The only exception is an agent based on
hypervisor-provided transport. But as far as I understand many people are
interested in a network-based agent, so indeed we can start a thread (or
continue the discussion in this one) on the problem.

Dmitry


2013/12/19 Clint Byrum cl...@fewbar.com

 So I've seen a lot of really great discussion of the unified agents, and
 it has made me think a lot about the problem that we're trying to solve.

 I just wanted to reiterate that we should be trying to solve real problems
 and not get distracted by doing things right or even better.

 I actually think there are three problems to solve.

 * Private network guest to cloud service communication.
 * Narrow scope highly responsive lean guest agents (Trove, Savanna).
 * General purpose in-instance management agent (Heat).

 Since the private network guests problem is the only one they all share,
 perhaps this is where the three projects should collaborate, and the
 other pieces should be left to another discussion.

 Thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-19 Thread Ian Wells
On 19 December 2013 06:35, Isaku Yamahata isaku.yamah...@gmail.com wrote:


 Hi Ian.

 I can't see your proposal. Can you please make it public viewable?


Crap, sorry - fixed.


  Even before I read the document I could list three use cases.  Eric's
  covered some of them himself.

 I'm not against trunking.
 I'm trying to understand what requirements need trunk network in
 the figure 1 in addition to L2 gateway directly connected to VM via
 trunk port.


No problem, just putting the information there for you.

-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Sylvain Bauza

Le 19/12/2013 13:54, Sergey Lukjanov a écrit :
yup, agreed. Additionally, I'd like to start a discussion about new 
meeting time that'd be more US-folks friendly.



Luckily, we do have Internet now :
http://www.timeanddate.com/worldclock/meetingtime.html?iso=20140106&p1=195&p2=166&p3=179&p4=224

Option #1 :
Mondays 1500 UTC could be the best fit for us, as most of US people 
(except PDT) could join us, but :

 1. that's pretty late for you, guys
 2. the meeting slot is busy on #openstack-meeting (Stackalytics), we 
need to switch to #openstack-meeting-alt


So, alternatives are :
Option #2 :
Mondays 1400 UTC, but :
 1. we lose a certain interest in moving our timeslot for US folks

Option #3 :
Tuesdays 1500 UTC, pretty interesting because that's not an early-bird 
start-of-week meeting for the US, but :
 1. the meeting slot is busy on #openstack-meeting (Scheduler 
sub-group), we need to switch to #openstack-meeting-alt




Thoughts ? My vote goes to #1 personally.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Next two weekly meetings cancelled ?

2013-12-19 Thread Sylvain Bauza

Le 19/12/2013 13:57, Dina Belova a écrit :
I have Christmas holidays till 12th January... So I don't really know 
if I will be available on 6th Jan.




Oh ok. Who else is still on vacation around then?
We can do our next meeting on 12th Jan, but I'm concerned about the 
delivery of Climate 0.1, which would be one week after.


-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Yair Fried
I would also like to point out that, since Brent used compute.build_timeout as 
the timeout value:
***It takes more time to update the FLIP in the nova DB than for a VM to build***

Yair

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, December 19, 2013 12:42:56 PM
Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
FloatingIPChecker control point

On 12/19/2013 03:31 AM, Yair Fried wrote:
 Hi Guys,
 I ran into this issue trying to incorporate this test into the
 cross_tenant_connectivity scenario:
 launching 2 VMs in different tenants.
 What I saw is that in the gate it fails half the time (the original
 test passes without issues) and ONLY on the 2nd VM (the first FLIP
 propagates fine).
 https://bugs.launchpad.net/nova/+bug/1262529
 
 I don't see this in:
 1. my local RHOS-Havana setup
 2. the cross_tenant_connectivity scenario without the control point
 (test passes without issues)
 3. test_network_basic_ops runs in the gate
 
 So here's my somewhat less experienced opinion:
 1. this happens due to stress (more than a single FLIP/VM)
 2. (as Brent said) timeout intervals between polls are too short
 3. FLIP is usually reachable long before it is seen in the nova DB (also
 from manual experience), so blocking the test until it reaches the nova
 DB doesn't make sense to me. If we could do this in a different thread,
 then maybe, but using a Pass/Fail criterion to test for a timing issue
 seems wrong. Especially since, as I understand it, the issue is not IF it
 reaches the nova DB, only WHEN.
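
 For reference, the control point being debated boils down to a poll like
 this (a shell sketch; the real check lives in tempest's Python code):

     # block until the floating IP shows up in nova's view of the server,
     # giving up after compute.build_timeout seconds
     timeout "$BUILD_TIMEOUT" bash -c \
       "until nova show $SERVER_ID | grep -q $FLOATING_IP; do sleep 5; done"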
 
 I would like to, at least, move this check from its place as a blocker
 to later in the test. Before this is done, I would like to know if
 anyone else has seen the same problems Brent describes prior to this
 patch being merged.

 Regarding Jay's scenario suggestion, I think this should not be a part
 of network_basic_ops, but rather a separate stress scenario creating
 multiple VMs and testing for FLIP associations and propagation time.

+1 there is no need to overload that one scenario. A dedicated one would
be fine.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] PTL elections: nominations

2013-12-19 Thread Sergey Lukjanov
Hey all,

It was decided to choose the PTL for the Climate project and I've volunteered
to handle it. All important questions were discussed at the last IRC team
meeting [1]. You can find details on a wiki page [2].

So, we'd like to choose the PTL for the rest of the Icehouse release cycle. To
announce your candidacy please write to the openstack-dev at
lists.openstack.org mailing list with the following subject:
[climate] PTL Candidacy.
I'll confirm the nomination and add your candidacy to the wiki page.

Nominations are now open and will remain open until 23:59 UTC December
24, 2013.
Elections will be open Dec 25 - Jan 2.

[1]
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-12-17-09.59.html
[2] https://wiki.openstack.org/wiki/Climate/PTL_Elections_Icehouse

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [ml2] Canceling next two weekly meetings

2013-12-19 Thread Kyle Mestery
Since the next two Neutron ML2 meetings fall on Dec. 25
and Jan. 1, we'll cancel both and reconvene on Jan. 8.

Thanks!
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Reminder: Meeting tomorrow

2013-12-19 Thread Kyle Mestery
Apologies folks, I meant 2200 UTC Thursday. We'll still do the
meeting today.

On Dec 18, 2013, at 4:40 PM, Don Kehn dek...@gmail.com wrote:

 Wouldn't 2200 UTC be in about 20 mins?
 
 
 On Wed, Dec 18, 2013 at 3:32 PM, Itsuro ODA o...@valinux.co.jp wrote:
 Hi,
 
 It seems the meeting was not held on 2200 UTC on Wednesday (today).
 
 Do you mean 2200 UTC on Thursday ?
 
 Thanks.
 
 On Thu, 12 Dec 2013 11:43:03 -0600
 Kyle Mestery mest...@siliconloons.com wrote:
 
  Hi everyone:
 
  We had a meeting around Neutron Third-Party testing today on IRC.
  The logs are available here [1]. We plan to host another meeting
  next week, and it will be at 2200 UTC on Wednesday in the
  #openstack-meeting-alt channel on IRC. Please attend and update
  the etherpad [2] with any items relevant to you before then.
 
  Thanks again!
  Kyle
 
  [1] 
  http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
  [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 On Wed, 18 Dec 2013 15:10:46 -0600
 Kyle Mestery mest...@siliconloons.com wrote:
 
  Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
  We'll be looking at this etherpad [1] again, and continuing discussions from
  last week.
 
  Thanks!
  Kyle
 
  [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 --
 Itsuro ODA o...@valinux.co.jp
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 
 Don Kehn
 303-442-0060
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-12-19 Thread John Garbutt
Response inline...

On 19 December 2013 13:05, Irena Berezovsky ire...@mellanox.com wrote:
 Hi John,
 I totally agree that we should define the use cases both for administration 
 and tenant that powers the VM.
 Since we are trying to support PCI pass-through network, let's focus on the 
 related use cases.
 Please see my comments inline.

Cool.

 Regards,
 Irena
 -Original Message-
 From: John Garbutt [mailto:j...@johngarbutt.com]
 Sent: Thursday, December 19, 2013 1:42 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

 Apologies for being late onto this thread, and not making the meeting the 
 other day.
 Also apologies this is almost totally a top post.

 On 17 December 2013 15:09, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Firstly, I disagree that
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support is an
 accurate reflection of the current state.  It's a very unilateral
 view, largely because the rest of us had been focussing on the google
 document that we've been using for weeks.

 I haven't seen the google doc. I got involved through the blueprint review of 
 this:
 https://blueprints.launchpad.net/nova/+spec/pci-extra-info

 I assume its this one?
 https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs

 On a quick read, my main concern is separating out the user more:
 * administration (defines pci-flavor, defines which hosts can provide it, 
 defines server flavor...)
 * person who boots server (picks server flavor, defines neutron ports)

 Note, I don't see the person who boots the server ever seeing the pci-flavor, 
 only understanding the server flavor.
 [IrenaB] I am not sure that elaborating the PCI device request into the server 
 flavor is the right approach for the PCI pass-through network case. A vNIC by 
 its nature is something dynamic that can be plugged or unplugged after VM boot; 
 a server flavor is quite static.

I really just meant that the server flavor would specify the type of NIC to attach.

The existing port specs, etc, define how many nics, and you can hot
plug as normal; the VIF plugger code is just told by the server flavor
whether it is able to PCI passthrough, and which devices it can pick from.
The idea being that, combined with the neutron network-id, you know what to
plug.

The more I talk about this approach the more I hate it :(

 We might also want a nic-flavor that tells neutron information it requires, 
 but let's get to that later...
 [IrenaB] nic flavor is definitely something that we need in order to choose 
 whether a high performance (PCI pass-through) or virtio (i.e. OVS) nic will be 
 created.

Well, I think it's the right way to go, rather than overloading the server
flavor with hints about which PCI devices you could use.

 Secondly, I totally disagree with this approach.  This assumes that
 description of the (cloud-internal, hardware) details of each compute
 node is best done with data stored centrally and driven by an API.  I
 don't agree with either of these points.

 Possibly, but I would like to first agree on the use cases and data model we 
 want.

 Nova has generally gone for APIs over config in recent times.
 Mostly so you can do run-time configuration of the system.
 But let's just see what makes sense when we have the use cases agreed.

 On 2013年12月16日 22:27, Robert Li (baoli) wrote:
 I'd like to give you guy a summary of current state, let's discuss it
 then.
 https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


 1)  fade out alias ( i think this ok for all)
 2)  white list became pci-flavor ( i think this ok for all)
 3)  address simple regular expression support: only * and a number
 range is supported [hex-hex]. ( i think this ok?)
 4)  aggregate : now it's clear enough, and won't impact SRIOV.  ( i
 think this irrelevant to SRIOV now)

 So... this means we have:

 PCI-flavor:
 * i.e. standardGPU, standardGPUnew, fastGPU, hdFlash1TB etc

 Host mapping:
 * decide on which hosts you allow a particular flavor to be used
 * note, the scheduler still needs to find out if any devices are free

 flavor (of the server):
 * usual RAM, CPU, Storage
 * use extra specs to add PCI devices
 * example:
 ** add one PCI device, choice of standardGPU or standardGPUnew
 ** also add: one hdFlash1TB

 Now, the other bit is SRIOV... At a high level:

 Neutron:
 * user wants to connect to a particular neutron network
 * user wants a super-fast SRIOV connection

 Administration:
 * needs to map each PCI device to the neutron network it connects to

 The big question is:
 * is this a specific SRIOV only (provider) network
 * OR... are other non-SRIOV connections also made to that same network

 I feel we have to go for the latter. Imagine a network on VLAN 42, you might 
 want some SRIOV into that network, and some OVS connecting into the same 
 network. The user might have VMs connected using both methods, so wants the 
 same IP address ranges and same network id spanning both.

Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Randall Burt
+1


Sent from my Verizon Wireless 4G LTE Smartphone



 Original message 
From: Steve Baker sba...@redhat.com
Date: 12/18/2013 8:28 PM (GMT-06:00)
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Nomination for heat-core


I would like to nominate Bartosz Górski to be a heat-core reviewer. His reviews 
to date have been valuable and his other contributions to the project have 
shown a sound understanding of how heat works.

Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Docker] Environment variables

2013-12-19 Thread John Garbutt
On 17 December 2013 12:53, Daniel P. Berrange berra...@redhat.com wrote:
 On Mon, Dec 16, 2013 at 01:04:33PM -0800, Dan Smith wrote:
  eg use an 'env_' prefix for glance image attributes
 
  We've got a couple of cases now where we want to override these
  same things on a per-instance basis. Kernel command line args
  is one other example. Other hardware overrides like disk/net device
  types are another possibility
 
  Rather than invent new extensions for each, I think we should
  have a way to pass arbitrary attributes along with the boot
  API call, that a driver would handle in much the same way as
  they do for glance image properties. Basically think of it as
  a way to customize any image property per instance created.

 Personally, I think having a bunch of special case magic namespaces
 (even if documented) is less desirable than a proper API to do something
 like this. Especially a namespace that someone else could potentially
 use legitimately that would conflict.

 To me, this feels a lot like what I'm worried this effort will turn
 into, which is making containers support in Nova look like a bolt-on
 thing with a bunch of specialness required to make it behave.

 NB I'm not saying that everything related to containers should be done
 with metadata properties. I just feel that this is appropriate for
 setting of environment variables and some other things like kernel
 command line args, since it gives a consistent approach to use for
 setting those per-image vs per-instance.

+1 it seems a fairly nice mapping for kernel args and environment variables.

Cloud-Init could add the environment variables inside VMs if we felt
so inclined.

Discoverability isn't awesome though.

 Anyone remember this bolt-on gem?

 nova boot --block-device-mapping
 vda=965453c9-02b5-4d5b-8ec0-3164a89bf6f4:::0 --flavor=m1.tiny
 --image=6415797a-7c03-45fe-b490-f9af99d2bae0 BFV

 I found that one amidst hundreds of forum threads of people confused
 about what incantation of magic they were supposed to do to make it
 actually boot from volume.

 Everything about the way you use block device mapping is plain
 awful - even the bits that were done as proper API extensions.
 I don't think the design failures there apply in this case.

 If we do env variables as metadata properties, then you may well
 not end up even needing to pass them to 'nova boot' in the common
 case, since it'll likely be sufficient to have them just set against
 the image in glance most of the time.

+1

Going further, we set PV vs HVM via image properties. It would be nice
to override that on a per-boot basis in a way that matches these other cases.

Some generic way of setting a per-boot equivalent of an image
property might be the best approach? Going back to glance protected
properties, we would need a Nova equivalent. But perhaps a whitelist
of properties you can override on boot would be best?
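
A sketch of the two halves of that idea via the image-property route discussed
above (os_command_line is, if I remember right, the existing libvirt
kernel-args property; the env_ prefix is the proposal from this thread; the
per-boot --image-property flag is invented, only the glance half works today):

    # per-image (works today): attach the properties in glance
    glance image-update --property os_command_line='console=ttyS0' <image-id>
    glance image-update --property env_DB_HOST=10.0.0.5 <image-id>
    # hypothetical per-boot override, subject to a whitelist:
    nova boot --flavor m1.small --image <image-id> \
      --image-property os_command_line='console=hvc0' my-server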

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread John Garbutt
On 16 December 2013 15:50, Daniel P. Berrange berra...@redhat.com wrote:
 On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:
 On 16 December 2013 15:25, Daniel P. Berrange berra...@redhat.com wrote:
  On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
  I'd like to propose the following for the V3 API (we will not touch V2
  in case operators have applications that are written against this – this
  may be the case for libvirt or xen. The VMware API support was added
  in I1):
 
   1.  We formalize the data that is returned by the API [1]
 
  Before we debate what standard data should be returned we need
  detail of exactly what info the current 3 virt drivers return.
  IMHO it would be better if we did this all in the existing wiki
  page associated with the blueprint, rather than etherpad, so it
  serves as a permanent historical record for the blueprint design.

 +1

  While we're doing this I think we should also consider whether
  the 'get_diagnostics' API is fit for purpose more generally.
  eg currently it is restricted to administrators. Some, if
  not all, of the data libvirt returns is relevant to the owner
  of the VM but they can not get at it.

 Ceilometer covers that ground, we should ask them about this API.

 If we consider what is potentially in scope for ceilometer and
 subtract that from what the libvirt get_diagnostics impl currently
 returns, you pretty much end up with the empty set. This might cause
 us to question if 'get_diagnostics' should exist at all from the
 POV of the libvirt driver's impl. Perhaps vmware/xen return data
 that is out of scope for ceilometer ?

Hmm, a good point.

  For a cloud administrator it might be argued that the current
  API is too inefficient to be useful in many troubleshooting
  scenarios since it requires you to invoke it once per instance
  if you're collecting info on a set of guests, eg all VMs on
  one host. It could be that cloud admins would be better
  served by an API which returned info for all VMs on a host
  at once, if they're monitoring say, I/O stats across VM
  disks to identify one that is causing I/O trouble ? IOW, I
  think we could do with better identifying the usage scenarios
  for this API if we're to improve its design / impl.

 I like the API that helps you dig into info for a specific host that
 other systems highlight as problematic.
 You can do things that could be expensive to compute, but useful for
 troubleshooting.

 If things get expensive to compute, then it may well be preferable to
 have separate APIs for distinct pieces of interesting diagnostic
 data. eg If they only care about one particular thing, they don't want
 to trigger expensive computations of things they don't care about seeing.

Maybe that is what we need:
* API to get what ceilometer would tell you, maybe using its format
* API to perform expensive diagnostics

But then, we would just be duplicating ceilometer, which goes back to
your original point. And we are trying to get rid of the APIs that
just proxy to another service, so let's not add another one.

Maybe we should just remove this from the v3 API for now, and see who shouts?
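
For reference, the entry point we're discussing is the existing admin-only
call, whose output is entirely driver-specific (the keys shown are typical of
libvirt, as far as I recall):

    nova diagnostics <server-id>
    # libvirt returns keys such as cpu0_time, memory, vda_read_req, vnet0_rx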

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Daniel P. Berrange
On Thu, Dec 19, 2013 at 02:27:40PM +, John Garbutt wrote:
 On 16 December 2013 15:50, Daniel P. Berrange berra...@redhat.com wrote:
  On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:
  On 16 December 2013 15:25, Daniel P. Berrange berra...@redhat.com wrote:
   On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
   I'd like to propose the following for the V3 API (we will not touch V2
   in case operators have applications that are written against this – this
   may be the case for libvirt or xen. The VMware API support was added
   in I1):
  
1.  We formalize the data that is returned by the API [1]
  
   Before we debate what standard data should be returned we need
   detail of exactly what info the current 3 virt drivers return.
   IMHO it would be better if we did this all in the existing wiki
   page associated with the blueprint, rather than etherpad, so it
   serves as a permanent historical record for the blueprint design.
 
  +1
 
   While we're doing this I think we should also consider whether
   the 'get_diagnostics' API is fit for purpose more generally.
   eg currently it is restricted to administrators. Some, if
   not all, of the data libvirt returns is relevant to the owner
   of the VM but they can not get at it.
 
  Ceilometer covers that ground, we should ask them about this API.
 
  If we consider what is potentially in scope for ceilometer and
  subtract that from what the libvirt get_diagnostics impl currently
  returns, you pretty much end up with the empty set. This might cause
  us to question if 'get_diagnostics' should exist at all from the
  POV of the libvirt driver's impl. Perhaps vmware/xen return data
  that is out of scope for ceilometer ?
 
 Hmm, a good point.

So perhaps I'm just being dumb, but I deployed ceilometer and could
not figure out how to get it to print out the stats for a single
VM from its CLI ? eg, can someone show me a command line invocation
for ceilometer that displays CPU, memory, disk and network I/O stats
in one go ?


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Agenda for today's meeting

2013-12-19 Thread Collins, Sean
Hi,

Agenda for today's meeting is pretty light - if you have something you'd
like to discuss please add it to the wiki page

https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam#Agenda_for_Dec_19_2013

I would also ask that when we conduct the meeting - we stick to the
agenda that has been posted, and hold any other discussions until the
open discussion period. The quicker we get through the agenda, the more
time we have at the end for open discussion.

See you all soon!

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Vladik Romanovsky
I think it was:

ceilometer sample-list -m cpu_util -q 'resource_id=vm_uuid'

Vladik

- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: John Garbutt j...@johngarbutt.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, 19 December, 2013 9:34:02 AM
 Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
 
 On Thu, Dec 19, 2013 at 02:27:40PM +, John Garbutt wrote:
  On 16 December 2013 15:50, Daniel P. Berrange berra...@redhat.com wrote:
   On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:
   On 16 December 2013 15:25, Daniel P. Berrange berra...@redhat.com
   wrote:
On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
I'd like to propose the following for the V3 API (we will not touch
V2
in case operators have applications that are written against this –
this
may be the case for libvirt or xen. The VMware API support was added
in I1):
   
 1.  We formalize the data that is returned by the API [1]
   
Before we debate what standard data should be returned we need
detail of exactly what info the current 3 virt drivers return.
IMHO it would be better if we did this all in the existing wiki
page associated with the blueprint, rather than etherpad, so it
serves as a permanent historical record for the blueprint design.
  
   +1
  
While we're doing this I think we should also consider whether
the 'get_diagnostics' API is fit for purpose more generally.
eg currently it is restricted to administrators. Some, if
not all, of the data libvirt returns is relevant to the owner
of the VM but they can not get at it.
  
   Ceilometer covers that ground, we should ask them about this API.
  
   If we consider what is potentially in scope for ceilometer and
   subtract that from what the libvirt get_diagnostics impl currently
   returns, you pretty much end up with the empty set. This might cause
   us to question if 'get_diagnostics' should exist at all from the
   POV of the libvirt driver's impl. Perhaps vmware/xen return data
   that is out of scope for ceilometer ?
  
  Hmm, a good point.
 
 So perhaps I'm just being dumb, but I deployed ceilometer and could
 not figure out how to get it to print out the stats for a single
 VM from its CLI ? eg, can someone show me a command line invocation
 for ceilometer that displays CPU, memory, disk and network I/O stats
 in one go ?
 
 
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Vladik Romanovsky
Or

ceilometer meter-list -q resource_id='vm_uuid'

- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: John Garbutt j...@johngarbutt.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, 19 December, 2013 9:34:02 AM
 Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
 
 On Thu, Dec 19, 2013 at 02:27:40PM +, John Garbutt wrote:
  On 16 December 2013 15:50, Daniel P. Berrange berra...@redhat.com wrote:
   On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:
   On 16 December 2013 15:25, Daniel P. Berrange berra...@redhat.com
   wrote:
On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:
I'd like to propose the following for the V3 API (we will not touch
V2
in case operators have applications that are written against this –
this
may be the case for libvirt or xen. The VMware API support was added
in I1):
   
 1.  We formalize the data that is returned by the API [1]
   
Before we debate what standard data should be returned we need
detail of exactly what info the current 3 virt drivers return.
IMHO it would be better if we did this all in the existing wiki
page associated with the blueprint, rather than etherpad, so it
serves as a permanent historical record for the blueprint design.
  
   +1
  
While we're doing this I think we should also consider whether
the 'get_diagnostics' API is fit for purpose more generally.
eg currently it is restricted to administrators. Some, if
not all, of the data libvirt returns is relevant to the owner
of the VM but they can not get at it.
  
   Ceilometer covers that ground, we should ask them about this API.
  
   If we consider what is potentially in scope for ceilometer and
   subtract that from what the libvirt get_diagnostics impl currently
   returns, you pretty much end up with the empty set. This might cause
   us to question if 'get_diagnostics' should exist at all from the
   POV of the libvirt driver's impl. Perhaps vmware/xen return data
   that is out of scope for ceilometer ?
  
  Hmm, a good point.
 
 So perhaps I'm just being dumb, but I deployed ceilometer and could
 not figure out how to get it to print out the stats for a single
 VM from its CLI ? eg, can someone show me a command line invocation
 for ceilometer that displays CPU, memory, disk and network I/O stats
 in one go ?
 
 
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Brent Eagles

Hi,

Yair Fried wrote:

I would also like to point out that, since Brent used compute.build_timeout as
the timeout value:
***it takes more time to update the FLIP in the nova DB than for a VM to build***

Yair


Agreed. I think that's an extremely important highlight of this 
discussion. Propagation of the floating IP is definitely bugged. In the 
small sample of logs (2) that I checked, the floating IP assignment 
propagated in around 10 seconds for test_network_basic_ops, but in the 
cross tenant connectivity test it took somewhere around 1 minute for the 
first assignment and something over 3 minutes (otherwise known as 
simply-too-long-to-find-out). Even if querying once a second were 
excessive - which I do not feel strongly enough about to say is anything 
other than a *possible* contributing factor - it should not take that long.


Cheers,

Brent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-19 Thread John Garbutt
On 4 December 2013 17:10, Russell Bryant rbry...@redhat.com wrote:
 I think option 3 makes the most sense here (pending anyone saying we
 should run away screaming from mox3 for some reason).  It's actually
 what I had been assuming since this thread a while back.

 This means that we don't need to *require* that tests get converted if
 you're changing one.  It just gets you bonus imaginary internet points.

 Requiring mock for new tests seems fine.  We can grant exceptions in
 specific cases if necessary.  In general, we should be using mock for
 new tests.

I have lost track a bit here.

The above seems like a sane approach. Do we all agree on that now?

Can we add the above text into here:
https://wiki.openstack.org/wiki/ReviewChecklist#Nova_Review_Checklist
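
For anyone landing here without context, a minimal sketch of the mock style
being proposed for new tests (a hypothetical example, not actual nova code):

import unittest
import mock

class FakeDriver(object):
    def get_info(self, instance):
        raise NotImplementedError()

class TestGetInfo(unittest.TestCase):
    # Patch the method on the class; the test never touches a real driver.
    @mock.patch.object(FakeDriver, 'get_info')
    def test_get_info_called_with_instance(self, mock_get_info):
        mock_get_info.return_value = {'state': 'running'}
        driver = FakeDriver()
        self.assertEqual({'state': 'running'}, driver.get_info('fake-instance'))
        mock_get_info.assert_called_once_with('fake-instance')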

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-19 Thread Collins, Sean
On Wed, Dec 18, 2013 at 10:29:35PM -0500, Shixiong Shang wrote:
 It is up to Sean to make the call, but I would love to see IBM team in the 
 meeting.
 
Agreed - If we can find a time that works for USA, Europe and
China that would be great. 

How good/bad is 1500 UTC? I don't trust my math :)



-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Frittoli, Andrea (Cloud Services)
My 2 cents:

In the test the floating IP is created via neutron API and later checked via
nova API.

So the test here is relying on (or trying to verify?) the network cache refresh
mechanism in nova.
This is something that we should test, but in a test dedicated to this.

The primary objective of test_network_basic_ops is to verify the network
plumbing and end-to-end connectivity, so it should be decoupled from things
like network cache refresh.

If the floating IP is associated via the neutron API, only the neutron API will
report the association in a timely manner.
If instead the floating IP is created via the nova API, this will update the
network cache automatically, not relying on the cache refresh mechanism, so
both the neutron and nova APIs will report the association in a timely manner
(this did not work some weeks ago, so it is something tempest tests should
catch).
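
To make the comparison concrete, the kind of check being discussed boils down
to something like the sketch below (an illustrative example using novaclient;
names and timeouts are assumptions, not the actual tempest code):

import time

def wait_for_floating_ip(nova_client, server_id, floating_ip,
                         timeout=60, interval=1):
    # Poll the nova API until the floating IP shows up in the server's
    # address list; the timeout is deliberately decoupled from
    # compute.build_timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        server = nova_client.servers.get(server_id)
        addresses = [addr['addr']
                     for addrs in server.addresses.values()
                     for addr in addrs]
        if floating_ip in addresses:
            return True
        time.sleep(interval)
    return False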

andrea

-Original Message-
From: Brent Eagles [mailto:beag...@redhat.com] 
Sent: 19 December 2013 14:53
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the
FloatingIPChecker control point

Hi,

Yair Fried wrote:
 I would also like to point out that, since Brent used 
 compute.build_timeout as the timeout value: ***it takes more time to 
 update the FLIP in the nova DB than for a VM to build***

 Yair

Agreed. I think that's an extremely important highlight of this discussion.
Propagation of the floating IP is definitely bugged. In the small sample of
logs (2) that I checked, the floating IP assignment propagated in around 10
seconds for test_network_basic_ops, but in the cross tenant connectivity
test it took somewhere around 1 minute for the first assignment and
something over 3 minutes (otherwise known as simply-too-long-to-find-out). Even if
querying once a second were excessive - which I do not feel strongly
enough about to say is anything other than a *possible* contributing factor
- it should not take that long.

Cheers,

Brent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [trove] datastore migration issues

2013-12-19 Thread Greg Hill
We did consider doing that, but decided it wasn't really any different from the 
other options as it required the deployer to know to alter that data.  That 
would require the fewest code changes, though.  It was also my understanding 
that mysql variants were a possibility as well (percona and mariadb), which is 
what brought on the objection to just defaulting in code.  Also, we can't 
derive the version being used, so we *could* fill it with a dummy version and 
assume mysql, but I don't feel like that solves the problem or the objections 
to the earlier solutions.  And then we also have bogus data in the database.

Since there's no perfect solution, I'm really just hoping to gather consensus 
among people who are running existing trove installations and have yet to 
upgrade to the newer code about what would be easiest for them.  My 
understanding is that list is basically HP and Rackspace, and maybe Ebay?, but 
the hope was that bringing the issue up on the list might confirm or refute 
that assumption and drive the conversation to a suitable workaround for those 
affected, which hopefully isn't that many organizations at this point.

The options are basically:

1. Put the onus on the deployer to correct existing records in the database.
2. Have the migration script put dummy data in the database which you have to 
correct.
3. Put the onus on the deployer to fill out values in the config value

Greg

On Dec 18, 2013, at 8:46 PM, Robert Myers 
myer0...@gmail.com wrote:


There is the database migration for datastores. We should add a function to 
back-fill the existing data with either dummy data or set it to 'mysql', as 
that was the only possibility before datastores.

On Dec 18, 2013 3:23 PM, Greg Hill 
greg.h...@rackspace.com wrote:
I've been working on fixing a bug related to migrating existing installations 
to the new datastore code:

https://bugs.launchpad.net/trove/+bug/1259642

The basic gist is that existing instances won't have any data in the 
datastore_version_id field in the database unless we somehow populate that data 
during migration, and not having that data populated breaks a lot of things 
(including the ability to list instances or delete or resize old instances).  
It's impossible to populate that data in an automatic, generic way, since it's 
highly vendor-dependent on what database and version they currently support, 
and there's not enough data in the older schema to populate the new tables 
automatically.

So far, we've come up with some non-optimal solutions:

1. The first iteration was to assume 'mysql' as the database manager on 
instances without a datastore set.
2. The next iteration was to make the default value be configurable in 
trove.conf, but default to 'mysql' if it wasn't set.
3. It was then proposed that we could just use the 'default_datastore' value 
from the config, which may or may not be set by the operator.

My problem with any of these approaches beyond the first is that requiring 
people to populate config values in order to successfully migrate to the newer 
code is really no different than requiring them to populate the new database 
tables with appropriate data and updating the existing instances with the 
appropriate values.  Either way, it's now highly dependent on people deploying 
the upgrade to know about this change and react accordingly.

Does anyone have a better solution that we aren't considering?  Is this even 
worth the effort given that trove has so few current deployments that we can 
just make sure everyone is populating the new tables as part of their upgrade 
path and not bother fixing the code to deal with the legacy data?

Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Matt Riedemann



On Thursday, December 19, 2013 8:49:13 AM, Vladik Romanovsky wrote:

Or

ceilometer meter-list -q resource_id='vm_uuid'

- Original Message -

From: Daniel P. Berrange berra...@redhat.com
To: John Garbutt j...@johngarbutt.com
Cc: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, 19 December, 2013 9:34:02 AM
Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

On Thu, Dec 19, 2013 at 02:27:40PM +, John Garbutt wrote:

On 16 December 2013 15:50, Daniel P. Berrange berra...@redhat.com wrote:

On Mon, Dec 16, 2013 at 03:37:39PM +, John Garbutt wrote:

On 16 December 2013 15:25, Daniel P. Berrange berra...@redhat.com
wrote:

On Mon, Dec 16, 2013 at 06:58:24AM -0800, Gary Kotton wrote:

I'd like to propose the following for the V3 API (we will not touch
V2
in case operators have applications that are written against this –
this
may be the case for libvirt or xen. The VMware API support was added
in I1):

  1.  We formalize the data that is returned by the API [1]


Before we debate what standard data should be returned we need
detail of exactly what info the current 3 virt drivers return.
IMHO it would be better if we did this all in the existing wiki
page associated with the blueprint, rather than etherpad, so it
serves as a permanent historical record for the blueprint design.


+1


While we're doing this I think we should also consider whether
the 'get_diagnostics' API is fit for purpose more generally.
eg currently it is restricted to administrators. Some, if
not all, of the data libvirt returns is relevant to the owner
of the VM but they can not get at it.


Ceilometer covers that ground, we should ask them about this API.


If we consider what is potentially in scope for ceilometer and
subtract that from what the libvirt get_diagnostics impl currently
returns, you pretty much end up with the empty set. This might cause
us to question if 'get_diagnostics' should exist at all from the
POV of the libvirt driver's impl. Perhaps vmware/xen return data
that is out of scope for ceilometer ?


Hmm, a good point.


So perhaps I'm just being dumb, but I deployed ceilometer and could
not figure out how to get it to print out the stats for a single
VM from its CLI ? eg, can someone show me a command line invocation
for ceilometer that displays CPU, memory, disk and network I/O stats
in one go ?


Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I just wanted to point out for anyone that hasn't reviewed it yet, but 
Gary's latest design wiki [1] is quite a departure from his original 
set of patches for this blueprint, which was pretty straight-forward, 
just namespacing the diagnostics dict when using the nova v3 API.  The 
keys were all still hypervisor-specific.


The proposal now is much more generic and attempts to translate 
hypervisor-specific keys/data into a common standard versioned set and 
allows for some wiggle room for the drivers to still provide custom 
data if necessary.


I think this is a better long-term solution, but it is a lot more work 
than the original blueprint, and given there seems to be some feeling of 
"does nova even need this API, can ceilometer provide it instead?" I'd 
like there to be some agreement within nova that this is the right way 
to go before Gary spends a bunch of time on it - and I as the bp 
sponsor spend a bunch of time reviewing it. :)


[1] https://wiki.openstack.org/wiki/Nova_VM_Diagnostics

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Support for Django 1.6

2013-12-19 Thread Thomas Goirand
Hi,

Sid (Debian unstable) has Django 1.6. Is it planned to add support for it?
I currently don't know what to do with the Horizon package, as it's
broken... :(

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Daniel P. Berrange
On Tue, Dec 17, 2013 at 04:28:30AM -0800, Gary Kotton wrote:
 Hi,
 Following the discussion yesterday I have updated the wiki - please see
 https://wiki.openstack.org/wiki/Nova_VM_Diagnostics. The proposal is
 backwards compatible and will hopefully provide us with the tools to be
 able to troubleshoot VM issues.

Some comments

 If the driver is unable to return the value or does not have
  access to it at the moment then it should return 'n/a'.

I think it is better if the driver just omitted any key that
it doesn't support altogether. That avoids clients / users
having to do magic string comparisons to identify omitted
data.

 An ID for the diagnostics version. The structure defined below
  is version 1 (Integer)

What are the proposed semantics for version numbers? Are they incremented
on any change, or only on backwards-incompatible changes?

 The amount of time in seconds that the VM has been running (Integer)

I'd suggest nanoseconds here. I've been burnt too many times in the
past providing APIs where we rounded data to a coarse unit like seconds.

Let client programs convert from nanoseconds to seconds if they wish
to display it in that way, but keep the API with the full precision.

  The version of the raw data

Same question as previously.



The allowed keys in network/disk/memory details seem to be
unduly limited. Just having a boolean activity for disk
or NICs seems almost entirely useless. eg the VM might have
sent 1 byte when it first booted and nothing more for the
next 10 days, and an admin can't see this.

I'd suggest we should follow the much expanded set of possible
stats shown by the libvirt driver. These are pretty common
things to show for disk/nic activity and a driver wouldn't have
to support all of them if it doesn't have that info.

It would be nice to have CPU stats available too. 
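
For illustration, a payload along these lines might look like the sketch
below (the keys are assumptions drawn from this thread and the wiki draft,
not a settled schema; unsupported keys would simply be omitted):

diagnostics = {
    'version': 1,                 # diagnostics format version
    'uptime': 46664,              # wall-clock seconds since boot
    'num_cpus': 2,
    'cpu_details': [{'time': 17300000000}],          # execution time, ns
    'nic_details': [{'mac_address': 'fa:16:3e:00:00:01',
                     'rx_octets': 543211, 'rx_packets': 421,
                     'tx_octets': 123456, 'tx_packets': 310}],
    'disk_details': [{'read_bytes': 262144, 'read_requests': 112,
                      'write_bytes': 5778432, 'write_requests': 888}],
    'memory_details': {'maximum': 2048, 'used': 1101},  # MB
}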


 http://berrange.com/  -o-  http://www.flickr.com/photos/dberrange/

BTW it would be nice if you could get your email program to not
mangle URLs in mails you're replying to. In this case it was just
links in a signature so it didn't matter, but in other messages it
mangles stuff in the body of the message :-( It makes it painful
to read the context.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-19 Thread Dmitry Mescheryakov
2013/12/19 Fox, Kevin M kevin@pnnl.gov

 How about a different approach then... OpenStack has thus far been very
 successful providing an API and plugins for dealing with things that cloud
 providers need to be able to switch out to suit their needs.

 There seems to be two different parts to the unified agent issue:
  * How to get rpc messages to/from the VM from the thing needing to
 control it.
  * How to write a plugin to go from a generic rpc mechanism, to doing
 something useful in the vm.

 How about standardising what a plugin looks like, python api, c++ api,
 etc. It won't have to deal with transport at all.

 Also standardize the api the controller uses to talk to the system, rest
 or amqp.


I think that is what we discussed when we tried to select between Salt +
oslo.messaging and pure oslo.messaging
framework for the agent. As you can see, we didn't come to an agreement so far
:-) Also Clint started a new thread to discuss what, I believe, you defined
as the first part of unified agent issue. For clarity, the thread I am
referring to is

http://lists.openstack.org/pipermail/openstack-dev/2013-December/022690.html



 Then the mechanism is an implementation detail. If rackspace wants to do a
 VM serial driver, thats cool. If you want to use the network, that works
 too. Savanna/Trove/etc don't have to care which mechanism is used, only the
 cloud provider.

It's not quite as good as one and only one implementation to rule them all,
 but would allow providers to choose what's best for their situation and get
 as much code shared as can be.

 What do you think?

 Thanks,
 Kevin




 
 From: Tim Simpson [tim.simp...@rackspace.com]
 Sent: Wednesday, December 18, 2013 11:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

 Thanks for the summary Dmitry. I'm ok with these ideas, and while I still
 disagree with having a single, forced standard for RPC communication, I
 should probably let things pan out a bit before being too concerned.

 - Tim


 
 From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
 Sent: Wednesday, December 18, 2013 11:51 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

 Tim,

 The unified agent we proposing is based on the following ideas:
   * the core agent has _no_ functionality at all. It is a pure RPC
 mechanism with the ability to add whichever API needed on top of it.
   * the API is organized into modules which could be reused across
 different projects.
   * there will be no single package: each project (Trove/Savanna/Others)
 assembles its own agent based on API project needs.

 I hope that covers your concerns.

 Dmitry


 2013/12/18 Tim Simpson tim.simp...@rackspace.com
 I've been following the Unified Agent mailing list thread for awhile now
 and, as someone who has written a fair amount of code for both of the two
 existing Trove agents, thought I should give my opinion about it. I like
 the idea of a unified agent, but believe that forcing Trove to adopt this
 agent as its default will stifle innovation and harm the project.

 There are reasons Trove has more than one agent currently. While everyone
 knows about the Reference Agent written in Python, Rackspace uses a
 different agent written in C++ because it takes up less memory. The
 concerns which led to the C++ agent would not be addressed by a unified
 agent, which if anything would be larger than the Reference Agent is
 currently.

 I also believe a unified agent represents the wrong approach
 philosophically. An agent by design needs to be lightweight, capable of
 doing exactly what it needs to and no more. This is especially true for a
 project like Trove whose goal is to not to provide overly general PAAS
 capabilities but simply installation and maintenance of different
 datastores. Currently, the Trove daemons handle most logic and leave the
 agents themselves to do relatively little. This takes some effort as many
 of the first iterations of Trove features have too much logic put into the
 guest agents. However through perseverance the subsequent designs are
 usually cleaner and simpler to follow. A community-approved do-everything
 agent would endorse the wrong balance and lead to developers
 piling up logic on the guest side. Over time, features would become
 dependent on the Unified Agent, making it impossible to run or even
 contemplate light-weight agents.

 Trove's interface to agents today is fairly loose and could stand to be
 made stricter. However, it is flexible and works well enough. Essentially,
 the duck typed interface of the trove.guestagent.api.API class is used to
 send messages, and Trove conductor is used to receive them at which point
 it updates the database. Because 

Re: [openstack-dev] [trove] datastore migration issues

2013-12-19 Thread Robert Myers
I think that we need to be good citizens and at least add dummy data.
Because it is impossible to know who all is using this, the list you have
is probably incomplete. But Trove has been available for quite some time and
all these users will not be listening on this thread. Basically, anytime you
have a database migration that adds a required field you *have* to alter
the existing rows. If we don't, we're basically telling everyone who
upgrades that we, the 'Database as a Service' team, don't care about data
integrity in our own product :)
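
For what it's worth, the back-fill would be small; a hypothetical sketch of
such a migration step (table and column names here are assumptions, not the
actual trove schema code):

from sqlalchemy import MetaData, Table

DUMMY_VERSION_ID = '00000000-0000-0000-0000-000000000000'

def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # Only touch rows that predate datastores; deployers can later replace
    # the dummy value with the real datastore version.
    migrate_engine.execute(
        instances.update()
        .where(instances.c.datastore_version_id == None)  # noqa
        .values(datastore_version_id=DUMMY_VERSION_ID))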

Robert


On Thu, Dec 19, 2013 at 9:25 AM, Greg Hill greg.h...@rackspace.com wrote:

  We did consider doing that, but decided it wasn't really any different
 from the other options as it required the deployer to know to alter that
 data.  That would require the fewest code changes, though.  It was also my
 understanding that mysql variants were a possibility as well (percona and
 mariadb), which is what brought on the objection to just defaulting in
 code.  Also, we can't derive the version being used, so we *could* fill it
 with a dummy version and assume mysql, but I don't feel like that solves
 the problem or the objections to the earlier solutions.  And then we also
 have bogus data in the database.

   Since there's no perfect solution, I'm really just hoping to gather
 consensus among people who are running existing trove installations and
 have yet to upgrade to the newer code about what would be easiest for them.
  My understanding is that list is basically HP and Rackspace, and maybe
 Ebay?, but the hope was that bringing the issue up on the list might
 confirm or refute that assumption and drive the conversation to a suitable
 workaround for those affected, which hopefully isn't that many
 organizations at this point.

  The options are basically:

  1. Put the onus on the deployer to correct existing records in the
 database.
 2. Have the migration script put dummy data in the database which you have
 to correct.
 3. Put the onus on the deployer to fill out values in the config value

  Greg

  On Dec 18, 2013, at 8:46 PM, Robert Myers myer0...@gmail.com wrote:

  There is the database migration for datastores. We should add a function
 to back-fill the existing data with either dummy data or set it to
 'mysql', as that was the only possibility before datastores.
 On Dec 18, 2013 3:23 PM, Greg Hill greg.h...@rackspace.com wrote:

 I've been working on fixing a bug related to migrating existing
 installations to the new datastore code:

  https://bugs.launchpad.net/trove/+bug/1259642

  The basic gist is that existing instances won't have any data in the
 datastore_version_id field in the database unless we somehow populate that
 data during migration, and not having that data populated breaks a lot of
 things (including the ability to list instances or delete or resize old
 instances).  It's impossible to populate that data in an automatic, generic
 way, since it's highly vendor-dependent on what database and version they
 currently support, and there's not enough data in the older schema to
 populate the new tables automatically.

  So far, we've come up with some non-optimal solutions:

  1. The first iteration was to assume 'mysql' as the database manager on
 instances without a datastore set.
 2. The next iteration was to make the default value be configurable in
 trove.conf, but default to 'mysql' if it wasn't set.
 3. It was then proposed that we could just use the 'default_datastore'
 value from the config, which may or may not be set by the operator.

  My problem with any of these approaches beyond the first is that
 requiring people to populate config values in order to successfully migrate
 to the newer code is really no different than requiring them to populate
 the new database tables with appropriate data and updating the existing
 instances with the appropriate values.  Either way, it's now highly
 dependent on people deploying the upgrade to know about this change and
 react accordingly.

  Does anyone have a better solution that we aren't considering?  Is this
 even worth the effort given that trove has so few current deployments that
 we can just make sure everyone is populating the new tables as part of
 their upgrade path and not bother fixing the code to deal with the legacy
 data?

  Greg

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Horizon] Support for Django 1.6

2013-12-19 Thread Jeremy Stanley
On 2013-12-19 23:45:09 +0800 (+0800), Thomas Goirand wrote:
 Sid has Django 1.6. Is it planned to add support for it? I currently
 don't know what to do with the Horizon package, as it's currently
 broken... :(

You probably want to follow
https://blueprints.launchpad.net/horizon/+spec/django-1point6 and
pitch in on reviews, patches or discussions related to this work if
it is important to you.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Gary Kotton


On 12/19/13 5:50 PM, Daniel P. Berrange berra...@redhat.com wrote:

On Tue, Dec 17, 2013 at 04:28:30AM -0800, Gary Kotton wrote:
 Hi,
 Following the discussion yesterday I have updated the wiki - please see
 
https://wiki.openstack.org/wiki/Nova_VM_Diagnostics. The proposal is
 backwards compatible and will hopefully provide us with the tools to be
 able to troubleshoot VM issues.

Some comments

 If the driver is unable to return the value or does not have
  access to it at the moment then it should return 'n/a'.

I think it is better if the driver just omitted any key that
it doesn't support altogether. That avoids clients / users
having to do magic string comparisons to identify omitted
data.

I am fine with this. If the data is marked optional then whoever is
parsing the data should check whether the field exists first.


 An ID for the diagnostics version. The structure defined below
  is version 1 (Integer)

What are the proposed semantics for version numbers? Are they incremented
on any change, or only on backwards-incompatible changes?

The purpose of this was to be backward compatible. But I guess that if we
go with the optional approach then this is redundant.


 The amount of time in seconds that the VM has been running (Integer)

I'd suggest nano-seconds here. I've been burnt too many times in the
past providing APIs where we rounded data to a coarse unit like seconds.

Sure, sounds reasonable.


Let client programs convert from nanoseconds to seconds if they wish
to display it in that way, but keep the API with the full precision.

  The version of the raw data

I guess that this is redundant too.


Same question as previously.



The allowed keys in network/disk/memory details seem to be
unduly limited. Just having a boolean activity for disk
or NICs seems almost entirely useless. eg the VM might have
sent 1 byte when it first booted and nothing more for the
next 10 days, and an admin can't see this.

I'd suggest we should follow the much expanded set of possible
stats shown by the libvirt driver. These are pretty common
things to show for disk/nic activity and a driver wouldn't have
to support all of them if it doesn't have that info.

Ok. I was just trying to provide an indicator for the admin to dive into
the raw data. But I am fine with this.


It would be nice to have CPU stats available too.

At the moment libvirt only returns the cpu0_time. Can you please let me
know what other stats you would like here?

 


 
 http://berrange.com/  -o-  http://www.flickr.com/photos/dberrange/

BTW it would be nice if you could get your email program to not
mangle URLs in mails you're replying to. In this case it was just
links in a signature so it didn't matter, but in other messages it
mangles stuff in the body of the message :-( It makes it painful
to read the context.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Gary Kotton


On 12/19/13 6:07 PM, Daniel P. Berrange berra...@redhat.com wrote:

On Thu, Dec 19, 2013 at 08:02:16AM -0800, Gary Kotton wrote:
 
 
 On 12/19/13 5:50 PM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Tue, Dec 17, 2013 at 04:28:30AM -0800, Gary Kotton wrote:
  Hi,
  Following the discussion yesterday I have updated the wiki - please
see
  
 
 https://wiki.openstack.org/wiki/Nova_VM_Diagnostics. The proposal is
  backwards compatible and will hopefully provide us with the tools to
be
  able to troubleshoot VM issues.
 
 Some comments
 
  If the driver is unable to return the value or does not have
   access to it at the moment then it should return 'n/a'.
 
 I think it is better if the driver just omitted any key that
 it doesn't support altogether. That avoids clients / users
 having to do magic string comparisons to identify omitted
 data.
 
 I am fine with this. If the data is marked optional then whoever is
 parsing the data should check to see if the field exists prior.
 
 
  An ID for the diagnostics version. The structure defined below
   is version 1 (Integer)
 
  What are the proposed semantics for version numbers? Are they incremented
  on any change, or only on backwards-incompatible changes?
 
 The purpose of this was to be backward compatible. But I guess that if
we
 go with the optional approach then this is redundant.
 
 
  The amount of time in seconds that the VM has been running (Integer)
 
 I'd suggest nano-seconds here. I've been burnt too many times in the
 past providing APIs where we rounded data to a coarse unit like
seconds.
 
 Sure, sounds reasonable.

Oh hang on, when you say 'amount of time in seconds the VM has been
running'
you're meaning wall-clock time since boot.  Seconds is fine for wall clock
time actually.


I was getting mixed up with CPU utilization time, since libvirt doesn't
actually provide any way to get uptime.


 Let client programs convert from nanoseconds to seconds if they wish
 to display it in that way, but keep the API with the full precision.
 
   The version of the raw data
 
 I guess that this is redundant too.
 
 
 Same question as previously.
 
 
 
 The allowed keys in network/disk/memory details seem to be
 unduly limited. Just having a boolean activity for disk
 or NICs seems almost entirely useless. eg the VM might have
 sent 1 byte when it first booted and nothing more for the
 next 10 days, and an admin can't see this.
 
 I'd suggest we should follow the much expanded set of possible
 stats shown by the libvirt driver. These are pretty common
 things to show for disk/nic activity and a driver wouldn't have
 to support all of them if it doesn't have that info.
 
 Ok. I was just trying to provide an indicator for the admin to dive into
 the raw data. But I am fine with this.
 
 
 It would be nice to have CPU stats available too.
 
 At the moment libvirt only return the cpu0_time. Can you please let me
 know what other stats you would like here?

Since we have numCpus, I'd suggest we allow for a list of cpus in the
same way we do for disk/nics and returning the execution time split
out for each vCPU.  We could still have a merged execution time too
since I can imagine some hypervisors won't be able to provide the
split out per-vcpu time.

Good call. I'll add this!


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - what is the actual problem?

2013-12-19 Thread Tim Simpson
 I agree that enabling communication between guest and cloud service is a 
 common problem for most agent designs. The only exception is agent based on 
 hypervisor provided transport. But as far as I understand many people are 
 interested in network-based agent, so indeed we can start a thread (or 
continue the discussion in this one) on the problem.

Can't they co-exist?

Let's say the interface to talk to an agent is simply some class loaded from a 
config file, the way it is in Trove. So we have a class which has the methods 
add_user, get_filesystem_stats. 

The first, and let's say default, implementation sends a message over Rabbit 
using oslo.rpc or something like it. All the arguments turn into a JSON object 
and are deserialized on the agent side using oslo.rpc or some C++ code capable 
of reading JSON.

If someone wants to add a hypervisor provided transport, they could do so by 
instead changing this API class to one which contacts a service on the 
hypervisor node (using oslo.rpc) with arguments that include the guest agent ID 
and args, which is just a dictionary of the original arguments. This service 
would then shell out to execute some hypervisor specific command to talk to the 
given guest.

That's what I meant when I said I liked how Trove handles this now - because it 
uses a simple, non-prescriptive interface, it's easy to swap out yet still easy 
to use.
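
A rough sketch of that duck-typed arrangement (hypothetical names; the real
interface lives in trove.guestagent.api):

import importlib

class RpcGuestAgentAPI(object):
    # Default transport: JSON messages over the message bus.
    def __init__(self, rpc_client, guest_id):
        self.rpc = rpc_client
        self.guest_id = guest_id

    def add_user(self, username, password):
        self.rpc.cast(self.guest_id, 'add_user',
                      username=username, password=password)

    def get_filesystem_stats(self):
        return self.rpc.call(self.guest_id, 'get_filesystem_stats')

def load_agent_api(class_path, *args, **kwargs):
    # Load whichever API class the deployer configured; a hypervisor
    # transport would just be a different class_path in the config file.
    module_name, cls_name = class_path.rsplit('.', 1)
    cls = getattr(importlib.import_module(module_name), cls_name)
    return cls(*args, **kwargs)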

That would mean the job of a unified agent framework would be to offer up 
libraries that ease the creation of the API class: Python code to 
send messages in various styles / formats, as well as Python or C++ code to 
read and interpret those messages. 

Of course, we'd still settle on one default (probably network based) which 
would become the standard way of sending messages to guests so that package 
maintainers, the Infra team, and newbies to OpenStack wouldn't have to deal 
with dozens of different ways of doing things, but the important thing is that 
other methods of communication would still be possible.

Thanks,

Tim


From: Dmitry Mescheryakov [mailto:dmescherya...@mirantis.com] 
Sent: Thursday, December 19, 2013 7:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - 
what is the actual problem?

I agree that enabling communication between guest and cloud service is a common 
problem for most agent designs. The only exception is agent based on hypervisor 
provided transport. But as far as I understand many people are interested in 
network-based agent, so indeed we can start a thread (or continue discussion in 
this one) on the problem.

Dmitry

2013/12/19 Clint Byrum cl...@fewbar.com
So I've seen a lot of really great discussion of the unified agents, and
it has made me think a lot about the problem that we're trying to solve.

I just wanted to reiterate that we should be trying to solve real problems
and not get distracted by doing things right or even better.

I actually think there are three problems to solve.

* Private network guest to cloud service communication.
* Narrow scope highly responsive lean guest agents (Trove, Savanna).
* General purpose in-instance management agent (Heat).

Since the private network guests problem is the only one they all share,
perhaps this is where the three projects should collaborate, and the
other pieces should be left to another discussion.

Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM diagnostics - V3 proposal

2013-12-19 Thread Vladik Romanovsky
Ah, I think I've responded too fast, sorry.

meter-list provides a list of various measurements that are being done per 
resource.
sample-list provides a list of samples for every meter: ceilometer sample-list 
--meter cpu_util -q resource_id=vm_uuid
These samples can be aggregated over a period of time per meter and 
resource:
ceilometer statistics -m cpu_util -q 
'timestamp>=START;timestamp<=END;resource_id=vm_uuid' --period 3600
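
The same aggregation is also available programmatically; a minimal sketch
assuming python-ceilometerclient (credentials and endpoint are placeholders):

from ceilometerclient import client

cclient = client.get_client(2,
                            os_username='admin',
                            os_password='secret',
                            os_tenant_name='admin',
                            os_auth_url='http://localhost:5000/v2.0')
query = [{'field': 'resource_id', 'op': 'eq', 'value': 'vm_uuid'}]
# One Statistics object per --period bucket.
for stat in cclient.statistics.list(meter_name='cpu_util',
                                    q=query, period=3600):
    print(stat.period_start, stat.avg, stat.max)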

Vladik



- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: Vladik Romanovsky vladik.romanov...@enovance.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, John
 Garbutt j...@johngarbutt.com
 Sent: Thursday, 19 December, 2013 10:37:27 AM
 Subject: Re: [openstack-dev] [nova] VM diagnostics - V3 proposal
 
 On Thu, Dec 19, 2013 at 03:47:30PM +0100, Vladik Romanovsky wrote:
  I think it was:
  
  ceilometer sample-list -m cpu_util -q 'resource_id=vm_uuid'
 
 Hmm, a standard devstack deployment of ceilometer doesn't seem to
 record any performance stats at all - just shows me the static
 configuration parameters :-(
 
  ceilometer meter-list  -q 'resource_id=296b22c6-2a4d-4a8d-a7cd-2d73339f9c70'
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 | Name                | Type  | Unit     | Resource ID                          | User ID                          | Project ID                       |
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 | disk.ephemeral.size | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | disk.root.size      | gauge | GB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | instance            | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | instance:m1.small   | gauge | instance | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | memory              | gauge | MB       | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 | vcpus               | gauge | vcpu     | 296b22c6-2a4d-4a8d-a7cd-2d73339f9c70 | 96f9a624a325473daf4cd7875be46009 | ec26984024c1438e8e2f93dc6a8c5ad0 |
 +---------------------+-------+----------+--------------------------------------+----------------------------------+----------------------------------+
 
 
 If the admin user can't rely on ceilometer guaranteeing availability of
 the performance stats at all, then I think having an API in nova to report
 them is in fact justifiable. In fact it is probably justifiable no matter
 what, as a fallback way to check what VMs are doing in the face of failure
 of ceilometer / part of the cloud infrastructure.
 
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-19 Thread Clint Byrum
Excerpts from Sean Dague's message of 2013-12-19 04:14:51 -0800:
 On 12/19/2013 12:10 AM, Mike Perez wrote:
  On Tue, Dec 17, 2013 at 1:59 PM, Mike Perez thin...@gmail.com
  mailto:thin...@gmail.com wrote:
 snip
  I reviewed the TC meeting notes, and my question still stands.
  
  It seems the committee is touching on the point of there being a worry
  because if 
  it's a single company running the show, they can pull resources away and
  the 
  project collapses. My worry is just having one company attempting to
  design solutions 
  to use cases that work for them, will later not work for those potential
  companies that would 
  provide contributors.
  
  -Mike Perez
 
 Which is our fundamental chicken and egg problem. The Barbican team has
 said they've reached out to other parties, who have expressed interest
 in joining, but no one else has.
 
 The Heat experience shows that a lot of the time companies won't kick in
 resources until there is some kind of stamp of general approval.
 

I want to confirm this specific case. I joined the TripleO effort just
about a year ago. We needed an orchestration tool. If Heat hadn't been
in incubation we would have considered all other options. Because it
was incubated, even though some others might have been more or less
attractive, there was no question we would lend our efforts to Heat.

Had we just decided to build our own, or try to enhance Ansible or
salt-cloud, we'd have likely had to abandon that effort as Heat improved
beyond their scope in the context of managing OpenStack API's.

 If you showed up early, with a commitment to work openly, the fact that
 the project maps to your own use cases really well isn't a bug, it's a
 feature. I don't want to hold up a team from incubating because other
 people stayed on the sidelines. That was actually exactly what was going
 on with Heat, where lots of entities thought they would keep that side
 of the equation proprietary, or outside of OpenStack. By bringing Heat
 in, we changed the equation, I think massively for the better.
 

Right, contributing to a project that is already part of
OpenStack means you don't have to have _another_ conversation with
management/legal/etc. about contributing to _another_ OpenSource project
with slightly different governance/licensing/affiliation. An
organization can align its strategy around OpenStack, earn influence on
the board/TC/dev teams/etc.

So when it is time to open source something as part of that, the org can,
I hope, count on OpenStack to welcome them and shout to the world that
there is a new X in town and everybody else should take a good long
look at it and consider dropping their own X in favor of contributing
to this one.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Blocking issue with ring rebalancing

2013-12-19 Thread Nikolay Markov
Hi,

Our team ran into some serious trouble with the performance of
'swift-ring-builder rebalance' after some recent changes. In our
environment it takes about 8 minutes, and that is not the
maximum. This is a real blocker for us.

This issue is reproducible on Ubuntu 12.04 + Python 2.7. The fun fact
is it works as expected on CentOS + Python 2.6.

I created a bug on launchpad regarding this:
https://bugs.launchpad.net/swift/+bug/1262166

Could anybody please participate in the discussion on how to overcome it?
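
To help narrow it down, profiling a rebalance directly on both interpreters
should show where the time goes; a sketch (the add_dev keys are from memory
and may differ slightly between swift versions):

import cProfile
from swift.common.ring.builder import RingBuilder

builder = RingBuilder(18, 3, 1)  # part_power, replicas, min_part_hours
for i in range(4):
    builder.add_dev({'id': i, 'region': 0, 'zone': i, 'weight': 100.0,
                     'ip': '127.0.0.1', 'port': 6000 + i,
                     'device': 'sda%d' % i})
cProfile.run('builder.rebalance()', sort='cumulative')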


-- 
Best regards,
Nick Markov,
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Jaromir Coufal
So basically this is our first proposal what we send out: 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022196.html


After Horizon meetings, several e-mails and also a couple of other 
discussions with people who are for/against the codebase merge, it looks 
like, in the end, upstream leans towards the 'umbrella' solution.


After all, +1 for the umbrella solution from my side too. Tuskar UI will get 
closer to the nature of the project (based on Horizon, UI-related 
audience). And at the same time, we will not rush things before the 
project graduates. In Icehouse we can more easily reach the goals of both - 
Horizon as well as Tuskar UI - and after the Icehouse release we can circle 
back and get to the codebase merge in the end.


Do you all agree?

-- Jarda

On 2013/18/12 22:33, Gabriel Hurley wrote:

 From my experience, directly adding incubated projects to the main Horizon 
codebase prior to graduation has been fraught with peril. That said, the closer 
they can be together prior to the graduation merge, the better.

I like the idea of these types of projects being under the OpenStack Dashboard 
Program umbrella. Ideally I think it would be a jointly-managed resource in 
Gerrit. The Horizon Core folks would have +2 power, but the Tuskar core folks 
would also have +2 power. (I'm 90% certain that can be done in the Gerrit 
admin...)

That way development speed isn't bottlenecked by Horizon Core, but there's a 
closer tie-in with the people who may ultimately be maintaining it. It becomes 
easier to keep track of, and can be more easily guided in the right directions. 
With a little work incubated dashboard components like this could even be made 
to be a non-gating part of the testing infrastructure to indicate when things 
change or break.

Adding developers to Horizon Core just for the purpose of reviewing an 
incubated umbrella project is not the right way to do things at all.  If my 
proposal of two separate groups having the +2 power in Gerrit isn't technically 
feasible then a new group should be created for management of umbrella projects.

All the best,

  - Gabriel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Lyle, David
So after a lot of consideration, my opinion is the two code bases should stay 
in separate repos under the Horizon Program, for a few reasons:
-Adding a large chunk of code for an incubated project is likely going to cause 
the Horizon delivery some grief due to dependencies and packaging issues at the 
distro level.
-The code in Tuskar-UI is currently in a large state of flux/rework.  The 
Tuskar-UI code needs to be able to move quickly and at times drastically, which 
could be detrimental to the stability of Horizon.  And conversely, the 
stability needs of Horizon can be detrimental to the speed at which Tuskar-UI 
can change.
-Horizon Core can review changes in the Tuskar-UI code base and provide 
feedback without the code needing to be integrated in Horizon proper.  
Obviously, with an eye to the code bases merging in the long run.

As far as core group organization, I think the current Tuskar-UI core should 
maintain their +2 for only Tuskar-UI.  Individuals who make significant review 
contributions to Horizon will certainly be considered for Horizon core in time. 
 I agree with Gabriel's suggestion of adding Horizon Core to tuskar-UI core.  
The idea being that Horizon core is looking for compatibility with Horizon 
initially and working toward a deeper understanding of the Tuskar-UI code base. 
 This will help ensure the integration process goes as smoothly as possible 
when Tuskar/TripleO comes out of incubation. 

I look forward to being able to merge the two code bases, but I don't think the 
time is right yet and Horizon should stick to only integrating code into 
OpenStack Dashboard that is out of incubation.  We've made exceptions in the 
past, and they tend to have unfortunate consequences.

-David


 -Original Message-
 From: Jiri Tomasek [mailto:jtoma...@redhat.com]
 Sent: Thursday, December 19, 2013 4:40 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge
 
 On 12/19/2013 08:58 AM, Matthias Runge wrote:
  On 12/18/2013 10:33 PM, Gabriel Hurley wrote:
 
  Adding developers to Horizon Core just for the purpose of reviewing
  an incubated umbrella project is not the right way to do things at
  all.  If my proposal of two separate groups having the +2 power in
  Gerrit isn't technically feasible then a new group should be created
  for management of umbrella projects.
  Yes, I totally agree.
 
  Having two separate projects with separate cores should be possible
  under the umbrella of a program.
 
  Tuskar differs somewhat from other projects to be included in horizon,
  because other projects contributed a view for their specific feature.
  Tuskar provides an additional dashboard and is talking with several APIs
  below. It's something like a separate dashboard to be merged here.
 
  When having both under the horizon program umbrella, my concern is that
  both projects wouldn't be coupled as tightly as I would like.
 
  Esp. I'd love to see an automatic merge of horizon commits to a
  (combined) tuskar and horizon repository, thus making sure tuskar will
  work in a fresh (updated) horizon environment.
 
 Please correct me if I am wrong, but I think this is not an issue.
 Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we
 create a symlink to the local tuskar-ui clone, and to run Horizon with
 Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs
 on the latest version of Horizon (if you pull regularly, of course).
 
 
  Matthias
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Jordan O'Mara


- Original Message -
 So basically this is our first proposal, which we sent out:
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022196.html
 
 After Horizon meetings, several e-mails and also a couple of other
 discussions with people who are for/against the codebase merge, it looks
 like in the end upstream leans towards the 'umbrella' solution.
 
 After all, +1 for the umbrella solution from my side too. Tuskar UI will get
 closer to the nature of the project (based on Horizon, UI related
 audience). And at the same time, we will not rush things before the
 project graduates. In Icehouse we can more easily reach the goals of both -
 Horizon as well as Tuskar UI - and after the Icehouse release we can circle
 back and get to the codebase merge in the end.
 
 Do you all agree?
 
 -- Jarda

+1, I think this is the most sensible approach.

 
 On 2013/18/12 22:33, Gabriel Hurley wrote:
   From my experience, directly adding incubated projects to the main Horizon
   codebase prior to graduation has been fraught with peril. That said, the
   closer they can be together prior to the graduation merge, the better.
 
  I like the idea of these types of projects being under the OpenStack
  Dashboard Program umbrella. Ideally I think it would be a jointly-managed
  resource in Gerrit. The Horizon Core folks would have +2 power, but the
  Tuskar core folks would also have +2 power. (I'm 90% certain that can be
  done in the Gerrit admin...)
 
  That way development speed isn't bottlenecked by Horizon Core, but there's
  a closer tie-in with the people who may ultimately be maintaining it. It
  becomes easier to keep track of, and can be more easily guided in the
  right directions. With a little work incubated dashboard components like
  this could even be made to be a non-gating part of the testing
  infrastructure to indicate when things change or break.
 
  Adding developers to Horizon Core just for the purpose of reviewing an
  incubated umbrella project is not the right way to do things at all.  If
  my proposal of two separate groups having the +2 power in Gerrit isn't
  technically feasible then a new group should be created for management of
  umbrella projects.
 
  All the best,
 
- Gabriel
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] PTL Candidacy

2013-12-19 Thread Sylvain Bauza

Hi,

I hereby would like to announce my candidacy for the Climate 
(Reservations) PTL.


A brief history about me: I have been working in Software Engineering and 
Operations for more than 10 years, with a special focus on Openstack 
since the Essex release. I promoted Openstack in my previous company as 
the top choice for our internal cloud, and now I'm working as a Software 
Engineer at Bull within the open-source XLCloud [1] project, which is an 
international collaborative project.


I joined the Climate project in its early stages, starting from the idea of 
having a resource planner in Nova [2]. I led the subteam responsible for 
delivering physical host reservations in Climate and dedicated a long time 
to spec'ing what a global resource planner for both virtual and physical 
reservations would look like. I also worked on and delivered important core 
features like the unit-testing framework and policy management. You can see 
my reviews [3] and my commits [4] for reference.
Regarding Climate visibility, I also started the initiative of holding weekly 
meetings and now chair half of them, alternating with the virtual 
reservations subteam. I also co-presented Climate during the Openstack 
Icehouse Summit in HK.


I see the PTL position as a communication point for the development of 
the community around Climate and its integration within the Openstack 
ecosystem, focusing not only on the code but also listening to the users' 
point of view. As a corollary to this, the PTL's duties also include being 
the interface between the Openstack developer community and Climate 
stakeholders for defining the right path for leveraging Openstack with 
Climate. Last but not least, I'm convinced that the PTL position should 
rotate in order to reflect the full variety of our team's contributors, 
subteams and sponsors.


For the Icehouse release, I see the next steps for Climate as having its 
first release at the end of January, plus discussions with the Technical 
Committee about project incubation and a new Program about Reservations. 
I also see a tighter interaction with Nova, Heat and Horizon by the end 
of this cycle. Regarding Nova in particular, there are various aspects 
that have to be addressed and have already been presented during a Nova 
design unconference session [5].


Anyway, I know we are doing a great job, and it's a pleasure working 
with you all!

-Sylvain

[1] : http://www.xlcloud.org
[2] : 
https://blueprints.launchpad.net/nova/+spec/planned-resource-reservation-api
[3] : 
https://review.openstack.org/#/q/reviewer:sbauza+AND+%28project:stackforge/climate+OR+project:stackforge/python-climateclient+OR+project:stackforge/climate-nova%29,n,z
[4] : 
https://review.openstack.org/#/q/owner:sbauza+AND+%28project:stackforge/climate+OR+project:stackforge/python-climateclient+OR+project:stackforge/climate-nova%29,n,z

[5] : https://etherpad.openstack.org/p/NovaIcehouse-ClimateInteractions
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified Agents - what is the actual problem?

2013-12-19 Thread Dmitry Mescheryakov
Tim,

IMHO network-based and hypervisor-based agents definitely can co-exist.
What I wanted to say is that the problem of enabling communication between
guest and cloud service is not relevant for hypervisor-based agents. They
simply don't need network access into a VM.

Dmitry


2013/12/19 Tim Simpson tim.simp...@rackspace.com

  I agree that enabling communication between guest and cloud service is
 a common problem for most agent designs. The only exception is an agent based
 on hypervisor-provided transport. But as far as I understand many people
 are interested in a network-based agent, so indeed we can start a thread (or
 continue the discussion in this one) on the problem.

 Can't they co-exist?

 Let's say the interface to talk to an agent is simply some class loaded
 from a config file, the way it is in Trove. So we have a class which has
 the methods add_user, get_filesystem_stats.

 The first, and let's say default, implementation sends a message over
 Rabbit using oslo.rpc or something like it. All the arguments turn into a
 JSON object and are deserialized on the agent side using oslo.rpc or some
 C++ code capable of reading JSON.

 If someone wants to add a hypervisor provided transport, they could do so
 by instead changing this API class to one which contacts a service on the
 hypervisor node (using oslo.rpc) with arguments that include the guest
 agent ID and args, which is just a dictionary of the original arguments.
 This service would then shell out to execute some hypervisor specific
 command to talk to the given guest.

 That's what I meant when I said I liked how Trove handles this now:
 because it uses a simple, non-prescriptive interface, it's easy to swap out
 yet still easy to use.
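
 To make that swap-out concrete, here is a minimal sketch of such an API
 class plus an alternative transport. This is a sketch under assumptions, not
 Trove's actual code: the injected rpc_client is presumed to expose
 call()/cast() in the style of oslo.rpc, and every other name is invented.

     class GuestAPI(object):
         # Default network-based transport: each call is cast over the
         # message bus and the arguments travel as a JSON-serializable dict.
         def __init__(self, rpc_client, guest_id):
             self._client = rpc_client
             self._guest_id = guest_id

         def add_user(self, username, password):
             self._client.cast('add_user', guest_id=self._guest_id,
                               username=username, password=password)

         def get_filesystem_stats(self, mount_point):
             return self._client.call('get_filesystem_stats',
                                      guest_id=self._guest_id,
                                      mount_point=mount_point)

     class HypervisorGuestAPI(GuestAPI):
         # Swapped-in transport: ask a service on the hypervisor node to
         # relay the command to the guest over a hypervisor-specific channel.
         def add_user(self, username, password):
             self._client.cast('relay_to_guest', guest_id=self._guest_id,
                               method='add_user',
                               args={'username': username,
                                     'password': password})

 The calling code never changes; only the class named in the config file does.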

 That would mean the job of a unified agent framework would be to offer
 libraries that ease the creation of the API class: Python code to send
 messages in various styles / formats, as well as Python or C++ code to read
 and interpret those messages.

 Of course, we'd still settle on one default (probably network based)
 which would become the standard way of sending messages to guests so that
 package maintainers, the Infra team, and newbies to OpenStack wouldn't have
 to deal with dozens of different ways of doing things, but the important
 thing is that other methods of communication would still be possible.

 Thanks,

 Tim


 From: Dmitry Mescheryakov [mailto:dmescherya...@mirantis.com]
 Sent: Thursday, December 19, 2013 7:15 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Heat] [Trove] [Savanna] [Oslo] Unified
 Agents - what is the actual problem?

 I agree that enabling communication between guest and cloud service is a
 common problem for most agent designs. The only exception is an agent based on
 hypervisor-provided transport. But as far as I understand many people are
 interested in a network-based agent, so indeed we can start a thread (or
 continue the discussion in this one) on the problem.

 Dmitry

 2013/12/19 Clint Byrum cl...@fewbar.com
 So I've seen a lot of really great discussion of the unified agents, and
 it has made me think a lot about the problem that we're trying to solve.

 I just wanted to reiterate that we should be trying to solve real problems
 and not get distracted by doing things right or even better.

 I actually think there are three problems to solve.

 * Private network guest to cloud service communication.
 * Narrow scope highly responsive lean guest agents (Trove, Savanna).
 * General purpose in-instance management agent (Heat).

 Since the private network guests problem is the only one they all share,
 perhaps this is where the three projects should collaborate, and the
 other pieces should be left to another discussion.

 Thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-19 Thread Julien Danjou
On Mon, Dec 09 2013, Herndon, John Luke wrote:

Hi John,

 I'm not 100% sure what the process is around electing an individual to the
 core team (i.e., can a non-core person nominate someone?). However, I
 believe the ceilometer core team could use a member who is more active in
 the development of the event pipeline. A core developer in this area will
 not only speed up review times for event patches, but will also help keep
 new contributions focused on the overall eventing vision.

 To that end, I would like to nominate Sandy Walsh from Rackspace to
 ceilometer-core. Sandy is one of the original authors of StackTach, and
 spearheaded the original stacktach-ceilometer integration. He has been
 instrumental in many of my code reviews, and has contributed much of the
 existing event storage and querying code.

Unfortunately, as stated yesterday during our weekly meeting, this
nomination did not receive enough support during the allowed timeframe,
and therefore has to be rejected.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Tzu-Mainn Chen
+1 to this!

- Original Message -
 So after a lot of consideration, my opinion is the two code bases should stay
 in separate repos under the Horizon Program, for a few reasons:
 -Adding a large chunk of code for an incubated project is likely going to
 cause the Horizon delivery some grief due to dependencies and packaging
 issues at the distro level.
 -The code in Tuskar-UI is currently in a large state of flux/rework.  The
 Tuskar-UI code needs to be able to move quickly and at times drastically;
 this could be detrimental to the stability of Horizon.  And conversely, the
 stability needs of Horizon can be detrimental to the speed at which
 Tuskar-UI can change.
 -Horizon Core can review changes in the Tuskar-UI code base and provide
 feedback without the code needing to be integrated in Horizon proper.
 Obviously, with an eye to the code bases merging in the long run.
 
 As far as core group organization, I think the current Tuskar-UI core should
 maintain their +2 for only Tuskar-UI.  Individuals who make significant
 review contributions to Horizon will certainly be considered for Horizon
 core in time.  I agree with Gabriel's suggestion of adding Horizon Core to
 Tuskar-UI core.  The idea being that Horizon core is looking for
 compatibility with Horizon initially and working toward a deeper
 understanding of the Tuskar-UI code base.  This will help ensure the
 integration process goes as smoothly as possible when Tuskar/TripleO comes
 out of incubation.
 
 I look forward to being able to merge the two code bases, but I don't think
 the time is right yet and Horizon should stick to only integrating code into
 OpenStack Dashboard that is out of incubation.  We've made exceptions in the
 past, and they tend to have unfortunate consequences.
 
 -David
 
 
  -Original Message-
  From: Jiri Tomasek [mailto:jtoma...@redhat.com]
  Sent: Thursday, December 19, 2013 4:40 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] Horizon and Tuskar-UI codebase merge
  
  On 12/19/2013 08:58 AM, Matthias Runge wrote:
   On 12/18/2013 10:33 PM, Gabriel Hurley wrote:
  
   Adding developers to Horizon Core just for the purpose of reviewing
   an incubated umbrella project is not the right way to do things at
   all.  If my proposal of two separate groups having the +2 power in
   Gerrit isn't technically feasible then a new group should be created
   for management of umbrella projects.
   Yes, I totally agree.
  
   Having two separate projects with separate cores should be possible
   under the umbrella of a program.
  
   Tuskar differs somewhat from other projects to be included in horizon,
   because other projects contributed a view for their specific feature.
   Tuskar provides an additional dashboard and is talking with several APIs
   below. It's something like a separate dashboard to be merged here.
  
   When having both under the horizon program umbrella, my concern is that
   both projects wouldn't be coupled as tightly as I would like.
  
   Esp. I'd love to see an automatic merge of horizon commits to a
   (combined) tuskar and horizon repository, thus making sure tuskar will
   work in a fresh (updated) horizon environment.
  
  Please correct me if I am wrong, but I think this is not an issue.
  Currently Tuskar-UI is run from a Horizon fork. In the local Horizon fork we
  create a symlink to the local tuskar-ui clone, and to run Horizon with
  Tuskar-UI we simply start the Horizon server. This means that Tuskar-UI runs
  on the latest version of Horizon (if you pull regularly, of course).
  
  
   Matthias
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  Jirka
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Horizon and Tuskar-UI codebase merge

2013-12-19 Thread Ana Krivokapic
On 12/19/2013 05:32 PM, Jordan O'Mara wrote:

 - Original Message -
 So basically this is our first proposal, which we sent out:
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022196.html

 After Horizon meetings, several e-mails and also a couple of other
 discussions with people who are for/against the codebase merge, it looks
 like in the end upstream leans towards the 'umbrella' solution.

 After all, +1 for the umbrella solution from my side too. Tuskar UI will get
 closer to the nature of the project (based on Horizon, UI related
 audience). And at the same time, we will not rush things before the
 project graduates. In Icehouse we can more easily reach the goals of both -
 Horizon as well as Tuskar UI - and after the Icehouse release we can circle
 back and get to the codebase merge in the end.

 Do you all agree?

 -- Jarda
 +1, I think this is the most sensible approach.

+1 from me, this approach makes perfect sense.


 On 2013/18/12 22:33, Gabriel Hurley wrote:
  From my experience, directly adding incubated projects to the main Horizon
  codebase prior to graduation has been fraught with peril. That said, the
  closer they can be together prior to the graduation merge, the better.

 I like the idea of these types of projects being under the OpenStack
 Dashboard Program umbrella. Ideally I think it would be a jointly-managed
 resource in Gerrit. The Horizon Core folks would have +2 power, but the
 Tuskar core folks would also have +2 power. (I'm 90% certain that can be
 done in the Gerrit admin...)

 That way development speed isn't bottlenecked by Horizon Core, but there's
 a closer tie-in with the people who may ultimately be maintaining it. It
 becomes easier to keep track of, and can be more easily guided in the
 right directions. With a little work incubated dashboard components like
 this could even be made to be a non-gating part of the testing
 infrastructure to indicate when things change or break.

 Adding developers to Horizon Core just for the purpose of reviewing an
 incubated umbrella project is not the right way to do things at all.  If
 my proposal of two separate groups having the +2 power in Gerrit isn't
 technically feasible then a new group should be created for management of
 umbrella projects.

 All the best,

   - Gabriel
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards,

Ana Krivokapic
Associate Software Engineer
OpenStack team
Red Hat Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-19 Thread Collins, Sean
Perfect! Will you be at the IRC meeting to discuss these? I've added
them to the agenda in the hopes that we can discuss

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Three SLAAC and DHCPv6 related blueprints

2013-12-19 Thread Shixiong Shang
I will surely be there this afternoon, Sean! Look forward to it!

On Dec 19, 2013, at 12:22 PM, Collins, Sean sean_colli...@cable.comcast.com 
wrote:

 Perfect! Will you be at the IRC meeting to discuss these? I've added
 them to the agenda in the hopes that we can discuss
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Jeff Peeler
On Thu, Dec 19, 2013 at 03:21:46PM +1300, Steve Baker wrote:
 I would like to nominate Bartosz Górski to be a heat-core reviewer. His
 reviews to date have been valuable and his other contributions to the
 project have shown a sound understanding of how heat works.
 
 Here is his review history:
 https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z
 
 If you are heat-core please reply with your vote.

+1

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for today meeting at 2000 UTC

2013-12-19 Thread Joshua Harlow
Hi all,


The [state-management] project team holds a weekly meeting in 
#openstack-meeting on Thursdays at 2000 UTC. The next meeting is today, 
2013-12-19!!!


As usual, everyone is welcome :-)


Link: https://wiki.openstack.org/wiki/Meetings/StateManagement

Taskflow: https://wiki.openstack.org/TaskFlow


## Agenda (30-60 mins):


- Discuss any action items from the last meeting.

- Discuss any other potential new use-cases for said library.

- Discuss any other ideas, questions and answers (and more!).


Any other topics are welcome :-)


See you all soon!


--


Joshua Harlow


It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] PTL Candidacy

2013-12-19 Thread Sergey Lukjanov
Confirmed.

https://wiki.openstack.org/wiki/Climate/PTL_Elections_Icehouse#Candidates


On Thu, Dec 19, 2013 at 8:39 PM, Sylvain Bauza sylvain.ba...@bull.net wrote:

  Hi,

 I hereby would like to announce my candidacy for the Climate
 (Reservations) PTL.

 A brief history about me: I have been working in Software Engineering and
 Operations for more than 10 years, with a special focus on Openstack
 since the Essex release. I promoted Openstack in my previous company as the
 top choice for our internal cloud, and now I'm working as a Software
 Engineer at Bull within the open-source XLCloud [1] project, which is an
 international collaborative project.

 I joined the Climate project in its early stages, starting from the idea of
 having a resource planner in Nova [2]. I led the subteam responsible for
 delivering physical host reservations in Climate and dedicated a long time
 to spec'ing what a global resource planner for both virtual and physical
 reservations would look like. I also worked on and delivered important core
 features like the unit-testing framework and policy management. You can see
 my reviews [3] and my commits [4] for reference.
 Regarding Climate visibility, I also started the initiative of holding
 weekly meetings and now chair half of them, alternating with the virtual
 reservations subteam. I also co-presented Climate during the Openstack
 Icehouse Summit in HK.

 I see the PTL position as a communication point for the development of the
 community around Climate and its integration within the Openstack
 ecosystem, focusing not only on the code but also listening to the users'
 point of view. As a corollary to this, the PTL's duties also include being
 the interface between the Openstack developer community and Climate
 stakeholders for defining the right path for leveraging Openstack with
 Climate. Last but not least, I'm convinced that the PTL position should
 rotate in order to reflect the full variety of our team's contributors,
 subteams and sponsors.

 For the Icehouse release, I see the next steps for Climate as having its
 first release at the end of January, plus discussions with the Technical
 Committee about project incubation and a new Program about Reservations. I
 also see a tighter interaction with Nova, Heat and Horizon by the end of
 this cycle. Regarding Nova in particular, there are various aspects that
 have to be addressed and have already been presented during a Nova design
 unconference session [5].

 Anyway, I know we are doing a great job, and it's a pleasure working with
 you all!
 -Sylvain

 [1] : http://www.xlcloud.org
 [2] :
 https://blueprints.launchpad.net/nova/+spec/planned-resource-reservation-api
 [3] :
 https://review.openstack.org/#/q/reviewer:sbauza+AND+%28project:stackforge/climate+OR+project:stackforge/python-climateclient+OR+project:stackforge/climate-nova%29,n,z
 [4] :
 https://review.openstack.org/#/q/owner:sbauza+AND+%28project:stackforge/climate+OR+project:stackforge/python-climateclient+OR+project:stackforge/climate-nova%29,n,z
 [5] : https://etherpad.openstack.org/p/NovaIcehouse-ClimateInteractions

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

2013-12-19 Thread Adrian Otto
When the OpenStack project was started in 2010, we conceived of two languages 
that would be considered to have first-class status: Python and C++. The idea 
was that Python would be used for the API services, and that C++ would be used 
in special cases where Python was not a good fit, such as for ultra-high 
performance, kernel drivers, or memory-constrained situations.

Although the Python language preference has prevailed, we should not be 
allergic to the idea of an agent being done in C++ if it means that there are 
end-user benefits that justify it. I think that having a modular agent that can 
be easily extended and has a very small resource footprint is wise. Key issues 
for a base agent are:

1) A way to sign the distributed bits so users can detect/prevent tampering.
2) Ways to extend the agent using flexible, well documented extension APIs.
3) A way to securely issue remote commands to the agent (to be serviced in 
accordance with registered commands).
4) A way to update the agent in-place, initiated by a remote signal (with an 
option to disable).

Whether the standard AMQP protocol is used for messaging is beside the point, and 
should be discussed as an implementation detail. I see no reason why C++ could 
not be used to implement a low-memory-footprint agent that could offer the 
functionality I outlined above. Perhaps one of the extension APIs is a shell 
exec with standard IO connected to the parent process. That way you could 
easily extend it using Python, or whatever you want (existing configuration 
management tools, etc.), as sketched below.

Adrian

On Dec 19, 2013, at 7:51 AM, Dmitry Mescheryakov 
dmescherya...@mirantis.com wrote:

2013/12/19 Fox, Kevin M kevin@pnnl.gov
How about a different approach then... OpenStack has thus far been very 
successful providing an API and plugins for dealing with things that cloud 
providers need to be able to switch out to suit their needs.

There seems to be two different parts to the unified agent issue:
 * How to get rpc messages to/from the VM from the thing needing to control it.
 * How to write a plugin to go from a generic rpc mechanism, to doing something 
useful in the vm.

How about standardising what a plugin looks like, python api, c++ api, etc. 
It won't have to deal with transport at all.

Also standardize the api the controller uses to talk to the system, rest or 
amqp.

I think that is what we discussed when we tried to select between a Salt + 
oslo.messaging and a pure oslo.messaging framework for the agent. As you can 
see, we haven't come to an agreement so far :-) 
Also Clint started a new thread to discuss what, I believe, you defined as the 
first part of the unified agent issue. For clarity, the thread I am referring to is

http://lists.openstack.org/pipermail/openstack-dev/2013-December/022690.html

Then the mechanism is an implementation detail. If Rackspace wants to do a VM 
serial driver, that's cool. If you want to use the network, that works too. 
Savanna/Trove/etc don't have to care which mechanism is used, only the cloud 
provider does.
It's not quite as good as one and only one implementation to rule them all, but 
it would allow providers to choose what's best for their situation and get as 
much code shared as can be.

What do you think?

Thanks,
Kevin





From: Tim Simpson [tim.simp...@rackspace.com]
Sent: Wednesday, December 18, 2013 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Thanks for the summary Dmitry. I'm ok with these ideas, and while I still 
disagree with having a single, forced standard for RPC communication, I should 
probably let things pan out a bit before being too concerned.

- Tim



From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Wednesday, December 18, 2013 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest Agent

Tim,

The unified agent we are proposing is based on the following ideas (sketched 
below):
  * the core agent has _no_ functionality at all. It is a pure RPC mechanism 
with the ability to add whichever API is needed on top of it.
  * the API is organized into modules which can be reused across different 
projects.
  * there will be no single package: each project (Trove/Savanna/Others) 
assembles its own agent based on the project's API needs.
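
A minimal sketch of the assembly idea, with every name invented: the core is
only a dispatcher, and each project registers just the API modules it needs.

    class Agent(object):
        # Pure dispatcher: every capability comes from registered modules.
        def __init__(self):
            self._handlers = {}

        def register(self, module):
            # A module is any object whose public methods should become
            # RPC endpoints.
            for name in dir(module):
                if not name.startswith('_'):
                    self._handlers[name] = getattr(module, name)

        def dispatch(self, method, **kwargs):
            # Called by the RPC layer when a message arrives for this agent.
            return self._handlers[method](**kwargs)

Trove would register a MySQL module, Savanna a Hadoop module, and so on,
while the core stays identical across projects.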

I hope that covers your concerns.

Dmitry


2013/12/18 Tim Simpson tim.simp...@rackspace.com
I've been following the Unified Agent mailing list thread for awhile now and, 
as someone who has written a fair amount of code for both of the two existing 
Trove agents, thought I should give my opinion about 

Re: [openstack-dev] Incubation Request for Barbican

2013-12-19 Thread Mike Perez
On Thu, Dec 19, 2013 at 4:14 AM, Sean Dague s...@dague.net wrote:

 On 12/19/2013 12:10 AM, Mike Perez wrote:
  On Tue, Dec 17, 2013 at 1:59 PM, Mike Perez thin...@gmail.com
  mailto:thin...@gmail.com wrote:
 snip
  I reviewed the TC meeting notes, and my question still stands.
 
  It seems the committee is touching on the point of there being a worry
  that if it's a single company running the show, they can pull resources
  away and the project collapses. My worry is that one company designing
  solutions for use cases that work for them may produce something that later
  does not work for the potential companies that would provide contributors.
 
  -Mike Perez

 Which is our fundamental chicken and egg problem. The Barbican team has
 said they've reached out to other parties, who have expressed interest
 in joining, but no one else has.

 The Heat experience shows that a lot of the time companies won't kick in
 resources until there is some kind of stamp of general approval.

 If you showed up early, with a commitment to work openly, the fact that
 the project maps to your own use cases really well isn't a bug, it's a
 feature. I don't want to hold up a team from incubating because other
 people stayed on the sidelines. That was actually exactly what was going
 on with Heat, where lots of entities thought they would keep that side
 of the equation proprietary, or outside of OpenStack. By bringing Heat
 in, we changed the equation, I think massively for the better.

 -Sean



To make my message more clear, I would like to see the TC thinking of this
problem as well. In Cinder, for example, there was a push for a shared
service. One of the problems that the core team saw in this feature was that
it was a one-sided project because only one vendor was really contributing.
The API they provided may work great for them, but may not work for other
potential contributors who come from another company where their storage
system works differently. I see this as causing potentially serious rewrites
that really just set a project back.

I'm not at all saying this stops incubation, but it is something else to
consider besides a company pulling out the main resource from a project.

-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding DB migration items to the common review checklist

2013-12-19 Thread Ben Nemec

On 2013-12-18 22:08, Jay Pipes wrote:

On 12/18/2013 02:14 PM, Brant Knudson wrote:

Matt -

Could a test be added that goes through the models and checks these
things? Other projects could use this too.

Here's an example of a test that checks if the tables are all InnoDB:
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/db/test_migrations.py?id=6e455cd97f04bf26bbe022be17c57e089cf502f4#n430
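
For readers who don't follow the link, that test boils down to roughly the
following check; this is a hedged sketch against MySQL's information_schema
(the linked nova test is the authoritative version):

    def assert_all_tables_innodb(cursor, database):
        # Fail if any table uses a storage engine other than InnoDB;
        # the schema-versioning table is exempt, as in the nova test.
        cursor.execute(
            "SELECT table_name FROM information_schema.tables "
            "WHERE table_schema = %s AND engine != 'InnoDB' "
            "AND table_name != 'migrate_version'", (database,))
        offenders = [row[0] for row in cursor.fetchall()]
        assert not offenders, 'Non-InnoDB tables found: %s' % offenders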


Actually, there's already work done for this.

https://review.openstack.org/#/c/42307/

I was initially put off by the unique constraint naming convention
(and it's still a little problematic due to constraint name length
constraints in certain RDBMS), but the patch above is an excellent
start.

Please show Svetlana's work a little review love :)


Big +1 to this.  I've been trying to review that patch series, but I 
don't have deep knowledge of the db stuff, so the more db folks that can 
weigh in the better. :-)


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-19 Thread Dolph Mathews
On Thu, Dec 12, 2013 at 4:48 PM, Morgan Fainberg m...@metacloud.com wrote:

 On December 12, 2013 at 14:32:36, Dolph Mathews (dolph.math...@gmail.com)
 wrote:


 On Thu, Dec 12, 2013 at 2:58 PM, Adam Young ayo...@redhat.com wrote:

 On 12/04/2013 08:58 AM, Jarret Raim wrote:

  While I am all for adding a new program, I think we should only add one if
  we rule out all existing programs as a home. With that in mind, why not add
  this to the keystone program? Perhaps that may require a tweak to keystone's
  mission statement, but that is doable. I saw a partial answer to this
  somewhere but not a full one.

 From our point of view, Barbican can certainly help solve some problems
 related to identity like SSH key management and client certs. However,
 there is a wide array of functionality that Barbican will handle that is
 not related to identity.


 Some examples, there is some additional detail in our application if you
 want to dig deeper [1].


 * Symmetric key management - These keys are used for encryption of data at
 rest in various places including Swift, Nova, Cinder, etc. Keys are
 resources that roll up to a project, much like servers or load balancers,
 but they have no direct relationship to an identity.

 * SSL / TLS certificates - The management of certificate authorities and
 the issuance of keys for SSL / TLS. Again, these are resources rather than
 anything attached to identity.

 * SSH Key Management - These could certainly be managed through keystone
 if we think that's the right way to go about it, but from Barbican's point
 of view, these are just another type of key to be generated and tracked
 that rolls up to an identity.


 * Client certificates - These are most likely tied to an identity, but
 again, just managed as resources from a Barbican point of view.

 * Raw Secret Storage - This functionality is usually used by applications
 residing on a cloud. An app can use Barbican to store secrets such as
 sensitive configuration files, encryption keys and the like. This data
 belongs to the application rather than any particular user in Keystone.
 For example, some Rackspace customers don't allow their application dev /
 maintenance teams direct access to the Rackspace APIs.

 * Boot Verification - This functionality is used as part of the trusted
 boot functionality for transparent disk encryption on Nova.

 * Randomness Source - Barbican manages HSMs which allow us to offer a
 source of true randomness.



 In short (ha), I would encourage everyone to think of keys / certificates
 as resources managed by an API in much the same way we think of VMs being
 managed by the Nova API. A consumer of Barbican (either as an OpenStack
 service or a consumer of an OpenStack cloud) will have an API to create
 and manage various types of secrets that are owned by their project.
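
 To illustrate the resource view, storing a raw secret is just a POST
 against a project-owned collection. A hedged sketch: the endpoint and field
 names follow Barbican's v1 API as I understand it and should be checked
 against the current docs.

     import json
     import requests

     def store_secret(barbican_endpoint, auth_token, payload):
         # POST a raw secret; the response carries the URI of the new
         # secret resource, scoped to the caller's project.
         resp = requests.post(
             barbican_endpoint + '/v1/secrets',
             headers={'X-Auth-Token': auth_token,
                      'Content-Type': 'application/json'},
             data=json.dumps({'payload': payload,
                              'payload_content_type': 'text/plain'}))
         resp.raise_for_status()
         return resp.json()['secret_ref']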


 My reason for keeping them separate is more practical:  the Keystone team
 is already somewhat overloaded.  I know that a couple of us have interest
 in contributing to Barbican, the question is time and prioritization.

 Unless there is some benefit to having both projects in the same program
 with essentially different teams, I think Barbican should proceed as is.  I
 personally plan on contributing to Barbican.


 /me puts PTL hat on

 ++ I don't want Russel's job.

 Keystone has a fairly narrow mission statement in my mind (come to think
 of it, I need to propose it to governance..), and that's basically to
 abstract away the problem of authenticating and authorizing the API users
 of other openstack services. Everything else, including identity
 management, key management, key distribution, quotas, etc, is just
 secondary fodder that we tend to help with along the way... but they should
 be first class problems in someone else's mind.

 If we rolled everything together that kind of looks related to keystone
 under a big keystone program for the sake of organizational tidiness, I
 know I would be less effective as a PTL and that's a bit disheartening.
 That said, I'm always happy to help where I can.


 The long and the short of it is that I can’t argue that Barbican couldn’t
 be considered a mechanism of “Identity” (in most everything keys end up
 being a form of Identity, and the management of that would fit nicely under
 the “Identity Program”).  That being said I also can’t argue that Barbican
 shouldn’t be it’s own top-level program.  It comes down to the best fit for
 OpenStack as a whole.

 From a deployer standpoint, I don’t think it will make any real difference
 if Barbican is in Identity or its own program.  Basically, it’ll be a
 separate process to run in either case.  It will have its own rules and
 quirks.

 From a developer standpoint, I don’t think it will make a significant
 difference (besides, perhaps where documentation lies).  The contributors
 to Keystone will contribute (or not) to Barbican and vice-versa based upon
 interest/time/needs.

 From a community and 

Re: [openstack-dev] [Nova] [Ironic] Get power and temperature via IPMI

2013-12-19 Thread Devananda van der Veen
On Wed, Dec 18, 2013 at 7:16 PM, Gao, Fengqian fengqian@intel.com wrote:

  Hi, Devananda,

 I agree with you that new features should be towards Ironic.

 As you asked why use Ironic instead of lm-sensors, actually I just want to
 use IPMI instead of lm-sensors. I think it is reasonable to put the IPMI
 part into Ironic, and we already did :).



 To get the sensors’ information, I think IPMI is much more powerful than
 lm-sensors.

 Firstly, IPMI is flexible.  Generally speaking, it provides two kinds of
 connections, in-band and out-of-band.

 Out-of-band connection allows us to get sensors’ status even without OS
 and CPU.

 In-band connection is quite similar to lm-sensors, It needs the OS kernel
 to get sensor data.

 Secondly, IPMI can gather more sensor information than lm-sensors, and
 it is easy to use. From my own experience, using IPMI can get all the
 sensor information that lm-sensors could get, such as
 temperature/voltage/fan. Besides that, IPMI can get power data and some OEM
 specific sensor data.

 Thirdly, I think IPMI is a common spec for most OEMs.  And most
 servers integrate an IPMI interface.



 As you said, nova-compute is already supplying information to the
 scheduler, and power/temperature should be gathered locally.  IPMI can be
 used locally via the in-band connection. And there are a lot of open-source
 libraries, such as OpenIPMI and FreeIPMI, which provide the interfaces to
 the OS, just like lm-sensors.

 So, I prefer to use IPMI over lm-sensors. Please leave your comments if
 you disagree :).



I see nothing wrong with nova-compute gathering such information locally.
Whether you use lm-sensors or in-band IPMI is an implementation detail of
how nova-compute would gather the information.
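
For illustration, local gathering could be as thin as wrapping the ipmitool
CLI over the kernel's in-band IPMI driver. A minimal sketch; the column
parsing is illustrative and would need hardening for real use:

    import subprocess

    def read_sensors():
        # Return {sensor_name: (value, unit)} parsed from `ipmitool sensor`,
        # whose columns are: name | value | unit | status | thresholds...
        out = subprocess.check_output(['ipmitool', 'sensor'])
        readings = {}
        for line in out.decode('utf-8').splitlines():
            fields = [f.strip() for f in line.split('|')]
            if len(fields) >= 3 and fields[1] not in ('na', ''):
                readings[fields[0]] = (fields[1], fields[2])
        return readings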

However, I don't see how this has anything to do with Ironic or the
nova-baremetal driver. These would gather information remotely (using
out-of-band IPMI) for hardware controlled and deployed by these services.
In most cases, nova-compute is not deployed by nova-compute (exception: if
you're running TripleO).

Hope that helps,
-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Default ephemeral filesystem

2013-12-19 Thread Robert Collins
The default ephemeral filesystem in Nova is ext3 (for Linux). However
ext3 is IMNSHO a pretty poor choice given ext4's existence. I can
totally accept that other fs's like xfs might be contentious - but is
there any reason not to make ext4 the default?

I'm not aware of any distro that doesn't have ext4 support - even RHEL
defaults to ext4 in RHEL5.

The reason I'm raising this is that making a 1TB ext3 ephemeral volume
does (way) over 5GB of writes due to zeroing all the inode tables, but
an ext4 one does less than 1% of the IO - 14 minutes vs 7 seconds in my
brief testing. (We were investigating why baremetal deploys were slow :)).
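
For what it's worth, deployers should already be able to opt in without a
default change; a hedged sketch of the nova.conf override, assuming the
contemporary option name (worth verifying against your release). ext4 wins
here because it initializes inode tables lazily rather than zeroing them all
at mkfs time.

    [DEFAULT]
    # assumed option name; unset has historically meant ext3 on Linux
    default_ephemeral_format = ext4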

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Default ephemeral filesystem

2013-12-19 Thread Sean Dague
On 12/19/2013 03:21 PM, Robert Collins wrote:
 The default ephemeral filesystem in Nova is ext3 (for Linux). However
 ext3 is IMNSHO a pretty poor choice given ext4's existence. I can
 totally accept that other fs's like xfs might be contentious - but is
 there any reason not to make ext4 the default?
 
 I'm not aware of any distro that doesn't have ext4 support - even RHEL
 defaults to ext4 in RHEL5.
 
 The reason I'm raising this is that making a 1TB ext3 ephemeral volume
 does (way) over 5GB of writes due to zeroing all the inode tables, but
 an ext4 one does less than 1% of the IO - 14 minutes vs 7 seconds in my
 brief testing. (We were investigating why baremetal deploys were slow :)).
 
 -Rob

Seems like a fine change to me. I assume that's all just a historical
artifact.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [bugs] definition of triaged

2013-12-19 Thread Robert Collins
On 16 December 2013 23:56, Thierry Carrez thie...@openstack.org wrote:
 Robert Collins wrote:
 https://wiki.openstack.org/wiki/BugTriage is a 10 step process: thats
 not something we can sensible do over coffee in the morning.

 Totally agree with that. The idea of those 10 steps was not meant as
 something you could ever complete, it was more a way to state there is
 no point in doing step 8 unless step 1 is covered.

 I agree that splitting the process into separate daily / cycle-y
 processes (as you propose below) is a better way to ever get it done.

Ok, cool.


 I like the first and third parts. Not really convinced with the second
 part, though. You'll have a lot of Confirmed bugs without proposed
 approach (like 99% of them) so asking core to read them all and scan
 them for a proposed approach sounds like a waste of time. There seems to

So, I'm trying to reconcile:
 - the goal of moving bugs into 'triaged', which really is:
   - keeping a pipeline of low hanging fruit as an onramp
   - apparently some documentation needs too, though I'd rather push
     /all/ those into DocImpact tags on changes, + those bugs that are
     solely documentation issues
 - the goal of identifying critical bugs rapidly
 - the goal of steering bugs to the right subteam (e.g. vendor interests)

 be more value in asserting that there is a proposed approach in the
 bug, rather than there is a core-approved approach in this bug.

The definition of Triaged on the Bugs wiki page is 'Triaged The bug
comments contain a full analysis on how to properly fix the issue '

Fundamentally for a process to work at scale we can't depend on 'all
triagers naturally figure out which bugs need to be Triaged and which
only need to be Confirmed'. *regardless* of definition of triaged, we
need a system where we don't exhaust people every few weeks reviewing
*every open bug* to figure out which ones have been missed.

I'm *entirely* happy with saying that anyone with the experience to do
it can move things up to Triaged - I see no problem there, but there
is a huge problem if we have any step in the process's inner loop that
requires O(bugs) tasks.

 Furthermore, I'm not convinced we really need to spend core time
 assessing the proposed approach. You can spend core time on suggesting
 a proposed approach (i.e. turning Confirmed into Triaged), though.

So if I rephrased
Daily tasks - second layer - -core current and previous members
1. Review Confirmed+High[1] bugs
1.1. If there is a full analysis on how to properly fix the issue move
to Triaged
1.2  If not, add one.
2. If appropriate add low-hanging-fruit tag

You'd be happy? The reason I didn't write it like that is that this
burns *more* core time. I was trying to propose a process where we can
scale a competent but non-core team up without demanding that -core
commit to boiling the ocean.

The difference between the phrasing I had and the one you seem to be
proposing is that rather than promoting and guiding on bugs, -core are
being asked to /do/ on every bug. That places a lot of load on -core,
*or* it means we get many fewer bugs triaged.

If we say that anyone with reasonable competency can do it, we could say:

Daily tasks - second layer - more experienced folk
1. Review Confirmed+High[1] bugs
1.1. If there is a full analysis on how to properly fix the issue move
to Triaged
1.2  If not, add one.
2. If appropriate add low-hanging-fruit tag

 You basically have the following states for a bug:

 A - Brand new
 B - WAT (incomplete, ask the reporter for more info)
 C - I don't have the expertise to judge (but I can add tag)
 D - Yes this is definitely a bug, here is its priority
 E - This is a bug and here is a suggested approach to fix it
 F - I'm core and I bless that way to fix it
 G - I started working on that fix
 H - The fix is in code review
 I - The fix landed in master
 J - The fix landed in a milestone
 K - The fix landed in a release

 That is way too many states, especially if you rely on humans to set
 them. My experience is that humans deal with 3 bug states correctly, but
 start to fail doing it consistently if you ask them to set 4 or more.

I would quibble about that list - for instance tagging and expertise
to judge project impact tag properly are IME the same.

There are many dimensions of input to bugs, and what you're calling
states there are really the combination of things from different
dimensions. Timeline interactions for instance, don't change the state
of a bug, but they change whether we want it to show up in bug
searches.

 In the current setup (constrained by what Launchpad supports) we use
 tags for C, we ignore F, we combine G+H, and we combine J+K:

 A - New
 B - Incomplete
 C - (New + tagged)
 D - Confirmed
 E - Triaged
 F - (Triaged + comment)
 G - In progress
 H - In progress (automatically set)
 I - Fix committed (automatically set)
 J - Fix released (automatically set)
 K - (Fix Released + remilestoning) (automatically set)

 I think less states (and 

Re: [openstack-dev] [Neutron][IPv6] Agenda for today's meeting

2013-12-19 Thread Collins, Sean
Minutes from the meeting:

http://eavesdrop.openstack.org/meetings/neutron_ipv6/2013/neutron_ipv6.2013-12-19-21.00.html

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Meeting time - change to 1300 UTC or 1500 UTC?

2013-12-19 Thread Collins, Sean
Thoughts? I know we have people who are not able to attend at our
current time.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-19 Thread Jay Pipes

On 12/19/2013 04:55 AM, Radomir Dopieralski wrote:

On 14/12/13 16:51, Jay Pipes wrote:

[snip]


Instead of focusing on locking issues -- which I agree are very
important in the virtualized side of things where resources are
thinner -- I believe that in the bare-metal world, a more useful focus
would be to ensure that the Tuskar API service treats related group
operations (like deploy an undercloud on these nodes) in a way that
can handle failures in a graceful and/or atomic way.


Atomicity of operations can be achieved by introducing critical sections.
You basically have two ways of doing that, optimistic and pessimistic.
A pessimistic critical section is implemented with a locking mechanism
that prevents all other processes from entering the critical section
until it is finished.
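
For contrast, the optimistic flavor detects a conflicting writer and retries
instead of excluding everyone else up front. A minimal sketch with an
invented table and version column:

    from sqlalchemy import text

    def try_transition(connection, node_id, old_version, new_state):
        # Optimistic write: succeeds only if the row is unchanged since we
        # read it; False means another process won and we must re-read/retry.
        result = connection.execute(
            text("UPDATE node_assignments "
                 "SET state = :new_state, version = :old_version + 1 "
                 "WHERE id = :id AND version = :old_version"),
            new_state=new_state, old_version=old_version, id=node_id)
        return result.rowcount == 1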


I'm familiar with the traditional non-distributed software concept of a 
mutex (or in Windows world, a critical section). But we aren't dealing 
with traditional non-distributed software here. We're dealing with 
highly distributed software where components involved in the 
transaction may not be running on the same host or have much awareness 
of each other at all.


And, in any case (see below), I don't think that this is a problem that 
needs to be solved in Tuskar.



Perhaps you have some other way of making them atomic that I can't think of?


I should not have used the term atomic above. I actually do not think 
that the things that Tuskar/Ironic does should be viewed as an atomic 
operation. More below.



For example, if the construction or installation of one compute worker
failed, adding some retry or retry-after-wait-for-event logic would be
more useful than trying to put locks in a bunch of places to prevent
multiple sysadmins from trying to deploy on the same bare-metal nodes
(since it's just not gonna happen in the real world, and IMO, if it did
happen, the sysadmins/deployers should be punished and have to clean up
their own mess ;)


I don't see why they should be punished, if the UI was assuring them
that they are doing exactly the thing that they wanted to do, at every
step, and in the end it did something completely different, without any
warning. If anyone deserves punishment in such a situation, it's the
programmers who wrote the UI in such a way.


The issue I am getting at is that, in the real world, the problem of 
multiple users of Tuskar attempting to deploy an undercloud on the exact 
same set of bare metal machines is just not going to happen. If you 
think this is actually a real-world problem, and have seen two sysadmins 
actively trying to deploy an undercloud on bare-metal machines at the 
same time, unbeknownst to each other, then I feel bad for the 
sysadmins that found themselves in such a situation, but I feel it's 
their own fault for not knowing about what the other was doing.


Trying to make a complex series of related but distributed actions -- 
like the underlying actions of the Tuskar - Ironic API calls -- into an 
atomic operation is just not a good use of programming effort, IMO. 
Instead, I'm advocating that programming effort should instead be spent 
coding a workflow/taskflow pipeline that can gracefully retry failed 
operations and report the state of the total taskflow back to the user.


Hope that makes more sense,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Meeting time - change to 1300 UTC or 1500 UTC?

2013-12-19 Thread Shixiong Shang
I cannot do 13:00 UTC, but 14:00 or 15:00 UTC should work for me.



 On Dec 19, 2013, at 5:12 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:
 
 Thoughts? I know we have people who are not able to attend at our
 current time.
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][Oslo] Consuming Notifications in Batches

2013-12-19 Thread Herndon, John Luke
Hi Folks,

The Rackspace-HP team has been putting a lot of effort into performance
testing event collection in the ceilometer storage drivers[0]. Based on
some results of this testing, we would like to support batch consumption
of notifications, as it will greatly improve insertion performance. Batch
consumption in this case means waiting for a certain number of
notifications to arrive before sending to the storage
driver. 

I'd like to get feedback from the community about this feature, and how we
are planning to implement it. Here is what I'm currently thinking:

1) This seems to fit well into oslo.messaging - batching may be a feature
that other projects will find useful. After reviewing the changes that
sileht has been working on in oslo.messaging, I think the right way to
start off is to create a new executor that builds up a batch of
notifications, and sends the batch to the dispatcher. We’d also add a
timeout, so if a certain amount of time passes and the batch isn’t filled
up, the notifications will be dispatched anyway (see the rough sketch after
point 3 below). I’ve started a blueprint for this change and am filling in
the details as I go along [1].

2) In ceilometer, initialize the notification listener with the batch
executor instead of the eventlet executor (this should probably be
configurable)[2]. We can then send the entire batch of notifications to
the storage driver to be processed as events, while maintaining the
current method for converting notifications into samples.

3) Error handling becomes more difficult. The executor needs to know if
any of the notifications should be requeued. I think the right way to
solve this is to return a list of notifications to requeue from the
handler. Any better ideas?
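
Here is a rough sketch of the accumulate/flush logic from points 1 and 3;
it is not oslo.messaging's real executor interface, and a real executor
would also need a timer thread so an idle partial batch still gets flushed:

    import time

    class BatchBuffer(object):
        # Accumulate notifications; flush to the dispatcher when the batch
        # is full or the timeout since the first queued one has passed.
        def __init__(self, dispatch, batch_size=100, timeout=5.0):
            self._dispatch = dispatch      # takes a list, returns the
            self._batch_size = batch_size  # sublist to requeue (point 3)
            self._timeout = timeout
            self._batch = []
            self._deadline = None

        def add(self, notification):
            if not self._batch:
                self._deadline = time.time() + self._timeout
            self._batch.append(notification)
            if (len(self._batch) >= self._batch_size
                    or time.time() >= self._deadline):
                return self.flush()
            return []

        def flush(self):
            batch, self._batch = self._batch, []
            return self._dispatch(batch) if batch else []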

Is this the right approach to take? I'm not an oslo.messaging expert, so
if there is a proper way to implement this change, I'm all ears!

Thanks, happy holidays!
-john

0: https://etherpad.openstack.org/p/ceilometer-data-store-scale-testing
1: 
https://blueprints.launchpad.net/oslo.messaging/+spec/bulk-consume-messages
2: https://blueprints.launchpad.net/ceilometer/+spec/use-bulk-notification


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Meeting time - change to 1300 UTC or 1500 UTC?

2013-12-19 Thread Randy Tuttle
Any of those times suit me. 

Sent from my iPhone

On Dec 19, 2013, at 5:12 PM, Collins, Sean sean_colli...@cable.comcast.com 
wrote:

 Thoughts? I know we have people who are not able to attend at our
 current time.
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2013-12-19 Thread Edgar Magana
Anita,

Fawad and I will also be attending.

BTW. Fawad will require an invitation letter for visa. He will email you
directly with that request.

Thanks,

Edgar


On Wed, Dec 18, 2013 at 1:17 PM, Anita Kuno ante...@anteaya.info wrote:

 Okay, time for a recap.

 What: Neutron Tempest code sprint
 Where: Montreal, QC, Canada
 When: January 15, 16, 17 2014
 Location: I am about to sign the contract for Salle du Parc at 3625 Parc
 avenue, a room in a residence of McGill University.
 Time: 9am - 5pm

 I am expecting to see the following people in Montreal in January:
 Mark McClain
 Salvatore Orlando
 Sean Dague
 Matt Treinish
 Jay Pipes
 Sukhdev Kapur
 Miguel Lavelle
 Oleg Bondarev
 Rossella Sblendido
 Emilien Macchi
 Sylvain Afchain
 Nicolas Planel
 Kyle Mestery
 Dane Leblanc
 Sumit Naiksatam
 Henry Gessau
 Don Kehn
 Carl Baldwin
 Justin Hammond
 Anita Kuno

 If you are on the above list and can't attend, please email me so I have
 an up-to-date list. If you are planning on attending and I don't have
 your name listed, please email me without delay so that I can add you
 and you get done what you need to get done to attend.

 I have the contract for the room and will be signing it and sending it
 in with the room deposit tomorrow. Monty has about 6 more hours to get
 back to me on this, then I just have to go ahead and do it.

 Caterer is booked and I will be doing menu selection over the holidays.
 I can post the intended, _the intended_ menu once I have decided. Soup,
 salad, sandwich - not glamorous but hopefully filling. If the menu on
 the day isn't the same as what I post, please forgive me. Unforeseen
 circumstances may crop up and I will do my best to get you fed. One
 person has identified they have a specific food request, if there are
 any more out there, please email me now. This covers breakfast, lunch
 and tea/coffee all day.

 Henry Gessau will be social convener for dinners. If you have some
 restaurant suggestions, please contact Henry. Organization of dinners
 will take place once we congregate in our meeting room.

 T-shirts: we decided that the code quality of Neutron was a higher
 priority than t-shirts.

 One person required a letter of invitation for visa purposes and
 received it. I hope the visa has been granted.

 Individuals arrangements for hotels seem to be going well from what I
 have been hearing. A few people will be staying at Le Nouvel Hotel,
 thanks for finding that one, Rossella.

 Weather: well you got me on this one. This winter is colder than we have
 had in some time and more snow too. So it will be beautiful but bring or
 buy warm clothes. A few suggestions:
 * layer your clothes (t-shirt, turtleneck, sweatshirt)
 * boots with removable liners (this is my boot of choice:
 http://amzn.to/19ddJve) remove the liners at the end of each day to dry
 them
 * warm coat
 * toque (wool unless you are allergic) - I'm seeing them for $35; don't
 pay that much, you should be able to get something warm for $15 or less
 * warm socks (cotton socks and wool over top) - keep your feet dry
 * mitts (mitts keep my fingers warmer than gloves)
 * scarf
 If the weather is making you panic, talk to me and I will see about
 bringing some of my extra accessories with me. The style might not be
 you but you will be warm.

 Remember, don't lick the flagpole. It doesn't matter what your friends
 tell you.

 That's all I can think of, if I missed something, email me.

 Oh, and best to consider me offline from Jan.2 until the code sprint.
 Make sure you have all the information you need prior to that time.

 See you in Montreal,
 Anita.


 On 11/19/2013 11:31 AM, Rossella Sblendido wrote:
  Hi all,
 
  sorry if this is a bit OT now.
  I contacted some hotels to see if we could get a special price if we book
  many rooms. According to my research the difference in price is not much.
  Also, as Anita was saying, booking for everybody is more complicated.
  So I decided to book a room for myself.
  I share the name of the hotel, in case you want to stay in the same place
  http://www.lenouvelhotel.com/
 http://www.booking.com/hotel/ca/le-nouvel-spa.html?aid=318615label=postbooking_confemailpbsource=conf_email_hotel_nameet=UmFuZG9tSVYkc2RlIyh9YQkLIKuQhwqabGHP/3dl6rJzqy0AqLilEWRB9q2h3N7LbLpnopp78jpk3Zrw8QEON8/7uGk2Z4XEVgx0jMidsc7G6J6/mpIjb0/tpL+TyUzh/SougdT7JVfQN96wrY/Uz9Cu068o86et5KaL1N1ikBA2yvj25PBlFEF+/iBPj8Nq
 .
  It's close to the meeting room and the price is one of the best I have
  found.
 
  cheers,
 
  Rossella
 
 
 
 
  On Sat, Nov 16, 2013 at 7:39 PM, Anita Kuno ante...@anteaya.info
 wrote:
 
   On 11/16/2013 01:14 PM, Anita Kuno wrote:
 
  On 11/16/2013 12:37 PM, Sean Dague wrote:
 
  On 11/15/2013 10:36 AM, Russell Bryant wrote:
 
   On 11/13/2013 11:10 AM, Anita Kuno wrote:
 
   Neutron Tempest code sprint
 
  In the second week of January in Montreal, Quebec, Canada, there will be a
  Neutron Tempest code sprint to improve the status of Neutron tests in
  Tempest and to add new tests.
 

[openstack-dev] [keystone] External authentication plugins

2013-12-19 Thread Brant Knudson
We've got to figure out what external authentication plugins we're going to
provide in Keystone.

This is something that you'd think wouldn't be complicated, but somehow
it's gotten that way.

Since we've made mistakes in the past, let's try to be careful this time
and come up with what plugins are required, and make sure those are
implemented.

To this end, I've opened a blueprint:
https://blueprints.launchpad.net/keystone/+spec/external-auth-plugins

What I need is:
a) Make sure the background info is correct. It documents the plugin
behavior that we've provided in the past and how they work. Keystone must
continue to support these for backwards-compatibility.

b) Figure out if there are new plugins that we need to provide. For
example, we don't have a V3 plugin that works like V2 authentication; we
don't have a V3 plugin that works like Grizzly did.
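
For anyone who hasn't looked at these plugins, the rough shape of a V3
external method is below. I'm reconstructing the interface from memory, so
treat the names and signatures as assumptions rather than the authoritative
API; the point is that the user-resolution step is exactly where the
V2-like and Grizzly-like behaviors diverge:

    from keystone import auth
    from keystone import exception

    class ExternalDefault(auth.AuthMethodHandler):

        method = 'external'

        def authenticate(self, context, auth_payload, auth_context):
            # Trust the web server: it has already authenticated the
            # user and set REMOTE_USER in the WSGI environment.
            try:
                remote_user = context['environment']['REMOTE_USER']
            except KeyError:
                raise exception.Unauthorized('REMOTE_USER not set')
            # Resolving remote_user to a keystone user is where the
            # behaviors differ: assume the default domain (V2-like) or
            # split a domain out of the name (Grizzly-like).
            user_ref = self._resolve_user(remote_user)  # hypothetical helper
            auth_context['user_id'] = user_ref['id']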

Thanks! - Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Meeting time - change to 1300 UTC or 1500 UTC?

2013-12-19 Thread Ian Wells
I'm easy.


On 20 December 2013 00:47, Randy Tuttle randy.m.tut...@gmail.com wrote:

 Any of those times suit me.

 Sent from my iPhone

 On Dec 19, 2013, at 5:12 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  Thoughts? I know we have people who are not able to attend at our
  current time.
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >