[openstack-dev] [Cinder] Icehouse RC3 available

2014-04-15 Thread Thierry Carrez
Hello everyone,

Two regressions were detected in Cinder release candidate testing. We
respun a new release candidate to include the fixes for those issues in
the final Icehouse release. You can find links to the 2 bugs fixed and
the RC3 source tarball at:

https://launchpad.net/cinder/icehouse/icehouse-rc3

Unless new release-critical regressions are found that warrant a new
release candidate respin, this RC3 will be formally released as part of
the 2014.1 final version on Thursday. Last minute testing is therefore
strongly encouraged on this tarball!

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/cinder/tree/milestone-proposed

If you find an issue that could be considered release-critical and
justify a release candidate respin, please file it at:

https://bugs.launchpad.net/cinder/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

2014-04-15 Thread Renat Akhmerov
Some notes:
- Even though we use YAQL now, our design is flexible enough to plug other ELs in.
- In case it tells you something: in Amazon SWF, the component that makes decisions
about further routing is called a Decider.
- This discussion about conditionals is surely important, but it doesn't matter
too much if we don't agree on the lazy execution model (a small sketch follows below).
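
To make the Decider and lazy-execution points concrete, here is a minimal,
purely illustrative Python sketch (the function and task names are made up;
this is not Mistral, TaskFlow or SWF API):

  # Illustrative "decider": given the task that just finished, its result and
  # the accumulated context, return the name(s) of the task(s) to run next.
  # All names here are hypothetical.
  def decide_next(finished_task, result, context):
      if finished_task == 'create_vm':
          # Conditional transition: branch on the outcome of the task.
          if result.get('status') == 'ACTIVE':
              return ['notify_success']
          return ['rollback_vm']
      # Terminal tasks produce no further transitions.
      return []

  # A lazy/async engine would persist the context, call decide_next() on every
  # task-completion event, schedule whatever it returns, and then release the
  # worker -- no engine object has to stay resident in memory.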

 Of course I'm trying to make the above not be its own micro-language as much 
 as possible (a switch object starts to act like one, sadly).

Why do you think it’s going to be a micro-language?

 [1] http://www.cs.cmu.edu/~aldrich/papers/onward2009-concurrency.pdf
 [2] 
 http://www.cs.ucf.edu/~dcm/Teaching/COT4810-Spring2011/Literature/DataFlowProgrammingLanguages.pdf

Cool, thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer]support direct alarm_evaluator db access

2014-04-15 Thread liusheng
Hi there,
Currently, alarm_evaluator invokes ceilometerclient to get all assigned
alarms and then evaluates each alarm by getting statistics, which also
goes through ceilometerclient. The process is:
evaluator -> ceilometerclient -> ceilometer API -> db.
If we use the default evaluation_interval (60s) and we have
many alarms, alarm_evaluator will invoke ceilometerclient frequently and
produce many HTTP requests per minute.
That is inefficient; it affects the performance of ceilometer (data
collection, data query, etc.). A better way would be to allow
alarm_evaluator to access the db directly.
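
To illustrate the difference, a rough sketch (the ceilometerclient calls below
follow the usual client usage, while the direct-storage part is only an
assumption about ceilometer internals -- the blueprint would define the real
interface):

  # Current chain (simplified): every evaluation cycle goes over HTTP.
  #   evaluator -> ceilometerclient -> ceilometer API -> db
  from ceilometerclient import client as ceiloclient

  cclient = ceiloclient.get_client(
      '2', os_username='admin', os_password='secret',
      os_tenant_name='admin', os_auth_url='http://localhost:5000/v2.0')

  for alarm in cclient.alarms.list():          # one HTTP request
      stats = cclient.statistics.list(         # plus one HTTP request per alarm
          meter_name='cpu_util',
          q=[{'field': 'resource_id', 'op': 'eq', 'value': 'example-id'}])

  # Proposed direction (sketch only; module/method names are assumptions):
  #   evaluator -> storage driver -> db
  # from ceilometer import storage
  # conn = storage.get_connection(cfg.CONF)
  # stats = conn.get_meter_statistics(sample_filter, period=60)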

Does the related alarm-evaluator code need a refactor?

Can you provide your thoughts about this?

I have registered a related blueprint:
https://blueprints.launchpad.net/ceilometer/+spec/direct-alarm-evaluator-db-access

Best regards
Liu sheng


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: OpenStack VMs not able to obtain IP address from DHCP

2014-04-15 Thread abhishek jain
Hi

I'm following this link for OpenDaylight OpenStack integration:

http://networkstatic.net/opendaylight-openstack-integration-devstack-fedora-20/

I'm able to integrate the OpenDaylight controller with the stack using the ML2
plugin and boot VMs on the controller node and on the compute node.

The problem I'm facing now is that the VM on the compute node is not able to
obtain an IP address from DHCP.
However, the VM on the controller node does obtain an IP address from
DHCP.

I'm attaching my local.conf at the compute node as well as at the
controller node

Please help regarding this.


Thanks
Abhishek Jain


local.conf_compute_node
Description: Binary data


local.conf_controller
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration summary

2014-04-15 Thread Kirill Izotov
Since you have mentioned that, I'm kind of interested, who else is going to 
benefit from this transition?  

I'm asking because everyone tells me that it's essential for us to work
together, though my senses tell me that we are starting from different
prerequisites, targeting different use cases and taking entirely different
approaches to solve the problem. Actually, I'd say we have more differences
than things we have in common.

Have you heard the same requests from any of your current users? Maybe we
should invite them to the discussion, just to make sure we would not have to redo
it again.

--  
Kirill Izotov


Tuesday, 15 April 2014, at 11:13, Joshua Harlow wrote:

 Sure, it's not the fully complete lazy_engine, but piece by piece we can get
 there.
  
 Of course code/contributions are welcome, as such things will benefit more 
 than just mistral, but openstack as a whole :-)  
  
 -Josh  
  
 From: Kirill Izotov enyk...@stackstorm.com
 Date: Monday, April 14, 2014 at 9:02 PM
 To: Joshua Harlow harlo...@yahoo-inc.com
 Cc: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow 
 integration summary
  
  Thanks for pointing that out, Joshua.
   
  I had a look at [1] and it seems to me that it might actually do the trick
  to some degree, though I'm afraid this is still not what we are looking 
  for. While Mistral is asynchronous and event-driven, this particular design 
  is not and would still force us to store the engine in memory and therefore 
  limit our means of scalability. The lazy engine (or better controller) I 
  have proposed is asynchronous at its core and would fit the needs for both 
  of us (since it's much easier to make sync from async, rather than 
  backwards).  
   
  Regarding the retries, while it might work with the current flow design, I
  doubt it would work with conditional transitions. The attempt to build a
  repeater by encapsulating the tasks into a sub-flow basically means that
  every transition they produce will be in that flow and you can't leave it
  until they are all finished. The whole idea of sub-flows within the scope
  of direct conditional transitions is a bit unclear to me (and probably to us
  all) at the moment, though I'm trying to rely on them only as a means to
  lessen the complexity.
   
  [1] https://review.openstack.org/#/c/86470  
   
  --   
  Kirill Izotov
   
   
   Friday, 11 April 2014, at 23:47, Joshua Harlow wrote:
   
    Thanks for the write-up, Kirill.

   Also some adjustments,  

   Both points are good, and putting some of this on @ 
   https://etherpad.openstack.org/p/taskflow-mistral-details so that we can 
   have it actively noted (feel free to adjust it).  

    I think Ivan is working on some docs/code/… for the lazy engine idea, so
    hopefully we can get back soon with that. Let's see what comes out of that
    effort and iterate on that.

    For (2), you are mostly correct about unconditional execution although
    [1] does now change this, and there are a few active reviews being worked
    on [3] to fit this mistral use-case better. I believe [2] can help move
    in this direction, and Ivan's ideas I think will also push it a little
    farther too. Of course let's work together to make sure they fit the best
    so that taskflow & mistral & openstack can be the best they can be
    (pigeons not included).

   Can we also make sure the small issues are noted somewhere (maybe in the 
   above etherpad??). Thanks!  

   [1] https://wiki.openstack.org/wiki/TaskFlow#Retries  
   [2] https://review.openstack.org/#/c/86470
   [3] 
   https://review.openstack.org/#/q/status:open+project:openstack/taskflow,n,z
 

    From: Kirill Izotov enyk...@stackstorm.com
    Reply-To: OpenStack Development Mailing List (not for usage questions)
    openstack-dev@lists.openstack.org
    Date: Thursday, April 10, 2014 at 9:20 PM
    To: OpenStack-dev@lists.openstack.org
   Subject: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow 
   integration summary

Hi everyone,  
 
This is a summary of the prototype integration we did not too long ago:
http://github.com/enykeev/mistral/pull/1. Hope it sheds some light
on the aspects of the integration we are struggling with.
 
There is a possibility to build Mistral on top of TaskFlow as a 
library, but in order to meet the requirements dictated by Mistral 
users and use cases, both Mistral and TaskFlow should change.  
 
There are two main sides of the story. 

Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration summary

2014-04-15 Thread Renat Akhmerov

On 15 Apr 2014, at 11:13, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Sure, it's not the fully complete lazy_engine, but piece by piece we can get
 there.

Do you have any estimate of when that could happen? :)

 Of course code/contributions are welcome, as such things will benefit more 
 than just mistral, but openstack as a whole :-)

OK.

 From: Kirill Izotov enyk...@stackstorm.com
 The whole idea of sub-flows within the scope of direct conditional 
 transitions is a bit unclear to me (and probably us all) at the moment, 
 though I'm trying to rely on them only as a means to lesser the complexity.

Yes, eventually it's for reducing complexity. I would just add that it opens a
wide range of opportunities like:
- the ability to combine multiple physically independent workflows
- reusability (using one workflow as a part of another)
- isolation (different namespaces for data flow contexts etc.)

Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Daniel P. Berrange
On Mon, Apr 14, 2014 at 11:26:17AM -0500, Ben Nemec wrote:
 tldr: I propose we use bash explicitly for all diskimage-builder
 scripts (at least for the short-term - see details below).
 
 This is something that was raised on my linting changes to enable
 set -o pipefail.  That is a bash-ism, so it could break in the
 diskimage-builder scripts that are run using /bin/sh.  Two possible
 fixes for that: switch to /bin/bash, or don't use -o pipefail
 
 But I think this raises a bigger question - does diskimage-builder
 require bash?  If so, I think we should just add a rule to enforce
 that /bin/bash is the shell used for everything.  I know we have a
 bunch of bash-isms in the code already, so at least in the
 short-term I think this is probably the way to go, so we can get the
 benefits of things like -o pipefail and lose the ambiguity we have
 right now.  For reference, a quick grep of the diskimage-builder
 source shows we have 150 scripts using bash explicitly and only 24
 that are plain sh, so making the code truly shell-agnostic is likely
 to be a significant amount of work.
 
 In the long run it might be nice to have cross-shell compatibility,
 but if we're going to do that I think we need a couple of things: 1)
 Someone to do the work (I don't have a particular need to run dib in
 not-bash, so I'm not signing up for that :-) 2) Testing in other
 shells - obviously just changing /bin/bash to /bin/sh doesn't mean
 we actually support anything but bash.  We really need to be gating
 on other shells if we're going to make a significant effort to
 support them.  It's not good to ask reviewers to try to catch every
 bash-ism proposed in a change.  This also relates to some of the
 unit testing work that is going on right now too - if we had better
 unit test coverage of the scripts we would be able to do this more
 easily.
 
 Thoughts?

I suppose that rewriting the code to be in Python is out of the
question?  IMHO shell is just a terrible language for doing any
program that is remotely complicated (i.e. longer than 10 lines of
shell), for the reasons you are unfortunately illustrating here,
among many others.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 03:07 AM, Angus Salkeld wrote:
 Hi all,
 
 I'd like to announce my candidacy as a TC member.
 
 A little about me:
 I am a software developer at Rackspace (previously at Red Hat). I
 started off my OpenStack
 contributions with Heat (I started Heat with Steve Dake). I have also
 made some significant contributions to Ceilometer, and I am now
 working mostly full-time on Solum.
 
 What I'd like to focus on within the TC:
 I'd like to help/encourage the upper layer projects (heat, murano,
 mistral, solum, ...) so that they
 1) fit into OpenStack in a sensible way
 2) make sure we have the organisational structures in place to encourage
developers to work better together to deliver features that actually
 make sense to end users
and don't just confuse everyone.
 3) if they want to integrate, help communication between the project
 and the TC
 
 We have developers in this community doing some amazing work, but I
 think we need
 some better mechanisms for ongoing cross project communication (or
 possibly a different
 programme structure) that makes it easier to see what these new
 projects are doing
 and how it all fits together. I'd like to see developers in these
 projects working more
 together so that we build features that are cohesive and projects that
 try to do one thing well.
 
 I think having more people on the TC that know these projects better can
 only be a good thing (note: I am not totally up to speed on Murano but
 aim to be).
 
 Regards
 Angus Salkeld
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Tracking PoC readiness etherpad

2014-04-15 Thread Renat Akhmerov
Team,

Following up on yesterday's community meeting, I created an etherpad and
captured all the formal steps we need to take in order to consider Mistral PoC
ready [0]. Please take a look and comment and/or add other items that you think
I missed.

My expectation is to get all the items before "Release under 'poc' tag in
git" knocked out this week. The rest is for next week.

[0] https://etherpad.openstack.org/p/mistral-poc-readiness

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Chris Jones
Hi

On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com wrote:

 I suppose that rewriting the code to be in Python is out of the
 question?  IMHO shell is just a terrible language for doing any
 program that is remotely complicated (i.e. longer than 10 lines of


I don't think it's out of the question - where something makes sense to
switch to Python, that would seem like a worthwhile thing to be doing. I do
think it's a different question though - we can quickly flip things from
/bin/sh to /bin/bash without affecting their suitability for replacement
with python.

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] starting regular meetings

2014-04-15 Thread Julien Danjou
On Mon, Apr 14 2014, Doug Hellmann wrote:

 Balancing Europe and Pacific TZs is going to be a challenge. I can't
 go at 1800 or 1900, myself, and those are pushing a little late in
 Europe anyway.

 How about 1600?
 http://www.timeanddate.com/worldclock/converted.html?iso=20140414T16&p1=0&p2=2133&p3=195&p4=224

Yeah, that should work for me most of the time.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gantt] Reminder: Todays weekly meeting at 1500 UTC

2014-04-15 Thread Sylvain Bauza
Hi,

Just for reminder, the weekly meeting will be held today at 1500 UTC on
#openstack-meeting.

The agenda I can see for today is :
 - Follow-up on previous actions
 - Status on forklift efforts
 - Juno summit design sessions
 - open discussion

If you want to discuss another topic, please come back to me before
the meeting.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] starting regular meetings

2014-04-15 Thread Ghe Rivero
1600 UTC for me is ok. Later than that in Europe (Friday
afternoon/night) could be a problem.

Ghe Rivero

On 04/14/2014 10:56 PM, Doug Hellmann wrote:
 That may work. Let's see what the team members in Europe say about the
 proposed times.

 Doug

 On Mon, Apr 14, 2014 at 3:04 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Let's try that (1600) out and see how it goes :)

 I've seen other projects set up alternating times, maybe we could do that
 too?

 One week @ 1600UTC, next week @ 2000UTC (and so-on).

 -Original Message-
 From: Doug Hellmann doug.hellm...@dreamhost.com
 Date: Monday, April 14, 2014 at 11:53 AM
 To: Joshua Harlow harlo...@yahoo-inc.com
 Cc: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [oslo] starting regular meetings

 Balancing Europe and Pacific TZs is going to be a challenge. I can't
 go at 1800 or 1900, myself, and those are pushing a little late in
 Europe anyway.

 How about 1600?
  http://www.timeanddate.com/worldclock/converted.html?iso=20140414T16&p1=0&p2=2133&p3=195&p4=224

 We would need to move to another room, but that's not a big deal.

 Doug

 On Mon, Apr 14, 2014 at 1:54 PM, Joshua Harlow harlo...@yahoo-inc.com
 wrote:
 Is anything around 1800 UTC or 1900 UTC possible?

 I'll try to be there, 1400 UTC is pretty early for us pacific coast
 folk.

 -Original Message-
 From: Doug Hellmann doug.hellm...@dreamhost.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, April 14, 2014 at 9:07 AM
 To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [oslo] starting regular meetings

 The Oslo team has had a regular meeting slot on Friday at 1400 UTC. In
 the past, we only held meetings irregularly when we had something
 definite to discuss. During Juno, I expect us to need to coordinate
 more closely with the new liaison team as well as internally, so I
 would like to start holding regular weekly meetings.

 Is the Friday 1400 UTC time slot inconvenient enough to anyone that
 you couldn't make a regular meeting? We can work out another slot, if
 needed.

 Since there isn't likely to be time to find another slot this week,
 please plan to join the meeting in #openstack-meeting this week on 18
 Apr. If you have anything you would like to discuss, please add it to
 the agenda:
 https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting

 Doug

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Re: deliver the vm-level HA to improve the business continuity with openstack

2014-04-15 Thread Qiming Teng
What I saw in this thread are several topics:

1) Is VM HA really relevant (in a cloud)?

This is the most difficult question to answer, because it really depends
on who you are talking to and what user community you are facing.
IMHO, for most web-based applications that are born to run on cloud,
maybe certain level of business resiliency has already been built into
the code, so the application or service can live happily when VMs come
and go.

For traditional business applications, the scenario may be quite
different.  These apps are migrated to cloud for reasons like cost
savings, server consolidation, etc.  Quite a few companies are
evaluating OpenStack for their private cloud -- which is a weird term,
IMHO.

In addition to this, while we are looking into the 'utility' vision of
cloud, we can still ask ourselves: a) can we survive one month of power
or water outage, even though there is abundant supply elsewhere on this
planet? b) what are the costs we need to pay if we eventually make it?
c) do we want to pay for this?

My personal experience is that our customers really want this feature
(VM HA) for their private clouds.  The question they asked us was:


  Does OpenStack support VM HA?  Maybe not for all VMs...
  We know we can have that using vSphere, Azure, or CloudStack...



2) Where is the best location to provide VM HA?

Suppose that we do feel the need to support VM HA, then the questions
following this would 'where' and 'how'.

Considering that a VM is not merely a bundle of compute processes, it is
actually a virtual execution environment that consumes resources like
storage and network bandwidth besides processor cycles, Nova may NOT be
the ideal location to deal with this cross-cutting concern.

High availability involves redundant resource provisioning, effective
failure detection and appropriate fail-over policies, including fencing.
Imposing all these requirements on Nova is impractical.  We may need to 
consider whether VM HA, if ever implemented/supported, should be part of 
the orchestration service, aka Heat.


3) Can/should we do the VM HA orchestration in Heat?

My perception is that it can be done in Heat, based on my limited
understanding of how Heat works.  It may imply some requirements on other
projects (e.g.  nova, cinder, neutron ...) as well, though Heat should be 
the orchestrator.

What do we need then?

  - A resource type for VM groups/clusters, for the redundant
provisioning.  VMs in the group can be identical instances, managed 
by a Pacemaker setup among the VMs, just like a WatchRule in Heat can 
be controlled by Ceilometer.  

Another way to do this is to have the VMs monitored via heartbeat 
messages sent by Nova (if possible/needed), or some services injected 
into the VMs (consider what cfn-hup, cfn-signal does today).

However, the VM group/cluster can decide how to react to a VM online
/offline signal.  It may choose to a) restart the VM in-place; b)
remote-restart (aka evacuate) the VM somewhere else; c) live/cold 
migrate the VM to other nodes.

    The policies can be outsourced to other plugins, considering
global load-balancing or power management requirements.  But that is an
advanced feature that warrants another blueprint.

  - Some fencing support from nova, cinder, neutron to shoot the bad VMs
in the head so a VM that cannot be reached is guaranteed to be cleanly
killed.

  - VM failure detectors that can reliably tell whether a VM has failed.  
Sometimes a VM that failed the expected performance goal should be
treated as failed as well, if we really want to be strict on this.

    A failure detector can reside inside Nova, as has been done for
the 'service groups' there.  It can reside inside a VM, as a service
installed there, sending out heartbeat messages (before the battery runs
out, :)).  A toy sketch of such a detector follows below.

  - A generic signaling mechanism that allows a secure message delivery
back to Heat indicating that a VM is alive or dead.

My current understanding is that we may avoid complicated task-flow
here.
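
As a toy illustration of the heartbeat-based failure detection mentioned
above (nothing here is Nova or Heat API; the names and thresholds are made
up):

  import time

  HEARTBEAT_INTERVAL = 10      # seconds between expected heartbeats
  MISSED_BEATS_ALLOWED = 3     # declare failure after this many misses

  last_seen = {}               # vm_id -> timestamp of last heartbeat

  def record_heartbeat(vm_id, now=None):
      """Called whenever a heartbeat message arrives for a VM."""
      last_seen[vm_id] = now or time.time()

  def failed_vms(now=None):
      """Return VMs whose heartbeats have been missing for too long."""
      now = now or time.time()
      deadline = HEARTBEAT_INTERVAL * MISSED_BEATS_ALLOWED
      return [vm for vm, ts in last_seen.items() if now - ts > deadline]

  # A VM group/cluster resource could poll failed_vms() periodically and then
  # apply its policy: restart in place, evacuate, or migrate -- after fencing
  # the dead VM as described above.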

Regards,
  - Qiming


 For the most part we've been trying to encourage projects that want to
 control VMs to add such functionality to the Orchestration program, aka
 Heat.
 Yes, exactly.
 
 -jay
 
 Hey folks,
 
 Just as a note for HA for VMs, our current heat-core thinking is our
 HARestarter resource functionality is a workflow (Restarter is a
 verb, rather then a Noun - Heat orchestrates Nouns) and would be
 better suited to a workflow service like Mistral.  Clearly we don't
 know how to get from where we are today to the proper separation of
 concerns as pointed out by Zane Bitter in recent threads on the ml
 but just throwing this out there so folks are aware.
 
 Regards
 -steve
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Migrations, service plugins and Grenade jobs

2014-04-15 Thread Anna Kamyshnikova
Hello everyone!

I would like to try to solve this problem. I registered a blueprint on this
topic:
https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations
and I'm going to experiment with options to solve it. I welcome any
suggestions and am ready to talk about it via IRC (akamyshnikova).

Regards
Ann


On Mon, Apr 14, 2014 at 9:39 PM, Sean Dague s...@dague.net wrote:

 On 04/14/2014 12:46 PM, Eugene Nikanorov wrote:
  Hi Salvatore,
 
  The described problem could be even worse if vendor drivers are
 considered.
  Doesn't #1 require that all DB tables are named differently? Otherwise
  it seems that user can't be sure in DB schema even if all tables are
  present.
 
  I think the big part of the problem is that we need to support both
  online and offline migrations. Without the latter things could be a
  little bit simpler.
 
  Also it seems to me that problem statement should be changed to the
  following:
  One need to migrate from (Config1, MigrationID1) to (Config2,
  MigrationID2), and currently our code only accounts for MigrationIDs.
  We may consider amending DB with configuration metadata, at least that
  will allow to run migration code with full knowledge of what happened
  before (if online mode is considered).
  In offline mode that will require providing old and current
 configurations.
 
  That was just thinking aloud, no concrete proposals yet.

 The root issue really is Migrations *must* be global, and config
 invariant. That's the design point in both sqlalchemy-migrate and
 alembic. The fact that there is one global migrations table per
 database, with a single value in it, is indicative of this fact.

 I think that design point got lost somewhere along the way, and folks
 assumed migrations were just a way to change schemas. They are much more
 constrained than that.

 It does also sound like the data model is going to need some serious
 reconsidering given what's allowed to be changed at the plugin or vendor
 driver model. Contrast this with Nova, where virt drivers don't get to
 define persistent data that's unique to them (only generic data that
 they fit into the grander nova model).

 The one time we had a driver which needed persistent data (baremetal) it
 managed its own database entirely.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-15 Thread Duncan Thomas
On 11 April 2014 16:24, Eric Harney ehar...@redhat.com wrote:


 I suppose I should also note that if the plans in this blueprint are
 implemented the way I've had in mind, the main issue here about only
 loading shares at startup time would be in place, so we may want to
 consider these questions under that direction.

Currently, any config changes to a backend require a restart of the
volume service to be reliably applied, shares included. Some changes
work for shares, some don't, which is a dangerous place to be. If
we're going to look at restartless config changes, then I thing we
should look at how it could be generalised for every backend, not just
shared fs ones.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] new contributor failing tests - help!

2014-04-15 Thread mar...@redhat.com
On 14/04/14 19:51, Kevin Benton wrote:
 Hi Mario,
 
 Here is the problem:
 >>> import netaddr
 >>> p = netaddr.IPNetwork('2001:0db8::/64')
 >>> str(p)
 '2001:db8::/64'
 
 Now that you are converting CIDR strings to netaddr objects and back, it's
 causing redundant info to be dropped. You will just have to remove the
 references to 2001:0db8::/64 and replace them with 2001:db8::/64.
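
If you prefer the tests not to depend on which spelling the code under test
preserves, a small alternative (just a sketch, assuming the expected CIDRs
are built inside the test itself) is to normalize both sides through netaddr
before comparing:

  import netaddr

  def normalize_cidr(cidr):
      # netaddr produces the canonical compact form,
      # e.g. '2001:0db8::/64' -> '2001:db8::/64'
      return str(netaddr.IPNetwork(cidr))

  assert normalize_cidr('2001:0db8::/64') == '2001:db8::/64'

  # In a test, compare normalized values on both sides, e.g.:
  # self.assertEqual(normalize_cidr(expected_prefix),
  #                  normalize_cidr(rule['source_ip_prefix']))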

thanks so much! This did it and the tests now pass ok. When we meet (I
won't be at Atlanta unfortunately, hopefully next one) I owe you at
*least* one beer ;)

thanks, marios

 
 I will comment on the review as well.
 
 Cheers,
 Kevin Benton
 
 
 On Mon, Apr 14, 2014 at 9:23 AM, mar...@redhat.com mandr...@redhat.com wrote:
 
 Hi,

 I am really stumped by a Jenkins failure for one of my reviews... @
 https://review.openstack.org/#/c/59212/ if any kind soul has any
 pointers/help I will be very grateful. The strange thing is that Jenkins
 +1 this patchset (Apr 2) but subsequently failed as described below:

 The failure is from

 neutron.tests.unit.test_security_groups_rpc.SGServerRpcCallBackMixinTestCase
 and specifically the 2 test cases
 test_security_group_rules_for_devices_ipv6_ingress and
 test_security_group_rules_for_devices_ipv6_egress

 The failure is on the assertion at the end of the methods:

  self.assertEqual(port_rpc['security_group_rules'],expected)

 port_rpc['security_group_rules'] --- {'ethertype': u'IPv6',
 'direction': u'egress'
 expected --- {'ethertype': 'IPv6', 'direction':
 'egress'

 (you can see this in the logs, e.g.

 http://logs.openstack.org/12/59212/14/check/gate-neutron-python27/57d89d5/console.html.gz
 )


 I have *no* idea what in my code is causing this behaviour; if I just
 grab this review, the tests pass fine. But if I rebase against master,
 they fail, i.e.:

 git review -d I71fb8c887963a122a5bd8cfdda800026c1cd3954
 source .tox/py27/bin/activate
 ./run_tests.sh -d

 neutron.tests.unit.test_security_groups_rpc.SGServerRpcCallBackMixinTestCase

 ...

 Ran 7 tests in 1.926s
 OK

 ...

 git rebase master
 ./run_tests.sh -d

 neutron.tests.unit.test_security_groups_rpc.SGServerRpcCallBackMixinTestCase

 ...

 Ran 11 tests in 2.964s
 FAILED (failures=2)

 ...

 thanks for reading, if you came this far!

 marios

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] deliver the vm-level HA to improve the business continuity with openstack

2014-04-15 Thread Duncan Thomas
On 14 April 2014 19:51, James Penick pen...@yahoo-inc.com wrote:
 We drive the "VM=Cattle" message pretty hard. Part of onboarding a
 property to our cloud, and allowing them to serve traffic from VMs is
 explaining the transient nature of VMs. I broadcast the message that all
 compute resources die, and if your day/week/month is ruined because of a
 single compute instance going away, then you're doing something Very
 Wrong. :)

While I agree with the message, if cloud provider A has VM restarts
every hour, and B has restarts every 6 months, all other things being
equal I'm going to go with B. Restarts are a pain point for most
systems, requiring data resynchronisation etc, so looking to minimise
them is a good aim as long as it doesn't conflict much with other
concerns...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2014-04-15 Thread Julien Danjou
Hi there,

I'd like to announce my candidacy as a TC member.

I'm a software engineer, and started contributing to OpenStack 2.5 years
ago. I've also been the Ceilometer PTL for the last year. And I've
actually already seated at the TC last year.

Within the TC, I'd like to focus on improving projects on the QA level,
defining clear guidance on incubation/graduation and improving our
user/operator experience. I think we can do better than we do
currently, and I'd love to weigh in on that.

Cheers,
-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Stack snapshots

2014-04-15 Thread Thomas Herve
Hi all,

I started working on the stack snapshot blueprint [1] and wrote a first series 
of patches [2] to get a feeling of what's possible. I have a couple of related 
design questions though:

 * Is a stack snapshot independent of the stack? That's the way I chose for my 
patches, you start with a stack, but then you can show and delete snapshots 
independently. The main impact will be how restoration works: is restoration an 
update action on a stack towards a specific state, or a creation action with 
backup data?

 * Consequently, should we use volume backups (which survive deletion of the
original volumes) or volume snapshots (which don't)? If the snapshot is
dependent on the stack, then we can use the more efficient snapshot operation.
But backup is also an interesting use case, so should it be another action
completely? (See the sketch below.)
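
For reference, the distinction at the Cinder level, roughly as exposed by
python-cinderclient (a sketch; the credentials and volume id are
placeholders):

  from cinderclient import client as cinder_client

  cinder = cinder_client.Client('1', 'admin', 'secret', 'admin',
                                'http://localhost:5000/v2.0')
  volume_id = 'VOLUME-UUID'  # placeholder

  # Snapshot: cheap and fast, but tied to the original volume's lifetime.
  snap = cinder.volume_snapshots.create(volume_id, display_name='stack-snap')

  # Backup: a full copy in the backup store that survives volume deletion
  # (requires the cinder-backup service to be running).
  bkp = cinder.backups.create(volume_id, name='stack-backup')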


[1] https://blueprints.launchpad.net/heat/+spec/stack-snapshot

[2] 
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/stack-snapshot,n,z

Thanks,

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 01:13 PM, Julien Danjou wrote:
 Hi there,
 
 I'd like to announce my candidacy as a TC member.
 
 I'm a software engineer, and started contributing to OpenStack 2.5 years
 ago. I've also been the Ceilometer PTL for the last year. And I've
 actually already sat on the TC this past year.
 
 Within the TC, I'd like to focus on improving projects on the QA level,
 defining clear guidance on incubation/graduation and improving our
 user/operator experience. I think we can do better than we do
 currently, and I'd love to weigh in on that.
 
 Cheers,
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-15 Thread Michael Still
Hi.

I'd also like to announce my TC candidacy. I am currently a member of
the TC, and I would like to continue to serve.

I first started hacking on Nova during the Diablo release, with my
first code contributions appearing in the Essex release. Since then
I've hacked mostly on Nova and Oslo, although I have also contributed
to many other projects as my travels have required. For example, I've
tried hard to keep various projects in sync with their imports of
parts of Oslo I maintain.

I work full time on OpenStack at Rackspace, leading a team of
developers who work solely on upstream open source OpenStack. I am a
Nova and Oslo core reviewer and the Nova PTL.

I have been serving on the TC for the last year, and in the Icehouse
release started acting as the liaison for the board defcore
committee along with Anne Gentle. defcore is the board effort to
define what parts of OpenStack we require vendors to ship in order to
be able to use the OpenStack trade mark, so it involves both the board
and the TC. That liaison relationship is very new and only starting to
be effective now, so I'd like to keep working on that if you're
willing to allow it.

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Ghe Rivero
+1 to use bash as the default shell. So far, all major distros use bash
as the default one (except Debian, which uses dash).
And about rewriting the code in Python: I agree that shell is complicated
for large programs, but writing anything command-oriented in anything other
than shell is a nightmare. Still, there are some parts that could benefit from it.

Ghe Rivero

On 04/15/2014 11:05 AM, Chris Jones wrote:
 Hi

 On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com wrote:

 I suppose that rewriting the code to be in Python is out of the
 question?  IMHO shell is just a terrible language for doing any
 program that is remotely complicated (i.e. longer than 10 lines of


 I don't think it's out of the question - where something makes sense
 to switch to Python, that would seem like a worthwhile thing to be
 doing. I do think it's a different question though - we can quickly
 flip things from /bin/sh to /bin/bash without affecting their
 suitability for replacement with python.

 -- 
 Cheers,

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default paths in os-*-config projects

2014-04-15 Thread Ghe Rivero
+1 for the use of /usr/share and keeping the compatibility for a couple
of releases.

Ghe Rivero
On 04/15/2014 03:30 AM, Clint Byrum wrote:
 Excerpts from Ben Nemec's message of 2014-04-14 15:41:23 -0700:
 Right now the os-*-config projects default to looking for their files in 
 /opt/stack, with an override env var provided for other locations.  For 
 packaging purposes it would be nice if they defaulted to a more 
 FHS-compliant location like /var/lib.  For devtest we could either 
 override the env var or simply install the appropriate files to /var/lib.

 This was discussed briefly in IRC and everyone seemed to be onboard with 
 the change, but Robert wanted to run it by the list before we make any 
 changes.  If anyone objects to changing the default, please reply here. 
   I'll take silence as agreement with the move. :-)

 +1 from me for doing FHS compliance. :)

 /var/lib is not actually FHS compliant as it is for Variable state
 information. os-collect-config does have such things, and does use
 /var/lib. But os-refresh-config reads executables and os-apply-config
 reads templates, neither of which will ever be variable state
 information.

 /usr/share would be the right place, as it is Architecture independent
 data. I suppose if somebody wants to compile a C program as an o-r-c
 script we could rethink that, but I'd just suggest they drop it in a bin
 dir and exec it from a one line shell script in the /usr/share.

 So anyway, I suggest:

 /usr/share/os-apply-config/templates
 /usr/share/os-refresh-config/scripts

 With the usual hierarchy underneath.

 We'll need to continue to support the non-FHS paths for at least a few
 releases as well.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 01:29 PM, Michael Still wrote:
 Hi.
 
 I'd also like to announce my TC candidacy. I am currently a member of
 the TC, and I would like to continue to serve.
 
 I first started hacking on Nova during the Diablo release, with my
 first code contributions appearing in the Essex release. Since then
 I've hacked mostly on Nova and Oslo, although I have also contributed
 to many other projects as my travels have required. For example, I've
 tried hard to keep various projects in sync with their imports of
 parts of Oslo I maintain.
 
 I work full time on OpenStack at Rackspace, leading a team of
 developers who work solely on upstream open source OpenStack. I am a
 Nova and Oslo core reviewer and the Nova PTL.
 
 I have been serving on the TC for the last year, and in the Icehouse
 release started acting as the liaison for the board defcore
 committee along with Anne Gentle. defcore is the board effort to
 define what parts of OpenStack we require vendors to ship in order to
 be able to use the OpenStack trade mark, so it involves both the board
 and the TC. That liaison relationship is very new and only starting to
 be effective now, so I'd like to keep working on that if you're
 willing to allow it.
 
 Cheers,
 Michael
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Environments Use Cases

2014-04-15 Thread Roshan Agrawal
Julien, good inputs. I will make updates to the wiki after compiling feedback 
from today's IRC


From: Julien Vey [vey.jul...@gmail.com]
Sent: Monday, April 14, 2014 3:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Solum] Environments Use Cases

Hi Roshan,

Happy to see you start the discussion about environments

Angus also started a wiki page on this 
https://wiki.openstack.org/wiki/Solum/ApiModel but on a more technical level.
And we discussed it a little on this review 
https://review.openstack.org/#/c/84434/

About the use-cases you described, here are some comments:
#1: I don't think it should be the developer's responsibility to say where his
application gets deployed. The chaining of environments should have been
decided during the creation of the environments. A developer would only push
his code to the first environment, and the promotions are responsible for the
rest.
#3: I think a better way to say it would be "As a release manager, I can
choose how many resources I allocate to each environment." I don't think we
need a quota or threshold.
#4: Good point!
#6: The artifact generated by the build job will never get rebuilt (a WAR
archive for instance, in the case of a Java WebApp), but the DU might be. For
instance, we might want to use Docker for Dev/Testing and VMs for production.

Regards
Julien


2014-04-14 21:43 GMT+02:00 Roshan Agrawal 
roshan.agra...@rackspace.com:
As a follow up to our F2F discussion at Raleigh on Environments,  I have 
documented an initial set of use cases as it relates to Environments:
https://wiki.openstack.org/wiki/Solum/Environments

The goal of this discussion thread on Environments is for the Solum team to 
develop a POV on what Environment is, and which project(s) in OpenStack should 
own it. With this, we should be able to go into the Atlanta summit and engage 
in discussions with the relevant project teams.

We can discuss in the IRC meeting tomorrow, meanwhile would appreciate your 
feedback on what is documented above.

Thanks & Regards,
Roshan Agrawal
Direct:512.874.1278
Mobile:  512.354.5253
roshan.agra...@rackspace.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Code merge policy

2014-04-15 Thread Dmitry Pyzhov
Guys,

We have a big and complicated project structure, and some of our
patchsets require additional actions before merge. Sometimes we need
approval from testers, sometimes we need to merge requests in several repos at
the same time, and sometimes we need updates of rpm repositories before merge.

We have an informal rule: invite all the required people to the review, and a
core reviewer does not merge code if some of the +1's are missing. Sadly, this
rule is not obvious.

This informal rule becomes even stricter when we need an update of the rpm/deb
repositories, because OSCI changes should be completed right before
merge. For such reviews we ask the OSCI team to do the changes, checks and merge.

https://review.openstack.org/#/c/86001/ This particular request requires a
check of our 4.1.1 rpm/deb repositories' status. That's why Roman Vyalov is
added as a reviewer.

I don't like over-bureaucracy. My suggestion is simple: take into account the
reviewers' status and do not merge if unsure.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-15 Thread Eugene Nikanorov
Hi Stephen,

Thanks for a good summary. Some comments inline.


On Tue, Apr 15, 2014 at 5:20 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:


 So! On this front:

 1. Does it make sense to keep filling out use cases in Samuel's document
 above? I can think of several more use cases that our customers actually
 use on our current deployments which aren't considered in the 8 cases in
 Samuel's document thus far. Plus nobody has created any use cases from the
 cloud operator perspective yet.


I treat Sam's doc as a source of use cases to triage API proposals. If you
think you have use cases that don't fit into existing API or into proposed
API, they should certainly be brought to attention.



 2. It looks like we've started to get real-world data on Load Balancer
 features in use in the real world. If you've not added your organization's
 data, please be sure to do so soon so we can make informed decisions about
 product direction. On this front, when will we be making these decisions?

I'd say we have two kinds of features - one kind is features that affect or
even define the object model and API.
The other kind is features that are implementable within the existing/proposed
API or require slight changes/evolution.
The first kind is the priority: while some such features may or may not be
implemented in a particular release, we need to implement the proper
infrastructure for them (API, object model).

Oleg Bondarev (he's a neutron core) and I are planning, and mostly interested,
to work on implementing the generic stuff like the API/object model and adapting
the haproxy driver to it. So our goal is to make the implementation of particular
features simpler for contributors and also to make sure that the proposed design
fits into the general lbaas architecture. I believe that everyone who wants to
see a certain feature may start working on it - propose a design, participate in
discussions and start actually writing the code.



 3. Jorge-- I know an action item from the last meeting was to draft a
 revision of the API (probably starting from something similar to the Atlas
 API). Have you had a chance to get started on this, and are you open for
 collaboration on this document at this time? Alternatively, I'd be happy to
 take a stab at it this week (though I'm not very familiar with the Atlas
 API-- so my proposal might not look all that similar).


+1, i'd like to see something as well.


 What format or template should we be following to create the API
 documentation?  (I see this here:
 http://docs.openstack.org/api/openstack-network/2.0/content/ch_preface.html 
 but this seems like it might be a little heavy for an API draft that is
 likely to get altered significantly, especially given how this discussion
 has gone thus far. :/ )


Agreed, that's too heavy for an API sketch. I think a set of resources with
some attributes plus a few CLI calls is what could show the picture.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Migrations, service plugins and Grenade jobs

2014-04-15 Thread Salvatore Orlando
Thanks Anna.

I've been following the issue so far, but I am happy to hand it over to you.
I think the problem assessment is complete, but if you have more questions
ping me on IRC.

Regarding the solution, I think we already have a fairly wide consensus on
the approach.
There are however a few details to discuss:
- Conflicting schemas. For instance two migrations for two distinct plugins
might create tables with the same name but different columns.
  We first need to look at existing migrations to verify where this
condition occurs, and then study a solution case by case.
- Logic for corrective migrations. For instance a corrective migration
for 569e98a8132b_metering is needed. However, such a corrective migration
should have logic for understanding whether the original migration has been
executed or not (a rough sketch follows after this list).
- Corrective actions for corrupted schemas. This would be the case, for
instance, of somebody who enables metering while the database is at a
migration rev higher than the one when metering was introduced.
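
To make the corrective-migration point concrete, here is what such a migration
could look like (only a sketch, assuming alembic plus SQLAlchemy reflection;
the table definition is illustrative rather than the real metering schema, and
reflection like this only works for online migrations, which is exactly part
of the online/offline problem mentioned earlier in the thread):

  from alembic import op
  import sqlalchemy as sa
  from sqlalchemy.engine import reflection

  def upgrade(active_plugins=None, options=None):
      # Only create the table if the original migration was skipped,
      # e.g. because the metering plugin was disabled at the time.
      insp = reflection.Inspector.from_engine(op.get_bind())
      if 'meteringlabels' in insp.get_table_names():
          return
      op.create_table(
          'meteringlabels',
          sa.Column('id', sa.String(36), primary_key=True),
          sa.Column('name', sa.String(255)),
          sa.Column('description', sa.String(1024)))

  def downgrade(active_plugins=None, options=None):
      op.drop_table('meteringlabels')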

I reckon it might be the case of putting together a specification and pushing
it to the newly created neutron-specs repo, assuming that we feel confident
enough to start using this new process (Kyle and Mark might chime in on
this point). Also, I would like to see this work completed by Juno-1, which
I reckon is a reasonable target.

Of course I'm available for discussing design, implementation, reviewing
and writing code.

Salvatore



On 15 April 2014 12:44, Anna Kamyshnikova akamyshnik...@mirantis.com wrote:

 Hello everyone!

 I would like to try to solve this problem. I registered a blueprint on this
 topic:
 https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations
 and I'm going to experiment with options to solve it. I welcome any
 suggestions and am ready to talk about it via IRC (akamyshnikova).

 Regards
 Ann


 On Mon, Apr 14, 2014 at 9:39 PM, Sean Dague s...@dague.net wrote:

 On 04/14/2014 12:46 PM, Eugene Nikanorov wrote:
  Hi Salvatore,
 
  The described problem could be even worse if vendor drivers are
 considered.
  Doesn't #1 require that all DB tables are named differently? Otherwise
  it seems that user can't be sure in DB schema even if all tables are
  present.
 
  I think the big part of the problem is that we need to support both
  online and offline migrations. Without the latter things could be a
  little bit simpler.
 
  Also it seems to me that problem statement should be changed to the
  following:
  One need to migrate from (Config1, MigrationID1) to (Config2,
  MigrationID2), and currently our code only accounts for MigrationIDs.
  We may consider amending DB with configuration metadata, at least that
  will allow to run migration code with full knowledge of what happened
  before (if online mode is considered).
  In offline mode that will require providing old and current
 configurations.
 
  That was just thinking aloud, no concrete proposals yet.

 The root issue really is Migrations *must* be global, and config
 invariant. That's the design point in both sqlalchemy-migrate and
 alembic. The fact that there is one global migrations table per
 database, with a single value in it, is indicative of this fact.

 I think that design point got lost somewhere along the way, and folks
 assumed migrations were just a way to change schemas. They are much more
 constrained than that.

 It does also sound like the data model is going to need some serious
 reconsidering given what's allowed to be changed at the plugin or vendor
 driver model. Contrast this with Nova, where virt drivers don't get to
 define persistent data that's unique to them (only generic data that
 they fit into the grander nova model).

 The one time we had a driver which needed persistent data (baremetal) it
 managed its own database entirely.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-15 Thread Steven Hardy
Hi all!

I'd like to announce my candidacy to serve as a TC member.


About me:

I'm a software engineer at Red Hat, and have been working full-time on
OpenStack for the last two years.  I've been heavily involved with the Heat
project since before it was incubated, served as PTL for the Havana cycle,
and remain one of the top contributors to the project.

During Icehouse, I've been trying to improve my knowledge and involvement
with other projects, in particular contributing fixes to keystone and
working to improve Heat's functional testing via tempest.

Platform:

Working on orchestration provides a unique view of OpenStack - it's
essential to learn at least a little bit about every other project, because
orchestration integrates with everything.  This is an exciting challenge,
and provides a pretty broad perspective on issues such as API and client
consistency.

I am of the opinion that OpenStack should be inclusive by default and,
trademark considerations aside, should not be limited to a core of
components.  Instead I see OpenStack developing into an increasingly
powerful collection of abstractions, providing consistency for users and
flexibility for deployers.  If elected I would strive to work as an
advocate for new and recently graduated projects, seeking to identify
common problems and opportunities for improved integration.

The issues I would consider priorities were I to be elected are detailed
below, the common theme is basically better communication, efficiency and
reuse:

1. API consistency and reuse

I believe we're making great progress and improving API consistency but
there are still challenges and opportunities, primarily around improving
cross-project consistency, communication and reuse.  I see the TC's role
here as primarily one of facilitating and encouraging cross-project
discussion, providing direction and monitoring progress.

Closely related to this is encouraging projects to more pro-actively
collaborate via cross-project initiatives such as oslo.  Ultimately we all
benefit if we can reduce duplication of effort and collaborate on shared
solutions.

I believe that the TC should be providing clear leadership and direction
which encourages projects to avoid long-term fragmentation, as ultimately it
harms the user community (users, operators and deployers getting an
inconsistent experience) and developers (maintenance burden and lack of
knowledge transfer between projects).

Finally client API consistency should be improved, in particular working
towards common solutions for version discovery and common version-agnostic
interfaces which reduce the (IME considerable) pain users experience when
API versions change.

2. Mentoring of new projects

Having experienced the incubation process, and subsequent rapid growth of
the Heat project, I've got first-hand experience of the challenges
experienced by new projects seeking to become part of OpenStack.  There is
a lot of accumulated experience and knowledge in the community, and
in many cases this lore is not documented for new projects and
contributors.

I think the TC should strive to encourage more active mentoring of new
projects, by implementing a scheme where a representative from the
incubated project is paired with a person experienced with the area related
to a graduation requirement; over time this can lead to identifying areas of
common confusion, improved documentation, and hopefully engage new
contributors (and reviewers) to participate in components related to gate
integration and testing.

3. Review of current PTL model

After serving as the Heat PTL for Havana, I was left with the (apparently
not that uncommon) impression that there are aspects of the PTL role and
responsibilities which are sub-optimal for a diverse open-source project.

As such, I would propose a review of the current model, where a PTL's
primary function is release management and coordination, not really
technical leadership in many cases.

I would encourage consideration of a move to a subsystem maintainer
structure, similar to that used for the Linux kernel, where individual
projects and project sub-components are afforded more autonomy and less
granular management, in exchange for an increased requirement for
participation in automated gate integration testing.

There may be other alternative solutions, but I believe it's time to
initiate some discussion around what may be the most effective and
appropriate strategy for project release management and leadership in an
increasingly diverse and fast-moving environment.

Thanks for your consideration!

--
Steve Hardy
Red Hat Engineering, Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default paths in os-*-config projects

2014-04-15 Thread Petr Blaho
On Mon, Apr 14, 2014 at 06:30:59PM -0700, Clint Byrum wrote:
 Excerpts from Ben Nemec's message of 2014-04-14 15:41:23 -0700:
  Right now the os-*-config projects default to looking for their files in 
  /opt/stack, with an override env var provided for other locations.  For 
  packaging purposes it would be nice if they defaulted to a more 
  FHS-compliant location like /var/lib.  For devtest we could either 
  override the env var or simply install the appropriate files to /var/lib.
  
  This was discussed briefly in IRC and everyone seemed to be onboard with 
  the change, but Robert wanted to run it by the list before we make any 
  changes.  If anyone objects to changing the default, please reply here. 
I'll take silence as agreement with the move. :-)
  
 
 +1 from me for doing FHS compliance. :)
 
 /var/lib is not actually FHS compliant as it is for Variable state
 information. os-collect-config does have such things, and does use
 /var/lib. But os-refresh-config reads executables and os-apply-config
 reads templates, neither of which will ever be variable state
 information.
 
 /usr/share would be the right place, as it is Architecture independent
 data. I suppose if somebody wants to compile a C program as an o-r-c
 script we could rethink that, but I'd just suggest they drop it in a bin
 dir and exec it from a one line shell script in the /usr/share.
 
 So anyway, I suggest:
 
 /usr/share/os-apply-config/templates
 /usr/share/os-refresh-config/scripts

+1 for /usr/share/
 
 With the usual hierarchy underneath.
 
 We'll need to continue to support the non-FHS paths for at least a few
 releases as well.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Custom Resource

2014-04-15 Thread Rabi Mishra
 IIRC implementing something like this had been discussed quite a while back.
 I think we discussed the possibility of using web hooks and a defined
 api/payload in place of the SNS/SQS type stuff. I don't think it ever made
 it to the backlog, but I'd be happy to discuss further design and maybe add
 a design session to the summit if you're unable to make it.

Thanks. As suggested, I've added a design session for this. 

http://summit.openstack.org/cfp/details/308

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 02:04 PM, Steven Hardy wrote:
 Hi all!
 
 I'd like to announce my candidacy to serve as a TC member.
 
 
 About me:
 
 I'm a software engineer at Red Hat, and have been working full-time on
 OpenStack for the last two years.  I've been heavily involved with the Heat
 project since before it was incubated, served as PTL for the Havana cycle,
 and remain one of the top contributors to the project.
 
 During Icehouse, I've been trying to improve my knowledge and involvement
 with other projects, in particular contributing fixes to keystone and
 working to improve Heat's functional testing via tempest.
 
 Platform:
 
 Working on orchestration provides a unique view of OpenStack - it's
 essential to learn at least a little bit about every other project, because
 orchestration integrates with everything.  This is an exciting challenge,
 and provides a pretty broad perspective on issues such as API and client
 consistency.
 
 I am of the opinion that OpenStack should be inclusive by default and,
 trademark considerations aside, should not be limited to a core of
 components.  Instead I see OpenStack developing into an increasingly
 powerful collection of abstractions, providing consistency for users and
 flexibility for deployers.  If elected I would strive to work as an
 advocate for new and recently graduated projects, seeking to identify
 common problems and opportunities for improved integration.
 
 The issues I would consider priorities were I to be elected are detailed
 below, the common theme is basically better communication, efficiency and
 reuse:
 
 1. API consistency and reuse
 
 I believe we're making great progress and improving API consistency but
 there are still challenges and opportunities, primarily around improving
 cross-project consistency, communication and reuse.  I see the TC's role
 here as primarily one of facilitating and encouraging cross-project
 discussion, providing direction and monitoring progress.
 
 Closely related to this is encouraging projects to more pro-actively
 collaborate via cross-project initiatives such as oslo.  Ultimately we all
 benefit if we can reduce duplication of effort and collaborate on shared
 solutions.
 
 I believe that the TC should be providing clear leadership and direction
 which encourages projects to avoid long term fragmentation as ultimately it
 harms the user community (users, operators and deployers getting an
 inconsistent experience) and developers (maintenance burden and lack of
 knowledge transfer between projects).
 
 Finally, client API consistency should be improved, in particular working
 towards common solutions for version discovery and common version-agnostic
 interfaces which reduce the (IME considerable) pain users experience when
 API versions change.
 
 2. Mentoring of new projects
 
 Having experienced the incubation process, and subsequent rapid growth of
 the Heat project, I've got first-hand experience of the challenges
 experienced by new projects seeking to become part of OpenStack.  There is
 a lot of accumulated experience and knowledge in the community, and
 in many cases this lore is not documented for new projects and
 contributors.
 
 I think the TC should strive to encourage more active mentoring of new
 projects, by implementing a scheme where a representative from the
 incubated project is paired with a person experienced with the area related
 to a graduation requirement; over time this can lead to identifying areas of
 common confusion, improved documentation, and hopefully engage new
 contributors (and reviewers) to participate in components related to gate
 integration and testing.
 
 3. Review of current PTL model
 
 After serving as the Heat PTL for Havana, I was left with the (apparently
 not that uncommon) impression that there are aspects of the PTL role and
 responsibilities which are sub-optimal for a diverse open-source project.
 
 As such, I would propose a review of the current model, where a PTL's
 primary function is release management and coordination, not really
 technical leadership in many cases.
 
 I would encourage consideration of a move to a subsystem maintainer
 structure, similar to that used for the Linux kernel, where individual
 projects and project sub-components are afforded more autonomy and less
 granular management, in exchange for an increased requirement for
 participation in automated gate integration testing.
 
 There may be other alternative solutions, but I believe it's time to
 initiate some discussion around what may be the most effective and
 appropriate strategy for project release management and leadership in an
 increasingly diverse and fast-moving environment.
 
 Thanks for your consideration!
 
 --
 Steve Hardy
 Red Hat Engineering, Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




signature.asc

Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Petr Blaho
On Mon, Apr 14, 2014 at 07:24:57PM +0100, Chris Jones wrote:
 Hi
 
 Apart from special cases like the ramdisk's /init, which is a script that 
 needs
 to run in busybox's shell, everything should be using bash. There's no point 
 us
 tying ourselves in knots trying to achieve POSIX compliance for the sake of 
 it,
 when bashisms are super useful.

+1, especially for tying ourselves in knots trying to achieve POSIX compliance 
for
the sake of it
 
 Cheers,
 
 Chris
 
 
 On 14 April 2014 17:26, Ben Nemec openst...@nemebean.com wrote:
 
 tldr: I propose we use bash explicitly for all diskimage-builder scripts
 (at least for the short-term - see details below).
 
 This is something that was raised on my linting changes to enable set -o
 pipefail.  That is a bash-ism, so it could break in the diskimage-builder
 scripts that are run using /bin/sh.  Two possible fixes for that: switch 
 to
 /bin/bash, or don't use -o pipefail
 
 But I think this raises a bigger question - does diskimage-builder require
 bash?  If so, I think we should just add a rule to enforce that /bin/bash
 is the shell used for everything.  I know we have a bunch of bash-isms in
 the code already, so at least in the short-term I think this is probably
 the way to go, so we can get the benefits of things like -o pipefail and
 lose the ambiguity we have right now.  For reference, a quick grep of the
 diskimage-builder source shows we have 150 scripts using bash explicitly
 and only 24 that are plain sh, so making the code truly shell-agnostic is
 likely to be a significant amount of work.
 
 In the long run it might be nice to have cross-shell compatibility, but if
 we're going to do that I think we need a couple of things: 1) Someone to 
 do
 the work (I don't have a particular need to run dib in not-bash, so I'm 
 not
 signing up for that :-) 2) Testing in other shells - obviously just
 changing /bin/bash to /bin/sh doesn't mean we actually support anything 
 but
 bash.  We really need to be gating on other shells if we're going to make 
 a
 significant effort to support them.  It's not good to ask reviewers to try
 to catch every bash-ism proposed in a change.  This also relates to some 
 of
 the unit testing work that is going on right now too - if we had better
 unit test coverage of the scripts we would be able to do this more easily.
 
 Thoughts?
 
 Thanks.
 
 -Ben
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 --
 Cheers,
 
 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Petr Blaho, pbl...@redhat.com
Software Engineer

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-15 Thread Jay Pipes
Hi Stackers,

I've known OpenStack since it was a baby, and I've watched it grow to
the teenager it is today -- pimples and all. I hope to help guide it
along its way to adulthood by serving on the Technical Committee.

Besides developing on a number of core and ecosystem OpenStack projects,
I've also spent a couple years on the deployment and operator side of
the coin, which I think gives me a well-balanced perspective on some of
the hardships that go hand in hand with configuring, upgrading and
scaling a highly distributed piece of software.

I think it's pretty well-known that I live and breathe a utility cloud
world-view (all hail Cow29118281!). But hopefully it's also known that
I'm open to other viewpoints, welcome discussion on ideological
incongruence, and try to treat all folks with respect and dignity.

On the TC, I would be a strong voice for:

 * Newcomers to the developer and operator community -- I'd like to help
foster mentoring opportunities in the community and spread knowledge in
a viral way

 * Simplicity and consistency in our public REST APIs -- The APIs are
our public face, and when they are overly complex, inconsistent, or just
don't make sense, we turn possible newcomers away from OpenStack. We
should do everything we can to break down any barriers to entry to our
wonderful community and fantastic set of projects

Thanks for reading!

-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 02:18 PM, Jay Pipes wrote:
 Hi Stackers,
 
 I've known OpenStack since it was a baby, and I've watched it grow to
 the teenager it is today -- pimples and all. I hope to help guide it
 along its way to adulthood by serving on the Technical Committee.
 
 Besides developing on a number of core and ecosystem OpenStack projects,
 I've also spent a couple years on the deployment and operator side of
 the coin, which I think gives me a well-balanced perspective on some of
 the hardships that go hand in hand with configuring, upgrading and
 scaling a highly distributed piece of software.
 
 I think it's pretty well-known that I live and breathe a utility cloud
 world-view (all hail Cow29118281!). But hopefully it's also known that
 I'm open to other viewpoints, welcome discussion on ideological
 incongruence, and try to treat all folks with respect and dignity.
 
 On the TC, I would be a strong voice for:
 
  * Newcomers to the developer and operator community -- I'd like to help
 foster mentoring opportunities in the community and spread knowledge in
 a viral way
 
  * Simplicity and consistency in our public REST APIs -- The APIs are
 our public face, and when they are overly complex, inconsistent, or just
 don't make sense, we turn possible newcomers away from OpenStack. We
 should do everything we can to break down any barriers to entry to our
 wonderful community and fantastic set of projects
 
 Thanks for reading!
 
 -jay
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] starting regular meetings

2014-04-15 Thread Flavio Percoco

On 15/04/14 11:05 +0200, Julien Danjou wrote:

On Mon, Apr 14 2014, Doug Hellmann wrote:


Balancing Europe and Pacific TZs is going to be a challenge. I can't
go at 1800 or 1900, myself, and those are pushing a little late in
Europe anyway.

How about 1600?
http://www.timeanddate.com/worldclock/converted.html?iso=20140414T16p1=0p2=2133p3=195p4=224


Yeah, that should work for me most of the time.


Idem, it should work most of the time!

--
@flaper87
Flavio Percoco


pgpFsw1qjmlx5.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Ryan Brady
- Original Message -
 From: Chris Jones c...@tenshu.net
 To: openst...@nemebean.com, OpenStack Development Mailing List (not for 
 usage questions)
 openstack-dev@lists.openstack.org
 Sent: Monday, April 14, 2014 2:24:57 PM
 Subject: Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh
 
 Hi
 
 Apart from special cases like the ramdisk's /init, which is a script that
 needs to run in busybox's shell, everything should be using bash. There's no
 point us tying ourselves in knots trying to achieve POSIX compliance for the
 sake of it, when bashisms are super useful.

+1 for the pragmatic approach.

 
 Cheers,
 
 Chris
 
 
 On 14 April 2014 17:26, Ben Nemec  openst...@nemebean.com  wrote:
 
 
 tldr: I propose we use bash explicitly for all diskimage-builder scripts (at
 least for the short-term - see details below).
 
 This is something that was raised on my linting changes to enable set -o
 pipefail. That is a bash-ism, so it could break in the diskimage-builder
 scripts that are run using /bin/sh. Two possible fixes for that: switch to
 /bin/bash, or don't use -o pipefail
 
 But I think this raises a bigger question - does diskimage-builder require
 bash? If so, I think we should just add a rule to enforce that /bin/bash is
 the shell used for everything. I know we have a bunch of bash-isms in the
 code already, so at least in the short-term I think this is probably the way
 to go, so we can get the benefits of things like -o pipefail and lose the
 ambiguity we have right now. For reference, a quick grep of the
 diskimage-builder source shows we have 150 scripts using bash explicitly and
 only 24 that are plain sh, so making the code truly shell-agnostic is likely
 to be a significant amount of work.
 
 In the long run it might be nice to have cross-shell compatibility, but if
 we're going to do that I think we need a couple of things: 1) Someone to do
 the work (I don't have a particular need to run dib in not-bash, so I'm not
 signing up for that :-) 2) Testing in other shells - obviously just changing
 /bin/bash to /bin/sh doesn't mean we actually support anything but bash. We
 really need to be gating on other shells if we're going to make a
 significant effort to support them. It's not good to ask reviewers to try to
 catch every bash-ism proposed in a change. This also relates to some of the
 unit testing work that is going on right now too - if we had better unit
 test coverage of the scripts we would be able to do this more easily.
 
 Thoughts?
 
 Thanks.
 
 -Ben
 
 __ _
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack. org
 http://lists.openstack.org/ cgi-bin/mailman/listinfo/ openstack-dev
 
 
 
 --
 Cheers,
 
 Chris
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases. Data from Operators needed.

2014-04-15 Thread Eugene Nikanorov
Thanks a lot, folks!

Eugene.


On Tue, Apr 15, 2014 at 2:55 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 I've also added our L7 feature usage data on a new tab.


 On Mon, Apr 14, 2014 at 3:03 PM, Prashanth Hari hvpr...@gmail.com wrote:

 Hi,

 We have updated the operators data -
 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=1

 Please note the percentage is based on the number of VIPs. The traffic
 distribution (connections / sec) will vary by service.

 Thanks,
 Prashanth
 Comcast




  On Wed, Apr 9, 2014 at 11:28 AM, Susanne Balle sleipnir...@gmail.com wrote:

 Hi



  I wasn't able to get % for the spreadsheet but our Product Manager
 prioritized the features:



 *Function*

 *Priority (0 = highest)*

 *HTTP+HTTPS on one device*

 5

 *L7 Switching*

 2

 *SSL Offloading*

 1

 *High Availability*

 0

  *IP4 & IPV6 Address Support*

 6

 *Server Name Indication (SNI) Support*

 3

 *UDP Protocol*

 7

 *Round Robin Algorithm*

 4



  Susanne


 On Thu, Apr 3, 2014 at 9:32 AM, Vijay Venkatachalam 
 vijay.venkatacha...@citrix.com wrote:



  The document has a "Vendor" column; should it instead be
  "Cloud Operator"?



 Thanks,

 Vijay V.





 *From:* Eugene Nikanorov [mailto:enikano...@mirantis.com]
 *Sent:* Thursday, April 3, 2014 11:23 AM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Load balancing use
 cases. Data from Operators needed.



 Stephen,



 Agree with you. Basically the page starts looking as requirements page.

 I think we need to move to google spreadsheet, where table is organized
 easily.

 Here's the doc that may do a better job for us:


 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWcusp=sharing



 Thanks,

 Eugene.



 On Thu, Apr 3, 2014 at 5:34 AM, Prashanth Hari hvpr...@gmail.com
 wrote:

  More additions to the use cases (
 https://wiki.openstack.org/wiki/Neutron/LBaaS/Usecases).

 I have updated some of the features we are interested in.







 Thanks,

 Prashanth





 On Wed, Apr 2, 2014 at 8:12 PM, Stephen Balukoff sbaluk...@bluebox.net
 wrote:

  Hi y'all--



 Looking at the data in the page already, it looks more like a feature
 wishlist than actual usage data. I thought we agreed to provide data based
 on percentage usage of a given feature, the end result of the data
 collection being that it would become more obvious which features are the
 most relevant to the most users, and therefore are more worthwhile targets
 for software development.



 Specifically, I was expecting to see something like the following
 (using hypothetical numbers of course, and where technical people from
 Company A  etc. fill out the data for their organization):



 == L7 features ==



 Company A (Cloud operator serving external customers): 56% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 92% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 0% of
 load-balancer instances use



 == SSL termination ==



 Company A (Cloud operator serving external customers): 95% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 20% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 50% of
 load-balancer instances use.



 == Racing stripes ==



 Company A (Cloud operator serving external customers): 100% of
 load-balancer instances use

 Company B (Cloud operator serving external customers): 100% of
 load-balancer instances use

 Company C (Fortune 100 company serving internal customers): 100% of
 load-balancer instances use





 In my mind, a wish-list of features is only going to be relevant to
 this discussion if (after we agree on what the items under consideration
 ought to be) each technical representative presents a prioritized list for
 their organization. :/ A wish-list is great for brain-storming what ought
 to be added, but is less relevant for prioritization.



 In light of last week's meeting, it seems useful to list the features
 most recently discussed in that meeting and on the mailing list as being
 points on which we want to gather actual usage data (ie. from what people
 are actually using on the load balancers in their organization right now).
 Should we start a new page that lists actual usage percentages, or just
 re-vamp the one above?  (After all, wish-list can be useful for discovering
 things we're missing, especially if we get people new to the discussion to
 add their $0.02.)



 Thanks,

 Stephen







 On Wed, Apr 2, 2014 at 3:46 PM, Jorge Miramontes 
 jorge.miramon...@rackspace.com wrote:

  Thanks Eugene,



 I added our data onto the requirements page since I was hoping to
 prioritize requirements based on the operator data that gets provided. We
 can move it over to the other page if you think that makes sense. See
 everyone 

Re: [openstack-dev] [heat][nova]dynamic scheduling

2014-04-15 Thread Jiangying (Jenny)
Sorry, I’m not quite clear about it yet.
I’m trying to find a way that heat controls the flow but not the nova scheduler.

From: Henrique Truta [mailto:henriquecostatr...@gmail.com]
Sent: 14 April 2014 21:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat][nova]dynamic scheduling

Hello!
I'm currently investigating both of these features you have mentioned, 
specifically in the NEAT[1] and GANTT[2] projects, as you might have seen in
last week's discussion.
Do you have any further ideas about how and why this would work with Heat?
Thanks,
Henrique

[1] http://openstack-neat.org/
[2] https://github.com/openstack/gantt

2014-04-13 22:53 GMT-03:00 Jiangying (Jenny) 
jenny.jiangy...@huawei.com:
Hi,
there has been a heated discussion about dynamic scheduling last
week (http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg21644.html).
I am also interested in this topic. We believe that dynamic scheduling consists 
of two parts: balancing computing capacity and optimizing power consumption.
For balancing computing capacity, the ceilometer periodically monitors 
distribution and usage of CPU and memory resources for hosts and virtual 
machines. Based on the information, the scheduler calculates the current system 
standard deviation metric and determines the system imbalance by comparing it 
to the target. To resolve the imbalance, the scheduler gives the suitable 
virtual machine migration suggestions to nova. In this way, the dynamic 
scheduling achieves higher consolidation ratios and delivers optimized 
performance for the virtual machines.
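
As a rough illustration of the imbalance check described above, a minimal sketch
(Python; purely illustrative, with an assumed utilisation range and threshold
rather than anything taken from Nova or Ceilometer) could be:

    import math

    def is_imbalanced(host_cpu_utils, target_stddev=0.1):
        """host_cpu_utils: per-host CPU utilisation values in [0.0, 1.0]."""
        n = len(host_cpu_utils)
        mean = sum(host_cpu_utils) / n
        stddev = math.sqrt(sum((u - mean) ** 2 for u in host_cpu_utils) / n)
        # The system is considered imbalanced when the spread of utilisation
        # across hosts exceeds the configured target.
        return stddev > target_stddev

    # Example: one hot host among mostly idle ones is flagged as imbalanced,
    # which would then produce migration suggestions for nova.
    assert is_imbalanced([0.9, 0.2, 0.15, 0.1]) is True
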
For optimizing power consumption, we attempt to keep the resource utilization 
of each host within a specified target range. The scheduler evaluates if the 
goal can be reached by balancing the system workloads. If the resource 
utilization of a host remains below the target, the scheduler calls nova to 
power off some hosts. Conversely the scheduler powers on hosts to absorb the 
additional workloads. Thus optimizing power consumption offers an optimum mix 
of resource availability and power savings.
As Chen CH Ji said, “nova is a cloud solution that aim to control virtual / 
real machine lifecycle management the dynamic scheduling mechanism is something 
like optimization of the cloud resource”. We think implementing the dynamic 
scheduling with heat may be a good attempt.
Do you have any comments?
Thanks,
Jenny



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Payload within RabbitMQ messages for Nova related exchanges

2014-04-15 Thread George Monday
Hey there,

I've got a quick question about the RabbitMQ exchanges. We are writing
listeners
for the RabbitMQ exchanges. The basic information about the tasks like
compute.instance.create.[start|stop] etc. as stored in the 'payload'
attribute of the
json message are my concern at the moment.

Does this follow a certain predefined structure that's consistent for the
lifetime of, say,
a specific nova api version? Will this change in major releases (from
havana to icehouse)?
Is this subject to change without notice? Is there a definition available
somewhere? Like for
the api versions?

In short, how reliable is the json structure of the payload attribute in a
rabbitMQ message?

We just want to make sure, that with an update to the OpenStack controller,
we wouldn't
break our listeners?

My Best,
George
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] magnetodb 2.0.2 released

2014-04-15 Thread Ilya Sviridov
Hello openstackers,

MagnetoDB community is proud to announce the release
of magnetodb-2.0.2 milestone.

It is publicly available here
https://pypi.python.org/pypi/magnetodb/2.0.2
http://tarballs.openstack.org/magnetodb/

The version contains following features and major fixes
+ devstack integration
+ devstack gate for MagnetoDB has been introduced and added to development
process
+ BatchWrite API implemented
+ Error handling API defined and implemented
+ tempest coverage improvement
* redesign index implementation for better performance

More details can be found here
https://launchpad.net/magnetodb/2.0/2.0.2

--
MagnetoDB community
#magnetodb at FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Migrations, service plugins and Grenade jobs

2014-04-15 Thread Kyle Mestery
On Tue, Apr 15, 2014 at 7:03 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Thanks Anna.

 I've been following the issue so far, but I am happy to hand it over to you.
 I think the problem assessment is complete, but if you have more questions
 ping me on IRC.

 Regarding the solution, I think we already have a fairly wide consensus on
 the approach.
 There are however a few details to discuss:
 - Conflicting schemas. For instance two migrations for two distinct plugins
 might create tables with the same name but different columns.
   We first need to look at existing migrations to verify where this
 condition occurs, and then study a solution case by case.
 - Logic for corrective migrations. For instance a corrective migration for
 569e98a8132b_metering is needed. However, such corrective migration should
 have logic for understanding whether the original migration has been
 executed or not.
 - Corrective actions for corrupted schemas. This would be the case, for
 instance, of somebody who enables metering while the database is at a
 migration rev higher than the one when metering was introduced.
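
For the corrective-migration point above, a hedged sketch of what such a
migration could look like (an alembic upgrade that first checks whether the
table already exists; the table and column names are purely illustrative, not
the actual metering schema):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        bind = op.get_bind()
        inspector = sa.inspect(bind)
        # Only create the table if an earlier, conditionally-skipped plugin
        # migration did not already create it.
        if 'meteringlabels' not in inspector.get_table_names():
            op.create_table(
                'meteringlabels',
                sa.Column('id', sa.String(36), primary_key=True),
                sa.Column('name', sa.String(255)),
                sa.Column('description', sa.String(1024)),
            )

    def downgrade():
        pass  # corrective migrations typically have nothing to undo
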

 I reckon it might be the case of putting together a specification and push
 it to the newly created neutron-specs repo, assuming that we feel confident
 enough to start using this new process (Kyle and Mark might chime in on this
 point). Also, I would like to see this work completed by Juno-1, which I
 reckon is a reasonable target.

I'm working to get this new specification approval process ready,
hopefully by later
today. Once this is done, I agree with Salvatore, pushing a gerrit
review with the
specification for this work will be the right approach.

 Of course I'm available for discussing design, implementation, reviewing and
 writing code.

Thanks to Anna and Salvatore for taking this up!

Kyle

 Salvatore



 On 15 April 2014 12:44, Anna Kamyshnikova akamyshnik...@mirantis.com
 wrote:

 Hello everyone!

 I would like to try to solve this problem. I registered blueprint on this
 topic
 https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations
 and I'm going to experiment with options to solve this. I'm welcome any
 suggestions and ready to talk about it via IRC (akamyshnikova).

 Regards
 Ann


 On Mon, Apr 14, 2014 at 9:39 PM, Sean Dague s...@dague.net wrote:

 On 04/14/2014 12:46 PM, Eugene Nikanorov wrote:
  Hi Salvatore,
 
  The described problem could be even worse if vendor drivers are
  considered.
  Doesn't #1 require that all DB tables are named differently? Otherwise
  it seems that user can't be sure in DB schema even if all tables are
  present.
 
  I think the big part of the problem is that we need to support both
  online and offline migrations. Without the latter things could be a
  little bit simpler.
 
  Also it seems to me that problem statement should be changed to the
  following:
  One need to migrate from (Config1, MigrationID1) to (Config2,
  MigrationID2), and currently our code only accounts for MigrationIDs.
  We may consider amending DB with configuration metadata, at least that
  will allow to run migration code with full knowledge of what happened
  before (if online mode is considered).
  In offline mode that will require providing old and current
  configurations.
 
  That was just thinking aloud, no concrete proposals yet.

 The root issue really is Migrations *must* be global, and config
 invariant. That's the design point in both sqlalchemy-migrate and
 alembic. The fact that there is one global migrations table per
 database, with a single value in it, is indicative of this fact.

 I think that design point got lost somewhere along the way, and folks
 assumed migrations were just a way to change schemas. They are much more
 constrained than that.

 It does also sound like the data model is going to need some serious
 reconsidering given what's allowed to be changed at the plugin or vendor
  driver model. Contrast this with Nova, where virt drivers don't get to
  define persistent data that's unique to them (only generic data that
 they fit into the grander nova model).

 The one time we had a driver which needed persistent data (baremetal) it
  managed its own database entirely.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Payload within RabbitMQ messages for Nova related exchanges

2014-04-15 Thread Russell Bryant
On 04/15/2014 09:07 AM, George Monday wrote:
 Hey there,
 
 I've got a quick question about the RabbitMQ exchanges. We are writing
 listeners
 for the RabbitMQ exchanges. The basic information about the tasks like
 compute.instance.create.[start|stop] etc. as stored in the 'payload'
 attribute of the
 json message are my concern at the moment.
 
 Does this follow a certain predefined structure that's consistent for
 the lifetime of, say,
 a specific nova api version? Will this change in major releases (from
 havana to icehouse)?
 Is this subject to change without notice? Is there a definition
 available somewhere? Like for
 the api versions?
 
 In short, how reliable is the json structure of the payload attribute in
 a rabbitMQ message?
 
 We just want to make sure, that with an update to the OpenStack
 controller, we wouldn't
 break our listeners?

First, we're talking specifically about notifications.  Nova also uses
messages between services, and those messages are considered private
internal implementation details.

The notifications are of course intended for consumption by other apps.
 We currently try to make sure that all changes to the body of these
messages are backwards compatible.  You should be safe within a version,
but as usual, please watch the release notes for changes.
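
For illustration, a minimal listener sketch (kombu, Python) that consumes these
notifications could look like the following. The broker URL, the 'nova' topic
exchange, the 'notifications.info' routing key and the envelope handling are
common defaults but deployment-specific assumptions, not guaranteed values:

    import json
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('nova', type='topic', durable=False)
    queue = Queue('my-notification-listener', exchange=exchange,
                  routing_key='notifications.info')

    def on_message(body, message):
        # Some deployments wrap the notification in an envelope whose
        # 'oslo.message' field is itself a JSON-encoded string.
        if isinstance(body, dict) and 'oslo.message' in body:
            body = json.loads(body['oslo.message'])
        event_type = body.get('event_type')  # e.g. compute.instance.create.start
        payload = body.get('payload', {})    # the structure discussed above
        print(event_type, payload.get('instance_id'))
        message.ack()

    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()
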

At some point we really need to do something about the body of
notifications so that they're properly versioned.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Default paths in os-*-config projects

2014-04-15 Thread Jay Dobies



On 04/14/2014 09:30 PM, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2014-04-14 15:41:23 -0700:

Right now the os-*-config projects default to looking for their files in
/opt/stack, with an override env var provided for other locations.  For
packaging purposes it would be nice if they defaulted to a more
FHS-compliant location like /var/lib.  For devtest we could either
override the env var or simply install the appropriate files to /var/lib.

This was discussed briefly in IRC and everyone seemed to be onboard with
the change, but Robert wanted to run it by the list before we make any
changes.  If anyone objects to changing the default, please reply here.
   I'll take silence as agreement with the move. :-)



+1 from me for doing FHS compliance. :)

/var/lib is not actually FHS compliant as it is for Variable state
information. os-collect-config does have such things, and does use
/var/lib. But os-refresh-config reads executables and os-apply-config
reads templates, neither of which will ever be variable state
information.

/usr/share would be the right place, as it is Architecture independent
data. I suppose if somebody wants to compile a C program as an o-r-c
script we could rethink that, but I'd just suggest they drop it in a bin
dir and exec it from a one line shell script in the /usr/share.

So anyway, I suggest:

/usr/share/os-apply-config/templates
/usr/share/os-refresh-config/scripts


+1

This would have been my suggestion too if we were moving out of /opt. 
I've gotten yelled at in the past for not using this in these sorts of 
cases :)



With the usual hierarchy underneath.

We'll need to continue to support the non-FHS paths for at least a few
releases as well.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Jay Dobies
+1 to using bash; the argument about not keeping POSIX compliance for 
the sake of it makes sense to me.


On 04/15/2014 07:31 AM, Ghe Rivero wrote:

+1 to use bash as the default shell. So far, all major distros use bash
as the default one (except Debian which uses dash).
And about rewriting the code in Python, I agree that shell is complicated
for large programs, but writing anything command-oriented in anything other
than shell is a nightmare. But there are some parts that could benefit from that.

Ghe Rivero

On 04/15/2014 11:05 AM, Chris Jones wrote:

Hi

On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com
mailto:berra...@redhat.com wrote:

I suppose that rewriting the code to be in Python is out of the
question ?  IMHO shell is just a terrible language for doing any
program that is remotely complicated (ie longer than 10 lines of


I don't think it's out of the question - where something makes sense
to switch to Python, that would seem like a worthwhile thing to be
doing. I do think it's a different question though - we can quickly
flip things from /bin/sh to /bin/bash without affecting their
suitability for replacement with python.

--
Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Payload within RabbitMQ messages for Nova related exchanges

2014-04-15 Thread Sandy Walsh


On 04/15/2014 10:07 AM, George Monday wrote:
 Hey there,
 
 I've got a quick question about the RabbitMQ exchanges. We are writing
 listeners
 for the RabbitMQ exchanges. The basic information about the tasks like
 compute.instance.create.[start|stop] etc. as stored in the 'payload'
 attribute of the
 json message are my concern at the moment.
 
 Does this follow a certain predefined structure that's consistent for
 the lifetime of, say,
 a specific nova api version? Will this change in major releases (from
 havana to icehouse)?
 Is this subject to change without notice? Is there a definition
 available somewhere? Like for
 the api versions?
 
 In short, how reliable is the json structure of the payload attribute in
 a rabbitMQ message?
 
 We just want to make sure, that with an update to the OpenStack
 controller, we wouldn't
 break our listeners?

Hey George,

Most of the notifications are documented here
https://wiki.openstack.org/wiki/SystemUsageData

But, you're correct that there is no versioning on these currently, but
there are some efforts to fix this (specifically around CADF-support)

Here's some more info on notifications if you're interested:
http://www.sandywalsh.com/2013/09/notification-usage-in-openstack-report.html

Hope it helps!
-S





 My Best,
 George
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Nova][VMWare] Enable live migration with one nova compute

2014-04-15 Thread Jay Pipes
Hi again, Divakar, sorry for the delayed response!

On Wed, 2014-04-09 at 14:52 +, Nandavar, Divakar Padiyar wrote:
 Hi Jay, Managing multiple clusters using the Compute Proxy is not new
 right? Prior to this nova baremetal driver has used this model
 already.

Yes, unfortunately. However, nova-baremetal has moved to the Ironic
project, which has its own scheduler that is built to deal with the
problem of having a single compute manager responsible for more than one set of
compute resources.

 Also this Proxy Compute model gives flexibility to deploy as many
 computes required based on the requirement. For example, one can
 setup one proxy compute node to manage a set of clusters and another
 proxy compute to manage a separate set of clusters or launch compute
 node for each of the clusters.

It actually reduces the flexibility of Nova in two main ways:

1) It breaks the horizontal scale-out model that exists when a single
nova-compute is responsible for a wholly-separate group of compute
resources. Under the proxy compute model, if the proxy compute goes
down, the whole cluster goes down.

2) It surrenders control of the VMs to an external system and in doing
so, must constantly poll this external system in order to know the state
of the externally-controlled resources. We do this already at the
hypervisor driver level. The source of truth for VM state is the
hypervisor, and nova-compute must periodically query the hypervisor for
live data about the VMs running on the hypervisor. The Nova scheduler
queries nova-compute workers for information about the VMs running on
the host it controls. For the proxy-compute model, the Nova scheduler
queries proxy-computes, which then must query some externally-controlled
management platform, and then aggregate this data back up to the Nova
scheduler (a job that the Nova scheduler is designed to do, not the
nova-compute worker). This additional layer of complexity does not allow
greater flexibility -- it instead limits the flexibility of Nova, since
it changes the definition and responsibility of two of its major
components: nova-compute and nova-scheduler.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Payload within RabbitMQ messages for Nova related exchanges

2014-04-15 Thread George Monday
Hey there,

thanks for the input.

@Russell

My bad, sorry. Yes I was talking about notifications.

@Sandy

I'll have a look into the links provided. Thanks.

I guess we can consider this closed.

Cheers,
George


On Tue, Apr 15, 2014 at 3:40 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:



 On 04/15/2014 10:07 AM, George Monday wrote:
  Hey there,
 
  I've got a quick question about the RabbitMQ exchanges. We are writing
  listeners
  for the RabbitMQ exchanges. The basic information about the tasks like
  compute.instance.create.[start|stop] etc. as stored in the 'payload'
  attribute of the
  json message are my concern at the moment.
 
  Does this follow a certain predefined structure that's consistent for
  the lifetime of, say,
  a specific nova api version? Will this change in major releases (from
  havana to icehouse)?
  Is this subject to change without notice? Is there a definition
  available somewhere? Like for
  the api versions?
 
  In short, how reliable is the json structure of the payload attribute in
  a rabbitMQ message?
 
  We just want to make sure, that with an update to the OpenStack
  controller, we wouldn't
  break our listeners?

 Hey George,

 Most of the notifications are documented here
 https://wiki.openstack.org/wiki/SystemUsageData

 But, you're correct that there is no versioning on these currently, but
 there are some efforts to fix this (specifically around CADF-support)

 Here's some more info on notifications if you're interested:

 http://www.sandywalsh.com/2013/09/notification-usage-in-openstack-report.html

 Hope it helps!
 -S





  My Best,
  George
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-15 Thread John Garbutt
Hi.

I would like to announce my TC candidacy.

I work full time as a Software Developer on OpenStack at Rackspace,
part of the team working on Rackspace Cloud Servers, Rackspace's
public cloud. I am a Nova core reviewer, a member of nova-drivers,
leader of the XenAPI Nova sub-team, and the Nova blueprint czar.

I have had a wide variety of experience around OpenStack that will
help me to represent the views of the whole OpenStack community. I
first started working with OpenStack back in late 2010, building a
private cloud distribution at Citrix. Later I moved to working full
time on improving how XenServer integrates with OpenStack, and created
the XenAPI Nova sub-team. I am currently focusing on ensuring
OpenStack continues to perform well when running at the massive scale
needed for the Rackspace public cloud.

Throughout my career I have been a passionate advocate of getting
developers thinking about the people who use (and deploy) the software
they create. If elected to the TC, I would look at what can be done to
improve the relationship between OpenStack's users and developers.
Nova's reform of blueprint reviews is a great example of things we can
try to help users to influence the direction developers choose to
take.

Above all, as always, I will be working hard to ensure OpenStack
continues to be same awesome community that made me excited to start
work this morning, and welcomed me into the world of open source
development and cloud computing back in late 2010.

Thanks for reading!

John

IRC:johnthetubaguy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Changing to external events

2014-04-15 Thread Day, Phil
Hi Folks,

Sorry for being a tad slow on the uptake here, but I'm trying to understand the 
sequence of updates required to move from a system that doesn't have external 
events configured between Neutron and Nova to one that does (the new 
nova-specs repo would have captured this as part of the BP ;-)

So assuming I start from the model where neither service is using events, as I 
understand it


-  As soon as I deploy the modified Nova code the compute manager will 
start waiting for events when plugging VIFs. By default it will wait for 600 
seconds and then fail (because neutron can't deliver the event). Because 
vif_plugging_is_fatal=True by default this will mean all instance creates will 
fail - so that doesn't seem like a good first move (and maybe not the best set 
of defaults)


-  If I modify Neutron first so that it now sends the events, but Nova 
isn't yet updated to expose the new API extension, then Neutron will fail 
instead because it can't send the event (I can't find the corresponding neutron 
patch references in the BP page 
https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api - so if 
someone could point me at that it would be helpful).So unless Neutron can 
cope with this that doesn't feel like a good first move either.



If feels like the right sequence is:

-  Deploy the new code in Nova and at the same time set 
vif_plugging_is_fatal=False, so that Nova will wait for Neutron, but will still 
continue if the event never turns up (which is kind of like the code was 
before, but with a wait)

-  At the same time enable the new API extension in Nova so that Nova 
can consume events

-  Then update Neutron (with presumably some additional config) so that 
it starts sending events

Is that right, and any reason why the default for vif_plugging_is_fatal 
shouldn't be False instead of True to make this sequence less dependent on 
matching config changes?

Phil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 04:03 PM, John Garbutt wrote:
 Hi.
 
 I would like to announce my TC candidacy.
 
 I work full time as a Software Developer on OpenStack at Rackspace,
 part of the team working on Rackspace Cloud Servers, Rackspace's
 public cloud. I am a Nova core reviewer, a member of nova-drivers,
 leader of the XenAPI Nova sub-team, and the Nova blueprint czar.
 
 I have had a wide variety of experience around OpenStack that will
 help me to represent the views of the whole OpenStack community. I
 first started working with OpenStack back in late 2010, building a
 private cloud distribution at Citrix. Later I moved to working full
 time on improving how XenServer integrates with OpenStack, and created
 the XenAPI Nova sub-team. I am currently focusing on ensuring
 OpenStack continues to perform well when running at the massive scale
 needed for the Rackspace public cloud.
 
 Throughout my career I have been a passionate advocate of getting
 developers thinking about the people who use (and deploy) the software
 they create. If elected to the TC, I would look at what can be done to
 improve the relationship between OpenStack's users and developers.
 Nova's reform of blueprint reviews is a great example of things we can
 try to help users to influence the direction developers choose to
 take.
 
 Above all, as always, I will be working hard to ensure OpenStack
 continues to be same awesome community that made me excited to start
 work this morning, and welcomed me into the world of open source
 development and cloud computing back in late 2010.
 
 Thanks for reading!
 
 John
 
 IRC:johnthetubaguy
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Changing to external events

2014-04-15 Thread Dan Smith
 If feels like the right sequence is:
 
 -  Deploy the new code in Nova and at the same time set
 vif_plugging_is-fatal=False, so that Nova will wait for Neutron, but
 will still continue if the event never turns up (which is kind of like
 the code was before, but with a wait)

Yes, but set the timeout=0. This will cause nova to gracefully accept
the events when they start showing up, but the computes will not wait
for them.

 -  Then update Neutron (with presumably some additional config)
 so that it starts sending events

Right, after that, set fatal=True and timeout=300 (or whatever) and
you're good to go.
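
Spelling that out as config, the staged rollout sketched above would roughly be
(option names as discussed in this thread; the values are the interim and final
settings being suggested here, not mandated defaults):

    # Step 1: Nova upgraded, Neutron not yet emitting events
    [DEFAULT]
    vif_plugging_is_fatal = False
    vif_plugging_timeout = 0

    # Step 2: Neutron upgraded and sending events
    [DEFAULT]
    vif_plugging_is_fatal = True
    vif_plugging_timeout = 300
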

 Is that right, and any reason why the default for vif_plugging_is_fatal
 shouldn’t be False instead of True to make this sequence less dependent
 on matching config changes ?

Yes, because the right approach to a new deployment is to have this
enabled. If it was disabled by default, most deployments would never
turn it on.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Infra] [3rd party testing] Atlanta meetup for 3rd party testing?

2014-04-15 Thread Kurt Taylor


Sorry if you get this twice.

Since summit is approaching quickly, I wanted to see if anyone had interest
in forming a meetup for 3rd party testing. This would be a group for
helping the project cores by helping ourselves and hopefully improving
overall CI rollout.

Some topics to discuss:  I'd at least like to get a standard tag that would
help with email filtering, but we could also build a FAQ and documentation
contributions to improve the content of the 3rd party testing. We could
also document different use cases and best practices for each on how
different testers solved problems they encountered for their environment. I
don't know how long we would be able to meet, so this may just be an
organizational meeting, focusing on how to best share this info.

I started an etherpad for discussion topics and eventual agenda here:
https://etherpad.openstack.org/p/3rdPartyTesting

I have not worked out all the details for when and where to meet, but I
would be happy to set it up and facilitate the discussion. I wanted to see
if there was any interest before I took it any further.

Any interest?

Kurt Taylor (krtaylor)
OpenStack Development Lead - PowerKVM CI
IBM Linux Technology Center

___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] starting regular meetings

2014-04-15 Thread Mark McLoughlin
On Mon, 2014-04-14 at 14:53 -0400, Doug Hellmann wrote:
 Balancing Europe and Pacific TZs is going to be a challenge. I can't
 go at 1800 or 1900, myself, and those are pushing a little late in
 Europe anyway.
 
 How about 1600?
 http://www.timeanddate.com/worldclock/converted.html?iso=20140414T16p1=0p2=2133p3=195p4=224
 
 We would need to move to another room, but that's not a big deal.

Works for me.

Thanks,
Mark.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Efficient image cloning implementation in NetApp nfs drivers // make this part of base NFS driver

2014-04-15 Thread Kerr, Andrew
The NetApp driver uses NetApp-specific API calls to implement the actual
cloning of the file.  You could probably generalize it by keeping a cached
image file on the destination share for future copies, and then implementing
a standard copy-file method that could be overloaded by individual drivers.
If you implement the image cache, though, you would also need a cache
scrubber that comes around and cleans up the cached files once you reach
(user-)defined thresholds.
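
A rough, driver-agnostic sketch of that idea (all names are hypothetical; this
is not the NetApp driver's actual code or API):

    import os
    import shutil

    def clone_image_to_volume(image_id, volume_path, share_mount, fetch_image_fn):
        """Copy an image onto an NFS-backed volume, caching it on the share."""
        cache_path = os.path.join(share_mount, 'img-cache-%s' % image_id)
        if not os.path.exists(cache_path):
            # First request for this image on this share: fetch and cache it.
            fetch_image_fn(image_id, cache_path)
        # A vendor driver would override this with an efficient backend-side
        # clone; the generic fallback is a plain file copy.
        shutil.copyfile(cache_path, volume_path)

    # A periodic cache scrubber would separately evict cached images once
    # (user-)defined size or age thresholds are reached.
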

Andrew Kerr
OpenStack QA
Cloud Solutions Group
NetApp


From:  Luohao   (brian) brian.luo...@huawei.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Monday, April 14, 2014 at 2:17 AM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] Efficient image cloning implementation in
NetApp nfs drivers // make this part of base NFS driver


Nice idea.
 
Actually, fast image cloning has been widely supported by most NAS
devices, and VMware VAAI also started to require this criteria many years
ago.
 
However, I am not quite sure what exactly needs to go into the base NFS
driver; in any case, the fast cloning API will vary between vendors.
 
-Hao

 
From: Nilesh P Bhosale [mailto:nilesh.bhos...@in.ibm.com]

Sent: Monday, April 14, 2014 1:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Efficient image cloning implementation in NetApp
nfs drivers // make this part of base NFS driver

 
Hi All,

I was going through the following blueprint, which NetApp proposed and
implemented in its driver (NetAppNFSDriver -
cinder/volume/drivers/netapp/nfs.py) a while back (change
https://review.openstack.org/#/c/41868/):
https://blueprints.launchpad.net/cinder/+spec/netapp-cinder-nfs-image-cloni
ng

It looks quite an interesting and valuable feature for the end customers.
Can we make it part of the base NfsDriver (cinder/volume/drivers/nfs.py)?
so that the customers using the base NFS driver can benefit and also other
drivers inheriting from
 this base NFS driver (e.g. IBMNAS_NFSDriver, NexentaNfsDriver) can also
benefit.

Please let me know your valuable opinion.
I can start a blueprint for the Juno release.

Thanks,
Nilesh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] Atlanta meetup for 3rd party testing?

2014-04-15 Thread Anita Kuno
On 04/15/2014 10:20 AM, Kurt Taylor wrote:
 
 
 Sorry if you get this twice.
 
 Since summit is approaching quickly, I wanted to see if anyone had interest
 in forming a meetup for 3rd party testing. This would be a group for
 helping the project cores by helping ourselves and hopefully improving
 overall CI rollout.
 
 Some topics to discuss:  I'd at least like to get a standard tag that would
 help with email filtering, but we could also build a FAQ and documentation
 contributions to improve the content of the 3rd party testing. We could
 also document different use cases and best practices for each on how
 different testers solved problems they encountered for their environment. I
 don't know how long we would be able to meet, so this may just be an
 organizational meeting, focusing on how to best share this info.
 
 I started an etherpad for discussion topics and eventual agenda here:
 https://etherpad.openstack.org/p/3rdPartyTesting
 
 I have not worked out all the details for when and where to meet, but I
 would be happy to set it up and facilitate the discussion. I wanted to see
 if there was any interest before I took it any further.
 
 Any interest?
 
 Kurt Taylor (krtaylor)
 OpenStack Development Lead - PowerKVM CI
 IBM Linux Technology Center
 
 ___
 OpenStack-Infra mailing list
 openstack-in...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
There is a summit design session proposed to address this:
http://summit.openstack.org/cfp/details/59

Entitled: Improving Third Party Testing

If you would like to add the information you would like to cover as a
comment to the proposal that would help ensure the content gets covered.

Since the purpose of the design summit is to cover material like this, I
encourage people to get familiar with the summit sessions that are
already proposed: http://summit.openstack.org/ and to propose sessions
that you would like covered if you don't see a current proposal for your
topic of interest.

Since most of what I saw in the Neutron design summit sessions last year
is counter to how most other programs use their design summit sessions,
I encourage you to consider using the design summit session format.
Neutron design summit sessions end up being more like conference
presentations and that is not what the design summit was meant to be at all.

Design summit sessions are meant to be group discussion chaired by
either the proposer of the session or someone else but equally discussed
by all participants. They are not meant to be a presentation with a
passive audience.

Using the design summit sessions as they are intended to be used would
be more advantageous for the community then trying to schedule a meetup
that might be a splintering off of the design summit sessions. If this
direction is unsatisfactory for your needs after it is tried, please be
sure to attend the summit wrapup design session and express your
perspective so it can be included in future summit planning.

Thanks,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Code merge policy

2014-04-15 Thread Mike Scherbakov
Humans make mistakes... all the time. Let's think how we can automate this
to have appropriate Jenkins check. In this particular case, we could do the
following:
a) make it work in progress if we still unsure on some deps
b) can we have smoke test which would check that master node builds, and
simplest deploy passes? This needs to be run only if there are changes
discovered in ISO build script (including mirror changes), and puppet
manifests which deploy master node



On Tue, Apr 15, 2014 at 3:49 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 We have big and complicated structure of the project. And part of our
 patchsets require additional actions before merge. Sometimes we need
 approve from testers, sometimes we need merge requests in several repos at
 the same time, sometimes we need updates of rpm repositories before merge.

 We have an informal rule: invite all the required persons to the review. And
 the core reviewer does not merge code if some of the +1's are missing. Sad, but this
 rule is not obvious.

 This informal rule became even more strict when we need update of rpm/deb
 repositories, because OSCI changes should be accomplished right before
 merge. For such reviews we ask OSCI team to do changes, checks and merge.

 https://review.openstack.org/#/c/86001/ This particular request requires
 check of our 4.1.1 rpm/deb repositories status. That's why Roman Vyalov is
 added as reviewer.

 I don't like over-bureaucracy. My suggestion is simple: take into account
 reviewers status and do not merge if unsure.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Rabbit in HA mode for Glance

2014-04-15 Thread Vladimir Kuklin
Ilya, here is link to the bug created:

https://bugs.launchpad.net/fuel/+bug/1308104


On Sun, Apr 13, 2014 at 8:53 AM, Vladimir Kuklin vkuk...@mirantis.com wrote:

 Ilya, thank you for pointing this out. Obviously we will add it into fuel
 manifests.
 On 10 Apr 2014 at 19:43, Ilya Shakhat ishak...@mirantis.com
 wrote:

 Hi,

 Recently Glance switched to the oslo.messaging library and as a benefit got
 support for HA Rabbit queues (parameter 'rabbit_ha_queues'). There
 are similar parameters in other projects (Nova, Heat, Ceilometer) and they
 are configured in corresponding manifest files. Are there plans to add the
 same into Glance manifests?
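
 For reference, enabling it on the Glance side is just a configuration
 option; a snippet like the following should be all the manifests need to
 render (the exact section may vary between releases; in Icehouse the
 oslo.messaging rabbit options live in [DEFAULT] of glance-api.conf):

    [DEFAULT]
    rabbit_hosts = rabbit1:5672,rabbit2:5672
    rabbit_ha_queues = True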

 Thanks,
 Ilya

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com http://www.mirantis.ru/
www.mirantis.ru
vkuk...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Thoughts from the PTL

2014-04-15 Thread Brian Elliott

On Apr 13, 2014, at 11:58 PM, Michael Still mi...@stillhq.com wrote:

 First off, thanks for electing me as the Nova PTL for Juno. I find the
 outcome of the election both flattering and daunting. I'd like to
 thank Dan and John for running as PTL candidates as well -- I strongly
 believe that a solid democratic process is part of what makes
 OpenStack so successful, and that isn't possible without people being
 willing to stand up during the election cycle.

Congrats!

 
 I'm hoping to send out regular emails to this list with my thoughts
 about our current position in the release process. Its early in the
 cycle, so the ideas here aren't fully formed yet -- however I'd rather
 get feedback early and often, in case I'm off on the wrong path. What
 am I thinking about at the moment? The following things:
 
 * a mid cycle meetup. I think the Icehouse meetup was a great success,
 and I'd like to see us do this again in Juno. I'd also like to get the
 location and venue nailed down as early as possible, so that people
 who have complex travel approval processes have a chance to get travel
 sorted out. I think its pretty much a foregone conclusion this meetup
 will be somewhere in the continental US. If you're interested in
 hosting a meetup in approximately August, please mail me privately so
 we can chat.

Yeah this was a great opportunity to collaborate and keep the project pointed 
in the right direction during Icehouse.

 
 * specs review. The new blueprint process is a work of genius, and I
 think its already working better than what we've had in previous
 releases. However, there are a lot of blueprints there in review, and
 we need to focus on making sure these get looked at sooner rather than
 later. I'd especially like to encourage operators to take a look at
 blueprints relevant to their interests. Phil Day from HP has been
 doing a really good job at this, and I'd like to see more of it.

I have mixed feelings about the nova-specs repo.  I dig the open collaboration 
of the blueprints process, but I also think there is a danger of getting too 
process-oriented here.  Are these design documents expected to call out every 
detail of a feature?  Ideally, I’d like to see only very high level 
documentation in the specs repo.  Basically, the spec could include just enough 
detail for people to agree that they think a feature is worth inclusion.  More 
detailed discussion could remain on the code reviews since they are the actual 
end work product.

Thanks,
Brian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Re: deliver the vm-level HA to improve the business continuity with openstack

2014-04-15 Thread Steven Dake

On 04/15/2014 03:16 AM, Qiming Teng wrote:

What I saw in this thread are several topics:

1) Is VM HA really relevant (in a cloud)?

This is the most difficult question to answer, because it really depends
on who you are talking to, who are the user community you are facing.
IMHO, for most web-based applications that are born to run on cloud,
maybe certain level of business resiliency has already been built into
the code, so the application or service can live happily when VMs come
and go.

For traditional business applications, the scenario may be quite
different.  These apps are migrated to cloud for reasons like cost
savings, server consolidation, etc..  Quite some companies are
evaluating OpenStack for their private cloud -- which is a weird term,
IMHO.

In addition to this, while we are looking into the 'utility' vision of
cloud, we can still ask ourselves: a) can we survive one month of power
outage or water outage, though there are abundant supply elsewhere on
this
planet? b) what are the costs we need to pay if we eventually make it?
c) do we want to pay for this?

My personal experience is that our customers really want this feature
(VM HA) for their private clouds.  The question they asked us was:


   Does OpenStack support VM HA?  Maybe not for all VMS...
   We know we can have that using vSphere, Azure, or CloudStack...



2) Where is the best location to provide VM HA?

Suppose that we do feel the need to support VM HA, then the questions
following this would 'where' and 'how'.

Considering that a VM is not merely a bundle of compute processes, it is
actually a virtual execution environment that consumes resources like
storage and network bandwidth besides processor cycles, Nova may NOT be
the ideal location to deal with this cross-cutting concern.

High availability involves redundant resource provisioning, effective
failure detection and appropriate fail-over policies, including fencing.
Imposing all these requirements on Nova is impractical.  We may need to
consider whether VM HA, if ever implemented/supported, should be part of
the orchestration service, aka Heat.


3) Can/should we do the VM HA orchestration in Heat?

My perception is that it can be done in Heat, based on my limited
understanding of how Heat works. It may imply some requirements on other
projects (e.g.  nova, cinder, neutron ...) as well, though Heat should be
the orchestrator.

What do we need then?

   - A resource type for VM groups/clusters, for the redundant
 provisioning.  VMs in the group can be identical instances, managed
 by a Pacemaker setup among the VMs, just like a WatchRule in Heat can
 be controlled by Ceilometer.

 Another way to do this is to have the VMs monitored via heartbeat
 messages sent by Nova (if possible/needed), or some services injected
 into the VMs (consider what cfn-hup, cfn-signal does today).

 However, the VM group/cluster can decide how to react to a VM online
 /offline signal.  It may choose to a) restart the VM in-place; b)
 remote-restart (aka evacuate) the VM somewhere else; c) live/cold
 migrate the VM to other nodes.

 The policies can be outsourced to other plugins to account for global
 load-balancing or power management requirements. But that is an
 advanced feature that warrants another blueprint.

   - Some fencing support from nova, cinder, neutron to shoot the bad VMs
 in the head so a VM that cannot be reached is guaranteed to be cleanly
 killed.

   - VM failure detectors that can reliably tell whether a VM has failed.
 Sometimes a VM that failed the expected performance goal should be
 treated as failed as well, if we really want to be strict on this.

 A failure detector can reside inside Nova, as what has been done for
 the 'service groups' there.  It can reside inside a VM, as a service
 installed there, sending out heartbeat messages (before the battery runs
 out, :))

   - A generic signaling mechanism that allows a secure message delivery
 back to Heat indicating that a VM is alive or dead.

My current understanding is that we may avoid complicated task-flow
here.

Regards,
   - Qiming


Qiming,

If you read my original post on this thread, it outlines the current 
heat-core thinking, which is to reduce the scope of this resource from 
the Heat resources since it describes a workflow rather then an 
orchestrated thing (a Noun).


A good framework for HA already exists in the HARestarter 
resource.  It incorporates HA escalation, which is a critical feature of 
any HA system.  The fundamental problem with HARestarter is that it is in 
the wrong project.


Long term, HA, if desired, should be part of taskflow, though, because 
it's a verb, and verbs don't belong as heat orchestrated resources.


How we get from here to there is left as an exercise to the reader ;-)

Regards
-steve


For the most part we've been trying to encourage projects that want to
control VMs to 

Re: [openstack-dev] [heat][nova]dynamic scheduling

2014-04-15 Thread Steven Dake

On 04/15/2014 06:03 AM, Jiangying (Jenny) wrote:


Sorry, I'm not quite clear about it yet.

I'm trying to find a way that heat controls the flow but not the nova 
scheduler.


Heat doesn't control flow.  Heat expects a scheduler is built into 
whatever service it is consuming for resource management, if the 
resource is constrained for some reason (such as limited memory, disk, 
cpu resources available for consumption).  This is why something like a 
storage system (cinder) has a scheduler, and Heat does not.


It makes zero sense to add scheduling to Heat - since the projects that 
Heat consumes are in a much better position to make decisions about which 
resources get scheduled when and where.


Regards
-steve


*From:* Henrique Truta [mailto:henriquecostatr...@gmail.com]
*Sent:* 2014-04-14 21:39
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [heat][nova]dynamic scheduling

Hello!

I'm currently investigating both of these features you have mentioned, 
specifically the NEAT[1] and GANTT[2] projects, as you may have seen in 
last week's discussion.


Do you have any further ideas about how and why this would work with Heat?

Thanks,

Henrique

[1] http://openstack-neat.org/
[2] https://github.com/openstack/gantt

2014-04-13 22:53 GMT-03:00 Jiangying (Jenny) 
jenny.jiangy...@huawei.com:


Hi,

there has been a heated discussion about dynamic scheduling last 
week (http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg21644.html).


I am also interested in this topic. We believe that dynamic scheduling 
consists of two parts: balancing computing capacity and optimizing 
power consumption.


For balancing computing capacity, the ceilometer periodically monitors 
distribution and usage of CPU and memory resources for hosts and 
virtual machines. Based on the information, the scheduler calculates 
the current system standard deviation metric and determines the system 
imbalance by comparing it to the target. To resolve the imbalance, the 
scheduler gives suitable virtual machine migration suggestions to 
nova. In this way, dynamic scheduling achieves higher 
consolidation ratios and delivers optimized performance for the virtual 
machines.
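
A toy sketch of the imbalance check itself (the utilization numbers are made
up; a real implementation would take its samples from Ceilometer):

import statistics

host_cpu_util = {'host1': 0.82, 'host2': 0.35, 'host3': 0.40}
target_stddev = 0.15

stddev = statistics.pstdev(host_cpu_util.values())
if stddev > target_stddev:
    # System considered imbalanced: suggest migrating a VM from the busiest
    # host to the least loaded one.
    src = max(host_cpu_util, key=host_cpu_util.get)
    dst = min(host_cpu_util, key=host_cpu_util.get)
    print('imbalance %.2f > %.2f: suggest moving a VM from %s to %s'
          % (stddev, target_stddev, src, dst))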


For optimizing power consumption, we attempt to keep the resource 
utilization of each host within a specified target range. The 
scheduler evaluates if the goal can be reached by balancing the system 
workloads. If the resource utilization of a host remains below the 
target, the scheduler calls nova to power off some hosts. Conversely 
the scheduler powers on hosts to absorb the additional workloads. Thus 
optimizing power consumption offers an optimum mix of resource 
availability and power savings.


As Chen CH Ji said, nova is a cloud solution that aims to control 
virtual / real machine lifecycle management; the dynamic scheduling 
mechanism is more of an optimization of cloud resources. We 
think implementing the dynamic scheduling with heat may be worth attempting.


Do you have any comments?

Thanks,

Jenny


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-specs

2014-04-15 Thread Russell Bryant
On 04/15/2014 11:01 AM, Brian Elliott wrote:
 * specs review. The new blueprint process is a work of genius, and I
 think its already working better than what we've had in previous
 releases. However, there are a lot of blueprints there in review, and
 we need to focus on making sure these get looked at sooner rather than
 later. I'd especially like to encourage operators to take a look at
 blueprints relevant to their interests. Phil Day from HP has been
 doing a really good job at this, and I'd like to see more of it.
 
 I have mixed feelings about the nova-specs repo.  I dig the open 
 collaboration of the blueprints process, but I also think there is a danger 
 of getting too process-oriented here.  Are these design documents expected to 
 call out every detail of a feature?  Ideally, I’d like to see only very high 
 level documentation in the specs repo.  Basically, the spec could include 
 just enough detail for people to agree that they think a feature is worth 
 inclusion.  More detailed discussion could remain on the code reviews since 
 they are the actual end work product.

There is a balance to be found here.  The benefit of doing more review
earlier is to change direction as necessary when it's *much* easier to
do so.  It's a lot more time consuming to do re-work after code has been
written, than re-work in a spec.

Yes, it's more up front work, but I think it will speed up the process
overall.  It means we're much more in agreement and on the same page
before code even shows up.  That's huge.

One of the big problems we've had in code review is the amount of churn
and re-work required.  That is killing our throughput in code review.
If we can do more up front work that will reduce re-work later, it's
going to be a *huge* help to our primary project bottleneck: the code
review queue.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Code merge policy

2014-04-15 Thread Dmitry Pyzhov
So every developer should manually mark every such review as WIP? And
remove this flag only when everyone agreed to merge? This will require
additional actions in 50% of fuel-web and 100% of fuel-main reviews.
Developers make mistakes too.

Let's just be more accurate.


On Tue, Apr 15, 2014 at 6:41 PM, Mike Scherbakov
mscherba...@mirantis.com wrote:

 Humans make mistakes... all the time. Let's think how we can automate this
 to have appropriate Jenkins check. In this particular case, we could do the
 following:
 a) make it work in progress if we still unsure on some deps
 b) can we have smoke test which would check that master node builds, and
 simplest deploy passes? This needs to be run only if there are changes
 discovered in ISO build script (including mirror changes), and puppet
 manifests which deploy master node



 On Tue, Apr 15, 2014 at 3:49 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Guys,

 We have big and complicated structure of the project. And part of our
 patchsets require additional actions before merge. Sometimes we need
 approve from testers, sometimes we need merge requests in several repos at
 the same time, sometimes we need updates of rpm repositories before merge.

 We have informal rule: invite all the required persons to the review. And
 core reviewer does not merge code if part of +1's are missed. Sad, but this
 rule is not obvious.

 This informal rule became even more strict when we need update of rpm/deb
 repositories, because OSCI changes should be accomplished right before
 merge. For such reviews we ask OSCI team to do changes, checks and merge.

 https://review.openstack.org/#/c/86001/ This particular request requires
 check of our 4.1.1 rpm/deb repositories status. Thats why Roman Vyalov is
 added as reviewer.

 I don't like over-bureaucracy. My suggestion is simple: take into account
 reviewers status and do not merge if unsure.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Migrations, service plugins and Grenade jobs

2014-04-15 Thread Amir Sadoughi
I know alembic is designed to be global, but could we extend it to track 
multiple histories for a given database? In other words, various branches for 
different namespaces on a single database. Would this feature ameliorate the 
issues?
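
For illustration only (Alembic does not give us this today, and the revision
ID and file name below are made up), a per-plugin history could look roughly
like a labelled branch that starts its own base:

# .../alembic_migrations/versions/abc123_metering_base.py
revision = 'abc123'
down_revision = None           # starts an independent history
branch_labels = ('metering',)  # namespace for the metering plugin


def upgrade():
    pass


def downgrade():
    pass

and would then be advanced with something like "alembic upgrade metering@head".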

Amir

On Apr 15, 2014, at 8:24 AM, Kyle Mestery 
mest...@noironetworks.com wrote:

On Tue, Apr 15, 2014 at 7:03 AM, Salvatore Orlando 
sorla...@nicira.com wrote:
Thanks Anna.

I've been following the issue so far, but I am happy to hand it over to you.
I think the problem assessment is complete, but if you have more questions
ping me on IRC.

Regarding the solution, I think we already have a fairly wide consensus on
the approach.
There are however a few details to discuss:
- Conflicting schemas. For instance two migrations for two distinct plugins
might create tables with the same name but different columns.
 We first need to look at existing migrations to verify where this
condition occurs, and then study a solution case by case.
- Logic for corrective migrations. For instance, a corrective migration for
569e98a8132b_metering is needed. However, such a corrective migration should
have logic for understanding whether the original migration has already been
executed or not (see the sketch after this list).
- Corrective actions for corrupted schemas. This would be the case, for
instance, of somebody who enables metering while the database is at a
migration rev higher than the one when metering was introduced.
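
To make the corrective-migration idea concrete, a sketch (the table
definition here is only illustrative, not the real metering schema):

from alembic import op
import sqlalchemy as sa


def upgrade():
    bind = op.get_bind()
    inspector = sa.inspect(bind)
    if 'meteringlabels' in inspector.get_table_names():
        # The original metering migration already ran; nothing to fix.
        return
    op.create_table(
        'meteringlabels',
        sa.Column('id', sa.String(36), primary_key=True),
        sa.Column('name', sa.String(255)),
        sa.Column('description', sa.String(1024)),
    )


def downgrade():
    pass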

I reckon it might be the case of putting together a specification and push
it to the newly created neutron-specs repo, assuming that we feel confident
enough to start using this new process (Kyle and Mark might chime in on this
point). Also, I would like to see this work completed by Juno-1, which I
reckon is a reasonable target.

I'm working to get this new specification approval process ready,
hopefully by later
today. Once this is done, I agree with Salvatore, pushing a gerrit
review with the
specification for this work will be the right approach.

Of course I'm available for discussing design, implementation, reviewing and
writing code.

Thanks to Anna and Salvatore for taking this up!

Kyle

Salvatore



On 15 April 2014 12:44, Anna Kamyshnikova 
akamyshnik...@mirantis.com
wrote:

Hello everyone!

I would like to try to solve this problem. I registered blueprint on this
topic
https://blueprints.launchpad.net/neutron/+spec/neutron-robust-db-migrations
and I'm going to experiment with options to solve this. I'm welcome any
suggestions and ready to talk about it via IRC (akamyshnikova).

Regards
Ann


On Mon, Apr 14, 2014 at 9:39 PM, Sean Dague s...@dague.net wrote:

On 04/14/2014 12:46 PM, Eugene Nikanorov wrote:
Hi Salvatore,

The described problem could be even worse if vendor drivers are
considered.
Doesn't #1 require that all DB tables are named differently? Otherwise
it seems that user can't be sure in DB schema even if all tables are
present.

I think the big part of the problem is that we need to support both
online and offline migrations. Without the latter things could be a
little bit simpler.

Also it seems to me that problem statement should be changed to the
following:
One needs to migrate from (Config1, MigrationID1) to (Config2,
MigrationID2), and currently our code only accounts for MigrationIDs.
We may consider amending DB with configuration metadata, at least that
will allow to run migration code with full knowledge of what happened
before (if online mode is considered).
In offline mode that will require providing old and current
configurations.

That was just thinking aloud, no concrete proposals yet.

The root issue really is Migrations *must* be global, and config
invariant. That's the design point in both sqlalchemy-migrate and
alembic. The fact that there is one global migrations table per
database, with a single value in it, is indicative of this fact.

I think that design point got lost somewhere along the way, and folks
assumed migrations were just a way to change schemas. They are much more
constrained than that.

It does also sound like the data model is going to need some serious
reconsidering given what's allowed to be changed at the plugin or vendor
driver model. Contrast this with Nova, were virt drivers don't get to
define persistant data that's unique to them (only generic data that
they fit into the grander nova model).

The one time we had a driver which needed persistent data (baremetal) it
managed its own database entirely.

   -Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

[openstack-dev] Hyper-V Meeting Minutes

2014-04-15 Thread Peter Pouliot
Hi Everyone,
Here are the minutes from today’s Hyper-V Meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.txt
Log:
http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.log.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] Atlanta meetup for 3rd party testing?

2014-04-15 Thread Kurt Taylor

Anita Kuno ante...@anteaya.info wrote on 04/15/2014 09:41:17 AM:
 On 04/15/2014 10:20 AM, Kurt Taylor wrote:
 
 
  Sorry if you get this twice.
 
  Since summit is approaching quickly, I wanted to see if anyone had
interest
  in forming a meetup for 3rd party testing. This would be a group for
  helping the project cores by helping ourselves and hopefully improving
  overall CI rollout.
 
  Some topics to discuss:  I'd at least like to get a standard tag that
would
  help with email filtering, but we could also build a FAQ and
documentation
  contributions to improve the content of the 3rd party testing. We could
  also document different use cases and best practices for each on how
  different testers solved problems they encountered for their
environment. I
  don't know how long we would be able to meet, so this may just be an
  organizational meeting, focusing on how to best share this info.
 
  I started an etherpad for discussion topics and eventual agenda here:
  https://etherpad.openstack.org/p/3rdPartyTesting
 
  I have not worked out all the details for when and where to meet, but I
  would be happy to set it up and facilitate the discussion. I wanted to
see
  if there was any interest before I took it any further.
 
  Any interest?

 
 There is a summit design session proposed to address this:
 http://summit.openstack.org/cfp/details/59

 Entitled: Improving Third Party Testing

 If you would like to add the information you would like to cover as a
 comment to the proposal that would help ensure the content gets covered.

Thanks for your response. I did not know about this and I am glad this is
proposed. It was my intention originally to propose this type of
discussion, but after bringing this up in -infra, it was not something the
team wanted to discuss in a design session. They suggested a BOF or meetup
at the infra table in developer's lounge. I do not know yet if that
location is still a possibility or not, as I indicated in my email.


 Since the purpose of the design summit is to cover material like this, I
 encourage people to get familiar with the summit sessions that are
 already proposed: http://summit.openstack.org/ and to propose sessions
 that you would like covered if you don't see a current proposal for your
 topic of interest.

 Since most of what I saw in the Neutron design summit sessions last year
 is counter to how most other programs use their design summit sessions,
 I encourage you to consider using the design summit session format.
 Neutron design summit sessions end up being more like conference
 presentations and that is not what the design summit was meant to be at
all.

 Design summit sessions are meant to be group discussion chaired by
 either the proposer of the session or someone else but equally discussed
 by all participants. They are not meant to be a presentation with a
 passive audience.

Yes, I am very familiar with UDS-style design summits and have led several
design sessions in the past on other projects. I think this is a big factor
in the enormous success of the OpenStack project.

There is probably some overlap in the proposed design session and the
meetup. But I actually see the meetup as more of an organizational meeting
to pull together the developers with experience in CI to share ideas and
how we got past the problems we found.

Maybe we can wait for the outcome of the session discussion and determine
next steps after summit?

Thanks again!

Kurt Taylor (krtaylor)
OpenStack Development Lead - PowerKVM CI
IBM Linux Technology Center
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Meeting Minutes

2014-04-15 Thread Joe Gordon
Is there a plan to get Hyper-V CI working better? It looks like it is
failing significantly more frequently than Jenkins.

http://www.rcbops.com/gerrit/reports/nova-cireport.html


On Tue, Apr 15, 2014 at 9:25 AM, Peter Pouliot ppoul...@microsoft.com wrote:

  Hi Everyone,
 Here are the minutes from today’s Hyper-V Meeting.

  Minutes:
 http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.html
 Minutes (text):
 http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.txt
 Log:
 http://eavesdrop.openstack.org/meetings/hyper_v/2014/hyper_v.2014-04-15-16.02.log.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [3rd party testing] Atlanta meetup for 3rd party testing?

2014-04-15 Thread Anita Kuno
On 04/15/2014 12:34 PM, Kurt Taylor wrote:
 
 Anita Kuno ante...@anteaya.info wrote on 04/15/2014 09:41:17 AM:
 On 04/15/2014 10:20 AM, Kurt Taylor wrote:


 Sorry if you get this twice.

 Since summit is approaching quickly, I wanted to see if anyone had
 interest
 in forming a meetup for 3rd party testing. This would be a group for
 helping the project cores by helping ourselves and hopefully improving
 overall CI rollout.

 Some topics to discuss:  I'd at least like to get a standard tag that
 would
 help with email filtering, but we could also build a FAQ and
 documentation
 contributions to improve the content of the 3rd party testing. We could
 also document different use cases and best practices for each on how
 different testers solved problems they encountered for their
 environment. I
 don't know how long we would be able to meet, so this may just be an
 organizational meeting, focusing on how to best share this info.

 I started an etherpad for discussion topics and eventual agenda here:
 https://etherpad.openstack.org/p/3rdPartyTesting

 I have not worked out all the details for when and where to meet, but I
 would be happy to set it up and facilitate the discussion. I wanted to
 see
 if there was any interest before I took it any further.

 Any interest?
 

 There is a summit design session proposed to address this:
 http://summit.openstack.org/cfp/details/59

 Entitled: Improving Third Party Testing

 If you would like to add the information you would like to cover as a
 comment to the proposal that would help ensure the content gets covered.
 
 Thanks for your response. I did not know about this and I am glad this is
 proposed. It was my intention originally to propose this type of
 discussion, but after bringing this up in -infra, it was not something the
 team wanted to discuss in a design session. They suggested a BOF or meetup
 at the infra table in developer's lounge. I do not know yet if that
 location is still a possibility or not, as I indicated in my email.
 

 Since the purpose of the design summit is to cover material like this, I
 encourage people to get familiar with the summit sessions that are
 already proposed: http://summit.openstack.org/ and to propose sessions
 that you would like covered if you don't see a current proposal for your
 topic of interest.

 Since most of what I saw in the Neutron design summit sessions last year
 is counter to how most other programs use their design summit sessions,
 I encourage you to consider using the design summit session format.
 Neutron design summit sessions end up being more like conference
 presentations and that is not what the design summit was meant to be at
 all.

 Design summit sessions are meant to be group discussion chaired by
 either the proposer of the session or someone else but equally discussed
 by all participants. They are not meant to be a presentation with a
 passive audience.
 
 Yes, I am very familiar with UDS-style design summits and have lead several
 design sessions in the past on other projects. I think this is a big factor
 in the enormous success of the OpenStack project.
 
 There is probably some overlap in the proposed design session and the
 meetup. But, I actually see the meetup as more of a organizational meeting
 to pull together the developers with experience in CI to share ideas and
 how we got past the problems we found.
 
 Maybe we can wait for the outcome of the session discussion and determine
 next steps after summit?
I can get behind that direction.

Yes it feels like mid-cycle meetups are getting to be more and more
important. Let's bring our ideas to the summit session (hoping it gets a
slot) and continue the discussion afterwards.

Thanks Kurt,
Anita.
 
 Thanks again!
 
 Kurt Taylor (krtaylor)
 OpenStack Development Lead - PowerKVM CI
 IBM Linux Technology Center
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Meeting Minutes

2014-04-15 Thread Collins, Sean
Meeting minutes:

http://eavesdrop.openstack.org/meetings/neutron_ipv6/2014/neutron_ipv6.2014-04-15-14.00.html

See you next week!

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

2014-04-15 Thread Joshua Harlow
I think we agree on the lazy execution model. At least at a high-level, I'd 
rather not agree on the exact APIs exposed until there is an 
implementation, since I've found that agreeing on APIs before the needed 
groundwork exists to make them happen is pretty useless and a waste of 
energy on everyone's part.

Decider sounds like it could work as a name too, although in dataflow-like 
work it seems to be called a switch or gate; either one, I guess.

As far as the micro-language:

So there are typically 2 types of DSL's that occur, internal and external.

An internal DSL is like http://martinfowler.com/bliki/InternalDslStyle.html, 
taskflow is already a micro-DSL internal to python (mistral is an external 
DSL[1]). To me there is a drawback in becoming too much of a DSL (internal or 
external) in that it requires a lot of new learning (imho internal DSLs are 
easier to pick up since they take advantage of the surrounding language's 
capabilities, in this case python). So I just want to keep in our 
minds that we need to make it simple *enough*, or we will die a nasty death of 
complexity :-P

[1] http://martinfowler.com/bliki/DomainSpecificLanguage.html
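
To make the switch/gate idea concrete (a pure toy illustration, not the
TaskFlow or Mistral API):

def quota_ok(history):
    # A 'decider': looks at results produced so far and gates a branch.
    return history.get('check_quota') == 'ok'


def run(flow, history):
    for name, task, gate in flow:      # gate is None or a decider callable
        if gate is None or gate(history):
            history[name] = task(history)
    return history


flow = [
    ('check_quota', lambda h: 'ok', None),
    ('create_vm', lambda h: 'vm-1234', quota_ok),
]
print(run(flow, {}))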

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 15, 2014 at 12:19 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

Some notes:

  *   Even though we use YAQL now our design is flexible enough to plug other 
ELs in.
  *   If it tells you something in Amazon SWF a component that makes a decision 
about a further route is called Decider.
  *   This discussion about conditionals is surely important but it doesn’t 
matter too much if we don’t agree on that lazy execution model.

Of course I'm trying to make the above not be its own micro-language as much as 
possible (a switch object starts to act like one, sadly).

Why do you think it’s going to be a micro-language?

[1] http://www.cs.cmu.edu/~aldrich/papers/onward2009-concurrency.pdf
[2] 
http://www.cs.ucf.edu/~dcm/Teaching/COT4810-Spring2011/Literature/DataFlowProgrammingLanguages.pdf

Cool, thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration summary

2014-04-15 Thread Joshua Harlow
Well, Ivan afaik is thinking through it, but being a community project it's not 
exactly easy to put estimations or timelines on things (I don't control Ivan, 
or others).

It will likely go faster if you guys want to get involved.

-Josh

From: Renat Akhmerov rakhme...@mirantis.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, April 15, 2014 at 12:35 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration 
summary


On 15 Apr 2014, at 11:13, Joshua Harlow 
harlo...@yahoo-inc.com wrote:

Sure, its not the fully complete lazy_engine, but piece by piece we can get 
there.

Did you make any estimations when it could happen? :)

Of course code/contributions are welcome, as such things will benefit more than 
just mistral, but openstack as a whole :-)

OK.

From: Kirill Izotov enyk...@stackstorm.com
The whole idea of sub-flows within the scope of direct conditional transitions 
is a bit unclear to me (and probably us all) at the moment, though I'm trying 
to rely on them only as a means to lessen the complexity.

Yes, eventually it’s for reducing complexity. I would just add that it opens 
wide range of opportunities like:

  *   ability to combine multiple physically independent workflows
  *   reusability (using one workflow as a part of another)
  *   isolation (different namespaces for data flow contexts etc)

Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Dougal Matthews

Another +1 for using bash. Sounds like an easy win.

On 15/04/14 12:31, Ghe Rivero wrote:

+1 to use bash as the default shell. So far, all major distros use bash
as the default one (except Debian which uses dash).
And about rewriting the code in Python: I agree that shell is complicated
for large programs, but writing anything command-oriented in anything other than
shell is a nightmare. But there are some parts that can benefit from it.

Ghe Rivero

On 04/15/2014 11:05 AM, Chris Jones wrote:

Hi

On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com
wrote:

I suppose that rewriting the code to be in Python is out of the
question ?  IMHO shell is just a terrible language for doing any
program that is remotely complicated (ie longer than 10 lines of


I don't think it's out of the question - where something makes sense
to switch to Python, that would seem like a worthwhile thing to be
doing. I do think it's a different question though - we can quickly
flip things from /bin/sh to /bin/bash without affecting their
suitability for replacement with python.

--
Cheers,

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Migrations, service plugins and Grenade jobs

2014-04-15 Thread Mark McClain

On Apr 14, 2014, at 11:54 AM, Salvatore Orlando 
sorla...@nicira.com wrote:


1) Specify that all migrations must run for every plugin (*) unless they are 
really introducing schemas which are specific to a particular technology (such 
as uuid mappings between neutron and the backend)

This approach will probably ensure all the important migrations are executed.
However, it is not back portable, unless deployers decide to tear down and 
rebuild their databases from scratch, which is unlikely to happen.

To this aim recovery scripts might be provided to sync up the schema state of 
specific service plugins with the current alembic migration registered in the 
neutron database; or appropriate migrations can be added in the path to fix 
database schemas.

(*) Neutron does not address, and probably never will address, switching from 
plugin a to b for a given service (e.g.: core features).

This option has several additional problems because of the way plugins/drivers 
are conditionally imported and packaged.  As a result, auto generation of 
schema (inadvertent drops of schema) or even validation of existing models vs 
the migration will fail because models are not imported unless they are part of the 
configuration.

mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-specs

2014-04-15 Thread Sean Dague
On 04/15/2014 11:42 AM, Russell Bryant wrote:
 On 04/15/2014 11:01 AM, Brian Elliott wrote:
 * specs review. The new blueprint process is a work of genius, and I
 think its already working better than what we've had in previous
 releases. However, there are a lot of blueprints there in review, and
 we need to focus on making sure these get looked at sooner rather than
 later. I'd especially like to encourage operators to take a look at
 blueprints relevant to their interests. Phil Day from HP has been
 doing a really good job at this, and I'd like to see more of it.

 I have mixed feelings about the nova-specs repo.  I dig the open 
 collaboration of the blueprints process, but I also think there is a danger 
 of getting too process-oriented here.  Are these design documents expected 
 to call out every detail of a feature?  Ideally, I’d like to see only very 
 high level documentation in the specs repo.  Basically, the spec could 
 include just enough detail for people to agree that they think a feature is 
 worth inclusion.  More detailed discussion could remain on the code reviews 
 since they are the actual end work product.
 
 There is a balance to be found here.  The benefit of doing more review
 earlier is to change direction as necessary when it's *much* easier to
 do so.  It's a lot more time consuming to do re-work after code has been
 written, than re-work in a spec.
 
 Yes, it's more up front work, but I think it will speed up the process
 overall.  It means we're much more in agreement and on the same page
 before code even shows up.  That's huge.
 
 One of the big problems we've had in code review is the amount of churn
 and re-work required.  That is killing our throughput in code review.
 If we can do more up front work that will reduce re-work later, it's
 going to be a *huge* help to our primary project bottleneck: the code
 review queue.

I think the previous process is a huge demotivator to contributors: when
they file a blueprint with minimal info, it gets approved, they spend
months working on it, and only at the end of the process does the idea
get dug into enough for people to realize that it's not what anyone wants.

At that point people are so invested in the time they spent on this
feature that turning that conversation productive is really hard.

Catching more of these up front and being more explicit about what Nova
wants in a cycle is goodness.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Basic zuul startup question: Private key file is encrypted

2014-04-15 Thread Dane Leblanc (leblancd)
I'm trying to modify a 3rd party test setup to use zuul, but I'm seeing the 
following error when I start up the zuul server:

===
2014-04-15 09:09:18,910 ERROR gerrit.GerritWatcher: Exception on ssh event 
stream:
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/zuul/lib/gerrit.py, line 64, in 
_run
key_filename=self.keyfile)
  File /usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 342, 
in connect
self._auth(username, password, pkey, key_filenames, allow_agent, 
look_for_keys)
  File /usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 533, 
in _auth
raise saved_exception
PasswordRequiredException: Private key file is encrypted
===
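
The exception comes from paramiko, which refuses passphrase-protected keys;
I can reproduce it outside of zuul with a couple of lines (key path as in
the config below):

import paramiko

try:
    paramiko.RSAKey.from_private_key_file('/var/lib/zuul/.ssh/id_rsa')
    print('key loads without a passphrase')
except paramiko.PasswordRequiredException:
    print('key is encrypted; zuul needs a passphrase-less key')

If that check fails, the key needs to be regenerated (or have its passphrase
removed) before zuul can use it.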

Here is my zuul config file (/etc/zuul/zuul.conf):

===
[gearman]
server=127.0.0.1

[gearman_server]
start=true
log_config=/etc/zuul/gearman-logging.conf

[gerrit]
server=review.openstack.org
user=zuul
sshkey=/var/lib/zuul/.ssh/id_rsa
#user=jenkins
#sshkey=/var/lib/jenkins/.ssh/id_rsa
#user=cisco_neutron_ci
#sshkey=/home/cisco_neutron_ci/.ssh/id_rsa

[zuul]
layout_config=/etc/zuul/layout.yaml
log_config=/etc/zuul/logging.conf
state_dir=/var/lib/zuul
git_dir=/var/lib/zuul/git
push_change_refs=false
url_pattern=http://128.107.233.28:8080/job/(job.name)/(build.number)
status_url=http://status.openstack.org/zuul/
job_name_in_report=true
zuul_url=https://review.openstack.org
===

I've tried using various users for the zuul-to-gerrit connection (zuul, 
jenkins, and our corporate gerrit account username).

Is there something basic that I'm missing in the config?

Appreciate any help,
Dane

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread Robert Collins
I've been watching the nova process, and I think it's working out well
- it certainly addresses:
 - making design work visible
 - being able to tell who has had input
 - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..

I'm thinking we can just add docs to incubator, since that's already a
repository separate from our production code - what do folks think?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2014-04-15 Thread Vishvananda Ishaya
Hello all,

I’d like to announce my candidacy for the Technical Committee election.

I was one of the original authors of the Nova project and served as
its PTL for the first two years that the position existed. I have also
been on the Technical Committee since its inception. I was also recently
elected to the OpenStack Board. In my day job I am the Chief Technical
Officer for Nebula, a startup focused on bringing OpenStack to the
Enterprise.

My role in OpenStack has changed over time from being one of the top
code contributors to more leadership and planning. I helped start both
Cinder and Devstack. I’m on the stable-backports committee, and I’m
currently working on an effort to bring Hierarchical Multitenancy to
all of the OpenStack Projects. I also spend a lot of my time dealing
with my company's customers, who are real operators and users of
OpenStack.

I think there are two major governance issues that need to be addressed
in OpenStack. We’ve started having these discussions in both the Board
and the Technical Committee, but some more effort is needed to drive
them home.

1. Stop the kitchen-sink approach. We are adding new projects like mad
and in order to keep the quality of the integrated release high, we have
to raise the bar to become integrated. We made some changes over the
past few months here.
2. Better product management. This was a topic of discussion at the
last board meeting and one we hope to continue at the joint meeting in
Atlanta. We have a bit of a hole across OpenStack that is filled in
most organizations by a product management team. We need to be more
conscious of release quality and addressing customer issues. It isn’t
exactly clear how something like this should happen in Open Source,
but it is something we should try to address.

I hope to be able to continue to address these issues on the Technical
Committee and provide some much-needed understanding from the “old-days”
of OpenStack. It is often helpful to know where you came from in order
to see the best direction to go next.

Thanks,
Vish Ishaya


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Chmouel Boudjnah
FWIW: we are using bash in devstack; if we were going to try to make it
POSIX Bourne shell (or whatever /bin/sh is), it would have been a huge pain.


On Tue, Apr 15, 2014 at 1:25 PM, Dougal Matthews dou...@redhat.com wrote:

 Another +1 for using bash. Sounds like an easy win.


 On 15/04/14 12:31, Ghe Rivero wrote:

 +1 to use bash as the default shell. So far, all major distros use bash
 as the default one (except Debian which uses dash).
 An about rewriting the code in Python, I agree that shell is complicated
 for large programs, but writing anything command oriented in other than
 shell is a nightmare. But there are some parts that can benefit from that.

 Ghe Rivero

 On 04/15/2014 11:05 AM, Chris Jones wrote:

 Hi

 On 15 April 2014 09:14, Daniel P. Berrange berra...@redhat.com
 mailto:berra...@redhat.com wrote:

 I supose that rewriting the code to be in Python is out of the
 question ?  IMHO shell is just a terrible language for doing any
 program that is remotely complicated (ie longer than 10 lines of


 I don't think it's out of the question - where something makes sense
 to switch to Python, that would seem like a worthwhile thing to be
 doing. I do think it's a different question though - we can quickly
 flip things from /bin/sh to /bin/bash without affecting their
 suitability for replacement with python.

 --
 Cheers,

 Chris


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2014-04-15 Thread Tristan Cacqueray
confirmed

On 04/15/2014 08:45 PM, Vishvananda Ishaya wrote:
 Hello all,
 
 I’d like to announce my candidacy for the Technical Committee election.
 
 I was one of the original authors of the Nova project and served as
 its PTL for the first two years that the position existed. I have also
 been on the Technical Comittee since its inception. I was also recently
 elected to the OpenStack Board. In my day job I am  the Chief Technical
 Officer for Nebula, a startup focused on bringing OpenStack to the
 Enterprise.
 
 My role in OpenStack has changed over time from being one of the top
 code contributors to more leadership and planning. I helped start both
 Cinder and Devstack. I’m on the stable-backports committee, and I’m
 currently working on an effort to bring Hierarchical Multitenancy to
 all of the OpenStack Projects. I also spend a lot of my time dealing
 with my companies customers, which are real operators and users of
 OpenStack.
 
 I think there are two major governance issues that need to be addressed
 in OpenStack. We’ve started having these discussions in both the Board
 and the Technical Committee, but some more effort is needed to drive
 them home.
 
 1. Stop the kitchen-sink approach. We are adding new projects like mad
 and in order to keep the quality of the integrated release high, we have
 to raise the bar to become integrated. We made some changes over the
 past few months here.
 2. Better product management. This was a topic of discussion at the
 last board meeting and one we hope to continue at the joint meeting in
 Atlanta. We have a bit of a hole across OpenStack that is filled in
 most organizations by a product management team. We need to be more
 conscious of release quality and addressing customer issues. It isn’t
 exactly clear how something like this should happen in Open Source,
 but it is something we should try to address.
 
 I hope to be able to continue to address these issues on the Technical
 Committee and provide some much-needed understanding from the “old-days”
 of OpenStack. It is often helpful to know where you came from in order
 to see the best direction to go next.
 
 Thanks,
 Vish Ishaya
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][chaining][policy] Port-oriented Network service chaining

2014-04-15 Thread Carlos Gonçalves
Hi Kanzhe,

First off, thank you for showing interest in discussing this proposal!

I’m not fully sure if I understood your point. Could you elaborate a bit more 
on the L1, L2, L3 part?

Regarding the traffic steering API, as I see it the Neutron port is the virtual 
counterpart of the network interface and would allow L1, L2 and L3 steering 
within OpenStack.  Within a single OpenStack deployment I think the Neutron 
port abstraction might be enough. Nevertheless in the API data model proposal 
we have the service function endpoint abstraction, which can (eventually) be 
mapped to something other than a Neutron port (e.g., a remote IP).

Thanks,
Carlos Goncalves

On 15 Apr 2014, at 02:07, Kanzhe Jiang kan...@gmail.com wrote:

 Hi Carlos,
 
 This is Kanzhe. We discussed your port-based SFC on the Neutron advanced 
 service IRC.
 I would like to reach out to you to discuss a bit more.
 
 As you know, Neutron port is a logic abstraction for network interfaces with 
 a MAC and IP address. However, network services could be used at different 
 layers: L3, L2, or L1. In the L3 case, each service interface could be easily 
 mapped to a Neutron port. However, in the other two cases, there won't be a 
 corresponding Neutron port. In your proposal, you mentioned a DPI service. What 
 are your thoughts?
 
 Neutron doesn't have a traffic steering API. Is the Neutron port the right 
 abstraction for introducing a traffic steering API? Or might Neutron need a 
 separate abstraction for that?
 
 Love to discuss more!
 Kanzhe
 
 
 On Tue, Mar 25, 2014 at 3:59 PM, Carlos Gonçalves m...@cgoncalves.pt wrote:
 Hi,
 
 Most of the advanced services and group policy sub-team members who attended 
 last week’s meeting should remember I promised to start a drafting proposal 
 regarding network service chaining. This week I got to start writing a 
 document which is accessible here: 
 https://docs.google.com/document/d/1Bk1e8-diE1VnzlbM8l479Mjx2vKliqdqC_3l5S56ITU
 
 It should not be considered a formal blueprint as it yet requires large 
 discussion from the community wrt the validation (or sanity if you will) of 
 the proposed idea.
 
 I will be joining the advanced service IRC meeting tomorrow, and the group 
 policy IRC meeting thursday, making myself available to answer any questions 
 you may have. In the meantime you can also start discussing in this email 
 thread or commenting in the document.
 
 Thanks,
 Carlos Goncalves
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-specs

2014-04-15 Thread Solly Ross
Just wanted to confirm what Sean said -- as someone who just joined the 
OpenStack community last
year, going to implement a vaguely worded blueprint and then having the code 
review be derailed
with people saying "well, you probably should be using this completely 
different design" is fairly
frustrating.  While you come to anticipate certain changes, IMHO it's 
definitely much better to decide
on the design *before* you start coding, that way code reviews can focus on the 
code, and you don't have
to completely rewrite patches as much.

Best Regards,
Solly Ross

- Original Message -
From: Sean Dague s...@dague.net
To: openstack-dev@lists.openstack.org
Sent: Tuesday, April 15, 2014 1:45:16 PM
Subject: Re: [openstack-dev] [Nova] nova-specs

On 04/15/2014 11:42 AM, Russell Bryant wrote:
 On 04/15/2014 11:01 AM, Brian Elliott wrote:
 * specs review. The new blueprint process is a work of genius, and I
 think its already working better than what we've had in previous
 releases. However, there are a lot of blueprints there in review, and
 we need to focus on making sure these get looked at sooner rather than
 later. I'd especially like to encourage operators to take a look at
 blueprints relevant to their interests. Phil Day from HP has been
 doing a really good job at this, and I'd like to see more of it.

 I have mixed feelings about the nova-specs repo.  I dig the open 
 collaboration of the blueprints process, but I also think there is a danger 
 of getting too process-oriented here.  Are these design documents expected 
 to call out every detail of a feature?  Ideally, I’d like to see only very 
 high level documentation in the specs repo.  Basically, the spec could 
 include just enough detail for people to agree that they think a feature is 
 worth inclusion.  More detailed discussion could remain on the code reviews 
 since they are the actual end work product.
 
 There is a balance to be found here.  The benefit of doing more review
 earlier is to change direction as necessary when it's *much* easier to
 do so.  It's a lot more time consuming to do re-work after code has been
 written, than re-work in a spec.
 
 Yes, it's more up front work, but I think it will speed up the process
 overall.  It means we're much more in agreement and on the same page
 before code even shows up.  That's huge.
 
 One of the big problems we've had in code review is the amount of churn
 and re-work required.  That is killing our throughput in code review.
 If we can do more up front work that will reduce re-work later, it's
 going to be a *huge* help to our primary project bottleneck: the code
 review queue.

I think the previous process is a huge demotivator to contributors, when
they file a blueprint with minimal info, it gets approved, they spend
months working on it, and only at the end of the process does the idea
get dug into enough for people to realize that it's not what anyone wants.

At that point people are so invested in the time they spent on this
feature that turning that conversation productive is really hard.

Catching more of these up front and being more explicit about what Nova
wants in a cycle is goodness.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] computed package names?

2014-04-15 Thread Zane Bitter

On 15/04/14 14:31, Mike Spreitzer wrote:

It appears that in Fedora 19 and 20 the Wordpress examples need to
install different packages than in every other release (see my debugging
in https://review.openstack.org/#/c/87065/).  I just got a complaint
from Heat validation that I can't do this:

  "AWS::CloudFormation::Init" : {
    "config" : {
      "packages" : {
        "yum" : {
          { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
              "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "client" ] } : [],
          { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
              "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "server" ] } : [],

because of "expecting property name" at the place where the first
{ "Fn::FindInMap" : ... } appears.  Am I understanding this right?  Is there
a workable way to solve this problem?


The formatting of that config section is somewhat bizarre, and 
unfortunately leaves no workable way to solve this problem because in 
JSON a property name cannot be an object just as in Python a dict key 
cannot be a dict.
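
You can get the same complaint from any JSON parser, outside of Heat 
entirely (the exact error wording varies with the Python version):

echo '{ {"a": 1} : [] }' | python -m json.tool
# Expecting property name: line 1 column 3 (char 2)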


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread Monty Taylor

On 04/15/2014 11:44 AM, Robert Collins wrote:

I've been watching the nova process, and I think its working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..


++


I'm thinking we can just add docs to incubator, since thats already a
repository separate to our production code - what do folk think?


In the current nova-specs thread on the ML, Tim Bell says:

I think that there is also a need to verify the user story aspect. One 
of the great things with the ability to subscribe to nova-specs is that 
the community can give input early, when we can check on the need and 
the approach. I know from the CERN team how the requirements need to be 
reviewed early, not after the code has been written.


Which is great. I'm mentioning it because he calls out the ability to 
subscribe to nova-specs.


I think if you put them in incubator, then people who are wanting to 
fill a role like Tim - subscribing as an operator and validating user 
stories - might be a bit muddied by patches to other things. (although 
thanks for having a thought about less repos :) )


So I'd just vote, for whatever my vote is worth, for a tripleo-specs repo.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Clint Byrum
Excerpts from Ben Nemec's message of 2014-04-14 09:26:17 -0700:
 tldr: I propose we use bash explicitly for all diskimage-builder scripts 
 (at least for the short-term - see details below).
 
 This is something that was raised on my linting changes to enable set -o 
 pipefail.  That is a bash-ism, so it could break in the 
 diskimage-builder scripts that are run using /bin/sh.  Two possible 
 fixes for that: switch to /bin/bash, or don't use -o pipefail


What about this:

if ! [ "$SHEBANG" = "#!/bin/bash" ] ; then
  report_warning "Non bash shebang, skipping script lint"
fi

 But I think this raises a bigger question - does diskimage-builder 
 require bash?  If so, I think we should just add a rule to enforce that 
 /bin/bash is the shell used for everything.  I know we have a bunch of 
 bash-isms in the code already, so at least in the short-term I think 
 this is probably the way to go, so we can get the benefits of things 
 like -o pipefail and lose the ambiguity we have right now.  For 
 reference, a quick grep of the diskimage-builder source shows we have 
 150 scripts using bash explicitly and only 24 that are plain sh, so 
 making the code truly shell-agnostic is likely to be a significant 
 amount of work.

Yes, diskimage-builder is bash, not posix shell. We're not masochists.
;)

 
 In the long run it might be nice to have cross-shell compatibility, but 
 if we're going to do that I think we need a couple of things: 1) Someone 
 to do the work (I don't have a particular need to run dib in not-bash, 
 so I'm not signing up for that :-) 2) Testing in other shells - 
 obviously just changing /bin/bash to /bin/sh doesn't mean we actually 
 support anything but bash.  We really need to be gating on other shells 
 if we're going to make a significant effort to support them.  It's not 
 good to ask reviewers to try to catch every bash-ism proposed in a 
 change.  This also relates to some of the unit testing work that is 
 going on right now too - if we had better unit test coverage of the 
 scripts we would be able to do this more easily.


I suggest that diskimage-builder's included elements should be /bin/bash
only. When we have an element linting tool, non-bash shebangs should be
warnings, and we should enforce no warnings. For t-i-e, we can strive for
no warnings, but that would be a stretch goal and may involve refining
the warnings.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Clint Byrum
Excerpts from Ghe Rivero's message of 2014-04-15 04:31:19 -0700:
 +1 to use bash as the default shell. So far, all major distros use bash
 as the default one (except Debian which uses dash).
 And about rewriting the code in Python, I agree that shell is complicated
 for large programs, but writing anything command-oriented in anything other than
 shell is a nightmare. But there are some parts that can benefit from that.
 

Side note, Ubuntu uses dash as /bin/sh.
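
A quick illustration of why this matters, assuming both bash and dash are
installed:

bash -c 'set -o pipefail; false | true; echo "pipeline status: $?"'
# prints "pipeline status: 1" -- pipefail propagates the failure of "false"

dash -c 'set -o pipefail; false | true; echo "pipeline status: $?"'
# dash bails out with an "Illegal option -o pipefail" error before the
# pipeline even runs

So anything relying on pipefail silently depends on the script really being
run by bash.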

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-specs

2014-04-15 Thread Matt Van Winkle
Exactly.  Even if operators/users only comment with a +0, it's already
flushed out a lot of good details on several blueprints.

Thanks!
Matt


On 4/15/14 2:38 PM, Tim Bell tim.b...@cern.ch wrote:


+2

I think that there is also a need to verify the user story aspect. One of
the great things with the ability to subscribe to nova-specs is that the
community can give input early, when we can check on the need and the
approach. I know from the CERN team how the requirements need to be
reviewed early, not after the code has been written.

Tim

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com]
Sent: 15 April 2014 21:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] nova-specs

Just wanted to confirm what Sean said -- as someone who just joined the
OpenStack community last year, going to implement a vaguely worded
blueprint and then having the code review be derailed with people saying
"well, you probably should be using this completely different design" is
fairly frustrating.  While you come to anticipate certain changes, IMHO
it's definitely much better to decide on the design *before* you start
coding, that way code reviews can focus on the code, and you don't have
to completely rewrite patches as much.

Best Regards,
Solly Ross

- Original Message -
From: Sean Dague s...@dague.net
To: openstack-dev@lists.openstack.org
Sent: Tuesday, April 15, 2014 1:45:16 PM
Subject: Re: [openstack-dev] [Nova] nova-specs

On 04/15/2014 11:42 AM, Russell Bryant wrote:
 On 04/15/2014 11:01 AM, Brian Elliott wrote:
 * specs review. The new blueprint process is a work of genius, and I
 think its already working better than what we've had in previous
 releases. However, there are a lot of blueprints there in review,
 and we need to focus on making sure these get looked at sooner
 rather than later. I'd especially like to encourage operators to
 take a look at blueprints relevant to their interests. Phil Day from
 HP has been doing a really good job at this, and I'd like to see more
of it.

 I have mixed feelings about the nova-specs repo.  I dig the open
collaboration of the blueprints process, but I also think there is a
danger of getting too process-oriented here.  Are these design
documents expected to call out every detail of a feature?  Ideally, I'd
like to see only very high level documentation in the specs repo.
Basically, the spec could include just enough detail for people to
agree that they think a feature is worth inclusion.  More detailed
discussion could remain on the code reviews since they are the actual
end work product.
 
 There is a balance to be found here.  The benefit of doing more review
 earlier is to change direction as necessary when it's *much* easier to
 do so.  It's a lot more time consuming to do re-work after code has
 been written, than re-work in a spec.
 
 Yes, it's more up front work, but I think it will speed up the process
 overall.  It means we're much more in agreement and on the same page
 before code even shows up.  That's huge.
 
 One of the big problems we've had in code review is the amount of
 churn and re-work required.  That is killing our throughput in code
review.
 If we can do more up front work that will reduce re-work later, it's
 going to be a *huge* help to our primary project bottleneck: the code
 review queue.

I think the previous process is a huge demotivator to contributors, when
they file a blueprint with minimal info, it gets approved, they spend
months working on it, and only at the end of the process does the idea
get dug into enough for people to realize that it's not what anyone wants.

At that point people are so invested in the time they spent on this
feature that turning that conversation productive is really hard.

Catching more of these up front and being more explicit about what Nova
wants in a cycle is goodness.

   -Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread Ben Nemec

On 04/15/2014 01:44 PM, Robert Collins wrote:

I've been watching the nova process, and I think its working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..

I'm thinking we can just add docs to incubator, since thats already a
repository separate to our production code - what do folk think?

-Rob



+1 from me.  We've also been planning to adopt this for Oslo.

For anyone who hasn't been following the Nova discussion, here's a link 
to the original proposal: 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html


There's also the more recent thread Monty referenced: 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032753.html


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] computed package names?

2014-04-15 Thread Mike Spreitzer
Zane Bitter zbit...@redhat.com wrote on 04/15/2014 03:29:03 PM:

 On 15/04/14 14:31, Mike Spreitzer wrote:
  It appears that in Fedora 19 and 20 the Wordpress examples need to
  install different packages than in every other release (see my 
debugging
  in https://review.openstack.org/#/c/87065/).  I just got a complaint
  from Heat validation that I can't do this:
 
   "AWS::CloudFormation::Init" : {
     "config" : {
       "packages" : {
         "yum" : {
           { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
               "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "client" ] } : [],
           { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
               "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "server" ] } : [],
 
 
 .. in 
 JSON a property name cannot be an object ...

Ah, right.  So what would be the simplest way to enable this use case? 
Perhaps a generalization of AWS::CloudFormation::Init that allows the 
package names to be objects (that evaluate to strings, of course)?  Maybe 
allow, e.g., yum to be associated with not a map but rather a list of 
pairs (2-element lists)?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Meeting Tuesday April 15th at 19:00 UTC

2014-04-15 Thread Elizabeth Krumbach Joseph
On Mon, Apr 14, 2014 at 8:02 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 Hi everyone,

 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday April 15th, at 19:00 UTC in
 #openstack-meeting

Meeting logs and minutes:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-15-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-15-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-15-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][design] review based conceptual design process

2014-04-15 Thread Jay Dobies

+1, I think it's a better medium for conversations than blueprints or wikis.

I'm also +1 to a tripleo-specs repo, but that's less me having a problem 
with using incubator and more my OCD.


On 04/15/2014 03:43 PM, Monty Taylor wrote:

On 04/15/2014 11:44 AM, Robert Collins wrote:

I've been watching the nova process, and I think its working out well
- it certainly addresses:
  - making design work visible
  - being able to tell who has had input
  - and providing clear feedback to the designers

I'd like to do the same thing for TripleO this cycle..


++


I'm thinking we can just add docs to incubator, since thats already a
repository separate to our production code - what do folk think?


In the current nova-specs thread on the ML, Tim Bell says:

I think that there is also a need to verify the user story aspect. One
of the great things with the ability to subscribe to nova-specs is that
the community can give input early, when we can check on the need and
the approach. I know from the CERN team how the requirements need to be
reviewed early, not after the code has been written.

Which is great. I'm mentioning it because he calls out the ability to
subscribe to nova-specs.

I think if you put them in incubator, then people who are wanting to
fill a role like Tim - subscribing as an operator and validating user
stories - might be a bit muddied by patches to other things. (although
thanks for having a thought about less repos :) )

So I'd just vote, for whatever my vote is worth, for a tripleo-specs repo.

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] /bin/bash vs. /bin/sh

2014-04-15 Thread Ben Nemec

On 04/15/2014 02:44 PM, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2014-04-14 09:26:17 -0700:

tldr: I propose we use bash explicitly for all diskimage-builder scripts
(at least for the short-term - see details below).

This is something that was raised on my linting changes to enable set -o
pipefail.  That is a bash-ism, so it could break in the
diskimage-builder scripts that are run using /bin/sh.  Two possible
fixes for that: switch to /bin/bash, or don't use -o pipefail



What about this:

if ! [ "$SHEBANG" = "#!/bin/bash" ] ; then
   report_warning "Non bash shebang, skipping script lint"
fi


I was thinking along the same lines, although at least for the moment I 
would like to leave the +x check enabled for all shebangs since it still 
doesn't make sense to have a shebang without +x.
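
Roughly what I have in mind for that combined check -- purely a sketch, none 
of this is the actual dib-lint code and the messages are made up:

for script in $(find elements -type f); do
    firstline=$(head -n 1 "$script")
    if [[ "$firstline" == '#!'* ]]; then
        # a shebang without the executable bit is always wrong
        if [[ ! -x "$script" ]]; then
            echo "ERROR: $script has a shebang but is not executable"
        fi
        # non-bash shebangs just skip the bash-only checks for now
        if [[ "$firstline" != '#!/bin/bash' ]]; then
            echo "WARNING: $script: non-bash shebang, skipping bash-only lint"
        fi
    fi
done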





But I think this raises a bigger question - does diskimage-builder
require bash?  If so, I think we should just add a rule to enforce that
/bin/bash is the shell used for everything.  I know we have a bunch of
bash-isms in the code already, so at least in the short-term I think
this is probably the way to go, so we can get the benefits of things
like -o pipefail and lose the ambiguity we have right now.  For
reference, a quick grep of the diskimage-builder source shows we have
150 scripts using bash explicitly and only 24 that are plain sh, so
making the code truly shell-agnostic is likely to be a significant
amount of work.


Yes, diskimage-builder is bash, not posix shell. We're not masochists.
;)



In the long run it might be nice to have cross-shell compatibility, but
if we're going to do that I think we need a couple of things: 1) Someone
to do the work (I don't have a particular need to run dib in not-bash,
so I'm not signing up for that :-) 2) Testing in other shells -
obviously just changing /bin/bash to /bin/sh doesn't mean we actually
support anything but bash.  We really need to be gating on other shells
if we're going to make a significant effort to support them.  It's not
good to ask reviewers to try to catch every bash-ism proposed in a
change.  This also relates to some of the unit testing work that is
going on right now too - if we had better unit test coverage of the
scripts we would be able to do this more easily.



I suggest that diskimage-builder's included elements should be /bin/bash
only. When we have an element linting tool, non-bash shebangs should be
warnings, and we should enforce no warnings. For t-i-e, we can strive for
no warnings, but that would be a stretch goal and may involve refining
the warnings.


This doesn't seem to be a problem in dib - almost everything was 
explicitly bash already, and the scripts that weren't are pretty trivial.


The ramdisk init script remains a thorn in my side for this though - 
based on our conversation last Friday we don't want that to have the 
same set -eu behavior as the other scripts (since init failing causes a 
kernel panic), and based on Chris's comment in 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032753.html we 
don't even want that to be explicitly a bash script, except that it is 
today.  So I think we need an exception mechanism like the pep8 #noqa 
tag (or something similar) to note that init should basically be ignored 
by the lint checks since it needs to violate most of the current ones.


For the moment, I'm thinking I'll silently ignore /bin/sh scripts, 
convert init to be one of those, and work on the warning/exception 
mechanism in the future.
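
For the exception mechanism, I'm picturing something like the following -- 
the "# dib-lint: disable=..." marker name is just a strawman, nothing agreed 
yet, and $script stands for the file being linted:

# In the ramdisk init script (or any other opted-out file):
#     # dib-lint: disable=setu,setpipefail

# In the lint tool, honour the opt-out before flagging the file:
disabled=$(sed -ne 's/^# dib-lint: disable=//p' "$script")
case ",$disabled," in
    *,setpipefail,*) echo "NOTE: $script opts out of the pipefail check" ;;
esac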


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Thoughts from the PTL

2014-04-15 Thread Michael Still
On Mon, Apr 14, 2014 at 7:06 PM, Stefano Maffulli stef...@openstack.org wrote:
 On 04/14/2014 06:58 AM, Michael Still wrote:
 First off, thanks for electing me as the Nova PTL for Juno.

 Congratulations Michael.

 * I promised to look at mentoring newcomers. The first step there is
 working out how to identify what newcomers to mentor, and who mentors
 them.

 I'm very interested in the mentoring topic, too. As many may know, the
 Foundation will host an Upstream Training session in Atlanta. This is
 first attempt at formalizing the process to become a *good* contributor
 to OpenStack. Mentorship is a crucial part of that training, which is
 made of in-person classes and online mentorship (before and after the
 in-person training).

 OpenStack project has also two other programs where mentorship is
 crucial: Outreach Program for Women (it's been running for almost 2
 years now) and we added also Google Summer of Code. Mentoring is now
 becoming a thing we do among the other things we do.

 I think the easy targets to mentor are Upstream students, OPW and GSoC
 candidates.  I'd be happy to have a session at the summit about this.

Sounds good to me. The goal here from the nova side is to have nova be
a fun project to contribute to so that we don't shed the developers we
need to sustain our growth over time. Adding new developers is
important because people do leave nova in a natural process of moving
onto other problems, so we need to be growing new developers to
replace attrition.

I'm intending to drop in on the Upstream University stuff happening
the weekend before the summit and see if there's any way I can help.

Cheers,
Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-15 Thread Joe Gordon
On Fri, Apr 11, 2014 at 8:40 AM, Anne Gentle a...@openstack.org wrote:

 I'd love to see consolidation as I've tried to keep up nova dev docs for
 example, and with all the options it's tough to test and maintain one for
 docs. Go for it.


Nova isn't the only project that uses devstack, so shouldn't docs on how to
set up devstack live at http://devstack.org/ or where ever the official
docs for devstack are?




 On Fri, Apr 11, 2014 at 9:34 AM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

 Hi,

  I've noticed a proliferation of Vagrant projects that are popping up. Is
 there any interest from other authors in trying to consolidate?

 https://github.com/bcwaldon/vagrant_devstack

 https://github.com/sdague/devstack-vagrant

 http://openstack.prov12n.com/how-to-make-a-lot-of-devstack-with-vagrant/

 https://github.com/jogo/DevstackUp

 https://github.com/search?q=vagrant+devstackref=cmdform

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-15 Thread Joe Gordon
On Fri, Apr 11, 2014 at 11:24 AM, Sean Dague s...@dague.net wrote:

 On 04/11/2014 11:38 AM, Greg Lucas wrote:
  Sean Dague wrote:
  Maybe it would be good to get an ad-hoc IRC meeting together to figure
  out what the must have features are that inspired everyone to write
  these. If we can come up with a way to overlap those all sanely, moving
  to stackforge and doing this via gerrit would be something I'd be into.
 
  This is a good idea, I've definitely stumbled across lots of GitHub
  projects, blog posts, etc that overlap here.
 
  Folks seem to have a strong preference for their provisioner, so it may make
  sense to support several. We can put together a Vagrantfile that allows
  you to choose a provisioner while maintaining a common machine
  configuration (using --provision-with or using env variables and loading
  in additional rb files, etc).

 Honestly, multi provisioner support is something I think shouldn't be
 done. That's realistically where I become uninterested in spending
 effort here. Puppet is needed if we want to be able to replicate
 devstack-gate locally (which is one of the reasons I started writing this).

 Being opinionated is good when it comes to providing tools to make
 things easy to onboard people. The provisioner in infra is puppet.
 Learning puppet lets you contribute to the rest of the openstack infra,
 and I expect to consume some piece of that in this process. I get that
 leaves other efforts out in the cold, but the tradeoff in the other
 direction I don't think is worth it.

 The place I think pluggability makes sense is in virt backends. I'd
 honestly love to be able to do nested kvm for performance reasons, or an
 openstack cloud for dogfooding reasons.



I originally cobbled together  https://github.com/jogo/DevstackUp  to run
devstack locally as there were no good alternatives at the time.  I would
be really happy to see a vagrant devstack runner that looks more like how
infra does it.

One of my biggest issues with running devstack in vagrant is how slow it is
to set everything up; there are a lot of packages to install. Hopefully
with one consolidated devstack vagrant we can address this.
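
For reference, the core of what most of these projects end up running as a
shell provisioner is something like the following (the repo URL is the github
mirror, and the local.conf values are placeholders, not recommendations);
nearly all of the wall-clock time goes into the package installs that
stack.sh triggers:

sudo apt-get update
sudo apt-get install -y git
git clone https://github.com/openstack-dev/devstack.git
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=devstack
DATABASE_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
EOF
./stack.sh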


On a related note, I have stopped using vagrant and just use nova on a
public cloud.


 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] computed package names?

2014-04-15 Thread Zane Bitter

On 15/04/14 15:57, Mike Spreitzer wrote:

Zane Bitter zbit...@redhat.com wrote on 04/15/2014 03:29:03 PM:

  On 15/04/14 14:31, Mike Spreitzer wrote:
   It appears that in Fedora 19 and 20 the Wordpress examples need to
   install different packages than in every other release (see my
debugging
   in https://review.openstack.org/#/c/87065/).  I just got a complaint
   from Heat validation that I can't do this:
  
"AWS::CloudFormation::Init" : {
  "config" : {
    "packages" : {
      "yum" : {
        { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
            "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "client" ] } : [],
        { "Fn::FindInMap" : [ "Pkgset2Pkgs", { "Fn::FindInMap" : [
            "Distro2Pkgset", { "Ref" : "LinuxDistribution" }, "db" ] }, "server" ] } : [],
  
 
  .. in
  JSON a property name cannot be an object ...

Ah, right.  So what would be the simplest way to enable this use case?
  Perhaps a generalization of AWS::CloudFormation::Init that allows the
package names to be objects (that evaluate to strings, of course)?
  Maybe allow, e.g., yum to be associated with not a map but rather a
list of pairs (2-element lists)?


Yes, that _kind_ of thing. But I don't see much point in having an 
AWS::CloudFormation::Init section that isn't compatible with 
CloudFormation's definition of it. We already have some native 
in-instance tools (e.g. os-apply-config) that can probably handle this 
better.


FWIW, in the short term I'm not aware of any issue with installing 
mariadb in Fedora 17/18, provided that mysql is not installed first. And 
in fact they're both EOL anyway, so we should probably migrate all the 
templates to Fedora 20 and mariadb.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Re: deliver the vm-level HA to improve the business continuity with openstack

2014-04-15 Thread Sylvain Bauza
Hi Steven et al.


2014-04-15 17:01 GMT+02:00 Steven Dake sd...@redhat.com:


  Qiming,

 If you read my original post on this thread, it outlines the current
 heat-core thinking, which is to reduce the scope of this resource from the
 Heat resources since it describes a workflow rather then an orchestrated
 thing (a Noun).

 A good framework for HA already exists for HA in the HARestarter resource.
  It incorporates HA escalation, which is a critical feature of any HA
 system.  The fundamental problem with HARestarter is that is in the wrong
 project.

 Long term, HA, if desired, should be part of taskflow, though, because it's
 a verb, and verbs don't belong as heat orchestrated resources.

 How we get from here to there is left as an exercise to the reader ;-)

 Regards
 -steve



From my POV, I can consider that the HA feature would be something like
AutoScaling in Heat, meaning it would involve multiple stakeholders:
 - Ceilometer for aggregating metrics from hosts and raising an Alarm which
will trigger a WebHook corresponding to the faulty resource
 - Heat for declaring all HA policies within a template and creating
appropriate Ceilometer Alarms
 - the service behind the Webhook for executing the remediation actions
(Heat) which would call the appropriate services (eg. Nova)
 - eg. Nova to perform live migrations of the instances if possible, or
evacuate

I don't know if Taskflow would be necessary for doing the logic here;
that's more likely a placement issue than a workflow issue, as it requires
scheduling new resources but can only be done on a per-resource basis
(Nova instances, Cinder volumes, etc.). Taskflow would be helpful here for
having an all-or-one logic for the migration until the scheduling
becomes holistic.

Of course, in a far far away galaxy, we could consider having one global
Scheduler for placement decisions that Heat could query for live migrating
resources, but I'm still dreaming...

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] API inconsistencies with security groups

2014-04-15 Thread Joe Gordon
On Sun, Apr 6, 2014 at 6:06 AM, Christopher Yeoh cbky...@gmail.com wrote:

 On Sat, Apr 5, 2014 at 10:17 PM, Joshua Hesketh 
 joshua.hesk...@rackspace.com wrote:

 Hi Chris,

 Thanks for your input.


 On 4/5/14 9:56 PM, Christopher Yeoh wrote:

 On Sat, 5 Apr 2014 15:16:33 +1100
 Joshua Hesketh joshua.hesk...@rackspace.com wrote:

 I'm moving a conversation that has begun on a review to this mailing
 list as it is perhaps systematic of a larger issue regarding API
 compatibility (specifically between neutron and nova-networking).
 Unfortunately these are areas I don't have much experience with so
 I'm hoping to gain some clarity here.

 There is a bug in nova where launching an instance with a given
 security group is case-insensitive for nova-networks but
 case-sensitive for neutron. This highlights inconsistencies but I
 also think this is a legitimate bug[0]. Specifically the 'nova boot'
 command accepts the incorrectly cased security- group but the
 instance enters an error state as it has been unable to boot it.
 There is an inherent mistake here where the initial check approves
 the security-group name but when it comes time to assign the security
 group (at the scheduler level) it fails.

 I think this should be fixed but then the nova CLI behaves
 differently with different tasks. For example, `nova
 secgroup-add-rule` is case sensitive. So in reality it is unclear if
 security groups should, or should not, be case sensitive. The API
 implies that they should not. The CLI has methods where some are and
 some are not.

 I've addressed the initial bug as a patch to the neutron driver[1]
 and also amended the case-sensitive lookup in the
 python-novaclient[2] but both reviews are being held up by this issue.

 I guess the questions are:
- are people aware of this inconsistency?
- is there some documentation on the inconsistencies?
- is a fix of this nature considered an API compatibility break?
- and what are the expectations (in terms of case-sensitivity)?

 I don't know the history behind making security group names case
 insensitive for nova-network, but without that knowledge it seems a
 little odd to me. The Nova API is in general case sensitive - with the
 exception of when you supply types  - eg True/False, Enabled/Disabled.

 If someone thinks there's a good reason for having it case insensitive
 then I'd like to hear what that is. But otherwise in an ideal world I
 think they should be case sensitive.

 Working with what we have however, I think it would also be bad if
 using the neutron API directly security group were case sensitive but
 talking to it via Nova it was case insensitive. Put this down as one of
 the risks of doing proxying type work in Nova.

 I think the proposed patches are backwards incompatible API changes.


 I agree that changing the python-novaclient[2] is new functionality and
 perhaps
 more controversial, but it is not directly related to an API change. The
 change I proposed to nova[1] stops the scheduler from getting stuck when
 it
 tries to launch an instance with an already accepted security group.

 Perhaps the fix here should be that the nova API never accepted the
 security
 group to begin with. However, that would be an API change. The change I've
 proposed at the moment stops instances from entering an error state, but
 it
 doesn't do anything to help with the inconsistencies.


 So if Nova can detect earlier on in the process that an instance launch is
 definitely going to fail because the security group is invalid then I think
 it's ok to return an error to the user earlier rather than return success
 and have it fail later on anyway.


So this changes the behavior for nova-network users.

I don't really see any easy way out of this one, besides thorough
documentation of the issue.
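
For that documentation, a concrete walk-through of what users currently see
might help (group, image and flavor names below are just examples):

nova boot --image cirros --flavor m1.tiny --security-groups DEFAULT test-vm
# nova-network matches the group case-insensitively and the boot works;
# with neutron only "default" exists, so the request is accepted but the
# instance later goes to ERROR.

nova secgroup-add-rule DEFAULT tcp 22 22 0.0.0.0/0
# this lookup is case sensitive, so it fails immediately when only
# "default" exists.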



  That's likely true. However I would appreciate reviews on 77347 with the
  above in mind.



 I might be misunderstanding exactly what is going on here, but I'll
 comment directly on the 77347.

 Regards,

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Provider Framework and Flavor Framework

2014-04-15 Thread Eugene Nikanorov
Hi folks,

In Icehouse there were attempts to apply the Provider Framework ('Service Type
Framework') approach to the VPN and Firewall services.
Initially the Provider Framework was created as a simplistic approach to
allowing the user to choose a service implementation.
That approach definitely didn't account for the public cloud case, where users
should not be aware of the underlying implementation while still being able to
request capabilities or an SLA.

However, Provider Framework consists of two parts:
1) API part.
That's just the 'provider' attribute of the main resource of the service, plus a
REST call to fetch available providers for a service

2) Dispatching part
That's a DB table that keeps the mapping between a resource and the
implementing provider/driver.
With this mapping it's possible to dispatch a REST call to the particular
driver that is implementing the service.

As we are moving to a better API and user experience, we may want to drop the
first part, which makes the framework non-public-cloud-friendly, but the
second part will remain if we ever want to support more than one driver
simultaneously.

The Flavor framework proposes choosing an implementation based on capabilities,
but the result of the choice (e.g. scheduling) is still a mapping between
a resource and the driver.
So the second part is still needed for the Flavor Framework.

I think it's a good time to continue the discussion on Flavor and Provider
Frameworks.

Some references:
1. Flavor Framework description
https://wiki.openstack.org/wiki/Neutron/FlavorFramework
2. Flavor Framework PoC/example code https://review.openstack.org/#/c/83055/
3. FWaaS integration with Provider framework:
https://review.openstack.org/#/c/60699/
4. VPNaaS integration with Provider framework:
https://review.openstack.org/#/c/41827/

I'd like to see the work on (3) and (4) continued, considering Provider
Framework is a basis for Flavor Framework.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [neutron] Gerrit spec reviews

2014-04-15 Thread Kyle Mestery
Hi Nova developers:

I have a question around the new nova-specs gerrit repository. We're
implementing the same thing in Neutron for Juno (I'm hoping to make
this live tomorrow), but I had a few quick questions so I can build on
your experience with this so far:

1. Did you implement any sort of time limit on BPs in review? Mostly
around trying to triage the BPs as they come in.
2. Nova has a subset of the core-team which can actually approve BPs,
is this correct?
3. For all BPs submitted using the new procedure, is it required to
submit a Summit session?

Any guidance appreciated here!

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >