Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Christopher Yeoh
On Fri, 21 Feb 2014 06:53:11 +
Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

  -Original Message-
  From: Christopher Yeoh [mailto:cbky...@gmail.com]
  Sent: Thursday, February 20, 2014 11:44 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Nova] v3 API in Icehouse
  
  On Wed, 19 Feb 2014 12:36:46 -0500
  Russell Bryant rbry...@redhat.com wrote:
  
   Greetings,
  
   The v3 API effort has been going for a few release cycles now.
   As we approach the Icehouse release, we are faced with the
   following question: Is it time to mark v3 stable?
  
   My opinion is that I think we need to leave v3 marked as
   experimental for Icehouse.
  
  
  Although I'm very eager to get the V3 API released, I do agree with
  you. As you have said we will be living with both the V2 and V3
  APIs for a very long time. And at this point there would be simply
  too many last minute changes to the V3 API for us to be confident
  that we have it right enough to release as a stable API.
 
 Through v3 API development, we have found a lot of existing v2 API
 input validation problems, but we have concentrated on v3 API
 development without fixing those v2 problems.
 
 After the Icehouse release, the v2 API will still be CURRENT and the
 v3 API will be EXPERIMENTAL. So should we also fix the v2 API problems
 in the remaining Icehouse cycle?
 

So bug fixes are certainly fine, with the usual caveats around backwards
compatibility (I think there are a few in there that aren't
backwards compatible, especially those that fall into the category of
making the API more consistent).

https://wiki.openstack.org/wiki/APIChangeGuidelines
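Much of the v2 cleanup discussed here concerns strict input validation (the v3 effort does this with JSON Schema). A minimal hand-rolled sketch of the general idea only, not Nova's actual code or schema:

```python
def validate_server_body(body):
    """Illustrative strict validation of a server-create request body.

    Nova's v3 API uses JSON Schema for this; the sketch just shows the
    idea: check types and lengths, and reject unknown keys instead of
    silently ignoring them (a common v2 validation gap).
    """
    if not isinstance(body, dict):
        return False
    name = body.get("name")
    if not isinstance(name, str) or not (1 <= len(name) <= 255):
        return False
    # Strict validation: no keys beyond the declared ones.
    return set(body) <= {"name"}

print(validate_server_body({"name": "vm1"}))            # True
print(validate_server_body({"name": "vm1", "bogus": 1}))  # False
```

Tightening rules like the "reject unknown keys" check is exactly the kind of change that is useful but not backwards compatible, which is why the guidelines above matter.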

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-21 Thread Serg Melikyan
Jay, yes, cfn-tools does not support Windows at this moment, and there
may be issues related to autoscaling based on Ceilometer.


On Thu, Feb 20, 2014 at 4:48 PM, Jay Lau jay.lau@gmail.com wrote:

 Thanks Alexander for the detailed explanation, really very helpful!

 What I meant for a windows cluster is actually a windows application, such
 as a WebSphere cluster or a hadoop windows cluster.

 Seems I can use Cloudbase Init to do the post-deploy actions on windows,
 but I cannot scale this cluster up or down as there is currently no
 cfn-tools for Windows, is that correct?

 Thanks,

 Jay



 2014-02-20 18:24 GMT+08:00 Alexander Tivelkov ativel...@mirantis.com:

 Hi Jay,

 Windows support in Heat is being developed, but is not complete yet,
 afaik. You may already use Cloudbase Init to do the post-deploy actions on
 windows - check [1] for the details.

 Meanwhile, running a windows cluster is a much more complicated task than
 just deploying a number of windows instances (if I understand you correctly
 and you speak about Microsoft Failover Cluster, see [2]): to build it in
 the cloud you will have to execute quite a complex workflow after the nodes
 are actually deployed, which is not possible with Heat (at least for now).

 Murano project ([3]) does this on top of Heat, as it was initially
 designed as Windows Data Center as a Service, so I suggest you take a
 look at it as well. You may also check this video ([4]), which demonstrates
 how Murano is used to deploy a failover cluster of Windows 2012 with a
 clustered MS SQL server on top of it.


 [1] http://wiki.cloudbase.it/heat-windows
 [2] http://technet.microsoft.com/library/hh831579
 [3] https://wiki.openstack.org/Murano
 [4] http://www.youtube.com/watch?v=Y_CmrZfKy18

  --
 Regards,
 Alexander Tivelkov


 On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau jay.lau@gmail.com wrote:


 Hi,

 Does HEAT support provisioning a windows cluster?  If so, can I also use
 user-data to do some post-install work for a windows instance? Is there any
 example template for this?

 Thanks,

 Jay









 --
 Thanks,

 Jay





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-21 Thread Masayuki Igawa
Hi,

I've created a patch for the qcow2 part of the scenario tests. [1]
But I'd like to know whether someone is already working on something
like this, because I'd like to avoid duplicated work.

[1] https://review.openstack.org/#/c/75312/




On Fri, Feb 21, 2014 at 7:01 AM, David Kranz dkr...@redhat.com wrote:

 On 02/20/2014 04:53 PM, Sean Dague wrote:

 On 02/20/2014 04:31 PM, David Kranz wrote:

 Running this test in tempest requires an ami image triple to be on the
 disk where tempest is running in order for the test to upload it. It
 would be a lot easier if this test could use a simple image file
 instead. That image file could even be obtained from the cloud being
 tested while configuring tempest. Is there a reason to keep the
 three-part image?

 I have no issue changing this to a single part image, as long as we
 could find a way that we can make it work with cirros in the gate
 (mostly because it can run in really low mem footprint).

 Is there a cirros single part image somewhere? Honestly it would be much
 simpler even in the devstack environment.

 -Sean

 http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img







-- 
Masayuki Igawa


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread IWAMOTO Toshihiro
At Thu, 20 Feb 2014 15:21:49 +0400,
Eugene Nikanorov wrote:
 
 Hi Iwamoto,
 
 
  I agree with Samuel here.  I feel the logical model and other issues
  (implementation etc.) are mixed in the discussion.
 
 
 A little bit. While ideally it's better to separate them, in my opinion we
 need to have some 'fair bit' of implementation details
 in the API in order to reduce code complexity (I'll try to explain this at
 the meeting). Currently these 'implementation details' are implied because
 we deal with the simplest configurations, which map 1:1 to a backend.

Exposing some implementation details in the API might not be ideal but
would be acceptable if it saves a lot of code complexity.

  I'm failing to understand why the current model is unfit for L7 rules.
 
- pools belonging to a L7 group should be created with the same
  provider/flavor by a user
- pool scheduling can be delayed until it is bound to a vip to make
  sure pools belonging to a L7 group are scheduled to one backend
 
  While that could be an option, it's not as easy as it seems.
 We discussed that back at the HK summit but at that point decided that it's
 undesirable.

Could you give some more details/examples why my proposal above is
undesirable?
In my opinion, pool rescheduling happens anyway when an agent dies,
and calculating pool-vip grouping based on their connectivity is not hard.


   I think grouping vips and pools is an important part of the logical model,
  even if some users may not care about it.
 
  One possibility is to provide an optional data structure to describe
  grouping of vips and pools, on top of the existing pool-vip model.
 
 That would be the 'loadbalancer' approach, #2 on the wiki page.
 So far we tend to introduce such grouping directly into the vip-pool
 relationship.
 I plan to explain that in more detail on the meeting.

My idea was to keep the 'loadbalancer' API optional for users who
don't care about grouping.
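For illustration, an optional grouping could be as thin as a container over existing objects; the class names below are illustrative only, not the actual Neutron LBaaS API:

```python
class Vip(object):
    """Standalone vip, usable with or without a grouping object."""
    def __init__(self, name):
        self.name = name

class Pool(object):
    """Standalone pool, usable with or without a grouping object."""
    def __init__(self, name):
        self.name = name

class LoadBalancer(object):
    """Optional grouping on top of the existing vip/pool model.

    It only records which existing vips and pools should be treated
    as one unit (e.g. scheduled to the same backend); vips and pools
    keep working on their own for users who ignore grouping.
    """
    def __init__(self, name):
        self.name = name
        self.vips = []
        self.pools = []

    def add(self, obj):
        # Dispatch on object type: vips and pools go to separate lists.
        (self.vips if isinstance(obj, Vip) else self.pools).append(obj)

lb = LoadBalancer("web")
lb.add(Vip("front"))
lb.add(Pool("app-servers"))
```

The point of this shape is that the grouping is additive: existing pool-vip code paths do not need to know it exists.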

--
IWAMOTO Toshihiro




Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-21 Thread Tomas Sedovic
On 20/02/14 16:24, Imre Farkas wrote:
 On 02/20/2014 03:57 PM, Tomas Sedovic wrote:
 On 20/02/14 15:41, Radomir Dopieralski wrote:
 On 20/02/14 15:00, Tomas Sedovic wrote:

 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.

 How are you going to redeploy without them?


 What do you mean by redeploy?

 1. Deploy a brand new overcloud, overwriting the old one
 2. Updating the services in the existing overcloud (i.e. image updates)
 3. Adding new machines to the existing overcloud
 4. Autoscaling
 5. Something else
 6. All of the above

 I'd guess each of these have different password workflow requirements.
 
 I am not sure if all these use cases have different password
 requirements. If you check devtest, no matter whether you are creating or
 just updating your overcloud, all the parameters have to be provided for
 the heat template:
 https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125
 
 
 I would rather not require the user to enter 5/10/15 different passwords
 every time Tuskar updates the stack. I think it's much better to
 autogenerate the passwords for the first time, provide an option to
 override them, then save and encrypt them in Tuskar. So +1 for designing
 a proper system for storing the passwords.

Well if that is the case and we can't change the templates/heat to
change that, the secrets should be put in Keystone or at least go
through Keystone. Or use Barbican or whatever.

We shouldn't be implementing crypto in Tuskar.
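For what it's worth, autogenerating the passwords (as suggested above) is the easy part; where to keep them afterwards is the actual question. A sketch of generation only, using the Python standard library (illustrative, not Tuskar code; `secrets` needs Python 3.6+, older Pythons would use `os.urandom`):

```python
import secrets

def generate_password(length=24):
    # URL-safe random token, trimmed to the requested length.
    # token_urlsafe(n) yields ~1.3 characters per byte, so ask for
    # more bytes than needed and slice.
    return secrets.token_urlsafe(length * 2)[:length]

pw = generate_password()
```

The hard part this thread is about, safe storage, is exactly what the sketch does not touch: that would be delegated to Keystone/Barbican rather than implemented in Tuskar.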

 
 Imre
 
 




Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-21 Thread Dougal Matthews

On 21/02/14 09:24, Tomas Sedovic wrote:

On 20/02/14 16:24, Imre Farkas wrote:

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.


How are you going to redeploy without them?



What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these have different password workflow requirements.


I am not sure if all these use cases have different password
requirements. If you check devtest, no matter whether you are creating or
just updating your overcloud, all the parameters have to be provided for
the heat template:
https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125


I would rather not require the user to enter 5/10/15 different passwords
every time Tuskar updates the stack. I think it's much better to
autogenerate the passwords for the first time, provide an option to
override them, then save and encrypt them in Tuskar. So +1 for designing
a proper system for storing the passwords.


Well if that is the case and we can't change the templates/heat to
change that, the secrets should be put in Keystone or at least go
through Keystone. Or use Barbican or whatever.

We shouldn't be implementing crypto in Tuskar.


+1, after giving this quite a bit of thought I completely agree. This
is really out of the scope of Tuskar. I think we should definitely go
down this route and only fall back to storing all these details
later; that could be an improvement if it turns out to be a real
usability problem.

At the moment we are guessing, we don't even know how many passwords
we need to store. So we should progress with the safest and simplest
option (which is to not store them) and consider changing this later.

Dougal




Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-21 Thread Victor Stinner
Le vendredi 21 février 2014, 09:27:49 Angus Salkeld a écrit :
 Honestly, I have no answer to your question right now (How useful is
 trollius ...).
 (...) 
 I asked your question on Tulip mailing list to see how a single code base
 could support Tulip (yield from) and Trollius (yield). At least check if
 it's technically possible.

Short answer: it's possible to write code working on Trollius (Python 2 and 3) 
/ Tulip (Python 3.3+) / CPython 3.4 (asyncio) if you use callbacks. The core 
of the asyncio module (event loop and scheduler) uses callbacks. If you only 
use callbacks, you can also support the Twisted and Tornado frameworks.

For example, the AutobahnPython project adopted this design and thanks to 
that, it supports Trollius, Tulip and CPython asyncio, but also Twisted:

https://github.com/tavendo/AutobahnPython

So you have to find a WSGI server using callbacks instead of yield from. It 
should not be hard since asyncio is young, callbacks were the only option 
before greenlet/eventlet, and Twisted and Tornado (which use callbacks) are 
still widely used.
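As a minimal illustration of the callback style Victor describes, written against modern asyncio (on Python 2 the import would be trollius instead); note there is no `yield from` anywhere:

```python
import asyncio

results = []

def on_done(future):
    # Plain callback fired when the future completes. Because nothing
    # here uses "yield from", the same style ports to Trollius/Tulip,
    # and conceptually to Twisted/Tornado as well.
    results.append(future.result())

loop = asyncio.new_event_loop()
fut = loop.create_future()
fut.add_done_callback(on_done)
loop.call_soon(fut.set_result, 42)  # complete the future from the loop
loop.call_later(0.05, loop.stop)    # let the done-callback run, then stop
loop.run_forever()
loop.close()
print(results)  # [42]
```

A WSGI server built this way would register callbacks with the event loop in the same manner instead of writing coroutines.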

Victor



Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-21 Thread Radomir Dopieralski
On 21/02/14 10:38, Dougal Matthews wrote:
 On 21/02/14 09:24, Tomas Sedovic wrote:
 On 20/02/14 16:24, Imre Farkas wrote:
 On 02/20/2014 03:57 PM, Tomas Sedovic wrote:
 On 20/02/14 15:41, Radomir Dopieralski wrote:
 On 20/02/14 15:00, Tomas Sedovic wrote:

 Are we even sure we need to store the passwords in the first
 place? All
 this encryption talk seems very premature to me.

 How are you going to redeploy without them?


 What do you mean by redeploy?

 1. Deploy a brand new overcloud, overwriting the old one
 2. Updating the services in the existing overcloud (i.e. image updates)
 3. Adding new machines to the existing overcloud
 4. Autoscaling
 5. Something else
 6. All of the above

 I'd guess each of these have different password workflow requirements.

 I am not sure if all these use cases have different password
 requirements. If you check devtest, no matter whether you are creating or
 just updating your overcloud, all the parameters have to be provided for
 the heat template:
 https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125



 I would rather not require the user to enter 5/10/15 different passwords
 every time Tuskar updates the stack. I think it's much better to
 autogenerate the passwords for the first time, provide an option to
 override them, then save and encrypt them in Tuskar. So +1 for designing
 a proper system for storing the passwords.

 Well if that is the case and we can't change the templates/heat to
 change that, the secrets should be put in Keystone or at least go
 through Keystone. Or use Barbican or whatever.

 We shouldn't be implementing crypto in Tuskar.
 
 +1, after giving this quite a bit of thought I completely agree. This
 is really out of the scope of Tuskar. I think we should definitely go
 down this route and only fall back to storing all these details
 later, that could be an improvement if it turns out to be a real
 usability problem.
 
 At the moment we are guessing, we don't even know how many passwords
 we need to store. So we should progress with the safest and simplest
 option (which is to not store them) and consider changing this later.

I think we have a pretty good idea:
https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/infrastructure/overcloud/workflows/undeployed_configuration.py#L23-L222

(just count the NoEcho: true lines)
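The counting itself is easy to script against a parsed template dict; a rough sketch (illustrative, not Tuskar code):

```python
def count_secret_parameters(template):
    # Count parameters marked "NoEcho: true", i.e. passwords and other
    # secrets the template expects to be supplied on every stack update.
    params = template.get("Parameters", {})
    return sum(
        1 for p in params.values()
        if str(p.get("NoEcho", "false")).lower() == "true"
    )

template = {
    "Parameters": {
        "AdminPassword": {"Type": "String", "NoEcho": "true"},
        "KeyName": {"Type": "String"},
    }
}
print(count_secret_parameters(template))  # 1
```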
-- 
Radomir Dopieralski




[openstack-dev] squal...@gmail.com

2014-02-21 Thread Michael Wang
squal...@gmail.com


Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-21 Thread Matthew Farrellee

On 02/19/2014 05:40 PM, Sergey Lukjanov wrote:

Hey folks,

I'd like to nominate Andrew Lazarew (alazarev) for savanna-core.

He is among the top reviewers of Savanna subprojects. Andrew has been
working on Savanna full time since September 2013 and is very familiar with
the current codebase. His code contributions and reviews have demonstrated a
good knowledge of Savanna internals. Andrew has valuable knowledge of
both the core and EDP parts, the IDH plugin and Hadoop itself. He's working
on both bug fixes and new feature implementation.

Some links:

http://stackalytics.com/report/reviews/savanna-group/30
http://stackalytics.com/report/reviews/savanna-group/90
http://stackalytics.com/report/reviews/savanna-group/180
https://review.openstack.org/#/q/owner:alazarev+savanna+AND+-status:abandoned,n,z
https://launchpad.net/~alazarev

Savanna cores, please, reply with +1/0/-1 votes.

Thanks.

--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


fyi, some of those links don't work, but these do,

http://stackalytics.com/report/contribution/savanna-group/30
http://stackalytics.com/report/contribution/savanna-group/90
http://stackalytics.com/report/contribution/savanna-group/180

i'm very happy to see andrew evolving in the savanna community, making 
meaningful contributions, demonstrating a reasoned approach to resolve 
disagreements, and following guidelines such as GitCommitMessages more 
closely. i expect he will continue his growth as well as influence 
others to contribute positively.


+1

best,


matt



Re: [openstack-dev] [nova] Automatic Evacuation

2014-02-21 Thread Russell Bryant
On 02/20/2014 06:04 PM, Sean Dague wrote:
 On 02/20/2014 05:32 PM, Russell Bryant wrote:
 On 02/20/2014 05:05 PM, Costantino, Leandro I wrote:
 Hi,
 
  Would like to know if there's any interest in having an
  'automatic evacuation' feature when a compute node goes down. I
 found 3 bps related to this topic: [1] Adding a periodic task
 and using ServiceGroup API for compute-node status [2] Using
 ceilometer to trigger the evacuate api. [3] Include some kind
 of H/A plugin  by using a 'resource optimization service'
 
 Most of those BP's have comments like 'this logic should not
 reside in nova', so that's why i am asking what should be the
 best approach to have something like that.
 
  Should this be ignored, and should we just rely on external
  monitoring tools to trigger the evacuation? There are complex
  scenarios that require a lot of logic that won't fit into nova or
  any other OS component. (For instance: sometimes it will be faster
  to reboot the node or compute-nova than starting the
  evacuation, but if it fails X times then trigger an evacuation,
  etc.)
 
 Any thought/comment// about this?
 
 Regards Leandro
 
 [1]
 https://blueprints.launchpad.net/nova/+spec/vm-auto-ha-when-host-broken

 
[2]
 https://blueprints.launchpad.net/nova/+spec/evacuate-instance-automatically

 
[3]
 https://blueprints.launchpad.net/nova/+spec/resource-optimization-service


 
My opinion is that I would like to see this logic done outside of Nova.
 
  Right now Nova is the only service that really understands the
  compute topology of hosts, though its understanding of liveness is
  really not sufficient to handle this kind of HA thing anyway.
 
 I think that's the real problem to solve. How to provide
 notifications to somewhere outside of Nova on host death. And the
 question is, should Nova be involved in just that part, keeping
 track of node liveness and signaling up for someone else to deal
 with it? Honestly that part I'm more on the fence about. Because
 putting another service in place to just handle that monitoring
 seems overkill.
 
 I 100% agree that all the policy, reacting, logic for this should
 be outside of Nova. Be it Heat or somewhere else.

I think we agree.  I'm very interested in continuing to enhance Nova
to make sure that the thing outside of Nova has all of the APIs it
needs to get the job done.
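To make the proposed division of labour concrete, here is a toy sketch of the piece that would live outside Nova: a monitor that reads compute-service liveness and evacuates accordingly. The client is a stub; its method names loosely mimic python-novaclient, but the whole interface is an assumption, not an existing tool:

```python
class FakeService(object):
    def __init__(self, host, state):
        self.host, self.state = host, state

class FakeClient(object):
    """Stand-in for a novaclient-like object, just enough for the sketch."""
    def __init__(self, services, servers_by_host):
        self._services = services
        self._servers = servers_by_host
        self.evacuated = []

    def list_compute_services(self):
        return self._services

    def list_servers(self, host):
        return self._servers.get(host, [])

    def evacuate(self, server_id):
        self.evacuated.append(server_id)

def evacuate_dead_hosts(client):
    # The policy lives here, outside Nova: find compute services that
    # Nova reports as down, and evacuate every instance on those hosts.
    for svc in client.list_compute_services():
        if svc.state == "down":
            for server_id in client.list_servers(svc.host):
                client.evacuate(server_id)
    return client.evacuated

client = FakeClient(
    services=[FakeService("node1", "up"), FakeService("node2", "down")],
    servers_by_host={"node1": ["vm-a"], "node2": ["vm-b", "vm-c"]},
)
print(evacuate_dead_hosts(client))  # ['vm-b', 'vm-c']
```

The retry/reboot heuristics Leandro mentions would slot into `evacuate_dead_hosts` without Nova needing to know about them.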

-- 
Russell Bryant



Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Mark McLoughlin
On Thu, 2014-02-20 at 10:31 -0800, Joe Gordon wrote:
 Hi All,
 
 A discussion recently came up inside of nova about what 'supported
 version' for a dependency means. In libvirt we gate on the
 minimal version that we support, but for all python dependencies we
 gate on the highest version that passes our requirements. While we all
 agree that having two different ways of choosing which version to test
 (min and max) is bad, there are good arguments for doing both.
 
 testing most recent version:
 * We want to make sure we support the latest and greatest
 * Bug fixes
 * Quickly discover backwards incompatible changes so we can deal
 with them as they arise instead of in batch
 
 Testing lowest version supported:
 * Make sure we don't land any code that breaks compatibility with
 the lowest version we say we support

Good summary.

 A few questions and ideas on how to move forward.
 * How do other projects deal with this? This problem isn't unique
 to OpenStack.
 * What are the issues with making one gate job use the latest
 versions and one use the lowest supported versions?

I think this would be very useful.

Obviously it would take some effort on someone's part to set this up
initially, but I don't immediately see it being much of an ongoing
burden on developers.
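For instance, such a minimum-version job could rewrite each lower bound in requirements.txt into an exact pin before installing; the regex and workflow below are purely illustrative, not an existing infra job:

```python
import re

def min_pin(requirement):
    # "oslo.config>=1.2.0,<2.0" -> "oslo.config==1.2.0": install exactly
    # the lowest version we claim to support, then run the normal tests.
    m = re.match(r"\s*([A-Za-z0-9._-]+)\s*>=\s*([0-9][^,;\s]*)", requirement)
    if m:
        return "%s==%s" % (m.group(1), m.group(2))
    # No lower bound declared: leave the requirement untouched.
    return requirement.strip()

print(min_pin("oslo.config>=1.2.0,<2.0"))  # oslo.config==1.2.0
print(min_pin("six"))                      # six
```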

 * Only test some things on every commit or every day (periodic
 jobs)? But no one ever fixes those things when they break. Who wants
 to own them? Distros? Deployers?

The gate job above would most likely lead to us trying hard to maintain
support for older versions.

A periodic job would either go stale or we'd keep it going simply by
dropping support for older libraries. (Maybe that's ok)

 * Other solutions?
 * Does it make sense to gate on the lowest version of libvirt but
 the highest version of python libs?

We might be unconsciously drawing a platform vs app line here - that
libvirt is part of the platform and the python library stack is part of
our app - while still giving a nod to supporting the notion that the
python library stack is part of the platform.

Put it another way - we'd be wary of dropping support for an older
libvirt (because it rules out support for a platform) but not so much
with dropping support for an older python library (because meh, it's not
*really* part of the platform).

Or another way, we give explicit guidance about what exact versions of
libvirt we support (i.e. test with specific distros and whatever
versions of libvirt they have) and leave those versions newer than the
oldest version we explicitly support as a grey area. Similarly, we give
explicit guidance about the exact python library stack we support (i.e.
what we test now in the gate) and leave the older versions as a grey
area.

Perhaps rather than focusing on making this absolutely black and white,
we should focus on better communicating what we actually focus our
testing on? (i.e. rather than making the grey areas black, improve the
white areas)

Concretely, for every commit merged, we could publish:

  - the set of commits tested
  - details of the jobs passed:
  - the distro
  - installed packages and versions
  - output of pip freeze
  - configuration used
  - tests passed

Meh, feeling like I'm going off-topic a bit.

Mark.




Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-21 Thread Ilya Shakhat
2014-02-21 16:46 GMT+04:00 Matthew Farrellee m...@redhat.com:


 fyi, some of those links don't work, but these do,

 http://stackalytics.com/report/contribution/savanna-group/30
 http://stackalytics.com/report/contribution/savanna-group/90
 http://stackalytics.com/report/contribution/savanna-group/180


Yep. These reports were upgraded and now show not only review stats but
also the number of posted patches, commits and emails sent to openstack-dev,
thus covering almost all of an engineer's contributions to the project.

Glad to see Andrew on top of the list.


Thanks,
Ilya


[openstack-dev] [Murano] Heat resource isolation withing single stack

2014-02-21 Thread Stan Lagun
Hi Everyone,

While looking through the Heat template generation code in Murano I've
realized it has a major design flaw: there is no isolation between Heat
resources generated by different apps.

Every app manifest can access and modify its environment stack in any way.
For example it can delete instances and other resources belonging to other
applications. This may not be so bad for Murano 0.4, but it becomes critical
for AppCatalog (0.5) as there are no trust relations between applications,
and it may be unacceptable that an untrusted application can gain complete
write access over the whole stack.

There is also a problem of name collisions - resources generated by
different applications may have the same names. This is especially probable
between resources generated by different instances of the same app. This
also affects Parameters/Output of Heat templates as each application
instance must generate unique names for them (and must not forget them
later, as they are needed to read output results).

I think we at least need to know how we are going to solve this before 0.5.

Here are the possible directions I can think of:

1. Use nested Heat stacks. I'm not sure this solves naming collisions, or
that nested stacks can have their own Output

2. Control all stack template modifications and track which resource was
created by which app. Give applications read-only access to resources they
don't own

3. Auto-generate resource names. Auto-add prefixes/suffixes to
resource/output etc. names indicating the owning app instance ID, and remove
them upon read access from the workflow so that generated names are invisible
to it. That would also mean all VMs would have generated names
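Option 3 could look roughly like the following on a parsed template dict. This sketch deliberately ignores the hard part (rewriting Ref/GetAtt cross-references inside resource bodies), and all names are illustrative:

```python
def namespace_template(template, app_instance_id):
    # Prefix Resources/Parameters/Outputs names with the owning app
    # instance ID so two instances of the same app cannot collide.
    # NOTE: a real implementation must also rewrite Ref/GetAtt
    # references inside resource bodies; this sketch does not.
    prefix = "app%s-" % app_instance_id
    result = dict(template)
    for section in ("Resources", "Parameters", "Outputs"):
        if section in template:
            result[section] = dict(
                (prefix + name, body)
                for name, body in template[section].items())
    return result

tmpl = {"Resources": {"server": {"Type": "OS::Nova::Server"}},
        "Outputs": {"ip": {"Value": "..."}}}
namespaced = namespace_template(tmpl, "42")
print(sorted(namespaced["Resources"]))  # ['app42-server']
```

The reverse mapping (stripping the prefix on read access) would make the generated names invisible to the workflow, as described above.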

Hope to see better ideas and suggestions in this thread

-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com


Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread ChangBo Guo
This looks like a useful feature, but it needs some work. The evacuate
function is based on rebuild; if we want to use snapshot images, we need to
pass the snapshot reference from the API layer and expose the interface from
python-novaclient. Correct me if I am wrong :)
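To make the proposed flow concrete, a hypothetical sketch. The `image_ref` argument to evacuate does not exist in the real API; it is precisely the extension under discussion, so the client below is a stub rather than python-novaclient:

```python
class StubServersApi(object):
    """Stub of a novaclient-like servers manager.

    The evacuate(..., image_ref=...) signature is the *hypothetical*
    extension discussed in this thread, not an existing API.
    """
    def __init__(self):
        self.calls = []

    def create_image(self, server_id, name):
        self.calls.append(("snapshot", server_id, name))
        return "snap-123"

    def evacuate(self, server_id, host, image_ref):
        self.calls.append(("evacuate", server_id, host, image_ref))

def evacuate_from_snapshot(servers, server_id, target_host):
    # 1) snapshot the instance, 2) rebuild it on the target hypervisor
    # from that snapshot instead of from the original base image
    snap_id = servers.create_image(server_id, "evac-%s" % server_id)
    servers.evacuate(server_id, target_host, image_ref=snap_id)
    return snap_id

servers = StubServersApi()
print(evacuate_from_snapshot(servers, "vm-1", "hv-2"))  # snap-123
```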


2014-02-21 13:01 GMT+08:00 Sangeeta Singh sin...@yahoo-inc.com:

  Hi,

  At my organization we do not use shared storage for VM disks but need
 to evacuate VMs from an HV that is down or having problems to another HV.
 The evacuate command only allows the evacuated VM to have the base image.
 What I am interested in is to create a snapshot of the VM on the down HV
 and then be able to use the evacuate command by specifying the snapshot for
 the image.

  Has anyone had such a use case? Is there a command that uses snapshots
 in this way to recreate VM on a new HV.

  Thanks for the pointers.

  Sangeeta





-- 
ChangBo Guo(gcb)


Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Daniel P. Berrange
On Thu, Feb 20, 2014 at 02:45:03PM -0500, Sean Dague wrote:
 
 So I'm one of the first people to utter if it isn't tested, it's
 probably broken, however I also think we need to be realistic about the
 fact that if you did out the permutations of dependencies and config
 options, we'd have as many test matrix scenarios as grains of sand on
 the planet.
 
 I do think in some ways this is unique to OpenStack, in that our
 automated testing is head and shoulders above any other Open Source
 project out there, and most proprietary software systems I've seen.
 
 So this is about being pragmatic. In our dependency testing we are
 actually testing with most recent versions of everything. So I would
 think that even with libvirt, we should err in that direction.

I'm very much against that, because IME, time and time again across
all open source projects I've worked on, people silently introduce
use of features/apis that only exist in newer versions without anyone
ever noticing until it is too late.

 That being said, we also need to be a little bit careful about taking
 such a hard line about supported vs. not based on only what's in the
 gate. Because if we did the following things would be listed as
 unsupported (in increasing level of ridiculousness):
 
  * Live migration
  * Using qpid or zmq
  * Running on anything other than Ubuntu 12.04
  * Running on multiple nodes
 
 Supported to me means we think it should work, and if it doesn't, it's a
 high priority bug that will get fixed quickly. Testing is our sanity
 check. But it can't be considered that it will catch everything, at
 least not before the heat death of the universe.

I agree we should be pragmatic here to some extent. We shouldn't aim to
test every single intermediate version, or every possible permutation of
versions - just a representative sample. Testing both lowest and highest
versions is a reasonable sample set IMHO.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [qa] [neutron] Neutron Full Parallel job very close to voting - call to arms by neutron team

2014-02-21 Thread Sean Dague
Yesterday during the QA meeting we realized that the neutron full job,
which includes tenant isolation and full parallelism, was passing quite
often in the experimental queue. Which was actually news to most of us,
as no one had been keeping a close eye on it.

I moved that to a non-voting job on all projects. A spot check overnight
shows it's failing about twice as often as the regular neutron job,
which is too high a failure rate to make it voting, but it's close.

This would be the time for a final hard push by the neutron team to get
to the bottom of these failures to bring the pass rate to the level of
the existing neutron job, then we could make neutron full voting.

This is a *huge* move forward from where things were at the Havana
summit. I want to thank the Neutron team for getting so aggressive about
getting this testing working. I was skeptical we could get there within
the cycle, but a last push could actually get us neutron parity in the
gate by i3.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Daniel P. Berrange
On Thu, Feb 20, 2014 at 10:31:06AM -0800, Joe Gordon wrote:
 Hi All,
 
 A discussion recently came up inside of nova about what "supported
 version" for a dependency means.  In libvirt we gate on the
 minimal version that we support but for all python dependencies we
 gate on the highest version that passes our requirements. While we all
 agree that having two different ways of choosing which version to test
 (min and max) is bad, there are good arguments for doing both.
 
 testing most recent version:
 * We want to make sure we support the latest and greatest
 * Bug fixes
 * Quickly discover backwards incompatible changes so we can deal
 with them as they arise instead of in batch
 
 Testing lowest version supported:
 * Make sure we don't land any code that breaks compatibility with
 the lowest version we say we support

I'm pretty strongly of the opinion that unless you test the minimum
declared version, you shouldn't claim to support it. Experience across
many open source projects is that far too often people silently introduce
features that require new versions of external deps and only get found
by the poor downstream user who tries to actually use the min required
version.
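As an illustration of what gating on declared minimums could involve, here is a tiny helper (purely a sketch, not actual OpenStack tooling) that turns the ">=" lower bounds in a requirements file into exact "==" pins, so a test job could install exactly the minimum versions a project claims to support:

```python
import re

def lower_bound_pins(requirement_lines):
    """Convert '>=' lower bounds into '==' pins.

    A hypothetical helper: given requirements.txt-style lines such as
    'six>=1.4.1', emit 'six==1.4.1' so a test job can install exactly
    the minimum versions a project claims to support.
    """
    pins = []
    for line in requirement_lines:
        line = line.split('#')[0].strip()  # drop comments and whitespace
        if not line:
            continue
        m = re.match(r'^([A-Za-z0-9._-]+).*?>=\s*([0-9][0-9a-zA-Z.]*)', line)
        if m:
            pins.append('%s==%s' % (m.group(1), m.group(2)))
    return pins

# e.g. ['six==1.4.1', 'pbr==0.6', 'lxml==2.3']
print(lower_bound_pins(['six>=1.4.1', 'pbr>=0.6,<1.0', '# comment', 'lxml>=2.3']))
```

A job built this way would install the pinned set instead of letting pip resolve to the newest allowed versions, which is the crux of the min-vs-max testing question.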

 A few questions and ideas on how to move forward.
 * How do other projects deal with this? This problem isn't unique
 to OpenStack.

The level of testing is fairly unique to openstack amongst
open source projects.

 * What are the issues with making one gate job use the latest
 versions and one use the lowest supported versions?

Double the resources is the obvious one I guess :-)

 * Only test some things on every commit or every day (periodic
 jobs)? But no one ever fixes those things when they break, so who wants
 to own them? Distros? Deployers?

I think for testing done by openstack it should always be gating,
otherwise there's not much incentive to deal with the fallout.
If distros want to periodically test their own stack in a non-gating
manner let them make that choice themselves.

 * Other solutions?
 * Does it make sense to gate on the lowest version of libvirt but
 the highest version of python libs?

I think we should be consistent and at very least add testing of the
lowest python lib versions, so we can be confident that our declared
min versions are actually capable of working.

 * Given our finite resources what gets us the furthest?

As you say above, testing the lowest vs highest is targeting two different
use cases. 

  - Testing the lowest version demonstrates that OpenStack has not
broken its own code by introducing use of a new feature.

  - Testing the highest version demonstrates that OpenStack has not
been broken by 3rd party code introducing a regression.

If I was to prioritize things given limited resources, I'd suggest that
we should be validating that OpenStack has not broken its own code as the
top priority. So testing lowest version would rank above testing highest
version. I do think that both are very important to OpenStack though.

So if we have the resources to cope, I do think it would be very valuable
if we were able to have 2 sets of jobs, one focused on the highest version
and one focused on the lowest version both gating.

Of course, there are also intermediate versions to worry about. Given
that our test suite is fully open source, I think it is pretty reasonable
to say that distro vendors / other downstream consumers should take the
responsibility to testing any intermediate versions that are in their
own distro.

One might argue that distros should also have responsibility for testing
the highest version, but I think there is value in having openstack
keep that responsibility to avoid too much duplication of effort, and to
ensure that openstack releases keep a good reputation for quality operation
wrt latest versions of external deps.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Sean Dague
On 02/21/2014 09:45 AM, Daniel P. Berrange wrote:
 On Thu, Feb 20, 2014 at 02:45:03PM -0500, Sean Dague wrote:

 So I'm one of the first people to utter "if it isn't tested, it's
 probably broken", however I also think we need to be realistic about the
 fact that if you worked out the permutations of dependencies and config
 options, we'd have as many test matrix scenarios as grains of sand on
 the planet.

 I do think in some ways this is unique to OpenStack, in that our
 automated testing is head and shoulders above any other Open Source
 project out there, and most proprietary software systems I've seen.

 So this is about being pragmatic. In our dependency testing we are
 actually testing with most recent versions of everything. So I would
 think that even with libvirt, we should err in that direction.
 
 I'm very much against that, because IME, time & time again across
 all open source projects I've worked on, people silently introduce
 use of features/apis that only exist in newer versions without anyone
 ever noticing until it is too late.
 
 That being said, we also need to be a little bit careful about taking
 such a hard line about supported vs. not based on only what's in the
 gate. Because if we did, the following things would be listed as
 unsupported (in increasing level of ridiculousness):

  * Live migration
  * Using qpid or zmq
  * Running on anything other than Ubuntu 12.04
  * Running on multiple nodes

 Supported to me means we think it should work, and if it doesn't, it's a
 high priority bug that will get fixed quickly. Testing is our sanity
 check. But it can't be considered that it will catch everything, at
 least not before the heat death of the universe.
 
 I agree we should be pragmatic here to some extent. We shouldn't aim to
 test every single intermediate version, or every possible permutation of
 versions - just a representative sample. Testing both lowest and highest
 versions is a reasonable sample set IMHO.

Testing lower bounds is interesting, because of the way pip works. That
being said, if someone wants to take ownership of building that job to
start as a periodic job, I'm happy to point in the right direction. Just
right now, it's a lower priority item than things like Tempest self
testing, Heat actually gating, Neutron running in parallel, Nova API
coverage.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [Climate] Meeting minutes

2014-02-21 Thread Dina Belova
Thanks everyone who was on our meeting :)
Meeting minutes are here:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-21-15.01.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-21-15.01.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-21-15.01.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Daniel P. Berrange
On Fri, Feb 21, 2014 at 10:46:22AM -0500, Sean Dague wrote:
 On 02/21/2014 09:45 AM, Daniel P. Berrange wrote:
  On Thu, Feb 20, 2014 at 02:45:03PM -0500, Sean Dague wrote:
 
  So I'm one of the first people to utter "if it isn't tested, it's
  probably broken", however I also think we need to be realistic about the
  fact that if you worked out the permutations of dependencies and config
  options, we'd have as many test matrix scenarios as grains of sand on
  the planet.
 
  I do think in some ways this is unique to OpenStack, in that our
  automated testing is head and shoulders above any other Open Source
  project out there, and most proprietary software systems I've seen.
 
  So this is about being pragmatic. In our dependency testing we are
  actually testing with most recent versions of everything. So I would
  think that even with libvirt, we should err in that direction.
  
  I'm very much against that, because IME, time & time again across
  all open source projects I've worked on, people silently introduce
  use of features/apis that only exist in newer versions without anyone
  ever noticing until it is too late.
  
  That being said, we also need to be a little bit careful about taking
  such a hard line about supported vs. not based on only what's in the
  gate. Because if we did the following things would be listed as
  unsupported (in increasing level of ridiculousness):
 
   * Live migration
   * Using qpid or zmq
   * Running on anything other than Ubuntu 12.04
   * Running on multiple nodes
 
  Supported to me means we think it should work, and if it doesn't, it's a
  high priority bug that will get fixed quickly. Testing is our sanity
  check. But it can't be considered that it will catch everything, at
  least not before the heat death of the universe.
  
  I agree we should be pragmatic here to some extent. We shouldn't aim to
  test every single intermediate version, or every possible permutation of
  versions - just a representative sample. Testing both lowest and highest
  versions is a reasonable sample set IMHO.
 
 Testing lower bounds is interesting, because of the way pip works. That
 being said, if someone wants to take ownership of building that job to
 start as a periodic job, I'm happy to point in the right direction. Just
 right now, it's a lower priority item than things like Tempest self
 testing, Heat actually gating, Neutron running in parallel, Nova API
 coverage.

If it would be hard work to do it for python modules, we can at least
not remove the existing testing of an old libvirt version - simply add
an additional test with newer libvirt.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Murano] Heat resource isolation withing single stack

2014-02-21 Thread Steven Hardy
On Fri, Feb 21, 2014 at 06:37:27PM +0400, Stan Lagun wrote:
 Hi Everyone,
 
 While looking through Heat templates generation code in Murano I've
 realized it has a major design flaw: there is no isolation between Heat
 resources generated by different apps.

Can you define the requirement for isolation in more detail?  Are you
referring simply to namespace isolation, or do you need auth level
isolation, e.g something enforced via keystone?

 Every app manifest can access and modify its environment stack in any way.
 For example it can delete instances and other resources belonging to other
 applications. This may not be so bad for Murano 0.4 but it becomes critical
 for AppCatalog (0.5) as there is no trust relations between applications
 and it may be unacceptable that untrusted application can gain complete
 write access over the whole stack.

All requests to Heat are scoped by tenant/project, so unless you enforce
resource-level access policy (which we sort-of started looking at with
OS::Heat::AccessPolicy), this is expected behavior.

 There is also a problem of name collisions - resources generated by
 different applications may have the same names. This is especially probable
 between resources generated by different instances of the same app. This
 also affects Parameters/Output of Heat templates as each application
 instance must generate unique names for them (and do not forget them later
 as they are needed to read output results).

A hierarchy of nested stacks, with each application defined as a separate
stack, seems the obvious solution here.

 I think we need at least to know how we are going to solve it before 0.5
 
 Here are possible directions I can think of:
 
 1. Use nested Heat stacks. I'm not sure it solves naming collisions and
 that nested stacks can have their own Output

I think it does, and yes all stacks can have their own outputs, including
nested stacks.

Of particular interest to you may be the provider resource interface to
nested stacks, which will allow you to define (via a series of nested stack
templates) custom resource types defining each of your applications.

See this old blog post, which will give you the providers/environments 101,
and contains links to most of the related heat docs:

http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html
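Roughly, the provider-resource approach could look like the following. The template is built as plain Python data so the sketch is self-contained; the My::Murano::App type and the app.yaml file name are invented for illustration, not real Murano artifacts:

```python
import json

# An environment maps a custom resource type to a nested stack template,
# so each application becomes its own stack with its own parameters and
# outputs (solving both the namespace and the output-collision problems).
environment = {
    'resource_registry': {
        'My::Murano::App': 'app.yaml',  # nested template defining the app
    }
}

# The parent template can then instantiate the app type multiple times;
# each instance is an isolated nested stack.
parent_template = {
    'heat_template_version': '2013-05-23',
    'resources': {
        'app_one': {'type': 'My::Murano::App',
                    'properties': {'flavor': 'm1.small'}},
        'app_two': {'type': 'My::Murano::App',
                    'properties': {'flavor': 'm1.small'}},
    },
    'outputs': {
        'app_one_ip': {'value': {'get_attr': ['app_one', 'first_address']}},
    },
}

print(json.dumps(parent_template, indent=2))
```

Both the environment and the template would normally be passed to the stack-create call together; the point of the sketch is only the shape of the data.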

 2. Control all stack template modifications and track which resource was
 created by which app. Give applications read-only access to resources they
 don't own

I think we need more info on the use-case here, but perhaps you can either
use the AccessPolicy resource, or we can work on defining an enhanced
version which meets your requirements.

 3. Auto-generate resource names. Auto-add prefixes/suffixes to
 resource/output etc names indicating owning app instance ID and remove them
 upon read access from workflow so that generated names would be invisible
 to workflow. That would also mean all VMs  would have generated names

Heat already does this internally, we create unique names for all your
instances, unless you explicitly provide a name via the OS::Nova::Server
name property.

It might help if you could provide a really simplified example of the
problem you are facing, or links to the real templates which we could
review and make suggestions?

Steve



Re: [openstack-dev] [keystone] SAML consumption Blueprints

2014-02-21 Thread Marco Fargetta
Hi Dolph,


On 21 Feb 2014, at 03:05, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Thu, Feb 20, 2014 at 4:18 AM, Marco Fargetta marco.farge...@ct.infn.it 
 wrote:
 Dear all,
 
 I am interested to the integration of SAML with keystone and I am analysing
 the following blueprint and its implementation:
 
 https://blueprints.launchpad.net/keystone/+spec/saml-id
 
 https://review.openstack.org/#/c/71353/
 
 
 Looking at the code there is something I cannot understand. In the code it 
 seems you
 will use apache httpd with mod_shib (or other alternatives) to parse saml 
 assertion
 and the code inside keystone will read only the values extracted by the 
 front-end server.
 
 That's correct (for icehouse development, at least).
  
 
 If this is the case, it is not clear to me why you need to register the IdPs, 
 with its certificate,
 in keystone using the new federation API. You can filter the IdP in the 
 server so why do you need this extra list?
 What is the use of the IdP list and the certificate?
 
 This reflects our original design, and it has evolved a bit from the original 
 design to be a bit more simplified. With the additional dependency on 
 mod_shib / mod_mellon, we are no longer implementing the certificates API, 
 but we do still need the IdP API. The IdP API specifically allows us to track 
 the source of an identity, and apply the correct authorization mapping 
 (producing the project- and domain-based role assignments that OpenStack is 
 accustomed to) to the federated attributes coming from mod_shib / 
 mod_mellon. The benefit is that federated identities from one source can have 
 a different level of authorization than those identities from a different 
 source, even if they (theoretically) had the exact same SAML assertions.
  
 

The idea of distinguishing the IdPs makes sense, but for SAML I think it is 
better to work with the attributes only. If the SAML assertion is the same, 
you can work at the mod_shib level to create attributes containing the values 
you want, so it is easy to create a new attribute in the SP carrying the 
authorisation level. In this way you can define authorisation rules using 
only assertions instead of mixing assertions with IdP names. I do not know 
if you can exploit the same approach with OpenID and/or other authentication 
frameworks.
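To illustrate the difference being discussed, here is a toy sketch (not keystone's actual mapping engine; the IdP names and attribute shapes are invented) contrasting attribute-only authorization with IdP-aware authorization:

```python
def roles_from_attributes(attrs):
    """Attribute-only mapping: authorization decided purely from the
    SAML attributes exposed by the front-end (mod_shib)."""
    if 'admin' in attrs.get('groups', []):
        return ['admin']
    return ['member']

def roles_from_idp(idp_id, attrs):
    """IdP-aware mapping: the same assertion can yield a different
    level of authorization depending on which IdP it came from."""
    trusted = {'idp-internal': ['admin'], 'idp-partner': ['member']}
    return trusted.get(idp_id, [])

attrs = {'groups': ['admin']}
print(roles_from_attributes(attrs))          # ['admin']
print(roles_from_idp('idp-partner', attrs))  # ['member']
```

The second function captures why the IdP API exists in the in-review design: identical assertions from different sources can be trusted differently.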

Nevertheless, it seems that I am too late for icehouse, so I will analyse 
what you provide now and think about what could be done in Juno.

Cheers,
Marco


 Is still this implementation open to discussion or the design is frozen for 
 the icehouse release?
 
 It is certainly still open to discussion (and the implementation open to 
 review!), but we're past feature proposal freeze; anything that would require 
 new development (beyond what is already in review) will have to wait a few 
 weeks for Juno.
  
 
 Thanks in advance,
 Marco
 
 
 




Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Kashyap Chamarthy
On Thu, Feb 20, 2014 at 10:31:06AM -0800, Joe Gordon wrote:
 Hi All,
 
 A discussion recently came up inside of nova about what "supported
 version" for a dependency means.  In libvirt we gate on the
 minimal version that we support but for all python dependencies we
 gate on the highest version that passes our requirements. While we all
 agree that having two different ways of choosing which version to test
 (min and max) is bad, there are good arguments for doing both.
 
 testing most recent version:
 * We want to make sure we support the latest and greatest
 * Bug fixes
 * Quickly discover backwards incompatible changes so we can deal
 with them as they arise instead of in batch
 
 Testing lowest version supported:
 * Make sure we don't land any code that breaks compatibility with
 the lowest version we say we support
 
 
 A few questions and ideas on how to move forward.
 * How do other projects deal with this? This problem isn't unique
 to OpenStack.
 * What are the issues with making one gate job use the latest
 versions and one use the lowest supported versions?
 * Given our finite resources what gets us the furthest?


tl;dr -- I've read the further replies in the thread. FWIW, the
suggestion of testing with lowest and highest versions sounds reasonable
to me.

I think I remember the bug you're alluding to here[1] -- I tried to
reproduce it a couple of times in a Fedora 20 environment, but later
moved on (noting relevant details in the bug) to other issues as I
realized after initial investigation that the fix exists in a _newer_
version of Libvirt (which the Gate machine needs to be updated to)[2].
Later, Sean Dague pointed out on IRC that there was another dependent bug[3]
which is preventing the Libvirt version on the gate from being bumped up.

Putting my Fedora distro user hat on: I try (as humanly as possible) to
keep on top of OpenStack bits with whatever is newest available on
Fedora Rawhide (mostly RPMs built from upstream git), and often with
its underlying virtualization components: Libvirt/QEMU RPMs built from
git as well. I make sure a minimal OpenStack (the components I care
about) works without exploding. I'll do whatever I can to be helpful
here and continue to test the newer versions.


  [1] https://bugs.launchpad.net/nova/+bug/1254872 --libvirtError: Timed
  out during operation: cannot acquire state change lock 
  [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
  [3] https://bugs.launchpad.net/nova/+bug/1228977



-- 
/kashyap



[openstack-dev] Documenting test environments (was Re: supported dependency versioning and testing)

2014-02-21 Thread Dean Troyer
[new thread for this...]

On Fri, Feb 21, 2014 at 7:33 AM, Mark McLoughlin wrote:

 Perhaps rather than focusing on making this absolutely black and white,
 we should focus on better communicating what we actually focus our
 testing on? (i.e. rather than making the grey areas black, improve the
 white areas)

 Concretely, for every commit merged, we could publish:

   - the set of commits tested
   - details of the jobs passed:
   - the distro
   - installed packages and versions
   - output of pip freeze
   - configuration used
   - tests passed


Long ago I created tools/info.sh to document the DevStack environment as a
troubleshooting tool.  Among other things it grabs information on the
distro and system packages installed, pip-installed packages and repos and
DevStack configuration (local.conf/localrc).

The output is designed to be easily parsable, and with a bit of work I
think it could provide the info above and be logged with the rest of the
logfiles.
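For example, a parser over that kind of pipe-delimited output might look like the following. The exact field layout of info.sh is assumed here (category|name|value), not verified against the script:

```python
def parse_info(lines):
    """Parse pipe-delimited environment records into a dict by category.

    Assumes a 'category|name|value' shape similar to DevStack's
    tools/info.sh output; the field layout is illustrative only.
    """
    info = {}
    for line in lines:
        parts = line.strip().split('|')
        if len(parts) == 3:
            category, name, value = parts
            info.setdefault(category, {})[name] = value
    return info

sample = ['pip|oslo.config|1.2.1', 'pkg|libvirt-bin|0.9.8', 'pip|six|1.5.2']
print(parse_info(sample))
```

Something this simple would be enough to publish per-job package/version tables alongside the other log files.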

Thoughts?

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [TripleO][review] Please treat -1s on check-tripleo-*-precise as voting.

2014-02-21 Thread Derek Higgins
On 21/02/14 03:31, Robert Collins wrote:
 On 18 February 2014 04:30, Derek Higgins der...@redhat.com wrote:
 On 17/02/14 01:25, Robert Collins wrote:
 Hi!

 The nascent tripleo-gate is now running on all tripleo repositories,
 *and should pass*, but are not yet voting. They aren't voting because
 we cannot submit to the gate unless jenkins votes verified... *and* we
 have no redundancy for the tripleo-ci cloud now, so any glitch in the
 current region will take out our ability to land changes.

 We're working up the path to having two regions as fast as we can- and
 once we do we should be up to check or perhaps even gate in short
 order :).

 Note: unless you *expand* the jenkins vote, you can't tell if a -1 occurred.

 If, for some reason, we have an infrastructure failure that means
 spurious -1's will be occurring, then we'll put that in the #tripleo
 topic.

 It looks like we've hit a glitch, network access to our ci-overcloud
 controller seems to be gone, I think invoking this clause is needed
 until the problem is sorted, will update the topic and am working on
 diagnosing the problem.
 
 So we fixed that clause, but infra took us out of rotation as we took
 nodepool down before it was fixed.
 
 We've now:
  - improved nodepool to handle downclouds more gracefully
  - moved the tripleo cloud using jobs to dedicated check and
 experimental pipelines
  - and been reinstated
 
 So - please look for comments from check-tripleo before approving merges!

The CI cloud seems to be running as expected today, but we have a bit
of tuning to do.

check-tripleo-overcloud-precise is throwing out false negatives because
the testenv-worker has a timeout that is less than the timeout on the
jenkins job (and less than the length of time it takes to run the job)
o this should handle the false negatives
  https://review.openstack.org/#/c/75402/

o and this is a more permanent solution (to remove the possibility of
double booking environments), a new test-env cluster will need to be
built with it, we can do that once we iron out anything else that may
pop up over the next few days.
  https://review.openstack.org/#/c/75403/
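The root cause above (a worker timeout shorter than the Jenkins job timeout) is the kind of relationship a simple start-up sanity check could enforce. The function and numbers below are illustrative, not tripleo-ci code:

```python
def validate_timeouts(worker_timeout, jenkins_timeout, margin=300):
    """Fail fast if the test-env worker could give up before Jenkins does.

    The worker must outlive the Jenkins job (plus a safety margin),
    otherwise environments get reclaimed mid-run and jobs report
    false negatives.
    """
    if worker_timeout <= jenkins_timeout + margin:
        raise ValueError(
            'test-env worker timeout (%ds) must exceed the Jenkins job '
            'timeout (%ds) plus a margin' % (worker_timeout, jenkins_timeout))
    return True

print(validate_timeouts(worker_timeout=7200, jenkins_timeout=5400))  # True
```

A check like this run when the test-env cluster is built would have surfaced the misconfiguration before any jobs failed.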

Current status is that a lot of jobs are failing because they are not
completing the nova-manage db sync on the seed quickly enough. This
only started happening today and doesn't immediately suggest a problem
with our test environment setup (unless we are over-committing resources
on the test environments). I suspect some part of the seed boot process
on or before the db sync is now taking longer than it used to. I was
trying to track down the problem but I'm about to run out of time.

This raises the question:
  If this proves to be a failure in tripleo-ci that is being caused by a
change that happened outside of tripleo, should we stop merging commits?
Or are we OK to go ahead and merge while also helping the other project
to solve the problem? Of course, if we were gating on all projects this
problem would be far less frequent than I suspect it will be, but for
now how do we proceed in these situations?

Derek.


 
 The tripleo test cloud is still one region, CI is running on 10
 hypervisors and 10 emulated baremetal backend systems, so we have
 reasonable capacity.
 
 Additionally, running 'check experimental' will now run tripleo jobs
 against everything we include in tripleo images - nova, cinder, swift
 etc etc.
 
 See the config layout.yaml for details, and I'll send a broader
 announcement once we've had a little bit of run-time with this.
 
 -Rob
 




Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-21 Thread Alessandro Pilotti
Hi guys,

Windows Heat templates are currently supported by using Cloudbase-Init.

Here’s the wiki document that I attached some weeks ago to the blueprint 
referenced in this thread: http://wiki.cloudbase.it/heat-windows
There are a few open points that IMO require some discussion.

One topic that deserves attention is what to do with the cfn-tools: we opted, 
for the moment, to use the AWS version ported to Heat, since those already 
contain the required Windows integration, but we are willing to contribute 
to the cfn-tools project if this still makes sense.

Talking about Windows clusters, the main issue is related to the fact that the 
typical Windows cluster configuration requires shared storage for the quorum 
and Nova / Cinder don’t allow attaching volumes to multiple instances, although 
there’s a BP targeting this potential feature: 
https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

There are solutions to work around this issue that we are putting in place in 
the templates, but shared volumes are an important requirement for providing 
proper support for most advanced Windows workloads on OpenStack.

Talking about specific workloads, we are going to release very soon an initial 
set of templates with support for Active Directory, SQL Server, Exchange, 
Sharepoint and IIS.


Alessandro



On 20 Feb 2014, at 12:24, Alexander Tivelkov ativel...@mirantis.com wrote:

Hi Jay,

Windows support in Heat is being developed, but is not complete yet, afaik. You 
may already use Cloudbase Init to do the post-deploy actions on windows - check 
[1] for the details.

Meanwhile, running a Windows cluster is a much more complicated task than just 
deploying a number of Windows instances (if I understand you correctly and you 
speak about Microsoft Failover Cluster, see [2]): to build it in the cloud you 
will have to execute quite a complex workflow after the nodes are actually 
deployed, which is not possible with Heat (at least for now).

Murano project ([3]) does this on top of Heat, as it was initially designed as 
Windows Data Center as a Service, so I suggest you take a look at it. You 
may also check this video ([4]) which demonstrates how Murano is used to deploy 
a failover cluster of Windows 2012 with a clustered MS SQL server on top of it.


[1] http://wiki.cloudbase.it/heat-windows
[2] http://technet.microsoft.com/library/hh831579
[3] https://wiki.openstack.org/Murano
[4] http://www.youtube.com/watch?v=Y_CmrZfKy18

--
Regards,
Alexander Tivelkov


On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau jay.lau@gmail.com wrote:

Hi,

Does HEAT support provisioning a Windows cluster?  If so, can I also use 
user-data to do some post-install work for a Windows instance? Is there any 
example template for this?

Thanks,

Jay







Re: [openstack-dev] Monitoring IP Availability

2014-02-21 Thread Sangeeta Singh
What about fixed IPs? Can this hook be extended for that?

On 2/20/14, 1:59 PM, Collins, Sean sean_colli...@cable.comcast.com
wrote:

On Thu, Feb 20, 2014 at 12:53:51AM +, Vilobh Meshram wrote:
 Hello OpenStack Dev,
 
 We wanted to have your input on how different companies/organizations
using OpenStack are monitoring IP availability, as this can be useful to
track the used IPs and the total number of IPs.

A while ago I added hooks to Nova-network to forward
floating-ip allocations into an existing management system,
since this system was the source of truth for IP address management
inside Comcast.
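A sketch of the general pattern being described (the class and method names below are invented, not nova's actual hook API; in a real deployment record() would call out to the external IPAM system over HTTP, and the same pattern could cover fixed IPs as well as floating ones):

```python
class AllocationForwarder(object):
    """Hypothetical hook target: push IP allocation events to an
    external IP-management system that is the source of truth."""

    def __init__(self):
        self.events = []

    def record(self, kind, address, instance_id):
        # In a real deployment this would be an HTTP call to the IPAM
        # system; here we just keep the event locally for illustration.
        self.events.append({'kind': kind, 'address': address,
                            'instance': instance_id})

forwarder = AllocationForwarder()
forwarder.record('floating_ip.associate', '10.0.0.5', 'instance-1')
forwarder.record('fixed_ip.allocate', '192.168.1.7', 'instance-2')
print(len(forwarder.events))  # 2
```

With every (de)allocation forwarded, the external system can answer the used-vs-total IP question directly.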

-- 
Sean M. Collins




Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread Sangeeta Singh
Yes, I am thinking along those lines as well. I was planning to write a new 
extension, but extending the current evacuate command to take the snapshot as 
input might be a better approach, as you outlined. Is that what you were 
thinking?

Thanks,
Sangeeta

From: ChangBo Guo glongw...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, February 21, 2014 at 6:42 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][glance] Question about evacuate with no 
shared storage..

This looks like a useful feature; it needs some work. The evacuate function is 
based on rebuild, so if we want to use snapshot images, we need to pass the 
snapshot reference from the API layer and expose the interface from 
python-novaclient. Correct me if I am wrong :)
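To make the proposed flow concrete, here is a sketch. Note that evacuate_with_image() is the hypothetical extension being discussed, not an existing python-novaclient method, and a stub client stands in for the real one:

```python
class StubNovaClient(object):
    """Stand-in for python-novaclient so the flow can be exercised
    without a real cloud; both methods are simplified fakes."""

    def create_image(self, server_id, name):
        # Real novaclient has a snapshot API; this fake just returns an ID.
        return 'snap-' + server_id

    def evacuate_with_image(self, server_id, host, image):
        # Hypothetical: today's evacuate API does not accept an image ref.
        return {'server': server_id, 'host': host, 'image': image}

def evacuate_with_snapshot(client, server_id, target_host):
    # Step 1: snapshot the VM on the failing hypervisor.
    snapshot_id = client.create_image(server_id, 'evac-' + server_id)
    # Step 2 (proposed): rebuild on the target host from that snapshot.
    return client.evacuate_with_image(server_id, target_host,
                                      image=snapshot_id)

print(evacuate_with_snapshot(StubNovaClient(), 'vm-1', 'hv-2'))
```

The only new surface area is step 2: plumbing an image reference through the evacuate call, which is exactly the API-layer change described above.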


2014-02-21 13:01 GMT+08:00 Sangeeta Singh sin...@yahoo-inc.com:
Hi,

At my organization we do not use a shared storage for VM disks  but need to 
evacuate VMs  from a HV that is down or having problems to another HV. The 
evacuate command only allows the evacuated VM to have the base image. What I am 
interested in is to create a snapshot of the VM on the down HV and then be able 
to use the evacuate command by specifying the snapshot for the image.

Has anyone had such a use case? Is there a command that uses snapshots in this 
way to recreate VM on a new HV.

Thanks for the pointers.

Sangeeta





--
ChangBo Guo(gcb)


Re: [openstack-dev] [Heat][docs] Need more sample HOT templates for users

2014-02-21 Thread Zane Bitter

On 14/02/14 03:21, Qiming Teng wrote:

On Fri, Feb 14, 2014 at 08:24:09AM +0100, Thomas Spatzier wrote:

Thanks, Thomas.

The first link actually provides a nice inventory of all Resources and
their properties, attributes, etc.  I didn't look into this because I
was thinking of the word 'developer' differently.  This pointer is
useful for template developers in the sense that they don't have to
check the source code to know a resource type.


Yeah, we are overloading the term 'developer' here, since that section 
contains both information that is only useful to developers working on 
Heat itself, and information useful to users developing templates.


I'm not sure if this is forced because of an OpenStack-wide assumption 
that there is only API documentation and developer documentation?


We ought to split these up and make the difference clear if we can.


Maybe more elaborated explanation of resource usage is some work that
can be left to book or manual authors.


Your original suggestion was a good one - we actually already include 
the docstrings from resource plugin classes when we generate the 
template documentation.[1] So we just have to write docstrings for all 
the resource types...
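For the curious, the mechanism is roughly: the doc extension walks the resource plugin classes and folds each class docstring plus its properties into the generated page. A toy sketch of that idea (the `FloatingIP` class and `document` helper are invented for this sketch, not Heat's actual code):

```python
# Toy illustration of docstring-driven documentation; these names are
# invented for this sketch and are not Heat's real plugin classes.

class FloatingIP:
    """A resource that allocates a floating IP address.

    Use this to expose an instance on an external network.
    """
    properties_schema = {"pool": "Name of the pool to allocate from."}

def document(resource_cls):
    """Assemble doc text from the class docstring and its properties."""
    lines = [resource_cls.__name__, resource_cls.__doc__.strip()]
    for name, desc in sorted(resource_cls.properties_schema.items()):
        lines.append("  %s: %s" % (name, desc))
    return "\n".join(lines)

print(document(FloatingIP))
```

Which is why an empty docstring shows up directly as a hole in the user-facing docs: writing them pays off immediately.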


cheers,
Zane.

[1] 
https://github.com/openstack/heat/blob/eae9a2ad3f5d3dcbcbb10c88a55fd81f1fe2dd56/doc/source/ext/resources.py#L47



Regards,
   - Qiming


Hi Qiming,

not sure if you have already seen it, but there is some documentation
available at the following locations. If you already know it, sorry for
dup ;-)

Entry to Heat documentation:
http://docs.openstack.org/developer/heat/

Template Guide with pointers to more details like documentation of all
resources:
http://docs.openstack.org/developer/heat/template_guide/index.html

HOT template guide:
http://docs.openstack.org/developer/heat/template_guide/hot_guide.html

HOT template spec:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html

Regards,
Thomas

Qiming Teng teng...@linux.vnet.ibm.com wrote on 14/02/2014 06:55:56:


From: Qiming Teng teng...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date: 14/02/2014 07:04
Subject: [openstack-dev] [Heat] Need more sample HOT templates for users

Hi,

   I have been recently trying to convince some co-workers and even some
   customers to try deploy and manipulate their applications using Heat.

   Here are some feedbacks I got from them, which could be noteworthy for
   the Heat team, hopefully.

   - No document can be found on how each Resource is supposed to be
 used. This is partly solved by the newly added resource_schema API,
 but it seems not yet exposed by heatclient and thus not by the CLI.

 In addition to this, resource schema itself may print only simple
 help message in ONE sentence, which could be insufficient for users
 to gain a full understanding.

   - The current 'heat-templates' project provides quite some samples in
 the CFN format, but not so many in HOT format.  For early users,
 this means either they will get more accustomed to CFN templates, or
 they need to write HOT templates from scratch.

 Another suggestion is also related to Resource usage. Maybe more
 smaller HOT templates each focusing on teaching one or two resources
 would be helpful. There could be some complex samples as show cases
 as well.
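As an illustration of the kind of small, single-resource HOT sample being asked for, something like the following (the resource type is real, but the image/flavor values are just placeholders):

```yaml
heat_template_version: 2013-05-23

description: Minimal sample focusing on one resource type

parameters:
  key_name:
    type: string
    description: Name of an existing Nova keypair

resources:
  server:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.tiny
```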

  Some thoughts on documenting the Resources:

   - The doc can be inlined in the source file, where a developer
 provides the manual of a resource when it is developed. People won't
 forget to update it if the implementation is changed. A Resource can
 provide a 'describe' or 'usage' or 'help' method to be inherited and
 implemented by all resource types.

 One problem with this is that code mixed with long help text may be
 annoying for some developers.  Another problem is about
 internationalization.

   - Another option is to create a subdirectory in the doc directory,
 dedicated to resource usage. In addition to the API references, we
 also provide resource references (think of the AWS CFN online docs).

   Does this make sense?

Regards,
   - Qiming

-
Qiming Teng, PhD.
Research Staff Member
IBM Research - China
e-mail: teng...@cn.ibm.com
















Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread Joe Gordon
On Thu, Feb 20, 2014 at 9:01 PM, Sangeeta Singh sin...@yahoo-inc.com wrote:
 Hi,

 At my organization we do not use a shared storage for VM disks  but need to
 evacuate VMs  from a HV that is down or having problems to another HV. The
 evacuate command only allows the evacuated VM to have the base image. What I
 am interested in is to create a snapshot of the VM on the down HV and then
 be able to use the evacuate command by specifying the snapshot for the
 image.

libvirt supports live migration without any shared storage. TripleO
has been testing it out using this patch
https://review.openstack.org/#/c/74600/


 Has anyone had such a use case? Is there a command that uses snapshots in
 this way to recreate VM on a new HV.

 Thanks for the pointers.

 Sangeeta





Re: [openstack-dev] [Heat][docs] Need more sample HOT templates for users

2014-02-21 Thread Mike Spreitzer
Zane Bitter zbit...@redhat.com wrote on 02/21/2014 12:23:05 PM:

 Yeah, we are overloading the term 'developer' here, since that section 
 contains both information that is only useful to developers working on 
 Heat itself, and information useful to users developing templates.

At the highest levels of the OpenStack documentation, a distinction is 
made between cloud users, cloud admins, and developers.  Nobody coming at 
this from the outside would look under developer documentation for what a 
cloud user --- even one writing a Heat template --- needs to know: cloud 
users are obviously application developers and deployers and operators.

 I'm not sure if this is forced because of an OpenStack-wide assumption 
 that there is only API documentation and developer documentation?
 
 We ought to split these up and make the difference clear if we can.

Forget the if.  If we don't want to have to mentor every new user, we 
need decent documentation.

https://bugs.launchpad.net/openstack-manuals/+bug/1281691

Regards,
Mike


Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Sean Dague
On 02/21/2014 11:02 AM, Daniel P. Berrange wrote:
 On Fri, Feb 21, 2014 at 10:46:22AM -0500, Sean Dague wrote:
 On 02/21/2014 09:45 AM, Daniel P. Berrange wrote:
 On Thu, Feb 20, 2014 at 02:45:03PM -0500, Sean Dague wrote:

 So I'm one of the first people to utter if it isn't tested, it's
 probably broken, however I also think we need to be realistic about the
 fact that if you did out the permutations of dependencies and config
 options, we'd have as many test matrix scenarios as grains of sand on
 the planet.

 I do think in some ways this is unique to OpenStack, in that our
 automated testing is head and shoulders above any other Open Source
 project out there, and most proprietary software systems I've seen.

 So this is about being pragmatic. In our dependency testing we are
 actually testing with most recent versions of everything. So I would
 think that even with libvirt, we should err in that direction.

 I'm very much against that, because IME, time and time again across
 all open source projects I've worked on, people silently introduce
 use of features/apis that only exist in newer versions without anyone
 ever noticing until it is too late.

 That being said, we also need to be a little bit careful about taking
 such a hard line about supported vs. not based on only what's in the
 gate. Because if we did the following things would be listed as
 unsupported (in increasing level of ridiculousness):

  * Live migration
  * Using qpid or zmq
  * Running on anything other than Ubuntu 12.04
  * Running on multiple nodes

 Supported to me means we think it should work, and if it doesn't, it's a
 high priority bug that will get fixed quickly. Testing is our sanity
 check. But it can't be considered that it will catch everything, at
 least not before the heat death of the universe.

 I agree we should be pragmatic here to some extent. We shouldn't aim to
 test every single intermediate version, or every possible permutation of
 versions - just a representative sample. Testing both lowest and highest
 versions is a reasonable sample set IMHO.

 Testing lower bounds is interesting, because of the way pip works. That
 being said, if someone wants to take ownership of building that job to
 start as a periodic job, I'm happy to point in the right direction. Just
 right now, it's a lower priority item than things like Tempest self
 testing, Heat actually gating, Neutron running in parallel, Nova API
 coverage.
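For reference, the mechanical core of a lower-bounds job is small: rewrite each requirement's declared minimum into an exact pin so pip installs the oldest versions we claim to support. A rough sketch (illustrative only, not an actual infra job):

```python
# Rough sketch of a lower-bounds transform; not an actual OpenStack infra
# job. Given requirement specifiers, pin each one to its declared minimum
# so the test run installs exactly the oldest supported versions.

import re

def pin_to_minimum(requirement):
    """'SQLAlchemy>=0.7.8,<=0.7.99' -> 'SQLAlchemy==0.7.8'.

    Lines without a '>=' minimum pass through unchanged.
    """
    m = re.match(r"([A-Za-z0-9._-]+).*?>=([0-9][0-9A-Za-z.]*)", requirement)
    if not m:
        return requirement
    return "%s==%s" % (m.group(1), m.group(2))

reqs = ["SQLAlchemy>=0.7.8,<=0.7.99", "pbr>=0.6", "iso8601"]
print([pin_to_minimum(r) for r in reqs])
# ['SQLAlchemy==0.7.8', 'pbr==0.6', 'iso8601']
```

The hard part Sean alludes to is not the transform but making the whole dependency graph actually install and pass with those pins.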
 
 If it would be hard work to do it for python modules, we can at least
 not remove the existing testing of an old libvirt version - simply add
 an additional test with newer libvirt.

Simply adding a test with newer libvirt isn't all that simple at the end
of the day, as it requires building a new nodepool image. Because
getting new libvirt in the existing test environment means cloud
archive, and cloud archive means a ton of other new code as well. Plus
in Juno we're presumably going to jump to 14.04 as our test base, which
is going to be its own big transition.

So, I'm not opposed, but I also think bifurcating libvirt testing is a
big enough change in the pipeline that it needs some pretty dedicated
folks looking at it, and the implications there in. This isn't just a
yaml change, set and forget it. And given where we are in the
development cycle, I'm not sure that trying to keep the gate stable with a
new libvirt, which we've known to be problematic, is the right thing to do
right now.

But, if someone is stepping up to work through it, I can definitely mentor
them on the right places to be poking.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Jay Pipes
On Thu, 2014-02-20 at 15:21 +0400, Eugene Nikanorov wrote:

 I agree with Samuel here.  I feel the logical model and other
 issues
 (implementation etc.) are mixed in the discussion.
  
 A little bit. While ideally it's better to separate it, in my opinion
 we need to have some 'fair bit' of implementation details
 in API in order to reduce code complexity (I'll try to explain it on
 the meeting). Currently these 'implementation details' are implied
 because we deal with simplest configurations which maps 1:1 to a
 backend.

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

The user of the OpenStack load balancer service would be able to call
the API in the following way (which represents more how the user thinks
about the problem domain):

neutron balancer-type-list

# Returns a list of balancer types (flavors) that might
# look something like this perhaps (just an example off top of head):

- simple:
capabilities:
  topologies:
- single-node
  algorithms:
- round-robin
  protocols:
- http
  max-members: 4
- advanced:
capabilities:
  topologies:
- single-node
- active-standby
  algorithms:
- round-robin
- least-connections
  protocols:
- http
- https
  max-members: 100
   
# User would then create a new balancer from the type:

neutron balancer-create --type=advanced --front=ip \
 --back=list_of_ips --algorithm=least-connections \
 --topology=active-standby

# Neutron LBaaS goes off and does a few things, then perhaps
# user would run:

neutron balancer-show balancer_id

# which might return the following:

front:
  ip: ip
  nodes:
- uuid -- could be a hardware device ID or a VM ID
  ip: fixed_ip
  status: ACTIVE
- uuid
  ip: fixed_ip
  status: STANDBY
back:
  nodes:
- uuid -- could be ID of an appliance or a VM ID
  ip: fixed_ip
  status: ONLINE
- uuid
  ip: fixed_ip
  status: ONLINE
- uuid
  ip: fixed_ip
  status: OFFLINE

No mention of pools, VIPs, or really much else other than a balancer
and the balancer type, which describes capabilities and restrictions
for a class of balancers. All implementation details are hidden behind
the API. How Neutron LBaaS stores the data behind the scenes should not
influence the forward user-facing API.
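As a sketch of the server-side capability check such an API implies, reusing the example type data above (illustrative code only, not Neutron's):

```python
# Illustrative only -- not Neutron code. Shows the kind of validation a
# capability-based balancer API implies, using the example "simple" and
# "advanced" types from the text above.

BALANCER_TYPES = {
    "simple": {
        "topologies": ["single-node"],
        "algorithms": ["round-robin"],
        "protocols": ["http"],
        "max-members": 4,
    },
    "advanced": {
        "topologies": ["single-node", "active-standby"],
        "algorithms": ["round-robin", "least-connections"],
        "protocols": ["http", "https"],
        "max-members": 100,
    },
}

def validate_create(type_name, topology, algorithm, members):
    """Check a balancer-create request against the chosen type's capabilities."""
    caps = BALANCER_TYPES[type_name]
    if topology not in caps["topologies"]:
        return "topology %r not offered by type %r" % (topology, type_name)
    if algorithm not in caps["algorithms"]:
        return "algorithm %r not offered by type %r" % (algorithm, type_name)
    if len(members) > caps["max-members"]:
        return "too many members for type %r" % type_name
    return "ok"

print(validate_create("advanced", "active-standby",
                      "least-connections", ["10.0.0.2", "10.0.0.3"]))  # ok
```

The driver is then free to map any "ok" request onto whatever backend satisfies the declared capabilities.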

Just my two cents,
-jay







[openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Aaron Rosen
Hi,

Yesterday, I pushed a patch to review and was surprised that several of the
third party CI systems reported back that the patch-set worked where it
definitely shouldn't have. Anyways, I tested out my theory a little more
and it turns out a few of the 3rd party CI systems for neutron are just
returning  SUCCESS even if the patch set didn't run successfully (
https://review.openstack.org/#/c/75304/).

Here's a short summary of what I found.

Hyper-V CI -- This seems like an easy fix, as it's posting build succeeded
but also puts test run failed off to the side. Would probably be a good idea
to remove the build succeeded message to avoid any confusion.


Brocade CI - From the log files it posts it shows that it tries to apply my
patch but fails:

2014-02-20 20:23:48 + cd /opt/stack/neutron
2014-02-20 20:23:48 + git fetch
https://review.openstack.org/openstack/neutron.git
refs/changes/04/75304/1
2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
2014-02-20 20:24:00  * branch refs/changes/04/75304/1 - FETCH_HEAD
2014-02-20 20:24:00 + git checkout FETCH_HEAD
2014-02-20 20:24:00 error: Your local changes to the following files
would be overwritten by checkout:
2014-02-20 20:24:00 etc/neutron/plugins/ml2/ml2_conf_brocade.ini
2014-02-20 20:24:00 neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
2014-02-20 20:24:00 Please, commit your changes or stash them before
you can switch branches.
2014-02-20 20:24:00 Aborting
2014-02-20 20:24:00 + cd /opt/stack/neutron

but still continues running (without my patchset) and reports success. --
This actually looks like a devstack bug  (i'll check it out).
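The underlying bug pattern here -- a setup step fails but the job keeps going and votes SUCCESS -- comes from ignoring exit codes. A minimal sketch of the guard a CI wrapper needs (illustrative, not any vendor's actual job code; in shell the equivalent is `set -e` or explicit `|| exit 1` checks):

```python
# Minimal sketch of failing fast in a CI wrapper; illustrative only.
# check=True makes a failed setup step raise, so the job aborts instead
# of silently running the tests against unpatched code.

import subprocess
import sys

def run_step(cmd):
    """Run one job step; raise CalledProcessError on a non-zero exit."""
    subprocess.run(cmd, check=True)

# Simulate a failing step, like the aborted `git checkout` above:
failing_step = [sys.executable, "-c", "raise SystemExit(1)"]
try:
    run_step(failing_step)
    verdict = "SUCCESS"
except subprocess.CalledProcessError:
    verdict = "FAILURE"
print(verdict)  # FAILURE
```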

PLUMgrid CI - Seems to always vote +1 without a failure (
https://review.openstack.org/#/dashboard/10117) though the logs are private
so we can't really tell whats going on.

I was thinking it might be worthwhile or helpful to have a job that tests
that CI actually fails when we expect it to.

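Such a job could be as simple as a periodic canary: push a patch known to be broken, then audit which CI systems failed to vote FAILURE on it. A sketch (the vote data below is invented for illustration):

```python
# Sketch of a canary audit; the vote data is made up for illustration.
# A known-bad patch is pushed, and any CI that voted anything other than
# FAILURE on it is flagged as a false positive.

EXPECTED = "FAILURE"

def audit(votes):
    """Return CI systems that did not fail the known-bad canary patch."""
    return sorted(ci for ci, result in votes.items() if result != EXPECTED)

canary_votes = {
    "Jenkins": "FAILURE",
    "Hyper-V CI": "SUCCESS",   # false positive
    "Brocade CI": "SUCCESS",   # false positive
}
print(audit(canary_votes))  # ['Brocade CI', 'Hyper-V CI']
```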
Best,

Aaron


Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread Joshua Harlow
That requires ssh (or some tunnel/other RPC?) connections from/to all
hypervisors to work correctly, right?

Is that allowed in your organization (headless ssh keys from-to all
hypervisors)?

Isn't that a huge security problem if someone manages to break out of a VM
and get access to those keys?

If I was a hacker and I could initiate those calls, bitcoin mining +1 ;)

-Original Message-
From: Joe Gordon joe.gord...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Friday, February 21, 2014 at 9:38 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][glance] Question about evacuate with
no shared storage..

On Thu, Feb 20, 2014 at 9:01 PM, Sangeeta Singh sin...@yahoo-inc.com
wrote:
 Hi,

 At my organization we do not use a shared storage for VM disks  but
need to
 evacuate VMs  from a HV that is down or having problems to another HV.
The
 evacuate command only allows the evacuated VM to have the base image.
What I
 am interested in is to create a snapshot of the VM on the down HV and
then
 be able to use the evacuate command by specifying the snapshot for the
 image.

libvirt supports live migration without any shared storage. TripleO
has been testing it out using this patch
https://review.openstack.org/#/c/74600/


 Has anyone had such a use case? Is there a command that uses snapshots
in
 this way to recreate VM on a new HV.

 Thanks for the pointers.

 Sangeeta







Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Eugene Nikanorov
Hi Jay,

Just a quick response:

The 'implementation detail in API' that we all are arguing about is some
hint from the user about how logical configuration is mapped on the
backend(s), not much detail IMO.

Your proposed model has that, because you create the balancer at once and
the driver can easily map submitted configuration to *some* backend or even
decide how to split it.
Things get more complicated when you need fine-grained control.

Looking at your proposal, it reminds me of a Heat template for a loadbalancer.
It's fine, but we need to be able to operate on particular objects.

Thanks,
Eugene.



On Fri, Feb 21, 2014 at 10:29 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-02-20 at 15:21 +0400, Eugene Nikanorov wrote:

  I agree with Samuel here.  I feel the logical model and other
  issues
  (implementation etc.) are mixed in the discussion.
 
  A little bit. While ideally it's better to separate it, in my opinion
  we need to have some 'fair bit' of implementation details
  in API in order to reduce code complexity (I'll try to explain it on
  the meeting). Currently these 'implementation details' are implied
  because we deal with simplest configurations which maps 1:1 to a
  backend.

 I disagree on this point. I believe that the more implementation details
 bleed into the API, the harder the API is to evolve and improve, and the
 less flexible the API becomes.

 I'd personally love to see the next version of the LBaaS API be a
 complete breakaway from any implementation specifics and refocus itself
 to be a control plane API that is written from the perspective of the
 *user* of a load balancing service, not the perspective of developers of
 load balancer products.

 The user of the OpenStack load balancer service would be able to call
 the API in the following way (which represents more how the user thinks
 about the problem domain):

 neutron balancer-type-list

 # Returns a list of balancer types (flavors) that might
 # look something like this perhaps (just an example off top of head):

 - simple:
 capabilities:
   topologies:
 - single-node
   algorithms:
 - round-robin
   protocols:
 - http
   max-members: 4
 - advanced:
 capabilities:
   topologies:
 - single-node
 - active-standby
   algorithms:
 - round-robin
 - least-connections
   protocols:
 - http
 - https
   max-members: 100

 # User would then create a new balancer from the type:

 neutron balancer-create --type=advanced --front=ip \
  --back=list_of_ips --algorithm=least-connections \
  --topology=active-standby

 # Neutron LBaaS goes off and does a few things, then perhaps
 # user would run:

 neutron balancer-show balancer_id

 # which might return the following:

 front:
   ip: ip
   nodes:
 - uuid -- could be a hardware device ID or a VM ID
   ip: fixed_ip
   status: ACTIVE
 - uuid
   ip: fixed_ip
   status: STANDBY
 back:
   nodes:
 - uuid -- could be ID of an appliance or a VM ID
   ip: fixed_ip
   status: ONLINE
 - uuid
   ip: fixed_ip
   status: ONLINE
 - uuid
   ip: fixed_ip
   status: OFFLINE

 No mention of pools, VIPs, or really much else other than a balancer
 and the balancer type, which describes capabilities and restrictions
 for a class of balancers. All implementation details are hidden behind
 the API. How Neutron LBaaS stores the data behind the scenes should not
 influence the forward user-facing API.

 Just my two cents,
 -jay








[openstack-dev] bug 1203680 - fix requires doc

2014-02-21 Thread Mike Spreitzer
https://bugs.launchpad.net/devstack/+bug/1203680 is literally about Glance 
but Nova has the same problem.  There is a fix released, but just merging 
that fix accomplishes nothing --- we need people who run DevStack to set 
the new variable (INSTALL_TESTONLY_PACKAGES).  This is something that 
needs to be documented (in http://devstack.org/configuration.html and all 
the places that tell people how to do unit testing, for examples), so that 
people know to do it, right?
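For reference, the setting in question is a DevStack localrc variable, set before running stack.sh; something like:

```shell
# In DevStack's localrc: also install the test-only dependencies
# (the packages listed in test-requirements).
INSTALL_TESTONLY_PACKAGES=True
```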


Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Octavian Ciuhandu
Hi,

On 21 Feb 2014, at 20:34, Aaron Rosen 
aaronoro...@gmail.com wrote:

Hi,

Yesterday, I pushed a patch to review and was surprised that several of the 
third party CI systems reported back that the patch-set worked where it 
definitely shouldn't have. Anyways, I tested out my theory a little more and it 
turns out a few of the 3rd party CI systems for neutron are just returning  
SUCCESS even if the patch set didn't run successfully 
(https://review.openstack.org/#/c/75304/).

Here's a short summery of what I found.

Hyper-V CI -- This seems like an easy fix as it's posting build succeeded but 
also puts to the side test run failed. Would probably be a good idea to 
remove the build succeeded message to avoid any confusion.

The Hyper-V CI is non-voting (as required for new third-party CIs) and this is 
the reason why any post from it will show “build succeeded”. As published in 
other threads, AFAIK the only way to get rid of this issue is to have the CI as 
voting.

Brocade CI - From the log files it posts it shows that it tries to apply my 
patch but fails:


2014-02-20 20:23:48 + cd /opt/stack/neutron
2014-02-20 20:23:48 + git fetch 
https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
2014-02-20 20:24:00  * branch refs/changes/04/75304/1 - FETCH_HEAD
2014-02-20 20:24:00 + git checkout FETCH_HEAD
2014-02-20 20:24:00 error: Your local changes to the following files would be 
overwritten by checkout:
2014-02-20 20:24:00 etc/neutron/plugins/ml2/ml2_conf_brocade.ini
2014-02-20 20:24:00 neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
2014-02-20 20:24:00 Please, commit your changes or stash them before you can 
switch branches.
2014-02-20 20:24:00 Aborting
2014-02-20 20:24:00 + cd /opt/stack/neutron

but still continues running (without my patchset) and reports success. -- This 
actually looks like a devstack bug  (i'll check it out).

PLUMgrid CI - Seems to always vote +1 without a failure 
(https://review.openstack.org/#/dashboard/10117) though the logs are private so 
we can't really tell whats going on.

I was thinking it might be worth while or helpful to have a job that tests that 
CI is actually fails when we expect it to.

Best,

Aaron


Thanks,

Octavian.


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Jay Pipes
On Fri, 2014-02-21 at 22:58 +0400, Eugene Nikanorov wrote:
 Hi Jay,
 
 Just a quick response:
 
 The 'implementation detail in API' that we all are arguing about is
 some hint from the user about how logical configuration is mapped on
 the backend(s), not much detail IMO. 
 
 Your proposed model has that, because you create the balancer at once
 and the driver can easily map submitted configuration to *some*
 backend or even decide how to split it.
 Things get more complicated when you need fine-grained control.

Could you provide some examples -- even in the pseudo-CLI commands like
I did below. It's really difficult to understand where the limits are
without specific examples.

 Looking at your proposal it reminds me Heat template for
 loadbalancer. 
 It's fine, but we need to be able to operate on particular objects.

I'm not ruling out being able to add or remove nodes from a balancer, if
that's what you're getting at?

Best,
-jay





Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Aaron Rosen
This should fix the false positive for brocade:
https://review.openstack.org/#/c/75486/

Aaron


On Fri, Feb 21, 2014 at 10:34 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Yesterday, I pushed a patch to review and was surprised that several of
 the third party CI systems reported back that the patch-set worked where it
 definitely shouldn't have. Anyways, I tested out my theory a little more
 and it turns out a few of the 3rd party CI systems for neutron are just
 returning  SUCCESS even if the patch set didn't run successfully (
 https://review.openstack.org/#/c/75304/).

 Here's a short summery of what I found.

 Hyper-V CI -- This seems like an easy fix as it's posting build
 succeeded but also puts to the side test run failed. Would probably be a
 good idea to remove the build succeeded message to avoid any confusion.


 Brocade CI - From the log files it posts it shows that it tries to apply
 my patch but fails:

 2014-02-20 20:23:48 + cd /opt/stack/neutron
 2014-02-20 20:23:48 + git fetch 
 https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
 2014-02-20 20:24:00  * branch refs/changes/04/75304/1 - FETCH_HEAD
 2014-02-20 20:24:00 + git checkout FETCH_HEAD
 2014-02-20 20:24:00 error: Your local changes to the following files would be 
 overwritten by checkout:
 2014-02-20 20:24:00   etc/neutron/plugins/ml2/ml2_conf_brocade.ini
 2014-02-20 20:24:00   neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
 2014-02-20 20:24:00 Please, commit your changes or stash them before you can 
 switch branches.
 2014-02-20 20:24:00 Aborting
 2014-02-20 20:24:00 + cd /opt/stack/neutron

 but still continues running (without my patchset) and reports success. --
 This actually looks like a devstack bug  (i'll check it out).

 PLUMgrid CI - Seems to always vote +1 without a failure (
 https://review.openstack.org/#/dashboard/10117) though the logs are
 private so we can't really tell whats going on.

 I was thinking it might be worth while or helpful to have a job that tests
 that CI is actually fails when we expect it to.

 Best,

 Aaron




Re: [openstack-dev] [Neutron][LBaaS] Feedback on SSL implementation

2014-02-21 Thread Jay Pipes
On Wed, 2014-02-19 at 22:01 -0800, Stephen Balukoff wrote:

 Front-end versus back-end protocols:
 It's actually really common for a HTTPS-enabled front-end to speak
 HTTP to the back-end.  The assumption here is that the back-end
 network is trusted and therefore we don't need to bother with the
 (considerable) extra CPU overhead of encrypting the back-end traffic.
 To be honest, if you're going to speak HTTPS on the front-end and the
 back-end, then the only possible reason for even terminating SSL on
 the load balancer is to insert the X-Fowarded-For header. In this
 scenario, you lose almost all the benefit of doing SSL offloading at
 all!

This is exactly correct.

 If we make a policy decision right here not to allow front-end and
 back-end protocol to mismatch, this will break a lot of topologies.

Yep.

Best,
-jay





Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Mike Spreitzer
Sean Dague s...@dague.net wrote on 02/20/2014 02:45:03 PM:
 ...
 That being said, we also need to be a little bit careful about taking
 such a hard line about supported vs. not based on only what's in the
 gate. Because if we did the following things would be listed as
 unsupported (in increasing level of ridiculousness):
 
  * Live migration
  * Using qpid or zmq
  * Running on anything other than Ubuntu 12.04
  * Running on multiple nodes

Ah, so the only distro regularly tested is Ubuntu 12.04?  That's 
consistent with the other clues I am getting.  But it is inconsistent with 
the following remark found in http://devstack.org/overview.html :

The OpenStack Technical Committee (TC) has defined the current CI strategy 
to include the latest Ubuntu release and the latest RHEL release (for 
Python 2.6 testing).

It may not be obvious to core developers, but for us newbies there is a 
lot of inconsistent and incomplete documentation of what to expect and how 
to do things --- and it is scattered in a variety of places, easy to miss 
some; it really is a drag on a beginner's time.  I am trying to point out 
and fix doc problems as I discover them, but am not myself so sure what 
all the right answers are.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-21 Thread Eugene Nikanorov



 Could you provide some examples -- even in the pseudo-CLI commands like
 I did below. It's really difficult to understand where the limits are
 without specific examples.

You know, I always look at the API proposal from implementation standpoint
also, so here's what I see.
In the cli workflow that you described above, everything is fine, because
the driver knows how and where to deploy each object
that you provide in your command, because it's basically a batch.

When we're talking about separate objects that form a loadbalancer - vips,
pools, members - it becomes unclear how to map them to backends and at which
point.

So here's an example I usually give:
We have 2 VIPs (in fact, one address and 2 ports listening for http and
https, now we call them listeners),
both listeners pass request to a webapp server farm, and http listener also
passes requests to static image servers by processing incoming request URIs
by L7 rules.
So object topology is:

 Listener1 (addr:80)    Listener2 (addr:443)
     |      \            /
     |       \          /
     |        \        /
     |         X
     |        / \
 pool1 (webapp)  pool2 (static imgs)
sorry for that stone age pic :)

The proposal that we discuss can create such object topology by the
following sequence of commands:
1) create-vip --name VipName address=addr
returns vip_id
2) create-listener --name listener1 --port 80 --protocol http --vip_id
vip_id
returns listener_id1
3) create-listener --name listener2 --port 443 --protocol https --sl-params
params --vip_id vip_id
returns listener_id2
4) create-pool --name pool1 members
returns pool_id1
5) create-pool --name pool2 members
returns pool_id2
6) set-listener-pool listener_id1 pool_id1 --default
7) set-listener-pool listener_id1 pool_id2 --l7policy policy
8) set-listener-pool listener_id2 pool_id1 --default

That's a generic workflow that allows you to create such a config. The
question is at which point the backend is chosen.
In our current proposal the backend is chosen at step (1) and all further
objects implicitly go on the same backend as VipName.

The API allows the following addition:
9) create-vip --name VipName2 address=addr2
10) create-listener ... listener3 ...
11) set-listener-pool listener_id3 pool_id1

E.g. from an API standpoint the commands above are valid. But that
particular ability (pool1 being shared by two different backends) introduces
lots of complexity in the implementation and API, and that is what we would
like to avoid at this point.

So the proposal makes step #11 forbidden: the pool is already associated
with a listener on one backend, so we don't share it with listeners on
another.
That kind of restriction introduces implicit knowledge about the
object-to-backend mapping into the API.
In my opinion it's not a big deal. Once we sort out those complexities, we
can allow that.
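The implicit object-to-backend mapping rule can be sketched in a few lines of Python (a hypothetical model for illustration, not the actual Neutron code or data model):

```python
# Hypothetical sketch of the restriction discussed above: a pool may only be
# attached to listeners whose VIP lives on the same backend. Object names
# and the `backends` mapping are illustrative.
class ApiError(Exception):
    pass

def set_listener_pool(listener_id, pool_id, backends):
    """Associate a pool with a listener, enforcing one backend per pool.

    `backends` maps object id -> backend id; a pool with no backend yet
    inherits the listener's backend (chosen when its VIP was created).
    """
    listener_backend = backends[listener_id]
    pool_backend = backends.get(pool_id)
    if pool_backend is None:
        backends[pool_id] = listener_backend  # first association pins it
    elif pool_backend != listener_backend:
        raise ApiError("pool %s is already deployed on backend %s"
                       % (pool_id, pool_backend))

backends = {"listener1": "b1", "listener2": "b1", "listener3": "b2"}
set_listener_pool("listener1", "pool1", backends)  # ok: pool1 pinned to b1
set_listener_pool("listener2", "pool1", backends)  # ok: same backend
try:
    set_listener_pool("listener3", "pool1", backends)  # forbidden
except ApiError as exc:
    print(exc)  # pool pool1 is already deployed on backend b1
```

Once the complexities are sorted out, lifting the restriction would amount to relaxing (or removing) the `elif` branch.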

What do you think?

Thanks,
Eugene.




  Looking at your proposal, it reminds me of a Heat template for a
  loadbalancer.
  It's fine, but we need to be able to operate on particular objects.

 I'm not ruling out being able to add or remove nodes from a balancer, if
 that's what you're getting at?

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] opportunistic and green cloud

2014-02-21 Thread Abmar Barros
Hi everyone,

I'm currently taking part in a project at the Federal University of Campina
Grande called fogbow, which aims to provide an energy-efficient scheduler
and an opportunistic compute node (one that deactivates the service when the
host isn't idle) on top of OpenStack.

Regarding the opportunistic aspect of the project, we're planning on using
powernap (http://manpages.ubuntu.com/manpages/lucid/man8/powernap.8.html)
to tell if the host is idle or not.

Finally, my question is: what do you guys think would be the best way for a
powernap action script to interact with nova-compute in order to stop all
running instances and disable the host?

Would killing and respawning the nova-compute be an elegant way of doing
that in your opinion? Or do RPC calls sound better?
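For what it's worth, one possible shape for such a hook, purely as a sketch: it assumes the nova CLI, admin credentials in a (hypothetical) /root/openrc, and that disabling the service before stopping it is acceptable.

```
#!/bin/sh
# Hypothetical powernap action hook (a sketch, not a tested implementation):
# take the host out of the scheduling loop, then stop nova-compute.
. /root/openrc                              # admin credentials (hypothetical path)
HOST=$(hostname)
nova service-disable "$HOST" nova-compute   # no new instances land here
service nova-compute stop                   # stop the compute service itself
```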

Thanks in advance

-- 
Abmar Barros
MSc in Computer Science from the Federal University of Campina Grande -
www.ufcg.edu.br
OurGrid Team Leader - www.ourgrid.org
Buddycloud Dev - www.buddycloud.org
Paraíba - Brazil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-21 Thread Dean Troyer
On Fri, Feb 21, 2014 at 1:42 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 Ah, so the only distro regularly tested is Ubuntu 12.04?


Within the OpenStack Infrastructure Team -managed environment, generally
yes (with the addition of CentOS 6 for Py26 as noted in your quote below).
 However, that is not the only platform that testing is performed on and
certainly not the only platform that DevStack is supported on.


  That's consistent with the other clues I am getting.  But it is
 inconsistent with the following remark found in
 http://devstack.org/overview.html :

 *The OpenStack Technical Committee (TC) has defined the current CI
 strategy to include the latest Ubuntu release and the latest RHEL release
 (for Python 2.6 testing).*


The bullet list immediately following gets very specific regarding our
distribution support policy.

devstack.org documents (sometimes confusingly, as you've noted) only
DevStack itself.  CI uses DevStack as the primary install/configuration to
prepare for the Tempest tests, but DevStack is only one piece of that
picture.  When CI says they only test on Ubuntu LTS (I missed the letters
LTS in that quote too) that doesn't mean that DevStack is tested only on LTS.

There are a number of vendors performing their own CI testing for
configurations that are not covered by OpenStack CI tests, including other
distributions.

It may not be obvious to core developers, but for us newbies there is a lot
 of inconsistent and incomplete documentation of what to expect and how to
 do things --- and it is scattered in a variety of places, easy to miss
 some; it really is a drag on a beginner's time.  I am trying to point out
 and fix doc problems as I discover them, but am not myself so sure what all
 the right answers are.


Thanks for helping.  OpenStack is huge; I've got the benefit of having
learned it as it grew.  We need this kind of input from fresh eyes to help
flush out the inconsistencies that I know I am blind to.

dt

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Documenting test environments (was Re: supported dependency versioning and testing)

2014-02-21 Thread Joe Gordon
On Fri, Feb 21, 2014 at 8:33 AM, Dean Troyer dtro...@gmail.com wrote:
 [new thread for this...]

 On Fri, Feb 21, 2014 at 7:33 AM, Mark McLoughlin Perhaps rather than
 focusing on making this absolutely black and white,

 we should focus on better communicating what we actually focus our
 testing on? (i.e. rather than making the grey areas black, improve the
 white areas)

 Concretely, for every commit merged, we could publish:

We already do publish a lot of this, albeit not in an easily discoverable place


   - the set of commits tested

http://git.openstack.org/cgit/openstack/openstack

   - details of the jobs passed:
   - the distro
   - installed packages and versions

http://logs.openstack.org/47/74447/4/check/check-tempest-dsvm-full/fa3a5e0/logs/dpkg-l.txt.gz

   - output of pip freeze

http://logs.openstack.org/47/74447/4/check/check-tempest-dsvm-full/fa3a5e0/logs/pip-freeze.txt.gz

   - configuration used

http://logs.openstack.org/47/74447/4/check/check-tempest-dsvm-full/fa3a5e0/logs/localrc.txt.gz

   - tests passed

http://logs.openstack.org/47/74447/4/check/check-tempest-dsvm-full/fa3a5e0/logs/testr_results.html.gz



 Long ago I created tools/info.sh to document the DevStack environment as a
 troubleshooting tool.  Among other things it grabs information on the distro
 and system packages installed, pip-installed packages and repos and DevStack
 configuration (local.conf/localrc).

 The output is designed to be easily parsable, and with a bit of work I think
 it could provide the info above and be logged with the rest of the logfiles.

 Thoughts?

 dt

 --

 Dean Troyer
 dtro...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Joe Gordon
On Thu, Feb 20, 2014 at 7:10 AM, Sean Dague s...@dague.net wrote:
 On 02/20/2014 09:55 AM, Christopher Yeoh wrote:
 On Thu, 20 Feb 2014 08:22:57 -0500
 Sean Dague s...@dague.net wrote:

 We're also duplicating a lot of test and review energy in having 2 API
 stacks. Even before v3 has come out of experimental it's consumed a
 huge amount of review resource on both the Nova and Tempest sides to
 get it to its current state.

 So my feeling is that in order to get more energy and focus on the
 API, we need some kind of game plan to get us to a single API
 version, with a single data payload in L (or on the outside, M). If
 the decision is v2 must be in both those releases (and possibly
 beyond), then it seems like asking other hard questions.

 * why do a v3 at all? instead do we figure out a way to be able to
 evolve v2 in a backwards compatible way.

 So there's lots of changes (cleanups) made between v2 and v3 which are
 really not possible to do in a backwards compatible way. One example
 is that we're a lot stricter and consistent on input validation in v3
 than v2 which is better both from a user and server point of view.
 Another is that the tasks API would be a lot uglier and really look
 bolted on if we tried to do so. Also doing so doesn't actually reduce
 the test load as if we're still supporting the old 'look' of the api we
 still need to test for it separately to the new 'look' even if we don't
 bump the api major version.

 In terms of code sharing (and we've experimented a bit with this for
 v2/v3) I think in most cases ends up actually being easier having two
 quite completely separate trees because it ends up diverging so much
 that having if statements around everywhere to handle the different
 cases is actually a higher maintenance burden (much harder to read)
 than just knowing that you might have to make changes in two quite
 separate places.

 * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
 that working around all that code isn't a velocity inhibitor in the
 cleanups required in v2? Because some of the crazy hacks that exist to
 make XML structures work for the json in v2 is kind of special.

 So I don't think we can do that for similar reasons we can't just drop
 V2 after a couple of cycles. We should be encouraging people off, not
 forcing them off.

 This big bang approach to API development may just have run its
 course, and no longer be a useful development model. Which is good to
 find out. Would have been nice to find out earlier... but not all
 lessons are easy or cheap. :)

 So I think what the v3 gives us is much more consistent and clean
 API base to start from. It's a clean break from the past. But we have to
 be much more careful about any future API changes/enhancements than we
 traditionally have done in the past especially with any changes which
 affect the core. I think we've already significantly raised the quality
 bar in what we allow for both v2 and v3 in Icehouse compared to previous
 releases (those frustrated with trying to get API changes in will
 probably agree) but I'd like us to get even stricter about it in the
 future because getting it wrong in the API design has a MUCH higher
 long term impact than bugs in most other areas. Requiring an API spec
 upfront (and reviewing it) with a blueprint for any new API features
 should IMO be compulsory before a blueprint is approved.

 Also micro and extension versioning is not the magic bullet which will
 get us out of trouble in the future. Especially with the core changes.
 Because even though versioning allows us to make changes, for similar
 reasons to not being able to just drop V2 after a couple of cycles
 we'll still need to keep supporting (and testing) the old behaviour for
 a significant period of time (we have often quietly ignored
 this issue in the past).

 Ultimately the only way to free ourselves from the maintenance of two
 API versions (and I'll claim this is rather misleading as it actually
 has more dimensions to it than this) is to convince users to move from
 the V2 API to the new one. And it doesn't make much difference
 whether we call it V3 or V2.1 we still have very similar maintenance
 burdens if we want to make the sorts of API changes that we have done
 for V3.

 I want to flip this a little bit around. As an API consumer for an
 upstream service I actually get excited when they announce a new version
 and give me some new nobs to play with. Often times I'll even email
 providers asking for certain API interfaces get exposed.

 I do think we need to actually start from the end goal and work
 backwards. My assumption is that 1 API vs. with 1 Data Format in L/M is
 our end goal. I think that there are huge technical debt costs with
 anything else. Our current course and speed makes us have 3 APIs/Formats
 in that time frame.


++, I think 1 API in L/M is a great goal


 There is no easy way out of this, but I think that the current course
 and speed inhibits us in a 

Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Joe Gordon
On Wed, Feb 19, 2014 at 9:36 AM, Russell Bryant rbry...@redhat.com wrote:
 Greetings,

 The v3 API effort has been going for a few release cycles now.  As we
 approach the Icehouse release, we are faced with the following question:
 Is it time to mark v3 stable?

 My opinion is that I think we need to leave v3 marked as experimental
 for Icehouse.

 There are a number of reasons for this:

 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
 last week made me come to the realization that v2 won't be going away
 *any* time soon.  In some cases, users have long term API support
 expectations (perhaps based on experience with EC2).  In the best case,
 we have to get all of the SDKs updated to the new API, and then get to
 the point where everyone is using a new enough version of all of these
 SDKs to use the new API.  I don't think that's going to be quick.

Unless we specifically work with SDKs I don't think they will support
V3 until we mark it as stable. So I think we are in a bit of a chicken
and egg situation.


 We really don't want to be in a situation where we're having to force
 any sort of migration to a new API.  The new API should be compelling
 enough that everyone *wants* to migrate to it.  If that's not the case,
 we haven't done our job.

 2) There's actually quite a bit still left on the existing v3 todo list.
  We have some notes here:

 https://etherpad.openstack.org/p/NovaV3APIDoneCriteria

 One thing is nova-network support.  Since nova-network is still not
 deprecated, we certainly can't deprecate the v2 API without nova-network
 support in v3.  We removed it from v3 assuming nova-network would be
 deprecated in time.

 Another issue is that we discussed the tasks API as the big new API
 feature we would include in v3.  Unfortunately, it's not going to be
 complete for Icehouse.  It's possible we may have some initial parts
 merged, but it's much smaller scope than what we originally envisioned.
  Without this, I honestly worry that there's not quite enough compelling
 functionality yet to encourage a lot of people to migrate.


Can we get more people to work on tasks and try to get it out in Icehouse?

If we want to go back to having only 1 API at a specific release in
the future, what about setting a deadline for ourselves to get v3 out
in Juno no matter what?

 3) v3 has taken a lot more time and a lot more effort than anyone
 thought.  This makes it even more important that we're not going to need
 a v4 any time soon.  Due to various things still not quite wrapped up,
 I'm just not confident enough that what we have is something we all feel
 is Nova's API of the future.


 Let's all take some time to reflect on what has happened with v3 so far
 and what it means for how we should move forward.  We can regroup for Juno.

 Finally, I would like to thank everyone who has helped with the effort
 so far.  Many hours have been put in to code and reviews for this.  I
 would like to specifically thank Christopher Yeoh for his work here.
 Chris has done an *enormous* amount of work on this and deserves credit
 for it.  He has taken on a task much bigger than anyone anticipated.
 Thanks, Chris!

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Armando M.
Nice one!

On 21 February 2014 11:22, Aaron Rosen aaronoro...@gmail.com wrote:
 This should fix the false positive for brocade:
 https://review.openstack.org/#/c/75486/

 Aaron


 On Fri, Feb 21, 2014 at 10:34 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Yesterday, I pushed a patch to review and was surprised that several of
 the third party CI systems reported back that the patch-set worked where it
 definitely shouldn't have. Anyways, I tested out my theory a little more and
 it turns out a few of the 3rd party CI systems for neutron are just
 returning  SUCCESS even if the patch set didn't run successfully
 (https://review.openstack.org/#/c/75304/).

 Here's a short summary of what I found.

 Hyper-V CI -- This seems like an easy fix as it's posting build
 succeeded but also puts to the side test run failed. Would probably be a
 good idea to remove the build succeeded message to avoid any confusion.


 Brocade CI - From the log files it posts it shows that it tries to apply
 my patch but fails:

 2014-02-20 20:23:48 + cd /opt/stack/neutron
 2014-02-20 20:23:48 + git fetch
 https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
 2014-02-20 20:24:00  * branchrefs/changes/04/75304/1 -
 FETCH_HEAD
 2014-02-20 20:24:00 + git checkout FETCH_HEAD
 2014-02-20 20:24:00 error: Your local changes to the following files would
 be overwritten by checkout:
 2014-02-20 20:24:00  etc/neutron/plugins/ml2/ml2_conf_brocade.ini
 2014-02-20 20:24:00
  neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
 2014-02-20 20:24:00 Please, commit your changes or stash them before you
 can switch branches.
 2014-02-20 20:24:00 Aborting
 2014-02-20 20:24:00 + cd /opt/stack/neutron

 but still continues running (without my patchset) and reports success. --
 This actually looks like a devstack bug (I'll check it out).
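The usual cure for that failure mode is to discard local modifications before checking out the fetched ref. A small self-contained demonstration with plain git in a throwaway repo (the real devstack fix may of course look different):

```shell
#!/bin/sh
# Reproduce the failure mode from the log above in a throwaway repo, then
# show that discarding local changes first lets the checkout succeed.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI
echo base > file.ini
git add file.ini && git commit -qm base
git checkout -qb patchset          # stands in for the fetched change ref
echo change >> file.ini
git commit -qam patchset
git checkout -q -                  # back to the original branch
echo local-edit >> file.ini        # stale local change, as on the CI node
if git checkout -q patchset 2>/dev/null; then
    echo "checkout unexpectedly succeeded"
else
    echo "checkout aborted, as in the Brocade log"
fi
git reset --hard -q                # discard the local changes first...
git checkout -q patchset           # ...and now the checkout succeeds
```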

 PLUMgrid CI - Seems to always vote +1 without a failure
 (https://review.openstack.org/#/dashboard/10117) though the logs are private
 so we can't really tell what's going on.

 I was thinking it might be worthwhile or helpful to have a job that tests
 that CI actually fails when we expect it to.

 Best,

 Aaron



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-21 Thread Ben Nemec
 

On 2014-02-21 13:01, Mike Spreitzer wrote: 

 https://bugs.launchpad.net/devstack/+bug/1203680 [1] is literally about 
 Glance but Nova has the same problem. There is a fix released, but just 
 merging that fix accomplishes nothing --- we need people who run DevStack to 
 set the new variable (INSTALL_TESTONLY_PACKAGES). This is something that 
 needs to be documented (in http://devstack.org/configuration.html [2] and all 
 the places that tell people how to do unit testing, for examples), so that 
 people know to do it, right?

IMHO, that should be enabled by default. Every developer using devstack
is going to want to run unit tests at some point (or should anyway...),
and if the gate doesn't want the extra install time for something like
tempest that probably doesn't need these packages, then it's much
simpler to disable it in that one config instead of every separate
config used by every developer. 
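For reference, the variable in question is a one-line devstack setting (in localrc, or the [[local|localrc]] section of local.conf):

```
INSTALL_TESTONLY_PACKAGES=True
```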

-Ben 
 

Links:
--
[1] https://bugs.launchpad.net/devstack/+bug/1203680
[2] http://devstack.org/configuration.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-21 Thread Joe Gordon
On Thu, Feb 20, 2014 at 9:38 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 2/19/2014 4:05 PM, Matt Riedemann wrote:

 The os-hosts OS API extension [1] showed up before I was working on the
 project and I see that only the VMware and XenAPI drivers implement it,
 but was wondering why the libvirt driver doesn't - either no one wants
 it, or there is some technical reason behind not implementing it for
 that driver?


If I remember correctly, maintenance mode is a special thing in Xen.


 [1]

 http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html



 By the way, am I missing something when I think that this extension is
 already covered if you're:

 1. Looking to get the node out of the scheduling loop, you can just disable
 it with os-services/disable?

 2. Looking to evacuate instances off a failed host (or one that's in
 maintenance mode), just use the evacuate server action.

I don't think you're missing anything.
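In pseudo-CLI terms, the two alternatives above look like this (command names from the nova client of this era; exact flags may differ):

```
# 1. take the host out of the scheduling loop
nova service-disable <host> nova-compute

# 2. move an instance off a failed or in-maintenance host
nova evacuate <server> <target_host> --on-shared-storage
```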



 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-21 Thread Ben Nemec
 

/me finally catches up on -dev list traffic... 

On 2014-02-19 20:27, Doug Hellmann wrote: 

 On Wed, Feb 19, 2014 at 8:13 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 Hi All,
 
 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.
 
 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated efforts and wildly out of date code. Part of the challenges
 today are backwards incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.
 
 Thank you for volunteering to spear-head this, Joe.

+1 

 To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date. I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing 
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
 * Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)
 
 I am looking for volunteers to help with this effort, any takers?
 
 I will help, especially with reviews and tracking.

I'm happy to help as well. I always try to help with oslo syncs any time
I'm made aware of problems anyway. 

What is our first step here? Get the low-hanging fruit syncs proposed
all at once? Do them individually (taking into consideration the module
deps, of course)? If we're going to try to get this done for Icehouse
then we probably need to start ASAP. 

-Ben 

 We are going to want someone from the team working on the db modules to 
 participate as well, since we know that's one area where the API has diverged 
 some (although we did take backwards compatibility into account). Victor, can 
 you help find us a volunteer? 
 
 Doug 
 
 best,
 Joe Gordon
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [1]
 
 ___
 OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [1]

 

Links:
--
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] False Positive testing for 3rd party CI

2014-02-21 Thread Carl Baldwin
Aaron,

I was thinking the same thing recently with this patch [1].  Patch
sets 1-5 should have failed for any plugin besides ml2 yet some passed
and I wondered how that could happen.  Kudos to those patches that
failed my patch sets correctly.

Carl

[1] https://review.openstack.org/#/c/72565/

On Fri, Feb 21, 2014 at 11:34 AM, Aaron Rosen aaronoro...@gmail.com wrote:
 Hi,

 Yesterday, I pushed a patch to review and was surprised that several of the
 third party CI systems reported back that the patch-set worked where it
 definitely shouldn't have. Anyways, I tested out my theory a little more and
 it turns out a few of the 3rd party CI systems for neutron are just
 returning  SUCCESS even if the patch set didn't run successfully
 (https://review.openstack.org/#/c/75304/).

  Here's a short summary of what I found.

 Hyper-V CI -- This seems like an easy fix as it's posting build succeeded
 but also puts to the side test run failed. Would probably be a good idea
 to remove the build succeeded message to avoid any confusion.


 Brocade CI - From the log files it posts it shows that it tries to apply my
 patch but fails:

 2014-02-20 20:23:48 + cd /opt/stack/neutron
 2014-02-20 20:23:48 + git fetch
 https://review.openstack.org/openstack/neutron.git refs/changes/04/75304/1
 2014-02-20 20:24:00 From https://review.openstack.org/openstack/neutron
 2014-02-20 20:24:00  * branchrefs/changes/04/75304/1 -
 FETCH_HEAD
 2014-02-20 20:24:00 + git checkout FETCH_HEAD
 2014-02-20 20:24:00 error: Your local changes to the following files would
 be overwritten by checkout:
 2014-02-20 20:24:00   etc/neutron/plugins/ml2/ml2_conf_brocade.ini
 2014-02-20 20:24:00
   neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py
 2014-02-20 20:24:00 Please, commit your changes or stash them before you can
 switch branches.
 2014-02-20 20:24:00 Aborting
 2014-02-20 20:24:00 + cd /opt/stack/neutron

  but still continues running (without my patchset) and reports success. --
  This actually looks like a devstack bug (I'll check it out).

 PLUMgrid CI - Seems to always vote +1 without a failure
 (https://review.openstack.org/#/dashboard/10117) though the logs are private
  so we can't really tell what's going on.

  I was thinking it might be worthwhile or helpful to have a job that tests
  that CI actually fails when we expect it to.

 Best,

 Aaron


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Christopher Yeoh
On Fri, 21 Feb 2014 13:03:31 -0800
Joe Gordon joe.gord...@gmail.com wrote:
  1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
  last week made me come to the realization that v2 won't be going
  away *any* time soon.  In some cases, users have long term API
  support expectations (perhaps based on experience with EC2).  In
  the best case, we have to get all of the SDKs updated to the new
  API, and then get to the point where everyone is using a new enough
  version of all of these SDKs to use the new API.  I don't think
  that's going to be quick.
 
 Unless we specifically work with SDKs I don't think they will support
 V3 until we mark it as stable. So I think we are in a bit of a chicken
 and egg situation.

Yes, and we also seem to be concerned about something (people not moving
off the V2 API in a reasonable time) which *might* happen, and that is
not something we can influence. Also, the longer we leave the V2 API as
the only supported version, the bigger the problem becomes.

  We really don't want to be in a situation where we're having to
  force any sort of migration to a new API.  The new API should be
  compelling enough that everyone *wants* to migrate to it.  If
  that's not the case, we haven't done our job.
 
  2) There's actually quite a bit still left on the existing v3 todo
  list. We have some notes here:
 
  https://etherpad.openstack.org/p/NovaV3APIDoneCriteria
 
  One thing is nova-network support.  Since nova-network is still not
  deprecated, we certainly can't deprecate the v2 API without
  nova-network support in v3.  We removed it from v3 assuming
  nova-network would be deprecated in time.
 
  Another issue is that we discussed the tasks API as the big new API
  feature we would include in v3.  Unfortunately, it's not going to be
  complete for Icehouse.  It's possible we may have some initial parts
  merged, but it's much smaller scope than what we originally
  envisioned. Without this, I honestly worry that there's not quite
  enough compelling functionality yet to encourage a lot of people to
  migrate.
 
 
 Can we get more people to work on tasks and try to get it out in
 Icehouse?

It would most likely mean having quite a few feature freeze exceptions.

 If we want to go back to having only 1 API at a specific release in
 the future, what about setting a deadline for ourselves to get v3 out
 in Juno no matter what?

Well I think we should AND keep the scope of V3 API changes as small as
possible (eg tasks and nova-network). 

If we take the route of not releasing the V3 API and trying to backport
features we like from it we actually lose a lot of improvements. Eg
cleaning up the API so its consistent to users can't really be done in
a backward compatible way. Also input validation improvements because
we'll start breaking apps. 

On the other hand, if we start saying that it's ok to start making
backwards incompatible changes to fix these sorts of issues, then I
think it's not that different to just setting a reasonable deprecation
deadline for the V2 API and telling people to move to the V3 API.

In terms of reducing future code duplication issues, we could get the V2
API code to be able to load V3 plugins. Where the interface is exactly
the same (and this would be true for new plugins and some of the more
recent ones) we don't need a V2 version, just a V3 version. This assumes
XML support is deprecated in Juno, which it now is.

If we wanted to, we could even slowly break the V2 API (in, say,
extensions which are rarely used) so it's slowly replaced by the V3
API, by marking each part as deprecated and then eventually getting the
V2 API to load the V3 version.

This probably wouldn't work for the core of the V2 API, as that's the
same as just saying move to the V3 API. But if the maintenance burden
in Nova is the big issue and we *really* want to keep supporting the
V2 API, then we could create a translating V2-to-V3 proxy service. So
after what we believe is a reasonable warning period, rip out the V2
API and replace it with a proxy which converts V2 requests to V3 ones
and does V3-to-V2 translation for responses. Not as efficient, but it
removes a lot of the maintenance overhead from Nova itself.
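
The proxy idea above can be pictured as a thin translation layer: rewrite the incoming V2 request into its V3 equivalent, dispatch it, then map the response back. The path and field mappings below are invented for illustration and are not real Nova code:

```python
# Hypothetical sketch of a V2 -> V3 proxy. The mappings are illustrative
# placeholders; a real proxy would need a complete translation table.

V2_TO_V3_PATHS = {
    "/v2/servers": "/v3/servers",
    "/v2/os-keypairs": "/v3/keypairs",   # example: extension renamed in V3
}

V3_TO_V2_FIELDS = {
    "key_name": "keyName",  # illustrative field rename only
}


def proxy_v2_request(v3_client, method, path, body=None):
    """Translate a V2 request to V3, dispatch it, and map the response back."""
    # Fall back to a plain version-prefix rewrite for unmapped paths.
    v3_path = V2_TO_V3_PATHS.get(path, path.replace("/v2/", "/v3/", 1))
    status, v3_body = v3_client(method, v3_path, body)
    # Rename response fields back to their V2 spellings.
    v2_body = {V3_TO_V2_FIELDS.get(k, k): v for k, v in v3_body.items()}
    return status, v2_body
```

The hard part in practice would be the translation tables, which have to cover every renamed field, moved extension, and status-code difference between the two APIs.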

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] Meeting minutes, and Next Steps/new meeting time.

2014-02-21 Thread Brian Curtin
On Thu, Feb 20, 2014 at 12:02 PM, Jesse Noller
jesse.nol...@rackspace.com wrote:
 Hi Everyone;

 Our first python-openstacksdk meeting was awesome, and I really want to thank 
 everyone who came, and Doug for teaching me the meeting bot :)

 Minutes:
 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.html
 Minutes 
 (text):http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.txt
 Log:
 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.log.html

 Note that coming out of this we will be moving the meetings to Tuesdays, 
 19:00 UTC / 1pm CST starting on Tuesday March 4th. Next week there will not 
 be a meeting while we discuss and flesh out next steps and requested items 
 (API, names, extensions and internal HTTP API).

 If you want to participate: please join us on freenode: #openstack-sdks

 https://wiki.openstack.org/wiki/PythonOpenStackSDK

As was discussed in the meeting, that page is now moved under and
referenced from https://wiki.openstack.org/wiki/SDK-Development (also
linked from /SDKs). There's a redirect in place on the previous
location, but it now lives at
https://wiki.openstack.org/wiki/SDK-Development/PythonOpenStackSDK



Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-21 Thread Clark Boylan
On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-02-21 13:01, Mike Spreitzer wrote:

 https://bugs.launchpad.net/devstack/+bug/1203680 is literally about Glance
 but Nova has the same problem.  There is a fix released, but just merging
 that fix accomplishes nothing --- we need people who run DevStack to set the
 new variable (INSTALL_TESTONLY_PACKAGES).  This is something that needs to
 be documented (in http://devstack.org/configuration.html and all the places
 that tell people how to do unit testing, for example), so that people know
 to do it, right?



 IMHO, that should be enabled by default.  Every developer using devstack is
 going to want to run unit tests at some point (or should anyway...), and if
 the gate doesn't want the extra install time for something like tempest that
 probably doesn't need these packages, then it's much simpler to disable it
 in that one config instead of every separate config used by every developer.

 -Ben


I would be wary of relying on devstack to configure your unittest
environments. Just like it takes over the node you run it on, devstack
takes full ownership of the repos it clones and will do potentially
lossy things like `git reset --hard` when you don't expect it to. +1
to documenting the requirements for unittesting, not sure I would
include devstack in that documentation.

Clark
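
For anyone who does want DevStack to install the test-only dependencies, the fix referenced above gates them behind a single localrc setting:

```
INSTALL_TESTONLY_PACKAGES=True
```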



Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Matt Riedemann



On 2/21/2014 1:53 AM, Christopher Yeoh wrote:

On Fri, 21 Feb 2014 06:53:11 +
Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:


-Original Message-
From: Christopher Yeoh [mailto:cbky...@gmail.com]
Sent: Thursday, February 20, 2014 11:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] v3 API in Icehouse

On Wed, 19 Feb 2014 12:36:46 -0500
Russell Bryant rbry...@redhat.com wrote:


Greetings,

The v3 API effort has been going for a few release cycles now.
As we approach the Icehouse release, we are faced with the
following question: Is it time to mark v3 stable?

My opinion is that I think we need to leave v3 marked as
experimental for Icehouse.



Although I'm very eager to get the V3 API released, I do agree with
you. As you have said we will be living with both the V2 and V3
APIs for a very long time. And at this point there would be simply
too many last minute changes to the V3 API for us to be confident
that we have it right enough to release as a stable API.


Through v3 API development, we have found a lot of input validation
problems in the existing v2 API, but we have concentrated on v3 API
development without fixing them.

After the Icehouse release, the v2 API will still be CURRENT and the v3
API will be EXPERIMENTAL. So should we fix the v2 API problems as well
in the remaining Icehouse cycle?



So bug fixes are certainly fine, with the usual caveats around backwards
compatibility (I think there are a few in there that aren't
backwards compatible, especially those that fall into the category of
making the API more consistent).

https://wiki.openstack.org/wiki/APIChangeGuidelines

Chris




We also need to circle back to the issues/debates around what to do with 
the related bug(s) and how to handle something like this in V2 now with 
respect to proxying to neutron (granted that my premise in the last 
comment may be off a bit now):


https://review.openstack.org/#/c/43822/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-21 Thread Christopher Yeoh
On Sat, Feb 22, 2014 at 9:04 AM, Matt Riedemann
mrie...@linux.vnet.ibm.comwrote:



 On 2/21/2014 1:53 AM, Christopher Yeoh wrote:

 On Fri, 21 Feb 2014 06:53:11 +
 Kenichi Oomichi oomi...@mxs.nes.nec.co.jp wrote:

  -Original Message-
 From: Christopher Yeoh [mailto:cbky...@gmail.com]
 Sent: Thursday, February 20, 2014 11:44 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Nova] v3 API in Icehouse

 On Wed, 19 Feb 2014 12:36:46 -0500
 Russell Bryant rbry...@redhat.com wrote:

  Greetings,

 The v3 API effort has been going for a few release cycles now.
 As we approach the Icehouse release, we are faced with the
 following question: Is it time to mark v3 stable?

 My opinion is that I think we need to leave v3 marked as
 experimental for Icehouse.


 Although I'm very eager to get the V3 API released, I do agree with
 you. As you have said we will be living with both the V2 and V3
 APIs for a very long time. And at this point there would be simply
 too many last minute changes to the V3 API for us to be confident
 that we have it right enough to release as a stable API.


 Through v3 API development, we have found a lot of input validation
 problems in the existing v2 API, but we have concentrated on v3 API
 development without fixing them.

 After the Icehouse release, the v2 API will still be CURRENT and the v3
 API will be EXPERIMENTAL. So should we fix the v2 API problems as well
 in the remaining Icehouse cycle?


 So bug fixes are certainly fine, with the usual caveats around backwards
 compatibility (I think there are a few in there that aren't
 backwards compatible, especially those that fall into the category of
 making the API more consistent).

 https://wiki.openstack.org/wiki/APIChangeGuidelines

 Chris



 We also need to circle back to the issues/debates around what to do with
 the related bug(s) and how to handle something like this in V2 now with
 respect to proxying to neutron (granted that my premise in the last comment
 may be off a bit now):

 https://review.openstack.org/#/c/43822/


So this is something that sort of sits between a bug and a feature
improvement. Like other parts of the V2 API, it's an area that wasn't
upgraded when neutron support was added. And in other cases new features
were added with only nova-network support.

Chris


Re: [openstack-dev] [nova] pci device hotplug

2014-02-21 Thread yunhong jiang
On Mon, 2014-02-17 at 06:43 +, Gouzongmei wrote:
 Hello,
 
  
 
 In the current PCI passthrough implementation, a PCI device is only
 allowed to be assigned to an instance while the instance is being
 created; it is not allowed to be assigned to or removed from an instance
 while the instance is running or stopped.

 Besides, I noticed that the basic ability--removing a PCI device from
 an instance (not by deleting the flavor)--has never been implemented or
 proposed by anyone.
 
 The current implementation:
 
 https://wiki.openstack.org/wiki/Pci_passthrough
 
  
 
 I have tested NIC hotplug in my experimental environment; it's
 supported by the latest libvirt and qemu.

 My question is: why has PCI device hotplug not been proposed in
 OpenStack until now, and is anyone planning to work on it?

Agreed that PCI hotplug is an important feature. The reason there is no
support yet is bandwidth: the folks working on PCI have spent a lot of
time on the SR-IOV NIC discussion.

--jyh





Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-21 Thread Sean Dague
On 02/21/2014 05:28 PM, Clark Boylan wrote:
 On Fri, Feb 21, 2014 at 1:00 PM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-02-21 13:01, Mike Spreitzer wrote:

 https://bugs.launchpad.net/devstack/+bug/1203680 is literally about Glance
 but Nova has the same problem.  There is a fix released, but just merging
 that fix accomplishes nothing --- we need people who run DevStack to set the
 new variable (INSTALL_TESTONLY_PACKAGES).  This is something that needs to
 be documented (in http://devstack.org/configuration.html and all the places
 that tell people how to do unit testing, for example), so that people know
 to do it, right?



 IMHO, that should be enabled by default.  Every developer using devstack is
 going to want to run unit tests at some point (or should anyway...), and if
 the gate doesn't want the extra install time for something like tempest that
 probably doesn't need these packages, then it's much simpler to disable it
 in that one config instead of every separate config used by every developer.

 -Ben

 
 I would be wary of relying on devstack to configure your unittest
 environments. Just like it takes over the node you run it on, devstack
 takes full ownership of the repos it clones and will do potentially
 lossy things like `git reset --hard` when you don't expect it to. +1
 to documenting the requirements for unittesting, not sure I would
 include devstack in that documentation.

Agreed, I never run unit tests in the devstack tree. I run them on my
laptop or other non-dedicated computers. That's why we run unit tests in
virtual envs; they don't need a full environment.

Also many of the unit tests can't be run when openstack services are
actually running, because they try to bind to ports that openstack
services use.

It's one of the reasons I've never considered that path a priority in
devstack.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Murano] Heat resource isolation withing single stack

2014-02-21 Thread Stan Lagun
Steve, thank you for the very valuable suggestions. Your blog post is
really great - I've read about environments in the Heat documentation but
didn't really understand them until now.

Usage of nested stacks may or may not solve my problem, depending on what
is possible to do within those stacks.
Let me explain with a simple example.

As you probably know, Murano uses Heat for all infrastructure-related
operations. This means that if some application from the Catalog needs a
VM instance or any other type of OpenStack resource, it creates it by
inserting a snippet into the user's Heat stack template and executing an
UPDATE STACK command.

Now suppose there is a WordPress application published in the App Catalog.
The WordPress app manifest says that it requires installation of MySql.
There is also another application in the App Catalog called GaleraMySql
that is known to be compatible with MySql. In the Murano Dashboard the
user creates a new environment (this corresponds to a Heat stack and is
not related to what is called an environment in Heat)
and puts WordPress and GaleraMySql in it. Then he connects them so that
the GaleraMySql instance would be used by WordPress for its MySql
requirement.

WordPress and GaleraMySql were developed by different vendors that are not
aware of each other's presence. But because of an unfortunate combination
of circumstances, both vendors chose to merge exactly the same snippet
into the user's stack:

{
    "Resources": {
        "myHost": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "large",
                "ImageId": "someImage"
            }
        }
    }
}

Then instead of 2 different VMs there would be only one. Things would be
even worse if there were already a resource named myHost in the user's
stack. It is more than a name-collision problem, as an incorrectly written
application manifest can cause any imaginable harm to the stack.

The obvious solution would be to give each app a dedicated nested stack
and restrict it to that nested stack only. This would be the best
solution. All I need is the same level of control over a nested stack
that I have over the outer stack: get the stack template, modify and
update it, and access output attributes. Is it possible to retrieve a
nested stack's template, modify it, and push it back to Heat?

Another option would be to create separate top-level stacks for each app.
But in Murano applications are themselves composed of smaller parts, and
in practice this would lead to the creation of dozens of stacks, most of
them containing a single resource. And then we would have to implement
transactional updates across several stacks, coordinated deletion, etc.
This would also be bad from a user's point of view, as he doesn't expect
to find a long list of stacks with no idea where they came from.

My other option was to emulate nested stacks on top of a single stack by
tracking which app created which resource and dynamically adjusting
resource names back and forth (myHost in the example above) to unique
values in a way that is opaque to the application.
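
That last option (transparent renaming within a single stack) could look roughly like the sketch below; the prefixing scheme and the helper are invented for illustration and are not Murano code:

```python
# Illustrative sketch: namespace each app's resources inside one shared
# CFN-style template by prefixing resource names with the app instance id,
# and rewrite intra-template references ({"Ref": ...}) to match.

def namespace_resources(template, app_id):
    """Return a copy of `template` with resource names made unique per app."""
    mapping = {name: "%s-%s" % (app_id, name)
               for name in template.get("Resources", {})}

    def rewrite(node):
        # Replace {"Ref": "oldName"} nodes that point at this app's resources.
        if isinstance(node, dict):
            if list(node) == ["Ref"] and node["Ref"] in mapping:
                return {"Ref": mapping[node["Ref"]]}
            return {k: rewrite(v) for k, v in node.items()}
        if isinstance(node, list):
            return [rewrite(item) for item in node]
        return node

    namespaced = rewrite(template)
    # Rename the resource keys themselves.
    namespaced["Resources"] = {
        mapping[name]: body
        for name, body in namespaced.get("Resources", {}).items()
    }
    return namespaced
```

Reversing the mapping when reading the stack back would keep the renaming invisible to the application, which is exactly the opacity described above.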


On Fri, Feb 21, 2014 at 8:20 PM, Steven Hardy sha...@redhat.com wrote:

 On Fri, Feb 21, 2014 at 06:37:27PM +0400, Stan Lagun wrote:
  Hi Everyone,
 
  While looking through Heat templates generation code in Murano I've
  realized it has a major design flaw: there is no isolation between Heat
  resources generated by different apps.

 Can you define the requirement for isolation in more detail?  Are you
 referring simply to namespace isolation, or do you need auth level
 isolation, e.g something enforced via keystone?

  Every app manifest can access and modify its environment stack in any
 way.
  For example it can delete instances and other resources belonging to
 other
  applications. This may be not so bad for Murano 0.4 but it becomes
 critical
  for AppCatalog (0.5) as there is no trust relations between applications
  and it may be unacceptable that untrusted application can gain complete
  write access over the whole stack.

 All requests to Heat are scoped by tenant/project, so unless you enforce
 resource-level access policy (which we sort-of started looking at with
 OS::Heat::AccessPolicy), this is expected behavior.

  There is also a problem of name collisions - resources generated by
  different applications may have the same names. This is especially
 probable
  between resources generated by different instances of the same app. This
  also affects Parameters/Output of Heat templates as each application
  instance must generate unique names for them (and do not forget them
 later
  as they are needed to read output results).

 A hierarchy of nested stacks, with each application defined as a separate
 stack seems the obvious solution here.

  I think we need at least to know how we going to solve it before 0.5
 
  Here is possible directions i can think of:
 
  1. Use nested Heat stacks. I'm not sure it solves naming collisions and
  that nested stacks can have their own Output

 I think it does, and yes all stacks can have their own outputs, 

Re: [openstack-dev] [Nova][glance] Question about evacuate with no shared storage..

2014-02-21 Thread Sangeeta Singh
So we have to use the block-migrate flag in the live-migrate command set.
Also, which is the minimum libvirt version that supports this? We use
libvirt-0.10.2-29.

Thanks for the pointer to the patch. I will check that out.

Sangeeta
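
For what it's worth, the block-migration path can be driven through python-novaclient along these lines; the wrapper function is mine, and whether it works depends on the libvirt version and configuration in use:

```python
# Sketch: move a VM off a failing hypervisor without shared storage by using
# live migration with block migration (local disks are copied over the
# network). The helper is illustrative; the live_migrate call mirrors the
# python-novaclient servers API.

def block_migrate(nova, server_id, target_host):
    server = nova.servers.get(server_id)
    # block_migration=True: copy local disks, so no shared storage is needed.
    # disk_over_commit=False: refuse if the target lacks real disk capacity.
    nova.servers.live_migrate(server, target_host,
                              block_migration=True,
                              disk_over_commit=False)
    return server
```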

On 2/21/14, 9:38 AM, Joe Gordon joe.gord...@gmail.com wrote:

On Thu, Feb 20, 2014 at 9:01 PM, Sangeeta Singh sin...@yahoo-inc.com
wrote:
 Hi,

 At my organization we do not use a shared storage for VM disks  but
need to
 evacuate VMs  from a HV that is down or having problems to another HV.
The
 evacuate command only allows the evacuated VM to have the base image.
What I
 am interested in is to create a snapshot of the VM on the down HV and
then
 be able to use the evacuate command by specifying the snapshot for the
 image.

libvirt supports live migration without any shared storage. TripleO
has been testing it out using this patch
https://review.openstack.org/#/c/74600/


 Has anyone had such a use case? Is there a command that uses snapshots
in
 this way to recreate VM on a new HV.

 Thanks for the pointers.

 Sangeeta







Re: [openstack-dev] Incubation Request: Murano

2014-02-21 Thread Mark Washenberger
Hi Georgy,

Thanks for all your efforts putting this together.

In the incubation request, one of the proposals is to include Murano under
an expanded scope of the Images program, renaming it the Catalog program.
I've been extremely pleased with the help of you and your colleagues in
helping to define the broader role for Glance as a more general artifact
repository. However, the proposal to bring all of Murano under the expanded
Images program strains my current understanding of how Images needs to
expand in scope.

Prior to this email, I was imagining that we would expand the Images
program to go beyond storing just block device images, and into more
structured items like whole Nova instance templates, Heat templates, and
Murano packages. In this scheme, Glance would know everything there is to
know about a resource--its type, format, location, size, and relationships
to other resources--but it would not know or offer any links for how a
resource is to be used.

For example, Glance would know the virtual size, the storage format, and
all the data associated with a disk image. But it would not necessarily
know anything about a user's ability to either boot that disk image in Nova
or to populate a Cinder volume with the image data.

I think you make a very good point, however. In an orchestrated view of the
cloud, the most usable approach is to have links directly from a resource
to the actions you can perform with the resource. In pseudocode,
image.boot() rather than nova.boot(image). In this more expansive view of
the Catalog, I think it would make sense to include Murano entirely as part
of the Catalog program.
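
The contrast in that pseudocode can be made concrete with toy classes; none of this is Glance or Nova code, just an illustration of the two API shapes:

```python
# Service-centric vs resource-centric APIs, as toy stand-ins.

class Nova:
    # Service-centric: the consuming service takes the resource as an argument.
    def boot(self, image, flavor):
        return "instance-of-%s-%s" % (image.name, flavor)

class Image:
    def __init__(self, name, nova):
        self.name = name
        self._nova = nova

    # Resource-centric: the catalog entry itself exposes the action,
    # delegating to the service under the hood.
    def boot(self, flavor):
        return self._nova.boot(self, flavor)

nova = Nova()
image = Image("cirros", nova)
# Both styles produce the same instance; the difference is who owns the verb.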

However, this change seems to me to imply a significant architectural shift
for OpenStack in general, and I'm just not quite comfortable endorsing it.
I'm very eager to hear other opinions about this question--perhaps I am
simply not understanding the advantages.

In any case, I hope these notes help to frame the question of where Murano
can best fit.

Thanks again,
markwash


On Thu, Feb 20, 2014 at 10:35 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 All,

 Murano is the OpenStack Application Catalog service, which has been
 developed on StackForge for almost 11 months. Murano was presented at the
 HK summit on the unconference track, and now we would like to apply for
 incubation during the Juno release.

 As the first step we would like to get feedback from the TC on Murano's
 readiness from an OpenStack-processes standpoint, as well as open up a
 conversation around its mission and how it fits the OpenStack ecosystem.

 Murano incubation request form is here:
 https://wiki.openstack.org/wiki/Murano/Incubation

 As a part of the incubation request we are looking for advice from the TC
 on the governance model for Murano. Murano may potentially fit into the
 expanding scope of the Images program, if it is transformed into a Catalog
 program. It also potentially fits the Orchestration program, and as a
 third option there might be value in the creation of a new standalone
 Application Catalog program. We have a pros and cons analysis in the
 Murano Incubation request form.

 The Murano team has been working on Murano as a community project. All our
 code and bugs/specs are hosted in OpenStack Gerrit and Launchpad
 respectively. Unit tests and all pep8/hacking checks are run on the
 OpenStack Jenkins, and we have integration tests running on our own
 Jenkins server for each patch set. Murano also has all the necessary
 scripts for devstack integration. We have been holding weekly IRC meetings
 for the last 7 months, discussing architectural questions there and on the
 openstack-dev mailing list as well.

 Murano related information is here:

 Launchpad: https://launchpad.net/murano

 Murano Wiki page: https://wiki.openstack.org/wiki/Murano

 Murano Documentation: https://wiki.openstack.org/wiki/Murano/Documentation

 Murano IRC channel: #murano

 With this we would like to start the process of incubation application
 review.

 Thanks
 Georgy

 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284





Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-21 Thread Jay Lau
Thanks Serg and Alessandro for the detailed explanation, very helpful!

I will try to see if I can leverage something from
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
for Windows support.

Thanks,

Jay



2014-02-22 0:44 GMT+08:00 Alessandro Pilotti 
apilo...@cloudbasesolutions.com:

  Hi guys,

  Windows Heat templates are currently supported by using Cloudbase-Init.

  Here's the wiki document that I attached some weeks ago to the blueprint
 referenced in this thread: http://wiki.cloudbase.it/heat-windows
 There are a few open points that IMO require some discussion.

  One topic that deserves attention is what to do with the cfn-tools: we
 opted, for the moment, to use the AWS version ported to Heat, since those
 already contain the required Windows integration, but we are willing to
 contribute to the cfn-tools project if this still makes sense.

  Talking about Windows clusters, the main issue is that the typical
 Windows cluster configuration requires shared storage for the quorum, and
 Nova / Cinder don't allow attaching volumes to multiple instances,
 although there's a BP targeting this potential feature:
 https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

  There are solutions to work around this issue that we are putting in
 place in the templates, but shared volumes are an important requirement for
 providing proper support for most advanced Windows workloads on OpenStack.

  Talking about specific workloads, we are going to release very soon an
 initial set of templates with support for Active Directory, SQL Server,
 Exchange, Sharepoint and IIS.


  Alessandro



  On 20 Feb 2014, at 12:24, Alexander Tivelkov ativel...@mirantis.com
 wrote:

  Hi Jay,

  Windows support in Heat is being developed, but is not complete yet,
 afaik. You may already use Cloudbase Init to do the post-deploy actions on
 windows - check [1] for the details.

  Meanwhile, running a Windows cluster is a much more complicated task
 than just deploying a number of Windows instances (if I understand you
 correctly and you mean a Microsoft Failover Cluster, see [2]): to
 build it in the cloud you will have to execute quite a complex workflow
 after the nodes are actually deployed, which is not possible with Heat (at
 least for now).

  The Murano project ([3]) does this on top of Heat, as it was initially
 designed as Windows Data Center as a Service, so I suggest you take a
 look at it. You may also check this video ([4]), which demonstrates how
 Murano is used to deploy a failover cluster of Windows 2012 with a
 clustered MS SQL server on top of it.


  [1] http://wiki.cloudbase.it/heat-windows
 [2] http://technet.microsoft.com/library/hh831579
 [3] https://wiki.openstack.org/Murano
 [4] http://www.youtube.com/watch?v=Y_CmrZfKy18

  --
  Regards,
 Alexander Tivelkov


 On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau jay.lau@gmail.com wrote:


  Hi,

  Does HEAT support provisioning windows cluster?  If so, can I also use
 user-data to do some post install work for windows instance? Is there any
 example template for this?

  Thanks,

  Jay











-- 
Thanks,

Jay


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-21 Thread ChangBo Guo
2014-02-22 5:09 GMT+08:00 Ben Nemec openst...@nemebean.com:

  /me finally catches up on -dev list traffic...

 On 2014-02-19 20:27, Doug Hellmann wrote:




 On Wed, Feb 19, 2014 at 8:13 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 As many of you know most oslo-incubator code is wildly out of sync.
 Assuming we consider it a good idea to sync up oslo-incubator code
 before cutting Icehouse, then we have a problem.

 Today oslo-incubator code is synced in an ad-hoc manner, resulting in
 duplicated effort and wildly out-of-date code. Part of the challenge
 today is backwards-incompatible changes and new oslo bugs. I expect
 that once we get a single project to have an up to date oslo-incubator
 copy it will make syncing a second project significantly easier. So
 because I (hopefully) have some karma built up in nova, I would like
 to volunteer nova to be the guinea pig.


  Thank you for volunteering to spear-head this, Joe.

   +1

   To fix this I would like to propose starting an oslo-incubator/nova
 sync team. They would be responsible for getting nova's oslo code up
 to date.  I expect this work to involve:
 * Reviewing lots of oslo sync patches
 * Tracking the current sync patches
 * Syncing over the low hanging fruit, modules that work without changing
 nova.
 * Reporting bugs to oslo team
 * Working with oslo team to figure out how to deal with backwards
 incompatible changes
   * Update nova code or make oslo module backwards compatible
 * Track all this
 * Create a roadmap for other projects to follow (re: documentation)

 I am looking for volunteers to help with this effort, any takers?


  I will help, especially with reviews and tracking.

   I'm happy to help as well.  I always try to help with oslo syncs any
 time I'm made aware of problems anyway.

 What is our first step here?  Get the low-hanging fruit syncs proposed all
 at once?  Do them individually (taking into consideration the module deps,
 of course)?  If we're going to try to get this done for Icehouse then we
 probably need to start ASAP.

 -Ben

 I also would like to volunteer for the new team :)
 -gcb


-- 
ChangBo Guo(gcb)


[openstack-dev] [OpenStack][Ceilometer] http header with large token problem

2014-02-21 Thread ZhiQiang Fan
Hi, developers

There is a weird problem: when I try to verify the 8k HTTP header problem
for Ceilometer, I construct a very long (30k) token (using a real valid
PKI token as the front part, copied several times) and use curl to send a
request to the Ceilometer API v2 statistics interface, but ** it returns
200 OK with real data ** (I have already set token_cache_time to -1 and
restarted the ceilometer-api service).

So, my questions are:
* it should fail with 401; why 200 instead?
* why can Ceilometer (or Pecan) accept such a large HTTP header?
* is there an upper bound on HTTP header size in Ceilometer?
* if there is an upper bound, can we configure it? I cannot find any
related config option in /etc/ceilometer/ceilometer.conf
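
A minimal way to reproduce the probe outside curl is to build the oversized token and the raw request by hand; the host, port, and path below are assumptions, not the only valid values:

```python
# Sketch: build an oversized (~30k) token and the raw HTTP request that
# would carry it, to probe header-size limits. A real test would send this
# to the ceilometer-api service and inspect the status code.

def build_probe_request(token, host="127.0.0.1", port=8777,
                        path="/v2/meters/cpu/statistics"):
    """Return the raw HTTP/1.1 request text carrying the token header."""
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s:%d\r\n"
            "X-Auth-Token: %s\r\n"
            "\r\n") % (path, host, port, token)

# A repeated PKI token was used in the report; plain 'A's are enough to
# exercise the header-size limit.
token = "A" * 30000
request = build_probe_request(token)
```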

Any help?

Thanks


Re: [openstack-dev] bug 1203680 - fix requires doc

2014-02-21 Thread Mike Spreitzer
Sean Dague s...@dague.net wrote on 02/21/2014 06:09:18 PM:
 On 02/21/2014 05:28 PM, Clark Boylan wrote:
  ...
  I would be wary of relying on devstack to configure your unittest
  environments. Just like it takes over the node you run it on, devstack
  takes full ownership of the repos it clones and will do potentially
  lossy things like `git reset --hard` when you don't expect it to. +1
  to documenting the requirements for unittesting, not sure I would
  include devstack in that documentation.
 
 Agreed, I never run unit tests in the devstack tree. I run them on my
 laptop or other non dedicated computers. That's why we do unit tests in
 virtual envs, they don't need a full environment.
 
 Also many of the unit tests can't be run when openstack services are
 actually running, because they try to bind to ports that openstack
 services use.
 
 It's one of the reasons I've never considered that path a priority in
 devstack.
 
-Sean

OK, that's important news to me.  I thought DevStack was the recommended 
way to run unit tests; I have heard that from other developers and even 
read it in something I did not write (I also wrote it in a few places 
myself, thinking it was the answer).  So the recommended way to run the 
unit tests for a project is to git clone that project and then run `tox` 
in that project's directory, right?
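For concreteness, the workflow just described looks like the sketch below. The project name "nova" and the git URL are examples, and the tox.ini shown is an illustrative minimal one, not copied from any real project:

```shell
# The clone-then-tox workflow (commands shown as comments since they need
# network access and a checkout):
#
#   git clone https://git.openstack.org/openstack/nova.git
#   cd nova
#   tox -e py27        # unit tests in an isolated virtualenv
#   tox -e pep8        # style checks, if the project defines that env
#
# tox discovers its environments from the project's tox.ini; a minimal,
# illustrative example of what it reads:
cat > /tmp/tox.ini.example <<'EOF'
[tox]
envlist = py27,pep8

[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python -m testtools.run discover
EOF
grep -c '^\[' /tmp/tox.ini.example
```

The point of the virtualenv-per-environment design is exactly what Sean and Clark describe: the unit tests never touch system packages or running services, so no DevStack node is needed.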

http://docs.openstack.org/developer/nova/devref/development.environment.html 
and https://github.com/openstack/horizon/blob/master/README.rst say to use 
run_tests.sh.  Over in Heat-land, Angus is leading a campaign to 
exterminate run_tests.sh in favor of tox.  Am I getting conflicting 
answers because of outdated documentation, or differences between 
projects, or the fact that a sea-change is in progress, or ... ?

My personal interests right now are centered on Nova and Heat, but clearly 
the issues here are not limited to those two projects.

http://docs.openstack.org/developer/nova/devref/development.environment.html 
has several interesting features.  It tells me how to set up a development 
environment in Linux, and also tells me how to set up a development 
environment in Mac OS X.  Is the latter for real?  When I followed those 
instructions on my Mac and ran the unit tests, I got several failures.

http://docs.openstack.org/developer/nova/devref/development.environment.html 
has long, fiddly instructions for setting up a development environment on 
Linux; the Mac side is much simpler, leveraging the setup infrastructure 
for unit testing, which is available on Linux too!  Why not give the 
simple instructions for both platforms?

Is there a greased path for graduating from unit testing to 
integration/system tests, or would that be done by submitting my work for 
review and then cherry-picking it into a DevStack install?

I am glad to know I am not the only one who finds it odd that the fix to a 
unit testing problem is to make DevStack more flexible.  Why was bug 
1203680 not fixed by more direct means?  Is it because tox cannot do the 
sort of flexible system package installation that the DevStack scripting 
does?

IMHO, we need to either accept the existing fix and its implications 
(using DevStack to set up for unit testing) and document that in all the 
relevant places (which include the DevStack documentation), or switch to 
a different fix.

Regards,
Mike