Re: [openstack-dev] [nova] Questions about guest NUMA and memory binding policies

2014-03-04 Thread Liuji (Jeremy)
Hi Steve,

Thanks for your reply.

I didn't understand why the numa-aware-cpu-binding blueprint seemed to have
stalled until I read the two mails referenced in your reply.

The use case analysis in those mails is very clear, and the cases covered
are exactly what I am concerned about.
I agree that we shouldn't expose the pCPU/vCPU mapping to the end user, and
that how to present these capabilities to the user needs more consideration.

The use cases I care most about are exclusive pCPU use (pCPU:vCPU = 1:1)
and guest NUMA.
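
(For reference, a minimal sketch of the libvirt domain XML knobs these use
cases map to — a memory binding policy and a guest NUMA topology. This is
just an illustration of the underlying libvirt features, not a proposed Nova
interface, and the nodeset/cell values are hypothetical:)

# Minimal libvirt domain XML fragments for the two features discussed:
# a memory binding policy (numatune) and a guest NUMA topology (cpu/numa).
# The nodeset and cell values below are hypothetical.
NUMATUNE_XML = """
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
"""

GUEST_NUMA_XML = """
<cpu>
  <numa>
    <cell cpus='0-3' memory='2097152'/>
  </numa>
</cpu>
"""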


Thanks,
Jeremy Liu


> -----Original Message-----
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: Tuesday, March 04, 2014 10:29 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian); Yuanjing (D)
> Subject: Re: [openstack-dev] [nova] Questions about guest NUMA and memory
> binding policies
> 
> - Original Message -
> > Hi, all
> >
> > I searched the current blueprints and old mails on the mailing list, but
> > found nothing about guest NUMA or setting memory binding policies.
> > I only found a blueprint about vCPU topology and a blueprint about CPU
> > binding:
> >
> > https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
> > https://blueprints.launchpad.net/nova/+spec/numa-aware-cpu-binding
> >
> > Is there any plan for guest NUMA support and memory binding policy settings?
> >
> > Thanks,
> > Jeremy Liu
> 
> Hi Jeremy,
> 
> As you've discovered, there have been a few attempts at getting some work
> started in this area. Dan Berrange outlined some of the possibilities in a
> previous mailing list post [1], though the problem is multi-faceted and
> there are a lot of different ways to break it down. If you dig into the
> details you will note that the support-libvirt-vcpu-topology blueprint in
> particular got a fair way along, but some concerns around the design were
> noted in the code reviews and on the list [2].
> 
> It seems like this is an area with a decent amount of interest, and we
> should work on the list to flesh out a design proposal; ideally this would
> be presented for further discussion at the Juno design summit. What are
> your particular needs/desires from a NUMA-aware nova scheduler?
> 
> Thanks,
> 
> Steve
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019715.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2013-December/022940.html
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-04 Thread Irena Berezovsky
Hi Robert,
It seems to me that your proposal would duplicate many lines of code.
For agent-based MDs, I would prefer to inherit from
SimpleAgentMechanismDriverBase and add a verify method there for
supported_pci_vendor_info. Each specific MD would pass in its list of
supported pci_vendor_info values. The 'try_to_bind_segment_for_agent'
method would check 'supported_pci_vendor_info' and, if it is supported,
continue with the binding flow.
Maybe instead of a decorator it should just be a utility method?
I think the check for supported vnic_type and pci_vendor_info should be
done in order to decide whether the MD should bind the port. If the answer
is yes, no more checks are required.
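
A rough sketch of that inheritance-based alternative (the base class name,
import path, and method signatures follow this discussion rather than the
actual neutron tree, so treat them as assumptions):

from neutron.extensions import portbindings
# Assumed import path for the base class named above.
from neutron.plugins.ml2.drivers import mech_agent


class SriovAgentMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
    """Hypothetical agent-based SR-IOV MD with a vendor-info check."""

    def __init__(self, agent_type, supported_pci_vendor_info):
        super(SriovAgentMechanismDriver, self).__init__(agent_type)
        self.supported_pci_vendor_info = supported_pci_vendor_info

    def _is_supported_pci_vendor_info(self, context):
        # A utility method rather than a decorator: look up pci_vendor_info
        # in the binding profile and verify this driver supports it.
        profile = context.current.get(portbindings.PROFILE) or {}
        return (profile.get('pci_vendor_info')
                in self.supported_pci_vendor_info)

    def try_to_bind_segment_for_agent(self, context, segment, agent):
        if not self._is_supported_pci_vendor_info(context):
            return False
        # ...continue with the normal binding flow...
        return self.check_segment_for_agent(segment, agent)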

Coming back to the question I asked earlier: for a non-agent MD, how would
you deal with updates after the port is bound, like 'admin_state_up' changes?
I'll try to push some reference code later today.

BR,
Irena

-----Original Message-----
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Wednesday, March 05, 2014 4:46 AM
To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for usage 
questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Sandhya,

I agree with you except that I think that the class should inherit from 
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        # Skip the wrapped call unless the port's vnic_type is one this
        # driver supports.
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        # If the driver restricts PCI vendors, require a matching
        # pci_vendor_info entry in the port's binding profile.
        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper


@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic types.
    """
    def __init__(self, agent_type=None, supported_pci_vendor_info=[],
                 supported_vnic_types=DEFAULT_VNIC_TYPES_SUPPORTED):
        """Initialize base class for SR-IOV capable Mechanism Drivers.

        :param agent_type: Constant identifying agent type in agents_db
        :param supported_pci_vendor_info: a list of "vendor_id:product_id"
        :param supported_vnic_types: The binding:vnic_type values we can bind
        """
        self.supported_pci_vendor_info = supported_pci_vendor_info
        self.agent_type = agent_type
        self.supported_vnic_types = supported_vnic_types

  

[openstack-dev] [nova][pci][sriov] rewriting the common SRIOV support blueprint: https://blueprints.launchpad.net/nova/+spec/pci-extra-info

2014-03-04 Thread yongli he

Hi, all

This is the SRIOV common support blueprint:
https://blueprints.launchpad.net/nova/+spec/pci-extra-info



After a long discussion, the SRIOV design choices are settled and we have
reached an agreement. I want to rewrite this blueprint, maybe using diagrams
to present it more clearly. I hope that will be done in one week or a little
bit longer; then I can introduce it at the nova meeting before the design
summit.

All the SRIOV work can be partitioned into three tasks:
* the common SRIOV support in nova (which this blueprint focuses on)
* the nova-side NIC, VIF, and interface to the common PCI support
* the different MDs (mechanism drivers) and other work in neutron

This blueprint is intended to support common SRIOV on the nova side, not
only for neutron. All formal design decisions about SRIOV should live in
this blueprint; other detailed information in the meetings or on the dev
mailing list is a reference for whoever is interested.

So I will focus on this blueprint. I really want you guys to check this bp,
both before and after the new version is done, to make sure it reflects what
we agreed on (I think it does). I will update you when I have finished it.



meeting link:  https://wiki.openstack.org/wiki/Meetings/Passthrough

Regards
Yongli He





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Kenichi Oomichi

> -Original Message-
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: Wednesday, March 05, 2014 9:09 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
> 
> > What I'd like to do next is work through a new proposal that includes
> > keeping both v2 and v3, but with a new added focus of minimizing the
> > cost.  This should include a path away from the dual code bases and to
> > something like the "v2.1" proposal.
> 
> I think that the most we can hope for is consensus on _something_. So,
> the thing that I'm hoping would mostly satisfy the largest number of
> people is:
> 
> - Leaving v2 and v3 as they are today in the tree, and with v3 still
>   marked experimental for the moment
> - We start on a v2 proxy to v3, with the first goal of fully
>   implementing the v2 API on top of v3, as judged by tempest
> - We define the criteria for removing the current v2 code and marking
>   the v3 code supported as:
>  - The v2 proxy passes tempest
>  - The v2 proxy has sign-off from some major deployers as something
>they would be comfortable using in place of the existing v2 code
>  - The v2 proxy seems to us to be lower maintenance and otherwise
>preferable to either keeping both, breaking all our users, deleting
>v3 entirely, etc

Thanks, Dan.
The above criteria are reasonable to me.

Right now, Tempest does not check API responses in many cases.
For example, Tempest does not check which API attributes ("flavor", "image",
etc.) should be included in the response body of the "create a server" API.
So we need to improve Tempest coverage from this viewpoint to verify that
no backward incompatibility happens in the v2.1 API.
We have started this improvement in Tempest and have already proposed some
patches for it.
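
As a rough illustration (simplified pseudo-test code, not actual Tempest
code), the kind of response-body check being added looks like this:

# Attributes named in the example above; the check itself is illustrative.
EXPECTED_SERVER_ATTRS = ('flavor', 'image')

def check_server_response_attrs(server):
    missing = [attr for attr in EXPECTED_SERVER_ATTRS if attr not in server]
    assert not missing, 'response body missing attributes: %s' % missing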


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-04 Thread Renat Akhmerov
I think we forgot to point to the commit itself. Here it is: 
https://review.openstack.org/#/c/77126/

Manas, can you please provide more details on your suggestion?

For now let me just describe the background of Nikolay’s question.

Basically, we are talking about how we work with data inside Mistral. So
far, for example, if a user sent a “start workflow” request to Mistral,
Mistral would do the following:
* Get the workbook DSL (YAML) from the DB (given that it’s already been
persisted earlier).
* Load it into a dictionary-like structure using the standard ‘yaml’ library.
* Based on this dictionary-like structure, create all the DB objects needed
to track the state of workflow executions and individual tasks.
* Perform all the necessary logic in the engine, and so on.

The important thing here is that the DB objects contain the corresponding
DSL snippets as they are described in DSL (e.g. tasks have a property
“task_dsl”) to reduce the complexity of the relational model that we have in
the DB. Otherwise it would be really complicated and most of the queries
would contain lots of joins. An example of a non-trivial relation in DSL is
“task” -> “action name” -> “service” -> “service actions” -> “action”; as
you can see, it would be hard to navigate from a task to an action in the DB
if our relational model matched what we have in DSL. However, this approach
leads to the problem of addressing DSL properties using hardcoded strings
which are spread across the code, and that brings lots of pain when doing
refactoring and when trying to understand the structure of the model we
describe in DSL; it also doesn’t allow us to do validation easily, and so on.

So far we have called what we store in the DB the “model”, and we have
called the dictionary structure coming from DSL just “dsl”. So if we had a
part of the structure related to a task, we would call it “dsl_task”.

What Nikolay is doing now is reworking how we work with DSL. We now assume
that after we parse a workbook DSL we get some “model”, so that we don’t use
“dsl” anywhere in the code. This “model” basically describes the structure
of what we have in DSL, and it addresses the problems I mentioned above
(hardcoded strings are replaced with access methods, we clearly see the
structure of what we’re working with, we can easily validate it, and so on).
So when we need to access some DSL properties, we get the workbook DSL from
the DB, build this model out of it, and continue working with it.
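
To make the distinction concrete, here is a minimal sketch of what such a
parsed-DSL accessor class could look like (the class and property names are
illustrative, not taken from the actual commit):

class TaskSpec(object):
    """Wraps the task dictionary parsed from the workbook DSL (YAML).

    Access methods replace hardcoded dictionary keys spread across the
    code; the property names below are hypothetical.
    """

    def __init__(self, task_dsl):
        self._dsl = task_dsl

    @property
    def name(self):
        return self._dsl.get('name')

    @property
    def action_name(self):
        return self._dsl.get('action')

    @property
    def requires(self):
        # Defaulting and validation can live here rather than at call sites.
        return self._dsl.get('requires', [])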

Long story short, this model parsed from DSL is not the model we store in
the DB, but both are called “model”, which may be confusing. To me this
non-DB model looks more like a “domain model” or something similar. So the
questions I would ask ourselves here are:
* Is the approach itself reasonable?
* Do we have better ideas on how to work with DSL? A good mental exercise is
to imagine that we have more than one DSL, not only YAML but, say, XML. How
would that change the picture?
* How can we clearly distinguish between these two models so that it isn’t
confusing?
* Do we have better naming in mind?

Thanks.

Renat Akhmerov
@ Mirantis Inc.



On 05 Mar 2014, at 08:56, Manas Kelshikar  wrote:

> Since the renaming is for types in mistral.model.*, I am thinking we suffix
> them with Spec, e.g.
> 
> TaskObject -> TaskSpec
> ActionObject -> ActionSpec and so on.
> 
> The "Spec" suggest that it is a specification of the final object that ends 
> up in the DB and not the actual object. Multiple actual objects can be 
> derived from these Spec objects which fits well with the current paradigm. 
> Thoughts?
> 
> 
> On Mon, Mar 3, 2014 at 9:43 PM, Manas Kelshikar  wrote:
> Hi Nikolay - 
> 
> Is your concern that mistral.db.sqlalchemy.models.* and mistral.model.* will 
> lead to confusion or something else? 
> 
> IMHO, as per your change, 'model' seems like the appropriate usage, even
> though what is stored in the DB is also a model. If we pick appropriate
> names to distinguish between the nature of the objects, we should be able
> to avoid any confusion, and whether or not 'model' appears in the module
> name should not matter much.
> 
> Thanks,
> Manas
> 
> 
> On Mon, Mar 3, 2014 at 8:43 AM, Nikolay Makhotkin  
> wrote:
> Hi, team! 
> 
> Please look at the commit .
> 
> Module 'mistral/model' is now responsible for the object model representation
> which is used for accessing properties of actions, tasks etc.
> 
> We have a name problem - looks like we should rename module 'mistral/model' 
> since we have DB models and they are absolutely different.
> 
> 
> Thoughts?
> 
>  
> Best Regards,
> Nikolay
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] use of the plan for m1

2014-03-04 Thread Adrian Otto
Angus,

Please include a solum-dsl-version attribute in all examples. That can default 
to "current".

It would also be wise to have a language pack attribute as Devdatta suggested. 
Then we don't need to bump the version to support it later. The default value 
for solum-language-pack should be "auto" so If you omit it then you are relying 
on Solum to auto-detect the language. Users' mileage may vary depending on how 
sophisticated we make that detection code. If only one language pack is loaded, 
then we can always guess right ;-)
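
Putting those two attributes together, the m1 descriptor might then look
something like this (a sketch; the attribute names and their placement are
illustrative only):

artifacts:
- name: My Python App
  artifact_type: application.heroku
  content: { href: http://github.com/some/project }
  language-pack: auto
solum-dsl-version: current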

Cheers,

Adrian

> On Mar 4, 2014, at 7:26 PM, "devdatta kulkarni" 
>  wrote:
> 
> I support this approach. 
> 
> Customization of build and deploy lifecycle actions depends on
> the ability to register different kinds of services for performing
> these actions. I can imagine that operators would want to provide
> such services as part of their Solum install. Then, app developers
> would be able to find about such services and refer to them in
> their application descriptor (may be a plan file, may be
> something else). However, for m1, I agree that we should go with
> the view that build and deploy services are not externalized, but are
> available as default services in Solum.
> 
> About the proposed simpler descriptor -- the only question I have is about
> the language-pack to use to build the app. Won't we need it in the
> application descriptor? So I propose:
> 
> artifacts:
> - name: My Python App
>   artifact_type: application.heroku
>   content: { href: http://github.com/some/project }
>   language-pack: 
> 
> - Devdatta
> 
> 
> -----Original Message-----
> From: "Angus Salkeld" 
> Sent: Tuesday, March 4, 2014 8:52pm
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [solum] use of the plan for m1
> 
> Hi all
> 
> I just wanted to clarify our use of the camp plan file (esp. for m1).
> 
> Up until now I have been under the impression that we use the plan
> to describe the app lifecycle (build/test/deploy) and the contents
> of the app.
> 
> After attempting to write code that converts plans like this into
> heat templates, I started to think that this is not a good idea, as
> it is mixing two ideas from very different areas. It also makes
> the resulting plan complex.
> 
> I suggest we move from some of the plans suggested here:
> https://etherpad.openstack.org/p/solum-demystified
> 
> to a very simple:
> artifacts:
> - name: My Python App
>   artifact_type: application.heroku
>   content: { href: http://github.com/some/project }
> 
> For m1 we can assume a lifecycle of build and deploy. After that
> we can figure out how we would want to expose the lifecycle
> choices/customization to the user.
> 
> Thoughts?
> 
> -Angus
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] use of the plan for m1

2014-03-04 Thread Murali Allada
+1 for using a simple plan file for M1.

I agree, the language pack id would need to be part of the plan file.

-Murali


On Mar 4, 2014, at 9:20 PM, devdatta kulkarni 
 wrote:

> I support this approach. 
> 
> Customization of build and deploy lifecycle actions depends on
> the ability to register different kinds of services for performing
> these actions. I can imagine that operators would want to provide
> such services as part of their Solum install. Then, app developers
> would be able to find about such services and refer to them in
> their application descriptor (may be a plan file, may be
> something else). However, for m1, I agree that we should go with
> the view that build and deploy services are not externalized, but are
> available as default services in Solum.
> 
> About the proposed simpler descriptor -- the only question I have is about
> the language-pack to use to build the app. Won't we need it in the
> application descriptor? So I propose:
> 
> artifacts:
> - name: My Python App
>   artifact_type: application.heroku
>   content: { href: http://github.com/some/project }
>   language-pack: 
> 
> - Devdatta
> 
> 
> -----Original Message-----
> From: "Angus Salkeld" 
> Sent: Tuesday, March 4, 2014 8:52pm
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [solum] use of the plan for m1
> 
> Hi all
> 
> I just wanted to clarify our use of the camp plan file (esp. for m1).
> 
> Up until now I have been under the impression that we use the plan
> to describe the app lifecycle (build/test/deploy) and the contents
> of the app.
> 
> After attempting to write code that converts plans like this into
> heat templates, I started to think that this is not a good idea, as
> it is mixing two ideas from very different areas. It also makes
> the resulting plan complex.
> 
> I suggest we move from some of the plans suggested here:
> https://etherpad.openstack.org/p/solum-demystified
> 
> to a very simple:
> artifacts:
> - name: My Python App
>   artifact_type: application.heroku
>   content: { href: http://github.com/some/project }
> 
> For m1 we can assume a lifecycle of build and deploy. After that
> we can figure out how we would want to expose the lifecycle
> choices/customization to the user.
> 
> Thoughts?
> 
> -Angus
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack][SUSE][Ceilometer] ceilometer service init script can be improved

2014-03-04 Thread ZhiQiang Fan
Hi, SUSE developers

Thanks for your great work on OpenStack packaging for the SUSE distribution.

I have tested some ceilometer functionality on SLES 11 SP3 and found two
minor problems which can create critical availability issues but can be
easily fixed. Both are located in the service init scripts.

* The {start,check,kill}proc programs match on the process basename.
  This problem blocks the alarm services, since ceilometer-alarm-evaluator
and ceilometer-alarm-notifier share more than 15 identical prefix
characters, and it blocks the agent services in the all-in-one scenario too
(ceilometer-agent-central & ceilometer-agent-compute).
  Dirk Muller has provided a fix which uses the -p option to ensure that
killproc will not affect another process, but when I verified it on SLES 11
SP3 in my all-in-one environment, I found that while it no longer kills
other processes, it now cannot kill its own process either: each time I
restart ceilometer-alarm-*, I get a new process instead of replacing the
old one.
  I have an ugly workaround which simply shortens the ceilometer process
names, and it still works fine. But this problem needs to be fixed upstream
with a better solution.

* ceilometer-{api,collector} depend on mongodb.
  MongoDB is fully supported in Havana (thanks to the SUSE developers, who
backported metaquery for the SQL backend, even though that needs to be
improved too), but I have found that ceilometer-{api,collector} cannot
behave normally at host boot: they both complain that they cannot connect
to the database. The api process quits, but the collector process stays
broken and cannot recover itself even after mongodb becomes available.
  The cause may be quite simple: even though the two services specify
mongodb as a should-start service, at host boot mongodb may have started
but still be in an unavailable state, which causes the two services to
fail. I have no idea how to solve this problem in a nice way, but if I just
sleep 5 seconds before startproc launches the service's process, everything
seems fine in my small environment. This workaround is ugly too, since it
sleeps every time, not only at host boot (a retry loop, sketched below,
would be more robust).
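
A minimal sketch of such a retry loop (the connect callable and the retry
values are hypothetical, not from the actual ceilometer code):

import logging
import time

LOG = logging.getLogger(__name__)


def wait_for_database(connect, max_retries=10, retry_interval=2):
    # Retry a database connection with a fixed interval; 'connect' is a
    # zero-argument callable that raises on failure (e.g. a wrapper around
    # the storage engine's connect call).
    for attempt in range(1, max_retries + 1):
        try:
            return connect()
        except Exception as err:
            LOG.warning('database not ready (attempt %d/%d): %s',
                        attempt, max_retries, err)
            time.sleep(retry_interval)
    raise RuntimeError('database still unavailable after %d attempts'
                       % max_retries)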

If you need more detail, I can provide it. These two problems need to be
fixed seriously (and perhaps quickly), since they strongly affect feature
availability and user experience.

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Christopher Yeoh
On Tue, 04 Mar 2014 16:09:21 -0800
Dan Smith  wrote:

> > What I'd like to do next is work through a new proposal that
> > includes keeping both v2 and v3, but with a new added focus of
> > minimizing the cost.  This should include a path away from the dual
> > code bases and to something like the "v2.1" proposal.
> 
> I think that the most we can hope for is consensus on _something_. So,
> the thing that I'm hoping would mostly satisfy the largest number of
> people is:
> 
> - Leaving v2 and v3 as they are today in the tree, and with v3 still
>   marked experimental for the moment
> - We start on a v2 proxy to v3, with the first goal of fully
>   implementing the v2 API on top of v3, as judged by tempest
> - We define the criteria for removing the current v2 code and marking
>   the v3 code supported as:
>  - The v2 proxy passes tempest
>  - The v2 proxy has sign-off from some major deployers as something
>they would be comfortable using in place of the existing v2 code
>  - The v2 proxy seems to us to be lower maintenance and otherwise
>preferable to either keeping both, breaking all our users, deleting
>v3 entirely, etc
> - We keep this until we either come up with a proxy that works, or
>   decide that it's not worth the cost, etc.
> 
> I think the list of benefits here are:
> 
> - Gives the v3 code a chance to address some of the things we have
>   identified as lacking in both trees
> - Gives us a chance to determine if the proxy approach is reasonable
> or a nightmare
> - Gives a clear go/no-go line in the sand that we can ask deployers to
>   critique or approve
> 
> It doesn't address all of my concerns, but at the risk of just having
> the whole community split over this discussion, I think this is
> probably (hopefully?) something we can all get behind.
> 
> Thoughts?

So I think this is a good compromise to keep things moving. Some aspects
that we'll need to consider:

- We need more tempest coverage of Nova because it doesn't cover all of
  the Nova API yet. We've been working on increasing this as part of
  the V3 API work anyway (and V2 support is an easyish side effect).
  But more people willing to write tempest tests are always welcome :-)

- I think in practice this will probably mean that the V3 API is
  realistically a K rather than a J thing - just in terms of allowing
  a reasonable timeline to not only implement the v2 compat but also get
  feedback from deployers.

- I'm not sure how this affects how we approach the tasks work. Will
  need to think about that more.

But this plan is certainly something I'm happy to support.

Chris

> 
> --Dan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] use of the plan for m1

2014-03-04 Thread devdatta kulkarni
I support this approach. 

Customization of build and deploy lifecycle actions depends on
the ability to register different kinds of services for performing
these actions. I can imagine that operators would want to provide
such services as part of their Solum install. Then, app developers
would be able to find out about such services and refer to them in
their application descriptor (may be a plan file, may be
something else). However, for m1, I agree that we should go with
the view that build and deploy services are not externalized, but are
available as default services in Solum.

About the proposed simpler descriptor -- the only question I have is about
the language-pack to use to build the app. Won't we need it in the
application descriptor? So I propose:

artifacts:
- name: My Python App
   artifact_type: application.heroku
   content: { href: http://github.com/some/project }
   language-pack: 

- Devdatta


-----Original Message-----
From: "Angus Salkeld" 
Sent: Tuesday, March 4, 2014 8:52pm
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [solum] use of the plan for m1

Hi all

I just wanted to clarify our use of the camp plan file (esp. for m1).

Up until now I have been under the impression that we use the plan
to describe the app lifecycle (build/test/deploy) and the contents
of the app.

After attempting to write code that converts plans like this into
heat templates, I started to think that this is not a good idea, as
it is mixing two ideas from very different areas. It also makes
the resulting plan complex.

I suggest we move from some of the plans suggested here:
https://etherpad.openstack.org/p/solum-demystified

to a very simple:
artifacts:
- name: My Python App
   artifact_type: application.heroku
   content: { href: http://github.com/some/project }

For m1 we can assume a lifecycle of build and deploy. After that
we can figure out how we would want to expose the lifecycle
choices/customization to the user.

Thoughts?

-Angus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [solum] use of the plan for m1

2014-03-04 Thread Angus Salkeld

Hi all

I just wanted to clarify our use of the camp plan file (esp. for m1).

Up until now I have been under the impression that we use the plan
to describe the app lifecycle (build/test/deploy) and the contents
of the app.

After attempting to write code that converts plans like this into
heat templates, I started to think that this is not a good idea, as
it is mixing two ideas from very different areas. It also makes
the resulting plan complex.

I suggest we move from some of the plans suggested here:
https://etherpad.openstack.org/p/solum-demystified

to a very simple:
artifacts:
- name: My Python App
  artifact_type: application.heroku
  content: { href: http://github.com/some/project }

For m1 we can assume a lifecycle of build and deploy. After that
we can figure out how we would want to expose the lifecycle
choices/customization to the user.
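
For what it's worth, consuming such a plan is then trivial; a minimal sketch
(the field names follow the example above):

import yaml

PLAN = """
artifacts:
- name: My Python App
  artifact_type: application.heroku
  content: { href: http://github.com/some/project }
"""

# Parse the plan and pull out each artifact's fields.
for artifact in yaml.safe_load(PLAN)['artifacts']:
    print('%s %s %s' % (artifact['name'],
                        artifact['artifact_type'],
                        artifact['content']['href']))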

Thoughts?

-Angus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-04 Thread Robert Li (baoli)
Hi Sandhya,

I agree with you except that I think that the class should inherit from
MechanismDriver. I took a crack at it, and here is what I got:

# Copyright (c) 2014 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from abc import ABCMeta, abstractmethod

import functools
import six

from neutron.extensions import portbindings
from neutron.openstack.common import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
                                portbindings.VNIC_MACVTAP]


def check_vnic_type_and_vendor_info(f):
    @functools.wraps(f)
    def wrapper(self, context):
        # Skip the wrapped call unless the port's vnic_type is one this
        # driver supports.
        vnic_type = context.current.get(portbindings.VNIC_TYPE,
                                        portbindings.VNIC_NORMAL)
        if vnic_type not in self.supported_vnic_types:
            LOG.debug(_("%(func_name)s: skipped due to unsupported "
                        "vnic_type: %(vnic_type)s"),
                      {'func_name': f.func_name, 'vnic_type': vnic_type})
            return

        # If the driver restricts PCI vendors, require a matching
        # pci_vendor_info entry in the port's binding profile.
        if self.supported_pci_vendor_info:
            profile = context.current.get(portbindings.PROFILE, {})
            if not profile:
                LOG.debug(_("%s: Missing profile in port binding"),
                          f.func_name)
                return
            pci_vendor_info = profile.get('pci_vendor_info')
            if not pci_vendor_info:
                LOG.debug(_("%s: Missing pci vendor info in profile"),
                          f.func_name)
                return
            if pci_vendor_info not in self.supported_pci_vendor_info:
                LOG.debug(_("%(func_name)s: unsupported pci vendor "
                            "info: %(info)s"),
                          {'func_name': f.func_name,
                           'info': pci_vendor_info})
                return
        f(self, context)
    return wrapper


@six.add_metaclass(ABCMeta)
class SriovMechanismDriverBase(api.MechanismDriver):
    """Base class for drivers that support SR-IOV.

    The SriovMechanismDriverBase provides common code for mechanism
    drivers that support SR-IOV. Such a driver may or may not require
    an agent to be running on the port's host.

    MechanismDrivers that use this base class and require an agent must
    pass the agent type to __init__(), and must implement
    try_to_bind_segment_for_agent() and check_segment_for_agent().

    MechanismDrivers that use this base class may provide supported vendor
    information, and must provide the supported vnic types.
    """
    def __init__(self, agent_type=None, supported_pci_vendor_info=[],
                 supported_vnic_types=DEFAULT_VNIC_TYPES_SUPPORTED):
        """Initialize base class for SR-IOV capable Mechanism Drivers.

        :param agent_type: Constant identifying agent type in agents_db
        :param supported_pci_vendor_info: a list of "vendor_id:product_id"
        :param supported_vnic_types: The binding:vnic_type values we can bind
        """
        self.supported_pci_vendor_info = supported_pci_vendor_info
        self.agent_type = agent_type
        self.supported_vnic_types = supported_vnic_types

    def initialize(self):
        pass

    @check_vnic_type_and_vendor_info
    def bind_port(self, context):
        LOG.debug(_("Attempting to bind port %(port)s on "
                    "network %(network)s"),
                  {'port': context.current['id'],
                   'network': context.network.current['id']})

        if self.agent_type:
            # Agent-based flow: try each live agent on the port's host.
            for agent in context.host_agents(self.agent_type):
                LOG.debug(_("Checking agent: %s"), agent)
                if agent['alive']:
                    for segment in context.network.network_segments:
                        if self.try_to_bind_segment_for_agent(context,
                                                              segment,
                                                              agent):
                            LOG.debug(_("Bound using segment: %s"),
                                      segment)
                            return
                else:
                    LOG.warning(_("Attempting to bind with dead agent: %s"),
                                agent)
        else:
            # Agentless flow: try each segment directly.
            for segment in context.network.network_segments:
                if self.try_to_bind_segment(context, segment):
                    LOG.debug(_("Bound using segment: %s"), segment)
                    return

    def validate_port_binding(self, c

Re: [openstack-dev] [nova][cinder] non-persistent storage (after stopping a VM, data will be rolled back automatically), do you think we should introduce this feature?

2014-03-04 Thread Qin Zhao
Hi Joe, what I mean is that cloud users may not want to create new instances
or new images, because those actions may require additional approval and
additional charges. Or, due to instance/image quota limits, they may not be
able to do that. Anyway, from the user's perspective, saving and reverting
the existing instance will sometimes be preferred. Creating a new instance
is another story.
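
(For reference, the libvirt feature discussed in the quoted thread below is
the <transient> disk element; a minimal sketch of the relevant domain XML
fragment, with a hypothetical file path, which per the thread the libvirt
qemu driver did not yet implement at the time:)

# The <transient/> disk element as defined in the libvirt domain XML schema:
# guest writes go to a throwaway overlay that is discarded at shutdown.
# The source file path below is hypothetical.
TRANSIENT_DISK_XML = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/sandbox.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <transient/>
</disk>
"""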


On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon  wrote:

> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
> > I think the current snapshot implementation can be a solution sometimes,
> > but it is NOT exactly the same as the user's expectation. For example, a
> > new blueprint was created last week,
> > https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
> > which seems a little similar to this discussion. I feel the user is
> > requesting Nova to create an in-place snapshot (not a new image), in
> > order to revert the instance to a certain state. This capability should
> > be very useful when testing new software or system settings. It seems to
> > be a short-term temporary snapshot associated with a running instance.
> > Creating a new instance is not that convenient, and may not be feasible
> > for the user, especially if he or she is using a public cloud.
> >
>
> Why isn't it easy to create a new instance from a snapshot?
>
> >
> > On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
> >  wrote:
> >>
> >> >>> Why reboot an instance? What is wrong with deleting it and creating a
> >> >>> new one?
> >>
> >> You generally use non-persistent disk mode when you are testing new
> >> software or experimenting with settings. If something goes wrong, you
> >> just reboot and you are back to a clean state to start over again. I
> >> feel it's convenient to handle this with just a reboot rather than
> >> recreating the instance.
> >>
> >> Thanks,
> >> Divakar
> >>
> >> -----Original Message-----
> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> >> Sent: Tuesday, March 04, 2014 10:41 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> >> stopping VM, data will be rollback automatically), do you think we shoud
> >> introduce this feature?
> >> Importance: High
> >>
> >> On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang  >
> >> wrote:
> >> >>
> >> >> This sounds like ephemeral storage plus snapshots.  You build a base
> >> >> image, snapshot it then boot from the snapshot.
> >> >
> >> >
> >> > Non-persistent storage/disk is useful for sandbox-like environments,
> >> > and this feature has existed in VMware ESX since version 4.1. The
> >> > implementation in ESX is the same as what you said: boot from a
> >> > snapshot of the disk/volume, but it will also *automatically* delete
> >> > the transient snapshot after the instance reboots or shuts down. I
> >> > think the whole procedure may be controlled by OpenStack rather than
> >> > by the user's manual operations.
> >>
> >> Why reboot an instance? What is wrong with deleting it and creating a
> >> new one?
> >>
> >> >
> >> > As far as I know, libvirt already defines the corresponding <transient>
> >> > element in domain xml for non-persistent disks ( [1] ), but it cannot
> >> > specify the location of the transient snapshot. Although qemu-kvm has
> >> > provided support for this feature by the "-snapshot" command argument,
> >> > which will create the transient snapshot under the /tmp directory, the
> >> > qemu driver of libvirt doesn't support the <transient> element currently.
> >> >
> >> > I think the steps of creating and deleting the transient snapshot may
> >> > be better done by Nova/Cinder rather than waiting for <transient>
> >> > support to be added to libvirt, as the location of the transient
> >> > snapshot should be specified by Nova.
> >> >
> >> >
> >> > [1] http://libvirt.org/formatdomain.html#elementsDisks
> >> > --
> >> > zhangleiqiang
> >> >
> >> > Best Regards
> >> >
> >> >
> >> >> -----Original Message-----
> >> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> >> >> Sent: Tuesday, March 04, 2014 11:26 AM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Cc: Luohao (brian)
> >> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
> >> >> storage(after stopping VM, data will be rollback automatically), do
> >> >> you think we shoud introduce this feature?
> >> >>
> >> >> On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C) 
> >> >> wrote:
> >> >> > Hi stackers,
> >> >> >
> >> >> > As far as I know, there are two types of storage used by VMs in
> >> >> > openstack: Ephemeral Storage and Persistent Storage.
> >> >> > Data on ephemeral storage ceases to exist when the instance it is
> >> >> > associated with is terminated. Rebooting the VM or restarting the
> >> >> > host server, however, will not destroy ephemeral data.
> >> >> > Persistent storage means that the storage resource outlives any
> >> >> > other resource and is always available
Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-04 Thread Manas Kelshikar
Since the renaming is for types in mistral.model.*, I am thinking we suffix
them with Spec, e.g.

TaskObject -> TaskSpec
ActionObject -> ActionSpec and so on.

The "Spec" suggest that it is a specification of the final object that ends
up in the DB and not the actual object. Multiple actual objects can be
derived from these Spec objects which fits well with the current paradigm.
Thoughts?


On Mon, Mar 3, 2014 at 9:43 PM, Manas Kelshikar wrote:

> Hi Nikolay -
>
> Is your concern that mistral.db.sqlalchemy.models.* and mistral.model.*
> will lead to confusion or something else?
>
> IMHO, as per your change, 'model' seems like the appropriate usage, even
> though what is stored in the DB is also a model. If we pick appropriate
> names to distinguish between the nature of the objects, we should be able
> to avoid any confusion, and whether or not 'model' appears in the module
> name should not matter much.
>
> Thanks,
> Manas
>
>
> On Mon, Mar 3, 2014 at 8:43 AM, Nikolay Makhotkin  > wrote:
>
>> Hi, team!
>>
>> Please look at the commit .
>>
>> Module 'mistral/model' is now responsible for the object model representation
>> which is used for accessing properties of actions, tasks etc.
>>
>> We have a name problem - looks like we should rename module
>> 'mistral/model' since we have DB models and they are absolutely different.
>>
>>
>> Thoughts?
>>
>>
>> Best Regards,
>> Nikolay
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-03-04 Thread Matt Riedemann



On 3/4/2014 4:34 PM, Joe Gordon wrote:

So since tools/config/check_uptodate.sh is oslo code, I assumed this
issue falls into the domain of oslo-incubator.

Until this gets resolved nova is considering
https://review.openstack.org/#/c/78028/


Keystone too: https://review.openstack.org/#/c/78030/



On Wed, Feb 5, 2014 at 9:21 AM, Daniel P. Berrange  wrote:

On Wed, Feb 05, 2014 at 11:56:35AM -0500, Doug Hellmann wrote:

On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah wrote:



On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann 
wrote:



Including the config file in either the developer documentation or the
packaging build makes more sense. I'm still worried that adding it to the
sdist generation means you would have to have a lot of tools installed just
to make the sdist. However, we could




I think that may slightly complicate devstack, since we rely
heavily on config samples to set up the services.



Good point, we would need to add a step to generate a sample config for
each app instead of just copying the one in the source repository.


Which is what 'python setup.py build' for an app would take care of.

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC] Guidance for "Add a New Storage Backend" project.

2014-03-04 Thread Davanum Srinivas
Sai,


On Sun, Mar 2, 2014 at 2:19 PM, saikrishna sripada
 wrote:
> Hi Srinivas/Alej/All,
>
> Adding to my earlier mail,
>
> I found "Implement a re-usable shared library for vmware (oslo.vmware) to be
> consumed by various OpenStack projects like Nova, Cinder or Glance" also
> interesting.
> Please help me choose a project and guide me further about contributing,
> the selection criteria for getting selected in GSOC 2014, and how to
> contribute to the project after GSOC.
>
> Best Regards,
> --sai krishna.
>
>
> -- Forwarded message --
> From: saikrishna sripada 
> Date: Mon, Mar 3, 2014 at 12:11 AM
> Subject: Fwd: [GSOC] Guidance for "Add a New Storage Backend" project.
> To: openstack-dev@lists.openstack.org, "cpp.cabrera" 
>
>
> Hi All/Alej,
>
> I am an M.S. student from India. My current research work is on Computer
> Networks and Cloud Computing. I have been following and involved with
> openstack for the past year. I have contributed to openstack in the past,
> but just bits and pieces here and there. I always thought that picking a
> blueprint and implementing it would be doable with some guidance. Luckily
> I found openstack participating in GSOC 2014, and I believe I can
> implement the "Add a New Storage Backend" project. With my past experience
> with the openstack project I am sure I could help your project both during
> GSOC and after. Please reply if you find my idea interesting.
>
>
> Now about me:
> I am Sai Krishna, M.S in computer science student at IIIT-Hyderabad
> university, Hyderabad.I am good with c++ and python programming. I am
> currently working on a tool which uses Bloom filter for Packet attribution
> systems (trace back the source of malicious packet efficiently). I am also
> having experience building command line modules and handling alarms in
> various network elements of Nokia Siemens networks(currently Nokia solutions
> networks). I have also worked in deploying openstack on fedora and Ubuntu
> distributions and integrating foreman to compute node for auto-discovery
> feature. Also fixed two bugs in Glance and keystone modules.
>
> My study on the task :
> Links:
> https://wiki.openstack.org/wiki/GSoC2014/Queues/Storage
>
> https://github.com/cabrera/python-openstack-and-you/blob/master/src/guide.md
> - I am familiar with git, gerrit, contributing to openstack. Please check
> the handle  krishna1256 for my contributions to openstack.
>
> Kindly bear with my English and give me a head start on working with
> the "Add a New Storage Backend" project.
>
> Thanks and Regards,
> --sai krishna.
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC] Guidance for "Add a New Storage Backend" project.

2014-03-04 Thread Davanum Srinivas
Sai,

Please get in touch with Arnaud for oslo.vmware He has graciously
stepped up as a mentor. You can also talk to people on
#openstack-vmware and #openstack-Oslo IRC channels. Take a look at the
code so far as well -
https://github.com/openstack/oslo.vmware

-- dims

On Sun, Mar 2, 2014 at 2:19 PM, saikrishna sripada
 wrote:
> Hi Srinivas/Alej/All,
>
> Adding to my earlier mail,
>
> I found "Implement a re-usable shared library for vmware (oslo.vmware) to be
> consumed by various OpenStack projects like Nova, Cinder or Glance" also
> interesting.
> Please help me choose a project and guide me further about contributing,
> the selection criteria for getting selected in GSOC 2014, and how to
> contribute to the project after GSOC.
>
> Best Regards,
> --sai krishna.
>
>
> -- Forwarded message --
> From: saikrishna sripada 
> Date: Mon, Mar 3, 2014 at 12:11 AM
> Subject: Fwd: [GSOC] Guidance for "Add a New Storage Backend" project.
> To: openstack-dev@lists.openstack.org, "cpp.cabrera" 
>
>
> Hi All/Alej,
>
> I am an M.S. student from India. My current research work is on Computer
> Networks and Cloud Computing. I have been following and involved with
> openstack for the past year. I have contributed to openstack in the past,
> but just bits and pieces here and there. I always thought that picking a
> blueprint and implementing it would be doable with some guidance. Luckily
> I found openstack participating in GSOC 2014, and I believe I can
> implement the "Add a New Storage Backend" project. With my past experience
> with the openstack project I am sure I could help your project both during
> GSOC and after. Please reply if you find my idea interesting.
>
>
> Now about me:
> I am Sai Krishna, M.S in computer science student at IIIT-Hyderabad
> university, Hyderabad.I am good with c++ and python programming. I am
> currently working on a tool which uses Bloom filter for Packet attribution
> systems (trace back the source of malicious packet efficiently). I am also
> having experience building command line modules and handling alarms in
> various network elements of Nokia Siemens networks(currently Nokia solutions
> networks). I have also worked in deploying openstack on fedora and Ubuntu
> distributions and integrating foreman to compute node for auto-discovery
> feature. Also fixed two bugs in Glance and keystone modules.
>
> My study on the task :
> Links:
> https://wiki.openstack.org/wiki/GSoC2014/Queues/Storage
>
> https://github.com/cabrera/python-openstack-and-you/blob/master/src/guide.md
> - I am familiar with git, gerrit, contributing to openstack. Please check
> the handle  krishna1256 for my contributions to openstack.
>
> Kindly bear with my English and give me a head start on working with
> the "Add a New Storage Backend" project.
>
> Thanks and Regards,
> --sai krishna.
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Simulating many fake nova compute nodes for scheduler testing

2014-03-04 Thread Mike Wilson
On Mon, Mar 3, 2014 at 3:10 PM, Sergey Skripnick wrote:

>
>
>
>> I can run multiple compute services on the same host without containers.
>> Containers give you nice isolation and another way to try a more
>> realistic scenario, but my initial goal now is to be able to simulate a
>> many-fake-compute-node scenario with as few resources as possible.
>>
>
> I believe it is impossible to use threads without changes in the code.
>
>
Having gone the threads route once myself, I can say from experience that
it requires changes to the code. I was able to get threads up and running
with a few modifications, but there were other issues that I never fully
resolved that make me lean more towards the container model that has been
discussed earlier in the thread. Btw, I would suggest having a look at
Rally, the Openstack Benchmarking Service. They have deployment frameworks
that use LXC that you might be able to write a "thread" model for.

-Mike


>
>
> --
> Regards,
> Sergey Skripnick
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-04 Thread yongli he
On March 4, 2014 at 20:45, Robert Li (baoli) wrote:
> Hi Yongli,
>
> I have been looking at your patch set. Let me look at it again if you have
> a new update.
Looking forward to that.

thanks.
>
> The meeting changed back to UTC 1300 Tuesday.
>
> thanks,
> Robert
>
> On 3/4/14 12:39 AM, "yongli he"  wrote:
>
>> On March 4, 2014 at 13:33, Irena Berezovsky wrote:
>>> Hi Yongli He,
>>> The PCI SRIOV meeting switched back to weekly occurrences,.
>>> Next meeting will be today at usual time slot:
>>> https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting
>>>
>>> In coming meetings we would like to work on content to be proposed for
>>> Juno.
>>> BR,
>> thanks, Irena.
>>
>> Yongli he
>>> Irena
>>>
>>> -----Original Message-----
>>> From: yongli he [mailto:yongli...@intel.com]
>>> Sent: Tuesday, March 04, 2014 3:28 AM
>>> To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing
>>> List
>>> Subject: PCI SRIOV meeting suspend?
>>>
>>> Hi Robert,
>>>
>>> has the meeting stopped for a while?
>>>
>>> And if it is convenient for you, please review this patch set and check
>>> whether the interface is OK.
>>>
>>>
>>>
>>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z
>>>
>>> Yongli He


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Christopher Yeoh
On Tue, 04 Mar 2014 12:10:15 -0500
Russell Bryant  wrote:
> 
> What I'd like to do next is work through a new proposal that includes
> keeping both v2 and v3, but with a new added focus of minimizing the
> cost.  This should include a path away from the dual code bases and to
> something like the "v2.1" proposal.

That sounds good to me. I would be very interested in any feedback that
people have around the concept of v2.1 that looks just like v2 but
has strong input validation. So that would be the only backwards
incompatible change and would only affect those who are currently
misusing the API.

Because we are not modifying the original V2 API code, people could run
side-by-side tests with real clients to see how badly an individual piece
of software behaves in practice. And we'll have tempest tests to verify
that the correct input behaviour path for V2 remains the same.

In the meantime we'll aim to get some more fleshed out POC code out for
the decorator approach to implementing V2 on the V3 codebase that we
can show at the design summit.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Testr] Brand new checkout of Neutron... getting insane unit test run results

2014-03-04 Thread Collins, Sean
On Mon, Jan 13, 2014 at 05:16:16PM EST, Mark McClain wrote:
> I’d rather us explicitly skip the tests if the module is not available.
> 
> mark

Just wanted to close the loop on this - thanks to Darragh O'Reilly,
who submitted https://review.openstack.org/#/c/66609/, which removes
the Pyudev dependency for the Linuxbridge agent.

This has corrected the issue where tox barfs when Pyudev is not found.


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Michael Still
On Wed, Mar 5, 2014 at 11:09 AM, Dan Smith  wrote:
>> What I'd like to do next is work through a new proposal that includes
>> keeping both v2 and v3, but with a new added focus of minimizing the
>> cost.  This should include a path away from the dual code bases and to
>> something like the "v2.1" proposal.
>
> I think that the most we can hope for is consensus on _something_. So,
> the thing that I'm hoping would mostly satisfy the largest number of
> people is:
>
> - Leaving v2 and v3 as they are today in the tree, and with v3 still
>   marked experimental for the moment
> - We start on a v2 proxy to v3, with the first goal of fully
>   implementing the v2 API on top of v3, as judged by tempest
> - We define the criteria for removing the current v2 code and marking
>   the v3 code supported as:
>  - The v2 proxy passes tempest
>  - The v2 proxy has sign-off from some major deployers as something
>they would be comfortable using in place of the existing v2 code
>  - The v2 proxy seems to us to be lower maintenance and otherwise
>preferable to either keeping both, breaking all our users, deleting
>v3 entirely, etc
> - We keep this until we either come up with a proxy that works, or
>   decide that it's not worth the cost, etc.
>
> I think the list of benefits here are:
>
> - Gives the v3 code a chance to address some of the things we have
>   identified as lacking in both trees
> - Gives us a chance to determine if the proxy approach is reasonable or
>   a nightmare
> - Gives a clear go/no-go line in the sand that we can ask deployers to
>   critique or approve
>
> It doesn't address all of my concerns, but at the risk of just having
> the whole community split over this discussion, I think this is probably
> (hopefully?) something we can all get behind.
>
> Thoughts?

I think this is a good plan.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Christopher Yeoh
On Tue, 4 Mar 2014 11:29:27 -0600
Anne Gentle  wrote:
> I still sense that the struggle with Compute v3 is the lack of
> documentation for contributing developers, and especially for end
> users, so that we could get feedback early and often.
> 
> My original understanding, passed by word-of-mouth, was that the goal
> for v3 was to define an expanded core that nearly all deployers could
> confidently put into production to serve their users' needs.

So I think in practice the reverse has occurred and the core has got
smaller. I think that's perhaps the nature of attempting to get
consensus - it's far easier to get agreement that something should be
optional than to get agreement that everyone should support it.

I believe that we really need a debate around this because as others
have mentioned it directly impacts interoperability between openstack
deployments for users. But we should keep this debate separate from the
v2/v3 one :-)

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Morgan Fainberg
On March 4, 2014 at 16:13:45, Dan Smith (d...@danplanet.com) wrote:
> What I'd like to do next is work through a new proposal that includes 
> keeping both v2 and v3, but with a new added focus of minimizing the 
> cost. This should include a path away from the dual code bases and to 
> something like the "v2.1" proposal. 

I think that the most we can hope for is consensus on _something_. So, 
the thing that I'm hoping would mostly satisfy the largest number of 
people is: 

- Leaving v2 and v3 as they are today in the tree, and with v3 still 
marked experimental for the moment 
- We start on a v2 proxy to v3, with the first goal of fully 
implementing the v2 API on top of v3, as judged by tempest 
- We define the criteria for removing the current v2 code and marking 
the v3 code supported as: 
- The v2 proxy passes tempest 
- The v2 proxy has sign-off from some major deployers as something 
they would be comfortable using in place of the existing v2 code 
- The v2 proxy seems to us to be lower maintenance and otherwise 
preferable to either keeping both, breaking all our users, deleting 
v3 entirely, etc 
- We keep this until we either come up with a proxy that works, or 
decide that it's not worth the cost, etc. 
This seems reasonable.


I think the list of benefits here are: 

- Gives the v3 code a chance to address some of the things we have 
identified as lacking in both trees 
- Gives us a chance to determine if the proxy approach is reasonable or 
a nightmare 
- Gives a clear go/no-go line in the sand that we can ask deployers to 
critique or approve 

+1 on this. As a deployer this is a good stance, and I especially like the clear 
go/no-go line above the other “benefits”, with the assumption we are keeping V2 
as-is (e.g. not planning on deprecating sections, changing interfaces, or 
evolving the API to be more V3-like).


It doesn't address all of my concerns, but at the risk of just having 
the whole community split over this discussion, I think this is probably 
(hopefully?) something we can all get behind. 
I agree this doesn’t solve all the concerns, but it’s a good middle ground to 
stand on. I obviously have a personal preference as a deployer/supporter of 
OpenStack environments. I have concerns over the V2 proxy, but as long as we 
are keeping V2 as-is and this moves us towards a larger change to V3 with 
solid tempest coverage, I don't see a reason to say “this is a bad 
approach”.

Cheers,

Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Dan Smith
> What I'd like to do next is work through a new proposal that includes
> keeping both v2 and v3, but with a new added focus of minimizing the
> cost.  This should include a path away from the dual code bases and to
> something like the "v2.1" proposal.

I think that the most we can hope for is consensus on _something_. So,
the thing that I'm hoping would mostly satisfy the largest number of
people is:

- Leaving v2 and v3 as they are today in the tree, and with v3 still
  marked experimental for the moment
- We start on a v2 proxy to v3, with the first goal of fully
  implementing the v2 API on top of v3, as judged by tempest
- We define the criteria for removing the current v2 code and marking
  the v3 code supported as:
 - The v2 proxy passes tempest
 - The v2 proxy has sign-off from some major deployers as something
   they would be comfortable using in place of the existing v2 code
 - The v2 proxy seems to us to be lower maintenance and otherwise
   preferable to either keeping both, breaking all our users, deleting
   v3 entirely, etc
- We keep this until we either come up with a proxy that works, or
  decide that it's not worth the cost, etc.

I think the list of benefits here are:

- Gives the v3 code a chance to address some of the things we have
  identified as lacking in both trees
- Gives us a chance to determine if the proxy approach is reasonable or
  a nightmare
- Gives a clear go/no-go line in the sand that we can ask deployers to
  critique or approve

It doesn't address all of my concerns, but at the risk of just having
the whole community split over this discussion, I think this is probably
(hopefully?) something we can all get behind.

Thoughts?

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] tripleo-cd-admins team update / contact info question

2014-03-04 Thread Chris Jones
Hi

On 25 February 2014 14:30, Robert Collins  wrote:

> So - I think we need to define two things:
>   - a stock way for $randoms to ask for support w/ these clouds that
> will be fairly low latency and reliable.
>   - a way for us to escalate to each other *even if folk happen to be
> away from the keyboard at the time*.
> And possibly a third:
>   - a way for openstack-infra admins to escalate to us in the event of
> OMG things happening. Like, we send 1000 VMs all at once at their git
> mirrors or something.
>

I think action zero is to define an SLA, so everyone has a very clear
picture of what to expect from us, and we have a clear picture of what
we're signing up to provide.

Also, I'd note that talking about non-IRC escalation methods, coverage of
weekends, etc. is moving us into a pretty different realm than we have been
in, so it might be worth checking that all the current people (who might
not all have been in the meeting) are ok with fixing a cloud on a Sunday :)

Then we need to map out who can be contacted at any given time of week, and
how they can be contacted. Hopefully follow-the-sun covers us with normal
working hours, apart from the gap between US/Pacific finishing their week,
and New Zealand starting the next week. Since we're essentially relying on
volunteer efforts to service these production clouds, we would need to let
people be pretty flexible about when they can be contacted.

Then we need to publish that information somewhere that the relevant folk
can see and some kind of monitoring that can escalate beyond IRC if it's
not getting a response. James mentioned Pagerduty and I've had good
experiences with it in previous operational roles.

Then we need to write a playbook so each outage isn't a voyage of discovery
unless it's something completely new, and commit to updating the playbook
after each outage, with what we learned that time.

Have we considered reaching out to OpenStack sponsors who have operational
folk, to see if they would be interested in contributing human resources to
this?

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Christopher Yeoh
On Tue, 4 Mar 2014 09:26:18 -0800
Vishvananda Ishaya  wrote:
> 
> On Mar 4, 2014, at 9:10 AM, Russell Bryant  wrote:
> > 
> > Thank you all for your participation on this topic.  It has been
> > quite controversial, but the API we expose to our users is a really
> > big deal. I'm feeling more and more confident that we're coming
> > through this with a much better understanding of the problem space
> > overall, as well as a better plan going forward than we had a few
> > weeks ago.
> 
> Hey Russell,
> 
> Thanks for bringing this to the mailing list and being open to
> discussion and collaboration. Also, thanks to everyone who is
> participating in the plan. Doing this kind of thing in the open is
> difficult and it has lead to a ton of debate, but this is the right
> way to do things. It says a lot about the strength of our community
> that we are able to have conversations like this without devolving
> into arguments and flame wars.

+1 to this. Discussions like this are very difficult to have and it is
a great feature of the community that everyone remains polite and civil
with each other.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Stan Lagun
Hi all,

Completely agree with Zane. Collaboration with TOSCA TC is a way to go as
Murano is very close to TOSCA. Like Murano = 0.9 * TOSCA + UI + OpenStack
services integration.

Let me share my thoughts on TOSCA, as I have read all the TOSCA docs and I'm
also the author of the initial Murano DSL design proposal, so I can probably
compare them.

We initially considered just implementing TOSCA before going with our own
DSL. There was no YAML TOSCA out there at that time, just the XML version.

So here's why we wrote our own DSL:

1. TOSCA is very complex and verbose. Considering there is no
production-ready tooling for TOSCA, users would have to type all those tons
of XML tags and namespaces by hand, and TOSCA XMLs are really hard to read
and write. No one is going to do this, especially outside of the
Java-enterprise world.

2. TOSCA has no workflow language. The TOSCA draft states that such a
language is indeed needed and recommends using BPEL or BPMN for that matter.
Earlier versions of Murano showed that some sort of workflow language
(declarative, imperative, whatever) is absolutely required for non-trivial
cases. If you don't have a workflow language then you have to hard-code a
lot of knowledge into the engine in Python. But the whole idea of the
AppCatalog was that users upload (share) their application templates, which
contain application-specific maintenance/deployment code that is run on a
common shared server (not in any particular VM) and is thus capable of
orchestrating all activities that take place on the different VMs belonging
to a given application (for complex applications with a typical enterprise
SOA architecture). Besides VMs, applications can talk to OpenStack services
like Heat, Neutron and Trove, and to 3rd-party services (DNS registration,
NNTP, license activation services etc.) - especially to Heat, so that an
application can have its own VMs and other IaaS resources. There is a
similar problem in Heat - you can express most of the basic things in HOT,
but once you need something really complex like accessing an external API,
custom load balancing or anything tricky, you need to resort to Python and
write a custom resource plugin. And then you are required to have root
access to the engine to install that plugin. This is not a solution for
Murano, as in Murano any user can upload an application manifest at any
time without affecting the running system and without admin permissions.

Now, going back to TOSCA: the problem with TOSCA workflows is that they are
not part of the standard. There is no standardized way for BPEL to access
TOSCA attributes or for the two systems to interact. This alone makes any
two TOSCA implementations incompatible with each other, rendering the whole
idea of a standard useless. It is not a standard if there is no
compatibility.

And again, BPEL is a heavy XML language that you don't want to have in
OpenStack. Trust me, I spent significant time studying it. And while there
is a YAML version of TOSCA that is much more readable than the XML one,
there is no such thing for BPEL. I'm not aware of any adequate replacement
for it.

3. It seems like nobody is really using TOSCA. The TOSCA standard defines
an exact package format; TOSCA was designed so that people can share those
packages (CSARs, as TOSCA calls them) between various TOSCA implementations.
I've tried to google those packages. It took me about an hour to find even
the most trivial CSAR example - and that was on the OASIS site.

4. There is no reference TOSCA implementation and no test suite. There is
no way to check that your implementation is really TOSCA-compatible, and no
one to ask questions.

5. TOSCA is very immature. They haven't even got the XML version adopted
and are already working on a YAML version that is not compatible with the
current draft.

6. TOSCA is too restrictive and verbose in some areas while leaving a lot
of white space in others.


So we decided to go with our own DSL that would eliminate all the problems
above. And I personally feel that Murano is what TOSCA would look like if it
had been designed for OpenStack. Murano is perfectly aligned with Heat and
other OpenStack services and practices. It is very Python-like. It is easy
to read and write once you learn the basics. It is more universal and less
restrictive than TOSCA. You can do much more than you could ever do in
TOSCA. And it is very extensible.

I hope that Murano and its ideas will find a way into OpenStack community :)



On Wed, Mar 5, 2014 at 2:16 AM, Zane Bitter  wrote:

> On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
>
>>
>> First of all let me highlight that Murano DSL was much inspired by
>> TOSCA. We carefully read this standard before our movement to Murano
>> DSL. The TOSCA standard has a lot of very well designed concepts and ideas
>> which we reused in Murano. There is one obvious drawback of TOSCA -
>> very heavy and verbose XML based syntax. Taking into account that
>> OpenStack itself is clearly moving away from XML based representations, it
>> will be strange to bring this huge XML monster back on a higher level.
>> Frankly, the current Murano workflows language is XML based and it is
>> quite painful to write workflows without any additional instrument
>> like an IDE.

Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Jay S Bryant
From:   Brian Cline 
To: openstack-dev@lists.openstack.org, 
Date:   03/04/2014 12:29 PM
Subject:Re: [openstack-dev] Proposal to move from Freenode to OFTC



On 03/04/2014 05:01 AM, Thierry Carrez wrote:
> James E. Blair wrote:
>> Freenode has been having a rough time lately due to a series of DDoS
>> attacks which have been increasingly disruptive to collaboration.
>> Fortunately there's an alternative.
>>
>> OFTC <http://www.oftc.net/> is a robust and established alternative
>> to Freenode.  It is a smaller network whose mission statement makes it
>> a less attractive target.  It's significantly more stable than Freenode
>> and has friendly and responsive operators.  The infrastructure team has
>> been exploring this area and we think OpenStack should move to using
>> OFTC.
> There is quite a bit of literature out there pointing to Freenode, like
> presentation slides from old conferences. We should expect people to
> continue to join Freenode's channels forever. I don't think staying a
> few weeks on those channels to redirect misled people will be nearly
> enough. Could we have a longer plan ? Like advertisement bots that would
> advise every n hours to join the right servers ?
>
>> [...]
>> 1) Create an irc.openstack.org CNAME record that points to
>> chat.freenode.net.  Update instructions to suggest users configure
>> their clients to use that alias.
> I'm not sure that helps. The people who would get (and react to) the DNS
> announcement are likely using proxies anyway, which you'll have to
> unplug manually from Freenode on switch day. The vast majority of users
> will just miss the announcement. So I'd rather just make a lot of noise
> on switch day :)
>
> Finally, I second Sean's question on OFTC's stability. As bad as
> Freenode is hit by DoS, they have experience handling this, mitigation
> procedures in place, sponsors lined up to help, so damage ends up
> *relatively* limited. If OFTC raises profile and becomes a target, are
> we confident they would mitigate DoS as well as Freenode does ? Or would
> they just disappear from the map completely ? I fear that we are trading
> a known evil for some unknown here.
>
> In all cases I would target post-release for the transition, maybe even
> post-Summit.
>

Indeed, I can't help but feel like the large amount of effort involved 
in changing networks is a bit of a riverboat gamble. DDoS has been an 
unfortunate reality for every well-known/trusted/stable IRC network for 
the last 15-20 years, and running from it rather than planning for it is 
usually a futile effort. It feels like we'd be chasing our tails trying 
to find a place where DDoS couldn't cause serious disruption; even then 
it's still not a sure thing. I would hate to see everyone's efforts to 
have been in vain once the same problem occurs there.

-- 
Brian Cline
br...@linux.vnet.ibm.com


+1. I have not seen this as a frequent problem. I have been aware of it,
but it seems a bit excessive to move to a possibly less-equipped provider.

-Jay





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Christopher Yeoh
On Tue, 4 Mar 2014 13:14:01 +
"Daniel P. Berrange"  wrote:

> On Tue, Mar 04, 2014 at 07:49:03AM -0500, Sean Dague wrote:
> > So this thread is getting deep again, as I expect they all will, so
> > I'm just going to top post and take the ire for doing so.
> > 
> > I also want to summarize what I've seen in the threads so far:
> > 
> > v2 needed forever - if I do a sentiment analysis here looking at the
> > orgs people are coming from, most of this is currently coming from
> > Rackspace folks (publicly). There might be many reasons for this,
> > one of which is the fact that they've done a big transition in the
> > near past (between *not openstack* and Openstack), and felt that
> > pain. Understanding that pain is good.
> > 
> > It is interesting that Phil actually brings up a completely
> > different issue from the HP cloud side, which is the amount of
> > complaints they are having to field about how terrible the v2 API
> > is. HP has actually had an OpenStack cloud public longer than
> > Rackspace. So this feedback shouldn't be lost.
> > 
> > So I feel like while some deployers have expressed no interest in
> > moving forward on API, others can't get there soon enough.
> > 
> > Which makes me think a lot about point 4. As has already been
> > suggested we could actually make v2 a proxy to v3. And just like
> > with images and volumes, it becomes frozen in Juno, and people that
> > want more features will move to the v3 API. Just like with other
> > services.
> 
> > This requires internal cleanups as well. However it wouldn't shut
> > down future evolution of the interface.
> > 
> > Nova really has 4 interfaces today
> >  * Nova v2 JSON
> >  * Nova v2 XML
> >  * Nova v3 JSON
> >  * EC2
> > 
> > I feel like if we had to lose one to decrease maintenance cost,
> > Nova v2 XML is the one to lose. And if we did, v2 on v3 isn't the
> > craziest thing in the world. It's not free, but neither is the
> > backport.
> 
> A proxy of v2 onto v3 is appealing, but do we think we have good
> enough testing of v2 to ensure that any proxy impl is bug-for-bug
> compatible with the original native v2 implementation ? Avoiding
> breakage of client apps is to me the key reason for keeping v2
> around, so we'd want very high confidence that any proxy impl is
> functionally identical with the orginal impl.

So if 100% bug for bug compatibility is our top concern, then the last
thing we want to do is to try to evolve the V2 API. Because we will end
up breaking it accidentally. And it impacts what internal changes we
can make because we've had problems with accidentally exposing internal
implementation issues through the API.

> If we want to proxy v2 onto v3, then by that same argument should we
> be proxying EC2 onto v3 as well.  ie Nova v3 JSON be the only
> supported API, and every thing else just be a proxy, potentially
> maintained out of tree from main nova codebase.

So I think this is a good long term goal (whether the proxying code is
in our out of tree is up for debate). There are apparently issues with
the EC2 needing something that our API doesn't expose, but perhaps it
could. It just needs people who are willing to look at doing that work.

Just as a random data point, when the devs who have been
looking at implementing the Google Compute Engine API asked about this
sort of thing on the mailing list, we suggested that they look at layering
their API on top of the Nova API. And IIRC they said that would be
possible.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Georgy Okrokvertskhov
Hi Thomas, Zane,


Thank you for bringing TOSCA to the discussion. I think this is an important
topic, as it will help us find better alignment or even a future merge of
the Murano DSL and Heat templates. The Murano DSL uses a YAML representation
too, so we can easily reuse constructions from Heat and probably from any
other YAML-based TOSCA formats.

I will be glad to join the TOSCA TC. Is there any formal process for that?

I also would like to use this opportunity to start a conversation with the
Heat team about the Heat roadmap and feature set. As Thomas mentioned in his
previous e-mail, the TOSCA topology story is quite well covered by HOT. At
the same time there are entities like Plans which are covered by Murano. We
had a discussion about bringing workflows to the Heat engine before the HK
summit, and it looks like the Heat team has no plans to bring workflows into
Heat. That is actually why we mentioned the Orchestration program as a
potential place for the Murano DSL, as Heat+Murano together will cover
everything that is defined by TOSCA.

I think the TOSCA initiative can be a great place to collaborate. It should
then be possible to use the simplified TOSCA format for application
descriptions, as TOSCA is intended to provide exactly such descriptions.

Is there a team driving the TOSCA implementation in the OpenStack community?
I feel that such a team is necessary.

Thanks
Georgy


On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier
wrote:

> Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
> > From: Zane Bitter 
> > To: openstack-dev@lists.openstack.org
> > Date: 04/03/2014 23:20
> > Subject: Re: [openstack-dev] Incubation Request: Murano
> >
> > On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
> > >
> > It so happens that the OASIS's TOSCA technical committee are working as
> > we speak on a "TOSCA Simple Profile" that will hopefully make things
> > easier to use and includes a YAML representation (the latter is great
> > IMHO, but the key to being able to do it is the former). Work is still
> > at a relatively early stage and in my experience they are very much open
> > to input from implementers.
>
> Nice, I was probably also writing a mail with this information at about the
> same time :-)
> And yes, we are very much interested in feedback from implementers and open
> to suggestions. If we can find gaps and fill them with good proposals, now
> is the right time.
>
> >
> > I would strongly encourage you to get involved in this effort (by
> > joining the TOSCA TC), and also to architect Murano in such a way that
> > it can accept input in multiple formats (this is something we are making
> > good progress toward in Heat). Ideally the DSL format for Murano+Heat
> > should be a trivial translation away from the relevant parts of the YAML
> > representation of TOSCA Simple Profile.
>
> Right, having a straight-forward translation would be really desirable. The
> way to get there can actually be two-fold: (1) any feedback we get from the
> Murano folks on the TOSCA simple profile and YAML can help us to make TOSCA
> capable of addressing the right use cases, and (2) on the other hand make
> sure the implementation goes in a direction that is in line with what TOSCA
> YAML will look like.
>
> >
> > cheers,
> > Zane.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Open Source and community working together

2014-03-04 Thread Stefano Maffulli
On the topic of 'always improving our community', here is the
announcement I promised

On 03/03/2014 01:52 PM, Stefano Maffulli wrote:
> [...] We'll have a first
> edition of this program in Atlanta. Some more details on the wiki
> https://wiki.openstack.org/wiki/OpenStack_Upstream_Training/Info
> 
> and the exact dates/place for the sessions in Atlanta in a couple of days.

I'm happy to announce a great opportunity for new contributors to
OpenStack: a free training program to accelerate the speed at which new
OpenStack developers are successful at integrating their own roadmap
into that of the OpenStack project.

Details on the blog


If you’re a new OpenStack contributor or plan on becoming one soon, you
should sign up for the next OpenStack Upstream Training in Atlanta, May
10-11. Participation is strongly advised also for first time
participants to OpenStack Design Summit.

Please spread the word among your colleagues.

Cheers,
Stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Douglas Mendizabal
I agree with Stefano.  Migrating the entire community to a new service
would be incredibly painful.  It seems the pain of moving is not justified
if we don’t know for a fact that OFTC would be more resilient to DDoS
attacks.


-1 to the switch as well.

-Doug Mendizabal


On 3/4/14, 2:48 PM, "Stefano Maffulli"  wrote:

>-1 to the switch from me.
>
>this question from Sean is of fundamental value:
>
>On 03/03/2014 03:19 PM, Sean Dague wrote:
>> #1) do we believe OFTC is fundamentally better equipped to resist a
>> DDOS, or do we just believe they are a smaller target? The ongoing DDOS
>> on meetup.com the past 2 weeks is a good indicator that being a smaller
>> fish only helps for so long.
>
>until we can say that *fundamentally* OFTC is not going to suffer
>disruptions in the future I wouldn't even remotely consider a painful
>switch like this one.
>
>/stef
>
>-- 
>Ask and answer questions on https://ask.openstack.org
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Kevin L. Mitchell
On Tue, 2014-03-04 at 17:31 -0400, Sandy Walsh wrote:
> > How about using 'unstable' as a component of the entrypoint group?
> > E.g., "nova.unstable.events"…
> 
> Wouldn't that defeat the "point" of entry points ... immutable
> endpoints? What happens when an unstable event is deemed stable?

Actually, the idea here is that the API that those entrypoints are
expected to express is unstable; when the API stabilizes, you'd remove
the "unstable" part, then *never* change the API again.  This is as
opposed to thinking that the entrypoint itself is unstable—nova
shouldn't care, it should just use it…
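
To make this concrete, here is a minimal sketch of the consuming side
using pkg_resources (the group name, the plugin name, and the handle()
method on the plugin API are all hypothetical):

    import pkg_resources

    # Plugins would register in their setup.cfg under the unstable
    # group, e.g.:
    #
    #   [entry_points]
    #   nova.unstable.events =
    #       my_handler = mypackage.events:MyHandler
    #
    # When the API stabilizes, the group is renamed to nova.events and
    # frozen; the entry point targets themselves never have to move.
    for ep in pkg_resources.iter_entry_points('nova.unstable.events'):
        handler_cls = ep.load()
        handler_cls().handle()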
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Thomas Spatzier
Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:
> From: Zane Bitter 
> To: openstack-dev@lists.openstack.org
> Date: 04/03/2014 23:20
> Subject: Re: [openstack-dev] Incubation Request: Murano
>
> On 04/03/14 00:04, Georgy Okrokvertskhov wrote:
> >
> It so happens that the OASIS's TOSCA technical committee are working as
> we speak on a "TOSCA Simple Profile" that will hopefully make things
> easier to use and includes a YAML representation (the latter is great
> IMHO, but the key to being able to do it is the former). Work is still
> at a relatively early stage and in my experience they are very much open
> to input from implementers.

Nice, I was probably also writing a mail with this information at about the
same time :-)
And yes, we are very much interested in feedback from implementers and open
to suggestions. If we can find gaps and fill them with good proposals, now
is the right time.

>
> I would strongly encourage you to get involved in this effort (by
> joining the TOSCA TC), and also to architect Murano in such a way that
> it can accept input in multiple formats (this is something we are making
> good progress toward in Heat). Ideally the DSL format for Murano+Heat
> should be a trivial translation away from the relevant parts of the YAML
> representation of TOSCA Simple Profile.

Right, having a straight-forward translation would be really desirable. The
way to get there can actually be two-fold: (1) any feedback we get from the
Murano folks on the TOSCA simple profile and YAML can help us to make TOSCA
capable of addressing the right use cases, and (2) on the other hand make
sure the implementation goes in a direction that is in line with what TOSCA
YAML will look like.

>
> cheers,
> Zane.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-03-04 Thread Joe Gordon
So since tools/config/check_uptodate.sh is oslo code, I assumed this
issue falls into the domain of oslo-incubator.

Until this gets resolved nova is considering
https://review.openstack.org/#/c/78028/

On Wed, Feb 5, 2014 at 9:21 AM, Daniel P. Berrange  wrote:
> On Wed, Feb 05, 2014 at 11:56:35AM -0500, Doug Hellmann wrote:
>> On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah 
>> wrote:
>>
>> >
>> > On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann wrote:
>> >
>> >> Including the config file in either the developer documentation or the
>> >> packaging build makes more sense. I'm still worried that adding it to the
>> >> sdist generation means you would have to have a lot of tools installed 
>> >> just
>> >> to make the sdist. However, we could
>> >
>> >
>> >
>> > I think that may slighty complicate more devstack with this, since we rely
>> > heavily on config samples to setup the services.
>> >
>>
>> Good point, we would need to add a step to generate a sample config for
>> each app instead of just copying the one in the source repository.
>
> Which is what 'python setup.py build' for an app would take care of.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Zane Bitter

On 04/03/14 00:04, Georgy Okrokvertskhov wrote:


First of all let me highlight that Murano DSL was much inspired by
TOSCA. We carefully read this standard before our movement to Murano
DSL. The TOSCA standard has a lot of very well designed concepts and ideas
which we reused in Murano. There is one obvious drawback of TOSCA -
very heavy and verbose XML based syntax. Taking into account that
OpenStack itself is clearly moving away from XML based representations, it
will be strange to bring this huge XML monster back on a higher level.
Frankly, the current Murano workflows language is XML based and it is
quite painful to write workflows without any additional instrument
like an IDE.


It so happens that the OASIS's TOSCA technical committee are working as 
we speak on a "TOSCA Simple Profile" that will hopefully make things 
easier to use and includes a YAML representation (the latter is great 
IMHO, but the key to being able to do it is the former). Work is still 
at a relatively early stage and in my experience they are very much open 
to input from implementers.


I would strongly encourage you to get involved in this effort (by 
joining the TOSCA TC), and also to architect Murano in such a way that 
it can accept input in multiple formats (this is something we are making 
good progress toward in Heat). Ideally the DSL format for Murano+Heat 
should be a trivial translation away from the relevant parts of the YAML 
representation of TOSCA Simple Profile.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Policy on upgades required config changes

2014-03-04 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
> Hi all,
> 
> As some of you know, I've been working on the instance-users blueprint[1].
> 
> This blueprint implementation requires three new items to be added to the
> heat.conf, or some resources (those which create keystone users) will not
> work:
> 
> https://review.openstack.org/#/c/73978/
> https://review.openstack.org/#/c/76035/
> 
> So on upgrade, the deployer must create a keystone domain and domain-admin
> user, add the details to heat.conf, as already been done in devstack[2].
> 
> The changes required for this to work have already landed in devstack, but
> it was discussed to day and Clint suggested this may be unacceptable
> upgrade behavior - I'm not sure so looking for guidance/comments.
> 
> My plan was/is:
> - Make devstack work
> - Talk to tripleo folks to assist in any transition (what prompted this
>   discussion)
> - Document the upgrade requirements in the Icehouse release notes so the
>   wider community can upgrade from Havana.
> - Try to give a heads-up to those maintaining downstream heat deployment
>   tools (e.g stackforge/puppet-heat) that some tweaks will be required for
>   Icehouse.
> 
> However some have suggested there may be an openstack-wide policy which
> requires peoples old config files to continue working indefinitely on
> upgrade between versions - is this right?  If so where is it documented?
> 

I don't think I said indefinitely, and I certainly did not mean
indefinitely.

What is required though, is that we be able to upgrade to the next
release without requiring a new config setting.

Also as we scramble to deal with these things in TripleO (as all of our
users are now unable to spin up new images), it is clear that it is more
than just a setting. One must create domain users carefully and roll out
a new password.

What I'm suggesting is that we should instead _warn_ that the old
behavior is being used and will be deprecated.

At this point, out of urgency, we're landing fixes. But in the future,
this should be considered carefully.
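
As a minimal sketch of what the warn-and-fall-back pattern could look
like (assuming oslo.config-style options; the option name mirrors the
new instance-users settings but is illustrative only):

    import logging

    from oslo.config import cfg

    LOG = logging.getLogger(__name__)

    opts = [cfg.StrOpt('stack_user_domain', default=None,
                       help='Keystone domain for heat-created users')]
    cfg.CONF.register_opts(opts)


    def get_stack_user_domain():
        if cfg.CONF.stack_user_domain is None:
            # Old config files keep working for this release, but the
            # operator is told loudly that the behaviour is deprecated.
            LOG.warning('stack_user_domain is not set; falling back to '
                        'the old instance-user behaviour, which is '
                        'deprecated and will be removed in a future '
                        'release.')
            return None
        return cfg.CONF.stack_user_domain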

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Thomas Spatzier
Hi all,

I would like to pick up the TOSCA topic brought up by Zane in his mail
below.

TOSCA is in fact a standard that seems closely aligned with the concepts
that Murano is implementing, so thanks Zane for bringing up this
discussion. I saw Georgy's reply earlier today where he stated that Murano is
actually heavily inspired by TOSCA, but Murano took a different path due to
some drawbacks in TOSCA v1.0 (e.g. XML).

I would like to point out, though, that we (TOSCA TC) are heavily working
on fixing some of the usability issues that TOSCA v1.0 has. The most
important point being that we are working on a YAML rendering, along with a
simplified profile of TOSCA, which both shall make it easier and more
attractive to use TOSCA. Much of this work has actually been inspired by
the collaboration with the Heat community and the development of the HOT
language.

That said, I would really like the Murano folks to have a look at a current
working draft of the TOSCA Simple Profile in YAML which can be found at
[1]. It would be nice to get some feedback, and ideally we could even
collaborate to see if we can come up with a common solution that fits
everyone's needs. As far as topologies are concerned, we are trying to get
TOSCA YAML and HOT well aligned so we can have an easy mapping. Sahdev from
our team (IRC spzala) is actually working on a TOSCA YAML to HOT converter
which he recently put on stackforge (initial version only). With Murano it
would be interesting to see if we could collaborate on the "plans" side of
TOSCA.

Apart from pure DSL work, I think Murano has some other interesting items
that are also interesting from a TOSCA perspective. For example, I read
about a catalog that stores artifacts needed for app deployments. TOSCA
also has the concept of artifacts, and we have a packaging format to
transport a model and its associated artifacts. So if at some point we
start thinking about importing such a TOSCA archive into a layer above
today's Heat, the question is if we could use e.g. the Murano catalog for
storing all content.

All that said, I see some good opportunities for collaboration and it would
be nice to find a common solution with good alignment between projects and
to avoid duplicate efforts.

BTW, Georgy, I am impressed how closely you looked at the TOSCA spec and
the charter :-)

[1]
https://www.oasis-open.org/committees/document.php?document_id=52381&wg_abbrev=tosca

Greetings,
Thomas

Zane Bitter  wrote on 04/03/2014 03:33:01:

> From: Zane Bitter 
> To: openstack-dev@lists.openstack.org
> Date: 04/03/2014 03:32
> Subject: Re: [openstack-dev] Incubation Request: Murano
>
> On 25/02/14 05:08, Thierry Carrez wrote:
> > The second challenge is that we only started to explore the space of
> > workload lifecycle management, with what looks like slightly
overlapping
> > solutions (Heat, Murano, Solum, and the openstack-compatible PaaS
> > options out there), and it might be difficult, or too early, to pick a
> > winning complementary set.
>
> I'd also like to add that there is already a codified OASIS standard
> (TOSCA) that covers application definition at what appears to be a
> similar level to Murano. Basically it's a more abstract version of what
> Heat does plus workflows for various parts of the lifecycle (e.g.
backup).
>
> Heat and TOSCA folks have been working together since around the time of
> the Havana design summit with the aim of eventually getting a solution
> for launching TOSCA applications on OpenStack. Nothing is set in stone
> yet, but I would like to hear from the Murano folks how they are
> factoring compatibility with existing standards into their plans.
>
> cheers,
> Zane.
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] LVM ephemeral storage encryption

2014-03-04 Thread Dan Genin

Hello Joe,

Sorry to bug you on what is probably a very busy day, but it
being the feature freeze and all, I just wanted to ask if there was any
chance of the LVM ephemeral storage encryption patch, which you -1'ed
today, making
it into Icehouse. The patch has received a lot of attention and has gone 
through numerous revisions. It is a pretty solid piece of code at this 
point.


Regarding your point about the lack of a trunk keymanager capable of 
providing different keys for encryption, you are, of course, correct. 
However, this situation is rapidly evolving and I believe that Barbican 
keymanager may achieve incubation status by the next release.


Thank you for your input and suggestions.
Best regards,
Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Sandy Walsh


On 03/04/2014 05:00 PM, Kevin L. Mitchell wrote:
> On Tue, 2014-03-04 at 12:11 -0800, Dan Smith wrote:
>> Now, the actual concern is not related to any of that, but about whether
>> we're going to open this up as a new thing we support. In general, my
>> reaction to adding new APIs people expect to be stable is "no". However,
>> I understand why things like the resource reporting and even my events
>> mechanism are very useful for deployers to do some plumbing and
>> monitoring of their environment -- things that don't belong upstream anyway.
>>
>> So I'm conflicted. I think that for these two cases, as long as we can
>> say that it's not a stable interface, I think it's probably okay.
>> However, things like we've had in the past, where we provide a clear
>> plug point for something like "Compute manager API class" are clearly
>> off the table, IMHO.
> 
> How about using 'unstable' as a component of the entrypoint group?
> E.g., "nova.unstable.events"…

Wouldn't that defeat the "point" of entry points ... immutable
endpoints? What happens when an unstable event is deemed stable?

> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Sandy Walsh
And sorry, as to your original problem: the loadables approach is kinda
messy, since only the classes that happen to be loaded when *that* module
is imported get used (vs. explicitly specifying them in a config). You may
get different results when the flow changes.

Either entry-points or config would give reliable results.


On 03/04/2014 03:21 PM, Murray, Paul (HP Cloud Services) wrote:
> In a chat with Dan Smith on IRC, he was suggesting that the important thing 
> was not to use class paths in the config file. I can see that internal 
> implementation should not be exposed in the config files - that way the 
> implementation can change without impacting the nova users/operators.

There's plenty of easy ways to deal with that problem vs. entry points.

MyModule.get_my_plugin() ... which can point to anywhere in the module
permanently.

Also, we don't have any of the headaches of merging setup.cfg sections
(as we see with oslo.* integration).

> Sandy, I'm not sure I really get the security argument. Python provides every 
> means possible to inject code, not sure plugins are so different. Certainly 
> agree on choosing which plugins you want to use though.

The concern is that any compromised part of the Python ecosystem can
get auto-loaded with the entry-point mechanism. Let's say Nova
auto-loads all modules with entry points in the [foo] section. All I have
to do is create a setup that has a [foo] section and my code is loaded.
Explicit is better than implicit.
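
A minimal sketch of the difference, assuming pkg_resources and an
oslo.config-style list option (the 'foo' group and option names are
hypothetical):

    import pkg_resources
    from oslo.config import cfg

    opts = [cfg.ListOpt('enabled_plugins', default=[],
                        help='Plugins the operator explicitly enables')]
    cfg.CONF.register_opts(opts)

    # Implicit: anything installed on the system that registers under
    # the 'foo' group gets imported -- including a compromised package.
    implicit = [ep.load() for ep in pkg_resources.iter_entry_points('foo')]

    # Explicit: only entry points named in the operator's config file
    # are ever imported.
    enabled = set(cfg.CONF.enabled_plugins)
    explicit = [ep.load()
                for ep in pkg_resources.iter_entry_points('foo')
                if ep.name in enabled]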

So, assuming we don't auto-load modules ... what does the entry-point
approach buy us?


> From: Russell Bryant [rbry...@redhat.com]
> We should be careful though.  We need to limit what we expose as external 
> plug points, even if we consider them unstable.  If we don't want it to be 
> public, it may not make sense for it to be a plugin interface at all.

I'm not sure what the concern with introducing new extension points is?
OpenStack is basically just a big bag of plugins. If it's optional, it's
supposed to be a plugin (according to the design tenets).



> 
> --
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Dan Smith
> How about using 'unstable' as a component of the entrypoint group?
> E.g., "nova.unstable.events"…

Well, this is a pretty heavy way to ensure that the admin gets the
picture, but maybe appropriate :)

What I don't think we want is the in-tree plugins having to hook into
something called "unstable". But, if we handle those one way and then
clearly find and loop through any "unstable $things" then maybe that's a
happy medium.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Suggestions for alarm improvements

2014-03-04 Thread Gordon Chung
hi Sampath

tbh, i actually like the pipeline solution proposed in the blueprint... 
that said, there hasn't been any work done relating to this in Icehouse. 
there was work on adding alarms to notification 
https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification 
but that has been pushed. i'd be interested in discussing adding alarms to 
the pipeline and its pros/cons vs the current implementation.

>  https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements
>  Is there any further discussion about [Part 4 - Moving Alarms into the
> Pipelines] in above doc?
is the pipeline alarm design attached to a blueprint? also, is your 
interest purely to see status or were you looking to work on it? ;)

cheers,
gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-04 Thread Sandhya Dasu (sadasu)
Hi,
During today's meeting, it was decided that we would re-purpose Robert's
https://blueprints.launchpad.net/neutron/+spec/pci-passthrough-sriov to
add a base class that handles common processing for SR-IOV ports.

This class would:

1. Inherits from AgentMechanismDriverBase.
2. Implements bind_port() where the binding:profile would be checked to
see if the port's vnic_type is VNIC_DIRECT or VNIC_MACVTAP.
3. Also checks to see that port belongs to vendor/product that supports
SR-IOV.
4. This class could be used by MDs that may or may not have a valid L2
agent.
5. Implements validate_port_binding(). This will always return True for MDs
that do not have an L2 agent.

Please let me know if I left out anything.
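
For concreteness, a rough, hypothetical sketch of how such a base class
might be shaped (the exact ML2 base-class interface and portbindings
constants should be checked against the tree; this is not verified
Icehouse code):

    from neutron.extensions import portbindings
    from neutron.plugins.ml2.drivers import mech_agent


    class SriovMechanismDriverBase(mech_agent.AgentMechanismDriverBase):

        # Vendor/product IDs a concrete driver supports, e.g.
        # ['8086:10ca']; filled in by the subclass.
        supported_pci_vendor_info = []

        def bind_port(self, context):
            # (2) only attempt to bind SR-IOV vnic types
            vnic_type = context.current.get(portbindings.VNIC_TYPE)
            if vnic_type not in (portbindings.VNIC_DIRECT,
                                 portbindings.VNIC_MACVTAP):
                return
            # (3) check the vendor/product carried in binding:profile
            profile = context.current.get(portbindings.PROFILE) or {}
            vendor = profile.get('pci_vendor_info')
            if vendor not in self.supported_pci_vendor_info:
                return
            super(SriovMechanismDriverBase, self).bind_port(context)

        def validate_port_binding(self, context):
            # (5) always succeeds for MDs without an L2 agent
            return True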

Thanks,
Sandhya
On 2/25/14 9:18 AM, "Sandhya Dasu (sadasu)"  wrote:

>Hi,
>As a follow up from today's IRC, Irena, are you looking to write the
>below mentioned Base/Mixin class that inherits from
>AgentMechanismDriverBase class? When you mentioned port state, were you
>referring to the validate_port_binding() method?
>
>Pls clarify.
>
>Thanks,
>Sandhya
>
>On 2/6/14 7:57 AM, "Sandhya Dasu (sadasu)"  wrote:
>
>>Hi Bob and Irena,
>>   Thanks for the clarification. Irena, I am not opposed to a
>>SriovMechanismDriverBase/Mixin approach, but I want to first figure out
>>how much common functionality there is. Have you already looked at this?
>>
>>Thanks,
>>Sandhya
>>
>>On 2/5/14 1:58 AM, "Irena Berezovsky"  wrote:
>>
>>>Please see inline my understanding
>>>
>>>-Original Message-
>>>From: Robert Kukura [mailto:rkuk...@redhat.com]
>>>Sent: Tuesday, February 04, 2014 11:57 PM
>>>To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
>>>usage questions); Irena Berezovsky; Robert Li (baoli); Brian Bowen
>>>(brbowen)
>>>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
>>>binding of ports
>>>
>>>On 02/04/2014 04:35 PM, Sandhya Dasu (sadasu) wrote:
 Hi,
  I have a couple of questions for ML2 experts regarding support of
 SR-IOV ports.
>>>
>>>I'll try, but I think these questions might be more about how the
>>>various
>>>SR-IOV implementations will work than about ML2 itself...
>>>
 1. The SR-IOV ports would not be managed by ova or linuxbridge L2
 agents. So, how does a MD for SR-IOV ports bind/unbind its ports to
 the host? Will it just be a db update?
>>>
>>>I think whether or not to use an L2 agent depends on the specific SR-IOV
>>>implementation. Some (Mellanox?) might use an L2 agent, while others
>>>(Cisco?) might put information in binding:vif_details that lets the nova
>>>VIF driver take care of setting up the port without an L2 agent.
>>>[IrenaB] Based on the VIF_Type that the MD defines, and going forward with
>>>other binding:vif_details attributes, the VIFDriver should do the VIF
>>>plugging part.
>>>As for the required networking configuration, it is usually done
>>>either by an L2 Agent or an external Controller, depending on the MD.
>>>
 
 2. Also, how do we handle the functionality in mech_agent.py, within
 the SR-IOV context?
>>>
>>>My guess is that those SR-IOV MechanismDrivers that use an L2 agent
>>>would
>>>inherit the AgentMechanismDriverBase class if it provides useful
>>>functionality, but any MechanismDriver implementation is free to not use
>>>this base class if its not applicable. I'm not sure if an
>>>SriovMechanismDriverBase (or SriovMechanismDriverMixin) class is being
>>>planned, and how that would relate to AgentMechanismDriverBase.
>>>
>>>[IrenaB] Agree with Bob, and as I stated before I think there is a need
>>>for SriovMechanismDriverBase/Mixin that provides all the generic
>>>functionality and helper methods that are common to SRIOV ports.
>>>-Bob
>>>
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu mailto:sad...@cisco.com>>
 Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
 >>> >
 Date: Monday, February 3, 2014 3:14 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 >>> >, Irena Berezovsky
 mailto:ire...@mellanox.com>>, "Robert Li
(baoli)"
 mailto:ba...@cisco.com>>, Robert Kukura
 mailto:rkuk...@redhat.com>>, "Brian Bowen
 (brbowen)" mailto:brbo...@cisco.com>>
 Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
 extra hr of discussion today
 
 Hi,
 Since, openstack-meeting-alt seems to be in use, baoli and myself
 are moving to openstack-meeting. Hopefully, Bob Kukura & Irena can
 join soon.
 
 Thanks,
 Sandhya
 
 From: Sandhya Dasu mailto:sad...@cisco.com>>
 Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
 >>> >
 Date: Monday, February 3, 2014 1:26 PM
 To: Irena Berezovsky >>> >, "Robert Li (baoli)" >>> 

Re: [openstack-dev] [Neutron][FYI] Bookmarklet for neutron gerrit review

2014-03-04 Thread Carl Baldwin
Nachi,

Great!  I'd been meaning to do something like this.  I took yours and
tweaked it a bit to highlight failed Jenkins builds in red and to grey
out other Jenkins messages.  Human reviews are left in blue.

javascript:(function(){
// Gerrit renders review comment titles in td.GJEA35ODGC cells (an
// obfuscated GWT class name, so it may change between Gerrit releases).
list = document.querySelectorAll('td.GJEA35ODGC');
for(i in list) {
title = list[i];
if(! title.innerHTML) { continue; } // skip NodeList properties like length
text = title.nextSibling;
if (text.innerHTML.search('Build failed') > 0) {
title.style.color='red' // failed Jenkins builds
} else if(title.innerHTML.search('Jenkins|CI|Ryu|Testing|Mine') >= 0) {
title.style.color='#666666' // other CI chatter, greyed out
} else {
title.style.color='blue' // human reviewers
}
}
})()

Carl

On Wed, Feb 26, 2014 at 12:31 PM, Nachi Ueno  wrote:
> Hi folks
>
> I wrote an bookmarklet for neutron gerrit review.
> This bookmarklet make the comment title for 3rd party ci as gray.
>
> javascript:(function(){list =
> document.querySelectorAll('td.GJEA35ODGC'); for(i in
> list){if(!list[i].innerHTML){continue;};if(list[i].innerHTML &&
> 0){list[i].style.color='#666666'}else{list[i].style.color='red'}};})()
> 0){list[i].style.color='#66'}else{list[i].style.color='red'}};})()
>
> enjoy :)
> Nachi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Kevin L. Mitchell
On Tue, 2014-03-04 at 12:11 -0800, Dan Smith wrote:
> Now, the actual concern is not related to any of that, but about whether
> we're going to open this up as a new thing we support. In general, my
> reaction to adding new APIs people expect to be stable is "no". However,
> I understand why things like the resource reporting and even my events
> mechanism are very useful for deployers to do some plumbing and
> monitoring of their environment -- things that don't belong upstream anyway.
> 
> So I'm conflicted. I think that for these two cases, as long as we can
> say that it's not a stable interface, I think it's probably okay.
> However, things like we've had in the past, where we provide a clear
> plug point for something like "Compute manager API class" are clearly
> off the table, IMHO.

How about using 'unstable' as a component of the entrypoint group?
E.g., "nova.unstable.events"…
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Stefano Maffulli
-1 to the switch from me.

this question from Sean is of fundamental value:

On 03/03/2014 03:19 PM, Sean Dague wrote:
> #1) do we believe OFTC is fundamentally better equipped to resist a
> DDOS, or do we just believe they are a smaller target? The ongoing DDOS
> on meetup.com the past 2 weeks is a good indicator that being a smaller
> fish only helps for so long.

until we can say that *fundamentally* OFTC is not going to suffer
disruptions in the future I wouldn't even remotely consider a painful
switch like this one.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 02:08:03PM EST, Robert Li (baoli) wrote:
> Hi Xu Han & Sean,
> 
> Is this code going to be committed as it is? Based on this morning's
> discussion, I thought that the IP address used to install the RA rule
> comes from the qr-xxx interface's LLA address. I think that I'm confused.

Xu Han has a better grasp on the query than I do, but I'm going to try
and take a crack at explaining the code as I read through it. Here's
some sample data from the Neutron database - built using
vagrant_devstack. 

https://gist.github.com/sc68cal/568d6119eecad753d696

I don't have V6 addresses working in vagrant_devstack just yet, but for
the sake of discourse I'm going to use it as an example.

If you look at the queries he's building in 72252 - he's querying all
the ports on the network that are q_const.DEVICE_OWNER_ROUTER_INTF 
("network:router_interface"). The IPs of those ports are added to the list of 
IPs.

Then a second query is done to find the port connected from the router
to the gateway, q_const.DEVICE_OWNER_ROUTER_GW
('network:router_gateway'). Those IPs are then appended to the list of
IPs.

Finally, the last query adds the IPs of the gateway for each subnet
in the network.

So, ICMPv6 traffic from ports that are either:

A) A gateway device
B) A router
C) The subnet's gateway 

Will be passed through to an instance.
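
A simplified sketch of that selection logic, written against plain data
structures rather than the actual SQLAlchemy queries ('ports' and
'subnets' stand in for the Neutron DB rows on the network):

    from neutron.common import constants as q_const

    def allowed_ra_sources(ports, subnets):
        ips = []
        # Ports plugged into a router on this network.
        ips += [ip['ip_address'] for p in ports
                if p['device_owner'] == q_const.DEVICE_OWNER_ROUTER_INTF
                for ip in p['fixed_ips']]
        # The router-to-external-gateway ports.
        ips += [ip['ip_address'] for p in ports
                if p['device_owner'] == q_const.DEVICE_OWNER_ROUTER_GW
                for ip in p['fixed_ips']]
        # Each subnet's configured gateway address.
        ips += [s['gateway_ip'] for s in subnets if s['gateway_ip']]
        return ips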

Now, please take note that I have *not* discussed what *kind* of IP
address will be picked up. We intend for it to be a Link Local address,
but that will be/is addressed in other patch sets.

> Also this bug: Allow LLA as router interface of IPv6 subnet
> https://review.openstack.org/76125 was created due to comments to 72252.
> If We don't need to create a new LLA for the gateway IP, is the fix still
> needed? 

Yes - we still need this patch - because that code path is how we are
able to create ports on routers that are a link local address.


This is at least my understanding of our progress so far, but I'm not
perfect - Xu Han will probably have the last word.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Minutes for 4 March Meeting

2014-03-04 Thread Brian Curtin
Today was the second python-openstacksdk meeting

Minutes: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-03-04-19.00.log.html

Action Items:
1. Remove the existing API strawman (https://review.openstack.org/#/c/75362/)
2. Sketch out a core HTTP layer to build on
3. Write a rough Identity client

The next meeting is scheduled for Tuesday 11 March at 1900 UTC / 1300 CST.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Dan Smith
> In a chat with Dan Smith on IRC, he was suggesting that the important
> thing was not to use class paths in the config file. I can see that
> internal implementation should not be exposed in the config files -
> that way the implementation can change without impacting the nova
> users/operators.
> 
> Sandy, I'm not sure I really get the security argument. Python
> provides every means possible to inject code, not sure plugins are so
> different. Certainly agree on choosing which plugins you want to use
> though.

Yeah, so I don't think there's any security reason why one is better
than the other. I think that we've decided that providing a class path
is ugly, and I agree, especially if we have entry points at our disposal.

Now, the actual concern is not related to any of that, but about whether
we're going to open this up as a new thing we support. In general, my
reaction to adding new APIs people expect to be stable is "no". However,
I understand why things like the resource reporting and even my events
mechanism are very useful for deployers to do some plumbing and
monitoring of their environment -- things that don't belong upstream anyway.

So I'm conflicted. I think that for these two cases, as long as we can
say that it's not a stable interface, I think it's probably okay.
However, things like we've had in the past, where we provide a clear
plug point for something like "Compute manager API class" are clearly
off the table, IMHO.

--Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday March 4th at 19:00 UTC

2014-03-04 Thread Elizabeth Krumbach Joseph
On Mon, Mar 3, 2014 at 8:53 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday March 4th, at 19:00 UTC in
> #openstack-meeting

Thanks to everyone who joined us, meeting minutes and log here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-03-04-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Russell Bryant
On 03/04/2014 12:42 PM, Sean Dague wrote:
> On 03/04/2014 12:26 PM, Vishvananda Ishaya wrote:
>> 
>> On Mar 4, 2014, at 9:10 AM, Russell Bryant 
>> wrote:
>>> 
>>> Thank you all for your participation on this topic.  It has
>>> been quite controversial, but the API we expose to our users is
>>> a really big deal. I'm feeling more and more confident that
>>> we're coming through this with a much better understanding of
>>> the problem space overall, as well as a better plan going
>>> forward than we had a few weeks ago.
>> 
>> Hey Russell,
>> 
>> Thanks for bringing this to the mailing list and being open to
>> discussion and collaboration. Also, thanks to everyone who is
>> participating in the plan. Doing this kind of thing in the open
>> is difficult and it has lead to a ton of debate, but this is the
>> right way to do things. It says a lot about the strength of our
>> community that we are able to have conversations like this
>> without devolving into arguments and flame wars.
>> 
>> Vish
> 
> +1, and definitely appreciate Russell's leadership through this
> whole discussion.

Thanks for the kind words.  It really is a group effort.  Even in the
face of an incredibly controversial topic, we can't be afraid to ask
hard questions.  It takes a lot of maturity and focus to work through
the answers toward some sort of consensus without it turning into, as
Vish said, arguments and flame wars.

Nova (and OpenStack overall) is made up of a pretty incredible group
of people.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-04 Thread Georgy Okrokvertskhov
Hi,

Here is an etherpad page with current Murano status
http://etherpad.openstack.org/p/murano-incubation-status.

Thanks
Georgy


On Mon, Mar 3, 2014 at 9:04 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi Zane,
>
> Thank you very much for this question.
>
> First of all let me highlight that Murano DSL was much inspired by TOSCA.
> We carefully read this standard before our movement to Murano DSL. TOSCA
> standard has a lot f of very well designed concepts and ideas which we
> reused in Murano. There is one obvious draw back of TOSCA - very heavy and
> verbose XML based syntax. Taking into account that OpenStack itself is
> clearly moving from XML based representations, it will be strange to bring
> this huge XML monster back on a higher level. Frankly, the current Murano
> workflows language is XML based and it is quite painful to write a
> workflows without any additional instrument like IDE.
>
> Now let me remind that TOSCA has a defined scope of its responsibility.
> There is a list of areas which are out of scope. For Murano it was
> important to see that the following items are out of TOSCA scope:
> Citations from [1]:
> ...
> 2. The definition of concrete plans, i.e. the definition of plans in any
> process modeling language like BPMN or BPEL.
> 3. The definition of a language for defining plans (i.e. a new process
> modeling language).
> ...
> Plans in TOSCA understanding is something similar to workflows. This is
> what we address by Murano workflow.
>
> Not let me go through TOSCA ideas and show how they are reflected in
> Murano. It will be a long peace of text so feel free to skip it.
>
> Taking this into account lets review what we have in Murano as an
> application package. Inside application package we have:
> 1. Application metadata which describes application, its relations and
> properties
> 2. Heat templates snippets
> 3. Scripts for deployment
> 4. Workflow definitions
>
> In TOSCA document in section 3.2.1 there are Service Templates introduced.
> These templates are declarative descriptions of services components and
> service Topologies. Service templates can be stored in catalog to be found
> and used by users. This  service template description is abstracted from
> actual infrastructure implementation and each cloud provider maps this
> definition to actual cloud infrastructure. This is definitely a part which
> is already covered by Heat.
> The same section says the following:  "Making a concrete instance of a
> Topology Template can be done by running a corresponding Plan (so-called
> instantiating management plan, a.k.a. build plan). This build plan could be
> provided by the service developer who also creates the Service Template." This
> plan part which is out of scope of TOSCA is essentially what Murano adds as
> a part of application definition.
>
> Section 3.3 of TOSCA document introduces an new entity - artifacts.
> Artifact is a name for content which is needed for service deployment
> including (scripts, executables, binaries and images). That is why Murano
> has a metadata service to store artifacts as a part of application package.
> Moreover, Murano works with Glance team to move this metadata repository
> from Murano to Glance providing generic artifact repository which can be
> used not only by Murano but by any other services.
>
> Sections 3.4 and 3.5 explain the idea of Relationships with
> Compatibilities and Service Composition. Murano actually implements all
> these high level features. Application definition has a section with
> contract definitions. This contract syntax is not just a declaration of the
> relations and capabilities but also a way to make assertions and on-the fly
> type validation and conversion if needed.  Section 10 reveals details of
> requirements. It explains that requirements can be complex: inherit each
> other and be abstract to define a broad set of required values. Like when
> service requires relation database it will require type=RDMS without
> assuming the actual DB implementation MySQL, PostgreSQL or MSSQL.
>
> In order to solve the problem of complex requirements, relations and
> service composition we introduced  classes in our DSL. It was presented and
> discussed in this e-mail thread [3]. Murano DSL syntax allows application
> package writer to compose applications from existing classes by using
> inheritance and class properties with complex types like classes. It is
> possible to define a requirement with using abstract classes to define
> specific types of applications and services like databases, webservers and
> other. Using class inheritance Murano will be able to satisfy the
> requirement by specific object which proper parent class by checking the
> whole hierarchy of objects parent classes which can be abstract.
>
> I don't want to cover all entities defined in TOSCA as most important were
> discussed already. There are implementations of many TOSCA concepts in
> Murano, like class properti

Re: [openstack-dev] does exception need localize or not?

2014-03-04 Thread Ben Nemec
 

On 2014-02-27 02:45, yongli he wrote: 

> refer to :
> https://wiki.openstack.org/wiki/Translations [1]
> 
> now some exception use _ and some not. the wiki suggest do not to do that. 
> but i'm not sure.
> 
> what's the correct way?
> 
> F.Y.I 
> 
> WHAT TO TRANSLATE
> 
> At present the convention is to translate _all_ user-facing strings. This 
> means API messages, CLI responses, documentation, help text, etc. 
> 
> There has been a lack of consensus about the translation of log messages; the 
> current ruling is that while it is not against policy to mark log messages 
> for translation if your project feels strongly about it, translating log 
> messages is not actively encouraged. 
> 
> Exception text should _not_ be marked for translation, becuase if an 
> exception occurs there is no guarantee that the translation machinery will be 
> functional. 
> 
> Regards
> Yongli He

Interestingly, although that is very clear that we should not translate
exceptions, we do anyway in many (most?) OpenStack projects. For the
time being I would say go with whatever the project you're contributing
to does, but it's a discussion we may need to have since the reasoning
on the wiki page seems sound to me, especially now that lazy translation
makes the translation mechanism so much more complicated. 

-Ben 
 

Links:
--
[1] https://wiki.openstack.org/wiki/Translations
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Chris Behrens

On Mar 4, 2014, at 11:14 AM, Sean Dague  wrote:

> 
> I want to give the knobs to the users. If we thought it was important
> enough to review and test in Nova, then we made a judgement call that
> people should have access to it.

Oh, I see. But, I don’t agree, certainly not for every single knob. It’s less 
of an issue in the private cloud world, but when you start offering this as a 
service, not everything is appropriate to enable.

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-04 Thread Gokul Kandiraju
Dear All,



We are working on a framework where we want to monitor the system and take
certain actions when specific events or situations occur. Here are two
examples of 'different' situations:



   Example 1: A VM's-Owner and N/W's-owner are different ==> this could
mean a violation ==> we need to take some action

   Example 2: A simple policy such as (VM-migrate of all VMs on possible
node failure) OR (a more complex Energy Policy that may involve
optimization).



Both these examples need monitoring and actions to be taken when certain
events happen (or through polling). However, the first one falls into the
Compliance domain with Boolean conditions getting evaluated while the
second one may require a more richer set of expression allowing for
sequences or algorithms.

 So far, based on this discussion, it seems that these are *separate*
initiatives in the community. I am understanding the Congress project to be
in the domain of Boolean conditions (used for Compliance, etc.) where as
the Run-time-policies (Jay's proposal) where policies can be expressed as
rules, algorithms with higher-level goals. Is this understanding correct?

Also, looking at all the mails, this is what I am reading:



 1. Congress -- Focused on Compliance [ is that correct? ] (Boolean
constraints and logic)



 2. Runtime-Policies --  -- Focused on Runtime policies for
Load Balancing, Availability, Energy, etc. (sequences of actions, rules,
algorithms)



 3. SolverScheduler -- Focused on Placement [ static or runtime ] and
will be invoked by the (above) policy engines



 4. Gantt - Focused on (Holistic) Scheduling



 5. Neat -- seems to be a special case of Runtime-Policies  (policies
based on Load)



Would this be correct understanding?  We need to understand this to
contribute to the right project. :)



Thanks!

-Gokul



On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau  wrote:

> Hi Yathiraj and Tim,
>
> Really appreciate your comments here ;-)
>
> I will prepare some detailed slides or documents before summit and we can
> have a review then. It would be great if OpenStack can provide "DRS"
> features.
>
> Thanks,
>
> Jay
>
>
>
> 2014-03-01 6:00 GMT+08:00 Tim Hinrichs :
>
> Hi Jay,
>>
>> I think the Solver Scheduler is a better fit for your needs than Congress
>> because you know what kinds of constraints and enforcement you want.  I'm
>> not sure this topic deserves an entire design session--maybe just talking a
>> bit at the summit would suffice (I *think* I'll be attending).
>>
>> Tim
>>
>> - Original Message -
>> | From: "Jay Lau" 
>> | To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> | Sent: Wednesday, February 26, 2014 6:30:54 PM
>> | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for
>> OpenStack run time policy to manage
>> | compute/storage resource
>> |
>> |
>> |
>> |
>> |
>> |
>> | Hi Tim,
>> |
>> | I'm not sure if we can put resource monitor and adjust to
>> | solver-scheduler (Gantt), but I have proposed this to Gantt design
>> | [1], you can refer to [1] and search "jay-lau-513".
>> |
>> | IMHO, Congress does monitoring and also take actions, but the actions
>> | seems mainly for adjusting single VM network or storage. It did not
>> | consider migrating VM according to hypervisor load.
>> |
>> | Not sure if this topic deserved to be a design session for the coming
>> | summit, but I will try to propose.
>> |
>> |
>> |
>> |
>> | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
>> |
>> |
>> |
>> | Thanks,
>> |
>> |
>> | Jay
>> |
>> |
>> |
>> | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs < thinri...@vmware.com > :
>> |
>> |
>> | Hi Jay and Sylvain,
>> |
>> | The solver-scheduler sounds like a good fit to me as well. It clearly
>> | provisions resources in accordance with policy. Does it monitor
>> | those resources and adjust them if the system falls out of
>> | compliance with the policy?
>> |
>> | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
>> | There was mention of compute, networking, and storage, and I
>> | couldn't tell if the idea was for policy that spans OS components or
>> | not. Congress was designed for policies spanning OS components.
>> |
>> |
>> | Tim
>> |
>> | - Original Message -
>> |
>> | | From: "Jay Lau" < jay.lau@gmail.com >
>> | | To: "OpenStack Development Mailing List (not for usage questions)"
>> | | < openstack-dev@lists.openstack.org >
>> |
>> |
>> | | Sent: Tuesday, February 25, 2014 10:13:14 PM
>> | | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
>> | | for OpenStack run time policy to manage
>> | | compute/storage resource
>> | |
>> | |
>> | |
>> | |
>> | |
>> | | Thanks Sylvain and Tim for the great sharing.
>> | |
>> | | @Tim, I also go through with Congress and have the same feeling
>> | | with
>> | | Sylvai, it is likely that Congress is doing something simliar with
>> | | Gantt providing a holistic way for d

Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 11:08 AM, Ben Nemec  wrote:
> On 2014-03-04 12:51, Sean Dague wrote:
>>
>> On 03/04/2014 01:27 PM, Ben Nemec wrote:
>>>
>>> This warning should be gone by default once
>>>
>>> https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
>>> gets synced.  I believe there is work underway by the db team to get
>>> that done.
>>>
>>> Note that the reason it will be gone is that we're changing the default
>>> oslo db mode to traditional, so if we have any code that would have
>>> triggered silent data corruption it's now going to be not so silent.
>>>
>>> -Ben
>>
>>
>> Ok, but we're at the i3 freeze. So is there a db patch set up for every
>> service to sync that, and FFE ready to let this land?
>>
>> Because otherwise I'm very afraid this is going to get trapped as 1/2
>> implemented, which would be terrible for the release.
>>
>> So basically, who is driving these patches out to the projects?
>>
>> -Sean
>
>
> I'm not sure.  We're tracking the sync work here:
> https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync but it just says
> the db team is working on it.
>
> Adding Joe and Doug since I think they know more about what's going on with
> this.

https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L100

>
> If we can't get db synced, it's basically a bit flip to turn on traditional
> mode in the projects that are seeing this message right now.  I'd rather not
> since we want to drop support for that in favor of the general sql_mode
> option, but it can certainly be done if necessary.
>
> -Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Murray, Paul (HP Cloud Services)
In a chat with Dan Smith on IRC, he was suggesting that the important thing was 
not to use class paths in the config file. I can see that internal 
implementation should not be exposed in the config files - that way the 
implementation can change without impacting the nova users/operators.

Sandy, I'm not sure I really get the security argument. Python provides every 
means possible to inject code, not sure plugins are so different. Certainly 
agree on choosing which plugins you want to use though.

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com] 
Sent: 04 March 2014 17:50
To: OpenStack Development Mailing List (not for usage questions); Murray, Paul 
(HP Cloud Services)
Subject: RE: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

This brings up something that's been gnawing at me for a while now ... why use 
entry-point based loaders at all? I don't see the problem they're trying to 
solve. (I thought I got it for a while, but I was clearly fooling myself)

1. If you use the "load all drivers in this category" feature, that's a 
security risk since any compromised python library could hold a trojan.

2. otherwise you have to explicitly name the plugins you want (or don't want) 
anyway, so why have the extra indirection of the entry-point? Why not just name 
the desired modules directly? 

3. the real value of a loader would be to also extend/manage the python path 
... that's where the deployment pain is. "Use  driver 
and take care of the pathing for me." Abstracting the module/class/function 
name isn't a great win. 

I don't see where the value is for the added pain (entry-point 
management/package metadata) it brings. 

CMV,

-S

From: Russell Bryant [rbry...@redhat.com]
Sent: Tuesday, March 04, 2014 1:29 PM
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
> One of my patches has a query asking if I am using the agreed way to 
> load plugins: https://review.openstack.org/#/c/71557/
>
> I followed the same approach as filters/weights/metrics using 
> nova.loadables. Was there an agreement to do it a different way? And 
> if so, what is the agreed way of doing it? A pointer to an example or 
> even documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as external plug 
points, even if we consider them unstable.  If we don't want it to be public, 
it may not make sense for it to be a plugin interface at all.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
> I think the current snapshot implementation can be a solution sometimes, but
> it is NOT exact same as user's expectation. For example, a new blueprint is
> created last week,
> https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot, which
> seems a little similar with this discussion. I feel the user is requesting
> Nova to create in-place snapshot (not a new image), in order to revert the
> instance to a certain state. This capability should be very useful when
> testing new software or system settings. It seems a short-term temporary
> snapshot associated with a running instance for Nova. Creating a new
> instance is not that convenient, and may be not feasible for the user,
> especially if he or she is using public cloud.
>

Why isn't it easy to create a new instance from a snapshot?

>
> On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
>  wrote:
>>
>> >>> Why reboot an instance? What is wrong with deleting it and create a
>> >>> new one?
>>
>> You generally use non-persistent disk mode when you are testing new
>> software or experimenting with settings.   If something goes wrong just
>> reboot and you are back to clean state and start over again.I feel it's
>> convenient to handle this with just a reboot rather than recreating the
>> instance.
>>
>> Thanks,
>> Divakar
>>
>> -Original Message-
>> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> Sent: Tuesday, March 04, 2014 10:41 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
>> stopping VM, data will be rollback automatically), do you think we shoud
>> introduce this feature?
>> Importance: High
>>
>> On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang 
>> wrote:
>> >>
>> >> This sounds like ephemeral storage plus snapshots.  You build a base
>> >> image, snapshot it then boot from the snapshot.
>> >
>> >
>> > Non-persistent storage/disk is useful for sandbox-like environment, and
>> > this feature has already exists in VMWare ESX from version 4.1. The
>> > implementation of ESX is the same as what you said, boot from snapshot of
>> > the disk/volume, but it will also *automatically* delete the transient
>> > snapshot after the instance reboots or shutdowns. I think the whole
>> > procedure may be controlled by OpenStack other than user's manual
>> > operations.
>>
>> Why reboot an instance? What is wrong with deleting it and create a new
>> one?
>>
>> >
>> > As far as I know, libvirt already defines the corresponding 
>> > element in domain xml for non-persistent disk ( [1] ), but it cannot 
>> > specify
>> > the location of the transient snapshot. Although qemu-kvm has provided
>> > support for this feature by the "-snapshot" command argument, which will
>> > create the transient snapshot under /tmp directory, the qemu driver of
>> > libvirt don't support  element currently.
>> >
>> > I think the steps of creating and deleting transient snapshot may be
>> > better to done by Nova/Cinder other than waiting for the  
>> > support
>> > added to libvirt, as the location of transient snapshot should specified by
>> > Nova.
>> >
>> >
>> > [1] http://libvirt.org/formatdomain.html#elementsDisks
>> > --
>> > zhangleiqiang
>> >
>> > Best Regards
>> >
>> >
>> >> -Original Message-
>> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> >> Sent: Tuesday, March 04, 2014 11:26 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Cc: Luohao (brian)
>> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
>> >> storage(after stopping VM, data will be rollback automatically), do
>> >> you think we shoud introduce this feature?
>> >>
>> >> On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C) 
>> >> wrote:
>> >> > Hi stackers,
>> >> >
>> >> > As far as I know ,there are two types of storage used by VM in
>> >> > openstack:
>> >> Ephemeral Storage and Persistent Storage.
>> >> > Data on ephemeral storage ceases to exist when the instance it is
>> >> > associated
>> >> with is terminated. Rebooting the VM or restarting the host server,
>> >> however, will not destroy ephemeral data.
>> >> > Persistent storage means that the storage resource outlives any
>> >> > other
>> >> resource and is always available, regardless of the state of a running
>> >> instance.
>> >> >
>> >> > There is a use case that maybe need a new type of storage, maybe we
>> >> > can
>> >> call it non-persistent storage .
>> >> > The use case is that VMs are assigned to the public ephemerally in
>> >> > public
>> >> areas.
>> >> > After the VM is used, new data on storage of VM ceases to exist
>> >> > when the
>> >> instance it is associated with is stopped.
>> >> > It means stop the VM, Non-persistent storage used by VM will be
>> >> > rollback
>> >> automatically.
>> >> >
>> >> > Is there any other suggestions? Or any BPs about this use case?
>> >> >
>> >>
>> >> This sounds like 

Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Ben Nemec

On 2014-03-04 12:51, Sean Dague wrote:

On 03/04/2014 01:27 PM, Ben Nemec wrote:

This warning should be gone by default once
https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
gets synced.  I believe there is work underway by the db team to get
that done.

Note that the reason it will be gone is that we're changing the 
default

oslo db mode to traditional, so if we have any code that would have
triggered silent data corruption it's now going to be not so silent.

-Ben


Ok, but we're at the i3 freeze. So is there a db patch set up for every
service to sync that, and FFE ready to let this land?

Because otherwise I'm very afraid this is going to get trapped as 1/2
implemented, which would be terrible for the release.

So basically, who is driving these patches out to the projects?

-Sean


I'm not sure.  We're tracking the sync work here: 
https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync but it just 
says the db team is working on it.


Adding Joe and Doug since I think they know more about what's going on 
with this.


If we can't get db synced, it's basically a bit flip to turn on 
traditional mode in the projects that are seeing this message right now. 
 I'd rather not since we want to drop support for that in favor of the 
general sql_mode option, but it can certainly be done if necessary.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Sean Dague
On 03/04/2014 02:03 PM, Chris Behrens wrote:
> 
> On Mar 4, 2014, at 4:09 AM, Sean Dague  > wrote:
> 
>> On 03/04/2014 01:14 AM, Chris Behrens wrote:
>>> […]
>>> I don’t think I have an answer, but I’m going to throw out some of my
>>> random thoughts about extensions in general. They might influence a
>>> longer term decision. But I’m also curious if I’m the only one that
>>> feels this way:
>>>
>>> I tend to feel like extensions should start outside of nova and any
>>> other code needed to support the extension should be implemented by
>>> using hooks in nova. The modules implementing the hook code should be
>>> shipped with the extension. If hooks don’t exist where needed, they
>>> should be created in trunk. I like hooks. Of course, there’s probably
>>> such a thing as too many hooks, so… hmm… :)  Anyway, this addresses
>>> another annoyance of mine whereby code for extensions is mixed in all
>>> over the place. Is it really an extension if all of the supporting
>>> code is in ‘core nova’?
>>>
>>> That said, I then think that the only extensions shipped with nova
>>> are really ones we deem “optional core API components”. “optional”
>>> and “core” are probably oxymorons in this context, but I’m just going
>>> to go with it. There would be some sort of process by which we let
>>> extensions “graduate” into nova.
>>>
>>> Like I said, this is not really an answer. But if we had such a
>>> model, I wonder if it turns “deprecating extensions” into something
>>> more like “deprecating part of the API”… something less likely to
>>> happen. Extensions that aren’t used would more likely just never
>>> graduate into nova.
>>
>> So this approach actually really concerns me, because what it says is
>> that we should be optimizing Nova for out of tree changes to the API
>> which are vendor specific. Which I think is completely the wrong
>> direction. Because in that world you'll never be able to move between
>> Nova installations. What's worse is you'll get multiple people
>> implementing the same feature out of tree, slightly differently.
> 
> Right. And I have an internal conflict because I also tend to agree with
> what you’re saying. :) But I think that if we have API extensions at
> all, we have your issue of “never being able to move”. Well, maybe not
> “never”, because at least they’d be easy to “turn on” if they are in
> nova. But I think for the random API extension that only 1 person ever
> wants to enable, there’s your same problem. This is somewhat off-topic,
> but I just don’t want a ton of bloat in nova for something few people use.
> 
>>
>> I 100% agree the current extensions approach is problematic. It's used
>> as a way to circumvent the idea of a stable API (mostly with "oh, it's
>> an extension, we need this feature right now, and it's not part of core
>> so we don't need to give the same guaruntees.")
> 
> Yeah, totally..  that’s bad.
> 
>>
>> So realistically I want to march us towards a place where we stop doing
>> that. Nova out of the box should have all the knobs that anyone needs to
>> build these kinds of features on top of. If not, we should fix that. It
>> shouldn't be optional.
> 
> Agree, although I’m not sure if I’m reading this correctly as it sounds
> like you want the knobs that you said above concern you. I want some
> sort of balance. There’s extensions I think absolutely should be part of
> nova as optional features… but I don’t want everything. :)

I want to give the knobs to the users. If we thought it was important
enough to review and test in Nova, then we made a judgement call that
people should have access to it.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-04 Thread Robert Li (baoli)
Hi Xu Han & Sean,

Is this code going to be committed as it is? Based on this morning's
discussion, I thought that the IP address used to install the RA rule
comes from the qr-xxx interface's LLA address. I think that I'm confused.

Also this bug: Allow LLA as router interface of IPv6 subnet
https://review.openstack.org/76125 was created due to comments to 72252.
If We don't need to create a new LLA for the gateway IP, is the fix still
needed? 

Just trying to sync up with you guys on them.

Thanks,
Robert



On 3/4/14 3:02 AM, "Sean M. Collins (Code Review)" 
wrote:

>Sean M. Collins has posted comments on this change.
>
>Change subject: Permit ICMPv6 RAs only from known routers
>..
>
>
>Patch Set 4: Looks good to me, but someone else must approve
>
>Automatically re-added by Gerrit trivial rebase detection script.
>
>--
>To view, visit https://review.openstack.org/72252
>To unsubscribe, visit https://review.openstack.org/settings
>
>Gerrit-MessageType: comment
>Gerrit-Change-Id: I1d5c7aaa8e4cf057204eb746c0faab2c70409a94
>Gerrit-PatchSet: 4
>Gerrit-Project: openstack/neutron
>Gerrit-Branch: master
>Gerrit-Owner: Xu Han Peng 
>Gerrit-Reviewer: Arista Testing 
>Gerrit-Reviewer: Baodong (Robert) Li 
>Gerrit-Reviewer: Big Switch CI 
>Gerrit-Reviewer: Brian Haley 
>Gerrit-Reviewer: Brocade CI 
>Gerrit-Reviewer: Cisco Neutron CI 
>Gerrit-Reviewer: Hyper-V CI 
>Gerrit-Reviewer: Jenkins
>Gerrit-Reviewer: Midokura CI Bot 
>Gerrit-Reviewer: Miguel Angel Ajo 
>Gerrit-Reviewer: NEC OpenStack CI 
>Gerrit-Reviewer: Neutron Ryu 
>Gerrit-Reviewer: Nuage CI 
>Gerrit-Reviewer: One Convergence CI 
>Gerrit-Reviewer: PLUMgrid CI 
>Gerrit-Reviewer: Sean M. Collins 
>Gerrit-Reviewer: Xu Han Peng 
>Gerrit-Reviewer: mark mcclain 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Chris Behrens

On Mar 4, 2014, at 4:09 AM, Sean Dague  wrote:

> On 03/04/2014 01:14 AM, Chris Behrens wrote:
>> […]
>> I don’t think I have an answer, but I’m going to throw out some of my random 
>> thoughts about extensions in general. They might influence a longer term 
>> decision. But I’m also curious if I’m the only one that feels this way:
>> 
>> I tend to feel like extensions should start outside of nova and any other 
>> code needed to support the extension should be implemented by using hooks in 
>> nova. The modules implementing the hook code should be shipped with the 
>> extension. If hooks don’t exist where needed, they should be created in 
>> trunk. I like hooks. Of course, there’s probably such a thing as too many 
>> hooks, so… hmm… :)  Anyway, this addresses another annoyance of mine whereby 
>> code for extensions is mixed in all over the place. Is it really an 
>> extension if all of the supporting code is in ‘core nova’?
>> 
>> That said, I then think that the only extensions shipped with nova are 
>> really ones we deem “optional core API components”. “optional” and “core” 
>> are probably oxymorons in this context, but I’m just going to go with it. 
>> There would be some sort of process by which we let extensions “graduate” 
>> into nova.
>> 
>> Like I said, this is not really an answer. But if we had such a model, I 
>> wonder if it turns “deprecating extensions” into something more like 
>> “deprecating part of the API”… something less likely to happen. Extensions 
>> that aren’t used would more likely just never graduate into nova.
> 
> So this approach actually really concerns me, because what it says is
> that we should be optimizing Nova for out of tree changes to the API
> which are vendor specific. Which I think is completely the wrong
> direction. Because in that world you'll never be able to move between
> Nova installations. What's worse is you'll get multiple people
> implementing the same feature out of tree, slightly differently.

Right. And I have an internal conflict because I also tend to agree with what 
you’re saying. :) But I think that if we have API extensions at all, we have 
your issue of “never being able to move”. Well, maybe not “never”, because at 
least they’d be easy to “turn on” if they are in nova. But I think for the 
random API extension that only 1 person ever wants to enable, there’s your same 
problem. This is somewhat off-topic, but I just don’t want a ton of bloat in 
nova for something few people use.

> 
> I 100% agree the current extensions approach is problematic. It's used
> as a way to circumvent the idea of a stable API (mostly with "oh, it's
> an extension, we need this feature right now, and it's not part of core
> so we don't need to give the same guaruntees.")

Yeah, totally..  that’s bad.

> 
> So realistically I want to march us towards a place where we stop doing
> that. Nova out of the box should have all the knobs that anyone needs to
> build these kinds of features on top of. If not, we should fix that. It
> shouldn't be optional.

Agree, although I’m not sure if I’m reading this correctly as it sounds like 
you want the knobs that you said above concern you. I want some sort of 
balance. There’s extensions I think absolutely should be part of nova as 
optional features… but I don’t want everything. :)

- Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-04 Thread Morgan Fainberg
On March 4, 2014 at 10:45:02, Vishvananda Ishaya (vishvana...@gmail.com) wrote:

On Mar 3, 2014, at 11:32 AM, Jay Pipes  wrote: 

> On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote: 
>> On Mar 3, 2014, at 6:48 AM, Jay Pipes  wrote: 
>> 
>>> On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote: 
 Having done some work with MySQL (specifically around similar data 
 sets) and discussing the changes with some former coworkers (MySQL 
 experts) I am inclined to believe the move from varchar to binary 
 absolutely would increase performance like this. 
 
 
 However, I would like to get some real benchmarks around it and if it 
 really makes a difference we should get a smart "UUID" type into the 
 common SQLlibs (can pgsql see a similar benefit? Db2?) I think this 
 conversation. Should be split off from the keystone one at hand - I 
 don't want valuable information / discussions to get lost. 
>>> 
>>> No disagreement on either point. However, this should be done after the 
>>> standardization to a UUID user_id in Keystone, as a separate performance 
>>> improvement patch. Agree? 
>>> 
>>> Best, 
>>> -jay 
>> 
>> -1 
>> 
>> The expectation in other projects has been that project_ids and user_ids are 
>> opaque strings. I need to see more clear benefit to enforcing stricter 
>> typing on these, because I think it might break a lot of things. 
> 
> What does Nova lose here? The proposal is to have Keystone's user_id 
> values be UUIDs all the time. There would be a migration or helper 
> script against Nova's database that would change all non-UUID user_id 
> values to the Keystone UUID values. 

So I don’t have a problem with keystone internally using uuids, but forcing 
a migration of user identifiers isn’t something that should be taken lightly. 
One example is external logging and billing systems which now have to be 
migrated. 

I’m not opposed to the migration in principle. We may have to do a similar 
thing for project_ids with hierarchical multitenancy, for example. I just 
think we need a really good reason to do it, and the performance arguments 
just don’t seem good enough without a little empirical data. 

Vish 
Any one of the proposals we’re planning on using will not affect any current 
IDs.  Since the user_id is a blob, if we start issuing a new “id” format, 
ideally it shouldn’t matter as long as old IDs continue to work. If we do make 
any kind of migration for issued IDs I would expect that to be very deliberate 
and outside of this change set. Specifically this change set would enable 
multiple LDAP backends (among other user_id uniqueness benefits for federation, 
etc). 

I am very concerned about the external tools that reference IDs we currently 
have.

—Morgan





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSoC 2014] Proposal Template

2014-03-04 Thread Davanum Srinivas
Hi,

Is there something we can adopt? can you please send me some pointers
to templates from other communities?

-- dims

On Tue, Mar 4, 2014 at 1:46 PM, Masaru Nomura  wrote:
> Hi,
>
>
> I have a question about an application format as I can't find it on wiki
> page. Is there any specific information I should provide within a proposal?
> I checked other communities and some of them have an application template,
> so I would just like to make it clear.
>
>
> Thank you,
>
> Masaru Nomura
>
> IRC : massa [freenode]
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Sean Dague
On 03/04/2014 01:27 PM, Ben Nemec wrote:
> This warning should be gone by default once
> https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
> gets synced.  I believe there is work underway by the db team to get
> that done.
> 
> Note that the reason it will be gone is that we're changing the default
> oslo db mode to traditional, so if we have any code that would have
> triggered silent data corruption it's now going to be not so silent.
> 
> -Ben

Ok, but we're at the i3 freeze. So is there a db patch set up for every
service to sync that, and FFE ready to let this land?

Because otherwise I'm very afraid this is going to get trapped as 1/2
implemented, which would be terrible for the release.

So basically, who is driving these patches out to the projects?

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [GSoC 2014] Proposal Template

2014-03-04 Thread Masaru Nomura
Hi,


 I have a question about an application format as I can't find it on wiki
page. Is there any specific information I should provide within a proposal?
I checked other communities and some of them have an application template,
so I would just like to make it clear.


 Thank you,

Masaru Nomura

IRC : massa [freenode]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-04 Thread Joe Gordon
On Tue, Mar 4, 2014 at 5:25 AM, Dina Belova  wrote:
> Joe, thanks for discussion.
>
>
>> I think nova should natively support booting an instance for a
>
>> limited amount of time. I would use this all the time to boot up
>
>> devstack instances (boot devstack instance for 5 hours)
>
>
> Really nice idea, but to provide time based resource management for any
> resource type in OS (instance, volume, compute host, Heat stack, etc.) that
> needs to be implemented in every project. And even with that feature
> implemented, without central leasing service, there should be some other
> reservation connected opportunities like user notifications about close end
> of lease / energy efficiency, etc. that do not really fit idea of some
> already existing project / program.
>

So I understand the use case where I want a instance for x amount of
time, because the cloud model makes compute resources (instances)
ephemeral. But volumes and object storage are explicitly persistent,
so not sure why you would want to consume one of those resources for a
finite amount of time.

>
>> Reserved and Spot Instances. I like Amazon's concept of reserved and
>
>> spot instances it would be cool if we could support something similar
>
>
> AWS reserved instances look like your first idea with instances booted for a
> limited amount of time - even that in Amazon use case that's *much* time. As
> for spot instances, I believe this idea is more about some billing service
> that counts current instance/host/whatever price due to current compute
> capacity load, etc.

Actually you have it backwards.
"Reserved Instances are easy to use and require no change to how you
use EC2. When computing your bill, our system will automatically apply
Reserved Instance rates first to minimize your costs. An instance hour
will only be charged at the On-Demand rate when your total quantity of
instances running that hour exceeds the number of applicable Reserved
Instances you own."
https://aws.amazon.com/ec2/purchasing-options/reserved-instances/


https://aws.amazon.com/ec2/purchasing-options/spot-instances/


>
>
>> Boot an instances for 4 hours every morning. This sounds like
>
>> something that
>> https://wiki.openstack.org/wiki/Mistral#Tasks_Scheduling_-_Cloud_Cron
>
>> can handle.
>
>
> That's not really thing we've implemented in Climate - we have not
> implemented periodic tasks like that - now lease might be not started,
> started and ended - without any 'sleeping' periods. Although, that's quite
> nice idea to implement this feature using Mistral.
>
>
>> Give someone 100 CPU hours per time period of quota. Support quotas
>
>> by overall usage not current usage. This sounds like something that
>
>> each service should support natively.
>
>
> Quotas (if we speak about time management) should be satisfied in any time
> period. Now in Climate that's done by getting cloud resources from common
> pool at the lease creation moment - but, as you guess, that does not allow
> to have "resource reusage" at the time lease has not started yet. To
> implement resource reusage advanced quota management is truly needed. That
> idea was the first at the very beginning of Climate project and we
> definitely need that in future.

This is the crux of my concern:  without "'resource reusage' at the
time lease has not started yet." I don't see what climate provides.

How would climate handle quotas? Currently quotas are up to each
project to manage.

>
>
>> Reserved Volume: Not sure how that works.
>
>
> Now we're in the process of investigating this moment too. Ideally that
> should be some kind of volume state, that simply means only DB record
> without real block storage created - and it'll be created only at the lease
> start date. But that requires many changes to Cinder. Other idea is to do
> the same as Climate does with compute hosts - consider cinder-volumes as
> dedicated to Climate and Climate will manage them itself. Reserved volume
> idea came from thoughts about 'reserved stack' - to have working group like
> vm+volume+assigned_ip time you really need that.
>

I would like to see a clear roadmap for this with input from the
Cinder team. Because I am not sure if this really makes much sense.

>
>> Virtual Private Cloud.  It would be great to see OpenStack support a
>
>> hardware isolated virtual private cloud, but not sure what the best
>
>> way to implement that is.
>
>
> There was proposal with pclouds by Phil Day, that was changed after Icehouse
> summit to something new. First idea was to use exactly pclouds, but as they
> are not implemented now, Climate works directly with hosts aggregates to
> imitate them. In future, when we'll have opportunity to use pcloud (it does
> not matter how it'll be called really), we'll do it, of course.
>

That brings up another point, having a project that imports nova code
directly is bad. You are using non-public non-contractual APIs that
nova can change at any time.
http://git.openstack.org/cgit/stackforge/climate-n

Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Yea. that's a good idea. I will try to find out time working on the spec.

--Robert

On 3/4/14 11:17 AM, "Collins, Sean" 
wrote:

>On Tue, Mar 04, 2014 at 04:06:02PM +, Robert Li (baoli) wrote:
>> Hi Sean,
>> 
>> I just added the ipv6-prefix-delegation BP that can be found using the
>> search link on the ipv6 wiki. More details about it will be added once I
>> find time.
>
>Perfect - we'll probably want to do a session at the summit on it.
>
>-- 
>Sean M. Collins
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-04 Thread Ben Nemec
This warning should be gone by default once 
https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734 gets synced.  I believe there is work underway by the db team to get that done.


Note that the reason it will be gone is that we're changing the default 
oslo db mode to traditional, so if we have any code that would have 
triggered silent data corruption it's now going to be not so silent.


-Ben

On 2014-03-03 13:25, Sean Dague wrote:

So that definitely got lost in translation somewhere, and is about to
have us spam icehouse users with messages that make them think their
openstack cluster is going to burn to the ground. Is there proposed
reviews to set those defaults in projects up already?

Remember - WARN is a level seen by administrators, and telling everyone
they have silent data corruption is not a good default.

-Sean

On 03/03/2014 02:05 PM, Roman Podoliaka wrote:

Hi all,

This is just one another example of MySQL not having production ready
defaults. The original idea was to force setting the SQL mode to
TRADITIONAL in code in projects using oslo.db code when "they are 
ready"

(unit and functional tests pass). So the warning was actually for
developers rather than for users.

Sync of the latest oslo.db code will make users able to set any SQL 
mode

you like (default is TRADITIONAL now, so the warning is gone).

Thanks,
Roman

On Mar 2, 2014 8:36 PM, "John Griffith" mailto:john.griff...@solidfire.com>> wrote:





On Sun, Mar 2, 2014 at 7:42 PM, Sean Dague 
> wrote:


Coming in at slightly less than 1 million log lines in the last 7 
days:



http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=


"This application has not enabled MySQL traditional mode, which 
means

silent data corruption may occur"

This is being generated by  *.openstack.common.db.sqlalchemy.session 
in

at least nova, glance, neutron, heat, ironic, and savana



http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9



At any rate, it would be good if someone that understood the details
here could weigh in about whether is this really a true WARNING that
needs to be fixed or if it's not, and just needs to be silenced.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net  / sean.da...@samsung.com



http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org



http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I came across this earlier this week when I was looking at this in

Cinder, haven't completely gone into detail here, but maybe Florian or
Doug have some insight?


https://bugs.launchpad.net/oslo/+bug/1271706

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org



http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-04 Thread Vishvananda Ishaya

On Mar 3, 2014, at 11:32 AM, Jay Pipes  wrote:

> On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote:
>> On Mar 3, 2014, at 6:48 AM, Jay Pipes  wrote:
>> 
>>> On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote:
 Having done some work with MySQL (specifically around similar data
 sets) and discussing the changes with some former coworkers (MySQL
 experts) I am inclined to believe the move from varchar  to binary
 absolutely would increase performance like this.
 
 
 However, I would like to get some real benchmarks around it and if it
 really makes a difference we should get a smart "UUID" type into the
 common SQLlibs (can pgsql see a similar benefit? Db2?) I think this
 conversation. Should be split off from the keystone one at hand - I
 don't want valuable information / discussions to get lost.
>>> 
>>> No disagreement on either point. However, this should be done after the
>>> standardization to a UUID user_id in Keystone, as a separate performance
>>> improvement patch. Agree?
>>> 
>>> Best,
>>> -jay
>> 
>> -1
>> 
>> The expectation in other projects has been that project_ids and user_ids are 
>> opaque strings. I need to see more clear benefit to enforcing stricter 
>> typing on these, because I think it might break a lot of things.
> 
> What does Nova lose here? The proposal is to have Keystone's user_id
> values be UUIDs all the time. There would be a migration or helper
> script against Nova's database that would change all non-UUID user_id
> values to the Keystone UUID values.
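
(For concreteness, a hedged sketch of the kind of helper script being
described here - the instances table and the source of the old-to-new ID
mapping are illustrative assumptions, not an actual Nova migration:)

    import uuid
    from sqlalchemy import create_engine, text

    def migrate_user_ids(engine, id_map):
        """id_map maps legacy user_id strings to keystone UUID strings."""
        with engine.begin() as conn:  # one transaction for the whole rewrite
            for old_id, new_id in id_map.items():
                uuid.UUID(new_id)  # raises ValueError if not a valid UUID
                conn.execute(
                    text("UPDATE instances SET user_id = :new "
                         "WHERE user_id = :old"),
                    {"new": new_id, "old": old_id})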

So I don’t have a problem with keystone internally using uuids, but forcing
a migration of user identifiers isn’t something that should be taken lightly.
One example is external logging and billing systems which now have to be
migrated.

I’m not opposed to the migration in principle. We may have to do a similar
thing for project_ids with hierarchical multitenancy, for example. I just
think we need a really good reason to do it, and the performance arguments
just don’t seem good enough without a little empirical data.

Vish

> 
> If there's stuff in Nova that would break (which is doubtful,
> considering like you say, these are supposed to be "opaque values" and
> as such should not have any restrictions or validation on their value),
> then that is code in Nova that should be fixed.
> 
> Honestly, we shouldn't accept poor or loose code just because "stuff
> might break".
> 
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][nova] Ownership and path to schema definitions

2014-03-04 Thread David Kranz

Given that

1. there is an ongoing api discussion in which using json schemas is an 
important part

2. tempest now has a schema-based auto-generate feature for negative tests

I think it would be a good time to have at least an initial discussion 
about the requirements for these schemas and where they will live.
The next step in tempest around this is to replace the existing negative 
test files with auto-gen versions, and most of the work in doing that 
is to define the schemas.

The tempest framework needs to know the http method, url part, expected 
error codes, and payload description. I believe only the last is covered 
by the current nova schema definitions, with the others being some kind 
of attribute or data associated with the method that is doing the 
validation. Ideally the information being used to do the validation 
could be auto-converted to a more general schema that could be used by 
tempest. I'm interested in what folks have to say about this and 
especially from the folks who are core members of both nova and tempest. 
See below for one example (note that the tempest generator does not yet 
handle "pattern").


 -David

From nova:

get_console_output = {
    'type': 'object',
    'properties': {
        'get_console_output': {
            'type': 'object',
            'properties': {
                'length': {
                    'type': ['integer', 'string'],
                    'minimum': 0,
                    'pattern': '^[0-9]+$',
                },
            },
            'additionalProperties': False,
        },
    },
    'required': ['get_console_output'],
    'additionalProperties': False,
}

From tempest:

{
    "name": "get-console-output",
    "http-method": "POST",
    "url": "servers/%s/action",
    "resources": [
        {"name": "server", "expected_result": 404}
    ],
    "json-schema": {
        "type": "object",
        "properties": {
            "os-getConsoleOutput": {
                "type": "object",
                "properties": {
                    "length": {
                        "type": ["integer", "string"],
                        "minimum": 0
                    }
                }
            }
        },
        "additionalProperties": false
    }
}
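
For illustration, here is a hedged sketch of the kind of auto-conversion
suggested above. The extra metadata arguments (method, url, resources) are
assumptions about what nova would need to expose alongside its body
schemas; this is not existing code in either project:

    def to_tempest_descriptor(name, http_method, url, nova_schema,
                              action_key, resources=None):
        """Build a tempest negative-test descriptor from a nova body schema.

        nova_schema is a request-body schema like get_console_output above;
        action_key names the wrapper property that holds the payload.
        """
        return {
            "name": name,
            "http-method": http_method,
            "url": url,
            "resources": resources or [],
            "json-schema": nova_schema["properties"][action_key],
        }

    # e.g. to_tempest_descriptor("get-console-output", "POST",
    #          "servers/%s/action", get_console_output,
    #          "get_console_output",
    #          resources=[{"name": "server", "expected_result": 404}])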


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova failed to spawn when download disk image from Glance timeout

2014-03-04 Thread Ben Nemec
 

Nora, 

This is a development list. Your questions sound more related to usage,
so you might have better luck asking on the users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack 

Thanks. 

-Ben 

On 2014-03-03 03:09, Nora Zhou wrote: 

> Hi, 
> 
> I recently deployed a Bare-metal node instance using a Heat Template. However, Nova 
> failed to spawn the instance due to a timeout error. When I looked into the code I found 
> that the timeout is related to Nova downloading the disk image from Glance. The 
> nova-schedule.log shows the following: 
> 
> 2014-02-28 02:49:48.046 2136 ERROR nova.compute.manager 
> [req-09e61b23-436f-4425-8db0-10dd1aea2e39 85bbc1abb4254761a5452654a6934b75 
> 692e595702654930936a65d1a658cff4] [instance: 
> 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Instance failed to spawn 
> 
> 2014-02-28 02:49:48.046 2136 TRACE nova.compute.manager [instance: 
> 35d00082-4cb4-45a3-a73b-b72ef2a6e2b2] Traceback (most recent call last): 
>   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1417, in _spawn 
>     network_info=network_info, 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py", line 444, in cache_images 
>     self._cache_tftp_images(context, instance, tftp_image_info) 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/pxe.py", line 335, in _cache_tftp_images 
>     project_id=instance['project_id'], 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/baremetal/utils.py", line 33, in cache_image 
>     user_id, project_id) 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 645, in fetch_image 
>     max_size=max_size) 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 196, in fetch_to_raw 
>     max_size=max_size) 
>   File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 190, in fetch 
>     image_service.download(context, image_id, dst_path=path) 
>   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 360, in download 
>     for chunk in image_chunks: 
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 478, in __iter__ 
>     chunk = self.next() 
>   File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 494, in next 
>     chunk = self._resp.read(CHUNKSIZE) 
>   File "/usr/lib/python2.7/httplib.py", line 561, in read 
>     s = self.fp.read(amt) 
>   File "/usr/lib/python2.7/socket.py", line 380, in read 

Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Brian Cline

On 03/04/2014 05:01 AM, Thierry Carrez wrote:

James E. Blair wrote:

Freenode has been having a rough time lately due to a series of DDoS
attacks which have been increasingly disruptive to collaboration.
Fortunately there's an alternative.

OFTC <http://www.oftc.net/> is a robust and established alternative
to Freenode.  It is a smaller network whose mission statement makes it a
less attractive target.  It's significantly more stable than Freenode
and has friendly and responsive operators.  The infrastructure team has
been exploring this area and we think OpenStack should move to using
OFTC.

There is quite a bit of literature out there pointing to Freenode, like
presentation slides from old conferences. We should expect people to
continue to join Freenode's channels forever. I don't think staying a
few weeks on those channels to redirect misled people will be nearly
enough. Could we have a longer plan ? Like advertisement bots that would
advise every n hours to join the right servers ?


[...]
1) Create an irc.openstack.org CNAME record that points to
chat.freenode.net.  Update instructions to suggest users configure their
clients to use that alias.

I'm not sure that helps. The people who would get (and react to) the DNS
announcement are likely using proxies anyway, which you'll have to
unplug manually from Freenode on switch day. The vast majority of users
will just miss the announcement. So I'd rather just make a lot of noise
on switch day :)

Finally, I second Sean's question on OFTC's stability. As bad as
Freenode is hit by DoS, they have experience handling this, mitigation
procedures in place, sponsors lined up to help, so damage ends up
*relatively* limited. If OFTC raises profile and becomes a target, are
we confident they would mitigate DoS as well as Freenode does ? Or would
they just disappear from the map completely ? I fear that we are trading
a known evil for some unknown here.

In all cases I would target post-release for the transition, maybe even
post-Summit.



Indeed, I can't help but feel like the large amount of effort involved 
in changing networks is a bit of a riverboat gamble. DDoS has been an 
unfortunate reality for every well-known/trusted/stable IRC network for 
the last 15-20 years, and running from it rather than planning for it is 
usually a futile effort. It feels like we'd be chasing our tails trying 
to find a place where DDoS couldn't cause serious disruption; even then 
it's still not a sure thing. I would hate to see everyone's efforts to 
have been in vain once the same problem occurs there.


--
Brian Cline
br...@linux.vnet.ibm.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron: Need help with tox failure in VPN code

2014-03-04 Thread Paul Michali
All, I found the problem…

There is a race condition between the LB and VPN unit tests. I've seen it with just the 
reference VPN code, when trying to load the service driver via configuration.

Essentially, VPN sets up cfg.CONF with service driver entry, and then starts 
the Neutron plugin to handle various northbound APIs. For some tests, before 
the VPN plugin is started by Neutron, LB runs and sets a different cfg.CONF (to 
LOADBALANCE). It has the Service Type Manager load that config in and when VPN 
plugin __init__ runs, it goes to Service Type Manager, gets the existing 
instance (it is a singleton) that has the LB settings, and then fails to find 
the VPN service driver, obviously.

My workaround, was to have VPN plugin __init__() clear the instance for Service 
Type Manager and force it to re-parse the configuration (and get the right 
thing).  This will have little performance impact, as it is only run during 
init of VPN plugin, the config to load is small, and worst case is it happens 
twice (LB then VPN loads).
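
A rough sketch of that workaround, assuming the class-cached singleton that
servicetype_db exposes via get_instance() (illustrative, not the exact
patch):

    from neutron.db import servicetype_db as st_db
    from neutron.db.vpn import vpn_db

    class VPNDriverPlugin(vpn_db.VPNPluginDb):
        def __init__(self):
            # Drop any instance created while LB's cfg.CONF settings were
            # active, so the manager re-parses the config and finds the
            # VPN service driver entry.
            st_db.ServiceTypeManager._instance = None
            self.service_type_manager = st_db.ServiceTypeManager.get_instance()
            super(VPNDriverPlugin, self).__init__()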

I don't know of any way of preventing this race condition, other than mocking 
out the Service Type Manager to return the expected service driver (though that 
doesn't test that logic). Nor do I know why this was not seen, when we had the 
full Service Type Framework in place. Not sure if it just changed the timing 
enough to mask the issue?

Note: I found that the Service Type Manager code raises a SystemExit() 
exception when there are no matching configs. As a result, there is no 
traceback (just an error code), and it is really hard to tell why tox failed. 
Maybe sys.exit() would be better?

It was quite the setback, finding out yesterday afternoon that the VPN service 
type framework was definitely not going into I-3, having to rework the code to 
remove the dependency on that commit, and then hitting this test failure. Spent 
lots of time trying to figure this issue out, but many thanks to Akihiro, 
Henry G, and others for helping me trudge through the issue!

In any case, new reviews have been pushed out 
https://review.openstack.org/#/c/74144 and 
https://review.openstack.org/#/c/74156, which should be passing Jenkins again. 
We are in the process of bringing up Tempest with these patches to provide 3rd 
party testing.

I'd appreciate it very much, if you can (re)review these two change sets.

Thanks!


PCM (Paul Michali)

MAIL  p...@cisco.com
IRC   pcm_  (irc.freenode.net)
TW    @pmichali
GPG key   4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Mar 4, 2014, at 8:06 AM, Paul Michali  wrote:

> Bo,
> 
> I did that change, and it passes when I run neutron.tests.unit.services.vpn, 
> but not when I run full tox or neutron.tests.unit.services.  I still get 
> failures (either error code 10 or test fails with no info).
> 
> Irene,
> 
> Any thoughts on why the driver is not loading (even with the mod that Bo 
> suggests)?
> 
> Nachi,
> 
> I just tried run_tests.sh and it fails to run the test (haven't used that in 
> a very long time, so not sure I'm running it correctly). Do I need any 
> special args, when running that? I tried './run_tests.sh -f -V -P' but it ran 
> 0 tests.
> 
> 
> All,
> 
> The bottom line here is that I can't seem to get the service driver to load 
> from neutron.conf, irrespective of the blueprint change set. If I use 
> a hard coded driver (as is on master branch and used in the latest patch for 
> 74144), all the tests work. But for this blueprint we need to be able to load 
> the service driver (so that the blueprint driver can be loaded). The issue is 
> unrelated to the blueprint functionality, as shown by the latest patch and by 
> previous versions where I had the full service type framework implementation. 
> It seems like there is some problem with this partial application of STF to 
> load the service driver.
> 
> I took the (working) 74144 patch and made the changes below to load the 
> service plugin from neutron.conf, and see tox failures. I've also patched 
> this into the master branch, and I see the same issue!  IOW, there is 
> something wrong with the method I'm using to setup the service driver at 
> least with respect to the current test suite.
> 
> diff --git a/neutron/services/vpn/plugin.py b/neutron/services/vpn/plugin.py
> index 5d818a3..41cbff0 100644
> --- a/neutron/services/vpn/plugin.py
> +++ b/neutron/services/vpn/plugin.py
> @@ -18,11 +18,9 @@
>  #
>  # @author: Swaminathan Vasudevan, Hewlett-Packard
>  
> -# from neutron.db import servicetype_db as st_db
>  from neutron.db.vpn import vpn_db
> -# from neutron.plugins.common import constants
> -# from neutron.services import service_base
> -from neutron.services.vpn.service_drivers import ipsec as ipsec_driver
> +from neutron.plugins.common import constants
> +from neutron.services import service_base
>  
>  
>  class VPNPlugin(vpn_db.VPNPluginDb):
> @@ -41,12 +39,10 @@ class VPNDriverPlugin(VPNPlugin, 
> vpn_db.VPNPluginRpcD

[openstack-dev] [Murano] Hooking the external events discussion

2014-03-04 Thread Alexander Tivelkov
Hi folks,

On today's IRC meeting there was a very interesting discussion about
publishing of handlers for external events in Murano applications. It
turns out that the topic is pretty hot and requires some more
discussions. So, it was suggested to host an additional meeting to
cover this topic.

So, let's meet tomorrow at #murano channel on freenode. The suggested
time is 16:00 UTC (8am PST).

Anybody who is interested in the topic, please feel free to join!
Thanks

--
Regards,
Alexander Tivelkov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Community Meeting minutes - 03/04/2014

2014-03-04 Thread Alexander Tivelkov
Hi,

Thanks for joining murano weekly meeting.
Here are the meeting minutes and the logs:

http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-03-04-17.01.html
http://eavesdrop.openstack.org/meetings/murano/2014/murano.2014-03-04-17.01.log.html

See you next week!

--
Regards,
Alexander Tivelkov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 12:01:00PM -0500, Brian Haley wrote:
> On 03/03/2014 11:18 AM, Collins, Sean wrote:
> > On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
> >> Currently, only security group rule direction, protocol, ethertype and port
> >> range are supported by neutron security group rule data structure. To allow
> > 
> > If I am not mistaken, I believe that when you use the ICMP protocol
> > type, you can use the port range specs to limit the type.
> > 
> > https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
> > 
> > http://i.imgur.com/3n858Pf.png
> > 
> > I assume we just have to check and see if it applies to ICMPv6?
> 
> I tried using horizon to add an icmp type/code rule, and it didn't work.
> 
> Before:
> 
> -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
> 
> After:
> 
> -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
> -A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
> 
> I'd assume I'll have the same error with v6.
> 
> I am curious what's actually being done under the hood here now...

Looks like _port_arg just returns an empty array when the protocol is
ICMP?

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L328

Called by: 

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L292
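
For what it's worth, a hedged, standalone sketch (not the actual neutron
code) of what a fix could look like - emitting an iptables ICMP type/code
match instead of dropping the information:

    def _port_arg(direction, protocol, port_range_min, port_range_max):
        """Build iptables match arguments for one security group rule."""
        if port_range_min is None:
            return []
        if protocol in ('icmp', 'icmpv6'):
            # Security group rules reuse port_range_min/max to carry the
            # ICMP type and code.
            icmp_type = str(port_range_min)
            if port_range_max is not None:
                icmp_type = '%s/%s' % (icmp_type, port_range_max)
            match = '--icmp-type' if protocol == 'icmp' else '--icmpv6-type'
            return [match, icmp_type]
        if port_range_min == port_range_max:
            return ['--%s' % direction, str(port_range_min)]
        return ['--%s' % direction,
                '%s:%s' % (port_range_min, port_range_max)]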


-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Sandy Walsh
This brings up something that's been gnawing at me for a while now ... why use 
entry-point based loaders at all? I don't see the problem they're trying to 
solve. (I thought I got it for a while, but I was clearly fooling myself)

1. If you use the "load all drivers in this category" feature, that's a 
security risk since any compromised python library could hold a trojan.

2. otherwise you have to explicitly name the plugins you want (or don't want) 
anyway, so why have the extra indirection of the entry-point? Why not just name 
the desired modules directly? 

3. the real value of a loader would be to also extend/manage the python path 
... that's where the deployment pain is. "Use  driver 
and take care of the pathing for me." Abstracting the module/class/function 
name isn't a great win. 

I don't see where the value is for the added pain (entry-point 
management/package metadata) it brings. 

CMV,

-S

From: Russell Bryant [rbry...@redhat.com]
Sent: Tuesday, March 04, 2014 1:29 PM
To: Murray, Paul (HP Cloud Services); OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Nova] What is the currently accepted way to do 
plugins

On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
> One of my patches has a query asking if I am using the agreed way to
> load plugins: https://review.openstack.org/#/c/71557/
>
> I followed the same approach as filters/weights/metrics using
> nova.loadables. Was there an agreement to do it a different way? And if
> so, what is the agreed way of doing it? A pointer to an example or even
> documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as
external plug points, even if we consider them unstable.  If we don't
want it to be public, it may not make sense for it to be a plugin
interface at all.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Get volumes REST API with filters and limit

2014-03-04 Thread Steven Kaufer
Duncan Thomas  wrote on 03/04/2014 07:53:49 AM:

> From: Duncan Thomas 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 03/04/2014 08:06 AM
> Subject: Re: [openstack-dev] [Cinder] Get volumes REST API with
> filters and limit
>
> Definitely file a bug... a script to reproduce would be fantastic.
> Needs to be fixed... I don't think you need a blueprint if the fix is
> simple, but if you're making deep changes then a blueprint always
> helps.

Bug created: https://bugs.launchpad.net/cinder/+bug/1287813
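
For reference, a minimal sketch of the direction a fix could take -
applying filters and limit inside the SQLAlchemy query so that LIMIT
operates on the already-filtered set. The toy Volume model here stands in
for cinder's real one:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Volume(Base):  # toy stand-in for cinder's volumes model
        __tablename__ = 'volumes'
        id = Column(Integer, primary_key=True)
        display_name = Column(String(255))

    def volume_get_all(session, filters=None, limit=None):
        """Push filters into SQL so LIMIT sees only matching rows."""
        query = session.query(Volume)
        for key, value in (filters or {}).items():
            query = query.filter(getattr(Volume, key) == value)
        if limit is not None:
            query = query.limit(limit)
        return query.all()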

>
> Thanks for pointing this out
>
> On 28 February 2014 20:52, Steven Kaufer  wrote:
> > I am investigating some pagination enhancements in nova and cinder (see nova
> > blueprint https://blueprints.launchpad.net/nova/+spec/nova-pagination).
> >
> > In cinder, it appears that all filtering is done after the volumes are
> > retrieved from the database (see the API.get_all function in
> > https://github.com/openstack/cinder/blob/master/cinder/volume/api.py).
> > Therefore, the usage combination of filters and limit will only work if all
> > volumes matching the filters are in the page of data being retrieved from
> > the database.
> >
> > For example, assume that all of the volumes with a name of "foo" would be
> > retrieved from the database starting at index 100 and that you query for all
> > volumes with a name of "foo" while specifying a limit of 50.  In this case,
> > the query would yield 0 results since the filter did not match any of the
> > first 50 entries retrieved from the database.
> >
> > Is this a known problem?
> > Is this considered a bug?
> > How should this get resolved?  As a blueprint for juno?
> >
> > I am new to the community and am trying to determine how this should be
> > addressed.
> >
> > Thanks,
> >
> > Steven Kaufer
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Thanks,

Steven Kaufer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Sean Dague
On 03/04/2014 12:26 PM, Vishvananda Ishaya wrote:
> 
> On Mar 4, 2014, at 9:10 AM, Russell Bryant  wrote:
>>
>> Thank you all for your participation on this topic.  It has been quite
>> controversial, but the API we expose to our users is a really big deal.
>> I'm feeling more and more confident that we're coming through this with
>> a much better understanding of the problem space overall, as well as a
>> better plan going forward than we had a few weeks ago.
> 
> Hey Russell,
> 
> Thanks for bringing this to the mailing list and being open to discussion
> and collaboration. Also, thanks to everyone who is participating in the
> plan. Doing this kind of thing in the open is difficult and it has lead to
> a ton of debate, but this is the right way to do things. It says a lot
> about the strength of our community that we are able to have conversations
> like this without devolving into arguments and flame wars.
> 
> Vish

+1, and definitely appreciate Russell's leadership through this whole
discussion.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat]Policy on upgades required config changes

2014-03-04 Thread Steven Hardy
Hi all,

As some of you know, I've been working on the instance-users blueprint[1].

This blueprint implementation requires three new items to be added to the
heat.conf, or some resources (those which create keystone users) will not
work:

https://review.openstack.org/#/c/73978/
https://review.openstack.org/#/c/76035/

So on upgrade, the deployer must create a keystone domain and domain-admin
user, then add the details to heat.conf, as has already been done in devstack [2].
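
For anyone following along, the new entries look roughly like this (option
names as used by the devstack change; check the linked reviews for the
authoritative list):

    [DEFAULT]
    # Keystone v3 domain that holds the users heat creates per stack
    stack_user_domain = <domain id>
    # Domain-scoped admin credentials used to manage those users
    stack_domain_admin = heat_domain_admin
    stack_domain_admin_password = <password>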

The changes required for this to work have already landed in devstack, but
it was discussed today and Clint suggested this may be unacceptable
upgrade behavior - I'm not sure, so I'm looking for guidance/comments.

My plan was/is:
- Make devstack work
- Talk to tripleo folks to assist in any transition (what prompted this
  discussion)
- Document the upgrade requirements in the Icehouse release notes so the
  wider community can upgrade from Havana.
- Try to give a heads-up to those maintaining downstream heat deployment
  tools (e.g stackforge/puppet-heat) that some tweaks will be required for
  Icehouse.

However some have suggested there may be an openstack-wide policy which
requires people's old config files to continue working indefinitely on
upgrade between versions - is this right?  If so, where is it documented?

The code itself will handle backwards compatibility where existing stacks
were created with the old code, but I had assumed (as a concession to code
simplicity) that some documented upgrade procedure would be acceptable
rather than hacking in some way to support the previous (broken, ref bug
#1089261) behavior when the config values are not found.

If anyone can clarify the requirement/expectation around config files and
upgrades that would be most helpful, thanks!

Steve

[1] https://blueprints.launchpad.net/heat/+spec/instance-users
[2] https://review.openstack.org/#/c/73324/
https://review.openstack.org/#/c/75424/
https://review.openstack.org/#/c/76036/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Jarret Raim
>#1) do we believe OFTC is fundamentally better equipped to resist a
>DDOS, or do we just believe they are a smaller target? The ongoing DDOS
>on meetup.com the past 2 weeks is a good indicator that being a smaller
>fish only helps for so long.

It seems like we would need an answer to this question. If the main reason
to switch is to avoid DDoS interruptions, the question would really boil
down to whether OFTC is actually more resilient to DDoS or whether they
just haven't had to deal with it.


Jarret


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Toward SQLAlchemy 0.9.x compatibility everywhere for Icehouse

2014-03-04 Thread Matt Riedemann



On 3/3/2014 8:59 AM, Thomas Goirand wrote:

On 03/03/2014 01:14 PM, Thomas Goirand wrote:

On 03/03/2014 11:24 AM, Thomas Goirand wrote:

It looks like my patch fixes the first unit test failure. Though we
still need a fix for the 2nd problem:
AttributeError: 'module' object has no attribute 'AbstractType'


Replying to myself...

It looks like AbstractType is "not needed except for backwards
compatibility" in SQLA 0.7 & 0.8, and it's gone away in 0.9. See:

http://docs.sqlalchemy.org/en/rel_0_7/core/types.html
http://docs.sqlalchemy.org/en/rel_0_8/core/types.html
http://docs.sqlalchemy.org/en/rel_0_9/core/types.html

(reference to AbstractType is gone from the 0.9 doc)

Therefore, I'm tempted to just remove lines 336 and 337, though I am
unsure of what was intended in this piece of code.

Your thoughts?

Thomas


Seems Sean already fixed that one, and it was lost in the git review
process (with patches going back and forth). I added it again as a
separate patch, and the unit tests are now ok. It just passed the
gating tests! :)

Cheers, and thanks to Sean and everyone else for the help, hoping to get
this series approved soon,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



You're going to need to rebase on this [1] now since we have a Tempest 
job running against sqlalchemy-migrate patches as of yesterday.  I'm 
trying to figure out why that's failing in devstack-gate-cleanup-host 
though so any help there is appreciated.  I'm assuming we missed 
something in the job setup [2].


[1] https://review.openstack.org/#/c/77669/
[2] https://review.openstack.org/#/c/77679/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Vishvananda Ishaya

On Mar 4, 2014, at 9:10 AM, Russell Bryant  wrote:
> 
> Thank you all for your participation on this topic.  It has been quite
> controversial, but the API we expose to our users is a really big deal.
> I'm feeling more and more confident that we're coming through this with
> a much better understanding of the problem space overall, as well as a
> better plan going forward than we had a few weeks ago.

Hey Russell,

Thanks for bringing this to the mailing list and being open to discussion
and collaboration. Also, thanks to everyone who is participating in the
plan. Doing this kind of thing in the open is difficult and it has lead to
a ton of debate, but this is the right way to do things. It says a lot
about the strength of our community that we are able to have conversations
like this without devolving into arguments and flame wars.

Vish





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Anne Gentle
On Tue, Mar 4, 2014 at 11:10 AM, Russell Bryant  wrote:

> On 03/03/2014 12:32 PM, Russell Bryant wrote:
> > There has been quite a bit of discussion about the future of the v3 API
> > recently.
>
>   :-)
>
> Since this proposal was posted, it is clear that there is not much
> support for it, much less consensus.  That's progress because it now
> seems clear to me that the path proposed (keep only v2) isn't the right
> answer.
>
> Let's reflect a bit on some of the other progress that I think has been
> made:
>
> 1) Greater understanding and documentation of the v3 API effort
>
> It has led to a larger group of people taking a much closer look at what
> has been done with the v3 API so far.  That has widened the net for
> feedback on what else should be done before we could call it done.
>
> Chris has put together an excellent page with the most comprehensive
> overview of the v3 API effort that I've seen.  I think this is very
> helpful:
>
> http://ozlabs.org/~cyeoh/V3_API.html
>
>
I still sense that the struggle with Compute v3 is the lack of
documentation for contributor developers, and especially for end users,
that would let us get feedback early and often.

My original understanding, passed by word-of-mouth, was that the goal for
v3 was to define an expanded core that nearly all deployers could
confidently put into production to serve their users needs. Since there's
no end-user-sympathetic documentation, we learned a bit too much about how
it's made, that supposedly it's implemented with "all extensions" -- a
revelation that I'd still prefer to be protected from. :) Or possibly I
don't understand. But the thing is, as a user advocate I shouldn't need to
know that. I should know what it does and what benefits it holds.

I recently had to write a paragraph about v3 for the Operations Guide, and
it was really difficult to write because of the conversational nature of
the discussion. Worse still, it was difficult to tell a deployer where
their voice could be best heard. I went with "respond on the user survey."
I still sense we need to ensure we have data from users (deployers and end
users) and that won't be available until May.


> 2) Expansion on ideas to ease long term support of APIs
>
> Thinking through this has led to a lot of deep thought about what
> changes we can make to support an API for a longer period of time.
> These are all ideas that can be applied to v3:
>
>   - minor-versions for the core API and what changes would be
> considered acceptable under that scheme
>
>   - how we can make significant changes that normally are not
> backwards compatible optional so that clients can declare
> support for them, easing the possible future need for another
> major API revision.
>
> 3) New ideas to ease keeping both v2 and v3
>
> There has been some excellent input from those that have been working on
> the v3 API with some new ideas for how we can lessen the burden of
> keeping both APIs long term.  I'm personally especially interested in
> the "v2.1" approach where v2 turns into code that transforms requests
> and responses to/from v3 format.  More on that here:
>
> http://ozlabs.org/~cyeoh/V3_API.html#v2_v3_dual_maintenance
>
>
> What I'd like to do next is work through a new proposal that includes
> keeping both v2 and v3, but with a new added focus of minimizing the
> cost.  This should include a path away from the dual code bases and to
> something like the "v2.1" proposal.
>


I'd like to make a better API and I think details about this proposal helps
us with that goal.

I'd like the effort to continue but I'd like an additional focus during the
Icehouse timeframe to write end user and SDK dev docs and to listen to the
user survey respondents.

Thanks Russell and Chris for the mega-efforts here. It matters and you're
fighting the good fight.
Anne




>
> Thank you all for your participation on this topic.  It has been quite
> controversial, but the API we expose to our users is a really big deal.
>  I'm feeling more and more confident that we're coming through this with
> a much better understanding of the problem space overall, as well as a
> better plan going forward than we had a few weeks ago.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the currently accepted way to do plugins

2014-03-04 Thread Russell Bryant
On 03/04/2014 06:27 AM, Murray, Paul (HP Cloud Services) wrote:
> One of my patches has a query asking if I am using the agreed way to
> load plugins: https://review.openstack.org/#/c/71557/
> 
> I followed the same approach as filters/weights/metrics using
> nova.loadables. Was there an agreement to do it a different way? And if
> so, what is the agreed way of doing it? A pointer to an example or even
> documentation/wiki page would be appreciated.

The short version is entry-point based plugins using stevedore.

We should be careful though.  We need to limit what we expose as
external plug points, even if we consider them unstable.  If we don't
want it to be public, it may not make sense for it to be a plugin
interface at all.
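
For anyone looking for a concrete starting point, a minimal stevedore
sketch (the namespace and driver names are made up for illustration, not
nova's real entry points):

    from stevedore import driver

    def load_driver(name):
        # 'name' selects one of the entry points registered in a package's
        # setup.cfg, e.g.:
        #   [entry_points]
        #   acme.scheduler.drivers =
        #       simple = acme.drivers.simple:SimpleDriver
        mgr = driver.DriverManager(
            namespace='acme.scheduler.drivers',
            name=name,
            invoke_on_load=True,  # instantiate the plugin class on load
        )
        return mgr.driver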

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Russell Bryant
On 03/03/2014 12:32 PM, Russell Bryant wrote:
> There has been quite a bit of discussion about the future of the v3 API
> recently.

  :-)

Since this proposal was posted, it is clear that there is not much
support for it, much less consensus.  That's progress because it now
seems clear to me that the path proposed (keep only v2) isn't the right
answer.

Let's reflect a bit on some of the other progress that I think has been
made:

1) Greater understanding and documentation of the v3 API effort

It has led to a larger group of people taking a much closer look at what
has been done with the v3 API so far.  That has widened the net for
feedback on what else should be done before we could call it done.

Chris has put together an excellent page with the most comprehensive
overview of the v3 API effort that I've seen.  I think this is very helpful:

http://ozlabs.org/~cyeoh/V3_API.html

2) Expansion on ideas to ease long term support of APIs

Thinking through this has led to a lot of deep thought about what
changes we can make to support an API for a longer period of time.
These are all ideas that can be applied to v3:

  - minor-versions for the core API and what changes would be
considered acceptable under that scheme

  - how we can make significant changes that normally are not
backwards compatible optional so that clients can declare
support for them, easing the possible future need for another
major API revision.

3) New ideas to ease keeping both v2 and v3

There has been some excellent input from those that have been working on
the v3 API with some new ideas for how we can lessen the burden of
keeping both APIs long term.  I'm personally especially interested in
the "v2.1" approach where v2 turns into code that transforms requests
and responses to/from v3 format.  More on that here:

http://ozlabs.org/~cyeoh/V3_API.html#v2_v3_dual_maintenance


What I'd like to do next is work through a new proposal that includes
keeping both v2 and v3, but with a new added focus of minimizing the
cost.  This should include a path away from the dual code bases and to
something like the "v2.1" proposal.

Thank you all for your participation on this topic.  It has been quite
controversial, but the API we expose to our users is a really big deal.
 I'm feeling more and more confident that we're coming through this with
a much better understanding of the problem space overall, as well as a
better plan going forward than we had a few weeks ago.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6][Security Group] BP: Support ICMP type filter by security group

2014-03-04 Thread Brian Haley
On 03/03/2014 11:18 AM, Collins, Sean wrote:
> On Mon, Mar 03, 2014 at 09:39:42PM +0800, Xuhan Peng wrote:
>> Currently, only security group rule direction, protocol, ethertype and port
>> range are supported by neutron security group rule data structure. To allow
> 
> If I am not mistaken, I believe that when you use the ICMP protocol
> type, you can use the port range specs to limit the type.
> 
> https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_db.py#L309
> 
> http://i.imgur.com/3n858Pf.png
> 
> I assume we just have to check and see if it applies to ICMPv6?

I tried using horizon to add an icmp type/code rule, and it didn't work.

Before:

-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN

After:

-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN
-A neutron-linuxbri-i4533da4f-1 -p icmp -j RETURN

I'd assume I'll have the same error with v6.

I am curious what's actually being done under the hood here now...

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2014-03-04 Thread Daniel P. Berrange
On Tue, Mar 04, 2014 at 04:15:03PM +, Duncan Thomas wrote:
> On 28 November 2013 10:14, Daniel P. Berrange  wrote:
> 
> > For this specific block zero'ing case it occurred to me that it might
> > be sufficient to just invoke 'ionice dd' instead of 'dd' and give it
> > a lower I/O priority class than normal.
> 
> Excuse the thread necromancy, I've just been searching for thoughts
> about this very issue. I've merged a patch that does I/O nice, and it
> helps, but it is easy to DoS a volume server by creating and deleting
> volumes fast while maintaining a high i/o load... the zeroing never
> runs and so you run out of allocatable space.

Oh well, thanks for experimenting with this idea anyway.

> I'll take a look at writing something with more controls than dd for
> doing the zeroing...

Someone already beat you to it

  commit 71946855591a41dcc87ef59656a8a340774eeaf2
  Author: Pádraig Brady 
  Date:   Tue Feb 11 11:51:39 2014 +

libvirt: support configurable wipe methods for LVM backed instances

Provide configurable methods to clear these volumes.
The new 'volume_clear' and 'volume_clear_size' options
are the same as currently supported by cinder.

* nova/virt/libvirt/imagebackend.py: Define the new options.
* nova/virt/libvirt/utils.py (clear_logical_volume): Support the
new options. Refactor the existing dd method out to
_zero_logic_volume().
* nova/tests/virt/libvirt/test_libvirt_utils.py: Add missing test cases
for the existing clear_logical_volume code, and for the new code
supporting the new clearing methods.
* etc/nova/nova.conf.sample: Add the 2 new config descriptions
to the [libvirt] section.

Change-Id: I5551197f9ec89ae2f9b051696bccdeb1af2c031f
Closes-Bug: #889299

this matches equivalent config in cinder.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Thomas Goirand
On 03/04/2014 06:13 PM, Julien Danjou wrote:
> On Tue, Mar 04 2014, James E. Blair wrote:
> 
>> If there aren't objections to this plan, I think we can propose a motion
>> to the TC with a date and move forward with it fairly soon.
> 
> That plan LGTM, and +1 for OFTC. :)

Same over here, +1 for OFTC.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] rpc concurrency control rfc

2014-03-04 Thread Duncan Thomas
On 28 November 2013 10:14, Daniel P. Berrange  wrote:

> For this specific block zero'ing case it occurred to me that it might
> be sufficient to just invoke 'ionice dd' instead of 'dd' and give it
> a lower I/O priority class than normal.

Excuse the thread necromancy, I've just been searching for thoughts
about this very issue. I've merged a patch that does I/O nice, and it
helps, but it is easy to DoS a volume server by creating and deleting
volumes fast while maintaining a high i/o load... the zeroing never
runs and so you run out of allocatable space.

I'll take a look at writing something with more controls than dd for
doing the zeroing...

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Alejandro Cabrera
> Hi folks, I'd like to propose adding Fei Long Wang (flwang) as a core 
> reviewer on the Marconi team. He has been contributing regularly over the 
> past couple of months, and has proven to be a careful reviewer with good 
> judgment.
>
> All Marconi ATCs, please respond with a +1 or -1.
>
> Cheers,
> Kurt G. | @kgriffs | Marconi PTL

+1!

I second this thought. Fei Long Wang (flwang) has consistently
participated in discussions and meetings, and contributed.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Collins, Sean
On Tue, Mar 04, 2014 at 04:06:02PM +, Robert Li (baoli) wrote:
> Hi Sean,
> 
> I just added the ipv6-prefix-delegation BP that can be found using the
> search link on the ipv6 wiki. More details about it will be added once I
> find time.

Perfect - we'll probably want to do a session at the summit on it.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [3rd party testing] How to setup CI? Take #2

2014-03-04 Thread Jay Pipes
On Tue, 2014-03-04 at 16:31 +0100, Luke Gorrie wrote:
> Option 1: Should I debug the CI implementation we developed and tested
> back in December, which was not based on Tempest but rather on our
> private integration test method that we used in the Havana cycle? The
> debugging that is needed is more defensive programming in our
> automated attempt to setup OpenStack for test -- so that if the
> installation fails for some unexpected reason we don't vote based on
> that.
> 
> Option 2: Should I start over with a standard Tempest test insead? If
> so, what's the best method to set it up (yours? Arista's? another?),
> and how do I know when that method is sufficiently debugged that it's
> time to start?

Although I recognize you and your team have put in a substantial amount
of time in debugging your custom setup, I would advise dropping the
custom CI setup and going with a method that specifically uses the
upstream openstack-dev/devstack and openstack-infra/devstack-gate
projects. The reason is that these two projects are well supported by
the upstream Infrastructure team.

devstack will allow you to set up a complete OpenStack environment that
matches upstream -- with the exception of using the Tailf-NCS ML2 plugin
instead of the default plugin. devstack-gate will provide you the git
checkout plumbing that will populate the source directories for the
OpenStack projects that devstack uses to build its OpenStack
environment.

I'd recommend using my os-ext-testing repository (which is mostly just a
couple of shell scripts and documentation that uses the upstream Puppet
modules to install and configure Jenkins, Zuul, Jenkins Job Builder,
Gearman, devstack-gate/nodepool scripts on a master and slave node).

> I was on the 3rd party testing meeting last night (as 'lukego') and
> your recommendation for me was to hold off for a week or so and then
> try your method after your next update. That sounds totally fine to me
> in principle. However, this will mean that I don't have a mature test
> procedure in place by March 14th, and I'm concerned that there may be
> bad consequences on this. This date was mentioned as a deadline in the
> Neutron meeting last night, but I don't actually understand what the
> consequence of non-compliance is for established drivers such as this
> one.

I'm not going to step on Mark McClain's toes regarding policy for
drivers in the Neutron code tree; Mark, please chime in here.

I mentioned waiting about a week because, after discussions with the
upstream Infrastructure team yesterday, it became clear that putting a
nodepool manager in place to spin up *single-use devstack slave nodes*
for running Tempest tests is going to be necessary.

I had previously thought that it was possible to reset a Devstack
environment to a clean state (thus being able to re-use the slave
Jenkins node for >1 test run). However, so much is changed on the slave
node during a Tempest run (and by devstack itself), that the only way to
truly ensure a clean test environment is to have a brand new devstack
slave node created/launched for each test run. Nodepool is the piece of
software that manages a pool of these devstack slave nodes, and it will
take me about a week to complete a new article and testing on the
os-ext-testing repository for integrating and installing nodepool
properly.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Proposal to add Fei Long Wang (flwang) as a core reviewer

2014-03-04 Thread Kurt Griffiths
The poll has closed. flwang has been promoted to Marconi core.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Update the Wiki with links to blueprints and reviews

2014-03-04 Thread Robert Li (baoli)
Hi Sean,

I just added the ipv6-prefix-delegation BP that can be found using the
search link on the ipv6 wiki. More details about it will be added once I
find time.

thanks,
--Robert

On 3/4/14 10:05 AM, "Collins, Sean" 
wrote:

>Hi All,
>
>We've got a lot of work in progress, so if you
>have a blueprint or bug that you are working on (or know about),
>let's make sure that we keep track of them. Ideally, for bugs, add the
>"ipv6" tag
>
>https://bugs.launchpad.net/neutron/+bugs?field.tag=ipv6
>
>For blueprints and code reviews, please add them to the Wiki
>
>https://wiki.openstack.org/wiki/Neutron/IPv6
>
>-- 
>Sean M. Collins
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to move from Freenode to OFTC

2014-03-04 Thread Sergey Lukjanov
++, OFTC looks nice.

On Tue, Mar 4, 2014 at 3:31 PM, Daniel P. Berrange  wrote:
> On Tue, Mar 04, 2014 at 11:12:13AM +, Stephen Gran wrote:
>> On 04/03/14 11:01, Thierry Carrez wrote:
>> >James E. Blair wrote:
>> >>Freenode has been having a rough time lately due to a series of DDoS
>> >>attacks which have been increasingly disruptive to collaboration.
>> >>Fortunately there's an alternative.
>> >>
>> >>OFTC <http://www.oftc.net/> is a robust and established alternative
>> >>to Freenode.  It is a smaller network whose mission statement makes it a
>> >>less attractive target.  It's significantly more stable than Freenode
>> >>and has friendly and responsive operators.  The infrastructure team has
>> >>been exploring this area and we think OpenStack should move to using
>> >>OFTC.
>> >
>> >There is quite a bit of literature out there pointing to Freenode, like
>> >presentation slides from old conferences. We should expect people to
>> >continue to join Freenode's channels forever. I don't think staying a
>> >few weeks on those channels to redirect misled people will be nearly
>> >enough. Could we have a longer plan ? Like advertisement bots that would
>> >advise every n hours to join the right servers ?
>>
>> Why not just set /topic to tell people to connect to OFTC and join there?
>
> That's certainly something you want to do, but IME of moving IRC channels
> in the past, plenty of people will never look at the #topic :-( You want
> to be more aggressive like setting channel permissions to block anyone
> except admins from speaking in the channel. Then set a bot with admin
> rights to spam the channel once an hour telling people to go elsewhere.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Nikola Đipanov
On 03/03/2014 06:32 PM, Russell Bryant wrote:
> There has been quite a bit of discussion about the future of the v3 API
> recently.  There has been growing support for the idea that we should
> change course and focus on evolving the existing v2 API instead of
> putting out a new major revision.  This message is a more complete
> presentation of that proposal that concludes that we can do what we
> really need to do with only the v2 API.
> 
> Keeping only the v2 API requires some confidence that we can stick with
> it for years to come.  We don't want to be revisiting this any time
> soon.  This message addresses a bunch of different questions about how
> things would work if we only had v2.
> 
> 1) What about tasks?
> 
> In some cases, the proposed integration of tasks is backwards
> compatible.  A task ID will be added to a header.  The biggest point of
> debate was if and how we would change the response for creating a
> server.  For tasks in v2, we would not change the response by default.
> The task ID would just be in a header.  However, if and when the client
> starts exposing version support information, we can provide an
> alternative/preferred response based on tasks.
> 
> For example:
> 
>Accept: application/json;type=task
> 

I feel that the ability to expose tasks is the single most important thing
we need to do from the API semantics standpoint, unless we redesign the
API from scratch (see below). Just looking at how awkward and edge-case
ridden the interaction between components that use each other's APIs in
OpenStack is should be enough to convince anyone that this needs fixing.

I am not sure if "tasks" will solve this, but the fact that we have
tried to solve this in several different ways up until now, and that the
effort was being driven by large deployers, mostly tells me that this is
an issue people need solved.

From that point of view - if we can do this with V2 we absolutely should.
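
(To make the quoted tasks example concrete: a hedged sketch of how a
task-aware client might opt in - the endpoint and the response header name
here are hypothetical, not an agreed design:)

    import requests

    resp = requests.post(
        "http://nova.example.com/v2/servers",
        json={"server": {"name": "demo", "flavorRef": "1",
                         "imageRef": "some-image-id"}},
        headers={"Accept": "application/json;type=task"},
    )
    # A client that declared task support could then poll the task rather
    # than the instance; the header name is hypothetical.
    task_id = resp.headers.get("X-Compute-Task-Id")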

> 2) Versioning extensions
> 
> One of the points being addressed in the v3 API was the ability to
> version extensions.  In v2, we have historically required new API
> extensions, even for changes that are backwards compatible.  We propose
> the following:
> 
>  - Add a version number to v2 API extensions
>  - Allow backwards compatible changes to these API extensions,
> accompanied by a version number increase
>  - Add the option to advertise an extension as deprecated, which can be
> used for all those extensions created only to advertise the availability
> of new input parameters
> 
> 3) Core versioning
> 
> Another pain point in API maintenance has been having to create API
> extensions for every small addition to the core API.  We propose that a
> version number be exposed for the core API that exposes the revision of
> the core API in use.  With that in place, backwards compatible changes
> such as adding a new property to a resource would be allowed when
> accompanied by a version number increase.
> 
> With versioning of the core and API extensions, we will be able to cut
> down significantly on the number of changes that require an API
> extension without sacrificing the ability of a client to discover
> whether the addition is present or not.
> 

The whole extensions vs. core discussion has been confusing me since the
beginning, and I can't say it has changed much.

After thinking about this for some time I've decided :) that I think
Nova needs 2 APIs. Amazon EC2 was always meant to be exposed as a web
service to its users, and having a REST API that exposes "resources"
without actually going into details about what is happening is fine, and
it's fine for people using Nova in a similar manner. It is clear from
this that I think the Nova API borrows a lot from EC2.

But I think nova would benefit from having a lower level API, just as a
well designed software library would, that lets people build services
on top of it and that might provide different stability guarantees, as its
customers would not be cloud applications, but rather other cloud
infrastructure. I think that having things tiered in this manner would
answer a lot of questions about what is and isn't "core" and what we
mean by "extensions".

If I were to attempt a new API for Nova - I would start from the above.

> 4) API Proxying
> 
> We don't see proxying APIs as a problem.  It is the cost we pay for
> choosing to split apart projects after they are released.  We don't
> think it's fair to break users just because we have chosen to split
> apart the backend implementation.
> 
> Further, the APIs that are proxied are frozen while those in the other
> projects are evolving.  We believe that as more features are available
> only via the native APIs in Cinder, Glance, and Neutron, users will
> naturally migrate over to the native APIs.
> 
> Over time, we can ensure clients are able to query the API without the
> need to proxy by adding new formats or extensions that don't return data
> that needed to be proxied.
> 

Proxying is fine, and c
