Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-28 Thread Angus Salkeld
On Fri, Nov 28, 2014 at 5:33 PM, Qiming Teng 
wrote:

> Dear all,
>
> Auto-Scaling is an important feature supported by Heat and needed by
> many users we talked to.  There are two flavors of AutoScalingGroup
> resources in Heat today: the AWS-based one and the Heat native one.  As
> more requests coming in, the team has proposed to separate auto-scaling
> support into a separate service so that people who are interested in it
> can jump onto it.  At the same time, Heat engine (especially the resource
> type code) will be drastically simplified.  The separated AS service
> could move forward more rapidly and efficiently.
>
> This work was proposed a while ago with the following wiki and
> blueprints (mostly approved during Havana cycle), but the progress is
> slow.  A group of developers now volunteer to take over this work and
> move it forward.
>

Thank you for taking on this big project!


>
> wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
> BPs:
>  - https://blueprints.launchpad.net/heat/+spec/as-lib-db
>  - https://blueprints.launchpad.net/heat/+spec/as-lib
>  - https://blueprints.launchpad.net/heat/+spec/as-engine-db
>  - https://blueprints.launchpad.net/heat/+spec/as-engine
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
>  - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
>  - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
>  -
> https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
>
> Once this whole thing lands, Heat engine will talk to the AS engine in
> terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
> how auto-scaling is implemented although the AS engine may in turn ask
> Heat to create/update stacks for scaling's purpose.  In theory, AS
> engine can create/destroy resources by directly invoking other OpenStack
> services.  This new AutoScaling service may eventually have its own DB,
> engine, API, api-client.  We can definitely aim high while work hard on
> real code.
>

Yes, I think AS is the last major bit of code that needs to be moved out into
its own service. Though hopefully still using Heat to orchestrate.


>
> After reviewing the BPs/Wiki and some communication, we get two options
> to push forward this.  I'm writing this to solicit ideas and comments
> from the community.
>
> Option A: Top-Down Quick Split
> ------------------------------
>
> This means we will follow a roadmap shown below, which is not 100%
> accurate yet and very rough:
>
>   1) Get the separated REST service in place and working
>   2) Switch Heat resources to use the new REST service
>
> Pros:
>   - Separate code base means faster review/commit cycle
>   - Less code churn in Heat
> Cons:
>   - A new service needs to be installed/configured/launched
>   - Need commitments from dedicated, experienced developers from the very
> beginning
>
> Option B: Bottom-Up Slow Growth
> -------------------------------
>
> The roadmap is more conservative, with many (yes, many) incremental
> patches to migrate things carefully.
>
>   1) Separate some of the autoscaling logic into libraries in Heat
>   2) Augment heat-engine with new AS RPCs
>   3) Switch AS related resource types to use the new RPCs
>   4) Add new REST service that also talks to the same RPC
>  (create new GIT repo, API endpoint and client lib...)
>
> Pros:
>   - Less risk of breaking users' environments, with each revision well tested
>   - Smoother transition for users in terms of upgrades
>
> Cons:
>   - A lot of churn within Heat code base, which means long review cycles
>   - Still need commitments from cores to supervise the whole process
>
>

At the summit people were leaning towards "B", but I am very tempted by the
potential speed of development of "A" and the reduced code churn on the heat
repo (assuming we pull it out into a new repo). Given the other code churn
going on in our code base (convergence), it might make non-stop AS rework
difficult to manage.



> There could be option C, D... but the two above are what we came up with
> during the discussion.
>
> Another important thing we talked about is the open discussion on
> this.  OpenStack Wiki seems a good place to document settled designs but
> not for interactive discussions.  Probably we should leverage etherpad
> and the mailing list when moving forward.  Suggestions on this are also
> welcome.
>

I think a mixture of here (the mailing list) and the weekly meeting should be OK
for getting some consensus about the way forward.

-Angus


>
> Thanks.
>
> Regards,
>  Qiming
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-28 Thread Deepak Shetty
On Fri, Nov 28, 2014 at 10:32 PM, Steve Gordon  wrote:

> - Original Message -
> > From: "Deepak Shetty" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> >
> > But don't *-specs come very early in the process, when you have an
> > idea/proposal of a feature but don't have it implemented yet? Hence specs
> > just end up with paragraphs on how the feature is supposed to work, but
> > don't include any real-world screen shots, as the code is not yet ready at
> > that point in time. Along with the patch it would make more sense, since
> > the author would have tested it, so it isn't a big overhead to capture
> > those CLI screen shots and put them in a .txt or .md file so that patch
> > reviewers can see the patch in action and hence can review more
> > effectively.
> >
> > thanx,
> > deepak
>
> Sure but in the original email you listed a number of other items, not
> just CLI screen shots, including:
>
> > >> > 1) What changes are needed in manila.conf to make this work
> > >> > 2) How to use the cli with this change incorporated
> > >> > 3) Some screen shots of actual usage
> > >> > 4) Any caution/caveats that one has to keep in mind while using this
>
> Ideally I see 1, 2, and 4 as things that should be added to the spec
> (retrospectively if necessary) to ensure that it maintains an accurate
> record of the feature. I can see potential benefits to including listings
> of real world usage (3) in the client projects, but I don't think all of
> the items listed belong there.
>

Agreed; IMHO (2) and (3) will be possible only when the patch is ready, the
others can be part of the spec.

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] internalURL and adminURL of endpoints should not be visible to ordinary user

2014-11-28 Thread joehuang
Hello,

If an ordinary user sends a get-token request to Keystone, the internalURL and
adminURL of endpoints will also be returned. This exposes the internal
high-privilege access addresses and some internal network topology information
to the ordinary user, and creates a risk that a malicious user could attack or
hijack the system.

The request to get a token for an ordinary user:
curl -d '{"auth": {"passwordCredentials": {"username": "huawei", "password": "2014"}, "tenantName": "huawei"}}' \
     -H "Content-type: application/json" http://localhost:5000/v2.0/tokens

The response includes the internalURL and adminURL of the endpoints:
{"access": {"token": {"issued_at": "2014-11-27T02:30:59.218772", "expires": 
"2014-11-27T03:30:59Z", "id": "b8684d2b68ab49d5988da9197f38a878", "tenant": 
{"description": "normal Tenant", "enabled": true, "id": 
"7ed3351cd58349659f0bfae002f76a77", "name": "huawei"}, "audit_ids": 
["Ejn3BtaBTWSNtlj7beE9bQ"]}, "serviceCatalog": [{"endpoints": [{"adminURL": 
"http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77";, "region": 
"regionOne", "internalURL": 
"http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77";, "id": 
"170a3ae617a1462c81bffcbc658b7746", "publicURL": 
"http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77"}], 
"endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": 
[{"adminURL": "http://10.67.148.27:9696";, "region": "regionOne", "internalURL": 
"http://10.67.148.27:9696";, "id": "7c0f28aa4710438bbd84fd25dbe4daa6", 
"publicURL": "http://10.67.148.27:9696"}], "endpoints_links": [], "type": 
"network", "name": "neutron"}, {"endpoints": [{"adminURL": "ht
 tp://10.67.148.27:9292", "region": "regionOne", "internalURL": 
"http://10.67.148.27:9292";, "id": "576f41fc8ef14b4f90e516bb45897491", 
"publicURL": "http://10.67.148.27:9292"}], "endpoints_links": [], "type": 
"image", "name": "glance"}, {"endpoints": [{"adminURL": 
"http://10.67.148.27:8777";, "region": "regionOne", "internalURL": 
"http://10.67.148.27:8777";, "id": "77d464e146f242aca3c50e10b6cfdaa0", 
"publicURL": "http://10.67.148.27:8777"}], "endpoints_links": [], "type": 
"metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": 
"http://10.67.148.27:6385";, "region": "regionOne", "internalURL": 
"http://10.67.148.27:6385";, "id": "1b8177826e0c426fa73e5519c8386589", 
"publicURL": "http://10.67.148.27:6385"}], "endpoints_links": [], "type": 
"baremetal", "name": "ironic"}, {"endpoints": [{"adminURL": 
"http://10.67.148.27:35357/v2.0";, "region": "regionOne", "internalURL": 
"http://10.67.148.27:5000/v2.0";, "id": "435ae249fd2a427089cb4bf2e6c0b8e9", 
"publicURL": "http://10.67.148.27:5000/v2.0";
 }], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": 
{"username": "huawei", "roles_links": [], "id": 
"a88a40a635334e5da2ac3523d9780ed3", "roles": [{"name": "_member_"}], "name": 
"huawei"}, "metadata": {"is_admin": 0, "roles": 
["73b0a1ac6b0c48cb90205c53f2b9e48d"]}}}

At the least, the internalURL and adminURL of endpoints should not be returned to
ordinary users, unless the admin has configured the policy to grant ordinary users
the right to see them.
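
To make the exposure concrete, here is a minimal sketch (Python, using the
requests library) that performs the same v2.0 token request and prints every
adminURL and internalURL an ordinary user gets back; the credentials and
endpoint are just the example values from the curl command above:

    import json
    import requests

    # Example values taken from the curl command above; adjust for your cloud.
    KEYSTONE_URL = "http://localhost:5000/v2.0/tokens"
    payload = {
        "auth": {
            "passwordCredentials": {"username": "huawei", "password": "2014"},
            "tenantName": "huawei",
        }
    }

    resp = requests.post(KEYSTONE_URL,
                         data=json.dumps(payload),
                         headers={"Content-type": "application/json"})
    resp.raise_for_status()

    # The service catalog is returned to an ordinary (non-admin) user too,
    # including the adminURL and internalURL of every endpoint.
    for service in resp.json()["access"]["serviceCatalog"]:
        for endpoint in service["endpoints"]:
            print("%s admin=%s internal=%s" % (service["name"],
                                               endpoint.get("adminURL"),
                                               endpoint.get("internalURL")))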

Best Regards
Chaoyi Huang ( Joe Huang )


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-28 Thread Thomas Goirand
On 11/28/2014 07:35 PM, Ihar Hrachyshka wrote:
> On 27/11/14 19:10, Thomas Goirand wrote:
>> On 11/28/2014 12:06 AM, Ihar Hrachyshka wrote:
>>> On 27/11/14 12:09, Thomas Goirand wrote:
 On 11/27/2014 12:31 AM, Donald Stufft wrote:
>
>> On Nov 26, 2014, at 10:34 AM, Thomas Goirand
>>  wrote:
>>
>> Hi,
>>
>> I tried to package suds-jurko. I was first happy to see
>> that there was some progress to make things work with
>> Python 3. Unfortunately, the reality is that suds-jurko has
>> many issues with Python 3. For example, it has many:
>>
>> except Exception, e:
>>
>> as well as many:
>>
>> raise Exception, 'Duplicate key %s found' % k
>>
>> This is clearly not Python3 code. I tried quickly to fix
>> some of these issues, but as I fixed a few, others appear.
>>
>> So I wonder, what is the point of using suds-jurko, which
>> is half-baked, and which will conflict with the suds
>> package?
>>
> It looks like it uses 2to3 to become Python 3 compatible.
>>>
 Ouch! That's horrible.
>>>
 I think it'd be best if someone spent some time on writing
 real code rather than using such a hack as 2to3. Thoughts
 anyone?
>>>
>>> That sounds very subjective. If upstream is able to support
>>> multiple python versions from the same codebase, then I see no
>>> reason for them to split the code into multiple branches and
>>> introduce additional burden syncing fixes between those.
>>>
>>> /Ihar
> 
>> Objectively, using 2to3 sux, and it's much better to fix the code, 
>> rather than using such a band-aid. It is possible to support
>> multiple versions of Python with a single code base. So many
>> projects are able to do it, I don't see why suds would be any
>> different.
> 
> Their support matrix starts from Python 2.4. Maybe that's a reason for
> band-aid and not using runtime cross-version wrappers.
> /Ihar

If that's the reason, then that's unreasonable. I may as well ask for
support for my old 16-bit Atari computers too then...

So finally: I don't think using suds-jurko is of any help, unless it
takes a big step towards staying current with modern Python 3.
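
For reference, the constructs quoted above are Python 2-only syntax. A minimal
illustration (not suds-jurko's actual code) of the equivalent forms that parse
on both Python 2.6+ and Python 3 - i.e. what single-codebase support without
2to3 would require - looks like this:

    # Python 2-only forms, as quoted above:
    #     except Exception, e:
    #     raise Exception, 'Duplicate key %s found' % k
    #
    # Forms valid on both Python 2.6+ and Python 3:

    def put(mapping, k, v):
        """Insert k into mapping, refusing duplicates."""
        if k in mapping:
            raise Exception('Duplicate key %s found' % k)
        mapping[k] = v

    def safe_put(mapping, k, v):
        try:
            put(mapping, k, v)
        except Exception as e:      # instead of "except Exception, e:"
            print('insert failed: %s' % e)

    if __name__ == '__main__':
        d = {'a': 1}
        safe_put(d, 'a', 2)         # prints: insert failed: Duplicate key a found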

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-11-28 Thread Thomas Goirand
On 11/29/2014 12:15 AM, Ivan Kliuk wrote:
> Hi, team!
> 
> Let me please present ideas collected during the unit tests improvement
> meeting:
> 1) Rename class ``Environment`` to something more descriptive
> 2) Remove hardcoded self.clusters[0], etc., from ``Environment``. Let's
> use parameters instead
> 3) run_tests.sh should invoke alternate syncdb() for cases where we
> don't need to test migration procedure, i.e. create_db_schema()
> 4) Consider usage of custom fixture provider. The main functionality
> should combine loading from YAML/JSON source and support fixture inheritance
> 5) The project needs a document (policy) which describes:
> - Tests creation technique;
> - Test categorization (integration/unit) and approaches of testing
> different code base
> -
> 6) Review the tests and refactor unit tests as described in the test policy
> 7) Mimic Nailgun module structure in unit tests
> 8) Explore Swagger tool 
> 
> -- 
> Sincerely yours,
> Ivan Kliuk

Hi Ivan,

Sorry that I wasn't there during the meeting, otherwise I would have had
some things to say. Let me say it here if you don't mind.

I've been fighting *a lot* this week to have nailgun use a socket
for postgres that I created in /tmp/tmp. (as one can't use
something else when building packages). The normal way would be to put
the path of that created postgres instance as hostname in
nailgun/settings.yaml, but this doesn't work, and I always ended up
having the /tmp path being passed to psycopg2 as dbname. So, because the
resulting dbname is completely wrong, I was never able to run unit
tests correctly, unless I completely bypassed that and forced my own baked
DSN into psycopg2 (e.g., hacking __init__.py of psycopg2 to make sure I
had what I expected).

So my question is: could someone help me to fix nailgun, so that it is
possible to use a postgres instance path as hostname? Otherwise, I'll
have no way to run unit tests at package build time.
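
For what it's worth, libpq (and therefore psycopg2) treats a host value that
starts with a slash as the directory containing the Unix socket, so the desired
outcome would look roughly like the sketch below. This is only an illustration
of the behaviour being asked for, not current nailgun code, and the socket
directory and credentials are placeholders:

    import psycopg2

    # Hypothetical values: a postgres instance created for the package build,
    # listening on a Unix socket under /tmp/tmp rather than on TCP.
    SOCKET_DIR = "/tmp/tmp"

    # With libpq, a "host" beginning with "/" is taken as the directory that
    # holds the .s.PGSQL.<port> socket file, not as a hostname -- the database
    # name stays "nailgun" and is not overwritten by the /tmp path.
    conn = psycopg2.connect(dbname="nailgun",
                            user="nailgun",
                            password="nailgun",
                            host=SOCKET_DIR,
                            port=5432)

    cur = conn.cursor()
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
    conn.close()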

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Dmitry Borodaenko
Vitaly,

Is there a document or spec or a wiki page that describes the current
status of this discussion in the context of the whole pluggable
architecture design?

Jumping into this thread without having the whole picture is hard. Knowing
what is already agreed, what is implemented so far, and having a structured
summary of points of disagreement with pro and contra arguments would help
a lot.
On Nov 28, 2014 9:48 AM, "Vitaly Kramskikh"  wrote:

> Folks,
>
> Please participate in this discussion. We already have a few meetings on
> this topic and there is still no decision. I understand entry level is
> pretty high, but please find some time for this.
>
> Evgeniy,
>
> Responses inline:
>
> 2014-11-28 20:03 GMT+03:00 Evgeniy L :
>
>> >> Yes, but is already used in a few places. I want to notice once again
>> - even a simple LBaaS plugin with a single checkbox needed to utilize this
>> functionality.
>>
>> Yes, but you don't need to specify it in each task.
>>
> Just by adding conditions to tasks we will be able to pluginize all
> current functionality that can be pluginized. On the other hand, 1 line
> will be added to task definition and you are concerned about this that much
> that you want to create a separate interface for "complex" plugins. Am I
> right?
>
>>
>> >> So, you're still calling this interface complicated. Ok, I'm looking
>> forward to seeing your proposal about dealing with complex plugins.
>>
>> All my concerns were related to simple plugins and that we should
>> find a way not to force a plugin developer to do this copy-paste work.
>>
> I don't understand what copy-paste work you are talking about. Copying
> conditions from tasks to is_removable? Yes, it will be so in most cases,
> but not always, so we need to give a plugin writer a way to define
> is_removable manually. If you are talking about copypasting conditions
> between tasks (though I don't understand why we need a few tasks with the
> same conditions), YAML links can be used - we use them a lot in
> openstack.yaml.
>
>>
>> >> If you have several checkboxes, then it is a complex plugin with
>> complex configuration ...
>>
>> Here we need a definition of s simple plugins, in the current
>> release with simple plugins you can define some fields on the UI (not a
>> single checkbox) and run several tasks if plugin is enabled.
>>
> Ok, we can define simple plugin as a plugin which doesn't require
> modification of generated YAML files at all. But with proposed approach
> there is no need to somehow separate "simple" and "complex" plugins.
>
>>
>> Thanks,
>>
>>>
>> On Fri, Nov 28, 2014 at 7:01 PM, Vitaly Kramskikh <
>> vkramsk...@mirantis.com> wrote:
>>
>>> Evgeniy,
>>>
>>> Responses inline:
>>>
>>> 2014-11-28 18:31 GMT+03:00 Evgeniy L :
>>>
 Hi Vitaly,

 I agree with you that conditions can be useful in case of complicated
 plugins, but
 at the same time in case of simple cases it adds a huge amount of
 complexity.
 I would like to avoid forcing user to know about any conditions if he
 wants
 to add several text fields on the UI.

 I have several reasons why we shouldn't do that:
 1. conditions are described with yet another language with it's own
 syntax

>>> Yes, but is already used in a few places. I want to notice once again -
>>> even a simple LBaaS plugin with a single checkbox needed to utilize this
>>> functionality.
>>>
 2. the language is not documented (solvable)

>>> It is documented:
>>> http://docs.mirantis.com/fuel-dev/develop/nailgun/customization/settings.html#expression-syntax
>>>
 3. complicated interface will lead to a lot of bugs for the end user,
 and it will be
 a Fuel team's problem

>>> So, you're still calling this interface complicated. Ok, I'm looking
>>> forward to seeing your proposal about dealing with complex plugins.
>>>
 4. in case of several checkboxes you'll have to write a huge conditions
 with
 a lot of "and" statements and it'll be really easy to forget about
 some of them

>>> If you have several checkboxes, then it is a complex plugin with complex
>>> configuration, so I see no problem here. There will be many more places
>>> where you can "forget" stuff.
>>>

 As result in simple cases plugin developer will have to specify the same
 condition of every task in tasks.yaml file, add it to metadata.yaml.
 If you add new checkbox, you should go through all of this files,
 add "and lbaas:new_checkbox_name" statement.

>>> Once again, in simple cases checkbox and the conditions (one for task
>>> and one for is_removable) can be easily pregenerated by FPB, so plugin
>>> developer has to do nothing more. If you add a new checkbox which doesn't
>>> affect plugin removeability and tasks, you have to change nothing in plugin
>>> metadata.
>>>

 Thanks,

 On Thu, Nov 27, 2014 at 7:57 PM, Vitaly Kramskikh <
 vkramsk...@mirantis.com> wrote:

> Fol

Re: [openstack-dev] [diskimage-builder] Tracing levels for scripts (119023)

2014-11-28 Thread James Slagle
On Thu, Nov 27, 2014 at 1:29 PM, Sullivan, Jon Paul
 wrote:
>> -Original Message-
>> From: Ben Nemec [mailto:openst...@nemebean.com]
>> Sent: 26 November 2014 17:03
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [diskimage-builder] Tracing levels for
>> scripts (119023)
>>
>> On 11/25/2014 10:58 PM, Ian Wienand wrote:
>> > Hi,
>> >
>> > My change [1] to enable a consistent tracing mechanism for the many
>> > scripts diskimage-builder runs during its build seems to have hit a
>> > stalemate.
>> >
>> > I hope we can agree that the current situation is not good.  When
>> > trying to develop with diskimage-builder, I find myself constantly
>> > going and fiddling with "set -x" in various scripts, requiring me
>> > re-running things needlessly as I try and trace what's happening.
>> > Conversley some scripts set -x all the time and give output when you
>> > don't want it.
>> >
>> > Now nodepool is using d-i-b more, it would be even nicer to have
>> > consistency in the tracing so relevant info is captured in the image
>> > build logs.
>> >
>> > The crux of the issue seems to be some disagreement between reviewers
>> > over having a single "trace everything" flag or a more fine-grained
>> > approach, as currently implemented after it was asked for in reviews.
>> >
>> > I must be honest, I feel a bit silly calling out essentially a
>> > four-line patch here.
>>
>> My objections are documented in the review, but basically boil down to
>> the fact that it's not a four line patch, it's a 500+ line patch that
>> does essentially the same thing as:
>>
>> set +e
>> set -x
>> export SHELLOPTS
>
> I don't think this is true, as there are many more things in SHELLOPTS than 
> just xtrace.  I think it is wrong to call the two approaches equivalent.
>
>>
>> in disk-image-create.  You do lose set -e in disk-image-create itself on
>> debug runs because that's not something we can safely propagate,
>> although we could work around that by unsetting it before calling hooks.
>>  FWIW I've used this method locally and it worked fine.
>
> So this does say that your alternative implementation has a difference from 
> the proposed one.  And that the difference has a negative impact.
>
>>
>> The only drawback is it doesn't allow the granularity of an if block in
>> every script, but I don't personally see that as a particularly useful
>> feature anyway.  I would like to hear from someone who requested that
>> functionality as to what their use case is and how they would define the
>> different debug levels before we merge an intrusive patch that would
>> need to be added to every single new script in dib or tripleo going
>> forward.
>
> So currently we have boilerplate to be added to all new elements, and that 
> boilerplate is:
>
> set -eux
> set -o pipefail
>
> This patch would change that boilerplate to:
>
> if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
> set -x
> fi
> set -eu
> set -o pipefail
>
> So it's adding 3 lines.  It doesn't seem onerous, especially as most people 
> creating a new element will either copy an existing one or copy/paste the 
> header anyway.
>
> I think that giving control over what is effectively debug or non-debug 
> output is a desirable feature.

I don't think it's debug vs non-debug. I think script writers that
have explicitly used set -x previously have then operated under the
assumption that they don't need to add any useful logging since it's
running -x. In that case, this patch is actually harmful.

>
> We have a patch that implements that desirable feature.
>
> I don't see a compelling technical reason to reject that patch.

I'm not specifically -2 on this patch based on the implementation.
It's more of the fact that I don't think this patch addresses the
problem in a meaningful way. The problem seems to be that dib either
logs too much or not enough information.

Also, it's a change to the current behavior that could be unexpected.
diskimage-builder has rather poor logging as-is. We don't use echoes
enough to actually say what's going on. Most script writers have just
relied on set -x to log everything explicitly, so there's no need to
echo or log any useful info. This patch turns off all tracing unless
specifically requested via $DIB_DEBUG_TRACE. Also, not all hook
scripts *have* to be bash. Do we have some that are Python (I don't
honestly recall)? If so, do those honor $DIB_DEBUG_TRACE in a way that
makes sense? Do we need policy to enforce that?

The first thing we're going to do in our *own* tripleo CI if this
patch lands is set DIB_DEBUG_TRACE=1. Why? Because otherwise the
logging from dib is useless. Likewise on most dib bug reports I see
our first response being "please rerun with DIB_DEBUG_TRACE=1". We
discuss a lot about OpenStack service logs not being useful when
debug=0, yet this patch is about to apply the same problem to dib
unfortunately.

 James Slagle
--


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Dmitriy Shulyak
>
>
>- environment_config.yaml should contain exact config which will be
>mixed into cluster_attributes. No need to implicitly generate any controls
>like it is done now.
>
>  Initially I had the same thoughts and wanted to use it the way it is, but
now I completely agree with Evgeniy that an additional DSL will cause a lot
of problems with compatibility between versions and developer experience.
We need to search for alternatives.
1. For the UI I would prefer a separate tab for plugins, where the user will be
able to enable/disable a plugin explicitly.
Currently the settings tab is overloaded.
2. On the backend we need to validate plugins against a certain env before
enabling them,
   and for the simple case we may expose some basic entities like network_mode.
For cases where you need complex logic, Python code is far more flexible
than a new DSL.

>
>- metadata.yaml should also contain "is_removable" field. This field
>is needed to determine whether it is possible to remove installed plugin.
>It is impossible to remove plugins in the current implementation. This
>field should contain an expression written in our DSL which we already use
>in a few places. The LBaaS plugin also uses it to hide the checkbox if
>Neutron is not used, so even simple plugins like this need to utilize it.
>This field can also be autogenerated, for more complex plugins plugin
>writer needs to fix it manually. For example, for Ceph it could look like
>"settings:storage.volumes_ceph.value == false and
>settings:storage.images_ceph.value == false".
>
> How will a checkbox help? There are several cases of plugin removal:
1. Plugin is installed, but not enabled for any env - just remove the plugin.
2. Plugin is installed, enabled and the cluster deployed - forget about it for
now.
3. Plugin is installed and only enabled - we need to keep the state of the DB
consistent after the plugin is removed; it is problematic, but possible.
My main point is that a plugin is enabled/disabled explicitly by the user; after
that we can decide ourselves whether it can be removed or not.

>
>- For every task in tasks.yaml there should be added new "condition"
>field with an expression which determines whether the task should be run.
>In the current implementation tasks are always run for specified roles. For
>example, vCenter plugin can have a few tasks with conditions like
>"settings:common.libvirt_type.value == 'vcenter'" or
>"settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar
>approach will be used in implementation of Granular Deployment feature.
>
> I had some thoughts about using DSL; it seemed to me especially helpful
when you need to disable part of the functionality embedded into the core,
like deploying with another hypervisor or network driver (Contrail for
example). And DSL won't cover all cases here; this is quite similar to
metadata.yaml - simple cases can be covered by some variables in tasks (like
group, unique, etc.), but complex ones are easier to test and describe in Python.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] proper syncing of cinder volume state

2014-11-28 Thread D'Angelo, Scott
A Cinder blueprint has been submitted to allow the python-cinderclient to 
involve the back end storage driver in resetting the state of a cinder volume:
https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
and the spec:
https://review.openstack.org/#/c/134366

This blueprint contains various use cases for a volume that may be listed in
the Cinder database in state detaching|attaching|creating|deleting.
The proposed solution involves augmenting the python-cinderclient command
'reset-state', but other options are listed, including those that
involve Nova, since the state of a volume in the Nova XML found in
/etc/libvirt/qemu/.xml may also be out-of-sync with the
Cinder DB or storage back end.

A related proposal for adding a new non-admin API for changing volume status 
from 'attaching' to 'error' has also been proposed:
https://review.openstack.org/#/c/137503/
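
For context, the existing call that the blueprint proposes to augment simply
overwrites the status column in the Cinder DB without consulting the backend
driver. A rough sketch of what that looks like today through python-cinderclient's
v2 bindings (the credentials, endpoint and volume ID below are placeholders):

    from cinderclient.v2 import client

    # Placeholder credentials/endpoint; use your own deployment's values.
    cinder = client.Client("admin", "password", "admin",
                           "http://keystone.example.com:5000/v2.0")

    volume_id = "11111111-2222-3333-4444-555555555555"  # placeholder

    # Current behaviour: this only updates the 'status' field in the Cinder DB.
    # It does not talk to the backend driver, so the DB can be forced out of
    # sync with what the storage backend actually thinks -- exactly the gap
    # the reset-state-with-driver blueprint wants to close.
    cinder.volumes.reset_state(volume_id, "available")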

Some questions have arisen:
1) Should the 'reset-state' command be changed at all, since it was originally just
to modify the Cinder DB?
2) Should 'reset-state' be fixed to prevent the naïve admin from changing the
Cinder DB to be out-of-sync with the back end storage?
3) Should 'reset-state' be kept the same, but augmented with new options?
4) Should a new command be implemented, with possibly a new admin API to 
properly sync state?
5) Should Nova be involved? If so, should this be done as a separate body of 
work?

This has proven to be a complex issue and there seems to be a good bit of 
interest. Please provide feedback, comments, and suggestions.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota management and enforcement across projects

2014-11-28 Thread Jay Pipes

On 11/17/2014 05:49 PM, Salvatore Orlando wrote:

Hi all,

I am resuming this thread following the session we had at the summit in
Paris (etherpad here [1])

While there was some sort of consensus regarding what this library
should do, and how it should do it, the session ended with some open
questions which we need to address before finalising the specification.

There was a rather large consensus that the only viable architecture
would be one where the quota management library owns resource usage
data. However, this raises further concerns around:
- ownership of resource usage records. As the quota library owns usage
data, it becomes the authoritative source of truth for it. This could be
problematic in some projects, particularly nova, where the resource
tracker currently owns usage data.


Well, just to be clear... the resource tracker in Nova owns the usage 
records for the compute node, not the usage records for a tenant or user 
(the quota driver and DB tables own that data).


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Vitaly Kramskikh
Folks,

Please participate in this discussion. We already have a few meetings on
this topic and there is still no decision. I understand entry level is
pretty high, but please find some time for this.

Evgeniy,

Responses inline:

2014-11-28 20:03 GMT+03:00 Evgeniy L :

> >> Yes, but is already used in a few places. I want to notice once again
> - even a simple LBaaS plugin with a single checkbox needed to utilize this
> functionality.
>
> Yes, but you don't need to specify it in each task.
>
Just by adding conditions to tasks we will be able to pluginize all current
functionality that can be pluginized. On the other hand, only 1 line will be
added to the task definition, and you are so concerned about this that
you want to create a separate interface for "complex" plugins. Am I right?

>
> >> So, you're still calling this interface complicated. Ok, I'm looking
> forward to seeing your proposal about dealing with complex plugins.
>
> All my concerns were related to simple plugins and that we should
> find a way not to force a plugin developer to do this copy-paste work.
>
I don't understand what copy-paste work you are talking about. Copying
conditions from tasks to is_removable? Yes, it will be so in most cases,
but not always, so we need to give a plugin writer a way to define
is_removable manually. If you are talking about copypasting conditions
between tasks (though I don't understand why we need a few tasks with the
same conditions), YAML links can be used - we use them a lot in
openstack.yaml.

>
> >> If you have several checkboxes, then it is a complex plugin with
> complex configuration ...
>
> Here we need a definition of s simple plugins, in the current
> release with simple plugins you can define some fields on the UI (not a
> single checkbox) and run several tasks if plugin is enabled.
>
Ok, we can define a simple plugin as a plugin which doesn't require
modification of generated YAML files at all. But with the proposed approach
there is no need to somehow separate "simple" and "complex" plugins.

>
> Thanks,
>
>>
> On Fri, Nov 28, 2014 at 7:01 PM, Vitaly Kramskikh  > wrote:
>
>> Evgeniy,
>>
>> Responses inline:
>>
>> 2014-11-28 18:31 GMT+03:00 Evgeniy L :
>>
>>> Hi Vitaly,
>>>
>>> I agree with you that conditions can be useful in case of complicated
>>> plugins, but
>>> at the same time in case of simple cases it adds a huge amount of
>>> complexity.
>>> I would like to avoid forcing user to know about any conditions if he
>>> wants
>>> to add several text fields on the UI.
>>>
>>> I have several reasons why we shouldn't do that:
>>> 1. conditions are described with yet another language with it's own
>>> syntax
>>>
>> Yes, but is already used in a few places. I want to notice once again -
>> even a simple LBaaS plugin with a single checkbox needed to utilize this
>> functionality.
>>
>>> 2. the language is not documented (solvable)
>>>
>> It is documented:
>> http://docs.mirantis.com/fuel-dev/develop/nailgun/customization/settings.html#expression-syntax
>>
>>> 3. complicated interface will lead to a lot of bugs for the end user,
>>> and it will be
>>> a Fuel team's problem
>>>
>> So, you're still calling this interface complicated. Ok, I'm looking
>> forward to seeing your proposal about dealing with complex plugins.
>>
>>> 4. in case of several checkboxes you'll have to write a huge conditions
>>> with
>>> a lot of "and" statements and it'll be really easy to forget about
>>> some of them
>>>
>> If you have several checkboxes, then it is a complex plugin with complex
>> configuration, so I see no problem here. There will be many more places
>> where you can "forget" stuff.
>>
>>>
>>> As result in simple cases plugin developer will have to specify the same
>>> condition of every task in tasks.yaml file, add it to metadata.yaml.
>>> If you add new checkbox, you should go through all of this files,
>>> add "and lbaas:new_checkbox_name" statement.
>>>
>> Once again, in simple cases checkbox and the conditions (one for task and
>> one for is_removable) can be easily pregenerated by FPB, so plugin
>> developer has to do nothing more. If you add a new checkbox which doesn't
>> affect plugin removeability and tasks, you have to change nothing in plugin
>> metadata.
>>
>>>
>>> Thanks,
>>>
>>> On Thu, Nov 27, 2014 at 7:57 PM, Vitaly Kramskikh <
>>> vkramsk...@mirantis.com> wrote:
>>>
 Folks,

 In the 6.0 release we'll support simple plugins for Fuel. The current
 architecture allows to create only very simple plugins and doesn't allow to
 "pluginize" complex features like Ceph, vCenter, etc. I'd like to propose
 some changes to make it possible. They are subtle enough and the plugin
 template still can be autogenerated by Fuel Plugin Builder. Here they are:


 https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf

1. environment_config.yaml should contain exact config which will
be mixed into cluster_attribu

Re: [openstack-dev] [Infra] Infra-manual documentation Sprint, December 1-2

2014-11-28 Thread Elizabeth K. Joseph
Hi everyone,

Just a quick reminder that this is coming up on Monday-Tuesday, join
us in channel to chat about what you're working on.

On Fri, Nov 7, 2014 at 5:57 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> The OpenStack Infrastructure team will be hosting a virtual sprint in
> the Freenode IRC channel #openstack-sprint for the Infrastructure User
> Manual on December 1st starting at 15:00 UTC and going for 48 hours.
>
> The goal of this sprint is to work on sections of the infra-manual
> which are still incomplete, review patches and note any style
> guidelines that still need to be addressed with the Documentation team
> (style guidelines here:
> https://wiki.openstack.org/wiki/Documentation/Conventions )
>
> Live version of the current documentation is available here:
>
> http://docs.openstack.org/infra/manual/
>
> The documentation itself lives in the openstack-infra/infra-manual 
> repository.
>
> http://git.openstack.org/cgit/openstack-infra/infra-manual/tree/
>
> --
> Elizabeth Krumbach Joseph || Lyz || pleia2



-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Evgeniy L
>> Yes, but is already used in a few places. I want to notice once again -
even a simple LBaaS plugin with a single checkbox needed to utilize this
functionality.

Yes, but you don't need to specify it in each task.

>> So, you're still calling this interface complicated. Ok, I'm looking
forward to seeing your proposal about dealing with complex plugins.

All my concerns were related to simple plugins and that we should
find a way not to force a plugin developer to do this copy-paste work.

>> If you have several checkboxes, then it is a complex plugin with complex
configuration ...

Here we need a definition of simple plugins; in the current
release, with simple plugins you can define some fields on the UI (not just a
single checkbox) and run several tasks if the plugin is enabled.

Thanks,

>
On Fri, Nov 28, 2014 at 7:01 PM, Vitaly Kramskikh 
wrote:

> Evgeniy,
>
> Responses inline:
>
> 2014-11-28 18:31 GMT+03:00 Evgeniy L :
>
>> Hi Vitaly,
>>
>> I agree with you that conditions can be useful in case of complicated
>> plugins, but
>> at the same time in case of simple cases it adds a huge amount of
>> complexity.
>> I would like to avoid forcing user to know about any conditions if he
>> wants
>> to add several text fields on the UI.
>>
>> I have several reasons why we shouldn't do that:
>> 1. conditions are described with yet another language with it's own syntax
>>
> Yes, but is already used in a few places. I want to notice once again -
> even a simple LBaaS plugin with a single checkbox needed to utilize this
> functionality.
>
>> 2. the language is not documented (solvable)
>>
> It is documented:
> http://docs.mirantis.com/fuel-dev/develop/nailgun/customization/settings.html#expression-syntax
>
>> 3. complicated interface will lead to a lot of bugs for the end user, and
>> it will be
>> a Fuel team's problem
>>
> So, you're still calling this interface complicated. Ok, I'm looking
> forward to seeing your proposal about dealing with complex plugins.
>
>> 4. in case of several checkboxes you'll have to write a huge conditions
>> with
>> a lot of "and" statements and it'll be really easy to forget about
>> some of them
>>
> If you have several checkboxes, then it is a complex plugin with complex
> configuration, so I see no problem here. There will be many more places
> where you can "forget" stuff.
>
>>
>> As result in simple cases plugin developer will have to specify the same
>> condition of every task in tasks.yaml file, add it to metadata.yaml.
>> If you add new checkbox, you should go through all of this files,
>> add "and lbaas:new_checkbox_name" statement.
>>
> Once again, in simple cases checkbox and the conditions (one for task and
> one for is_removable) can be easily pregenerated by FPB, so plugin
> developer has to do nothing more. If you add a new checkbox which doesn't
> affect plugin removeability and tasks, you have to change nothing in plugin
> metadata.
>
>>
>> Thanks,
>>
>> On Thu, Nov 27, 2014 at 7:57 PM, Vitaly Kramskikh <
>> vkramsk...@mirantis.com> wrote:
>>
>>> Folks,
>>>
>>> In the 6.0 release we'll support simple plugins for Fuel. The current
>>> architecture allows to create only very simple plugins and doesn't allow to
>>> "pluginize" complex features like Ceph, vCenter, etc. I'd like to propose
>>> some changes to make it possible. They are subtle enough and the plugin
>>> template still can be autogenerated by Fuel Plugin Builder. Here they are:
>>>
>>>
>>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf
>>>
>>>1. environment_config.yaml should contain exact config which will be
>>>mixed into cluster_attributes. No need to implicitly generate any 
>>> controls
>>>like it is done now.
>>>2. metadata.yaml should also contain "is_removable" field. This
>>>field is needed to determine whether it is possible to remove installed
>>>plugin. It is impossible to remove plugins in the current implementation.
>>>This field should contain an expression written in our DSL which we 
>>> already
>>>use in a few places. The LBaaS plugin also uses it to hide the checkbox 
>>> if
>>>Neutron is not used, so even simple plugins like this need to utilize it.
>>>This field can also be autogenerated, for more complex plugins plugin
>>>writer needs to fix it manually. For example, for Ceph it could look like
>>>"settings:storage.volumes_ceph.value == false and
>>>settings:storage.images_ceph.value == false".
>>>3. For every task in tasks.yaml there should be added new
>>>"condition" field with an expression which determines whether the task
>>>should be run. In the current implementation tasks are always run for
>>>specified roles. For example, vCenter plugin can have a few tasks with
>>>conditions like "settings:common.libvirt_type.value == 'vcenter'" or
>>>"settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar
>>>approach will be used in implementation of Gra

Re: [openstack-dev] [All] Proposal to add examples/usecase as part of new features / cli / functionality patches

2014-11-28 Thread Steve Gordon
- Original Message -
> From: "Deepak Shetty" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> But don't *-specs come very early in the process, when you have an
> idea/proposal of a feature but don't have it implemented yet? Hence specs
> just end up with paragraphs on how the feature is supposed to work, but don't
> include any real-world screen shots, as the code is not yet ready at that
> point in time. Along with the patch it would make more sense, since the author
> would have tested it, so it isn't a big overhead to capture those CLI screen
> shots and put them in a .txt or .md file so that patch reviewers can see the
> patch in action and hence can review more effectively.
> 
> thanx,
> deepak

Sure but in the original email you listed a number of other items, not just CLI 
screen shots, including:

> >> > 1) What changes are needed in manila.conf to make this work
> >> > 2) How to use the cli with this change incorporated
> >> > 3) Some screen shots of actual usage 
> >> > 4) Any caution/caveats that one has to keep in mind while using this

Ideally I see 1, 2, and 4 as things that should be added to the spec 
(retrospectively if necessary) to ensure that it maintains an accurate record 
of the feature. I can see potential benefits to including listings of real 
world usage (3) in the client projects, but I don't think all of the items 
listed belong there.

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [cinder backend options] Propagate Cinder backend config information to Heat

2014-11-28 Thread Pradip Mukhopadhyay
Thanks Qiming & Pavlo. We had looked into the v2 Cinder API listings:
http://developer.openstack.org/api-ref-blockstorage-v2.html. However, most
likely (or maybe we missed to note it), none of the APIs peeks into/exposes
Cinder's backend configuration (the info we were looking for). So we
were curious whether, *by design*, it is not expected to expose that through
Cinder, or whether a Cinder API could potentially be written to bridge the gap.

@Sirushti- that's interesting. Shall peep into it.
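
For what it's worth, what the original question amounts to is reading the
backend section straight out of cinder.conf, roughly as in the sketch below
(the file path and section name are the example values from the quoted
message). As Qiming and Pavlo point out, Heat may not even run on a host where
that file exists, which is why going through an API is the recommended route:

    try:
        from configparser import ConfigParser      # Python 3
    except ImportError:
        from ConfigParser import ConfigParser      # Python 2

    # Example values from the [myNFSBackend] section quoted below; the path is
    # the conventional location and may differ on a given deployment.
    CINDER_CONF = "/etc/cinder/cinder.conf"
    BACKEND = "myNFSBackend"

    parser = ConfigParser()
    parser.read(CINDER_CONF)

    # Only works on a host that actually has cinder.conf -- which a Heat
    # engine (especially in standalone mode) generally does not.
    hostname = parser.get(BACKEND, "netapp_server_hostname")
    login = parser.get(BACKEND, "netapp_login")
    password = parser.get(BACKEND, "netapp_password")
    print("%s %s %s" % (hostname, login, password))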


Thanks.
Pradip



On Fri, Nov 28, 2014 at 7:46 PM, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> That's true. Heat's job is mainly to call other OpenStack APIs in the correct
> order to achieve the desired combination of infrastructure resources.
> Physically though, it may run on a completely different host where these
> files are not present, even a host that is outside of the
> datacenter where OpenStack is deployed (so-called Heat standalone mode).
> The only info Heat knows about other OpenStack services is what Heat can
> get through their API.
>
> Pavlo Shchelokovskyy
> Software Engineer
> Mirantis Inc
> www.mirantis.com
>
> On Fri, Nov 28, 2014 at 3:15 PM, Qiming Teng 
> wrote:
>
>> The first thing you may want to check is the Cinder API.  If I'm
>> understanding this correctly, Heat only interact with other OpenStack
>> services via their APIs.  It is not supposed to peek into their
>> internals.
>>
>> Regards,
>>   - Qiming
>>
>> On Fri, Nov 28, 2014 at 06:19:56PM +0530, Pradip Mukhopadhyay wrote:
>> > Hello,
>> >
>> >
>> > Suppose we have a cinder backend in local.conf | cinder.conf as :
>> >
>> >
>> > [myNFSBackend]
>> > nfs_mount_options = nfsvers=3
>> > volume_backend_name = myNFSBackend
>> > volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
>> > netapp_server_hostname = IP
>> > netapp_server_port = 80
>> > netapp_storage_protocol = nfs
>> > netapp_storage_family = ontap_cluster
>> > netapp_login = admin
>> > netapp_password = password
>> > netapp_vserver = vserver_name
>> > nfs_shares_config = /opt/stack/nfs.shares
>> >
>> >
>> > We would like to access some of such cinder backend configuration
>> > information from Heat. More specifically from custom resource inside the
>> > Heat (e.g. access the netapp_server_hostname, netapp_login,
>> netapp_password
>> > etc. when defining a custom resource class extending the base Resource
>> > class). The purpose is to facilitate some (soap) service call to the
>> > backend storage from custom resource definitions.
>> >
>> >
>> > What is the best pattern/mechanism available? Any pointers to code/doc
>> will
>> > be highly appreciated.
>> >
>> >
>> > Does any database table holds the local.conf (or service specific conf)
>> > information?
>> >
>> >
>> >
>> > Thanks,
>> > Pradip
>>
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Nailgun] Unit tests improvement meeting minutes

2014-11-28 Thread Ivan Kliuk

Hi, team!

Let me please present ideas collected during the unit tests improvement 
meeting:

1) Rename class ``Environment`` to something more descriptive
2) Remove hardcoded self.clusters[0], etc., from ``Environment``. Let's 
use parameters instead
3) run_tests.sh should invoke alternate syncdb() for cases where we 
don't need to test migration procedure, i.e. create_db_schema()
4) Consider usage of custom fixture provider. The main functionality 
should combine loading from YAML/JSON source and support fixture inheritance

5) The project needs a document (policy) which describes:
- Tests creation technique;
- Test categorization (integration/unit) and approaches of testing 
different code base

-
6) Review the tests and refactor unit tests as described in the test policy
7) Mimic Nailgun module structure in unit tests
8) Explore Swagger tool 

--
Sincerely yours,
Ivan Kliuk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Vitaly Kramskikh
Evgeniy,

Responses inline:

2014-11-28 18:31 GMT+03:00 Evgeniy L :

> Hi Vitaly,
>
> I agree with you that conditions can be useful in case of complicated
> plugins, but
> at the same time in case of simple cases it adds a huge amount of
> complexity.
> I would like to avoid forcing user to know about any conditions if he wants
> to add several text fields on the UI.
>
> I have several reasons why we shouldn't do that:
> 1. conditions are described with yet another language with it's own syntax
>
Yes, but it is already used in a few places. I want to note once again -
even a simple LBaaS plugin with a single checkbox needed to utilize this
functionality.

> 2. the language is not documented (solvable)
>
It is documented:
http://docs.mirantis.com/fuel-dev/develop/nailgun/customization/settings.html#expression-syntax

> 3. complicated interface will lead to a lot of bugs for the end user, and
> it will be
> a Fuel team's problem
>
So, you're still calling this interface complicated. Ok, I'm looking
forward to seeing your proposal about dealing with complex plugins.

> 4. in case of several checkboxes you'll have to write a huge conditions
> with
> a lot of "and" statements and it'll be really easy to forget about
> some of them
>
If you have several checkboxes, then it is a complex plugin with complex
configuration, so I see no problem here. There will be many more places
where you can "forget" stuff.

>
> As result in simple cases plugin developer will have to specify the same
> condition of every task in tasks.yaml file, add it to metadata.yaml.
> If you add new checkbox, you should go through all of this files,
> add "and lbaas:new_checkbox_name" statement.
>
Once again, in simple cases the checkbox and the conditions (one for the task and
one for is_removable) can be easily pregenerated by FPB, so the plugin
developer has to do nothing more. If you add a new checkbox which doesn't
affect plugin removability and tasks, you have to change nothing in the plugin
metadata.
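
To give a feel for what expressions like
"settings:storage.volumes_ceph.value == false" refer to, here is a small,
self-contained sketch of evaluating such a condition against a cluster
settings dictionary. It is an illustration of the idea only, not Fuel's actual
expression parser (the real DSL, documented at the link above, is richer):

    # Toy evaluator for conditions of the form
    #   "settings:<path.to.key> == <true|false|'string'>"

    def resolve(settings, path):
        """Resolve 'storage.volumes_ceph.value' inside the settings dict."""
        node = settings
        for part in path.split('.'):
            node = node[part]
        return node

    def evaluate(condition, settings):
        left, right = [s.strip() for s in condition.split('==', 1)]
        assert left.startswith('settings:')
        actual = resolve(settings, left[len('settings:'):])
        expected = {'true': True, 'false': False}.get(right, right.strip("'"))
        return actual == expected

    settings = {
        'storage': {
            'volumes_ceph': {'value': False},
            'images_ceph': {'value': False},
        },
    }

    # Both halves of the Ceph "is_removable" example hold, so the plugin
    # would be considered removable.
    print(evaluate("settings:storage.volumes_ceph.value == false", settings))
    print(evaluate("settings:storage.images_ceph.value == false", settings))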

>
> Thanks,
>
> On Thu, Nov 27, 2014 at 7:57 PM, Vitaly Kramskikh  > wrote:
>
>> Folks,
>>
>> In the 6.0 release we'll support simple plugins for Fuel. The current
>> architecture allows to create only very simple plugins and doesn't allow to
>> "pluginize" complex features like Ceph, vCenter, etc. I'd like to propose
>> some changes to make it possible. They are subtle enough and the plugin
>> template still can be autogenerated by Fuel Plugin Builder. Here they are:
>>
>>
>> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf
>>
>>1. environment_config.yaml should contain exact config which will be
>>mixed into cluster_attributes. No need to implicitly generate any controls
>>like it is done now.
>>2. metadata.yaml should also contain "is_removable" field. This field
>>is needed to determine whether it is possible to remove installed plugin.
>>It is impossible to remove plugins in the current implementation.
>>This field should contain an expression written in our DSL which we 
>> already
>>use in a few places. The LBaaS plugin also uses it to hide the checkbox if
>>Neutron is not used, so even simple plugins like this need to utilize it.
>>This field can also be autogenerated, for more complex plugins plugin
>>writer needs to fix it manually. For example, for Ceph it could look like
>>"settings:storage.volumes_ceph.value == false and
>>settings:storage.images_ceph.value == false".
>>3. For every task in tasks.yaml there should be added new "condition"
>>field with an expression which determines whether the task should be run.
>>In the current implementation tasks are always run for specified roles. 
>> For
>>example, vCenter plugin can have a few tasks with conditions like
>>"settings:common.libvirt_type.value == 'vcenter'" or
>>"settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar
>>approach will be used in implementation of Granular Deployment feature.
>>
>> These simple changes will allow to write much more complex plugins. What
>> do you think?
>> --
>> Vitaly Kramskikh,
>> Software Engineer,
>> Mirantis, Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reply: [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-28 Thread Rossella Sblendido
On 11/27/2014 12:21 PM, marios wrote:
> Hi, so far we have this going
> https://etherpad.openstack.org/p/restructure-l2-agent

I finally pushed a design spec based on the etherpad above,
https://review.openstack.org/#/c/137808/ .
Anybody interested please comment on the review.

cheers,

Rossella

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-28 Thread Erik Moe

Hi,

What is the status of this?

It looks like the simplistic approach might not be that far from flow
synchronization. Both methods need to reinitialize internal structures so that
they match the deployed configuration. For example, provision_local_vlan picks a
free VLAN; this has to be the same one after a restart.

Are you trying to also support an upgrade use case, not only agent restart?

/Erik


From: Damon Wang [mailto:damon.dev...@gmail.com]
Sent: den 7 november 2014 11:27
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi all,
Let me introduce our experiment's results:
First we wrote a patch: https://review.openstack.org/#/c/131791/, and tried to
use it in an experimental environment.
Bad things happened:
1. Note that this is the old flows (Network node's br-tun, the previous version 
is about icehouse):
"cookie=0x0, duration=238379.566s, table=1, n_packets=373521, n_bytes=26981817, 
idle_age=0, hard_age=65534, 
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
"cookie=0x0, duration=238379.575s, table=1, n_packets=30101, n_bytes=3603857, 
idle_age=198, hard_age=65534, 
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
"cookie=0x0, duration=238379.530s, table=20, n_packets=4957, n_bytes=631543, 
idle_age=198, hard_age=65534, priority=0 actions=resubmit(,21)"
If the packet is a broadcast packet, we will resubmit it to table 20, and table 
20 will do nothing but resubmit to table 21.
the full sequence is:
from vxlan ports?: table 0 -> table 3 -> table 10 (learn flows and insert to 
table 20)
from br-int?: table 0 -> table 1 -> (table 20) -> table 21

In the new version (about to juno), we discard table 1, use table 2 instead:
"cookie=0x0, duration=142084.354s, table=2, n_packets=175823, n_bytes=12323286, 
idle_age=0, hard_age=65534, 
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
"cookie=0x0, duration=142084.364s, table=2, n_packets=861601, 
n_bytes=107499857, idle_age=0, hard_age=65534, 
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)"
But if we haven't removed all the old flows, table 1 will still exist, and it will
intercept packets and try to submit them to tables 21 and 20, whereas the
correct tables are 22 and 20.
the full sequence is:
from vxlan ports?: table 0 -> table 4 -> table 10
from br-int?: table 0 -> table 2 -> (table 20, maybe output then!) -> table 22
Let's imagine we mix these up: because table 0's flows have priority 1, we
can't make sure packets will go to the right flows, so some packets may be
submitted to table 21, which is quite beyond the pale!
2. What's more, let's imagine we use both vxlan and vlan as providers:
+---------------------+    +---------------+
|      namespace      |    |   namespace   |
|  +------+ +------+  |    |   +-------+   |
|  | qg-x | | qr-x |  |    |   |  tap  |   |
|  +--+---+ +---+--+  |    |   +---+---+   |
+-----+---------+-----+    +-------+-------+
      |         |                  |
+-----+---------+------------------+-------+
|                  br-int                  |
+-----+------------------------------+-----+
      |                              |
+-----+---------+               +----+--------------+
|  ovs-br vlan  |               |  br-tun (vxlan)   |
+-----+---------+               +-------------------+
      |
+-----+-------------------+
|  eth0 (ethernet card)   |
+-------------------------+

Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-28 Thread Evgeniy L
Hi Vitaly,

I agree with you that conditions can be useful in the case of complicated
plugins, but at the same time, for simple cases they add a huge amount of
complexity.
I would like to avoid forcing users to know about any conditions if all they
want is to add several text fields to the UI.

I have several reasons why we shouldn't do that:
1. conditions are described with yet another language with its own syntax
2. the language is not documented (solvable)
3. a complicated interface will lead to a lot of bugs for the end user, and
it will be a Fuel team's problem
4. in the case of several checkboxes you'll have to write a huge condition with
a lot of "and" statements, and it'll be really easy to forget some of them

As a result, even in simple cases the plugin developer will have to specify the
same condition for every task in the tasks.yaml file and add it to
metadata.yaml. If you add a new checkbox, you have to go through all of these
files and add an "and lbaas:new_checkbox_name" statement.

Thanks,

On Thu, Nov 27, 2014 at 7:57 PM, Vitaly Kramskikh 
wrote:

> Folks,
>
> In the 6.0 release we'll support simple plugins for Fuel. The current
> architecture allows creating only very simple plugins and doesn't allow
> "pluginizing" complex features like Ceph, vCenter, etc. I'd like to propose
> some changes to make it possible. They are subtle enough and the plugin
> template still can be autogenerated by Fuel Plugin Builder. Here they are:
>
>
> https://github.com/vkramskikh/fuel-plugins/commit/1ddb166731fc4bf614f502b276eb136687cb20cf
>
>1. environment_config.yaml should contain exact config which will be
>mixed into cluster_attributes. No need to implicitly generate any controls
>like it is done now.
>2. metadata.yaml should also contain "is_removable" field. This field
>is needed to determine whether it is possible to remove installed plugin.
>It is impossible to remove plugins in the current implementation. This
>field should contain an expression written in our DSL which we already use
>in a few places. The LBaaS plugin also uses it to hide the checkbox if
>Neutron is not used, so even simple plugins like this need to utilize it.
>This field can also be autogenerated, for more complex plugins plugin
>writer needs to fix it manually. For example, for Ceph it could look like
>"settings:storage.volumes_ceph.value == false and
>settings:storage.images_ceph.value == false".
>3. For every task in tasks.yaml there should be added new "condition"
>field with an expression which determines whether the task should be run.
>In the current implementation tasks are always run for specified roles. For
>example, vCenter plugin can have a few tasks with conditions like
>"settings:common.libvirt_type.value == 'vcenter'" or
>"settings:storage.volumes_vmdk.value == true". Also, AFAIU, similar
>approach will be used in implementation of Granular Deployment feature.
>
> These simple changes will allow writing much more complex plugins. What
> do you think?
> --
> Vitaly Kramskikh,
> Software Engineer,
> Mirantis, Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-28 Thread Randall Burt
Per our discussion in Paris, I'm partial to Option B. I think a separate API 
endpoint is a lower priority at this point compared to cleaning up and 
normalizing the autoscale code on the back-end. Once we've refactored the 
engine code and solidified the RPC interface, it would be trivial to add an API 
on top of it. Additionally, we could even keep the privileged RPC interface for 
the Heat AS resources (assuming they stick around in some form) as an option 
for deployers. While certainly disruptive, I think we can handle this in small
and/or isolated enough changes that reviews shouldn't be too difficult,
especially if it's possible to take the existing code largely unchanged at first
and wrap an RPC abstraction around it.
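As a rough illustration of what wrapping an RPC abstraction around the existing
code could look like (the topic, method names and signatures below are
hypothetical, not an agreed interface), the AS resources would talk to a thin
client while the group logic itself stays put for now:

# Hypothetical sketch only.  "rpc_client" is any oslo.messaging
# RPCClient-like object exposing call(ctxt, method, **kwargs).

class AutoscalingClient(object):
    """What the AS resources would call instead of engine internals."""

    BASE_RPC_API_VERSION = '1.0'

    def __init__(self, rpc_client):
        self._client = rpc_client

    def create_group(self, ctxt, group_def):
        return self._client.call(ctxt, 'create_group', group_def=group_def)

    def adjust(self, ctxt, group_id, adjustment,
               adjustment_type='change_in_capacity'):
        return self._client.call(ctxt, 'adjust', group_id=group_id,
                                 adjustment=adjustment,
                                 adjustment_type=adjustment_type)

    def execute_webhook(self, ctxt, webhook_id):
        return self._client.call(ctxt, 'execute_webhook',
                                 webhook_id=webhook_id)

Once such a boundary exists, putting a REST API in front of the same RPC later
on (Option B, step 4) would not require touching the resources again.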

On Nov 28, 2014, at 1:33 AM, Qiming Teng 
 wrote:

> Dear all,
> 
> Auto-Scaling is an important feature supported by Heat and needed by
> many users we talked to.  There are two flavors of AutoScalingGroup
> resources in Heat today: the AWS-based one and the Heat native one.  As
> more requests coming in, the team has proposed to separate auto-scaling
> support into a separate service so that people who are interested in it
> can jump onto it.  At the same time, Heat engine (especially the resource
> type code) will be drastically simplified.  The separated AS service
> could move forward more rapidly and efficiently.
> 
> This work was proposed a while ago with the following wiki and
> blueprints (mostly approved during Havana cycle), but the progress is
> slow.  A group of developers now volunteer to take over this work and
> move it forward.
> 
> wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
> BPs:
> - https://blueprints.launchpad.net/heat/+spec/as-lib-db
> - https://blueprints.launchpad.net/heat/+spec/as-lib
> - https://blueprints.launchpad.net/heat/+spec/as-engine-db
> - https://blueprints.launchpad.net/heat/+spec/as-engine
> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
> - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
> - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
> - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource
> - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
> 
> Once this whole thing lands, Heat engine will talk to the AS engine in
> terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
> how auto-scaling is implemented although the AS engine may in turn ask
> Heat to create/update stacks for scaling's purpose.  In theory, AS
> engine can create/destroy resources by directly invoking other OpenStack
> services.  This new AutoScaling service may eventually have its own DB,
> engine, API, api-client.  We can definitely aim high while work hard on
> real code.
> 
> After reviewing the BPs/Wiki and some communication, we get two options
> to push forward this.  I'm writing this to solicit ideas and comments
> from the community.
> 
> Option A: Top-Down Quick Split
> --
> 
> This means we will follow a roadmap shown below, which is not 100% 
> accurate yet and very rough:
> 
>  1) Get the separated REST service in place and working
>  2) Switch Heat resources to use the new REST service
> 
> Pros:
>  - Separate code base means faster review/commit cycle
>  - Less code churn in Heat
> Cons:
>  - A new service need to be installed/configured/launched
>  - Need commitments from dedicated, experienced developers from very
>beginning
> 
> Option B: Bottom-Up Slow Growth
> ---
> 
> The roadmap is more conservative, with many (yes, many) incremental
> patches to migrate things carefully.
> 
>  1) Separate some of the autoscaling logic into libraries in Heat
>  2) Augment heat-engine with new AS RPCs
>  3) Switch AS related resource types to use the new RPCs
>  4) Add new REST service that also talks to the same RPC
> (create new GIT repo, API endpoint and client lib...) 
> 
> Pros:
>  - Less risk breaking user lands with each revision well tested
>  - More smooth transition for users in terms of upgrades
> 
> Cons:
>  - A lot of churn within Heat code base, which means long review cycles
>  - Still need commitments from cores to supervise the whole process
> 
> There could be option C, D... but the two above are what we came up with
> during the discussion.
> 
> Another important thing we talked about is about the open discussion on
> this.  OpenStack Wiki seems a good place to document settled designs but
> not for interactive discussions.  Probably we should leverage etherpad
> and the mailinglist when moving forward.  Suggestions on this are also
> welcomed.
> 
> Thanks.
> 
> Regards,
> Qiming
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_

Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-28 Thread Jastrzebski, Michal


> -Original Message-
> From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com]
> Sent: Friday, November 28, 2014 8:33 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Heat] Rework auto-scaling support in Heat
> 
> Dear all,
> 
> Auto-Scaling is an important feature supported by Heat and needed by many
> users we talked to.  There are two flavors of AutoScalingGroup resources in
> Heat today: the AWS-based one and the Heat native one.  As more requests
> coming in, the team has proposed to separate auto-scaling support into a
> separate service so that people who are interested in it can jump onto it.  At
> the same time, Heat engine (especially the resource type code) will be
> drastically simplified.  The separated AS service could move forward more
> rapidly and efficiently.
> 
> This work was proposed a while ago with the following wiki and blueprints
> (mostly approved during Havana cycle), but the progress is slow.  A group of
> developers now volunteer to take over this work and move it forward.
> 
> wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
> BPs:
>  - https://blueprints.launchpad.net/heat/+spec/as-lib-db
>  - https://blueprints.launchpad.net/heat/+spec/as-lib
>  - https://blueprints.launchpad.net/heat/+spec/as-engine-db
>  - https://blueprints.launchpad.net/heat/+spec/as-engine
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
>  - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
>  - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
>  - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-
> resource
>  - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
> 
> Once this whole thing lands, Heat engine will talk to the AS engine in terms 
> of
> ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care how auto-
> scaling is implemented although the AS engine may in turn ask Heat to
> create/update stacks for scaling's purpose.  In theory, AS engine can
> create/destroy resources by directly invoking other OpenStack services.  This
> new AutoScaling service may eventually have its own DB, engine, API, api-
> client.  We can definitely aim high while work hard on real code.
> 
> After reviewing the BPs/Wiki and some communication, we get two options
> to push forward this.  I'm writing this to solicit ideas and comments from the
> community.
> 
> Option A: Top-Down Quick Split
> --

Do you want to drop support for AS from Heat altogether? Many people would
disagree with dropping AS (even dropping HARestarter is a problem). We don't
really want to support duplicate systems, so having two autoscaling engines
would be wrong.
That being said, I can see a big gap which Heat (or services around it) could
fill - intelligent orchestration. By that I mean autohealing, auto-redeploying,
autoscaling and pretty much auto-whatever. Clouds are fluid; we could provide a
framework for that. Heat would be a great tool for it because it has the
context of the whole stack, and in fact all we would do is a stack update.

> This means we will follow a roadmap shown below, which is not 100%
> accurate yet and very rough:
> 
>   1) Get the separated REST service in place and working
>   2) Switch Heat resources to use the new REST service
> 
> Pros:
>   - Separate code base means faster review/commit cycle
>   - Less code churn in Heat
> Cons:
>   - A new service need to be installed/configured/launched
>   - Need commitments from dedicated, experienced developers from very
> beginning
> 
> Option B: Bottom-Up Slow Growth
> ---

Personally I'd be an advocate of fixing what we have instead of making a new
thing. Maybe we should make it a separate process (as long as we try to keep a
consistent API)? Maybe add a place for new logic (autohealing?), but still keep
that inside Heat.
One thing - we'll need to make concurrent updates really robust when we want to
make the whole thing automatic (I'm talking about convergence).

> The roadmap is more conservative, with many (yes, many) incremental
> patches to migrate things carefully.
> 
>   1) Separate some of the autoscaling logic into libraries in Heat
>   2) Augment heat-engine with new AS RPCs
>   3) Switch AS related resource types to use the new RPCs
>   4) Add new REST service that also talks to the same RPC
>  (create new GIT repo, API endpoint and client lib...)
> 
> Pros:
>   - Less risk breaking user lands with each revision well tested
>   - More smooth transition for users in terms of upgrades
> 
> Cons:
>   - A lot of churn within Heat code base, which means long review cycles
>   - Still need commitments from cores to supervise the whole process
> 
> There could be option C, D... but the two above are what we came up with
> during the discussion.
> 
> Another important thing we talked about is about the open discussion o

Re: [openstack-dev] [heat] [cinder backend options] Propagate Cinder backend config information to Heat

2014-11-28 Thread Pavlo Shchelokovskyy
That's true. Heat's job is mainly to call other OpenStack APIs in the correct
order to achieve the desired combination of infrastructure resources.
Physically, though, it may run on a completely different host where these
files are not present, even a host that is outside of the datacenter where
OpenStack is deployed (so-called Heat standalone mode). The only info Heat
knows about other OpenStack services is what Heat can get through their APIs.
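For example, the most a custom resource can see of Cinder is whatever the
Cinder API exposes, reached through a client. A minimal sketch (credentials
and auth URL are placeholders for the deployment's own values):

# Minimal sketch: from wherever Heat (or a custom resource plug-in) runs,
# the only view it has of Cinder is the API.
from cinderclient import client as cinder_client

cinder = cinder_client.Client('2', 'admin', 'password',
                              'admin', 'http://keystone:5000/v2.0')

# Volume types and their extra specs (e.g. volume_backend_name) are visible
# through the API; cinder.conf options such as netapp_password are not.
for vtype in cinder.volume_types.list():
    print(vtype.name, vtype.get_keys())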

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Fri, Nov 28, 2014 at 3:15 PM, Qiming Teng 
wrote:

> The first thing you may want to check is the Cinder API.  If I'm
> understanding this correctly, Heat only interacts with other OpenStack
> services via their APIs.  It is not supposed to peek into their
> internals.
>
> Regards,
>   - Qiming
>
> On Fri, Nov 28, 2014 at 06:19:56PM +0530, Pradip Mukhopadhyay wrote:
> > Hello,
> >
> >
> > Suppose we have a cinder backend in local.conf | cinder.conf as :
> >
> >
> > [myNFSBackend]
> > nfs_mount_options = nfsvers=3
> > volume_backend_name = myNFSBackend
> > volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> > netapp_server_hostname = IP
> > netapp_server_port = 80
> > netapp_storage_protocol = nfs
> > netapp_storage_family = ontap_cluster
> > netapp_login = admin
> > netapp_password = password
> > netapp_vserver = vserver_name
> > nfs_shares_config = /opt/stack/nfs.shares
> >
> >
> > We would like to access some of such cinder backend configuration
> > information from Heat. More specifically from custom resource inside the
> > Heat (e.g. access the netapp_server_hostname, netapp_login,
> netapp_password
> > etc. when defining a custom resource class extending the base Resource
> > class). The purpose is to facilitate some (soap) service call to the
> > backend storage from custom resource definitions.
> >
> >
> > What is the best pattern/mechanism available? Any pointers to code/doc
> will
> > be highly appreciated.
> >
> >
> > Does any database table hold the local.conf (or service-specific conf)
> > information?
> >
> >
> >
> > Thanks,
> > Pradip
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [cinder backend options] Propagate Cinder backend config information to Heat

2014-11-28 Thread Murugesan, Sirushti
One option would be to use a secret management service like Barbican [1] to
store those credentials/secrets and use it whenever you want to make a SOAP API
call.

There are also Barbican resources available in the contrib section of the Heat
repository, which could be used and referenced from your custom resource's
properties.

[1] https://wiki.openstack.org/wiki/Barbican
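A very rough sketch of that flow with python-keystoneclient and
python-barbicanclient is below; the exact client constructors and secret API
details vary by client version, and all names/URLs are placeholders:

# Illustrative only: store the backend credentials in Barbican once, then
# fetch them by reference at runtime instead of reading cinder.conf.
from barbicanclient import client as barbican_client
from keystoneclient.auth.identity import v2
from keystoneclient import session as ks_session

auth = v2.Password(auth_url='http://keystone:5000/v2.0',
                   username='admin', password='secret', tenant_name='admin')
sess = ks_session.Session(auth=auth)
barbican = barbican_client.Client(session=sess)

secret = barbican.secrets.create(name='netapp-admin-password',
                                 payload='password')
ref = secret.store()  # returns the secret reference (a URL)

# Later, e.g. inside the custom resource:
netapp_password = barbican.secrets.get(ref).payload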

Regards,
Sirushti Murugesan

From: Qiming Teng [teng...@linux.vnet.ibm.com]
Sent: Friday, November 28, 2014 6:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] [cinder backend options] Propagate Cinder 
backend config information to Heat

The first thing you may want to check is the Cinder API.  If I'm
understanding this correctly, Heat only interacts with other OpenStack
services via their APIs.  It is not supposed to peek into their
internals.

Regards,
  - Qiming

On Fri, Nov 28, 2014 at 06:19:56PM +0530, Pradip Mukhopadhyay wrote:
> Hello,
>
>
> Suppose we have a cinder backend in local.conf | cinder.conf as :
>
>
> [myNFSBackend]
> nfs_mount_options = nfsvers=3
> volume_backend_name = myNFSBackend
> volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> netapp_server_hostname = IP
> netapp_server_port = 80
> netapp_storage_protocol = nfs
> netapp_storage_family = ontap_cluster
> netapp_login = admin
> netapp_password = password
> netapp_vserver = vserver_name
> nfs_shares_config = /opt/stack/nfs.shares
>
>
> We would like to access some of such cinder backend configuration
> information from Heat. More specifically from custom resource inside the
> Heat (e.g. access the netapp_server_hostname, netapp_login, netapp_password
> etc. when defining a custom resource class extending the base Resource
> class). The purpose is to facilitate some (soap) service call to the
> backend storage from custom resource definitions.
>
>
> What is the best pattern/mechanism available? Any pointers to code/doc will
> be highly appreciated.
>
>
> Does any database table hold the local.conf (or service-specific conf)
> information?
>
>
>
> Thanks,
> Pradip

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-28 Thread Lucas Alvares Gomes
Hi,

Thanks for putting it up, Dmitry. I think the idea is fine too. I understand
that people may want to use in-band discovery for drivers like iLO or DRAC,
and having those on a separate interface allows us to compose a driver to
do it (which is your use case 2).

So, +1.

Lucas

On Wed, Nov 26, 2014 at 3:45 PM, Imre Farkas  wrote:

> On 11/26/2014 02:20 PM, Dmitry Tantsur wrote:
>
>> Hi all!
>>
>> As our state machine and discovery discussion proceeds, I'd like to ask
>> your opinion on whether we need an IntrospectionInterface
>> (DiscoveryInterface?). Current proposal [1] suggests adding a method for
>> initiating a discovery to the ManagementInterface. IMO it's not 100%
>> correct, because:
>> 1. It's not management. We're not changing anything.
>> 2. I'm aware that some folks want to use discoverd-based discovery [2]
>> even for DRAC and ILO (e.g. for vendor-specific additions that can't be
>> implemented OOB).
>>
>> Any ideas?
>>
>> Dmitry.
>>
>> [1] https://review.openstack.org/#/c/100951/
>> [2] https://review.openstack.org/#/c/135605/
>>
>>
> Hi Dmitry,
>
> I see the value in using the composability of our driver interfaces, so I
> vote for having a separate IntrospectionInterface. Otherwise we wouldn't
> allow users to use e.g. the DRAC driver with in-band but more powerful
> hardware discovery.
>
> Imre
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Handling soft delete for instance rows in a new cells database

2014-11-28 Thread Jay Pipes

On 11/27/2014 04:20 PM, Michael Still wrote:

On Fri, Nov 28, 2014 at 2:59 AM, Jay Pipes  wrote:

On 11/26/2014 04:24 PM, Mike Bayer wrote:


Precisely. Why is the RDBMS the thing that is used for
archival/audit logging? Why not a NoSQL store or a centralized log
facility? All that would be needed would be for us to standardize
on the format of the archival record, standardize on the things to
provide with the archival record (for instance system metadata,
etc), and then write a simple module that would write an archival
record to some backend data store.

Then we could rid ourselves of the awfulness of the shadow tables
and all of the read_deleted=yes crap.




+1000 - if we’re really looking to “do this right”, as the original
message suggested, this would be “right”.  If you don’t need these
rows in the app (and it would be very nice if you didn’t), dump it
out to an archive file / non-relational datastore.   As mentioned
elsewhere, this is entirely acceptable for organizations that are
“obliged” to store records for auditing purposes.   Nova even already
has a dictionary format for everything set up with nova objects, so
dumping these dictionaries out as JSON would be the way to go.
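The mechanics of that "simple module" could be as small as the sketch below;
the record fields and the file backend are placeholders, not a proposed
standard:

# Sketch only: serialize a standardized archival record and hand it to a
# pluggable backend.  Field names and the file backend are placeholders.
import datetime
import json


def build_archival_record(obj_type, obj_dict, system_metadata=None):
    return {
        'type': obj_type,                                   # e.g. 'instance'
        'archived_at': datetime.datetime.utcnow().isoformat(),
        'data': obj_dict,                                   # object as a dict
        'system_metadata': system_metadata or {},
    }


class FileArchiveBackend(object):
    """Append-only JSON-lines archive; a real deployment might point this
    at a log pipeline or a NoSQL store instead."""

    def __init__(self, path):
        self._path = path

    def write(self, record):
        with open(self._path, 'a') as f:
            f.write(json.dumps(record, default=str) + '\n')

Usage would then be a single call at soft-delete time, with the backend chosen
by configuration.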



OK, spec added:

https://review.openstack.org/137669


At this point I don't think we should block the cells reworking effort
on this spec. I'm happy for people to pursue this, but I think its
unlikely to be work that is completed in kilo. We can transition the
new cells databases at the same time we fix the main database.


No disagreement at all. The proposed spec is a monster one, and we can 
certainly make a lot of progress in Kilo, but I wouldn't expect it to be 
completed any time soon.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] About deployment progress calculation

2014-11-28 Thread Evgeniy L
Hi Dmitry,

I totally agree that the current approach won't work (and doesn't work
well).

I have several comments:

>> Each task will provide estimated time

1. Each task has a timeout; let's use it as an estimate. I don't think
we should ask to provide both of these fields. The execution
estimate depends on hardware, so my suggestion is to keep it
simple and solve the problem internally with the information from
the timeout field.
2. I would like to clarify the implementation a bit more: what is the
"time delta of the task"? I think that the task executor
(orchestrator/astute/mistral)
shouldn't provide any information except the status of the task.
It should be a simple interface, like "task_uuid: 1, status: running",
and Nailgun on its side should do all of the magic with progress
calculation.
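As a concrete (and deliberately simplified) illustration of using timeouts as
estimates, overall progress could be derived roughly like this; the data
structures and numbers are made up for the example:

# Simplified illustration only: overall progress from per-task timeouts
# (used as estimates) plus the elapsed time of the running task.
import time


def deployment_progress(tasks, statuses, started_at):
    """tasks: [{'id': ..., 'timeout': seconds}], in execution order.
    statuses: {task_id: 'ready' | 'running' | 'pending'}.
    started_at: {task_id: unix timestamp}, for the running task."""
    total = sum(t['timeout'] for t in tasks) or 1
    done = 0.0
    for t in tasks:
        status = statuses.get(t['id'], 'pending')
        if status == 'ready':
            done += t['timeout']
        elif status == 'running':
            elapsed = time.time() - started_at[t['id']]
            # Never report a running task as more than ~95% complete.
            done += min(elapsed, t['timeout'] * 0.95)
    return int(100.0 * done / total)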

Thanks,

On Tue, Oct 28, 2014 at 10:29 AM, Dmitriy Shulyak 
wrote:

> Hello everyone,
>
> I want to raise concerns about progress bar, and its usability.
> In my opinion current approach has several downsides:
> 1. No valuable information
> 2. Very fragile, you need to change code in several places not to break it
> 3. Will not work with pluggable code
>
> Log parsing works under one basic assumption - that we are in control of
> all tasks, so we can use mappings to logs with a certain pattern.
> It won't work with a pluggable architecture, and I am talking not about
> fuel-plugins and the way it will be done in 6.0, but about the whole idea
> of a pluggable architecture; I assume that internal features will be
> implemented as granular, self-contained plugins, and it will be possible
> to accomplish that not only with puppet, but with any other tool that
> suits you.
> Asking the person who provides a plugin (extension) to add mappings to
> logs feels like the weirdest thing ever.
>
> *What can be done to improve usability of progress calculation?*
> I see here several requirements:
> 1.Provide valuable information
>   - Correct representation of time that task takes to run
>   - What is going on target node in any point of the deployment?
> 2. Plugin friendly, it means that approach we will take should be flexible
> and extendable
>
> *Implementation:*
> In nearest future deployment will be splitted into tasks, they are will be
> big, not granular
> (like deploy controller, deploy compute), but this does not matter,
> because we can start to estimate them.
> Each task will provide estimated time.
> At first it will be manually setted by person who develops plugin (tasks),
> but it can be improved,
> so this information automatically (or semi-auto) will be provided by
> fuel-stats application.
> It will require orchestrator to report 2 simple entities:
> - time delta of the task
> - task identity
> UI will be able to show percents anyway, but additionally it will show
> what is running on target node.
>
> Of course it is not about 6.0, but please take a look, and let's try to
> agree on what is the right way to solve this task, because log parsing will
> not work with a data-driven orchestrator and a pluggable architecture.
> Thank you
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [cinder backend options] Propagate Cinder backend config information to Heat

2014-11-28 Thread Qiming Teng
The first thing you may want to check is the Cinder API.  If I'm
understanding this correctly, Heat only interacts with other OpenStack
services via their APIs.  It is not supposed to peek into their
internals.

Regards,
  - Qiming

On Fri, Nov 28, 2014 at 06:19:56PM +0530, Pradip Mukhopadhyay wrote:
> Hello,
> 
> 
> Suppose we have a cinder backend in local.conf | cinder.conf as :
> 
> 
> [myNFSBackend]
> nfs_mount_options = nfsvers=3
> volume_backend_name = myNFSBackend
> volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
> netapp_server_hostname = IP
> netapp_server_port = 80
> netapp_storage_protocol = nfs
> netapp_storage_family = ontap_cluster
> netapp_login = admin
> netapp_password = password
> netapp_vserver = vserver_name
> nfs_shares_config = /opt/stack/nfs.shares
> 
> 
> We would like to access some of such cinder backend configuration
> information from Heat. More specifically from custom resource inside the
> Heat (e.g. access the netapp_server_hostname, netapp_login, netapp_password
> etc. when defining a custom resource class extending the base Resource
> class). The purpose is to facilitate some (soap) service call to the
> backend storage from custom resource definitions.
> 
> 
> What is the best pattern/mechanism available? Any pointers to code/doc will
> be highly appreciated.
> 
> 
> Does any database table hold the local.conf (or service-specific conf)
> information?
> 
> 
> 
> Thanks,
> Pradip

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Compute Node lost the net-connection after spawning vm

2014-11-28 Thread Qiming Teng
Sounds like an iptables problem.
BTW, you may want to post this kind of question to
openst...@lists.openstack.org, not here.

Regards,
  Qiming

On Thu, Nov 27, 2014 at 06:45:26PM +0530, Aman Kumar wrote:
> Hi,
> 
> I have been using DevStack for 4 months and it was working fine, but 2 days
> ago I hit some problems and tried to re-install devstack by cloning it again
> from git. It got installed successfully and both my compute nodes came up.
> 
> After that I spawned a VM from Horizon; the spawned VM got an IP and is
> running successfully, but my compute node lost its network connection and I
> am not able to ssh to that node from putty.
> 
> I checked all the settings and there is no problem in my VM settings. I
> think there is some problem with devstack, because I tried 5-6 times with my
> old setup and also with new VM configurations. Every time only the compute
> node loses its network connection, while the spawned VM keeps running and
> the compute node remains enabled.
> 
> Can anyone please help me? Thanks in advance.
> 
> Regards
> Aman Kumar
> HP Software India

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] [cinder backend options] Propagate Cinder backend config information to Heat

2014-11-28 Thread Pradip Mukhopadhyay
Hello,


Suppose we have a cinder backend in local.conf | cinder.conf as :


[myNFSBackend]
nfs_mount_options = nfsvers=3
volume_backend_name = myNFSBackend
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = IP
netapp_server_port = 80
netapp_storage_protocol = nfs
netapp_storage_family = ontap_cluster
netapp_login = admin
netapp_password = password
netapp_vserver = vserver_name
nfs_shares_config = /opt/stack/nfs.shares


We would like to access some of such cinder backend configuration
information from Heat. More specifically from custom resource inside the
Heat (e.g. access the netapp_server_hostname, netapp_login, netapp_password
etc. when defining a custom resource class extending the base Resource
class). The purpose is to facilitate some (soap) service call to the
backend storage from custom resource definitions.


What is the best pattern/mechanism available? Any pointers to code/doc will
be highly appreciated.


Does any database table hold the local.conf (or service-specific conf)
information?



Thanks,
Pradip
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Re: [neutron] the hostname regex pattern fix also changed behaviour :(

2014-11-28 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 28/11/14 01:26, Angus Lees wrote:
> Context: https://review.openstack.org/#/c/135616
> 
> As far as I can make out, the fix for CVE-2014-7821 removed a backslash
> that effectively disables the negative look-ahead assertion that
> verifies that hostname can't be all-digits. Worse, the new version now
> rejects hostnames where a component starts with a digit.

Thanks for raising the issue!

> 
> This certainly addressed the immediate issue of "that regex was
> expensive", but the change in behaviour looks like it was unintended. 
> Given that we backported this DoS fix to released versions of neutron,
> what do we want to do about it now?

I don't think we've actually *released* any stable versions with the
patch included, yet (neither Icehouse nor Juno). (Adding [stable] tag to
subject to raise awareness).

I'm adding the mail thread to stable/juno etherpad to track the
backwards incompatibility (probably a blocker for the forthcoming
release): https://etherpad.openstack.org/p/StableJuno

> 
> In general this regex is crazy complex for what it verifies.  I can't
> see any discussion of where it came from nor precisely what it was
> intended to accept/reject when it was introduced in patch 16 of
> https://review.openstack.org/#/c/14219.
> 
> If we're happy disabling the check for components being all-digits, then
> a minimal change to the existing regex that could be backported might be
> something like
>   
> r'(?=^.{1,254}$)(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*(?:[a-zA-Z]{2,})$)'
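A quick, illustrative harness for sanity-checking candidate patterns against
sample hostnames (what the expected results should be depends on which
behaviour we decide to keep):

# Rough harness for eyeballing candidate hostname regexes; illustration only.
import re

CANDIDATES = {
    'minimal-backport': r'(?=^.{1,254}$)(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*(?:[a-zA-Z]{2,})$)',
}

SAMPLES = ['example.org', '3com.example.org', 'host-1.example.org', '12345']

for name, pattern in CANDIDATES.items():
    compiled = re.compile(pattern)
    for sample in SAMPLES:
        print(name, sample, bool(compiled.match(sample)))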
> 
> Alternatively (and clearly preferable for Kilo), Kevin has a replacement
> underway that rewrites this entirely to conform to modern RFCs in
> I003cf14d95070707e43e40d55da62e11a28dfa4e

With the change, will existing instances work as before?

/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUeGDkAAoJEC5aWaUY1u57kG0IAMz0jVCJ3D0gr6rydW/b3niY
tu7rQv/kKwfsmzCiKA8cpGoiGVm/23iwra5wU3oLSLQJDn+6XFBzseYy6F0Vy5+v
D6FUu3/AH5OOj3KeeC7TR500s+eR3kPNYqd/pzNYmpeW7b+yKJZUocgHjuYmiB0e
B4/JygQhox1zFdKOjsHF+x0PCeAc49VwQZkywN97TiFiwOqqr6iC3tmnOPnFbjNV
dwGqlPdiaS0GJ2STDnEJ8XABz8//Q7qwHBwQvM0VSIHkUmDI228crgWImAEClbyG
IIH67vjOJEFyBMRK0fMOqBT1CnUfS/OX7/OFwJVQh6fAyMKrMuXCixPUYQuSUBI=
=NYrv
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] suds-jurko, new in our global-requirements.txt: what is the point?!?

2014-11-28 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 27/11/14 19:10, Thomas Goirand wrote:
> On 11/28/2014 12:06 AM, Ihar Hrachyshka wrote:
>> On 27/11/14 12:09, Thomas Goirand wrote:
>>> On 11/27/2014 12:31 AM, Donald Stufft wrote:
 
> On Nov 26, 2014, at 10:34 AM, Thomas Goirand
>  wrote:
> 
> Hi,
> 
> I tried to package suds-jurko. I was first happy to see
> that there was some progress to make things work with
> Python 3. Unfortunately, the reality is that suds-jurko has
> many issues with Python 3. For example, it has many:
> 
> except Exception, e:
> 
> as well as many:
> 
> raise Exception, 'Duplicate key %s found' % k
> 
> This is clearly not Python3 code. I tried quickly to fix
> some of these issues, but as I fixed a few, others appear.
> 
> So I wonder, what is the point of using suds-jurko, which
> is half-baked, and which will conflict with the suds
> package?
> 
 It looks like it uses 2to3 to become Python 3 compatible.
>> 
>>> Outch! That's horrible.
>> 
>>> I think it'd be best if someone spent some time on writing
>>> real code rather than using such a hack as 2to3. Thoughts
>>> anyone?
>> 
>> That sounds very subjective. If upstream is able to support
>> multiple python versions from the same codebase, then I see no
>> reason for them to split the code into multiple branches and
>> introduce additional burden syncing fixes between those.
>> 
>> /Ihar
> 
> Objectively, using 2to3 sux, and it's much better to fix the code, 
> rather than using such a band-aid. It is possible to support
> multiple version of Python with a single code base. So many
> projects are able to do it, I don't see why suds would be any
> different.

Their support matrix starts from Python 2.4. Maybe that's a reason for the
band-aid rather than runtime cross-version wrappers.
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUeF4XAAoJEC5aWaUY1u57Ty0IALsSr5MRNvpuq9g0/GTFGynh
qXraVZ/km+whgFtrheyM4+tVuwew2aY7y1Sb/ACuvjqBmtbnWPqEFgD/LIhmSe+R
uraelATiECOWnHLYYfIdQp8r3NkxlI1C2bwc6UkELYVgg/4mjqZa6ZtwSIkJB/2H
BrZ7Z45no0zIkAIDMPtc7GEG3aWPFLEhT7sG0JEu59z/F964wP6bXZrm3iqUxE1u
ft4mQBe3DCMhVjbhCLBXid843lvPLboOIcgRswKQc1GOjFCU3DEfKdTsxDr+koS2
UPc6UkOWR9pN/X5riijrSIg2QPTtJrIjRvdgzc/TJfq3K9h1Z+FxIsmKUFHM4Ls=
=Xl13
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] ironic driver retries on ironic driver Conflict response

2014-11-28 Thread Dmitry Tantsur

Hi!

On 11/28/2014 11:41 AM, Murray, Paul (HP Cloud) wrote:

Hi All,

Looking at the ironic virt driver code in nova it seems that a Conflict
(409) response from the ironic client results in the driver re-trying
the request. Given the comment below in the ironic code I would imagine
that is not the right behavior – it reads as though this is something
that would fail on the retry as well.

class Conflict(HTTPClientError):

 """HTTP 409 - Conflict.

 Indicates that the request could not be processed because of conflict

 in the request, such as an edit conflict.

 """

 http_status = 409

 message = _("Conflict")

An example of this is if the virt driver attempts to assign an instance
to a node that is in the power on state it will issue this Conflict
response.
It's possible that a periodic background process is going on; retrying makes 
perfect sense for that case. We're trying to get away from background 
processes causing Conflict, by the way.
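For reference, what the driver does is essentially a bounded retry loop like
the sketch below (names and values are illustrative, not the actual nova code;
the import path for the Conflict class quoted above is assumed):

# Illustrative sketch of retry-on-409 behaviour; not the actual nova code.
import time

from ironicclient import exc  # assumed import path for the Conflict class


def call_with_retries(func, *args, **kwargs):
    max_retries = 6      # placeholder values; the real driver reads these
    retry_interval = 2   # from nova configuration options
    for attempt in range(max_retries):
        try:
            return func(*args, **kwargs)
        except exc.Conflict:
            if attempt == max_retries - 1:
                raise
            time.sleep(retry_interval)

Whether a 409 is worth retrying clearly depends on its cause: it helps when
the conflict comes from a transient background process, and just delays the
failure when it does not.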


Have I understood this or is there something about this I am not getting
right?

Paul

Paul Murray

Nova Technical Lead, HP Cloud

+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks
RG12 1HN Registered No: 690597 England. The contents of this message and
any attachments to it are confidential and may be legally privileged. If
you have received this message in error, you should delete it from your
system immediately and advise the sender. To any recipient of this
message within HP, unless otherwise stated you should consider this
message and attachments as "HP CONFIDENTIAL".



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][nova] ironic driver retries on ironic driver Conflict response

2014-11-28 Thread Murray, Paul (HP Cloud)
Hi All,

Looking at the ironic virt driver code in nova it seems that a Conflict (409) 
response from the ironic client results in the driver re-trying the request. 
Given the comment below in the ironic code I would imagine that is not the 
right behavior - it reads as though this is something that would fail on the 
retry as well.

class Conflict(HTTPClientError):
"""HTTP 409 - Conflict.

Indicates that the request could not be processed because of conflict
in the request, such as an edit conflict.
"""
http_status = 409
message = _("Conflict")

An example of this is if the virt driver attempts to assign an instance to a 
node that is in the power on state it will issue this Conflict response.

Have I understood this or is there something about this I am not getting right?

Paul


Paul Murray
Nova Technical Lead, HP Cloud
+44 117 316 2527

Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England. The contents of this message and any attachments 
to it are confidential and may be legally privileged. If you have received this 
message in error, you should delete it from your system immediately and advise 
the sender. To any recipient of this message within HP, unless otherwise stated 
you should consider this message and attachments as "HP CONFIDENTIAL".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-28 Thread GHANSHYAM MANN
Tests can be skipped based on configuration file options. It all depends on
which set of tests you want to skip for your environment.

For example, to skip all Sahara tests (as you mentioned), you can set the
config option 'sahara' to false, which will skip all Sahara tests.

Particular features of services can be skipped in the same way if they have
specific configuration options (like IPv6), or through the extension list
"api_extension" of the specific service.



On Fri, Nov 28, 2014 at 6:24 PM, om prakash pandey 
wrote:

> Thanks Sridhar.
>
> I'm aware of using "skipException" for skipping tests at class level.
> However, this involves putting it in every class to skip tests which are
> not desired.
>
> I was looking for a way to control the tests I want to run through some
> kind of a configuration file, the options to pass to test runner.
>
> Regards
> Om
>
> On Fri, Nov 28, 2014 at 2:02 PM, Sridhar Gaddam <
> sridhar.gad...@enovance.com> wrote:
>
>>  If the deployment does not support IPv6, we use the following convention
>> to skip the tests at class level.
>>
>> https://github.com/openstack/tempest/blob/master/tempest/api/network/base.py#L65
>>
>> Regards,
>> --Sridhar.
>>
>>
>>
>> On 11/28/2014 01:50 PM, om prakash pandey wrote:
>>
>> Hi Folks,
>>
>>  I would like to know about the "best practices" followed for skipping
>> tests not applicable for my environment.
>>
>>  I know one of the ways is to use the below decorator over the test
>> method:
>>  @test.skip_because(bug="BUG_ID")
>>
>>  However, what if my deployment doesn't support VPNAAS and I want to
>> skip those tests. Similarly, what if I want to skip the entire suite of
>> sahara(data processing) tests.
>>
>>  Are there any options in testr to customize running of tempest tests as
>> per my environment/requirements?
>>
>>  Regards,
>> Om
>>
>> On Wed, Nov 26, 2014 at 3:13 AM, Vineet Menon 
>> wrote:
>>
>>> Hi,
>>> Thanjs for clearing that up... I had a hard time understanding the
>>> screws before I went with testr.
>>>
>>> Regards,
>>> Vineet
>>>  On 25 Nov 2014 17:46, "Matthew Treinish"  wrote:
>>>
  On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
 > Sorry for my previous message with wrong subject
 >
 > Hi all,
 > By reading the tempest documentation page [1] a user can run tempest
 tests
 > by using whether testr or run_tempest.sh or tox.
 >
 > What is the best practice?
 > run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it
 is my
 > preferred way, currently.
 > Any thought?

 So the options are there for different reasons and fit different
 purposes. The
 run_tempest.sh script exists mostly for legacy reasons as some people
 prefer to
 use it, and it predates the usage of tox in tempest. It also has some
 advantages
 like that it can run without a venv and provides some other options.

 Tox is what we use for gating, and we keep most of job definitions for
 gating in
 the tox.ini file. If you're trying to reproduce a gate run locally
 using tox is
 what is recommended to use. Personally I use it to run everything just
 because
 I often mix unit tests and tempest runs and I like having separate
 venvs for
 both being created on demand.

 Calling testr directly is just what all of these tools are doing under
 the
 covers, and it'll always be an option.

 One thing we're looking to do this cycle is to add a single entry point
 for
 running tempest which will hopefully clear up this confusion, and make
 the
 interface for interacting with tempest a bit nicer. When this work is
 done, the
 run_tempest.sh script will most likely disappear and tox will probably
 just be
 used for gating job definitions and just call the new entry-point
 instead of
 testr directly.

 >
 > BR,
 > Angelo
 >
 > [1]
 http://docs.openstack.org/developer/tempest/overview.html#quickstart
 >

 -Matt Treinish

  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> ___
>> OpenStack-dev mailing 
>> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http:

Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-28 Thread om prakash pandey
Thanks Sridhar.

I'm aware of using "skipException" for skipping tests at class level.
However, this involves putting it in every class to skip tests which are
not desired.

I was looking for a way to control the tests I want to run through some
kind of a configuration file, the options to pass to test runner.

Regards
Om

On Fri, Nov 28, 2014 at 2:02 PM, Sridhar Gaddam  wrote:

>  If the deployment does not support IPv6, we use the following convention
> to skip the tests at class level.
>
> https://github.com/openstack/tempest/blob/master/tempest/api/network/base.py#L65
>
> Regards,
> --Sridhar.
>
>
>
> On 11/28/2014 01:50 PM, om prakash pandey wrote:
>
> Hi Folks,
>
>  I would like to know about the "best practices" followed for skipping
> tests not applicable for my environment.
>
>  I know one of the ways is to use the below decorator over the test
> method:
>  @test.skip_because(bug="BUG_ID")
>
>  However, what if my deployment doesn't support VPNAAS and I want to skip
> those tests. Similarly, what if I want to skip the entire suite of
> sahara(data processing) tests.
>
>  Are there any options in testr to customize running of tempest tests as
> per my environment/requirements?
>
>  Regards,
> Om
>
> On Wed, Nov 26, 2014 at 3:13 AM, Vineet Menon 
> wrote:
>
>> Hi,
>> Thanjs for clearing that up... I had a hard time understanding the screws
>> before I went with testr.
>>
>> Regards,
>> Vineet
>>  On 25 Nov 2014 17:46, "Matthew Treinish"  wrote:
>>
>>>  On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
>>> > Sorry for my previous message with wrong subject
>>> >
>>> > Hi all,
>>> > By reading the tempest documentation page [1] a user can run tempest
>>> tests
>>> > by using whether testr or run_tempest.sh or tox.
>>> >
>>> > What is the best practice?
>>> > run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it
>>> is my
>>> > preferred way, currently.
>>> > Any thought?
>>>
>>> So the options are there for different reasons and fit different
>>> purposes. The
>>> run_tempest.sh script exists mostly for legacy reasons as some people
>>> prefer to
>>> use it, and it predates the usage of tox in tempest. It also has some
>>> advantages
>>> like that it can run without a venv and provides some other options.
>>>
>>> Tox is what we use for gating, and we keep most of job definitions for
>>> gating in
>>> the tox.ini file. If you're trying to reproduce a gate run locally using
>>> tox is
>>> what is recommended to use. Personally I use it to run everything just
>>> because
>>> I often mix unit tests and tempest runs and I like having separate venvs
>>> for
>>> both being created on demand.
>>>
>>> Calling testr directly is just what all of these tools are doing under
>>> the
>>> covers, and it'll always be an option.
>>>
>>> One thing we're looking to do this cycle is to add a single entry point
>>> for
>>> running tempest which will hopefully clear up this confusion, and make
>>> the
>>> interface for interacting with tempest a bit nicer. When this work is
>>> done, the
>>> run_tempest.sh script will most likely disappear and tox will probably
>>> just be
>>> used for gating job definitions and just call the new entry-point
>>> instead of
>>> testr directly.
>>>
>>> >
>>> > BR,
>>> > Angelo
>>> >
>>> > [1]
>>> http://docs.openstack.org/developer/tempest/overview.html#quickstart
>>> >
>>>
>>> -Matt Treinish
>>>
>>>  ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> ___
> OpenStack-dev mailing 
> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ER diagram of NOVA DB

2014-11-28 Thread Markus Zoeller
> Hi,
> 
> I used SchemaSpy to generate an ER diagram of the Nova DB.
> This is just an FYI for people who are working on Cells v2 and involved in
> generating the new schema.
> 
> One can start by unzipping the archive and opening index.html in a browser...
> 
> Regards,
> 
> Vineet Menon 

I'm not working on cells v2, but thanks anyway. It helps me understand the
relationships a bit more.


Regards,
Markus Zoeller
IRC: markus_z
Launchpad: mzoeller


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-28 Thread Sridhar Gaddam
If the deployment does not support IPv6, we use the following convention 
to skip the tests at class level.

https://github.com/openstack/tempest/blob/master/tempest/api/network/base.py#L65

Regards,
--Sridhar.


On 11/28/2014 01:50 PM, om prakash pandey wrote:

Hi Folks,

I would like to know about the "best practices" followed for skipping 
tests not applicable for my environment.


I know one of the ways is to use the below decorator over the test method:
 @test.skip_because(bug="BUG_ID")

However, what if my deployment doesn't support VPNAAS and I want to 
skip those tests. Similarly, what if I want to skip the entire suite 
of sahara(data processing) tests.


Are there any options in testr to customize running of tempest tests 
as per my environment/requirements?


Regards,
Om

On Wed, Nov 26, 2014 at 3:13 AM, Vineet Menon  wrote:


Hi,
Thanjs for clearing that up... I had a hard time understanding the
screws before I went with testr.

Regards,
Vineet

On 25 Nov 2014 17:46, "Matthew Treinish"  wrote:

On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
> Sorry for my previous message with wrong subject
>
> Hi all,
> By reading the tempest documentation page [1] a user can run
tempest tests
> by using whether testr or run_tempest.sh or tox.
>
> What is the best practice?
> run_tempest.sh has several options (e.g. ./run_tempest.sh
-h) and it is my
> preferred way, currently.
> Any thought?

So the options are there for different reasons and fit
different purposes. The
run_tempest.sh script exists mostly for legacy reasons as some
people prefer to
use it, and it predates the usage of tox in tempest. It also
has some advantages
like that it can run without a venv and provides some other
options.

Tox is what we use for gating, and we keep most of job
definitions for gating in
the tox.ini file. If you're trying to reproduce a gate run
locally using tox is
what is recommended to use. Personally I use it to run
everything just because
I often mix unit tests and tempest runs and I like having
separate venvs for
both being created on demand.

Calling testr directly is just what all of these tools are
doing under the
covers, and it'll always be an option.

One thing we're looking to do this cycle is to add a single
entry point for
running tempest which will hopefully clear up this confusion,
and make the
interface for interacting with tempest a bit nicer. When this
work is done, the
run_tempest.sh script will most likely disappear and tox will
probably just be
used for gating job definitions and just call the new
entry-point instead of
testr directly.

>
> BR,
> Angelo
>
> [1]
http://docs.openstack.org/developer/tempest/overview.html#quickstart
>

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] How to run tempest tests

2014-11-28 Thread om prakash pandey
Hi Folks,

I would like to know about the "best practices" followed for skipping tests
not applicable for my environment.

I know one of the ways is to use the below decorator over the test method:
 @test.skip_because(bug="BUG_ID")

However, what if my deployment doesn't support VPNAAS and I want to skip
those tests. Similarly, what if I want to skip the entire suite of
sahara(data processing) tests.

Are there any options in testr to customize running of tempest tests as per
my environment/requirements?

Regards,
Om

On Wed, Nov 26, 2014 at 3:13 AM, Vineet Menon 
wrote:

> Hi,
> Thanjs for clearing that up... I had a hard time understanding the screws
> before I went with testr.
>
> Regards,
> Vineet
> On 25 Nov 2014 17:46, "Matthew Treinish"  wrote:
>
>> On Mon, Nov 24, 2014 at 10:49:27AM +0100, Angelo Matarazzo wrote:
>> > Sorry for my previous message with wrong subject
>> >
>> > Hi all,
>> > By reading the tempest documentation page [1] a user can run tempest
>> tests
>> > by using whether testr or run_tempest.sh or tox.
>> >
>> > What is the best practice?
>> > run_tempest.sh has several options (e.g. ./run_tempest.sh -h) and it is
>> my
>> > preferred way, currently.
>> > Any thought?
>>
>> So the options are there for different reasons and fit different
>> purposes. The
>> run_tempest.sh script exists mostly for legacy reasons as some people
>> prefer to
>> use it, and it predates the usage of tox in tempest. It also has some
>> advantages
>> like that it can run without a venv and provides some other options.
>>
>> Tox is what we use for gating, and we keep most of job definitions for
>> gating in
>> the tox.ini file. If you're trying to reproduce a gate run locally using
>> tox is
>> what is recommended to use. Personally I use it to run everything just
>> because
>> I often mix unit tests and tempest runs and I like having separate venvs
>> for
>> both being created on demand.
>>
>> Calling testr directly is just what all of these tools are doing under the
>> covers, and it'll always be an option.
>>
>> One thing we're looking to do this cycle is to add a single entry point
>> for
>> running tempest which will hopefully clear up this confusion, and make the
>> interface for interacting with tempest a bit nicer. When this work is
>> done, the
>> run_tempest.sh script will most likely disappear and tox will probably
>> just be
>> used for gating job definitions and just call the new entry-point instead
>> of
>> testr directly.
>>
>> >
>> > BR,
>> > Angelo
>> >
>> > [1]
>> http://docs.openstack.org/developer/tempest/overview.html#quickstart
>> >
>>
>> -Matt Treinish
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev