Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Gary Kotton
Yeah, I should have reread it before hitting send.

It should have been RBAC (I have opened a new official tag). Kevin understood 
and addressed the bugs :)



From: "Armando M." mailto:arma...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Friday, November 20, 2015 at 8:39 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] Bug update



On 20 November 2015 at 09:47, John Belamaric <jbelama...@infoblox.com> wrote:
I think Gary got auto-corrected:

training = triaging
brace = rbac

ah!


On Nov 20, 2015, at 12:41 PM, Armando M. <arma...@gmail.com> wrote:



On 19 November 2015 at 23:10, Gary Kotton <gkot...@vmware.com> wrote:
Hi,
There are a ton of old and ancient bugs that have not been trained. If you guys 
have some time then please go over them. In most cases some are not even bugs 
and are just questions. I have spent the last few days going over and training 
a few.
Over the last two days a number of bugs related to Neutron RBAC have been 
opened. I have created a new tag called 'brace'. Kevin can you please take a 
look. Some may be bugs, others may be edge cases that we missed in the review 
process and others may be a misunderstanding of the feature.

What does brace mean? That doesn't seem very intuitive.

Are you suggesting to add one to cover 'access control' in general?

Thanks for helping out!

[1] 
http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags



A luta continua
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Developer Mailing List Digest November 7-13

2015-11-20 Thread Mike Perez
Perma Link: 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-20151107/

New Management Tools and Processes for stable/liberty and Mitaka
================================================================

* For release management, we used a combination of launchpad milestone
pages and our wiki to track changes in releases.
* We used to pull release notes for stable point releases at the time
of release.
* Release managers would work with PTLs and release liaisons at each
milestone to update launchpad to reflect the work completed.
* All this requires a lot of work from the stable maintenance and release teams.
* To address this work with the ever-growing set of projects, the
release team is introducing Reno for continuously building release
notes as files in-tree.

The idea is small YAML files, usually one per note or patch to avoid
merge conflicts on backports, which are then compiled into a readable
document.
ReStructuredText and Sphinx are supported for converting note files to
HTML for publication.
Documentation for using Reno is available [1].
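For illustration, a reno note is just a small YAML file dropped into the
project's releasenotes/notes/ directory. A minimal sketch (the file name
and wording are invented; the section keys shown are the usual reno
defaults):

  releasenotes/notes/frobnicate-flag-0123456789abcdef.yaml

    ---
    features:
      - A new frobnicate option was added to the API service.
    upgrade:
      - The frobnicate option defaults to off, so existing deployments
        are unaffected until it is explicitly enabled.
    fixes:
      - Fixed a 500 error when listing deleted records.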

* Release liaisons should create and merge a few patches for each
project between now and Mitaka-1 milestone:

  - To the master branch, instructions for publishing the notes. An
example from Glance [2].
  - Instructions for publishing in the project's stable/liberty branch. An
example from Glance [3].
  - Relevant jobs in project-config. An example from Glance [4].
  - Reno was not ready before the summit, so the wiki was used for
release notes for the initial Liberty releases. Liaisons should convert
those notes to Reno YAML files in the stable/liberty branch.

* Use the topic ‘add-reno’ for all patches to track adoption.
* Once liaisons have done this work, launchpad can stop being used for
tracking completed work.
* Launchpad will still be used for tracking bug reports, for now.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html


Keeping Juno “alive” For Longer
=

* Tony Breeds is seeking feedback on the idea of keeping Juno around a
little longer.
* According to the current user survey [5], Icehouse still has the
biggest install base in production clouds. Juno is second, which means
if we end of life (EOL) Juno this month, ~75% of production clouds
will be running an EOL’d release.
* The problems with doing this, however:

  - CI capacity for running the jobs necessary to make sure stable
branches still work.
  - Lack of people who care enough to make sure the stable
branch continues to work.
  - Juno is still tied to Python 2.6.
  - Security support is still needed.
  - Tempest is branchless, so it’s running stable compatible jobs.

* This is acknowledged as a common request. The common answer is
“push more resources into fixing existing stable branches and we might
consider it”.
* Matt Riedemann, who works in the trenches of stable branches, confirms
stable/juno is already a goner due to requirements capping issues. You
fix one issue to unwedge a project and, with global-requirements syncs,
end up breaking two other projects. The cycle never ends.
* This same problem does not exist in stable/kilo, because we’ve done
a better job of isolating versions in global-requirements with
upper-constraints.
* Sean Dague wonders what reasons keep people from doing
upgrades to begin with. Tony is unable to give specifics since some are
internal to his company's offering.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078630.html


Oslo Libraries Dropping Python 2.6 compatibility
=

* Davanum notes a patch to drop py26 oslo jobs [6].
* Jeremy Stanley notes that the infrastructure team plans to remove
CentOS 6.x job workers, which include all Python 2.6 jobs, when
stable/juno reaches EOL.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079249.html


Making Stable Maintenance its own OpenStack Project Team
===

* Thierry writes that when the Release Cycle Management team was
created, it just happened to contain release management, stable branch
management, and vulnerability management.

  - Security Team was created and spun out of the release team today.

* Proposal: spin out the stable branch maintenance as well.

  - Most of the stable team work used to be stable point release
management, but as of stable/liberty this is now done by the release
management team and triggered by the project-specific stable
maintenance teams, so there is no more overlap in tooling used there.
  - Stable team is now focused on stable branch policies [7], not patches.
  - Doug Hellmann leads the release team and does not have the history
Thierry had with stable branch policy.
  - Empowering the team to make its own decisions, and giving it
visibility and recognition, in the hope of encouraging more resources
being dedicated to it.

-- Defining and enforcing stable bran

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Oleg Gelbukh
With CentOS 7 we will have Python 2.7 on the Fuel Admin node as the
default version, I believe.

--
Best regards,
Oleg Gelbukh,
Principal Engineer
Mirantis

On Fri, Nov 20, 2015 at 6:27 AM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Andrey,
>
> As far as I remember from the last usage of fuel master node, there was
>> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
>> hard to launch some application on fuel node without docker (image with
>> py27/py3). Are you planning to provide py27 at least or my note is outdated
>> and I can already use py27 from the box?
>
> We can install docker on master node anyway to run Rally / Tempest or
> other test suites and scripts from master node with Python 2.7 or something
> also.
>
> On Fri, Nov 20, 2015 at 5:20 PM, Andrey Kurilin 
> wrote:
>
>> Hi!
>> I'm not fuel developer, so opinion below is based on user-view.
>> As far as I remember from the last usage of fuel master node, there was
>> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
>> hard to launch some application on fuel node without docker (image with
>> py27/py3). Are you planning to provide py27 at least or my note is outdated
>> and I can already use py27 from the box?
>>
>> On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> As you might remember, we introduced Docker containers on the master node a
>>> while ago when we implemented first version of Fuel upgrade feature. The
>>> motivation behind was to make it possible to rollback upgrade process if
>>> something goes wrong.
>>>
>>> Now we are at the point where we can not use our tarball based upgrade
>>> approach any more and those patches that deprecate upgrade tarball has been
>>> already merged. Although it is a matter of a separate discussion, it seems
>>> that upgrade process rather should be based on kind of backup and restore
>>> procedure. We can backup Fuel data on an external media, then we can
>>> install new version of Fuel from scratch and then it is assumed backed up
>>> Fuel data can be applied over this new Fuel instance. The procedure itself
>>> is under active development, but it is clear that rollback in this case
>>> would be nothing more than just restoring from the previously backed up
>>> data.
>>>
>>> As for Docker containers, still there are potential advantages of using
>>> them on the Fuel master node, but our current implementation of the feature
>>> seems not mature enough to make us benefit from the containerization.
>>>
>>> At the same time there are some disadvantages like
>>>
>>>- it is tricky to get logs and other information (for example, rpm
>>>-qa) for a service like shotgun which is run inside one of containers.
>>>- it is specific UX when you first need to run dockerctl shell
>>>{container_name} and then you are able to debug something.
>>>- when building IBP image we mount directory from the host file
>>>system into mcollective container to make image build faster.
>>>- there are config files and some other files which should be shared
>>>among containers which introduces unnecessary complexity to the whole
>>>system.
>>>- our current delivery approach assumes we wrap into rpm/deb
>>>packages every single piece of the Fuel system. Docker images are not an
>>>exception. And as far as they depend on other rpm packages we forced to
>>>build docker-images rpm package using kind of specific build flow. 
>>> Besides
>>>this package is quite big (300M).
>>>- I'd like it to be possible to install Fuel not from ISO but from
>>>RPM repo on any rpm based distribution. But it is double work to support
>>>both Docker based and package based approach.
>>>
>>> Probably some of you can give other examples. Anyway, the idea is to get
>>> rid of Docker containers on the master node and switch to plain package
>>> based approach that we used before.
>>>
>>> As far as there is nothing new here, we just need to use our old site.pp
>>> (with minimal modifications), it looks like it is possible to implement
>>> this during 8.0 release cycle. If there are no principal objections, please
>>> give me a chance to do this ASAP (during 8.0), I know it is a huge risk for
>>> the release, but still I think I can do this.
>>>
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists

[openstack-dev] OpenStack Developer Mailing List Digest November 14-20

2015-11-20 Thread Mike Perez
Perma Link: 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-20151114

Time to Make Some Assertions About Your Projects
===

* The technical committee defined a number of “assert” tags which
allow a project team to make assertions about their own
deliverables:

  - assert:follows-standard-deprecation
  - assert:supports-upgrade
  - assert:supports-rolling-upgrade

* Read more on their definitions [1]
* Update projects.yaml [2] with the tags that already apply to your
project (see the sketch below).
* The OpenStack Foundation will use “assert” tags very soon in the
project navigator [3].
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079542.html
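For reference, a rough sketch of what such an entry in the governance
projects.yaml could look like. The surrounding structure is an assumption
from memory and should be checked against the actual file [2] before
copying; only the tag names come from the thread:

  nova:
    ptl: ...
    deliverables:
      nova:
        repos:
          - openstack/nova
        tags:
          - assert:supports-upgrade
          - assert:follows-standard-deprecation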

Making stable maintenance its own OpenStack Project Team
==

* Continuing discussion from last week [4]...
* Negatives:

  - Not enough work to warrant a designated “team”.
  - The change is unlikely to bring a meaningful improvement to the
situation or sudden new resources.

* Positives:

  - An empowered team could tackle new coordination tasks, like
engaging more directly in converging stable branch rules across teams,
or producing tools.
  - Release management no longer overlaps with stable branch work, so
having them under the same PTL is limiting and inefficient.
  - Reinforcing the branding (by giving it its own team) may encourage
more organizations to dedicate new resources to it.

* Matt Riedemann offers to lead the team.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078901.html

Release Countdown For Week R-19, November 23-27
=
* Mitaka-1 milestone scheduled for December 1-3.
* Teams should be...

  - Wrapping up incomplete work left over from the end of the Liberty cycle.
  - Finalizing and announcing plans from the summit.
  - Completing specs and blueprints.

* The openstack/releases repository will be used to manage Mitaka 1
milestone tags.
* Reno [5] will be used instead of Launchpad for tracking completed
work. Make sure any release notes done for this cycle are committed to
your master branch before proposing the milestone tag.
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079795.html

New API Guidelines Ready for Cross Project Review
===

* The following will be merged soon:
  - Adding an introduction to the API microversion guideline [6].
  - Add description of pagination parameters [7].
  - A guideline for errors [8].
* These will be brought up in the next cross project meeting [9].
* Full thread: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079434.html


[1] - http://governance.openstack.org/reference/tags/index.html
[2] - 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
[3] - https://www.openstack.org/software/project-navigator/
[4] - 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-7-13/#stable-team
[5] - 
http://www.openstack.org/blog/2015/11/openstack-developer-mailing-list-digest-november-7-13/#reno
[6] - https://review.openstack.org/#/c/187112/
[7] - https://review.openstack.org/#/c/190743/
[8] - https://review.openstack.org/#/c/167793/
[9] - https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Approved but not implemented specs

2015-11-20 Thread Oleg Gelbukh
It's a good point.

I think it could even be done automatically: once spec freeze is in place,
run an infra script and update all CRs still in review with specs targeted
to current (and previous) releases by moving them to next release's
directory.

-Oleg

On Fri, Nov 20, 2015 at 3:35 PM, Igor Kalnitsky 
wrote:

> Hey Fuelers,
>
> Today I noticed that some of Fuel specs have been merged for 7.0 while
> the features themselves weren't landed. It's kind of confusing since it
> seems like the feature was implemented in 7.0 while it's not.
>
> What do you think guys about moving such specs into 8.0 folder? I
> believe it's a way to better understand what we're doing now, and what
> was done previously.
>
> Thanks,
> Igor
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]Tokyo Summit Summary

2015-11-20 Thread Zhipeng Huang
Hi Team,

We had some great sessions on Tricircle at the Tokyo Summit, both at the
main conference [1] and the design summit [2].

After the summit the core team dived into the new architecture design as
discussed in the design summit session [3], so there has been a bit of
radio silence, but rest assured the work is continuing :)

We will still hold our weekly meeting in openstack-meeting every Wednesday
from 1300 UTC to 1400 UTC, where we will discuss problems and ideas that
come up during development. There will be no specific agenda assigned,
except for the last meeting of every month, where we will focus on the
major problems.

Besides the everyday dev discussion in openstack-meeting, we will have
architectural/functional/conceptual discussions in openstack-tricircle
earlier each week, where we will bash out ideas on how to move the
project forward.

I'm also contemplating Google Hangouts for the openstack-meeting sessions
so people could communicate directly. I will send out detailed info about
this later on :)

Anyways, wish y'all a great weekend, and see you next week at the meeting.
In the meantime, check out the new arch proposal done by the core team [3].

[1]
https://openstacksummitoctober2015tokyo.sched.org/event/49sw/multisite-openstack-deep-dive
[2] https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Tricircle
[3] https://wiki.openstack.org/wiki/Tricircle
-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] kilo bug on datasource listing

2015-11-20 Thread Tim Hinrichs
Congress stable-maintenance team:

We seem to have a bug in kilo that is making it tough for Bryan Sullivan to
get things up and running.  The swift driver doesn't have a 'secret' field,
which is causing a 500 error when listing datasources.  If I remember
right, we fixed this bug later.

https://bugs.launchpad.net/congress/+bug/1518496

Could someone volunteer to fix it?  I think we should do a release once
that's in.

Tim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-20 Thread Ben Nemec
Thinking about this some more makes me wonder if we need a sample config
generator like oslo.config.  It would work off something similar to the
capabilities map, where you would say

SSL:
  templates:
    - puppet/extraconfig/tls/tls-cert-inject.yaml
  output:
    - environments/enable-ssl.yaml

And the tool would look at that, read all the params from
tls-cert-inject.yaml and generate the sample env file.  We'd have to be
able to do a few new things with the params in order for this to work:

- Need to specify whether a param is intended to be set as a top-level
param, parameter_defaults (which we informally do today with the Can be
overridden by parameter_defaults comment), or internal, to define params
that shouldn't be exposed in the sample config and are only intended as
an interface between templates.  There wouldn't be any enforcement of
the internal type, but Python relies on convention for its private
members so there's precedent. :-)
- There would have to be some way to pick out only certain params from a
template, since I think there are almost certainly features that are
configured using a subset of say puppet/controller.yaml which obviously
can't just take the params from an entire file.  Although maybe this is
an indication that we could/should refactor the templates to move some
of these optional params into their own separate files (at this point I
think I should take a moment to mention that this is somewhat of a brain
dump, so I haven't thought through all of the implications yet and I'm
not sure it all makes sense).

The nice thing about generating these programmatically is we would
formalize the interface of the templates somewhat, and it would be
easier to keep sample envs in sync with the actual implementation.
You'd never have to worry about someone adding a param to a file but
forgetting to update the env (or at least it would be easy to catch and
fix when they did, just run "tox -e genconfig").
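As a rough sketch of the idea (not the output of any real tool -- the
resource_registry entry and parameter names below are assumptions based
on the existing enable-ssl.yaml environment, and would really be read out
of tls-cert-inject.yaml by the generator):

  # environments/enable-ssl.yaml, as "tox -e genconfig" might emit it
  resource_registry:
    OS::TripleO::NodeTLSData: ../puppet/extraconfig/tls/tls-cert-inject.yaml

  parameter_defaults:
    # Parameters pulled from the template, with descriptions and
    # defaults copied over by the generator
    SSLCertificate: ''
    SSLKey: ''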

I'm not saying this is a simple or short-term solution, but I'm curious
what people think about setting this as a longer-term goal, because as I
think our discussion in Tokyo exposed, we're probably going to have a
bit of an explosion of sample envs soon and we're going to need some way
to keep them sane.

Some more comments inline.

On 11/19/2015 10:16 AM, Steven Hardy wrote:
> On Mon, Nov 16, 2015 at 08:15:48PM +0100, Giulio Fidente wrote:
>> On 11/16/2015 04:25 PM, Steven Hardy wrote:
>>> Hi all,
>>>
>>> I wanted to start some discussion re $subject, because it's been apparent
>>> that we have a lack of clarity on this issue (and have done ever since we
>>> started using parameter_defaults).
>>
>> [...]
>>
>>> How do people feel about this example, and others like it, where we're
>>> enabling common, but not mandatory functionality?
>>
>> At first I was thinking about something as simple as: "don't use top-level
>> params for resources which the registry doesn't enable by default".
>>
>> It seems to be somewhat what we tried to do with the existing pluggable
>> resources.
>>
>> Also, not to hijack the thread but I wanted to add another question related
>> to a similar issue:
>>
>>   Is there a reason to prefer use of parameters: instead of
>> parameter_defaults: in the environment files?
>>
>> It looks to me that by defaulting to parameter_defaults: users won't need to
>> update their environment files in case the parameter is moved from top-level
>> into a specific nested stack so I'm inclined to prefer this. Are there
>> reasons not to?
> 
> The main reason is scope - if you use "parameters", you know the data flow
> happens via the parent template (e.g overcloud-without-mergepy) and you
> never have to worry about naming collisions outside of that template.
> 
> But if you use parameter_defaults, all parameters values defined that way
> are effectively global, and you then have to be much more careful that you
> never shadow a parameter name and get an unexpected value passed in to it.
> 
> Here's another example of why we need to decide this btw:
> 
> https://review.openstack.org/#/c/229471/
> 
> Here, we have some workers parameters, going only into controller.yaml -
> this is fine, but the new options are completely invisible to users who
> look at the overcloud_without_mergepy parameters schema as their interface
> (in particular I'm thinking of any UI here).
> 
> My personal preference is to say:
> 
> 1. Any templates which are included in the default environment (e.g
> overcloud-resource-registry-puppet.yaml), must expose their parameters
> via overcloud-without-mergepy.yaml
> 
> 2. Any templates which are included in the default environment, but via a
> "noop" implementation *may* expose their parameters provided they are
> common and not implementation/vendor specific.

This seems like a reasonable approach, although that "may" still leaves
a lot of room for bikeshedding. ;-)

It might be good to say that in this case it is "preferred" to use a
top-level param, but if there's a 

Re: [openstack-dev] [release][stable] OpenStack 2014.2.4 (juno)

2015-11-20 Thread Matt Riedemann



On 11/20/2015 6:43 AM, Sean Dague wrote:

On 11/19/2015 08:56 PM, Rochelle Grober wrote:

Again, my plea to leave the Juno repository on git.openstack.org, but locked down 
to enable at least grenade testing for Juno->Kilo upgrades.  For upgrade 
testing purposes, python2.6 is not needed as any cloud would have to upgrade 
python before upgrading to kilo.  The testing could/should be limited to only 
occurring when Kilo backports are proposed.  The nodepool requirements should be 
very small except for the pre-release periods remaining for Kilo, especially if 
the testing is restricted to grenade only.

Thanks for the ear. I'm expecting to participate in the stable releases team, 
and to bring a developer along with me;-)


This really isn't a good idea.

Grenade makes sure the old side works first, with Tempest. Tempest won't
support juno any more, you'd need to modify the job to do something else
here.

Often times there are breaks due to upstream changes that require fixes
on the old side, which is now impossible.

Juno being eol means we expect you are already off of it, not that you
should be soon.

-Sean



I'm assuming Tempest will create a tag when it drops support for Juno. 
So theoretically you could bring up Juno 2014.2.4, run Tempest at the 
juno-eol tag, then upgrade to stable/kilo, checkout trunk Tempest and 
run that against the (new) kilo side.


If there are breaks on the (old) juno side, like upstream dependency 
issues (which was usually our problem), then we can't cap them in the 
code since the stable/juno branch is gone. You could somehow hack that 
into grenade maybe, but that gets pretty gross and doesn't reflect 
reality for downstream consumers of the 2014.2.4 release. This, in my 
mind, is the biggest reason we wouldn't be doing this kind of upgrade 
testing on stable/kilo changes.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Approved but not implemented specs

2015-11-20 Thread Igor Kalnitsky
Hey Fuelers,

Today I noticed that some of Fuel specs have been merged for 7.0 while
the features themselves weren't landed. It's kind of confusing since it
seems like the feature was implemented in 7.0 while it's not.

What do you think guys about moving such specs into 8.0 folder? I
believe it's a way to better understand what we're doing now, and what
was done previously.

Thanks,
Igor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][security] what is OK to put in DEBUG logs?

2015-11-20 Thread Ben Nemec
On 11/19/2015 06:00 AM, Lucas Alvares Gomes wrote:
> Hi,
> 
>> Also keep in mind that DEBUG logging, while still should have some masking
>> of data, since it is explicitly called out (or should be) as not safe for
>> production, can contain some " sensitive" data. Credentials should still be
>> scrubbed, but I would say the swift temp URL is something that may line up
>> with this more flexible level of filtering logs.
>>
>> Now, if the service (and I don't think ironic suffers from this issue) is
>> only really runnable with debug on (because there is no useful information
>> otherwise) then I would aim to fix that before putting even potentially
>> sensitive data in DEBUG.
>>
>> The simple choice is if there is even a question, don't log it (or log it in
>> a way that obscures the data but still shows unique use).
>>
> 
> I agree with Morgan's statement here.
> 
> And just throwing an idea in the wind here, we could make use of the
> python logging filters to create a filter for sensitive information.
> We probably need one already to avoid having to do things like [1] in
> the code.

We actually have a thing to do that:
https://github.com/openstack/oslo.utils/blob/master/oslo_utils/strutils.py#L215

You might need to add a new key to the list of things to mask, but I
think it should be able to handle masking the log message for you.  I
don't know whether configdrive is a globally sensitive key, but if not
then we probably need to revisit the question of whether to allow
extending the key list dynamically in the consuming application instead
of having only the one hard-coded list.  More context here:
https://bugs.launchpad.net/oslo.utils/+bug/1407811
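For context, a minimal sketch of how that helper is used; the set of
masked keys is version-dependent, so the output shown is illustrative and
assumes 'configdrive' is not on the built-in list:

  from oslo_utils import strutils

  # Values for keys on strutils' built-in sensitive list (e.g.
  # 'admin_pass', 'password') are replaced with '***'; keys not on the
  # list pass through unchanged.
  msg = "deploy opts: {'admin_pass': 'hunter2', 'configdrive': 'blob'}"
  print(strutils.mask_password(msg))
  # -> deploy opts: {'admin_pass': '***', 'configdrive': 'blob'}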

> 
> [1] 
> https://github.com/openstack/ironic/blob/812ed66ccabfcb1c1862951ea95a68b9d93b1672/ironic/drivers/modules/iscsi_deploy.py#L275-L284
> 
> Cheers,
> Lucas
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-20 Thread Armando M.
On 20 November 2015 at 14:07, Kyle Mestery  wrote:

> On Wed, Nov 18, 2015 at 8:14 PM, Armando M.  wrote:
>
>> Hi Neutrites,
>>
>>
> Neutrinos?
>

I am still experimenting to see what sticks...So far I got Neutrinos,
Neutronians, and Neutrites...Neutrinos is the one I like most, but
Neutronians is cool too. Ludicrous, uh?


>
>> We are nearly two weeks away from the end of Mitaka 1.
>>
>> I am writing this email to invite you to be mindful of what you review,
>> especially in the next couple of weeks. Whenever you have the time to
>> review code, please consider giving priority to the following:
>>
>>- Patches that target blueprints targeted for Mitaka
>>;
>>- Patches that target bugs that are either critical
>>
>> 
>>or high
>>
>> 
>>;
>>- Patches that target rfe-approved
>>
>> 
>> 'bugs';
>>- Patches that target specs
>>
>> 
>>  that
>>have followed the most current submission process
>>
>>;
>>
>> Everything else should come later, no matter how easy or interesting it
>> is to review; remember that as a community we have the collective duty to
>> work towards a common (set of) target(s), as being planned in collaboration
>> with the Neutron Drivers
>>  team and the
>> larger core 
>> team.
>>
>> I would invite submitters to ensure that the Launchpad resources
>> (blueprints, and bug report) capture the most updated view in terms of
>> patches etc. Work with your approver to help him/her be focussed where it
>> matters most.
>>
>> Finally, we had plenty of discussions at the design summit
>> ,
>> and some of those discussions will have to be followed up with actions (aka
>> code in OpenStack lingo). Even though, we no longer have deadlines for
>> feature submission, I strongly advise you not to leave it last minute. We
>> can only handle so much work for any given release, and past experience
>> tells us that we can easily hit a breaking point at around the ~30
>> blueprint mark.
>>
>> Once we reach it, it's likely we'll have to start pushing back work for
>> Mitaka and allow us some slack; things are fluid as we all know, and the
>> random gate breakage is always lurking round the corner! :)
>>
>>
> Thanks for sending this out Armando. Keeping focus is important, and your
> occasional emails reminding people are super useful. I find them useful,
> and as the previous PTL, I can backup the fact that the team should be
> focusing on specific reviews.
>

Appreciate the endorsement :)


>
> Thanks!
> Kyle
>
>
>> Happy hacking,
>> Armando
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Dmitry Nikishov
Stanislaw,

Proposing patches could be a viable option long-term; however, by the time
these patches make it upstream, Fuel will be using CentOS 7 with systemd.

On Fri, Nov 20, 2015 at 4:05 PM, Stanislaw Bogatkin 
wrote:

> Dmitry, as we work on opensource - it would be really nice to propose
> patches to upstream for non-Fuel services. But if it is not an option -
> using puppet makes sense to me.
>
> On Fri, Nov 20, 2015 at 11:01 PM, Dmitry Nikishov 
> wrote:
>
>> Stanislaw,
>>
>> I want to clarify: there are 2 types of services, run on the Fuel node:
>> - Those, which are a part of Fuel (astute, nailgun etc)
>> - Those, which are not (e.g. atop)
>>
>> Capabilities for the former can easily be managed via post-install
>> scripts, embedded in respective package spec file (since specs are a part
>> of fuel-* repo). This is a very good idea.
>> Capabilities for the latter will have to be taken care of via either
>> a. some external utility (puppet)
>> b. rebuilding respective package with updated spec
>>
>> I'd say that (a) is still more convenient.
>>
>> Another option would be to have a fine-grained control only on Fuel
>> services and leave all the other at their defaults.
>>
>> On Fri, Nov 20, 2015 at 1:19 PM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Dmitry, I just propose the way I think is right, because it's strange
>>> enough - install package from *.deb file and then set any privileges to it
>>> by third-party utility. Set permissions for app now mostly managed by
>>> post-install scripts. Moreover - if it isn't - it should, cause if you set
>>> capabilities by puppet there always will be a gap between installation and
>>> setting permissions, so you will must bound package installation process
>>> with setting permissions by puppet - other way you will have no way to use
>>> your app.
>>>
>>> Setting setuid bits on apps is not a good idea - it is why linux
>>> capabilities were introduced.
>>>
>>> On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov >> > wrote:
>>>
 Stanislaw,

 In my opinion the whole feature shouldn't be in the separate package
 simply because it will actually affect the code of many, if not all,
 components of Fuel.

 The only services whose capabilities will have to be managed by puppet
 are those, which are installed from upstream packages (e.g. atop) -- not
 built from fuel-* repos.

 Supervisord doesn't seem to use Linux capabilities, it does setuid
 instead:
 https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326

 On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
 sbogat...@mirantis.com> wrote:

> Dmitry, I mean whole feature.
> Btw, why do you want to grant capabilities via puppet? It should be
> done by post-install package section, I believe.
>
> Also I don't know if supervisord can bound process capabilities like
> systemd can - we could use this opportunity too.
>
> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> My main concern with using linux capabilities/acls on files is
>> actually puppet support or, actually, the lack of it. ACLs are possible
>> AFAIK, but we'd need to write a custom type/provider for capabilities. I
>> suggest to wait with capabilities support till systemd support.
>>
>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Stanislaw, do you mean the whole feature, or just a user? Since
>>> feature would require actually changing puppet code.
>>>
>>> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I believe it should be done via package spec as a part of
 installation.

 On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Hello folks,
>
> I have updated the spec, please review and share your thoughts on
> it: https://review.openstack.org/#/c/243340/
>
> Thanks.
>
> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Matthew,
>>
>> sorry, didn't mean to butcher your name :(
>>
>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matther,
>>>
>>> I totally agree that each daemon should have it's own user which
>>> should be created during installation of the relevant package. 
>>> Probably I
>>> didn't state this clear enough in the spec.
>>>
>>> However, there are security requirements in place that root
>>> should not be used at all. This means that there should be a some 
>>> kind of
>>> maintenance or system user ('fueladmin

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread Chris Dent

On Fri, 20 Nov 2015, gord chung wrote:

i think a lot of the complexity we have in versioning is that the projects
are too silo'd. i think some of the versioning issues would be irrelevant if
the producer knew its consumers before sending rather than producers just
tossing out a chunk of data (versioned schema or not) and considering their
job complete once it leaves its own walls. the producer doesn't necessarily
have to be the individual project teams but whoever the producer of
notifications is, it should know its audience.


To me this is entirely contrary to the point of having notifications
as a generic concept in the OpenStack environment. To me the point is
to ensure that it is possible for the audience to be and do _anything_.

We've become so accustomed to some of the misfeatures in the messaging
architecture that we've lost track of the fact that it could be an
event pool on which we have diverse listeners that the producers have
no requirement to know anything about. We could have nova-compute spinning
along shouting "I made a VM. I made another VM. Hey, I made another
VM" and "This VM is hot. Halp, I am oversubscribed." All sorts of
tools and devices need to be able to hear that stuff and choose for
themselves what they might do with it.

(This is similar to the reason we have well-formed HTTP APIs: It is so
we can have unexpected clients that do unexpected things.)

It is certainly the case that if we're going to have schematized
and versioned notifications it is critical that the schema are
discoverable in a language independent fashion.

Sometimes it is hard, though, to be convinced that such formalisms are
quite what we really need. From the consumers end the concern is "do
you have these several keys that I care about?" and, as has been said,
the rest is noise. It sounds like microversioned notifications which
almost never version on the major axis might be able to provide this.

We can't allow the introduction of either the formalisms or
discoverability thereof to grant license to change stuff willy nilly.
Nor should we be building formalisms that are defenses against an
architecture that's sub-optimal. We need to evolve the formalisms and
the architecture toward the ideal.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent                                   tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Call for review focus

2015-11-20 Thread Kyle Mestery
On Wed, Nov 18, 2015 at 8:14 PM, Armando M.  wrote:

> Hi Neutrites,
>
>
Neutrinos?


> We are nearly two weeks away from the end of Mitaka 1.
>
> I am writing this email to invite you to be mindful of what you review,
> especially in the next couple of weeks. Whenever you have the time to
> review code, please consider giving priority to the following:
>
>- Patches that target blueprints targeted for Mitaka
>;
>- Patches that target bugs that are either critical
>
> 
>or high
>
> 
>;
>- Patches that target rfe-approved
>
> 
> 'bugs';
>- Patches that target specs
>
> 
>  that
>have followed the most current submission process
>;
>
> Everything else should come later, no matter how easy or interesting it is
> to review; remember that as a community we have the collective duty to work
> towards a common (set of) target(s), as being planned in collaboration with
> the Neutron Drivers
>  team and the
> larger core  team.
>
> I would invite submitters to ensure that the Launchpad resources
> (blueprints, and bug report) capture the most updated view in terms of
> patches etc. Work with your approver to help him/her be focussed where it
> matters most.
>
> Finally, we had plenty of discussions at the design summit
> ,
> and some of those discussions will have to be followed up with actions (aka
> code in OpenStack lingo). Even though, we no longer have deadlines for
> feature submission, I strongly advise you not to leave it last minute. We
> can only handle so much work for any given release, and past experience
> tells us that we can easily hit a breaking point at around the ~30
> blueprint mark.
>
> Once we reach it, it's likely we'll have to start pushing back work for
> Mitaka and allow us some slack; things are fluid as we all know, and the
> random gate breakage is always lurking round the corner! :)
>
>
Thanks for sending this out Armando. Keeping focus is important, and your
occasional emails reminding people are super useful. I find them useful,
and as the previous PTL, I can backup the fact that the team should be
focusing on specific reviews.

Thanks!
Kyle


> Happy hacking,
> Armando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Stanislaw Bogatkin
Dmitry, as we work on opensource - it would be really nice to propose
patches to upstream for non-Fuel services. But if it is not an option -
using puppet makes sense to me.

On Fri, Nov 20, 2015 at 11:01 PM, Dmitry Nikishov 
wrote:

> Stanislaw,
>
> I want to clarify: there are 2 types of services, run on the Fuel node:
> - Those, which are a part of Fuel (astute, nailgun etc)
> - Those, which are not (e.g. atop)
>
> Capabilities for the former can easily be managed via post-install
> scripts, embedded in respective package spec file (since specs are a part
> of fuel-* repo). This is a very good idea.
> Capabilities for the latter will have to be taken care of via either
> a. some external utility (puppet)
> b. rebuilding respective package with updated spec
>
> I'd say that (a) is still more convenient.
>
> Another option would be to have a fine-grained control only on Fuel
> services and leave all the other at their defaults.
>
> On Fri, Nov 20, 2015 at 1:19 PM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I just propose the way I think is right, because it's strange
>> enough - install package from *.deb file and then set any privileges to it
>> by third-party utility. Set permissions for app now mostly managed by
>> post-install scripts. Moreover - if it isn't - it should, cause if you set
>> capabilities by puppet there always will be a gap between installation and
>> setting permissions, so you will must bound package installation process
>> with setting permissions by puppet - other way you will have no way to use
>> your app.
>>
>> Setting setuid bits on apps is not a good idea - it is why linux
>> capabilities were introduced.
>>
>> On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov 
>> wrote:
>>
>>> Stanislaw,
>>>
>>> In my opinion the whole feature shouldn't be in the separate package
>>> simply because it will actually affect the code of many, if not all,
>>> components of Fuel.
>>>
>>> The only services whose capabilities will have to be managed by puppet
>>> are those, which are installed from upstream packages (e.g. atop) -- not
>>> built from fuel-* repos.
>>>
>>> Supervisord doesn't seem to use Linux capabilities, it does setuid
>>> instead:
>>> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>>>
>>> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I mean whole feature.
 Btw, why do you want to grant capabilities via puppet? It should be
 done by post-install package section, I believe.

 Also I don't know if supervisord can bound process capabilities like
 systemd can - we could use this opportunity too.

 On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> My main concern with using linux capabilities/acls on files is
> actually puppet support or, actually, the lack of it. ACLs are possible
> AFAIK, but we'd need to write a custom type/provider for capabilities. I
> suggest to wait with capabilities support till systemd support.
>
> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Stanislaw, do you mean the whole feature, or just a user? Since
>> feature would require actually changing puppet code.
>>
>> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Dmitry, I believe it should be done via package spec as a part of
>>> installation.
>>>
>>> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Hello folks,

 I have updated the spec, please review and share your thoughts on
 it: https://review.openstack.org/#/c/243340/

 Thanks.

 On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Matthew,
>
> sorry, didn't mean to butcher your name :(
>
> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Matther,
>>
>> I totally agree that each daemon should have it's own user which
>> should be created during installation of the relevant package. 
>> Probably I
>> didn't state this clear enough in the spec.
>>
>> However, there are security requirements in place that root
>> should not be used at all. This means that there should be a some 
>> kind of
>> maintenance or system user ('fueladmin'), which would have enough
>> privileges to configure and manage Fuel node (e.g. run "sudo puppet 
>> apply"
>> without password, create mirrors etc). This also means that certain 
>> fuel-
>> packages would be required to have their files accessible to that 
>> user.
>> That's the idea

[openstack-dev] [horizon][bug] Mitigation to BREACH vulnerability

2015-11-20 Thread BARTRA, RICK
Until Django releases an official patch for the BREACH vulnerability, I think
we should take a look at django-debreach. The django-debreach package provides
some, possibly enough, protection against a BREACH attack. Its integration into
Horizon is straightforward, following the configuration found here:
https://pypi.python.org/pypi/django-debreach
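For anyone evaluating the reviews below, the wiring is roughly the
settings.py change sketched here. The middleware class paths are quoted
from memory of the django-debreach README and should be verified against
the package documentation (and the Horizon patch itself) rather than
copied from this sketch:

  # settings.py sketch -- class paths are assumptions, verify before use
  INSTALLED_APPS = list(INSTALLED_APPS) + ['debreach']

  MIDDLEWARE_CLASSES = (
      # Appends a random-length comment to each HTML response so the
      # compressed length of otherwise identical pages varies.
      'debreach.middleware.RandomCommentMiddleware',
      # Masks the CSRF token per request so it cannot be recovered via
      # compression-ratio probing.
      'debreach.middleware.CSRFCryptMiddleware',
  ) + tuple(MIDDLEWARE_CLASSES)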


The proposed change to Horizon: https://review.openstack.org/#/c/247838/

The proposed change to Requirements: https://review.openstack.org/#/c/248233/


Regards,

Rick Bartra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 3:00 PM, Sylvain Bauza wrote:



Le 20/11/2015 17:36, Matt Riedemann a écrit :



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions, they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).



3. update instance_actions API so that you can get instance_actions
for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann


__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support
ticket.
4. The admin checks for deleted instances on that project, finds the
one in question.
5. Calls off to os-instance-actions with that instance uuid to see the
deleted action and the user that did it (user A).
6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382



Okay, that seems a good usecase for operators. Coolness, I'm fine with
soft-deleting instance_actions and provide a microversion for getting
actions for a known instance UUID, like Andrew said.


The plan right now (at least agreed to between myself and sdague) is not 
to soft delete instance actions, but to archive and hard-delete them 
when archiving instances.


As for allowing lookups on instance_actions for deleted instances, I 
plan on working that via this blueprint (still need to write the spec):


https://blueprints.launchpad.net/nova/+spec/os-instance-actions-read-deleted-instances





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Clint Byrum
Excerpts from Matt Riedemann's message of 2015-11-20 10:58:55 -0800:
> 
> On 11/20/2015 10:19 AM, Alexis Lee wrote:
> > We just had a fun discussion in IRC about whether foreign keys are evil.
> > Initially I thought this was crazy but mordred made some good points. To
> > paraphrase, that if you have a scale-out app already it's easier to
> > manage integrity in your app than scale-out your persistence layer.
> >
> > Currently the Nova DB has quite a lot of FKs but not on every relation.
> > One example of a missing FK is between Instance.uuid and
> > BandwidthUsageCache.uuid.
> >
> > Should we drive one way or the other, or just put up with mixed-mode?
> 
> For the record, I hate the mixed mode.
> 
> >
> > What should be the policy for new relations?
> 
> I prefer consistency, so if we're adding new relationships I'd prefer to 
> see that they have foreign keys.
> 

If FedEx preferred consistency over efficiency, then they'd only just
now be able to exist due to drones being available. Otherwise they'd
have to have had a way to cover the last mile of delivery using some
sort of air travel, to remain consistent.

What I'm saying is, sometimes you need a recommissioned 727 to carry
your package, and sometimes you need a truck. Likewise, there are times
when de-normalization is called for. As I said in my other reply to Mr.
Bayer, we don't really know that this is that time, because we aren't
measuring. However, if it seems like a close call when speculating,
then it is probably prudent to remain consistent with most other things
in the system.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Jay Pipes
Why not "recheck fuel" to align with how other OpenStack 3rd party CI 
hooks work? See: recheck xen-server or recheck hyper-v


Best,
-jay

On 11/20/2015 05:24 AM, Igor Belikov wrote:

Alexey,

First of all, “refuel” sounds very cool.
Thanks for raising this topic, I would like to hear more opinions here.
On one hand, a different keyword would help to prevent unnecessary
infrastructure load, I agree with you on that. On the other hand,
using existing keywords helps to avoid confusion and provides expected
behaviour for our CI jobs. Far too many times I’ve heard questions like
“Why doesn’t ‘recheck’ retrigger Fuel CI jobs?”.

So I would like to hear more thoughts here from our developers. And I
will investigate how other third party CI systems handle this question.
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com 







On 20 Nov 2015, at 16:00, Alexey Shtokolov <ashtoko...@mirantis.com> wrote:

Igor,

Thank you for this feature.
Afaiu recheck/reverify is mostly useful for internal CI-related fails.
And Fuel CI and Openstack CI are two different infrastructures.
So if smth is broken on Fuel CI, "recheck" will restart all jobs on
Openstack CI too. And opposite case works the same way.

Probably we should use another keyword for Fuel CI to prevent an extra
load on the infrastructure? For example "refuel" or smth like this?

Best regards,
Alexey Shtokolov

2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin mailto:sbogat...@mirantis.com>>:

Igor,

it is much more clear for me now. Thank you :)

On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov
mailto:ibeli...@mirantis.com>> wrote:

Hi Stanislaw,

The reason behind this is simple - deployment tests are heavy.
Each deployment test occupies a whole server for ~2 hours; for
each commit we have 2 deployment tests (for current
fuel-library master), and that's only because we don't test
CentOS deployment for now.
If we assume that developers will retrigger deployment tests
only when a retrigger would actually solve the failure - it's
still not smart in terms of HW usage to retrigger both tests
when only one has failed, for example.
And there are cases when a retrigger just won't do it and a CI
Engineer must manually erase the existing environment on the slave
or fix it by other means, so it's better when a CI Engineer
looks through the logs before each retrigger of a deployment test.

Hope this answers your question.

--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com 


On 20 Nov 2015, at 13:57, Stanislaw Bogatkin
mailto:sbogat...@mirantis.com>> wrote:

Hi Igor,

would you be so kind as to tell why fuel-library deployment tests
don't support this? Maybe there is a link to previous
talks about it?

On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov
mailto:ibeli...@mirantis.com>> wrote:

Hi,

I’d like to inform you that all jobs running on Fuel CI
(with the exception of fuel-library deployment tests) now
support retriggering via “recheck” or “reverify” comments
in Gerrit.
The exact regex is the same one used in OpenStack-Infra's
zuul and can be found here:

https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
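
For illustration only, here is a rough sketch of the kind of comment-trigger
pattern involved (an approximation written for this note, not the exact
expression in the file linked above):

    import re

    # Approximate shape of a Gerrit comment trigger: the comment body must
    # contain "recheck" or "reverify" on a line of its own (case-insensitive).
    TRIGGER = re.compile(r'(?i)^\s*(recheck|reverify)\s*$', re.MULTILINE)

    for comment in ("recheck", "Reverify", "please recheck this"):
        print(comment, "->", bool(TRIGGER.search(comment)))
    # recheck -> True, Reverify -> True, "please recheck this" -> False

A project-scoped keyword (e.g. "recheck fuel") would simply be another
alternative in the same pattern.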

The CI team kindly asks you not to abuse this option;
unfortunately, not every failure can be solved by
retriggering.
And, to stress this once again: fuel-library
deployment tests don't support this, so you still have to
ask for a retrigger in the #fuel-infra IRC channel.

Thanks for your attention.
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com 








__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2015-11-20 11:29:31 -0800:
> 
> On 11/20/2015 11:19 AM, Alexis Lee wrote:
> > We just had a fun discussion in IRC about whether foreign keys are evil.
> > Initially I thought this was crazy but mordred made some good points. To
> > paraphrase, that if you have a scale-out app already it's easier to
> > manage integrity in your app than scale-out your persistence layer.
> 
> I've had this argument with mordred before, and it seems again there's
> the same misunderstanding going on:
> 
> 1. Your application can have **conceptual** foreign keys in it, without
> actually having foreign keys **for real** in the database.  This means
> your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
> most importantly your **database still uses normal form**, that is, any
> row that refers to another does it based on a set of columns that
> exactly match to the primary key of a single table elsewhere (not to
> multiple tables, not to a function of the columns cast from int to
> string and concatenated to the value in the other table etc, an *exact
> match*).   I'm sure that mordred agrees with all of these practices,
> however when one says "we aren't going to use foreign keys anymore",
> typically it is all these critical schema design practices that go out
> the window.  Put another way, the foreign key concept not only
> constrains data in a real database; just the concept of them constrains
> the **developer** to use correct normal form.
> 

Mike, thanks for making that clarification. I agree that conceptual
FKs are not the same as FK constraints in the DB. Joins are not demons.
:)

To be clear, while what you say above is all true, normal form isn't
actually a goal of any system. It's a tactic one can use with a well known
efficiency profile. But there are times when it costs more than other
more brutal, less civilized methods of database design and usage. If we
don't measure our efficiency, we won't actually know if this is one of
those times or not.

> 2. Here's the part mordred doesn't like - the FK is actually in the
> database for real.   This is because they slow down inserts, updates,
> and deletes, because they must be checked.   To which I say, no such
> performance issue has been observed or documented in Openstack, we
> aren't a 1 million TPS financial system, so this is vastly premature
> optimization.
> 

I agree with you that this is unmeasured. I don't agree that we are not a
1 million TPS financial system, because the goal of running a cloud for
many is, in fact, to make money. So while we may not have an example of
a cloud running at 1 million TPS, it's not something we should dismiss
too quickly.

That said, the measurement should come first. What I'd like to show
is how many TPS we do actually do on boots, deletes, etc. etc. I'm
working on it now, and I'd encourage people to join the effort on the
counter-inspection QA spec if they want to get started measuring things.
We're taking baby steps right now, but eventually I see us producing a
lot of data that should be helpful in answering some of these questions.
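
To make the "measure first" point concrete, here is a rough sketch of one way
to sample write throughput from MySQL's global status counters (the counter
names are standard MySQL status variables; the sampling script itself is just
an illustration, not part of the counter-inspection spec):

    import time

    import pymysql  # assumes the PyMySQL driver is installed

    def sample_writes(conn):
        """Return cumulative counts of INSERT/UPDATE/DELETE statements."""
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
                        "('Com_insert', 'Com_update', 'Com_delete')")
            return {name: int(value) for name, value in cur.fetchall()}

    # Connection details below are placeholders.
    conn = pymysql.connect(host='localhost', user='root', password='', db='nova')
    before = sample_writes(conn)
    time.sleep(60)
    after = sample_writes(conn)
    for name in sorted(before):
        print(name, (after[name] - before[name]) / 60.0, "per second")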

> Also as far as #2, customers and operators *regularly* run scripts and
> queries to modify openstack databases, particularly to delete soft
> deleted rows.  These get blocked *all the time* by foreign key
> constraints.  They are doing their job, and they are needed as a final
> guard against data integrity issues.  We of course handle referential
> integrity in the application layer as well via SQLAlchemy ORM constructs.
> 

Nobody ever doubts that there are times where database-side FK constraints
help prevent costly mistakes. The question is always: at what cost? Right
now, I'd say we don't really know because we're not measuring. That's
fine, if you want to mitigate risk, one strategy is to buy insurance,
and that's what the FK's in the DB are: insurance.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate][chef] cookbook-openstack-dnsaas

2015-11-20 Thread JJ Asghar

Hey everyone!

Per our goals for the M cycle, I've started
cookbook-openstack-dnsaas[1]. It's under my namespace for the time
being; but shouldn't be much longer.

I walked through the developer install guide[2] and automated the build.
I'm posting here because I can't find a "production" install guide, and
was hoping that maybe someone will point me in the direction of the diff
between the two.

For the community as a whole though, after we get this "publicly"
released, you'll be able to do a `chef exec kitchen verify` and you'll
have a vagrant box with Designate built in just a few minutes.

Would the designate team like me to come to ya'lls IRC meeting to talk
about this more?

[1]: https://github.com/jjasghar/cookbook-openstack-dnsaas
[2]: http://docs.openstack.org/developer/designate/getting-started.html


-J
-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sylvain Bauza



On 20/11/2015 17:36, Matt Riedemann wrote:



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions; they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).
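
For context, the archive code follows an "insert into the shadow table from a
select, then delete the originals" pattern. Below is a simplified SQLAlchemy
sketch (with made-up minimal table definitions, not the real nova models) of
how instance_actions rows could be pulled along when their soft-deleted
instance is archived:

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            create_engine, select)

    meta = MetaData()
    # Minimal stand-ins for the real tables (the real schemas have many more columns).
    instances = Table("instances", meta,
                      Column("uuid", String(36), primary_key=True),
                      Column("deleted", Integer, default=0))
    instance_actions = Table("instance_actions", meta,
                             Column("id", Integer, primary_key=True),
                             Column("instance_uuid", String(36)),
                             Column("action", String(255)))
    shadow_instance_actions = Table("shadow_instance_actions", meta,
                                    Column("id", Integer, primary_key=True),
                                    Column("instance_uuid", String(36)),
                                    Column("action", String(255)))

    engine = create_engine("sqlite://")  # in-memory stand-in for the real DB
    meta.create_all(engine)

    with engine.begin() as conn:
        # UUIDs of instances that are soft deleted and therefore being archived.
        doomed = select(instances.c.uuid).where(instances.c.deleted != 0)
        rows = select(instance_actions).where(
            instance_actions.c.instance_uuid.in_(doomed))
        # Copy the actions into the shadow table, then remove the originals.
        conn.execute(shadow_instance_actions.insert().from_select(
            list(instance_actions.c.keys()), rows))
        conn.execute(instance_actions.delete().where(
            instance_actions.c.instance_uuid.in_(doomed)))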



3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann





If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support 
ticket.
4. The admin checks for deleted instances on that project, finds the 
one in question.
5. Calls off to os-instance-actions with that instance uuid to see the 
deleted action and the user that did it (user A).

6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382



Okay, that seems like a good use case for operators. Coolness, I'm fine with
soft-deleting instance_actions and providing a microversion for getting
actions for a known instance UUID, like Andrew said.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Igor Kalnitsky
Hey Timur,

> I think we can change all docker-based code from our tests / scripts
> in 2-3 days

That sounds good.


> Do we really want to remove docker containers from master node?

Yes, we do. Currently we're suffering from using the container-based
architecture on the master node, and since we've decided to change our
*upgrade* approach (where we stop gaining benefits from containers), it would
be nice to get rid of them and fix a bunch of docker-related bugs.


> How long will it take to provide the experimental MOS 8.0 build
> without docker containers?

I think we need to ask Vladimir Kozhukalov here.


> Are we ready to change the date of MOS 8.0 release and make this
> change?

No, we aren't ready to change the release date. If we don't have time for it,
let's postpone it till 9.0.

Regards,
Igor

On Fri, Nov 20, 2015 at 12:41 PM Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Igor and Alexander,
>
> >But I'd like to hear from QA how do we rely on container-based
>> infrastructure? Would it be hard to change our sys-tests in short
>> time?
>
> The QA team doesn't have significant dependencies on docker images in our
> tests [0]. I think we can change all docker-based code in our tests / scripts
> in 2-3 days, but it is hard to predict when an ISO without docker images will
> pass all SWARM / Tempest tests.
>
> And one more time:
>
>> Of course, we can fix BVT / SWARM tests and don't use docker images in
>> our test suite (it shouldn't be really hard) but we didn't plan these
>> changes and in fact these changes can affect our estimates for many tasks.
>
>
> Do we really want to remove docker containers from the master node? How long
> will it take to provide the experimental MOS 8.0 build without docker
> containers?
> Are we ready to change the date of the MOS 8.0 release and make this change?
>
> [0]
> https://github.com/openstack/fuel-qa/search?p=2&q=docker&utf8=%E2%9C%93
>
>
> On Fri, Nov 20, 2015 at 7:57 PM, Bogdan Dobrelya 
> wrote:
>
>> On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
>> > Bogdan,
>> >
>> >>> So, we could only deprecate the docker feature for the 8.0.
>> >
>> > What do you mean exactly when saying 'deprecate docker feature'? I can
>> > not even imagine how we can live with and without docker containers at
>> > the same time. Deprecation is usually related to features which directly
>> > impact UX (maybe I am wrong).
>>
>> I may have understood this [0] wrong, and the docker containers are not
>> user-visible, but that depends on which type of users we mean :-)
>> Sorry for not being clear.
>>
>> [0]
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>>
>> >
>> > Guys,
>> >
>> > When you estimate risks of the docker removal, please take into account
>> > not only release deadlines but also the overall product quality. The
>> > thing is that continuing to use containers makes it much more complicated
>> > (and thus less stable) to implement new upgrade flow (upgrade tarball
>> > can not be used any more, we need to re-install the host system).
>> > Switching from Centos 6 to Centos 7 is also much more complicated with
>> > docker. Every single piece of Fuel system is going to become simpler and
>> > easier to support.
>> >
>> > Of course, I am not suggesting to jump overboard into cold water without
>> > a life jacket. Transition plan, checklist, green tests, even spec etc.
>> > are assumed without saying (after all, I was not born yesterday). Of
>> > course, we won't merge changes until everything is green. What is the
>> > problem to try to do this and postpone if not ready in time? And please
>> > do not confuse these two cases: switching from plain deployment to
>> > containers is complicated, but switching from docker to plain is much
>> > simpler.
>> >
>> >
>> >
>> >
>> > Vladimir Kozhukalov
>> >
>> > On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya <
>> bdobre...@mirantis.com
>> > > wrote:
>> >
>> > On 20.11.2015 15:10, Timur Nurlygayanov wrote:
>> > > Hi team,
>> > >
>> > > I think it is too late to make such significant changes for MOS 8.0
>> now,
>> > > but I'm ok with the idea to remove docker containers in the future
>> > > releases if our dev team want to do this.
>> > > Any way, before we will do this, we need to plan how we will
>> perform
>> > > updates between different releases with and without docker
>> containers,
>> > > how we will manage requirements and etc. In fact we have a lot of
>> > > questions and haven't answers, let's prepare the spec for this
>> change,
>> > > review it, discuss it with developers, users and project
>> management team
>> > > and if we haven't requirements to keep docker containers on
>> master node
>> > > let's remove them for the future releases (not in MOS 8.0).
>> > >
>> > > Of course, we can fix BVT / SWARM tests and don't use docker
>> images in
>> > > our test suite (it shouldn't be really hard) but we didn't plan

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi Igor and Alexander,

>But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?

The QA team doesn't have significant dependencies on docker images in our tests
[0]. I think we can change all docker-based code in our tests / scripts
in 2-3 days, but it is hard to predict when an ISO without docker images will
pass all SWARM / Tempest tests.

And one more time:

> Of course, we can fix BVT / SWARM tests and don't use docker images in our
> test suite (it shouldn't be really hard) but we didn't plan these changes
> and in fact these changes can affect our estimates for many tasks.


Do we really want to remove docker containers from the master node? How long
will it take to provide the experimental MOS 8.0 build without docker
containers?
Are we ready to change the date of the MOS 8.0 release and make this change?

[0] https://github.com/openstack/fuel-qa/search?p=2&q=docker&utf8=%E2%9C%93


On Fri, Nov 20, 2015 at 7:57 PM, Bogdan Dobrelya 
wrote:

> On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
> > Bogdan,
> >
> >>> So, we could only deprecate the docker feature for the 8.0.
> >
> > What do you mean exactly when saying 'deprecate docker feature'? I can
> > not even imagine how we can live with and without docker containers at
> > the same time. Deprecation is usually related to features which directly
> > impact UX (maybe I am wrong).
>
> I may have understood this [0] wrong, and the docker containers are not
> user-visible, but that depends on which type of users we mean :-)
> Sorry for not being clear.
>
> [0]
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>
> >
> > Guys,
> >
> > When you estimate risks of the docker removal, please take into account
> > not only release deadlines but also the overall product quality. The
> > thing is that continuing to use containers makes it much more complicated
> > (and thus less stable) to implement new upgrade flow (upgrade tarball
> > can not be used any more, we need to re-install the host system).
> > Switching from Centos 6 to Centos 7 is also much more complicated with
> > docker. Every single piece of Fuel system is going to become simpler and
> > easier to support.
> >
> > Of course, I am not suggesting to jump overboard into cold water without
> > a life jacket. Transition plan, checklist, green tests, even spec etc.
> > are assumed without saying (after all, I was not born yesterday). Of
> > course, we won't merge changes until everything is green. What is the
> > problem to try to do this and postpone if not ready in time? And please
> > do not confuse these two cases: switching from plain deployment to
> > containers is complicated, but switching from docker to plain is much
> > simpler.
> >
> >
> >
> >
> > Vladimir Kozhukalov
> >
> > On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya  > > wrote:
> >
> > On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > > Hi team,
> > >
> > > I think it is too late to make such significant changes for MOS 8.0
> now,
> > > but I'm ok with the idea to remove docker containers in the future
> > > releases if our dev team want to do this.
> > > Any way, before we will do this, we need to plan how we will
> perform
> > > updates between different releases with and without docker
> containers,
> > > how we will manage requirements and etc. In fact we have a lot of
> > > questions and haven't answers, let's prepare the spec for this
> change,
> > > review it, discuss it with developers, users and project
> management team
> > > and if we haven't requirements to keep docker containers on master
> node
> > > let's remove them for the future releases (not in MOS 8.0).
> > >
> > > Of course, we can fix BVT / SWARM tests and don't use docker
> images in
> > > our test suite (it shouldn't be really hard) but we didn't plan
> these
> > > changes and in fact these changes can affect our estimates for
> many tasks.
> >
> > I can only add that features just cannot be removed without a
> > deprecation period of 1-2 releases.
> > So, we could only deprecate the docker feature for the 8.0.
> >
> > >
> > > Thank you!
> > >
> > >
> > > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > > mailto:akostri...@mirantis.com>
> > >>
> > wrote:
> > >
> > > Hello, Igor.
> > >
> > > >But I'd like to hear from QA how do we rely on container-based
> > > infrastructure? Would it be hard to change our sys-tests in
> short
> > > time?
> > >
> > > At first glance, system tests are using docker only to fetch
> logs
> > > and run shell commands.
> > > Also, docker is used to run Rally.
> > >
> > > If there is an action to remove docker containers with carefull
> > > 

Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
Hi Sid,

Instead of documenting it, it was simple enough to automate it. Please try these
out:

https://review.openstack.org/248223

https://review.openstack.org/248226

Feel free to propose your own fixes or improvements. I think this is one of
the best parts of getting it all in sync upstream.

Best regards,
Ramy



From: Asselin, Ramy
Sent: Friday, November 20, 2015 11:03 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Hi Sid,

Sorry, you’re right: log server fix is here. [1]
I thought I documented the scp v1.9 plugin issue, but I don’t see it now. I 
will submit a patch to add that.

Thanks for raising these issues!

Ramy

[1] https://review.openstack.org/#/c/242800/


From: Siddharth Bhatt [mailto:siddharth.bh...@falconstor.com]
Sent: Friday, November 20, 2015 10:51 AM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & the nodepool puppet scripts so
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m pleased to say that there are now 3 components merged in the
puppet-openstackci repo [1]
This means 3rd party ci operators can now use the same scripts that the 
OpenStack I

[openstack-dev] Nova compute hooks are not called

2015-11-20 Thread Sundar Nadathur
Hello,
   I am trying to get the Nova Compute create_instance hook to be called. However,
although the VM gets started from Horizon properly, the hook does not get
called and there is no reference to it in the logs. When I run the hook script
from the command line, it runs fine.

Please let me know what I am missing. Thanks!

Details:  I have created a directory with the following structure:
Nova-Hooks/
 setup.py
 demo_nova_hooks/
 __init__.py
 simple.py

Nova-Hooks is in $PYTHONPATH. Both setup.py and simple.py have execute 
permissions for all.

I ran "setup.py install", restarted nova-compute service, verified that 
nova-compute is running, and then started the instance. Here are the contents 
of setup.py:
http://paste.openstack.org/show/479627/

Here are the contents of simple.py:
http://paste.openstack.org/show/479628/
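
For readers not familiar with the hook mechanism, here is a minimal sketch of
the usual pattern (my own approximation of the general shape, not the contents
of the pastes above): the hook class is registered under the 'nova.hooks'
entry point group, with the entry point name matching the hook (here
create_instance), and nova discovers it at service start.

    # setup.py (sketch)
    from setuptools import setup

    setup(
        name='demo_nova_hooks',
        version='1.0',
        packages=['demo_nova_hooks'],
        entry_points={
            'nova.hooks': [
                'create_instance=demo_nova_hooks.simple:SimpleHook',
            ],
        },
    )

    # demo_nova_hooks/simple.py (sketch)
    import logging

    LOG = logging.getLogger(__name__)

    class SimpleHook(object):
        """Called by nova before and after the hooked function runs."""

        def pre(self, *args, **kwargs):
            LOG.info("create_instance pre hook: args=%s", args)

        def post(self, rv, *args, **kwargs):
            LOG.info("create_instance post hook: rv=%s", rv)

One thing worth noting with this pattern: entry points only become visible to
nova-compute if the package is installed into the same Python environment the
service runs from, so that is worth double-checking when a hook silently
never fires.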

There are no references in /var/log/nova/nova-compute.log to the strings "hook",
"demo", "simple", etc. When I run the hook script from the command line, it
runs fine.


Cheers,
Sundar







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Dmitry Nikishov
Stanislaw,

I want to clarify: there are 2 types of services run on the Fuel node:
- Those which are a part of Fuel (astute, nailgun etc)
- Those which are not (e.g. atop)

Capabilities for the former can easily be managed via post-install scripts
embedded in the respective package spec files (since the specs are part of the
fuel-* repos). This is a very good idea.
Capabilities for the latter will have to be taken care of via either
a. some external utility (puppet)
b. rebuilding the respective package with an updated spec

I'd say that (a) is still more convenient.

Another option would be to have fine-grained control only over Fuel
services and leave all the others at their defaults.

On Fri, Nov 20, 2015 at 1:19 PM, Stanislaw Bogatkin 
wrote:

> Dmitry, I just propose the way I think is right, because it's strange
> enough to install a package from a *.deb file and then set privileges on it
> with a third-party utility. Permissions for an app are now mostly managed by
> post-install scripts. Moreover, if they aren't, they should be, because if you set
> capabilities with puppet there will always be a gap between installation and
> setting permissions, so you would have to couple the package installation process
> with setting permissions by puppet - otherwise you will have no way to use
> your app.
>
> Setting setuid bits on apps is not a good idea; that is why Linux
> capabilities were introduced.
>
> On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov 
> wrote:
>
>> Stanislaw,
>>
>> In my opinion the whole feature shouldn't be in the separate package
>> simply because it will actually affect the code of many, if not all,
>> components of Fuel.
>>
>> The only services whose capabilities will have to be managed by puppet
>> are those, which are installed from upstream packages (e.g. atop) -- not
>> built from fuel-* repos.
>>
>> Supervisord doesn't seem to use Linux capabilities, id does setuid
>> instead:
>> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>>
>> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Dmitry, I mean whole feature.
>>> Btw, why do you want to grant capabilities via puppet? It should be done
>>> by post-install package section, I believe.
>>>
>>> Also I doesn't know if supervisord can bound process capabilities like
>>> systemd can - we could use this opportunity too.
>>>
>>> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov >> > wrote:
>>>
 My main concern with using linux capabilities/acls on files is actually
 puppet support or, actually, the lack of it. ACLs are possible AFAIK, but
 we'd need to write a custom type/provider for capabilities. I suggest to
 wait with capabilities support till systemd support.

 On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw, do you mean the whole feature, or just a user? Since
> feature would require actually changing puppet code.
>
> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I believe it should be done via package spec as a part of
>> installation.
>>
>> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Hello folks,
>>>
>>> I have updated the spec, please review and share your thoughts on
>>> it: https://review.openstack.org/#/c/243340/
>>>
>>> Thanks.
>>>
>>> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Matthew,

 sorry, didn't mean to butcher your name :(

 On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Matther,
>
> I totally agree that each daemon should have it's own user which
> should be created during installation of the relevant package. 
> Probably I
> didn't state this clear enough in the spec.
>
> However, there are security requirements in place that root should
> not be used at all. This means that there should be a some kind of
> maintenance or system user ('fueladmin'), which would have enough
> privileges to configure and manage Fuel node (e.g. run "sudo puppet 
> apply"
> without password, create mirrors etc). This also means that certain 
> fuel-
> packages would be required to have their files accessible to that 
> user.
> That's the idea behind having a package which would create 
> 'fueladmin' user
> and including it into other fuel- packages requirements lists.
>
> So this part of the feature comes down to having a non-root user
> with sudo privileges and passwordless sudo for certain commands (like
> 'puppet apply ') for scripting.
>
> On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn <
>>>

Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Ben Swartzlander

On 11/20/2015 01:19 PM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).


A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service: i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.


There is a difference between the cinder service and the storage 
controller (or software system) that cinder manages. You can give the 
decryption keys to the cinder service without allowing the storage 
controller to see any plaintext.


As Walt says in the relevant patch [1], expecting cinder to do data 
management without ever performing I/O is unrealistic. The scenario 
where the compute admin doesn't trust the storage admin is 
understandable (although less important than other potential types of 
attacks IMO) but the scenario where the guy managing nova doesn't trust 
the guy managing cinder makes no sense at all.


I support moving the code into a common place, and doing responsible key 
management, and letting the cinder guys make sure that storage 
controllers never see plaintext in the cases when they're not supposed to.


-Ben

[1] https://review.openstack.org/#/c/247372/


If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vica-verca, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.

Regards,
Daniel




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Mike Bayer


On 11/20/2015 02:29 PM, Mike Bayer wrote:
> 
> 
> On 11/20/2015 11:19 AM, Alexis Lee wrote:
>> We just had a fun discussion in IRC about whether foreign keys are evil.
>> Initially I thought this was crazy but mordred made some good points. To
>> paraphrase, that if you have a scale-out app already it's easier to
>> manage integrity in your app than scale-out your persistence layer.

oh, I forgot the other use case, the "we might have these tables in two
different databases use case".  Again.   Start out with your two tables
together, put the FK there, have SQLAlchemy do the work of actually
maintaining this FK relationship.   The FKs can be removed at the schema
level at any time provided you aren't relying upon ON DELETE or ON
UPDATE constructs, which we're not.

If and when you split those tables out to two databases, I would
actually replace the relationship with one that uses GUIDs, and if the
table's primary key is not already a GUID (I favor integer primary
keys), there'd be a separate UNIQUE column on the parent table with the
GUID value.  Auto-incrementing integer primary key identifiers are
essential, but because they are auto-incrementing they are not quite as
portable to other databases, whereas GUIDs are extremely portable.  Then
continue using ForeignKeyConstraint and relationship() in SQLAlchemy as
always.
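
As a short sketch of the shape being described (model and column names are
made up for illustration): an integer autoincrement primary key for local
joins, plus a separate unique GUID column that stays meaningful if the child
table later moves to another database.

    import uuid

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)             # fast local identifier
        guid = Column(String(36), unique=True, nullable=False,
                      default=lambda: str(uuid.uuid4()))   # portable identifier

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        # References the portable GUID rather than the integer surrogate key,
        # so the link survives if 'child' is split out to another database.
        parent_guid = Column(String(36), ForeignKey('parent.guid'))
        parent = relationship(Parent)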



> 
> I've had this argument with mordred before, and it seems again there's
> the same misunderstanding going on:
> 
> 1. Your application can have **conceptual** foreign keys in it, without
> actually having foreign keys **for real** in the database.  This means
> your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
> most importantly your **database still uses normal form**, that is, any
> row that refers to another does it based on a set of columns that
> exactly match to the primary key of a single table elsewhere (not to
> multiple tables, not to a function of the columns cast from int to
> string and concatenated to the value in the other table etc, an *exact
> match*).   I'm sure that mordred agrees with all of these practices,
> however when one says "we aren't going to use foreign keys anymore",
> typically it is all these critical schema design practices that go out
> the window.  Put another way, the foreign key concept not only
> constrains data in a real database; just the concept of them constrains
> the **developer** to use correct normal form.
> 
> 2. Here's the part mordred doesn't like - the FK is actually in the
> database for real.   This is because they slow down inserts, updates,
> and deletes, because they must be checked.   To which I say, no such
> performance issue has been observed or documented in Openstack, we
> aren't a 1 million TPS financial system, so this is vastly premature
> optimization.
> 
> Also as far as #2, customers and operators *regularly* run scripts and
> queries to modify openstack databases, particularly to delete soft
> deleted rows.  These get blocked *all the time* by foreign key
> constraints.  They are doing their job, and they are needed as a final
> guard against data integrity issues.  We of course handle referential
> integrity in the application layer as well via SQLAlchemy ORM constructs.
> 
> 3. Another aspect of FKs is using them for ON DELETE CASCADE.   I think
> this is a great idea also, but I know that openstack apps are not
> comfortable with this.  So we don't need to use it (but we should someday).
> 
> 
> 
>>
>> Currently the Nova DB has quite a lot of FKs but not on every relation.
>> One example of a missing FK is between Instance.uuid and
>> BandwidthUsageCache.uuid.
>>
>> Should we drive one way or the other, or just put up with mixed-mode?
>>
>> What should be the policy for new relations?
> 
> +1 for correct normalization with foreign keys in all cases.   A slowdown
> that can be documented and illustrated will be needed to justify having
> that FK disabled or removed on the schema side only, but there
> would still be a "conceptual" foreign key (e.g. SQLAlchemy ForeignKey)
> in the model.
> 
>>
>> Do the answers to these questions depend on having a sane and
>> comprehensive archive/purge system in place?
>>
>>
>> Alexis (lxsli)
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Walter A. Boring IV

On 11/20/2015 10:19 AM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).

A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service: i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.

If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vica-verca, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.
Being able to limit the number of points where an encrypted volume can
be used unencrypted is obviously a good goal.
Unfortunately, it's entirely unrealistic to expect Cinder to never
have that access.
Cinder currently needs access to write data to volumes that are
encrypted for several operations.


1) copy volume to image
2) copy image to volume
3) backup

Cinder already has the ability to do this for encrypted volumes. What
Lisa Li's patch is trying to provide is a single point of shared code for
handling encryptors. os-brick seems like a reasonable place to put this,
as it could be shared with other services that need to do the same thing,
including Nova, if desired.
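
As a rough illustration of what the shared "dm setup" code amounts to
(cryptsetup is the standard tool for dm-crypt/LUKS mappings, but the helper
below is a made-up sketch, not the interface proposed for os-brick or used by
Nova):

    # sketch: opening/closing a dm-crypt (LUKS) mapping around a block device
    import subprocess

    def attach_plaintext(device, name, passphrase):
        """Map an encrypted volume so I/O against /dev/mapper/<name> sees plaintext."""
        subprocess.run(['cryptsetup', '--key-file=-', 'luksOpen', device, name],
                       input=passphrase.encode(), check=True)
        return '/dev/mapper/%s' % name

    def detach_plaintext(name):
        subprocess.run(['cryptsetup', 'luksClose', name], check=True)

Whichever service holds the key and calls something like this is the one that
sees plaintext, which is exactly the trade-off being debated above.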


There is also ongoing work to support attaching Cinder volumes to bare 
metal nodes.  The client that does the
attaching to a bare metal node, will be using os-brick connectors to do 
the volume attach/detach.  So, it makes
sense from this perspective as well that the encryptor code lives in 
os-brick.


I'm OK with the idea of moving common code into os-brick.  This was the
main reason os-brick was created to begin with.

Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Mike Bayer


On 11/20/2015 11:19 AM, Alexis Lee wrote:
> We just had a fun discussion in IRC about whether foreign keys are evil.
> Initially I thought this was crazy but mordred made some good points. To
> paraphrase, that if you have a scale-out app already it's easier to
> manage integrity in your app than scale-out your persistence layer.

I've had this argument with mordred before, and it seems again there's
the same misunderstanding going on:

1. Your application can have **conceptual** foreign keys in it, without
actually having foreign keys **for real** in the database.  This means
your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
most importantly your **database still uses normal form**, that is, any
row that refers to another does it based on a set of columns that
exactly match to the primary key of a single table elsewhere (not to
multiple tables, not to a function of the columns cast from int to
string and concatenated to the value in the other table etc, an *exact
match*).   I'm sure that mordred agrees with all of these practices,
however when one says "we aren't going to use foreign keys anymore",
typically it is all these critical schema design practices that go out
the window.  Put another way, the foreign key concept not only
constrains data in a real database; just the concept of them constrains
the **developer** to use correct normal form.
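
For illustration, one way (among several) to express such a "conceptual" key in
SQLAlchemy without emitting a FOREIGN KEY constraint to the schema. The model
names are made up and this is only a sketch of the idea, not a recommendation
from this thread:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base, foreign, relationship

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), unique=True, nullable=False)

    class BwUsageCache(Base):
        __tablename__ = 'bw_usage_cache'
        id = Column(Integer, primary_key=True)
        # Refers to instances.uuid in normal form, but no ForeignKey() here,
        # so no constraint is created in the database.
        uuid = Column(String(36), nullable=False)

        # The ORM still knows the rows are related; the join is declared
        # explicitly since there is no schema-level constraint to infer it from.
        instance = relationship(
            Instance,
            primaryjoin=lambda: foreign(BwUsageCache.uuid) == Instance.uuid,
            viewonly=True)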

2. Here's the part mordred doesn't like - the FK is actually in the
database for real.   This is because they slow down inserts, updates,
and deletes, because they must be checked.   To which I say, no such
performance issue has been observed or documented in Openstack, we
aren't a 1 million TPS financial system, so this is vastly premature
optimization.

Also as far as #2, customers and operators *regularly* run scripts and
queries to modify openstack databases, particularly to delete soft
deleted rows.  These get blocked *all the time* by foreign key
constraints.  They are doing their job, and they are needed as a final
guard against data integrity issues.  We of course handle referential
integrity in the application layer as well via SQLAlchemy ORM constructs.

3. Another aspect of FKs is using them for ON DELETE CASCADE.   I think
this is a great idea also, but I know that openstack apps are not
comfortable with this.  So we don't need to use it (but we should someday).



> 
> Currently the Nova DB has quite a lot of FKs but not on every relation.
> One example of a missing FK is between Instance.uuid and
> BandwidthUsageCache.uuid.
> 
> Should we drive one way or the other, or just put up with mixed-mode?
> 
> What should be the policy for new relations?

+1 for correct normalization with foreign keys in all cases.   A slowdown
that can be documented and illustrated will be needed to justify having
that FK disabled or removed on the schema side only, but there
would still be a "conceptual" foreign key (e.g. SQLAlchemy ForeignKey)
in the model.

> 
> Do the answers to these questions depend on having a sane and
> comprehensive archive/purge system in place?
> 
> 
> Alexis (lxsli)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread melanie witt
On Nov 20, 2015, at 6:18, Sean Dague  wrote:

> instance_actions seems extremely useful, and at the ops meetups I've
> been to it has been one of the favorite features because it allows an easy
> interface for "going back in time" to figure out what happened.

Agreed, we're using it because it's such a quick and easy way to see what 
actions have been taken on an instance when users need support. We're not yet 
collecting notifications from the queue -- we do have them being dumped to the 
logs that are splunk searchable. So far, it hasn't been "easy" to look at 
instance action history that way.

> I'd suggest the following:
> 
> 1. soft deleting an instance does nothing with instance actions.
> 
> 2. archiving instance (soft delete -> actually deleted) also archives
> off instance actions.
> 
> 3. update instance_actions API so that you can get instance_actions for
> deleted instances (which I think doesn't work today).

+1

I kept trying to craft a reply to this thread and fortunately I waited long 
enough that someone else said exactly what I was trying to say.

-melanie (irc: melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Stanislaw Bogatkin
Dmitry, I just propose the way I think is right, because it's strange
enough to install a package from a *.deb file and then set privileges on it
with a third-party utility. Permissions for an app are now mostly managed by
post-install scripts. Moreover, if they aren't, they should be, because if you set
capabilities with puppet there will always be a gap between installation and
setting permissions, so you would have to couple the package installation process
with setting permissions by puppet - otherwise you will have no way to use
your app.

Setting setuid bits on apps is not a good idea; that is why Linux
capabilities were introduced.

On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov 
wrote:

> Stanislaw,
>
> In my opinion the whole feature shouldn't be in the separate package
> simply because it will actually affect the code of many, if not all,
> components of Fuel.
>
> The only services whose capabilities will have to be managed by puppet are
> those, which are installed from upstream packages (e.g. atop) -- not built
> from fuel-* repos.
>
> Supervisord doesn't seem to use Linux capabilities, id does setuid
> instead:
> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>
> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I mean whole feature.
>> Btw, why do you want to grant capabilities via puppet? It should be done
>> by post-install package section, I believe.
>>
>> Also I doesn't know if supervisord can bound process capabilities like
>> systemd can - we could use this opportunity too.
>>
>> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov 
>> wrote:
>>
>>> My main concern with using linux capabilities/acls on files is actually
>>> puppet support or, actually, the lack of it. ACLs are possible AFAIK, but
>>> we'd need to write a custom type/provider for capabilities. I suggest to
>>> wait with capabilities support till systemd support.
>>>
>>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov >> > wrote:
>>>
 Stanislaw, do you mean the whole feature, or just a user? Since feature
 would require actually changing puppet code.

 On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
 sbogat...@mirantis.com> wrote:

> Dmitry, I believe it should be done via package spec as a part of
> installation.
>
> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Hello folks,
>>
>> I have updated the spec, please review and share your thoughts on it:
>> https://review.openstack.org/#/c/243340/
>>
>> Thanks.
>>
>> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matthew,
>>>
>>> sorry, didn't mean to butcher your name :(
>>>
>>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Matther,

 I totally agree that each daemon should have it's own user which
 should be created during installation of the relevant package. 
 Probably I
 didn't state this clear enough in the spec.

 However, there are security requirements in place that root should
 not be used at all. This means that there should be a some kind of
 maintenance or system user ('fueladmin'), which would have enough
 privileges to configure and manage Fuel node (e.g. run "sudo puppet 
 apply"
 without password, create mirrors etc). This also means that certain 
 fuel-
 packages would be required to have their files accessible to that user.
 That's the idea behind having a package which would create 'fueladmin' 
 user
 and including it into other fuel- packages requirements lists.

 So this part of the feature comes down to having a non-root user
 with sudo privileges and passwordless sudo for certain commands (like
 'puppet apply ') for scripting.

 On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn <
 mmoses...@mirantis.com> wrote:

> Dmitry,
>
> We really shouldn't put "user" creation into a single package and
> then depend on it for daemons. If we want nailgun service to run as 
> nailgun
> user, it should be created in the fuel-nailgun package.
> I think it makes the most sense to create multiple users, one for
> each service.
>
> Lastly, it makes a lot of sense to tie a "fuel" CLI user to
> python-fuelclient package.
>
> On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Stanislaw,
>>
>> I agree that this approch would work well. However, does Puppet
>> allow managing capabilities and/or file ACLs? Or can they be easily 
>> set up
>> when installing RPM pac

Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
Hi Sid,

Sorry, you’re right: log server fix is here. [1]
I thought I documented the scp v1.9 plugin issue, but I don’t see it now. I 
will submit a patch to add that.

Thanks for raising these issues!

Ramy

[1] https://review.openstack.org/#/c/242800/


From: Siddharth Bhatt [mailto:siddharth.bh...@falconstor.com]
Sent: Friday, November 20, 2015 10:51 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & the nodepool puppet scripts so
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m pleased to say that there are now 3 components merged in the 
puppet-openstackci repo [1].
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 
UTC to finish the remaining tasks.
If you’re interested in helping out in any of the rema

Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:19 AM, Alexis Lee wrote:

We just had a fun discussion in IRC about whether foreign keys are evil.
Initially I thought this was crazy but mordred made some good points. To
paraphrase: if you have a scale-out app already, it's easier to manage
integrity in your app than to scale out your persistence layer.

Currently the Nova DB has quite a lot of FKs but not on every relation.
One example of a missing FK is between Instance.uuid and
BandwidthUsageCache.uuid.

Should we drive one way or the other, or just put up with mixed-mode?


For the record, I hate the mixed mode.



What should be the policy for new relations?


I prefer consistency, so if we're adding new relationships I'd prefer to 
see that they have foreign keys.




Do the answers to these questions depend on having a sane and
comprehensive archive/purge system in place?


I'm not sure. The problem this is causing with archive/purge is that I 
thought all we had to do to fix archive was reverse sort the tables, 
which was working until it turned out we weren't soft deleting 
instance_actions. But now it also turns out that we aren't soft deleting 
bw_usage_cache *and* we don't have a FKey from that back to the 
instances table, so it's just completely orphaned and never archived or 
deleted, thus leaving that task up to the xenserver operator (since the 
xenserver driver is the only one that implements the virt driver API to 
populate this table).


So again, now we have to have special hack code paths in the 
archive/purge code to account for this mixed mode schema.
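
For illustration, here is a minimal SQLAlchemy sketch of the mixed mode being
discussed (not Nova's actual models; columns are trimmed and names simplified):
one dependent table declares a ForeignKey back to instances, the other only
carries a bare uuid column, which is why its rows end up orphaned.

```python
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), unique=True, nullable=False)
    deleted_at = Column(DateTime)  # soft-delete marker


class BlockDeviceMapping(Base):
    # relation WITH a foreign key: the database knows about the link,
    # so integrity is enforced and a reverse-sorted archive pass works
    __tablename__ = 'block_device_mapping'
    id = Column(Integer, primary_key=True)
    instance_uuid = Column(String(36), ForeignKey('instances.uuid'))


class BwUsageCache(Base):
    # relation WITHOUT a foreign key: just a bare uuid column, so rows
    # are silently orphaned once the instance is archived or purged
    __tablename__ = 'bw_usage_cache'
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))  # instance uuid, no FK back to instances
```

With the FK present, a reverse topological sort of the tables is enough for
archive/purge; without it, the archive code needs the kind of special-case
path described above.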





Alexis (lxsli)



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Siddharth Bhatt
Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & nodepool puppet scripts so 
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m pleased to say that there are now 3 components merged in the 
puppet-openstackci repo [1].
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 
UTC to finish the remaining tasks.
If you’re interested in helping out in any of the remaining tasks (Jenkins Job 
Builder, Nodepool, Logstash/Kibana, Documentation, Sample site.pp), sign up on 
the etherpad. [4]

Also, we can use the 3rd party meeting time slot next week to discuss plans and 
answer questions [5].
Tuesday 7/7/15 1700 UTC #openstack-meeting

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://github.com/rasselin/os-ext-testing (forked from 
jaypipes/os-ext-testing)
[4

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread gord chung



On 20/11/15 11:33 AM, Alexis Lee wrote:

gord chung said on Thu, Nov 19, 2015 at 11:59:33PM -0500:

just to clarify, the idea doesn't involve tailoring the notification
payload to ceilometer, just that if a producer is producing a
notification it knows contains a useful datapoint, the producer
should tell someone explicitly 'this datapoint exists'.

I know very little about Nova notifications or Ceilometer, so stepping
wildly into the unknown here but... why would a producer spit out
non-useful datapoints? If no-one cares or will ever care, it simply
shouldn't be included.

fully agree.

it seems like even before addressing versioning, the notification 
paradigm itself should be discussed. right now the producer is just 
sending out a grab bag of data that it thinks is important but doesn't 
define who the audience is. while that makes it extremely flexible so 
that anyone can consume the message, it also guarantees nothing (not 
even that it's being consumed). you can version a payload or make a 
schema accessible as much as you like but if no one is listening or the 
data published isn't useful to those listening, it's just noise.


i think a lot of the complexity we have in versioning is that the 
projects are too silo'd. i think some of the versioning issues would be 
irrelevant if the producer knew its consumers before sending rather 
than producers just tossing out a chunk of data (versioned schema or 
not) and considering their job complete once it leaves its own walls. 
the producer doesn't necessarily have to be the individual project teams 
but whoever the producer of notifications is, it should know its audience.




The problem is knowing what each consumer thinks is interesting and that
isn't something that can be handled by the producer. If Ceilometer is
just a pipeline that has no opinion on what's relevant and what isn't,
that's a special case easily implemented by an identity function.
the notification consumption service in ceilometer is essentially just a 
pipeline that normalises select incoming notifications into a data 
model (or models) and pushes that model to whoever wants it (a known consumer is 
the storage service but it's configurable to allow other consumers).
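
To make the 'pipeline with an identity function' idea concrete, here is a
purely illustrative Python sketch (not ceilometer code; the event types and
payload fields are assumptions): the consumer declares the event types it
cares about and normalises each payload, with the identity transform as the
'no opinion' special case.

```python
# Illustrative only: a consumer-side pipeline that declares which event
# types it cares about. Event types and payload fields here are assumed.

def cpu_datapoint(payload):
    # normalise an assumed payload shape into a datapoint
    return {'name': 'cpu_util',
            'value': payload['cpu_util'],
            'resource_id': payload['instance_uuid']}

TRANSFORMS = {
    'compute.metrics.update': cpu_datapoint,
    # the "no opinion" special case: pass the payload through untouched
    'compute.instance.exists': lambda payload: payload,
}

def consume(event_type, payload):
    transform = TRANSFORMS.get(event_type)
    if transform is None:
        return None  # nobody listening -> this datapoint is just noise
    return transform(payload)
```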


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 20 November 2015 at 09:47, John Belamaric 
wrote:

> I think Gary got auto-corrected:
>
> training = triaging
> brace = rbac
>

ah!


>
> On Nov 20, 2015, at 12:41 PM, Armando M.  wrote:
>
>
>
> On 19 November 2015 at 23:10, Gary Kotton  wrote:
>
>> Hi,
>> There are a ton of old and ancient bugs that have not been trained. If
>> you guys have some time then please go over them. In most cases some are
>> not even bugs and are just questions. I have spent the last few days going
>> over and training a few.
>> Over the last two days a number of bugs related to Neutron RBAC have been
>> opened. I have created a new tag called ‘brace’. Kevin can you please take
>> a look. Some may be bugs, others may be edge cases that we missed in the
>> review process and others may be a mis understanding of the feature.
>>
>
> What does brace mean? That doesn't seem very intuitive.
>
> Are you suggesting to add one to cover 'access control' in general?
>
> Thanks for helping out!
>
> [1]
> http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags
>
>
>
>
>> A luta continua
>> Gary
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] M-1 Bugs/Reviews squash day

2015-11-20 Thread Flavio Percoco

On 16/11/15 16:45 -0300, Flavio Percoco wrote:

Greetings,

At our last meeting, we discussed the idea of having a Bug/Reviews
squash day before the end of M-1. I'm sending this email out to
propose that we do this work on one of the following dates:

- Friday November 20th (ALL TZs)
- Monday November 23rd (ALL TZs)

I realize that next week is Thanksgiving in the US and some folks
might want to take the whole week. Please, do vote before Wednesday
18th so we can prepare for Friday and/or monday.

Poll link: http://doodle.com/poll/mt7hwswtmcvmetdn


The Bug/Review squash day will be Monday November 23rd. You can check
the results in the poll link.

Thanks and looking forward to Monday!
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Daniel P. Berrange
On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:
> Brick does not have to take over the decisions in order to be a useful
> repository for the code. The motivation for this work is to avoid having
> the dm setup code copied wholesale into cinder, where it becomes difficult
> to keep in sync with the code in nova.
> 
> Cinder needs a copy of this code since it is on the data path for certain
> operations (create from image, copy to image, backup/restore, migrate).

A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service. i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it cannot
compromise any data stored in any encrypted volumes.

If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it devalues the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vice versa, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.
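
A minimal, purely illustrative sketch of that boundary (this is not os-brick
or Nova code; device paths, mapping names and key handling are placeholders):
the plaintext device mapping only exists on the host that runs cryptsetup with
the key, while whatever the storage service serves up underneath it stays
ciphertext.

```python
import subprocess


def open_encrypted_volume(ciphertext_dev, mapping_name, key_file):
    # Runs only on the compute node that holds the key. Afterwards,
    # /dev/mapper/<mapping_name> exposes plaintext locally, while the
    # backing device served by the storage backend stays ciphertext.
    subprocess.check_call(
        ['cryptsetup', 'luksOpen', '--key-file', key_file,
         ciphertext_dev, mapping_name])
    return '/dev/mapper/%s' % mapping_name


def close_encrypted_volume(mapping_name):
    # Tear down the plaintext view; nothing decrypted leaves this host.
    subprocess.check_call(['cryptsetup', 'luksClose', mapping_name])
```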

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread John Belamaric
I think Gary got auto-corrected:

training = triaging
brace = rbac

On Nov 20, 2015, at 12:41 PM, Armando M. 
mailto:arma...@gmail.com>> wrote:



On 19 November 2015 at 23:10, Gary Kotton 
mailto:gkot...@vmware.com>> wrote:
Hi,
There are a ton of old and ancient bugs that have not been trained. If you guys 
have some time then please go over them. In most cases some are not even bugs 
and are just questions. I have spent the last few days going over and training 
a few.
Over the last two days a number of bugs related to Neutron RBAC have been 
opened. I have created a new tag called ‘brace’. Kevin can you please take a 
look. Some may be bugs, others may be edge cases that we missed in the review 
process and others may be a mis understanding of the feature.

What does brace mean? That doesn't seem very intuitive.

Are you suggesting to add one to cover 'access control' in general?

Thanks for helping out!

[1] 
http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags



A luta continua
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 19 November 2015 at 23:20, Gary Kotton  wrote:

> Hi,
> One extra thing. A large chunk of the latest bugs opened are RFE’s. These
> are tagged with ‘rfe’. In addition to this I would suggest changing the
> title of the bug to have [RFE]. This will at least help those who are
> perusing over the bugs to see what are actually real bug and what are new
> features etc.
>

These bugs are marked 'wishlist'...that should suffice IMO, but prefixing
is not going to hurt.


> Just a thought.
> Thanks
> Gary
>
> From: Gary Kotton 
> Reply-To: OpenStack List 
> Date: Friday, November 20, 2015 at 9:10 AM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron] Bug update
>
> Hi,
> There are a ton of old and ancient bugs that have not been trained. If you
> guys have some time then please go over them. In most cases some are not
> even bugs and are just questions. I have spent the last few days going over
> and training a few.
> Over the last two days a number of bugs related to Neutron RBAC have been
> opened. I have created a new tag called ‘brace’. Kevin can you please take
> a look. Some may be bugs, others may be edge cases that we missed in the
> review process and others may be a mis understanding of the feature.
> A luta continua
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & nodepool puppet scripts so 
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m pleased to say that there are now 3 components merged in the 
puppet-openstackci repo [1].
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 
UTC to finish the remaining tasks.
If you’re interested in helping out in any of the remaining tasks (Jenkins Job 
Builder, Nodepool, Logstash/Kibana, Documentation, Sample site.pp), sign up on 
the etherpad. [4]

Also, we can use the 3rd party meeting time slot next week to discuss plans and 
answer questions [5].
Tuesday 7/7/15 1700 UTC #openstack-meeting

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://github.com/rasselin/os-ext-testing (forked from 
jaypipes/os-ext-testing)
[4] https://etherpad.openstack.org/p/common-ci-sprint
[5] https://wiki.openstack.org/wiki/Meetings/ThirdParty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 19 November 2015 at 23:10, Gary Kotton  wrote:

> Hi,
> There are a ton of old and ancient bugs that have not been trained. If you
> guys have some time then please go over them. In most cases some are not
> even bugs and are just questions. I have spent the last few days going over
> and training a few.
> Over the last two days a number of bugs related to Neutron RBAC have been
> opened. I have created a new tag called ‘brace’. Kevin can you please take
> a look. Some may be bugs, others may be edge cases that we missed in the
> review process and others may be a mis understanding of the feature.
>

What does brace mean? That doesn't seem very intuitive.

Are you suggesting to add one to cover 'access control' in general?

Thanks for helping out!

[1]
http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags




> A luta continua
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sean Dague
On 11/20/2015 11:36 AM, Matt Riedemann wrote:
> 
> 
> On 11/20/2015 10:04 AM, Andrew Laski wrote:
>> On 11/20/15 at 09:51am, Matt Riedemann wrote:
>>>
>>>
>>> On 11/20/2015 8:18 AM, Sean Dague wrote:
 On 11/17/2015 10:51 PM, Matt Riedemann wrote:
 
>
> I *don't* see any DB APIs for deleting instance actions.
>
> Kind of an important difference there.  Jay got it at least. :)
>
>>
>> Were we just planning on instance_actions living forever in the
>> database?
>>
>> Should we soft delete instance_actions when we delete the referenced
>> instance?
>>
>> Or should we (hard) delete instance_actions when we archive (move to
>> shadow tables) soft deleted instances?
>>
>> This is going to be a blocker to getting nova-manage db
>> archive_deleted_rows working.
>>
>> [1] https://review.openstack.org/#/c/246635/

 instance_actions seems extremely useful, and at the ops meetups I've
 been to it has been one of the favorite features because it allows an easy
 interface for "going back in time" to figure out what happened.

 I'd suggest the following:

 1. soft deleting an instance does nothing with instance actions.

 2. archiving instance (soft delete -> actually deleted) also archives
 off instance actions.
>>>
>>> I think this is also the right approach. Then we don't need to worry
>>> about adding soft delete for instance_actions, they are just archived
>>> when you archive the instances. It probably makes the logic in the
>>> archive code messier for this separate path, but it's looking like
>>> we're going to have to account for the bw_usage_cache table too (which
>>> has a uuid column for an instance but no foreign key back to the
>>> instances table and is not soft deleted).
>>>

 3. update instance_actions API so that you can get instance_actions for
 deleted instances (which I think doesn't work today).
>>>
>>> Right, it doesn't. I was going to propose a spec for that since it's a
>>> simple API change with a microversion.
>>
>> Adding a simple flag to expose instance actions for a deleted instance
>> if you know the uuid of the deleted instance will provide some
>> usefulness.  It does lack the discoverability of knowing that you had
>> *some* instance that was deleted and you don't have the uuid but want to
>> get at the deleted actions.  I would like to avoid bolting that onto
>> instance actions and keep that as a use case for an eventual Task API.
>>
>>>

 -Sean

>>>
>>> -- 
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>> __
>>>
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> If you're an admin, you can list deleted instances using:
> 
> nova list --deleted
> 
> Or could, if we weren't busted on that right now [1].
> 
> So the use case I'm thinking of here is:
> 
> 1. Multiple users are in the same project/tenant.
> 2. User A deletes an instance.
> 3. User B is wondering where the instance went, so they open a support
> ticket.
> 4. The admin checks for deleted instances on that project, finds the one
> in question.
> 5. Calls off to os-instance-actions with that instance uuid to see the
> deleted action and the user that did it (user A).
> 6. Closes the ticket saying that user A deleted the instance.
> 7. User B punches user A in the gut.
> 
> [1] https://bugs.launchpad.net/nova/+bug/1518382

+1

I think we need that on a T-shirt
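
For what it's worth, here is a rough sketch of steps 4-5 above using
python-novaclient (the manager and attribute names are as I recall them from
the v2 client, so treat them as assumptions; and per the thread, the
instance-actions call does not yet work for deleted instances without the
proposed API change):

```python
from novaclient import client

# assumed: admin credentials already loaded into these variables
nova = client.Client('2', USERNAME, PASSWORD, PROJECT_ID, AUTH_URL)

# step 4: find deleted instances in the project (admin only)
deleted = nova.servers.list(
    search_opts={'deleted': True, 'all_tenants': 1, 'tenant_id': TENANT_ID})

# step 5: inspect the recorded actions for each one
for server in deleted:
    for action in nova.instance_action.list(server.id):
        print(action.action, action.user_id, action.start_time)
```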


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron L3 Sub-team meeting canceled on November 26th

2015-11-20 Thread Miguel Lavalle
Dear Neutron L3 Sub-team members,

We are canceling our weekly IRC meeting on November 26th, due to the
Thanksgiving holiday in the US. We will reconvene again on December 3rd at
the usual time.

Best regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Bogdan Dobrelya
On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
> Bogdan,
> 
>>> So, we could only deprecate the docker feature for the 8.0.
> 
> What do you mean exactly when saying 'deprecate docker feature'? I can
> not even imagine how we can live with and without docker containers at
> the same time. Deprecation is usually related to features which directly
> impact UX (maybe I am wrong). 

I may have understood this [0] wrong, and the docker containers are not
user-visible, but that depends on which type of users we mean :-)
Sorry for not being clear.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

> 
> Guys, 
> 
> When you estimate risks of the docker removal, please take into account
> not only release deadlines but also the overall product quality. The
> thing is that continuing using containers makes it much more complicated
> (and thus less stable) to implement new upgrade flow (upgrade tarball
> can not be used any more, we need to re-install the host system).
> Switching from Centos 6 to Centos 7 is also much more complicated with
> docker. Every single piece of Fuel system is going to become simpler and
> easier to support.
> 
> Of course, I am not suggesting to jump overboard into cold water without
> a life jacket. Transition plan, checklist, green tests, even spec etc.
> are assumed without saying (after all, I was not born yesterday). Of
> course, we won't merge changes until everything is green. What is the
> problem to try to do this and postpone if not ready in time? And please
> do not confuse these two cases: switching from plain deployment to
> containers is complicated, but switching from docker to plain is much
> simpler. 
> 
> 
> 
> 
> Vladimir Kozhukalov
> 
> On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya  > wrote:
> 
> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > Hi team,
> >
> > I think it too late to make such significant changes for MOS 8.0 now,
> > but I'm ok with the idea to remove docker containers in the future
> > releases if our dev team want to do this.
> > Any way, before we will do this, we need to plan how we will perform
> > updates between different releases with and without docker containers,
> > how we will manage requirements and etc. In fact we have a lot of
> > questions and haven't answers, let's prepare the spec for this change,
> > review it, discuss it with developers, users and project management team
> > and if we haven't requirements to keep docker containers on master node
> > let's remove them for the future releases (not in MOS 8.0).
> >
> > Of course, we can fix BVT / SWARM tests and don't use docker images in
> > our test suite (it shouldn't be really hard) but we didn't plan these
> > changes and in fact these changes can affect our estimates for many 
> tasks.
> 
> I can only add that features just cannot be removed without a
> deprecation period of 1-2 releases.
> So, we could only deprecate the docker feature for the 8.0.
> 
> >
> > Thank you!
> >
> >
> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > mailto:akostri...@mirantis.com>
> >>
> wrote:
> >
> > Hello, Igor.
> >
> > >But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > At first glance, system tests are using docker only to fetch logs
> > and run shell commands.
> > Also, docker is used to run Rally.
> >
> > If there is an action to remove docker containers with carefull
> > attention to bvt testing, it would take couple days to fix system 
> tests.
> > But time may be highly affected by code freezes and active features
> > merging.
> >
> > QA team is going to have Monday (Nov 23) sync-up - and it is
> > possible to get more exact information from all QA-team.
> >
> > P.S.
> > +1 to remove docker.
> > -1 to remove docker without taking into account deadlines/other
> > features.
> >
> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> > mailto:ikalnit...@mirantis.com>
> >>
> wrote:
> >
> > Hey guys,
> >
> > Despite the fact I like containers (as deployment unit), we
> > don't use
> > them so. That means I +1 idea to drop containers, just because I
> > believe that would
> >
> > * simplify a lot of things
> > * helps get rid of huge amount of hacks
> > * increase master node deployment
> > * release us from annoying support of upgrades / rollbacks that
> > proved
> 

Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Alexis Lee wrote:

> We just had a fun discussion in IRC about whether foreign keys are evil.
> Initially I thought this was crazy but mordred made some good points. To
> paraphrase: if you have a scale-out app already, it's easier to manage
> integrity in your app than to scale out your persistence layer.

That's interesting. Could you explain how it is easier to achieve the
level of data integrity provided by an RDBMS implementing ACID in your
own application?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Vladimir Sharshov
+1 to remove docker in new CentOS 7.

On Fri, Nov 20, 2015 at 7:31 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Bogdan,
>
> >> So, we could only deprecate the docker feature for the 8.0.
>
> What do you mean exactly when saying 'deprecate docker feature'? I can not
> even imagine how we can live with and without docker containers at the same
> time. Deprecation is usually related to features which directly impact UX
> (maybe I am wrong).
>
> Guys,
>
> When you estimate risks of the docker removal, please take into account
> not only release deadlines but also the overall product quality. The thing
> is that continuing using containers makes it much more complicated (and
> thus less stable) to implement new upgrade flow (upgrade tarball can not be
> used any more, we need to re-install the host system). Switching from
> Centos 6 to Centos 7 is also much more complicated with docker. Every
> single piece of Fuel system is going to become simpler and easier to
> support.
>
> Of course, I am not suggesting to jump overboard into cold water without a
> life jacket. Transition plan, checklist, green tests, even spec etc. are
> assumed without saying (after all, I was not born yesterday). Of course, we
> won't merge changes until everything is green. What is the problem to try
> to do this and postpone if not ready in time? And please do not confuse
> these two cases: switching from plain deployment to containers is
> complicated, but switching from docker to plain is much simpler.
>
>
>
>
> Vladimir Kozhukalov
>
> On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya 
> wrote:
>
>> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
>> > Hi team,
>> >
>> > I think it too late to make such significant changes for MOS 8.0 now,
>> > but I'm ok with the idea to remove docker containers in the future
>> > releases if our dev team want to do this.
>> > Any way, before we will do this, we need to plan how we will perform
>> > updates between different releases with and without docker containers,
>> > how we will manage requirements and etc. In fact we have a lot of
>> > questions and haven't answers, let's prepare the spec for this change,
>> > review it, discuss it with developers, users and project management team
>> > and if we haven't requirements to keep docker containers on master node
>> > let's remove them for the future releases (not in MOS 8.0).
>> >
>> > Of course, we can fix BVT / SWARM tests and don't use docker images in
>> > our test suite (it shouldn't be really hard) but we didn't plan these
>> > changes and in fact these changes can affect our estimates for many
>> tasks.
>>
>> I can only add that features just cannot be removed without a
>> deprecation period of 1-2 releases.
>> So, we could only deprecate the docker feature for the 8.0.
>>
>> >
>> > Thank you!
>> >
>> >
>> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
>> > mailto:akostri...@mirantis.com>> wrote:
>> >
>> > Hello, Igor.
>> >
>> > >But I'd like to hear from QA how do we rely on container-based
>> > infrastructure? Would it be hard to change our sys-tests in short
>> > time?
>> >
>> > At first glance, system tests are using docker only to fetch logs
>> > and run shell commands.
>> > Also, docker is used to run Rally.
>> >
>> > If there is an action to remove docker containers with carefull
>> > attention to bvt testing, it would take couple days to fix system
>> tests.
>> > But time may be highly affected by code freezes and active features
>> > merging.
>> >
>> > QA team is going to have Monday (Nov 23) sync-up - and it is
>> > possible to get more exact information from all QA-team.
>> >
>> > P.S.
>> > +1 to remove docker.
>> > -1 to remove docker without taking into account deadlines/other
>> > features.
>> >
>> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
>> > mailto:ikalnit...@mirantis.com>> wrote:
>> >
>> > Hey guys,
>> >
>> > Despite the fact I like containers (as deployment unit), we
>> > don't use
>> > them so. That means I +1 idea to drop containers, just because I
>> > believe that would
>> >
>> > * simplify a lot of things
>> > * helps get rid of huge amount of hacks
>> > * increase master node deployment
>> > * release us from annoying support of upgrades / rollbacks that
>> > proved
>> > to be non-working well
>> >
>> > But I'd like to hear from QA how do we rely on container-based
>> > infrastructure? Would it be hard to change our sys-tests in
>> short
>> > time?
>> >
>> > Thanks,
>> > Igor
>> >
>> >
>> > On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
>> > mailto:vkuk...@mirantis.com>> wrote:
>> > > Folks
>> > >
>> > > I guess it should be pretty simple to roll back - install
>> > older version and
>> > > restore the backup with preservation

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-20 Thread Bogdan Dobrelya
> Hi,
> 
> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
> or missing something.
> 
> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
> for OpenStack deployment. We execute those tasks with "puppet apply". Each
> task is supposed to bring the target system into some desired state, so puppet
> compiles a catalog and applies it. So basically, puppet catalog = desired
> system state.
> 
> So we can compile* catalogs for all top-scope manifests in master branch
> and store those compiled* catalogs in fuel-library repo. Then for each
> proposed patch CI will compare new catalogs with stored ones and print out
> the difference if any. This will pretty much show what is going to be
> changed in system configuration by proposed patch.
> 
> We have discussed such checks several times before, iirc, but we did not
> have the right tools to implement such a thing before. Well, now we do :) I think
> it could be quite useful even in non-voting mode.
> 
> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
> mean sorted lists of all classes/resources with all parameters that we find
> during puppet-rspec tests in our noop test framework, something like
> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
> 
> Regards,
> Alex
> 
> [0] http://paste.openstack.org/show/477839/
> [1] 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp

Thank you, Alex.
Yes, the composition layer is the top-scope manifests, known as the Fuel
library modular tasks [0].

The "deployment data checks", is nothing more than comparing the
committed vs changed states of fixtures [1] of puppet catalogs for known
deployment paths under test with rspecs written for each modular task [2].

And the *current status* is:
- the script for data layer checks is now implemented [3]
- the how-to is being documented here [4]
- a fix to make catalog compilation idempotent has been submitted [5]
- and there is my WIP branch [6] with the initial committed state of
deploy data pre-generated. So, you can check it out, make any test changes
to manifests and run the data check (see the README [4]). It works for
me; there are no issues with idempotent re-checks of a clean committed
state or with tests failing unexpectedly.

So the plan is to implement this noop test extension as a non-voting CI
gate after I make an example workflow update for developers to the
Fuel wiki. Thoughts?

[0]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
[1]
https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
[2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
[3] https://review.openstack.org/240015
[4]
https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
[5] https://review.openstack.org/247989
[6] https://github.com/bogdando/fuel-library-1/commits/data_checks
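
As a rough illustration of the data check idea (this is not the actual noop
test extension in [3]; the fixture format below is an assumption), comparing a
committed catalog state against a proposed one could look something like:

```python
import sys
import yaml


def load_catalog(path):
    # fixture format assumed: {resource_title: {param: value, ...}, ...}
    with open(path) as f:
        return yaml.safe_load(f) or {}


def diff_catalogs(committed, proposed):
    old, new = load_catalog(committed), load_catalog(proposed)
    changes = []
    for res in sorted(set(old) | set(new)):
        if res not in old:
            changes.append('+ %s' % res)
        elif res not in new:
            changes.append('- %s' % res)
        elif old[res] != new[res]:
            changes.append('~ %s: %r -> %r' % (res, old[res], new[res]))
    return changes


if __name__ == '__main__':
    # usage: python catalog_diff.py committed.yaml proposed.yaml
    for line in diff_catalogs(sys.argv[1], sys.argv[2]):
        print(line)
```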


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions, they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).



3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support 
ticket.
4. The admin checks for deleted instances on that project, finds the one 
in question.
5. Calls off to os-instance-actions with that instance uuid to see the 
deleted action and the user that did it (user A).

6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382
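
A rough sketch of what point 2 above could look like (this is not Nova's
actual archive code; the table handling is simplified and unbatched): when an
instance is archived, also sweep up the instance_actions rows, and, given the
missing FKey, the bw_usage_cache rows keyed by the same uuid.

```python
from sqlalchemy import MetaData, select


def archive_instance_extras(conn, instance_uuid):
    # Illustrative only: move rows keyed by an instance uuid but not
    # covered by the FK-driven, reverse-sorted archive pass.
    meta = MetaData()
    meta.reflect(bind=conn)

    extra = {'instance_actions': 'instance_uuid',
             'bw_usage_cache': 'uuid'}

    for name, col in extra.items():
        table = meta.tables[name]
        shadow = meta.tables['shadow_' + name]
        rows = conn.execute(
            select([table]).where(table.c[col] == instance_uuid)).fetchall()
        if rows:
            conn.execute(shadow.insert(), [dict(row) for row in rows])
            conn.execute(
                table.delete().where(table.c[col] == instance_uuid))
```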

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-20 Thread Markus Zoeller
Below are the bug stats of the week "Mitaka R-20".
Increases/decreases compared to "Mitaka R-21" are in parentheses.
The bug count of the novaclient is added now.

Stats
=

New bugs which are *not* assigned to any subteam

count: 30 (+2)
query: http://bit.ly/1WF68Iu

New bugs which are *not* triaged

subteam: libvirt 
count: 16 (+2)
query: http://bit.ly/1Hx3RrL
subteam: volumes 
count: 10 (-2)
query: http://bit.ly/1NU2DM0
subteam: compute
count: 5 (0)
query: http://bit.ly/1O72RQc
subteam: vmware
count: 5 (?)
query: http://bit.ly/1YkCU4s
subteam: network : 
count: 4 (0)
query: http://bit.ly/1LVAQdq
subteam: db : 
count: 4 (0)
query: http://bit.ly/1LVATWG
subteam: 
count: 89 (+6)
query: http://bit.ly/1RBVZLn

High prio bugs which are *not* in progress
--
count: 38 (-1)
query: http://bit.ly/1MCKoHA

Critical bugs which are *not* in progress
-
count: 0 (0)
query: http://bit.ly/1kfntfk

Untriaged python-novaclient bugs

count: 7 (?)
query: http://bit.ly/1kKUDDU


Readings

* https://wiki.openstack.org/wiki/BugTriage
* https://wiki.openstack.org/wiki/Nova/BugTriage
* 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html

Markus Zoeller/Germany/IBM@IBMDE wrote on 11/13/2015 04:07:24 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> Date: 11/13/2015 04:09 PM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> Below are the stats of the week "Mitaka R-21".
> Changes to the previous week are shown in parentheses behind the current
> numbers. For example, "28 (+9)" means we have 28 bugs in that category
> with an increase of 9 bugs comparing to the previous week.
> 
> 
> Stats
> =
> 
> New bugs which are *not* assigned to any subteam
> 
> count: 28 (+9)
> query: http://bit.ly/1WF68Iu
> 
> New bugs which are *not* triaged
> 
> subteam: libvirt 
> count: 14 (0)
> query: http://bit.ly/1Hx3RrL
> subteam: volumes 
> count: 12 (+1)
> query: http://bit.ly/1NU2DM0
> subteam: compute
> count: 5 (?)
> query: http://bit.ly/1O72RQc
> subteam: network : 
> count: 4 (0)
> query: http://bit.ly/1LVAQdq
> subteam: db : 
> count: 4 (0)
> query: http://bit.ly/1LVATWG
> subteam: 
> count: 83 (+16)
> query: http://bit.ly/1RBVZLn
> 
> High prio bugs which are *not* in progress
> --
> count: 39 (0)
> query: http://bit.ly/1MCKoHA
> 
> Critical bugs which are *not* in progress
> -
> count: 0 (0)
> query: http://bit.ly/1kfntfk
> 
> Readings
> 
> * https://wiki.openstack.org/wiki/BugTriage
> * https://wiki.openstack.org/wiki/Nova/BugTriage
> * 
> 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html

> 
> 
> Markus Zoeller/Germany/IBM@IBMDE wrote on 11/06/2015 05:54:59 PM:
> 
> > From: Markus Zoeller/Germany/IBM@IBMDE
> > To: "OpenStack Development Mailing List" 
> 
> > Date: 11/06/2015 05:56 PM
> > Subject: [openstack-dev] [nova][bugs] Weekly Status Report
> > 
> > Hey folks,
> > 
> > below is the first report of bug stats I intend to post weekly.
> > We discussed it shortly during the Mitaka summit that this report
> > could be useful to keep the attention of the open bugs at a certain
> > level. Let me know if you think it's missing something.
> > 
> > Stats
> > =
> > 
> > New bugs which are *not* assigned to any subteam
> > 
> > count: 19
> > query: http://bit.ly/1WF68Iu
> > 
> > 
> > New bugs which are *not* triaged
> > 
> > subteam: libvirt 
> > count: 14 
> > query: http://bit.ly/1Hx3RrL
> > subteam: volumes 
> > count: 11
> > query: http://bit.ly/1NU2DM0
> > subteam: network : 
> > count: 4
> > query: http://bit.ly/1LVAQdq
> > subteam: db : 
> > count: 4
> > query: http://bit.ly/1LVATWG
> > subteam: 
> > count: 67
> > query: http://bit.ly/1RBVZLn
> > 
> > 
> > High prio bugs which are *not* in progress
> > --
> > count: 39
> > query: http://bit.ly/1MCKoHA
> > 
> > 
> > Critical bugs which are *not* in progress
> > -
> > count: 0
> > query: http://bit.ly/1kfntfk
> > 
> > 
> > Readings
> > 
> > * https://wiki.openstack.org/wiki/BugT

Re: [openstack-dev] [nova] What things do we want to get into a python-novaclient 3.0 release?

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:20 AM, Matt Riedemann wrote:



On 11/20/2015 3:48 AM, Matthew Booth wrote:

I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only
the external api:

https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

It obviously overlaps considerably with host-servers-migrate, which is
supposed to do the same thing. Users seem to have been appreciative, so
I'd be interested to see it merged in some form.

Matt
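
For reference, a rough sketch of the flow the gist implements (this is not the
gist itself; error handling and status polling are omitted, and the novaclient
method names are as I recall them from the v2 API, so treat them as
assumptions):

```python
def evacuate_host(nova, hostname):
    # list every instance on the host (admin view)
    servers = nova.servers.list(
        search_opts={'host': hostname, 'all_tenants': 1})
    for server in servers:
        if getattr(server, 'OS-EXT-STS:vm_state', None) == 'active':
            # running instances: let the scheduler pick a target host
            server.live_migrate(host=None, block_migration=False)
        else:
            # stopped/paused instances need a cold migration instead
            server.migrate()
    # a robust version polls each server until its migration completes
```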

On Thu, Nov 19, 2015 at 6:18 PM, Matt Riedemann
mailto:mrie...@linux.vnet.ibm.com>> wrote:

We've been talking about doing a 3.0 release for novaclient for
awhile so we can make some backward incompatible changes, like:

1. Removing the novaclient.v1_1 module
2. Dropping py26 support (if there is any explicit py26 support in
there)

What else are people aware of?

Monty was talking about doing a thing with auth:

https://review.openstack.org/#/c/245200/

But it sounds like that is not really needed now?

I'd say let's target mitaka-2 for a 3.0 release and get these
flushed out.

--

Thanks,

Matt Riedemann



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's not a backward compatible change, so not really necessary for a
major version release. We'd need a blueprint at least for it though
since it's adding a new CLI that does some orchestration. I commented in
the repo, there is a similar version in another repo, so yeah, people
are doing this and it'd be good if we could decide if it's something
that should live in tree and be maintained officially.

A functional test would be sweet, but given it deals with migration and
the novaclient functional tests are assuming a single node devstack,
that's probably not going to fly. We could ask sdague about that though.



Sorry, that's not a backward *incompatible* change.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Vladimir Kozhukalov
Bogdan,

>> So, we could only deprecate the docker feature for the 8.0.

What do you mean exactly when saying 'deprecate docker feature'? I can not
even imagine how we can live with and without docker containers at the same
time. Deprecation is usually related to features which directly impact UX
(maybe I am wrong).

Guys,

When you estimate risks of the docker removal, please take into account not
only release deadlines but also the overall product quality. The thing is
that continuing using containers makes it much more complicated (and thus
less stable) to implement new upgrade flow (upgrade tarball can not be used
any more, we need to re-install the host system). Switching from Centos 6
to Centos 7 is also much more complicated with docker. Every single piece
of Fuel system is going to become simpler and easier to support.

Of course, I am not suggesting to jump overboard into cold water without a
life jacket. Transition plan, checklist, green tests, even spec etc. are
assumed without saying (after all, I was not born yesterday). Of course, we
won't merge changes until everything is green. What is the problem to try
to do this and postpone if not ready in time? And please do not confuse
these two cases: switching from plain deployment to containers is
complicated, but switching from docker to plain is much simpler.




Vladimir Kozhukalov

On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya 
wrote:

> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > Hi team,
> >
> > I think it too late to make such significant changes for MOS 8.0 now,
> > but I'm ok with the idea to remove docker containers in the future
> > releases if our dev team want to do this.
> > Any way, before we will do this, we need to plan how we will perform
> > updates between different releases with and without docker containers,
> > how we will manage requirements and etc. In fact we have a lot of
> > questions and haven't answers, let's prepare the spec for this change,
> > review it, discuss it with developers, users and project management team
> > and if we haven't requirements to keep docker containers on master node
> > let's remove them for the future releases (not in MOS 8.0).
> >
> > Of course, we can fix BVT / SWARM tests and don't use docker images in
> > our test suite (it shouldn't be really hard) but we didn't plan these
> > changes and in fact these changes can affect our estimates for many
> tasks.
>
> I can only add that features just cannot be removed without a
> deprecation period of 1-2 releases.
> So, we could only deprecate the docker feature for the 8.0.
>
> >
> > Thank you!
> >
> >
> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > mailto:akostri...@mirantis.com>> wrote:
> >
> > Hello, Igor.
> >
> > >But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > At first glance, system tests are using docker only to fetch logs
> > and run shell commands.
> > Also, docker is used to run Rally.
> >
> > If there is an action to remove docker containers with carefull
> > attention to bvt testing, it would take couple days to fix system
> tests.
> > But time may be highly affected by code freezes and active features
> > merging.
> >
> > QA team is going to have Monday (Nov 23) sync-up - and it is
> > possible to get more exact information from all QA-team.
> >
> > P.S.
> > +1 to remove docker.
> > -1 to remove docker without taking into account deadlines/other
> > features.
> >
> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> > mailto:ikalnit...@mirantis.com>> wrote:
> >
> > Hey guys,
> >
> > Despite the fact I like containers (as deployment unit), we
> > don't use
> > them so. That means I +1 idea to drop containers, just because I
> > believe that would
> >
> > * simplify a lot of things
> > * helps get rid of huge amount of hacks
> > * increase master node deployment
> > * release us from annoying support of upgrades / rollbacks that
> > proved
> > to be non-working well
> >
> > But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > Thanks,
> > Igor
> >
> >
> > On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
> > mailto:vkuk...@mirantis.com>> wrote:
> > > Folks
> > >
> > > I guess it should be pretty simple to roll back - install
> > older version and
> > > restore the backup with preservation of /var/log directory.
> > >
> > > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> > > mailto:sgolovat...@mirantis.com>>
> > wrote:
> > >>
> > >> Hi,
> > >>
> > >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mo

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread Alexis Lee
gord chung said on Thu, Nov 19, 2015 at 11:59:33PM -0500:
> just to clarify, the idea doesn't involve tailoring the notification
> payload to ceilometer, just that if a producer is producing a
> notification it knows contains a useful datapoint, the producer
> should tell someone explicitly 'this datapoint exists'.

I know very little about Nova notifications or Ceilometer, so I'm stepping
wildly into the unknown here, but... why would a producer spit out
non-useful datapoints? If no one cares or ever will, they simply
shouldn't be included.

The problem is knowing what each consumer thinks is interesting and that
isn't something that can be handled by the producer. If Ceilometer is
just a pipeline that has no opinion on what's relevant and what isn't,
that's a special case easily implemented by an identity function.
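
For illustration, a minimal sketch of that idea in plain Python (hypothetical
names only, not actual Nova or Ceilometer code): each consumer applies its own
notion of "interesting", and a pipeline with no opinion is just the identity
special case.

    # Consumer-side filtering: the consumer, not the producer, decides what matters.
    def filter_notifications(notifications, is_interesting):
        return [n for n in notifications if is_interesting(n)]

    # A pipeline that forwards everything unchanged is the identity special case.
    def identity(notifications):
        return notifications

    sample = [
        {'event_type': 'compute.instance.create.end', 'payload': {'memory_mb': 512}},
        {'event_type': 'compute.instance.update', 'payload': {}},
    ]

    print(filter_notifications(sample, lambda n: bool(n['payload'])))
    print(identity(sample))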


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-20 Thread Miguel Lavalle
Gareth,

For the time being, we don't have a Neutron mid-cycle scheduled. Later in
the Mitaka cycle, we will assess whether we need to schedule one or not.
For now, the decision is that we are not having one.

Cheers

On Thu, Nov 19, 2015 at 10:23 PM, Gareth  wrote:

> Guys,
>
> Is there a conclusion now? What's the schedule of Neutron Mid-cycle?
>
> On Thu, Nov 5, 2015 at 9:31 PM, Gary Kotton  wrote:
> > Hi,
> > In Nova the new black is the os-vif-lib
> > (https://etherpad.openstack.org/p/mitaka-nova-os-vif-lib). It may be
> > worthwhile seeing if we can maybe do something at the same time with the
> > nova crew and then bash out the dirty details here. It would be far
> easier
> > if everyone was in the same room.
> > Just an idea.
> > Thanks
> > Gary
> >
> > From: "John Davidge (jodavidg)" 
> > Reply-To: OpenStack List 
> > Date: Thursday, November 5, 2015 at 2:08 PM
> > To: OpenStack List 
> > Subject: Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka
> >
> > ++
> >
> > Sounds very sensible to me!
> >
> > John
> >
> > From: "Armando M." 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Wednesday, 4 November 2015 21:23
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka
> >
> > Hi folks,
> >
> > After some consideration, I am proposing a change for the Mitaka release
> > cycle in relation to the mid-cycle meetup event.
> >
> > My proposal is to defer the gathering to later in the release cycle [1],
> and
> > assess whether we have it or not based on the course of events in the
> cycle.
> > If we feel that a last push closer to the end will help us hit some
> critical
> > targets, then I am all in for arranging it.
> >
> > Based on our latest experiences, I have not seen a strong correlation
> > between progress made during the cycle and progress made during the
> meetup,
> > so we might as well save us the trouble of travelling close to Christmas.
> >
> > I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the
> > logistics. We may still need their services later in the new year, but
> as of
> > now all I can say is:
> >
> > Happy (distributed) hacking!
> >
> > Cheers,
> > Armando
> >
> > [1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Gareth
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ¥1 to an open organization you specify.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]New Quota Subteam on Nova

2015-11-20 Thread Raildo Mascena
Hi guys

Some colleagues and I are working on the nested quota driver (
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-quota-driver-api,n,z)
in Nova.

In addition, we want to discuss the re-design of the quota implementation in
Nova and in other projects, like Cinder and Neutron, and we already have a
base spec for this here:
https://review.openstack.org/#/c/182445/4/specs/backlog/approved/quotas-reimagined.rst

So I was thinking of creating a subteam on Nova to speed up the code review
of the nested quota implementation and to discuss this re-design of quotas.
Is anyone interested in being part of this subteam, or does anyone have
suggestions?

Cheers,

Raildo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What things do we want to get into a python-novaclient 3.0 release?

2015-11-20 Thread Matt Riedemann



On 11/20/2015 3:48 AM, Matthew Booth wrote:

I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only
the external api:

https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

It obviously overlaps considerably with host-servers-migrate, which is
supposed to do the same thing. Users seem to have been appreciative, so
I'd be interested to see it merged in some form.
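
The rough shape of such a script, using only the public API via
python-novaclient, looks something like this (a simplified sketch with
assumed credentials, endpoint and status handling; the real script linked
above covers many more states and failure modes):

    from novaclient import client

    # Assumed admin credentials and endpoint; adjust for your deployment.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    def evacuate_host(host):
        # Admin-only filter: list every instance currently on this compute host.
        servers = nova.servers.list(search_opts={'host': host, 'all_tenants': 1})
        for server in servers:
            if server.status == 'ACTIVE':
                # Let the scheduler pick a destination host.
                server.live_migrate(host=None, block_migration=False,
                                    disk_over_commit=False)
            elif server.status == 'SHUTOFF':
                server.migrate()  # cold migration for stopped instances
            else:
                print('skipping %s in state %s' % (server.id, server.status))

    evacuate_host('compute-01')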

Matt

On Thu, Nov 19, 2015 at 6:18 PM, Matt Riedemann
mailto:mrie...@linux.vnet.ibm.com>> wrote:

We've been talking about doing a 3.0 release for novaclient for
awhile so we can make some backward incompatible changes, like:

1. Removing the novaclient.v1_1 module
2. Dropping py26 support (if there is any explicit py26 support in
there)

What else are people aware of?

Monty was talking about doing a thing with auth:

https://review.openstack.org/#/c/245200/

But it sounds like that is not really needed now?

I'd say let's target mitaka-2 for a 3.0 release and get these
flushed out.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's not a backward compatible change, so not really necessary for a 
major version release. We'd need a blueprint at least for it though 
since it's adding a new CLI that does some orchestration. I commented in
the repo; there is a similar version in another repo, so people are clearly
doing this, and it'd be good to decide whether it's something that should
live in tree and be maintained officially.


A functional test would be sweet, but given that it deals with migration and
the novaclient functional tests assume a single-node devstack, that's
probably not going to fly. We could ask sdague about that, though.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Alexis Lee
We just had a fun discussion in IRC about whether foreign keys are evil.
Initially I thought this was crazy, but mordred made some good points. To
paraphrase: if you already have a scale-out app, it's easier to manage
integrity in your app than to scale out your persistence layer.

Currently the Nova DB has quite a lot of FKs but not on every relation.
One example of a missing FK is between Instance.uuid and
BandwidthUsageCache.uuid.
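
To make the example concrete, a minimal SQLAlchemy sketch (simplified,
hypothetical models, not Nova's actual schema) of what declaring that FK
would look like:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), unique=True, nullable=False)

    class BandwidthUsageCache(Base):
        __tablename__ = 'bw_usage_cache'
        id = Column(Integer, primary_key=True)
        # With the FK, the database enforces integrity and forbids orphaned rows;
        # without it (the current state), the application has to guarantee the
        # referenced instance exists and clean up after deletions itself.
        uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False)
        instance = relationship(Instance)

Which side of that trade-off to standardise on is exactly the question below.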

Should we drive one way or the other, or just put up with mixed-mode?

What should be the policy for new relations?

Do the answers to these questions depend on having a sane and
comprehensive archive/purge system in place?


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Thierry Carrez
Jesse Pretorius wrote:
> [...] 
> The deployment projects, and probably packaging projects too, are faced
> with the same issue. There's no guarantee that their x release will be
> done on the same day as the OpenStack services release their x branches
> as the deployment projects still need some time to verify stability and
> functionality once the services are finalised.

The question then becomes: are you making an "x release", or are you
making a release "supporting/compatible with the x release"? What you
are saying is that you need some time because you are downstream,
reacting to the x release. That is a fair request: you're actually
making a release that supports the x release, you're not in the x
release. The line in the sand is based on the date: if you release
within the development cycle constraints then you're part of the
release; if you release after, you're downstream of it, reacting to it.

What you need to be able to do in all cases is to create a stable branch
to maintain that release over the long run. But what you may not be able
to do is to be considered "part of the x release" if you release months
after the x release is done and shipped.

I'll elaborate on that with a more complete proposal on Monday.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Andrew Laski

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry 
about adding soft delete for instance_actions, they are just archived 
when you archive the instances. It probably makes the logic in the 
archive code messier for this separate path, but it's looking like 
we're going to have to account for the bw_usage_cache table too 
(which has a uuid column for an instance but no foreign key back to 
the instances table and is not soft deleted).
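
For illustration, the archiving step for such tables could look roughly like
this (a SQLAlchemy Core, 1.4-style sketch with assumed table and column
names, not the actual nova-manage code):

    import sqlalchemy as sa

    def archive_rows_for_instance(engine, table_name, uuid_column, instance_uuid):
        # Copy rows that reference the instance only by uuid into the shadow
        # table, then hard-delete them from the live table, since these tables
        # carry no soft-delete marker of their own.
        meta = sa.MetaData()
        src = sa.Table(table_name, meta, autoload_with=engine)
        dst = sa.Table('shadow_' + table_name, meta, autoload_with=engine)
        with engine.begin() as conn:
            rows = conn.execute(
                sa.select(src).where(src.c[uuid_column] == instance_uuid)
            ).mappings().all()
            if rows:
                conn.execute(dst.insert(), [dict(r) for r in rows])
                conn.execute(src.delete().where(src.c[uuid_column] == instance_uuid))

    # e.g. archive_rows_for_instance(engine, 'instance_actions', 'instance_uuid', uuid)
    #      archive_rows_for_instance(engine, 'bw_usage_cache', 'uuid', uuid)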




3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's 
a simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance, if 
you know its uuid, will provide some usefulness. It does lack discoverability 
for the case where you know you had *some* instance that was deleted but 
don't have its uuid and still want to get at the deleted actions. I would 
like to avoid bolting that onto instance actions and keep it as a use case 
for an eventual Task API.






-Sean



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-20 Thread Jiri Tomasek

On 11/16/2015 04:25 PM, Steven Hardy wrote:

Hi all,

I wanted to start some discussion re $subject, because it's been apparent
that we have a lack of clarity on this issue (and have had ever since we
started using parameter_defaults).

Some context:

- Historically TripleO has provided a fairly comprehensive "top level"
   parameters interface, where many per-role and common options are
   specified, then passed in to the respective ResourceGroups on deployment

https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-without-mergepy.yaml#n14

The nice thing about this approach is it gives a consistent API to the
operator, e.g the parameters schema for the main overcloud template defines
most of the expected inputs to the deployment.

The main disadvantage is a degree of template bloat, where we wire dozens
of parameters into each ResourceGroup, and from there into whatever nested
templates consume them.

- When we started adding interfaces (such as all the OS::TripleO::*ExtraConfig*
   interfaces), there was a need to enable passing arbitrary additional
   values to nested templates, with no way of knowing what they are (e.g. to
   enable wiring in third-party pieces we have no knowledge of, or which
   require implementation-specific arguments that don't make sense for all
   deployments).

To do this, we made use of the heat parameter_defaults interface, which
(unlike normal parameters) has global scope (visible to all nested stacks,
without explicitly wiring in the values from the parent):

http://docs.openstack.org/developer/heat/template_guide/environment.html#define-defaults-to-parameters

The nice thing about this approach is its flexibility: any arbitrary
values can be provided without affecting the parent templates, and it can
allow for a terser implementation because you only specify the parameter
definition where it's actually used.

The main disadvantage of this approach is it becomes very much harder to
discover an API surface for the operator, e.g the parameters that must be
provided on deployment by any CLI/UI tools etc.  This has been partially
addressed by the new-for-liberty nested validation heat feature, but
there's still a bunch of unsolved complexity around how to actually consume
that data and build a coherent consolidated API for user interaction:

https://github.com/openstack/heat-specs/blob/master/specs/liberty/nested-validation.rst

My question is, where do we draw the line on when to use each interface?

My position has always been that we should only use parameter_defaults for
the ExtraConfig interfaces, where we cannot know what reasonable parameters
are.  And for all other "core" functionality, we should accept the increased
template verbosity and wire arguments in from overcloud-without-mergepy.

However we've got some patches which fall into a grey area, e.g this SSL
enablement patch:

https://review.openstack.org/#/c/231930/46/overcloud-without-mergepy.yaml

Here we're actually removing some existing (non functional) top-level
parameters, and moving them to parameter_defaults.

I can see the logic behind it; it does make the templates a bit cleaner,
but at the expense of discoverability of those (probably not
implementation-dependent) parameters.

How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?

In particular I'm keen to hear from Mainn and others interested in building
UIs on top of TripleO as to which is best from that perspective, and how
such arguments may be handled relative to the capabilities mapping proposed
here:

https://review.openstack.org/#/c/242439/

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think I'll try to do a bit of a recap to make sure I understand 
things. It may shift slightly off the topic of this thread but I think 
it is worth it and it will describe what the GUI is able/expecting to 
work with.


A template defines parameters and passes them to child templates via 
resource properties.

Root template parameter values are set by (in order of precedence):
1. 'parameters' param in 'stack create' api call or 'parameters' section 
in environment

2. 'parameter_defaults' section in environment
3. 'default' in parameter definition in template

Non-root template parameter values are set by (in order of precedence):
1. parent resource properties
2. 'parameter_defaults' in environment
3. 'default' in parameter definition in template
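
A toy illustration of those two precedence orders in plain Python (not Heat
code; the helper names are made up):

    def resolve_root_param(name, api_parameters, parameter_defaults,
                           template_default=None):
        # Root template: 'parameters' from the create call / environment win,
        # then 'parameter_defaults', then the default in the parameter definition.
        if name in api_parameters:
            return api_parameters[name]
        return parameter_defaults.get(name, template_default)

    def resolve_nested_param(name, parent_properties, parameter_defaults,
                             template_default=None):
        # Nested template: the parent resource's properties win instead.
        if name in parent_properties:
            return parent_properties[name]
        return parameter_defaults.get(name, template_default)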

Name collisions in parameter_defaults should not be a problem, since the 
template author should make sure the parameter names they define don't 
collide with those in other templates.


The GUI's main goal (same as CLI and tripleo-common) is not to hardcode 
anything and use THT (or any

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Dmitry Nikishov
Stanislaw,

In my opinion the whole feature shouldn't be in a separate package, simply
because it will actually affect the code of many, if not all, components of
Fuel.

The only services whose capabilities will have to be managed by puppet are
those which are installed from upstream packages (e.g. atop), not built
from fuel-* repos.

Supervisord doesn't seem to use Linux capabilities; it does setuid instead:
https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326

On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin 
wrote:

> Dmitry, I mean whole feature.
> Btw, why do you want to grant capabilities via puppet? It should be done
> by post-install package section, I believe.
>
> Also I doesn't know if supervisord can bound process capabilities like
> systemd can - we could use this opportunity too.
>
> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov 
> wrote:
>
>> My main concern with using linux capabilities/acls on files is actually
>> puppet support or, actually, the lack of it. ACLs are possible AFAIK, but
>> we'd need to write a custom type/provider for capabilities. I suggest to
>> wait with capabilities support till systemd support.
>>
>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov 
>> wrote:
>>
>>> Stanislaw, do you mean the whole feature, or just a user? Since feature
>>> would require actually changing puppet code.
>>>
>>> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I believe it should be done via package spec as a part of
 installation.

 On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Hello folks,
>
> I have updated the spec, please review and share your thoughts on it:
> https://review.openstack.org/#/c/243340/
>
> Thanks.
>
> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Matthew,
>>
>> sorry, didn't mean to butcher your name :(
>>
>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matther,
>>>
>>> I totally agree that each daemon should have it's own user which
>>> should be created during installation of the relevant package. Probably 
>>> I
>>> didn't state this clear enough in the spec.
>>>
>>> However, there are security requirements in place that root should
>>> not be used at all. This means that there should be a some kind of
>>> maintenance or system user ('fueladmin'), which would have enough
>>> privileges to configure and manage Fuel node (e.g. run "sudo puppet 
>>> apply"
>>> without password, create mirrors etc). This also means that certain 
>>> fuel-
>>> packages would be required to have their files accessible to that user.
>>> That's the idea behind having a package which would create 'fueladmin' 
>>> user
>>> and including it into other fuel- packages requirements lists.
>>>
>>> So this part of the feature comes down to having a non-root user
>>> with sudo privileges and passwordless sudo for certain commands (like
>>> 'puppet apply ') for scripting.
>>>
>>> On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
 Dmitry,

 We really shouldn't put "user" creation into a single package and
 then depend on it for daemons. If we want nailgun service to run as 
 nailgun
 user, it should be created in the fuel-nailgun package.
 I think it makes the most sense to create multiple users, one for
 each service.

 Lastly, it makes a lot of sense to tie a "fuel" CLI user to
 python-fuelclient package.

 On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw,
>
> I agree that this approch would work well. However, does Puppet
> allow managing capabilities and/or file ACLs? Or can they be easily 
> set up
> when installing RPM package? (is there a way to specify 
> capabilities/ACLs
> in the RPM spec file?) This doesn't seem to be supported out of the 
> box.
>
> I'm going to research if it is possible to manage capabilities and
>  ACLs with what we have out of the box (RPM, Puppet).
>
> On Wed, Nov 11, 2015 at 4:29 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I propose to give needed linux capabilities
>> (like CAP_NET_BIND_SERVICE) to processes (services) which needs them 
>> and
>> then start these processes from non-privileged user. It will give you
>> ability to run each process without 'sudo' at all with well 
>> fine-grained
>> permissions.
>>
>> On Tue, Nov 

Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy 
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry 
about adding soft delete for instance_actions, they are just archived 
when you archive the instances. It probably makes the logic in the 
archive code messier for this separate path, but it's looking like we're 
going to have to account for the bw_usage_cache table too (which has a 
uuid column for an instance but no foreign key back to the instances 
table and is not soft deleted).




3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a 
simple API change with a microversion.




-Sean



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Bogdan Dobrelya
On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> Hi team,
> 
> I think it too late to make such significant changes for MOS 8.0 now,
> but I'm ok with the idea to remove docker containers in the future
> releases if our dev team want to do this.
> Any way, before we will do this, we need to plan how we will perform
> updates between different releases with and without docker containers,
> how we will manage requirements and etc. In fact we have a lot of
> questions and haven't answers, let's prepare the spec for this change,
> review it, discuss it with developers, users and project management team
> and if we haven't requirements to keep docker containers on master node
> let's remove them for the future releases (not in MOS 8.0).
> 
> Of course, we can fix BVT / SWARM tests and don't use docker images in
> our test suite (it shouldn't be really hard) but we didn't plan these
> changes and in fact these changes can affect our estimates for many tasks.

I can only add that features just cannot be removed without a
deprecation period of 1-2 releases.
So, we could only deprecate the docker feature for the 8.0.

> 
> Thank you!
> 
> 
> On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> mailto:akostri...@mirantis.com>> wrote:
> 
> Hello, Igor.
> 
> >But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
> 
> At first glance, system tests are using docker only to fetch logs
> and run shell commands.
> Also, docker is used to run Rally.
> 
> If there is an action to remove docker containers with carefull
> attention to bvt testing, it would take couple days to fix system tests.
> But time may be highly affected by code freezes and active features
> merging.
> 
> QA team is going to have Monday (Nov 23) sync-up - and it is
> possible to get more exact information from all QA-team.
> 
> P.S.
> +1 to remove docker.
> -1 to remove docker without taking into account deadlines/other
> features.
> 
> On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> mailto:ikalnit...@mirantis.com>> wrote:
> 
> Hey guys,
> 
> Despite the fact I like containers (as deployment unit), we
> don't use
> them so. That means I +1 idea to drop containers, just because I
> believe that would
> 
> * simplify a lot of things
> * helps get rid of huge amount of hacks
> * increase master node deployment
> * release us from annoying support of upgrades / rollbacks that
> proved
> to be non-working well
> 
> But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
> 
> Thanks,
> Igor
> 
> 
> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
> mailto:vkuk...@mirantis.com>> wrote:
> > Folks
> >
> > I guess it should be pretty simple to roll back - install
> older version and
> > restore the backup with preservation of /var/log directory.
> >
> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> > mailto:sgolovat...@mirantis.com>>
> wrote:
> >>
> >> Hi,
> >>
> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn
> mailto:mmoses...@mirantis.com>>
> >> wrote:
> >>>
> >>> Vladimir,
> >>>
> >>> The old site.pp is long out of date and should just be
> recreated from the
> >>> content of all the other $service-only.pp files.
> >>>
> >>> My main question is how do we propose to do a rollback from
> an update (in
> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we
> hardcode persistent
> >>> data directories (or symlink them?) to
> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing
> behind the scenes
> >>> currently with Docker? If we keep that mechanism in place,
> all the existing
> >>> puppet modules can be used without any modifications. On the
> same note,
> >>> upgrade/rollback is the same as backup and restore, that
> means our restore
> >>> should follow a similar approach.
> >>> -Matthew
> >>
> >>
> >> There only one idea I have is to do dual partitioning system.
> The similar
> >> approach is implemented in CoreOS.
> >>
> >>>
> >>>
> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya
> mailto:bdobre...@mirantis.com>>
> >>> wrote:
> 
>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
>  > Dear colleagues,
>  >
>  > As might remember, we introduced Docker containers on the
> master node
>  

Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Vitaly Kramskikh
+1 for "refuel" to trigger Fuel CI only, awesome idea. "recheck" will
trigger both.

2015-11-20 21:12 GMT+07:00 Sergey Vasilenko :

>
> On Fri, Nov 20, 2015 at 4:00 PM, Alexey Shtokolov  > wrote:
>
>> Probably we should use another keyword for Fuel CI to prevent an extra
>> load on the infrastructure? For example "refuel" or smth like this?
>
>
> IMHO we should have ability to restart each one of two deployment tests.
> Often happens, that one test passed, but another fails while ENV setting
> up. Restart both tests for this case does not required.
>
>
> /sv
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi Andrey,

As far as I remember from the last usage of fuel master node, there was
> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
> hard to launch some application on fuel node without docker (image with
> py27/py3). Are you planning to provide py27 at least or my note is outdated
> and I can already use py27 from the box?

We can still install docker on the master node to run Rally / Tempest or
other test suites and scripts that need Python 2.7 or newer.

On Fri, Nov 20, 2015 at 5:20 PM, Andrey Kurilin 
wrote:

> Hi!
> I'm not fuel developer, so opinion below is based on user-view.
> As far as I remember from the last usage of fuel master node, there was
> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
> hard to launch some application on fuel node without docker (image with
> py27/py3). Are you planning to provide py27 at least or my note is outdated
> and I can already use py27 from the box?
>
> On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> As might remember, we introduced Docker containers on the master node a
>> while ago when we implemented first version of Fuel upgrade feature. The
>> motivation behind was to make it possible to rollback upgrade process if
>> something goes wrong.
>>
>> Now we are at the point where we can not use our tarball based upgrade
>> approach any more and those patches that deprecate upgrade tarball has been
>> already merged. Although it is a matter of a separate discussion, it seems
>> that upgrade process rather should be based on kind of backup and restore
>> procedure. We can backup Fuel data on an external media, then we can
>> install new version of Fuel from scratch and then it is assumed backed up
>> Fuel data can be applied over this new Fuel instance. The procedure itself
>> is under active development, but it is clear that rollback in this case
>> would be nothing more than just restoring from the previously backed up
>> data.
>>
>> As for Docker containers, still there are potential advantages of using
>> them on the Fuel master node, but our current implementation of the feature
>> seems not mature enough to make us benefit from the containerization.
>>
>> At the same time there are some disadvantages like
>>
>>- it is tricky to get logs and other information (for example, rpm
>>-qa) for a service like shotgun which is run inside one of containers.
>>- it is specific UX when you first need to run dockerctl shell
>>{container_name} and then you are able to debug something.
>>- when building IBP image we mount directory from the host file
>>system into mcollective container to make image build faster.
>>- there are config files and some other files which should be shared
>>among containers which introduces unnecessary complexity to the whole
>>system.
>>- our current delivery approach assumes we wrap into rpm/deb packages
>>every single piece of the Fuel system. Docker images are not an exception.
>>And as far as they depend on other rpm packages we forced to build
>>docker-images rpm package using kind of specific build flow. Besides this
>>package is quite big (300M).
>>- I'd like it to be possible to install Fuel not from ISO but from
>>RPM repo on any rpm based distribution. But it is double work to support
>>both Docker based and package based approach.
>>
>> Probably some of you can give other examples. Anyway, the idea is to get
>> rid of Docker containers on the master node and switch to plane package
>> based approach that we used before.
>>
>> As far as there is nothing new here, we just need to use our old site.pp
>> (with minimal modifications), it looks like it is possible to implement
>> this during 8.0 release cycle. If there are no principal objections, please
>> give me a chance to do this ASAP (during 8.0), I know it is a huge risk for
>> the release, but still I think I can do this.
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.o

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Andrey Kurilin
Hi!
I'm not fuel developer, so opinion below is based on user-view.
As far as I remember from my last usage of the fuel master node, it was a
CentOS + py26 installation. Python 2.6 is quite old, and sometimes it is
hard to launch an application on the fuel node without docker (an image with
py27/py3). Are you planning to provide at least py27, or is my note outdated
and can I already use py27 out of the box?

On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> As might remember, we introduced Docker containers on the master node a
> while ago when we implemented first version of Fuel upgrade feature. The
> motivation behind was to make it possible to rollback upgrade process if
> something goes wrong.
>
> Now we are at the point where we can not use our tarball based upgrade
> approach any more and those patches that deprecate upgrade tarball has been
> already merged. Although it is a matter of a separate discussion, it seems
> that upgrade process rather should be based on kind of backup and restore
> procedure. We can backup Fuel data on an external media, then we can
> install new version of Fuel from scratch and then it is assumed backed up
> Fuel data can be applied over this new Fuel instance. The procedure itself
> is under active development, but it is clear that rollback in this case
> would be nothing more than just restoring from the previously backed up
> data.
>
> As for Docker containers, still there are potential advantages of using
> them on the Fuel master node, but our current implementation of the feature
> seems not mature enough to make us benefit from the containerization.
>
> At the same time there are some disadvantages like
>
>- it is tricky to get logs and other information (for example, rpm
>-qa) for a service like shotgun which is run inside one of containers.
>- it is specific UX when you first need to run dockerctl shell
>{container_name} and then you are able to debug something.
>- when building IBP image we mount directory from the host file system
>into mcollective container to make image build faster.
>- there are config files and some other files which should be shared
>among containers which introduces unnecessary complexity to the whole
>system.
>- our current delivery approach assumes we wrap into rpm/deb packages
>every single piece of the Fuel system. Docker images are not an exception.
>And as far as they depend on other rpm packages we forced to build
>docker-images rpm package using kind of specific build flow. Besides this
>package is quite big (300M).
>- I'd like it to be possible to install Fuel not from ISO but from RPM
>repo on any rpm based distribution. But it is double work to support both
>Docker based and package based approach.
>
> Probably some of you can give other examples. Anyway, the idea is to get
> rid of Docker containers on the master node and switch to plane package
> based approach that we used before.
>
> As far as there is nothing new here, we just need to use our old site.pp
> (with minimal modifications), it looks like it is possible to implement
> this during 8.0 release cycle. If there are no principal objections, please
> give me a chance to do this ASAP (during 8.0), I know it is a huge risk for
> the release, but still I think I can do this.
>
>
>
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sean Dague
On 11/17/2015 10:51 PM, Matt Riedemann wrote:

> 
> I *don't* see any DB APIs for deleting instance actions.
> 
> Kind of an important difference there.  Jay got it at least. :)
> 
>>
>> Were we just planning on instance_actions living forever in the database?
>>
>> Should we soft delete instance_actions when we delete the referenced
>> instance?
>>
>> Or should we (hard) delete instance_actions when we archive (move to
>> shadow tables) soft deleted instances?
>>
>> This is going to be a blocker to getting nova-manage db
>> archive_deleted_rows working.
>>
>> [1] https://review.openstack.org/#/c/246635/

instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.

3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Sergey Vasilenko
On Fri, Nov 20, 2015 at 4:00 PM, Alexey Shtokolov 
wrote:

> Probably we should use another keyword for Fuel CI to prevent an extra
> load on the infrastructure? For example "refuel" or smth like this?


IMHO we should have the ability to restart each of the two deployment tests
separately. It often happens that one test passes but the other fails while
the environment is being set up. Restarting both tests in that case is not
required.


/sv
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi team,

I think it is too late to make such significant changes for MOS 8.0 now, but
I'm OK with the idea of removing docker containers in future releases if
our dev team wants to do this.
Anyway, before we do this, we need to plan how we will perform updates
between different releases with and without docker containers, how we will
manage requirements, etc. In fact we have a lot of questions and no answers
yet. Let's prepare the spec for this change, review it, discuss it with
developers, users and the project management team, and if there are no
requirements to keep docker containers on the master node, let's remove them
in a future release (not in MOS 8.0).

Of course, we can fix the BVT / SWARM tests so they don't use docker images
(it shouldn't be really hard), but we didn't plan these changes, and they can
affect our estimates for many tasks.

Thank you!


On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov <
akostri...@mirantis.com> wrote:

> Hello, Igor.
>
> >But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
>
> At first glance, system tests are using docker only to fetch logs and run
> shell commands.
> Also, docker is used to run Rally.
>
> If there is an action to remove docker containers with carefull attention
> to bvt testing, it would take couple days to fix system tests.
> But time may be highly affected by code freezes and active features
> merging.
>
> QA team is going to have Monday (Nov 23) sync-up - and it is possible to
> get more exact information from all QA-team.
>
> P.S.
> +1 to remove docker.
> -1 to remove docker without taking into account deadlines/other features.
>
> On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky 
> wrote:
>
>> Hey guys,
>>
>> Despite the fact I like containers (as deployment unit), we don't use
>> them so. That means I +1 idea to drop containers, just because I
>> believe that would
>>
>> * simplify a lot of things
>> * helps get rid of huge amount of hacks
>> * increase master node deployment
>> * release us from annoying support of upgrades / rollbacks that proved
>> to be non-working well
>>
>> But I'd like to hear from QA how do we rely on container-based
>> infrastructure? Would it be hard to change our sys-tests in short
>> time?
>>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin 
>> wrote:
>> > Folks
>> >
>> > I guess it should be pretty simple to roll back - install older version
>> and
>> > restore the backup with preservation of /var/log directory.
>> >
>> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
>> >  wrote:
>> >>
>> >> Hi,
>> >>
>> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn <
>> mmoses...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Vladimir,
>> >>>
>> >>> The old site.pp is long out of date and should just be recreated from
>> the
>> >>> content of all the other $service-only.pp files.
>> >>>
>> >>> My main question is how do we propose to do a rollback from an update
>> (in
>> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we hardcode
>> persistent
>> >>> data directories (or symlink them?) to
>> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing behind the
>> scenes
>> >>> currently with Docker? If we keep that mechanism in place, all the
>> existing
>> >>> puppet modules can be used without any modifications. On the same
>> note,
>> >>> upgrade/rollback is the same as backup and restore, that means our
>> restore
>> >>> should follow a similar approach.
>> >>> -Matthew
>> >>
>> >>
>> >> There only one idea I have is to do dual partitioning system. The
>> similar
>> >> approach is implemented in CoreOS.
>> >>
>> >>>
>> >>>
>> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya <
>> bdobre...@mirantis.com>
>> >>> wrote:
>> 
>>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
>>  > Dear colleagues,
>>  >
>>  > As might remember, we introduced Docker containers on the master
>> node
>>  > a
>>  > while ago when we implemented first version of Fuel upgrade
>> feature.
>>  > The
>>  > motivation behind was to make it possible to rollback upgrade
>> process
>>  > if
>>  > something goes wrong.
>>  >
>>  > Now we are at the point where we can not use our tarball based
>> upgrade
>>  > approach any more and those patches that deprecate upgrade tarball
>> has
>>  > been already merged. Although it is a matter of a separate
>> discussion,
>>  > it seems that upgrade process rather should be based on kind of
>> backup
>>  > and restore procedure. We can backup Fuel data on an external
>> media,
>>  > then we can install new version of Fuel from scratch and then it is
>>  > assumed backed up Fuel data can be applied over this new Fuel
>>  > instance.
>> 
>>  A side-by-side upgrade, correct? That should work as well.
>> 
>>  > The procedure itself is under active development, b

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-20 Thread Kevin Benton
There is something that isn't clear to me from your patch and from your 
description of the workflow below. It sounds like you are following the basic 
L3 to ToR topology so each rack is a broadcast domain. If that’s the case, each 
rack should be a Neutron network and the mapping should be between racks and 
Networks, not racks and Subnets.

Also, can you elaborate a bit on the multiple gateway use case? If a subnet is 
isolated to a rack, wouldn’t all of the clients in that rack just want to use 
the ToR as their default gateway?
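
To make the mapping question concrete, a toy sketch of the rack <-> network
lookup under discussion (hypothetical data structures only, not Neutron's
data model or API):

    # Hypothetical mapping from rack id to networks and their free-IP counts.
    RACK_NETWORKS = {
        'rack-12': [{'network_id': 'net-a', 'free_ips': 40},
                    {'network_id': 'net-b', 'free_ips': 0}],
    }

    def pick_network_for_rack(rack_id, num_ips=1):
        # Pick a network in the scheduled host's rack with enough free IPs left.
        candidates = [n for n in RACK_NETWORKS.get(rack_id, [])
                      if n['free_ips'] >= num_ips]
        if not candidates:
            raise LookupError('no network with %d free IPs in %s'
                              % (num_ips, rack_id))
        return max(candidates, key=lambda n: n['free_ips'])['network_id']

    print(pick_network_for_rack('rack-12'))  # -> 'net-a'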


> On Nov 9, 2015, at 9:39 PM, Shraddha Pandhe  
> wrote:
> 
> Hi Carl,
> 
> Please find my reply inline
> 
> 
> On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin  > wrote:
> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe  > wrote:
> We have a similar requirement where we want to pick a network thats 
> accessible in the rack that VM belongs to. We have L3 Top-of-rack, so the 
> network is confined to the rack. Right now, we are achieving this by naming 
> physical network name in a certain way, but thats not going to scale.
> 
> We also want to be able to make scheduling decisions based on IP 
> availability. So we need to know rack <-> network <-> mapping.  We can't 
> embed all factors in a name. It will be impossible to make scheduling 
> decisions by parsing name and comparing. GoDaddy has also been doing 
> something similar [1], [2].
> 
> This is precisely the use case that the large deployers team (LDT) has 
> brought to Neutron [1].  In fact, GoDaddy has been at the forefront of that 
> request.  We've had discussions about this since just after Vancouver on the 
> ML.  I've put up several specs to address it [2] and I'm working another 
> revision of it.  My take on it is that Neutron needs a model for a layer 3 
> network (IpNetwork) which would group the rack networks.  The IpNetwork would 
> be visible to the end user and there will be a network <-> host mapping.  I 
> am still aiming to have working code for this in Mitaka.  I discussed this 
> with the LDT in Tokyo and they seemed to agree.  We had a session on this in 
> the Neutron design track [3][4] though that discussion didn't produce 
> anything actionable.
> 
> Thats great. L3 layer network model is definitely one of our most important 
> requirements. All our go-forward deployments are going to be L3. So this is a 
> big deal for us. 
>  
> Solving this problem at the IPAM level has come up in discussion but I don't 
> have any references for that.  It is something that I'm still considering but 
> I haven't worked out all of the details for how this can work in a portable 
> way.  Could you describe how you imagine how this flow would work from a 
> user's perspective?  Specifically, when a user wants to boot a VM, what 
> precise API calls would be made to achieve this on your network and how where 
> would the IPAM data come in to play?
> 
> Here's what the flow looks like to me.
> 
> 1. User sends a boot request as usual. The user need not know all the network 
> and subnet information beforehand. All he would do is send a boot request.
> 
> 2. The scheduler will pick a node in an L3 rack. The way we map nodes <-> 
> racks is as follows:
> a. For VMs, we store rack_id in nova.conf on compute nodes
> b. For Ironic nodes, right now we have static IP allocation, so we 
> practically know which IP we want to assign. But when we move to dynamic 
> allocation, we would probably use 'chassis' or 'driver_info' fields to store 
> the rack id.
> 
> 3. Nova compute will try to pick a network ID for this instance.  At this 
> point, it needs to know what networks (or subnets) are available in this 
> rack. Based on that, it will pick a network ID and send port creation request 
> to Neutron. At Yahoo, to avoid some back-and-forth, we send a fake network_id 
> and let the plugin do all the work.
> 
> 4. We need some information associated with the network/subnet that tells us 
> what rack it belongs to. Right now, for VMs, we have that information 
> embedded in physnet name. But we would like to move away from that. If we had 
> a column for subnets - e.g. tag, it would solve our problem. Ideally, we 
> would like a column 'rack id' or a new table 'racks' that maps to subnets, or 
> something. We are open to different ideas that work for everyone. This is 
> where IPAM can help.
> 
> 5. We have another requirement where we want to store multiple gateway 
> addresses for a subnet, just like name servers.
> 
> 
> We also have a requirement where we want to make scheduling decisions based 
> on IP availability. We want to allocate multiple IPs to the hosts. e.g. We 
> want to allocate X IPs to a host. The flow in that case would be
> 
> 1. User sends a boot request with --num-ips X
> The network/subnet level complexities need not be exposed to the user. 
> For better experience, all we want our users to tell us is the number of IPs 
> they w

Re: [openstack-dev] [nova] build_instance pre hook cannot set injected_files for new instance

2015-11-20 Thread Rich Megginson

On 11/19/2015 10:34 AM, Rich Megginson wrote:
I have some code that uses the build_instance pre hook to set 
injected_files in the new instance.  With the kilo code, the argv[7] 
was passed as [] - so I could append/extend this value to add more 
injected_files.  With the latest code, this is passed as None, so I 
can't set it.  How can I pass injected_files in a build_instance pre 
hook with the latest code/liberty? 


I have filed bug https://bugs.launchpad.net/nova/+bug/1518321 to track 
this issue.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Miguel Angel Ajo

Correct, thanks Moshe.

One of the first proposals will probably be changing the periodicity of the 
meeting to every two weeks instead of every week. We could vote on that at 
the end of the meeting, depending on how things go. And of course, we could 
change it back to weekly later in the cycle as necessary.



Moshe Levi wrote:

Just to add more details about the when and where  :)
We will have a weekly meeting on Wednesday at 1400 UTC in #openstack-meeting-3
http://eavesdrop.openstack.org/#Neutron_QoS_Meeting

Thanks,
Moshe Levi.


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Friday, November 20, 2015 12:08 PM
To: Miguel Angel Ajo
Cc: OpenStack Development Mailing List (not for usage questions); victor.r.how...@gmail.com;
irenab@gmail.com; Moshe Levi; Vikram
Choudhary; Gal Sagie; Haim
Daniel
Subject: Re: [neutron] [QoS] meeting rebooted

Miguel Angel Ajo  wrote:


Hi everybody,

  We're restarting the QoS meeting for next week,

  Here are the details, and a preliminary agenda,

   https://etherpad.openstack.org/p/qos-mitaka


   Let's keep QoS moving!,

Best,
Miguel Ángel.

I think you better give idea when/where it is restarted. :)

Ihar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Alexander Kostrikov
Hello, Igor.

>But I'd like to hear from QA how do we rely on container-based
infrastructure? Would it be hard to change our sys-tests in short
time?

At first glance, system tests are using docker only to fetch logs and run
shell commands.
Also, docker is used to run Rally.

If there is an action to remove docker containers with careful attention
to bvt testing, it would take a couple of days to fix the system tests.
But the time may be highly affected by code freezes and active feature merging.

The QA team is going to have a sync-up on Monday (Nov 23) - and it will be
possible to get more exact information from the whole QA team then.

P.S.
+1 to remove docker.
-1 to remove docker without taking into account deadlines/other features.

On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky 
wrote:

> Hey guys,
>
> Despite the fact that I like containers (as a deployment unit), we don't use
> them that way. That means I'm +1 on the idea to drop containers, just because I
> believe that would
>
> * simplify a lot of things
> * help get rid of a huge amount of hacks
> * speed up master node deployment
> * release us from the annoying support of upgrades / rollbacks that proved
> not to work well
>
> But I'd like to hear from QA: how much do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in a short
> time?
>
> Thanks,
> Igor
>
>
> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin 
> wrote:
> > Folks
> >
> > I guess it should be pretty simple to roll back - install the older
> > version and restore the backup while preserving the /var/log directory.
> >
> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> >  wrote:
> >>
> >> Hi,
> >>
> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn <
> mmoses...@mirantis.com>
> >> wrote:
> >>>
> >>> Vladimir,
> >>>
> >>> The old site.pp is long out of date and should just be recreated from
> the
> >>> content of all the other $service-only.pp files.
> >>>
> >>> My main question is how do we propose to do a rollback from an update
> (in
> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we hardcode
> persistent
> >>> data directories (or symlink them?) to
> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing behind the
> scenes
> >>> currently with Docker? If we keep that mechanism in place, all the
> existing
> >>> puppet modules can be used without any modifications. On the same note,
> >>> upgrade/rollback is the same as backup and restore, that means our
> restore
> >>> should follow a similar approach.
> >>> -Matthew
> >>
> >>
> >> The only idea I have is to do a dual-partition system. A similar
> >> approach is implemented in CoreOS.
> >>
> >>>
> >>>
> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com>
> >>> wrote:
> 
>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
>  > Dear colleagues,
>  >
>  > As you might remember, we introduced Docker containers on the master
>  > node a while ago when we implemented the first version of the Fuel
>  > upgrade feature. The motivation behind it was to make it possible to
>  > roll back the upgrade process if something goes wrong.
>  >
>  > Now we are at the point where we can not use our tarball-based upgrade
>  > approach any more, and the patches that deprecate the upgrade tarball
>  > have already been merged. Although it is a matter of a separate
>  > discussion, it seems that the upgrade process should rather be based on
>  > a kind of backup and restore procedure. We can back up Fuel data on
>  > external media, then install a new version of Fuel from scratch, and
>  > then the backed up Fuel data can be applied over this new Fuel
>  > instance.
> 
>  A side-by-side upgrade, correct? That should work as well.
> 
>  > The procedure itself is under active development, but it is clear that
>  > rollback in this case would be nothing more than just restoring from
>  > the previously backed up data.
>  >
>  > As for Docker containers, there are still potential advantages to using
>  > them on the Fuel master node, but our current implementation of the
>  > feature does not seem mature enough for us to benefit from the
>  > containerization.
>  >
>  > At the same time there are some disadvantages, like
>  >
>  >   * it is tricky to get logs and other information (for example,
>  > rpm -qa) for a service like shotgun which is run inside one of the
>  > containers.
>  >   * it is a specific UX when you first need to run dockerctl shell
>  > {container_name} and only then are you able to debug something.
>  >   * when building the IBP image we mount a directory from the host
>  > file system into the mcollective container to make the image build
>  > faster.
>  >   * there are config files and some other files which should be shared
>  > among containers, which introduces unnecessary complexity to the
> 

Re: [openstack-dev] How to add a periodic check for typos?

2015-11-20 Thread Amrith Kumar
So, just for grins, I took this approach out for a spin on Trove and noticed 
this as part of the change proposed by topy.

-   "hebrew": ["hebrew_general_ci", "hebrew_bin"],
+   "Hebrew": ["hebrew_general_ci", "hebrew_bin"],

-   "greek": ["greek_general_ci", "greek_bin"],
+   "Greek": ["greek_general_ci", "greek_bin"],

In this particular case the change is being proposed in something that is a set
of collation sequences, and while "Hebrew" is the correct capitalization in the
English language, what we need in this case is "hebrew". Similarly for Greek
and greek.

If there were some way to specify a set of "required typos" I would be OK 
running this manually before I checked in code. I'm not sure I'd like it either 
as a tox or a hacking rule though because the pain may outweigh the gain.
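
A "required typos" allow-list could be as simple as the following sketch -
purely hypothetical Python, not an existing topy option; the protected tokens
and the assumption that the fixer never adds or removes lines are illustrative.

    # Revert any line an automated spelling fixer changed if the original
    # line contains a token that must stay verbatim (e.g. MySQL collation
    # names).  Assumes the fixer rewrites lines in place, one for one.
    PROTECTED_TOKENS = {'"hebrew"', '"greek"'}

    def restore_required_typos(original_lines, fixed_lines):
        result = []
        for orig, fixed in zip(original_lines, fixed_lines):
            if orig != fixed and any(tok in orig for tok in PROTECTED_TOKENS):
                result.append(orig)   # keep the "required typo"
            else:
                result.append(fixed)
        return result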

-amrith


> -Original Message-
> From: Gareth [mailto:academicgar...@gmail.com]
> Sent: Thursday, November 19, 2015 10:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] How to add a periodic check for typos?
> 
> Just talking about the idea of auto-spelling-fix.
> 
> My example patch https://review.openstack.org/#/c/247261/ doesn't work.
> Topy fixes some things and breaks others. So for now it is only okay to do
> auto-spelling-checks, not fixes :(
> 
> 
> 
> On Fri, Nov 20, 2015 at 6:31 AM, Matt Riedemann
>  wrote:
> >
> >
> > On 11/18/2015 8:00 PM, Gareth wrote:
> >>
> >> Hi stacker,
> >>
> >> We could use some 3rd tools like topy:
> >>  pip install topy
> >>  topy -a 
> >>  git commit & git review
> >>
> >> Here is an example: https://review.openstack.org/#/c/247261/
> >>
>> Could we have a periodic job, like the Jenkins user updating our
>> requirements.txt?
> >>
> >
> > Are you asking for all projects or just a specific project you work on
> > and you forgot to tag the subject line?
> >
> > I wouldn't have a bot doing this, if I were going to do it (which I
> > wouldn't for nova). You could have it built into your pep8 job, or a
> > separate job that is voting on your project, if you really care about 
> > spelling.
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Gareth
> 
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode My promise: if you find any
> spelling or grammar mistakes in my email from Mar 1 2013, notify me and I'll
> donate $1 or ¥1 to an open organization you specify.
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with stable/1.3

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Clark Boylan wrote:

> You need a mapping of some sort. How should devstack be configured for
> stable/X.Y? What about stable/Y.Z? This is one method of providing that
> mapping and it is very explicit. We can probably do better but we need
> to understand what the desired mapping is before encoding that into any
> tools.

AFAICT, Gnocchi supports any version of the components it leverages
(Keystone and Swift). We just want devstack to deploy the latest stable
version, whatever it is.

> If you have a very specific concrete set of services to be configured
> you could possibly ship your own features.yaml to only configure those
> things (rather than the default of an entire running cloud). This may
> help make the jobs run quicker too.

We'd love that, we really don't need an "entire cloud". :)

> Another approach would be to set the OVERRIDE_ZUUL_BRANCH to master and
> the OVERRIDE_${project}_PROJECT_BRANCH to ZUUL_BRANCH so that your
> project is always checked out against the correct branch for the change
> but is tested against master for everything else. This is probably the
> simplest mapping (our stable/X.Y should run against whatever is
> current).

I didn't know that was possible; it's good to know. That might be a good
option, though it has the downside of ultimately hitting potential bugs
in other projects' master branches. We already had our gate blocked for days
because we hit particular bugs in Keystone or Swift, and it took
days/weeks to fix and/or work around them.

Thanks for the insight Clark!

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-11-20 Thread Somanchi Trinath
Hi-

As I understand it, you are not sure how to locate the hardware appliance
which you have as your FW?

Am I right?  If so, you can look into an approach along the lines of
https://github.com/jumpojoy/generic_switch.

-
Trinath



From: Oguz Yarimtepe [mailto:oguzyarimt...@gmail.com]
Sent: Friday, November 20, 2015 5:52 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas 
driver writing

I created a sample driver by looking at the vArmour driver that is in the GitHub
FWaaS repo. I am planning to call the FW's REST API from the suitable functions.

The problem is, I am still not sure how to locate the hardware appliance. One
of the FWaaS guys says that service chaining can help; does anybody have an idea
how to insert the FW into OpenStack?
On 11/02/2015 02:36 PM, Somanchi Trinath wrote:
Hi-

I'm confused. Do you really have a PoC implementation of what is to be
achieved?

As I look into these types of implementations, I would prefer to have a proxy
driver/plugin to get the configuration from OpenStack to an external
controller/device and do the rest of the magic.

-
Trinath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Igor Belikov
Alexey,

First of all, “refuel” sounds very cool.
Thanks for raising this topic, I would like to hear more opinions here.
On one hand, a different keyword would help to prevent unnecessary infrastructure
load, I agree with you on that. On the other hand, using existing keywords
helps to avoid confusion and provides the expected behaviour for our CI jobs. Far
too many times I’ve heard questions like “Why doesn’t ‘recheck’ retrigger Fuel
CI jobs?”.

So I would like to hear more thoughts here from our developers. And I will
investigate how other third-party CI systems handle this question.
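
For reference, the comment matching being discussed boils down to something like
the following simplified illustration in Python (not the exact expression from
global.yaml, which is more permissive about surrounding text):

    import re

    # Simplified: retrigger when a Gerrit comment consists solely of
    # "recheck" or "reverify", case-insensitively.
    RETRIGGER_RE = re.compile(r'^\s*(recheck|reverify)\s*$', re.IGNORECASE)

    def should_retrigger(comment):
        return bool(RETRIGGER_RE.search(comment))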
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com






> On 20 Nov 2015, at 16:00, Alexey Shtokolov  wrote:
> 
> Igor,
> 
> Thank you for this feature.
> Afaiu recheck/reverify is mostly useful for internal CI-related failures. And
> Fuel CI and OpenStack CI are two different infrastructures.
> So if something is broken on Fuel CI, "recheck" will restart all jobs on OpenStack
> CI too. And the opposite case works the same way.
> 
> Perhaps we should use another keyword for Fuel CI to prevent extra load
> on the infrastructure? For example "refuel" or something like that?
> 
> Best regards, 
> Alexey Shtokolov
> 
> 2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin:
> Igor,
> 
> it is much more clear for me now. Thank you :)
> 
> On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov wrote:
> Hi Stanislaw,
> 
> The reason behind this is simple - deployment tests are heavy. Each
> deployment test occupies a whole server for ~2 hours; for each commit we have 2
> deployment tests (for current fuel-library master) and that’s just because we
> don’t test CentOS deployment for now.
> If we assume that developers will retrigger deployment tests only when a
> retrigger would actually solve the failure - it’s still not smart in terms of
> HW usage to retrigger both tests when only one has failed, for example.
> And there are cases when a retrigger just won’t do it and a CI Engineer must
> manually erase the existing environment on the slave or fix it by other means, so
> it’s better when a CI Engineer looks through the logs before each retrigger of a
> deployment test.
> 
> Hope this answers your question.
> 
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
>> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin wrote:
>> 
>> Hi Igor,
>> 
>> would you be so kind tell, why fuel-library deployment tests doesn't support 
>> this? Maybe there is a link with previous talks about it?
>> 
>> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov wrote:
>> Hi,
>> 
>> I’d like to inform you that all jobs running on Fuel CI (with the exception 
>> of fuel-library deployment tests) now support retriggering via “recheck” or 
>> “reverify” comments in Gerrit.
>> Exact regex is the same one used in Openstack-Infra’s zuul and can be found 
>> here 
>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>>  
>> 
>> 
>> CI-Team kindly asks you to not abuse this option, unfortunately not every 
>> failure could be solved by retriggering.
>> And, to stress this out once again: fuel-library deployment tests don’t 
>> support this, so you still have to ask for a retrigger in #fuel-infra irc 
>> channel.
>> 
>> Thanks for attention.
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> _

Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Alexey Shtokolov
Igor,

Thank you for this feature.
Afaiu recheck/reverify is mostly useful for internal CI-related failures. And
Fuel CI and OpenStack CI are two different infrastructures.
So if something is broken on Fuel CI, "recheck" will restart all jobs on
OpenStack CI too. And the opposite case works the same way.

Perhaps we should use another keyword for Fuel CI to prevent extra load
on the infrastructure? For example "refuel" or something like that?

Best regards,
Alexey Shtokolov

2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin :

> Igor,
>
> it is much more clear for me now. Thank you :)
>
> On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov 
> wrote:
>
>> Hi Stanislaw,
>>
>> The reason behind this is simple - deployment tests are heavy. Each
>> deployment test occupies a whole server for ~2 hours; for each commit we have
>> 2 deployment tests (for current fuel-library master) and that’s just
>> because we don’t test CentOS deployment for now.
>> If we assume that developers will retrigger deployment tests only when a
>> retrigger would actually solve the failure - it’s still not smart in terms
>> of HW usage to retrigger both tests when only one has failed, for example.
>> And there are cases when a retrigger just won’t do it and a CI Engineer must
>> manually erase the existing environment on the slave or fix it by other means,
>> so it’s better when a CI Engineer looks through the logs before each retrigger
>> of a deployment test.
>>
>> Hope this answers your question.
>>
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com
>>
>> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin 
>> wrote:
>>
>> Hi Igor,
>>
>> would you be so kind tell, why fuel-library deployment tests doesn't
>> support this? Maybe there is a link with previous talks about it?
>>
>> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov 
>> wrote:
>>
>>> Hi,
>>>
>>> I’d like to inform you that all jobs running on Fuel CI (with the
>>> exception of fuel-library deployment tests) now support retriggering via
>>> “recheck” or “reverify” comments in Gerrit.
>>> Exact regex is the same one used in Openstack-Infra’s zuul and can be
>>> found here
>>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>>>
>>> CI-Team kindly asks you to not abuse this option, unfortunately not
>>> every failure could be solved by retriggering.
>>> And, to stress this out once again: fuel-library deployment tests don’t
>>> support this, so you still have to ask for a retrigger in #fuel-infra irc
>>> channel.
>>>
>>> Thanks for attention.
>>> --
>>> Igor Belikov
>>> Fuel CI Engineer
>>> ibeli...@mirantis.com
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with stable/1.3

2015-11-20 Thread Clark Boylan
On Thu, Nov 19, 2015, at 05:17 AM, Julien Danjou wrote:
> Hi,
> 
> The Gnocchi gate is broken for stable/1.3 because of devstack-gate
> saying¹:
>   ERROR: branch not allowed by features matrix: 1.3
> 
> From what I understand, that's because devstack-gate thinks it should
> try to pull stable/1.3 for devstack & all OpenStack projects, branch
> that does not exist – and make no sense elsewhere than in Gnocchi.
No, this isn't why this is happening. Devstack-gate will happily
fall back to grabbing master for projects if it doesn't otherwise find
the branch for the change under test in other projects. The issue is
that in order to run devstack you have to configure a set of services in
devstack, and the services you want to run change over time because we
make releases and new services show up and old services go away.

As a result devstack-gate is very explicit about what should be
configured for each release [0]. If it doesn't recognize the branch
currently under test, it fails rather than doing something implicitly that
is unexpected.
> 
> In the past, we set OVERRIDE_ZUUL_BRANCH=stable/kilo in the Gnocchi jobs
> for some stable branches (we did for stable/1.0), but honestly patching
> the infra each time we do a stable release is getting painful.
You need a mapping of some sort. How should devstack be configured for
stable/X.Y? What about stable/Y.Z? This is one method of providing that
mapping and it is very explicit. We can probably do better but we need
to understand what the desired mapping is before encoding that into any
tools.
> 
> Actually, Gnocchi does not really care about pulling whatever branch of
> the other projects, it just wants them deployed to use them (Keystone
> and Swift). Since the simplest way is to use devstack, that's what it
> uses.
If you have a very specific concrete set of services to be configured
you could possibly ship your own features.yaml to only configure those
things (rather than the default of an entire running cloud). This may
help make the jobs run quicker too.

Another approach would be to set the OVERRIDE_ZUUL_BRANCH to master and
the OVERRIDE_${project}_PROJECT_BRANCH to ZUUL_BRANCH so that your
project is always checked out against the correct branch for the change
but is tested against master for everything else. This is probably the
simplest mapping (our stable/X.Y should run against whatever is
current).
> 
> Is there any chance we can have a way to say in devstack{,-gate} "just
> deploy the latest released version of $PROJECT" and that's it?
I think there are several options but which one is used depends on how
you need to map onto the 6 month release cycle (required because
devstack deploys projects using it).

[0]
https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/features.yaml#n3

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Duncan Thomas
Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).

I suggest a design where the worker code is in brick but the decisions stay
in nova. This enables code sharing while not substantially altering the
nova plan - it also encourages strong back-compatibility guarantees with
on-disk formats since the dm setup part of the code will be slightly more
difficult to modify.
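
As a rough illustration of that split - a sketch only, mirroring the shape of
the existing nova/volume/encryptors classes rather than any agreed os-brick
API - the shared worker interface might look like this, with Nova (and Cinder,
on its data-path operations) still deciding whether and when to invoke it:

    import abc

    class VolumeEncryptor(abc.ABC):
        """Attach/detach a connected volume through an encryption layer
        (e.g. dm-crypt); the caller decides whether to use it at all."""

        def __init__(self, connection_info, **kwargs):
            self.connection_info = connection_info

        @abc.abstractmethod
        def attach_volume(self, context, **kwargs):
            """Expose a decrypted block device for the connected volume."""

        @abc.abstractmethod
        def detach_volume(self, **kwargs):
            """Tear the decrypted block device back down."""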
On 20 Nov 2015 13:10, "Daniel P. Berrange"  wrote:

> On Fri, Nov 20, 2015 at 03:22:04AM +, Li, Xiaoyan wrote:
> > Hi all,
> >
> > To fix bug [1][2] in Cinder, Cinder needs to use
> nova/volume/encryptors[3]
> > to attach/detach encrypted volumes.
> >
> > To decrease the code duplication, I raised a BP[4] to move encryptors to
> > os-brick[5].
> >
> > Once it is done, Nova needs to update to use the common library. This
> > is BP raised. [6]
>
> You need to propose a more detailed spec for this, not merely a blueprint
> as there are going to be significant discussion points here.
>
> In particular for the QEMU/KVM nova driver, this proposal is not really
> moving in a direction that is aligned with our long term desire/plan for
> volume encryption and/or storage management in Nova with KVM.  While we
> currently use dm-crypt with volumes that are backed by block devices,
> this is not something we wish to use long term. Increasingly the storage
> used is network based, and while we use in-kernel network clients for
> iSCSI/NFS, we use an in-QEMU client for RBD/Gluster storage. QEMU also
> has support for in-QEMU clients for iSCSI/NFS and it is likely we'll use
> them in Nova in future too.
>
> Now encryption throws a (small) spanner in the works as the only way to
> access encrypted data right now is via dm-crypt, which obviously doesn't
> fly when there's no kernel block device to attach it to. Hence we are
> working on an enhancement to QEMU to let it natively handle LUKS format
> volumes. At which point we'll stop using dm-crypt for for anything and
> do it all in QEMU.
>
> Nova currently decides whether it wants to use the in-kernel network
> client, or an in-QEMU network client for the various network backed
> storage drivers. If os-brick takes over encryption setup with dm-crypt,
> then it would potentially be taking the decision away from Nova about
> whether to use in-kernel or in-QEMU clients, which is not desirable.
> Nova must retain control over which configuration approach is best
> for the hypervisor it is using.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable] OpenStack 2014.2.4 (juno)

2015-11-20 Thread Sean Dague
On 11/19/2015 08:56 PM, Rochelle Grober wrote:
> Again, my plea to leave the Juno repository on git.openstack.org, but locked 
> down to enable at least grenade testing for Juno->Kilo upgrades.  For upgrade 
> testing purposes, python2.6 is not needed as any cloud would have to upgrade 
> python before upgrading to kilo.  The testing could/should be limited to only 
> occurring when Kilo backports are proposed.  The nodepool requirements should 
> be very small except for the pre-release periods remaining for Kilo, 
> especially if the testing is restricted to grenade only.
> 
> Thanks for the ear. I'm expecting to participate in the stable releases team, 
> and to bring a developer along with me;-)

This really isn't a good idea.

Grenade makes sure the old side works first, with Tempest. Tempest won't
support juno any more, so you'd need to modify the job to do something else
here.

Oftentimes there are breaks due to upstream changes that require fixes
on the old side, which is now impossible.

Juno being EOL means we expect you are already off of it, not that you
should be getting off of it soon.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-20 Thread Daniel Comnea
Superb report Jim, thanks !

On Thu, Nov 19, 2015 at 10:47 AM, Markus Zoeller 
wrote:

> David Pursehouse  wrote on 11/12/2015 09:22:50
> PM:
>
> > From: David Pursehouse 
> > To: OpenStack Development Mailing List
> 
> > Cc: openstack-in...@lists.openstack.org
> > Date: 11/12/2015 09:27 PM
> > Subject: Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User
> Summit
> >
> > On Mon, Nov 9, 2015 at 10:40 PM David Pursehouse
>  > > wrote:
> >
> > <...>
> >
> > * As noted in another recent thread by Khai, the hashtags support
> >   (user-defined tags applied to changes) exists but depends on notedb
> >   which is not ready for use yet (targeted for 3.0 which is probably
> >   at least 6 months off).
>
> >
> > We're looking into the possibility of enabling only enough of the
> > notedb to make hashtags work in 2.12.
> >
> >
> >
> > Unfortunately it looks like it's not going to be possible to do this.
>
> That's a great pity. :(
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Sean Dague
On 11/20/2015 06:01 AM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Alan Pevec [mailto:ape...@gmail.com]
>> Sent: Friday, November 20, 2015 10:46 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
>> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
>>
>>> So we were brainstorming this with Rocky the other night. Would this be
>> possible to do by following:
>>> 1) we still tag juno EOL in a few days' time
>>> 2) we do not remove the stable/juno branch
>>
>> Why not?
>>
>>> 3) we run periodic grenade jobs for kilo
>>
>> From a quick look, grenade should work with a juno-eol tag instead of
>> stable/juno, it's just a git reference.
>> "Zombie" Juno->Kilo grenade job would need to set
>> BASE_DEVSTACK_BRANCH=juno-eol and for devstack all
>> $PROJECT_BRANCH=juno-eol (or 2014.2.4 should be the same commit).
>> Maybe I'm missing some corner case in devstack where stable/* is assumed
>> but if so that should be fixed anyway.
>> Leaving the branch around is a bad message, it implies there is support
>> for it, while there is not.
>>
>> Cheers,
>> Alan
> 
> That sounds like an easy compromise.

Before doing that thing, do you regularly look into grenade failures to
determine root cause?

Because a periodic job that fails and isn't looked at is just a waste
of resources. And from past experience, very very few people look at
these job results.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Clark Boylan
On Fri, Nov 20, 2015, at 04:32 AM, Julien Danjou wrote:
> On Fri, Nov 20 2015, Clark Boylan wrote:
> 
> > If you have a stable/X.Y branch or stable/foo but are still wanting to
> > map onto the 6 month release cycle (we know this because you are running
> > devstack-gate) how do we make that mapping? is it arbitrary? is there
> > some deterministic method? Things like this affect the changes necessary
> > to the tools but should be listed upfront.
> 
> Honestly, we don't use devstack-gate because we map onto a 6-month
> release cycle. We use devstack-gate because that seems to be the canonical way
> of using devstack in the gate. :)
Maybe I should've said "because you are running devstack via
devstack-gate". Running devstack requires making choices about what
services to run based on the 6 month release cycle.
> 
> Right now, I think the problem I stated in:
>   [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate
>   with stable/1.3
>   http://lists.openstack.org/pipermail/openstack-dev/2015-November/079849.html
> 
> is pretty clear. Or if it's not feel free to reply to it and I'll give
> more information. :)
Will look and followup there.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Clark Boylan wrote:

> If you have a stable/X.Y branch or stable/foo but are still wanting to
> map onto the 6 month release cycle (we know this because you are running
> devstack-gate) how do we make that mapping? is it arbitrary? is there
> some deterministic method? Things like this affect the changes necessary
> to the tools but should be listed upfront.

Honestly, we don't use devstack-gate because we map onto a 6-month
release cycle. We use devstack-gate because that seems to be the canonical way
of using devstack in the gate. :)

Right now, I think the problem I stated in:
  [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with 
stable/1.3
  http://lists.openstack.org/pipermail/openstack-dev/2015-November/079849.html

is pretty clear. Or if it's not feel free to reply to it and I'll give
more information. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Dmitry Tantsur

On 11/20/2015 12:50 AM, Bruno Cornec wrote:

Hello,

Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:

Hi list and Bruno,

I’m interested in adding virtual media boot interface for redfish (
https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot).

It depends on
https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
and a corresponding spec https://review.openstack.org/184653, that
proposes
adding support for redfish (adding new power and management
interfaces) to
ironic. It also seems to depend on python-redfish client -
https://github.com/devananda/python-redfish.


Very good idea ;-)


I’d like to know what is the current status of it?


We have made recently some successful tests with both a real HP ProLiant
server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.

The version working for these tests is at
https://github.com/bcornec/python-redfish (prototype branch)
I think we should now move that work into master and make again a pull
request to Devananda.


Is there some roadmap of what should be added to
python-redfish (or is the one mentioned in spec is still relevant)?


I think this is still relevant.


Is there a way for others to contribute in it?


Feel free to git clone the repo and propose patches to it ! We would be
happy to have contributors :-) I've also copied our mailing list to the
other contributors are aware of this.


Bruno, do you plan to move it
under ironic umbrella, or into pyghmi as people suggested in spec?


That's a difficult question. On one hand, I don't think python-redfish
should be under the OpenStack umbrella per se. This is a useful python
module to talk to servers providing a Redfish interface and this has
no relationship with OpenStack ... except that it's very useful for
Ironic ! But it could also be used by other projects in the future such as
Hadoop for node deployment, or my MondoRescue Disaster Recovery project
e.g. That's also why we have not used OpenStack modules, in order to
avoid creating an artificial dependency that could prevent that module
from being used by these other projects.


Using the openstack umbrella does not automatically mean the project can't
be used outside of openstack. It just means you'll be using openstack
infra for its development, which might be a big plus.




I'm new to the python galaxy myself, but thought that pypy would be the
right place for it, but I really welcome suggestions here.


You mean PyPI? I don't see how these 2 contradict each other, PyPI is 
just a way to distribute releases.



I also need to come back to the Redfish spec itself and update it with the
latest feedback we got, in order to have more up-to-date content for the
Mitaka cycle.

Best regards,
Bruno.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-11-20 Thread Oguz Yarimtepe
I created a sample driver by looking at the vArmour driver that is in the
GitHub FWaaS repo. I am planning to call the FW's REST API from the
suitable functions.


The problem is, I am still not sure how to locate the hardware
appliance. One of the FWaaS guys says that service chaining can help; does
anybody have an idea how to insert the FW into OpenStack?


On 11/02/2015 02:36 PM, Somanchi Trinath wrote:


Hi-

I’m confused. Do you really have a PoC implementation of what is to
be achieved?


As I look into these types of implementations, I would prefer to have a
proxy driver/plugin to get the configuration from OpenStack to an
external controller/device and do the rest of the magic.


-

Trinath



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Vladyslav Drok
On Fri, Nov 20, 2015 at 1:50 AM, Bruno Cornec  wrote:

> Hello,
>
> Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:
>
>> Hi list and Bruno,
>>
>> I’m interested in adding virtual media boot interface for redfish (
>> https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot
>> ).
>> It depends on
>> https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
>> and a corresponding spec https://review.openstack.org/184653, that
>> proposes
>> adding support for redfish (adding new power and management interfaces) to
>> ironic. It also seems to depend on python-redfish client -
>> https://github.com/devananda/python-redfish.
>>
>
> Very good idea ;-)
>
> I’d like to know what is the current status of it?
>>
>
> We have made recently some successful tests with both a real HP ProLiant
> server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.
>

Great news! :)


>
> The version working for these tests is at
> https://github.com/bcornec/python-redfish (prototype branch)
> I think we should now move that work into master and make again a pull
> request to Devananda.
>
> Is there some roadmap of what should be added to
>> python-redfish (or is the one mentioned in spec is still relevant)?
>>
>
> I think this is still relevant.
>
> Is there a way for others to contribute in it?
>>
>
> Feel free to git clone the repo and propose patches to it ! We would be
> happy to have contributors :-) I've also copied our mailing list to the
> other contributors are aware of this.


I'll dig into current code and will try to contribute something meaningful
then.


>
>
> Bruno, do you plan to move it
>> under ironic umbrella, or into pyghmi as people suggested in spec?
>>
>
> That's a difficult question. On one hand, I don't think python-redfish
> should be under the OpenStack umbrella per se. This is a useful python
> module to talk to servers providing a Redfish interface and this has
> no relationship with OpenStack ... except that it's very useful for
> Ironic ! But it could also be used by other projects in the future such as
> Hadoop for node deployment, or my MondoRescue Disaster Recovery project
> e.g. That's also why we have not used OpenStack modules, in order to
> avoid creating an artificial dependency that could prevent that module
> from being used by these other projects.
>
> I'm new to the python galaxy myself, but thought that pypy would be the
> right place for it, but I really welcome suggestions here.
> I also need to come back to the Redfish spec itself and update it with the
> latest feedback we got, in order to have more up-to-date content for the
> Mitaka cycle.
>
> Best regards,
> Bruno.
> --
> Open Source Profession, Linux Community Lead WW  http://hpintelco.net
> HPE EMEA EG Open Source Technology Strategist http://hp.com/go/opensource
> FLOSS projects: http://mondorescue.org http://project-builder.org
> Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org
>

Thanks for the answers :)
Vlad
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Clark Boylan


On Thu, Nov 19, 2015, at 12:55 PM, Chris Dent wrote:
> On Thu, 19 Nov 2015, Julien Danjou wrote:
> 
> > It would be good to support that as being *normal*, not "potentially
> > incorrect and random"!
> 
> Yes.
> 
> The underlying issue in this thread is the dominance of the six month
> cycle and the way this is perceived to be (any may actually be) a
> benefit for distributors, marketers, etc. That dominance drives the
> technological and social context of OpenStack. No surprise that it is
> present in our tooling and our schedules but sometimes I think it
> would be great if we could fight the power, shift the paradigm, break
> the chains.
> 
> But that's crazy talk, isn't it?
> 
> However it is pretty clear the dominance is not aligned with at least
> some of the goals of a big tent. One goal, in particular, is making
> OpenStack stuff useful and accessible to people or groups outside of
> OpenStack where release-often is awesome and the needs of the packagers
> aren't really that important.
> 
> I reckon (and this may be an emerging consensus somewhere in this
> thread) we need to make it easier (by declaration) in the tooling
> to test against whatever is desired. Can we enumerate the changes
> required to make that go?
"Test whatever is desired" is far to nebulous. We need an actual set of
concrete needs and requirements and once you have that you can worry
about enumerating changes. I am not sure I have seen anything like this
in the thread so far.

If you have a stable/X.Y branch or stable/foo but are still wanting to
map onto the 6 month release cycle (we know this because you are running
devstack-gate) how do we make that mapping? is it arbitrary? is there
some deterministic method? Things like this affect the changes necessary
to the tools but should be listed upfront.

Once we have enumerated the problem we can enumerate the changes to fix
it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Announce List

2015-11-20 Thread Dean Troyer
On Fri, Nov 20, 2015 at 4:41 AM, Thierry Carrez 
wrote:

> We could definitely go back to "the place users wanting to keep up with
> upstream news directly affecting them should subscribe to", and post only:
>
> - user-facing service releases (type:service deliverables), on stable
> branches or development branches
> - security vulnerabilities and security notes
> - weekly upstream development news (the one Mike compiles), and include
> a digest of all library/ancillary services releases of the week in there
>
> Library releases are not "directly affecting users", so not urgent news
> for "users wanting to keep up with upstream news" and can wait to be
> mentioned in the weekly digest.
>

Matthieu mentioned the following a bit later:

> I however like being informed of projects and clients releases.
> They don't happen often and they are interesting to me as an
> operator (projects) and consumer of the OpenStack API (project
>  clients).

Libraries are not directly affecting users, but clients are: python-*client
and OSC and the like.  I'd be on the fence re the SDK (when it becomes
official) but it is intended for downstream consumers, app devs mostly
rather than end-users and operators.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-20 Thread Kekane, Abhishek


-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: 20 November 2015 16:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

Abhishek,

Go for it!

Thank you Dims, I am on it!!

Abhishek

On Fri, Nov 20, 2015 at 2:32 AM, Kekane, Abhishek  
wrote:
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: 16 November 2015 21:46
> To: openstack-dev
> Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into 
> oslo.utils
>
> Excerpts from Kekane, Abhishek's message of 2015-11-16 07:33:48 +:
>> Hi,
>>
>> As apiclient is now removed from oslo-incubator, to proceed with 
>> request-id spec [1] I have two options in mind,
>>
>>
>> 1.   Use keystoneauth1 + cliff in all python-clients (add request-id 
>> support in cliff library)
>
> cliff is being used outside of OpenStack, and is not at all related to REST 
> API access, so I don't think that's the right place.
>
>>
>> 2.   apiclient code is available in all python-*clients, modify this 
>> code in individual clients and add support to return request-id.
>
> Yes, I think that makes sense.
>
> Hi Devs,
>
> As mentioned by Doug, I will start pushing patches for
> python-cinderclient, python-glanceclient and python-novaclient from next week
> which include the changes for returning the request-id to the caller.
> Please let me know if you have any suggestions on the same.
>
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/156508/
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
>>
>> > On Nov 11, 2015, at 3:54 AM, Andrey Kurilin wrote:
>>
>> >
>>
>> >
>>
>> >
>>
>> > On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague wrote:
>>
>> > On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
>>
>> > >>It was also proposed to reuse openstackclient or the openstack SDK.
>>
>> > >
>>
>> > > Openstack SDK was proposed a long time ago(it looks like it was 
>> > > several
>>
>> > > cycles ago) as "alternative" for cliutils and apiclient, but I 
>> > > don't
>>
>> > > know any client which use it yet. Maybe openstacksdk cores should 
>> > > try to
>>
>> > > port any client as an example of how their project should be used.
>>
>> >
>>
>> > The SDK is targeted for end user applications, not service clients.
>> > I do
>>
>> > get there was lots of confusion over this, but SDK is not the 
>> > answer
>>
>> > here for service clients.
>>
>> >
>>
>> > Ok, thanks for explanation, but there is another question in my head: If 
>> > openstacksdk is not for python-*clients, why apiclient(which is actually 
>> > used by python-*clients) was marked as deprecated due to openstacksdk?
>>
>>
>>
>> The Oslo team wanted to deprecate the API client code because it wasn't 
>> being maintained. We thought at the time we did so that the SDK would 
>> replace the clients, but discussions since that time have changed direction.
>>
>> >
>>
>> > The service clients are *always* going to have to exist in some form.
>>
>> > Either as libraries that services produce, or by services deciding 
>> > they
>>
>> > don't want to consume the libraries of other clients and just put a
>>
>> > targeted bit of rest code in their own tree to talk to other services.
>>
>> >
>>
>> > -Sean
>>
>> >
>>
>> > --
>>
>> > Sean Dague
>>
>> > http://dague.net 
>>
>> >
>>
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> >
>>
>> >
>>
>> >
>>
>> > --
>>
>> > Best regards,
>>
>> > Andrey Kurilin.
>>
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Jesse Pretorius
On 19 November 2015 at 09:43, Thierry Carrez  wrote:

>
> So we have three models. The release:independent model is for projects
> that don't follow the common development cycle, and therefore won't make
> a "liberty" release. The release:cycle-with-milestones model is the
> traditional "one release at the end of the cycle" model, and the
> release:cycle-with-intermediary model is an hybrid where you follow the
> development cycle (and make an end-of-cycle release) but can still make
> intermediary, featureful releases as necessary.
>

Hmm, then it seems to me that OpenStack-Ansible should be tagged
'release:cycle-with-intermediary' instead of 'release:independent' - is
that correct?


> Looking at your specific case, it appears you could adopt the
> release:cycle-with-intermediary model, since you want to maintain a
> branch mapped to a given release. The main issue is your (a) point,
> especially the "much later" point. Liberty is in the past now, so making
> "liberty" releases now that we are deep in the Mitaka cycle is a bit
> weird.
>

The deployment projects, and probably packaging projects too, are faced
with the same issue. There's no guarantee that their x release will be done
on the same day as the OpenStack services release their x branches as the
deployment projects still need some time to verify stability and
functionality once the services are finalised. While it could be easily
said that we simply create the branch, then backport any fixes, this is not
necessarily ideal as it creates an additional review burden and doesn't
really match how the stable branches are meant to operate according to the
policy.


> Maybe we need a new model to care for such downstream projects when they
> can't release in relative sync with the projects they track.
>

Perhaps. Or perhaps the rules can be relaxed for a specific profile of
projects (non core?).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Moshe Levi
Just to add more details about the when and where  :) 
We will have a weekly meeting on Wednesday at 1400 UTC in #openstack-meeting-3
http://eavesdrop.openstack.org/#Neutron_QoS_Meeting 

Thanks,
Moshe Levi. 

> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Friday, November 20, 2015 12:08 PM
> To: Miguel Angel Ajo 
> Cc: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>; victor.r.how...@gmail.com;
> irenab@gmail.com; Moshe Levi ; Vikram
> Choudhary ; Gal Sagie ; Haim
> Daniel 
> Subject: Re: [neutron] [QoS] meeting rebooted
> 
> Miguel Angel Ajo  wrote:
> 
> >
> >Hi everybody,
> >
> >  We're restarting the QoS meeting for next week,
> >
> >  Here are the details, and a preliminary agenda,
> >
> >   https://etherpad.openstack.org/p/qos-mitaka
> >
> >
> >   Let's keep QoS moving!,
> >
> > Best,
> > Miguel Ángel.
> 
> I think you had better give an idea of when/where it is restarted. :)
> 
> Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-20 Thread Davanum Srinivas
Abhishek,

Go for it!

On Fri, Nov 20, 2015 at 2:32 AM, Kekane, Abhishek
 wrote:
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: 16 November 2015 21:46
> To: openstack-dev
> Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils
>
> Excerpts from Kekane, Abhishek's message of 2015-11-16 07:33:48 +:
>> Hi,
>>
>> As apiclient is now removed from oslo-incubator, to proceed with
>> request-id spec [1] I have two options in mind,
>>
>>
>> 1.   Use keystoneauth1 + cliff in all python-clients (add request-id 
>> support in cliff library)
>
> cliff is being used outside of OpenStack, and is not at all related to REST 
> API access, so I don't think that's the right place.
>
>>
>> 2.   apiclient code is available in all python-*clients, modify this 
>> code in individual clients and add support to return request-id.
>
> Yes, I think that makes sense.
>
> Hi Devs,
>
> As mentioned by Doug, I will start pushing patches for
> python-cinderclient, python-glanceclient and python-novaclient from next week
> which include the changes for returning the request-id to the caller.
> Please let me know if you have any suggestions on the same.
>
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/156508/
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
>>
>> > On Nov 11, 2015, at 3:54 AM, Andrey Kurilin wrote:
>>
>> >
>>
>> >
>>
>> >
>>
>> > On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague wrote:
>>
>> > On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
>>
>> > >>It was also proposed to reuse openstackclient or the openstack SDK.
>>
>> > >
>>
>> > > Openstack SDK was proposed a long time ago(it looks like it was
>> > > several
>>
>> > > cycles ago) as "alternative" for cliutils and apiclient, but I
>> > > don't
>>
>> > > know any client which use it yet. Maybe openstacksdk cores should
>> > > try to
>>
>> > > port any client as an example of how their project should be used.
>>
>> >
>>
>> > The SDK is targeted for end user applications, not service clients.
>> > I do
>>
>> > get there was lots of confusion over this, but SDK is not the answer
>>
>> > here for service clients.
>>
>> >
>>
>> > Ok, thanks for explanation, but there is another question in my head: If 
>> > openstacksdk is not for python-*clients, why apiclient(which is actually 
>> > used by python-*clients) was marked as deprecated due to openstacksdk?
>>
>>
>>
>> The Oslo team wanted to deprecate the API client code because it wasn't 
>> being maintained. We thought at the time we did so that the SDK would 
>> replace the clients, but discussions since that time have changed direction.
>>
>> >
>>
>> > The service clients are *always* going to have to exist in some form.
>>
>> > Either as libraries that services produce, or by services deciding
>> > they
>>
>> > don't want to consume the libraries of other clients and just put a
>>
>> > targeted bit of rest code in their own tree to talk to other services.
>>
>> >
>>
>> > -Sean
>>
>> >
>>
>> > --
>>
>> > Sean Dague
>>
>> > http://dague.net 
>>
>> >
>>
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> >
>>
>> >
>>
>> >
>>
>> > --
>>
>> > Best regards,
>>
>> > Andrey Kurilin.
>>
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> -- next part --
>> An HTML attachment was scrubbed...
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscrib

Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Stanislaw Bogatkin
Igor,

it is much clearer to me now. Thank you :)

On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov  wrote:

> Hi Stanislaw,
>
> The reason behind this is simple - deployment tests are heavy. Each
> deployment test occupies a whole server for ~2 hours, and for each commit we
> have 2 deployment tests (for current fuel-library master), and that’s just
> because we don’t test CentOS deployment for now.
> Even if we assume that developers will retrigger deployment tests only when
> a retrigger would actually solve the failure, it’s still not smart in terms
> of HW usage to retrigger both tests when only one has failed, for example.
> And there are cases when a retrigger just won’t do it and a CI Engineer must
> manually erase the existing environment on the slave or fix it by other means,
> so it’s better when a CI Engineer looks through the logs before each retrigger
> of a deployment test.
>
> Hope this answers your question.
>
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin 
> wrote:
>
> Hi Igor,
>
> would you be so kind as to tell why fuel-library deployment tests don't
> support this? Maybe there is a link to previous talks about it?
>
> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov 
> wrote:
>
>> Hi,
>>
>> I’d like to inform you that all jobs running on Fuel CI (with the
>> exception of fuel-library deployment tests) now support retriggering via
>> “recheck” or “reverify” comments in Gerrit.
>> The exact regex is the same one used in OpenStack-Infra’s Zuul and can be
>> found here:
>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
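
For illustration, a Zuul-style retrigger matcher has roughly the following
shape; this is only a sketch, and the authoritative pattern is the one in the
global.yaml linked above:

    import re

    # Illustrative approximation of a "recheck"/"reverify" comment trigger;
    # the real regex lives in the fuel-ci global.yaml referenced above.
    RETRIGGER_RE = re.compile(r'^\s*(recheck|reverify)\s*$',
                              re.IGNORECASE | re.MULTILINE)

    def is_retrigger_request(gerrit_comment):
        """Return True if a Gerrit comment asks CI to re-run its jobs."""
        return bool(RETRIGGER_RE.search(gerrit_comment))

    # is_retrigger_request("recheck")   -> True
    # is_retrigger_request("lgtm, +1")  -> False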
>>
>> The CI team kindly asks you not to abuse this option; unfortunately, not every
>> failure can be solved by retriggering.
>> And, to stress this once again: fuel-library deployment tests don’t
>> support this, so you still have to ask for a retrigger in the #fuel-infra IRC
>> channel.
>>
>> Thanks for your attention.
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Igor Belikov
Hi Stanislaw,

The reason behind this is simple - deployment tests are heavy. Each deployment 
test occupies a whole server for ~2 hours, and for each commit we have 2 deployment 
tests (for current fuel-library master), and that’s just because we don’t test 
CentOS deployment for now.
Even if we assume that developers will retrigger deployment tests only when 
a retrigger would actually solve the failure, it’s still not smart in terms of 
HW usage to retrigger both tests when only one has failed, for example.
And there are cases when a retrigger just won’t do it and a CI Engineer must 
manually erase the existing environment on the slave or fix it by other means, so 
it’s better when a CI Engineer looks through the logs before each retrigger of 
a deployment test.

Hope this answers your question.

--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin  wrote:
> 
> Hi Igor,
> 
> would you be so kind as to tell why fuel-library deployment tests don't support 
> this? Maybe there is a link to previous talks about it?
> 
> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov  wrote:
> Hi,
> 
> I’d like to inform you that all jobs running on Fuel CI (with the exception 
> of fuel-library deployment tests) now support retriggering via “recheck” or 
> “reverify” comments in Gerrit.
> The exact regex is the same one used in OpenStack-Infra’s Zuul and can be found 
> here: 
> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
> 
> The CI team kindly asks you not to abuse this option; unfortunately, not every 
> failure can be solved by retriggering.
> And, to stress this once again: fuel-library deployment tests don’t 
> support this, so you still have to ask for a retrigger in the #fuel-infra IRC 
> channel.
> 
> Thanks for your attention.
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Daniel P. Berrange
On Fri, Nov 20, 2015 at 03:22:04AM +, Li, Xiaoyan wrote:
> Hi all,
> 
> To fix bugs [1][2] in Cinder, Cinder needs to use nova/volume/encryptors[3]
> to attach/detach encrypted volumes. 
> 
> To decrease code duplication, I raised a BP[4] to move the encryptors to
> os-brick[5].
> 
> Once it is done, Nova needs to be updated to use the common library. A BP
> has been raised for this. [6]
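
As a purely hypothetical sketch of what shared usage could look like once the
code is moved (the module path, factory name and arguments below are
assumptions, not an existing os-brick API), both Nova and Cinder might attach
and detach encrypted volumes through the same helper:

    # Hypothetical usage sketch: os_brick.encryptors and get_volume_encryptor()
    # are assumed names here, not a published os-brick interface.
    from os_brick import encryptors

    def attach_encrypted_volume(context, connection_info, keymgr):
        """Attach a volume and layer the encryptor on top of it."""
        encryptor = encryptors.get_volume_encryptor(
            connection_info=connection_info, keymgr=keymgr)
        encryptor.attach_volume(context)

    def detach_encrypted_volume(context, connection_info, keymgr):
        """Tear the encryptor down before the volume is detached."""
        encryptor = encryptors.get_volume_encryptor(
            connection_info=connection_info, keymgr=keymgr)
        encryptor.detach_volume(context)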

You need to propose a more detailed spec for this, not merely a blueprint,
as there are going to be significant discussion points here.

In particular for the QEMU/KVM nova driver, this proposal is not really
moving in a direction that is aligned with our long term desire/plan for
volume encryption and/or storage management in Nova with KVM.  While we
currently use dm-crypt with volumes that are backed by block devices,
this is not something we wish to use long term. Increasingly the storage
used is network based, and while we use in-kernel network clients for
iSCSI/NFS, we use an in-QEMU client for RBD/Gluster storage. QEMU also
has support for in-QEMU clients for iSCSI/NFS and it is likely we'll use
them in Nova in future too.

Now encryption throws a (small) spanner in the works as the only way to
access encrypted data right now is via dm-crypt, which obviously doesn't
fly when there's no kernel block device to attach it to. Hence we are
working on an enhancement to QEMU to let it natively handle LUKS format
volumes. At which point we'll stop using dm-crypt for anything and
do it all in QEMU.

Nova currently decides whether it wants to use the in-kernel network
client, or an in-QEMU network client for the various network backed
storage drivers. If os-brick takes over encryption setup with dm-crypt,
then it would potentially be taking the decision away from Nova about
whether to use in-kernel or in-QEMU clients, which is not desirable.
Nova must retain control over which configuration approach is best
for the hypervisor it is using.
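
As a rough illustration of the kind of decision Nova wants to keep making
itself (a hypothetical sketch, not Nova code; the function name, backend names
and return values are made up):

    # Hypothetical sketch of the decision described above; illustrative only.
    def pick_attach_path(volume_backend, encrypted, qemu_has_native_luks):
        """Return 'in-kernel' or 'in-qemu' for how QEMU should consume a volume."""
        if volume_backend in ('rbd', 'gluster'):
            # Already consumed directly by QEMU today; there is no host block
            # device for dm-crypt to sit on top of.
            return 'in-qemu'
        if encrypted and not qemu_has_native_luks:
            # Until QEMU can open LUKS volumes itself, encryption forces the
            # in-kernel path so dm-crypt has a block device to attach to.
            return 'in-kernel'
        # For iSCSI/NFS the driver may later prefer QEMU's built-in clients.
        return 'in-qemu' if qemu_has_native_luks else 'in-kernel'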

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

