Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Alan Pevec [mailto:ape...@gmail.com]
> Sent: Friday, November 20, 2015 10:46 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> > So we were brainstorming this with Rocky the other night. Would this be
> > possible to do by doing the following:
> > 1) we still tag Juno EOL in a few days' time
> > 2) we do not remove the stable/juno branch
> 
> Why not?
> 
> > 3) we run periodic grenade jobs for kilo
> 
> From a quick look, grenade should work with a juno-eol tag instead of
> stable/juno, it's just a git reference.
> "Zombie" Juno->Kilo grenade job would need to set
> BASE_DEVSTACK_BRANCH=juno-eol and for devstack all
> $PROJECT_BRANCH=juno-eol (or 2014.2.4, which should be the same commit).
> Maybe I'm missing some corner case in devstack where stable/* is assumed
> but if so that should be fixed anyway.
> Leaving the branch around sends a bad message; it implies there is support for it, while
> there is not.
> 
> Cheers,
> Alan

That sounds like an easy compromise.

- Erno


Re: [openstack-dev] [release][stable] OpenStack 2014.2.4 (juno)

2015-11-20 Thread Sean Dague
On 11/19/2015 08:56 PM, Rochelle Grober wrote:
> Again, my plea to leave the Juno repository on git.openstack.org, but locked 
> down to enable at least grenade testing for Juno->Kilo upgrades.  For upgrade 
> testing purposes, python2.6 is not needed as any cloud would have to upgrade 
> python before upgrading to kilo.  The testing could/should be limited to only 
> occurring when Kilo backports are proposed.  The nodepool requirements should 
> be very small except for the pre-release periods remaining for Kilo, 
> especially if the testing is restricted to grenade only.
> 
> Thanks for the ear. I'm expecting to participate in the stable releases team, 
> and to bring a developer along with me;-)

This really isn't a good idea.

Grenade makes sure the old side works first, with Tempest. Tempest won't
support Juno any more, so you'd need to modify the job to do something else
here.

Oftentimes there are breaks due to upstream changes that require fixes
on the old side, which would now be impossible.

Juno being EOL means we expect you to already be off of it, not that you
should be soon.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-20 Thread Kevin Benton
There is something that isn't clear to me from your patch and from your 
description of the workflow below. It sounds like you are following the basic 
L3-to-ToR topology, so each rack is a broadcast domain. If that's the case, each 
rack should be a Neutron network and the mapping should be between racks and 
Networks, not racks and Subnets.

Also, can you elaborate a bit on the multiple gateway use case? If a subnet is 
isolated to a rack, wouldn’t all of the clients in that rack just want to use 
the ToR as their default gateway?


> On Nov 9, 2015, at 9:39 PM, Shraddha Pandhe wrote:
> 
> Hi Carl,
> 
> Please find my reply inline
> 
> 
> On Mon, Nov 9, 2015 at 9:49 AM, Carl Baldwin wrote:
> On Fri, Nov 6, 2015 at 2:59 PM, Shraddha Pandhe wrote:
> We have a similar requirement where we want to pick a network that's 
> accessible in the rack that the VM belongs to. We have L3 top-of-rack, so the 
> network is confined to the rack. Right now, we are achieving this by naming 
> the physical network in a certain way, but that's not going to scale.
> 
> We also want to be able to make scheduling decisions based on IP 
> availability. So we need to know the rack <-> network mapping. We can't 
> embed all factors in a name. It will be impossible to make scheduling 
> decisions by parsing names and comparing them. GoDaddy has also been doing 
> something similar [1], [2].
> 
> This is precisely the use case that the large deployers team (LDT) has 
> brought to Neutron [1].  In fact, GoDaddy has been at the forefront of that 
> request.  We've had discussions about this since just after Vancouver on the 
> ML.  I've put up several specs to address it [2] and I'm working on another 
> revision of it.  My take on it is that Neutron needs a model for a layer 3 
> network (IpNetwork) which would group the rack networks.  The IpNetwork would 
> be visible to the end user and there will be a network <-> host mapping.  I 
> am still aiming to have working code for this in Mitaka.  I discussed this 
> with the LDT in Tokyo and they seemed to agree.  We had a session on this in 
> the Neutron design track [3][4] though that discussion didn't produce 
> anything actionable.
> 
> That's great. An L3 network model is definitely one of our most important 
> requirements. All our go-forward deployments are going to be L3, so this is a 
> big deal for us. 
>  
> Solving this problem at the IPAM level has come up in discussion but I don't 
> have any references for that.  It is something that I'm still considering but 
> I haven't worked out all of the details for how this can work in a portable 
> way.  Could you describe how you imagine this flow would work from a 
> user's perspective?  Specifically, when a user wants to boot a VM, what 
> precise API calls would be made to achieve this on your network, and where 
> would the IPAM data come into play?
> 
> Here's what the flow looks like to me.
> 
> 1. User sends a boot request as usual. The user need not know all the network 
> and subnet information beforehand. All he would do is send a boot request.
> 
> 2. The scheduler will pick a node in an L3 rack. The way we map nodes <-> 
> racks is as follows:
> a. For VMs, we store rack_id in nova.conf on compute nodes
> b. For Ironic nodes, right now we have static IP allocation, so we 
> practically know which IP we want to assign. But when we move to dynamic 
> allocation, we would probably use 'chassis' or 'driver_info' fields to store 
> the rack id.
> 
> 3. Nova compute will try to pick a network ID for this instance.  At this 
> point, it needs to know what networks (or subnets) are available in this 
> rack. Based on that, it will pick a network ID and send a port creation request 
> to Neutron. At Yahoo, to avoid some back-and-forth, we send a fake network_id 
> and let the plugin do all the work.
> 
> 4. We need some information associated with the network/subnet that tells us 
> what rack it belongs to. Right now, for VMs, we have that information 
> embedded in the physnet name, but we would like to move away from that. If we had 
> a column for subnets - e.g. a tag - it would solve our problem. Ideally, we 
> would like a 'rack id' column or a new 'racks' table that maps to subnets, or 
> something similar (see the sketch after this list). We are open to different 
> ideas that work for everyone. This is where IPAM can help.
> 
> 5. We have another requirement where we want to store multiple gateway 
> addresses for a subnet, just like name servers.
> 
> 
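A minimal sketch of the rack-aware lookup described in point 4 above, assuming a hypothetical rack_id attribute stored per subnet; none of the names below come from the actual Neutron or IPAM schema:

    # Illustration only: a rack-aware subnet lookup, assuming a hypothetical
    # "rack_id" attribute is stored alongside each subnet (e.g. as a tag or an
    # extra IPAM column). None of these names are from the real Neutron schema.

    class RackAwareSubnetPicker(object):
        def __init__(self, subnets_by_rack):
            # subnets_by_rack: {"rack-42": [{"id": "subnet-a", "free_ips": 120}, ...]}
            self.subnets_by_rack = subnets_by_rack

        def pick(self, rack_id, required_ips=1):
            """Return a subnet in the given rack with enough free addresses."""
            candidates = self.subnets_by_rack.get(rack_id, [])
            # Prefer the subnet with the most free IPs, so scheduling decisions
            # based on IP availability (the second requirement) stay simple.
            for subnet in sorted(candidates, key=lambda s: s["free_ips"], reverse=True):
                if subnet["free_ips"] >= required_ips:
                    return subnet["id"]
            raise LookupError("no subnet in %s with %d free IPs" % (rack_id, required_ips))

    picker = RackAwareSubnetPicker({"rack-42": [{"id": "subnet-a", "free_ips": 12}]})
    print(picker.pick("rack-42", required_ips=4))  # -> subnet-a
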
> We also have a requirement where we want to make scheduling decisions based 
> on IP availability. We want to allocate multiple IPs to the hosts. e.g. We 
> want to allocate X IPs to a host. The flow in that case would be
> 
> 1. User sends a boot request with --num-ips X
> The network/subnet level complexities need not be exposed to the user. 
> For 

Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sean Dague
On 11/17/2015 10:51 PM, Matt Riedemann wrote:

> 
> I *don't* see any DB APIs for deleting instance actions.
> 
> Kind of an important difference there.  Jay got it at least. :)
> 
>>
>> Were we just planning on instance_actions living forever in the database?
>>
>> Should we soft delete instance_actions when we delete the referenced
>> instance?
>>
>> Or should we (hard) delete instance_actions when we archive (move to
>> shadow tables) soft deleted instances?
>>
>> This is going to be a blocker to getting nova-manage db
>> archive_deleted_rows working.
>>
>> [1] https://review.openstack.org/#/c/246635/

instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it provides an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving an instance (soft deleted -> actually deleted) also archives
off its instance actions.

3. updating the instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).
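For illustration, step 2 could look roughly like the following at the database level. This is a hedged sketch, not the real nova DB API: the minimal schema, helper name and SQL are assumptions, and only the table names mirror nova's.

    # Illustration only; not the real nova DB API. Sketches step 2: when
    # soft-deleted instances are archived to shadow tables, move their
    # instance_actions rows along with them.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INTEGER DEFAULT 0);
        CREATE TABLE instance_actions (id INTEGER PRIMARY KEY, instance_uuid TEXT, action TEXT);
        CREATE TABLE shadow_instance_actions (id INTEGER PRIMARY KEY, instance_uuid TEXT, action TEXT);
    """)

    def archive_actions_for_deleted_instances(conn):
        """Copy instance_actions of soft-deleted instances to the shadow table, then purge them."""
        moved = conn.execute("""
            INSERT INTO shadow_instance_actions (id, instance_uuid, action)
            SELECT id, instance_uuid, action FROM instance_actions
            WHERE instance_uuid IN (SELECT uuid FROM instances WHERE deleted != 0)
        """).rowcount
        conn.execute("""
            DELETE FROM instance_actions
            WHERE instance_uuid IN (SELECT uuid FROM instances WHERE deleted != 0)
        """)
        conn.commit()
        return moved

    # Tiny usage example: one soft-deleted instance with an action, one live instance.
    conn.execute("INSERT INTO instances VALUES ('dead-uuid', 1), ('live-uuid', 0)")
    conn.execute("INSERT INTO instance_actions VALUES (1, 'dead-uuid', 'create'), (2, 'live-uuid', 'create')")
    print(archive_actions_for_deleted_instances(conn))  # -> 1
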

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Vitaly Kramskikh
+1 for "refuel" to trigger Fuel CI only, awesome idea. "recheck" will
trigger both.

2015-11-20 21:12 GMT+07:00 Sergey Vasilenko :

>
> On Fri, Nov 20, 2015 at 4:00 PM, Alexey Shtokolov wrote:
>
>> Probably we should use another keyword for Fuel CI to prevent an extra
>> load on the infrastructure? For example "refuel" or smth like this?
>
>
> IMHO we should have the ability to restart each of the two deployment tests
> individually. It often happens that one test passes but the other fails while the
> ENV is being set up. Restarting both tests in this case is not required.
>
>
> /sv
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.


Re: [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with stable/1.3

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Clark Boylan wrote:

> You need a mapping of some sort. How should devstack be configured for
> stable/X.Y? What about stable/Y.Z? This is one method of providing that
> mapping and it is very explicit. We can probably do better but we need
> to understand what the desired mapping is before encoding that into any
> tools.

AFAICT, Gnocchi supports any version of the components it leverages
(Keystone and Swift). We just want devstack to deploy the latest stable
version, whatever it is.
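For illustration, the explicit mapping Clark mentions could be as small as a per-project lookup table; a hypothetical sketch (the branch names are examples only, not anything devstack-gate ships):

    # Hypothetical, illustration only: an explicit map from a gnocchi stable
    # branch to the OpenStack branch its devstack jobs should deploy against.
    BRANCH_MAP = {
        "stable/1.3": "stable/liberty",
        "master": "master",
    }

    def devstack_branch_for(project_branch):
        # Fall back to master when the project branch is not listed,
        # i.e. "our stable/X.Y runs against whatever is current".
        return BRANCH_MAP.get(project_branch, "master")

    print(devstack_branch_for("stable/1.3"))  # -> stable/liberty
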

> If you have a very specific concrete set of services to be configured
> you could possibly ship your own features.yaml to only configure those
> things (rather than the default of an entire running cloud). This may
> help make the jobs run quicker too.

We'd love that, we really don't need an "entire cloud". :)

> Another approach would be to set the OVERRIDE_ZUUL_BRANCH to master and
> the OVERRIDE_${project}_PROJECT_BRANCH to ZUUL_BRANCH so that your
> project is always checked out against the correct branch for the change
> but is tested against master everything else. This is probably the
> simplest mapping (our stable/X.Y should run against whatever is
> current).

I didn't know that was possible; it's good to know. That might be a good
option, though it has the downside of eventually hitting bugs
in other projects' master branches. We already had our gate blocked for days
because we hit particular bugs in Keystone or Swift, and it took
days/weeks to fix and/or work around them.

Thanks for the insight Clark!

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Igor Belikov
Alexey,

First of all, “refuel” sounds very cool.
Thanks for raising this topic; I would like to hear more opinions here.
On one hand, a different keyword would help prevent unnecessary infrastructure 
load, I agree with you on that. On the other hand, using existing keywords 
helps avoid confusion and provides the expected behaviour for our CI jobs. Far 
too many times I’ve heard questions like “Why doesn’t ‘recheck’ retrigger Fuel 
CI jobs?”.

So I would like to hear more thoughts here from our developers. And I will 
investigate how other third-party CI systems handle this question.
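For what it's worth, a dedicated keyword would only need a slightly different comment-trigger pattern. A simplified, hypothetical sketch (not the exact regex used by Zuul or Fuel CI; see the global.yaml link quoted below for the real one):

    # Simplified illustration of matching a Gerrit comment trigger.
    import re

    RECHECK_RE = re.compile(r"^\s*(recheck|reverify)\s*$", re.IGNORECASE | re.MULTILINE)
    REFUEL_RE = re.compile(r"^\s*refuel\s*$", re.IGNORECASE | re.MULTILINE)

    def fuel_ci_should_retrigger(comment):
        # With a dedicated keyword, "refuel" retriggers only Fuel CI, while a
        # plain "recheck"/"reverify" keeps retriggering both CIs.
        return bool(REFUEL_RE.search(comment) or RECHECK_RE.search(comment))

    print(fuel_ci_should_retrigger("refuel"))   # True
    print(fuel_ci_should_retrigger("recheck"))  # True
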
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com






> On 20 Nov 2015, at 16:00, Alexey Shtokolov  wrote:
> 
> Igor,
> 
> Thank you for this feature.
> AFAIU recheck/reverify is mostly useful for internal CI-related failures. And 
> Fuel CI and OpenStack CI are two different infrastructures. 
> So if smth is broken on Fuel CI, "recheck" will restart all jobs on OpenStack 
> CI too. And the opposite case works the same way.
> 
> Probably we should use another keyword for Fuel CI to prevent an extra load 
> on the infrastructure? For example "refuel" or smth like this?
> 
> Best regards, 
> Alexey Shtokolov
> 
> 2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin:
> Igor,
> 
> it is much clearer to me now. Thank you :)
> 
> On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov wrote:
> Hi Stanislaw,
> 
> The reason behind this is simple - deployment tests are heavy. Each 
> deployment test occupies a whole server for ~2 hours, and for each commit we have 2 
> deployment tests (for the current fuel-library master), and that’s just because we 
> don’t test CentOS deployment for now.
> If we assume that developers will retrigger deployment tests only when a 
> retrigger would actually solve the failure, it’s still not smart in terms of 
> HW usage to retrigger both tests when only one has failed, for example.
> And there are cases when a retrigger just won’t do it and a CI engineer must 
> manually erase the existing environment on the slave or fix it by other means, so 
> it’s better when a CI engineer looks through the logs before each retrigger of a 
> deployment test.
> 
> Hope this answers your question.
> 
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
>> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin wrote:
>> 
>> Hi Igor,
>> 
>> would you be so kind as to tell why fuel-library deployment tests don't support 
>> this? Maybe there is a link to previous talks about it?
>> 
>> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov wrote:
>> Hi,
>> 
>> I’d like to inform you that all jobs running on Fuel CI (with the exception 
>> of fuel-library deployment tests) now support retriggering via “recheck” or 
>> “reverify” comments in Gerrit.
>> The exact regex is the same one used in OpenStack Infra's Zuul and can be found 
>> here: 
>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>>  
>> 
>> 
>> The CI team kindly asks you not to abuse this option; unfortunately, not every 
>> failure can be solved by retriggering.
>> And, to stress this once again: fuel-library deployment tests don’t 
>> support this, so you still have to ask for a retrigger in the #fuel-infra IRC 
>> channel.
>> 
>> Thanks for attention.
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com 

Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-11-20 Thread Somanchi Trinath
Hi-

As I understand it, you are not sure how to locate the hardware appliance 
which you have as your FW?

Am I right? If so, you can look into an approach like 
https://github.com/jumpojoy/generic_switch.

-
Trinath



From: Oguz Yarimtepe [mailto:oguzyarimt...@gmail.com]
Sent: Friday, November 20, 2015 5:52 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas 
driver writing

I created a sample driver by looking at the vArmour driver that is in the GitHub 
FWaaS repo. I am planning to call the FW's REST API from the suitable functions.

The problem is, I am still not sure how to locate the hardware appliance. One 
of the FWaaS guys says that service chaining can help; does anybody have an idea 
how to insert the FW into OpenStack?
On 11/02/2015 02:36 PM, Somanchi Trinath wrote:
Hi-

I'm confused. Do you really have a PoC implementation of what is to be 
achieved?

As I look into these types of implementations, I would prefer to have a proxy 
driver/plugin to get the configuration from OpenStack to the external 
controller/device and do the rest of the magic.

-
Trinath



Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Alexander Kostrikov
Hello, Igor.

>But I'd like to hear from QA how much we rely on container-based
infrastructure. Would it be hard to change our sys-tests in a short
time?

At first glance, system tests use Docker only to fetch logs and run
shell commands.
Also, Docker is used to run Rally.

If there is an effort to remove Docker containers with careful attention
to BVT testing, it would take a couple of days to fix the system tests.
But the time may be highly affected by code freezes and active feature merging.

The QA team is going to have a sync-up on Monday (Nov 23), and it will be possible to
get more exact information from the whole QA team then.

P.S.
+1 to remove docker.
-1 to remove docker without taking into account deadlines/other features.

On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky 
wrote:

> Hey guys,
>
> Despite the fact that I like containers (as a deployment unit), we don't really
> use them as such. That means I +1 the idea to drop containers, because I
> believe that it would
>
> * simplify a lot of things
> * help get rid of a huge amount of hacks
> * speed up master node deployment
> * release us from the annoying support of upgrades / rollbacks that proved
> to be non-working well
>
> But I'd like to hear from QA how much we rely on container-based
> infrastructure. Would it be hard to change our sys-tests in a short
> time?
>
> Thanks,
> Igor
>
>
> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin 
> wrote:
> > Folks
> >
> > I guess it should be pretty simple to roll back - install older version
> and
> > restore the backup with preservation of /var/log directory.
> >
> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> >  wrote:
> >>
> >> Hi,
> >>
> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn <
> mmoses...@mirantis.com>
> >> wrote:
> >>>
> >>> Vladimir,
> >>>
> >>> The old site.pp is long out of date and should just be recreated from
> the
> >>> content of all the other $service-only.pp files.
> >>>
> >>> My main question is how do we propose to do a rollback from an update
> (in
> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we hardcode
> persistent
> >>> data directories (or symlink them?) to
> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing behind the
> scenes
> >>> currently with Docker? If we keep that mechanism in place, all the
> existing
> >>> puppet modules can be used without any modifications. On the same note,
> >>> upgrade/rollback is the same as backup and restore, that means our
> restore
> >>> should follow a similar approach.
> >>> -Matthew
> >>
> >>
> >> The only idea I have is to do a dual-partitioning system. A similar
> >> approach is implemented in CoreOS.
> >>
> >>>
> >>>
> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya <
> bdobre...@mirantis.com>
> >>> wrote:
> 
>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
>  > Dear colleagues,
>  >
>  > As might remember, we introduced Docker containers on the master
> node
>  > a
>  > while ago when we implemented first version of Fuel upgrade feature.
>  > The
>  > motivation behind was to make it possible to rollback upgrade
> process
>  > if
>  > something goes wrong.
>  >
>  > Now we are at the point where we can not use our tarball based
> upgrade
>  > approach any more and those patches that deprecate upgrade tarball
> has
>  > been already merged. Although it is a matter of a separate
> discussion,
>  > it seems that upgrade process rather should be based on kind of
> backup
>  > and restore procedure. We can backup Fuel data on an external media,
>  > then we can install new version of Fuel from scratch and then it is
>  > assumed backed up Fuel data can be applied over this new Fuel
>  > instance.
> 
>  A side-by-side upgrade, correct? That should work as well.
> 
>  > The procedure itself is under active development, but it is clear
> that
>  > rollback in this case would be nothing more than just restoring from
>  > the
>  > previously backed up data.
>  >
>  > As for Docker containers, still there are potential advantages of
>  > using
>  > them on the Fuel master node, but our current implementation of the
>  > feature seems not mature enough to make us benefit from the
>  > containerization.
>  >
>  > At the same time there are some disadvantages like
>  >
>  >   * it is tricky to get logs and other information (for example, rpm
>  > -qa) for a service like shotgun which is run inside one of
>  > containers.
>  >   * it is specific UX when you first need to run dockerctl shell
>  > {container_name} and then you are able to debug something.
>  >   * when building IBP image we mount directory from the host file
>  > system
>  > into mcollective container to make image build faster.
>  >   * there are config files and some other files which should be
> shared
>  >  

Re: [openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Miguel Angel Ajo

Correct, thanks Moshe.

One of the first proposals is probably changing the periodicity of the 
meeting to every 2 weeks instead of every week. We could vote on that by the 
end of the meeting, depending on how things go. And of course, we could 
change it back to weekly later in the cycle as necessary.



Moshe Levi wrote:

Just to add more details about the when and where  :)
We will have a weekly meeting on Wednesday at 1400 UTC in #openstack-meeting-3
http://eavesdrop.openstack.org/#Neutron_QoS_Meeting

Thanks,
Moshe Levi.


-Original Message-
From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
Sent: Friday, November 20, 2015 12:08 PM
To: Miguel Angel Ajo
Cc: OpenStack Development Mailing List (not for usage questions); victor.r.how...@gmail.com;
irenab@gmail.com; Moshe Levi; Vikram
Choudhary; Gal Sagie; Haim
Daniel
Subject: Re: [neutron] [QoS] meeting rebooted

Miguel Angel Ajo  wrote:


Hi everybody,

  We're restarting the QoS meeting for next week,

  Here are the details, and a preliminary agenda,

   https://etherpad.openstack.org/p/qos-mitaka


   Let's keep QoS moving!,

Best,
Miguel Ángel.

I think you'd better give an idea of when/where it is restarted. :)

Ihar





Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Sergey Vasilenko
On Fri, Nov 20, 2015 at 4:00 PM, Alexey Shtokolov 
wrote:

> Probably we should use another keyword for Fuel CI to prevent an extra
> load on the infrastructure? For example "refuel" or smth like this?


IMHO we should have the ability to restart each of the two deployment tests
individually. It often happens that one test passes but the other fails while the
ENV is being set up. Restarting both tests in this case is not required.


/sv


Re: [openstack-dev] [nova] build_instance pre hook cannot set injected_files for new instance

2015-11-20 Thread Rich Megginson

On 11/19/2015 10:34 AM, Rich Megginson wrote:
I have some code that uses the build_instance pre hook to set 
injected_files in the new instance.  With the kilo code, the argv[7] 
was passed as [] - so I could append/extend this value to add more 
injected_files.  With the latest code, this is passed as None, so I 
can't set it.  How can I pass injected_files in a build_instance pre 
hook with the latest code/liberty? 


I have filed bug https://bugs.launchpad.net/nova/+bug/1518321 to track 
this issue.
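For reference, a minimal sketch of the kind of pre hook being described, assuming the positional-argument layout mentioned above (argv[7] carrying injected_files); the class name, file path and contents are hypothetical, and this is not verified against current nova code:

    # Illustration only: a build_instance pre-hook that adds injected files,
    # coping with the case where nova now passes None instead of [].
    # The argv index (7) follows the description above and is an assumption.

    class InjectFilesHook(object):
        def pre(self, *args, **kwargs):
            injected_files = args[7] if len(args) > 7 else None
            if injected_files is None:
                # This is the problem described above: with None there is
                # nothing mutable to extend, so the hook cannot add files
                # unless nova builds the list for us.
                return
            injected_files.append(("/etc/motd", "provisioned by hook\n"))
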


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi Andrey,

As far as I remember from the last usage of fuel master node, there was
> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
> hard to launch some application on fuel node without docker (image with
> py27/py3). Are you planning to provide py27 at least or my note is outdated
> and I can already use py27 from the box?

We can install Docker on the master node anyway to run Rally / Tempest or other
test suites and scripts from the master node with Python 2.7 or whatever else is needed.

On Fri, Nov 20, 2015 at 5:20 PM, Andrey Kurilin 
wrote:

> Hi!
> I'm not fuel developer, so opinion below is based on user-view.
> As far as I remember from the last usage of fuel master node, there was
> Centos + py26 installation. Python 2.6 is old enough and sometimes it is
> hard to launch some application on fuel node without docker (image with
> py27/py3). Are you planning to provide py27 at least or my note is outdated
> and I can already use py27 from the box?
>
> On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> As might remember, we introduced Docker containers on the master node a
>> while ago when we implemented first version of Fuel upgrade feature. The
>> motivation behind was to make it possible to rollback upgrade process if
>> something goes wrong.
>>
>> Now we are at the point where we can not use our tarball based upgrade
>> approach any more and those patches that deprecate upgrade tarball has been
>> already merged. Although it is a matter of a separate discussion, it seems
>> that upgrade process rather should be based on kind of backup and restore
>> procedure. We can backup Fuel data on an external media, then we can
>> install new version of Fuel from scratch and then it is assumed backed up
>> Fuel data can be applied over this new Fuel instance. The procedure itself
>> is under active development, but it is clear that rollback in this case
>> would be nothing more than just restoring from the previously backed up
>> data.
>>
>> As for Docker containers, still there are potential advantages of using
>> them on the Fuel master node, but our current implementation of the feature
>> seems not mature enough to make us benefit from the containerization.
>>
>> At the same time there are some disadvantages like
>>
>>- it is tricky to get logs and other information (for example, rpm
>>-qa) for a service like shotgun which is run inside one of containers.
>>- it is specific UX when you first need to run dockerctl shell
>>{container_name} and then you are able to debug something.
>>- when building IBP image we mount directory from the host file
>>system into mcollective container to make image build faster.
>>- there are config files and some other files which should be shared
>>among containers which introduces unnecessary complexity to the whole
>>system.
>>- our current delivery approach assumes we wrap into rpm/deb packages
>>every single piece of the Fuel system. Docker images are not an exception.
>>And as far as they depend on other rpm packages we forced to build
>>docker-images rpm package using kind of specific build flow. Besides this
>>package is quite big (300M).
>>- I'd like it to be possible to install Fuel not from ISO but from
>>RPM repo on any rpm based distribution. But it is double work to support
>>both Docker based and package based approach.
>>
>> Probably some of you can give other examples. Anyway, the idea is to get
>> rid of Docker containers on the master node and switch to plane package
>> based approach that we used before.
>>
>> As far as there is nothing new here, we just need to use our old site.pp
>> (with minimal modifications), it looks like it is possible to implement
>> this during 8.0 release cycle. If there are no principal objections, please
>> give me a chance to do this ASAP (during 8.0), I know it is a huge risk for
>> the release, but still I think I can do this.
>>
>>
>>
>>
>> Vladimir Kozhukalov
>>
>>
>
>
> --
> Best regards,
> Andrey Kurilin.
>
>


-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc

Re: [openstack-dev] How to add a periodic check for typos?

2015-11-20 Thread Amrith Kumar
So, just for grins, I took this approach out for a spin on Trove and noticed 
this as part of the change proposed by topy.

-   "hebrew": ["hebrew_general_ci", "hebrew_bin"],
+   "Hebrew": ["hebrew_general_ci", "hebrew_bin"],

-   "greek": ["greek_general_ci", "greek_bin"],
+   "Greek": ["greek_general_ci", "greek_bin"],

In this particular case the change is being proposed in something that is a set 
of collation-sequence names, and while "Hebrew" is the correct capitalization in 
the English language, what we need in this case is "hebrew". Similarly for Greek 
and greek. 

If there were some way to specify a set of "required typos" I would be OK 
running this manually before I checked in code. I'm not sure I'd like it either 
as a tox or a hacking rule though because the pain may outweigh the gain.
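A hedged sketch of what a "required typos" whitelist could look like if the check were run manually; topy has no such option as far as I know, and the helper below is hypothetical:

    # Hypothetical helper: drop spelling "fixes" whose original form is on a
    # per-project whitelist (e.g. MySQL collation names like "hebrew", "greek").
    REQUIRED_TYPOS = {"hebrew", "greek"}

    def filter_fixes(proposed_fixes):
        """proposed_fixes: iterable of (original_word, suggested_word) pairs."""
        kept = []
        for original, suggestion in proposed_fixes:
            if original.lower() in REQUIRED_TYPOS:
                continue  # intentionally "misspelled"; keep it as-is
            kept.append((original, suggestion))
        return kept

    print(filter_fixes([("hebrew", "Hebrew"), ("teh", "the")]))  # [('teh', 'the')]
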

-amrith


> -Original Message-
> From: Gareth [mailto:academicgar...@gmail.com]
> Sent: Thursday, November 19, 2015 10:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] How to add a periodic check for typos?
> 
> Just talking about the idea of auto-spelling-fix.
> 
> My example patch https://review.openstack.org/#/c/247261/ doesn't work.
> Topy fixes some things and breaks others. So for now it is only okay to do an
> auto spelling check, not an auto fix :(
> 
> 
> 
> On Fri, Nov 20, 2015 at 6:31 AM, Matt Riedemann
>  wrote:
> >
> >
> > On 11/18/2015 8:00 PM, Gareth wrote:
> >>
> >> Hi stacker,
> >>
> >> We could use some 3rd-party tools like topy:
> >>  pip install topy
> >>  topy -a 
> >>  git commit & git review
> >>
> >> Here is an example: https://review.openstack.org/#/c/247261/
> >>
> >> Could we have a periodic job, like the Jenkins user that updates our
> >> requirements.txt?
> >>
> >
> > Are you asking for all projects, or just a specific project that you work on
> > and forgot to tag in the subject line?
> >
> > I wouldn't have a bot doing this, if I were going to do it (which I
> > wouldn't for nova). You could have it built into your pep8 job, or a
> > separate job that is voting on your project, if you really care about 
> > spelling.
> >
> > --
> >
> > Thanks,
> >
> > Matt Riedemann
> >
> >
> >
> >
> 
> 
> --
> Gareth
> 
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode My promise: if you find any
> spelling or grammar mistakes in my email from Mar 1 2013, notify me and I'll
> donate $1 or ¥1 to an open organization you specify.
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi team,

I think it is too late to make such significant changes for MOS 8.0 now, but
I'm OK with the idea of removing the Docker containers in future releases if
our dev team wants to do this.
Anyway, before we do this, we need to plan how we will perform
updates between different releases with and without Docker containers, how
we will manage requirements, etc. In fact we have a lot of questions and
no answers; let's prepare a spec for this change, review it, discuss
it with developers, users and the project management team, and if we have no
requirements to keep Docker containers on the master node, let's remove them
in a future release (not in MOS 8.0).

Of course, we can fix the BVT / SWARM tests so that they don't use Docker images
in our test suite (it shouldn't be really hard), but we didn't plan these changes
and in fact they can affect our estimates for many tasks.

Thank you!


On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov <
akostri...@mirantis.com> wrote:

> Hello, Igor.
>
> >But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
>
> At first glance, system tests are using docker only to fetch logs and run
> shell commands.
> Also, docker is used to run Rally.
>
> If there is an action to remove docker containers with carefull attention
> to bvt testing, it would take couple days to fix system tests.
> But time may be highly affected by code freezes and active features
> merging.
>
> QA team is going to have Monday (Nov 23) sync-up - and it is possible to
> get more exact information from all QA-team.
>
> P.S.
> +1 to remove docker.
> -1 to remove docker without taking into account deadlines/other features.
>
> On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky 
> wrote:
>
>> Hey guys,
>>
>> Despite the fact I like containers (as deployment unit), we don't use
>> them so. That means I +1 idea to drop containers, just because I
>> believe that would
>>
>> * simplify a lot of things
>> * helps get rid of huge amount of hacks
>> * increase master node deployment
>> * release us from annoying support of upgrades / rollbacks that proved
>> to be non-working well
>>
>> But I'd like to hear from QA how do we rely on container-based
>> infrastructure? Would it be hard to change our sys-tests in short
>> time?
>>
>> Thanks,
>> Igor
>>
>>
>> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin 
>> wrote:
>> > Folks
>> >
>> > I guess it should be pretty simple to roll back - install older version
>> and
>> > restore the backup with preservation of /var/log directory.
>> >
>> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
>> >  wrote:
>> >>
>> >> Hi,
>> >>
>> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn <
>> mmoses...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Vladimir,
>> >>>
>> >>> The old site.pp is long out of date and should just be recreated from
>> the
>> >>> content of all the other $service-only.pp files.
>> >>>
>> >>> My main question is how do we propose to do a rollback from an update
>> (in
>> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we hardcode
>> persistent
>> >>> data directories (or symlink them?) to
>> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing behind the
>> scenes
>> >>> currently with Docker? If we keep that mechanism in place, all the
>> existing
>> >>> puppet modules can be used without any modifications. On the same
>> note,
>> >>> upgrade/rollback is the same as backup and restore, that means our
>> restore
>> >>> should follow a similar approach.
>> >>> -Matthew
>> >>
>> >>
>> >> There only one idea I have is to do dual partitioning system. The
>> similar
>> >> approach is implemented in CoreOS.
>> >>
>> >>>
>> >>>
>> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya <
>> bdobre...@mirantis.com>
>> >>> wrote:
>> 
>>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
>>  > Dear colleagues,
>>  >
>>  > As might remember, we introduced Docker containers on the master
>> node
>>  > a
>>  > while ago when we implemented first version of Fuel upgrade
>> feature.
>>  > The
>>  > motivation behind was to make it possible to rollback upgrade
>> process
>>  > if
>>  > something goes wrong.
>>  >
>>  > Now we are at the point where we can not use our tarball based
>> upgrade
>>  > approach any more and those patches that deprecate upgrade tarball
>> has
>>  > been already merged. Although it is a matter of a separate
>> discussion,
>>  > it seems that upgrade process rather should be based on kind of
>> backup
>>  > and restore procedure. We can backup Fuel data on an external
>> media,
>>  > then we can install new version of Fuel from scratch and then it is
>>  > assumed backed up Fuel data can be applied over this new Fuel
>>  > instance.
>> 
>>  A side-by-side upgrade, correct? That should work as 

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Andrey Kurilin
Hi!
I'm not a Fuel developer, so the opinion below is based on a user's view.
As far as I remember from the last usage of the Fuel master node, it was a
CentOS + py26 installation. Python 2.6 is old enough, and sometimes it is
hard to launch an application on the Fuel node without Docker (an image with
py27/py3). Are you planning to provide py27 at least, or is my note outdated
and can I already use py27 out of the box?

On Thu, Nov 19, 2015 at 4:59 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> As you might remember, we introduced Docker containers on the master node a
> while ago when we implemented first version of Fuel upgrade feature. The
> motivation behind was to make it possible to rollback upgrade process if
> something goes wrong.
>
> Now we are at the point where we can no longer use our tarball-based upgrade
> approach, and those patches that deprecate the upgrade tarball have already
> been merged. Although it is a matter of a separate discussion, it seems
> that upgrade process rather should be based on kind of backup and restore
> procedure. We can backup Fuel data on an external media, then we can
> install new version of Fuel from scratch and then it is assumed backed up
> Fuel data can be applied over this new Fuel instance. The procedure itself
> is under active development, but it is clear that rollback in this case
> would be nothing more than just restoring from the previously backed up
> data.
>
> As for Docker containers, there are still potential advantages to using
> them on the Fuel master node, but our current implementation of the feature
> does not seem mature enough for us to benefit from the containerization.
>
> At the same time there are some disadvantages like
>
> - it is tricky to get logs and other information (for example, rpm -qa)
>   for a service like shotgun which is run inside one of the containers.
> - it is a peculiar UX when you first need to run dockerctl shell
>   {container_name} before you are able to debug something.
> - when building the IBP image, we mount a directory from the host file system
>   into the mcollective container to make the image build faster.
> - there are config files and some other files which should be shared
>   among containers, which introduces unnecessary complexity to the whole
>   system.
> - our current delivery approach assumes we wrap every single piece of the
>   Fuel system into rpm/deb packages, and Docker images are not an exception.
>   Since they depend on other rpm packages, we are forced to build the
>   docker-images rpm package using a rather specific build flow. Besides,
>   this package is quite big (300M).
> - I'd like it to be possible to install Fuel not from the ISO but from an RPM
>   repo on any rpm-based distribution, but it is double the work to support
>   both the Docker-based and the package-based approach.
>
> Probably some of you can give other examples. Anyway, the idea is to get
> rid of Docker containers on the master node and switch to the plain
> package-based approach that we used before.
>
> Since there is nothing new here - we just need to use our old site.pp
> (with minimal modifications) - it looks like it is possible to implement
> this during the 8.0 release cycle. If there are no principal objections, please
> give me a chance to do this ASAP (during 8.0); I know it is a huge risk for
> the release, but I still think I can do this.
>
>
>
>
> Vladimir Kozhukalov
>


-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [Fuel] Number of IP addresses in a public network

2015-11-20 Thread Aleksey Kasatkin
We have a more generic ticket: https://bugs.launchpad.net/fuel/+bug/1354803
and corresponding CR: https://review.openstack.org/#/c/245941/

Aleksey Kasatkin


On Fri, Nov 20, 2015 at 11:24 AM, Aleksey Kasatkin 
wrote:

> It's not about public networks only; there can be the same problem with
> other networks as well.
> All the networks (across all node groups) need to be checked,
> but it is done just for the public network now (and VIPs for plugins are not
> taken into account).
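A small illustrative sketch of the kind of check (and corrected error message) being discussed: counting the required addresses (nodes plus VIPs) against the configured ranges. This is not Nailgun code; the function name and message format are assumptions.

    # Illustrative only, not Nailgun code: verify that the configured IP ranges
    # can fit every node plus every VIP, and report the correct shortfall.
    import ipaddress

    def check_ranges(ranges, node_count, vip_count):
        """ranges: list of (first_ip, last_ip) strings."""
        available = sum(
            int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1
            for first, last in ranges
        )
        required = node_count + vip_count
        if available < required:
            raise ValueError(
                "Not enough IP addresses: %d required (including %d VIPs), "
                "only %d available in the configured ranges."
                % (required, vip_count, available))
        return available

    try:
        check_ranges([("10.0.0.2", "10.0.0.9")], node_count=6, vip_count=3)
    except ValueError as exc:
        print(exc)  # 9 required (including 3 VIPs), only 8 available
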
>
>
> Aleksey Kasatkin
>
>
> On Fri, Nov 20, 2015 at 12:04 AM, Andrew Woodward 
> wrote:
>
>> The high value of the bug here reflects that the error message is wrong.
>> From a UX side we could maybe even justify this as Critical. The error
>> message must reflect the correct quantity of addresses required.
>>
>>
>> On Tue, Nov 17, 2015 at 1:31 PM Roman Prykhodchenko 
>> wrote:
>>
>>> Folks, we should resurrect this thread and find a consensus.
>>>
>>> On Sep 1, 2015, at 15:00, Andrey Danin wrote:
>>>
>>>
>>> +1 to Igor.
>>>
>>> It's definitely not a High bug. The biggest problem I see here is a
>>> confusing error message with a wrong number of required IPs. AFAIU we
>>> cannot fix it easily now so let's postpone it to 8.0 but change a message
>>> itself [0] in 7.0.
>>>
>> We managed to create an error that reports '7' when there are 8
>> available but 9 are required; at some level we knew that we came up short,
>> or we'd just have some lower-level error caught here.
>>
>>>
>>> [0]
>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/task/task.py#L1160-L1163
>>>
>>> On Tue, Sep 1, 2015 at 1:39 PM, Igor Kalnitsky 
>>> wrote:
>>>
 Hello,

 My 5 cents on it.

 I don't think it's really a High or Critical bug for 7.0. If there are
 not enough IPs, the CheckBeforeDeploymentTask will fail. And that's
 actually OK; it may fail for a different reason without starting the actual
 deployment (sending the message to Astute).

 But I agree it's kind of strange that we don't check IPs during the network
 verification step. The right fix in my opinion is to move this check
 into the network checker (and perhaps keep it here as well), but that
 definitely shouldn't be done in 7.0.

 Thanks,
 Igor


 On Mon, Aug 31, 2015 at 2:54 PM, Roman Prykhodchenko 
 wrote:
 > Hi folks!
 >
 > Recently a problem was reported that the network check does not tell whether there
 are enough IP addresses in a public network [1]. That check is
 performed by the CheckBeforeDeployment task, but there are two problems that
 happen because this verification is done that late:
 >
 >  - A deployment fails if there are not enough addresses in the specified
 ranges
 >  - If a user wants to get the network configuration, they will get an error
 >
 > The solution for these problems seems easy, and a straightforward
 patch [2] was proposed. However, there is a hidden problem which that
 patch does not address: installed plugins may reserve VIPs for
 their needs. The issue is that they do it just before deployment, and so
 it's not possible to get those reservations when a user wants to check
 their network setup.
 >
 > The important issue we have to address here is that network
 configuration generator will fail, if specified ranges don’t fit all VIPs.
 There were several proposals to fix that, I’d like to highlight two of 
 them:
 >
 >  a) Allow VIPs to not have an IP address assigned when the network config
 generator works for API output.
 >  That will prevent GET requests from failing, but since IP
 addresses for VIPs are required, the generator will still have to fail if it
 generates a configuration for the orchestrator.
 >  b) Add a release note that users have to calculate IP addresses
 manually and put in sane ranges in order not to shoot themselves in the foot. Then
 it's also possible to change the network verification output to remind users to
 check the ranges before starting a deployment.
 >
 > In my opinion we cannot follow (a) because it only masks a problem
 instead of providing a fix. Also it requires changing the API, which is not
 a good thing to do after the SCF. If we choose (b), then we can work on a
 firm solution in 8.0 and fix the problem for real.
 >
 >
 > P. S. We can still merge [2], because it checks, if IP ranges can at
 least fit the basic configuration. If you agree, I will update it soon.
 >
 > [1] https://bugs.launchpad.net/fuel/+bug/1487996
 > [2] https://review.openstack.org/#/c/217267/
 >
 >
 >
 > - romcheg

Re: [openstack-dev] [release][stable] OpenStack 2014.2.4 (juno)

2015-11-20 Thread Alan Pevec
2015-11-20 3:22 GMT+01:00 Davanum Srinivas :
> fyi https://review.openstack.org/#/c/247677/

That's not the right answer to Rochelle's plea :)
It was actually already answered by Matt, with a suggestion that the
_Kilo_ grenade job could simply check out the 2014.2.4 tag instead of
stable/juno, and for that we don't need to keep branches around.

Cheers,
Alan



Re: [openstack-dev] [neutron][taas] proposal: dedicated tunnel for carrying mirrored traffic

2015-11-20 Thread SUZUKI, Kazuhiro
Hi,

Thank you for your interest and suggestion.

A blueprint [1] has already been proposed to add port-mirroring
capabilities to Neutron.

[1] https://blueprints.launchpad.net/neutron/+spec/port-mirroring

Because our proposal is for the current design of TaaS
(tap-as-a-service), I guess an RFE is not required at this time.

Thank you.

--
Kazuhiro Suzuki


From: Li Ma 
Subject: Re: [openstack-dev] [neutron][taas] proposal: dedicated tunnel for 
carrying mirrored traffic
Date: Thu, 19 Nov 2015 11:51:15 +0800

> It is suggested that you issue an RFE request for it. [1] We can
> discuss it and track the progress in Launchpad.
> 
> By the way, I'm very interested in it. I discussed another problem
> with Huawei Neutron engineers about abstracting the VTEP to a Neutron port. It
> would allow managing VTEPs and can provide flexibility in many aspects,
> as more and more Neutron features need VTEP management, just like your
> proposal.
> 
> [1] http://docs.openstack.org/developer/neutron/policies/blueprints.html
> 
> On Thu, Nov 19, 2015 at 11:36 AM, Soichi Shigeta
>  wrote:
>>
>>  Hi,
>>
>> As we decided in the last weekly meeting,
>>   I'd like to use this mailing list to discuss
>>   a proposal about creating a dedicated tunnel for
>>   carrying mirrored traffic between hosts.
>>
>>   link:
>> https://wiki.openstack.org/w/images/7/78/TrafficIsolation_20151116-01.pdf
>>
>>   Best Regards,
>>   Soichi Shigeta
>>
>>
>>
> 
> 
> 
> -- 
> 
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com
> 




Re: [openstack-dev] OpenStack-Announce List

2015-11-20 Thread Dean Troyer
On Fri, Nov 20, 2015 at 4:41 AM, Thierry Carrez 
wrote:

> We could definitely go back to "the place users wanting to keep up with
> upstream news directly affecting them should subscribe to", and post only:
>
> - user-facing service releases (type:service deliverables), on stable
> branches or development branches
> - security vulnerabilities and security notes
> - weekly upstream development news (the one Mike compiles), and include
> a digest of all library/ancillary services releases of the week in there
>
> Library releases are not "directly affecting users", so not urgent news
> for "users wanting to keep up with upstream news" and can wait to be
> mentioned in the weekly digest.
>

Matthieu mentioned the following a bit later:

> I however like being informed of projects and clients releases.
> They don't happen often and they are interesting to me as an
> operator (projects) and consumer of the OpenStack API (project
>  clients).

Libraries do not directly affect users, but clients do: python-*client,
OSC and the like.  I'd be on the fence regarding the SDK (when it becomes
official), but it is intended for downstream consumers - mostly app devs
rather than end users and operators.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-11-20 Thread Oguz Yarimtepe
I created a sample driver by looking at the vArmour driver that is in the 
GitHub FWaaS repo. I am planning to call the FW's REST API from the 
suitable functions.


The problem is, I am still not sure how to locate the hardware 
appliance. One of the FWaaS guys says that service chaining can help; does 
anybody have an idea how to insert the FW into OpenStack?


On 11/02/2015 02:36 PM, Somanchi Trinath wrote:


Hi-

I'm confused. Do you really have a PoC implementation of what is to 
be achieved?


As I look into these types of implementations, I would prefer to have a 
proxy driver/plugin to get the configuration from OpenStack to the 
external controller/device and do the rest of the magic.


-

Trinath





Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Alan Pevec
> So we were brainstorming this with Rocky the other night. Would this be 
> possible to do by doing the following:
> 1) we still tag Juno EOL in a few days' time
> 2) we do not remove the stable/juno branch

Why not?

> 3) we run periodic grenade jobs for kilo

From a quick look, grenade should work with a juno-eol tag instead of
stable/juno, it's just a git reference.
"Zombie" Juno->Kilo grenade job would need to set BASE_DEVSTACK_BRANCH=juno-eol
and for devstack all $PROJECT_BRANCH=juno-eol (or 2014.2.4, which should be
the same commit).
Maybe I'm missing some corner case in devstack where stable/* is
assumed but if so that should be fixed anyway.
Leaving the branch around sends a bad message; it implies there is support for
it, while there is not.

Cheers,
Alan



Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-20 Thread Kekane, Abhishek


-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: 20 November 2015 16:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

Abhishek,

Go for it!

Thank you Dims, I am on it!!

Abhishek

On Fri, Nov 20, 2015 at 2:32 AM, Kekane, Abhishek  
wrote:
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: 16 November 2015 21:46
> To: openstack-dev
> Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into 
> oslo.utils
>
> Excerpts from Kekane, Abhishek's message of 2015-11-16 07:33:48 +:
>> Hi,
>>
>> As apiclient is now removed from oslo-incubator, to proceed with 
>> request-id spec [1] I have two options in mind,
>>
>>
>> 1.   Use keystoneauth1 + cliff in all python-clients (add request-id 
>> support in cliff library)
>
> cliff is being used outside of OpenStack, and is not at all related to REST 
> API access, so I don't think that's the right place.
>
>>
>> 2.   apiclient code is available in all python-*clients, modify this 
>> code in individual clients and add support to return request-id.
>
> Yes, I think that makes sense.
>
> Hi Devs,
>
> As mentioned by Doug, I will start pushing patches for 
> python-cinderclient, python-glanceclient and python-novaclient from next week 
> which include changes for returning the request-id to the caller.
> Please let me know if you have any suggestions on the same.
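A hedged sketch of one way a client could hand the request id back to the caller without changing existing return types; the wrapper class and header handling below are illustrative assumptions, not the design from the spec:

    # Illustration only: wrap list results so callers can read the request id
    # returned by the API (commonly exposed via the x-openstack-request-id or
    # x-compute-request-id header) without changing the list semantics.

    class ListWithRequestIds(list):
        def __init__(self, items, response_headers):
            super(ListWithRequestIds, self).__init__(items)
            self.request_ids = [
                response_headers.get("x-openstack-request-id")
                or response_headers.get("x-compute-request-id")
            ]

    # Example: a client's list() call could return this wrapper instead of a plain list.
    servers = ListWithRequestIds(["server-a", "server-b"],
                                 {"x-compute-request-id": "req-1234"})
    print(servers, servers.request_ids)  # ['server-a', 'server-b'] ['req-1234']
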
>
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/156508/
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
>>
>> > On Nov 11, 2015, at 3:54 AM, Andrey Kurilin wrote:
>>
>> >
>>
>> >
>>
>> >
>>
>> > On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague wrote:
>>
>> > On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
>>
>> > >>It was also proposed to reuse openstackclient or the openstack SDK.
>>
>> > >
>>
>> > > Openstack SDK was proposed a long time ago(it looks like it was 
>> > > several
>>
>> > > cycles ago) as "alternative" for cliutils and apiclient, but I 
>> > > don't
>>
>> > > know any client which use it yet. Maybe openstacksdk cores should 
>> > > try to
>>
>> > > port any client as an example of how their project should be used.
>>
>> >
>>
>> > The SDK is targeted for end user applications, not service clients.
>> > I do
>>
>> > get there was lots of confusion over this, but SDK is not the 
>> > answer
>>
>> > here for service clients.
>>
>> >
>>
>> > Ok, thanks for explanation, but there is another question in my head: If 
>> > openstacksdk is not for python-*clients, why apiclient(which is actually 
>> > used by python-*clients) was marked as deprecated due to openstacksdk?
>>
>>
>>
>> The Oslo team wanted to deprecate the API client code because it wasn't 
>> being maintained. We thought at the time we did so that the SDK would 
>> replace the clients, but discussions since that time have changed direction.
>>
>> >
>>
>> > The service clients are *always* going to have to exist in some form.
>>
>> > Either as libraries that services produce, or by services deciding 
>> > they
>>
>> > don't want to consume the libraries of other clients and just put a
>>
>> > targeted bit of rest code in their own tree to talk to other services.
>>
>> >
>>
>> > -Sean
>>
>> >
>>
>> > --
>>
>> > Sean Dague
>>
>> > http://dague.net 
>>
>> >
>>
>> > ___
>> > _
>> > __
>>
>> > OpenStack Development Mailing List (not for usage questions)
>>
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > 
>>
>> >
>>
>> >
>>
>> >
>>
>> > --
>>
>> > Best regards,
>>
>> > Andrey Kurilin.
>>
>> > ___
>> > _
>> > __
>>
>> > OpenStack Development Mailing List (not for usage questions)
>>
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > 
>>

[openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Miguel Angel Ajo


   Hi everybody,

 We're restarting the QoS meeting for next week,

 Here are the details, and a preliminary agenda,

  https://etherpad.openstack.org/p/qos-mitaka


  Let's keep QoS moving!,

Best,
Miguel Ángel.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Ihar Hrachyshka

Miguel Angel Ajo  wrote:



   Hi everybody,

 We're restarting the QoS meeting for next week,

 Here are the details, and a preliminary agenda,

  https://etherpad.openstack.org/p/qos-mitaka


  Let's keep QoS moving!,

Best,
Miguel Ángel.


I think you'd better give an idea of when/where it is restarted. :)

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] proposal: dedicated tunnel for carrying mirrored traffic

2015-11-20 Thread Soichi Shigeta


 Thank you for your interest.

 The (dedicated) tunnel is used only to carry mirrored packets.
 The time stamp and order of a mirrored packet are the same as
 those of the original packet.

 We may need to consider the issue you pointed out, but I
 think it's independent of whether a dedicated tunnel is
 used or not.



Regarding tunnel for that. How do you ensure packet timestamps and
ordering?

Endre Karlson
On 19 Nov 2015, 4:55 a.m., "Li Ma" wrote:


It is suggested that you can issue a RFE request for it. [1] We can
discuss with it and track the progress in the launchpad.

By the way, I'm very interested in it. I discussed a another problem
with Huawei neutron engineers about abstract VTEP to neutron port. It
allows managing VTEP and can leverage the flexibility in many aspects
as more and more neutron features need VTEP management, just like your
proposal.

[1] http://docs.openstack.org/developer/neutron/policies/blueprints.html

On Thu, Nov 19, 2015 at 11:36 AM, Soichi Shigeta
 wrote:


  Hi,

 As we decided in the last weekly meeting,
   I'd like to use this mailing list to discuss
   a proposal about creating dedicated tunnel for
   carrying mirrored traffic between hosts.

   link:


https://wiki.openstack.org/w/images/7/78/TrafficIsolation_20151116-01.pdf


   Best Regards,
   Soichi Shigeta





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Igor Belikov
Hi Stanislaw,

The reason behind this is simple - deployment tests are heavy. Each deployment 
test occupies a whole server for ~2 hours, and for each commit we have 2 deployment 
tests (for the current fuel-library master), and that’s just because we don’t test 
CentOS deployment for now.
Even if we assume that developers will retrigger deployment tests only when a 
retrigger would actually solve the failure, it’s still not smart in terms of 
HW usage to retrigger both tests when only one has failed, for example.
And there are cases when a retrigger just won’t do it and a CI engineer must 
manually erase the existing environment on the slave or fix it by other means, so 
it’s better when a CI engineer looks through the logs before each retrigger of a 
deployment test.

Hope this answers your question.

--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com

> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin  wrote:
> 
> Hi Igor,
> 
> would you be so kind tell, why fuel-library deployment tests doesn't support 
> this? Maybe there is a link with previous talks about it?
> 
> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov  > wrote:
> Hi,
> 
> I’d like to inform you that all jobs running on Fuel CI (with the exception 
> of fuel-library deployment tests) now support retriggering via “recheck” or 
> “reverify” comments in Gerrit.
> Exact regex is the same one used in Openstack-Infra’s zuul and can be found 
> here 
> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>  
> 
> 
> CI-Team kindly asks you to not abuse this option, unfortunately not every 
> failure could be solved by retriggering.
> And, to stress this out once again: fuel-library deployment tests don’t 
> support this, so you still have to ask for a retrigger in #fuel-infra irc 
> channel.
> 
> Thanks for attention.
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com 
> 
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Daniel P. Berrange
On Fri, Nov 20, 2015 at 03:22:04AM +, Li, Xiaoyan wrote:
> Hi all,
> 
> To fix bug [1][2] in Cinder, Cinder needs to use nova/volume/encryptors[3]
> to attach/detach encrypted volumes. 
> 
> To decrease the code duplication, I raised a BP[4] to move encryptors to
> os-brick[5].
> 
> Once it is done, Nova needs to update to use the common library. This
> is BP raised. [6]

You need to proposal a more detailed spec for this, not merely a blueprint
as there are going to be significant discussion points here.

In particular for the QEMU/KVM nova driver, this proposal is not really
moving in a direction that is aligned with our long term desire/plan for
volume encryption and/or storage management in Nova with KVM.  While we
currently use dm-crypt with volumes that are backed by block devices,
this is not something we wish to use long term. Increasingly the storage
used is network based, and while we use in-kernel network clients for
iSCSI/NFS, we use an in-QEMU client for RBD/Gluster storage. QEMU also
has support for in-QEMU clients for iSCSI/NFS and it is likely we'll use
them in Nova in future too.

Now encryption throws a (small) spanner in the works as the only way to
access encrypted data right now is via dm-crypt, which obviously doesn't
fly when there's no kernel block device to attach it to. Hence we are
working in enhancement to QEMU to let it natively handle LUKS format
volumes. At which point we'll stop using dm-crypt for for anything and
do it all in QEMU.

Nova currently decides whether it wants to use the in-kernel network
client, or an in-QEMU network client for the various network backed
storage drivers. If os-brick takes over encryption setup with dm-crypt,
then it would potentially be taking the decision away from Nova about
whether to use in-kernel or in-QEMU clients, which is not desirable.
Nova must retain control over which configuration approach is best
for the hypervisor it is using.
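
To make the concern concrete, here is a purely hypothetical sketch of the kind
of decision Nova has to keep making itself; none of these names exist in the
real driver code, it only illustrates the argument above:

    # Purely illustrative pseudologic -- not the real nova libvirt driver code.
    def choose_encryption_backend(qemu_supports_luks, volume_has_block_device):
        """Pick how QEMU/KVM should consume an encrypted volume."""
        if qemu_supports_luks:
            # Long term: QEMU opens the LUKS volume natively, so no kernel
            # block device or dm-crypt mapping is needed at all.
            return "qemu-native-luks"
        if volume_has_block_device:
            # Today: layer a dm-crypt mapping over the kernel block device
            # and point QEMU at the clear-text mapper device.
            return "dm-crypt"
        # An in-QEMU network client (e.g. RBD) exposes no kernel block
        # device to attach dm-crypt to -- exactly the gap described above.
        raise NotImplementedError("no block device to layer dm-crypt on")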

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Friday, November 20, 2015 10:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> Kuvaja, Erno wrote:
> > So we were brainstorming this with Rocky the other night. Would this be
> possible to do by following:
> > 1) we still tag juno EOL in few days time
> > 2) we do not remove the stable/juno branch
> > 3) we run periodic grenade jobs for kilo
> >
> > I'm not that familiar with the grenade job itself so I'm doing couple of
> assumptions, please correct me if I'm wrong.
> > 1) We could do this with py27 only
> > 2) We could do this with Ubuntu 1404 only
> >
> > If this is doable would we need anything special for these jobs in infra 
> > point
> of view or can we just schedule these jobs from the pool running our other
> jobs as well?
> > If so is there still "quiet" slots on the infra utilization so that we 
> > would not
> be needing extra resources poured in for this?
> > Is there something else we would need to consider in QA/infra point of
> view?
> >
> > Benefits for this approach:
> > 1) The upgrade to kilo would be still tested occasionally.
> > 2) Less work for setting up the jobs as we do the installs from the
> > stable branch currently (vs. installing the last from tarball)
> >
> > What we should have as requirements for doing this:
> > 1) Someone making the changes to the jobs so that the grenade job gets
> ran periodically.
> > 2) Someone looking after these jobs.
> > 3) Criteria for stop doing this, X failed runs, some set timeperiod,
> > something else. (and removing the stable/juno branch)
> >
> > Big question ref the 2), what can we do if the grenade starts failing? In
> theory we won't be merging anything to kilo that _should_ cause this and we
> definitely will not be merging anything to Juno to fix these issues anymore.
> How much maintenance those grenade jobs themselves needs?
> >
> > So all in all, is the cost doing above too much to get indicator that tells 
> > us
> when Juno --> Kilo upgrade is not doable anymore?
> 
> Let's wait a bit for this discussion for the return of the Infra PTL from
> vacation, his input is critical to any decision we can make. Jeremy should be
> back on Monday.
> 
> --
> Thierry Carrez (ttx)

Sure, I didn't know that he is on holiday, but there was a reason why I added 
the infra and qa tags to the subject. Like you said, infra being able to facilitate 
this is crucial for any plans.

- Erno
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Dmitry Tantsur

On 11/20/2015 12:50 AM, Bruno Cornec wrote:

Hello,

Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:

Hi list and Bruno,

I’m interested in adding virtual media boot interface for redfish (
https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot).

It depends on
https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
and a corresponding spec https://review.openstack.org/184653, that
proposes
adding support for redfish (adding new power and management
interfaces) to
ironic. It also seems to depend on python-redfish client -
https://github.com/devananda/python-redfish.


Very good idea ;-)


I’d like to know what is the current status of it?


We have made recently some successful tests with both a real HP ProLiant
server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.

The version working for these tests is at
https://github.com/bcornec/python-redfish (prototype branch)
I think we should now move that work into master and make again a pull
request to Devananda.


Is there some roadmap of what should be added to
python-redfish (or is the one mentioned in spec is still relevant)?


I think this is still relevant.


Is there a way for others to contribute in it?


Feel free to git clone the repo and propose patches to it! We would be
happy to have contributors :-) I've also copied our mailing list so the
other contributors are aware of this.


Bruno, do you plan to move it
under ironic umbrella, or into pyghmi as people suggested in spec?


That's a difficult question. On one hand, I don't think python-redfish
should be under the OpenStack umbrella per se. This is a useful python
module to dialog with servers providing a Redfish interface and this has
no relationship with OpenStack ... except that it's very useful for
Ironic! But it could also be used by other projects in the future, such as
Hadoop for node deployment or my MondoRescue Disaster Recovery project,
for example. That's also why we have not used OpenStack modules, in order to
avoid creating an artificial dependency that could prevent that module
from being used by these other projects.


Using openstack umbrella does not automatically mean the project can't 
be used outside of openstack. It just means you'll be using openstack 
infra for its development, which might be a big plus.




I'm new to the python galaxy myself, but thought that pypy would be the
right place for it, but I really welcome suggestions here.


You mean PyPI? I don't see how these 2 contradict each other, PyPI is 
just a way to distribute releases.



I also need to come back to the Redfish spec itself and update it with the
latest feedback we got, in order to have more up-to-date content for the
Mitaka cycle.

Best regards,
Bruno.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User Summit

2015-11-20 Thread Daniel Comnea
Superb report Jim, thanks !

On Thu, Nov 19, 2015 at 10:47 AM, Markus Zoeller 
wrote:

> David Pursehouse  wrote on 11/12/2015 09:22:50
> PM:
>
> > From: David Pursehouse 
> > To: OpenStack Development Mailing List
> 
> > Cc: openstack-in...@lists.openstack.org
> > Date: 11/12/2015 09:27 PM
> > Subject: Re: [openstack-dev] [OpenStack-Infra] Report from Gerrit User
> Summit
> >
> > On Mon, Nov 9, 2015 at 10:40 PM David Pursehouse
>  > > wrote:
> >
> > <...>
> >
> > * As noted in another recent thread by Khai, the hashtags support
> >   (user-defined tags applied to changes) exists but depends on notedb
> >   which is not ready for use yet (targeted for 3.0 which is probably
> >   at least 6 months off).
>
> >
> > We're looking into the possibility of enabling only enough of the
> > notedb to make hashtags work in 2.12.
> >
> >
> >
> > Unfortunately it looks like it's not going to be possible to do this.
>
> That's a great pity. :(
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread Balázs Gibizer


> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: November 19, 2015 23:29
> On 11/19/2015 4:05 PM, Ryan Rossiter wrote:
> > Reading through [1] I started getting worries in the back of my head
> > about versioning these notifications. The main concern being how can
> > the consumer know about the versions and what's different between
> them?
> > Because these versioned notification payloads hold live nova objects,
> > there can be a lot of rug-pulling going on underneath these
> > notifications. If the payload doesn't pin itself to a certain level of
> > the object, a consumer can never be guaranteed the version of the
> > payload's object they will be receiving. I ran through a few of the
> > scenarios about irregular versions in the notifications subteam
> > meeting on Tuesday [2].
> >
> > My question is do we care about the consumer? Or is it a case of
> > "the consumer is always right" so we need to make sure we hand them
> > super consistent, well-defined blobs across the wire? Consumers will
> > have no idea of nova object internals, unless they feel like `python
> > -c import nova`. I do think we get one piece of help from o.vo though.
> > When the object is serialized, it hands the version with the object.
> > So consumers can look at the object and say "oh, I got 1.4 I know what
> > to do with this". But... they will have to implement their own compat
> > logic. Everyone will have to implement their own compat logic.
> >
> > We could expose a new API for getting the schema for a specific
> > version of a notification, so a consumer will know what they're
> > getting with their notifications. But I think that made mriedem
> > nauseous. We could make an oslo library that stuffs a shim in between
> > o.vo and nova's notifications to help out with compat/versioning, but
> > that sounds like a lot of work, especially because the end goal is still not
> clearly defined.
> 
> The term is 'nauseated'. To be nauseous, you nauseate others. Which I might
> do from time to time.
> 
> Sorry, I'm channeling one of my wives' pet peeves because I say the same
> thing. Sometimes just to annoy her.
> 
> But yeah, I was made physically ill from that idea.
> 
> >
> > Thoughts?
> >
> > [1] https://review.openstack.org/#/c/247024
> > [2]
> > http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-
> alt/%23ope
> > nstack-meeting-alt.2015-11-17.log.html#t2015-11-17T20:22:29
> >
> >
> 
> One idea I had was (and maybe this would be in a separate library like you're
> talking about, i.e. nova-notifications), but if nova emits the notification 
> with
> the version and the consumer calls into the library that translates it into a
> version they want, then get that transformed thing back.
> 
> However, how does the consumer know what they want or what they can
> handle? Do they pin a version in configuration somewhere? I could see
> something like how we have upgrade levels pinned in nova so newer
> conductor can backlevel things for older computes.
> 
> I was also thinking about microversions and novaclient, but in that case
> novaclient knows what max microversion it can handle and only requests up
> to that version, and then nova-api handles the compat work. In the case of
> notifications, nova is just broadcasting those so it's not doing any compat
> work, but it's the only thing that knows *how* to do the compat work...
> 
> So yeah, I'm lost.

A minor version change shall not cause any problem for the consumer, as the 
payload is backward compatible between minor versions. So if the consumer does 
not need the new field then he/she does not need to change anything in his/her 
parser. As far as I know we have had a single major object version in nova so far, 
so this is not a frequent event. In case of a major change I think we can offer 
version pinning from nova via configuration as a future step.

The library idea has the problem that it would be a Python lib and consumers can 
be in any language. For me the lib would be used for compat code, but as I mentioned 
above incompatibility is not that frequent. On the other hand, discoverability 
of notifications is more important for me. For that I can suggest providing 
notification samples as a first step so the consumer can see in the source tree 
what notifications are provided by nova. A natural next step would be to 
provide not just samples but schemas for the notifications. I haven't looked into it 
too deeply, but I feel that if we provide a JSON schema for the notifications then the 
consumer can generate an object model from that schema without too much effort. 
It might not help directly with backporting the payload but it at least automates 
things around it. JSON schema has the benefit that it is language 
independent too.
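
As a rough illustration of that last point, a consumer written in Python could
validate an incoming payload against a published schema before touching it; the
schema fragment and field names below are invented for the example, not the
actual nova payload layout:

    import jsonschema

    # Hypothetical schema a project could publish next to its notification
    # samples; the field names here are illustrative only.
    INSTANCE_UPDATE_SCHEMA = {
        "type": "object",
        "properties": {
            "nova_object.version": {"type": "string"},
            "nova_object.data": {
                "type": "object",
                "properties": {
                    "uuid": {"type": "string"},
                    "state": {"type": "string"},
                },
                "required": ["uuid", "state"],
            },
        },
        "required": ["nova_object.version", "nova_object.data"],
    }

    def handle_notification(payload):
        # Reject anything that does not match the published schema.
        jsonschema.validate(payload, INSTANCE_UPDATE_SCHEMA)
        major = int(payload["nova_object.version"].split(".")[0])
        if major != 1:
            # Major bumps are rare; a consumer could pin or refuse here.
            raise ValueError("unsupported payload version %s"
                             % payload["nova_object.version"])
        return payload["nova_object.data"]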

Cheers,
Gibi
> 
> --
> 
> Thanks,
> 
> Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] OpenStack-Announce List

2015-11-20 Thread Thierry Carrez
Tom Fifield wrote:
> I'd like to get your thoughts about the OpenStack-Announce list.
> 
> We describe the list as:
> 
> """
> Subscribe to this list to receive important announcements from the
> OpenStack Release Team and OpenStack Security Team.
> 
> This is a low-traffic, read-only list.
> """
> 
> Up until July 2015, it was used for the following:
> * Community Weekly Newsletter
> * Stable branch release notifications
> * Major (i.e. Six-monthly) release notifications
> * Important security advisories

Actually it's all security advisories, not just "important" ones.

> and had on average 5-10 messages per month.
> 
> After July 2015, the following was added:
> * Release notifications for clients and libraries (one email per
> library, includes contributor-focused projects)
> 
> resulting in an average of 70-80 messages per month.
> 
> Personally, I no longer consider this volume "low traffic" :)
> 
> In addition, I have been recently receiving feedback that users have
> been unsubscribing from or deleting without reading the list's posts.
> 
> That isn't good news, given this is supposed to be the place where we
> can make very important announcements and have them read.
> 
> One simple suggestion might be to batch the week's client/library
> release notifications into a single email. Another might be to look at
> the audience for the list, what kind of notifications they want, and
> chose the announcements differently.
> 
> What do you think we should do to ensure the announce list remains useful?

-announce was originally designed as an "all-hands" channel for
announcements affecting everyone. It then morphed into the place users
wanting to keep up with upstream news directly affecting them would
subscribe to. Since only the Release management team (back when it
included the VMT and stable maint) would post to it, it became a
"release announcement" mailing-list. So when we started to make
intermediary releases of "stuff" we just posted everything there.

Mailing-lists are not primarily defined by their topic, they are defined
by their audiences. The problem we have now with -announce is that we
forgot that and started to define it by topic. So we need to take a step
back and redefine it by audience, and then see which topics are appropriate.

We could definitely go back to "the place users wanting to keep up with
upstream news directly affecting them should subscribe to", and post only:

- user-facing service releases (type:service deliverables), on stable
branches or development branches
- security vulnerabilities and security notes
- weekly upstream development news (the one Mike compiles), and include
a digest of all library/ancillary services releases of the week in there

Library releases are not "directly affecting users", so not urgent news
for "users wanting to keep up with upstream news" and can wait to be
mentioned in the weekly digest.

The community weekly newsletter is not "upstream news directly affecting
users" but more of a general digest on openstack news & blogposts, and
would be posted on the openstack blog.

How does that sound ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Stanislaw Bogatkin
Igor,

it is much more clear for me now. Thank you :)

On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov  wrote:

> Hi Stanislaw,
>
> The reason behind this is simple - deployment tests are heavy. Each
> deployment test occupies whole server for ~2 hours, for each commit we have
> 2 deployment tests (for current fuel-library master) and that’s just
> because we don’t test CentOS deployment for now.
> If we assume that developers will rertrigger deployment tests only when
> retrigger would actually solve the failure - it’s still not smart in terms
> of HW usage to retrigger both tests when only one has failed, for example.
> And there are cases when retrigger just won’t do it and CI Engineer must
> manually erase the existing environment on slave or fix it by other means,
> so it’s better when CI Engineer looks through logs before each retrigger of
> deployment test.
>
> Hope this answers your question.
>
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin 
> wrote:
>
> Hi Igor,
>
> would you be so kind tell, why fuel-library deployment tests doesn't
> support this? Maybe there is a link with previous talks about it?
>
> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov 
> wrote:
>
>> Hi,
>>
>> I’d like to inform you that all jobs running on Fuel CI (with the
>> exception of fuel-library deployment tests) now support retriggering via
>> “recheck” or “reverify” comments in Gerrit.
>> Exact regex is the same one used in Openstack-Infra’s zuul and can be
>> found here
>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>>
>> CI-Team kindly asks you to not abuse this option, unfortunately not every
>> failure could be solved by retriggering.
>> And, to stress this out once again: fuel-library deployment tests don’t
>> support this, so you still have to ask for a retrigger in #fuel-infra irc
>> channel.
>>
>> Thanks for attention.
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com
>>
>>
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Jesse Pretorius
On 19 November 2015 at 09:43, Thierry Carrez  wrote:

>
> So we have three models. The release:independent model is for projects
> that don't follow the common development cycle, and therefore won't make
> a "liberty" release. The release:cycle-with-milestones model is the
> traditional "one release at the end of the cycle" model, and the
> release:cycle-with-intermediary model is an hybrid where you follow the
> development cycle (and make an end-of-cycle release) but can still make
> intermediary, featureful releases as necessary.
>

Hmm, then it seems to me that OpenStack-Ansible should be tagged
'release:cycle-with-intermediary' instead of 'release:independent' - is
that correct?


> Looking at your specific case, it appears you could adopt the
> release:cycle-with-intermediary model, since you want to maintain a
> branch mapped to a given release. The main issue is your (a) point,
> especially the "much later" point. Liberty is in the past now, so making
> "liberty" releases now that we are deep in the Mitaka cycle is a bit
> weird.
>

The deployment projects, and probably packaging projects too, are faced
with the same issue. There's no guarantee that their x release will be done
on the same day as the OpenStack services release their x branches as the
deployment projects still need some time to verify stability and
functionality once the services are finalised. While it could be easily
said that we simply create the branch, then backport any fixes, this is not
necessarily ideal as it creates an additional review burden and doesn't
really match how the stable branches are meant to operate according to the
policy.


> Maybe we need a new model to care for such downstream projects when they
> can't release in relative sync with the projects they track.
>

Perhaps. Or perhaps the rules can be relaxed for a specific profile of
projects (non core?).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Redfish drivers in ironic

2015-11-20 Thread Vladyslav Drok
On Fri, Nov 20, 2015 at 1:50 AM, Bruno Cornec  wrote:

> Hello,
>
> Vladyslav Drok said on Thu, Nov 19, 2015 at 03:59:41PM +0200:
>
>> Hi list and Bruno,
>>
>> I’m interested in adding virtual media boot interface for redfish (
>> https://blueprints.launchpad.net/ironic/+spec/redfish-virtual-media-boot
>> ).
>> It depends on
>> https://blueprints.launchpad.net/ironic/+spec/ironic-redfish
>> and a corresponding spec https://review.openstack.org/184653, that
>> proposes
>> adding support for redfish (adding new power and management interfaces) to
>> ironic. It also seems to depend on python-redfish client -
>> https://github.com/devananda/python-redfish.
>>
>
> Very good idea ;-)
>
> I’d like to know what is the current status of it?
>>
>
> We have made recently some successful tests with both a real HP ProLiant
> server with a redfish compliant iLO FW (2.30+) and the DMTF simulator.
>

Great news! :)


>
> The version working for these tests is at
> https://github.com/bcornec/python-redfish (prototype branch)
> I think we should now move that work into master and make again a pull
> request to Devananda.
>
> Is there some roadmap of what should be added to
>> python-redfish (or is the one mentioned in spec is still relevant)?
>>
>
> I think this is still relevant.
>
> Is there a way for others to contribute in it?
>>
>
> Feel free to git clone the repo and propose patches to it ! We would be
> happy to have contributors :-) I've also copied our mailing list to the
> other contributors are aware of this.


I'll dig into current code and will try to contribute something meaningful
then.


>
>
> Bruno, do you plan to move it
>> under ironic umbrella, or into pyghmi as people suggested in spec?
>>
>
> That's a difficult question. One one hand, I don't think python-redfish
> should be under the OpenStack umbrella per se. This is a useful python
> module to dialog with servers providing a Redfish interface and this has
> no relationship with OpenStack ... except that it's very useful for
> Ironic ! But could also be used by other projects in the future such as
> Hadoop for node deployment, or my MondoRescue Disaster Recovery project
> e.g. That's also why we have not used OpenStack modules in order to
> avoid to create an artificial dependency that could prevent that module
> tobe used py these other projects.
>
> I'm new to the python galaxy myself, but thought that pypy would be the
> right place for it, but I really welcome suggestions here.
> I also need to come back to the Redfish spec itself and upate with the
> atest feedback we got, in order to have more up to date content for the
> Mitaka cycle.
>
> Best regards,
> Bruno.
> --
> Open Source Profession, Linux Community Lead WW  http://hpintelco.net
> HPE EMEA EG Open Source Technology Strategist http://hp.com/go/opensource
> FLOSS projects: http://mondorescue.org http://project-builder.org
> Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org
>

Thanks for the answers :)
Vlad
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Sean Dague
On 11/20/2015 06:01 AM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Alan Pevec [mailto:ape...@gmail.com]
>> Sent: Friday, November 20, 2015 10:46 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno)
>> WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
>>
>>> So we were brainstorming this with Rocky the other night. Would this be
>> possible to do by following:
>>> 1) we still tag juno EOL in few days time
>>> 2) we do not remove the stable/juno branch
>>
>> Why not?
>>
>>> 3) we run periodic grenade jobs for kilo
>>
>> From a quick look, grenade should work with a juno-eol tag instead of
>> stable/juno, it's just a git reference.
>> "Zombie" Juno->Kilo grenade job would need to set
>> BASE_DEVSTACK_BRANCH=juno-eol and for devstack all
>> $PROJECT_BRANCH=juno-eol (or 2014.2.4 should be the same commit).
>> Maybe I'm missing some corner case in devstack where stable/* is assumed
>> but if so that should be fixed anyway.
>> Leaving branch around is a bad message, it implies there support for it, 
>> while
>> there is not.
>>
>> Cheers,
>> Alan
> 
> That sounds like an easy compromise.

Before doing that thing, do you regularly look into grenade failures to
determine root cause?

Because a periodic job that fails and isn't looked at is just a waste
of resources. And from past experience very, very few people look at
these job results.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Clark Boylan
On Fri, Nov 20, 2015, at 04:32 AM, Julien Danjou wrote:
> On Fri, Nov 20 2015, Clark Boylan wrote:
> 
> > If you have a stable/X.Y branch or stable/foo but are still wanting to
> > map onto the 6 month release cycle (we know this because you are running
> > devstack-gate) how do we make that mapping? is it arbitrary? is there
> > some deterministic method? Things like this affect the changes necessary
> > to the tools but should be listed upfront.
> 
> Honestly, we don't use devstack-gate because we map onto a 6 months
> release. We use devstack-gate because that seems to be the canonical way
> of using devstack in the gate. :)
Maybe I should've said "because you are running devstack via
devstack-gate". Running devstack requires making choices of what
services to run based on the 6 month release cycle.
> 
> Right now, I think the problem I stated in:
>   [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate
>   with stable/1.3
>   http://lists.openstack.org/pipermail/openstack-dev/2015-November/079849.html
> 
> is pretty clear. Or if it's not feel free to reply to it and I'll give
> more information. :)
Will look and followup there.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Stanislaw Bogatkin
Hi Igor,

would you be so kind tell, why fuel-library deployment tests doesn't
support this? Maybe there is a link with previous talks about it?

On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov  wrote:

> Hi,
>
> I’d like to inform you that all jobs running on Fuel CI (with the
> exception of fuel-library deployment tests) now support retriggering via
> “recheck” or “reverify” comments in Gerrit.
> Exact regex is the same one used in Openstack-Infra’s zuul and can be
> found here
> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>
> CI-Team kindly asks you to not abuse this option, unfortunately not every
> failure could be solved by retriggering.
> And, to stress this out once again: fuel-library deployment tests don’t
> support this, so you still have to ask for a retrigger in #fuel-infra irc
> channel.
>
> Thanks for attention.
> --
> Igor Belikov
> Fuel CI Engineer
> ibeli...@mirantis.com
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [QoS] meeting rebooted

2015-11-20 Thread Moshe Levi
Just to add more details about the when and where  :) 
We will have a weekly meeting on Wednesday at 1400 UTC in #openstack-meeting-3
http://eavesdrop.openstack.org/#Neutron_QoS_Meeting 

Thanks,
Moshe Levi. 

> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Friday, November 20, 2015 12:08 PM
> To: Miguel Angel Ajo 
> Cc: OpenStack Development Mailing List (not for usage questions)  d...@lists.openstack.org>; victor.r.how...@gmail.com;
> irenab@gmail.com; Moshe Levi ; Vikram
> Choudhary ; Gal Sagie ; Haim
> Daniel 
> Subject: Re: [neutron] [QoS] meeting rebooted
> 
> Miguel Angel Ajo  wrote:
> 
> >
> >Hi everybody,
> >
> >  We're restarting the QoS meeting for next week,
> >
> >  Here are the details, and a preliminary agenda,
> >
> >   https://etherpad.openstack.org/p/qos-mitaka
> >
> >
> >   Let's keep QoS moving!,
> >
> > Best,
> > Miguel Ángel.
> 
> I think you better give idea when/where it is restarted. :)
> 
> Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Duncan Thomas
Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).

I suggest a design where the worker code is in brick but the decisions stay
in nova. This enables code sharing while not substantially altering the
nova plan - it also encourages strong back-compatibility guarantees with
on-disk formats, since the dm setup part of the code will be slightly more
difficult to modify.
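
A very rough sketch of that split follows; all names and signatures here are
invented for illustration, and the real interface would need to be worked out
in a more detailed spec as discussed below:

    # Hypothetical shape of the proposed split: mechanics in os-brick,
    # policy left to the caller (nova or cinder).

    class DmCryptWorker(object):
        """os-brick side: knows *how* to set up/tear down dm-crypt mappings."""

        def attach(self, device_path, passphrase):
            # Would drive cryptsetup and return the /dev/mapper/... path.
            raise NotImplementedError

        def detach(self, mapper_path):
            raise NotImplementedError

    def connect_encrypted_volume(connection_info, worker, use_native_qemu_luks):
        """Nova side: decides *whether* dm-crypt is appropriate at all."""
        if use_native_qemu_luks:
            # Hand the encrypted volume straight to QEMU; no dm-crypt involved.
            return connection_info
        # Otherwise delegate the mechanical setup to the shared worker code.
        connection_info["device_path"] = worker.attach(
            connection_info["device_path"], connection_info["passphrase"])
        return connection_info
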
On 20 Nov 2015 13:10, "Daniel P. Berrange"  wrote:

> On Fri, Nov 20, 2015 at 03:22:04AM +, Li, Xiaoyan wrote:
> > Hi all,
> >
> > To fix bug [1][2] in Cinder, Cinder needs to use
> nova/volume/encryptors[3]
> > to attach/detach encrypted volumes.
> >
> > To decrease the code duplication, I raised a BP[4] to move encryptors to
> > os-brick[5].
> >
> > Once it is done, Nova needs to update to use the common library. This
> > is BP raised. [6]
>
> You need to proposal a more detailed spec for this, not merely a blueprint
> as there are going to be significant discussion points here.
>
> In particular for the QEMU/KVM nova driver, this proposal is not really
> moving in a direction that is aligned with our long term desire/plan for
> volume encryption and/or storage management in Nova with KVM.  While we
> currently use dm-crypt with volumes that are backed by block devices,
> this is not something we wish to use long term. Increasingly the storage
> used is network based, and while we use in-kernel network clients for
> iSCSI/NFS, we use an in-QEMU client for RBD/Gluster storage. QEMU also
> has support for in-QEMU clients for iSCSI/NFS and it is likely we'll use
> them in Nova in future too.
>
> Now encryption throws a (small) spanner in the works as the only way to
> access encrypted data right now is via dm-crypt, which obviously doesn't
> fly when there's no kernel block device to attach it to. Hence we are
> working in enhancement to QEMU to let it natively handle LUKS format
> volumes. At which point we'll stop using dm-crypt for for anything and
> do it all in QEMU.
>
> Nova currently decides whether it wants to use the in-kernel network
> client, or an in-QEMU network client for the various network backed
> storage drivers. If os-brick takes over encryption setup with dm-crypt,
> then it would potentially be taking the decision away from Nova about
> whether to use in-kernel or in-QEMU clients, which is not desirable.
> Nova must retain control over which configuration approach is best
> for the hypervisor it is using.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Alexey Shtokolov
Igor,

Thank you for this feature.
As far as I understand, recheck/reverify is mostly useful for internal CI-related
failures. And Fuel CI and OpenStack CI are two different infrastructures.
So if something is broken on Fuel CI, "recheck" will restart all jobs on
OpenStack CI too. And the opposite case works the same way.

Should we perhaps use another keyword for Fuel CI to prevent extra load
on the infrastructure? For example "refuel" or something like this?

Best regards,
Alexey Shtokolov

2015-11-20 14:24 GMT+03:00 Stanislaw Bogatkin :

> Igor,
>
> it is much more clear for me now. Thank you :)
>
> On Fri, Nov 20, 2015 at 2:09 PM, Igor Belikov 
> wrote:
>
>> Hi Stanislaw,
>>
>> The reason behind this is simple - deployment tests are heavy. Each
>> deployment test occupies whole server for ~2 hours, for each commit we have
>> 2 deployment tests (for current fuel-library master) and that’s just
>> because we don’t test CentOS deployment for now.
>> If we assume that developers will rertrigger deployment tests only when
>> retrigger would actually solve the failure - it’s still not smart in terms
>> of HW usage to retrigger both tests when only one has failed, for example.
>> And there are cases when retrigger just won’t do it and CI Engineer must
>> manually erase the existing environment on slave or fix it by other means,
>> so it’s better when CI Engineer looks through logs before each retrigger of
>> deployment test.
>>
>> Hope this answers your question.
>>
>> --
>> Igor Belikov
>> Fuel CI Engineer
>> ibeli...@mirantis.com
>>
>> On 20 Nov 2015, at 13:57, Stanislaw Bogatkin 
>> wrote:
>>
>> Hi Igor,
>>
>> would you be so kind tell, why fuel-library deployment tests doesn't
>> support this? Maybe there is a link with previous talks about it?
>>
>> On Fri, Nov 20, 2015 at 1:34 PM, Igor Belikov 
>> wrote:
>>
>>> Hi,
>>>
>>> I’d like to inform you that all jobs running on Fuel CI (with the
>>> exception of fuel-library deployment tests) now support retriggering via
>>> “recheck” or “reverify” comments in Gerrit.
>>> Exact regex is the same one used in Openstack-Infra’s zuul and can be
>>> found here
>>> https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
>>>
>>> CI-Team kindly asks you to not abuse this option, unfortunately not
>>> every failure could be solved by retriggering.
>>> And, to stress this out once again: fuel-library deployment tests don’t
>>> support this, so you still have to ask for a retrigger in #fuel-infra irc
>>> channel.
>>>
>>> Thanks for attention.
>>> --
>>> Igor Belikov
>>> Fuel CI Engineer
>>> ibeli...@mirantis.com
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
---
WBR, Alexey Shtokolov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils

2015-11-20 Thread Davanum Srinivas
Abhishek,

Go for it!

On Fri, Nov 20, 2015 at 2:32 AM, Kekane, Abhishek
 wrote:
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: 16 November 2015 21:46
> To: openstack-dev
> Subject: Re: [openstack-dev] [oslo] Graduate cliutils.py into oslo.utils
>
> Excerpts from Kekane, Abhishek's message of 2015-11-16 07:33:48 +:
>> Hi,
>>
>> As apiclient is now removed from oslo-incubator, to proceed with
>> request-id spec [1] I have two options in mind,
>>
>>
>> 1.   Use keystoneauth1 + cliff in all python-clients (add request-id 
>> support in cliff library)
>
> cliff is being used outside of OpenStack, and is not at all related to REST 
> API access, so I don't think that's the right place.
>
>>
>> 2.   apiclient code is available in all python-*clients, modify this 
>> code in individual clients and add support to return request-id.
>
> Yes, I think that makes sense.
>
> Hi Devs,
>
> As mentioned by Doug, I will start pushing patches for 
> python-cinderclient, python-glanceclient and python-novaclient from next week 
> which include changes for returning the request-id to the caller.
> Please let me know if you have any suggestions on the same.
>
>>
>> Please let me know your opinion on the same.
>>
>> [1] https://review.openstack.org/#/c/156508/
>>
>> Thanks & Regards,
>>
>> Abhishek Kekane
>>
>> > On Nov 11, 2015, at 3:54 AM, Andrey Kurilin <... at mirantis.com> wrote:
>>
>> >
>>
>> >
>>
>> >
>>
>> > On Tue, Nov 10, 2015 at 4:25 PM, Sean Dague <... at dague.net> wrote:
>>
>> > On 11/10/2015 08:24 AM, Andrey Kurilin wrote:
>>
>> > >>It was also proposed to reuse openstackclient or the openstack SDK.
>>
>> > >
>>
>> > > Openstack SDK was proposed a long time ago(it looks like it was
>> > > several
>>
>> > > cycles ago) as "alternative" for cliutils and apiclient, but I
>> > > don't
>>
>> > > know any client which use it yet. Maybe openstacksdk cores should
>> > > try to
>>
>> > > port any client as an example of how their project should be used.
>>
>> >
>>
>> > The SDK is targeted for end user applications, not service clients.
>> > I do
>>
>> > get there was lots of confusion over this, but SDK is not the answer
>>
>> > here for service clients.
>>
>> >
>>
>> > Ok, thanks for explanation, but there is another question in my head: If 
>> > openstacksdk is not for python-*clients, why apiclient(which is actually 
>> > used by python-*clients) was marked as deprecated due to openstacksdk?
>>
>>
>>
>> The Oslo team wanted to deprecate the API client code because it wasn't 
>> being maintained. We thought at the time we did so that the SDK would 
>> replace the clients, but discussions since that time have changed direction.
>>
>> >
>>
>> > The service clients are *always* going to have to exist in some form.
>>
>> > Either as libraries that services produce, or by services deciding
>> > they
>>
>> > don't want to consume the libraries of other clients and just put a
>>
>> > targeted bit of rest code in their own tree to talk to other services.
>>
>> >
>>
>> > -Sean
>>
>> >
>>
>> > --
>>
>> > Sean Dague
>>
>> > http://dague.net 
>>
>> >
>>
>> > 
>> > __
>>
>> > OpenStack Development Mailing List (not for usage questions)
>>
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > 
>>
>> >
>>
>> >
>>
>> >
>>
>> > --
>>
>> > Best regards,
>>
>> > Andrey Kurilin.
>>
>> > 
>> > __
>>
>> > OpenStack Development Mailing List (not for usage questions)
>>
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > 
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Clark Boylan


On Thu, Nov 19, 2015, at 12:55 PM, Chris Dent wrote:
> On Thu, 19 Nov 2015, Julien Danjou wrote:
> 
> > It would be good to support that as being *normal*, not "potentially
> > incorrect and random"!
> 
> Yes.
> 
> The underlying issue in this thread is the dominance of the six month
> cycle and the way this is perceived to be (any may actually be) a
> benefit for distributors, marketers, etc. That dominance drives the
> technological and social context of OpenStack. No surprise that it is
> present in our tooling and our schedules but sometimes I think it
> would be great if we could fight the power, shift the paradigm, break
> the chains.
> 
> But that's crazy talk, isn't it?
> 
> However it is pretty clear the dominance is not aligned with at least
> some of the goals of a big tent. One goal, in particular, is making
> OpenStack stuff useful and accessible to people or groups outside of
> OpenStack where release-often is awesome and the needs of the packagers
> aren't really that important.
> 
> I reckon (and this may be an emerging consensus somewhere in this
> thread) we need to make it easier (by declaration) in the tooling
> to test against whatever is desired. Can we enumerate the changes
> required to make that go?
"Test whatever is desired" is far to nebulous. We need an actual set of
concrete needs and requirements and once you have that you can worry
about enumerating changes. I am not sure I have seen anything like this
in the thread so far.

If you have a stable/X.Y branch or stable/foo but are still wanting to
map onto the 6 month release cycle (we know this because you are running
devstack-gate) how do we make that mapping? is it arbitrary? is there
some deterministic method? Things like this affect the changes necessary
to the tools but should be listed upfront.

Once we have enumerated the problem we can enumerate the changes to fix
it.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Clark Boylan wrote:

> If you have a stable/X.Y branch or stable/foo but are still wanting to
> map onto the 6 month release cycle (we know this because you are running
> devstack-gate) how do we make that mapping? is it arbitrary? is there
> some deterministic method? Things like this affect the changes necessary
> to the tools but should be listed upfront.

Honestly, we don't use devstack-gate because we map onto a 6 months
release. We use devstack-gate because that seems to be the canonical way
of using devstack in the gate. :)

Right now, I think the problem I stated in:
  [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with 
stable/1.3
  http://lists.openstack.org/pipermail/openstack-dev/2015-November/079849.html

is pretty clear. Or if it's not feel free to reply to it and I'll give
more information. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What things do we want to get into a python-novaclient 3.0 release?

2015-11-20 Thread Matthew Booth
I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only the
external api:

https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

It obviously overlaps considerably with host-servers-migrate, which is
supposed to do the same thing. Users seem to have been appreciative, so I'd
be interested to see it merged in some form.
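For readers who just want the shape of the idea, here is a minimal, hedged
sketch using only public python-novaclient calls (this is not the gist itself;
'compute-01' and the credential handling are placeholders, and all the
waiting/confirmation logic that makes the real thing robust is omitted):

    # Minimal sketch (not the gist above): cold-migrate every instance off one
    # compute host using only the external API via python-novaclient.
    # Credentials come from the usual OS_* environment variables; 'compute-01'
    # is a placeholder host name.
    import os

    from novaclient import client

    nova = client.Client('2',
                         os.environ['OS_USERNAME'],
                         os.environ['OS_PASSWORD'],
                         os.environ['OS_TENANT_NAME'],
                         os.environ['OS_AUTH_URL'])

    source_host = 'compute-01'

    # The host/all_tenants filters require an admin token.
    servers = nova.servers.list(search_opts={'host': source_host,
                                             'all_tenants': 1})
    for server in servers:
        print('Migrating %s (%s)' % (server.name, server.id))
        server.migrate()
        # A robust tool would also poll the server status, confirm the resize
        # once it reaches VERIFY_RESIZE, and handle failures; none of that is
        # shown here.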

Matt

On Thu, Nov 19, 2015 at 6:18 PM, Matt Riedemann 
wrote:

> We've been talking about doing a 3.0 release for novaclient for awhile so
> we can make some backward incompatible changes, like:
>
> 1. Removing the novaclient.v1_1 module
> 2. Dropping py26 support (if there is any explicit py26 support in there)
>
> What else are people aware of?
>
> Monty was talking about doing a thing with auth:
>
> https://review.openstack.org/#/c/245200/
>
> But it sounds like that is not really needed now?
>
> I'd say let's target mitaka-2 for a 3.0 release and get these flushed out.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][CI] recheck/reverify support for Fuel CI jobs

2015-11-20 Thread Igor Belikov
Hi,

I’d like to inform you that all jobs running on Fuel CI (with the exception of 
fuel-library deployment tests) now support retriggering via “recheck” or 
“reverify” comments in Gerrit.
The exact regex is the same one used in OpenStack Infra's Zuul and can be found 
here: 
https://github.com/fuel-infra/jenkins-jobs/blob/master/servers/fuel-ci/global.yaml#L3
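Purely as an illustration (not the production pattern; the real one is at the
link above), the comment check boils down to something like this:

    # Illustration only; the real regex lives in the jenkins-jobs repo linked
    # above. Shows how a Gerrit comment could be tested for a retrigger request.
    import re

    RETRIGGER_RE = re.compile(r'^\s*(recheck|reverify)\b',
                              re.IGNORECASE | re.MULTILINE)

    def wants_retrigger(comment):
        """Return True if a Gerrit comment asks for a job retrigger."""
        return bool(RETRIGGER_RE.search(comment))

    print(wants_retrigger('recheck'))               # True
    print(wants_retrigger('Please fix and merge'))  # False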

The CI team kindly asks you not to abuse this option; unfortunately, not every 
failure can be solved by retriggering.
And to stress this once again: fuel-library deployment tests don't support 
this, so you still have to ask for a retrigger in the #fuel-infra IRC channel.

Thanks for your attention.
--
Igor Belikov
Fuel CI Engineer
ibeli...@mirantis.com






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Thierry Carrez
Kuvaja, Erno wrote:
> So we were brainstorming this with Rocky the other night. Would it be 
> possible to do this as follows:
> 1) we still tag juno EOL in a few days' time
> 2) we do not remove the stable/juno branch
> 3) we run periodic grenade jobs for kilo
> 
> I'm not that familiar with the grenade job itself so I'm making a couple of 
> assumptions, please correct me if I'm wrong.
> 1) We could do this with py27 only
> 2) We could do this with Ubuntu 14.04 only
> 
> If this is doable, would we need anything special for these jobs from an infra 
> point of view, or can we just schedule these jobs from the pool running our 
> other jobs as well?
> If so, are there still "quiet" slots in the infra utilization so that we would 
> not need extra resources poured in for this?
> Is there something else we would need to consider from a QA/infra point of view?
> 
> Benefits of this approach:
> 1) The upgrade to kilo would still be tested occasionally.
> 2) Less work for setting up the jobs, as we currently do the installs from the 
> stable branch (vs. installing the last release from a tarball)
> 
> What we should have as requirements for doing this:
> 1) Someone making the changes to the jobs so that the grenade job gets run 
> periodically.
> 2) Someone looking after these jobs.
> 3) Criteria for stopping this: X failed runs, some set time period, 
> something else (and then removing the stable/juno branch).
> 
> Big question regarding 2): what can we do if the grenade job starts failing? In 
> theory we won't be merging anything to kilo that _should_ cause this, and we 
> definitely will not be merging anything to Juno to fix these issues anymore. 
> How much maintenance do those grenade jobs themselves need?
> 
> So all in all, is the cost of doing the above too much to get an indicator that 
> tells us when a Juno --> Kilo upgrade is no longer doable?

Let's wait a bit on this discussion until the Infra PTL returns
from vacation; his input is critical to any decision we can make. Jeremy
should be back on Monday.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][devstack][gnocchi] Unable to run devstack-gate with stable/1.3

2015-11-20 Thread Clark Boylan
On Thu, Nov 19, 2015, at 05:17 AM, Julien Danjou wrote:
> Hi,
> 
> The Gnocchi gate is broken for stable/1.3 because of devstack-gate
> saying¹:
>   ERROR: branch not allowed by features matrix: 1.3
> 
> From what I understand, that's because devstack-gate thinks it should
> try to pull stable/1.3 for devstack & all OpenStack projects, branch
> that does not exist – and make no sense elsewhere than in Gnocchi.
No, this isn't why this is happening. Devstack-gate will happily
fall back to grabbing master for projects if it doesn't otherwise find
the branch for the change under test in other projects. The issue is
that in order to run devstack you have to configure a set of services, and
the services you want to run change over time: we make releases, new
services show up, and old services go away.

As a result devstack-gate is very explicit about what should be
configured for each release [0]. If it doesn't recognize the branch
currently under test it fails rather than doing something implicitly that
is unexpected.
> 
> In the past, we set OVERRIDE_ZUUL_BRANCH=stable/kilo in the Gnocchi jobs
> for some stable branches (we did for stable/1.0), but honestly patching
> the infra each time we do a stable release is getting painful.
You need a mapping of some sort. How should devstack be configured for
stable/X.Y? What about stable/Y.Z? This is one method of providing that
mapping and it is very explicit. We can probably do better but we need
to understand what the desired mapping is before encoding that into any
tools.
> 
> Actually, Gnocchi does not really care about pulling whatever branch of
> the other projects, it just wants them deployed to use them (Keystone
> and Swift). Since the simplest way is to use devstack, that's what it
> uses.
If you have a very specific concrete set of services to be configured
you could possibly ship your own features.yaml to only configure those
things (rather than the default of an entire running cloud). This may
help make the jobs run quicker too.

Another approach would be to set OVERRIDE_ZUUL_BRANCH to master and
OVERRIDE_${project}_PROJECT_BRANCH to ZUUL_BRANCH, so that your
project is always checked out against the correct branch for the change
but is tested against master for everything else. This is probably the
simplest mapping (our stable/X.Y should run against whatever is
current).
> 
> Is there any chance we can have a way to say in devstack{,-gate} "just
> deploy the latest released version of $PROJECT" and that's it?
I think there are several options but which one is used depends on how
you need to map onto the 6 month release cycle (required because
devstack deploys projects using it).

[0]
https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/features.yaml#n3

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Number of IP addresses in a public network

2015-11-20 Thread Aleksey Kasatkin
This is not about public networks only; the same problem can occur with
other networks as well.
All networks (across all node groups) need to be checked,
but right now the check is done just for the public network (and VIPs for
plugins are not taken into account).


Aleksey Kasatkin


On Fri, Nov 20, 2015 at 12:04 AM, Andrew Woodward  wrote:

> The high value of the bug here reflects that the error message is wrong.
> From a UX side we could maybe even justify this as Critical. The error
> message must reflect the correct quantity of addresses required.
>
>
> On Tue, Nov 17, 2015 at 1:31 PM Roman Prykhodchenko  wrote:
>
>> Folks, we should resurrect this thread and find a consensus.
>>
>> On Sep 1, 2015 at 15:00, Andrey Danin  wrote:
>>
>>
>> +1 to Igor.
>>
>> It's definitely not a High bug. The biggest problem I see here is a
>> confusing error message with a wrong number of required IPs. AFAIU we
>> cannot fix it easily now, so let's postpone it to 8.0 but change the message
>> itself [0] in 7.0.
>>
> We managed to create an error that returns '7' when there are 8
> available but 9 are required; at some level we knew that we came up short,
> or we'd just have some lower-level error caught here.
>
>>
>> [0]
>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/task/task.py#L1160-L1163
>>
>> On Tue, Sep 1, 2015 at 1:39 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hello,
>>>
>>> My 5 cents on it.
>>>
>>> I don't think it's really a High or Critical bug for 7.0. If there are
>>> not enough IPs, the CheckBeforeDeploymentTask will fail. And that's
>>> actually OK; it may fail for a different reason without starting the actual
>>> deployment (sending the message to Astute).
>>>
>>> But I agree it's kinda strange that we don't check IPs during the network
>>> verification step. The good fix in my opinion is to move this check
>>> into the network checker (perhaps keeping it here as well), but that
>>> definitely shouldn't be done in 7.0.
>>>
>>> Thanks,
>>> Igor
>>>
>>>
>>> On Mon, Aug 31, 2015 at 2:54 PM, Roman Prykhodchenko 
>>> wrote:
>>> > Hi folks!
>>> >
>>> > Recently a problem that the network check does not tell whether there are
>>> enough IP addresses in a public network [1] was reported. That check is
>>> performed by the CheckBeforeDeployment task, but there are two problems that
>>> happen because this verification is done that late:
>>> >
>>> >  - A deployment fails if there aren't enough addresses in the specified
>>> ranges
>>> >  - If a user wants to get the network configuration they will get an error
>>> >
>>> > The solution to these problems seems to be easy and a straightforward
>>> patch [2] was proposed. However, there is a hidden problem which that
>>> patch does not address: installed plugins may reserve VIPs for
>>> their needs. The issue is that they do it just before deployment, and so
>>> it's not possible to get those reservations when a user wants to check
>>> their network setup.
>>> >
>>> > The important issue we have to address here is that the network
>>> configuration generator will fail if the specified ranges don't fit all VIPs.
>>> There were several proposals to fix that; I'd like to highlight two of them:
>>> >
>>> >  a) Allow VIPs to not have an IP address assigned, if the network config
>>> generator is producing API output.
>>> >  That will prevent GET requests from failing, but since IP
>>> addresses for VIPs are required, the generator will have to fail if it
>>> generates a configuration for the orchestrator.
>>> >  b) Add a release note that users have to calculate IP addresses
>>> manually and put in sane ranges in order not to shoot themselves in the foot. Then
>>> it's also possible to change the network verification output to remind users to
>>> check the ranges before starting a deployment.
>>> >
>>> > In my opinion we cannot follow (a) because it only masks a problem
>>> instead of providing a fix. Also, it requires changing the API, which is not
>>> a good thing to do after the SCF. If we choose (b), then we can work on a
>>> firm solution in 8.0 and fix the problem for real.
>>> >
>>> >
>>> > P.S. We can still merge [2], because it checks whether IP ranges can at
>>> least fit the basic configuration. If you agree, I will update it soon.
>>> >
>>> > [1] https://bugs.launchpad.net/fuel/+bug/1487996
>>> > [2] https://review.openstack.org/#/c/217267/
>>> >
>>> >
>>> >
>>> > - romcheg
>>> >
>>> >
>>> >
>>> >
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not 

Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Thierry Carrez
Julien Danjou wrote:
> On Thu, Nov 19 2015, Doug Hellmann wrote:
> 
>> In my mind the “independent” release model was originally meant to mean that
>> the project was completely on their own, doing potentially incorrect and 
>> random
>> releases. It wasn’t something I anticipated projects *wanting* to use. It
>> evolved to mean something closer to the opposite of the “managed” tag, but I
>> think we should pull back from that use. We want projects to clearly indicate
>> which of the other cycle-oriented models they intend to follow, and we want
>> something cycle-based for most projects to help distributors and deployers
>> understand which versions of things should be used together.
>>
>> If neither of the existing cycle-based tags meets the needs of a large number
>> of projects, then we should have a clear description of the model actually
>> being followed so we can tag the projects following it. That may mean, in 
>> this
>> case, a cycle-with-intermediary-following or something similar, to mean “we
>> have cyclical releases but they come after the cycle of most of the other
>> projects”.
> 
> Gnocchi is applying "release early, release often", so there is not really
> any big cycle like in older OpenStack projects. Major or minor versions are
> released from time to time, and more often than every 6 months in general.
> 
> It would be good to support that as being *normal*, not "potentially
> incorrect and random"!

It is now "normal" since it's a supported model. The issue in this
thread is more about "independent" projects which actually are following
the common cycles and therefore want to track them. I.e. projects that
picked "independent" not because they really are independent but because
they live in a grey area.

> [...]
> And by the way, it's a shame that the release:has-stable-branches tag cannot
> be applied to release:independent. We have stable branches in Gnocchi;
> we cannot have that tag currently for that reason alone. Worse, we often
> hit issues caused by assumptions made about how projects are released. See my
> recent thread about the devstack-gate based jobs failing for stable
> branches.
> 
> It'd be awesome to free those projects and support more flexible release
> schedule for project having a different velocity.

As I said elsewhere, the current "has-stable-branches" tag is useless
and needs to be replaced. Like you said, projects following an alternate
release cycle should be able to have "stable branches" (they just won't
be "stable/liberty" but something like "stable/2.0"). What the tag should
be describing is whether the stable branches follow the common stable policy,
or whether they have their own rules.

So we plan to discontinue the "has-stable-branches" tag (which currently
is mostly a natural consequence of following a release:cycle-with*
model) and replace it with a "follows-stable-policy" tag that the stable
maint team would grant to compliant projects.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][infra][qa] Preparing 2014.2.4 (Juno) WAS Re: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-20 Thread Kuvaja, Erno
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: Tuesday, November 17, 2015 2:57 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re:
> [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> 
> 
> On 11/16/2015 8:49 PM, Rochelle Grober wrote:
> > I would like to make a plea that while Juno is locked down so as no changes
> can be made against it, the branch remains on the git.openstack.org site.
> Please?  One area that could be better investigated with the branch in place
> is upgrade.  Kilo will continue to get patches, as will Liberty, so an 
> occasional
> grenade run (once a week?  more often?  Less often) could help operators
> understand what is in store for them when they finally can upgrade from
> Juno.  Yes, it will require occasional resources for the run, but I think 
> this is
> one of the cheapest forms of insurance in support of the installed base of
> users, before a Stable Release team is put together.
> >
> > My $.02
> >
> > --Rocky
> >
> >> -Original Message-
> >> From: Gary Kotton [mailto:gkot...@vmware.com]
> >> Sent: Friday, November 13, 2015 6:04 AM
> >> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
> >> questions)
> >> Subject: Re: [openstack-dev] [stable] Preparing 2014.2.4 (Juno) WAS Re:
> >> [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> >>
> >>
> >>
> >> On 11/13/15, 3:23 PM, "Flavio Percoco"  wrote:
> >>
> >>> On 10/11/15 16:11 +0100, Alan Pevec wrote:
>  Hi,
> 
>  while we continue discussion about the future of stable branches in
>  general and stable/juno in particular, I'd like to execute the
> >> current
>  plan which was[1]
> 
>  2014.2.4 (eol) early November, 2015. release manager: apevec
> 
>  Iff there's enough folks interested (I'm not) in keep Juno alive
> >>
> >> +1 I do not see any reason why we should still invest time and effort
> >> here. Lets focus on stable/kilo
> >>
>  longer, they could resurrect it but until concrete plan is done
>  let's be honest and stick to the agreed plan.
> 
>  This is a call to stable-maint teams for Nova, Keystone, Glance,
>  Cinder, Neutron, Horizon, Heat, Ceilometer, Trove and Sahara to
> >> review
>  open stable/juno changes[2] and approve/abandon them as
> appropriate.
>  Proposed timeline is:
>  * Thursday Nov 12 stable/juno freeze[3]
>  * Thursday Nov 19 release 2014.2.1
> 
> >>>
> >>> General ack from a stable-maint point of view! +1 on the above
> >>>
> >>> Flavio
> >>>
>  Cheers,
>  Alan
> 
>  [1]
>  https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.
>  2F
> >> juno
>  _releases_.2812_months.29
> 
>  [2]
> 
> https://review.openstack.org/#/q/status:open+AND+branch:stable/juno
>  +A
> >> ND+%
> 
> 28project:openstack/nova+OR+project:openstack/keystone+OR+project:o
>  pe
> >> nsta
> 
> ck/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+
>  OR
> >> +pro
> 
> ject:openstack/horizon+OR+project:openstack/heat+OR+project:opensta
>  ck
> >> /cei
> 
> lometer+OR+project:openstack/trove+OR+project:openstack/sahara%29,n
>  lometer+OR+,z
> 
>  [3] documented  in
> 
> https://wiki.openstack.org/wiki/StableBranch#Stable_release_manager
>  s
>  TODO add in new location
>  http://docs.openstack.org/project-team-guide/stable-branches.html
> 
> 
> __
> _
>  __
> >> 
>  _
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>> --
> >>> @flaper87
> >>> Flavio Percoco
> >>
> >>
> >>
> __
> ___
> >> __
> >> ___
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> >> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> 
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> I'm assuming you mean grenade runs on stable/kilo. A grenade job on
> stable/kilo is installing stable/juno and then upgrading to stable/kilo (the
> change being tested is on stable/kilo). The grenade jobs for stable/juno were
> stopped when icehouse-eol happened.
> 
> Arguably we could still be testing grenade on stable/kilo by just installing
> Juno 2014.2.4 (last Juno 

Re: [openstack-dev] [tripleo] When to use parameters vs parameter_defaults

2015-11-20 Thread Jiri Tomasek

On 11/16/2015 04:25 PM, Steven Hardy wrote:

Hi all,

I wanted to start some discussion re $subject, because it's been apparent
that we have a lack of clarity on this issue (and have done ever since we
started using parameter_defaults).

Some context:

- Historically TripleO has provided a fairly comprehensive "top level"
   parameters interface, where many per-role and common options are
   specified, then passed in to the respective ResourceGroups on deployment

https://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-without-mergepy.yaml#n14

The nice thing about this approach is it gives a consistent API to the
operator, e.g. the parameters schema for the main overcloud template defines
most of the expected inputs to the deployment.

The main disadvantage is a degree of template bloat, where we wire dozens
of parameters into each ResourceGroup, and from there into whatever nested
templates consume them.

- When we started adding interfaces (such as all the OS::TripleO::*ExtraConfig*
   interfaces), there was a need to enable passing arbitrary additional
   values to nested templates, with no way of knowing what they are (e.g. to
   enable wiring in third-party pieces we have no knowledge of, or which
   require implementation-specific arguments which don't make sense for all
   deployments).

To do this, we made use of the heat parameter_defaults interface, which
(unlike normal parameters) have global scope (visible to all nested stacks,
without explicitly wiring in the values from the parent):

http://docs.openstack.org/developer/heat/template_guide/environment.html#define-defaults-to-parameters

The nice thing about this approach is its flexibility: any arbitrary
values can be provided without affecting the parent templates, and it can
allow for a terser implementation because you only specify the parameter
definition where it's actually used.

The main disadvantage of this approach is that it becomes very much harder to
discover an API surface for the operator, e.g. the parameters that must be
provided on deployment by any CLI/UI tools etc.  This has been partially
addressed by the new-for-liberty nested validation heat feature, but
there's still a bunch of unsolved complexity around how to actually consume
that data and build a coherent consolidated API for user interaction:

https://github.com/openstack/heat-specs/blob/master/specs/liberty/nested-validation.rst

My question is, where do we draw the line on when to use each interface?

My position has always been that we should only use parameter_defaults for
the ExtraConfig interfaces, where we cannot know what reasonable parameters
are.  And for all other "core" functionality, we should accept the increased
template verbosity and wire arguments in from overcloud-without-mergepy.

However we've got some patches which fall into a grey area, e.g this SSL
enablement patch:

https://review.openstack.org/#/c/231930/46/overcloud-without-mergepy.yaml

Here we're actually removing some existing (non functional) top-level
parameters, and moving them to parameter_defaults.

I can see the logic behind it; it does make the templates a bit cleaner,
but at the expense of discoverability of those (probably not
implementation-dependent) parameters.

How do people feel about this example, and others like it, where we're
enabling common, but not mandatory functionality?

In particular I'm keen to hear from Mainn and others interested in building
UIs on top of TripleO as to which is best from that perspective, and how
such arguments may be handled relative to the capabilities mapping proposed
here:

https://review.openstack.org/#/c/242439/

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think I'll try to do a bit of a recap to make sure I understand 
things. It may shift slightly off the topic of this thread but I think 
it is worth it and it will describe what the GUI is able/expecting to 
work with.


Template defines parameters and passes them to child templates via 
resource properties.

Root template parameter values are set by (in order of precedence):
1. 'parameters' param in 'stack create' API call or 'parameters' section 
in environment
2. 'parameter_defaults' section in environment
3. 'default' in parameter definition in template

Non-root template parameter values are set by (in order of precedence):
1. parent resource properties
2. 'parameter_defaults' in environment
3. 'default' in parameter definition in template

The name collisions in parameter_defaults should not be a problem since 
the template author should make sure the parameter names they define 
don't collide with those in other templates.


The GUI's main goal (same as CLI and tripleo-common) is not to hardcode 
anything and use THT (or 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-20 Thread Bogdan Dobrelya
> Hi,
> 
> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
> or missing something.
> 
> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
> for OpenStack deployment. We execute those tasks with "puppet apply". Each
> task is supposed to bring the target system into some desired state, so puppet
> compiles a catalog and applies it. So basically, puppet catalog = desired
> system state.
> 
> So we can compile* catalogs for all top-scope manifests in the master branch
> and store those compiled* catalogs in the fuel-library repo. Then for each
> proposed patch CI will compare the new catalogs with the stored ones and print out
> the difference, if any. This will pretty much show what is going to be
> changed in the system configuration by the proposed patch.
> 
> We have discussed such checks several times before, iirc, but we did not
> have the right tools to implement such a thing. Well, now we do :) I think
> it could be quite useful even in non-voting mode.
> 
> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
> mean sorted lists of all classes/resources with all parameters that we find
> during puppet-rspec tests in our noop test framework, something like
> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
> 
> Regards,
> Alex
> 
> [0] http://paste.openstack.org/show/477839/
> [1] 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp

Thank you, Alex.
Yes, the composition layer is the top-scope manifests, known as the Fuel
library modular tasks [0].

The "deployment data checks" are nothing more than comparing the
committed vs. changed states of fixtures [1] of puppet catalogs for known
deployment paths under test, with rspecs written for each modular task [2].

And the *current status* is:
- the script for the data layer checks is now implemented [3]
- a how-to is being documented here [4]
- a fix to make catalog compilation idempotent has been submitted [5]
- and there is my WIP branch [6] with the initial committed state of
deploy data pre-generated. So you can check it out, make any test changes
to manifests, and run the data check (see the README [4]). It works for
me; there are no issues with idempotent re-checks of a clean committed
state or with tests failing unexpectedly.

So the plan is to implement this noop test extension as a non-voting CI
gate after I add an example workflow update for developers to the
Fuel wiki. Thoughts?

[0]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
[1]
https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
[2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
[3] https://review.openstack.org/240015
[4]
https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
[5] https://review.openstack.org/247989
[6] https://github.com/bogdando/fuel-library-1/commits/data_checks
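To make the comparison step concrete, the data check boils down to diffing the
committed catalog fixtures against freshly generated ones. A rough sketch (the
directory names here are hypothetical; the real layout is described in the
README [4]):

    #!/usr/bin/env python
    # Rough sketch of the data check idea: diff committed catalog dumps against
    # freshly generated ones and report any drift. Directory names are
    # hypothetical; the real layout is in the noop tests README [4].
    import difflib
    import os
    import sys

    COMMITTED = 'tests/noop/catalogs/committed'
    GENERATED = 'tests/noop/catalogs/generated'

    changed = False
    for name in sorted(os.listdir(COMMITTED)):
        old_path = os.path.join(COMMITTED, name)
        new_path = os.path.join(GENERATED, name)
        with open(old_path) as f:
            old = f.readlines()
        with open(new_path) as f:
            new = f.readlines()
        diff = list(difflib.unified_diff(old, new,
                                         fromfile=old_path, tofile=new_path))
        if diff:
            changed = True
            sys.stdout.writelines(diff)

    # A non-zero exit code lets a (non-voting) CI job flag the change for review.
    sys.exit(1 if changed else 0)

A real job would also handle added or removed fixture files, but the principle
is simply: any diff means the deployment data changed and a human should look.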


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Vladimir Sharshov
+1 to remove docker in new CentOS 7.

On Fri, Nov 20, 2015 at 7:31 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Bogdan,
>
> >> So, we could only deprecate the docker feature for the 8.0.
>
> What do you mean exactly when saying 'deprecate docker feature'? I can not
> even imagine how we can live with and without docker containers at the same
> time. Deprecation is usually related to features which directly impact UX
> (maybe I am wrong).
>
> Guys,
>
> When you estimate risks of the docker removal, please take into account
> not only release deadlines but also the overall product quality. The thing
> is that continuing using containers makes it much more complicated (and
> thus less stable) to implement new upgrade flow (upgrade tarball can not be
> used any more, we need to re-install the host system). Switching from
> Centos 6 to Centos 7 is also much more complicated with docker. Every
> single piece of Fuel system is going to become simpler and easier to
> support.
>
> Of course, I am not suggesting to jump overboard into cold water without a
> life jacket. Transition plan, checklist, green tests, even spec etc. are
> assumed without saying (after all, I was not born yesterday). Of course, we
> won't merge changes until everything is green. What is the problem to try
> to do this and postpone if not ready in time? And please do not confuse
> these two cases: switching from plain deployment to containers is
> complicated, but switching from docker to plain is much simpler.
>
>
>
>
> Vladimir Kozhukalov
>
> On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya 
> wrote:
>
>> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
>> > Hi team,
>> >
>> > I think it too late to make such significant changes for MOS 8.0 now,
>> > but I'm ok with the idea to remove docker containers in the future
>> > releases if our dev team want to do this.
>> > Any way, before we will do this, we need to plan how we will perform
>> > updates between different releases with and without docker containers,
>> > how we will manage requirements and etc. In fact we have a lot of
>> > questions and haven't answers, let's prepare the spec for this change,
>> > review it, discuss it with developers, users and project management team
>> > and if we haven't requirements to keep docker containers on master node
>> > let's remove them for the future releases (not in MOS 8.0).
>> >
>> > Of course, we can fix BVT / SWARM tests and don't use docker images in
>> > our test suite (it shouldn't be really hard) but we didn't plan these
>> > changes and in fact these changes can affect our estimates for many
>> tasks.
>>
>> I can only add that features just cannot be removed without a
>> deprecation period of 1-2 releases.
>> So, we could only deprecate the docker feature for the 8.0.
>>
>> >
>> > Thank you!
>> >
>> >
>> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
>> > > wrote:
>> >
>> > Hello, Igor.
>> >
>> > >But I'd like to hear from QA how do we rely on container-based
>> > infrastructure? Would it be hard to change our sys-tests in short
>> > time?
>> >
>> > At first glance, system tests are using docker only to fetch logs
>> > and run shell commands.
>> > Also, docker is used to run Rally.
>> >
>> > If there is an action to remove docker containers with carefull
>> > attention to bvt testing, it would take couple days to fix system
>> tests.
>> > But time may be highly affected by code freezes and active features
>> > merging.
>> >
>> > QA team is going to have Monday (Nov 23) sync-up - and it is
>> > possible to get more exact information from all QA-team.
>> >
>> > P.S.
>> > +1 to remove docker.
>> > -1 to remove docker without taking into account deadlines/other
>> > features.
>> >
>> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
>> > > wrote:
>> >
>> > Hey guys,
>> >
>> > Despite the fact I like containers (as deployment unit), we
>> > don't use
>> > them so. That means I +1 idea to drop containers, just because I
>> > believe that would
>> >
>> > * simplify a lot of things
>> > * helps get rid of huge amount of hacks
>> > * increase master node deployment
>> > * release us from annoying support of upgrades / rollbacks that
>> > proved
>> > to be non-working well
>> >
>> > But I'd like to hear from QA how do we rely on container-based
>> > infrastructure? Would it be hard to change our sys-tests in
>> short
>> > time?
>> >
>> > Thanks,
>> > Igor
>> >
>> >
>> > On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
>> > > wrote:
>> > > Folks
>> > >
>> > > I guess it should be pretty simple to 

Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry 
about adding soft delete for instance_actions; they are just archived 
when you archive the instances. It probably makes the logic in the 
archive code messier for this separate path, but it's looking like we're 
going to have to account for the bw_usage_cache table too (which has a 
uuid column for an instance but no foreign key back to the instances 
table and is not soft deleted).
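For anyone wondering what that separate path could roughly look like at the DB
layer, here is a simplified, hedged sketch (this is not the actual
nova.db.sqlalchemy code; instance_actions_events handling and error handling
are glossed over):

    # Simplified sketch only; not the real nova.db.sqlalchemy.api code. Shows
    # the shape of "when instances are archived, move their instance_actions
    # (and similar orphan rows such as bw_usage_cache) to shadow tables too".
    from sqlalchemy import MetaData, Table, select

    def archive_actions_for_instances(engine, archived_instance_uuids):
        """Move instance_actions rows whose instances were just archived."""
        meta = MetaData(bind=engine)
        actions = Table('instance_actions', meta, autoload=True)
        shadow = Table('shadow_instance_actions', meta, autoload=True)

        with engine.begin() as conn:
            rows = conn.execute(
                select([actions]).where(
                    actions.c.instance_uuid.in_(archived_instance_uuids))
            ).fetchall()
            if rows:
                conn.execute(shadow.insert(), [dict(r) for r in rows])
                conn.execute(actions.delete().where(
                    actions.c.instance_uuid.in_(archived_instance_uuids)))
        # instance_actions_events rows (keyed on action_id) would need the same
        # treatment before their parent actions are removed.
        return len(rows)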




3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a 
simple API change with a microversion.




-Sean



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Alexis Lee
We just had a fun discussion in IRC about whether foreign keys are evil.
Initially I thought this was crazy, but mordred made some good points. To
paraphrase: if you already have a scale-out app, it's easier to manage
integrity in your app than to scale out your persistence layer.

Currently the Nova DB has quite a lot of FKs but not on every relation.
One example of a missing FK is between Instance.uuid and
BandwidthUsageCache.uuid.
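For reference, this is the kind of declaration that is missing today. An
illustrative SQLAlchemy sketch, not the actual nova.db.sqlalchemy.models
definitions (columns are trimmed down):

    # Illustrative only; not the current nova.db.sqlalchemy.models code. Shows
    # what an explicit FK from bw_usage_cache.uuid to instances.uuid would look
    # like; today that relation exists only by convention in the application.
    from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), unique=True, nullable=False)

    class BandwidthUsageCache(Base):
        __tablename__ = 'bw_usage_cache'
        id = Column(Integer, primary_key=True)
        # With the ForeignKey the database refuses orphan rows; without it the
        # application has to police the relation itself.
        uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False)
        last_refreshed = Column(DateTime)

Whether that constraint belongs in the database or in the application is
exactly the question below.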

Should we drive one way or the other, or just put up with mixed-mode?

What should be the policy for new relations?

Do the answers to these questions depend on having a sane and
comprehensive archive/purge system in place?


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]New Quota Subteam on Nova

2015-11-20 Thread Raildo Mascena
Hi guys

A few of us are working on the nested quota driver (
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-quota-driver-api,n,z)
in Nova.

In addition, we want to discuss the re-design of the quota implementation in
Nova and in other projects, like Cinder and Neutron, and we already have a
base spec for this here:
https://review.openstack.org/#/c/182445/4/specs/backlog/approved/quotas-reimagined.rst

So I was thinking of creating a subteam in Nova to speed up code review
of the nested quota implementation and to discuss this re-design of quotas.
Is anyone interested in being part of this subteam, or does anyone have suggestions?

Cheers,

Raildo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] What things do we want to get into a python-novaclient 3.0 release?

2015-11-20 Thread Matt Riedemann



On 11/20/2015 3:48 AM, Matthew Booth wrote:

I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only
the external api:

https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

It obviously overlaps considerably with host-servers-migrate, which is
supposed to do the same thing. Users seem to have been appreciative, so
I'd be interested to see it merged in some form.

Matt

On Thu, Nov 19, 2015 at 6:18 PM, Matt Riedemann
> wrote:

We've been talking about doing a 3.0 release for novaclient for
awhile so we can make some backward incompatible changes, like:

1. Removing the novaclient.v1_1 module
2. Dropping py26 support (if there is any explicit py26 support in
there)

What else are people aware of?

Monty was talking about doing a thing with auth:

https://review.openstack.org/#/c/245200/

But it sounds like that is not really needed now?

I'd say let's target mitaka-2 for a 3.0 release and get these
flushed out.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's not a backwards-incompatible change, so it's not really tied to a 
major version release. We'd need a blueprint at least for it though, 
since it's adding a new CLI that does some orchestration. I commented in 
the repo; there is a similar version in another repo, so yeah, people 
are doing this, and it'd be good if we could decide whether it's something 
that should live in tree and be maintained officially.


A functional test would be sweet, but given that it deals with migration and 
the novaclient functional tests assume a single-node devstack, 
that's probably not going to fly. We could ask sdague about that though.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Vladimir Kozhukalov
Bogdan,

>> So, we could only deprecate the docker feature for the 8.0.

What do you mean exactly when saying 'deprecate the docker feature'? I cannot
even imagine how we could live with and without docker containers at the same
time. Deprecation is usually related to features which directly impact UX
(maybe I am wrong).

Guys,

When you estimate the risks of the docker removal, please take into account not
only release deadlines but also the overall product quality. The thing is
that continuing to use containers makes it much more complicated (and thus
less stable) to implement the new upgrade flow (the upgrade tarball cannot be used
any more; we need to re-install the host system). Switching from CentOS 6
to CentOS 7 is also much more complicated with docker. Every single piece
of the Fuel system is going to become simpler and easier to support.

Of course, I am not suggesting jumping overboard into cold water without a
life jacket. A transition plan, checklist, green tests, even a spec, etc. go
without saying (after all, I was not born yesterday). Of course, we
won't merge changes until everything is green. What is the problem with trying
to do this and postponing if it is not ready in time? And please do not confuse
these two cases: switching from plain deployment to containers is
complicated, but switching from docker to plain is much simpler.




Vladimir Kozhukalov

On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya 
wrote:

> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > Hi team,
> >
> > I think it too late to make such significant changes for MOS 8.0 now,
> > but I'm ok with the idea to remove docker containers in the future
> > releases if our dev team want to do this.
> > Any way, before we will do this, we need to plan how we will perform
> > updates between different releases with and without docker containers,
> > how we will manage requirements and etc. In fact we have a lot of
> > questions and haven't answers, let's prepare the spec for this change,
> > review it, discuss it with developers, users and project management team
> > and if we haven't requirements to keep docker containers on master node
> > let's remove them for the future releases (not in MOS 8.0).
> >
> > Of course, we can fix BVT / SWARM tests and don't use docker images in
> > our test suite (it shouldn't be really hard) but we didn't plan these
> > changes and in fact these changes can affect our estimates for many
> tasks.
>
> I can only add that features just cannot be removed without a
> deprecation period of 1-2 releases.
> So, we could only deprecate the docker feature for the 8.0.
>
> >
> > Thank you!
> >
> >
> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > > wrote:
> >
> > Hello, Igor.
> >
> > >But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > At first glance, system tests are using docker only to fetch logs
> > and run shell commands.
> > Also, docker is used to run Rally.
> >
> > If there is an action to remove docker containers with carefull
> > attention to bvt testing, it would take couple days to fix system
> tests.
> > But time may be highly affected by code freezes and active features
> > merging.
> >
> > QA team is going to have Monday (Nov 23) sync-up - and it is
> > possible to get more exact information from all QA-team.
> >
> > P.S.
> > +1 to remove docker.
> > -1 to remove docker without taking into account deadlines/other
> > features.
> >
> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> > > wrote:
> >
> > Hey guys,
> >
> > Despite the fact I like containers (as deployment unit), we
> > don't use
> > them so. That means I +1 idea to drop containers, just because I
> > believe that would
> >
> > * simplify a lot of things
> > * helps get rid of huge amount of hacks
> > * increase master node deployment
> > * release us from annoying support of upgrades / rollbacks that
> > proved
> > to be non-working well
> >
> > But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > Thanks,
> > Igor
> >
> >
> > On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
> > > wrote:
> > > Folks
> > >
> > > I guess it should be pretty simple to roll back - install
> > older version and
> > > restore the backup with preservation of /var/log directory.
> > >
> > > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> > > >

Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread Alexis Lee
gord chung said on Thu, Nov 19, 2015 at 11:59:33PM -0500:
> just to clarify, the idea doesn't involve tailoring the notification
> payload to ceilometer, just that if a producer is producing a
> notification it knows contains a useful datapoint, the producer
> should tell someone explicitly 'this datapoint exists'.

I know very little about Nova notifications or Ceilometer, so I'm stepping
wildly into the unknown here, but... why would a producer spit out
non-useful datapoints? If no one cares or will ever care, they simply
shouldn't be included.

The problem is knowing what each consumer thinks is interesting, and that
isn't something that can be handled by the producer. If Ceilometer is
just a pipeline that has no opinion on what's relevant and what isn't,
that's a special case easily implemented by an identity function.


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-20 Thread Miguel Lavalle
Gareth,

For the time being, we don't have a Neutron mid-cycle scheduled. Later in
the Mitaka cycle, it will be assessed whether we need to schedule one or
not. But for the time being, the decision is that we are not having one.

Cheers

On Thu, Nov 19, 2015 at 10:23 PM, Gareth  wrote:

> Guys,
>
> Is there a conclusion now? What's the schedule of Neutron Mid-cycle?
>
> On Thu, Nov 5, 2015 at 9:31 PM, Gary Kotton  wrote:
> > Hi,
> > In Nova the new black is the os-vif-lib
> > (https://etherpad.openstack.org/p/mitaka-nova-os-vif-lib). It may be
> > worthwhile seeing if we can maybe do something at the same time with the
> > nova crew and then bash out the dirty details here. It would be far
> easier
> > if everyone was in the same room.
> > Just and idea.
> > Thanks
> > Gary
> >
> > From: "John Davidge (jodavidg)" 
> > Reply-To: OpenStack List 
> > Date: Thursday, November 5, 2015 at 2:08 PM
> > To: OpenStack List 
> > Subject: Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka
> >
> > ++
> >
> > Sounds very sensible to me!
> >
> > John
> >
> > From: "Armando M." 
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Date: Wednesday, 4 November 2015 21:23
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka
> >
> > Hi folks,
> >
> > After some consideration, I am proposing a change for the Mitaka release
> > cycle in relation to the mid-cycle meetup event.
> >
> > My proposal is to defer the gathering to later in the release cycle [1],
> and
> > assess whether we have it or not based on the course of events in the
> cycle.
> > If we feel that a last push closer to the end will help us hit some
> critical
> > targets, then I am all in for arranging it.
> >
> > Based on our latest experiences, I have not seen a strong correlation
> > between progress made during the cycle and progress made during the
> meetup,
> > so we might as well save us the trouble of travelling close to Christmas.
> >
> > I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the
> > logistics. We may still need their services later in the new year, but
> as of
> > now all I can say is:
> >
> > Happy (distributed) hacking!
> >
> > Cheers,
> > Armando
> >
> > [1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Gareth
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ¥1 to an open organization you specify.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions, they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).



3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support 
ticket.
4. The admin checks for deleted instances on that project, finds the one 
in question.
5. Calls off to os-instance-actions with that instance uuid to see the 
delete action and the user that did it (user A); see the sketch below.

6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382
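Steps 4 and 5 could look roughly like this with python-novaclient. This is a
hedged sketch: admin credentials are assumed, 'PROJECT_ID' is a placeholder,
and (as noted above) the instance-actions lookup for deleted instances only
works once the proposed API change lands:

    # Hedged sketch of steps 4-5 (admin credentials assumed; 'PROJECT_ID' is a
    # placeholder). Per the discussion above, os-instance-actions does not yet
    # return actions for deleted instances, so step 5 needs the proposed
    # microversion first.
    import os

    from novaclient import client

    nova = client.Client('2',
                         os.environ['OS_USERNAME'],
                         os.environ['OS_PASSWORD'],
                         os.environ['OS_TENANT_NAME'],
                         os.environ['OS_AUTH_URL'])

    # 4. Find deleted instances in the project in question (admin only).
    deleted = nova.servers.list(search_opts={'deleted': True,
                                             'all_tenants': 1,
                                             'tenant_id': 'PROJECT_ID'})

    # 5. Pull the actions for each one to see who issued the delete.
    for server in deleted:
        for action in nova.instance_action.list(server):
            print(server.id, action.action, action.user_id, action.start_time)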

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-20 Thread Markus Zoeller
Below are the bug stats for the week "Mitaka R-20".
Increases/decreases compared to "Mitaka R-21" are in parentheses.
The bug count for python-novaclient has now been added.

Stats
=

New bugs which are *not* assigned to any subteam

count: 30 (+2)
query: http://bit.ly/1WF68Iu

New bugs which are *not* triaged

subteam: libvirt 
count: 16 (+2)
query: http://bit.ly/1Hx3RrL
subteam: volumes 
count: 10 (-2)
query: http://bit.ly/1NU2DM0
subteam: compute
count: 5 (0)
query: http://bit.ly/1O72RQc
subteam: vmware
count: 5 (?)
query: http://bit.ly/1YkCU4s
subteam: network : 
count: 4 (0)
query: http://bit.ly/1LVAQdq
subteam: db : 
count: 4 (0)
query: http://bit.ly/1LVATWG
subteam: 
count: 89 (+6)
query: http://bit.ly/1RBVZLn

High prio bugs which are *not* in progress
--
count: 38 (-1)
query: http://bit.ly/1MCKoHA

Critical bugs which are *not* in progress
-
count: 0 (0)
query: http://bit.ly/1kfntfk

Untriaged python-novaclient bugs

count: 7 (?)
query: http://bit.ly/1kKUDDU


Readings

* https://wiki.openstack.org/wiki/BugTriage
* https://wiki.openstack.org/wiki/Nova/BugTriage
* 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html

Markus Zoeller/Germany/IBM@IBMDE wrote on 11/13/2015 04:07:24 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> Date: 11/13/2015 04:09 PM
> Subject: Re: [openstack-dev] [nova][bugs] Weekly Status Report
> 
> Below are the stats of the week "Mitaka R-21".
> Changes relative to the previous week are shown in parentheses behind the current
> numbers. For example, "28 (+9)" means we have 28 bugs in that category
> with an increase of 9 bugs compared to the previous week.
> 
> 
> Stats
> =
> 
> New bugs which are *not* assigned to any subteam
> 
> count: 28 (+9)
> query: http://bit.ly/1WF68Iu
> 
> New bugs which are *not* triaged
> 
> subteam: libvirt 
> count: 14 (0)
> query: http://bit.ly/1Hx3RrL
> subteam: volumes 
> count: 12 (+1)
> query: http://bit.ly/1NU2DM0
> subteam: compute
> count: 5 (?)
> query: http://bit.ly/1O72RQc
> subteam: network : 
> count: 4 (0)
> query: http://bit.ly/1LVAQdq
> subteam: db : 
> count: 4 (0)
> query: http://bit.ly/1LVATWG
> subteam: 
> count: 83 (+16)
> query: http://bit.ly/1RBVZLn
> 
> High prio bugs which are *not* in progress
> --
> count: 39 (0)
> query: http://bit.ly/1MCKoHA
> 
> Critical bugs which are *not* in progress
> -
> count: 0 (0)
> query: http://bit.ly/1kfntfk
> 
> Readings
> 
> * https://wiki.openstack.org/wiki/BugTriage
> * https://wiki.openstack.org/wiki/Nova/BugTriage
> * 
> 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html

> 
> 
> Markus Zoeller/Germany/IBM@IBMDE wrote on 11/06/2015 05:54:59 PM:
> 
> > From: Markus Zoeller/Germany/IBM@IBMDE
> > To: "OpenStack Development Mailing List" 
> 
> > Date: 11/06/2015 05:56 PM
> > Subject: [openstack-dev] [nova][bugs] Weekly Status Report
> > 
> > Hey folks,
> > 
> > below is the first report of bug stats I intend to post weekly.
> > We discussed it shortly during the Mitaka summit that this report
> > could be useful to keep the attention of the open bugs at a certain
> > level. Let me know if you think it's missing something.
> > 
> > Stats
> > =
> > 
> > New bugs which are *not* assigned to any subteam
> > 
> > count: 19
> > query: http://bit.ly/1WF68Iu
> > 
> > 
> > New bugs which are *not* triaged
> > 
> > subteam: libvirt 
> > count: 14 
> > query: http://bit.ly/1Hx3RrL
> > subteam: volumes 
> > count: 11
> > query: http://bit.ly/1NU2DM0
> > subteam: network : 
> > count: 4
> > query: http://bit.ly/1LVAQdq
> > subteam: db : 
> > count: 4
> > query: http://bit.ly/1LVATWG
> > subteam: 
> > count: 67
> > query: http://bit.ly/1RBVZLn
> > 
> > 
> > High prio bugs which are *not* in progress
> > --
> > count: 39
> > query: http://bit.ly/1MCKoHA
> > 
> > 
> > Critical bugs which are *not* in progress
> > -
> > count: 0
> > query: http://bit.ly/1kfntfk
> > 
> 

Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Julien Danjou
On Fri, Nov 20 2015, Alexis Lee wrote:

> We just had a fun discussion in IRC about whether foreign keys are evil.
> Initially I thought this was crazy but mordred made some good points. To
> paraphrase, that if you have a scale-out app already it's easier to
> manage integrity in your app than scale-out your persistence layer.

That's interesting. Could you explain how it is easier to achieve the
level of data integrity provided by an RDBMS implementing ACID in your
own application?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Bogdan Dobrelya
On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> Hi team,
> 
> I think it too late to make such significant changes for MOS 8.0 now,
> but I'm ok with the idea to remove docker containers in the future
> releases if our dev team want to do this.
> Any way, before we will do this, we need to plan how we will perform
> updates between different releases with and without docker containers,
> how we will manage requirements and etc. In fact we have a lot of
> questions and haven't answers, let's prepare the spec for this change,
> review it, discuss it with developers, users and project management team
> and if we haven't requirements to keep docker containers on master node
> let's remove them for the future releases (not in MOS 8.0).
> 
> Of course, we can fix BVT / SWARM tests and don't use docker images in
> our test suite (it shouldn't be really hard) but we didn't plan these
> changes and in fact these changes can affect our estimates for many tasks.

I can only add that features just cannot be removed without a
deprecation period of 1-2 releases.
So, we could only deprecate the docker feature for the 8.0.

> 
> Thank you!
> 
> 
> On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > wrote:
> 
> Hello, Igor.
> 
> >But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
> 
> At first glance, system tests are using docker only to fetch logs
> and run shell commands.
> Also, docker is used to run Rally.
> 
> If there is an action to remove docker containers with carefull
> attention to bvt testing, it would take couple days to fix system tests.
> But time may be highly affected by code freezes and active features
> merging.
> 
> QA team is going to have Monday (Nov 23) sync-up - and it is
> possible to get more exact information from all QA-team.
> 
> P.S.
> +1 to remove docker.
> -1 to remove docker without taking into account deadlines/other
> features.
> 
> On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> > wrote:
> 
> Hey guys,
> 
> Despite the fact I like containers (as deployment unit), we
> don't use
> them so. That means I +1 idea to drop containers, just because I
> believe that would
> 
> * simplify a lot of things
> * helps get rid of huge amount of hacks
> * increase master node deployment
> * release us from annoying support of upgrades / rollbacks that
> proved
> to be non-working well
> 
> But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?
> 
> Thanks,
> Igor
> 
> 
> On Thu, Nov 19, 2015 at 10:31 AM, Vladimir Kuklin
> > wrote:
> > Folks
> >
> > I guess it should be pretty simple to roll back - install
> older version and
> > restore the backup with preservation of /var/log directory.
> >
> > On Thu, Nov 19, 2015 at 7:38 PM, Sergii Golovatiuk
> > >
> wrote:
> >>
> >> Hi,
> >>
> >> On Thu, Nov 19, 2015 at 5:50 PM, Matthew Mosesohn
> >
> >> wrote:
> >>>
> >>> Vladimir,
> >>>
> >>> The old site.pp is long out of date and should just be
> recreated from the
> >>> content of all the other $service-only.pp files.
> >>>
> >>> My main question is how do we propose to do a rollback from
> an update (in
> >>> theory, from 8.0 to 9.0, then back to 8.0)? Should we
> hardcode persistent
> >>> data directories (or symlink them?) to
> >>> /var/lib/fuel/$fuel_version/$service_name, as we are doing
> behind the scenes
> >>> currently with Docker? If we keep that mechanism in place,
> all the existing
> >>> puppet modules can be used without any modifications. On the
> same note,
> >>> upgrade/rollback is the same as backup and restore, that
> means our restore
> >>> should follow a similar approach.
> >>> -Matthew
> >>
> >>
> >> There only one idea I have is to do dual partitioning system.
> The similar
> >> approach is implemented in CoreOS.
> >>
> >>>
> >>>
> >>> On Thu, Nov 19, 2015 at 6:36 PM, Bogdan Dobrelya
> >
> >>> wrote:
> 
>  On 19.11.2015 15:59, Vladimir Kozhukalov wrote:
> 

Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Andrew Laski

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to has been one of the favorite features because it allows and easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting and instance does nothing with instance actions.

2. archiving instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry 
about adding soft delete for instance_actions, they are just archived 
when you archive the instances. It probably makes the logic in the 
archive code messier for this separate path, but it's looking like 
we're going to have to account for the bw_usage_cache table too 
(which has a uuid column for an instance but no foreign key back to 
the instances table and is not soft deleted).
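A rough sketch of what that extra path could look like (purely 
illustrative, not the actual nova-manage code; the connection URL is a 
placeholder and the shadow_* naming just follows the existing convention):

    from sqlalchemy import MetaData, create_engine, select

    engine = create_engine("mysql+pymysql://nova:secret@localhost/nova")  # placeholder
    meta = MetaData()
    meta.reflect(bind=engine)

    instances = meta.tables["instances"]
    actions = meta.tables["instance_actions"]
    shadow_actions = meta.tables["shadow_instance_actions"]

    with engine.begin() as conn:
        # uuids of instances that have been soft deleted
        uuids = [row.uuid for row in conn.execute(
            select([instances.c.uuid]).where(instances.c.deleted != 0))]
        if uuids:
            rows = conn.execute(select([actions]).where(
                actions.c.instance_uuid.in_(uuids))).fetchall()
            if rows:
                # copy the rows to the shadow table, then hard delete them
                conn.execute(shadow_actions.insert(),
                             [dict(row) for row in rows])
                conn.execute(actions.delete().where(
                    actions.c.instance_uuid.in_(uuids)))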




3. update instance_actions API so that you can get instance_actions for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's 
a simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance 
if you know the uuid of the deleted instance will provide some 
usefulness.  It does lack the discoverability of knowing that you had 
*some* instance that was deleted and you don't have the uuid but want to 
get at the deleted actions.  I would like to avoid bolting that onto 
instance actions and keep that as a use case for an eventual Task API.






-Sean



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][neutron] branches for release-independent projects targeting Openstack release X

2015-11-20 Thread Thierry Carrez
Jesse Pretorius wrote:
> [...] 
> The deployment projects, and probably packaging projects too, are faced
> with the same issue. There's no guarantee that their x release will be
> done on the same day as the OpenStack services release their x branches
> as the deployment projects still need some time to verify stability and
> functionality once the services are finalised.

The question then becomes: are you making an "x release", or are you
making a release "supporting/compatible with the x release". What you
are saying is that you need some time because you are downstream,
reacting to the x release. That is a fair request: you're actually
making a release that supports the x release, you're not in the x
release. The line in the sand is based on the date: if you release
within the development cycle constraints then you're part of the
release, if you release after you're downstream of it, reacting to it.

What you need to be able to do in all cases is creating a stable branch
to maintain that release over the long run. But what you may not be able
to do is to be considered "part of the x release" if you release months
after the x release is done and shipped.

I'll elaborate on that with a more complete proposal on Monday.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Bogdan Dobrelya
On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
> Bogdan,
> 
>>> So, we could only deprecate the docker feature for the 8.0.
> 
> What do you mean exactly when saying 'deprecate docker feature'? I can
> not even imagine how we can live with and without docker containers at
> the same time. Deprecation is usually related to features which directly
> impact UX (maybe I am wrong). 

I may have understood this [0] wrong, and the docker containers are not
user-visible, but that depends on which type of users we mean :-)
Sorry for not being clear.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html

> 
> Guys, 
> 
> When you estimate risks of the docker removal, please take into account
> not only release deadlines but also the overall product quality. The
> thing is that continuing using containers makes it much more complicated
> (and thus less stable) to implement new upgrade flow (upgrade tarball
> can not be used any more, we need to re-install the host system).
> Switching from Centos 6 to Centos 7 is also much more complicated with
> docker. Every single piece of Fuel system is going to become simpler and
> easier to support.
> 
> Of course, I am not suggesting to jump overboard into cold water without
> a life jacket. Transition plan, checklist, green tests, even spec etc.
> are assumed without saying (after all, I was not born yesterday). Of
> course, we won't merge changes until everything is green. What is the
> problem to try to do this and postpone if not ready in time? And please
> do not confuse these two cases: switching from plain deployment to
> containers is complicated, but switching from docker to plain is much
> simpler. 
> 
> 
> 
> 
> Vladimir Kozhukalov
> 
> On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya  > wrote:
> 
> On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > Hi team,
> >
> > I think it too late to make such significant changes for MOS 8.0 now,
> > but I'm ok with the idea to remove docker containers in the future
> > releases if our dev team want to do this.
> > Any way, before we will do this, we need to plan how we will perform
> > updates between different releases with and without docker containers,
> > how we will manage requirements and etc. In fact we have a lot of
> > questions and haven't answers, let's prepare the spec for this change,
> > review it, discuss it with developers, users and project management team
> > and if we haven't requirements to keep docker containers on master node
> > let's remove them for the future releases (not in MOS 8.0).
> >
> > Of course, we can fix BVT / SWARM tests and don't use docker images in
> > our test suite (it shouldn't be really hard) but we didn't plan these
> > changes and in fact these changes can affect our estimates for many 
> tasks.
> 
> I can only add that features just cannot be removed without a
> deprecation period of 1-2 releases.
> So, we could only deprecate the docker feature for the 8.0.
> 
> >
> > Thank you!
> >
> >
> > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > 
> >>
> wrote:
> >
> > Hello, Igor.
> >
> > >But I'd like to hear from QA how do we rely on container-based
> > infrastructure? Would it be hard to change our sys-tests in short
> > time?
> >
> > At first glance, system tests are using docker only to fetch logs
> > and run shell commands.
> > Also, docker is used to run Rally.
> >
> > If there is an action to remove docker containers with carefull
> > attention to bvt testing, it would take couple days to fix system 
> tests.
> > But time may be highly affected by code freezes and active features
> > merging.
> >
> > QA team is going to have Monday (Nov 23) sync-up - and it is
> > possible to get more exact information from all QA-team.
> >
> > P.S.
> > +1 to remove docker.
> > -1 to remove docker without taking into account deadlines/other
> > features.
> >
> > On Thu, Nov 19, 2015 at 10:27 PM, Igor Kalnitsky
> > 
> >>
> wrote:
> >
> > Hey guys,
> >
> > Despite the fact I like containers (as deployment unit), we
> > don't use
> > them so. That means I +1 idea to drop containers, just because I
> > believe that would
> >
> > * simplify a lot of things
> > * helps get rid of huge amount of hacks
> > * increase master node deployment
> > * release 

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Dmitry Nikishov
Stanislaw,

In my opinion the whole feature shouldn't be in a separate package, simply
because it will actually affect the code of many, if not all, components of
Fuel.

The only services whose capabilities will have to be managed by puppet are
those, which are installed from upstream packages (e.g. atop) -- not built
from fuel-* repos.

Supervisord doesn't seem to use Linux capabilities, it does setuid instead:
https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326

On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin 
wrote:

> Dmitry, I mean whole feature.
> Btw, why do you want to grant capabilities via puppet? It should be done
> by post-install package section, I believe.
>
> Also I doesn't know if supervisord can bound process capabilities like
> systemd can - we could use this opportunity too.
>
> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov 
> wrote:
>
>> My main concern with using linux capabilities/acls on files is actually
>> puppet support or, actually, the lack of it. ACLs are possible AFAIK, but
>> we'd need to write a custom type/provider for capabilities. I suggest to
>> wait with capabilities support till systemd support.
>>
>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov 
>> wrote:
>>
>>> Stanislaw, do you mean the whole feature, or just a user? Since feature
>>> would require actually changing puppet code.
>>>
>>> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
>>> sbogat...@mirantis.com> wrote:
>>>
 Dmitry, I believe it should be done via package spec as a part of
 installation.

 On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Hello folks,
>
> I have updated the spec, please review and share your thoughts on it:
> https://review.openstack.org/#/c/243340/
>
> Thanks.
>
> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Matthew,
>>
>> sorry, didn't mean to butcher your name :(
>>
>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matther,
>>>
>>> I totally agree that each daemon should have it's own user which
>>> should be created during installation of the relevant package. Probably 
>>> I
>>> didn't state this clear enough in the spec.
>>>
>>> However, there are security requirements in place that root should
>>> not be used at all. This means that there should be a some kind of
>>> maintenance or system user ('fueladmin'), which would have enough
>>> privileges to configure and manage Fuel node (e.g. run "sudo puppet 
>>> apply"
>>> without password, create mirrors etc). This also means that certain 
>>> fuel-
>>> packages would be required to have their files accessible to that user.
>>> That's the idea behind having a package which would create 'fueladmin' 
>>> user
>>> and including it into other fuel- packages requirements lists.
>>>
>>> So this part of the feature comes down to having a non-root user
>>> with sudo privileges and passwordless sudo for certain commands (like
>>> 'puppet apply ') for scripting.
>>>
>>> On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn <
>>> mmoses...@mirantis.com> wrote:
>>>
 Dmitry,

 We really shouldn't put "user" creation into a single package and
 then depend on it for daemons. If we want nailgun service to run as 
 nailgun
 user, it should be created in the fuel-nailgun package.
 I think it makes the most sense to create multiple users, one for
 each service.

 Lastly, it makes a lot of sense to tie a "fuel" CLI user to
 python-fuelclient package.

 On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw,
>
> I agree that this approch would work well. However, does Puppet
> allow managing capabilities and/or file ACLs? Or can they be easily 
> set up
> when installing RPM package? (is there a way to specify 
> capabilities/ACLs
> in the RPM spec file?) This doesn't seem to be supported out of the 
> box.
>
> I'm going to research if it is possible to manage capabilities and
>  ACLs with what we have out of the box (RPM, Puppet).
>
> On Wed, Nov 11, 2015 at 4:29 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I propose to give needed linux capabilities
>> (like CAP_NET_BIND_SERVICE) to processes (services) which needs them 
>> and
>> then start these processes from non-privileged user. It will give you
>> ability to run each process without 'sudo' at all with well 

Re: [openstack-dev] [nova] What things do we want to get into a python-novaclient 3.0 release?

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:20 AM, Matt Riedemann wrote:



On 11/20/2015 3:48 AM, Matthew Booth wrote:

I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only
the external api:

https://gist.github.com/mdbooth/163f5fdf47ab45d7addd

It obviously overlaps considerably with host-servers-migrate, which is
supposed to do the same thing. Users seem to have been appreciative, so
I'd be interested to see it merged in some form.
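
For illustration only, the rough shape of such a script using standard
novaclient calls (this is not the gist's actual code; the credentials,
endpoint and host name are placeholders):

    from novaclient import client

    # placeholder credentials / endpoint
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    # every instance on the host being drained (admin-only query)
    servers = nova.servers.list(search_opts={'host': 'compute-1',
                                             'all_tenants': 1})

    for server in servers:
        # let the scheduler pick the target host; a robust script would also
        # poll the server status and fall back to cold migration on failure
        server.live_migrate()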

Matt

On Thu, Nov 19, 2015 at 6:18 PM, Matt Riedemann
> wrote:

We've been talking about doing a 3.0 release for novaclient for
awhile so we can make some backward incompatible changes, like:

1. Removing the novaclient.v1_1 module
2. Dropping py26 support (if there is any explicit py26 support in
there)

What else are people aware of?

Monty was talking about doing a thing with auth:

https://review.openstack.org/#/c/245200/

But it sounds like that is not really needed now?

I'd say let's target mitaka-2 for a 3.0 release and get these
flushed out.

--

Thanks,

Matt Riedemann



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



That's not a backward compatible change, so not really necessary for a
major version release. We'd need a blueprint at least for it though
since it's adding a new CLI that does some orchestration. I commented in
the repo, there is a similar version in another repo, so yeah, people
are doing this and it'd be good if we could decide if it's something
that should live in tree and be maintained officially.

A functional test would be sweet, but given it deals with migration and
the novaclient functional tests are assuming a single node devstack,
that's probably not going to fly. We could ask sdague about that though.



Sorry, that's not a backward *incompatible* change.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sean Dague
On 11/20/2015 11:36 AM, Matt Riedemann wrote:
> 
> 
> On 11/20/2015 10:04 AM, Andrew Laski wrote:
>> On 11/20/15 at 09:51am, Matt Riedemann wrote:
>>>
>>>
>>> On 11/20/2015 8:18 AM, Sean Dague wrote:
 On 11/17/2015 10:51 PM, Matt Riedemann wrote:
 
>
> I *don't* see any DB APIs for deleting instance actions.
>
> Kind of an important difference there.  Jay got it at least. :)
>
>>
>> Were we just planning on instance_actions living forever in the
>> database?
>>
>> Should we soft delete instance_actions when we delete the referenced
>> instance?
>>
>> Or should we (hard) delete instance_actions when we archive (move to
>> shadow tables) soft deleted instances?
>>
>> This is going to be a blocker to getting nova-manage db
>> archive_deleted_rows working.
>>
>> [1] https://review.openstack.org/#/c/246635/

 instance_actions seems extremely useful, and at the ops meetups I've
 been to has been one of the favorite features because it allows and
 easy
 interface for "going back in time" to figure out what happened.

 I'd suggest the following:

 1. soft deleting and instance does nothing with instance actions.

 2. archiving instance (soft delete -> actually deleted) also archives
 off instance actions.
>>>
>>> I think this is also the right approach. Then we don't need to worry
>>> about adding soft delete for instance_actions, they are just archived
>>> when you archive the instances. It probably makes the logic in the
>>> archive code messier for this separate path, but it's looking like
>>> we're going to have to account for the bw_usage_cache table too (which
>>> has a uuid column for an instance but no foreign key back to the
>>> instances table and is not soft deleted).
>>>

 3. update instance_actions API so that you can get instance_actions for
 deleted instances (which I think doesn't work today).
>>>
>>> Right, it doesn't. I was going to propose a spec for that since it's a
>>> simple API change with a microversion.
>>
>> Adding a simple flag to expose instance actions for a deleted instance
>> if you know the uuid of the deleted instance will provide some
>> usefulness.  It does lack the discoverability of knowing that you had
>> *some* instance that was deleted and you don't have the uuid but want to
>> get at the deleted actions.  I would like to avoid bolting that onto
>> instance actions and keep that as a use case for an eventual Task API.
>>
>>>

 -Sean

>>>
>>> -- 
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>> __
>>>
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> If you're an admin, you can list deleted instances using:
> 
> nova list --deleted
> 
> Or could, if we weren't busted on that right now [1].
> 
> So the use case I'm thinking of here is:
> 
> 1. Multiple users are in the same project/tenant.
> 2. User A deletes an instance.
> 3. User B is wondering where the instance went, so they open a support
> ticket.
> 4. The admin checks for deleted instances on that project, finds the one
> in question.
> 5. Calls off to os-instance-actions with that instance uuid to see the
> deleted action and the user that did it (user A).
> 6. Closes the ticket saying that user A deleted the instance.
> 7. User B punches user A in the gut.
> 
> [1] https://bugs.launchpad.net/nova/+bug/1518382

+1

I think we need that on a T-shirt


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 19 November 2015 at 23:10, Gary Kotton  wrote:

> Hi,
> There are a ton of old and ancient bugs that have not been trained. If you
> guys have some time then please go over them. In most cases some are not
> even bugs and are just questions. I have spent the last few days going over
> and training a few.
> Over the last two days a number of bugs related to Neutron RBAC have been
> opened. I have created a new tag called ‘brace’. Kevin can you please take
> a look. Some may be bugs, others may be edge cases that we missed in the
> review process and others may be a mis understanding of the feature.
>

What does brace mean? That doesn't seem very intuitive.

Are you suggesting to add one to cover 'access control' in general?

Thanks for helping out!

[1]
http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags




> A luta continua
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread John Belamaric
I think Gary got auto-corrected:

training = triaging
brace = rbac

On Nov 20, 2015, at 12:41 PM, Armando M. 
> wrote:



On 19 November 2015 at 23:10, Gary Kotton 
> wrote:
Hi,
There are a ton of old and ancient bugs that have not been trained. If you guys 
have some time then please go over them. In most cases some are not even bugs 
and are just questions. I have spent the last few days going over and training 
a few.
Over the last two days a number of bugs related to Neutron RBAC have been 
opened. I have created a new tag called ‘brace’. Kevin can you please take a 
look. Some may be bugs, others may be edge cases that we missed in the review 
process and others may be a mis understanding of the feature.

What does brace mean? That doesn't seem very intuitive.

Are you suggesting to add one to cover 'access control' in general?

Thanks for helping out!

[1] 
http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags



A luta continua
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-20 Thread gord chung



On 20/11/15 11:33 AM, Alexis Lee wrote:

gord chung said on Thu, Nov 19, 2015 at 11:59:33PM -0500:

just to clarify, the idea doesn't involve tailoring the notification
payload to ceilometer, just that if a producer is producing a
notification it knows contains a useful datapoint, the producer
should tell someone explicitly 'this datapoint exists'.

I know very little about Nova notifications or Ceilometer, so stepping
wildly into the unknown here but... why would a producer spit out
non-useful datapoints? If no-one cares or will ever care, it simply
shouldn't be included.

fully agree.

it seems like even before addressing versioning, that the notification 
paradigm itself should be discussed. right now the producer is just 
sending out a grab bag of data that it thinks is important but doesn't 
define who the audience is. while that makes it extremely flexible so 
that anyone can consume the message, it also guarantees nothing (not 
even that it's being consumed). you can version a payload or make a 
schema accessible as much as you like but if no one is listening or the 
data published isn't useful to those listening, it's just noise.
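
(as an illustration of what "version a payload" means in practice - the
object name and fields below are made up - an ovo-serialized payload has
roughly this shape:)

    payload = {
        'nova_object.namespace': 'nova',
        'nova_object.name': 'ExamplePayload',   # illustrative name
        'nova_object.version': '1.0',
        'nova_object.data': {
            'uuid': 'some-instance-uuid',
            'state': 'active',
        },
    }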


i think a lot of the complexity we have in versioning is that the 
projects are too silo'd. i think some of the versioning issues would be 
irrelevant if the producer knew its consumers before sending rather 
than producers just tossing out a chunk of data (versioned schema or 
not) and considering their job complete once it leaves its own walls. 
the producer doesn't necessarily have to be the individual project teams 
but whoever the producer of notifications is, it should know its audience.




The problem is knowing what each consumer thinks is interesting and that
isn't something that can be handled by the producer. If Ceilometer is
just a pipeline that has no opinion on what's relevant and what isn't,
that's a special case easily implemented by an identity function.
the notification consumption service in ceilometer is essentially just a 
pipeline that normalises select incoming notifications into a data 
model(s) and pushes that model to whoever wants it (a known consumer is 
the storage service but it's configurable to allow other consumers).


cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron L3 Sub-team meeting canceled on November 26th

2015-11-20 Thread Miguel Lavalle
Dear Neutron L3 Sub-team members,

We are canceling our weekly IRC meeting on November 26th, due to the
Thanksgiving holiday in the US. We will reconvene again on December 3rd at
the usual time.

Best regards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & the nodepool puppet scripts so 
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m please to say that there are now 3 components merged in the 
puppet-openstackci repo [1]
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 
UTC to finish the remaining tasks.
If you’re interested in helping out in any of the remaining tasks (Jenkins Job 
Builder, Nodepool, Logstash/Kibana, Documentation, Sample site.pp) Sign up on 
the eitherpad. [4]

Also, we can use the 3rd party meeting time slot next week to discuss plans and 
answer questions [5].
Tuesday 7/7/15 1700 UTC #openstack-meeting

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://github.com/rasselin/os-ext-testing (forked from 
jaypipes/os-ext-testing)
[4] https://etherpad.openstack.org/p/common-ci-sprint
[5] https://wiki.openstack.org/wiki/Meetings/ThirdParty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 19 November 2015 at 23:20, Gary Kotton  wrote:

> Hi,
> One extra thing. A large chunk of the latest bugs opened are RFE’s. These
> are tagged with ‘rfe’. In addition to this I would suggest changing the
> title of the bug to have [RFE]. This will at least help those who are
> perusing over the bugs to see what are actually real bug and what are new
> features etc.
>

These bugs are marked 'wishlist'...that should suffice IMO, but prefixing
is not going to hurt.


> Just a thought.
> Thanks
> Gary
>
> From: Gary Kotton 
> Reply-To: OpenStack List 
> Date: Friday, November 20, 2015 at 9:10 AM
> To: OpenStack List 
> Subject: [openstack-dev] [Neutron] Bug update
>
> Hi,
> There are a ton of old and ancient bugs that have not been trained. If you
> guys have some time then please go over them. In most cases some are not
> even bugs and are just questions. I have spent the last few days going over
> and training a few.
> Over the last two days a number of bugs related to Neutron RBAC have been
> opened. I have created a new tag called ‘brace’. Kevin can you please take
> a look. Some may be bugs, others may be edge cases that we missed in the
> review process and others may be a mis understanding of the feature.
> A luta continua
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] M-1 Bugs/Reviews squash day

2015-11-20 Thread Flavio Percoco

On 16/11/15 16:45 -0300, Flavio Percoco wrote:

Greetings,

At our last meeting, we discussed the idea of having a Bug/Reviews
squash day before the end of M-1. I'm sending this email out to
propose that we do this work on one of the following dates:

- Friday November 20th (ALL TZs)
- Monday November 23rd (ALL TZs)

I realize that next week is Thanksgiving in the US and some folks
might want to take the whole week. Please, do vote before Wednesday
18th so we can prepare for Friday and/or monday.

Poll link: http://doodle.com/poll/mt7hwswtmcvmetdn


The Bug/Review squash day will be Monday November 23rd. You can check
the results in the poll link.

Thanks and looking forwrd to Monday!
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Daniel P. Berrange
On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:
> Brick does not have to take over the decisions in order to be a useful
> repository for the code. The motivation for this work is to avoid having
> the dm setup code copied wholesale into cinder, where it becomes difficult
> to keep in sync with the code in nova.
> 
> Cinder needs a copy of this code since it is on the data path for certain
> operations (create from image, copy to image, backup/restore, migrate).

A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service, i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.

If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vice-versa, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bug update

2015-11-20 Thread Armando M.
On 20 November 2015 at 09:47, John Belamaric 
wrote:

> I think Gary got auto-corrected:
>
> training = triaging
> brace = rbac
>

ah!


>
> On Nov 20, 2015, at 12:41 PM, Armando M.  wrote:
>
>
>
> On 19 November 2015 at 23:10, Gary Kotton  wrote:
>
>> Hi,
>> There are a ton of old and ancient bugs that have not been trained. If
>> you guys have some time then please go over them. In most cases some are
>> not even bugs and are just questions. I have spent the last few days going
>> over and training a few.
>> Over the last two days a number of bugs related to Neutron RBAC have been
>> opened. I have created a new tag called ‘brace’. Kevin can you please take
>> a look. Some may be bugs, others may be edge cases that we missed in the
>> review process and others may be a mis understanding of the feature.
>>
>
> What does brace mean? That doesn't seem very intuitive.
>
> Are you suggesting to add one to cover 'access control' in general?
>
> Thanks for helping out!
>
> [1]
> http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags
>
>
>
>
>> A luta continua
>> Gary
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Matt Riedemann



On 11/20/2015 10:19 AM, Alexis Lee wrote:

We just had a fun discussion in IRC about whether foreign keys are evil.
Initially I thought this was crazy but mordred made some good points. To
paraphrase, that if you have a scale-out app already it's easier to
manage integrity in your app than scale-out your persistence layer.

Currently the Nova DB has quite a lot of FKs but not on every relation.
One example of a missing FK is between Instance.uuid and
BandwidthUsageCache.uuid.

Should we drive one way or the other, or just put up with mixed-mode?


For the record, I hate the mixed mode.



What should be the policy for new relations?


I prefer consistency, so if we're adding new relationships I'd prefer to 
see that they have foreign keys.




Do the answers to these questions depend on having a sane and
comprehensive archive/purge system in place?


I'm not sure. The problem this is causing with archive/purge is that I 
thought all we had to do to fix archive was reverse sort the tables, 
which was working until it turned out we weren't soft deleting 
instance_actions. But now it also turns out that we aren't soft deleting 
bw_usage_cache *and* we don't have a FKey from that back to the 
instances table, so it's just completely orphaned and never archived or 
deleted, thus leaving that task up to the xenserver operator (since the 
xenserver driver is the only one that implements the virt driver API to 
populate this table).


So again, now we have to have special hack code paths in the 
archive/purge code to account for this mixed mode schema.
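
For reference, the "reverse sort the tables" idea above is basically the 
following (a minimal sketch, not the actual archive code; the connection 
URL is a placeholder):

    from sqlalchemy import MetaData, create_engine

    engine = create_engine("mysql+pymysql://nova:secret@localhost/nova")  # placeholder
    meta = MetaData()
    meta.reflect(bind=engine)

    # sorted_tables orders tables so that FK targets come first; walking it
    # in reverse visits child tables before their parents.  Tables with no
    # FK back to instances (e.g. bw_usage_cache) fall outside this ordering
    # and still need the special-case handling mentioned above.
    for table in reversed(meta.sorted_tables):
        print(table.name)   # archive soft-deleted rows of `table` here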





Alexis (lxsli)



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova compute hooks are not called

2015-11-20 Thread Sundar Nadathur
Hello,
   I am trying to get the Nova Compute create_instance hook to be called. However, 
although the VM gets started from Horizon properly, the hook does not get 
called and there is no reference to it in the logs. When I run the hook script 
from the command line, it runs fine.

Please let me know what I am missing. Thanks!

Details:  I have created a directory with the following structure:
Nova-Hooks/
 setup.py
 demo_nova_hooks/
 __init__.py
 simple.py

Nova-Hooks is in $PYTHONPATH. Both setup.py and simple.py have execute 
permissions for all.

I ran "setup.py install", restarted nova-compute service, verified that 
nova-compute is running, and then started the instance. Here are the contents 
of setup.py:
http://paste.openstack.org/show/479627/

Here are the contents of simple.py:
http://paste.openstack.org/show/479628/

There are no references in /var/log/nova/nova-compute.log to the strings "hook", 
"demo", "simple", etc. When I run the hook script from the command line, it 
runs fine.
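
For reference, the usual shape of such a hook (assuming the standard 
'nova.hooks' entry point group; the names below are illustrative and not 
necessarily what is in the pastes above) is:

    # setup.py (illustrative)
    from setuptools import setup

    setup(
        name='demo_nova_hooks',
        version='0.1',
        packages=['demo_nova_hooks'],
        entry_points={
            'nova.hooks': [
                'create_instance = demo_nova_hooks.simple:SimpleHook',
            ],
        },
    )

    # demo_nova_hooks/simple.py (illustrative)
    class SimpleHook(object):
        def pre(self, *args, **kwargs):
            # called before the hooked compute API method runs
            open('/tmp/hook.log', 'a').write('pre create_instance\n')

        def post(self, *args, **kwargs):
            # called after the hooked compute API method returns
            open('/tmp/hook.log', 'a').write('post create_instance\n')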


Cheers,
Sundar







Confidentiality Notice.
This message may contain information that is confidential or otherwise 
protected from disclosure. If you are not the intended recipient, you are 
hereby notified that any use, disclosure, dissemination, distribution, or 
copying of this message, or any attachments, is strictly prohibited. If you 
have received this message in error, please advise the sender by reply e-mail, 
and delete the message and any attachments. Thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Siddharth Bhatt
Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool  & nodepool puppet scripts so 
that is can be reusable by both 3rd party CI’s and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already starting setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m please to say that there are now 3 components merged in the 
puppet-openstackci repo [1]
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 8-9, 2015 15:00 
UTC to finish the remaining tasks.
If you’re interested in helping out in any of the remaining tasks (Jenkins Job 
Builder, Nodepool, Logstash/Kibana, Documentation, Sample site.pp) Sign up on 
the eitherpad. [4]

Also, we can use the 3rd party meeting time slot next week to discuss plans and 
answer questions [5].
Tuesday 7/7/15 1700 UTC #openstack-meeting

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] 

Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Stanislaw Bogatkin
Dmitry, I just propose the way I think is right, because it is strange to
install a package from a *.deb file and then set its privileges with a
third-party utility. Permissions for an app are now mostly managed by
post-install scripts. Moreover, if they aren't, they should be: if you set
capabilities via puppet there will always be a gap between installation and
setting permissions, so you would have to couple the package installation
process with setting permissions by puppet - otherwise you have no way to
use your app.

Setting setuid bits on apps is not a good idea - that is why Linux
capabilities were introduced.
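
A sketch of what I mean, straight in the spec (package and binary names
here are made up):

    %post
    # grant only the capability the service actually needs instead of
    # running it as root or setting a setuid bit
    setcap cap_net_bind_service=+ep %{_bindir}/some-fuel-service || :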

On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov 
wrote:

> Stanislaw,
>
> In my opinion the whole feature shouldn't be in the separate package
> simply because it will actually affect the code of many, if not all,
> components of Fuel.
>
> The only services whose capabilities will have to be managed by puppet are
> those, which are installed from upstream packages (e.g. atop) -- not built
> from fuel-* repos.
>
> Supervisord doesn't seem to use Linux capabilities, id does setuid
> instead:
> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>
> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I mean whole feature.
>> Btw, why do you want to grant capabilities via puppet? It should be done
>> by post-install package section, I believe.
>>
>> Also I doesn't know if supervisord can bound process capabilities like
>> systemd can - we could use this opportunity too.
>>
>> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov 
>> wrote:
>>
>>> My main concern with using linux capabilities/acls on files is actually
>>> puppet support or, actually, the lack of it. ACLs are possible AFAIK, but
>>> we'd need to write a custom type/provider for capabilities. I suggest to
>>> wait with capabilities support till systemd support.
>>>
>>> On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov >> > wrote:
>>>
 Stanislaw, do you mean the whole feature, or just a user? Since feature
 would require actually changing puppet code.

 On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
 sbogat...@mirantis.com> wrote:

> Dmitry, I believe it should be done via package spec as a part of
> installation.
>
> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Hello folks,
>>
>> I have updated the spec, please review and share your thoughts on it:
>> https://review.openstack.org/#/c/243340/
>>
>> Thanks.
>>
>> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Matthew,
>>>
>>> sorry, didn't mean to butcher your name :(
>>>
>>> On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Matther,

 I totally agree that each daemon should have it's own user which
 should be created during installation of the relevant package. 
 Probably I
 didn't state this clear enough in the spec.

 However, there are security requirements in place that root should
 not be used at all. This means that there should be a some kind of
 maintenance or system user ('fueladmin'), which would have enough
 privileges to configure and manage Fuel node (e.g. run "sudo puppet 
 apply"
 without password, create mirrors etc). This also means that certain 
 fuel-
 packages would be required to have their files accessible to that user.
 That's the idea behind having a package which would create 'fueladmin' 
 user
 and including it into other fuel- packages requirements lists.

 So this part of the feature comes down to having a non-root user
 with sudo privileges and passwordless sudo for certain commands (like
 'puppet apply ') for scripting.

 On Thu, Nov 12, 2015 at 9:52 AM, Matthew Mosesohn <
 mmoses...@mirantis.com> wrote:

> Dmitry,
>
> We really shouldn't put "user" creation into a single package and
> then depend on it for daemons. If we want nailgun service to run as 
> nailgun
> user, it should be created in the fuel-nailgun package.
> I think it makes the most sense to create multiple users, one for
> each service.
>
> Lastly, it makes a lot of sense to tie a "fuel" CLI user to
> python-fuelclient package.
>
> On Thu, Nov 12, 2015 at 6:42 PM, Dmitry Nikishov <
> dnikis...@mirantis.com> wrote:
>
>> Stanislaw,
>>
>> I agree that this approch would work well. However, does Puppet
>> allow managing capabilities and/or file ACLs? 

Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Mike Bayer


On 11/20/2015 11:19 AM, Alexis Lee wrote:
> We just had a fun discussion in IRC about whether foreign keys are evil.
> Initially I thought this was crazy but mordred made some good points. To
> paraphrase, that if you have a scale-out app already it's easier to
> manage integrity in your app than scale-out your persistence layer.

I've had this argument with mordred before, and it seems again there's
the same misunderstanding going on:

1. Your application can have **conceptual** foreign keys in it, without
actually having foreign keys **for real** in the database.  This means
your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
most importantly your **database still uses normal form**, that is, any
row that refers to another does it based on a set of columns that
exactly match to the primary key of a single table elsewhere (not to
multiple tables, not to a function of the columns cast from int to
string and concatenated to the value in the other table etc, an *exact
match*).   I'm sure that mordred agrees with all of these practices,
however when one says "we aren't going to use foreign keys anymore",
typically it is all these critical schema design practices that go out
the window.  Put another way, the foreign key concept not only
constrains data in a real database; just the concept of them constrains
the **developer** to use correct normal form.

2. Here's the part mordred doesn't like - the FK is actually in the
database for real.   This is because they slow down inserts, updates,
and deletes, because they must be checked.   To which I say, no such
performance issue has been observed or documented in Openstack, we
aren't a 1 million TPS financial system, so this is vastly premature
optimization.

Also as far as #2, customers and operators *regularly* run scripts and
queries to modify openstack databases, particularly to delete soft
deleted rows.  These get blocked *all the time* by foreign key
constraints.  They are doing their job, and they are needed as a final
guard against data integrity issues.  We of course handle referential
integrity in the application layer as well via SQLAlchemy ORM constructs.

3. Another aspect of FKs is using them for ON DELETE CASCADE.   I think
this is a great idea also, but I know that openstack apps are not
comfortable with this.  So we don't need to use it (but we should someday).



> 
> Currently the Nova DB has quite a lot of FKs but not on every relation.
> One example of a missing FK is between Instance.uuid and
> BandwidthUsageCache.uuid.
> 
> Should we drive one way or the other, or just put up with mixed-mode?
> 
> What should be the policy for new relations?

+1 for correct normalization with foreign keys in all cases.   A slowdown
that can be documented and illustrated will be needed to justify having
that FK disabled or removed on the schema side only, but there
would still be a "conceptual" foreign key (e.g. SQLAlchemy ForeignKey)
in the model.
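
To make that concrete, here is a minimal SQLAlchemy sketch of a "conceptual"
foreign key, using the Instance/BandwidthUsageCache pair mentioned in the
original mail (table and column names are simplified, not Nova's actual
schema):

    # Sketch only: the model declares ForeignKey and relationship(), which
    # keeps the schema in normal form and keeps the developer honest; whether
    # the constraint also exists for real in the database is then a separate,
    # one-line decision.
    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship, sessionmaker

    Base = declarative_base()


    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        uuid = Column(String(36), nullable=False, unique=True)


    class BandwidthUsageCache(Base):
        __tablename__ = 'bw_usage_cache'
        id = Column(Integer, primary_key=True)
        # The "conceptual" FK: refers to exactly one unique key of one table.
        # Dropping the real constraint later (if it ever proves too slow)
        # leaves the column, the relationship() and normal form intact.
        uuid = Column(String(36), ForeignKey('instances.uuid'), nullable=False)
        instance = relationship(Instance)


    if __name__ == '__main__':
        engine = create_engine('sqlite://')
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()
        inst = Instance(uuid='11111111-2222-3333-4444-555555555555')
        session.add_all([inst, BandwidthUsageCache(instance=inst)])
        session.commit()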

> 
> Do the answers to these questions depend on having a sane and
> comprehensive archive/purge system in place?
> 
> 
> Alexis (lxsli)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Mike Bayer


On 11/20/2015 02:29 PM, Mike Bayer wrote:
> 
> 
> On 11/20/2015 11:19 AM, Alexis Lee wrote:
>> We just had a fun discussion in IRC about whether foreign keys are evil.
>> Initially I thought this was crazy but mordred made some good points. To
>> paraphrase, that if you have a scale-out app already it's easier to
>> manage integrity in your app than scale-out your persistence layer.

oh, I forgot the other use case, the "we might have these tables in two
different databases" use case.  Again.   Start out with your two tables
together, put the FK there, have SQLAlchemy do the work of actually
maintaining this FK relationship.   The FKs can be removed at the schema
level at any time provided you aren't relying upon ON DELETE or ON
UPDATE constructs, which we're not.

If and when you split those tables out to two databases, I would
actually replace the relationship with one that uses GUIDs, and if the
table's primary key is not already a GUID (I favor integer primary
keys), there'd be a separate UNIQUE column on the parent table with the
GUID value.  Auto-incrementing integer primary key identifiers are
essential, but because they are auto-incrementing they are not quite as
portable to other databases, whereas GUIDs are extremely portable.  Then
continue using ForeignKeyConstraint and relationship() in SQLAlchemy as
always.
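
A rough sketch of that split-database shape, assuming an integer surrogate
primary key plus a separate UNIQUE GUID column (names are made up for the
example, not taken from any real schema):

    # Sketch only: integer PK for cheap local joins, plus a portable GUID
    # that another service or database can reference without sharing
    # auto-increment sequences.
    import uuid

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()


    class Parent(Base):
        __tablename__ = 'parents'
        id = Column(Integer, primary_key=True)      # local surrogate key
        guid = Column(String(36), unique=True, nullable=False,
                      default=lambda: str(uuid.uuid4()))


    class Child(Base):
        __tablename__ = 'children'
        id = Column(Integer, primary_key=True)
        # While both tables live in one database, keep the real constraint;
        # if they are ever split apart, only the schema-level FK goes away,
        # while the column, the relationship() and normal form stay the same.
        parent_guid = Column(String(36), ForeignKey('parents.guid'),
                             nullable=False)
        parent = relationship(Parent)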



> 
> I've had this argument with mordred before, and it seems again there's
> the same misunderstanding going on:
> 
> 1. Your application can have **conceptual** foreign keys in it, without
> actually having foreign keys **for real** in the database.  This means
> your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
> most importantly your **database still uses normal form**, that is, any
> row that refers to another does it based on a set of columns that
> exactly match to the primary key of a single table elsewhere (not to
> multiple tables, not to a function of the columns cast from int to
> string and concatenated to the value in the other table etc, an *exact
> match*).   I'm sure that mordred agrees with all of these practices,
> however when one says "we aren't going to use foreign keys anymore",
> typically it is all these critical schema design practices that go out
> the window.  Put another way, the foreign key concept not only
> constrains data in a real database; just the concept of them constrains
> the **developer** to use correct normal form.
> 
> 2. Here's the part mordred doesn't like - the FK is actually in the
> database for real.   This is because they slow down inserts, updates,
> and deletes, because they must be checked.   To which I say, no such
> performance issue has been observed or documented in Openstack, we
> aren't a 1 million TPS financial system, so this is vastly premature
> optimization.
> 
> Also as far as #2, customers and operators *regularly* run scripts and
> queries to modify openstack databases, particularly to delete soft
> deleted rows.  These get blocked *all the time* by foreign key
> constraints.  They are doing their job, and they are needed as a final
> guard against data integrity issues.  We of course handle referential
> integrity in the application layer as well via SQLAlchemy ORM constructs.
> 
> 3. Another aspect of FKs is using them for ON DELETE CASCADE.   I think
> this is a great idea also, but I know that openstack apps are not
> comfortable with this.  So we don't need to use it (but we should someday).
> 
> 
> 
>>
>> Currently the Nova DB has quite a lot of FKs but not on every relation.
>> One example of a missing FK is between Instance.uuid and
>> BandwidthUsageCache.uuid.
>>
>> Should we drive one way or the other, or just put up with mixed-mode?
>>
>> What should be the policy for new relations?
> 
> +1 for correct normalization with foreign keys in all cases.   A slowdown
> that can be documented and illustrated will be needed to justify having
> that FK disabled or removed on the schema side only, but there
> would still be a "conceptual" foreign key (e.g. SQLAlchemy ForeignKey)
> in the model.
> 
>>
>> Do the answers to these questions depend on having a sane and
>> comprehensive archive/purge system in place?
>>
>>
>> Alexis (lxsli)
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread melanie witt
On Nov 20, 2015, at 6:18, Sean Dague  wrote:

> instance_actions seems extremely useful, and at the ops meetups I've
> been to it has been one of the favorite features because it allows an easy
> interface for "going back in time" to figure out what happened.

Agreed, we're using it because it's such a quick and easy way to see what 
actions have been taken on an instance when users need support. We're not yet 
collecting notifications from the queue -- we do have them being dumped to the 
logs that are splunk searchable. So far, it hasn't been "easy" to look at 
instance action history that way.

> I'd suggest the following:
> 
> 1. soft deleting an instance does nothing with instance actions.
> 
> 2. archiving an instance (soft delete -> actually deleted) also archives
> off instance actions.
> 
> 3. update instance_actions API so that you can get instance_actions for
> deleted instances (which I think doesn't work today).

+1

I kept trying to craft a reply to this thread and fortunately I waited long 
enough that someone else said exactly what I was trying to say.

-melanie (irc: melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Running Fuel node as non-superuser

2015-11-20 Thread Dmitry Nikishov
Stanislaw,

I want to clarify: there are 2 types of services running on the Fuel node:
- Those, which are a part of Fuel (astute, nailgun etc)
- Those, which are not (e.g. atop)

Capabilities for the former can easily be managed via post-install scripts,
embedded in respective package spec file (since specs are a part of fuel-*
repo). This is a very good idea.
Capabilities for the latter will have to be taken care of via either
a. some external utility (puppet)
b. rebuilding the respective package with an updated spec

I'd say that (a) is still more convenient.

Another option would be to have fine-grained control only over Fuel
services and leave all the others at their defaults.

On Fri, Nov 20, 2015 at 1:19 PM, Stanislaw Bogatkin 
wrote:

> Dmitry, I just propose the way I think is right, because it's strange
> enough to install a package from a *.deb file and then set privileges on it
> with a third-party utility. Setting permissions for an app is now mostly
> managed by post-install scripts. Moreover - if it isn't - it should be,
> because if you set capabilities via puppet there will always be a gap
> between installation and setting permissions, so you will have to tie the
> package installation process to setting permissions by puppet - otherwise
> you will have no way to use your app.
>
> Setting setuid bits on apps is not a good idea - that is why Linux
> capabilities were introduced.
>
> On Fri, Nov 20, 2015 at 6:40 PM, Dmitry Nikishov 
> wrote:
>
>> Stanislaw,
>>
>> In my opinion the whole feature shouldn't be in the separate package
>> simply because it will actually affect the code of many, if not all,
>> components of Fuel.
>>
>> The only services whose capabilities will have to be managed by puppet
>> are those, which are installed from upstream packages (e.g. atop) -- not
>> built from fuel-* repos.
>>
>> Supervisord doesn't seem to use Linux capabilities, it does setuid
>> instead:
>> https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1326
>>
>> On Fri, Nov 20, 2015 at 1:07 AM, Stanislaw Bogatkin <
>> sbogat...@mirantis.com> wrote:
>>
>>> Dmitry, I mean the whole feature.
>>> Btw, why do you want to grant capabilities via puppet? It should be done
>>> by post-install package section, I believe.
>>>
>>> Also I don't know if supervisord can bind process capabilities like
>>> systemd can - we could use this opportunity too.
>>>
>>> On Thu, Nov 19, 2015 at 7:44 PM, Dmitry Nikishov >> > wrote:
>>>
 My main concern with using Linux capabilities/ACLs on files is actually
 puppet support or, rather, the lack of it. ACLs are possible AFAIK, but
 we'd need to write a custom type/provider for capabilities. I suggest
 waiting on capabilities support until we have systemd support.

 On Tue, Nov 17, 2015 at 9:15 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Stanislaw, do you mean the whole feature, or just a user? Since
> the feature would require actually changing puppet code.
>
> On Tue, Nov 17, 2015 at 5:08 AM, Stanislaw Bogatkin <
> sbogat...@mirantis.com> wrote:
>
>> Dmitry, I believe it should be done via package spec as a part of
>> installation.
>>
>> On Mon, Nov 16, 2015 at 8:04 PM, Dmitry Nikishov <
>> dnikis...@mirantis.com> wrote:
>>
>>> Hello folks,
>>>
>>> I have updated the spec, please review and share your thoughts on
>>> it: https://review.openstack.org/#/c/243340/
>>>
>>> Thanks.
>>>
>>> On Thu, Nov 12, 2015 at 10:42 AM, Dmitry Nikishov <
>>> dnikis...@mirantis.com> wrote:
>>>
 Matthew,

 sorry, didn't mean to butcher your name :(

 On Thu, Nov 12, 2015 at 10:41 AM, Dmitry Nikishov <
 dnikis...@mirantis.com> wrote:

> Matther,
>
> I totally agree that each daemon should have its own user which
> should be created during installation of the relevant package. 
> Probably I
> didn't state this clear enough in the spec.
>
> However, there are security requirements in place that root should
> not be used at all. This means that there should be some kind of
> maintenance or system user ('fueladmin'), which would have enough
> privileges to configure and manage Fuel node (e.g. run "sudo puppet 
> apply"
> without password, create mirrors etc). This also means that certain 
> fuel-
> packages would be required to have their files accessible to that 
> user.
> That's the idea behind having a package which would create 
> 'fueladmin' user
> and including it into other fuel- packages requirements lists.
>
> So this part of the feature comes down to having a non-root user
> with sudo privileges and passwordless sudo for certain commands (like
> 'puppet apply ') for scripting.

Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
Hi Sid,

Instead of documenting it, it was simple enough to automate. Please try these
out:

https://review.openstack.org/248223

https://review.openstack.org/248226

Feel free to propose your own fixes or improvements. I think this is one of the
best parts of getting it all in sync upstream.

Best regards,
Ramy



From: Asselin, Ramy
Sent: Friday, November 20, 2015 11:03 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Hi Sid,

Sorry, you’re right: log server fix is here. [1]
I thought I documented the scp v1.9 plugin issue, but I don’t see it now. I 
will submit a patch to add that.

Thanks for raising these issues!

Ramy

[1] https://review.openstack.org/#/c/242800/


From: Siddharth Bhatt [mailto:siddharth.bh...@falconstor.com]
Sent: Friday, November 20, 2015 10:51 AM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & the nodepool puppet scripts so
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m please to say that there are now 

Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Timur Nurlygayanov
Hi Igor and Alexander,

>But I'd like to hear from QA how do we rely on container-based
> infrastructure? Would it be hard to change our sys-tests in short
> time?

The QA team doesn't have significant dependencies on docker images in our tests
[0]. I think we can change all docker-based code in our tests / scripts
in 2-3 days, but it is hard to predict when an ISO without docker images will
pass all SWARM / Tempest tests.

And one more time:

> Of course, we can fix BVT / SWARM tests and don't use docker images in our
> test suite (it shouldn't be really hard) but we didn't plan these changes
> and in fact these changes can affect our estimates for many tasks.


Do we really want to remove docker containers from the master node? How long
will it take to provide the experimental MOS 8.0 build without docker
containers?
Are we ready to change the date of the MOS 8.0 release and make this change?

[0] https://github.com/openstack/fuel-qa/search?p=2=docker=%E2%9C%93


On Fri, Nov 20, 2015 at 7:57 PM, Bogdan Dobrelya 
wrote:

> On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
> > Bogdan,
> >
> >>> So, we could only deprecate the docker feature for the 8.0.
> >
> > What do you mean exactly when saying 'deprecate docker feature'? I can
> > not even imagine how we can live with and without docker containers at
> > the same time. Deprecation is usually related to features which directly
> > impact UX (maybe I am wrong).
>
> I may have understood this [0] wrong, and the docker containers are not
> user-visible, but that depends on which type of users we mean :-)
> Sorry for not being clear.
>
> [0]
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>
> >
> > Guys,
> >
> > When you estimate risks of the docker removal, please take into account
> > not only release deadlines but also the overall product quality. The
> > thing is that continuing using containers makes it much more complicated
> > (and thus less stable) to implement new upgrade flow (upgrade tarball
> > can not be used any more, we need to re-install the host system).
> > Switching from Centos 6 to Centos 7 is also much more complicated with
> > docker. Every single piece of Fuel system is going to become simpler and
> > easier to support.
> >
> > Of course, I am not suggesting to jump overboard into cold water without
> > a life jacket. Transition plan, checklist, green tests, even spec etc.
> > are assumed without saying (after all, I was not born yesterday). Of
> > course, we won't merge changes until everything is green. What is the
> > problem to try to do this and postpone if not ready in time? And please
> > do not confuse these two cases: switching from plain deployment to
> > containers is complicated, but switching from docker to plain is much
> > simpler.
> >
> >
> >
> >
> > Vladimir Kozhukalov
> >
> > On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya  > > wrote:
> >
> > On 20.11.2015 15:10, Timur Nurlygayanov wrote:
> > > Hi team,
> > >
> > > I think it too late to make such significant changes for MOS 8.0
> now,
> > > but I'm ok with the idea to remove docker containers in the future
> > > releases if our dev team want to do this.
> > > Any way, before we will do this, we need to plan how we will
> perform
> > > updates between different releases with and without docker
> containers,
> > > how we will manage requirements and etc. In fact we have a lot of
> > > questions and haven't answers, let's prepare the spec for this
> change,
> > > review it, discuss it with developers, users and project
> management team
> > > and if we haven't requirements to keep docker containers on master
> node
> > > let's remove them for the future releases (not in MOS 8.0).
> > >
> > > Of course, we can fix BVT / SWARM tests and don't use docker
> images in
> > > our test suite (it shouldn't be really hard) but we didn't plan
> these
> > > changes and in fact these changes can affect our estimates for
> many tasks.
> >
> > I can only add that features just cannot be removed without a
> > deprecation period of 1-2 releases.
> > So, we could only deprecate the docker feature for the 8.0.
> >
> > >
> > > Thank you!
> > >
> > >
> > > On Fri, Nov 20, 2015 at 4:44 PM, Alexander Kostrikov
> > > 
> > >>
> > wrote:
> > >
> > > Hello, Igor.
> > >
> > > >But I'd like to hear from QA how do we rely on container-based
> > > infrastructure? Would it be hard to change our sys-tests in
> short
> > > time?
> > >
> > > At first glance, system tests are using docker only to fetch
> logs
> > > and run shell commands.
> > > Also, docker is used to run Rally.
> > >
> > > If 

Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 'Third-party' CI Solution - DONE!

2015-11-20 Thread Asselin, Ramy
Hi Sid,

Sorry, you’re right: log server fix is here. [1]
I thought I documented the scp v1.9 plugin issue, but I don’t see it now. I 
will submit a patch to add that.

Thanks for raising these issues!

Ramy

[1] https://review.openstack.org/#/c/242800/


From: Siddharth Bhatt [mailto:siddharth.bh...@falconstor.com]
Sent: Friday, November 20, 2015 10:51 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

Ramy,

I had previously used your os-ext-testing repo to build a 3rd party CI, and 
today I’ve been trying out this new approach. I’ve noticed a piece of the 
puzzle appears to be missing.

In the new instructions [1], there is no mention of having to manually install 
the Jenkins SCP plugin v1.9. Also, your old manifest would create the plugin 
config file [2] and populate it with the appropriate values for the log server, 
but the new approach does not. So when I finished running the puppet apply, the 
job configurations contained a site name “LogServer” but there was no value 
defined anywhere pointing that to the actual IP or hostname of my log server.

I did manually install the v1.9 plugin and then configured it from the Jenkins 
web UI. I guess either the instructions need to be updated to mention this, or 
the puppet manifests need to automate some or all of it.

Regards,
Sid

[1] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md
[2] /var/lib/jenkins/be.certipost.hudson.plugin.SCPRepositoryPublisher.xml

From: Asselin, Ramy [mailto:ramy.asse...@hpe.com]
Sent: Friday, November 20, 2015 12:40 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [third-party][infra][CI] Common OpenStack 
'Third-party' CI Solution - DONE!

All,

I’m happy to announce that there is now a working ‘Common’ OpenStack 
‘third-party’ CI Solution available! This is a 3rd party CI solution that uses 
the same tools and scripts as the upstream ‘Jenkins’ CI.

The last few pieces were particularly challenging.
Big thanks to Yolanda Robla for updating Nodepool & the nodepool puppet scripts so
that they can be reused by both 3rd party CIs and upstream infrastructure!

The documentation for setting up a 3rd party ci system on 2 VMs (1 private that 
runs the CI jobs, and 1 public that hosts the log files) is now available here 
[1] or [2]

Big thanks again to everyone that helped submit patches and do the reviews!

A few people have already started setting up this solution.

Best regards,

Ramy
IRC: asselin

[1] https://github.com/openstack-infra/puppet-openstackci/tree/master/contrib
[2] 
https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/README.md


From: Asselin, Ramy
Sent: Monday, July 20, 2015 3:39 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: Re: [openstack-dev] [third-party][infra] Common OpenStack CI Solution 
- 'Jenkins Job Builder' live

All,

I’m pleased to announce the 4th component merged to puppet-openstackci repo [1].

This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

4.   Jenkins Job Builder

This work is being done as part of the common-ci spec [2]

Big thanks to Juame Devesa for starting the work, Khai Do for fixing all issues 
found in the reviews during the virtual sprint, and to all the reviewers and 
testers.

We’re almost there! Just have nodepool and a sample config to compose all of 
the components together [3]!

Ramy
IRC: asselin

[1] https://git.openstack.org/cgit/openstack-infra/puppet-openstackci/
[2] 
http://specs.openstack.org/openstack-infra/infra-specs/specs/openstackci.html
[3] https://storyboard.openstack.org/#!/story/2000101


From: Asselin, Ramy
Sent: Thursday, July 02, 2015 4:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [third-party][infra] Common OpenStack CI Solution - 
'Zuul' live

All,

I’m please to say that there are now 3 components merged in the 
puppet-openstackci repo [1]
This means 3rd party ci operators can now use the same scripts that the 
OpenStack Infrastructure team uses in the official ‘Jenkins’ CI system for:


1.   Log Server

2.   Jenkins

3.   Zuul

This work is being done as part of the common-ci spec [2]

Big thanks to Fabien Boucher for completing the Zuul script refactoring, which 
went live today!
Thanks to all the reviewers for careful reviews which led to a smooth migration.

I’ve updated my repo [3] & switched all my CI systems to use it.

As a reminder, there will be a virtual sprint next week July 

Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Walter A. Boring IV

On 11/20/2015 10:19 AM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).

A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service, i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.

If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
able to convert from plaintext to ciphertext or vice versa, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.
Being able to limit the number of points where an encrypted volume can
be used unencrypted is obviously a good goal.
Unfortunately, it's entirely unrealistic to expect Cinder to never have
that access.
Cinder currently needs access to write data to volumes that are
encrypted for several operations:


1) copy volume to image
2) copy image to volume
3) backup

Cinder already has the ability to do this for encrypted volumes. What 
Lisa Li's patch is trying to provide
is a single point of shared code for doing encryptors.  os-brick seems 
like a reasonable place to put this
as it could be shared with other services that need to do the same 
thing, including Nova, if desired.


There is also ongoing work to support attaching Cinder volumes to bare 
metal nodes.  The client that does the
attaching to a bare metal node, will be using os-brick connectors to do 
the volume attach/detach.  So, it makes
sense from this perspective as well that the encryptor code lives in 
os-brick.
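
For what it's worth, a purely illustrative sketch of that "single point of
shared code" shape -- this is not the actual os-brick API and not the code in
the patch under review; every name below is hypothetical:

    # Hypothetical sketch only, NOT the real os-brick interface.  It just
    # shows one shared encryptor entry point that nova (on the compute node)
    # and cinder (copy to/from image, backup, migrate) could both import,
    # instead of each carrying its own copy of the dm-crypt setup code.
    import abc


    class VolumeEncryptor(abc.ABC):
        def __init__(self, connection_info, key_manager):
            self.connection_info = connection_info
            self.key_manager = key_manager

        @abc.abstractmethod
        def attach_volume(self, context):
            """Set up the crypt mapping so the caller sees plaintext."""

        @abc.abstractmethod
        def detach_volume(self, context):
            """Tear the mapping down; only ciphertext stays on the volume."""


    class NoopEncryptor(VolumeEncryptor):
        """Stand-in implementation so the sketch runs end to end."""

        def attach_volume(self, context):
            return self.connection_info

        def detach_volume(self, context):
            return None


    _ENCRYPTORS = {'noop': NoopEncryptor}


    def get_volume_encryptor(connection_info, key_manager, provider='noop'):
        # The one place both services would call into.
        return _ENCRYPTORS[provider](connection_info, key_manager)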


I'm ok with the idea of moving common code into os-brick.  This was the
main reason os-brick was created to begin with.

Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova]Move encryptors to os-brick

2015-11-20 Thread Ben Swartzlander

On 11/20/2015 01:19 PM, Daniel P. Berrange wrote:

On Fri, Nov 20, 2015 at 02:45:15PM +0200, Duncan Thomas wrote:

Brick does not have to take over the decisions in order to be a useful
repository for the code. The motivation for this work is to avoid having
the dm setup code copied wholesale into cinder, where it becomes difficult
to keep in sync with the code in nova.

Cinder needs a copy of this code since it is on the data path for certain
operations (create from image, copy to image, backup/restore, migrate).


A core goal of using volume encryption in Nova is to provide protection for
tenant data from a malicious storage service, i.e. if the decryption key
is only ever used by Nova on the compute node, then cinder only ever sees
ciphertext, never plaintext.  Thus if cinder is compromised, then it can
not compromise any data stored in any encrypted volumes.


There is a difference between the cinder service and the storage 
controller (or software system) that cinder manages. You can give the 
decryption keys to the cinder service without allowing the storage 
controller to see any plaintext.


As Walt says in the relevant patch [1], expecting cinder to do data 
management without ever performing I/O is unrealistic. The scenario 
where the compute admin doesn't trust the storage admin is 
understandable (although less important than other potential types of 
attacks IMO) but the scenario where the guy managing nova doesn't trust 
the guy managing cinder makes no sense at all.


I support moving the code into a common place, and doing responsible key 
management, and letting the cinder guys make sure that storage 
controllers never see plaintext in the cases when they're not supposed to.


-Ben

[1] https://review.openstack.org/#/c/247372/


> If cinder is looking to get access to the dm-setup code, this seems to
imply that cinder will be getting access to the plaintext data, which
feels to me like it de-values the volume encryption feature somewhat.

I'm fuzzy on the details of just what code paths cinder needs to be
> able to convert from plaintext to ciphertext or vice versa, but in
general I think it is desirable if we can avoid any such operation
in cinder, and keep it so that only Nova compute nodes ever see the
decrypted data.

Regards,
Daniel




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Matt Riedemann



On 11/20/2015 3:00 PM, Sylvain Bauza wrote:



On 20/11/2015 17:36, Matt Riedemann wrote:



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving an instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions, they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).



3. update instance_actions API so that you can get instance_actions
for
deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann


__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support
ticket.
4. The admin checks for deleted instances on that project, finds the
one in question.
5. Calls off to os-instance-actions with that instance uuid to see the
deleted action and the user that did it (user A).
6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382



Okay, that seems like a good use case for operators. Coolness, I'm fine with
soft-deleting instance_actions and providing a microversion for getting
actions for a known instance UUID, like Andrew said.


The plan right now (at least agreed to between myself and sdague) is not 
to soft delete instance actions, but to archive and hard-delete them 
when archiving instances.


As for allowing lookups on instance_actions for deleted instances, I 
plan on working that via this blueprint (still need to write the spec):


https://blueprints.launchpad.net/nova/+spec/os-instance-actions-read-deleted-instances
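
Roughly, the support-ticket workflow described earlier in this thread would
look something like the sketch below with python-novaclient (credentials are
placeholders, an admin account is assumed, and the instance-actions lookup
only works for deleted instances once that blueprint lands):

    # Sketch of the operator workflow: find the deleted instance, then read
    # its actions to see who deleted it.  Auth details are placeholders.
    from novaclient import client

    nova = client.Client('2', 'admin', 'ADMIN_PASSWORD', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    # Equivalent of `nova list --deleted`, scoped to the tenant in question.
    deleted = nova.servers.list(search_opts={'deleted': True,
                                             'all_tenants': True,
                                             'tenant_id': 'PROJECT_ID'})

    for server in deleted:
        # Today this is where the lookup falls over for deleted instances.
        for action in nova.instance_action.list(server.id):
            print(server.id, action.action, action.user_id, action.start_time)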





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][bug] Mitigation to BREACH vulnerability

2015-11-20 Thread BARTRA, RICK
Until Django releases an official patch for the BREACH vulnerability, I think
we should take a look at django-debreach. The django-debreach package provides
some, possibly sufficient, protection against a BREACH attack. Its integration
into Horizon is straightforward, following the configuration found here:
https://pypi.python.org/pypi/django-debreach


The proposed change to Horizon: https://review.openstack.org/#/c/247838/

The proposed change to Requirements: https://review.openstack.org/#/c/248233/
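
For reference, the wiring django-debreach asks for in settings.py is small;
a sketch of the kind of change being proposed (middleware class names are
taken from the django-debreach docs and should be double-checked against the
version that ends up pinned in requirements):

    # Sketch of the settings.py change described on the django-debreach
    # PyPI page; verify names and ordering against the pinned release.
    INSTALLED_APPS += ('debreach',)

    MIDDLEWARE_CLASSES = (
        # Appends a random-length comment to HTML responses so identical
        # pages no longer compress to identical lengths.
        'debreach.middleware.RandomCommentMiddleware',
        # Masks the CSRF token per request; the debreach docs say to keep
        # this ahead of Django's CsrfViewMiddleware.
        'debreach.middleware.CSRFCryptMiddleware',
    ) + MIDDLEWARE_CLASSES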


Regards,

Rick Bartra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-11-20 Thread Sylvain Bauza



On 20/11/2015 17:36, Matt Riedemann wrote:



On 11/20/2015 10:04 AM, Andrew Laski wrote:

On 11/20/15 at 09:51am, Matt Riedemann wrote:



On 11/20/2015 8:18 AM, Sean Dague wrote:

On 11/17/2015 10:51 PM, Matt Riedemann wrote:



I *don't* see any DB APIs for deleting instance actions.

Kind of an important difference there.  Jay got it at least. :)



Were we just planning on instance_actions living forever in the
database?

Should we soft delete instance_actions when we delete the referenced
instance?

Or should we (hard) delete instance_actions when we archive (move to
shadow tables) soft deleted instances?

This is going to be a blocker to getting nova-manage db
archive_deleted_rows working.

[1] https://review.openstack.org/#/c/246635/


instance_actions seems extremely useful, and at the ops meetups I've
been to it has been one of the favorite features because it allows an easy
interface for "going back in time" to figure out what happened.

I'd suggest the following:

1. soft deleting an instance does nothing with instance actions.

2. archiving an instance (soft delete -> actually deleted) also archives
off instance actions.


I think this is also the right approach. Then we don't need to worry
about adding soft delete for instance_actions, they are just archived
when you archive the instances. It probably makes the logic in the
archive code messier for this separate path, but it's looking like
we're going to have to account for the bw_usage_cache table too (which
has a uuid column for an instance but no foreign key back to the
instances table and is not soft deleted).



3. update instance_actions API so that you can get instance_actions 
for

deleted instances (which I think doesn't work today).


Right, it doesn't. I was going to propose a spec for that since it's a
simple API change with a microversion.


Adding a simple flag to expose instance actions for a deleted instance
if you know the uuid of the deleted instance will provide some
usefulness.  It does lack the discoverability of knowing that you had
*some* instance that was deleted and you don't have the uuid but want to
get at the deleted actions.  I would like to avoid bolting that onto
instance actions and keep that as a use case for an eventual Task API.





-Sean



--

Thanks,

Matt Riedemann


__ 



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If you're an admin, you can list deleted instances using:

nova list --deleted

Or could, if we weren't busted on that right now [1].

So the use case I'm thinking of here is:

1. Multiple users are in the same project/tenant.
2. User A deletes an instance.
3. User B is wondering where the instance went, so they open a support 
ticket.
4. The admin checks for deleted instances on that project, finds the 
one in question.
5. Calls off to os-instance-actions with that instance uuid to see the 
deleted action and the user that did it (user A).

6. Closes the ticket saying that user A deleted the instance.
7. User B punches user A in the gut.

[1] https://bugs.launchpad.net/nova/+bug/1518382



Okay, that seems like a good use case for operators. Coolness, I'm fine with
soft-deleting instance_actions and providing a microversion for getting
actions for a known instance UUID, like Andrew said.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2015-11-20 11:29:31 -0800:
> 
> On 11/20/2015 11:19 AM, Alexis Lee wrote:
> > We just had a fun discussion in IRC about whether foreign keys are evil.
> > Initially I thought this was crazy but mordred made some good points. To
> > paraphrase, that if you have a scale-out app already it's easier to
> > manage integrity in your app than scale-out your persistence layer.
> 
> I've had this argument with mordred before, and it seems again there's
> the same misunderstanding going on:
> 
> 1. Your application can have **conceptual** foreign keys in it, without
> actually having foreign keys **for real** in the database.  This means
> your SQLAlchemy code still does ForeignKey, ForeignKeyConstraint, and
> most importantly your **database still uses normal form**, that is, any
> row that refers to another does it based on a set of columns that
> exactly match to the primary key of a single table elsewhere (not to
> multiple tables, not to a function of the columns cast from int to
> string and concatenated to the value in the other table etc, an *exact
> match*).   I'm sure that mordred agrees with all of these practices,
> however when one says "we aren't going to use foreign keys anymore",
> typically it is all these critical schema design practices that go out
> the window.  Put another way, the foreign key concept not only
> constrains data in a real database; just the concept of them constrains
> the **developer** to use correct normal form.
> 

Mike, thanks for making that clarification. I agree, that conceptual
FK's are not the same as FK constraints in the DB. Joins are not demons.
:)

To be clear, while what you say above is all true, normal form isn't
actually a goal of any system. It's a tactic one can use with a well known
efficiency profile. But there are times when it costs more than other
more brutal, less civilized methods of database design and usage. If we
don't measure our efficiency, we won't actually know if this is one of
those times or not.

> 2. Here's the part mordred doesn't like - the FK is actually in the
> database for real.   This is because they slow down inserts, updates,
> and deletes, because they must be checked.   To which I say, no such
> performance issue has been observed or documented in Openstack, we
> aren't a 1 million TPS financial system, so this is vastly premature
> optimization.
> 

I agree with you that this is unmeasured. I don't agree that we are not a
1 million TPS financial system, because the goal of running a cloud for
many is, in fact, to make money. So while we may not have an example of
a cloud running at 1 million TPS, it's not something we should dismiss
too quickly.

That said, the measurement should come first. What I'd like to show
is how many TPS we do actually do on boots, deletes, etc. etc. I'm
working on it now, and I'd encourage people to join the effort on the
counter-inspection QA spec if they want to get started measuring things.
We're taking baby steps right now, but eventually I see us producing a
lot of data that should be helpful in answering some of these questions.

> Also as far as #2, customers and operators *regularly* run scripts and
> queries to modify openstack databases, particularly to delete soft
> deleted rows.  These get blocked *all the time* by foreign key
> constraints.  They are doing their job, and they are needed as a final
> guard against data integrity issues.  We of course handle referential
> integrity in the application layer as well via SQLAlchemy ORM constructs.
> 

Nobody ever doubts that there are times where database-side FK constraints
help prevent costly mistakes. The question is always: at what cost? Right
now, I'd say we don't really know because we're not measuring. That's
fine, if you want to mitigate risk, one strategy is to buy insurance,
and that's what the FK's in the DB are: insurance.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FKs in the DB

2015-11-20 Thread Clint Byrum
Excerpts from Matt Riedemann's message of 2015-11-20 10:58:55 -0800:
> 
> On 11/20/2015 10:19 AM, Alexis Lee wrote:
> > We just had a fun discussion in IRC about whether foreign keys are evil.
> > Initially I thought this was crazy but mordred made some good points. To
> > paraphrase, that if you have a scale-out app already it's easier to
> > manage integrity in your app than scale-out your persistence layer.
> >
> > Currently the Nova DB has quite a lot of FKs but not on every relation.
> > One example of a missing FK is between Instance.uuid and
> > BandwidthUsageCache.uuid.
> >
> > Should we drive one way or the other, or just put up with mixed-mode?
> 
> For the record, I hate the mixed mode.
> 
> >
> > What should be the policy for new relations?
> 
> I prefer consistency, so if we're adding new relationships I'd prefer to 
> see that they have foreign keys.
> 

If FedEx preferred consistency over efficiency, then they'd only just
now be able to exist due to drones being available. Otherwise they'd
have to have had a way to cover the last mile of delivery using some
sort of air travel, to remain consistent.

What I'm saying is, sometimes you need a recommissioned 727 to carry
your package, and sometimes you need a truck. Likewise, there are times
when de-normalization is called for. As I said in my other reply to Mr.
Bayer, we don't really know that this is that time, because we aren't
measuring. However, if it seems like a close call when speculating,
then it is probably prudent to remain consistent with most other things
in the system.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of Docker containers on the Fuel master node

2015-11-20 Thread Igor Kalnitsky
Hey Timur,

> I think we can change all docker-based code from our tests / scripts
> in 2-3 days

That sounds good.


> Do we really want to remove docker containers from master node?

Yes, we do. Currently we're suffering from using a container-based
architecture on the master node, and since we've decided to change our
*upgrade* approach (where we stop gaining benefits from containers) it would
be nice to get rid of them and fix a bunch of docker-related bugs.


> How long will it take to provide the experimental MOS 8.0 build
> without docker containers?

I think we need to ask Vladimir Kozhukalov here.


> Are we ready to change the date of MOS 8.0 release and make this
> change?

No, we aren't ready to change the release date. If we don't have time for it,
let's postpone it till 9.0.

Regards,
Igor

On Fri, Nov 20, 2015 at 12:41 PM Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi Igor and Alexander,
>
> >But I'd like to hear from QA how do we rely on container-based
>> infrastructure? Would it be hard to change our sys-tests in short
>> time?
>
> The QA team doesn't have significant dependencies on docker images in our tests
> [0]. I think we can change all docker-based code in our tests / scripts
> in 2-3 days, but it is hard to predict when an ISO without docker images will
> pass all SWARM / Tempest tests.
>
> And one more time:
>
>> Of course, we can fix BVT / SWARM tests and don't use docker images in
>> our test suite (it shouldn't be really hard) but we didn't plan these
>> changes and in fact these changes can affect our estimates for many tasks.
>
>
> Do we really want to remove docker containers from the master node? How long
> will it take to provide the experimental MOS 8.0 build without docker
> containers?
> Are we ready to change the date of the MOS 8.0 release and make this change?
>
> [0]
> https://github.com/openstack/fuel-qa/search?p=2=docker=%E2%9C%93
>
>
> On Fri, Nov 20, 2015 at 7:57 PM, Bogdan Dobrelya 
> wrote:
>
>> On 20.11.2015 17:31, Vladimir Kozhukalov wrote:
>> > Bogdan,
>> >
>> >>> So, we could only deprecate the docker feature for the 8.0.
>> >
>> > What do you mean exactly when saying 'deprecate docker feature'? I can
>> > not even imagine how we can live with and without docker containers at
>> > the same time. Deprecation is usually related to features which directly
>> > impact UX (maybe I am wrong).
>>
>> I may have understood this [0] wrong, and the docker containers are not
>> user-visible, but that depends on which type of users we mean :-)
>> Sorry for not being clear.
>>
>> [0]
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>>
>> >
>> > Guys,
>> >
>> > When you estimate risks of the docker removal, please take into account
>> > not only release deadlines but also the overall product quality. The
>> > thing is that continuing using containers makes it much more complicated
>> > (and thus less stable) to implement new upgrade flow (upgrade tarball
>> > can not be used any more, we need to re-install the host system).
>> > Switching from Centos 6 to Centos 7 is also much more complicated with
>> > docker. Every single piece of Fuel system is going to become simpler and
>> > easier to support.
>> >
>> > Of course, I am not suggesting to jump overboard into cold water without
>> > a life jacket. Transition plan, checklist, green tests, even spec etc.
>> > are assumed without saying (after all, I was not born yesterday). Of
>> > course, we won't merge changes until everything is green. What is the
>> > problem to try to do this and postpone if not ready in time? And please
>> > do not confuse these two cases: switching from plain deployment to
>> > containers is complicated, but switching from docker to plain is much
>> > simpler.
>> >
>> >
>> >
>> >
>> > Vladimir Kozhukalov
>> >
>> > On Fri, Nov 20, 2015 at 6:47 PM, Bogdan Dobrelya <
>> bdobre...@mirantis.com
>> > > wrote:
>> >
>> > On 20.11.2015 15:10, Timur Nurlygayanov wrote:
>> > > Hi team,
>> > >
>> > > I think it too late to make such significant changes for MOS 8.0
>> now,
>> > > but I'm ok with the idea to remove docker containers in the future
>> > > releases if our dev team want to do this.
>> > > Any way, before we will do this, we need to plan how we will
>> perform
>> > > updates between different releases with and without docker
>> containers,
>> > > how we will manage requirements and etc. In fact we have a lot of
>> > > questions and haven't answers, let's prepare the spec for this
>> change,
>> > > review it, discuss it with developers, users and project
>> management team
>> > > and if we haven't requirements to keep docker containers on
>> master node
>> > > let's remove them for the future releases (not in MOS 8.0).
>> > >
>> > > Of course, we can fix BVT / SWARM tests and don't use docker
>> images in
>> > > our test suite (it shouldn't be really hard) 
